All of β-redex's Comments + Replies

I am not 100% convinced by the comparison, because technically LLMs are only "reading" a bunch of source code; they are never given access to a compiler/interpreter. IMO actually running the code one has written is a very important part of learning, and I think it would be a much more difficult task for a human to learn to code just by reading a bunch of books/code, but never actually trying to write & run their own code.[1]

Also, in the video linked earlier in the thread, the girlfriend playing Terraria is deliberately not given access to the wiki, a... (read more)

1ErickBall
I kind of see your point about having all the game wikis, but I think I disagree about learning to code being necessarily interactive. Think about what feedback the compiler provides you: it tells you if you made a mistake, and sometimes what the mistake was. In cases where it runs but doesn't do what you wanted, it might "show" you what the mistake was instead. You can learn programming just fine by reading and writing code but never running it, if you also have somebody knowledgeable checking what you wrote and explaining your mistakes. LLMs have tons of examples of that kind of thing in their training data.

And as a separate note, I'm not sure what the appropriate human reference class for game-playing AIs is, but I challenge the assumption that it should be people who are familiar with games. Rather than, say, people picked at random from anywhere on earth.

If you did that for programming, AIs would already be considered strongly superhuman. Just like we compare AIs' coding knowledge to that of programmers, I think it's perfectly fair to compare their gaming abilities to those of people who play video games.

1ErickBall
Yeah but we train AIs on coding before we make that comparison. And we know that if you train an AI on a videogame it can often get superhuman performance. Here we're trying to look at pure transfer learning, so I think it would be pretty fair to compare to someone who is generally competent but has never played videogames. Another interesting question is to what extent you can train an AI system on a variety of videogames and then have it take on a new one with no game-specific training. I don't know if anyone has tried that with LLMs yet.
4MondSemmel
By this I was mainly arguing against claims like "this performance is worse than a human 6-year-old".

Notably, this was exactly the sort of belief I was trying to show is false

Please point out if there is a specific claim I made in my comment that you believe to be false. I said that "I don't think a TC computer can ever be built in our universe.", which you don't seem to argue with? (If we assume that we can only ever get access to a finite number of atoms. If you dispute this I won't argue with that, neither of us has a Theory of Everything to say for certain.)

Just to make precise why I was making that claim and what it was trying to argue against, ta... (read more)

3Noosphere89
I think I understand the question now. I actually agree that if we assume that there's a finite maximum of atoms, we could in principle reformulate the universal computer as a finite state automaton, and if we were willing to accept the non-scalability of a finite state automaton, this could actually work. The fundamental problem is that now we would have software that only works up to a specified memory limit, because we have essentially burned the software into the hardware of the finite automaton. If we are ever uncertain of how much memory or time a problem requires, or more worryingly of how many resources we could actually use, then our "software" for the finite automaton is no longer usable and we'd have to throw it away and create a new computer for every input length. Turing Machine models automatically handle arbitrarily large inputs without having to throw away expensive work on developing the software. So in essence, if you want to handle the most general case, or believe unbounded atoms are possible, like me, then you really want the universal computer architecture of modern computers. The key property of real computers that makes them Turing Complete in theory is that they can scale with arbitrarily more memory and time without changing the system descriptor/code. More below: https://www.dwarkesh.com/p/adam-brown

I don't think a TC computer can ever be built in our universe. The observable universe has a finite number of atoms; I have seen numbers around 10^80 thrown around. Even if you can build a RAM where each atom stores 1 bit,[1] this is still very much finite.

I think a much more interesting question is why TC machines are — despite only existing in theory — such useful models for thinking about real-world computers. There is obviously some approximation going on here, where for the vast majority of real-world problems, you can write them in such a way that the... (read more)

2Noosphere89
Notably, this was exactly the sort of belief I was trying to show is false, and your observation about the physical universe does not matter for the argument I made here, because the question is whether with, say, 2^1000000 atoms, you can solve larger problem sizes with the same code, and Turing-complete systems say yes to that question. In essence, it's a question of whether we can scale our computers with more memory and time without having to change the code/algorithms, and basically all modern computers can do this in theory. Short version: because you can always extend them with more memory and time, and it really matters a lot in practical computing if you can get a general coding solution that is also cheap to work with, because it can handle upgrades to its memory very well. In essence, the system descriptor/code doesn't have to change if the memory and time increases (for a finite state machine or look-up table, they would have to change).
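To make the contrast concrete, here is a minimal sketch (my own illustration, not from the thread) of the difference being argued about: a general program keeps working unchanged when the machine gets more memory and the inputs get bigger, whereas a look-up-table-style solution has its input bound baked in and must be rebuilt whenever that bound changes.

```python
# A general algorithm: the code never changes, it just uses however much
# memory and time the machine happens to have.
def add(a: int, b: int) -> int:
    return a + b  # works for arbitrarily large integers, given enough memory

# A look-up-table ("finite automaton"-style) solution: only correct up to a
# bound that is baked in when the table is built.
def build_add_table(max_value: int) -> dict:
    return {(a, b): a + b for a in range(max_value) for b in range(max_value)}

table = build_add_table(max_value=100)
print(add(10**50, 10**50))  # same code, much larger problem: fine
print(table[(99, 99)])      # fine, but only because the inputs are under the bound
# table[(100, 100)] raises KeyError: this "software" has to be rebuilt for larger inputs
```

Upgrading the machine's RAM changes nothing about `add`, but it only helps `build_add_table` if you regenerate the table with a larger bound, which is the sense in which the system descriptor has to change.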

Let $S = \{0,1\}^N$ be the state space of our finite/physical computer, where $N$ is the number of bits of state the computer has. This can include RAM, non-volatile storage, CPU registers, cache, GPU RAM, etc... just add up all the bits.

The stateless parts of the computer can be modeled as a state transition function $f : S \to S$, which is applied at every time step to produce the next state. (And let's suppose that there is some special halting state $s_{\mathrm{halt}} \in S$.)

This is clearly a FSM with $2^N$ states, and not TC. The halting problem can be trivially solved for it: it is guarante... (read more)
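To spell out why halting becomes decidable for such a machine, here is a minimal sketch (my own illustration; `f`, `s0`, and `s_halt` are placeholders for the transition function and states defined above): since there are only $2^N$ possible states, running the machine for $2^N$ steps either reaches the halting state or revisits a non-halting state, in which case it is stuck in a loop forever.

```python
def halts(f, s0, s_halt, n_bits: int) -> bool:
    """Decide halting for a machine with at most 2**n_bits distinct states."""
    state = s0
    for _ in range(2 ** n_bits):
        if state == s_halt:
            return True
        state = f(state)
    # If the halting state was not reached within 2**n_bits steps, some
    # non-halting state must have repeated (pigeonhole), so the machine
    # is trapped in a cycle and will never halt.
    return False

# Toy example: a 3-bit "computer" whose state counts upwards modulo 8.
print(halts(lambda s: (s + 1) % 8, s0=2, s_halt=0, n_bits=3))  # True
print(halts(lambda s: (s + 2) % 8, s0=1, s_halt=0, n_bits=3))  # False: odd states cycle forever
```

(Decidable in principle, that is: for a real machine $2^N$ is astronomically large.)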

2Noosphere89
Notably, this is why we focus on the case where the machine has arbitrarily large memory and time to work with. The key question here is whether a finite physical computer can always be extended with more memory and time without requiring us to recode the machine into a different program/computer, and most modern computers can do this (modulo the physical issues of how you integrate more memory and time). In essence, the key property of modern computers is that the code/system descriptor doesn't change if we add more memory and time, and this is the thing that leads to Turing-completeness if we allow unbounded memory and time.

I think the reacts being semantic instead of being random emojis is what makes this so much better.

I wish other platforms experimented with semantic reacts as well, instead of just letting people react with any emoji of their choosing, and making you guess whether e.g. "thumbs up" means agreement, acknowledgement, or endorsement, etc.

This was my first time taking this, looking forward to the results!

5Jacob G-W
Same!

I know of Robert Miles, and Writer, who does Rational Animations. (In fact Robert Miles' channel is the primary reason I discovered LessWrong :) )

Don't leave me hanging like this, does the movie you are describing exist? (Though I guess your description is a major spoiler, you would need to go in without knowing whether there will be anything supernatural.)

4Ben
Many of the feature-length Scooby Doo cartoon films I watched as a child do a big reveal where, actually this time (because it's a longer story), some aspect of the magic or monster turns out to be real. I think one actually has both the fake and the real monster.
1Ninety-Three
There is a narrative-driven videogame that does exactly this, but unfortunately I found the execution mediocre. I can't get spoilers to work in comments or I'd name it. Edit: It's
2Raemon
Oh, no I just made it up, alas. This is a sketch of how someone else should construct such a movie. (Maybe the movie exists but yeah if it did it'd unfortunately now be a spoiler in this context. :P)
  1. The Thing: classic
  2. Eden Lake
  3. Misery
  4. 10 Cloverfield Lane
  5. Gone Girl: not horror, but I specifically like it because of how agentic the protagonist is

2., 3. and 4. have in common that there is some sort of abusive relationship that develops, and I think this adds another layer of horror. (A person/group of people gain some power over the protagonist(s), and they slowly grow more abusive with this power.)

5TekhneMakre
I was going to say The Thing. https://en.wikipedia.org/wiki/The_Thing_(1982_film)

Somewhat related: does anyone else strongly dislike supernatural elements in horror movies?

It's not that I have anything against a movie exploring the idea of "what if we suddenly discovered that we live in a universe where supernatural thing X exists"; what bothers me is that the characters just accept this without much evidence at all.

I would love a movie though where they explore the more likely alternate hypotheses first (mental issues, some weird optical/acoustic phenomenon, or just someone playing a super elaborate prank), but then the evidence starts mounting, and eventually they are forced to accept that "supernatural thing X actually exists" is really the most likely hypothesis.

3Elizabeth
I saw a review of a reality show like this. The contractor agent was just one "your door hinges were improperly installed" after another. One couple saw a scary shadow face on their wall and it turned out to be a patch job on the paint that matched perfectly in daylight but looked slightly different in low light. Oculus isn't perfect for this, she starts out already believing in the haunted house due to past experience, but has a very scientific approach to proving it.
3Raemon
I think my favorite [edit: ideal hope] for this sort of thing has 2-3 killers/antagonists/scary-phenomena, and one of them turns out to be natural, and one supernatural. So the audience actually has the opportunity to figure it out, rather than just be genre savvy and know it'll eventually turn out to be supernatural.

These examples show that, at least in this lower-stakes setting, OpenAI’s current cybersecurity measures on an already-deployed model are insufficient to stop a moderately determined red-teamer.

I... don't actually see any non-trivial vulnerabilities here? Like, this is all stuff you can do on any cloud VM you rent?

Cool exploration though, and it's certainly interesting that OpenAI is giving you such a powerful VM for free (well, not exactly free, since you already pay for GPT-4 I guess?), but I have to agree with the assessment you found, that "it's expected that you can see and modify files on this system".

The malware is embedded in multiple mods, some of which were added to highly popular modpacks.

Any info on how this happened? This seems like a fairly serious supply chain attack. I have heard of incidents with individual malicious packages on npm or PyPI, but not one where multiple high profile packages in a software repository were infected in a coordinated manner.

Uhh, this happening for the first time in 2023 was the exact prediction Gary Marcus made last year: https://www.wired.co.uk/article/artificial-intelligence-language

Not sure whether this instance is a capability or alignment issue though. Is the LLM just too unreliable, as Gary Marcus is saying? Or is it perfectly capable, and just misaligned?

I don't see why communicating with an AI through a BCI is necessarily better than through a keyboard+screen. Just because a BCI is more ergonomic and the AI might feel more like "a part of you", it won't magically be better aligned.

In fact the BCI option seems way scarier to me. An AI that can read my thoughts at any time and stimulate random neurons in my brain at will? No, thanks. This scenario just feels like you are handing it the "breaking out of the box" option on a silver platter.

3RussellThor
The idea is that the BCI is added slowly and you integrate the new neurons into you in a continuous, identity-preserving way; the AI thinks your thoughts.

Why is this being downvoted?

From what I am seeing people here are focusing way too much on having a precisely calibrated P(doom) value.

It seems that even if P(doom) is 1% the doom scenario should be taken very seriously and alignment research pursued to the furthest extent possible.

It seems very unlikely to me that after much careful calibration and research you would come up with a P(doom) value of less than 1%. So why invest time into refining your estimate?

5the gears to ascension
because it fails to engage with the key point: that the low predictiveness of the dynamics of ai risk makes it hard for people to believe there's a significant risk at all. I happen to think there is; that's why I clicked agree vote. but I clicked karma downvote because of failing to engage with the key epistemic issue at hand.

There was a recent post estimating that GPT-3 is equivalent to about 175 bees. There is also a comment there asserting that a human is about 140k bees.

I would be very interested if someone could explain where this huge discrepancy comes from. (One estimate is equating synapses with parameters, while this one is based on FLOPS. But there shouldn't be such a huge difference.)

2LawrenceC
Author of the post here — I don’t think there’s a huge discrepancy here, 140k/175 is clearly within the range of uncertainty of the estimates here! That being said the Bee post really shouldn’t be taken too seriously. 1 synapse is not exactly one float 16 or int8 parameter, etc
2the gears to ascension
I didn't read that post. should I? is it more than a joke? edit: I read it. it was a lot shorter than I expected, sometimes I'm a dumbass about reading posts and forget to check length. it's a really simple point, made in the first words, and I figured there would be more to it than that for some reason. there isn't.
6benjamincosman
10^15 - 10^30 is not at all a narrow range! So depending on what the 'real' answer is, there could be as little as zero discrepancy between the ratios implied by these two posts, or a huge amount. If we decide that GPT-3 uses 10^15 FLOPS (the inference amount) and meanwhile the first "decent" simulation of the human brain is the "Spiking neural network" (10^18 FLOPS according to the table), then the human-to-GPT ratio is 10^18 / 10^15 which is almost exactly 140k / 175. Whereas if you actually need the single molecules version of the brain (10^43 FLOPS), there's suddenly an extra factor of ten septillion lying around.
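Spelling out the arithmetic behind that comparison (my own restatement of the numbers already quoted above):

$$\frac{140{,}000\ \text{bees (human)}}{175\ \text{bees (GPT-3)}} = 800 \approx 10^{3} = \frac{10^{18}\ \text{FLOPS (spiking-network brain estimate)}}{10^{15}\ \text{FLOPS (GPT-3 inference estimate)}}$$

So under those particular choices the two posts imply essentially the same human-to-GPT-3 ratio; a large discrepancy only appears if the brain needs one of the much more expensive simulation levels.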
1RomanS
I wouldn't call it a huge discrepancy. If both values are correct, it means the human brain requires only 10² - 10³ times more compute than GPT-3. The difference could've been dozens or even hundreds of OOMs, but it's only 2 - 3, which is quite interesting. Why is the difference in compute so small, if the nature of the two systems is so different?

Indeed (as other commenters also pointed out) the ability to sexually reproduce seems to be much more prevalent than I originally thought when writing the above comment. (I thought that eukaryotes only capable of asexual reproduction were relatively common, but it seems that there may only be a very few special cases like that.)

I still disagree with you dismissing the importance of mitochondria though. (I don't think the OP is saying that mitochondria alone are sufficient for larger genomes, but the argument for why they are at least necessary is convincing to me.)

I disagree with English (in principle at least) being inadequate for software specification.

For any commercial software, the specification basically is just "make profit for this company". The rest is implementation detail.

(Obviously this is an absurd example, but it illustrates how you can express abstractions in English that you can't in C++.)

I don't think the comparison of giving a LLM instructions and expecting correct code to be output is fair. You are vastly overestimating the competence of human programmers: when was the last time you wrote perfectly correct code on the very first try?

Giving the LLM the ability to run its code and modify it until it thinks it's right would be a much fairer comparison. And if, as you say, writing unit tests is easy for a LLM, wouldn't that just make this trial-and-error loop trivial? You can just bang the LLM against the problem until the unit tests pass.

(And this process obviously won't produce bug-free code, but humans don't do that in the first place either.)

Not all eukaryotes employ sexual reproduction. Also prokaryotes do have some mechanisms for DNA exchange as well, so copying errors are not their only chance for evolution either.

But I do agree that it's probably no coincidence that the most complex life forms are sexually reproducing eukaryotes.

1DPiepgrass
I see that someone strongly-disagreed with me on this. But are there any eukaryotes that cannot reproduce sexually (and are not very-recently-descended from sexual-reproducers) but still maintain size or complexity levels commonly associated with eukaryotes?
5Steven Byrnes
I’m pretty sure that I read (in Nick Lane’s The Vital Question) that all eukaryotes employ sexual reproduction at least sometimes. It’s true that they might reproduce asexually for a bunch of generations between sexual reproduction events. (It’s possible that other people disagree with Nick Lane on this, I dunno.)

I barely registered the difference between small talk and big talk

I am still confused about what "small talk" is after reading this post.

Sure, talking about the weather is definitely small talk. But if I want to get to know somebody, weather talk can't possibly last for more than 30 seconds. After that, both parties have demonstrated the necessary conversational skills to move on to more interesting topics. And the "getting to know each other" phase is really just a spectrum between surface level stuff and your deepest personal secrets, so I don't reall... (read more)

3[anonymous]
You can show disinterest or make up excuses to leave the social interaction. These behaviors are a lot worse, IMO, than just telling them straight up, but these are the socially accepted methods. Talking about sociopathic tendencies in the general public...

It was actually this post about nootropics that got me curious about this. Apparently (based on self reported data) weightlifting is just straight up better than most other nootropics?

Anyway, thank you for referencing some opposing evidence on the topic as well, I might try to look into it more at some point.

(Unfortunately, the thing that I actually care about - whether it has cognitive benefits for me - seems hard to test, since you can't blind yourself to whether you exercised.)

I think this (and your other post about exercise) are good practical examples of situations where rational thinking makes you worse off (at least for a while).

If you had shown this post to me as a kid, my youth would probably have been better. Unfortunately no one around me was able to make a sufficiently compelling argument for caring about physical appearance. It wasn't until much later that I was able to deduce the arguments for myself. If I just blindly "tried to fit in with the cool kids, and do what is trendy", I would have been better off.

I wonde... (read more)

This alone trumps any other argument mentioned in the post. None of the other arguments seem universal and can be argued with on an individual basis.

I actually like doing things with my body. I like hiking and kayaking and mountain climbing and dancing.

As some other commenters noted, what if you just don't?

I think it would be valuable if someone made a post just focused on collecting all the evidence for the positive cognitive effects of exercise. If the evidence is indeed strong, no other argument in favor of exercise should really matter.

Well, I've always been quite skeptical about the supposed huge mental benefits of exercising. I surely don't feel immediate mental benefits while exercising, and the first time I heard someone else claiming this I seriously thought it was a joke (it must be one of those universal human experiences that I am missing).

Anyway, I can offer one reference dug up from SSC:

Although the role of poor diet/exercise in physical illness is beyond questioning, its role in mental illness is more anecdotal and harder to pin down. Don’t get me wrong, there are lot

... (read more)

FWIW I don't think that matters, in my experience interactions like this arise naturally as well, and humans usually perform similarly to how Friend did here.

In particular it seems that here ChatGPT completely fails at tracking the competence of its interlocutor in the domain at hand. If you asked a human with no context, at first they might give you the complete recipe just like ChatGPT tried, but any follow-up question would immediately indicate to them that more hand-holding is necessary. (And ChatGPT was asked to "walk me through one step at a time", which should be blatantly obvious, and no human would just repeat the instructions again in answer to this.)

Cool! (Nitpick: You should probably mention that you are deviating from the naming in the HoTT book. AFAIK the types $\prod_{(x:A)} B(x)$ and $\sum_{(x:A)} B(x)$ are usually called Pi and Sigma types respectively, while the words "product" and "sum" (or "coproduct" in the HoTT book) are reserved for $A \times B$ and $A + B$.)
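For concreteness, here is a minimal Lean 4 sketch of the four type formers in question (my own illustration; the definition names are just placeholders, and Lean's own naming doesn't exactly match the HoTT book's):

```lean
-- Pi type: dependent function type
def PiExample (B : Nat → Type) : Type := (x : Nat) → B x

-- Sigma type: dependent pair type
def SigmaExample (B : Nat → Type) : Type := Sigma fun x : Nat => B x

-- "Product" and "sum" (the HoTT book's "coproduct") are the non-dependent versions
def ProductExample (A B : Type) : Type := A × B
def SumExample (A B : Type) : Type := A ⊕ B
```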

I am especially looking forward to discussion on how MLTT relates to alignment research and how it can be used for informal reasoning as Alignment Research Field Guide mentions.

I always get confused when the term "type signature" is used in text unrelated to type theory. Like what do peop... (read more)

This argument seems a bit circular: nondeterminism is indeed a necessary condition for exfiltrating outside information, so obviously if you prevent all nondeterminism you prevent exfiltration.

You are also completely right that removing access to obviously nondeterministic APIs would massively reduce the attack surface. (AFAIK most known CPU side-channels require timing information.)

But I am not confident that this kind of attack would be "robustly impossible". All it takes is finding some kind of nondeterminism that can be used as a janky timer, and suddenl... (read more)

Yes, CPUs leak information: that is the output kind of side-channel, where an attacker can transfer information about the computation into the outside world. That is not the kind I am saying one can rule out with merely diligent pursuit of determinism.

I think you are misunderstanding this part, input side channels absolutely exist as well, Spectre for instance:

On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers.

Note that the attacker in this c... (read more)

3davidad
I understand that Spectre-type vulnerabilities allow “sandboxed” computations, such as JS in web pages, to exfiltrate information from the environment in which they are embedded. However, this is, necessarily, done via access to nondeterministic APIs, such as performance.now(), setTimeout(), SharedArrayBuffer, postMessage(), etc. If no nondeterministic primitives were provided to, let’s say, a WASM runtime with canonicalized NaNs and only Promise/Future-based (rather than shared-memory or message-passing) concurrency, I am confident that this direction of exfiltration would be robustly impossible.
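For what it's worth, here is a minimal sketch of why shared-memory concurrency is on that list (in Python rather than JS/WASM, purely as an illustration; `tick`, `crude_elapsed`, and the workloads are made up for the sketch): a thread that does nothing but increment a shared counter already hands the code a crude clock, even with every explicit timing API removed.

```python
import threading

counter = 0  # shared state: this counter is the only "clock" available
stop = False

def tick() -> None:
    # Background thread spins, incrementing the shared counter.
    global counter
    while not stop:
        counter += 1

threading.Thread(target=tick, daemon=True).start()

def crude_elapsed(work) -> int:
    # Estimate how long `work` takes by sampling the counter before and after.
    before = counter
    work()
    return counter - before

def busy_loop(n: int) -> int:
    # Pure-Python loop so the interpreter periodically switches to the ticker
    # thread; much cruder than a real shared-memory counter, but enough here.
    total = 0
    for i in range(n):
        total += i
    return total

# Two workloads of different cost become distinguishable with no real timer at all.
fast = crude_elapsed(lambda: busy_loop(10_000))
slow = crude_elapsed(lambda: busy_loop(10_000_000))
print(fast < slow)  # usually True: the counter works as a (very janky) timer
stop = True
```

This kind of counter clock is part of why browsers temporarily disabled SharedArrayBuffer after Spectre was disclosed, rather than only coarsening performance.now().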

This implies that we could use relatively elementary sandboxing (no clock access, no networking APIs, no randomness, none of these sources of nondeterminism, and that’s about it) to prevent a task-specific AI from learning any particular facts

It's probably very hard to create such a sandbox though, your list is definitely not exhaustive. Modern CPUs leak information like a sieve. (The known ones are mostly patched of course but with this track record plenty more unknown vulnerabilities should exist.)

Maybe if you build the purest lambda calculus interpre... (read more)

3davidad
Yes, CPUs leak information: that is the output kind of side-channel, where an attacker can transfer information about the computation into the outside world. That is not the kind I am saying one can rule out with merely diligent pursuit of determinism. Concurrency is a bigger concern. Concurrent algorithms can have deterministic dataflow, of course, but enforcing that naively does involve some performance penalty versus HOGWILD algorithms because, if executing a deterministic dataflow, some compute nodes will inevitably sit idle sometimes while waiting for others to catch up. However, it turns out that modern approaches to massive scale training, in order to obtain better opportunities to optimise communication complexity, forego nondeterminism anyway!

Also I just found that you already argued this in an earlier post, so I guess my point is a bit redundant.

Anyway, I like that this article comes with an actual example, we could probably use more examples/case studies for both sides of the argument.

Upon reading the title I actually thought the article would argue the exact opposite, that formalization affects intuition in a negative way. I like non-Euclidean geometry as a particular example where formalization actually helped discovery.

But this is definitely not always true. For instance, if you wanted to intuitively understand why addition of naturals is commutative, maybe to build intuition for recognizing similar properties elsewhere, would this formal proof really help?

plus_comm =
fun n m : nat =>
nat_ind (fun n0 : nat => n0 + m = m + n0)
 
... (read more)
3adamShimi
Completely agree! The point is not that formalization or axiomatization is always good, but rather to elucidate one counterintuitive way in which it can be productive, so that we can figure out when to use it.

Isn't this similar to a Godzilla Strategy? (One AI overseeing the other.)

That variants of this approach are of use to superintelligent AI safety: 40%.

Do you have some more detailed reasoning behind such massive confidence? If yes, it would probably be worth its own post.

This seems like a cute idea that might make current LLM prompt filtering a little less circumventable, but I don't see any arguments for why this would scale to superintelligent AI. Am I missing something?

Collaborating with an expert/getting tutoring from an expert might be really good?

Probably. How does one go about finding such experts, who are willing to answer questions/tutor/collaborate?

(I think the usual answer to this is university, but to me this does not seem to be worth the effort. Like I maybe met 1-2 people at uni who would qualify for this? How do you find these people more effectively? And even when you find them, how do you get them to help you? Usually this seems to require luck & significant social capital expenditure.)

I unfortunately don't have any answers, just some more related questions:

  • Does anyone have practical advice on this topic? In the short term we are obviously powerless to change the system as a whole. But I couldn't in good conscience send my children to suffer through the same system I was forced to spend a large part of my youth in. Are there any better practically available alternatives?
  • What about socialization? School is quite poor at this, yet unilaterally removing one kid would probably make them even worse off. (Since presumably all other kids the
... (read more)
3tailcalled
Collaborating with an expert/getting tutoring from an expert might be really good?

ability to iterate in a fast manner

This is probably key. If GPT can solve something much faster that's indeed a win. (With the SPARQL example I guess it would take me 10-20 minutes to look up the required syntax and fields, and put them together. GPT cuts that down to a few seconds, this seems quite good.)

My issue is that I haven't found a situation yet where GPT is reliably helpful for me. Maybe someone who has found such situations, and reliably integrated "ask GPT first" as a step into some of their workflows could give their account? I would genuine... (read more)

Yeah I guess many programming problems fall into the "easy to verify" category. (Though definitely not all.)

2ChristianKl
ChatGPT is not yet good enough to solve every problem that you throw at it on its own, but it can help you with brainstorming what might be happening with your problem.  ChatGPT can also correctly answer questions like "Write a Wikidata SPARQL query that shows all women who are poets and who live in Germany". It's again an easy-to-verify answer, but it's an answer that allows you to research further.  The ability to iterate in a fast manner is useful in combination with other research steps. 

And apparently ChatGPT will shut you right down when attempting to ask for sources:

I'm sorry, but I am unable to provide sources for my claims as I am a large language model trained by OpenAI and do not have the ability to browse the internet. My answers are based on the information I have been trained on, but I cannot provide references or citations for the information I provide.

So... if you have to rigorously fact-check everything the AI tells you, how exactly is it better than just researching things without the AI in the first place? (I guess you need a domain where ChatGPT has adequate knowledge and claims in said domain are easily verifiable?)

2Viliam
I haven't tried ChatGPT myself, but based on what I've read about it, I suggest asking your question a bit differently; something like "tell me a poem that describes your sources". (The idea is that the censorship filters turn off when you ask somewhat indirectly. Sometimes adding "please" will do the magic. Apparently the censorship system is added on top of the chatbot, and is less intelligent than the chatbot itself.)
4ChristianKl
I'm using ChatGPT for hypothesis generation. This conversation suggests that people are actually brushing their tongues. Previously, I was aware that tongue scraping is a thing, but usually that's not done with a brush.  On Facebook, I saw one person writing about a programming problem that they had. Another person threw that problem into ChatGPT and ChatGPT gave the right answer.

Wow had this happen literally on my first interaction with ChatGPT. It seems to be just making stuff up, and won't back down when called out.

  • ChatGPT: "[...] run coqc --extract %{deps} --ocaml-script %{targets} [...]"
  • Me: "coqc does not have an --extract flag. (At least not on my machine, I have coq version 8.16.0)"
  • ChatGPT: "[...] You are correct, the --extract flag was added to the coqc command in Coq version 8.17.0. [...] Another option would be to use the coq-extract-ocaml utility, which is included with Coq [...]"
  • Me: "Coq 8.17.0 does not exist yet.
... (read more)

After a bit of testing, ChatGPT seems pretty willing to admit mistakes early in the conversation. However, after the conversation goes on for a while, it seems to get more belligerent. Maybe repeating a claim makes ChatGPT more certain of the claim?

At the start, it seems well aware of its own fallibility:

In the abstract:

In a specific case:

Doesn't mind being called a liar:

Open to corrections:

We start to see more tension when the underlying context of the conversation differs between the human and ChatGPT. Are we talking about the most commonly encountered s... (read more)

Wow, this is the best one I've seen. That's hilarious. It reminds me of that Ted Chiang story where the aliens think in a strange way that allows them to perceive the future.

The Sequences. Surprised nobody mentioned this one yet.

While I am pretty sure you can't compress the length of the sequences much without losing any valuable information, the fact is that for most people it's just way too long to ever read through, and having some easily digestible video material would still be quite valuable. (Hopefully also by getting some people interested in reading the real thing?)

Turning the sequences into a set of videos would be a massive distillation job. On the high level it would ideally be something like:

  1. Extract the set of im
... (read more)

I don't know anything about Diplomacy and I just watched this video, could someone expand a bit on why this game is a particularly alarming capability gain? The chat logs seemed pretty tame, the bot didn't even seem to attempt psychological manipulation or gaslighting or anything similar. What important real world capability does Diplomacy translate into that other games don't? (People for instance don't seem very alarmed nowadays about AI being vastly superhuman at chess or Go.)

6Zach Furman
I don't think the game is an alarming capability gain at all - I agree with LawrenceC's comment below. It's more of a "gain-of-function research" scenario to me. Like, maybe we shouldn't deliberately try to train a model to be good at this? If you've ever played Diplomacy, you know the whole point of the game is manipulating and backstabbing your way to world domination. I think it's great that the research didn't actually seem to come up with any scary generalizable techniques or dangerous memetics, but I think ideally shouldn't even be trying in the first place.
7gbear605
So Diplomacy is not a computationally complex game; it's a game about out-strategizing your opponents, where roughly all of the strategy is convincing your opponents to work with you. There are no new tactics to invent and an AI can't really see deeper into the game than other players; it just has to be more persuasive and make decisions about the right people at the right time. You often have to do things like planning your actions so that in a future turn someone else will choose to ally with you. The AI didn't do any specific psychological manipulation, it was just good at being persuasive and strategic in the normal human way. It's also notable for being able to both play the game and talk with people about the game. This could translate into something like being good at convincing people that the AI should be let out of its box, but I think mostly it's just being better at multiple skills simultaneously than many people expected. (Disclaimer: I've only played Diplomacy in person before, and not at this high of a level)

I think we usually don't generalize very far not because we don't have general models, but because it's very hard to state any useful properties about very general models.

You can trivially view any model/agent as a Turing machine, without loss of generality.[1] We just usually don't do that because it's very hard to state anything useful about such a general model of computation. (It seems very hard to prove/disprove P=NP, we know for a fact that halting is undecidable, etc.)

I am very interested though what model John will use to state useful theorems that... (read more)

3tailcalled
I think he addressed it in Don't Get Distracted By The Boilerplate.

As others said here kudos for the effort, but this iteration seems horrible to me.

When I was reading the Sequences I often had to go back and reread a sentence/paragraph/even page to fully understand everything. I also had to stop sometimes to really deeply think about the ideas (or just appreciate their beauty). I feel the text has low redundancy and assumes that you can go back and reread if you missed something (would be strange if it didn't), and is not directly suitable for a video format.

I tried to watch some of the clips, but it is just waay too fas... (read more)

I see, with that mapping your original paragraph makes sense.

Just want to note though that such a mapping is quite weird and I don't really see a mathematical justification behind it. I only know of the Curry-Howard isomorphism as a way to translate between proof theory and computer science, and it maps programs to proofs, not to axioms.

We can also interpret this in proof theory. K-types don't care how many steps there are in the proof, they only care about the number of axioms used in the proof. T-types do care how many steps there are in the proof, whether those steps are axioms or inferences.

I don't get how you apply this in proof theory. If K-types want to minimize the Kolmogorov-complexity of things, wouldn't they be the ones caring about the description length of the proof? How do axioms incur any significant description length penalty? (Axioms are usually much shorter to describe than proofs, because you of course only have to state the proposition and not any proof.)

2Cleo Nardo
when translating between proof theory and computer science: (computer program, computational steps, output) is mapped to (axioms, deductive steps, theorems) respectively. kolmogorov-complexity maps to "total length of the axioms" and time-complexity maps to "number of deductive steps".

Yeah, I know you are looking for more practical advice here, that's why I posted this as a comment instead of an answer.

Eventually someone will have to aim for the "Excellent" level though (even if not against humans, surely against an AGI), and I just wanted to highlight that this is very much an unsolved problem.

3Gunnar_Zarncke
Agree

In my view the field of cybersecurity currently is very far from what "theoretically perfect security" would look like. I am not sure how much ahead private knowledge is on the topic, but publicly cybersecurity seems to focus on defending against security holes already demonstrated to be exploitable, and providing some probabilistic defense against some other ones as well. This seems to work well in practice, I don't know why though. (Maybe highly motivated threat actors with sufficient resources simply don't exist?)

Conventional approaches work well if you... (read more)

2Gunnar_Zarncke
I know. But I'm not aiming for the Excellent level.

probably much of what makes rationalists so male is that rationalism selects for abilities/interests related to programming, which is itself very male-skewed

This is just pushing the question one step back though, I don't know of any good theories for why software engineering is heavily biased towards males either.

2tailcalled
The point is just that factor analysis assumes that the items/variables end up correlating due to the factors. If you put variables that are upstream of the factors, such as sex, into the factor analysis, then those upstream variables would have no reason to correlate with each other in ways that match the factor structure (and in fact due to collider bias, would in this case have reasons to end up correlated in ways that precisely oppose the factor structure), so therefore it would be nicer to avoid demographic variables as much as possible.

One thing that annoys me with "normal" people is their inability to easily talk about the meta level of a particular topic. I feel like if I start talking about something meta some people get internally confused a bit, and instead of asking for clarification they will interpret some parts of what I said at the object level, discard the rest, and continue the conversation as if nothing happened.

Sure, you can talk about meta topics with most people with enough effort, you can try carefully prompting them (like "so what I am going to say may sound strange, I ... (read more)

3tailcalled
🤔 I think it's a good one. I do however wonder how much it is just the g factor of general cognitive ability, as well as how well people can self-evaluate it. I think it would be nice to have informant-reports for these sorts of things, where one gets evaluated by some other rationalist one has had discussions with. However I don't know if I can convince random people on LessWrong to collect such informant-reports.
6Gunnar_Zarncke
Using the meta level or being comfortable with it sounds like a good idea. Somewhat related: Ability and willingness to reflect and introspect.

food for me is fuel

https://powersmoothie.org/ maybe? It embraces this view. The cleanup consists of rinsing a single blender.
