Just a general comment on style: I think this article would be much easier to parse if you broke it up with headings, sections, etc. Normally, when I approach a post here on LessWrong, I scroll slowly to the bottom, skim it diagonally, and try to get a quick impression of the contents and whether the article is worth reading. My impression is that most people will ignore articles that are one big, uninterrupted chunk of text like this one.
Thank you for the comprehensive answer and for correcting the points where I wasn't clear. Also, thank you for pointing out that the Kolmogorov complexity of a program is the length of the shortest program that writes that program.
The complexity of the algorithms was totally arbitrary, just for the sake of the example.
I still have some doubts, but everything is clearer now (see my answer to Charlie Steiner also).
I think re-reading your answer made something click, so thanks for that.
The observed data is not **random**, because randomness is not a property of the data itself.
The hypotheses that we want to evaluate are not random either, because we are analysing Turing machines that generate that data deterministically.
If the data is HTHTHT, we do not test a Python script that does:
random.choices(["H","T"], k=6)
What we test instead is something more like:
["H"] +["T"]+["H"]+["T"]+["H"]+["T"]
And
["HT"]*3
In this case, this last script will be simpler and...
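To make the comparison concrete, here is a toy calculation with the usual 2^(-length) prior. I'm using the raw character count of each snippet as a crude stand-in for its length in bits, so the numbers are only illustrative:

```python
# Toy illustration: shorter programs get exponentially more prior weight.
# Character counts stand in for "length in bits" here, which is of course a
# gross simplification.
programs = [
    '["H"] + ["T"] + ["H"] + ["T"] + ["H"] + ["T"]',
    '["H", "T"] * 3',
]

for source in programs:
    length = len(source)       # crude proxy for the program's length in bits
    weight = 2.0 ** (-length)  # Solomonoff-style 2^(-length) prior weight
    print(f"{source}: length={length}, prior weight ~ {weight:.3g}")
```

Both snippets produce the same output, but the second one is much shorter, so it gets almost all of the prior weight.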
The part I did understand is that you weight the programs based on their length in bits: the longer the program, the less weight it has. This makes total sense.
I am not sure that I understand the prefix thing, and I think that's relevant. For example, it is not clear to me whether, once I consider a program that outputs 0101, I should simply ignore other programs that output the same thing plus one extra bit (e.g. 01010).
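For what it's worth, my current (possibly wrong) reading is that the prefix condition is about the program encodings rather than the outputs: no valid program is a prefix of another valid program, which is what lets the 2^(-length) weights sum to at most 1, so programs that output 0101 and programs that output 01010 would both still be counted. A toy check of that, under my assumed reading:

```python
# Toy check of Kraft's inequality for a prefix-free set of codewords.
# (These codewords are made up for illustration; in Solomonoff induction the
# prefix-free set would be the valid program encodings themselves.)
codes = ["0", "10", "110", "111"]

# No codeword is a prefix of another...
assert not any(a != b and b.startswith(a) for a in codes for b in codes)

# ...so the 2^(-length) weights sum to at most 1 and can be used as a prior.
print(sum(2.0 ** -len(c) for c in codes))  # 1.0
```

Is that roughly the role the prefix requirement plays?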
What I also still find fuzzy (and now at least I can put my finger on it) is the part where Solomonoff induction is extended to deal with randomness.
Let me see if ...
I think this is pointing to what I don't understand: how do you account for hypotheses that explain data generated randomly? How do you compare a hypothesis that is a random number generator with some parameters against a hypothesis that has some deterministic component?
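To make my confusion concrete, here is a toy version of how I imagine the comparison might work, with completely made-up program lengths: each hypothesis gets a 2^(-length) prior, and a "random" hypothesis contributes the probability it assigns to the observed data instead of a hard 0 or 1. I may well be getting this wrong:

```python
# Toy sketch with made-up numbers: score each hypothesis by
#   prior      = 2^(-program length in bits)
#   likelihood = probability the hypothesis assigns to the observed data
#   score      = prior * likelihood
data = "HTHTHT"

hypotheses = [
    # (description, assumed length in bits, probability assigned to `data`)
    ("deterministic: print 'HT' three times", 20, 1.0),                  # outputs exactly HTHTHT
    ("random: flip a fair coin for each symbol", 16, 0.5 ** len(data)),  # 2^-6 for any 6-flip sequence
]

for name, length_bits, likelihood in hypotheses:
    prior = 2.0 ** (-length_bits)
    print(f"{name}: prior=2^-{length_bits}, likelihood={likelihood}, score={prior * likelihood:.3g}")
```

If that is roughly right, then the coin-flip hypothesis only wins when the data has no pattern short enough to compensate for the 2^(-n) likelihood it assigns, which is maybe the point?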
Is there a way to understand this without reading the original paper (which will probably take me quite a while)?
When you came to understand this, what was the personal process that took you from knowing about probabilities and likelihoods to understanding Solomonoff induction? Did you ha...
What happens if there is more than one powerful agent just playing the charade game? Is there any good article about what happens in a universe where multiple AGIs are competing among themselves? I normally find only texts that assume that once we get AGI we all die, so there is no room for these scenarios.
I have been (and I am not the only one) very put off by the trend over the last months/years of doomerism pervading LW, with claims like "we have to get AGI right on the first try or we all die" repeated constantly as dogma.
To someone who is very skeptical of the classical doomer position (i.e., that AGI will build nanofactories and kill everyone at once), this post is very persuasive and compelling. This is something I could see happening. It serves as an excellent example for those seeking effective ways to convince skeptics.
Yeah, sorry about that. I didn't put much effort into my last comment.
Defining intelligence is tricky, but to paraphrase EY, it's probably wise not to get too specific since we don't fully understand intelligence yet. In the past, people didn't really know what fire was. Some would just point to it and say, "Hey, it's that shiny thing that burns you." Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don't think the discussion about AGI and doom scenarios gets much benefit from a super ...
I guess the crux here for most people is the timescale. I actually agree that things can eventually get very bad if there is no progress in alignment, etc., but the situation is totally different depending on whether we have 50 or 70 years to work on that problem or, as Yudkowsky keeps repeating, we don't have that much time because AGI will kill us all as soon as it appears.
The standard argument you will probably hear is that AGI will be capable of killing everyone because it can think so much faster than humans. I haven't yet seen doomers seriously engage with the argument about capabilities. I agree with everything you said here, and to me these arguments are obviously right.
I would also place myself in the upper-right quadrant, close to the doomers, but I am not one of them.
The reason is that the exact meaning of "tractable for an SI" is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge, but also creating the economic conditions to make these technologies possible, building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very, very fe...
A historical analogy could be the invention of the computer by Charles Babbage, who couldn't build a working prototype because the technology of his era did not allow the precision necessary for the components.
The superintelligence could build its own factories, but that would require more time and more action in the real world that people might notice; the factory might require unusual components or raw materials in unusual quantities, and some components might even require their own specialized factories, etc.
I wonder, if humanity ever gets to the "can make simulati...
I don't see how that implies that everyone dies.
It's like saying: weapons are dangerous, imagine what would happen if they fell into the wrong hands. Well, that does happen, and sometimes it has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine, no problem with that. But that is not doom.
There are some other assumptions that go into Eliezer's model that are required for doom. I can think of one very clearly, which is:
5. The transition to that god-AGI will be so quick that other entities won't have time to also reach superhuman capabilities. There are no "intermediate" AGIs that can be used to work on alignment-related problems, or even as a defence against unaligned AGIs.
I believe I have found a perfect example where the "Medical Model is Wrong," and I am currently working on a post about it. However, I am swamped with other tasks, so I wonder if I will ever finish it.
In my case, I am highly confident that my model is correct, while the majority of the medical community is wrong. Using your bullet points:
1. Personal: I have personally experienced this disease and know that the standard treatments do not work.
2. Anecdotal: I am aware of numerous cases where the conventional treatment has failed. In fact, I am not awa...
Yes, I agree. I think it is important to remember that achieving AGI and doom are two separate events. Many people around here do draw a strong connection between them, but not everyone. I'm in the camp that thinks we are 2 or 3 years away from AGI (it's hard to see why GPT-4 does not already qualify), but I don't think that implies the imminent extinction of human beings. It is much easier to convince people of the first point because the evidence is already out there.
I guess I will break my recently self-imposed rule of not talking about this anymore.
I can certainly envision a future where multiple powerful AGIs fight against each other and are used as weapons; some might be rogue AGIs, and others might be at the service of human-controlled institutions (such as nation-states). To put it more clearly: I have trouble imagining a future where something along these lines DOES NOT end up happening.
But this is NOT what Eliezer is saying. Eliezer is saying:
The Alignment problem has to be solved AT THE FIRST TRY b...
Your comment is sitting at positive karma only because I strong-upvoted it. It is a good comment, but people on this site are very biased in the opposite direction. That bias is eventually going to drive non-doomers away from this site (probably many have already left), and LW will keep descending in a spiral of non-rationality. I really wonder how people in 10 or 15 years, when we are still around in spite of powerful AGI being widespread, will rationalize that a community devoted to the development of rationality ended up being so irrational. And that was my last comment criticizing doomers; every time I do it, it costs me a lot of karma.
12 Angry Men
Connection to rationality:
This is just the perfect movie about rationality. Damn, there is even a fantastic YouTube series discussing this movie in the context of instrumental rationality! And besides, I have never met anyone who did not enjoy this classic film.
This classic film is a masterclass in group decision-making, overcoming biases, and the process of critical thinking. The plot revolves around a jury deliberating the guilt of a young man accused of murder. Initially, 11 out of the 12 jurors vote "guilty," but one juror...