All of mukashi's Comments + Replies

Answer by mukashi121

12 Angry Men

Connection to rationality: 

This is just the perfect movie about rationality.  Damn, there is even a fantastic YouTube series discussing this movie in the context of instrumental rationality! And besides, I have never met anyone who did not enjoy this classic film. 


This classic film is a masterclass in group decision-making, overcoming biases, and the process of critical thinking. The plot revolves around a jury deliberating the guilt of a young man accused of murder. Initially, 11 out of the 12 jurors vote "guilty," but one juror... (read more)

2Yoav Ravid
Just watched it upon your recommendation. Thanks! It is indeed a fantastic film, and a great example of (epistemic) rationality.

Just a general comment on style: I think this article would be much easier to parse if you included different headings, sections, etc. Normally, when I approach a piece of writing here on LessWrong, I scroll slowly to the bottom, skim it diagonally, and try to get a quick impression of the contents and whether the article is worth reading. My impression is that most people will ignore articles that are big, uninterrupted chunks of text like this one.

3atomantic
Thanks for the feedback. I've added some headers to break it up.

Strong upvoted for visibility and because this sort of post contributes to creating a healthy culture of free speech and rational discussion.

4Alex Vermillion
I'm conflicted. I appreciate the effort put into the post, but it seems like a lot of the posters are genuinely creating lots of low-quality content, and I'd much rather have a small amount of good content than a large amount of meh-or-bad content to sift through to find the good stuff. I've settled on a net downvote, but would probably do an upvote and a disagree vote if that were an option.

Thank you for the comprehensive answer and for correcting the points where I wasn't clear. Also, thank you for pointing out that the Kolmogorov complexity of a program is the length of the shortest program that outputs that program.

The complexity of the algorithms was totally arbitrary and for the sake of the example.

I still have some doubts, but everything is clearer now (see my reply to Charlie Steiner as well).

I think that re-reading your answer made something click, so thanks for that.

The observed data is not **random**, because randomness is not a property of the data itself.
The hypotheses that we want to evaluate are not random either, because we are analysing Turing machines that generate those data deterministically.

If the data is HTHTHT, we do not test a Python script that does:

import random
random.choices(["H", "T"], k=6)

What we test instead is something more like:

["H"] + ["T"] + ["H"] + ["T"] + ["H"] + ["T"]

And

["H", "T"] * 3

In this case, this last script will be simpler and... (read more)
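
A minimal sketch of the comparison being described in this thread, under loose assumptions: program length is measured here in characters as a crude stand-in for bits, and the candidate expressions are only illustrations, not the actual Solomonoff formalism (which runs over a universal prefix machine):

```python
# Toy illustration: deterministic candidate "programs" (Python expressions) that
# try to reproduce the observed data, weighted by 2^-(length), so shorter
# programs get more prior weight. Character lengths are a crude stand-in for
# the bit lengths used in the real formalism.
observed = ["H", "T", "H", "T", "H", "T"]

candidates = [
    '["H"] + ["T"] + ["H"] + ["T"] + ["H"] + ["T"]',  # verbose hypothesis
    '["H", "T"] * 3',                                  # shorter, equivalent hypothesis
    '["H"] * 6',                                       # wrong hypothesis (outputs all H)
]

weights = {}
for src in candidates:
    if eval(src) == observed:            # keep only programs that reproduce the data
        weights[src] = 2.0 ** -len(src)  # shorter program => larger prior weight

total = sum(weights.values())
for src, w in weights.items():
    print(src, "->", w / total)          # the shorter program dominates the posterior
```

Under this toy weighting, the shorter expression ends up with essentially all of the posterior mass, which is the sense in which the "simpler" script wins.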

The part I understood is that you weight the programs based on their length in bits: the longer the program, the less weight it has. This makes total sense.

I am not sure that I understand the prefix thing and I think that's relevant. For example, it is not clear to me if once I consider a program that outputs 0101 I will simply ignore other programs that output that same thing plus one bit (e.g. 01010).

What I also still find fuzzy (and now at least I can put my finger on it) is the part where Solomonoff induction is extended to deal with randomness.

Let me see if ... (read more)

7drocta
No, the thing about prefixes is about which strings encode a program, not about their outputs. The purpose of this is mostly just to define a prior over possible programs, in a way that conveniently ensures that the total probability assigned over all programs is at most 1. Seeing as it still works for different choices of language, it probably doesn't need to use exactly this way of defining the probabilities, and I think any reasonable distribution over programs will do (at least, after enough observations). But, while I think another distribution over programs should work, this thing with the prefix-free language is the standard way of doing it, and there are reasons it is nice.

The analogy for a normal programming language would be if no Python script were a prefix of any other Python script (which isn't true of Python scripts, but could be if they were required to end with some "end of program" string).

There will be many different programs which produce the exact same output when run, and all of them will be considered when doing Solomonoff induction.

This may be pedantic of me, but I wouldn't call the lengths of the programs the Kolmogorov complexity of the programs. The lengths of the programs are (upper bounds on) the Kolmogorov complexity of the outputs of the programs. The Kolmogorov complexity of a program g would be the length of the shortest program which outputs the program g, not the length of g.

When you say that program C has 4 bits, is that just a value you picked, or are you obtaining that from somewhere?

Also, for a prefix-free programming language, you can't have 2^5 valid programs of length 5 and 2^6 programs of length 6, because if all possible binary strings of length 5 were valid programs, then no string of length 6 would be a valid program. This is probably getting away from the core points though. (You could have the programming language be such that, e.g., 00XXXXX outputs the bits XXXXX, and 01XXXXXX outputs the bits XXXXXX, and other programs
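
A minimal sketch of the prefix-free point above, using the toy `00XXXXX` / `01XXXXXX` encoding from the parenthetical (the helper name is just illustrative): because no valid program is a prefix of another, the 2^-length weights sum to at most 1, which is what makes this a convenient prior:

```python
# Toy prefix-free "language" from the example above:
#   00 + 5 bits -> a 7-bit program that outputs its 5 data bits
#   01 + 6 bits -> an 8-bit program that outputs its 6 data bits
# No program is a prefix of any other (they differ at the second bit, or have
# equal length), so the total prior mass sum(2^-len) stays <= 1 (Kraft inequality).
from itertools import product

def valid_programs():
    for bits in product("01", repeat=5):
        yield "00" + "".join(bits)
    for bits in product("01", repeat=6):
        yield "01" + "".join(bits)

total = sum(2.0 ** -len(p) for p in valid_programs())
print(total)  # 32 * 2^-7 + 64 * 2^-8 = 0.25 + 0.25 = 0.5, comfortably <= 1
```

This also illustrates the point that you cannot have all 2^5 strings of length 5 and all 2^6 strings of length 6 as valid programs at the same time.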

Yes, this is something I can see easily, but I am not sure how Solomonoff induction accounts for that 

I think this is pointing to what I don't understand: how do you account for hypotheses that explain data generated randomly? How do you compare a hypothesis which is a random number generator with some parameters against a hypothesis which has some deterministic component?

Is there a way to understand this without reading the original paper (which will probably take me quite a long time)?

When you came to understand this, what was the personal process that took you from knowing about probabilities and likelihoods to understanding Solomonoff induction? Did you ha... (read more)

3Charlie Steiner
"Randomness," in this way of thinking, isn't a property of hypotheses. There is no one hypothesis that is "the random hypothesis." Randomness is just what happens when the outcome depends sensitively on things you don't know. I mean, there are explanations of solomonoff induction on this site that are fine, but for actually getting a deep understanding you've probably gotta do stuff like read An Introduction To Kolmogorov Complexity by Li and Vitanyi.

I have followed a similar strategy using Anki cards. However, I think that allocating a specific time slot to review your principles and then "act" on them is probably much more effective than passively reminding yourself of those principles. I will adopt this.

3Jonathan MoregƄrd
Simply memorizing the principles à la Anki seems risky - it's easy to accidentally disconnect the principle from its insight-generating potential, turning it into a disconnected fact to memorize. This risk is minimised by reviewing the principles in connection to real life.

What happens if there is more than one powerful agent just playing the charade game? Is there any good article about what happens in a universe where multiple AGIs are competing among themselves? I normally find only texts that assume that once we get AGI we all die, so there is no room for these scenarios.

8Thane Ruthenis
Coincidentally, I've just made a post on that very topic. Though the comments fairly point out my analysis might've been somewhat misaimed there. You might find this post by Andrew Critch, or this and that posts by Paul Christiano, more to your liking.

I have been (and I am not the only one) very put off by the trend in recent months/years of doomerism pervading LW, with things like "we have to get AGI right on the first try or we all die" repeated constantly as dogma.

To someone who is very skeptical of the classical doomist position (i.e., that AGI will build nanofactories and kill everyone at once), this post is very persuasive and compelling. This is something I could see happening. This post serves as an excellent example for those seeking effective ways to convince skeptics.
 

2RussellThor
Yes, this is a slow-takeoff scenario that is realistic to be worried about.
4Noosphere89
Yep, that's the source I was looking for to find the original source of the claim.

Many of the calculations of brain capacity are based on wrong assumptions. Is there an original source for that 2.5 PB calculation? This video is very relevant to the topic if you have some time to check it out:

[embedded video]

4Noosphere89
Reber (2010) was my original source for the claim that the human brain has 2.5 petabytes of memory, but it's definitely something that got reported a lot by secondary sources like Scientific American.

The same things I would do in Slack! I simply have some work groups on Discord, that's why.

6Adam B
I've made a basic version of Fatebook for Discord - you can install it here!
1Adam B
Currently it's not - just Slack and web. What do you think you'd use it for in Discord?

Great! Can you make it so that, if I input P for hypothesis A, 1 - P appears automatically for hypothesis B?

3Adele Lopez
Hmm, you could use the slider to set the prior P for hypothesis A and it will set the prior for hypothesis B to 1 - P; does that not work for you for some reason? The problem with having that behavior when you type in the number is that I want people to be able to enter the priors as odds, so I don't want to presume that the other numbers will change to allow for that.
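
A rough sketch of the behaviour described here, with a hypothetical helper (not the calculator's actual code): typed prior values are treated as odds and normalized, while the slider can still enforce P and 1 - P directly:

```python
# Hypothetical sketch: treat typed prior values as odds and normalize them,
# so the user can enter e.g. 1 and 3 (meaning 1:3 odds), or probabilities
# that already sum to 1.
def normalize_priors(*odds):
    total = sum(odds)
    return [x / total for x in odds]

print(normalize_priors(1, 3))      # [0.25, 0.75]
print(normalize_priors(0.4, 0.6))  # already-normalized probabilities pass through
```

With this convention, the slider case (P and 1 - P) is just the special case where the entries already sum to 1.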

This should be curated. Just reading this list is a good exercise for people who assign a very high probability to a single possible scenario.

I don't see why Jaynes is wrong. I guess it depends on the interpretation? If two humans are chasing the same thing and there is a limited amount of it, of course they are in conflict with each other. Isn't that what Jaynes is pointing at?

2rotatingpaguro
The way Jaynes says it, it looks like it is meant to be a more general property than something that applies only "if two humans are chasing the same thing and there is a limited amount of it".

Good post, I hope to read more from you

Yeah, sorry about that. I didn't put much effort into my last comment.

Defining intelligence is tricky, but to paraphrase EY, it's probably wise not to get too specific since we don't fully understand intelligence yet. In the past, people didn't really know what fire was. Some would just point to it and say, "Hey, it's that shiny thing that burns you." Others would invent complex, intellectual-sounding theories about phlogiston, which were entirely off base. Similarly, I don't think the discussion about AGI and doom scenarios gets much benefit from a super ... (read more)

1M. Y. Zuo
That's probably true, but that would imply we would understand even less what 'artificial intelligence' or 'artificial general intelligence' are? Spelling it out like that made me realize how odd talking about AI or AGI is. In no other situation, that I've heard of, would a large group of folks agree that there's a vague concept with some confusion around it and then proceed to spend the bulk of their efforts to speculate on even vaguer derivatives of that concept.
4Vladimir_Nesov
I think a good definition for AGI is capability for open-ended development, the point where the human side of the research is done, and all it needs to reach superintelligence from that point on is some datacenter maintenance and time, so that eventually it can get arbitrarily capable in any domain it cares for, on its own. This is a threshold relevant for policy and timelines. GPT-4 is below that level (it won't get better without further human research, no matter how much time you give it), and ability to wipe out humans (right away) is unnecessary for reaching this threshold.

What is an AGI? I have seen a lot of "no true Scotsman" around this one.

1M. Y. Zuo
This seems like a non sequitur; there might or might not even be such a thing as 'AGI' depending on how one understands intelligence, hence why it is a prerequisite crux. Can you clarify what you're trying to say?

I guess the crux here for most people is the timescale. I actually agree that things can eventually get very bad if there is no progress in alignment etc., but the situation is totally different if we have 50 or 70 years to work on that problem or, as Yudkowsky keeps repeating, we don't have that much time because AGI will kill us all as soon as it appears.

The standard argument you will probably hear is that an AGI will be capable of killing everyone because it can think so much faster than humans. I haven't yet seen serious engagement from doomers with the argument about capabilities. I agree with everything you said here, and to me these arguments are obviously right.

1Seth Herd
The arguments do seem right. But they eat away at the edges of AGI x-risk arguments, without addressing the core arguments for massive risks. I accept the argument that doom isn't certain, that takeoff won't be that fast, and that we're likely to get warning shots. We're still likely to ultimately be eliminated if we don't get better technical and societal alignment solutions relatively quickly.

Any source you would recommend to know more about the specific practices of Mormons you are referring to?

2PeterMcCluskey
No. I found a claim of good results here. Beyond that I'm relying on vague impressions from very indirect sources, plus fictional evidence such as the movie Latter Days.

The Babbage example is the perfect one. Thank you, I will use it

This would clearly put my point in a different place from the doomers

I would also place myself in the upper right quadrant, close to the doomers, but I am not one of them.

The reason is that the exact meaning of "tractable for an SI" is not very clear to me. I do think that nanotechnology/biotechnology can progress enormously with SI, but the problem is not only developing the required knowledge, but also creating the economic conditions to make these technologies possible, building the factories, making new machines, etc. For example, nowadays, in spite of the massive worldwide demand for microchips, there are very very fe... (read more)

A historical analogy could be the invention of the computer by Charles Babbage, who couldn't build a working prototype because the technology of his era did not allow the precision necessary for the components.

The superintelligence could build its own factories, but that would require more time and more action in the real world that people might notice; the factory might require some unusual components or raw materials in unusual quantities; some components might even require their own specialized factory, etc.

I wonder, if humanity ever gets to the "can make simulati... (read more)

I agree with this take, but do those plans exist, even in theory?

This is fantastic. Is there anything remotely like this available for Discord?

6Sage Future
Thanks! There's a Manifold Markets Discord bot which lets you quickly create and bet on markets. We might create a Fatebook Discord bot if there's enough interest (some other people have asked for it) - though first we're making a web version.

I don't see how that implies that everyone dies.

It's like saying, weapons are dangerous, imagine what would happen if they fall into the wrong hands. Well, it does happen, and sometimes that has bad consequences, but there is no logical connection between that and everyone dying, which is what doom means. Do you want to argue that LLMs are dangerous? Fine. No problem with that. But doom is not that.

2Iknownothing
That's fair. Edited to reflect that. I do think it could be a useful way to convince someone who is completely skeptical of risk from AI. 

Thanks for this post. It's refreshing to hear about how this technology will impact our lives in the near future without any references to it killing us all

There are some other assumptions that go into Eliezer's model that are required for doom. I can think of one very clearly, which is:

5. The transition to that god-AGI will be so quick that other entities won't have the time to also reach superhuman capabilities. There are no "intermediate" AGIs that can be used to work on alignment-related problems or even as a defence against unaligned AGIs.

Answer by mukashi110

I believe I have found a perfect example where the "Medical Model is Wrong," and I am currently working on a post about it. However, I am swamped with other tasks, and I wonder if I will ever finish it.

In my case, I am highly confident that my model is correct, while the majority of the medical community is wrong.  Using your bullet points:

1. Personal: I have personally experienced this disease and know that the standard treatments do not work.

2. Anecdotal: I am aware of numerous cases where the conventional treatment has failed. In fact, I am not awa... (read more)

3Elo
Please subscribe me to your newsletter! If you have a Google doc, I'd be interested to read it or offer comments!

Yes, I agree. I think it is important to remember that achieving AGI and doom are two separate events. Many people around here do make a strong connection between them, but not everyone. I'm in the camp that we are 2 or 3 years away from AGI (it's hard to see why GPT-4 does not qualify as that), but I don't think that implies the imminent extinction of human beings. It is much easier to convince people of the first point because the evidence is already out there.

Has he personally tried interacting with GPT-4? I can't think of a better way. It convinced even Bryan Caplan, who had publicly bet against it.

I would certainly appreciate knowing the reason for the downvotes

3Raemon
FYI I upvoted your most recent comment, but downvoted your previous few in this thread. Your most recent comment seemed to do a good job spelling out your position and gesturing at your crux. My guess is that maybe other people were just tired of the discussion and were downvoting sort of to make the whole discussion go away.

I guess I will break my recently self-imposed rule of not talking about this anymore. 

I can certainly envision a future where multiple powerful AGIs fight against each other and are used as weapons; some might be rogue AGIs and some others might be at the service of human-controlled institutions (such as nation states). To put it more clearly: I have trouble imagining a future where something along these lines DOES NOT end up happening.

But, this is NOT what Eliezer is saying. Eliezer is saying:

The Alignment problem has to be solved AT THE FIRST TRY b... (read more)

2mukashi
I would certainly appreciate knowing the reason for the downvotes

But I think they do believe what they say. Is it maybe that they are... pointing to something else when using the word AGI? In fact, I do not even know if there is a commonly accepted definition of AGI.

I also don't see how some people can say that AGI will take decades when GPT-4 is already almost there.

5lc
They say it because they are trying to say the things that make them seem like sober experts, not the things that they actually believe.
5Mitchell_Porter
I finally noticed your anti-doom post. Mostly you seem to be skeptical about the specific idea of the single superintelligence that rapidly bootstraps its way to control of the world. The complexity and uncertainty of real life means that a competitive pluralism will be maintained.

But even if that's so, I don't see anything in your outlook which implies that such a world will be friendly to human beings. If people are fighting for their lives under conditions of AI-empowered social Darwinism, or cowering under the umbrella of AI superpowers that are constantly chipping away at each other, I doubt many people are going to be saying, oh those foolish rationalists of the 2010s who thought it was all going to be over in an instant.

Any scenario in which AIs have autonomy, general intelligence, and a need to compete, just seems highly unstable from the perspective of all-natural unaugmented human beings remaining relevant.
3Raemon
Downvoted for the pattern of making a vague claim about LWers being biased, and then responding to followup questions with vague evasive answers with no arguments.
5Mitchell_Porter
How about AIs that are off the leash of human control, making their own decisions and paying their own way in the world? Would there be any of those?

Your comment is sitting at positive karma only because I strong-upvoted it. It is a good comment, but people on this site are very biased in the opposite direction. This bias is eventually going to drive non-doomers away from this site (probably many have already left), and LW will continue descending in a spiral of non-rationality. I really wonder how people in 10 or 15 years, when we are still around in spite of powerful AGI being widespread, will rationalize that a community devoted to the development of rationality ended up being so irrational. And that was my last comment showing criticism of doomers; every time I do it, it costs me a lot of karma.

5Mitchell_Porter
I wonder what you envision when you think of a world where "powerful AGI" is "widespread". 
-1[anonymous]
I mean I have almost 1000 total karma and am gaining over time. The doomers would be convinced the AGIs are just waiting to betray, to "heel turn" on us.

I can't agree more with you. But this is a difficult position to maintain here on LW, and one that earns you a lot of negative karma.

1[anonymous]
Yep.  I have some posts that are +10 karma -15 disagree or more. Nobody ever defends their disagreements though... One person did and they more or less came around to my pov.

+1 here

Sorry, I assumed you posted that just before the interview

1Johannes C. Mayer
If I remember correctly, the interview was the reason that I made this list in the first place šŸ˜€

Well, it seems it is your lucky day:

1Johannes C. Mayer
Well, that video is already in the playlist, if you look, but thanks for the suggestion.