Kawoomba comments on On saving the world - Less Wrong

101 points | Post author: So8res | 30 January 2014 08:00PM

Comment author: Kawoomba 30 January 2014 08:45:20PM 18 points

You'll excuse me if I don't mention them here: there is a lot of inferential distance. Perhaps one day I'll write a sequence.

What a tease! Why not give us a short bullet-point list of your conclusions? Most readers around here wouldn't dismiss them out of hand, even lacking a chain of arguments leading up to them. It's enjoyable to jump across inferential chasms, especially if you think of your conclusions as important. Are they reactionary?

It's tempting to say "either I present my conclusions in their most convincing form, as a sequence, or not at all", but remember that in resource-constrained environments, the perfect is the enemy of the good.

Comment author: shminux 30 January 2014 10:52:17PM 25 points

Why not give us a short bullet-point list of your conclusions? Most readers around here wouldn't dismiss them out of hand, even lacking a chain of arguments leading up to them.

We sure would. We think we are smart, and the inferential gap the OP mentioned is unfortunately almost invisible from this side. That's why Eliezer had to write all those millions of words.

Comment author: christopherj 31 January 2014 02:18:32AM 11 points

Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions. Possible loss: a few select members become biased, due to the large inferential gap, against the ideas that you gave up to pursue a more important goal. Possible gains: rational feedback on your ideas, supporters, and an estimate of the number of supporters you could gain by sharing your ideas more widely on this site.

Comment author: wedrifid 31 January 2014 07:49:51PM 9 points

Easy test: send a summary/bullet point/whatever as a private message to a few select people from LessWrong, and ask them for their reactions.

That is an interesting test, but it is not testing quite the same thing as whether the conclusions would be dismissed out of hand in a post. "Herding cats" is a very different thing from interacting with a particular cat with whom you have opened up a direct mammalian social exchange.

Comment author: So8res 31 January 2014 04:47:58PM 4 points

Perhaps. People, PM me if you're interested. No guarantees.

Comment author: Kaj_Sotala 31 January 2014 08:50:33AM 4 points

In case So8res wants to try this, I'd be quite curious to see the bullet points.

Comment author: jazmt 31 January 2014 07:35:55PM 1 point

Me too.

Comment author: MrMind 31 January 2014 04:20:29PM 1 point

Me too.

Comment author: blacktrance 31 January 2014 04:31:21PM 0 points

And me as well.

Comment author: EndlessStrategy 31 January 2014 10:56:59PM -2 points

I think you underestimate the potential loss. Worst-case scenario: one of the people he PMs his ideas to puts them online and spreads links around this site.

Comment author: [deleted] 30 January 2014 11:49:39PM 5 points

Do we presume Eliezer had to write all those millions of words?

Comment author: shminux 31 January 2014 12:06:36AM 3 points

Write a bullet-point summary for each sequence and tell me that one would not be tempted to "dismiss them out of hand, even lacking a chain of arguments leading up to them", unless one is already familiar with the arguments.

Comment author: MrMind 31 January 2014 04:36:59PM 11 points

I'll try, just for fun, to summarize Eliezer's conclusions from the pre-fun-theory and pre-community-building parts of the Sequences:

  • artificial intelligence can self-improve;
  • with every improvement, the rate at which it can improve increases;
  • AGI will therefore experience exponential improvement, i.e. AI "fooms" (a minimal sketch of this step follows the list);
  • even if there's a cap to this process, the resulting agent will be a very powerful agent, incomprehensibly so (singularity);
  • an agent's effectiveness does not constrain its utility function (orthogonality thesis);
  • humanity's utility function occupies a very tiny and fragmented fraction of the set of all possible utility functions (human values are fragile);
  • if we fail to encode the correct human utility function in a self-improving AGI, even tiny differences will result in a catastrophically unpleasant future (UFAI as x-risk);
  • AGI is probably coming pretty soon, so we had better hurry to figure out how to get the previous point right.
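
To make the "foom" step concrete: the simplest reading of the second bullet is a differential equation in which capability feeds back into its own growth rate. Here is a minimal sketch under that assumption (the capability variable C and rate constant k are illustrative, not anything from the Sequences themselves):

  % Minimal model: each improvement speeds up further improvement,
  % so the growth rate of capability C(t) is proportional to C(t) itself:
  \[
    \frac{dC}{dt} = k\,C(t), \qquad k > 0,
  \]
  % and solving with initial capability C(0) = C_0 gives exponential growth:
  \[
    C(t) = C_0 \, e^{k t}.
  \]

Any feedback at least this strong gives at least exponential growth; the "cap" in the fourth bullet is the caveat that real returns may eventually diminish.
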
Comment author: JamesAndrix 06 February 2014 04:21:51AM 4 points

Anecdote: I think I've had better responses summarizing LW articles in a few paragraphs without linking than linking to them with short explanations.

It does take a lot to cross those inferential distances, but I don't think quite that much.

To be fair, my discussions may not cover a whole sequence; I have the opportunity to pick out what is needed in a particular instance.

Comment author: [deleted] 31 January 2014 10:08:28AM 4 points

That would kind of require that I spend my time reading dozens to hundreds of blog entries espousing a mixture of basic good sense and completely unfalsifiable theories extrapolated from pure mathematics, just so I can summarize them in terms of their most surprising conclusions.

EDIT: The previous comment is not meant as personal disrespect. It's just meant to point out that treating Eliezer's Sequences as epistemically superlative, and requiring someone to read them all to have even well-informed views on anything, is... low-utility, especially considering I have read a fair portion.

Comment author: shminux 31 January 2014 03:50:14PM 1 point

I agree with all that, actually. My original point was not that Eliezer was right about everything, or that the Sequences should be canonized into scriptures, but that the conclusions are far enough from the mainstream as to be easily dismissed if presented on their own.

Comment author: [deleted] 31 January 2014 06:09:38PM 1 point

Which ones?

Comment author: EGarrett 31 January 2014 09:46:27PM 1 point

Eli, I want to +1 this comment because I agree with the awkwardness of expecting people to read such a large amount of information to participate in a conversation, but it looks like you're also suggesting that those articles are "just basic good sense." Unless I misunderstood you, that's "obviousness in retrospect" (a.k.a. hindsight bias). So I won't go +1 or -1.

Comment author: [deleted] 31 January 2014 10:15:18PM 5 points

I wouldn't say retrospect, no. Maybe it's because I've mostly read the "Core Sequences" (covering epistemology rather than more controversial subjects), but most of it did seem like basic good sense, in terms of "finding out what is true and actually correcting your beliefs for it". As in, I wasn't really surprised all that much by what was written there, since it was mostly giving me vocabulary for things I had already known on some vaguer level.

Maybe I just had an abnormally high exposure to epistemic rationality prior to coming across the Sequences via HPMoR, since I found out about those at age 21 rather than younger and was already of the "read everything interesting in sight" bent? Maybe my overexposure to an abnormally scientific clade of people predisposes me to think some degree of rationality is normal?

Maybe it was the fact that when I heard about psychics as a kid, I bought myself a book on telekinesis, tried it out, and got bitterly disappointed by its failure to work -- indicating an abnormal predisposition towards taking ideas seriously and testing them?

Screw it. Put this one down as "destiny at work". Everyone here has a story like that; it's why we're here.

Comment author: EGarrett 31 January 2014 11:01:24PM 1 point

I think we see eye to eye: we both came here with a large amount of pre-existing knowledge and understanding of rationality, and I think for both of us reading all of the Sequences is just not going to be a realistic expectation. But by the same token, I can't go with you when you say the ideas are basic.

Even if you knew them already, they are still very important and useful ideas that most people don't seem to know or act upon. I have respect for them and for the people who write about them, even if I don't have time to go through all of them; the inability to do that forms a significant barrier to my participation in the site.

Comment author: itaibn0 30 January 2014 11:16:29PM 5 points

Personally, I think it is plausible that I would find such a bullet-point list true or mostly true. However, I have already dismissed out of hand the possibility that it would be true, important, and novel all at once.

Comment author: MTGandP 10 March 2014 11:52:13PM 0 points

When I read this story, I became emotionally invested in Nate (So8res). I empathized with him. He's the protagonist of the story. Therefore, I have to accept his ideas because otherwise I'd be rejecting his status as protagonist.