turchin comments on Roadmap: Plan of Action to Prevent Human Extinction Risks - Less Wrong

13 Post author: turchin 01 June 2015 09:58AM

Comment author: turchin 02 June 2015 09:21:36PM 1 point [-]

I am now working on a large explanation text, which will be 40-50 pages long and will include links. Maybe I will embed the links inside the PDF.

I don't think I should go into all the details of decision theory and EA; I just put "rationality".

Picking potential world saviours, educating them, and providing all our support seems like a good idea, but we probably don't have time. I will think more about it.

Planetary mining was a recent addition, addressed to people who think Peak Oil and Peak Everything are the main risks. Personally, I don't believe space mining is useful without nanotech.

The point about dates is really important. Maybe I should use vaguer dates, like the beginning, middle, and second half of the 21st century? What other ways are there to say it more vaguely?

I upvoted your post, and in general I think downvoting without explanation is not a good thing on LW.

"Pray" corrected.

Comment author: ete 03 June 2015 12:33:26AM 2 points [-]

Once the explanation text exists, linking to the appropriate section of it (which would in turn link out to primary sources) would probably be better than linking to primary sources directly.

Compressing this to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep the roadmap very streamlined, having this as just a pointer makes sense, though perhaps alongside rationality it would be good to have a pointer more clearly directed at "make wanting to fix the future a widely accepted thing", rather than rationality's usual meaning of being effective. I'd also think it more appropriate for the A3 stream than A2, at least for what I have in mind.

I'd think creating world saviours from scratch would not be a viable option under some AI timelines. However, getting good at picking out promising people in (or just leaving) university who have the right ethical streak, and putting them in a network full of EA/x-risk-reduction memes, could plausibly turn "person who is smart and will probably get a good job at some mildly evil corporation" into "person dedicated to trying to fix major problems", "person in an earning-to-give career funding interventions", or "person working towards top jobs to gain leverage to fix things from the inside" on the order of months, with an acceptable rate of success. Even a few percent changing life trajectory would be more than enough to pay back the investment of running that network, in terms of x-risk reduction.

Perhaps classifying things in terms of what should be the focus right now versus things that need more steps before they become viable projects would be more useful than attempting to give dates at all? Vague dates are better, but thinking about it more, I'm not sure even wide ranges really solve the problem: our ability to forecast several very important things is highly limited. I'm not sure about a good set of labels for this, but perhaps something like:

  • Immediate (things we could work on, or are already working on, right now)
  • Near future (single-digit years? things which need some foundations, but are within sight)
  • Mid-term (unsure when we'll get there; may vary significantly from topic to topic; we can get a rough idea of what will likely need doing, but can't get into the details until previous layers of tech/organization are ready)
  • Distant (much harder to forecast; major goals and projects which need large, unpredictable tech advances and/or significant social changes before they're accessible)
  • Outcomes (ways things could end up when one or more of the previous projects goes through)

Again, I'm not sure about these exact words, but using labels that point to the number of steps and the difficulty of forecasting seems like a thing to explore.

And thank you. I tend to take downvotes as very strong negative reinforcement, so it helps that you find my post somewhat useful.

Comment author: turchin 03 June 2015 12:19:33PM 1 point [-]

Thank you for the inspiring comment. Yes, anonymous downvoting makes me feel as if I have a secret enemy in the woods(( The idea of creating "world saviours" from bright students is more realistic, and effective altruists and LW have done a lot in this direction. Rationality should also be elaborated on, and the suggestion about date classification is inspiring.

Comment author: Lumifer 03 June 2015 03:42:34PM 1 point [-]

The idea of creating "world saviour"

I'm very, very suspicious of the idea of creating "world saviours". In the Abrahamic tradition, world saviours are expected to sweep the Earth clean of bad men with fire and sword. Yes, nice things are promised after that :-/