ArisKatsaris comments on Rationality Quotes September 2012 - Less Wrong

Post author: Jayson_Virissimo 03 September 2012 05:18AM




Comment author: [deleted] 18 September 2012 05:50:15PM *  0 points [-]

“If you told me the Earth would only last a hundred years (i.e. won't last longer than that) .... It's a moot point since the Earth won't only last a hundred years (i.e. it will last longer).” At least that's what I got on the first reading.

I think I could kind-of make sense of “it would increase the immediate priority of CFAR and decrease that of SIAI” under either hypothesis about what he means, though one interpretation would need to be more strained than the other.

Comment author: ArisKatsaris 18 September 2012 05:58:35PM *  4 points [-]

The idea is that if Earth lasts at least a hundred years (if that's a given), then the possibility of a uFAI in that timespan severely decreases -- so SIAI (which seeks to prevent a uFAI by building a FAI) is less of an immediate priority, and it becomes a higher priority to develop CFAR, which will increase the public's rationality for future generations, so that the future generations don't launch a uFAI.

Comment author: [deleted] 18 September 2012 06:10:11PM *  0 points [-]

(The other interpretation would be “If the Earth is going to only last a hundred years, then there's not much point in trying to make a FAI since in the long term we're screwed anyway, and raising the sanity waterline will make us enjoy more what time there is left.”)

EDIT: Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built) or saying that even a FAI would destroy the Earth within 100 years (which sounds unlikely to me -- even if the CEV of humanity would eventually want to do that, I guess it would take more than 100 years to terraform another place for us to live and for us all to move there).

Comment author: Eliezer_Yudkowsky 19 September 2012 03:14:24PM 3 points [-]

I was just using "Earth" as a synonym for "the world as we know it".

Comment author: MixedNuts 19 September 2012 05:18:46PM 11 points [-]

I think I disagree; care to make it precise enough to bet on? I'm expecting life still around, Earth the main population center, most humans not uploaded, some people dying of disease or old age or in wars, most people performing dispreferred activities in exchange for scarce resources at least a couple months in their lives, most children coming out of a biological parent and not allowed to take major decisions for themselves for at least a decade.

I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die and then win, give it to some sensible charity.

How's that sound? All of the above is up for negotiation.

Comment author: wedrifid 20 September 2012 10:36:18AM *  4 points [-]

I'm offering $100 at even odds right now and will probably want to bet again in the next few years. I can give it to you (if you're going to transfer it to SIAI/CFAR tell me and I'll donate directly), and you pay me $200 if the world has not ended in 100 years, as soon as we're both available (e.g. thawed). If you die you can keep the money; if I die and then win, give it to some sensible charity.

(Neglecting any logistical or legal issues) this sounds like a no-brainer for Eliezer (accept).

How's that sound?

Like you would be better served by making the amounts you give and expect to receive if you win somewhat more proportionate to the expected utility of the resources at the time. Even if Eliezer were sure he was going to lose, he should still take the low-interest loan.

Even once the above is accounted for Eliezer should still accept the bet (in principle).

Comment author: MixedNuts 20 September 2012 10:55:49AM 2 points [-]

Dollar amounts are meant as purchasing-power-adjusted. I am sticking my fingers in my ears and chanting "La la, can't hear you" at discounting effects.

Comment author: Mitchell_Porter 26 September 2012 11:32:34PM 3 points [-]

I'm expecting ...

That's a nice set of criteria by which to distinguish various futures (and futurists).

Comment author: Eliezer_Yudkowsky 26 September 2012 11:01:32PM 6 points [-]

As wedrifid says, this is a no-brainer "accept" (including the purchasing-power-adjusted caveat). If you are inside the US and itemize deductions, please donate to SIAI; otherwise I'll accept via Paypal. Your implied annual interest rate, assuming a 100% probability of winning, is 0.7% (plus inflation adjustment). Please let me know whether you decide to go through with it; withdrawal is completely understandable - I have no particular desire for money at the cost of forcing someone else to go through with a bet they feel uncomfortable about. (Or rather, my desire for $100 is not this strong - I would probably find $100,000 much more tempting.)
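(For the curious: the 0.7% figure is just ordinary compound interest on the bet's stated amounts -- $100 paid now, $200 returned in 100 years, both in purchasing-power-adjusted dollars. A quick sketch of the arithmetic:)

```python
# Implied annual interest rate on the bet: stake $100 now,
# receive $200 (purchasing-power-adjusted) in 100 years.
stake = 100.0
payout = 200.0
years = 100

# Solve stake * (1 + r)**years == payout for r.
rate = (payout / stake) ** (1 / years) - 1
print(f"implied annual rate: {rate:.2%}")  # about 0.70% per year
```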

Comment author: MixedNuts 27 September 2012 09:55:56AM 10 points [-]

PayPal-ed to sentience at pobox dot com.

Don't worry, my only debtor who pays higher interest rates than that is my bank. As long as that's not my main liquidity bottleneck I'm happy to follow medieval morality on lending.

If you publish transaction data to confirm the bet, please remove my legal name.

Comment author: Eliezer_Yudkowsky 27 September 2012 03:20:28PM 9 points [-]

Bet received. I feel vaguely guilty and am reminding myself hard that money in my Paypal account is hopefully a good thing from a consequentialist standpoint.

Comment author: gwern 27 September 2012 07:46:13PM 9 points [-]

Bet recorded: LW bet registry, PB.com.

Comment author: MugaSofer 27 September 2012 11:43:34AM 0 points [-]

I'm expecting [...] some people dying of disease or old age or in wars

Care to explain why? You sound like you expect nanotech by then.

Comment author: MixedNuts 27 September 2012 12:56:24PM *  8 points [-]

I definitely expect nanotech a few orders of magnitude awesomer than we have now. I expect great progress on aging and disease, and wouldn't be floored by them being solved in theory (though it does sound hard). What I don't expect is worldwide deployment. There are still people dying from measles, when in any halfway-developed country every baby gets an MMR shot as a matter of course. I wouldn't be too surprised if everyone who can afford basic care in rich countries was immortal while thousands of brown kids kept drinking poo water and dying. I also expect longevity treatments to be long-term, not permanent fixes, and thus hard to access in poor or politically unstable countries.

The above requires poor countries to continue existing. I expect great progress, but not abolition of poverty. If development continues the way it has (e.g. Brazil), a century isn't quite enough for Somalia to get its act together. If there's a game-changing, universally available advance that bumps everyone to cutting-edge tech levels (or even 2012 tech levels), then I won't regret that $100 much.

I have no idea what wars will look like, but I don't expect them to be nonexistent or nonlethal. Given no game-changer, socioeconomic factors vary too slowly to remove incentive for war. Straightforward tech applications (get a superweapon, get a superdefense, give everyone a superweapon, etc.) get you very different war strategies, but not world peace. If you do something really clever like world government nobody's unhappy with, arms-race-proof shields for everyone, or mass Gandhification, then I have happily lost.

Comment author: MugaSofer 28 September 2012 07:56:37AM 2 points [-]

Thanks for explaining!

Of course, nanotech could be self replicating and thus exponentially cheap, but the likelihood of that is ... debatable.

Comment author: [deleted] 19 September 2012 05:20:43PM *  2 points [-]

(I guess I had been primed to take “Earth” to mean ‘a planet or dwarf planet (according to the current IAU definition) orbiting the Sun between Venus and Mars’ by this. EDIT: Dragon Ball too, where destroying a planet means turning it into dust, not just rendering it uninhabitable.)

Comment author: ciphergoth 20 September 2012 06:41:05AM 1 point [-]

I feel an REM song coming on...

Comment author: ArisKatsaris 18 September 2012 07:32:58PM 1 point [-]

Also, if your interpretation is correct, by saying that the Earth won't last 100 years he's either admitting defeat (i.e. saying that an uFAI will be built

EY does seem in a darker mood than usual lately, so it wouldn't surprise me to see him implying pessimism about our chances out loud, even if it doesn't go so far as "admitting defeat". I do hope it's just a mood, rather than that he has rationally updated his estimation of our chances of survival to be even lower than they already were. :-)

Comment author: Decius 26 September 2012 06:04:47PM 2 points [-]

"The world as we know it" ends if FAI is released into the wild.

Comment author: [deleted] 27 September 2012 03:51:00PM 2 points [-]

When I had commented, EY hadn't clarified yet that by Earth he meant “the world as we know it”, so I didn't expect “Earth” to exclude ‘the planet between Venus and Mars 50 years after a FAI is started on it’.

Comment author: Decius 27 September 2012 06:29:55PM *  0 points [-]

50 years after a self-improving AI is released into the wild, I don't expect Venus and Mars to be in their present orbits. I expect that they would be gradually moving towards sharing the orbit that the Earth is moving towards (or is already established in), spaced 120 degrees apart, propelled by a rocket which uses large reflectors in space to heat a portion of the planet's surface, which is then forced to jet along the desired vector at escape velocity. ETA: That would mean the removal of three objects from the list of planets of Sol.

I think it will only be a few hundred years after FAI before interplanetary travel requires routine 'take your shoes off' type of screening.

Comment author: [deleted] 27 September 2012 06:31:10PM 6 points [-]

We'll still have shoes? And terrorists? I'm disappointed in advance.

Comment author: Decius 27 September 2012 06:44:45PM 0 points [-]

And even the right and ability (if we currently have it) to make choices, and some privacy!

Comment author: MixedNuts 27 September 2012 06:56:12PM 2 points [-]

IMHO you're being provincial. Your intuitions for interplanetary travel come directly from flying in the US; if you were used to saner policies you'd make different predictions. (If you're not from North America, I am very confused.)

Comment author: Dolores1984 27 September 2012 07:51:49PM 5 points [-]

Your idea of provincialism is provincial. The idea of shipping tinned apes around the solar system is the true failure of vision here, never mind the bag-check procedures.

Comment author: Decius 28 September 2012 03:31:22AM 1 point [-]

How quickly do you think humans will give up commuting?

Comment author: Eugine_Nier 28 September 2012 01:59:51AM 1 point [-]

Not thinking very ambitiously, I see.

Comment author: Decius 28 September 2012 03:25:38AM 1 point [-]

That's on the five-millennium plan.

Comment author: TheOtherDave 27 September 2012 09:06:48PM 0 points [-]

ETA: That would mean the removal of three objects from the list of planets of Sol.

Do distinct planets necessarily have distinct orbits?

Comment author: Vaniver 27 September 2012 09:27:58PM 1 point [-]

According to the modern definition, yes.

Comment author: TheOtherDave 27 September 2012 09:46:43PM 0 points [-]

Ah! I had read the wiki article on planets, which said "and has cleared its neighbouring region of planetesimals," and didn't bother to look up primary sources. I should know better. Thanks!

Comment author: [deleted] 27 September 2012 07:55:09PM 0 points [-]

Why would you put them into an inherently dynamically-unstable configuration, position-corrected by a massive kludge? I mean, what's in it for the AI?

Comment author: Decius 28 September 2012 03:08:59AM 1 point [-]

How about a dynamically stable one?

Oh, and roughly ten to twenty times the total available living space for humans, at an order-of-magnitude guess.

Comment author: TimS 27 September 2012 09:05:44PM 1 point [-]

If the AI is Friendly? The enhancement of humanity's utility/happiness/wealth - I assume terraforming is a lot easier if planets are near the middle of the water zone.

Comment author: [deleted] 27 September 2012 10:20:15PM 1 point [-]

We don't know what it takes to terraform a world -- it's easy to go "well, it needs more water and air for starters," but that conceals an awful lot of complexity. Humans (populations of them, at least) can't live just anywhere. We don't even have a really good, working definition of what the "habitability" of a planet is, in a way that's more specific than "I knows it when I sees it." Most of the Earth requires direct cultural adaptation to be truly livable. There's no such thing as humans who don't use culture and technology to cope with the challenges posed by their environment.

Anyway, my point is more that your prediction suggests some cached premises: why should FAI do that particular thing? Why is that a more likely outcome than any of the myriad other possibilities?

Comment author: Decius 26 September 2012 05:48:45PM 0 points [-]

So, we can construct an argument that CFAR would rise in relative importance over SIAI if we see strong evidence the world as we know it will end within 100 years, and an argument with the same conclusion if we see strong evidence that the world as we know it will last for at least 100 years.

There is something wrong.