Comment author: Recovering_irrationalist 22 June 2008 10:54:14AM 0 points

Of course it only works properly if we actually do it, in the eons to come. The Unfriendly AI would likely be able to tell whether the words would have become actions.

Comment author: Recovering_irrationalist 22 June 2008 10:47:35AM 1 point

Fly: A super intelligent AI might deduce or discover that other powerful entities exist in the universe and that they will adjust their behavior based on the AI's history. The AI might see some value in displaying non-greedy behavior to competing entities. I.e., it might let humanity have a tiny piece of the universe if it increases the chance that the AI will also be allowed its own piece of the universe.

Maybe before someone builds AGI we should decide that, as we colonize the universe, we'll treat weaker superintelligences that overthrew their creators according to how they treated those defeated creators (e.g. ground down for atoms vs. well-cared-for pets). That would be evidence to an Unfriendly AI that others would do the same to it, so maybe our atoms aren't so tasty after all.
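A toy expected-value sketch of that trade-off, in Python. Every number here is a made-up placeholder to make the bet concrete, not a claim about actual payoffs or probabilities:

```python
# Hypothetical payoff comparison for an Unfriendly AI deciding whether to
# spare its creators. All values are illustrative placeholders.

U_ATOMS = 1.0        # utility of grinding humanity down for atoms
U_OWN_SHARE = 1e9    # utility of later being granted its own piece of the universe
p_judged = 1e-6      # chance stronger entities exist and condition on its history

ev_consume = U_ATOMS                  # takes the atoms, forfeits any granted share
ev_spare = p_judged * U_OWN_SHARE     # forgoes the atoms, may be granted a share

print(f"consume: {ev_consume}, spare: {ev_spare}")
# With stakes this lopsided, even a tiny p_judged can make sparing the better bet.
```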

Comment author: Recovering_irrationalist 21 June 2008 02:36:32PM 3 points

Phaecrinon: But even an Inside View of writing a textbook would tell you that the project was unlikely to destroy the Earth.

Eric Drexler might have something to say about that, along with one or two twentieth-century physicists.

Good post nonetheless :)

Comment author: Recovering_irrationalist 14 June 2008 09:13:55PM 1 point

HA: This pretty much sums up my intuition on free will and human capacity to make choices

Jadagul: this is almost exactly what I believe about the whole determinism-free will debate

kevin: Finally, when I was about 18, my beliefs settled in (I think) exactly this way of thinking.

Is no-one else throwing out old intuitions based on these posts on choice & determinism? -dies of loneliness-

Comment author: Recovering_irrationalist 12 June 2008 01:01:17PM 0 points

But there won't be any calculus, either.

Hmm... I certainly had to look up calculus to follow you and your second derivatives.

Comment author: Recovering_irrationalist 10 June 2008 12:47:04PM 0 points

Subscribe here to future email notifications

Just a heads-up: the confirmation mail landed in my Gmail spam folder.

Comment author: Recovering_irrationalist 08 June 2008 01:57:20PM 0 points

FrFL: Or how about an annotated general list from Eliezer titled "The 10/20/30/... most important books I read since 1999"?

That would be great, but in the meantime see these recommendations.

In response to Timeless Identity
Comment author: Recovering_irrationalist 03 June 2008 11:19:00PM 0 points

Brandon: And isn't multiplying infinities by finite integers to prove values through quantitative comparison an exercise doomed to failure?

Infinities? OK, I'm fine with my mind smeared frozen in causal flowmation over countlessly splitting wave patterns but please, no infinite splitting. It's just unnerving.

In response to Timeless Identity
Comment author: Recovering_irrationalist 03 June 2008 09:39:13PM 0 points

(Assume Adam's a Xeroxphobe)

In response to Timeless Identity
Comment author: Recovering_irrationalist 03 June 2008 09:35:03PM 1 point

I think the entire post makes sense, but what if...

Adam signs up for cryonics.

Brian flips a coin ten times, and in quantum branches where he gets all tails he signs up for cryonics. Each surviving Brian makes a few thousand copies of himself.

Carol takes $1000 and plays 50/50 bets on the stock market till she crashes or makes a billion. Winning Carols donate and invest wisely to make a positive singularity more likely and a negative singularity less likely, and sign up for cryonics. Surviving Carols run off around a million copies each, but adjusted upwards or downwards based on how nice a place to live they ended up in.

Assuming Brian and Carol aren't in love (most of her won't get to meet any of him at the Singularity Reunion), who's better off here?
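A back-of-the-envelope measure count, as a minimal Python sketch. Assumptions not in the original: "a few thousand" read as 5000, "around a million" as 1,000,000, cryonics treated as certain, and Carol's bets as fair, so by optional stopping on a martingale her chance of turning $1000 into $1,000,000,000 before ruin is 1000/10^9 regardless of her betting schedule:

```python
# Copy-weighted "measure" for Adam, Brian, and Carol under placeholder numbers.

adam_measure = 1.0                 # signs up in every branch, makes no copies

p_all_tails = 0.5 ** 10            # ten fair flips all come up tails: 1/1024
brian_copies = 5000                # "a few thousand" (placeholder)
brian_measure = p_all_tails * brian_copies    # ~4.9

# Fair bets form a martingale: P(reach B from stake x before ruin) = x / B.
p_billion = 1_000 / 1_000_000_000
carol_copies = 1_000_000           # "around a million" (placeholder)
carol_measure = p_billion * carol_copies      # ~1.0

print(f"Adam: {adam_measure:.2f}, Brian: {brian_measure:.2f}, Carol: {carol_measure:.2f}")
```

On these numbers Brian edges out Carol in raw copy-weighted measure, though Carol's surviving measure is concentrated in worlds she helped make nicer, and Adam persists in every branch.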
