Exactly. What are the chances that anyone will even try to restore typical information that is [not securely] deleted today? Close to zero. The chances that anyone will try to restore the average frozen body are close to zero too.

I give you my personal guarantee that post-singularity, I will do all in my power to revive everyone.

"to live" or "to be frozen to death"?

People in comas, even if completely unresponsive, can still be healed with a small amount of technological assistance and a huge amount of biological self-repair (mechanisms that were constructed by evolution discarding countless bodies). What is the difference between that and healing people with a great deal of technology and very little biological assistance, a.k.a. repairing de-animated people in cryogenic suspension? None.

Call me back when a creature has been cryopreserved and then fully restored, and we can use the language of certainty and talk in terms of "believing in the future".

You can do better than that. For example, what if you die, and after X years people are routinely reanimated and live healthy lives at whatever age they wish? You would feel like Mr Silly then; at least, you would if you were alive.

If you wait until you can talk about something "in the language of certainty", then you also advocate ignoring existential risks, since once they happen it is all over. Is that very rational?

If you feel like using your brain, there are ways to get close to "certainty" (defined as the probability of occurrence being above some number between 0 and 1) that some event will occur, without observing it occur. Science is not fast, after all.

I think one way to sum up parts of what Eliezer is talking about, in terms of AGI going FOOM, is as follows:

If you think of Intelligence as Optimization, and we assume you can build an AGI with optimization power near or at human level (anything below would be too weak to affect anything; a human could do a better job), then we can use the following argument to show that AGI does go FOOM.

We already have, as a premise, that human-level optimization power can produce near-human-level artificial intelligence, so simply point the AGI at an interesting optimization problem (itself) and recurse. As long as each improvement made to the AGI yields, on average, more than one further improvement, FOOM will occur.
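A toy sketch of that condition (the numbers and the code are mine, purely illustrative): treat each improvement as uncovering some average number of further improvements, and check whether the queue of pending improvements grows or dies out.

```python
# Toy model, not a claim about real AI development: each improvement
# applied to the AGI uncovers, on average, `ratio` further improvements.
def improvements_after(steps, ratio, pending=1.0):
    total = 0.0
    for _ in range(steps):
        total += pending
        pending *= ratio   # each improvement yields `ratio` new ones
    return total

print(improvements_after(20, ratio=1.2))  # ratio > 1: grows geometrically (FOOM)
print(improvements_after(20, ratio=0.8))  # ratio < 1: the process fizzles out
```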

It should not get stuck at human-level intelligence, since human level is nowhere near the best you can get.

Why wouldn't you point your AGI (using whatever techniques you have available) at itself? I can't think of any reasonable objections that wouldn't also preclude you from building the AGI in the first place.

Of course this means we need human-level artificial general intelligence, but it has to be that to have anywhere near human-level optimization power. I won't bother going over what happens when you have AI that is better than humans at some things but not all; simply look around you right now.

I think there's a post somewhere on this site that makes the reasonable point that "is atheism a religion?" is not an interesting question.

Both Religion's Claim to be Non-Disprovable and Beyond the Reach of God should be useful. If you show that the hypothesis "God(s) do exist" is most likely untrue, then (correct me if I am wrong) the opposite hypothesis "God(s) do NOT exist" is most likely true.
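In probability terms, the step I'm relying on is just that the two hypotheses are exhaustive and mutually exclusive:

$$P(\text{God(s) exist}) + P(\text{God(s) do not exist}) = 1,$$

so driving the first probability close to 0 pushes the second close to 1.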

As long as you don't use the word "faith" in the first hypothesis, I hardly see how atheism needs faith to back it up.

What if it's not too hard? You then risk extremely bad things, like mature molecular nanotechnology in the hands of humans such as the US government (or, at this rate, perhaps more likely the Japanese government), simply because you didn't try.

In the case that, with current human intelligence, we cannot prove beyond doubt that some theory of, err, friendliness is actually friendly, then no harm is done. At minimum, it would result in a great deal of preliminary work on FAI being completed.

When it's obvious that you need a bit more of this 'intelligence' thing to move on, you could either switch to working on IA until a sufficient amount of intelligence enhancement is done and then go back to FAI, or you could keep slogging away at FAI while getting the benefits of IA, which far more people are working towards than FAI.

On this note (as a side point), I see usefulness in slowing down research into potentially dangerous technologies such as molecular nanotechnology. Perhaps then, if you cannot do more work on FAI (you're not smart enough), you could switch careers to work with the Foresight Nanotech Institute (or something else, be imaginative!) to either bring the date of FAI closer or give more 'breathing space' so that FAI can be completed sooner, etc.

I thought the aim was to win, wasn't it? Clearly, what's best for both of them is to cooperate at every step. If the paperclipper is something like what most people here seem to think 'rationality' is, it will defect every time, and thus the humans would also defect, leading to less than the best total utility possible.

However, if you think of the paperclipper as something like us but with different terminal values, surely cooperating is best? It knows, as we do, that defecting gives you more if the other side cooperates, but defecting is not a winning strategy in the long run: cooperate and win, defect and lose. You could try to outguess each other, of course.
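To make the arithmetic concrete, here is a toy tally using a standard (entirely hypothetical) payoff matrix; the exact numbers don't matter, only that mutual cooperation beats mutual defection over many rounds.

```python
# Hypothetical per-round payoffs: (my points, their points).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def totals(rounds, my_move, their_move):
    mine = theirs = 0
    for _ in range(rounds):
        a, b = PAYOFF[(my_move, their_move)]
        mine, theirs = mine + a, theirs + b
    return mine, theirs

print(totals(100, 'C', 'C'))  # (300, 300): both sides end up well off
print(totals(100, 'D', 'D'))  # (100, 100): mutual defection leaves both worse off
```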

I feel that it is a similar problem to Newcomb's Problem, in that you're trying to outguess each other...

I've tried my best to squeeze the biggest graph into an acceptable width, but Dot, the layout engine I use, doesn't seem to squeeze the width past a certain point (because of how it puts nodes into ranks, I believe).

While it looks cool seeing the whole picture, it would be nicer if you didn't need to scroll all over the place. I'll post the code later on if anyone wants to tinker with it (apologies for the mess some of it is in), and any suggestions for changes would be appreciated.
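In the meantime, a minimal sketch of the kind of knobs I mean, via the Python graphviz bindings (this is not my actual graph; the attribute names are standard Graphviz ones, and whether they help depends on the graph, since dot's width is mostly set by its widest rank):

```python
from graphviz import Digraph  # assumes the `graphviz` package plus the Graphviz binaries

g = Digraph(engine='dot')
g.attr(rankdir='TB',       # 'LR' trades width for height
       nodesep='0.2',      # horizontal gap between nodes in the same rank
       ranksep='0.3',      # vertical gap between ranks
       ratio='compress',   # compress the layout toward the target size below
       size='8,40')        # target width,height in inches
g.edge('A', 'B')           # placeholder edges, not the real data
g.edge('A', 'C')
print(g.source)            # or g.render('graph', format='svg')
```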

I'll have a new change coming up in the next day or so too, so look forward to it; it should be real neat.