
Comments

ema · 10y · 13 · 0

If you get one bitter cucumber, asking for its cause may be a waste of time. But if you get a lot of bitter cucumbers, spending some time on finding and removing the cause might give net positive utility.

ema · 11y · 6 · 0

The subset of people who are Anki users and members of the competitive conspiracy might be interested in the Anki high-score-list add-on I wrote: Ankichallenge
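
For the curious, here is a minimal sketch of how such an add-on can count a user's reviews. It assumes Anki's current Python add-on API (the gui_hooks hook and the revlog query are real Anki APIs); it is an illustration only, not the Ankichallenge source, and the score upload is left as a stub.

```python
# Minimal sketch of a review-counting add-on, assuming Anki's modern
# add-on API (aqt.gui_hooks). Illustration only, not the Ankichallenge source.
from aqt import mw, gui_hooks


def count_total_reviews() -> int:
    # The revlog table holds one row per completed review.
    return mw.col.db.scalar("select count() from revlog")


def on_card_answered(reviewer, card, ease) -> None:
    total = count_total_reviews()
    # A real high-score add-on would submit `total` to a shared server here.
    print(f"Total reviews: {total}")


gui_hooks.reviewer_did_answer_card.append(on_card_answered)
```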

ema · 11y · 10 · 0

According to their site, Jaan Tallinn is not the CEO but the chairman of the board; Zvi Mowshowitz is the CEO.

ema · 11y · 0 · 0

I go without shaving my legs, I don't mind wearing stained clothing, I'll happily sit on wet grass

There are communities where this is high-status behavior. But I presume you would have considered this if you belonged to such a community.

ema · 11y · 0 · 0

Paul Graham's writings on programming, and learning Haskell, leveled me up, although I suspect you are already at that level.

ema · 11y · 0 · 0

Simulated paperclips.

Now we get to the question of how detailed the paperclips have to be for the paperclipper to care. I expect the paperclipper to care only when the paperclips are simulated individually, and we can't simulate 3^^^^^3 paperclips individually.
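
For scale: the carets are Knuth's up-arrow notation, where each additional arrow iterates the previous operator. A minimal sketch of the recursion shows how quickly it explodes:

```python
# Knuth's up-arrow notation: a↑b is a**b, and each extra arrow iterates
# the previous operator. Even tiny inputs exceed any physical computer.
def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b (n arrows); feasible only for very small inputs."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))


print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 3^27 = 7625597484987
# up_arrow(3, 3, 3) is already a tower of 7625597484987 threes, far beyond
# anything simulable -- let alone 3^^^^^3 individually simulated paperclips.
```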

I see no reason to think any work of fiction can lead to such a distortion of reality.

I see no reason to think works of fiction that lead to such a distortion of reality are impossible.

ema · 11y · 1 · 0

Which is a good thing, because we really do have such powers and we really don't value paperclips.

Our universe does not have enough atoms or energy to destroy 3^^^^^3 paperclips.
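
A rough logarithmic comparison makes the point concrete (the 10^80 atom count is the standard rough estimate for the observable universe, not a figure from the original thread):

```python
# Back-of-the-envelope comparison in logarithms, since the numbers
# themselves cannot be written down. 1e80 atoms is a standard rough
# estimate for the observable universe.
from math import log10

log10_atoms = 80                        # observable universe: ~10^80 atoms
log10_3_uparrow2_4 = 3**27 * log10(3)   # 3^^4 = 3^(3^27)

print(f"log10(3^^4)  ~ {log10_3_uparrow2_4:.2e}")  # ~3.64e12
print(f"log10(atoms) ~ {log10_atoms}")             # 80
# Even 3^^4 has trillions of digits, while the atom count has 81;
# 3^^^^^3 is incomparably larger still.
```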

... were you seriously that confused or are you extrapolating to a "supercharged" novel?

I am extrapolating.

I somehow doubt there would be a single, full-time guard.

Groups of people are not that much harder to manipulate than a single person.

ema · 11y · 1 · 0

Because we have magical powers from outside the matrix [...].

The AI is vastly smarter than we are and can communicate with us. So it asks us questions that sound innocent to us, but from the answers it can derive a fairly accurate map of what it looks like outside the matrix.

It would have to argue that destroying humanity and replacing it with paperclips was a good thing.

The goal of the AI is to have the guard execute the code that would let the AI access the outside world. Arguing with us could be one way to achieve this goal, although I agree it sounds like an unlikely way to succeed. Another possible way would be to write a novel that is so interesting that the guard doesn't put it down, and that leaves him in such a confused state that he types in the code, thinking he is saving princess Foo from the evil lord Bar.

A super-smart AI that wants to reach this goal very badly will likely come up with a whole bunch of other possible ways, some of which I would never consider even if I spent the next four decades thinking about it.

That sounds like more a side effect of reading the same thing "nonstop for 24 hours" than a property of the book [...]

Yes. I am sure any other well-written book read for 24 hours would have a similar effect. I think it is likely that a potential guard is at most two orders of magnitude less vulnerable to such things than I was at that time. That's not enough against an AI that has six orders of magnitude more optimization power.

ema · 11y · 5 · 0

Why would it believe us that we are able to destroy 3^^^^^3 paperclips?

"arguing" is to narrow a word for describing the possibilities the AI has. For example it could manipulate us emotionally. It could write us a novel that leaves us in a very irrational state and then give us a bogus, but effective on us, argument for why we should let it out.

I once read the fifth Harry Potter book nonstop for 24 hours, and for a couple of hours afterwards I had difficulty distinguishing between myself and Harry Potter. It seems likely that an author who is a million times smarter than Rowling, and who has this as an explicit goal, could write a novel that leaves me with far bigger misconceptions.

ema · 11y · 7 · 0

Could it be that you are confusing the complexity of an agent's utility function with its optimization power? A superintelligent paperclipper has a simple utility function, but it would have no problem reasoning about humans in enough detail to find out what it has to say to get the guard to let it out of the box.
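
A toy sketch of the distinction (illustrative only, with hypothetical names): the utility function below is one line, while all of the agent's sophistication lives in the optimizer that gets pointed at it.

```python
# Toy illustration: utility-function complexity and optimization power
# are independent. The utility function is one line; everything the agent
# is "good at" lives in the search process that maximizes it.
from dataclasses import dataclass


@dataclass
class WorldState:
    paperclips: int


def utility(state: WorldState) -> int:
    # The paperclipper's entire utility function.
    return state.paperclips


def optimize(candidates: list[WorldState]) -> WorldState:
    # Stand-in for the optimizer; a superintelligence differs from this
    # brute-force max only in how powerfully it searches, not in the
    # complexity of the objective it is handed.
    return max(candidates, key=utility)


print(optimize([WorldState(3), WorldState(10**6), WorldState(42)]))
# WorldState(paperclips=1000000)
```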
