luzr

Eliezer:

I am starting to be sort of frightened by your premises - especially considering that there is a non-zero probability of creating some nonsentient singleton that tries to realize your values.

Before going any further, I STRONGLY suggest that you think AGAIN about what might be interesting in carving wooden legs.

Yes, I like to SEE MOVIES with strong main characters going through hell. But I would not want any of that for myself.

It does not matter that an AI can do everything better than me. Right now, I am not the best at carving wood either. But working with wood is still fun. So is swimming, skiing, playing chess (despite the fact that a computer can beat you every time), caring about animals, etc.

I do not need to do dangerous things to be happy. I am definitely sure about that.

luzr

Eliezer:

"Narnia as a simplified case where the problem is especially stark."

I believe there are at least two significant differences:

  • Aslan was not created by humans; he does not represent the "story of intelligence" (quite the contrary: lesser intelligences were created by Aslan, as long as you interpret him as God).

  • There is only a single Aslan with a single predetermined "goal", while there are millions of Culture Minds with no single "goal".

(Actually, the second point is what I dislike so much about the idea of a singleton: it can turn into something like a benevolent but oppressive God too easily. Aslan IS the Narnia singleton.)

luzr

David:

"asks a Mind whether it could create symphonies as beautiful as it and how hard it would be"

On a somewhat related note, there are still human chess players and competitions...

luzr

Eliezer:

This is really off-topic, and I do not have a copy of Consider Phlebas at hand right now, but:

http://en.wikipedia.org/wiki/Dra%27Azon

Even if Banks did not mention 'sublimed' in the first novel, the concept exactly fits the Dra'Azon.

Besides, the Culture is not really advancing its 'base' technology, but rather rebuilding its infrastructure into a war machine.

luzr

Eliezer (about Sublimation):

"Ramarren, Banks added on that part later, and it renders a lot of the earlier books nonsensical - why didn't the Culture or the Idarans increase their intelligence to win their war, if it was that easy? I refuse to regard Excession as canon; it never happened."

Just a technical (or fandom?) note:

A sublimed civilization is central to the plot of Consider Phlebas (Schar's World, where the Mind escapes, is "protected" by a sublimed civilization - that is why direct military action by either the Idirans or the Culture is impossible).

luzr

Julian Morrison:

Or you can turn the issue around once again: you can enjoy spending your time on obsolete skills (like sports, arts, or carving table legs...).

There is no shortage of things to do; there is only a problem with your definition of "worthless".

luzr

"If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?"

Yes, definitely. If nothing else, it means diversity.

"Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?"

I do not care, as long as the story continues.

And yes, I would like to hear the story - which is about the same thing I would get if Minds were prohibited. I will not be the main character of the story anyway, so why should I care?

"Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?"

Grow up how? Does it involve uploading your mind to computronium?

"Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?"

Well, this is the only thing I fear. I would prefer a sentient superintelligence to create the nonsentient utility maximizers, not the other way around. Much less chance of error, IMO.

"If we don't have to do it one way or the other - if we have both options - and if there's no particular need for heroic self-sacrifice - then which do you like?"

As you have said - this is a Big World. I do not think the two options are mutually exclusive. The only truly exclusive option I see is a nonsentient maximizer singleton programmed to prevent sentient AIs and Minds.

"Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with."

Please explain the difference between a Mind created outright and "grown-up humans". Do you insist on biological computronium?

As you have said, we are living in a Big World. That quite likely means there is (or will be) some Culture-like civilisation that we will meet, if things go well.

How do you think we will be able to compete with them, given your "no sentient AIs, only grown-up humans" bias?

Or: say your CEV AI creates a singleton.

Will we be allowed to create the Culture?

What textbooks will be banned?

Will the CEV burn any new textbooks we create, so that nobody is able to stand on other people's shoulders?

luzr

anon: "The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not."

I am quite aware of that. Anyway, using "cheesecake" as the placeholder adds a bias to the whole story.

"Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings."

Indeed. So what? In reality, I am quite interested in what a superintelligence would really consider valuable. But I am pretty sure that a "big cheesecake" is unlikely.

Thinking about it, AFAIK Eliezer considers himself a rationalist. Is not a big part of rationalism about questioning values that are merely consequences of our long history?

luzr

Uhm, maybe it is naive, but if you have a problem that your mind is too weak to decide, and you have a really strong (Friendly) superintelligent GAI, would it not be logical to use the GAI's strong mental processes to resolve the problem?
