Comment author: luzr 27 December 2008 08:09:52PM 0 points

"The counter-argument that completely random behavior makes you vulnerable, because predictable agents better enjoy the benefits of social cooperation, just doesn't have the same pull on people's emotions."

BTW, completely deterministic behaviour makes you vulnerable as well. Ask computer security experts.
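
A minimal Python sketch of that point (the token scheme, function names, and seed value are all made up for illustration): if a "random" value is produced by a fully deterministic process from a guessable starting state, an attacker can simply replay the process.

```python
# Hypothetical example: a session token derived deterministically from a
# guessable seed (say, the server's start time) can be reproduced exactly
# by anyone who guesses that seed.
import random

def make_token(seed):
    rng = random.Random(seed)          # deterministic given the seed
    return "".join(str(rng.randint(0, 9)) for _ in range(8))

server_seed = 1230000000               # suppose the attacker can guess this
victim_token = make_token(server_seed)

# The attacker replays the same deterministic process:
attacker_guess = make_token(server_seed)
print(victim_token == attacker_guess)  # True: determinism made it guessable
```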

Somewhat related note: the Linux strong random number generator works by capturing real-world events (think of the user moving the mouse) and hashing them into random numbers that are considered, for all practical purposes, perfect.
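
A toy sketch of that idea (this is a simplification, not the actual Linux kernel algorithm; the event values are invented): hard-to-predict event data is folded into a pool with a cryptographic hash, and output bytes are then derived from the pool.

```python
# Simplified entropy-pool sketch: mix unpredictable real-world event data
# (e.g. mouse/keystroke timings) into a pool via SHA-256, then derive
# output bytes from the pool state.
import hashlib

pool = b""

def mix_event(event_bytes):
    """Fold an unpredictable real-world event into the pool."""
    global pool
    pool = hashlib.sha256(pool + event_bytes).digest()

def extract_random(n):
    """Derive n output bytes from the current pool state."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(pool + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

# Simulated mouse-timing events (in reality these come from hardware interrupts):
for timing in [10452, 10467, 10503, 10561]:
    mix_event(timing.to_bytes(8, "big"))

print(extract_random(16).hex())  # 32 hex characters of pool-derived output
```

The security of the real thing rests on the events being genuinely unpredictable to an attacker, which ties back to the determinism point above.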

Taking or not taking an action may depend on thousands of inputs that cannot be reliably predicted or described (the reason is likely buried deep in the physics: entropy, quantum uncertainty). This, IMO, is the real cause of "free will".

In response to Complex Novelty
Comment author: luzr 20 December 2008 10:43:09AM -1 points

"If this were all the hope the future held, I don't know if I could bring myself to try. Small wonder that people don't sign up for cryonics, if even SF writers think this is the best we can do."

Well, I think the point missed is that you are not FORCED to carve those legs. If you find something else interesting, do it.

In response to High Challenge
Comment author: luzr 19 December 2008 12:26:24PM 3 points

Abigail:

"The "Culture" sequence of novels by Iain M. Banks suggests how people might cope with machines doing all the work."

Exactly, I think the Culture is highly relevant to most topics discussed here. Obviously, it is just a fictional utopia, but I believe it gives a plausible answer to the "unlimited power future".

For reference: http://en.wikipedia.org/wiki/The_Culture

Comment author: luzr 18 December 2008 05:31:07PM 0 points

"Wait for the opponents to catch up a little, stage some nice space battles... close the game window at some point. What if our universe is like that?"

Wow, what a nice, elegant solution to the Fermi paradox :)

Comment author: luzr 18 December 2008 10:41:01AM 0 points

"because you don't actually want to wake up in an incomprehensible world"

Isn't that what all people do each morning anyway?

Comment author: luzr 16 December 2008 08:36:13AM 0 points

"Errr.... luzr, why would I assume that the majority of GAIs that we create will think in a way I define as 'right'?"

It is not about what YOU define as right.

Anyway, considering that Eliezer is an existing self-aware, sentient GI agent with obviously high intelligence, and that he is able to ask such questions despite his original biological programming, I suppose that some other powerful, sentient, self-aware GI should reach the same point. I also *believe* that more general intelligence makes a GI converge to such "right thinking".

What worries me most is building a GAI as a non-sentient utility maximizer. OTOH, I *believe* that a 'non-sentient utility maximizer' is mutually exclusive with a 'learning' strong AGI system; in other words, any system capable of learning and exceeding human intelligence must outgrow non-sentience and utility maximizing. I might be wrong, of course. But the fact that the universe is not paperclipped yet gives me hope...

Comment author: luzr 16 December 2008 05:24:05AM 0 points

Phil:

"If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge."

This is inconsistent. What conflict would really do is provide new information to process ("knowledge").

I guess I can agree with the rest of the post. What IMO is worth pointing out is that most pleasures, hormones and instincts excluded, are about processing 'interesting' information.

I guess, somewhere deep in all sentient beings, "interesting information" is the ultimate joy. This has dire implications for any strong AGI.

I mean, the real pleasure for an AGI has to be about acquiring new information patterns. Wouldn't it be a little bit stupid to paperclip the solar system in that case?

Comment author: luzr 16 December 2008 05:12:18AM 0 points

"But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it."

"If you invoke the unlimited power to create a quadrillion people, then why not a quadrillion?"

"Say, the programming team has cracked the "hard problem of conscious experience" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself."

"What is individually a life worth living?"

Really, isn't the ultimate answer to the whole FAI issue encoded there?

IMO, the most important thing about AI is to make sure IT IS SENTIENT. Then, with very high probability, it has to consider the very same questions suggested here.

(And to make sure it does, make more of them and make them diverse. The majority will likely "think right" and suppress the rest.)

Comment author: luzr 12 December 2008 06:15:00PM -1 points

"real world is deterministic on the most fundamental level"

Is it?

http://en.wikipedia.org/wiki/Determinism#Determinism.2C_quantum_mechanics.2C_and_classical_physics

Comment author: luzr 11 December 2008 11:07:06PM 1 point

Tim:

Well, as an off-topic recourse, I see only some engineering problems cited in your "Against Cyborgs" essay as a counterargument. Anyway, let me say that, in my book:

"miniaturizing and refining cell phones, video displays, and other devices that feed our senses. A global-positioning-system brain implant to guide you to your destination would seem seductive only if you could not buy a miniature ear speaker to whisper you directions. Not only could you stow away this and other such gear when you wanted a break, you could upgrade without brain surgery."

is pretty much equivalent to what I had in mind with cyborging. Brain surgery is not the point. I guess it is pretty obvious even today that to read thoughts, you will not need any surgery at all. And if the information is fed back into my glasses, that is OK with me.

Still, the ability to just "think" the code (yep, I am a programmer), then see the whole procedure displayed before my eyes, already refactored and tested (via weak AI augmentation), sounds like a nice productivity booster. In fact, I believe that if thinking code is easy, one could, with the help of some nice programming language, learn to use coding to solve many more problems in normal life situations, gradually building a personal library of routines..... :)
