Comment author: Tiiba3 29 June 2008 06:18:46AM 0 points [-]

Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)

One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.
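A minimal sketch of that tie-breaking rule, assuming the finite action-to-utility map described above (all names here are hypothetical, just for illustration):

```python
import random

def pick_action(utilities):
    """Pick an action of maximal utility; break ties uniformly at random.

    `utilities` is a finite map from actions to their utilities.
    """
    best = max(utilities.values())
    candidates = [a for a, u in utilities.items() if u == best]
    return random.choice(candidates)

# When every action has the same utility, every action is a candidate,
# so the agent flails at random -- the "seizure" behavior:
flat = {"move_arm": 0.0, "write_bit": 0.0, "mumble": 0.0}
action = pick_action(flat)  # any of the three, with equal probability
```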

The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.

Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.

Comment author: Tiiba3 27 June 2008 02:05:36PM 9 points [-]

"we could imagine that "sexiness" starts by eating an Admirer"

Harsh, but fair.

Comment author: Tiiba3 21 June 2008 07:11:55AM 0 points [-]

Julian, I think the box you're not opening is Pandora's box.

Comment author: Tiiba3 20 June 2008 04:59:06PM 0 points [-]

Virge is mixing up instrumental and terminal values. No biscuit.

Comment author: Tiiba3 19 June 2008 06:29:57AM 1 point [-]

An AI could screw us up just by giving bad advice. We'll be likely to trust it, because it's smart and we're too lazy to think. A modern GPS receiver can make you drive into a lake. An evil AI could ruin companies, start wars, or create an evil robot without lifting a finger.

Besides, it's more fun to create FAI and let it do what it wants than to build Skynet and then try to confine it forever. You'll still have only one chance to test it, whenever you decide to do that.

Comment author: Tiiba3 17 June 2008 06:40:56AM 1 point [-]

I seem to be unable to view the referenced comment.

Hmm, no replies after all this time?

Comment author: Tiiba3 01 April 2008 08:03:24PM 6 points [-]

A thought occurred to me: people who are offended by the idea that a mere machine can think simply might not be imagining the right machine. They imagine maybe a hundred neurons, each extending 10-15 synapses to the others. And then they can't make head or tail of even that, because it's already too big. Scope insensitivity, in other words.

In response to Angry Atoms
Comment author: Tiiba3 31 March 2008 07:15:05PM 1 point [-]

"So I can imagine another math in which 2+2=5 is not obviously false, but needs a long proof and complicated equations..."

So, from the fact that another mind might take a long time to understand integer operations, you conclude that it has "another math"? And what does that mean for algorithms?

If an intelligence is general, it will be able to, in time, understand any concept that can be understood by any other general or narrow intelligence. And then use it to create an algorithm. Or be conquered.

In response to Angry Atoms
Comment author: Tiiba3 31 March 2008 11:24:38AM 0 points [-]

Latanius:

"Tiiba: an algorithm is a model in our mind to describe the similarities of those physical systems implementing it. Our mathematics is the way _we_ understand the world... I don't think the Martians with four visual cortexes would have the same math, or would be capable of understanding the same algorithms... So algorithms aren't fundamental, either."

One or more of us is confused. Are you saying that a Martian with four visual cortices would be able to compress any file? Would add two and two and get five?

They can try, sure, but it won't work.
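The compression half of that claim is just the standard counting argument, which no amount of extra visual cortex gets around. A minimal sketch:

```python
# Pigeonhole argument: no lossless compressor shrinks every file.
# There are 2**n bit-strings of length n, but only 2**n - 1 strings
# of any strictly shorter length (lengths 0 through n-1 combined).
n = 8
inputs = 2 ** n                                   # 256 possible inputs
shorter_outputs = sum(2 ** k for k in range(n))   # 255 shorter strings
assert shorter_outputs < inputs
# So any map from length-n inputs to shorter outputs must collide,
# and a colliding map can't be losslessly decompressed.
```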

In response to Angry Atoms
Comment author: Tiiba3 31 March 2008 04:47:35AM 0 points [-]

Please delete my post. I see that Tom said that already.
