"we could imagine that "sexiness" starts by eating an Admirer"
Harsh, but fair.
"we could imagine that "sexiness" starts by eating an Admirer"
Harsh, but fair.
Virge is mixing up instrumental and terminal values. No biscuit.
An AI could screw us up just by giving bad advice. We'd be likely to trust it, because it's smart and we're too lazy to think for ourselves. A modern GPS receiver can make you drive into a lake. An evil AI could ruin companies, start wars, or create an evil robot without lifting a finger.
Besides, it's more fun to create FAI and let it do what it wants than to build Skynet and then try to confine it forever. You'll still have only one chance to test it, whenever you decide to do that.
I seem to be unable to view the referenced comment.
Hmm, no replies after all this time?
A thought occurred to me: people who are offended by the idea that a mere machine can think simply might not be imagining the right machine. They imagine maybe a hundred neurons, each extending 10-15 synapses to the others. And then they can't make head or tail of even that, because it's already too big. Scope insensitivity, in other words.
"So I can imagine another math in which 2+2=5 is not obviously false, but needs a long proof and complicated equations..."
So, from the fact that another mind might take a long time to understand integer operations, you conclude that it has "another math"? And what does that mean for algorithms?
If an intelligence is general, it will be able to, in time, understand any concept that can be understood by any other general or narrow intelligence. And then use it to create an algorithm. Or be conquered.
Latanius:
"Tiiba: an algorithm is a model in our mind to describe the similarities of those physical systems implementing it. Our mathematics is the way _we_ understand the world... I don't think the Martians with four visual cortexes would have the same math, or would be capable of understanding the same algorithms... So algorithms aren't fundamental, either."
One or more of us is confused. Are you saying that a Martian with four visual cortices would be able to compress any file? Would add two and two and get five?
They can try, sure, but it won't work.
Let's say I have a utility function and a finite map from actions to utilities. (Actions are things like moving a muscle or writing a bit to memory, so there's a finite number.)
One day, the utility of all actions becomes the same. What do I do? Well, unlike Asimov's robots, I won't self-destructively try to do everything at once. I'll just pick an action randomly.
The result is that I move in random ways and mumble gibberish. Although this is perfectly voluntary, it bears an uncanny resemblance to a seizure.
Regardless of what else is in a machine with such a utility function, it will never surpass the standard of intelligence set by jellyfish.
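The scenario above can be sketched in a few lines. This is a hypothetical illustration, not anything from the original discussion: `choose_action` and the action names are made up, and the tie-breaking rule (uniform random choice among maximal-utility actions) is the one described in the comment.

```python
import random

def choose_action(utilities):
    """Pick a highest-utility action; break ties uniformly at random.

    `utilities` is the finite map from actions to utilities described
    in the comment above.
    """
    best = max(utilities.values())
    candidates = [a for a, u in utilities.items() if u == best]
    return random.choice(candidates)

# Normal case: one action clearly dominates, so it is always chosen.
assert choose_action({"move": 1.0, "speak": 0.5}) == "move"

# Degenerate case: every utility is equal, so every action is a
# candidate and behavior becomes uniformly random -- the "seizure".
flat = {"move": 0.0, "speak": 0.0, "halt": 0.0}
picks = {choose_action(flat) for _ in range(1000)}
```

Over 1000 trials the degenerate agent will, with overwhelming probability, have taken every available action at least once, which is the flailing behavior the comment describes.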