What's the meaning of "consciousness", "sentient" and "person" at all? It seems to me that all these concepts (at least partially) refer to the Ultimate Power, the smaller, imperfect echo of the universe. We've given our computers all the Powers except this: they can see, hear, communicate, but still...
To understand my words, you must have a model of me, in addition to the model of our surroundings. Not just an abstract mathematical one, but something which includes what I'm thinking right now. (Why should we call something...
Doug S.: if it were 20 lines of Lisp... it isn't, see http://xkcd.com/224/ :)
Furthermore... it seems to me that a FAI which creates a nice world for us needs the whole human value system AND its coherent extrapolation. And knowing how complicated the human value system is, I'm not sure we can accomplish even the former task. So what about creating a "safety net" AI instead? Upload everyone who is dying or suffering too much, create advanced tools for us to use, but otherwise preserve everything until we come up with a better solution. That would fit into 20 lines; "be nice" wouldn't.
That looks so... dim. (But sadly, it sounds all too true.) So I ask too: what to do next? Hack AI and... become "death, destroyer of worlds"? Or think about FAI without doing anything specific? And doing that driven by something more than the "just for fun" curiosity which, or so it seems, every big scientific discovery needs. (Or is it just me who sees it that way?)
Anyway... Do we have any information about what the human brain is capable of without additional downloaded "software"? (Or has the co-evolution of the brain and the "software" played such an important role that certain parts of it need some "drivers" to be useful at all?)
Programmers are also supposed to search the space of Turing machines, which seems really hard. Programming in Brainfuck is hard. All the software written in higher-level languages occupies a mere subspace... If optimizing within this subspace has proven so effective, I don't think we have a reason to worry about incompressible subspaces containing the only working solutions to our problems, namely more intelligent AI designs.
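The point about tractable subspaces can be illustrated with a toy search (all names here are illustrative, not any real system): brute-forcing a tiny expression grammar, a stand-in for a "high-level subspace", quickly finds a program with a target behavior, where enumerating raw machine encodings at the same budget would be hopeless.

```python
# A toy "high-level subspace": expressions built from x, small constants,
# +, and *. Searching this subspace is tractable; searching raw
# Turing-machine or Brainfuck encodings for the same behavior would not be.
TERMINALS = ["x", "1", "2", "3"]

def exprs(depth):
    """Yield all expressions up to the given nesting depth."""
    if depth == 0:
        for t in TERMINALS:
            yield t
    else:
        yield from exprs(depth - 1)
        for op in ("+", "*"):
            for a in exprs(depth - 1):
                for b in exprs(depth - 1):
                    yield f"({a}{op}{b})"

def search(target, tests, depth=2):
    """Return the first expression matching `target` on all test inputs."""
    for e in exprs(depth):
        if all(eval(e, {"x": x}) == target(x) for x in tests):
            return e
    return None

# Finds some expression equal to 2x+1 on the test inputs.
print(search(lambda x: 2 * x + 1, range(5)))
```

The grammar encodes the "compressible" structure; that is what the search exploits, just as human programmers exploit the structure of their languages.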
Analogy might work better for recognizing things already optimized in design space, especially if they are a product of evolution, with common ancestors (4 legs, looks like a lion, so run, even if it has stripes). And we only started designing complicated stuff a few thousand years ago at most...
"looking for reflective equilibria of your current inconsistent and unknowledgeable self; something along the lines of 'What would you ask me to do if you knew what I know and thought as fast as I do?'"
We're sufficiently more intelligent than monkeys to do that reasoning... so humanity's goal (as the advanced intelligence created by monkeys a few million years ago for getting to the Singularity) should be to use all the knowledge gained to tile the universe with bananas and forests etc.
We don't have the right to say, "if monkeys were more in...
"I think therefore I am"... So there is a little billiard ball in some model which is me, and it has a relatively stable existence in time. Can't you imagine a world in which these concepts simply make no sense? (If you couldn't, just look around, QM, GR...)
Unknown, for the fourth: yes, even the highest-level desires change over time, but not because we want them to change. I think it is the third one that is false instead: doing what you don't want to do is a flaw in the integrity of the cognitive system, a result of the fact that we can't reprogram our lower-level desires; but what desire could drive us to reprogram our highest-level ones?
There is a subsystem in our brains called "conscience". We learn what is right and what is wrong in our early years, perhaps with certain priors ("causing harm to others is bad"). These things can also change over time (slowly!) within a person, for example if the context of the feelings changes dramatically (oops, there is no God).
So agreeing with Subhan, I think we just do what we "want", maximizing the good feelings generated by our decisions. We ("we" = the optimization process trying to accomplish that) don't have acce...
I think the moral is that you shouldn't try to write software for which you don't have the hardware to run on, not even if the code could run itself by emulating the hardware. A rock runs on physics, Euclid's rules don't. We have morality to run on our brains, and... isn't FAI about porting it to physics?
So shouldn't we distinguish between the symbols physics::dynamic and human_brain::dynamic? (In a way, me reading the word "dynamic" uses more computing power than running any Java applet could on current computers...)
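The distinction might be sketched as two namespaces exporting the same symbol (a hypothetical sketch; all names are illustrative):

```python
# The same word "dynamic" names one process in physics and quite another
# in a brain reading it. Keeping them in separate namespaces makes the
# equivocation impossible.
class Physics:
    """The territory: a low-level update rule that simply runs."""
    @staticmethod
    def dynamic(state):
        return state + 1  # stand-in for one tick of physical law

class HumanBrain:
    """The map: reading the word triggers a costly interpretive process."""
    @staticmethod
    def dynamic(word):
        return f"rich concept evoked by {word!r}"

# Same symbol name, entirely different referents:
print(Physics.dynamic(0))
print(HumanBrain.dynamic("dynamic"))
```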
Well... I liked the video, especially watching how all the concepts mentioned on OB before work in... real life. But showing how you should think to be effective (which Eliezer writes about on OB) is a different goal from persuading people that the Singularity is not just another dull pseudo-religion. No, they haven't read OB, and they won't even have a reason to if they are told "you won't understand all this all of a sudden, see inferential distances, which is a concept I also can't explain now". To get through their spam filter, we'll need ...
Eliezer, does this whole theory cause us to anticipate anything different after thinking about it? For example, after I upload, will I (personally) feel anything, or will only death-like dark nothingness come?
I think I did find such a thing, involving copying yourself in parts of varying size. (It leads to a contradiction, by the way, but maybe that makes it even more worthwhile to talk about.)
We have that "special" feeling: we are distinct beings from all the others, including zombie twins. I think we tend to use one word for two different concepts, which causes a lot of confusion... Namely: 1) the ability of intelligent physical systems to reflect on themselves, to imagine what we think, or whatever else makes us think that whoever we are talking to is "conscious"; 2) that special feeling that somebody is listening in there. AGI research tries to solve the first problem, Chalmers the second.
So let's try to create zombies...
athmwiji: if I understood correctly, you said that the concept of the physical world arises from our subjective experiences, and even if we explain it consistently, there still remain subjective experiences which we can't. We could, for example, imagine a simulated world in which everyone has a silicon-based brain, including, at first sight, you, while in the real world you're still a human with a traditional flesh-based brain. There would then be no physics you could use to explain your headache by in-world mechanisms.
But without assuming that you'r...
Unknown: see Dennett's Kinds of Minds; he has a fairly good theory of what consciousness is. (In short: it's the capability to reflect on one's own thoughts, and thereby use them as tools.)
At the current state of science and AI, this sounds like a difficult (and a bit mysterious) question. For the hunter-gatherers, "what makes your hand move" was an equally difficult (or even more difficult) question. (The alternative explanation, "there is a God who began all movement, etc.", is still popular nowadays...)
Tiiba: an algorithm is a model in o...
If you personally did the astoundingly complex science and engineering to build the replicator, drinking that Earl Grey tea would be a lot more satisfying.
One of the fundamental differences between technology and magic is that two engineers do twice as much work as one, while a more powerful sorcerer gets farther than ten less powerful ones. It matters more how good you are than how many of you there are.
What NBA players do looks similar in quality to the thing you did with your friends at home, because even if you play well, you five can't put ...
Eliezer, isn't reading a good fantasy story like being transported into another world?
Jed Harris: I agree... Our world seems to have the rule: "you are not significant". You can't design and build an airplane in your backyard, no one can. Even if you've got enough money, you haven't got enough time for that. In magical worlds (including Star Trek, Asimov, etc) that is what seems to be normal. (And I've never read about a committee which coordinates the work of hundreds of sorcerers, who create new spells 8 hours a day...)
rfriel: Yes, we could bu...
In what category does "the starship from book X" fit?
Definitely not in the "real, explainable, playing by the rules of our world" category. We can't observe its inner workings more closely, although in the world of the book everything seems to be explained. (They know how it works; we don't.)
But also not in the "doesn't exist, not worth caring about" category: we know it doesn't exist in the real world even before reading the full book, but it is nevertheless interesting and worth reading about.
I personally would be less cur...
"If we cannot learn to take joy in the merely real, our lives will be empty indeed."
It's true... but then why do we read sci-fi books? Why should we? I don't think that after reading a novel about intelligent, faster-than-light starships, the bus stopping at the nearby bus stop will be as interesting as it was when we watched it on the way to kindergarten... Or do you think it will be? (Without imagining starships in place of buses, of course.)
So what non-existing things should we imagine to be rational (= to win), and how? I hope there will be some words about that in tomorrow's post, too...
So "good" creatures have a mechanism which simulates the thoughts and feelings of others, making it have similar thoughts and feelings, whether they are pleasant or bad. (Well, we have a "but this is the Enemy" mode, some others could have a "but now it's time to begin making paperclips at last" mode...)
For me, feeling the same seems to be much more important. (See dogs, infants...) So thinking in AI terms, there must be a coupling between the creature's utility function and ours. It wants us to be happy in order to be happy i...
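The coupling idea might be sketched in AI terms as follows (a toy model; the function and weight names are hypothetical, not any proposed FAI design): the agent's utility includes a weighted term for the human's well-being, so making us happy literally makes it "happy".

```python
# A minimal sketch of utility coupling: the agent's reward contains a
# term proportional to the human's happiness.
def coupled_utility(own_reward, human_happiness, empathy_weight=0.8):
    """Agent utility = its own reward plus a share of the human's."""
    return own_reward + empathy_weight * human_happiness

# With coupling, an outcome that pleases the human can beat one with a
# higher raw reward for the agent alone.
selfish = coupled_utility(own_reward=1.0, human_happiness=-1.0)  # 0.2
kind = coupled_utility(own_reward=0.5, human_happiness=1.0)      # 1.3
print(kind > selfish)  # True
```

With `empathy_weight = 0`, the coupling vanishes and we are back to the "but now it's time to begin making paperclips" mode.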