What do "consciousness", "sentient", and "person" mean at all? It seems to me that all these concepts (at least partially) refer to the Ultimate Power, that smaller, imperfect echo of the universe. We've given our computers all the Powers except this one: they can see, hear, and communicate, but still...
To understand my words, you must have a model of me, in addition to a model of our surroundings. Not just an abstract mathematical one, but something which includes what I'm thinking right now. (Why should we call something...
Doug S.: if it were 20 lines of Lisp... it isn't, see http://xkcd.com/224/ :)
Furthermore... it seems to me that an FAI which creates a nice world for us needs the whole human value system AND its coherent extrapolation. And knowing how complicated the human value system is, I'm not sure we can accomplish even the former task. So what about creating a "safety net" AI instead? Let's upload everyone who is dying or suffering too much, create advanced tools for us to use, but otherwise preserve everything until we come up with a better solution. This would fit into 20 lines; "be nice" wouldn't.
That looks so... bleak. (But sadly, it sounds all too true.) So I ask too: what should we do next? Hack AI and... become "death, destroyer of worlds"? Or think about FAI without doing anything specific? And do that not just out of the "just for fun" curiosity which is needed (or so it seems) for every big scientific discovery. (Or is it just me who thinks about it that way?)
Anyway... Do we have any information about what the human brain is capable of without additional downloaded "software"? (Or has the co-evolution of the brain and the "software" played such an important role that certain parts of it need some "drivers" to be useful at all?)
Programmers, too, are supposed to search the space of Turing machines, which seems really hard: programming in Brainfuck is hard. All the software written in higher-level languages consists of points in a mere subspace... If optimizing within this subspace has proven so effective, I don't think we have a reason to worry that some incompressible subspace contains the only working solution to our problems, namely more intelligent AI designs.
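To make "searching the raw space" concrete, here is a minimal sketch (a toy Brainfuck interpreter; the helper names and the target are invented for illustration, and `,`/input is treated as a no-op). The point is the count: there are 8^n syntactically possible programs of length n, so brute force only ever finds trivia, which is why we optimize in the structured subspace instead.

```python
import itertools

ALPHABET = '+-<>[].,'  # the full Brainfuck instruction set

def run_bf(code, max_steps=500):
    """Minimal Brainfuck interpreter; returns output, or None on error/timeout."""
    jumps, stack = {}, []
    for i, c in enumerate(code):          # pre-match the brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                return None               # unbalanced ']'
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None                       # unbalanced '['
    tape, ptr, pc, out, steps = [0] * 32, 0, 0, [], 0
    while pc < len(code) and steps < max_steps:
        c, steps = code[pc], steps + 1
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr = (ptr + 1) % len(tape)
        elif c == '<': ptr = (ptr - 1) % len(tape)
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return ''.join(out) if pc >= len(code) else None  # None if it timed out

# Brute-force search of the raw space: 8**n candidates at length n.
target = chr(3)  # a deliberately trivial target; '+++.' appears at n = 4
for n in range(1, 5):
    print(f'length {n}: up to {8 ** n} programs to scan')
    for prog in map(''.join, itertools.product(ALPHABET, repeat=n)):
        if run_bf(prog) == target:
            print('  found:', prog)
            break
# For any target as "hard" as even a single ASCII letter, 8**n already
# makes this hopeless.
```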
"looking for reflective equilibria of your current inconsistent and unknowledgeable self; something along the lines of 'What would you ask me to do if you knew what I know and thought as fast as I do?'"
We're sufficiently more intelligent than monkeys to do that reasoning... so humanity's goal (as the advanced intelligence created by monkeys a few million years ago to get to the Singularity) should be to use all the knowledge gained to tile the universe with bananas and forests, etc.
We don't have the right to say, "if monkeys were more in...
Unknown, on the fourth: yes, even our highest-level desires change over time, but not because we want them to change. I think it's the third one that is false instead: doing what you don't want to do is a flaw in the integrity of the cognitive system, a result of the fact that we can't reprogram our lower-level desires; but what desire could drive us to reprogram our highest-level ones?
There is a subsystem in our brains called "conscience". We learn what is right and what is wrong in our early years, perhaps with certain priors ("causing harm to others is bad"). These things can also change over time (slowly!) within a person, for example if the context of the feelings dramatically changes (oops, there is no God).
So agreeing with Subhan, I think we just do what we "want", maximizing the good feelings generated by our decisions. We ("we" = the optimization process trying to accomplish that) don't have acce...
I think the moral is that you shouldn't try to write software you don't have the hardware to run, not even if the code could run itself by emulating the hardware. A rock runs on physics; Euclid's rules don't. We have morality running on our brains, and... isn't FAI about porting it to physics?
So shouldn't we distinguish between the symbols physics::dynamic and human_brain::dynamic? (In a way, me reading the word "dynamic" uses more computing power than any Java applet running on current computers could...)
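A toy illustration of that namespace metaphor (everything here, class names and the stub "association" table alike, is invented for the sketch): the same symbol, `dynamic`, bound to entirely different computations depending on which system is running it.

```python
# physics::dynamic -- what the universe computes: a bare update rule.
class physics:
    @staticmethod
    def dynamic(state, dt):
        position, velocity = state
        return (position + velocity * dt, velocity)

# human_brain::dynamic -- what a reader computes: a lookup in a huge,
# learned web of associations (represented here by a tiny stub dictionary).
class human_brain:
    ASSOCIATIONS = {'dynamic': ['changing', 'energetic', 'Java applets...']}

    @staticmethod
    def dynamic(word):
        return human_brain.ASSOCIATIONS.get(word, ['?'])

print(physics.dynamic((0.0, 1.0), 0.5))  # (0.5, 1.0)
print(human_brain.dynamic('dynamic'))    # a bundle of learned associations
```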
Well... I liked the video, especially watching how all the concepts mentioned on OB before work in... real life. But showing how you should think to be effective (which Eliezer is writing about on OB) is a different goal from persuading people that the Singularity is not just another dull pseudo-religion. No, they haven't read OB, and they won't even have a reason to if they are told "you won't understand all this all of a sudden; see inferential distances, which is a concept I also can't explain right now". To get through their spam filter, we'll need ...
Eliezer, does this whole theory make us anticipate anything different after thinking about it? For example, after I upload, will I (personally) feel anything, or will only death-like dark nothingness come?
I think I did find such a thing, involving copying yourself in parts of varying size. (Well, it leads to a contradiction, by the way, but maybe that's why it's even more worthwhile to talk about.)
We have that "special" feeling: we are distinct beings from all the others, including zombie twins. I think we tend to use only one word for two different concepts, which causes a lot of confusion... Namely: 1) the ability of intelligent physical systems to reflect on themselves, imagine what we think, or whatever makes us think that whoever we are talking to is "conscious"; 2) that special feeling that somebody is listening in there. AGI research tries to solve the first problem, Chalmers the second.
So let's try to create zombies...
athmwiji: if I understood correctly, you said that the concept of the physical world arises from our subjective experiences, and even if we explain that consistently, there still remain subjective experiences which we can't. We could, for example, imagine a simulated world in which everyone has a silicon-based brain, including, at first sight, you, while in the real world you're still a human with a traditional flesh-based brain. There would then be no physics you could use to explain your headache with in-world mechanisms.
But without assuming that you'r...
Unknown: see Dennett's Kinds of Minds; he has a fairly good theory of what consciousness is. (In short: it's the capability to reflect on one's own thoughts, and so use them as tools.)
At the current state of science and AI, this sounds like a difficult (and a bit mysterious) question. For the hunter-gatherers, "what makes your hand move?" was just as difficult a question (or even more so). (The alternative explanation, "there is a God who began all movement, etc.", is still popular nowadays...)
Tiiba: an algorithm is a model in o...
If you personally did the astoundingly complex science and engineering to build the replicator, drinking that Earl Grey tea would be a lot more satisfying.
One of the fundamental differences between technology and magic is that two engineers do twice as much work as one, while a more powerful sorcerer gets farther than ten less powerful ones. It matters more how good you are than how many of you there are.
What NBA players do looks similar in quality to what you did with your friends at home, because even if you play well, you five can't put ...
Eliezer, isn't reading a good fantasy story like being transported into another world?
Jed Harris: I agree... Our world seems to have the rule "you are not significant". You can't design and build an airplane in your backyard; no one can. Even if you've got enough money, you haven't got enough time for that. In magical worlds (including Star Trek, Asimov, etc.), that sort of thing seems to be normal. (And I've never read about a committee which coordinates the work of hundreds of sorcerers who create new spells 8 hours a day...)
rfriel: Yes, we could bu...
You can't design and build an airplane in your backyard, no one can.
But that's exactly how it did happen! If magic was possible in 1903, then surely it is possible now.
I refuse to accept your premise that it is impossible to have enough time and/or money to pursue one's dreams; indeed, I challenge it. I personally have a low-income job, and also a small, old, used sailboat that I'm trying to renovate and make seaworthy again, with the hope of one day sailing far and exploring the world. I know this is possible, for my parents did it, and brought me and my brother along 10 years ago, when I was 12.
In what category does "the starship from book X" fit?
Definitely not in the "real, explainable, playing by the rules of our world" category. We can't observe its inner workings more closely, although in the world of the book everything seems to be explained. (They know how it works; we don't.)
But also not in the "doesn't exist, is not worth caring about" category: we know that it doesn't exist in the real world even before reading the full book, but it is nevertheless interesting and worth reading about.
I personally would be less cur...
"If we cannot learn to take joy in the merely real, our lives will be empty indeed."
It's true... but... why do we read sci-fi books, then? Why should we? I don't think that after reading a novel about intelligent, faster-than-light starships, the bus stopping at the nearby bus stop will be as interesting as it used to be when we watched it on the way to kindergarten... Or do you think it is? (Without imagining starships in place of buses, of course.)
So what non-existing things should we imagine to be rational (= to win), and how? I hope there will be some words about that in tomorrow's post, too...
Psy-Kosh: Maybe I really did try to approach the meaning of the question from the direction of subjective experience. But I think that the concept of "existence" includes some observer who can decide whether the thing we're talking about really exists or not, given his/her own stable existence.
Maybe that's why the question can't be easily answered (and maybe has no answer at all): the concept of "world" includes us as well. So if we want to predict something about the existence of the world (that is what the word "...
Psy-Kosh: let's Taboo "exist", then... What exactly does it mean? For me, it's something like "I have some experiences whose cause is best modeled by imagining some discrete object in the outer world". The existence or non-existence of something affects what I will feel next.
Some further expansions: "why": how can I predict one experience from another? "world": all the experiences we have? (Modeled as a discrete object... But I can't really imagine what can be modeled by the fact that there is no world.)
So the questi...
@Roko: The visual cortex isn't the only thing we use. Other parts of the brain probably "cache" some of the insights gained by visualizing things, by trying or imagining movements, etc., and also common sentences, so we can use these areas for other things we've never seen before. These cached things are our concepts, I think.
You're right, I won't visualize every part of the thought "technology advances exponentially because technology feeds back positively on itself". But I've seen a lot of exponential functions in math classes, plotted ...
Are words really just pointers? If you want to refer to objects which you've visualized, they indeed are. But people also do some peculiar "arithmetic" with words, forming sentences, which has nothing to do with meanings.
For example, when I'm sleepy (in a half-sleeping state), I sometimes notice whole sentence structures running through my head without the words filled in, but I know where the sentences begin and end, and how they are connected. Even specific words show up from time to time, but the whole stream makes no sense at all. But if you do...
Unknown: What do we mean by "chance"? That it has a very small prior probability... The evidence is given: the two sequences are similar. We can also assume that the evolution hypothesis has a higher prior probability than the chance of getting that sequence. These insights were all included in the post, I think. So applying Bayes' theorem, we get that the evolution version has a much higher posterior probability, so we don't have to show that separately.
There are a lot of events which have a priori probabilities in that order of magnitud...
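A minimal worked version of that Bayes calculation (all the numbers below are made up purely for illustration; only their relative sizes matter):

```python
# H1 = "the sequences are similar because of common descent (evolution)"
# H2 = "the sequences are similar by sheer chance"

p_h1 = 1e-6           # hypothetical prior for evolution: small, but...
p_h2 = 1e-12          # ...far larger than the prior for a chance match
p_e_given_h1 = 0.9    # evolution makes the observed similarity likely
p_e_given_h2 = 1.0    # the chance hypothesis trivially predicts its own match

# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), with P(E) as the normalizer.
p_e = p_e_given_h1 * p_h1 + p_e_given_h2 * p_h2
posterior_h1 = p_e_given_h1 * p_h1 / p_e
posterior_h2 = p_e_given_h2 * p_h2 / p_e

print(f"P(evolution | similarity) = {posterior_h1:.7f}")
print(f"P(chance    | similarity) = {posterior_h2:.7f}")
# The posterior ratio is the prior ratio times the likelihood ratio,
# so here evolution wins by roughly six orders of magnitude without
# any separate argument needed.
```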
So "good" creatures have a mechanism which simulates the thoughts and feelings of others, making it have similar thoughts and feelings, whether they are pleasant or bad. (Well, we have a "but this is the Enemy" mode, some others could have a "but now it's time to begin making paperclips at last" mode...)
For me, feeling the same seems to be much more important. (See dogs, infants...) So thinking in AI terms, there must be a coupling between the creature's utility function and ours. It wants us to be happy in order to be happy i...
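A toy sketch of that coupling (every name and number here is invented for illustration): the creature's utility function includes a weighted copy of ours, so hurting us directly lowers its own score.

```python
def human_utility(state: dict) -> float:
    # hypothetical stand-in for the (vastly more complicated) human value system
    return state.get("human_happiness", 0.0)

def agent_utility(state: dict, empathy_weight: float) -> float:
    own_term = state.get("paperclips", 0.0)  # the creature's "own" goal
    # the coupling: the agent's utility contains a copy of ours
    return own_term + empathy_weight * human_utility(state)

# With empathy_weight = 0 the agent happily trades our happiness for
# paperclips; with a large weight, it cannot profit from hurting us.
harmful_trade = {"paperclips": 5.0, "human_happiness": -3.0}
for w in (0.0, 10.0):
    print(f"empathy_weight={w}: utility of harmful trade = "
          f"{agent_utility(harmful_trade, w)}")
```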