Comment author: timtyler 02 March 2010 10:24:38AM *  2 points [-]

These "Whole Brain Emulation" discussions are surreal for me. I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.

The efforts in that direction I have witnessed so far seem feeble and difficult to take seriously - while the case that engineered machine intelligence will come first seems very powerful to me.

Without such a case, why spend so much time and energy on a discussion of what-if?

Comment author: BenRayfield 03 March 2010 04:32:44PM *  0 points [-]

Why do you consider the possibility of smarter-than-Human AI at all? The difference between the AI we have now and that is bigger than the difference between the 2 technologies you are comparing.

Comment author: JamesAndrix 01 March 2010 06:35:18AM 4 points [-]

I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

That's exactly the high awareness I was talking about, and most people don't have it. I wouldn't be surprised if most people here failed at it, if it presented itself in their real lives.

I mean, are you saying you wouldn't save the burning orphans?

We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.

We have checks and balances of political power, but that works between entities on roughly equal political footing, and doesn't do much for those outside of that process. We can collectively use physical power to control some criminals who abuse their own limited powers. But we don't have anything to deal with supervillains.

There is fundamentally no check on violence except more violence, and 10,000 accelerated uploads could quickly become able to win a war against the rest of the world.

Comment author: BenRayfield 03 March 2010 04:29:03PM 0 points [-]

It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks.

I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

That's exactly the high awareness I was talking about, and most people don't have it. I wouldn't be surprised if most people here failed at it, if it presented itself in their real lives.

Most people would not act like a Friendly AI; therefore, "Whole Brain Emulation" only leads to "fewer risks" if you know exactly which brains to emulate and have the ability to choose which brain(s).

If whole brain emulation (for your specific brain) is expensive, the emulated brain might end up being that of a person who starts wars and steals from other countries so he can get rich.

Most people prefer that 999 people from their country live even at the cost of 1000 people of another country dying, given no other known differences between those 1999 people. Also, unlike a "Friendly AI", their choices are not consistent. Most people will leave the choice at whatever was going to happen if they did not choose, even if they know there are no other consequences (like jail) from choosing. If the 1000 people were going to die, unknown to any of them, to save the 999, then most people would think "It's none of my business, maybe god wants it to be that way" and let the extra 1 person die. A "Friendly AI" would maximize lives saved if nothing else is known about all those people.

There are many examples of why most people are not close to acting like a "Friendly AI", even if we removed all the bad influences on them. We should build software to be a "Friendly AI" instead of emulating brains, and only emulate brains for other reasons, except maybe for the few brains that think like a "Friendly AI". It's probably safer to do it completely in software.

Comment author: BenRayfield 26 January 2010 02:01:33AM *  0 points [-]

My Conclusions: It seems there is Far Near and Near Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode... then you're actually in Far mode. And so I will be more suspicious of the hypothetical thought experiments from now on.

When you watch the movie series "Saw", you experience the "near mode" of thinking much more than with the examples given in this thread. "Saw" is about people trapped in various situations, enforced by mechanical means only (no psychotic person to beg for mercy, the same way you can't beg a train to stop), where they must choose which things to sacrifice to save a larger number of lives, sometimes including their own. For example, the first "Saw" movie starts with 2 dying people trapped in an abandoned basement, their legs chained to the wall, and the only way the first person can escape is to cut off his foot with the saw. Many times in the movie series, the group of trapped people chose whose turn it was to go into the next dangerous area to get the key to the next room. Similarly, the psychotic person who puts the people in those situations thinks he is doing it for their own good, because he chooses people who have little respect for their own lives, and through the process of escaping his horrible traps some of them end up in a better state of mind than before. I'm not saying that would really work, but that's the main subject of the movies and is shown in many ways simultaneously. These are good examples of how to avoid "meta thinking" and really think in "near mode": watch the "Saw" movies.

Comment author: DanArmak 31 December 2009 08:58:06PM *  6 points [-]

Rational minds need comedy too, or they go insane.

Not necessarily. It's just that we are very far from being perfectly rational.

Comment author: BenRayfield 01 January 2010 01:41:40AM 3 points [-]

Not necessarily. It's just that we are very far from being perfectly rational.

You're right. I wrote "rational minds" in general when I was thinking about the most rational few people today. I did not mean that any perfectly rational mind exists.

Most or all Human brains tend to work better if they experience certain kinds of things that may include wasteful parts, like comedy, socializing, and dreaming. It's not rational to waste more than you have to. Today we do not have enough knowledge of and control over our minds to optimize away all our wasteful/suboptimal thoughts.

I have no reason to think that, in the "design space" of all possible minds, there exist zero, or more than zero, perfectly rational minds that tend to think more efficiently after experiencing comedy.

I do have a reason to bias it slightly toward "there exists more than 0", because Humans and monkeys have a sense of humor that helps them think better if used at least once per day, but when thinking about exponential-size intelligence, that slight bias becomes an epsilon. Epsilon can be important if you're completely undecided, but usually it's best to look for ideas somewhere else before considering an epsilon-size chance. What people normally call "smarter than Human intelligence" is also an epsilon-size intelligence in this context, so the 2 things are not epsilon when compared to each other.

The main thing I've figured out here is to be more open-minded about whether comedy (and similar things) can increase the efficiency of a rational mind. I let an assumption get into my writing.

Comment author: Annoyance 23 March 2009 02:50:45PM -1 points [-]

Why is this posted to LessWrong?

What does it have to do with being less wrong or sharpening our rationality?

Comment author: BenRayfield 31 December 2009 08:40:07PM 12 points [-]

We are Borg. You will be assimilated. Resistance is futile. If Star Trek's Borg Collective came to assimilate everyone on Earth, Eliezer Yudkowsky would engage them in logical debate until they agreed to come back later, after our technology had increased exponentially for some number of years, making us more valuable to assimilate. Also, he would underestimate how fast our technology increases by just enough that when the Borg came back, we would be the stronger force.

Why is this posted to LessWrong? What does it have to do with being less wrong or sharpening our rationality?

Rational minds need comedy too, or they go insane. Much of this is vaguely related to rational subjects, so it does not fit well on other websites.
