Vladimir_Nesov comments on The Sword of Good - Less Wrong

85 Post author: Eliezer_Yudkowsky 03 September 2009 12:53AM


Comment author: Vladimir_Nesov 05 September 2009 08:11:45AM 6 points

If you have a system that's perfectly capable of making changes on its own, debugged by millions of years of evolution, why on earth would you want to bypass those safeties?

To do better?

Comment author: pjeby 05 September 2009 01:30:54PM 1 point

To do better?

You don't need to bypass the safeties to do better. What you need is not a bigger hammer with which to change the brain, but a better idea of what to change, and what to change it to.

That's the thing that annoys me the most about brain-mod discussions here -- it's like talking about opening up the case on your computer with a screwdriver, when you've never even looked at the screen or tried typing anything in -- and then arguing that all modifications to computers are therefore difficult and dangerous.

Comment author: CronoDAS 05 September 2009 10:43:47PM 2 points

To use an analogy, the kind of brain modifications we're talking about would be the kind of modifications you'd have to do to a 286 in order to play Crysis (a very high-end game) on it.

Comment author: [deleted] 06 September 2009 02:01:56AM 1 point

If I'm not mistaken, as far as raw computing power goes, the human brain is more powerful than a 286. The question is--and this is something I'm honestly wondering--whether it's feasible, given today's technology, to turn the brain into something that can actually use that power in a fashion that isn't horribly indirect. Every brain is powerful enough to play dual 35-back perfectly (if I had access to brain-making tools, I imagine I could make a dual 35-back player using a mere 70,000 neurons); it's simply not sufficiently well-organized.

If your answer to the above is "no way José", please say why. "It's not designed for that" is not sufficient; things do things they weren't designed to do all the time.
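For comparison, the game itself is trivial for a conventional computer: perfect play requires nothing but two n-item buffers. A minimal Python sketch, assuming the usual dual n-back rules (two stimulus channels, each matched against the stimulus from n steps back):

```python
from collections import deque

def play_dual_n_back(stimuli, n=35):
    """Perfect dual n-back play: for each (position, sound) pair,
    report whether each channel matches the stimulus n steps earlier.
    Two n-item buffers are the only state needed."""
    pos_buf = deque(maxlen=n)  # last n position stimuli
    snd_buf = deque(maxlen=n)  # last n sound stimuli
    responses = []
    for pos, snd in stimuli:
        # buf[0] is the stimulus from exactly n steps back, once full
        pos_match = len(pos_buf) == n and pos_buf[0] == pos
        snd_match = len(snd_buf) == n and snd_buf[0] == snd
        responses.append((pos_match, snd_match))
        pos_buf.append(pos)
        snd_buf.append(snd)
    return responses
```

The point being that the task's storage demands are tiny; it is the brain's organization, not its raw capacity, that makes it hard for us.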

Comment author: Vladimir_Nesov 05 September 2009 01:52:07PM 0 points

You don't need to bypass the safeties to do better. What you need is not a bigger hammer with which to change the brain, but a better idea of what to change, and what to change it to.

But you do need a bigger hammer as well. And that bigger hammer is dangerous.

Comment author: pjeby 05 September 2009 05:25:04PM 1 point

But you do need a bigger hammer as well.

For what, specifically?

Comment author: JGWeissman 05 September 2009 07:10:57PM 4 points

A brain emulation may want to modify itself so that when it multiplies numbers together, instead of its hardware emulating all the neurons involved, it performs the multiplication on a standard computer processor.

This would be far faster, more accurate, and less memory-intensive.

Implementing this would involve figuring out how the hardware could recognize the intention to perform a multiplication, represent the numbers digitally, and then present the answer back to the emulated neurons. This is outside the scope of any mechanism we might have for making changes within our own brains, which would not be able to modify the emulator.
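A toy sketch of that kind of interception, with an entirely hypothetical emulator class standing in for the real machinery (the "neuron-level" path is simulated here by deliberately slow repeated addition):

```python
class NeuronEmulator:
    """Hypothetical illustration: multiply either by 'emulating
    neurons' (slow, indirect) or by routing to the host CPU."""

    def __init__(self, use_host_multiply=False):
        self.use_host_multiply = use_host_multiply

    def _emulated_multiply(self, a, b):
        # Stand-in for simulating all the neurons involved:
        # repeated addition, deliberately slow and indirect.
        total = 0
        for _ in range(abs(b)):
            total += abs(a)
        if (a < 0) != (b < 0):
            total = -total
        return total

    def multiply(self, a, b):
        # The hook: recognize the multiply intention and route it
        # to native hardware instead of neuron-level emulation.
        if self.use_host_multiply:
            return a * b  # one machine operation vs. many emulated steps
        return self._emulated_multiply(a, b)
```

Both paths return the same answer; the hooked path simply skips the emulation layer entirely, which is exactly the kind of change only something outside the emulated brain could make.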

Comment author: Eliezer_Yudkowsky 05 September 2009 07:10:38PM -1 points

Cracking the protein folding problem, building nanotechnology, and reviving a cryonics patient at the highest possible fidelity. Redesigning the spaghetti code of the brain so as to permit it to live a flourishing and growing life rather than e.g. overloading with old memories at age 200.

I suppose you make a remarkable illustration of how people with no cosmic ambitions, brainwashed by the self-help industry, don't even have any goals in life that require direct brain editing, and aren't willing to imagine them because it implies that their own brains are (gasp!) inadequate.

Comment author: pjeby 09 September 2009 03:24:25AM 2 points

I suppose you make a remarkable illustration of how people with no cosmic ambitions, brainwashed by the self-help industry, don't even have any goals in life that require direct brain editing, and aren't willing to imagine them because it implies that their own brains are (gasp!) inadequate.

Wow, somebody's cranky today. (I could equally note that you're an illustration of what happens when people try to build a technical solution to a human problem... while largely ignoring the human side of the problem.)

Solving cooler technical problems or having more brain horsepower sure would be nice. But as I already know from personal experience, being smarter than other people doesn't help if it only means you execute your biases and misconceptions with greater speed and an increased illusion of certainty.

Hence, I consider the sort of self-modification that removes biases, misconceptions, and motivated reasoning to be both vastly more important and far more urgent than the sort that would let me think faster while retaining the exact same blind spots.

But if you insist on hacking brain hardware directly or in emulation, please do start with debugging support: the ability to see in real time what belief structures are being engaged in reaching a decision or conclusion, with nice tracing readouts of all their backing assumptions. That would be really, really useful, even if you never made any modifications beyond the ones that would occur merely from observing the debugger output.

Comment author: Eliezer_Yudkowsky 09 September 2009 04:09:59AM 6 points

you're an illustration of what happens when people try to build a technical solution to a human problem

If there were a motivator captioned "TECHNICAL SOLUTIONS TO HUMAN PROBLEMS", I would be honored to have my picture appear on it, so thank you very much.

Comment author: pjeby 09 September 2009 04:36:45AM 1 point

If there were a motivator captioned "TECHNICAL SOLUTIONS TO HUMAN PROBLEMS", I would be honored to have my picture appear on it, so thank you very much.

You left out the "ignoring the human part of the problem" part.

The best technical solutions to human problems are the ones that leverage the natural behaviors of humans, rather than trying to replace those behaviors with a perfect technical process or system, or trying to force the humans to conform to expectations.

(I'd draw an analogy with Nelson's Xanadu vs. the web-as-we-know-it, but that could be mistaken for a pure Worse Is Better argument, and I certainly don't want any motivated superintelligences being built on a worse-is-better basis.)