Comment author: Tom_McCabe2 18 September 2008 02:42:49AM 6 points

"And I wonder if that advice will turn out not to help most people, until they've personally blown off their own foot, saying to themselves all the while, correctly, "Clearly I'm winning this argument.""

I fell into this pattern for quite a while. My basic conception was that if everyone presented their ideas and argued about them, the best ideas would win. Hence, arguing benefited both me and the people on transhumanist forums: we both threw out mistaken ideas and accepted correct ones. Eliezer_2006 even seemed to support my position, with Virtue #5. It never really occurred to me that the best of everyone's ideas might not be good enough.

"It is Nature that I am facing off against, who does not match Her problems to your skill, who is not obliged to offer you a fair chance to win in return for a diligent effort, who does not care if you are the best who ever lived, if you are not good enough."

Perhaps we should create an online database of open problems, if one doesn't already exist. There are several precedents (http://en.wikipedia.org/wiki/Hilbert%27s_problems). So far as I know, if one wishes to attack open problems in physics/chemistry/biology/comp. sci./FAI, the main courses of action are to attack famous problems (where you're expected to fail, and so don't feel bad when you do) or to work through the educational literature (where the difficulty of the problems is pre-matched to the level of the material).

Comment author: Tom_McCabe2 09 September 2008 03:57:28AM 0 points

"Before anyone posts any angry comments: yes, the registration costs actual money this year."

For comparison: The Singularity Summit at Stanford cost $110K, all of which was provided by SIAI and sponsors. Singularity Summit 2007 undoubtedly cost more, and only $50K of that was raised through ticket sales. All ticket purchases for SS08 will be matched 2:1 by Peter Thiel and Brian Cartmell.

Comment author: Tom_McCabe2 06 September 2008 09:39:07PM 0 points

"Ask anyone, and they'll say the same thing: they're pretty open-minded, though they draw the line at things that are really wrong."

I generally find myself arguing against open-mindedness: because "open-mindedness" is a social virtue, a lot of people apply it indiscriminately, and so wind up wasting time on long-debunked ideas.

"In the same way that we need statesmen to spare us the abjection of exercising power, we need scholars to spare us the abjection of learning."

How many people *want* to exercise government-type power over large numbers of people? Not many: a lot of people are apparently happy to let someone else tell them what to do, and most of the rest aren't very ambitious.

"Because giftedness is not to be talked about, no one tells high-IQ children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior human beings, but lucky ones. That the gift brings with it obligations to be worthy of it."

(remembers childhood)

When adults did tell me this, I didn't believe them; after all, wasn't it blatantly obvious that there was a strong negative correlation between intelligence and quality of life?

"The best part about math is that, if you have the right answer and someone disagrees with you, it really is because they're stupid."

This is true, but only for arbitrarily low values of "stupid". There are plenty of theorems which are obvious to a superintelligence but counterintuitive to humans.

"Long-Term Capital Management had faith in diversification. Its history serves as ample notification that eggs in different baskets can and do all break at the same time."

If I recall correctly, LTCM was so highly leveraged that most of their eggs didn't *have* to break; if just 10% or so did, they were hosed anyway.
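
Back-of-the-envelope, with assumed round numbers (not LTCM's actual balance sheet): at 25:1 leverage, equity is only 4% of assets, so a large loss on even a tenth of the positions is fatal.

    # Illustrative figures only; the 25:1 leverage and 40% loss rate are
    # assumptions for this sketch, not LTCM's actual numbers.
    leverage = 25.0              # dollars of assets per dollar of equity
    equity = 1.0
    assets = leverage * equity
    fraction_broken = 0.10       # "just 10% or so" of the eggs break
    loss_per_broken = 0.40       # each broken position loses 40% of its value
    total_loss = assets * fraction_broken * loss_per_broken
    print(total_loss >= equity)  # True: the fund's equity is wiped out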

Comment author: Tom_McCabe2 30 August 2008 02:53:43AM 1 point

"In fact, if you're interested in the field, you should probably try counting the ways yourself, before I continue. And score yourself on how deeply you stated a problem, not just the number of specific cases."

I got #1, but I mushed #2 and #3 together into "The AI will rewire our brains into computationally cheap super-happy programs with humanesque neurology", as I was thinking of failure modes rather than reasons why failure modes would be bad.

Comment author: Tom_McCabe2 23 August 2008 03:06:40AM 1 point

"The real question is when "Because Eliezer said so!" became a valid moral argument."

You're confusing the algorithm Eliezer is trying to approximate with the real, physical Eliezer. If Eliezer were struck by a cosmic ray tomorrow and became a serial killer, you, Eliezer, and I would all agree that this doesn't make being a serial killer right.

Comment author: Tom_McCabe2 08 August 2008 05:32:07AM 1 point

"Tom McCabe: speaking as someone who morally disapproves of murder, I'd like to see the AI reprogram everyone back, or cryosuspend them all indefinitely, or upload them into a sub-matrix where they can think they're happily murdering each other without all the actual murder. Of course your hypothetical murder-lovers would call this immoral, but I'm not about to start taking the moral arguments of murder-lovers seriously."

Beware shutting yourself into a self-justifying memetic loop. If you had been born in 1800, and just recently moved here via time travel, would you have refused to listen to all of our modern anti-slavery arguments, on the grounds that no moral argument by negro-lovers could be taken seriously?

"The AI would use the previous morality to select its actions: depending on the content of that morality it might or might not reverse the reprogramming."

Do you mean would, or should? My question was what the AI should do, not what a human-constructed AI is likely to do.

It should be possible for an AI, upon perceiving any huge changes in renormalized human morality, to scrap its existing moral system and recalibrate from scratch, even if nobody actually codes an AI that way. Obviously, the previous morality will determine the AI's *very next* action, but the interesting question is whether the important actions (the ones that directly affect people) map onto a new morality or the previous morality.

Comment author: Tom_McCabe2 08 August 2008 02:32:54AM 3 points

"You perceive, of course, that this destroys the world."

If the AI modifies humans so that humans want whatever happens to already exist (say, diffuse clouds of hydrogen), then this is clearly a failure scenario.

But what if the Dark Lords of the Matrix reprogrammed everyone to like murder, from the perspective of both the murderer and the murderee? Should the AI use everyone's prior preferences as morality, and reprogram us again to hate murder? Should the AI use prior preferences, and forcibly stop everyone from murdering each other, even if this causes us a great deal of emotional trauma? Or should the AI recalibrate morality to everyone's current preferences, and start creating lots of new humans to enable more murders?

Comment author: Tom_McCabe2 05 August 2008 07:52:04AM 0 points

"However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren't in there."

Several years ago, I was attracted to pure libertarianism as a possible objective morality for precisely this reason. The idea that, e.g., chocolate tastes good can't possibly be represented directly in an objective morality, as chocolate is unique to Earth and objective moralities need to apply everywhere. However, the idea of immorality stemming from violation of another person's liberty seemed simple enough to arise spontaneously from the mathematics of utility functions.

It turns out that you *do* get a morality out of the mathematics of utility functions (sort of), in the sense that expected-utility maximizers will tend towards certain actions and away from others unless some special conditions are met. Unfortunately, these actions aren't very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples). If libertarianism really *were* a universal morality, Friendly AI would be much simpler, as we could fail on the first try without the UFAI killing us all.
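
A toy illustration of this convergence (my own construction, not an example taken from the linked paper): whatever the terminal goal, an action that grows the agent's resources raises expected utility, so maximizers with very different goals all pick the same instrumental action.

    # Toy model: final utility = goal_weight * resources at the end.
    # "acquire_resources" doubles resources before the goal is pursued,
    # so it dominates for every positive goal_weight.
    def expected_utility(goal_weight, resources, action):
        if action == "acquire_resources":
            resources *= 2
        return goal_weight * resources

    for goal_weight in (0.1, 1.0, 10.0):    # three very different goals
        best = max(("pursue_goal_directly", "acquire_resources"),
                   key=lambda a: expected_utility(goal_weight, 1.0, a))
        print(goal_weight, best)            # all pick "acquire_resources"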

Comment author: Tom_McCabe2 01 August 2008 08:59:46AM 3 points

"is true *except* where general intelligence is at work. It probably takes more complexity to encode an organism that can multiply 7 by 8 and can multiply 432 by 8902 but cannot multiply 6 by 13 than to encode an organism that can do all three,"

This is a property of algorithms in general, not of general intelligence specifically. Writing a Python/C/assembler program that multiplies A and B is simpler than writing one that multiplies A and B except when A % B = 340. It depends on whether you're thinking of multiplication as an algorithm or as a giant lookup table (http://lesswrong.com/lw/l9/artificial_addition/).
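
A minimal sketch of the point (the A % B = 340 carve-out is the hypothetical from above): the fully general function stays constant-sized, while the exception-riddled version must spend extra code specifying exactly which cases to get wrong, just as a lookup table grows with every entry.

    # The general algorithm: constant size no matter how many inputs it covers.
    def multiply(a, b):
        return a * b

    # The carve-out version: strictly more code, because the exception
    # (the arbitrary A % B == 340 condition) must be spelled out explicitly.
    def multiply_with_exception(a, b):
        if a % b == 340:
            raise ValueError("this pair is arbitrarily excluded")
        return a * b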
