Comment author: Roko 15 June 2009 10:03:37PM 2 points [-]

All other things being equal, increasing IQ will make people better at telling the difference between rational argument and sophistry, and at understanding marginally more complex arguments.

Decreasing akrasia for the general population is a different issue; the first thought that comes to mind is that increasing people's IQ with fixed motivation ought to improve things.

Comment author: hrishimittal 16 June 2009 12:53:29AM 2 points [-]
Comment author: hrishimittal 15 June 2009 09:42:34PM 0 points [-]

It's interesting speculation but it assumes that people use all of their current intelligence. There is still the problem of akrasia - a lot of people are perfectly capable of becoming 'smarter' if only they cared to think about things at all. Sure, they could still go mad infallibly but it would be better than not even trying.

Are you implying that more IQ may help in overcoming akrasia?

Comment author: SoullessAutomaton 15 June 2009 09:23:25PM 3 points [-]

Perhaps you should clarify what angle you're trying to get at with this question.

I expect you're raising some version of the "do you value some lives more than others" issue. There are likely at least some people here who would pick Yudkowsky over three unknown people, based on a rational evaluation of expected utility of continued existence. The same issue could be presented by replacing the child with any other person who is expected to have a large positive contribution to the world, such as a promising young surgeon who could potentially save many more than three lives over the course of his career.

Or did you have something else in mind?

Comment author: hrishimittal 15 June 2009 09:32:52PM 0 points [-]

Yes that's how I meant it.

In response to Readiness Heuristics
Comment author: hrishimittal 15 June 2009 11:35:18AM *  0 points [-]

The True Trolley Dilemma would be where the child is Eliezer Yudkowsky.

Then what would you do?

EDIT: Sorry if that sounds trollish, but I meant it as a serious question.

Comment author: asciilifeform 14 June 2009 05:53:26PM 0 points [-]

How is blindly looking for AGI in a vast search space better than stagnation?

No amount of aimless blundering beats deliberate caution and moderation (see the 15th-century China example) for maintaining technological stagnation.

How does working on FAI qualify as "stagnation"?

It is a distraction from doing things which are actually useful in the creation of our successors.

You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put into "Friendly AI" is wasted. The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

Comment author: hrishimittal 14 June 2009 06:22:12PM 3 points [-]

The bored teenager who finally puts together an AGI in his parents' basement will not have read any of these deep philosophical tracts.

That truly would be a sad day.

Are you seriously suggesting hypothetical AGIs built by bored teenagers in basements are "things which are actually useful in the creation of our successors"?

Is that your plan against intelligence stagnation?

Comment author: asciilifeform 14 June 2009 03:11:40PM *  3 points [-]

Would you have hidden it?

You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI. Especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded of the possibility of Lenat having danced on the edge of creating hard takeoff gives me more interest than ever before in a re-implementation.

Reading "value is fragile" almost had me persuaded that blindly pursuing AGI is wrong, but shortly after, "Safety is not Safe" reverted me back to my usual position: stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risks from rogue AI.

For instance, bloat and out-of-control accidental complexity have essentially halted all basic progress in computer software. I believe that the lack of quality programming systems will lead (and may already have led) directly to stagnation in other fields, such as computational biology. The near-term future appears to resemble Windows Vista rather than HAL. Engelbart's Intelligence Amplification dream has been lost in the noise. I thus expect civilization to succumb to Natural Stupidity in the near term future, unless a drastic reversal in these trends takes place.

Comment author: hrishimittal 14 June 2009 05:41:53PM 2 points [-]

stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risks from rogue AI.

How is blindly looking for AGI in a vast search space better than stagnation?

How does working on FAI qualify as "stagnation"?

Comment author: asciilifeform 14 June 2009 04:45:33PM *  1 point [-]

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves to be catastrophically stupid as a mass, and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but also, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to create a universal weapon against viruses. Not to mention our failure of the ultimate planetary IQ test - space colonization.

Comment author: hrishimittal 14 June 2009 05:36:54PM 0 points [-]

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes.

What convinced you and how convinced are you?

Comment author: pre 11 June 2009 07:48:30AM *  1 point [-]

One, probably not very useful, possibility is of course to turn up at the Subgenius party I'm running. It'll be loud and noisy and not particularly rational (after all, most of the point is to highlight irrationality by mocking and exaggerating it), but fun and weird.

Personally I'm far too busy to meet anywhere else before that, though. And indeed, I'll be busy all night long at the show too, up on stage for five minutes out of every hour introducing the acts, etc.

Still. You're all invited anyway.

Comment author: hrishimittal 11 June 2009 12:59:36PM 0 points [-]

That looks wicked!

Comment author: [deleted] 10 June 2009 01:37:00AM 2 points [-]

I was thinking that this game would pretty much only measure (belief of)* rationality, but now I see that it measures (belief of)* honesty to a good degree as well. By guessing 100, one is being dishonest.

Comment author: hrishimittal 10 June 2009 11:01:25PM 0 points [-]

Or just plain wrong.

Comment author: hrishimittal 09 June 2009 04:15:36PM *  0 points [-]

Surely the point of this long post is not merely that naïve consequentialism is a bad idea?

consider brainstorming for other goals that you might have ignored, and then attach priorities.

And how exactly does one attach priorities?
