Comment author: Recovering_irrationalist 21 September 2008 02:22:23PM 0 points [-]

If there is that 'g'/unhappiness correlation, maybe the causality runs unhappiness->'g'. The overly happy, seeing fewer problems, get less problem-solving practice, whereas a tendency to be analytical could boost 'g' over a lifetime, though perhaps not effective intelligence.

I wouldn't expect this to apply to most readers here, who take particular pleasure in solving intellectual problems. Think general population.

In response to Optimization
Comment author: Recovering_irrationalist 14 September 2008 11:17:38AM 0 points [-]

I wouldn't assume a process that seems to churn through preference cycles has an inconsistent preference ranking; it could be efficiently optimizing if each state provides diminishing returns. If every few hours a jailer offers either food, water, or a good book, you don't pick the same choice each time!
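A minimal sketch of the point, with illustrative numbers I've made up: an agent with one fixed, consistent utility function can still cycle through choices, because each good's *marginal* value falls the more recently it was consumed.

```python
# Sketch: a consistent optimizer cycling through options under
# diminishing returns. Base utilities and the decay rule are assumptions
# for illustration only.

def marginal_utility(base, consumed):
    # Diminishing returns: the next unit is worth less the more you hold.
    return base / (1 + consumed)

goods = {"food": 3.0, "water": 3.0, "book": 2.0}  # assumed base utilities
stock = {g: 0 for g in goods}

choices = []
for _ in range(9):
    # The jailer offers all three; greedily take the best marginal unit.
    pick = max(goods, key=lambda g: marginal_utility(goods[g], stock[g]))
    choices.append(pick)
    stock[pick] += 1

print(choices)  # rotates through all three goods, never repeating forever
```

Despite a fixed preference ranking, the greedy-optimal choice rotates among food, water, and books, which from the outside can look like an inconsistent preference cycle.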

Comment author: Recovering_irrationalist 17 August 2008 11:48:31AM 3 points [-]

I've spent some time online trying to track down the exact moment when someone noticed the vastly tangled internal structure of the brain's neurons, and said, "Hey, I bet all this giant tangle is doing complex information-processing!"

My guess is Ibn al-Haytham, early in the 11th century, while under house arrest after realizing he couldn't, as claimed, regulate the Nile's overflows.

Wikipedia: "In the Book of Optics, Ibn al-Haytham was the first scientist to argue that vision occurs in the brain, rather than the eyes. He pointed out that personal experience has an effect on what people see and how they see, and that vision and perception are subjective. He explained possible errors in vision in detail, and as an example described how a small child with less experience may have more difficulty interpreting what he or she sees. He also gave an example of how an adult can make mistakes in vision due to experience that suggests that one is seeing one thing, when one is really seeing something else."

Comment author: Recovering_irrationalist 13 August 2008 01:19:00PM 0 points [-]

Eliezer: The overall FAI strategy has to be one that would have turned out okay if Archimedes of Syracuse had been able to build an FAI.

I'd feel a lot safer if you'd extend this back at least to the infanticidal hunter-gatherers, and preferably to apes fighting around the 2001 monolith.

Comment author: Recovering_irrationalist 06 August 2008 11:28:00PM 0 points [-]

Are you rationally taking into account the biasing effect your heartfelt hopes exert on the set of hypotheses raised to your conscious attention as you conspire to save the world?

Recovering, in instances like these, reversed stupidity is not intelligence; you cannot say, "I wish fast takeoff to be possible, therefore it is not".

Indeed. But you can, for example, say "I wish fast takeoff to be possible, so I should be less impressed, all else equal, by the number of hypotheses I can think of that happen to support it".

Do you wish fast takeoff to be possible? Aren't then Very Horrid Singularities more likely?

All you can do is try to acquire the domain knowledge and put your mind into forward form.

Yes, but even then the ballot stuffing is still going on beneath your awareness, right? Doesn't that still count as some evidence for caution?

Comment author: Recovering_irrationalist 06 August 2008 07:26:37PM 0 points [-]

Will Pearson: When you were figuring out how powerful AIs made from silicon were likely to be, did you have a goal that you wanted? Do you want AI to be powerful so it can stop death?

Eliezer: ..."Yes" on both counts....

I think you sidestepped the point as it related to your post. Are you rationally taking into account the biasing effect your heartfelt hopes exert on the set of hypotheses raised to your conscious attention as you conspire to save the world?

Comment author: Recovering_irrationalist 05 August 2008 12:42:53PM 0 points [-]

Carl Shulman: Occam's razor makes me doubt that we have two theoretical negative utilitarians (with egoistic practice) who endorse Pascal's wager, with similar writing styles and concerns, bearing Internet handles that begin with 'U.'

michael vassar: Unknown and Utilitarian could be distinct but highly correlated (we're both here after all). In principle we could see them as both unpacking the implications of some fairly simple algorithm.

With thousands of frequent-poster-pairs with many potentially matchable properties, I'm not too shocked to find a pair that match on six mostly correlated properties.
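Rough multiple-comparisons arithmetic behind this, with numbers that are pure assumptions on my part: with thousands of posters, the number of poster-pairs is in the millions, so even a very unlikely six-property match should turn up somewhere.

```python
# Birthday-paradox-style estimate: many pairs make a rare match likely.
# Both numbers below are illustrative assumptions, not data.
n_posters = 2000                  # assumed pool of frequent posters
pairs = n_posters * (n_posters - 1) // 2

p_match = 1e-5                    # assumed chance a random pair matches
                                  # on six (mostly correlated) properties
expected_matches = pairs * p_match

print(pairs)                      # 1999000 pairs
print(expected_matches)           # ~20 expected matching pairs
```

Under these made-up figures the expected number of matching pairs is around twenty, so finding one such coincidence is weak evidence that the two handles are the same person.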

Comment author: Recovering_irrationalist 23 July 2008 02:44:11PM 0 points [-]

For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider switched on. If I could prevent only one of these events, I would prevent the lottery.

On the other hand, if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

Hmm... might this be the heuristic that makes people prefer a 1% chance of 1000 deaths to a definite death for 5? The lottery would definitely destroy worlds, with as many deaths as killing over six thousand people in each Everett branch. Running the LHC means a higher expected number of dead worlds by your own estimates, but it's all or nothing across universes. It will most probably just be safe.

If you had a definite number for both P(Doomsday Lottery Device Win) and P(Doomsday LHC) you'd shut up and multiply, but you haven't, so you don't. But you still should, because you're pretty sure P(DLHC) >> P(DLDW) even without an exact figure for P(DLHC).

This assumes Paul's assumption, above.
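The multiplication itself, sketched with assumed figures (the 2008 world population is a round number, and the doom probabilities are placeholders): the lottery kills population/1,000,000 people with certainty, viewed across branches, while the LHC kills everyone with some small probability p, so the break-even point is exactly p = 1e-6.

```python
# "Shut up and multiply" sketch. All figures are illustrative assumptions.
population = 6.7e9                 # rough 2008 world population

# The 1-in-1,000,000 doomsday lottery: certain destruction of one world
# in a million, i.e. a guaranteed expected toll across branches.
lottery_expected_deaths = population / 1_000_000

def lhc_expected_deaths(p_doom):
    # All-or-nothing: everyone dies with probability p_doom.
    return population * p_doom

print(lottery_expected_deaths)        # 6700.0 -- the "over six thousand"
print(lhc_expected_deaths(1e-6))      # 6700.0 -- break-even at p = 1e-6
print(lhc_expected_deaths(1e-4) > lottery_expected_deaths)  # True
```

So if, per the comment above, you can't claim P(LHC doom) is below one in a million, the LHC carries the higher expected death toll even without an exact figure for it.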

In response to I'd take it
Comment author: Recovering_irrationalist 02 July 2008 09:50:19PM 0 points [-]

I doubt my ability to usefully spend more than $10 million/year on the Singularity. What do you do with the rest of the money?

Well I admit it's a hell of a diminishing returns curve. OK... Dear Santa, please can I have an army of minions^H^H^H^Htrusted experts to research cool but relatively neglected stuff like intelligence enhancement, life extension (but not nanotech) & how to feed and educate the third world without screwing it up. And deal with those pesky existential risks. Oh, and free cryonics for everyone - let's put those economies of scale to good use. Basically keep people alive till the FAI guys get there. Then just enjoy the ride, cos if I've just been handed $10^13 I'm probably in a simulation. More so than usual.

In response to I'd take it
Comment author: Recovering_irrationalist 02 July 2008 12:56:27PM 1 point [-]

Anonymous: I'd hire all AI researchers to date to work under Eliezer and start seriously studying to be able to evaluate myself whether flipping the "on" switch would result in a friendly singularity.

(emphasis mine)

I doubt this is the way to go. I'd want a medium-sized, talented, and rational team who seriously care, not every AI programmer in the world who smells money. I'd bring Eliezer a blank cheque and listen to his arguments, and those of the people he trusts, for its best use; though he'd have to convince me where we disagreed, he seems good at that.

Also, even after years of studying for it I wouldn't trust me, or anyone else for that matter, to make that switch-on decision alone.

View more: Prev | Next