ata comments on Value Deathism - Less Wrong

Post author: Vladimir_Nesov 30 October 2010 06:20PM

Comment author: Perplexed 31 October 2010 04:36:59AM 0 points

> I feel like I remember trying to answer the same question (asked by you) before ...

Perhaps you did. This time, my question was mostly rhetorical, but since you gave a thoughtful response, it seems a shame to waste it.

> (1) eventually ... someone is probably going to build one anyway, probably without being extremely careful ..., and getting FAI before then will probably be the only way to prevent it;

Uh. Prevent it how? I'm curious how that particular sausage will be made.

> (2) ... it's likely that humanity's technological progress over the next century will continuously lower the amount of skill, intelligence, and resources needed to accidentally or intentionally do terrible things — and getting FAI before then may be the best long-term solution to that;

More sausage. How does the FAI solve that problem? You seemed to say that the root cause of the problem was technological progress, but perhaps I misunderstood.

> (3) ... it subsumes all other humanitarian causes ...

Hmmm. Amnesty International, Doctors without Borders, and the Humane Society are three humanitarian causes that come to mind. FAI subsumes these ... how, exactly?

Again, my questions are somewhat rhetorical. If I really wanted to engage in this particular dialog, I should probably do so in a top-level posting. So please do not feel obligated to respond.

It is just that if Ben Goertzel is so confused as to hope that any sufficiently intelligent entity will automatically empathize with humans, how much confusion exists here about whether humans will automatically accept the idea of sharing a planet with an FAI? Smart people can have amazing blind spots.

Comment author: ata 31 October 2010 05:03:22AM 8 points

If I knew how that sausage would be made, I'd make it myself. The point of FAI is to do a massive amount of good that we're not smart enough to figure out how to do on our own.

> Hmmm. Amnesty International, Doctors without Borders, and the Humane Society are three humanitarian causes that come to mind. FAI subsumes these ... how, exactly?

If humanity's extrapolated volition largely agrees that those causes are working on important problems, problems urgent enough that we're okay with giving up the chance to solve them ourselves if they can be solved faster and better by a superintelligence, then the FAI will subsume them. Doctors Without Borders? We shouldn't be needing doctors (or borders) anymore. Saying how that happens is explicitly not our job; as I said, that's the whole point of making something massively smarter than we are. Don't underestimate something potentially hundreds or thousands or billions of times smarter than every human put together.

Comment author: MichaelVassar 03 November 2010 08:13:52PM 5 points

I actually think we know how to do the major 'trauma care for civilization' without FAI at this point. FAI looks much cheaper and possibly faster, though, so in the process of doing the "trauma care" we should obviously fund it as a top priority. I basically see it as the largest "victory point" option in a strategy game.