ata comments on Value Deathism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Perhaps you did. This time, my question was mostly rhetorical, but since you gave a thoughtful response, it seems a shame to waste it.
Uh. Prevent it how? I'm curious how that particular sausage will be made.
More sausage. How does the FAI solve that problem? It seemed that you said the root cause of the problem was technological progress, but perhaps I misunderstood.
Hmmm. Amnesty International, Doctors without Borders, and the Humane Society are three humanitarian causes that come to mind. FAI subsumes these ... how, exactly?
Again, my questions are somewhat rhetorical. If I really wanted to engage in this particular dialogue, I should probably do so in a top-level posting. So please do not feel obligated to respond.
It is just that if Ben Goertzel is so confused as to hope that any sufficiently intelligent entity will automatically empathize with humans, then how much confusion exists here regarding how readily humans will accept the idea of sharing a planet with an FAI? Smart people can have amazing blind spots.
If I knew how that sausage will be made, I'd make it myself. The point of FAI is to do a massive amount of good that we're not smart enough to figure out how to do on our own.
If humanity's extrapolated volition largely agrees that those causes are working on important problems, problems urgent enough that we're okay with giving up the chance to solve them ourselves if they can be solved faster and better by superintelligence, then it'll do so. Doctors Without Borders? We shouldn't be needing doctors (or borders) anymore. Saying how that happens is explicitly not our job — as I said, that's the whole point of making something massively smarter than we are. Don't underestimate something potentially hundreds or thousands or billions of times smarter than every human put together.
I actually think we know how to do the major 'trauma care for civilization' without FAI at this point. FAI looks much cheaper and possibly faster though, so in the process of doing the "trauma care" we should obviously fund it as a top priority. I basically see it as the largest "victory point" option in a strategy game.