Comment author: Vaniver 01 July 2011 06:11:45AM 0 points

I suppose I will go with statements, rather than a question: I suspect the returns to caring about ems are low; I suspect that defining, let alone preventing, torture of ems will be practically difficult or impossible; and I suspect that value systems that simply seek to minimize pain are poor value systems.

Comment author: VNKKET 02 July 2011 06:24:37AM * 1 point

I suspect that value systems that simply seek to minimize pain are poor value systems.

Fair enough, as long as you're not presupposing that our value systems -- which are probably better than "minimize pain" -- are unlikely to have strong anti-torture preferences.

As for the other two points: you might have already argued for them somewhere else, but if not, feel free to say more here. It's at least obvious that anti-em-torture is harder to enforce, but are you thinking it's also probably too hard to even know whether a computation creates a person being tortured? Or that our notion of torture is probably confused with respect to ems (and possibly with respect to us animals too)?

In response to Nonperson Predicates
Comment author: HopeFox 01 May 2011 09:03:21AM 0 points

This problem sounds awfully similar to the halting problem to me. If we can't tell whether a Turing machine will eventually terminate without actually running it, how could we ever tell if a Turing machine will experience consciousness without running it?

Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable. Sadly, I don't quite understand how that proof works either, so I can't use it as a basis for the consciousness problem. It just seems that figuring out if a Turing machine is conscious, or will ever achieve consciousness before halting, is much harder than figuring out if it halts.

Comment author: VNKKET 02 July 2011 06:17:17AM * 1 point

Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable.

Your conjecture seems to follow from Rice's theorem, assuming the personhood of a running computation is a property of the partial function its algorithm computes. Also, I think you can prove your conjecture by taking a certain proof that the Halting Problem is undecidable and replacing 'halts' with 'is conscious'. I can track this down if you're still interested.
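Here's roughly what that substitution looks like, as a purely hypothetical Python sketch. To be clear, `is_conscious`, `known_conscious_program`, and `run` are illustrative stand-ins that can't all actually exist (that's the point), and the argument assumes that blindly simulating a machine that never halts doesn't itself count as conscious:

```python
# Proof sketch, for contradiction: suppose a total decider
# `is_conscious(prog)` existed, returning True iff running `prog`
# yields a conscious computation. These stubs stand in for the
# hypothetical pieces.

def is_conscious(prog) -> bool:
    """Hypothetical decider -- cannot actually exist."""
    raise NotImplementedError

def known_conscious_program():
    """Any program we stipulate to be conscious."""
    raise NotImplementedError

def run(machine, tape):
    """Simulate `machine` on `tape`; returns only if it halts."""
    raise NotImplementedError

def halts(machine, tape) -> bool:
    # If `machine` halts on `tape`, running `wrapper` eventually behaves
    # like `known_conscious_program`; otherwise it simulates forever and
    # (we assume) is never conscious. So `is_conscious` would decide the
    # Halting Problem -- contradiction.
    def wrapper():
        run(machine, tape)
        return known_conscious_program()
    return is_conscious(wrapper)
```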

But this doesn't mess up Eliezer's plans at all: you can have "nonhalting predicates" that output "doesn't halt" or "I don't know", analogous to the nonperson predicates proposed here.
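To make that concrete, a nonhalting predicate can be as trivial as this sketch (illustrative Python; soundness is what matters, not coverage):

```python
def nonhalting_predicate(prog_source: str) -> str:
    """Sound but incomplete: it never wrongly claims nontermination."""
    # Recognize one program known to diverge; give the safe answer for
    # everything else. Adding more sound checks widens coverage without
    # ever needing to solve the full (undecidable) problem.
    if prog_source.strip() == "while True: pass":
        return "doesn't halt"
    return "I don't know"
```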

Comment author: Vaniver 30 June 2011 09:47:58PM -1 points

Are there empirical facts that can answer that question? It looks like a question about preferences to me, which are difficult to measure.

Comment author: VNKKET 01 July 2011 04:57:01AM * 0 points

I think you're right that many of the relevant empirical facts will be about your preferences. At risk of repeating myself, though, there are other facts that matter, like whether ems are conscious, how much it costs to prevent torture, and what better things we could be directing our efforts towards.

To partially answer your question ("how much effort is it worth to prevent the torture of ems?"): I sure do want torture to not happen, unless I'm hugely wrong about my preferences. So if preventing em torture turns out to not be worth a lot of effort, I predict it's because there are other bad things that can be more efficiently prevented with our efforts.

But I'm still not sure how you wanted your question interpreted. Are you, for example, wondering whether you care about ems as much as non-em people? Or whether you care about torture at all? Or whether the best strategy requires putting our efforts somewhere else, given that you care about torture and ems?

Comment author: Vaniver 29 June 2011 11:51:04PM -2 points

How much effort is it worth to prevent the torture of ems?

Comment author: VNKKET 30 June 2011 06:18:11AM 2 points

Are you unsure about whether em torture is as bad as non-em torture? Or do you just mean to express that we take em torture too seriously? Or is this a question about how much we should pay to prevent torture (of ems or not), given that there are other worthy causes that need our efforts?

Or, to ask all those questions at once: do you know which empirical facts you need to know in order to answer this?

What would an Incandescence about FAI look like?

Post author: VNKKET 01 May 2011 08:30PM 1 point

This post spoils Greg Egan's Incandescence.

Incandescence is a success story about some people who notice an existential threat and avoid it using science and engineering.  We see them figure out how gravity works, which is more interesting than it might sound, partly because their everyday experiences are full of gravitational effects that we don't notice on Earth.  At first they do science out of pure curiosity, but it turns into an urgent collective action problem when they discover that their orbit will lead them towards all sorts of disasters, including falling into a black hole.  The solution, it turns out, is to move some dirt around.

Has anyone considered writing a success story about using Friendly AI to solve an existential threat?

Comment author: VNKKET 18 March 2011 06:41:34AM * 0 points

Is it easy to accidentally come up with criteria for "locally correct" that will still let us construct globally wrong results?

This comment was brought to you by a surface analogy with the Penrose triangle.

Comment author: VNKKET 01 March 2011 02:15:53AM * 3 points

This makes me happy. Now, here's a question that is probably answered in the technical paper, but I don't have time to read it:

"New coins are generated by a network node each time it finds the solution to a certain calculational problem." What is this calculational problem? Could it easily serve some sinister purpose?

Comment author: VNKKET 26 February 2011 04:32:09AM 6 points

I donated $250 on the last day of the challenge.

Comment author: VNKKET 27 December 2010 11:49:03PM * 0 points

I finally remembered to post this here

Good timing, though: now this is fresh in our minds during the challenge.

Comment author: FormallyknownasRoko 02 December 2010 08:27:20PM 1 point

Sorry, I have to disagree on matching donations.

By switching a matching donation from third-world aid to existential risk mitigation, you do double your impact.

Comment author: VNKKET 12 December 2010 07:10:06AM 0 points

Oh, we agree; I was just unclear about my objection. Fixed.
