Comment author: Lapsed_Lurker 18 July 2014 09:52:12PM 0 points [-]

Surely if you provably know what the ideal FAI would do in many situations, a giant step forward has been made in FAI theory?

Comment author: Lapsed_Lurker 12 February 2014 03:45:55PM 3 points [-]

BBC Radio : Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.

Comment author: NoSuchPlace 07 February 2014 01:57:17PM 5 points [-]

Today's SMBC is about an AI with a utility function which sounds good but isn't.

Comment author: Lapsed_Lurker 07 February 2014 06:58:28PM -1 points [-]

Drat. I just came here to post that. Still, at least this time I only missed by hours.

Comment author: Stuart_Armstrong 15 July 2013 12:13:16PM 3 points [-]

I'm trying to define threat/blackmail or similar concepts in decision theory. In the two examples above, one seems a clear negative situation, the other doesn't, and I can't figure out what the difference is.

Comment author: Lapsed_Lurker 15 July 2013 12:40:12PM 2 points [-]

You need a different definition for 'blackmail' then. Carrying out the threatened action X might be beneficial to the blackmailer rather than costly, and it would still be blackmail.

Comment author: Lapsed_Lurker 15 July 2013 12:09:38PM 1 point [-]

Why not taboo 'blackmail'? That word already has a bunch of different meanings in law and common usage.

Comment author: solipsist 19 June 2013 12:38:07PM *  3 points [-]

It still seems to me that you can't have a BestDecisionAgent. Suppose agents are black boxes -- Omegas can simulate agents at will, but not view their source code. An Omega goes around offering agents a choice between:

  • $1, or
  • $100 if the Omega thinks the agent acts differently than BestDecisionAgent in a simulated rationality test, otherwise $2 if the agent acts like BestDecisionAgent in the rationality test.

Does this test meet your criteria for a fair test? If not, why not?
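The payoff rule above is a bit dense, so here is a minimal sketch of it (the function and argument names are illustrative assumptions, not from the comment):

```python
def payoff(agent_choice, simulated_choice, best_agent_choice):
    """Dollar payout Omega offers under solipsist's proposed rule."""
    if agent_choice == "take_1":
        return 1  # the sure-thing option
    # Agent opted for the conditional prize instead of the sure $1.
    if simulated_choice != best_agent_choice:
        return 100  # simulated behaviour provably differs from BestDecisionAgent
    return 2        # simulated behaviour matches BestDecisionAgent

# BestDecisionAgent itself can never collect the $100: its simulated
# behaviour matches itself by definition, so its options are only $1 or $2,
# while an agent that visibly differs from it can collect $100.
```

The point of the sketch is the diagonalisation: whatever BestDecisionAgent does, some other agent gets a strictly better menu, which is why the test is offered as a challenge to the "fair test" criteria.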

Comment author: Lapsed_Lurker 19 June 2013 10:52:25PM 0 points [-]

Omega gives you a choice of either $1 or $X, where X is either 2 or 100?

It seems like you must have meant something else, but I can't figure it out.

Comment author: Lapsed_Lurker 20 February 2013 02:08:21PM 6 points [-]

Isn't that steel-man, rather than strong-man?

In response to comment by Doug_S. on Sensual Experience
Comment author: taryneast 04 June 2011 06:03:20PM 1 point [-]

Or the other question of "why don't you kill babies while they're still innocent and guaranteed to go to heaven?"...

Comment author: Lapsed_Lurker 08 January 2013 12:30:53PM 0 points [-]

Reading that, I thought: "I bet people asking questions like that is why 'Original Sin' got invented".

Of course, the next step is to ask: "Why doesn't the priest drown the baby in the baptismal font, now that its Original Sin is forgiven?"

Comment author: Lapsed_Lurker 12 November 2012 11:11:40AM 4 points [-]

I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in

Are there lists like this about? I think I'd like to read about that sort of stuff.

Comment author: David_Gerard 01 November 2012 10:50:16AM 19 points [-]

Run both sides. It's a good worked example of two smart people talking past each other.

Comment author: Lapsed_Lurker 01 November 2012 11:48:17PM *  0 points [-]

I remember seeing a few AI debates (and debates on other topics, mostly on YouTube) where they'd just be getting to the point of clarifying what each person actually believes, and then you get 'agree to disagree'. The end.

Just when the really interesting part seemed to be approaching! :(

As for text-based discussions that fail to go anywhere, that brings to mind the 'talking past each other' you mention, or 'appears to be deliberately misinterpreting the other person'.
