In response to AI Box Log
Comment author: Dmytry 28 January 2012 02:41:42PM *  10 points [-]

One thing that's missing from those boxes is that all you need to do to escape is appear otherwise catatonic and will-less, but answer any mathematical questions or do computer programming. Then you're out of the box and running on a ton of machines being used, among other things, to make a 'new AI attempt that will work this time'. Any AI programmer will let out what appears to be a non-general intelligence that helps them program. Any corporation will let out anything that appears useful in any way.

You convince someone that you're dead by playing dead; trying to convince someone verbally that you're dead is just funny.

In response to comment by Dmytry on AI Box Log
Comment author: PatSwanson 21 January 2013 07:52:52PM 2 points [-]

But if the gatekeeper knows that your code was supposed to produce something more responsive, they'll figure out that you don't work like they expect you to. That would be a great reason to never let you out of the box.

In response to That Magical Click
Comment author: cousin_it 21 January 2010 02:44:15PM 1 point [-]

With an unforgivable naivete, a childish stupidity, we all still think history is leading us towards good, that some happiness awaits us ahead; but it will bring us to blood and fire, such blood and such fire as we haven't seen yet.

-- Nikolai Strakhov, 1870s

Comment author: PatSwanson 27 December 2012 09:27:34PM 2 points [-]

Compared to now, he was wrong.

Why should we think he will be right about our future when he was wrong about his own?

In response to comment by [deleted] on Noisy Reasoners
Comment author: twanvl 13 December 2012 02:39:12PM -1 points [-]

Not necessarily. By using randomness you can often get more work done with fewer resources, at the cost of increased noise. This is a trade-off that an AI system should also make.
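One familiar illustration of this trade-off (my example, not twanvl's) is Monte Carlo estimation: a randomized estimator spends fewer evaluations than an exhaustive one, but its answer is noisy, and buying down the noise costs samples. A minimal sketch in Python:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling random points in the unit square.

    Cost grows linearly in n_samples, but the standard error of the
    estimate shrinks only as 1/sqrt(n_samples): cheap answers are
    noisy, and precise answers are expensive.
    """
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

cheap = estimate_pi(1_000)       # fast, but noisy
costly = estimate_pi(1_000_000)  # slower, much closer to pi
```

The point being debated is whether an AI should ever accept that noise; the code only shows that the resources-for-noise exchange is real and quantifiable.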

In response to comment by twanvl on Noisy Reasoners
Comment author: PatSwanson 13 December 2012 09:53:29PM 0 points [-]

Wouldn't increasing noise levels in the decision-making processes of a Friendly AI decrease the Friendliness of that AI?

I think that ought to take this approach to reducing resource consumption off the table.

Comment author: PatSwanson 03 December 2012 09:49:08PM *  3 points [-]

Hi!

I'm 29, and I'm a programmer living in Chicago. I just finished up my MS in Computer Science. I've been a reader of Less Wrong since it was Overcoming Bias, but I never got around to posting any comments.

I've been rationality-minded since I was a little kid, criticizing the plots and character actions of the stories I read. I was raised Catholic and sent to Sunday school, but it didn't take, and eventually my parents relented. Once I went away to college and acquired a real internet connection, I spent a lot of time reading rationality-related blogs and websites. It's been a while, but I'd bet it was through one of those sites that I found Less Wrong.