Comment author: Eugine_Nier 04 November 2013 04:12:22AM 14 points [-]

One problem is that if we, say, start admiring people for acting in "more utilitarian" ways, what we may actually be selecting for is psychopathy.

Comment author: wiresnips 04 November 2013 06:15:33AM 7 points [-]

Agreed. Squicky dilemmas designed to showcase utilitarianism are not generally found in real life (as far as I know). And a human probably couldn't be trusted to make a sound judgement call even if one were found. Running on untrusted hardware and such.

Ah, and this is the point of the quote. Oh, I like that.

Comment author: Eugine_Nier 02 November 2013 06:12:33AM 10 points [-]

Utilitarianism is not in our nature. Show me a man who would hold a child’s face in the fire to end malaria, and I will show you a man who would hold a child’s face in the fire and entirely forget he was originally planning to end malaria.

James A. Donald

Comment author: wiresnips 03 November 2013 07:05:40PM 10 points [-]

Utilitarianism isn't a description of human moral processing, it's a proposal for how to improve it.

Comment author: DanielLC 05 October 2013 09:57:43PM 4 points [-]

A good effort doesn't result in valuable software, but it could result in you learning to program better, increasing your human capital.

Comment author: wiresnips 05 October 2013 10:38:09PM 15 points [-]

That's not necessarily false, but it's a dangerous thing to say to yourself. Mostly when I find myself thinking it, I've just wasted a great deal of time, and I'm trying to convince myself that it wasn't really wasted. It's easy to tell myself, hard to verify, and more pleasant than thinking my time-investment was for nothing.

Comment author: D_Malik 10 May 2013 12:01:01PM *  26 points [-]

To encourage yourself to do some massive, granular task:

  • Upon completion of each granule, give yourself a reward with some probability.

  • A reward is a small piece of food or a sip of a drink, etc.

  • Never eat or drink anything except as a reward for working on the task.

This really works extremely well for me; I have been doing this for about 2 months, at first only with Anki reviews and more recently for several other things. The feeling is very similar to addictions like video games or entertaining websites; I often think "I should probably go do X, but let me instead do just one more Anki card" and a half-hour later I realize I still haven't done X.

More things:

  • Make the rewards unlikely and small so that you stay constantly hungry. Bonus: caloric restriction.

  • Create a timed reminder, say half-hourly, to do just a few granules of the task. This encourages episodes of the "just one more" effect.

  • Put reinforcers within arm's reach, both temporally (make granules easy and quick, so that hunger feels like an urge to do the task rather than an urge to cheat the system) and spatially (so that you are constantly reminded of your hunger and tempted to do the task).

I repeat: this works extremely well for me and I strongly encourage other people to try it. More details here.

Here is a graph showing the number of Anki reviews I've done every month for the past year, as an example of the results this method can produce.
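The scheme above amounts to a variable-ratio reinforcement schedule: reward each completed granule with some fixed probability rather than every time. A minimal sketch of one session, in Python (the function name, the 20% reward probability, and the 50-card session length are all illustrative, not from the original comment):

```python
import random

def maybe_reward(p=0.2, rng=random.random):
    """Return True with probability p; call once per completed granule.

    Variable-ratio schedule: rewards arrive unpredictably, which is
    what makes the habit feel compulsive ("just one more card")
    rather than transactional.
    """
    return rng() < p

# Example session loop: 50 Anki cards, small reward with probability 0.2.
completed = 0
rewards = 0
for _ in range(50):
    completed += 1
    if maybe_reward(0.2):
        rewards += 1  # take one small bite of food or sip of drink now
```

Keeping p low serves both goals mentioned above: you stay hungry between rewards, and total intake stays small.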

Comment author: wiresnips 19 May 2013 08:01:46PM 1 point [-]

This is transformative. Thank you.

Comment author: Jayson_Virissimo 03 April 2013 07:32:21AM 27 points [-]

If knowledge can create problems, it is not through ignorance we can solve them.

-- Isaac Asimov

Comment author: wiresnips 09 April 2013 05:39:21PM -2 points [-]

This may not be strictly true. Consider the basilisk.

Comment author: Will_Newsome 10 February 2012 11:31:20AM 2 points [-]

Is that true or is Gombrich just handling a needle convincingly?

Comment author: wiresnips 10 February 2012 10:55:20PM 1 point [-]

Either both are true, or neither.

Comment author: [deleted] 01 May 2011 08:13:41PM 2 points [-]

As I understand it from reading the sequences, Eliezer's position roughly boils down to "most AI researchers are dilettantes and no danger to anyone at the moment. Anyone capable of solving the problems in AI at the moment will have to be bright enough, and gain enough insights from their work, that they'll probably have to solve Friendliness as part of it - or at least be competent enough that if SIAI shout loud enough about Friendliness they'll listen. The problem comes if Friendliness isn't solved before the point where it becomes possible to build an AI without any special insight, just by throwing computing power at it along with a load of out-of-the-box software and getting 'lucky'."

In other words, if you're convinced by the argument that Friendly AI is the most important problem facing us, the thing to do is work on Friendly AI rather than prevent other people working on unFriendly AI. Find an area of the problem no-one else is working on, and do that. That might sound hard, but it's infinitely more productive than finding the baddies and shooting at them.

In response to comment by [deleted] on Sarah Connor and Existential Risk
Comment author: wiresnips 01 May 2011 08:25:46PM 1 point [-]

Anyone smart enough to be dangerous is smart enough to be safe? I'm skeptical: folksy wisdom tells me that being smart doesn't protect you from being stupid.

But in general, yes: the threat becomes more and more tangible as the barrier to AI gets lower and the number of players increases. At the moment, it seems pretty intangible, but I haven't actually gone out and counted dangerously smart AI researchers, so I might be surprised by how many there are.

To be clear, I was NOT trying to imply that we should actually right now form the Turing Police.

Comment author: [deleted] 01 May 2011 07:31:47PM *  5 points [-]

Given that (redacted) It is a very, very, VERY bad idea to start talking about (redacted), and I would suggest you should probably delete this post to avoid encouraging such behaviour.

EDIT: Original post has now been edited, and so I've done likewise here. I ask anyone coming along now to accept that neither the original post nor the original version of this comment contained anything helpful to anyone, and that I was not suggesting censorship of ideas, but caution about talking about hypotheticals that others might not see as such.

In response to comment by [deleted] on Sarah Connor and Existential Risk
Comment author: wiresnips 01 May 2011 08:04:37PM 3 points [-]

Edited, in the interest of caution.

However, this is exactly the issue I'm trying to discuss. It looks as though, if we take the threat of uncaring AI seriously, this is a real problem and it demands a real solution. The only solution that I can see is morally abhorrent, and I'm trying to open a discussion looking for a better one. Any suggestions on how to do this would be appreciated.

Comment author: Giles 01 May 2011 03:28:52PM -1 points [-]

I think that would just yield your revealed preference function. As I said, trying to optimize that is like a falling apple trying to optimize "falling". It doesn't describe what you want to do; it describes what you're going to do next no matter what.

Comment author: wiresnips 01 May 2011 05:55:30PM 4 points [-]

If we accept that what someone 'wants' can be distinct from their behaviour, then "what do I want?" and "what will I do?" are two different questions (unless you're perfectly rational). Presumably, a FAI scanning a brain could answer either question.

Comment author: Gray 04 April 2011 03:51:04PM 0 points [-]

I'm thinking either "lazy" or "irresponsible".

Comment author: wiresnips 04 April 2011 05:35:44PM 0 points [-]

The question of which is kind of still there, though. Procrastination is lazy, but getting drunk at work is irresponsible.
