Comment author: rpmcruz 30 January 2016 04:55:58PM *  2 points [-]

I am new here. But what about just disabling downvoting? Good comments will be voted up, bad comments will not be voted on at all and will rot at the bottom. Why remove them?

Possibly, you could have a "report" button to ask a moderator to review a very offensive comment.

Comment author: RichardKennaway 30 January 2016 09:01:07PM 2 points [-]

Possibly, you could have a "report" button to ask a moderator to review a very offensive comment.

I believe there used to be one, but it went away some years ago. I don't know why. Maybe it was being abused, or was found to just not be useful.

Comment author: IlyaShpitser 30 January 2016 05:30:33PM 0 points [-]

I think comparing Harvard to a research group is a type error, though. Research groups don't typically do this. I am not going to defend Unis shaking alums down for money, especially given what they do with it.

Comment author: RichardKennaway 30 January 2016 08:59:09PM 0 points [-]

Research groups don't typically do this.

In my experience, research groups exist inside universities or a few corporations like Google. The senior members are employed and paid for by the institution, and only the postgrads, postdocs, and equipment beyond basic infrastructure are funded by research grants. None of them fly "in orbit" by themselves but only as part of a larger entity. Where should an independent research group like MIRI seek permanent funding?

Comment author: Fluttershy 28 January 2016 10:55:30AM 0 points [-]

Oops. I've tried to clarify that he's only interested in FAI research, not AI research on the whole.

Comment author: RichardKennaway 28 January 2016 11:41:56AM 1 point [-]

FAI is only a problem because of AI. The imminence of the problem depends on where AI is now and how rapidly it is progressing. To know these things, one must know how AI (real, current and past AI, not future, hypothetical AI, still less speculative, magical AI) is done, and to know this in technical terms, not fluff.

I don't know how much your friend knows already, but perhaps a crash course in Russell and Norvig, plus technical papers on developments since then (e.g. Deep Learning), would be appropriate.

Comment author: Fluttershy 28 January 2016 10:07:24AM *  2 points [-]

I'm trying to help a dear friend who would like to work on FAI research, to overcome a strong fear that arises when thinking about unfavorable outcomes involving AI. Thinking about either the possibility that he'll die, or the possibility that an x-risk like UFAI will wipe us out, tends to strongly trigger him, leaving him depressed, scared, and sad. Just reading the recent LW article about how a computer beat a professional Go player triggered him quite strongly.

I've suggested trying to desensitize him via gradual exposure; the approach would be similar to the way in which people who are afraid of snakes can lose their fear of snakes by handling rope (which looks like a snake) until handling rope is no longer scary, and then looking at pictures of snakes until such pictures are no longer scary, and then finally handling a snake when they are ready. However, we've been struggling to think of what a sufficiently easy and non-scary first step might be for my friend; everything I've come up with as a first step akin to handling rope has been too scary for him to want to attempt so far.

I don't think that I'll even be able to convince my friend that desensitization training will be worth it at all--he's afraid that the training might trigger him, and leave him in a depression too deep for him to climb out of. At the same time, he's so incredibly nice, and he really wants to help with FAI research, and maybe even work for MIRI in the "unlikely" (according to him) event that he is able to overcome his fears. Are there reasonable alternatives to, say, desensitization therapy? Are there any really easy and non-scary first steps he might be okay with trying if he can be convinced to try desensitization therapy? Is there any other advice that might be helpful to him?

Comment author: RichardKennaway 28 January 2016 11:27:46AM 2 points [-]

He sounds like someone with a phobia of fire wanting to be a fireman. Why does he want to work on FAI? Would not going anywhere near the subject work for him instead?

Comment author: Clarity 27 January 2016 12:20:08PM 0 points [-]

How true is the proverb: 'To break a habit you must make a habit'

Comment author: RichardKennaway 27 January 2016 01:03:00PM 0 points [-]

Was that "How true?" or "How true!"?

I think it is true, with the proviso that the habit to make can be the habit of noticing when the old habit is about to happen and not letting it.

Comment author: username2 25 January 2016 07:36:31PM 0 points [-]

Given that Eugine very likely will be able to get around an IP ban, I wonder if it is legally possible for MIRI to take out a restraining order that prevents him from posting to Less Wrong? This will of course only be possible if we can discover his real identity.

Comment author: RichardKennaway 25 January 2016 07:58:33PM 1 point [-]

That would be an absurd overreaction. I can't see the law taking the matter seriously, even if anyone knew "Eugine's" real identity.

Comment author: lifelonglearner 25 January 2016 02:01:35PM 0 points [-]

That's definitely true. The planning fallacy is a huge issue, and I don't address it here when I talk about plans to reach your goals.

I think finding the motivation to get things done is also a central part of the "achieving goals" target.

I'd like to try and address both of those in some form or another. Do you feel the essay would be strengthened if I addressed them in passing, or if I devoted smaller, separate pieces to those two topics?

Comment author: RichardKennaway 25 January 2016 04:30:41PM 1 point [-]

I think there's an overemphasis on planning in more and more detail. Some things are opaque at the point of making the plan. For example, some parts of a plan may require you to do things you don't know how to do. That breaks down into (1) find out how, and (2) do it. But you don't know what you're going to find, or what acting on what you find will look like. (2) is opaque at the planning stage, and may not even exist if the answer to (1) suggests a different way of going about the parent goal.

Also, things can go wrong during execution. No complicated car repair ever goes exactly as the Haynes manual says, and for all the convenience of satnavs, you sometimes have to notice that it's sending you along a stupid route.

I recently had the goal of taking a 100,000-line piece of C++ software I wrote and getting it to be callable from a web page, returning results to be embedded into the same web page, and running on a web server it had never been compiled for before, starting from a position of knowing nothing about how to do dynamic web pages. It got done, but a plan would have looked like "1. Find a suitable technology for doing dynamic web pages. 2. Use it."

Comment author: bogus 25 January 2016 02:48:52PM 1 point [-]

No, it means he's saying that all the examples I gave are of people who aren't actually any good at what they do and are interesting only because for a black person to be able to attempt those tasks at all is remarkable.

In all fairness, this describes a lot of lists of "achievements of minority X in field Y". To some extent, it's a natural result of looking for "achievements" from a tiny minority (e.g. Turks or whatever) in a field where they don't really have a comparative advantage.

Comment author: RichardKennaway 25 January 2016 03:58:57PM 0 points [-]

Eugine is saying not that "they don't really have a comparative advantage", but that they have a comparative disadvantage so strong that any purported great achievements should be dismissed as fakery, exaggeration, or, if it seems that one of them really has achieved something, "exceptions". In Eugine's view, they're still nothing more than performing dogs; they've just managed the miracle, despite their intrinsic inferiority, of doing it as well as the best real people.

Comment author: gjm 25 January 2016 02:29:18PM *  1 point [-]

No, it means he's saying that all the examples I gave are of people who aren't actually any good at what they do and are interesting only because for a black person to be able to attempt those tasks at all is remarkable. The stupidity and obnoxiousness of that doesn't depend on a comparison with animals.

In any case, one reason why people use metaphors is precisely the fact that the literal sense of the metaphor produces an effect. You call someone a "dancing bear", and your readers are going to get a mental image of a dancing bear and (in so far as they accept what you say) associate it with the person you're talking about. You don't get to do that and claim you're not comparing the person to an animal.

[EDITED to fix a trivial typo.]

Comment author: RichardKennaway 25 January 2016 03:49:16PM 0 points [-]

BTW, the original, sourceable quotation uses the image of "a dog walking on its hind legs". Your response still applies.

Comment author: RichardKennaway 25 January 2016 01:28:23PM 0 points [-]

Full marks for the pep talk, but the prescription of "planning" is surely only part of what is needed. How would you handle the planning fallacy? I don't think "better planning" is the answer.
