In response to Ugh fields
Comment author: gwillen 14 April 2010 05:58:24AM 17 points

I suffer from this severely and pervasively, and was already aware of that before reading this post. So I just wanted to comment that your post fits spot-on with my experience. I tend to develop ugh fields around projects at work when I get stuck on them for a while: I get emails from people asking when they will be done, then start to fear getting email about them at all, then fear even thinking about them, and so on.

I have also gone through periods of 'ugh'ness centered on my voicemail box and my email inbox, each time when I knew or suspected they contained items likely to make me feel shittier about myself for whatever reason (e.g. reminding me about something I was in the process of failing to get done).

In response to comment by gwillen on Ugh fields
Comment author: Jess_Riedel 14 April 2010 02:16:52PM *  1 point

I suffer from exactly the same thing, but I don't think this is what Roko is worrying about, is it? He seems to worry about "ugh fields" around important life decisions (or "serious personal problems"), whereas you and I experience them around ordinary tasks (e.g. responding to emails, tackling stuck work). The latter may be important tasks -- making this an important motivation/akrasia/efficiency issue -- but it's not a catastrophic/black-swan type risk.

For example, if one had an ugh field around their own death and this prevented them from considering cryonics, this would be a catastrophic/black-swan type risk. Personally, I rather enjoy thinking about these types of major life decisions, but I could see how others might not.

Comment author: Toby_Ord 09 April 2010 01:29:34PM 8 points

We can't use the universal prior in practice unless physics contains harnessable non-recursive processes. However, that is exactly the situation in which the universal prior doesn't always work. Thus, one source of the 'magic' is that it allows us access to higher levels of computation than the phenomena we are predicting (and certainty that we have this access).

Also, the constants involved could be terrible and there are no guarantees about this (not even probabilistic ones). It is nice to reach some ratio in the limit, but if your first Graham's number of guesses are bad, then that is very bad for (almost) all purposes.
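For concreteness, here is the standard guarantee behind the "ratio in the limit" claim, stated as a sketch under the usual assumptions (M is the universal semimeasure, mu is the true computable environment, and K(mu) is its prefix Kolmogorov complexity; see e.g. Hutter's Universal Artificial Intelligence for the precise statement):

```latex
% Solomonoff's prediction error bound (a sketch in standard notation, not
% Toby's own): the total expected squared prediction error of the universal
% semimeasure M on any computable environment \mu is finite, but the bound
% is proportional to the Kolmogorov complexity K(\mu) of the environment.
\[
  \sum_{t=1}^{\infty} E_{\mu}\!\left[ \bigl( M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t}) \bigr)^{2} \right]
  \;\le\; \frac{\ln 2}{2}\, K(\mu)
\]
% Finite total error gives convergence of M to \mu with \mu-probability 1,
% but says nothing about WHEN the errors occur: the number of predictions
% off by more than \epsilon can be of order K(\mu)/\epsilon^2, and K(\mu)
% is both uncomputable and potentially astronomically large.
```

This is exactly why reaching the ratio only in the limit is cold comfort: the bound permits the entire error budget to be spent on your earliest guesses.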

Comment author: Jess_Riedel 12 April 2010 06:07:53PM 0 points

Could you suggest a source for further reading on this?

Comment author: olimay 04 April 2010 01:30:14AM 1 point

Discussion, mostly ad hoc. On the occasions when the discussion has been more focused, it was assumed participants had read certain LW-related things.

Comment author: Jess_Riedel 12 April 2010 06:02:59PM *  4 points

I attended a meetup in Santa Barbara which I found largely to be a waste of time. The problem there--and I think, frankly, with LW in general--is that there just aren't that many of us with something insightful to say. (I certainly don't have much.) While it's great, I guess, that the participants acknowledge the importance of some of the ideas championed by Yudkowsky and Hanson, most of us don't have anything to add. Some of us may be experts in other fields, but not in rationality.

Here's the perfect analogy: it's like listening to a bunch of college guys who've never played sports at a high level discuss a professional game. The discussion isn't wildly wrong, but they're just regurgitating what they hear on ESPN.

Do you feel like this described the NYC meetup at all? Do you think the meetup was worthwhile?

Comment author: Jess_Riedel 30 March 2010 05:57:18PM 1 point

What happens at the meetups?

Comment author: CronoDAS 25 September 2009 03:30:16AM *  17 points

When I contribute to charity, it's usually to avoid feeling guilty rather than to feel good as such... imagining myself as the guy who doesn't rescue a drowning swimmer because he doesn't want to get his suit wet puts me in a state I don't want to be in.

These charities can save someone's life for about $1,000. If you spend $1,000 on anything else, you've as good as sentenced someone to death. I find this really disturbing, and thinking about it makes me think about doing crazy things, such as spending my $20,000 savings on a ten-year term life insurance policy worth $10,000,000 and then killing myself and leaving the money to charity. At $1,000 a life, that's ten thousand lives saved. I suspect that most people who literally give their lives for others don't get that kind of return on investment.

Comment author: Jess_Riedel 10 February 2010 03:13:16PM *  3 points

In most books, insurance fraud is morally equivalent to stealing. A deontological moral philosophy might commit you to donating all your disposable income to GiveWell-certified charities while not permitting you to kill yourself for the insurance money. But, yeah, utilitarians will have a hard time explaining why they don't do this.

In response to Normal Cryonics
Comment author: byrnema 21 January 2010 04:33:01PM *  10 points

Curiously -- not indignantly -- how should I interpret your statement that all but a handful of parents are "lousy"? Does it mean that your values are different from theirs? That is usually what is meant when one person calls another "lousy".

Your explicit argument seems to be that they're selfish if they're purchasing fleeting entertainment when they could invest that money in cryonics for their children. However, if they don't buy cryonics for themselves, either, it seems like cryonics is something they don't value, not that they're too selfish to buy it for their children.

In response to comment by byrnema on Normal Cryonics
Comment author: Jess_Riedel 22 January 2010 03:14:35PM 1 point

Exactly. If a parent doesn't think cryonics makes sense, then they wouldn't get it for their kids anyway. Eliezer's statement can only criticize parents who get cryonics for themselves but not for their children. This is a small group, and I assume it is not the one he was targeting.

Comment author: alyssavance 13 November 2009 01:40:44AM *  3 points

Yes, it is. How could examples of X not be evidence that "the norm is X"? It may not be sufficiently strong evidence, but if this one example is not sufficiently damning, there are certainly plenty more.

Comment author: Jess_Riedel 13 November 2009 03:18:12PM 1 point

Yes, of course it is weak evidence. But I can come up with a dozen examples off the top of my head where powerful organizations did realize important things, so your examples are very weak evidence that this behavior is the norm. So weak that it can be regarded as negligible.

Comment author: alyssavance 13 November 2009 12:20:19AM 7 points

"If it is, it would be surprising if nobody in the powerful organizations I'm talking about realizes it, especially if a few breakthroughs are made public and as we get closer to AGI."

People in powerful organizations fail to realize important things all the time. To take a case study: in 2001, Warren Buffett and Ted Turner just happened to notice that there were hundreds of nukes in Russia, sitting around in lightly guarded or unguarded facilities, which anyone with a few AK-47s and a truck could have just walked in and stolen. They had to start their own organization, the Nuclear Threat Initiative, to take care of the problem, because no one else was doing anything.

Comment author: Jess_Riedel 13 November 2009 01:01:04AM *  0 points

The existence of historical examples in which people in powerful organizations failed to realize important things is not evidence that such failure is the norm, or that it can be counted on with strong confidence.

Comment author: CronoDAS 12 November 2009 07:22:45AM 0 points

I'd guess that legalizing gay marriage would be pretty low-hanging fruit, but I don't know how politically possible it is.

Comment author: Jess_Riedel 13 November 2009 12:56:36AM *  4 points

It's hard to think of a policy which would have a smaller impact on a smaller fraction of the wealthiest population on earth. And it faces extremely dedicated opposition.

Comment author: SilasBarta 10 August 2009 10:05:35PM 3 points

I agree with your point here -- strongly. But I also think you're being unfair to Caplan. While his position is (I now realize) ridiculous, the example you gave is not.

In his "gun to the head" analogy, Caplan suggests that OCD isn't really a disease! After all, if we put a gun to the head of someone doing (say) repetitive hand washing, we could convince them to stop. Instead, Caplan thinks it's better to just say that the person just really likes doing those repetitive behaviors.

His position would not be that they like doing those behaviors per se, but rather that they have a very strange preference that makes those behaviors seem optimal. Caplan would probably call it "a preference for an unusually high level of certainty about something". For example, someone with OCD needs to perceive 1 million:1 odds that their hands are now clean, while normal people need only 100:1 odds.

So the preference is for cleanliness-certainty, not for the act of hand-washing. Getting that higher level of certainty requires washing their hands much more often.

Likewise, an OCD victim who has to lock their door 10 times before leaving has an unusually high preference for "certainty that the door is locked", not for locking doors.
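To make "much more often" concrete, here is a toy calculation (my own illustration, not Caplan's; the likelihood ratio of 10 per wash is an assumed number): starting from even odds, the two certainty thresholds above translate into wash counts as follows.

```latex
% Toy model (assumed numbers, purely illustrative): each wash multiplies
% the odds that one's hands are clean by a likelihood ratio L = 10.
% Reaching a target odds threshold T from even odds takes log_L(T) washes.
\[
  n_{\text{normal}} = \log_{10} 10^{2} = 2 \text{ washes}, \qquad
  n_{\text{OCD}} = \log_{10} 10^{6} = 6 \text{ washes}
\]
```

Under these assumed numbers, the higher certainty threshold multiplies the washing several-fold on every single occasion, which is how a "preference for certainty" cashes out as compulsive-looking behavior.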

Again, I don't agree with this position, but its handling of OCD isn't that stupid.

Comment author: Jess_Riedel 11 August 2009 06:03:37AM *  2 points

I still think that Caplan's position is dumb. It's not so much a question of whether his explanation fits the data (although I think Psychohistorian has shown that in this case it does not); it's that it's just plain weird to characterize the obsessive behavior of people with OCD as a "preference". I mean, suppose you were able to modify the explanation you've offered (that OCD people just have high preferences for certainty) in a way that escapes Psychohistorian's criticism. Suppose, for instance, you simply say "OCD people just have a strong desire for things to happen a prime number of times". This would still be silly! OCD people clearly have a minor defect in their brains, and redefining "preference" won't change that.

Ultimately, this might just be a matter of semantics. Caplan may be using "preference" to mean "a contrived utility function which happens to fit the data", which can always be constructed so long as the behavior isn't contradictory. But this really isn't helpful. After all, I can say that the willow's "preference" is to lean in the direction of the wind, and this will correctly describe the willow's behavior. But calling it a preference is silly.

Thanks for the comment. This discussion has helped to clarify my thinking.
