Comments

I suppose it depends on what kinds of decisions you're talking about making (e.g. keeping AIs from destroying humanity). I was thinking along the lines of day-to-day decision making, in which people generally manage to survive for decades in spite of ridiculously flawed beliefs -- so it seems there are lots of situations where performance doesn't appear to degrade nearly so sharply.

At any rate, I guess I'm with ciphergoth: the more interesting question is why 99% accurate is "maybe maybe" okay, but 95% is "hell no". Where do those numbers come from?

On the other hand, people also often seem to think that a propensity to be nice to others entitles them to an overly thin skin.

Assuming that any action anywhere short of optimal yields zero value, sure. In practice?

Regarding the last point, at the very least SIAI would be better off not advertising that several projects are partially funded to the tune of $5 out of thousands. It doesn't exactly motivate one to open one's own wallet for a similarly small donation.

If you need to keep contributors updated on the extent to which their projects have received funding, perhaps do so privately by email on request?

In this model, people aren't just seeking status; they're (also? instead?) seeking a state of affairs that allows them to believe they have status.

It seems like most situations that this theory covers are already explained by either:
(a) people seek status not only in the context of society at large but also in the context of small groups, or
(b) for the cases where no one else knows, ego: people seek to feel good about themselves (including believing that they are smart).

Perhaps the (b) cases are explained better by the "seeking plausible belief in own status" model, but I'm not sure that's clear, at least from what's been written so far.