Comment author: Nick_Tarleton 04 November 2013 01:42:46AM 11 points [-]

Have Eliezer's views (or anyone else's who was involved) on the Anthropic Trilemma changed since that discussion in 2009?

Comment author: Eliezer_Yudkowsky 20 December 2012 12:11:57AM 17 points [-]

1) In the long run, for CFAR to succeed, it has to be supported by a CFAR donor base that doesn't funge against SIAI money. I expect/hope that CFAR will have a substantially larger budget in the long run than SIAI. In the long run, then, marginal x-risk minimizers should be donating to SIAI.

2) But since CFAR is at a very young and very vital stage in its development and has very little funding, it needs money right now. And CFAR really really needs to succeed for SIAI to be viable in the long-term.

So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...

...but...

...SIAI has previously supported CFAR, is probably going to make a loan to CFAR in the future, and therefore it doesn't matter as much exactly which organization you give to right now, except that if one maxes out its matching funds you probably want to donate to the other until it also maxes...

...and...

...even the judgment about exactly where a marginal dollar spent is more valuable is, necessarily, extremely uncertain to me. My own judgment favors CFAR at the current margins, but it's a very tough decision. Obviously! SIAI has given money to CFAR. If it had been obvious that this amount should've been shifted in direction A or direction B to minimize x-risk, we would've necessarily been organizationally irrational, or organizationally selfish, about the exact amount. SIAI has been giving CFAR amounts on the lower side of our error bounds because of the hope (uncertainty) that future-CFAR will prove effective at fundraising. Which rationally implies, and does actually imply, that an added dollar of marginal spending is more valuable at CFAR (in my estimates).

The upshot is that you should donate to whichever organization gets you more excited, like Luke said. SIAI is donating/loaning round-number amounts to CFAR, so where you donate $2K does change marginal spending at both organizations - we're not going to be exactly re-fine-tuning the dollar amounts flowing from SIAI to CFAR based on donations of that magnitude. It's a genuine decision on your part, and has a genuine effect. But from my own standpoint, "flip a coin to decide which one" is pretty close to my own current stance. For this to be false would imply that SIAI and I had a substantive x-risk-estimate disagreement which resulted in too much or too little funding (from my perspective) flowing to CFAR. Which is not the case, except insofar as we've been giving too little to CFAR in the uncertain hope that it can scale up fundraising faster than SIAI later. Taking this uncertainty into account, the margins balance. Leaving it out, a marginal absolute dollar of spending at CFAR does more good (somewhat) (in my estimation).

Comment author: Nick_Tarleton 07 June 2013 09:04:45PM 2 points [-]

So my guess is that a given dollar is probably more valuable at CFAR right this instant, and we hope this changes very soon (due to CFAR having its own support base)...

an added dollar of marginal spending is more valuable at CFAR (in my estimates).

Is this still your view?

Comment author: Eliezer_Yudkowsky 11 January 2013 03:38:25PM 8 points [-]

I sometimes get the impression that I am the only person who reads MoR who actually thinks MoR!Hermione is more awesome than MoR!Quirrell. Of course I have access to at least some info others don't, but still...

Comment author: Nick_Tarleton 11 January 2013 08:18:30PM 1 point [-]

I didn't, and still don't... but now I'm a little bit disturbed that I don't, and want to look a lot more closely at Hermione for ways she's awesome.

In response to Morality is Awesome
Comment author: PhilGoetz 10 January 2013 11:08:18PM *  21 points [-]

Whether to use "awesome" instead of "virtuous" is the question, not the answer. This is the question asked by Nietzsche in Beyond Good and Evil. If you've gotten to the point where you're set on using "awesome" instead of "good", you've already chosen your answer to most of the difficult questions.

The challenge to awesome theory is the same one it has been for 70 years: Posit a world in which Hitler conquered the world instead of shooting himself in his bunker. Explain how that Hitler was not awesome. Don't look at his outcomes and conclude they were not awesome because lots of innocent people died. Awesome doesn't care how many innocent people died. They were not awesome. They were pathetic, which is the opposite of awesome. Awesome means you build a space program to send a rocket to the moon instead of feeding the hungry. Awesome history is the stuff that happened that people will actually watch on the History Channel. Which is Hitler, Napoleon, and the Apollo program.

If you don't think Hitler was awesome, odds are very good that you are trying to smuggle in virtues and good-old-fashioned good, buried under an extra layer of obfuscation, by saying "I don't know exactly what awesome is, but someone that evil can't be awesome." Hitler was evil, not bad.

You think you can just redefine words, but you can't,

That's exactly right. Including "awesome". Tornadoes, hurricanes, earthquakes, and floods are awesome. A God who will squish you like a bug if you dare not to worship him is awesome, awe-full, and awful.

If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

That is, "awesome" already refers to the same things "good" is supposed to refer to.

Awesome already refers to the same things good is supposed to refer to, for those people who have already decided to use "awesome" instead of "good". The "Is this right?" question that invokes virtues and rules is not a confused notion of what is awesome. It's a different, incompatible view of what we "ought" to do.

Comment author: Nick_Tarleton 11 January 2013 08:13:22PM *  3 points [-]

Upvoted; whatever its relationship to what the OP actually meant, this is good.

Saying that it's good because it's vague, because it's harder to screw up when you don't know what you're talking about, is contrary to the spirit of LessWrong.

Using vague terms to remind yourself of your confusion and to avoid privileging hypotheses doesn't seem so bad, as long as you remember that they're vague.

Comment author: Nick_Tarleton 05 January 2013 01:19:13AM 19 points [-]

I kept expecting someone to object that "this Turing machine never halts" doesn't count as a prediction, since you can never have observed it to run forever.

Comment author: Nick_Tarleton 03 January 2013 06:53:34AM *  6 points [-]

More sympathetically, people might (well, I'm sure some people do) see avoiding stereotype-based jokes as a step towards there being things you can't say, and prefer some additional risk of saying harmful things to moving in that direction (possibly down a slippery slope).

Comment author: TimS 02 January 2013 03:58:42AM 15 points [-]

This post about jokes and attitudes that provide cover for bad social actors really caught my interest. But the blogger's position is one that is often met with hostility round these parts, for reasons that are unclear to me.

The point of the blog post is that jokes about certain gender and relationship stereotypes (men are idiots, women are the ball-and-chain) allow actual abusers to slide by under the radar by asserting that they are joking whenever they are publicly called out on inappropriate behavior. It really resonated with me - and to be frank, it seems aimed at the parts of social engineering that I think LW is worst at.

Comment author: Nick_Tarleton 02 January 2013 06:47:06AM *  9 points [-]

But the blogger's position is one that is often met with hostility round these parts, for reasons that are unclear to me.

I think some of it is a defensive reaction to perceived possible vaguely-defined moral demands/condemnation. Here's a long comment I wrote about that in a different context.

Also simple contrarianism, though that's not much of an explanation absent a theory of why this is the thing people are contrarian against.

the parts of social engineering that I think LW is worst at.

What are those?

Comment author: Peterdjones 01 January 2013 08:21:57PM 4 points [-]

The trouble I have with models that postulate stupidity,

But I am not convinced that your examples actually do that.

Here the rube is the manager's bosses: why are they so stupid as to think that mismanagement is evidence of superior management qualities? Why haven't these idiots been sacked?

The idiots are where they are because they have Won -- they have been playing the games of Climb The Corporate Ladder and Look After Number One But Don't Make It Obvious quite successfully. It's a lesswrongian prejudice that the only game anyone would want to play is Highly Competent But Criminally Underappreciated Backroom Boffin. They don't get sacked because their superiors are playing the same game according to the same rules.

You could object that companies where dick-swinging is appreciated more than achieving goals and targets won't have a long-term future. Well, if there is someone in the chain who is playing Build A Company With A Lasting Future, then they're being stupid. But rationality is achieving your goals. They've achieved theirs.

Comment author: Nick_Tarleton 01 January 2013 11:41:55PM *  2 points [-]

It's a lesswrongian prejudice that the only game anyone would want to play is Highly Competent But Criminally Underappreciated Backroom Boffin.

Yes. The general case of this prejudice is probably something like 'behavior morally should be evaluated according to its stated far-mode purpose; other purposes are possible and important, but dirty'. Of course, this has the large upside of making us seriously evaluate things according to their stated purpose at all....

Comment author: Patrick 01 January 2013 08:49:25AM 0 points [-]

I just mean the latter. I think explanations involving pandering can work. The trouble I have with models that postulate stupidity is that they need people to be stupid in a convenient direction. Stupidity is a much larger target than intelligence, after all. I think explanations involving pandering work if you can explain (like you did with the affect heuristic) why these tricks will work on people.

Out of curiosity, what are the connotations of the word "rube" that make you suspicious?

Comment author: Nick_Tarleton 01 January 2013 09:05:33AM *  2 points [-]

Out of curiosity, what are the connotations of the word "rube" that make you suspicious?

Low status, contemptibility, etc. I expect making status hierarchies salient to make people less rational (hence fully generic suspicion), and I had the specific hypothesis that you might see people using 'signaling' models as judging others as contemptible and be offended by this.

Relatedly, I dislike calling the behavior in question "pandering", since I expect using condemnatory terms for phenomena to make them aversive to look at closely, and to lead to bias in attribution (against seeing them in oneself/'good' people and towards seeing them in 'bad' people, as well as towards seeing people who unambiguously exhibit them as 'bad').

Comment author: Nick_Tarleton 01 January 2013 08:32:43AM *  11 points [-]

I have a hard time telling whether you're trying to say that 'signaling' models are inaccurate, or just that calling them 'signaling' is misleading. I agree with the latter insofar as 'signaling' means this specific economic model, because the behaviors in question aren't directed at economically rational agents. I also can't tell if you dislike models that postulate stupidity (the strong status connotations of the word "rube" make me suspicious).

If you mean the former: I think you greatly overestimate median rationality in your take on the manager and butcher examples. All positive traits get conflated with each other by default. People can and do override their affective impressions with explicit reasoning, but more often than not they don't, especially when evaluating performance is difficult — and it's almost always more difficult than evaluating "does this person look like a winner?".

I also used to think that simple non-costly signaling couldn't possibly stably work, but experience (often with my own irrationality) changed my mind. This is less confusing if I think of it as social-primate (rather than general-intelligence) behavior; liking things/people other people like is socially useful. (This would likely be significant in the manager example in real life, e.g., I'll look better to my superiors if my evaluations of my subordinates are similar to theirs.)

The quality proposed was "status", but outrage is cheap. Any fool can be outraged at a blog post mentioning rape.

Now, status signaling is overused as an explanation. If the "HOW DARE YOU" comments are signaling (or 'signaling') anything, the obvious thing is alignment with the perceived-as-socially-powerful (implicit-Schelling-point-)faction condemning Robin, not status.
