Comment author: Ruby 09 June 2015 04:28:13AM 4 points

I'm surprised by this idea of treating SSC as a rationalist hub. I love Scott, Scott's blog, and Scott's writing. Still, it doesn't seem like a "rationality blog" to me. Not directly, at least. Scott is applying a good deal of epistemic rationality to his topics of interest, but the blog isn't about epistemic rationality, and even less so about practical rationality. (I would say that Brienne's and Nate's 'self-help' posts are much closer to that.) By paying attention, one might extract the rationality principles Scott is using, but they're not outlined.

There's a separate claim that while Scott's blog isn't about rationality in the same way LW is, it has attracted the same audience, and therefore can be a rationality attractor/hub. This has some legitimacy, but I still don't like it. LW has attracted a lot of people who like to debate interesting topics and ideas on the internet, with only a small fraction who are interested in going out and doing things (or just staying in, but actually changing themselves). Scott's blog, being about ideas, seems likely to also attract lots of people who simply like mental stimulation, but without a filter for those most interested in doing. I'd really like our rationality community hubs to select for those who want to take rationality seriously and implement it in their minds and actions.

On this score of selecting for (or at least being about) doing, the EA Forum is actually quite good.

Lastly, maybe I feel strong resistance to trying to open Scott's blog up because it seems like it really is his personal blog about things he wants to write about, and just because he's really successful and part of the community doesn't mean we now get to tell him to 'open it up'/'give it over', or to co-opt it for the rest of the community.

Comment author: mughat 12 December 2014 11:38:01PM 0 points

How many are coming to this event? Is there a page on meetup.com?

Comment author: Ruby 16 December 2014 03:46:21PM *  0 points

No page on meetup.com, I'm afraid.

Comment author: Ruby 27 September 2014 12:35:32PM 0 points

I'm on a bench near the Botanisk Have Butik. The entrance to the park is at the corner of Gothersgade and Øster Voldgade.

Comment author: CWG 24 June 2014 01:14:53AM 3 points

I have upvoted your post in the hope that it will contribute to your positive feelings about having written it, so that you will continue to write intelligent and thoughtful posts in future.

Comment author: Ruby 24 June 2014 01:21:47AM 2 points

You are very kind, good sir.

Do me one more favour: share a thought you have in response to something I wrote. There is still much to be said, but there has been no discussion.

Comment author: Sniffnoy 21 June 2014 08:57:34PM 4 points

Several of the links in this post point to Google redirects rather than directly to the actual website. Could you fix this please? Thank you!

Comment author: Ruby 22 June 2014 12:09:02AM 4 points

Thanks! Fixed.

Comment author: TheAncientGeek 19 June 2014 12:42:40PM *  1 point

Huh?

Comment author: Ruby 19 June 2014 01:04:50PM *  0 points

A goal I set is a state of the world I am actively trying to bring about, whereas a value is something which . . . has value to me. The things I value dictate which world states I prefer, but, whether for lack of resources or because of conflicts between values, I only pursue the world states resulting from a subset of my values.

So not everything I value ends up being a goal, and this holds even for things I value terminally. For instance, I think it is true that I terminally value being a talented artist, greatly skilled in creative expression: being so would make me happy in and of itself, but it's not a goal of mine because I can't prioritise it with the resources I have. Values like eliminating suffering and misery matter to me more, and get translated into corresponding goals to change the world via action.

I haven't seen a definition provided, but if I had to provide one for 'terminal goal' it would be this: a goal whose attainment constitutes fulfilment of a terminal value. Possessing money is rarely a terminal value, and so accruing money isn't a terminal goal, even when it is instrumental to achieving a world state desired for its own sake. Accomplishing the goal of having all the hungry people fed is the world state which lines up with the value of no suffering, hence that goal is terminal. Terminal values and terminal goals are close, but not quite the same thing.

I think it makes sense that someone might not work with terminal goals on a motivational/decision-making level, but it doesn't seem possible (or at least not likely) that someone would lack terminal values altogether, in the sense of having no states of the world which they prefer over others. [These world-state preferences might not be completely stable or consistent, but if you prefer the world be one way rather than another, that's a value.]

Comment author: TheAncientGeek 19 June 2014 12:32:32PM -2 points

There's no incoherence in defining "terminal" as "not lowest priority", which is basically what you are saying.

It's just not what the word means.

Literally, etymologically, that is not what terminal means. It means maximal, or final. A terminal illness is not an illness that is a bit more serious than some other illness.

It's not even what it usually means on LW. If Clippy's goals were terminal in your sense, they would be overridable: you would be able to talk Clippy out of paperclipping.

What you are talking about is valid; it is a thing. If you have any hierarchy of goals, there are some at the bottom, some in the middle, and some at the top. But you need to invent a new word for the middle ones, because "terminal" doesn't mean "intermediate".

Comment author: Ruby 19 June 2014 12:38:54PM 1 point

I feel like there's not much of a distinction being made here between terminal values and terminal goals. I think they're importantly different things.

Comment author: ialdabaoth 18 June 2014 11:52:10PM 1 point

level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.

Comment author: Ruby 19 June 2014 10:25:50AM 1 point

Level-1 is about rules which your habits and instincts can follow, but I wouldn't say the rules are just ways of describing them. Here we're talking about normative rules, not descriptive System 1/System 2 stuff.

Comment author: kybernetikos 18 June 2014 11:12:47PM 4 points

That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?

Comment author: Ruby 19 June 2014 10:21:38AM *  3 points

My understanding is that when Hare says rules or principles for level-1, he means these generically and is agnostic about what form they'd take. "Always be kind" is also a rule. For clarity, I'd substitute the word 'algorithm' for 'rules'/'principles'. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best, be it inviolable deontological rules, character-based virtue ethics, or something else.

Comment author: Eliezer_Yudkowsky 17 June 2014 08:49:51PM 40 points

"Good people are consequentialists, but virtue ethics is what works," is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.

But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.

Comment author: Ruby 18 June 2014 03:23:42AM *  22 points

If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).

To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.

So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word 'implant'; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.

How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.
