level-1 thinking is actually based on habit and instinct more than rules; rules are just a way to describe habit and instinct.
Level-1 is about rules which your habit and instinct can follow, but I wouldn't say they're ways to describe it. Here we're talking about normative rules, not descriptive System 1/System 2 stuff.
That's very interesting, but isn't the level-1 thinking closer to deontological ethics than virtue ethics, since it is based on rules rather than on the character of the moral agent?
My understanding is that when Hare says rules or principles for level-1 he means it generically and is agnostic about what form they'd take. "Always be kind" is also a rule. For clarity, I'd substitute the word 'algorithm' for 'rules'/'principles'. Your level-2 algorithm is consequentialism, but then your level-1 algorithm is whatever happens to consequentially work best - be it inviolable deontological rules, character-based virtue ethics, or something else.
"Good people are consequentialists, but virtue ethics is what works," is what I usually say when this topic comes up. That is, we all think that it is virtuous to be a consequentialist and that good, ideal rationalists would be consequentialists. However, when I evaluate different modes of thinking by the effect I expect them to have on my reasoning, and evaluate the consequences of adopting that mode of thought, I find that I expect virtue ethics to produce the best adherence rate in me, most encourage practice, and otherwise result in actually-good outcomes.
But if anyone thinks we ought not to be consequentialists on the meta-level, I say unto you that lo they have rocks in their skulls, for they shall not steer their brains unto good outcomes.
If ever you want to refer to an elaboration and justification of this position, see R. M. Hare's two-level utilitarianism, expounded best in this paper: Ethical Theory and Utilitarianism (see pp. 30-36).
To argue in this way is entirely to neglect the importance for moral philosophy of a study of moral education. Let us suppose that a fully informed archangelic act-utilitarian is thinking about how to bring up his children. He will obviously not bring them up to practise on every occasion on which they are confronted with a moral question the kind of archangelic thinking that he himself is capable of [complete consequentialist reasoning]; if they are ordinary children, he knows that they will get it wrong. They will not have the time, or the information, or the self-mastery to avoid self-deception prompted by self-interest; this is the real, as opposed to the imagined, veil of ignorance which determines our moral principles.
So he will do two things. First, he will try to implant in them a set of good general principles. I advisedly use the word 'implant'; these are not rules of thumb, but principles which they will not be able to break without the greatest repugnance, and whose breach by others will arouse in them the highest indignation. These will be the principles they will use in their ordinary level-1 moral thinking, especially in situations of stress. Secondly, since he is not always going to be with them, and since they will have to educate their children, and indeed continue to educate themselves, he will teach them, as far as they are able, to do the kind of thinking that he has been doing himself. This thinking will have three functions. First of all, it will be used when the good general principles conflict in particular cases. If the principles have been well chosen, this will happen rarely; but it will happen. Secondly, there will be cases (even rarer) in which, though there is no conflict between general principles, there is something highly unusual about the case which prompts the question whether the general principles are really fitted to deal with it. But thirdly, and much the most important, this level-2 thinking will be used to select the general principles to be taught both to this and to succeeding generations. The general principles may change, and should change (because the environment changes). And note that, if the educator were not (as we have supposed him to be) archangelic, we could not even assume that the best level-1 principles were imparted in the first place; perhaps they might be improved.
How will the selection be done? By using level-2 thinking to consider cases, both actual and hypothetical, which crucially illustrate, and help to adjudicate, disputes between rival general principles.
Meetup : July Rationality Dojo: Disagreement
Discussion article for the meetup : July Rationality Dojo: Disagreement
[ATTN: The dojo roster has all free slots starting from next month. If you would like to present at a future Dojo or suggest a topic, please fill it in on the Rationality Dojo Roster: http://is.gd/dojoroster]
The Less Wrong Sunday Rationality Dojos are crafted to be serious self-improvement sessions for those committed to the Art of Rationality and personal growth. Each month a community member will run a session involving a presentation of content, discussion, and exercises.
Continuing the succession of immensely successful dojos, James will run a session on disagreement. How can two epistemic peers, equally knowledgeable and equally competent, ever feel certain about their view when their peer disagrees?
As always, we will review the personal goals we committed to at the previous Dojo (I will have done X by the next Dojo). Scott Fowler recorded the commitments; if you didn't make it but would like to add your own goal to the records, send him a message (shokwave.sf@gmail.com).
The Dojo is likely to run for 2-3 hours, after which some people will get dinner together.
If you have any trouble finding the venue or getting in, call me on 0425-855-124.
For people voluntarily donning "Ask Me Anything" stickers, I'm less nervous than I otherwise would be. The whole point of a sticker like that is that it's safe to ask.
The whole thing hinges on how much you trust people when they assure you you can say potentially upsetting thing X to them. Generally, not very much. I would never trust a sticker or declaration to the extent that I wouldn't model someone's response; it's just an update on that model.
It was emphasised that people didn't have to answer any question, but empathy for those being asked should have been pushed equally hard.
On this occasion, askers were very hesitant to ask questions they thought would be too personal, but those being asked invariably responded without any hesitation or unease. Discovering that you could ask personal questions you were curious about with only the positive consequences of closeness and openness was a win.
But this does all include a good deal of judgment. Not an exercise for a group not high in empathy or generally unconcerned about others' responses, nor for those who are easily pressured.
One late night conversation became a circle of people pushing the limits of what they would normally ask each other: “What is your kink (fetishes)?” “What have you done which has made you feel really morally bad?”
Yeah, that's not a good idea. Group pressure and the impetus of the moment lead to either strangers having a lot of power over you, or the usual deflecting pseudo-answers. I find neither scenario particularly compelling.
I have updated towards your position.
Australian Mega-Meetup 2014 Retrospective
Overview
The first-ever Australia-wide mega-meetup took place on the second weekend of May 2014. LW clans from Melbourne, Sydney, and Canberra met in a pristine country location in NSW for a weekend of rationality, the outdoors, and fine company.
The event was a hit. This post is a general retrospective; another post, aimed at future organisers, will provide a thorough write-up of the planning, execution, and suggested improvements.
If it's great to hang out with a few friends who share your interests, values, and thought processes - then it's sheer awesome to spend a whole weekend with two dozen kindred minds. The favourite pastime at the mega-meetup was conversation. Every spare moment was spent exchanging ideas and views. We brought up a large pile of boardgames and not a single game was played - too busy talking. I consider this evidence that we need to bring more rationalists together more often.
Background
The Australian meetups had not had any contact prior to this event. Sydney1 and Canberra are new meetups for 2014. It was hoped that the mega-meetup would persuade the new meetups that the global LW community was worth being a part of. Melbourne has been invigorated since CFAR visited and was keen to share the spirit.
It is a twelve-hour drive or one-hour flight from Melbourne to Sydney. Canberra is a three-hour drive from Sydney towards Melbourne. To justify travelling the distance, we made the mega-meetup a weekend retreat from Friday evening to Sunday evening.
A word of inspiration: it was six weeks from when we first started talking to the date of the camp. Only four weeks from idea to sold out with 25 attendees. LW organisers are chock-a-block with extra-agenty goodness and are a delight to work with. If you run a LW meetup and have neighbouring groups, get in touch. An enthusiastic team can make grand things happen pretty fast.
Activities
The structure of the weekend was built around practical rationality sessions. Melbourne LW has accustomed its members to running sessions, and we drew on that knowledge. Most of the sessions were CFAR modules: alumni valued the revision and those who were new got stuck into the powerful techniques. The schedule for the mega-meetup can be seen here.
The campsite offered a range of outdoor activities. People voted on sailing and a high ropes course. The activities allowed attendees to bond outside of the intense rationality sessions. Other mega-meetup organisers might want to organise a fun excursion of another type.
We played the Credence Calibration Icebreaker Game in the opening session. It’s a merging of the credence game with the classic icebreaker ‘tell three statements about yourself, one of them a lie’.
Unconference/Lightning Talks were held by campfire. While roasting marshmallows, we listened to talks on cryonics, transfinite numbers, polyamory, quantified self, anthropic reasoning, the Price equation, and quite a few more.
European Sticker System
We adopted the European Sticker System, adorning our name tags with little indicators about ourselves. We ran out of ‘Hugs!’ stickers and a perceptible increase in the rate of hugs did occur. Uptake of Tell Culture stickers appeared universal, although harder to see in action. People cited my Tell Culture sticker before providing feedback about the meetup, feedback I might not have received otherwise. A Crocker’s Rules sticker was included for LW completeness but was cautioned against.
Like the ‘Hugs!’ sticker, ‘Ask Me Anything’ was adopted by most. One late night conversation became a circle of people pushing the limits of what they would normally ask each other:
“What is your kink (fetishes)?” “What have you done which has made you feel really morally bad?” “Given your intelligence, I am surprised by your career choice. Can you tell me about that?” “You belong to minority group X within the group here, I’m curious how that makes you feel.”
What's Next
There was no discussion of whether another mega-meetup should happen: all involved assumed that obviously it would and we should just start planning now. More people, longer, more stickers. We might invite New Zealand.
Mega-meetups are awesome and we heartily recommend everyone have them. They don’t have to be weekend-long events, your local area meetups don’t have to be large, just bring them goddamn rationalists together.
Credit
1. Sydney existed in a previous incarnation two years ago, but started up again recently.
Credence Calibration Icebreaker Game
The Aussie mega-meetup took place this past weekend. For it, a new kind of icebreaker was needed: one which was not merely fun and sociable, but also instilled with the Way. Thus was the Credence Calibration Icebreaker forged.
A marriage of the credence game and the classic icebreaker, ‘Say three things about yourself, one of them a lie’, the game allows players to learn about each other, test their ability to deceive and detect deception, and discover just how calibrated they are.
How to play
Playing instructions here: docx pdf. Scoring spreadsheet.
Each turn a player makes three statements about themselves. One and only one of the statements must be intentionally untrue. All other players assign to each statement a probability that it is the lie. These probabilities sum to 1: P(A′) + P(B′) + P(C′) = 1. The game is scored in the same manner as the credence game, but with reference to 33% rather than 50%.
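For concreteness, here is a minimal sketch of one plausible scoring rule. The function name and exact formula are assumptions, modelled on the standard credence game's logarithmic score with the baseline moved from 50% (two options) to 33% (three options):

```python
import math

def credence_score(p_on_lie):
    """Hypothetical log score relative to the uniform 1/3 baseline.

    p_on_lie is the probability a player assigned to the statement
    that turned out to be the lie. Assigning exactly 1/3 scores 0;
    more confidence in the right answer scores positive, less
    scores negative. This mirrors the binary credence game's
    100 * log2(2p), with the factor 2 replaced by 3.
    """
    return 100 * math.log2(3 * p_on_lie)

print(credence_score(1 / 3))  # uniform guess scores 0
print(credence_score(1.0))    # certainty scores the maximum, ~158.5
print(credence_score(0.2))    # underweighting the lie scores negative
```

Note that a player who assigns probability 0 to the actual lie would score negative infinity under this rule, which is the usual incentive in log-scoring games never to be fully certain.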
The way we played it, a player would reveal which was the lie immediately after everyone else had assigned probabilities. The immediate feedback is more fun and allows players to recalibrate as they learn about their performance. Revealing which statements were lies at the end would require reminding everyone what the other statements were.
Many meetup groups have played the Aumann agreement game, where groups collectively assign credences to a collection of statements; however, that game requires a set of statements to be gathered in advance. Once played, new statements must be gathered for a new game. The credence calibration icebreaker has the advantage that players generate the statements, allowing for easy replay.
Improvements
Restrictions should be placed on the nature of the lies in order to control which skills are tested. We played without restrictions and most players generated a lie by altering a minor detail of a true statement which didn’t affect its plausibility, e.g. ‘My father’s brain is frozen’1 vs. ‘My uncle’s brain is frozen’. This resulted in the game being less about appraising the plausibility of statements and more about detecting deception by tells and other clues.
Following the original icebreaker game, three statements were used. Reducing the number of statements to two would have the following benefits:
- The game is currently data entry intensive, requiring two numbers per question per player to be entered. Two statements would halve this number.
- Assigning probabilities of falsehood is counter-intuitive to many; using two statements would allow for the typical direct assignment of truth.
- People find generating three statements difficult; two statements would reduce the effort.
Statistics
Various statistics are computed in the scoring spreadsheet. Results from our game showed a high correlation between number correct and score, 0.72, and that players improved over the course of the game thanks to diminishing overconfidence.
1. True statement. As was 'I have three kidneys'.
Meetup : Melbourne June Rationality Dojo: Memory
Discussion article for the meetup : Melbourne June Rationality Dojo: Memory
The Less Wrong Sunday Rationality Dojos are crafted to be serious self-improvement sessions for those committed to the Art of Rationality and personal growth. Each month a community member will run a session involving a presentation of content, discussion, and exercises.
Continuing the succession of immensely successful dojos, Megan will present in June on memory.
As always, we will review the personal goals we committed to at the previous Dojo ('I will have done X by the next Dojo'). Scott Fowler recorded the commitments; if you didn't make it but would like to add your own goal to the records, send him a message (shokwave.sf@gmail.com).
The Dojo is likely to run for 2-3 hours, after which some people will get dinner together.
If you have any trouble finding the venue or getting in, call me on 0425-855-124. If you would like to present at a future Dojo or suggest a topic, please fill it in on the Rationality Dojo Roster: http://is.gd/dojoroster
There's no incoherence in defining "terminal" as "not lowest priority", which is basically what you are saying.
It's just not what the word means.
Literally, etymologically, that is not what terminal means. It means maximal, or final. A terminal illness is not an illness that is a bit more serious than some other illness.
It's not even what it usually means on LW. If Clippy's goals were terminal in your sense, they would be overridable: you would be able to talk Clippy out of paperclipping.
What you are talking about is valid; it is a thing. If you have any hierarchy of goals, there are some at the bottom, some in the middle, and some at the top. But you need to invent a new word for the middle ones, because "terminal" doesn't mean "intermediate".
I feel like there's not much of a distinction being made here between terminal values and terminal goals. I think they're importantly different things.