Open thread, Oct. 27 - Nov. 2, 2014

5 Post author: MrMind 27 October 2014 08:58AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (400)

Comment author: Thomas 27 October 2014 09:59:57AM 4 points [-]

Where are you right, while most others are wrong? Including people on LW!

Comment author: RowanE 27 October 2014 11:41:18AM 9 points [-]

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.

Comment author: Thomas 27 October 2014 11:59:31AM *  3 points [-]

Very well. But do you have such a belief, one that others will see as wrong?

(Last time this was asked, the majority of contrarian views were presented by me.)

Comment author: ZankerH 27 October 2014 01:39:03PM 2 points [-]
  • Technological progress and social/political progress are loosely correlated at best

  • Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

  • There is no such thing as moral progress, only people in charge of enforcing present moral norms selectively evaluating past moral norms as wrong because they disagree with present moral norms

Comment author: Nate_Gabriel 27 October 2014 01:52:51PM 1 point [-]

Compared to technological progress, there has been little or no social/political progress since the mid-18th century - if anything, there has been a regression

Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband's right to punish his wife however he wanted.

Comment author: RowanE 27 October 2014 02:32:18PM 1 point [-]

I think that progress is specifically what he's on about in his third point. It's standard neoreactionary stuff, there's a reason they're commonly regarded as horribly misogynist.

Comment author: Capla 27 October 2014 06:06:02PM 1 point [-]

I want to discuss it, and be shown wrong if I'm being unfair, but saying "It's standard [blank] stuff" seems dismissive. Suppose I was talking with someone about friendly AI or the singularity, and a third person comes around and says "Oh, that's just standard Less Wrong stuff." It may or may not be the case, but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright. That is not conducive to communication.

Comment author: Lumifer 27 October 2014 06:15:19PM 1 point [-]

but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright.

This comment could be (but not necessarily is) valid with the meaning of "Your arguments are part of a well-established set of arguments and counter-arguments, so there is no point in going through them once again. Either go meta or produce a novel argument.".

Comment author: RowanE 27 October 2014 07:20:19PM 2 points [-]

I was trying to say "you should not expect someone who thinks no social, political or moral progress has been made since the 18th century to consider women's rights a big step forward" in a way that wasn't insulting to Nate_Gabriel - being casually dismissive of an idea makes "you seem to be ignorant about [idea]" less harsh.

Comment author: Metus 27 October 2014 02:24:31PM 3 points [-]

I think I found the neoreactionary.

Comment author: gjm 27 October 2014 03:50:13PM 2 points [-]

The neoreactionary? There are quite a number of neoreactionaries on LW; ZankerH isn't by any means the only one.

Comment author: Metus 27 October 2014 04:05:24PM 2 points [-]

Apparently LW is a bad place to make jokes.

Comment author: Lumifer 27 October 2014 04:12:47PM 3 points [-]

That's not LW, that's the internet. The implied context in your head is not the implied context in other heads.

Comment author: gjm 27 October 2014 05:09:53PM 10 points [-]

The LW crowd is really tough: jokes actually have to be funny here.

Comment author: RichardKennaway 27 October 2014 03:00:28PM 1 point [-]

What do you mean by social progress, given that you distinguish it from technological progress ("loosely correlated at best") and moral progress ("no such thing")?

Comment author: ZankerH 27 October 2014 03:15:28PM *  1 point [-]

Re: social progress: see http://www.moreright.net/social-technology-and-anarcho-tyranny/

We use the term “technology” when we discover a process that lets you get more output for less investment, whether you’re trying to produce gallons of oil or terabytes of storage. We need a term for this kind of institutional metis – a way to get more social good for every social sacrifice you have to make – and “social technology” fits the bill. Along with the more conventional sort of technology, it has led to most of the good things that we enjoy today.

The flip side, of course, is that when you lose social technology, both sides of the bargain get worse. You keep raising taxes yet the lot of the poor still deteriorates. You spend tons of money on prisons and have a militarized police force, yet they seem unable to stop muggings and murder. And this is the double bind that "anarcho-tyranny" addresses. Once you start losing social technology, you're forced into really unpleasant tradeoffs, where you have to sacrifice along two axes of things you really value.

As for moral progress, see whig history. Essentially, I view the notion of moral progress as fundamentally a misinterpretation of history. Related fallacy: using a number as an argument (as in, "how is this still a thing in 2014?"). Progress in terms of technology can be readily demonstrated, as can regression in terms of social technology. The notion of moral progress, however, is so meaningless as to be not even wrong.

Comment author: Toggle 27 October 2014 05:09:04PM 1 point [-]

More Right

That use of 'technology' seems to be unusual, and possibly even misleading. Classical technology is more than a third way that increases net good; 'techne' implies a mastery of the technique and the capacity for replication. Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.

It does not seem to be the case that we have ever known how to make new societies that do the things we want. The narrative of a 'regression' in social progress implies that there was a kind of knowledge that we no longer have - but it is the social institutions themselves that are breaking down, not our ability to craft them.

Cultures are still built primarily by poorly-understood aggregate interactions, not consciously designed, and they decay in much the same way. A stronger analogy here might be biological adaptation, rather than technological advancement, and in evolutionary theory the notion of 'progress' is deeply suspect.

Comment author: Lumifer 27 October 2014 05:28:40PM *  1 point [-]

Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.

The fact that I can't make a new computer from scratch doesn't mean I'm using one as "a magical artifact". What contemporary pieces of technology can you make?

It does not seem to be the case that we have ever known how to make new societies that do the things we want.

You might be more familiar with this set of knowledge if we call it by its usual name -- "politics".

Comment author: Toggle 27 October 2014 05:43:44PM 1 point [-]

I was speaking in the plural. As a civilization, we are more than capable of creating many computers with established qualities and creating new ones to very exacting specifications. I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.

You can do this for governments, of course- but notably, we haven't lost any information here. We are still perfectly capable of writing constitutions, or even founding monarchies if there were a consensus to do so. The 'regression' that Zanker believes in is (assuming the most common NRx beliefs) a matter of convention, social fabrics, and shared values, and not a regression in our knowledge of political structures per se.

Comment author: Lumifer 27 October 2014 05:57:33PM *  2 points [-]

I don't believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.

That's not self-evident to me. There are legal and ethical barriers, but my guess is that given the same level of control that we have in, say, engineering, we could (or quickly could learn to) build societies with custom characteristics. Given the ability to select people, shape their laws and regulations, observe and intervene, I don't see why you couldn't produce a particular kind of a society.

Of course you can't build any kind of society you wish just like you can't build any kind of a computer you wish -- you're limited by laws of nature (and of sociology, etc.), by available resources, by your level of knowledge and skill, etc.

Shaping a society is a common desire (look at e.g. communists) and a common activity (of governments and politicians). Certainly it doesn't have the precision and replicability of mass-producing machine screws, but I don't see why you can't describe it as a "technology".

Comment author: fubarobfusco 28 October 2014 04:06:02AM *  1 point [-]

How do you square your beliefs with (for instance) the decline in murder in the Western world — see, e.g. Eisner, Long-Term Historical Trends in Violent Crime?

Comment author: RowanE 27 October 2014 02:55:02PM *  8 points [-]

The most contra-LW belief I have, if you can call it that, is my not being convinced on the pattern theory of identity - EY's arguments about there being no "same" or "different" atoms not affecting me, because my intuitions already say that being obliterated and rebuilt from the same atoms would be fatal. I think I need the physical continuity of the object my consciousness runs on. But I realise I haven't got much support besides my intuitions for believing that that would end my experience while going to sleep tonight won't, and by now I've become almost agnostic on the issue.

Comment author: ChristianKl 27 October 2014 04:41:09PM 3 points [-]

I think this could be better put as "what do you believe, that most others don't?" - being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this.

I think you are wrong. Identifying a belief as wrong is not enough to remove it. If someone has low self esteem and you give him an intellectual argument that's sound and that he wants to believe, that's frequently not enough to change the fundamental belief behind the low self esteem.

Scott Alexander wrote a blog post about how asking a schizophrenic for weird beliefs makes the schizophrenic tell the doctor about the faulty beliefs.

If you ask a question differently, you get people reacting differently. If you want to get a broad spectrum of answers, then it makes sense to ask the question in a bunch of different ways.

I'm intelligent enough to know that my own beliefs about the social status I hold within a group could very well be off even if those beliefs feel very real to me.

If you ask me: "Do you think X is really true and everyone who disagrees is wrong?", you trigger slightly different heuristics in me than if you ask "Do you believe X?".

It's probably pretty straightforward to demonstrate this and some cognitive psychologist might even already have done the work.

Comment author: summerstay 27 October 2014 01:52:51PM *  3 points [-]

It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

Comment author: polymathwannabe 27 October 2014 02:18:00PM 3 points [-]

EY has declared that P-zombies are nonsense, but I've had trouble understanding his explanation. Is there any consensus on this?

Comment author: RowanE 27 October 2014 02:43:51PM *  5 points [-]

Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn't doing so for reasons causally connected to the fact that they are conscious. To effectively say "I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn't conscious" is absurd.

Comment author: hyporational 27 October 2014 02:57:54PM *  1 point [-]

It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

I haven't gotten that impression. The p-zombie problem those other guys talk about is a bit different since human beings aren't made with a purpose in mind and you'd have to explain why evolution would lead to brains that only mimic conscious behavior. However if human beings make robots for some purpose it seems reasonable to program them to behave in a way that mimics behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about.

I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case looking for human-like traits in machines becomes a moot point.

Comment author: Capla 27 October 2014 06:18:57PM 1 point [-]

I often wonder if my subconscious is actually conscious - just a different consciousness than me.

Comment author: hyporational 28 October 2014 09:40:49AM *  1 point [-]

I actually arrived at this supposedly old idea on my own when I was reading about the incredibly complex enteric nervous system in med school. For some reason it struck me that the brain of my gastrointestinal system might be conscious. But then thinking about it further it didn't seem very consistent that only certain bigger neural networks that are confined by arbitrary anatomical boundaries would be conscious, so I proceeded a bit further from there.

Comment author: bbleeker 27 October 2014 06:42:29PM 1 point [-]

How would you tell the difference? I act like I'm conscious too, how do you know I am?

Comment author: Daniel_Burfoot 27 October 2014 01:59:11PM 1 point [-]

Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view. Altruists should seriously consider either migrating or scaling back their career ambitions significantly.

Comment author: Lumifer 27 October 2014 03:06:15PM 5 points [-]

Interesting. This is in contrast to which societies? To where should altruists emigrate?

Comment author: Evan_Gaensbauer 28 October 2014 07:44:33AM 4 points [-]

If anyone cares, the effective altruism community has started pondering this question as a group. This might work out for those doing direct work, such as research or advocacy: if they're doing it mostly virtually, what they need the most is Internet access. If a lot of the people they'd be (net)working with as part of their work were also at the same place, it would be even less of a problem. It doesn't seem like this plan would work for those earning to give, as the best ways of earning to give often depend on geography-specific constraints, i.e., working in developed countries.

Note that if you perceive this as a bad idea, please share your thoughts, as I'm only aware of its proponents claiming it might be a good idea. It hasn't been criticized, so it's an idea worthy of detractors if criticism is indeed to be had.

Comment author: ChristianKl 27 October 2014 03:09:11PM 0 points [-]

Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view.

Do you follow some kind of utilitarian framework in which you could quantify that problem? Roughly how much money donated to effective charities would make up for the harm caused by participating in US society?

Comment author: Daniel_Burfoot 27 October 2014 04:29:03PM -1 points [-]

Thanks for asking, here's an attempt at an answer. I'm going to compare the US (tax rate 40%) to Singapore (tax rate 18%). Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that 22% extra of GDP that the US taxes its citizens is simply squandered.

Let I be income, D be charitable donations, R be tax rate (0.4 vs 0.18), U be money usage in support of lifestyle, and T be taxes paid. Roughly U=I-T-D, and T=R(I-D). A bit of algebra produces the equation D=I-U/(1-R).

Consider a good programmer-altruist making I=150K. In the first model, the programmer decides she needs U=70K to support her lifestyle; the rest she will donate. Then in the US, she will donate D=33K, and pay T=47K in taxes. In SG, she will donate D=64K and pay T=16K in taxes to achieve the same U.

In the second model, the altruist targets a donation level of D=60K, and adjusts U so she can meet the target. In the US, she pays T=36K in taxes and has a lifestyle of U=54K. In SG, she pays T=16K of taxes and lives on U=74K.

So, to answer your question, the programmer living in the US would have to reduce her lifestyle by about $20K/year to achieve the same level of contribution as the programmer in SG.
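For what it's worth, the algebra above checks out; here is a quick sketch using the comment's hypothetical figures (thousands of USD, 150K income, 40% vs 18% rates). Note that in the first model the SG figures come out to roughly 64.6K donated and 15.4K in taxes, so the 64K/16K quoted above are rounded.

```python
# U = I - T - D and T = R * (I - D) together give D = I - U / (1 - R).

def donation_given_lifestyle(income, lifestyle, rate):
    """Model 1: fix lifestyle spending U, donate whatever is left."""
    donation = income - lifestyle / (1 - rate)
    taxes = rate * (income - donation)
    return donation, taxes

def lifestyle_given_donation(income, donation, rate):
    """Model 2: fix the donation D, live on what remains after taxes."""
    taxes = rate * (income - donation)
    lifestyle = income - taxes - donation
    return lifestyle, taxes

for country, rate in [("US", 0.40), ("SG", 0.18)]:
    d, t1 = donation_given_lifestyle(150, 70, rate)
    u, t2 = lifestyle_given_donation(150, 60, rate)
    print(f"{country}: model 1 donates {d:.1f}K (tax {t1:.1f}K); "
          f"model 2 lives on {u:.1f}K (tax {t2:.1f}K)")
```

The ~20K/year gap the comment describes is just the difference between the two countries' model-2 lifestyles (74K vs 54K), or equivalently between the model-1 donations (64.6K vs 33.3K, with donations tax-deductible in both cases).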

Most other developed countries have tax rates comparable or higher than the US, but it's more plausible that in other countries the money goes to things that actually help people.

Comment author: ChristianKl 27 October 2014 05:17:18PM 3 points [-]

Consider a good programmer-altruist making I=150K

I think given the same skill level the programmer-altruist making 150K while living in Silicon Valley might very well make 20K less living in Germany, Japan or Singapore.

Comment author: Nornagest 27 October 2014 09:32:04PM 5 points [-]

I don't know what opportunities in Europe or Asia look like, but here on the US West Coast, you can expect a salary hit of $20K or more if you're a programmer and you move from the Silicon Valley even to a lesser tech hub like Portland. Of course, cost of living will also be a lot lower.

Comment author: bramflakes 27 October 2014 07:39:49PM *  6 points [-]

I'm going to compare the US to Singapore

this is the point where alarm bells should start ringing

Comment author: Daniel_Burfoot 27 October 2014 09:55:09PM 1 point [-]

The comparison is valid for the argument I'm trying to make, which is that by emigrating to SG a person can enhance his or her altruistic contribution while keeping other things like take-home income constant.

Comment author: SolveIt 27 October 2014 09:04:03PM 3 points [-]

Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that 22% extra of GDP that the US taxes its citizens is simply squandered.

This is just plain wrong. Mostly because Singapore and the US are different countries in different circumstances. Just to name one, Singapore is tiny. Things are a lot cheaper when you're small. Small countries are sustainable because international trade means you don't have to be self-sufficient, and because alliances with larger countries let you get away with having a weak military. The existence of large countries is pretty important for this dynamic.

Now, I'm not saying the US is doing a better job than Singapore. In fact, I think Singapore is probably using its money better, albeit for unrelated reasons. I'm just saying that your analysis is far too simple to be at all useful except perhaps by accident.

Comment author: fubarobfusco 28 October 2014 01:25:52AM 0 points [-]

Things are a lot cheaper when you're small.

Things are a lot cheaper when you're large. It's called "economy of scale".

Comment author: SolveIt 28 October 2014 01:03:47PM 1 point [-]

Yes, both effects exist and they apply to different extents in different situations. A good analysis would take both (and a host of other factors) into account and figure out which effect dominates. My point is that this analysis doesn't do that.

Comment author: Daniel_Burfoot 27 October 2014 03:20:55PM *  1 point [-]

I would suggest ANZAC, Germany, Japan, or Singapore. I realized after making this list that those countries have an important property in common, which is that they are run by relatively young political systems. Scandinavia is also good. Most countries are probably ethically better than the US, simply because they are inert: they get an ethical score of zero while the US gets a negative score.

(This is supposed to be a response to Lumifer's question below).

Comment author: Lumifer 27 October 2014 03:34:32PM 4 points [-]

would suggest ANZAC, Germany, Japan, or Singapore. ... Scandinavia is also good.

That's a very curious list, notable for absences as well as for inclusions. I am a bit stumped, for I cannot figure out by which criteria it was constructed. Would you care to elaborate on why these countries look to you like the most ethical on the planet?

Comment author: Metus 27 October 2014 09:46:02PM 1 point [-]

Modern countries with developed economies lacking a military force involved in and/or capable of military intervention outside their territory. Maybe his gripe is with the US military, so I just went with that.

Comment author: Azathoth123 28 October 2014 03:54:16AM 4 points [-]

Which is to say they engage in a lot of free riding on the US military.

Comment author: Daniel_Burfoot 27 October 2014 10:05:33PM 1 point [-]

I don't claim that the list is exhaustive or that the countries I mentioned are ethically great. I just claim that they're ethically better than the US.

Comment author: Lumifer 28 October 2014 03:02:36PM 0 points [-]

Hmm... Is any Western European country ethically worse than the USA from your point of view? Would Canada make the list? Does any poor country qualify?

Comment author: Daniel_Burfoot 28 October 2014 03:15:29PM *  -1 points [-]

In my view Western Europe is mostly inert, so it gets an ethics score of 0, which is better than the US. Some poor countries are probably okay, I wouldn't want to make sweeping claims about them. The problem with most poor countries is that their governments are too corrupt. Canada does make the list, I thought ANZAC stood for Australia, New Zealand And Canada.

Comment author: DanielFilan 27 October 2014 10:50:13PM 2 points [-]

For reference, ANZAC stands for the "Australia and New Zealand Army Corps" that fought in WWI. If you mean "Australia and New Zealand", then I don't think there's a shorter way of saying that than just listing the two countries.

Comment author: Capla 27 October 2014 06:11:37PM *  0 points [-]

I'm not sure what you mean. Can you elaborate, with the other available options perhaps? What should I do instead?

To be more specific, what's morally problematic about wanting to be a more successful writer or researcher or therapist?

Comment author: Lumifer 27 October 2014 06:23:05PM *  3 points [-]

what's morally problematic about wanting to be a more successful writer or researcher or therapist?

The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany?

"The simple step of a courageous individual is not to take part in the lie." -- Alexander Solzhenitsyn

Comment author: Daniel_Burfoot 27 October 2014 07:00:52PM *  1 point [-]

I'm not sure I want to make blanket moral condemnations. I think Americans are trapped in a badly broken political system, and the more power, prestige, and influence that system has, the more damage it does. Emigration or socioeconomic nonparticipation reduces the power the system has and therefore reduces the damage it does.

Comment author: Lumifer 27 October 2014 07:14:03PM 1 point [-]

I'm not sure I want to make blanket moral condemnations.

It seems to me you do, first of all by your call to emigrate. Blanket condemnations of societies do not extend to each individual, obviously, and the difference between "condemning the system" and "condemning the society" doesn't look all that big.

Comment author: faul_sname 27 October 2014 07:28:57PM *  2 points [-]

The issue is blanket moral condemnation of the whole society. Would you want to become a "more successful writer" in Nazi Germany?

...yes? I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else. In this context, "the lie" of Nazi Germany was not the mere existence of the society, it was specific things people within that society were doing. Romance novels, even very good romance novels, are not a part of that lie by reasonable definitions.

ETA: There are certainly better things a person in Nazi Germany could do than writing romance novels. If you accept the mindset that anything that isn't optimally good is bad, then yes, being a writer in Nazi Germany is probably bad. But in that event, moving to Sweden and continuing to write romance novels is no better.

Comment author: Lumifer 27 October 2014 07:43:08PM *  2 points [-]

I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else

The key word is "successful".

To become a successful romance writer in Nazi Germany would probably require you pay careful attention to certain things. For example, making sure no one who could be construed to be a Jew is ever a hero in your novels. Likely you will have to have a public position on the racial purity of marriages. Would a nice Aryan Fräulein ever be able to find happiness with a non-Aryan?

You can't become successful in a dirty society while staying spotlessly clean.

Comment author: faul_sname 27 October 2014 07:48:47PM 3 points [-]

So? Who said my goal was to stay spotlessly clean? I think more highly of Bill Gates than of Richard Stallman, because as much as Gates was a ruthless and sometimes dishonest businessman, and as much as Stallman does stick to his principles, Gates, overall, has probably improved the human condition far more than Stallman.

Comment author: Lumifer 27 October 2014 08:13:59PM *  2 points [-]

Who said my goal was to stay spotlessly clean?

The question was whether "being a writer in Nazi Germany would be any worse than being a writer anywhere else".

If you would be happy to wallow in mud, be my guest.

The question of how much morality could one maintain while being successful in an oppressive society is an old and very complex one. Ask Russian intelligentsia for details :-/

Comment author: NancyLebovitz 27 October 2014 08:32:20PM 2 points [-]

Lack of representation isn't the worst thing in the world.

If you could write romance novels in Nazi Germany (did they have romance novels?) and the novels are about temporarily and engagingly frustrated love between Aryans, with no nasty stereotypes of non-Aryans, I don't think it's especially awful.

Comment author: Nornagest 27 October 2014 08:44:22PM 0 points [-]

I wouldn't want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don't see how being a writer in Nazi Germany would be any worse than being a writer anywhere else.

Well, there is the inconvenient possibility of getting bombed flat in zero to twelve years, depending on what we're calling Nazi Germany.

Comment author: RowanE 27 October 2014 10:21:00PM 0 points [-]

Considering the example of Nazi Germany is being used as an analogy for the United States, a country not actually at war, taking Allied bombing raids into account amounts to fighting the hypothetical.

Comment author: Nornagest 27 October 2014 10:26:49PM *  1 point [-]

Is it? I was mainly joking -- but there's an underlying point, and that's that economic and political instability tends to correlate with ethical failures. This isn't always going to manifest as winding up on the business end of a major strategic bombing campaign, of course, but perpetrating serious breaches of ethics usually implies that you feel you're dealing with issues serious enough to justify being a little unethical, or that someone's getting correspondingly hacked off at you for them, or both. Either way there are consequences.

Comment author: NancyLebovitz 28 October 2014 07:16:58PM 0 points [-]

It's a lot safer to abuse people inside your borders than to make a habit of invading other countries. The risk from ethical failure has a lot to do with whether you're hurting people who can fight back.

Comment author: DanielLC 27 October 2014 07:19:57PM 2 points [-]

If I scale back my career ambitions, I won't make as much money, which means that I can't donate as much. This is not a small cost. How can my career do more damage than that opportunity cost?

Comment author: gattsuru 27 October 2014 05:56:20PM *  5 points [-]

General :

  • There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or can not be true.

  • /Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we're desperately trying to Section Eight.

  • Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Political :

  • Network Neutrality desires a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.

  • Privacy policies focused on preventing collection of identifiable data are ultimately doomed.

LessWrong-specific:

  • "Karma" is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing on a top level post that breaks into new levels of philosophy, or a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data-analysis it's disappointing.

  • The risks and costs of "Raising the sanity waterline" are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven't really looked at what this would mean on a national scale. "Nuclear Winter" as argued by Sagan was a very, very overt Pascal's Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear-war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective... several hundred pages of reading later.

  • "Rationality" is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you're competing with RationalWiki, the universe is trying to give you a Hint.

  • The type of Atheism that is certain it will win, won't. There's a fascinating post describing how religion was driven from its controlling aspects in History, in Science, in Government, in Cleanliness ... and then goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not question why, no matter your surprise, that religion remains on a pedestal for Ethics, no matter how much it's poked and prodded by the blasphemy of actual practice. Lest you find the answer.

  • ((I'm /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))

MIRI-specific:

  • MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous... and it's several orders of complexity more difficult to describe that danger, while even minimally self-improving AI still has the potential to be an existential risk, requires many fewer leaps to discuss, and leads to similar concerns anyway.

  • MIRI's difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that's a value of "difficulty working with outsiders" that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

Comment author: polymathwannabe 27 October 2014 06:35:10PM 0 points [-]

There are absolutely vital lies that everyone can and should believe, even knowing that they aren't true or can not be true.

Desirability issues aside, "believing X" and "knowing X is not true" cannot happen in the same head.

Comment author: Lumifer 27 October 2014 06:39:11PM 4 points [-]

"believing X" and "knowing X is not true" cannot happen in the same head

This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that "The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function" -- a bon mot I find insightful.

Comment author: polymathwannabe 27 October 2014 08:35:11PM 0 points [-]

Example of that being useful?

Comment author: Lumifer 27 October 2014 08:47:01PM *  1 point [-]

Mostly in the analysis of complex phenomena with multiple incompatible (or barely compatible) frameworks for looking at them.

A photon is a wave.
A photon is a particle.

Love is temporary insanity.
Love is the most beautiful feeling you can have.

Etc., etc.

Comment author: RowanE 27 October 2014 10:18:06PM 1 point [-]

It's possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true - a photon is actually neither.

Truth is not beauty, so there's no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.

Comment author: gattsuru 27 October 2014 10:10:44PM *  6 points [-]

(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)

Having an internal locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There's some evidence that this link is causative for at least some characteristics. It's not a completely unblemished good characteristic -- it correlates with lower compliance with medical orders, and probably isn't good for some anxiety disorders in extreme cases -- but it seems more helpful than not.

It's also almost certainly a lie. Indeed, it's obvious that such a thing can't exist under any useful model of reality. There are mountains of evidence for either the nature or the nurture side of the debate, to the point where we really hope that bad choices are caused by as external an event as possible, because /that/, at least, we might be able to fix. At a more basic level, there's a whole lot more universe that isn't you than there is you to start with. On the upside, if your locus of control is external, at least it's not worth worrying about. You couldn't do much to change it, after all.

Psychology has a few other traits where this sort of thing pops up, most hilariously during placebo studies, though that's perhaps too easy an example. It's not the only one, though : useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner's dilemma.

It's possible (even plausible) that this represents a valley of rationality -- like the earlier example of Pascal's Wagers that hold decent Utilitarian tradeoffs underneath -- but I'm not sure that's falsifiable, and it's certainly not obvious right now.

Comment author: Evan_Gaensbauer 28 October 2014 07:26:39AM 4 points [-]

Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.

As an afflicted individual, I appreciate the content warning. I'm responding without having read the rest of the comment. This is a note of gratitude to you, and a data point, for yourself and others, that such content warnings are appreciated.

Comment author: Nornagest 27 October 2014 06:55:21PM 5 points [-]

Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.

Isn't this basically Goodhart's law?

Comment author: gattsuru 28 October 2014 12:11:33AM 2 points [-]

It's related. Goodhart's Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn't predict how that decoupling will occur. The common story of Goodhart's law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports.

Sometimes this is a good thing: it's why, for one example, companies don't instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.

That said, while I'm convinced that's the pattern, it's not the only one or even the most obvious one, and most people seem to have different formalizations, and I can't find the evidence to demonstrate it.
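The decoupling described above can be sketched in a toy simulation (all numbers hypothetical, chosen only to illustrate the machinery-weight story): a proxy that starts out correlated with the true goal becomes much less informative once it can be gamed cheaply.

```python
import random

# Toy Goodhart's-law sketch (hypothetical numbers): each machinery "design"
# has a true usefulness, and a proxy score (weight) that tracks usefulness
# plus noise -- but can also be padded with dead weight that adds no value.

random.seed(0)

def make_design():
    usefulness = random.uniform(0, 10)
    honest_weight = usefulness + random.uniform(0, 2)  # proxy tracks the goal
    padding = random.uniform(0, 10)                    # dead weight, no value
    return usefulness, honest_weight + padding

designs = [make_design() for _ in range(10000)]

def avg_usefulness(pool):
    return sum(u for u, _ in pool) / len(pool)

# Policy: reward the top 10% of designs by the proxy (total weight),
# versus the (unmeasurable) ideal of rewarding the top 10% by usefulness.
top_by_proxy = sorted(designs, key=lambda d: d[1], reverse=True)[:1000]
top_by_goal = sorted(designs, key=lambda d: d[0], reverse=True)[:1000]

print(round(avg_usefulness(designs), 2))       # baseline population
print(round(avg_usefulness(top_by_proxy), 2))  # proxy selection: diluted gain
print(round(avg_usefulness(top_by_goal), 2))   # goal selection: full gain
```

Selecting on the padded proxy still beats no selection at all, but recovers far less usefulness than selecting on the goal itself would -- which is the sense in which optimizing the measure drifts toward optimizing its pretense.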

Comment author: Evan_Gaensbauer 28 October 2014 07:24:37AM 1 point [-]

MIRI's difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that's a value of "difficulty working with outsiders" that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))

I agree, and it's something I could, maybe should, help with instead of just complaining about. What's stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn't work, then, what would be stopping us?

Comment author: Viliam_Bur 28 October 2014 08:29:07AM 4 points [-]

over six-to-nine months to get the Sequences eBook proofread

This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?

Is it because people don't volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?

Just print the whole fucking thing on paper, each chapter separately. Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.

Comment author: Evan_Gaensbauer 28 October 2014 09:42:57AM 1 point [-]

Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven't read the whole Sequences, they can just pick a chapter they haven't read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.

Thanks for the suggestion. I'll plan some meetups around this. Not the whole thing, mind you. I'll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post, and providing feedback on it or whatever.

Comment author: gattsuru 28 October 2014 02:56:40PM *  2 points [-]

How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?

It's the 'norm-palatable' part more than the proofreading aspect, unfortunately, and I'm not sure that can be readily made volunteer work.

As far as I can tell, the proofreading part began in late 2013 and involved over two thousand pages of content to proofread through Youtopia. The only Sequence-related volunteer work on the Youtopia site now involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess: since mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and they're looking at a release at the end of the year, which strongly suggests that they've finished the proofreading aspect.

That said, I may have outdated information: the Sequences eBook has been renamed several times in progress for a variety of good reasons, I'm not sure Youtopia is the current place most of this is going on, and AlexVermeer may or may not be lead on this project and may or may not be more active elsewhere than these forums. There are some public project attempts to make an eReader-compatible version, though these don't seem much stronger from a reading-order perspective.

In fairness, doing /good/ layout and ePublishing does take more specialized skills and some significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format -- where links are less powerful tools, where a large portion of viewer devices support only grayscale, and where certain media presentation formats aren't possible. At least from what I've seen in technical writing and pen-and-paper RPGs, this is not a helpfully parallel task: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited.

Less charitably, while trying to find this information I've found references to an eBook project dating back to late 2012, so nine months may be a low-end estimate. Not sure if that's the same project or if it's a different one that failed, or if it's a different one that succeeded and I just can't find the actual eBook result.

Comment author: lmm 28 October 2014 07:15:06PM 3 points [-]

I'm just reading LW for fun and unwilling to do any real work to help, FWIW.

Comment author: lmm 27 October 2014 07:49:41PM *  4 points [-]
  • Arguing on the internet is much like a drug, and bad for you
  • Progress is real
  • Some people are worth more than others
    • You can correlate this with membership in most groups you care to name
  • Solipsism is true
Comment author: NancyLebovitz 27 October 2014 08:35:51PM 4 points [-]

Some people are worth more than others
Solipsism is true

Are these consistent with each other? Should it at least be "Some "people" are worth more than others"?

Comment author: lmm 27 October 2014 10:37:03PM 0 points [-]

Words are just labels for empirical clusters. I'm not going to scare-quote people when it has the usual referent used in normal conversation.

Comment author: Evan_Gaensbauer 28 October 2014 07:17:56AM 1 point [-]

Progress is real

What do you mean by 'progress'? There is more than one conceivable type of progress: political, philosophical, technological, scientific, moral, social, etc.

What's interesting is there is someone else in this thread who believes they are right about something most others are wrong about. ZankerH believes there hasn't been much political or social progress, and that moral progress doesn't exist. So, if that's the sort of progress you are meaning, and also believe that you're right about this when most others aren't, then this thread contains some claims that would contradict each other.

Alas, I agree with you that arguing on the Internet is bad, so I'm not encouraging you to debate ZankerH. I'm just noting something I find interesting.

Comment author: bramflakes 27 October 2014 07:53:59PM 16 points [-]

My thoughts on the following are rather disorganized and I've been meaning to collate them into a post for quite some time but here goes:

Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to naive harm-based consequentialist morality. When pressed, I think most will state a far-mode meta-ethical version that acknowledges other facets of human morality (disgust, purity, fairness, etc.) that would get wrapped up into a standardized utilon currency (I believe CEV is meant to do this?), but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in Africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys because that's where the light is. I also think there's a more-or-less unstated assumption that considerations other than Harm are low-status.

Comment author: Larks 28 October 2014 02:04:36AM 2 points [-]

Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.

Comment author: Azathoth123 28 October 2014 04:00:06AM 3 points [-]

Ah, yes. The standard problem with measurement based incentives: you start optimizing for what's easy to measure.

Comment author: James_Miller 27 October 2014 09:56:53PM 3 points [-]

I've signed up for cryonics, invest in stocks through index funds, and recognize that the Fermi paradox means mankind is probably doomed.

Comment author: Ixiel 27 October 2014 11:22:25PM 4 points [-]

Inequality is a good thing, to a point.

I believe in a world where it is possible to get rich, and not necessarily through hard work or being a better person. One person owning the world, with the rest of us owning nothing, would be bad. Everybody having identical shares of everything would be bad (even ignoring practicalities). I don't know exactly where the optimal level is, but it is closer to the first situation than the second, even if assigned by lottery.

I'm treating this as basically another contrarian views thread without the voting rules. And full disclosure I'm too biased for anybody to take my word for it, but I'd enjoy reading counterarguments.

Comment author: Nate_Gabriel 27 October 2014 11:37:29PM 2 points [-]

Do you think we currently need more inequality, or less?

Comment author: Ixiel 28 October 2014 12:33:02AM *  1 point [-]

In the US I would say more-ish. I support a guaranteed basic income, and any benefit to one person or group that costs no one else (benefiting the bottom without costing the top would decrease inequality but would still be good), but I think there should be a smaller middle class.

I don't know enough about global issues to comment on them.

Comment author: Viliam_Bur 28 October 2014 12:48:11AM 5 points [-]

My intuition would be that inequality per se is not a problem, it only becomes a problem when it allows abuse. But that's not necessarily a function of inequality itself; it also depends on society. I can imagine a society which would allow a lot of inequality and yet would prevent abuse (for example if some Friendly AI would regulate how you are allowed to spend your money).

Comment author: lmm 28 October 2014 07:13:14PM 0 points [-]

If we're stipulating that the allocation is by lottery, I think equality is optimal due to simple diminishing returns. And also our instinctive feelings of fairness. This tends to be intuitively obvious in a small group; if you have 12 cupcakes and 4 people, no-one would even think about assigning them at random; 3 each is the obviously correct thing to do. It's only when dealing with groups larger than our Dunbar number that we start to get confused.
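The diminishing-returns point above can be checked directly with a toy utility function (sqrt is an assumed illustrative choice, not anything lmm specified): among all ways to split 12 cupcakes among 4 people, the equal split maximizes total utility.

```python
import math
from itertools import product

# Diminishing-returns sketch: with a strictly concave utility function
# (sqrt, assumed for illustration), the equal split of 12 cupcakes among
# 4 people beats every unequal allocation.

def total_utility(alloc):
    return sum(math.sqrt(x) for x in alloc)

# Enumerate every allocation of 12 cupcakes to 4 people.
allocations = [a for a in product(range(13), repeat=4) if sum(a) == 12]
best = max(allocations, key=total_utility)

print(best)  # (3, 3, 3, 3)
```

Any concave (diminishing-returns) utility gives the same answer; the choice of sqrt only affects by how much the equal split wins.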

Comment author: pianoforte611 28 October 2014 12:36:53AM 0 points [-]

Diet and exercise generally do not cause substantial long-term weight loss. Failure rates are high, and successful cases keep off about 7% of their original body weight after 5 years. I strongly suspect that this effect does not scale: you won't lose another 7% after another 5 years.

It might be instrumentally useful though for people to believe that they can lose weight via diet and exercise, since a healthy diet and exercise are good for other reasons.

Comment author: Viliam_Bur 28 October 2014 12:56:24AM *  10 points [-]

It is extremely important to find out how to have a successful community without sociopaths.

(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like "oh, we can't send this person away just because of X; they also have so many good traits" or "I don't agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole". I believe that avoiding these - and maybe many other - failure modes is critical if we ever want to have a Friendly society.)

Comment author: lmm 28 October 2014 07:09:43PM 4 points [-]

I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.

Comment author: Jiro 28 October 2014 07:39:03PM 2 points [-]

The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don't need a watertight justification, because those are used to avoid corruption and bias and you know you don't have any of that anyway.

Comment author: Lumifer 28 October 2014 07:54:22PM 4 points [-]

The best ruler for a community is an incorruptible, bias-free dictator.

There is also that kinda-important bit about shared values...

Comment author: [deleted] 27 October 2014 12:06:08PM 10 points [-]

Recently, I started a writing wager with a friend to encourage us both to produce a novel. At the same time, I have been improving my job hunting by narrowing my focus on what I want out of my next job and how I want it. While doing these two activities, I began to think about what I was adding to the world. More specifically, I began to ask myself what good I wanted to make.

I realized that my writing a novel did not come from a desire to add a good to the world (I don't want to write a world-changing book); it was just something enjoyable. So I looked at my job. I realized that it was much the same. I'm not driven to libraries specifically by a desire to improve the world's intellectual resources; that's just a side effect. I'm driven to them out of enjoyment of the work.

So, if I'm not producing good from the two major productions of my life, I thought about what else I could produce or if I should at all. But I couldn't think of any concrete examples of good I could add to the world outside of effective altruism. I'm not an inventor nor am I a culture-shifting artist. But I wanted to find something I could add to the world to improve it, if only for my own vanity.

I decided, for the time being, on myself. Since my two biggest enjoyments (work and play) were important to me as personal achievements, not world achievements, I decided that the best thing to start with was to make myself the most efficient version of me that I could. Part of this probably came from reading about Theodore Roosevelt doing much the same to transform himself from an idiot into a badass. Sure, I've already been engaging in self-improvement for a while, but this idea of making the best me is more about trying to produce an individual worth having, rather than just maximizing my utility in a few areas for a few limited goals (e.g. writing a book, getting a job).

I'm sure this sounds simplistic since much of the LW literature already discusses such things, but it was a bit of an "aha" moment for me, and it made optimization and self-improvement more interesting. It made them into concrete projects with a real world application. I'm trying to give the world one less ineffective, dangerously deluded person. That's a good goal to strive for, I like to think.

Comment author: ChristianKl 27 October 2014 03:31:55PM 6 points [-]

I'm sure this sounds simplistic since much of the LW literature already discusses such things

Important insights usually happen to sound simple but the insight still takes years to achieve.

Comment author: wadavis 27 October 2014 07:00:30PM 10 points [-]

Yes, take the Invisible Hand approach to altruism, by pursuing your own productive wellbeing you will generate wellbeing in the worlds of others. Trickle down altruism is a feasible moral policy. Come to the Dark Side and bask in Moral Libertarianism.

Comment author: Artaxerxes 27 October 2014 03:36:08PM *  17 points [-]

Is the recommended courses page on MIRI's website up to date with regards to what textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.

I remember lukeprog used to recommend Bermudez's Cognitive Science over many others. But then So8res reviewed it and didn't like it much, and now the current recommendation is for The Oxford Handbook of Thinking and Reasoning, which I haven't really seen anyone say much about.

There are a few other things like this, for example So8res apparently read Heuristics and Biases as part of his review of books on the course list, but it doesn't seem to appear on the course list anymore, and under the heuristics and biases section Thinking and Deciding is recommended (once reviewed by Vaniver).

Comment author: [deleted] 28 October 2014 08:17:29AM 1 point [-]

Is the recommended courses page on MIRI's website up to date with regards to what textbooks they recommend for each topic? Should I be taking the recommendations fairly seriously, or more with a grain of salt? I know the original author is no longer working at MIRI, so I'm feeling a bit unsure.

I think Understanding Machine Learning (out this year) is better than Bishop's book (which is, frankly, insufferably obscurantist), and that instead of model-checking you ought to be learning a proof assistant (I learned Coq from Benjamin Pierce's Software Foundations).

Comment author: Artaxerxes 28 October 2014 09:00:01AM *  2 points [-]

The book the page recommends is Kevin Murphy's Machine Learning: A Probabilistic Perspective. I don't see any of Chris Bishop's books on the MIRI list right now, was Pattern Recognition and Machine Learning there at some point? Or am I missing something you're saying.

Comment author: [deleted] 28 October 2014 09:52:54AM 2 points [-]

Oh, well all right then. I was under the mistaken impression Bishop's book was listed. My bad!

Comment author: So8res 28 October 2014 08:25:50PM *  8 points [-]

No, it's not up to date. (It's on my list of things to fix, but I don't have many spare cycles right now.) I'd start with a short set theory book (such as Naive Set Theory), follow it up with Computation and Logic (by Boolos), and then (or if those are too easy) drop me a PM for more suggestions. (Or read the first four chapters of Jaynes on Probability Theory and the first two chapters of Model Theory by Chang and Keisler.)

Edit: I have now updated the course list (or, rather, turned it into a research guide), which is fairly up-to-date (if unpolished) as of 6 Nov 14.

Comment author: Artaxerxes 27 October 2014 03:48:37PM 18 points [-]

Not all of the MIRI blog posts get cross posted to Lesswrong. Examples include the recent post AGI outcomes and civilisational competence and most of the conversations posts. Since it doesn't seem like the comment section on the MIRI site gets used much if at all, perhaps these posts would receive more visibility and some more discussion would occur if these posts were linked to or cross posted on LW?

Comment author: John_Maxwell_IV 27 October 2014 09:51:27PM *  10 points [-]

Re: "civilizational incompetence". I've noticed "civilizational incompetence" being used as a curiosity stopper. It seems like people who use the phrase typically don't do much to delve into the specific failure modes civilization is falling prey to in the scenario they're analyzing. Heaven forbid that we try to come up with a precise description of a problem, much less actually attempt to solve it.

(See also: http://celandine13.livejournal.com/33599.html)

Comment author: Artaxerxes 28 October 2014 01:50:05AM *  3 points [-]

I too, have seen it used too early or in contexts where it probably shouldn't have been used. As long as people don't use it so much as an explanation for something, but rather as a description or judgement, its use as a curiosity stopper is avoidable.

So I suppose there is a difference between saying "bad thing x happens because of civilisational incompetence", and "bad thing x happens, which is evidence that there is civilisational incompetence."

Separate to this concern is that it also has a slight Lesswrong-exceptionalism 'peering at the world from above the sanity waterline' vibe to it as well. But that's no biggie.

Comment author: Omid 27 October 2014 03:56:13PM 8 points [-]

What chores do I need to learn how to do in order to keep a clean house?

Comment author: Emily 27 October 2014 04:03:06PM 12 points [-]

Laundry (plus ironing, if you have clothes that require that - I try not to), washing up (I think this is called doing the dishes in America), mopping, hoovering (vacuuming), dusting, cleaning bathroom and kitchen surfaces, cleaning toilets, cleaning windows and mirrors. That might cover the obvious ones? Seems like most of them don't involve much learning but do take a bit of getting round to, if you're anything like me.

Comment author: Omid 27 October 2014 04:11:18PM *  1 point [-]

Thank you. How many hours a week do you spend doing these things?

Comment author: Nornagest 27 October 2014 07:07:32PM *  7 points [-]

It's really hard to estimate that accurately, because for me something like 90% of cleanliness is developing habits that couple it with the tasks that necessitate it: always and automatically washing dishes after cooking, putting away used clothes and other sources of clutter, etc. Habits don't take mental effort, but for the same reason it's almost impossible to quantify the time or physical effort that goes into them, at least if you don't have someone standing over you with a stopwatch.

For periodic rather than habitual tasks, though, I spend maybe half an hour a week on laundry (this would take longer if I didn't have a washer and dryer in my house, though, and there are opportunity costs involved), and another half hour to an hour on things like vacuuming, mopping, and cleaning porcelain and such.

Comment author: Emily 28 October 2014 09:31:46AM 1 point [-]

My timelog tells me that over the last ~7 weeks I've spent an average of 22 mins/day doing things with the tag "chores". That time period does include a two week holiday during which I spent a lot less time than usual on that stuff, so it's probably an underestimate. Agree with Nornagest below about the importance of small everyday habits! (Personally I am good at some of these, terrible at others.)

Comment author: Emily 28 October 2014 11:33:23AM 1 point [-]

I should add that I live with another person, who does his share of the chores, so this time would probably increase if I wanted the same level of clean/tidy while living alone. I'm not sure how time per person scales with changes in the number of people though... probably not linearly, but it must depend on all sorts of things like how exactly you share out the chores, what the overhead sort of times are like for doing a task once regardless of how much task there is, and how size of living space changes with respect to number of people living in it. Also, if you add actively non-useful people like babies, I expect all hell breaks loose.

Comment author: RichardKennaway 27 October 2014 04:13:52PM 9 points [-]

I'd add, not leaving clutter lying around. It both collects dust, and makes cleaning more of an effort. Keep it packed away in boxes and cupboards. (Getting rid of clutter entirely is a whole separate subject.)

Comment author: Manfred 27 October 2014 11:37:30PM *  3 points [-]

Adding on to Emily:

  • Having a particular hamper or even corner of your room where you put dirty laundry, so that it isn't all over your floor. When this hamper / corner is full, do your laundry.
  • Analogous organized or occasionally-organized places for paperwork or whatever else is being clutter-y.
  • If you have ancient carpet and it's dirty and stinky, learn how to rent a Rug Doctor-type steam cleaner from a nearby supermarket.
  • If you have a bunch of broken or dirty / stinky stuff in your house, learn how to get the trash people to haul it away, and learn where to buy cheap used furniture / cheap online kitchen supplies / whatever to replace your old junk.
  • Having tools handy to tidy up nails / tighten loose screws etc. when you notice them.
  • Keeping a brush and plunger near your toilet.
  • If your sink has clogged any time in the past 6 months, also consider having chemical unclogger / a long skinny "snake" (that's what it's actually called) that you shove down the drain and wiggle around to bust clogs.
  • Figure out where all the places that are hard to clean are. These are the places that will have 50 years of accumulated nasty dirt that will make the whole house smell better when you get rid of it.

Comment author: RichardKennaway 28 October 2014 09:56:37AM 0 points [-]

Avoid these and you'll be off to a good start. :)

Comment author: hyporational 28 October 2014 10:57:30AM *  0 points [-]

If you've got the money and a simple enough apartment layout, I recommend a vacuum cleaning robot. My crawling saucer collects a ridiculous amount of dust from the floor every day, and this seems to keep other surfaces and the air dustless too. There's no way I could clean up that much dust myself, and I'd do the cleaning so rarely that the dust would get all over the place.

Comment author: Lumifer 27 October 2014 04:11:03PM 5 points [-]

A good semi-rant by Ken White of popehat on GamerGate. I recommend it as an excellent example of applied rationality and of sorting through the hysterics.

Comment author: Artaxerxes 27 October 2014 04:11:35PM 12 points [-]

Luke's IAMA on reddit's r/futurology in 2012 was pretty great. I think it would be cool if he did another, a lot has changed in 2+ years. Maybe to coincide with the December fundraising drive?

Comment author: Evan_Gaensbauer 28 October 2014 06:53:20AM 3 points [-]

I just sent a message to Luke. Hopefully he will notice it.

Comment author: [deleted] 28 October 2014 08:20:26AM 3 points [-]

If he could not repeat the claim that UFAI is so easily compressible it could "spread across the world in seconds" through the internet, that would be quite helpful, actually. Even in the rich world, with broadband, transferring an intelligent agent all across the world will take whole hours, especially given the time necessary for the bugger to crack into and take control of the relevant systems (packaging itself as a trojan horse and uploading itself to 4chan in a "self-extracting zip" of pornography will take even longer).
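
A back-of-envelope version of the "whole hours" point (both figures below are assumptions of mine, not claims from the comment):

```python
# Even generously ignoring the time needed to crack into target systems,
# just copying a large agent over a consumer broadband link takes hours.
agent_size_bytes = 1e12    # suppose the agent plus its data is ~1 TB
link_bits_per_s = 100e6    # ~100 Mbit/s consumer broadband
transfer_hours = agent_size_bytes * 8 / link_bits_per_s / 3600
# roughly 22 hours for a single hop, not "seconds"
```

Different assumed sizes and link speeds move the number, but getting it down to "seconds" requires implausibly small agents or implausibly fast links.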

Comment author: ruelian 27 October 2014 05:13:24PM 8 points [-]

I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?

To specify, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.

The other is nonverbal and based on understanding terms, definitions, theorems, and the ways they connect to each other on an intuitive level (note: this takes a while to achieve, and I haven't always managed it) and letting my mind think it out, making logical steps of inference in my head, somewhat less consciously. This type of thinking is much faster, though it has a tendency to get derailed or stuck and produces good results less reliably.

Which of those, if any, sounds closer to the way you think about math? (Note: most of the people I've talked to about this don't polarize it quite so much and tend to do a bit of both, i.e. thinking through a proof consciously but solving potential problems that come up while writing it more intuitively. Do you also divide different types of thinking into separate processes, or use them together?)

The reason I'm asking is that I'm trying to transition to spending more of my time thinking about math not in a classroom setting and I need to figure out how I should go about it. The fast kind of thinking would be much more convenient, but it appears to have downsides that I haven't been able to study properly due to insufficient data.

Comment author: RowanE 27 October 2014 06:38:09PM 2 points [-]

I'm only a not-very-studious undergraduate (in physics), and don't spend an awful lot of time thinking about maths outside of that, but I pretty much only think about maths in the nonverbal way - I can understand an idea when it's verbally explained to me, but I have to "translate it" into nonverbal maths to get use out of it.

Comment author: wadavis 27 October 2014 06:50:48PM 2 points [-]

As someone employed doing mid-level math (structural design), I'm much like most of the others you've talked to. The entirely non-verbal, intuitive method is fast, and it tends to be highly correct if not accurate. The verbal method is a lot slower, but it lends itself nicely to being put to paper and is great for getting highly accurate if not correct answers. So everything that matters gets done twice, for accurate, correct results. Of course, because it is fast, the intuitive method is preferred for brainstorming; then the verbal method verifies any promising brainstorms.

Comment author: ruelian 27 October 2014 07:28:04PM 2 points [-]

Could you please explain what you mean by "correct" and "accurate" in this case? I have a general idea, but I'm not quite sure I get it.

Comment author: wadavis 27 October 2014 08:14:41PM 1 point [-]

"Correct" and "precise" may have been better terms. By correct I mean a result that I have very high confidence in, but that is not precise enough to be usable. By accurate I mean a result that is very precise, but with far less confidence that it is correct.

As an example, consider a damped-oscillation word problem from first year. Just by looking at it, you are very confident that as time approaches infinity the displacement will approach some value, but you don't know that value. Now when you crunch the numbers (the verbal process in the extreme), you get a very specific value that the function approaches, but you have less confidence that that value is correct; you could have made any of a number of mistakes. In this example, the classic wrong result is that the displacement is in the opposite direction from the applied force.

This is a very simple example so it may be hard to separate the non-verbal process from the verbal, but there are many cases where you know what the result should look like but deriving the equations and relations can turn into a black box.
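
The damped-oscillation example can be sanity-checked numerically. Here is a minimal sketch of my own (the numbers are illustrative assumptions, not from the comment) showing the two methods agreeing: the nonverbal answer "it settles at F/k, in the direction of the force" and the crunched numbers.

```python
# Damped oscillator m*x'' + c*x' + k*x = F driven by a constant force F.
# As t -> infinity the displacement should settle at F/k, displaced in the
# same direction as the applied force.
m, c, k, F = 1.0, 2.0, 4.0, 8.0   # mass, damping, stiffness, applied force
x, v, dt = 0.0, 0.0, 0.001       # start at rest

for _ in range(20000):           # integrate out to t = 20 s
    v += dt * (F - c * v - k * x) / m   # semi-implicit Euler: velocity first...
    x += dt * v                         # ...then position, for stability

steady_state = F / k             # the "just by looking at it" answer: 2.0
```

If the crunched number disagreed in sign with `steady_state`, that would be the classic wrong result the comment describes.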

Comment author: ruelian 27 October 2014 08:40:03PM *  2 points [-]

Right, that makes much more sense now, thanks.

One of my current problems is that I don't understand my brain well enough for nonverbal thinking not to turn into a black box. I think this might be a matter of inexperience, as I only recently managed intuitive, nonverbal understanding of math concepts, so I'm not always entirely sure what my brain is doing. (Anecdotally, my intuitive understanding of a problem produces good results more often than not, but any time my evidence is anecdotal there's this voice in my head that yells "don't update on that, it's not statistically relevant!")

Does experience in nonverbal reasoning about math actually lend itself to better understanding of said reasoning, or is that just a cached thought of mine?

Comment author: wadavis 27 October 2014 10:02:24PM 1 point [-]

Doing everything both ways, nonverbal and verbal, has lent itself to better understanding of the reasoning. Which touches on the anecdote problem: if you test every nonverbal result, you get something statistically relevant. If your nonverbal results are right more often than not, testing every result and digging for the mistakes will increase your understanding (disclaimer: this is hard work).

Comment author: ruelian 28 October 2014 05:23:58PM 2 points [-]

So, essentially, there isn't actually any way of getting around the hard work. (I think I already knew that and just decided to go on not acting on it for a while longer.) Oh well, the hard work part is also fun.

Comment author: Strangeattractor 27 October 2014 06:52:59PM 3 points [-]

I usually think about math nonverbally. I am not usually doing such thinking to come up with proofs. My background is in engineering, so math was taught to me with a different sort of approach than it was to the people in the math faculty at the university I attended.

Sometimes I do go through a problem step by step, but usually not verbally. I sometimes make notes to help me remember things as I go along. Constraints, assumptions, design goals, etc. Explicitly stating these, which I usually do by writing them on paper, not speaking them aloud, if I'm working by myself on a problem, can help. But sometimes I am not working by myself and would say them out loud to discuss them with other people.

Also, there is often more than one way to visualize or approach a problem, and I will do all of them that come to mind.

I would suggest, to spend more time thinking about math, find something that you find really beautiful about math and start there, and learn more about it. Appreciate it, and be playful with it. Also, find a community where you can bounce ideas around and get other people's thoughts and ideas about the math you are thinking about. Some of this stuff can be tough to learn alone. I'm not sure how well this advice might work, your mileage may vary.

When I am really understanding the math, it seems like it goes directly from equations on the paper right into my brain as images and feelings and relations between concepts. No verbal part of it. I dream about math that way too.

Comment author: ruelian 27 October 2014 07:26:20PM 2 points [-]

I only got to a nonverbal level of understanding of advanced math fairly recently, and the first time I experienced it I think it might have permanently changed my life. But if you dream about math...well, that means I still have a long way to go and deeper levels of understanding to discover. Yay!

Follow-up question (just because I'm curious): how do you approach math problems differently when working on them from the angle of engineering, as opposed to pure math?

Comment author: Strangeattractor 28 October 2014 07:57:17AM 3 points [-]

It seemed to me that the people I knew who were studying pure math spent a lot of time on proofs, and that math was taught to them with very little context for how the math might be used in the real world, and without a view as to which parts were more important than others.

In engineering classes we proved things too, but that was usually only a first step to using the concepts to work on some other problem. There was more time spent on some types of math than on others. Some things were considered to be more useful and important than others. Usually some sort of approximations or assumptions would be used, in order to make a problem simpler and able to be solved, and techniques from different branches of math were combined together whenever useful, often making for some overlap in the notation that had to be dealt with.

There was also the idea that any kind of math is only an approximate model of the true situation. Any model is going to fail at some point. Every bridge that has been built has been built using approximations and assumptions, and yet most bridges stay up. Learning when one can trust the approximations and assumptions is vital. People can die if you get it wrong. Learning the habit of writing down explicitly what the assumptions and approximations are, and to have a sense for where they are valid and where they are not, is a skill that I value, and have carried over into other aspects of my life.

Another thing is that math is usually in service of some other goal. There are design constraints and criteria, and whatever math you can bring in to get it done is welcome, and other math is extraneous. The beauty of math can be admired, but a kludgy theory that is accurate to real world conditions gets more respect than a pretty theory that is less accurate. In fact, sometimes engineers end up making kludgy theory that solves engineering problems into some sophisticated mathematics that looks more formal and has some interesting properties, and then it has a beauty of its own, although some of the beauty comes from knowing how it fits into a real world phenomenon.

Also, engineers tend to work in teams, not alone. So communicating with each other, and making sure that all the people on the team have a similar understanding of a situation, is a non-trivial part of the work. You don't want a situation where one person has one type of abstraction in their head, and another person has a different one, and they don't realize it, and when they go off to do their separate work, it doesn't match up. This can lead to all sorts of problems, not limited to cost overruns, design flaws, delays, and even deaths. So, if you hear engineers discussing nitpicky details and going over technical concepts more than once, that is one major reason why. You really need people to be on the same page.

Teamwork is so important to engineering that when taking classes, we were encouraged to talk to each other and work together on problems, before submitting answers. Whereas the people over in math were forbidden to talk to each other about their work before handing it in. That policy might be different at different schools. But I think it shows an important difference in culture.

Math is certainly something that can be enjoyed and practiced solo. But especially on some of the most tricky concepts of math that I have learned, I benefitted a lot from being able to discuss it with people, and get new insights and understanding from their perspectives. Sometimes I didn't even realize that I didn't properly understand a concept until I attempted to use it, and got a completely different answer from someone else who was attempting to use it.

I said it can get kludgy, and that the focus is on real-world problems, but there are times when it does feel clean and pure, especially when people make real-world objects that correspond pretty well to ideal mathematical objects. For example, using 4th-order differential equations to calculate the bending moments for I-beams felt peaceful and pretty, once I got the hang of it, and I think it is not unlike something you might find in a pure math course.
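
For the curious, the I-beam remark refers to the Euler-Bernoulli beam equation. Here is an illustrative sketch with numbers of my own (not the commenter's) checking the closed-form deflection of a simply supported beam under uniform load:

```python
# Euler-Bernoulli: E*I*w''''(x) = q for a simply supported beam of length L
# under uniform load q.  Integrating four times with w = w'' = 0 at both
# ends gives
#   w(x) = q * x * (L**3 - 2*L*x**2 + x**3) / (24*E*I),
# whose midspan value is the familiar 5*q*L**4 / (384*E*I).
E = 200e9   # Pa, Young's modulus of steel (illustrative)
I = 8e-6    # m^4, second moment of area of a small I-beam (illustrative)
L = 6.0     # m, span
q = 10e3    # N/m, uniform load

def deflection(x):
    return q * x * (L**3 - 2 * L * x**2 + x**3) / (24 * E * I)

midspan = deflection(L / 2)                 # maximum deflection
closed_form = 5 * q * L**4 / (384 * E * I)  # textbook formula
```

The "nonverbal" check here is that the deflection vanishes at both supports and peaks at midspan, in the direction of the load.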

I'm pretty enthusiastic about math, it's one of my favourite things to think about and do.

Comment author: lmm 27 October 2014 07:36:52PM 1 point [-]

I don't really draw that distinction. I'd say that my thinking about mathematics is just as verbal as any other thinking. In fact, a good indication that I'm picking up a field is when I start thinking in the language of the field (i.e. I will actually think "homology group" and that will be a term that means something, rather than "the group formed by these actions...")

Comment author: ruelian 27 October 2014 08:13:33PM 1 point [-]

I'd say that my thinking about mathematics is just as verbal as any other thinking.

Just to clarify, because this will help me categorize information: do you not do the nonverbal kind of thinking at all, or is it all just mixed together?

Comment author: lmm 27 October 2014 10:26:07PM 1 point [-]

I'm not really conscious of the distinction, unless you're talking about outright auditory things like rehearsing a speech in my head. The overwhelming majority of my thinking is in a format where I'm thinking in terms of concepts that I have a word for, but probably not consciously using the word until I start thinking about what I'm thinking about. Do you have a precise definition of "verbal"? But whether you call it verbal or not, it feels like it's all the same thing.

Comment author: ruelian 28 October 2014 04:50:20PM 1 point [-]

I don't really have good definitions at this point, but in my head the distinction between verbal and nonverbal thinking is a matter of order. When I'm thinking nonverbally, my brain addresses the concepts I'm thinking about and the way they relate to each other, then puts them to words. When I'm thinking verbally, my brain comes up with the relevant word first, then pulls up the concept. It's not binary; I tend to put it on a spectrum, but one that has a definite tipping point. Kinda like a number line: it's ordered and continuous, but at some point you cross zero and switch from positive to negative. Does that even make sense?

Comment author: lmm 28 October 2014 07:00:46PM 1 point [-]

It makes sense but it doesn't match my subjective experience.

Comment author: ruelian 28 October 2014 07:33:15PM 1 point [-]

Alright, that works too. We're allowed to think differently. Now I'm curious, could you define your way of thinking more precisely? I'm not quite sure I grok it.

Comment author: TsviBT 27 October 2014 09:46:38PM 1 point [-]

For me, the nonverbal thing is the proper content of math---drawing (possibly mental) pictures to represent objects and their interactions. If I get stuck, I try doing simpler examples. If I'm still stuck, then I start writing things down verbally, mainly as a way to track down where I'm confused or where exactly I need to figure something out.

Comment author: Gunnar_Zarncke 27 October 2014 09:52:21PM 1 point [-]

I don't see a clear verbal vs. non-verbal dichotomy - or at least the non-verbal side has lots of variants. To gain an intuitive non-verbal understanding can involve

  • visual aids (from precise to vague): graphs, diagrams, patterns (esp. repetitions), pictures, vivid imagination (esp. for memorizing)

  • acoustic aids: rhythms (works with muscle memory too), patterns in the spoken form, creating sounds for elements

  • abstract thinking (from precise to vague): logical inference, semantic relationships (is-a, exists, always), vague relationships (discovering that the more of this seems to imply the more of that)

Note: Logical inference seems to be the verbal part you mean, but I don't think symbolic thinking is always verbal. Its conscious derivation may be though.

And I hear that the verbal side despite lending itself to more symbolic thinking can nonetheless work its grammar magic on an intuitive level too (though not for me).

Personally, if I really want to solve a mathematical problem, I immerse myself in it. I try lots of attack angles from the list above (not systematically, but as seems fit). I'm an abstract thinker and don't rely on verbal, acoustic or motor cues a lot. Even visual aids don't play a large role, though I do a lot of sketching, listing/enumerating combinations, drawing relations/trees, tabulating values/items. If I suspect a repeating pattern I may tap to it to sound it out. If there is lengthy logical inference involved that I haven't internalized, I speak the rule repeatedly to use the acoustic loop as a memory aid. I play around with it during the day, visualizing relationships or following steps, sometimes until in the evening everything blurs and I fall asleep.

Comment author: Luke_A_Somers 27 October 2014 10:40:48PM 3 points [-]

I don't tend to do a lot of proofs anymore. When I think of math, I find it most important to be able to flip back and forth between symbol and referent freely - look at an equation and visualize the solutions, or (to take one example of the reverse) see a curve and think of ways of representing it as an equation. Since actual numbers will often not be available when visualizing, I tend to think of properties of a Taylor or Fourier series for that graph. I do a visual derivative and integral.

That way, the visual part tells me where to go with the symbolic part. Things grind to a halt when I have trouble piecing that visualization together.

Comment author: ruelian 28 October 2014 04:55:11PM 1 point [-]

This appears to be a useful skill that I haven't practiced enough, especially for non-proof-related thinking. I'll get right on that.

Comment author: Fhyve 28 October 2014 07:18:47AM 1 point [-]

I'm a math undergrad, and I definitely spend more time in the second sort of style. I find that my intuition is rather reliable, so maybe that's why I'm so successful at math. This might be touching on the "two cultures of mathematics", where I am definitely on the theory-builder/algebraist side. I study category theory and other abstract nonsense, and I am rather bad (relative to my peers) at Putnam-style problems.

Comment author: Bundle_Gerbe 28 October 2014 07:36:22AM 1 point [-]

As someone with a Ph.D. in math, I tend to think verbally in as much as I have words attached to the concepts I'm thinking about, but I never go so far as to internally vocalize the steps of the logic I'm following until I'm at the point of actually writing something down.

I think there is another much stronger distinction in mathematical thinking, which is formal vs. informal. This isn't the same distinction as verbal vs. nonverbal, for instance, formal thinking can involve manipulation of symbols and equations in addition to definitions and theorems, and I often do informal thinking by coming up with pretty explicitly verbal stories for what a theorem or definition means (though pictures are helpful too).

I personally lean heavily towards informal thinking, and I'd say that trying to come up with a story or picture for what each theorem or definition means as you are reading will help you a lot. This can be very hard sometimes. If you open a book or paper and aren't able to get anywhere when you try to do this with the first chapter, it's a good sign that you are reading something too difficult for your current understanding of that particular field. At a high level of mastery of a particular subject, you can turn informal thinking into proofs and theorems, but the first step is to be able to create stories and pictures out of the theorems, proofs, and definitions you are reading.

Comment author: RichardKennaway 28 October 2014 09:51:03AM 1 point [-]

Which of those, if any, sounds closer to the way you think about math?

Each serves its own purpose. It is like the technical and artistic sides of musical performance: the technique serves the artistry. In a sense the former is subordinate to the latter, but only in the sense that the foundation of a building is subordinate to its superstructure. To perform well enough that someone else would want to listen, you need both.

This may be useful reading, and the essays here (from which the former is linked).

Comment author: ruelian 28 October 2014 04:53:12PM 1 point [-]

reads the first essay and bookmarks the page with the rest

Thanks for that, it made for enjoyable and thought-provoking reading.

Comment author: James_Miller 27 October 2014 06:08:05PM 10 points [-]

Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.

Someone secretly and randomly picks a jar, with an equal chance of choosing either. This picker then takes N randomly selected balls from his chosen jar with replacement. If a ball is silver he keeps silent, whereas if a ball is red he says “red.”

You hear nothing. You make the straightforward calculation using Bayes’ rule to determine the new probability that the picker was drawing from Jar S.

But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?

I’m currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential risk strategies. This hypothetical is attempting to get at whether it matters if we assume that aliens would spread at the speed of light killing everything in their path.
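
For concreteness, the "straightforward calculation using Bayes' rule" before learning about the bombs can be sketched as follows (a minimal sketch; the function name is mine):

```python
# Posterior probability that the picker drew from Jar S, given that you
# heard nothing after n draws (with replacement).  Priors are 1/2 each;
# Jar S never yields "red", and Jar R stays silent with probability 0.9
# on each draw.
def p_jar_s(n):
    p_silent_s = 1.0          # P(hear nothing | Jar S)
    p_silent_r = 0.9 ** n     # P(hear nothing | Jar R)
    return 0.5 * p_silent_s / (0.5 * p_silent_s + 0.5 * p_silent_r)
```

Silence is evidence for Jar S, and the evidence strengthens with every silent draw; the question the comment poses is whether relabeling "hearing 'red'" as "being dead" should change this number.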

Comment author: polymathwannabe 27 October 2014 06:13:18PM 2 points [-]

Before I actually do the math, "you hear nothing" appears to affect my estimate exactly in the same way as "you're still alive."

Comment author: private_messaging 27 October 2014 06:31:17PM 7 points [-]

I had a conversation with another person regarding this Leslie's-firing-squad type of stuff. Basically, I came up with a caveman analogy, with the cavemen facing lethal threats. It's pretty clear - from the outside - that the cavemen who do probability correctly, and don't do anthropic reasoning with regard to tigers in the field, will do better at mapping lethal dangers in their environment.

Comment author: James_Miller 28 October 2014 01:57:13PM 2 points [-]

Thanks for letting me know about "Leslie's firing squad[s]"

Comment author: private_messaging 28 October 2014 03:17:10PM *  3 points [-]

You're welcome. So what's your actual take on the issue? I've never seen a coherent explanation of why bombs must make a difference. I've seen appeals to "but you wouldn't be thinking anything if it was red", which ought to cancel out perfectly if you apply that to the urn choice as well.

edit: i.e. this anthropics, to me, is sort of like how you could calculate the forces in a mechanical system, but make an error somewhere, and that error yields an apparent perpetuum mobile, as the forces on your wheel with water and magnets fail to cancel out. Likewise, you evaluate the impact of some irrelevant information, you make an error somewhere, and the irrelevant information makes a difference.

Comment author: Lumifer 27 October 2014 06:45:09PM 3 points [-]

A side note: under the cherry bomb scenario the probability of you hearing the word "red" is zero.

Comment author: Manfred 27 October 2014 11:06:35PM *  0 points [-]

If the two jar scenarios start with equal anthropic measure (i.e. looking in from the outside), then you really are less likely to have jar R if you're not dead.

Comment author: Strangeattractor 27 October 2014 06:39:38PM 1 point [-]

After reading through the Quantum Physics sequence, I would like to know more about the assumptions and theories behind the idea that an amplitude distribution factorizes, or approximately factorizes. Where would be a good place to learn more about this? I would appreciate some recommendations for journal articles to read, or specific sections of specific books, or if there's another better way to learn this stuff, please let me know.

In the blog posts in the sequence, an analogy comes up a few times, saying that it doesn't make sense to distinguish between the two factors of 3 when multiplying 3 x 3 x 2 to get 18, and that similarly, the amplitude blobs in configuration space that can sometimes appear to be like particles are factors of...I'm not sure what. The wavefunction? A probability density function (but we're calling it amplitude instead of probabilities)? Something else? I didn't entirely follow that section, so I'm not sure how to look it up.

When I searched on Google Scholar for "quantum factorization" I got journal articles about how to use quantum computers to factor prime numbers. When I looked up "particle indistinguishability" I got papers about very small numbers of particles in a state of quantum entanglement. When I searched for "amplitude distribution factorizes" I got articles about tomography and mesons and keys for quantum cryptography.

I'm also confused about: what precisely is an amplitude distribution? Amplitude of what? Distributed over what? I can make some guesses, but how do I look it up?

I would also like to know: what needs to be true in order for this concept to be true? Does it depend on the many worlds interpretation of quantum mechanics, or would it hold true in the other interpretations? Does it require the wavefunction? Just how good is the analogy about the factors of 18, and where would the analogy break down? What do the equations look like that lead to these conclusions, and what are they called, so I can look them up? What assumptions are used to formulate the equations? What is the difference between factorizing exactly and approximately? Why does the idea of roughly factorizing come up at all, why isn't it all exact? How accurate would it be to describe a person as a factor of the wave function, and what does that mean? Is there a technical term for "blob of amplitude"?

The Quantum Physics sequence is the best introduction to quantum mechanics that I've read, but it is rather incomplete about explicitly stating the assumptions it is using, or giving references to where to learn more about each topic.

Help?

Comment author: DanielLC 27 October 2014 08:47:12PM 0 points [-]

I think the factorization is a reference to quantum field theory. I haven't learned quantum field theory, though, so I can't comment much. From what I can gather, multiplying something by the creation operator gets you the same state but with an extra particle.

I can tell you that at the very minimum, assuming Copenhagen and the minimal amount of physics to allow entanglement to happen at all, whenever two of the same kind of particle are entangled, they have a 50% chance of swapping. If you use MWI, it's that I can find a universe with the same probability density in which those particles are swapped.

Comment author: Manfred 27 October 2014 10:52:40PM *  3 points [-]

Relevant wikipedia link. The keyword is something like "many-body wavefunction."

But seriously, if you're curious, try to find a textbook online, or a series of video lectures for an introductory course (you might either watch the whole course, or skip to what you want to learn and then try and figure out what the prerequisites are, then do the same thing for the prerequisites).

Comment author: Jackercrack 27 October 2014 07:14:21PM *  7 points [-]

I'd like to ask LessWrong's advice. I want to benefit from CFAR's knowledge on improving one's instrumental rationality, but being a poor graduate I do not have several thousand in disposable income, nor a quick way to acquire it. I've read >90% of the sequences, but despite having read lukeprog's and Alicorn's sequences I am aware that I do not know what I do not know about motivation and akrasia. How can I best improve my instrumental rationality on the cheap?

Edit: I should clarify, I am asking for information sources: blogs, book recommendations, particularly practice exercises and other areas of high quality content. I also have a good deal of interest in the science behind motivation, cognitive rewiring and reinforcement. I've searched myself and I have a number of things on my reading list, but I wanted to ask the advice of people who have already done, read or vetted said techniques so I can find and focus on the good stuff and ignore the pseudoscience.

Comment author: RomeoStevens 27 October 2014 08:34:19PM 4 points [-]

CFAR has financial aid.

Also, attending LW meetups and asking about organizing meetups based on instrumental rationality material is cheap and fun.

Comment author: Jackercrack 27 October 2014 09:56:20PM 2 points [-]

Somehow I doubt the financial aid will stretch to the full amount, and my student debt is already somewhat fearsome.

I'm on the LW meetups already as it happens. I'm currently attempting to have my local one include more instrumental rationality but I lack a decent guide of what methods work, what techniques to try or what games are fun and useful. For that matter I don't know what games there are at all beyond a post or two I stumbled upon.

Comment author: Vaniver 27 October 2014 10:28:23PM 5 points [-]

Somehow I doubt the financial aid will stretch to the full amount, and my student debt is already somewhat fearsome.

You could ask Metus how much they covered for them, or someone at CFAR how much they'd be willing to cover. The costs for asking are small, and you won't get anything you don't ask for.

Comment author: Jackercrack 27 October 2014 10:57:55PM 3 points [-]

Fair point, done. On a related note, I wonder how I can practice convincing my brain that failure does not mean death like it did in the old ancestral environment.

Comment author: Metus 27 October 2014 11:16:54PM 7 points [-]

Exposure therapy: Fail on small things, then larger ones, where it is obvious that failure doesn't mean death. First remember past experiences where you failed and did not die, then go into new situations.

Comment author: NancyLebovitz 28 October 2014 02:27:29AM 3 points [-]

Even in the ancestral environment, not all failures (I suspect a fairly small proportion of them) meant death.

Comment author: cursed 28 October 2014 06:50:06AM 9 points [-]

I've been to several of CFAR's classes throughout the last 2 years (some test classes and some more 'official' ones) and I feel like it wasn't a good use of my time. Spend your money elsewhere.

Comment author: hyporational 28 October 2014 11:12:04AM 4 points [-]

What made it poor use of your time?

Comment author: cursed 28 October 2014 09:15:02PM *  15 points [-]

I didn't learn anything useful. They taught, among other things, "here's what you should do to gain better habits". I tried it and it didn't work on me. YMMV.

One thing that really irked me was the use of cognitive 'science' to justify their lessons 'scientifically'. They did this by using big scientific words that felt like they were trying to attempt to impress us with their knowledge. (I'm not sure what the correct phrase is - the words weren't constraining beliefs? don't pay rent? they could have made up scientific sounding words and it would have had the same effect.)

Also, they had a giant 1-2 page listing of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles I've previously read on the list and they had absolutely no idea what I was talking about.

ETA: I might go to another class in a year or two to see if they've improved. Not convinced that they're worth donating money towards at this moment.

Comment author: MathiasZaman 27 October 2014 09:28:54PM *  7 points [-]

I've recently started a tumblr dedicated to teaching people what amounts to Rationality 101. This post isn't about advertising that blog, since the sort of people that actually read Less Wrong are unlikely to be the target audience. Rather, I'd like to ask the community for input on what are the most important concepts I could put on that blog.

(For those that would like to follow this endeavor, but don't like tumblr, I've got a parallel blog on wordpress)

Comment author: wadavis 27 October 2014 10:18:57PM 11 points [-]

Admitting you are wrong.

Comment author: Manfred 27 October 2014 10:48:28PM 12 points [-]

Highly related: When you even might be wrong, get curious about that possibility rather than scared of it.

Comment author: Manfred 27 October 2014 10:49:27PM 2 points [-]

Taking stock of what information you have, and what might be good sources for information, well in advance of making a decision.

Comment author: jkadlubo 28 October 2014 12:11:16PM 4 points [-]

Exercises in small rational behaviours. E.g. people generally are very reluctant to apologize about anything, even if the case means little to them and a lot to the other person. Maybe it's "if I apologize, that will mean I was a bad person in the first place" thinking, maybe something else.

It's a nice exercise: if somebody seems to want something from you or apparently is angry with you when you did nothing wrong, stop for a moment and think: how much will it cost me to just say "I'm sorry, I didn't mean to offend you"? After all, those are just words. You don't have to "win" every confrontation and convince the other person you are right and their requirements are ridiculous. And if you apologize, in fact you both will have a better day - the other person will feel appreciated and you will be proud you did something right.

(A common situation from my experience is that somebody pushes me in a queue, I say "excuse me, but please don't stand so close to me/don't look over my arm when I'm writing the PIN code etc." and then the pusher often starts arguing how my behaviour is out of line - making both of us and the cashier upset)

Come to think of it, it's a lot like Quirrell's second lesson in HPMoR...

Comment author: ruelian 28 October 2014 07:56:44PM 1 point [-]

Map and territory - why is rationality important in the first place?

Comment author: Capla 28 October 2014 12:02:10AM 2 points [-]

It had never occurred to me that the term "applause light" could be taken so literally.

Comment author: Evan_Gaensbauer 28 October 2014 07:58:30AM 2 points [-]

My friend recently attended an event at which Ray Kurzweil and an urban planner named Richard Florida were speaking. He didn't like Richard Florida as a speaker, citing how Richard Florida 'sounded just like a politician', and was speaking 'only in applause lights'. I noted it was funny to use 'applause light' in that context, as an auditorium where the speaker looks over a crowd while bathed in light, saying things specifically to garner applause, is just about the most literal interpretation of 'applause light' I could think of.

Comment author: Evan_Gaensbauer 28 October 2014 12:37:28AM *  3 points [-]

The following model is my new hypothesis for generating better OKCupid profiles for myself while remaining honest.

  • I brainstorm what I want to include in my profile in a positive way without lying. This may include goal-factoring on what honest signals I'm trying to send. Then, I see how what I brainstormed fits into the different prompts on OKCupid profiles.

  • I generate multiple clause-like chunks for each item/object/quality of myself I'm trying to express in my profile. I then A/B test the options for each item across a cross-section of individuals similar to the ones I would want to attract on OKCupid. This may include random assignment of participants to conditions, to some extent. I would still need to think of metrics or ratings for this to best suit my goals.

  • Construct complete paragraphs for the various sections of my profile using whichever options were the most successful.

Caveats: I would want enough experimental control to ensure the test participants were people I could trust to respond honestly, and without trolling me. However, this would decrease random selection. How much should I care about random selection, and thus external validity, in this case?
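If it helps make the second step concrete, the random-assignment part can be sketched in a few lines of Python (everything here is a hypothetical placeholder: `participants` is the list of raters, `variants` the candidate chunks for one profile item; the rating metric is deliberately left open, as in the bullet):

```python
import random

def assign_variants(participants, variants, seed=0):
    """Randomly assign one text variant to each test participant.

    Both arguments are hypothetical placeholders: `participants` is a list
    of people rating the profile text, `variants` the candidate chunks for
    a single profile item. A fixed seed keeps the assignment reproducible.
    """
    rng = random.Random(seed)
    return {person: rng.choice(variants) for person in participants}

# Each rater sees exactly one candidate chunk for the item being tested.
assignments = assign_variants(["rater1", "rater2", "rater3"],
                              ["chunk A", "chunk B"])
```

Collecting ratings per variant and comparing means would then answer "which chunk worked best" - though with a trusted-friends sample, as the caveat notes, external validity stays limited.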

Otherwise, what do you think of the model? What's wrong with it? If it's not completely awful, I'll play-test it with an OKCupid profile just for the value of information, and see if we can't learn something.

Comment author: MrMind 28 October 2014 03:02:03PM 1 point [-]

Just test it and report back the result :) That will teach you and us many things we can't see right now.

Comment author: cursed 28 October 2014 06:51:27AM *  4 points [-]

Those who are currently using Anki on a mostly daily or weekly basis: what are you studying/ankifying?

To start: I'm working on memorizing programming languages and frameworks because I have trouble remembering parameters and method names.

Comment author: Emile 28 October 2014 11:02:01AM *  4 points [-]

These days, most of my time on Anki is on Japanese (which I'm learning for fun) and Chinese (which I already know, but I'm brushing up on tones and characters).

Looking through my decks, I also have decks on:

  • Algorithms and data structures (from a couple books I read on that)
  • Communication (misc. tips on storytelling, giving talks, etc.)
  • Game Design (insights and concepts that seemed valuable)
  • German
  • Git and Unix Command Line commands
  • Haskell
  • Insight (misc. stuff that seemed interesting/important)
  • Mnemonics
  • Productivity (notes from Lukeprog's posts and various other sources)
  • Psychology and neuroscience
  • Rationality Habits (one of the few decks I have that came pre-made, from Anna Salamon I think, though I also added some cards and deleted others)
  • Statistics
  • Web Technologies (some stuff on Angular JS and CSS that I got tired of looking up all the time)

(also a few minor decks with very few cards)

I review those pretty much every day (I sometimes leave a few unfinished, depending on how much idle time I have in queues, transport, etc.)

Comment author: cursed 28 October 2014 09:23:39PM 2 points [-]

That's fantastic. How many cards total do you have, and how many minutes a day do you study?

Comment author: philh 28 October 2014 07:09:47PM 3 points [-]

Geography: "what direction [relative to central london] is this tube stop in?", English counties (locations), U.S. states (locations, capitals), Canadian territories and provinces (locations and capitals), countries (locations, capitals, and at some point I'll add flags). (Most of these came from ankiweb originally, but I had to add reverse cards.)

Bayes: conversions between odds, probabilities and decibels (specific numbers and more recently the general formulas)
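For anyone making similar cards, the conversions in that deck are simple enough to sketch in Python (assuming the usual convention of 10·log10(odds) for decibels of evidence):

```python
import math

def prob_to_odds(p):
    """Probability -> odds in favor (e.g. 0.75 -> 3.0, i.e. 3:1)."""
    return p / (1 - p)

def odds_to_prob(o):
    """Odds in favor -> probability (e.g. 3.0 -> 0.75)."""
    return o / (1 + o)

def odds_to_decibels(o):
    """Odds -> decibels of evidence: 10 * log10(odds)."""
    return 10 * math.log10(o)

def decibels_to_odds(db):
    """Decibels of evidence -> odds: inverse of the above."""
    return 10 ** (db / 10)

# p = 0.75 corresponds to odds of 3:1, which is about 4.77 decibels.
```

Ankifying specific values (3:1 odds ≈ 4.77 dB, 2:1 ≈ 3.01 dB) and the general formulas separately, as described above, seems like the right split.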

Miscellaneous: the NATO phonetic alphabet, logs (base 2 of 1.25, 1.5, 1.75, and base 10 of 2 through 9), some words I can never remember how to spell (this turns out not to help), some computer stuff (the order of the arguments in python's datetime.strptime, and the difference between a left join and a right join), some definitions in machine learning, some historical dates (e.g. wars, first moon landing, introduction of the Model T), some historical inflation rates, some astronomical facts.

Also a deck based on the twelve virtues of rationality essay. (This one and most of the bayes one I found through LW.)

I'm not sure most of this is useful, but most of it hasn't cost me significant effort either.

Comment author: Evan_Gaensbauer 28 October 2014 07:00:43AM 9 points [-]

I posted a link to the 2014 survey in the 'Less Wrong' Facebook group, and some people commented they filled it out. Another friend of mine started a Less Wrong account to comment that she did the survey, and got her first karma. Now I'm curious how many lurkers become survey participants, and are then incentivized to start accounts to get the promised karma by commenting they completed it. If it's a lot, that's cool, because having one's first comment upvoted after just registering an account on Less Wrong seems like a way of overcoming the psychological barrier of 'oh, I wouldn't fit in as an active participant on Less Wrong...'

If you, or someone you know, got active on Less Wrong for the first time because of the survey, please reply as a data point. If you're a regular user who has a hypothesis about this, please share. Either way, I'm curious to discover how strong an effect this is, or is not.

Comment author: Gunnar_Zarncke 28 October 2014 08:32:52AM 12 points [-]

Today I had an aha moment when discussing coalition politics (I didn't call it that, but it was) with elementary schoolers, 3rd grade.

As a context: I offer an interdisciplinary course in school (voluntary, one hour per week). It gives a small group of pupils a glimpse of how things really work. Call it rationality training if you want.

Today the topic was pairs and triples. I used analogies from relationships: couples, parents, friendships. What changes in a relationship when a new element appears? Why do relationships form in the first place? This revealed differences in how friendships work among boys and among girls - and that in this class, at this moment, the girl friendships at least were largely coalition politics: "If you do this you are my best friend," or "No, we can't be best friends if she is your best friend." For the boys it appears to be at least quantitatively different. But maybe just the surface differs.

In the end I represented this as graphs (kind of) on the board. And the children were delighted to draw their own coalition diagrams, even abbreviating names to single letters. You wouldn't have guessed that these diagrams were from 3rd grade.

Comment author: MrMind 28 October 2014 02:59:05PM 4 points [-]

I wonder what would happen if we trained monkeys to share this kind of detail with us.

Comment author: Gunnar_Zarncke 28 October 2014 05:44:14PM 3 points [-]

But maybe we could. Considering the tricky setups scientists use to compare the intelligence of mice and rats, I'd think it should be possible to devise an experiment that teaches monkeys to reveal their clan structure. I'm thinking along the lines of first training an association of buttons with clan members (photos), then allowing them to select groups that should or should not get a treat.

Comment author: Artaxerxes 28 October 2014 10:43:30AM *  8 points [-]

Someone has created a fake Singularity Summit website.

(Link is to MIRI blog post claiming they are not responsible for the site.)

MIRI is collaborating with Singularity University to have the website taken down. If you have information about who is responsible for this, please contact luke@intelligence.org.

Comment author: Salemicus 28 October 2014 10:47:30AM 4 points [-]

It has been experimentally shown that certain primings and situations increase utilitarian reasoning; for instance, people are more willing to give the "utilitarian" answer to the trolley problem when dealing with strangers, rather than friends. Utilitarians like to claim that this is because people are able to put their biases aside and think more clearly in those situations. But my explanation has always been that it's because these setups are designed to maximise the psychological distance between the subject and the harm they're going to inflict - the more people are confronted with the potential consequences of their actions, the less likely they are to make the utilitarian mistake. And now, a new paper suggests that I was right all along! Abstract:

The hypothetical moral dilemma known as the trolley problem has become a methodological cornerstone in the psychological study of moral reasoning and yet, there remains considerable debate as to the meaning of utilitarian responding in these scenarios. It is unclear whether utilitarian responding results primarily from increased deliberative reasoning capacity or from decreased aversion to harming others. In order to clarify this question, we conducted two field studies to examine the effects of alcohol intoxication on utilitarian responding. Alcohol holds promise in clarifying the above debate because it impairs both social cognition (i.e., empathy) and higher-order executive functioning. Hence, the direction of the association between alcohol and utilitarian vs. non-utilitarian responding should inform the relative importance of both deliberative and social processing systems in influencing utilitarian preference. In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France, participants were presented with a moral dilemma assessing their willingness to sacrifice one life to save five others. Participants’ blood alcohol concentrations were found to positively correlate with utilitarian preferences (r = .31, p < .001) suggesting a stronger role for impaired social cognition than intact deliberative reasoning in predicting utilitarian responses in the trolley dilemma. Implications for Greene’s dual-process model of moral reasoning are discussed.

However, given my low opinion of such experiments, perhaps I should be very careful about uncritically accepting evidence that supports my priors.

Comment author: NancyLebovitz 28 October 2014 01:31:40PM 2 points [-]

I've been wondering whether utilitarianism undervalues people's loyalty to their own relationships and social networks.

Comment author: Lumifer 28 October 2014 03:27:36PM 0 points [-]

In two field studies with a combined sample of 103 men and women recruited at two bars in Grenoble, France

Field studies are hard work :-D

Comment author: ChristianKl 28 October 2014 04:44:35PM 2 points [-]

They needed the native habitat for the alcohol consumption.

Comment author: lmm 28 October 2014 07:24:39PM 3 points [-]

I highly doubt the subjects were drunk enough to have trouble figuring out that 5 > 1. So one could equally offer an interpretation that e.g. drunk people answered honestly, while sober people wanted to signal that they were too caring to kill someone under any circumstances.

It's a fascinating result, but I don't think the interpretation is a slam dunk.

Comment author: NancyLebovitz 28 October 2014 01:45:31PM *  3 points [-]

I did a little research to find out whether there are free survey sites that offer "check all answers that apply" questions.

Super Simple Survey probably does, but goddamned if I'll deal with their website to make sure.

On the almost free side, Live Journal enables fairly flexible polls (including checkboxes) for paid accounts, and you can get a paid account for a month for $3. Live Journal is a social media site.

Comment author: falenas108 28 October 2014 03:03:09PM 2 points [-]

I keep finding the statistic that "one pint of donated blood can save up to 3 lives!" But I can't find the average number of lives saved from donating blood. Does anyone know/is able to find?

Comment author: Lumifer 28 October 2014 03:11:08PM 1 point [-]

I keep finding the statistic that "one pint of donated blood can save up to 3 lives!"

The expression "can save up to" should immediately trigger your bullshit detector. It's a reliable signal that the following number is meaningless.

Comment author: ChristianKl 28 October 2014 04:53:11PM 1 point [-]

What do you mean by "lives saved by donating blood" in the first place?

(Number of people who would die without any blood donations) / (liters of blood donated)?

That's not a very useful number if you want to make personal decisions based on it. If our Western system needed more blood, raising the incentives for donations wouldn't be that hard.

Comment author: polymathwannabe 28 October 2014 05:04:30PM 1 point [-]

WHO prefers all blood donations to be unpaid:

"Regular, unpaid voluntary donors are the mainstay of a safe and sustainable blood supply because they are less likely to lie about their health status. Evidence indicates that they are also more likely to keep themselves healthy."

Comment author: ChristianKl 28 October 2014 05:39:48PM 1 point [-]

Interesting. So the core question seems to be: "How much value is produced by healthy blood donors deciding to donate without incentives, compared to blood that's bought?"