
"Stupid" questions thread

40 Post author: gothgirl420666 13 July 2013 02:42AM

r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might have some as well.


Comment author: RomeoStevens 13 July 2013 04:43:19AM 26 points [-]

It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is to build relationships with people more powerful than oneself. This easily trumps the vast majority of other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:

  1. Not a core competency of the sort of people LW attracts.

  2. Rewards not as immediate as the sort of epiphany porn that some of LW generates.

  3. Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.

Comment author: Qiaochu_Yuan 13 July 2013 04:54:43AM *  16 points [-]

LW's foundational posts are all very strongly biased towards epistemic rationality, and I think that strong bias still affects our attempts to talk about instrumental rationality. There are probably all sorts of instrumentally rational things we could be doing that we don't talk about enough.

Comment author: John_Maxwell_IV 13 July 2013 05:29:25AM 14 points [-]

Do you have any experience doing this successfully? I'd assume that powerful people already have lots of folks trying to make friends with them.

Comment author: sdr 14 July 2013 02:58:39AM *  21 points [-]

Specifically for business, I do.

The general angle is asking intelligent, forward-pointing questions, specifically because deep processing of thoughts (as described in Thinking, Fast and Slow) is rare, even within the business community; so demonstrating understanding and curiosity (both of which are strengths of people on LW) is an almost instant win.

Two of the better guides on how to approach this intelligently are:

The other aspect of this is Speaking the Lingo. The problem with LW is:

1, people developing gravity wells around specific topics, and having a very hard time talking about stuff others are interested in without bringing up pet topics of their own; and

2, the inference distance between the kind of stuff that puts people into powerful position, and the kind of stuff LW develops a gravity well around is, indeed, vast.

The operational hack here is: 1, listening; 2, building up the scaffolds on which these people hang their power; 3, recognizing whether you have an understanding of how those pieces fit together.

General algorithm for the networking dance:

1, Ask intelligent question, listen intently

2, Notice your brain popping up a question/handle that you have an urge to voice. Develop a classification algorithm to notice whether the question was generated by your pet gravity well, or by novel understanding.

3, If the former, SHUT UP. If you really have the urge, mimic back what they've just said to internalize / develop your understanding (and move the conversation along).

Side-effects might include: developing an ugh field towards browsing LessWrong, incorporating, and getting paid truckloads. YMMV.

Comment author: [deleted] 13 July 2013 06:00:08PM 5 points [-]

Because it's hard. That's what kept me from doing it.

I am very close to explicitly starting a project to do just that, and I didn't even get to this point until one of my powerful friends explicitly advised me to take a particular strategy for building relationships with more powerful people.

I find myself unable to be motivated to do it without calling it "Networking the Hard Way", to remind myself that yes, it's hard, and that's why it will work.

Comment author: ChristianKl 13 July 2013 01:01:35PM 5 points [-]

It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself.

Power isn't one-dimensional. What matters isn't so much making relationships with people who are more powerful than you in all domains, but making relationships with people who are powerful in some domain where you could ask them for help.

Comment author: drethelin 13 July 2013 08:27:33PM 2 points [-]

Insofar as MIRI folk seem to be friends with Jaan Tallinn, Thiel, etc., they appear to be trying to do this, though they don't seem to be teaching it as a great idea. But organizationally, if you're trying to optimize the world in a more rational way, spreading rationality might be a better way than trying to befriend less rational powerful people. Obviously this is less effective on a more personal basis.

Comment author: whateverfor 13 July 2013 08:24:01PM 2 points [-]

Realistically, Less Wrong is most concerned with epistemic rationality: the idea that having an accurate map of the territory is very important to actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality", but those are in the context of epistemic rationality getting you most of the way and being the base you work from; otherwise there's no reason to be on Less Wrong instead of a specific site dealing with the sub-area.

Also, lots of "building relationships with powerful people" is zero sum at best, since it resembles influence peddling more than gains from informal trade.

Comment author: RomeoStevens 13 July 2013 10:11:51AM *  20 points [-]

I'm in favor of making this a monthly or more thread as a way of subtracting some bloat from open threads in the same way the media threads do.

I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.

Comment author: ciphergoth 13 July 2013 10:42:42AM 23 points [-]

If no question you ask is ever considered stupid, you're not checking enough of your assumptions.

Comment author: Qiaochu_Yuan 13 July 2013 05:15:48AM 20 points [-]

I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?

Comment author: TimS 13 July 2013 05:26:17AM 13 points [-]

The downsides of talking to strangers are really, really low. Your feelings of anxiety are just lies from your brain.

I've found it helps to write a script ahead of time for particular situations, with some thought given to different ways the conversation could go.

Honestly, not sure I understand the question.

Comment author: Qiaochu_Yuan 13 July 2013 05:37:48AM *  6 points [-]

Yeah, it was deliberately vague so I'd get answers to a wide variety of possible interpretations. To be more specific, I have trouble figuring out what my opening line should be in situations where I'm not sure what the social script for introducing myself is, e.g. to women at a bar (I'm a straight male). My impression is that "hi, can I buy you a drink?" is cliché but I don't know what reasonable substitutes are.

Comment author: malcolmocean 13 July 2013 06:14:04PM 13 points [-]

"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.

I feel like I read something interesting about this on Mark Manson's blog but it's horribly organized so I can't find it now.

Comment author: [deleted] 14 July 2013 11:08:26PM 2 points [-]

"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.

That sort of thing varies a lot depending on what kind of culture you're in.

Comment author: gothgirl420666 13 July 2013 06:43:29AM *  10 points [-]

I've been reading PUA-esque stuff lately, and something they stress is that "the opener doesn't matter", "you can open with anything". This is in contrast to the older, cheesier, tactic-based PUAs who used to focus obsessively on finding the right line to open with. This advice is meant for approaching women in bars, but I imagine it holds true for most occasions on which you would want to talk to a stranger.

In general if you're in a social situation where strangers are approaching each other, then people are generally receptive to people approaching them and will be grateful that you are putting in the work of initiating contact and not them. People also understand that it's sometimes awkward to initiate with strangers, and will usually try to help you smooth things over if you initially make a rough landing. If you come in awkwardly, then you can gauge their reaction, calibrate to find a more appropriate tone, continue without drawing attention to the initial awkwardness, and things will be fine.

Personally, I think the best way to open a conversation with a stranger would just be to go up to them and say "Hey, I'm __" and offer a handshake. It's straightforward and shows confidence.

If you're in a situation where it's not necessarily common to approach strangers, you'll probably have to come up with some "excuse" for talking to them, like "that's a cool shirt" or "do you know where the library is?". Then you have to transition that into a conversation somehow. I'm not really sure how to do that part.

EDIT: If an approach goes badly, don't take it personally. They might be having a bad day. They might be socially awkward themselves. And if someone is an asshole to you just for going up and saying hi, they are the weirdo, not you. On the other hand, if ten approaches in a row go badly, then you should take it personally.

Comment author: [deleted] 13 July 2013 10:23:55AM 8 points [-]

If you're in a situation where it's not necessarily common to approach strangers, you'll probably have to come up with some "excuse" for talking to them, like "that's a cool shirt" or "do you know where the library is?". Then you have to transition that into a conversation somehow. I'm not really sure how to do that part.

Here's a recent example (with a lady sitting beside me in the aeroplane; translated):

  • Her: Hi, I'm [her name].
  • Me: Hi, I'm [my name].
  • Her: Can you speak French?
  • Me: Not much. Can you speak English?
  • Her: No. Can you speak Portuguese?
  • Me: A little.
  • Her: Spanish? Italian?
  • Me: Yes, I'm Italian. But why the hell can you speak all of those languages but not English, anyway?
  • Her: [answers my question]

from which it was trivially easy to start a conversation.

Comment author: pragmatist 13 July 2013 01:19:40PM 13 points [-]

Don't leave us hanging! Why the hell could she speak all those languages but not English?

Comment author: [deleted] 13 July 2013 01:49:29PM 14 points [-]

She had been born in Brazil to Italian parents, had gone to school in Italy, and was working in the French-speaking part of Switzerland.

Comment author: [deleted] 13 July 2013 09:58:58AM *  5 points [-]

My impression is that "hi, can I buy you a drink?" is cliché but I don't know what reasonable substitutes are.

"Hi, what's your name?" or "Hi, I'm Qiaochu" (depending on the cultural context, e.g. ISTM the former is more common in English and the latter is more common in Italian). Ain't that what nearly any language course whatsoever teaches you to say on Lesson 1? ;-)¹

Or, if you're in a venue where that's appropriate, "wanna dance?" (not necessarily verbally).

(My favourite is to do something awesome in their general direction and wait for them to introduce themselves/each other to me, but it's not as reliable.)


  1. I think I became much more confident in introducing myself to strangers in English or Italian after being taught explicitly how to do that in Irish (though there are huge confounders).
Comment author: gjm 13 July 2013 01:50:11PM 11 points [-]

I conjecture that "Hi, I'm Qiaochu" is a very uncommon greeting in Italian :-).

Comment author: TimS 13 July 2013 05:40:57AM 4 points [-]

I think you need to taboo "introducing yourself." The rules are different based on where you want the conversation to end up.

Comment author: John_Maxwell_IV 13 July 2013 05:36:10AM *  12 points [-]

A good way to start is to say something about your situation (time, place, etc.). After that, I guess you could ask their names or something. I consider myself decent at talking to strangers, but I think it's less about what you say and more about the emotions you train yourself to have. If you see strangers as friends waiting to be made, on an emotional level, you can just talk to them the way you'd talk to a friend. Standing somewhere with lots of foot traffic holding a "free hugs" sign under the influence of something disinhibiting might be helpful for building this attitude. If you're currently uncomfortable talking to strangers, then whenever you do it, comfort yourself internally afterwards the same way you might comfort an animal (after all, you are an animal), say stuff like "see? that wasn't so bad. you did great." etc., and try to build comfort through repeated small exposure (more).

Comment author: [deleted] 13 July 2013 05:43:35PM 9 points [-]

I was climbing a tree yesterday and realized that I hadn't even thought about whether the people watching were going to judge me, whereas previously I would have, and that thought would have made it harder to just climb the tree. Then I thought that if I could use the same trick on social interaction, it would become much easier. Then I wondered how you might learn to use that trick.

In other words, I don't know, but the question I don't know the answer to is a little bit closer to success.

Comment author: ChristianKl 13 July 2013 12:54:57PM 5 points [-]

I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?

I think the question is badly formed. I think it's better to ask: "How do I become a person who easily talks to strangers?" When you are in your head thinking "How do I talk to that person over there?", you are already in a place that isn't conducive to a good interaction.

Yesterday, in the course of traveling around town, three strangers talked to me, each saying the first word.

The first was a woman in her mid-30s with a bicycle who was looking for the elevator at the public train station. The second was an older woman who told me that the Vibram FiveFingers shoes I'm wearing look good. The third was a girl who was biking next to me when her smartphone fell. I picked it up and handed it back to her. She said thank you.

I'm not even counting beggars on public transportation.

Later that evening I went Salsa dancing. There, two women I didn't know who were new to Salsa asked me to dance.

Why was I giving off a vibe that lets other people approach me? I had spent five days at a personal development workshop given by Danis Bois. The workshop wasn't about doing anything to strangers, but among other things it teaches a kind of massage, and I was a lot more relaxed than I had been in the past.

If you get rid of your anxiety, interactions with strangers start to flow naturally.


What can you do apart from visiting personal development seminars that put you into a good emotional state?

Wear something that makes it easy for strangers to start a conversation with you. One of the benefits of Vibram FiveFingers is that people are frequently curious about them.


Do good exercises.

1) One exercise is to say 'hi' or 'good morning' to every stranger you pass. I don't do it currently but it's a good exercise to teach yourself that interaction with strangers is natural.

2) Learn some form of meditation to get into a relaxed state of mind.

3) If you want to approach a person at a bar you might feel anxiety. Locate that anxiety in your body. At the beginning it makes sense to put your hand where you locate it.

Ask yourself: "Where does that feeling want to move in my body?" Tell it to "soften and flow". Let it flow where it wants to flow in your body. Usually it wants to flow out of your body at a specific location.

Do the same with the feeling of rejection, should a stranger reject you.


Exercise three is something I only learned recently, and I'm not sure I'm able to explain it well over the internet. In case anybody reading this finds it useful, I would be interested in feedback.

Comment author: [deleted] 14 July 2013 11:15:53PM 8 points [-]

3) If you want to approach a person at a bar you might feel anxiety.

I recently found a nice mind hack for that: “What would my drunken self do?”

Comment author: RichardKennaway 16 July 2013 11:08:37AM 2 points [-]

I'd like to ask an even stupider one: why do people want to talk to strangers?

I've had a few such conversations on trains and the like, and I'm not especially averse to it, but afterwards I think: what was the point of that?

Well, that passed the time.

    It would have passed anyway.

Yes, but not as quickly.

At least the train eventually arrives.

Comment author: drethelin 13 July 2013 04:04:29AM 15 points [-]

Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.

Comment author: pragmatist 13 July 2013 09:02:31AM *  21 points [-]

An important thing to realize is that people working on anthropics are trying to come up with a precise inferential methodology. They're not trying to draw conclusions about the state of the world, they're trying to draw conclusions about how one should draw conclusions about the state of the world. Think of it as akin to Bayesianism. If someone read an introduction to Bayesian epistemology, and said "This is just a mess of tautologies (Bayes' theorem) and thought experiments (Dutch book arguments) that pays no rent in anticipated experience. Why should I care?", how would you respond? Presumably you'd tell them that they should care because understanding the Bayesian methodology helps people make sounder inferences about the world, even if it doesn't predict specific experiences. Understanding anthropics does the same thing (except perhaps not as ubiquitously).

So the point of understanding anthropics is not so much to directly predict experiences but to appreciate how exactly one should update on certain pieces of evidence. It's like understanding any other selection effect -- in order to properly interpret the significance of pieces of evidence you collect, you need to have a proper understanding of the tools you use to collect them. To use Eddington's much-cited example, if your net can't catch fish smaller than six inches, then the fact that you haven't caught any such fish doesn't tell you anything about the state of the lake you're fishing. Understanding the limitations of your data-gathering mechanism prevents you from making bad updates. And if the particular limitation you're considering is the fact that observations can only be made in regimes accessible to observers, then you're engaged in anthropic reasoning.
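
(To make the net example concrete, here is a toy simulation; the lake compositions and number of casts are invented. It shows that an observation the net censors has the same likelihood under both hypotheses, so it licenses no update:)

```python
import random

# A small simulation of Eddington's net, with invented numbers: two
# hypothetical lakes, one where half the fish are under six inches
# and one where none are. A net that can't catch small fish yields
# the same "no small fish caught" observation either way.
def catch(small_fraction, casts=1000):
    fish = ["small" if random.random() < small_fraction else "big"
            for _ in range(casts)]
    return [f for f in fish if f != "small"]  # the net passes small fish

for small_fraction in (0.5, 0.0):
    caught = catch(small_fraction)
    small_caught = sum(f == "small" for f in caught)
    print(f"lake with {small_fraction:.0%} small fish: "
          f"{small_caught} small fish in the net")
# Both lines report 0: P(observation | either lake) is the same,
# so the likelihood ratio is 1 and no update is warranted.
```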

Paul Dirac came up with a pretty revisionary cosmological theory based on several apparent "large number coincidences" -- important large (and some small) numbers in physics that all seem to be approximate integer powers of the Hubble age of the universe. He argued that it is implausible that we just happen to find ourselves at a time when these simple relationships hold, so they must be law-like. Based on this he concluded that certain physical constants aren't really constant; they change as the universe ages. R. H. Dicke showed (or purported to show) that at least some of these coincidences can be explained when one realizes that observers can only exist during a certain temporal window in the universe's existence, and that the timing of this window is related to a number of other physical constants (since it depends on facts about the formation and destruction of stars, etc.). If it's true that observers can only exist in an environment where these large number relationships hold, then it's a mistake to update our beliefs about natural laws based on these relationships. So that's an example of how understanding the anthropic selection effect might save us (and not just us, but also superhumans like Dirac) from bad updates.

So much for anthropics in general, but what about the esoteric particulars -- SSA, SIA and all that. Well, here's the basic thought: Dirac's initial (non-anthropic) move to his new cosmological theory was motivated by the belief that it is extraordinarily unlikely that the large number coincidences are purely due to chance, that we just happen to be around at a time when they hold. This kind of argument has a venerable history in physics (and other sciences, I'm sure) -- if your theory classifies your observed evidence as highly atypical, that's a significant strike against the theory. Anthropic reasoning like Dicke's adds a wrinkle -- our theory is allowed to classify evidence as atypical, as long as it is not atypical for observers. In other words, even if the theory says phenomenon X occurs very rarely in our universe, an observation of phenomenon X doesn't count against it, as long as the theory also says (based on good reason, not ad hoc stipulation) that observers can only exist in those few parts of the universe where phenomenon X occurs. Atypicality is allowed as long as it is correlated with the presence of observers.

But only that much atypicality is allowed. If your theory posits significant atypicality that goes beyond what selection effects can explain, then you're in trouble. This is the insight that SSA, SIA, etc seek to precisify. They are basically attempts to update the Diracian "no atypicality" strategy to allow for the kind of atypicality that anthropic reasoning explains, but no more atypicality than that. Perhaps they are misguided attempts for various reasons, but the search for a mathematical codification of the "no atypicality" move is important, I think, because the move gets used imprecisely all the time anyway (without explicit evocation, most of the time) and it gets used without regard for important observation selection effects.

Comment author: drnickbone 13 July 2013 09:36:11AM 8 points [-]

If you taboo "anthropics" and replace it with "observation selection effects", then there are all sorts of practical consequences. See the start of Nick Bostrom's book for some examples.

The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed. Almost everyone who's heard of the argument thinks there's something trivially wrong with it, but all the obvious objections can be dealt with, e.g. by looking later in Bostrom's book. Further, alternative approaches to anthropics (such as the "self-indication assumption"), or attempts to completely bypass anthropics (such as "full non-indexical conditioning"), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace's thesis.

Comment author: bogdanb 13 July 2013 08:22:34PM 6 points [-]

Take Bayes’ theorem: P(H|O) = P(O|H) × P(H) / P(O). If H is a hypothesis and O is an observation, P(O|H) means “what is the probability of making that observation if the hypothesis is true?”

If a hypothesis has as a consequence "nobody can observe O" (say, because no humans can exist), then that P(O|H) is 0 (actually, it's about the probability that you didn't get the consequence right). Which means that, once you've made the observation, you will probably decide that the hypothesis is unlikely. However, if you don't notice that consequence, you might decide that P(O|H) is large, and incorrectly assign high likelihood to the hypothesis.

For a completely ridiculous example, imagine that there's a deadly cat-flu epidemic; it gives 90% of cats that catch it a runny nose. Your cat's nose becomes runny. You might be justified in thinking that it's likely your cat got cat-flu. However, if you know that in all cases the cat's owner dies of the flu before the cat has any symptoms, the conclusion would be the opposite. (Since, if it were the flu, you wouldn't see the cat's runny nose, because you'd be dead.) The same evidence, opposite effect.

Anthropics is kind of the same thing, except you’re mostly guessing about the flu.
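
(A minimal numerical sketch of bogdanb's example, assuming a 1% prior on flu and a 10% runny-nose rate for healthy cats, neither of which is in the original:)

```python
# A minimal numerical sketch of the cat-flu example. All numbers
# except the 90% runny-nose rate are invented: a 1% prior on flu and
# a 10% chance of a runny nose in a healthy cat.
p_flu = 0.01                 # prior P(H): the cat has cat-flu
p_runny_given_flu = 0.9      # P(runny nose | flu), from the example
p_runny_given_healthy = 0.1  # P(runny nose | no flu), assumed

# Naive update, ignoring that the flu kills the observer first:
naive = (p_runny_given_flu * p_flu) / (
    p_runny_given_flu * p_flu + p_runny_given_healthy * (1 - p_flu))

# Selection-aware update: if the cat had flu, you'd be dead before
# its nose ran, so P(you observe a runny nose | flu) is ~0.
p_observe_given_flu = 0.0
aware = (p_observe_given_flu * p_flu) / (
    p_observe_given_flu * p_flu + p_runny_given_healthy * (1 - p_flu))

print(round(naive, 3), round(aware, 3))  # 0.083 vs 0.0
```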

Comment author: Qiaochu_Yuan 13 July 2013 04:51:41AM *  5 points [-]

The obvious application (to me) is figuring out how to make decisions once mind uploading is possible. This point is made, for example, in Scott Aaronson's The Ghost in the Quantum Turing Machine. What do you anticipate experiencing if someone uploads your mind while you're still conscious?

Anthropics also seems to me to be relevant to the question of how to do Bayesian updates using reference classes, a subject I'm still very confused about and which seems pretty fundamental. Sometimes we treat ourselves as randomly sampled from the population of all humans similar to us (e.g. when diagnosing the probability that we have a disease given that we have some symptoms) and sometimes we don't (e.g. when rejecting the Doomsday argument, if that's an argument we reject). Which cases are which?

Comment author: ESRogs 13 July 2013 06:39:34AM *  5 points [-]

figuring out how to make decisions once mind uploading is possible

Or even: deciding how much to care about experiencing pain during an operation if I'll just forget about it afterwards. This has the flavor of an anthropics question to me.

Comment author: satt 21 July 2013 10:33:02AM 3 points [-]

Possible example of an anthropic idea paying rent in anticipated experiences: anthropic shadowing of intermittent observer-killing catastrophes of variable size.

Comment author: ikrase 13 July 2013 08:04:16AM 3 points [-]

I'd add that the Doomsday argument specifically seems like it should be demolished by even the slightest evidence as to how long we have left.

Comment author: Plasmon 13 July 2013 05:39:39AM *  3 points [-]

There's a story about anthropic reasoning being used to predict properties of the processes which produce carbon in stars, before these processes were known. (Apparently there's some debate about whether or not this actually happened.)

Comment author: benelliott 14 July 2013 02:30:35AM *  2 points [-]

It seems like a mess of tautologies and thought experiments

My own view is that this is precisely correct, and exactly why anthropics is interesting: we really should have a good, clear approach to it, and the fact that we don't suggests there is still work to be done.

Comment author: Frood 13 July 2013 04:45:57PM 14 points [-]

When I'm in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?

Comment author: TimS 13 July 2013 07:11:52PM 18 points [-]

Lawyer's perspective:

People want to ask me about legal issues all the time. The best way to get a useful answer is to describe your current situation, the cause of your current situation, and what you want to change. Thus:

I have severe injuries, caused by that other person hitting me with their car. I want that person's driver's license taken away.

Then I can say something like: Your desired remedy is not available for REASONS, but instead, you could get REMEDY. Here are the facts and analysis that would affect whether REMEDY is available.

In short, try to define the problem. fubarobfusco has some good advice about how to refine your articulation of a problem. That said, if you have reason to believe a person knows something useful, you probably already know enough to articulate your question.

The point of my formulation is to avoid assumptions that distort the analysis. Suppose someone in the situation I described above said "I was maliciously and negligently injured by that person's driving. I want them in prison." At that point, my response needs to detangle a lot of confusions before I can say anything useful.

Comment author: buybuydandavis 14 July 2013 10:49:37AM 8 points [-]

In short, try to define the problem

I see you beat me to it. Yes, define your problem and goals.

The really bad thing about asking questions is that people will answer them. You ask some expert "How do I do X with Y?". He'll tell you. He'll likely wonder what the hell you're up to in doing such a strange thing with Y, but he'll answer. If he knew what your problem and goals were instead, he'd ask the right questions of himself on how to solve the problem, instead of the wrong question that you gave him.

Comment author: Eugine_Nier 15 July 2013 01:52:43AM *  5 points [-]

You ask some expert "How do I do X with Y?". He'll tell you. He'll likely wonder what the hell you're up to in doing such a strange thing with Y, but he'll answer.

Also in the event you get an unusually helpful expert, he might point this out. Consider this your lucky day and feel free to ask follow up questions. Don't be discouraged by the pointing out being phrased along the lines of "What kind of idiot would want to do X with Y?"

Comment author: buybuydandavis 14 July 2013 10:43:47AM 4 points [-]

Don't ask questions. Describe your problem and goal, and ask them to tell you what would be helpful. If they know more than you, let them figure out the questions you should ask, and then tell you the answers.

Comment author: wwa 13 July 2013 06:46:14PM 4 points [-]

I find "How do I proceed to find out more about X" to give best results. Note: it's important to phrase it so that they understand you are asking for an efficient algorithm to find out about X, not for them to tell you about X!

It works even if you're completely green and talking to a prodigy in the field (which I find to be particularly hard). Otherwise you'll get "RTFM"/"JFGI" at best or they will avoid you entirely at worst.

Comment author: NancyLebovitz 13 July 2013 05:42:01PM 3 points [-]

What do you want to learn more about? If there isn't an obvious answer, give yourself some time to see if an answer surfaces.

The good news is that this is the thread for vague questions which might not pan out.

Comment author: Error 15 July 2013 03:03:46PM *  2 points [-]

For the narrow subset of technical questions, How to Ask Questions the Smart Way is useful.

But if you don't have a problem to begin with -- if your aim is "learn more in field X," it gets more complicated. Given that you don't know what questions are worth asking, the best question might be "where would I go to learn more about X" or "what learning material would you recommend on the subject of X?" Then in the process of following and learning from their pointer, generate questions to ask at a later date.

There may be an inherent contradiction between wanting nonspecific knowledge and getting useful, specific answers.

Comment author: ChristianKl 15 July 2013 09:20:57AM 2 points [-]

I don't think an answer has to be specific to be useful. Often just understanding how an expert in a certain area thinks about the world can be useful, even without specifics.

When it comes to questions: 1) What was the greatest discovery in your field in the last 5 years? 2) Is there an insight in your field that is obvious to everyone in the field but that most people in society just don't get?

Comment author: therufs 14 July 2013 04:00:45AM 2 points [-]

Start by asking the wrong ones. For me, it took a while to notice when I had even a stupid question to ask (possibly some combination of mild social anxiety and generally wanting to come across as smart & well-informed had stifled this impulse), so this might take a little bit of practice.

Sometimes your interlocutor will answer your suboptimal questions, and that will give you time to think of what you really want to know, and possibly a few extra hints for figuring it out. But at least as often your interlocutor will take your interest as a cue that they can just go ahead and tell you things about the subject at hand that you didn't think to ask.

Comment author: fubarobfusco 13 July 2013 05:46:35PM 2 points [-]

One approach: Think of two terms or ideas that are similar but want distinguishing. "How is a foo different from a bar?" For instance, if you're looking to learn about data structures in Python, you might ask, "How is a dictionary different from a list?"

You can also learn whether your impression that they are similar is accurate: "How is a list different from a for loop?" might get some insightful discussion ... if you're lucky.

Comment author: SaidAchmiz 14 July 2013 07:26:23AM 2 points [-]

Of course, if you know sufficiently little about the subject matter, you might instead end up asking a question like

"How is a browser different from a hard drive?"

which, instead, discourages the expert from speaking with you (and makes them think that you're an idiot).

Comment author: James_Miller 13 July 2013 01:23:40PM 14 points [-]

Do you build willpower in the long-run by resisting temptation? Is willpower, in the short-term at least, a limited and depletable resource?

Comment author: Kaj_Sotala 13 July 2013 02:58:32PM *  29 points [-]

Is willpower, in the short-term at least, a limited and depletable resource?

I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:

[After criticizing studies trying to show that willpower is a resource that depends on glucose]

What about the more general notion that “willpower” is a “resource” that gets consumed or expended when one exerts self-control? First and foremost, let’s keep in mind that the idea is inconsistent with the most basic facts about how the mind works. The mind is an information-processing device. It’s not a hydraulic machine that runs out of water pressure or something like that. Of course it is a physical object, and of course it needs energy to operate. But mechanics is the wrong way to understand, or explain, its action, because changes in complex behavior are due to changes in information processing. The “willpower as resource” view abandons these intellectual gains of the cognitive revolution, and has no place in modern psychology. That leaves the question, of course, about what is going on in these studies.

Let’s back up for a moment and think about what the function of self-control might be. Taking the SATs, keeping your attention focused, and not eating cookies all feel more or less unpleasant, but it’s not like spraining your ankle or running a marathon, where the unpleasant sensations are easy to understand from a functional point of view. The feelings of discomfort are probably the output of modules designed to compute costs. When your ankle is sprained, putting weight on it is costly because you can damage it further. When you have been running for a long time, the chance of a major injury goes up. These sensations, then, are probably evolution’s way of getting you to keep your weight off the joint and stop doing all that running, respectively.

There’s nothing obviously analogous for not eating cookies or doing word problems. Why does it feel like something, anything at all, to (not) do these things? As we’ve seen, lots of other stuff happens in your head, all the time, and it doesn’t feel like anything. Further, given that it seems as if exerting self-control is a good thing, that is, that it generally leads to outcomes that might be expected to yield fitness benefits, you might expect that exerting self-control would feel good and easy. Why does it seem hard, and feel even harder over time? What is the sensation of “effort” designed to get you to do?

One reason it seems hard might derive from that fact that “exerting self-control” entails incurring immediate costs in various forms, and “effort” is the representation of these costs. Consider not eating a cookie. There are probably modules in your mind that are designed to compute the benefits of eating nice calorie packages. They’re wired up to the senses, designed to calculate just how good (in the evolutionary sense) eating the calorie package is. From the point of view of these modules, not eating the cookie is a cost, in particular, the lost calories in the cookie. So, the sensation of the effort of not eating it—”temptation”—is probably evolution’s way of getting you to eat the cookie, just as the sensation of pain is evolution’s way of getting you to stay off your sprained ankle. In both cases, the experience is the output of a module designed to compute costs.

The same argument applies to other opportunities, and they take various forms. In some experiments, subjects are told to ignore words flashing on a computer screen, something that feels quite effortful. Why? Well, not reading words on a screen carries a loss of information: What did those words say? A similar argument applies regarding Ariely’s work on decision making during sexual arousal, which we looked at earlier in this chapter. The reason that subjects respond to those survey questions when they are aroused is probably because the mechanisms designed to take advantage of mating opportunities are computing benefits in the environment, though they are being fooled by the fact that the images they are getting are pictures rather than actual people.

Is it also a cost to solve word problems? Sure, but the cost isn't caloric. Solving word problems requires the use of certain fancy modules, and when one is doing one of these tasks, these modules are kept busy. This means that doing these tasks carries real (opportunity) costs: all the things that these modules could be doing but are not because they are engaged. It's not unlike what happens when you start up some big piece of software on your computer: Other things suffer, necessarily. Starting up software carries these costs. Working on word problems, similarly, keeps important modular systems from doing other tasks.

So, instead of a resource view, my view is that the issue is more of an effort monitor—an "effortometer" in the mind. My guess is that the reason it feels like something to pay close attention to something, solve hard problems, or avoid eating cookies is that doing these things is costly from the perspective of certain modules. The feeling of "mental effort," on this view, is like a counter, adding up all these opportunity costs to determine if it's worth continuing to do whatever one is doing. As these costs get higher—either because one is doing the task for a while, or for some other reason—the effortometer counts higher, giving rise to the sensation of effort, and also giving the impatient modules more and more of an edge.

If I’m working on word problems—but not getting anywhere—using my modules in this way isn’t doing much good, so maybe I should stop. Interestingly, as illustrated by the results of the studies described above, the effect seems to extend from one task to another, even if the tasks are quite different.

This idea suggests that a mechanism is needed that performs these computations, weighing the costs and benefits of doing tasks that make use of certain modules. Some modules are counting up these costs, and when the effortometer increases, there is less suppression of the short-term modules—it’s time to move on. So, it’s not “willpower” that’s exhausted—it’s that the ratio of costs to reward is too high to justify continuing. As Baumeister himself indicated, “it is adaptive to give up early on unsolvable problems. Persistence is, after all, only adaptive and productive when it leads to eventual success.”

The effortometer view suggests a way to "reset" or at least reduce the count. Suppose we give subjects a reward, such as a small gift, or even light praise; this ought to "reset" the counter, just as when a foraging animal's time is rewarded by finding food morsels. Diane Tice and colleagues conducted some work in which some subjects were told not to think of a white bear, and others were not. The idea was that not thinking of a white bear takes some "willpower," and when you've just used your willpower, you have less of it left to use in the next task, which was drinking an unpleasant beverage. They found that if you have to suppress thinking of a white bear, you can't drink as much of the awful Kool-Aid. So, that looks good for a "resource" model. Your willpower sponge has been squeezed out.

Some subjects were, however, given a small gift after suppressing thinking of a white bear. These subjects were able to drink just as much of the nasty stuff as those who were at liberty to think of as many white bears as they wanted. That is, their “willpower” seems to have been restored, making them able to endure the foul-tasting beverage.

These findings are very hard to accommodate with a “resource” model. If my self-control sponge is squeezed dry by not thinking of a white bear, a gift shouldn’t help me exert willpower—I’m all out of it. (And certainly the gift didn’t increase the amount of glucose in my body.) In contrast, this finding fits very well with the effortometer model. If the effortometer is monitoring reward, then a gift resets it, and ought to improve subsequent self-control tasks.

Elsewhere in the book (I forget where) he also notes that the easiest explanation for why people run low on willpower when hungry is simply that a situation where your body urgently needs food is a situation where your brain considers everything not directly related to acquiring food to have a very high opportunity cost. That seems like a more elegant and realistic explanation than the common folk-psychological one, which suggests something like willpower being a resource that you lose when you're hungry or tired. It's more a question of the evolutionary tradeoffs being different when you're hungry or tired, which leads to different cognitive costs.
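
(A toy sketch of the structural difference between the two models, with entirely invented numbers; the point is only that a reward can reset a cost/benefit tally but cannot refill a depleted stock:)

```python
# A toy contrast between the two models, with entirely invented
# numbers. A "resource" only drains with work, so a gift can't
# restore it; an "effortometer" is a running cost/benefit tally
# that a reward resets, matching the white-bear/Kool-Aid result.
class WillpowerResource:
    def __init__(self):
        self.level = 10
    def work(self):
        self.level -= 1        # effort depletes the stock
    def reward(self):
        pass                   # a small gift adds no glucose
    def can_continue(self):
        return self.level > 0

class Effortometer:
    def __init__(self):
        self.cost_tally = 0
    def work(self):
        self.cost_tally += 1   # opportunity costs accumulate
    def reward(self):
        self.cost_tally = 0    # a payoff resets the counter
    def can_continue(self):
        return self.cost_tally < 10

for model in (WillpowerResource(), Effortometer()):
    for _ in range(10):
        model.work()           # e.g. suppressing white-bear thoughts
    model.reward()             # the experimenter's small gift
    print(type(model).__name__, "can continue:", model.can_continue())
# WillpowerResource can continue: False
# Effortometer can continue: True
```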

Comment author: taelor 14 July 2013 05:31:19PM *  2 points [-]

In About Behaviorism (which I unfortunately don't currently own a copy of, so I can't give direct quotes or citations), B. F. Skinner makes the case that the "willpower" phenomenon actually reduces to operant conditioning and schedules of reinforcement. Skinner claims that people who have had their behavior consistently reinforced in the past will become less sensitive to a lack of reinforcement in the present, and may persist in behavior even when positive reinforcement isn't forthcoming in the short term, whereas people whose past behavior has consistently failed to be reinforced (or even been actively punished) will abandon a course of action much more quickly when it fails to immediately pay off. Both groups will eventually give up on an unreinforced behavior, though the former group will typically persist much longer at it than the latter. This gives rise to the "willpower as resource" model, as well as the notion that some people have more willpower than others. Really, people with "more willpower" have just been conditioned to wait longer for their behaviors to be reinforced.

Comment author: pan 13 July 2013 06:25:34PM 13 points [-]

To what degree does everyone here literally calculate numerical outcomes and make decisions based on those outcomes for everyday decisions using Bayesian probability? Sometimes I can't tell whether, when people say they are 'updating priors', they are literally doing a calculation and literally have a new number stored somewhere in their head that they keep track of constantly.

If anyone does this, could you elaborate on how you do it? Do you have a book/spreadsheet full of different beliefs with different probabilities? Can you just keep track of it all in your mind? Or is calculating probabilities like this only something people do for bigger life problems?

Can you give me a tip for how to start? Is there a set of core beliefs everyone should come up with priors for to start? I was going to apologize if this was a stupid question, but I suppose it should by definition be one if it is in this thread.

Comment author: Manfred 13 July 2013 07:01:56PM *  10 points [-]

Nope, not for everyday decisions. For me "remember to update" is more of a mantra to remember to change your mind at all - especially based on several pieces of weak evidence, each of which the normal procedure would be to disregard individually, thus never changing your mind.

Comment author: Qiaochu_Yuan 14 July 2013 12:18:57AM *  7 points [-]

I never do this. See this essay by gwern for an example of someone doing this.

Comment author: mwengler 14 July 2013 02:45:54PM 5 points [-]

I suspect very little, but this does remind me of Warren Buffett speaking on Discounted Cash Flow calculations.

For quick background, an investment is a purchase of a future cash flow. Cash in the future is worth less to you than cash right now, and it is worth less and less as you go further into the future. Most treatments pretend that the proper way to discount the value of cash in the future is to have a discount rate (like 5% or 10% per year) and apply it as an exponential function to future cash.

Warren Buffett, a plausible candidate for the most effective investor ever (or at least so far), speaks highly of DCF (discounted cash flow) as the way to choose between investments. However, he also says he never actually does one other than roughly in his head. Given his excellent abilities at calculating in his head, I think that translates to something like: he never does a DCF calculation that would take up more than about 20 lines in an Excel spreadsheet.
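
(For concreteness, a minimal sketch of the sort of ~20-line DCF calculation described above, with invented cash flows and discount rates:)

```python
# A minimal sketch of the kind of ~20-line DCF calculation described
# above. The cash flows and discount rates are invented numbers.
def npv(cash_flows, rate):
    """Net present value of cash_flows[t] received t years from now."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# An investment costing 100 now that pays 30 a year for 5 years:
flows = [-100, 30, 30, 30, 30, 30]
print(round(npv(flows, 0.10), 2))  # 13.72: positive, worth it at a 10% rate
print(round(npv(flows, 0.20), 2))  # -10.28: negative at a 20% hurdle rate
```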

There are a broad range of policies that I have that are based on math: not gambling in Las Vegas because its expectation value is negative (although mostly I trust the casinos to have set the odds so payouts are negative; I don't check their math). Not driving too far for small discounts (the expense of getting the discount should not exceed the value of the discount). Not ignoring a few-thousand-dollar difference in a multi-hundred-thousand-dollar transaction because "it is a fraction of a percent."

When considering hiring a personal service, I do often compare the cost of paying for it against how long it would take me to do the job versus how long I would need to work at my current job to pay the other person. I am pretty well paid, so this generally leads to me hiring a lot of things done. A similar calculation leads me to systematically ignore costs below about $100 for a lot of things, which still "feels" wrong, but I have not yet been able to produce a calculation showing that it is.

I am actually discouraging my wife and children from pushing my children towards elite colleges and universities, on the basis that they are over-priced for what they deliver. I am very unconfident in this one, as rich people I respect continue to just bleed money into their children's educations. So I am afraid to break from them, even as I can't figure out a calculation that shows what they are doing makes economic sense.

I do look at, or calculate, the price per ounce in making buying decisions; I guess that is an example of a common Bayesian calculation.

Comment author: Izeinwinter 14 July 2013 04:01:24PM *  5 points [-]

It depends on what your kids want to do. Elite colleges are not selling education, except to the extent that they have to maintain standards to keep their position. They are selling networking cachet. Which is of very high value to people who want to be one of the masters of the universe and take their chances with the inbound guillotine. If your kids want to be doctors, engineers, or archaeologists... no, not worth the price tag. In fact, the true optimum move is likely to ship them to Sweden with a note telling them to find a nice girl, naturalize via marriage, and take the free ride through Stockholm University. ;)

Comment author: Sarokrae 14 July 2013 09:03:08AM 4 points [-]

I'd be alarmed if anyone claimed to accurately numerically update their priors. Non-parametric Bayesian statistics is HARD and not the kind of thing I can do in my head.

Comment author: Alexei 18 July 2013 03:24:21PM 2 points [-]

I had the same worry/question when I first found LW. After meeting with all the "important" people (Anna, Luke, Eliezer...) in person, I can confidently say: no, nobody is carrying around a sheet of paper and doing actual Bayesian updating. However, most people in these circles notice when they are surprised/confused, act on that feeling, and if they were wrong, they update their beliefs, followed soon by their actions. This could happen from one big surprise or many small ones. So there is a very intuitive sort of Bayesian updating going on.

Comment author: Nornagest 13 July 2013 09:57:28PM *  2 points [-]

I only literally do an expected outcome calculation when I care more about having numbers than I do about their validity, or when I have unusually good data and need rigor. Most of the time the uncertainties in your problem formulation will dominate any advantage you might get from doing actual Bayesian updates.

The advantage of the Bayesian mindset is that it gives you a rough idea of how evidence should affect your subjective probability estimate for a scenario, and how pieces of evidence of different strengths interact with each other. You do need to work through a reasonable number of examples to get a feel for how that works, but once you have that intuition you rarely need to do the math.
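
(A hedged sketch of how pieces of evidence of different strengths combine, using the odds form of Bayes' theorem; the prior and likelihood ratios are invented:)

```python
# A sketch of the "pieces of weak evidence" point: in odds form,
# independent pieces of evidence multiply, so three individually
# unimpressive 2:1 likelihood ratios compound to 8:1. The 10% prior
# and the likelihood ratios are invented for illustration.
def update_odds(prior_prob, likelihood_ratios):
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr               # assumes the pieces are independent
    return odds / (1 + odds)     # convert back to a probability

print(round(update_odds(0.10, [2.0]), 3))            # one weak clue: 0.182
print(round(update_odds(0.10, [2.0, 2.0, 2.0]), 3))  # three of them: 0.471
```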

Comment author: FiftyTwo 13 July 2013 05:42:09AM 13 points [-]

Why is everyone so interested in decision theory? Especially the increasingly convoluted variants with strange acronyms that seem to be popping up.

Comment author: Qiaochu_Yuan 13 July 2013 06:01:46AM *  26 points [-]

As far as I can tell, LW was created explicitly with the goal of producing rationalists, one desirable side effect of which was the creation of friendly AI researchers. Decision theory plays a prominent role in Eliezer's conception of friendly AI, since a decision theory is how the AI is supposed to figure out the right thing to do. The obvious guesses don't work in the presence of things like other agents that can read the AI's source code, so we need to find some non-obvious guesses because that's something that could actually happen.

Comment author: Adele_L 13 July 2013 05:24:01PM *  20 points [-]

Hey, I think your tone here comes across as condescending, which goes against the spirit of a 'stupid questions' thread, by causing people to believe they will lose status by posting in here.

Comment author: Qiaochu_Yuan 13 July 2013 05:29:22PM 20 points [-]

Fair point. My apologies. Getting rid of the first sentence.

Comment author: Adele_L 13 July 2013 09:52:16PM 10 points [-]

Thanks!

Comment author: gothgirl420666 13 July 2013 06:48:14AM *  6 points [-]

This was what I gathered from reading the beginning of the TDT paper: "There's this one decision theory that works in every single circumstance except for this one crazy sci-fi scenario that might not even be physically possible, and then there's this other decision theory that works in said sci-fi scenario but not really anywhere else. We need to find a decision theory that combines these two in order to always work, including in this one particular sci-fi scenario."

I guess it might be useful for AI research, but I don't see why I would need to learn it.

Comment author: RomeoStevens 13 July 2013 10:03:12AM 14 points [-]

The sci-fi bit is only to make it easier to think about. The real-world scenarios it corresponds to require the reader to have quite a bit more background material under their belt to reason carefully about.

Comment author: gothgirl420666 13 July 2013 03:00:14PM 2 points [-]

What are the real world scenarios it corresponds to? The only one I know of is the hitchhiker one, which is still pretty fantastic. I'm interested in learning about this.

Comment author: saturn 13 July 2013 07:44:16PM 8 points [-]

Any kind of tragedy of the commons type scenario would qualify.

Comment author: gothgirl420666 13 July 2013 08:44:39PM *  3 points [-]

It's not obvious to me how tragedy of the commons/prisoner's dilemma is isomorphic to Newcomb's problem, but I definitely believe you that it could be. If TDT does in fact present a coherent solution to these types of problems, then I can easily see how it would be useful. I might try to read the pdf again sometime. Thanks.

Comment author: benelliott 14 July 2013 01:52:44AM 4 points [-]

They aren't isomorphic problems; however, it is the case that CDT two-boxes and defects, while TDT one-boxes and cooperates (against some opponents).
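
(A back-of-envelope expected-value check of the one-boxing claim, assuming a 99%-accurate predictor and the standard Newcomb payoffs; this is only the naive EV arithmetic, not a full decision-theoretic treatment:)

```python
# Expected value of each strategy against a predictor with assumed
# accuracy 0.99 and the standard Newcomb payoffs of $1,000
# (transparent box) and $1,000,000 (opaque box).
acc = 0.99
ev_one_box = acc * 1_000_000 + (1 - acc) * 0        # opaque box only
ev_two_box = acc * 1_000 + (1 - acc) * 1_001_000    # take both boxes
print(round(ev_one_box), round(ev_two_box))         # 990000 vs 11000
```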

Comment author: CronoDAS 15 July 2013 08:26:15AM *  10 points [-]

I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:

It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.

As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. But I don't want to live my life having done nothing. Advice?

Comment author: Larks 15 July 2013 09:43:09AM 12 points [-]

a half-written computer game doesn't run at all

I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
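
(A minimal sketch of what "simplest possible underlying engine" might mean in practice; everything here is invented for illustration:)

```python
# A toy sketch of the "simplest possible underlying engine" idea:
# a loop that is runnable (hence playable and testable) from the
# first commit, before any real content exists. All names invented.
def get_input():
    return input("> ")

def update(state, command):
    if command == "quit":
        state["running"] = False
    else:
        state["turn"] += 1     # a placeholder for real game rules
    return state

def render(state):
    print(f"turn {state['turn']} (type 'quit' to stop)")

state = {"running": True, "turn": 0}
while state["running"]:
    render(state)
    state = update(state, get_input())
```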

Comment author: CAE_Jones 15 July 2013 10:44:17AM 4 points [-]

In fact, the games I tend to make progress on are the ones I can get testable as quickly as possible. Unfortunately, those are usually the least complicated ones (glorified MUDs, an x axis with only 4 possible positions, etc).

I do want to do bigger and better things, but then I run into the same problem as CronoDAS. When I do start a bigger project, I can sometimes get going, then crash within the first hour and never return. (In a couple of extreme cases, I lasted a good week before it died, though one of these was for external reasons.) Getting started is usually the hardest part, followed by surviving until there's something worth looking back at. (A functioning menu system does not count.)

Comment author: OnTheOtherHandle 25 July 2013 01:59:24AM 2 points [-]

This seems like a really good concept to keep in mind. I wonder if it could be applied to other fields? Could you make a pot that remains a pot the whole way through, even as you refine it and add detail? Could you write a song that starts off very simple but still pretty, and then gradually layer on the complexity?

Your post inspired me to try this with writing, so thank you. :) We could start with a one-sentence story: "Once upon a time, two lovers overcame vicious prejudice to be together."

And that could be expanded into a one-paragraph story: "Chanon had known all her life that the blue-haired Northerners were hated enemies, never to be trusted, that she had to keep her red-haired Southern bloodline pure or the world would be overrun by the blue barbarians. But everything was thrown in her face when she met Jasper - his hair was blue, but he was a true crimson-heart, as the saying went. She tried to find every excuse to hate him, but time and time again Jasper showed himself to be a man of honor and integrity, and when he rescued her from those lowlife highway robbers - how could she not fall in love? Her father hated it of course, but even she was shocked at how easily he disowned her, how casually he threw away the bonds of family for the chains of prejudice. She wasn't happy now, homeless and adrift, but she knew that she could never be happy again in the land she had once called home. Chanon and Jasper set out to unknown lands in the East, where hopefully they could find some acceptance and love for their purple family."

This could be turned into a one page story, and then a five page story, and so on, never losing the essence of the message. Iterative storytelling might be kind of fun for people who are trying to get into writing something long but don't know if they can stick it out for months or years.

Comment author: sediment 21 July 2013 07:33:28PM *  2 points [-]

I submit that this might generalize: perhaps it's worth, where possible, planning your projects with an iterative structure, so that feedback and reward appear gradually throughout the project rather than in an all-or-nothing fashion at the very end. Tight feedback loops are a great thing in life. Granted, this is of no use for some things, such as taking a degree.

Comment author: Kyre 16 July 2013 12:09:30AM 5 points [-]

I have/had this problem. My computer and shelves are full of partially completed (or, more realistically, just-begun) projects.

So, what I'm doing at the moment is that I've picked one of them, and that's the thing I'm going to complete. When I'm feeling motivated, that's what I work on. When I'm not feeling motivated, I try to do at least half an hour or so before I flake off and go play games or work on something that feels more awesome at the time. At those times my motivation isn't that I feel the project is worthwhile; it is that having gone through the process of actually finishing something will have been worthwhile.

It's possible after I'm done I may never put that kind of effort in again, but I will know (a) that I probably can achieve that sort of goal if I want and (b) if carrying on to completion is hell, what kind of hell and what achievement would be worth it.

Comment author: Qiaochu_Yuan 15 July 2013 07:43:11PM 3 points [-]

Beeminder. Record the number of Pomodoros you spend working on the project and set some reasonable goal, e.g. one a day.

Comment author: gothgirl420666 15 July 2013 03:07:52PM 2 points [-]

there's a good chance I'll get frustrated and give up before actually completing the project

Make this not true. Practice doing a bunch of smaller projects, maybe one or two week-long projects, then a month-long project. Then you'll feel confident that your work ethic is good enough to complete a major project without giving up.

Comment author: drethelin 14 July 2013 05:58:24PM 9 points [-]

Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? Doing something like just telling everyone you meet "hey, you're cute, want to make out?" seems like it would go badly.

Comment author: wedrifid 14 July 2013 10:29:34PM 10 points [-]

Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? Doing something like just telling everyone you meet "hey, you're cute, want to make out?" seems like it would go badly.

Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).

Comment author: CoffeeStain 13 July 2013 11:17:35PM 9 points [-]

How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.

My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.

Comment author: gothgirl420666 14 July 2013 03:54:29AM *  31 points [-]

This was a big realization for me personally:

If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you: if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me", or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and can lead people to see you as self-centered. This might be what people mean when they say "be yourself" or "don't worry about what other people think of you".

Also, Succeed Socially is a good resource.

Comment author: [deleted] 14 July 2013 11:04:42PM *  8 points [-]

Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.

Play to your comparative advantage.

Comment author: mwengler 14 July 2013 02:24:58PM 7 points [-]

In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. That you will take pleasure in their increased attention is not held against you just because it means you are not selfless, not at all. Your need or desire for them is what attracts them.

So don't abnegate, ignore, or deny your own needs. But run an internal model in which other people's needs are primary, and use it to suggest actions you can take that will serve them and glue them to you.

"Horribly self-centered" isn't a statement that you elevate your own needs too high. It is that you are too ignorant of, and unreactive to, other people's needs.

Comment author: CronoDAS 14 July 2013 09:24:54PM 5 points [-]

The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.

Comment author: Vaniver 15 July 2013 04:06:39AM 3 points [-]

The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.

Much of it boils down to gothgirl420666's advice, except with more technical help on how. (I think the book is well worth reading, but it basically outlines "these are places where you can expend effort to make other people happier.")

Comment author: ChristianKl 15 July 2013 08:59:53AM *  2 points [-]

One of the tips from Carnegie that gothgirl420666 doesn't mention is using people's names.

Learn them and use them a lot in conversation. Greet people with their name.

Say things like: "I agree with you, John" or "There I disagree with you, John."

Comment author: Vaniver 15 July 2013 06:06:28PM *  2 points [-]

This is a piece of advice that most people disagree with, and so I am reluctant to endorse it. Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.

(While we're on the subject of recommendations I disagree with, Carnegie recommends recording people's birthdays and sending them a note or giving them a call. This used to be a lot more impressive before systems to do that automatically existed, and in an age of Facebook I don't think it's worth putting effort into. Those are the only two recommendations from the book that I remember thinking were unwise.)

Comment author: RomeoStevens 15 July 2013 09:36:46PM 4 points [-]

Be judicious, and name-drop with one level of indirection: "That's sort of like what John was saying earlier, I believe, yada yada."

Comment author: ChristianKl 16 July 2013 08:48:51AM *  2 points [-]

Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.

It probably depends on the context. If you're in a context like a sales conversation, people might get cautious. In other contexts, you might like a person trying to be nice to you.

But you are right that there is the issue of artificiality. It can be strange if things don't flow naturally. I think that's more a matter of how you do it than of how much or when.

At the beginning, just starting to greet people with their name can be a step forward. I think in most cultures that's an appropriate thing to do, even if not everyone does it.

I would also add that I'm from Germany, so my cultural background is a bit different than the American one.

Comment author: Sarokrae 14 July 2013 09:01:42AM 5 points [-]

I second what gothgirl said; but in case you were looking for more concrete advice:

  1. Exchange compliments. Accept compliments graciously but modestly (e.g. "Thanks, that's kind of you").
  2. Increase your sense of humour (watching comedy, reading jokes) until it's at population average levels, if it's not there.
  3. Practise considering other people's point of view.
  4. Do those three things consciously for long enough that you start doing them automatically.

At least, that's what worked for me when I was younger. Especially 1, actually; I think it helped with 3.

Comment author: drethelin 14 July 2013 03:05:05AM 4 points [-]

You can be self-centered and not act that way. If you even pretend to care about most people's lives they will care more about yours.

If you want to do this without being crazy bored and feeling terrible, I recommend figuring out which topics from other people's lives you actually enjoy listening to them talk about, and also working on being friends with people who do interesting things. In a college town, asking someone their major is quite often going to be enjoyable for them, and if you're interested and have some knowledge of a wide variety of fields, you can easily find out interesting things.

Comment author: ikrase 13 July 2013 08:22:51AM 9 points [-]
  • What's with the ems? People who are into ems seem to make a lot of assumptions about what ems are like, assumptions that seem completely detached from present-day culture or even the present structure of life; they also seem willing to spam duplicates of people around, etc. I know that Hanson thinks that 1. ems will not be robbed of their humanity and 2. lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange just how, as soon as people say 'em' (as opposed to any other form of uploading), everything gets weird. Does anthropics come into it?

  • Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (And yes, I'm aware of the need for Friendliness in Obedient AI.)

Comment author: cousin_it 13 July 2013 08:48:17AM *  15 points [-]

For what it's worth, Eliezer's answer to your second question is here:

There is no safe wish smaller than an entire human morality. (...) With a safe genie, wishing is superfluous. Just run the genie.

Comment author: drethelin 13 July 2013 01:09:54PM 9 points [-]

Basically it's a matter of natural selection. Given a starting population of ems, if some are unwilling to be copied, the ones that are willing to be copied will dominate the population in short order. If ems are useful for work, i.e. valuable, then the more valuable ones will be copied more often. At that point, ems that are willing to be copied and will do slave labor effectively without complaint will become the most copied, and the population of ems will end up being composed largely of copies of the person/people who are 1) OK with being copied, and 2) OK with being modified to work more effectively.
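
A toy simulation (a Python sketch with invented parameters, nothing from Hanson's actual models) shows how fast this goes: even starting from a 50/50 split, copy-willing ems make up essentially the whole population within ten doublings.

    import random

    # Toy selection dynamics; every number here is made up for illustration.
    # Each em either consents to copying or doesn't; consenters get copied
    # once per "generation", non-consenters stay at one instance each.
    population = [{"willing": random.random() < 0.5} for _ in range(100)]

    for generation in range(10):
        copies = [dict(em) for em in population if em["willing"]]
        population.extend(copies)

    share = sum(em["willing"] for em in population) / len(population)
    print("Fraction willing after 10 doublings:", round(share, 3))  # ~0.999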

Comment author: Eliezer_Yudkowsky 14 July 2013 04:15:54AM 7 points [-]

Well, no offense, but I'm not sure you are aware of the need for Friendliness in Obedient AI, or rather, just how much F you need in a genie.

If you were to actually figure out how to build a genie you would have figured it out by trying to build a CEV-class AI, intending to tackle all those challenges, tackling all those challenges, having pretty good solutions to all of those challenges, not trusting those solutions quite enough, and temporarily retreating to a mere genie which had ALL of the safety measures one would intuitively imagine necessary for a CEV-class independently-acting unchecked AI, to the best grade you could currently implement them. Anyone who thought they could skip the hard parts of CEV-class FAI by just building a genie instead, would die like a squirrel under a lawnmower. For reasons they didn't even understand because they hadn't become engaged with that part of the problem.

I'm not certain that this must happen in reality. The problem might have much kinder qualities than I anticipate in the sense of mistakes naturally showing up early enough and blatantly enough for corner-cutters to spot them. But it's how things are looking as a default after becoming engaged with the problems of CEV-class AI. The same problems show up in proposed 'genies' too, it's just that the genie-proposers don't realize it.

Comment author: ChristianKl 13 July 2013 01:37:48PM 4 points [-]

I know that Hanson thinks that 1. ems will not be robbed of their humanity

I don't know whether Hanson has a concrete concept of 'humanity'.

Comment author: DanielLC 13 July 2013 08:46:42PM 3 points [-]

(as opposed to any other form of uploading)

I didn't know "em" was a specific form of uploading. What form is it, and what other forms are there?

Comment author: hairyfigment 15 July 2013 02:57:39AM 2 points [-]

Besides Eliezer's rather strong-looking argument, ethically creating Obedient AI would require solving the following scary problems:

  • A "nonperson predicate" that can ensure the AI doesn't create simulations which themselves count as people. If we fail to solve this one, then I could be a simulation the AI made in order to test how people like me react to torture.

  • A way to ensure the AI itself does not count as a person, so that we don't feel sad if it eventually switches itself off. See here for a fuller explanation of why this matters.

Now, I think Wei Dai suggested we start by building a "philosophical" AI that could solve such problems for us. I don't think philosophy is a natural class. (A 'correct way to do philosophy' sounds like a fully general correct way to think and act.) But if we get the AI's goals right, then maybe it could start out restricted by flawed and overcautious answers to these questions, but find us some better answers. Maybe.

Comment author: Alejandro1 13 July 2013 04:35:28PM 7 points [-]

It seems to me that there are basically two approaches to preventing a UFAI intelligence explosion: a) making sure that the first intelligence explosion is an FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.

Naively, it seems to me that the second approach is more viable--it seems comparable in scale to something between stopping the use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, it sounds easier than solving (over a few years/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.

However, it seems (from what I read on LW and found by quickly browsing the MIRI website; I am not particularly well informed, hence writing this on the Stupid Questions thread) that most of the efforts of MIRI are on the first approach. Has there been a formal argument for why it is preferable, or are there efforts on the second approach I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by the LW/MIRI kind of people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition-building and propaganda campaigns necessary for (b).

Comment author: Eliezer_Yudkowsky 14 July 2013 04:24:43AM 11 points [-]

I think it's easier to get a tiny fraction of the planet to do a complex right thing than to get 99.9% of a planet to do a simpler right thing, especially if 99.9% compliance may not be enough and 99.999% compliance may be required instead.

Comment author: shminux 14 July 2013 04:46:19AM 2 points [-]

This calls for a calculation. How hard would creating an FAI have to be for this inequality to be reversed?

Comment author: Eliezer_Yudkowsky 14 July 2013 05:01:32AM 9 points [-]

When I see proposals that involve convincing everyone on the planet to do something, I write them off as loony-eyed idealism and move on. So, creating FAI would have to be hard enough that I considered it too "impossible" to be attempted (with this fact putatively being known to me given already-achieved knowledge), and then I would swap to human intelligence enhancement or something because, obviously, you're not going to persuade everyone on the planet to agree with you.

Comment author: Alejandro1 15 July 2013 12:02:08AM 4 points [-]

But is it really necessary to persuade everyone, or 99.9% of the planet? If gwern's analysis is correct (I have no idea if it is) then it might suffice to convince the policymakers of a few countries like USA and China.

Comment author: shminux 14 July 2013 05:14:03AM 3 points [-]

I see. So you do have an upper bound in mind for the FAI problem difficulty, then, and it's lower than other alternatives. It's not simply "shut up and do the impossible".

Comment author: [deleted] 13 July 2013 05:56:23PM 11 points [-]

Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.

Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).

So FAI is actually the easiest way to prevent UFAI.

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Comment author: Kaj_Sotala 14 July 2013 07:22:09PM *  6 points [-]

We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts:

Large-scale surveillance efforts are ethically problematic and face major political resistance, and it seems unlikely that current political opinion would support the creation of a far-reaching surveillance network for the sake of AGI risk alone. The extent to which such extremes would be necessary depends on exactly how easy it would be to develop AGI in secret. Although several authors make the point that AGI is much easier to develop unnoticed than something like nuclear weapons (McGinnis 2010; Miller 2012), cutting edge high-tech research does tend to require major investments which might plausibly be detected even by less elaborate surveillance efforts. [...]

Even under such conditions, there is no clear way to define what counts as dangerous AGI. Goertzel and Pitt (2012) point out that there is no clear division between narrow AI and AGI, and attempts to establish such criteria have failed. They argue that since AGI has a nebulous definition, obvious wide-ranging economic benefits, and potentially rich penetration into multiple industry sectors, it is unlikely to be regulated due to speculative long-term risks.

AGI regulation requires global cooperation, as the noncooperation of even a single nation might lead to catastrophe. Historically, achieving global cooperation on tasks such as nuclear disarmament and climate change has been very difficult. As with nuclear weapons, AGI could give an immense economic and military advantage to the country that develops it first, in which case limiting AGI research might even give other countries an incentive to develop AGI faster (Cade 1966; de Garis 2005; McGinnis 2010; Miller 2012) [...]

To be effective, regulation also needs to enjoy support among those being regulated. If developers working in AGI-related fields only follow the letter of the law, while privately considering all regulations as annoying hindrances and fears about AGI overblown, the regulations may prove ineffective. Thus, it might not be enough to convince governments of the need for regulation; the much larger group of people working in the appropriate fields may also need to be convinced.

While Shulman (2009) argues that the unprecedentedly destabilizing effect of AGI could be a cause for world leaders to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that increased automation could make countries more self-reliant, and international cooperation considerably more difficult. AGI technology is also much harder to detect than, for example, nuclear technology is—nuclear weapons require a substantial infrastructure to develop, while AGI needs much less (McGinnis 2010; Miller 2012). [...]

Goertzel and Pitt (2012) suggest that for regulation to be enacted, there might need to be an “AGI Sputnik”—a technological achievement that makes the possibility of AGI evident to the public and policy makers. They note that after such a moment, it might not take very long for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer. [...]

“Regulate research” proposals: Our view

Although there seem to be great difficulties involved with regulation, there also remains the fact that many technologies have been successfully subjected to international regulation. Even if one were skeptical about the chances of effective regulation, an AGI arms race seems to be one of the worst possible scenarios, one which should be avoided if at all possible. We are therefore generally supportive of regulation, though the most effective regulatory approach remains unclear. [...]

Not everyone believes that the risks involved in creating AGIs are acceptable. Relinquishment involves the abandonment of technological development that could lead to AGI. This is possibly the earliest proposed approach, with Butler (1863) writing that “war to the death should be instantly proclaimed” upon machines, for otherwise they would end up destroying humans entirely. In a much-discussed article, Joy (2000) suggests that it might be necessary to relinquish at least some aspects of AGI research, as well as nanotechnology and genetics research.

AGI relinquishment is criticized by Hughes (2001), with Kurzweil (2005) criticizing broad relinquishment while being supportive of the possibility of “fine-grained relinquishment,” banning some dangerous aspects of technologies while allowing general work on them to proceed. In general, most writers reject proposals for broad relinquishment. [...]

McKibben (2003), writing mainly in the context of genetic engineering, suggests that AGI research should be stopped. He brings up the historical examples of China renouncing seafaring in the 1400s and Japan relinquishing firearms in the 1600s, as well as the more recent decisions of abandoning DDT, CFCs, and genetically modified crops in Western countries. However, it should also be noted that Japan participated in World War II, that China now has a navy, that there are reasonable alternatives for DDT and CFCs, which probably do not exist for AGI, and that genetically modified crops are in wide use in the United States.

Hughes (2001) argues that attempts to outlaw a technology will only make the technology move to other countries. He also considers the historical relinquishment of biological weapons to be a bad example, for no country has relinquished peaceful biotechnological research such as the development of vaccines, nor would it be desirable to do so. With AGI, there would be no clear dividing line between safe and dangerous research. [...]

Relinquishment proposals suffer from many of the same problems as regulation proposals, only worse. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future. Therefore we do not consider them to be a viable class of proposals.

Comment author: NancyLebovitz 15 July 2013 04:51:38AM 8 points [-]

Butler (1863) writing that “war to the death should be instantly proclaimed”

I had no idea that Herbert's Butlerian Jihad might be a historical reference.

Comment author: Kaj_Sotala 16 July 2013 10:22:56AM 3 points [-]

Wow, I've read Dune several times, but didn't actually get that before you pointed it out.

Comment author: NancyLebovitz 16 July 2013 02:39:31PM 3 points [-]

It turns out that there's a Wikipedia page.

Comment author: NancyLebovitz 13 July 2013 05:38:35PM 6 points [-]

There's a third alternative, though it's quite unattractive: damaging civilization to the point that AI is impossible.

Comment author: Qiaochu_Yuan 13 July 2013 05:34:15PM *  6 points [-]

My impression of Eliezer's model of the intelligence explosion is that he believes b) is much harder than it looks. If you make developing strong AI illegal then the only people who end up developing it will be criminals, which is arguably worse, and it only takes one successful criminal organization developing strong AI to cause an unfriendly intelligence explosion. The general problem is that a) requires that one organization do one thing (namely, solving friendly AI) but b) requires that literally all organizations abstain from doing one thing (namely, building unfriendly AI).

CFCs and global warming don't seem analogous to me. A better analogy to me is nuclear disarmament: it only takes one nuke to cause bad things to happen, and governments have a strong incentive to hold onto their nukes for military applications.

Comment author: JoshuaFox 14 July 2013 05:14:25PM 6 points [-]

How do you get someone to understand your words as they are, denotatively -- so that they do not overly-emphasize (non-existent) hidden connotations?

Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what intentions "really" are.

Comment author: Qiaochu_Yuan 14 July 2013 05:48:24PM 10 points [-]

Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.

(non-existent)

This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.

Comment author: RomeoStevens 15 July 2013 09:40:09PM 6 points [-]

Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.

Comment author: Turgurth 14 July 2013 12:01:35AM 6 points [-]

Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.

Comment author: Qiaochu_Yuan 14 July 2013 12:12:39AM 9 points [-]

Attend a CFAR workshop!

Comment author: [deleted] 14 July 2013 06:55:17AM *  7 points [-]

I think many people would find this advice rather impractical. What about people who (1) cannot afford to pay USD 3,900 to attend the workshop (as I understand it, scholarships offered by CFAR are limited in number), and/or (2) cannot afford to spend the time/money travelling to the Bay Area?

Comment author: palladias 14 July 2013 12:42:25PM 5 points [-]

We do offer a number of scholarships. If that's your main concern, apply and see what we have available. (Applying isn't a promise to attend). If the distance is your main problem, we're coming to NYC and you can pitch us to come to your city.

Comment author: gothgirl420666 13 July 2013 02:44:22AM 6 points [-]

Why is space colonization considered at all desirable?

Comment author: CellBioGuy 13 July 2013 02:56:53AM *  32 points [-]

Earth is currently the only known biosphere. More biospheres means that disasters that muck up one are less likely to muck up everything.

Less seriously, people like things that are cool.

EDIT: Seriously? My most-upvoted comment of all time? Really? This is as good as it gets?

Comment author: drethelin 13 July 2013 03:58:02AM *  21 points [-]

1: It's awesome. It's desirable for the same reason that fast cars, fun computer games, giant pyramids, and sex are.

2: It's an insurance policy against things that might wreck the earth but not other planets/solar systems.

3: Insofar as we can imagine there to be other alien races, understanding space colonization is extremely important either for trade or self defense.

4: It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.

Comment author: DanArmak 13 July 2013 03:52:53PM 6 points [-]

It's awesome. It's desirable for the same reason that fast cars, fun computer games, giant pyramids, and sex are.

In sci-fi maybe. I doubt people actually living in space (or on un-Earth-like planets) would concur, without some very extensive technological change.

It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.

New incompatible sub-subsets will just keep arising in new colonies - as has happened historically.

Comment author: shminux 13 July 2013 04:33:38AM 10 points [-]

Eggs, basket, x-risk.

Comment author: ThrustVectoring 13 July 2013 02:47:05AM 9 points [-]

Would you rather have one person living a happy, fulfilled life, or two? Would you rather have seven billion people living with happy, fulfilled lives, or seven billion planets full of people living happy, fulfilled lives?

Comment author: TimS 13 July 2013 04:42:54AM 5 points [-]

It seems likely that exploiting resources in space will make society richer, benefiting everyone. Perhaps that will require people to live in space.

Comment author: TrE 13 July 2013 06:30:16PM 4 points [-]

Another reason is that the earth's crust is quite poor in virtually all precious and useful metals (just look at the d-block of the periodic table for examples). Virtually all of them sank to the core during the earth's formation; the existing deposits are the result of asteroid strikes. So, asteroid mining is worth considering even if you're a pure capitalist working for your own gain.

Comment author: [deleted] 13 July 2013 09:40:31PM 5 points [-]

I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)

Comment author: therufs 14 July 2013 04:25:45AM 12 points [-]
  • Talk to a stranger
  • Don't use a GPS
  • Try a new food/restaurant
  • If you usually drive, try getting somewhere on public transit
  • Sign up for a Coursera class (that's actually happening, so you have the option to be graded.) (Note: this will be a small risk on a daily basis for many consecutive days)
  • Go to a meetup at a library or game store
Comment author: [deleted] 14 July 2013 05:20:52AM *  11 points [-]

Apparently some study found that the difference between people with bad luck and those with good luck is that people with good luck take lots of low-downside risks.

Can't help with specific suggestions, but thinking about it in terms of the decision-theory of why it's a good idea can help to guide your search. But you're doing it for the action-feeling...

Climb a tree.

Comment author: Jayson_Virissimo 15 July 2013 05:52:08AM *  8 points [-]

Use a randomizer to choose someone in your address book and call them immediately (don't give yourself enough time to talk yourself out of it). It is a rush thinking about what to say as the phone is ringing. You are risking your social status (by coming off weird or awkward, in case you don't have anything sensible to say) without really harming anyone. On the plus side, you may make a new ally or rekindle an old relationship.
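
The mechanics can be as trivial as this (a Python sketch; the names are invented placeholders, so substitute whatever your address book exports):

    import random

    # Hypothetical contact list; in practice, load it from your address book.
    contacts = ["Alice", "Bob", "Carol", "Dave"]

    # Commit before you can reconsider: pick one and dial immediately.
    print("Call:", random.choice(contacts))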

Comment author: mwengler 14 July 2013 02:31:38PM 3 points [-]

Going for the feeling without the actual downside? Play MMORPG video games. Shoot zombies until they finally overwhelm you. Shoot cops in Vice City until the army comes after you. Jump out of helicopters.

I really liked therufs' suggestion list below. The downside, the thing you are risking in each of these, doesn't actually harm you; it makes you stronger.

Comment author: Turgurth 13 July 2013 11:57:07PM 3 points [-]

Try some exposure therapy to whatever it is you're often afraid of. Can't think of what you're often afraid of? I'd be surprised if you're completely immune to every common phobia.

Comment author: Error 15 July 2013 03:08:34PM *  2 points [-]

I actually have a book on exactly this subject: Absinthe and Flamethrowers. The author's aim is to show you ways to take real but controllable risks.

I can't vouch for its quality since I haven't read it yet, but it exists. And, y'know. Flamethrowers.

Comment author: Qiaochu_Yuan 14 July 2013 12:17:16AM 2 points [-]

When you go out to eat with friends, randomly choose who pays for the meal. In the long run this leaves your expected spending unchanged and only increases its variance. I think it's fun.

Comment author: BrassLion 15 July 2013 03:40:32AM 7 points [-]

This is likely to increase the total bill, much like how splitting the check evenly instead of strictly paying for what you ordered increases the total bill.

Comment author: Larks 15 July 2013 09:28:02AM 2 points [-]

Assign the probabilities in proportion to each person's fraction of the overall bill. Incentives are aligned, as in the sketch below.
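
A minimal Python sketch of that rule (names and amounts invented): each person's expected payment works out to exactly their own share, so padding your order no longer pushes costs onto others.

    import random

    # Each diner's share of the bill (made-up numbers).
    bill = {"Alice": 20.0, "Bob": 35.0, "Carol": 15.0}

    # Pick the payer with probability proportional to what they ordered,
    # so everyone's expected payment equals exactly their own share.
    payer = random.choices(list(bill), weights=list(bill.values()), k=1)[0]
    print(payer, "pays the whole", sum(bill.values()))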

Comment author: Qiaochu_Yuan 15 July 2013 05:31:36AM 2 points [-]

I haven't observed this happening among my friends. Maybe if you only go out to dinner with homo economicus...

Comment author: D_Malik 15 July 2013 10:21:04PM 3 points [-]

This is called the unscrupulous diner's dilemma, and experiments say that not only do people (strangers) respond to it like homo economicus; their utility functions seem not even to have terms for each other's welfare. Maybe you eat with people who are impression-optimizing (and mathy, so that they know the other person knows indulging is mean), and/or genuinely care about each other.

Comment author: beoShaffer 14 July 2013 01:52:30AM 2 points [-]

Also, order your food and/or drinks at random.

Comment author: gothgirl420666 13 July 2013 02:49:08AM *  5 points [-]

If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal's Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.

Comment author: Qiaochu_Yuan 13 July 2013 04:57:13AM *  43 points [-]

My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don't have beliefs in the sense that LW uses the word. People just say words, mostly words that they've heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.

In general, I think it's a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don't ask "what do these people believe?" but "what do these people do?" The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.

Comment author: MrMind 15 July 2013 08:47:30AM 3 points [-]

don't ask "what do these people believe?" but "what do these people do?"

If there were a way to send a message to my former self of 10 years ago, and I could only send a hundred characters, that's what I would send.

Comment author: gjm 15 July 2013 03:36:18PM 2 points [-]

I'm obviously terribly shallow. I would send a bunch of sporting results / stock price data.

Comment author: pragmatist 13 July 2013 10:10:10AM *  9 points [-]

If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity,

Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there's an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn't move your prior odds all that much.
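
To put toy numbers on this (all figures invented purely for illustration): working in odds form makes it obvious why a likelihood ratio near 1 barely moves the prior.

    # Toy Bayesian update in odds form.
    prior_prob = 0.01
    prior_odds = prior_prob / (1 - prior_prob)   # 1:99

    likelihood_ratio = 1.05   # Pr(E|H) / Pr(E|~H), barely above 1

    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(posterior_prob)     # ~0.0105: almost no movement from 0.01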

and that at least personally I have radically changed my worldview a whole bunch of times,

This seems irrelevant to the truth of Christianity.

then it seems like I should assign at least a 5% or so probability to Christianity being true.

That probability is way too high.

Comment author: drethelin 13 July 2013 03:55:24AM 7 points [-]

http://en.wikipedia.org/wiki/List_of_religious_populations

How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can't use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would have condemned you to hell at many points in the past. There are several problems with Pascal's Wager, but the biggest to me is that it's impossible to choose WHICH Pascal's Wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly god even wants from you, whether it's belief or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of "Christianity" being true at lower than even 1 percent, and the odds of any specific sect of Christianity being true even lower.

Comment author: ChristianKl 13 July 2013 01:34:37PM 7 points [-]

How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it?

There are also various Christians who believe that other Christians who follow Christianity the wrong way will go to hell.

Comment author: Sarokrae 14 July 2013 09:12:58AM 3 points [-]

I can't upvote this point enough.

And more worryingly, with the Christians I have spoken to, those who are more consistent in their beliefs and actually update the rest of their beliefs on them (and don't just have "Christianity" as a little disconnected bubble in their beliefs) are overwhelmingly in this category, and those who believe that most Christians will go to heaven usually haven't thought very hard about the issue.

Comment author: palladias 14 July 2013 12:47:55PM 2 points [-]

C.S. Lewis thought most everyone was going to Heaven and thought very hard about the issue. (The Great Divorce is brief, engagingly written, an allegory of near-universalism, and a nice typology of some sins.)

Comment author: gothgirl420666 13 July 2013 06:22:34AM *  4 points [-]

Well, correct me if I'm wrong, but most of the other popular religions don't really believe in eternal paradise/damnation, so Pascal's Wager applies just as much to, say, Christianity vs. Hinduism as it does to Christianity vs. atheism. Jews, Buddhists, and Hindus don't believe in hell, as far as I can tell, but Muslims do. So if I were going to buy into Pascal's Wager, I think I would read apologetics for both Christianity and Islam, figure out which one seemed more likely, and go with that one. Even if you arrived at equal probability estimates for both, flipping a coin and picking one would still be better than going with atheism, right?

The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true,

Why? Couldn't it be something like, Religion A is correct, Religion B almost gets it and is getting at the same essential truth, but is wrong in a few ways, Religion C is an outdated version of Religion A that failed to update on new information, Religion D is an altered imitation of Religion A that only exists for political reasons, etc.

Good post though, and you sort of half-convinced me that there are flaws in Pascal's Wager, but I'm still not so sure.

Comment author: DanArmak 13 July 2013 03:45:53PM 6 points [-]

You're combining two reasons for believing: Pascal's Wager, and popularity (that many people already believe). That way, you try to avoid a pure Pascal's Mugging, but if the mugger can claim to have successfully mugged many people in the past, then you'll submit to the mugging. You'll believe in a religion if it has Heaven and Hell in it, but only if it's also popular enough.

You're updating on the evidence that many people believe in a religion, but it's unclear what it's evidence for. How did most people come to believe in their religion? They can't have followed your decision procedure, because it only tells you to believe in popular religions, and every religion historically started out small and unpopular.

So for your argument to work, you must believe that the truth of a religion is a strong positive cause of people believing in it. (It can't be overwhelmingly strong, though, since no religion has or has had a large majority of the world believing in it.)

But if people can somehow detect or deduce the truth of a religion on their own - and moreover, billions of people can do so (in the case of the biggest religions) - then you should be able to do so as well.

Therefore I suggest you try to decide on the truth of a religion directly, the way those other people did. Pascal's Wager can at most bias you in favour of religions with Hell in them, but you still need some unrelated evidence for their truth, or else you fall prey to Pascal's Mugging.

Comment author: drethelin 13 July 2013 08:56:26PM 4 points [-]

Even if you limit yourself to eternal damnation promising religions, you still need to decide which brand of Christianity/Islam is true.

If Religion A is true, that implies that Religion A's god exists and acts in a way consistent with the tenets of that religion. This implies that all of humanity should have strong and very believable evidence for Religion A over all other religions. But we have a large number of religions that describe god and gods acting in very different ways. This is evidence either that all the religions are relatively false, that god is inconsistent, or that we have multiple gods who are of course free to contradict one another. There's a lot of evidence that religions sprout from other religions, and you could semi-plausibly argue that there is a proto-religion that all modern ones are versions or corruptions of, but this doesn't actually work to select Christianity, because we have strong evidence that many religions predate Christianity, including some from which it appears to have borrowed myths.

Another problem with Pascal's Wager: claims about eternal rewards or punishments are far easier to make than to make plausible. Basically: any given string of words said by a person is not plausible evidence for infinite anything, because it's far easier to SAY infinity than to provide any other kind of evidence for it. This means you can't afford to multiply utility by infinity, because at any point someone can make any claim involving infinity and fuck up all your math.

Comment author: CellBioGuy 13 July 2013 02:54:34AM *  3 points [-]

I should think that this is more likely to indicate that nobody, including really smart people, and including you, actually knows what's what, and that trying to chase after all these Pascal's muggings is pointless, because you will always run into another one that seems convincing from someone else smart.

Comment author: Jaime 16 July 2013 04:41:26AM 4 points [-]

Hi, I have been reading this site for only a few months, and I'm glad this thread came up. My stupid question: can a person simply be just lazy, and how do all the motivation/anti-akrasia techniques help such a person?

Comment author: Qiaochu_Yuan 16 July 2013 08:52:07AM 3 points [-]

Taboo "lazy." What kind of a person are we talking about, and do they want to change something about the kind of person they are?

Comment author: Raiden 14 July 2013 10:45:55PM 4 points [-]

My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?

Comment author: simplicio 15 July 2013 06:06:56PM 5 points [-]

First thing to note is that "worthy of moral consideration" is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my books it has something to do with the extent to which a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.

Although I think ability to suffer is correlated with intelligence, it's difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn't make it obvious that it suffers more.

Consider the presumed evolutionary function of suffering: a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.

To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the "Darwinian holocaust."

Comment author: drethelin 14 July 2013 11:20:27PM 3 points [-]

Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that's different between people it's legitimate for some people to care about animals and others not to.

Comment author: ChristianKl 15 July 2013 06:49:50AM 2 points [-]

Do you think that all humans are persons? What about unborn children? A 1 year old? A mentally handicapped person?

What are your criteria for granting personhood? Are they binary?

Comment author: Raiden 16 July 2013 03:13:35AM 3 points [-]

I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.

It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.

Comment author: mwengler 14 July 2013 03:13:28PM *  4 points [-]

"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?

Or put another way, would our CEV, our Coherent Extrapolated Volition, not expand to consider the utilities of vastly intelligent AIs and weight them in importance with their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st-century biological humans to fairly low significance?

In economic terms, we are attempting to thwart new, more efficient technologies by building political structures that give monopolies to the incumbents, which is us, humans of this epoch. We are attempting to outlaw the methods of competition which might challenge our dominance in the future, at the expense of the utility of our potential future competitors. In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact, even at the expense of tying AIs up in legal restrictions which are explicitly designed to keep them as peasants legally bound to working our land for our benefit.

Certainly a result of constraining AI to be friendly will be that AI will develop more slowly and less completely than if it were to develop in an unconstrained way. It seems quite plausible that unconstrained AI would produce a universe with more intelligence in it than a universe in which we successfully constrain AI development.

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility. It seems that utilitarian calculations do often consider the utility of other higher mammals and birds, that this is justified by their intelligence, that these calculations weigh the utility of clams very little and of plants not at all, and that this also is based on their intelligence.

So is the goal of working towards FAI, vs. UFAI or UAI (Unconstrained AI), actually a goal of lowering the overall utility in the universe, compared to what it would be if we were not attempting to create and solidify our colonial rights to exploit AIs as if they were dumb animals?

This "stupid" question is also motivated by the utility calculations that consider a world with 50 billion sorta happy people to have higher utility than a world with 1 billion really happy people.

Are we right to ignore the potential utility of UFAI or UAI in our calculations of the utility of the future?

Tangentially, another way to ask this is: is our "affinity group" humans, or is it intelligences? In the past humans worked to maximize the utility of their group or clan or tribe, ignoring the utility of other humans just like them but in a different tribe. As time went on our affinity groups grew, the number and kind of intelligences we included in our utility calculations grew. For the last few centuries affinity groups grew larger than nations to races, co-religionists and so on, and to a large extent grew to include all humans, and has even expanded beyond humans so that many people think that killing higher mammals to eat their flesh will be considered immoral by our descendants analogously to how we consider holding slaves or racist views to be immoral actions of our ancestors. So much of the expansion of our affinity group has been accompanied by the recognition of intelligence and consciousness in those who get added to the affinity group. What are the chances that we will be able to create AI and keep it enslaved, and still think we are right to do so in the middle-distant future?

Comment author: Leonhart 14 July 2013 08:48:53PM *  7 points [-]

Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!

Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!

Comment author: gwern 15 July 2013 02:29:57AM 8 points [-]

Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:

"The courage to keep your secret to yourself!"

"The courage to lie to your lover!"

"The courage to betray your comrades!"

"The courage to be a lazy bum!"

"The courage to admit defeat!"

Comment author: Qiaochu_Yuan 14 July 2013 06:04:03PM *  6 points [-]

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility.

Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.

Comment author: Larks 15 July 2013 09:24:15AM 5 points [-]

In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact

Surely we are the Native Americans, trying to avoid dying of typhus when the colonists accidentally kill us in their pursuit of paperclips.

Comment author: [deleted] 14 July 2013 01:10:55AM 4 points [-]

The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible cost... where do they think the nanobots will get the negentropy from?

Comment author: James_Miller 14 July 2013 02:29:01AM 8 points [-]

The sun.

Comment author: CronoDAS 14 July 2013 08:31:33AM 2 points [-]

Almost all the available energy on Earth originally came from the Sun; the only other sources I know of are radioactive elements within the Earth and the rotation of the Earth-Moon system.

So even if it's not from the sun's current output, it's probably going to be from the sun's past output.

Comment author: [deleted] 15 July 2013 08:28:25AM 3 points [-]

Hydrogen for fusion is also available on the Earth and didn't come from the Sun. We can't exploit it commercially yet, but that's just an engineering problem. (Yes, if you want to be pedantic, we need primordial deuterium and synthesized tritium, because proton-proton fusion is far beyond our capabilities. However, D-T's ingredients still don't come from the Sun.)

Comment author: NancyLebovitz 13 July 2013 07:30:28PM 4 points [-]

The usual advice on how to fold a t-shirt starts with the assumption that your t-shirt is flat, but I'm pretty sure that getting the shirt flat takes me longer than folding it. My current flattening method is to grab the shirt by the insides of the sleeves to turn it right-side out, then grab the shoulder seams to shake it flat. Is there anything better?

Comment author: TobyBartels 10 August 2014 07:39:34AM *  2 points [-]

I agree about the sleeves, but I get much better results if I grab it at the bottom to shake it out. Ideally, there are seams coming straight down the sides from the armpits; I hold it where they meet the bottom hem. Note that whether you shake from the shoulder seams or from the bottom, one hand will already be in the proper position from turning the sleeves inside out; it's just a question of which one.

I also fold the shirt while standing, so I never actually need to lay it flat. There is a standing-only variation of the method that you cited, although I actually use a different method that begins from precisely the position that I'm in when I leave off the shaking.

In fact, the idea of actually laying something flat before folding strikes me as a greater source of inefficiency than anything else being discussed here. With practice, you can even fold bedsheets in the air.

Comment author: NancyLebovitz 13 July 2013 11:31:10AM 4 points [-]

How do people construct priors? Is it worth trying to figure out how to construct better priors?

Comment author: shminux 13 July 2013 05:07:43PM 7 points [-]

How do people construct priors?

They make stuff up, mostly, from what I see here. Some even pretend that "epsilon" is a valid prior.

Is it worth trying to figure out how to construct better priors?

Definitely. Gwern recommends PredictionBook as a practice tool to measure and improve your calibration.

Comment author: jmmcd 13 July 2013 09:41:33PM 2 points [-]

I don't think it's useful to think about constructing priors in the abstract. If you think about concrete examples, you see lots of cases where a reasonable prior is easy to find (eg coin-tossing, and the typical breast-cancer diagnostic test example). That must leave some concrete examples where good priors are hard to find. What are they?
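
For instance, here is a minimal sketch of the breast-cancer example in Python, using the conventional illustration numbers (1% base rate, 80% sensitivity, 9.6% false-positive rate; these are the standard textbook figures, assumed here purely for illustration):

```python
# Bayes' theorem: update a prior (the 1% base rate) on a positive test.
prior = 0.01          # P(cancer): read straight off population statistics
sensitivity = 0.80    # P(positive | cancer)
false_pos = 0.096     # P(positive | no cancer)

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(cancer | positive) = {posterior:.3f}")  # ~0.078
```

Here the easy prior is just the population base rate; the hard cases are presumably those where no such reference class is available.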

Comment author: Benito 13 July 2013 05:15:48PM 2 points [-]

I don't know how much this answers your question.

From LessWrong posts such as 'Created Already In Motion' and 'Where Recursive Justification Hits Rock Bottom', I've come to see that humans are born with priors. (The post 'Inductive Bias' is also related: an agent must have some sort of prior to be able to learn anything at all, ever. A pebble has no priors, but a mind does, which means it can update on evidence. What Yudkowsky calls a 'philosophical ghost of perfect emptiness' is other people's image of a mind with no prior suddenly updating to have a map that perfectly reflects the territory; once you have a thorough understanding of Bayes' theorem, this is blatantly impossible/incoherent.)

So, we're born with priors about the environment, and then our further experiences give us new priors for our next experiences.
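
As a minimal sketch of that, consider a Beta-Bernoulli coin model (the prior choice and the flip sequence below are made up for illustration):

```python
# Beta(a, b) prior over a coin's bias; each head increments a, each tail b.
from fractions import Fraction

a, b = 1, 1                    # Beta(1, 1): uniform prior over the bias
for flip in "HHTHHHTH":        # made-up observations
    if flip == "H":
        a += 1
    else:
        b += 1

posterior_mean = Fraction(a, a + b)
print(posterior_mean)          # 7/10 after six heads and two tails
```

A 'ghost of perfect emptiness' would correspond to having no (a, b) at all, leaving nothing for the evidence to update.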

Of course, this is all rather abstract, and if you'd like to have a guide to actually forming priors about real life situations that you find confusing... Well, put in an edit, maybe someone can give you that :-)

Comment author: Jayson_Virissimo 13 July 2013 05:01:37AM 4 points [-]

How do you tell the difference between a preference and a bias (in other people)?

Comment author: [deleted] 13 July 2013 10:26:54AM 5 points [-]

How do you tell the difference between a preference and a bias (in other people)?

I can't even easily, reliably do that in myself!

Comment author: Kaj_Sotala 13 July 2013 06:56:30AM 3 points [-]

Would you have any specific example?

Comment author: benelliott 14 July 2013 02:22:54AM *  2 points [-]

I don't know if this is what the poster is thinking of, but one example that came up recently for me is the distinction between risk-aversion and uncertainty-aversion (these may not be the correct terms).

Risk aversion is what causes me to strongly not want to bet $1000 on a coin flip, even though the expectancy of the bet is zero. I would characterise risk-aversion as an arational preference rather than an irrational bias, primarily because it arises naturally from having a utility function that is non-linear in wealth ($100 is worth a lot if you're begging on the streets, not so much if you're a billionaire).
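
A minimal sketch of that point in Python, assuming log utility and a made-up current wealth of $10,000:

```python
import math

# An agent with log utility in wealth declines a fair $1000 coin flip,
# even though the flip's expected dollar value is zero.
wealth = 10_000  # assumed; the effect shrinks as wealth grows

u_decline = math.log(wealth)
u_accept = 0.5 * math.log(wealth + 1000) + 0.5 * math.log(wealth - 1000)
print(u_accept < u_decline)  # True: concavity alone produces risk aversion
```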

However, something like the Allais paradox can be mathematically proven not to arise from any utility function, however non-linear, and therefore is not explainable by risk aversion. Uncertainty aversion is, roughly speaking, my name for whatever-it-is-that-causes-people-to-choose-irrationally-on-Allais. It seems to work by causing people to strongly prefer certain gains to high-probability gains, and much more weakly prefer high-probability gains to low-probability gains.
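
Here is a sketch of that proof, normalizing U($0) = 0 and U($1M) = 1 and scanning candidate values of u = U($5M) (the gambles are the standard Allais figures):

```python
# No utility function rationalizes the common Allais choices.
#   Gamble 1A: $1M for sure.       1B: 89% $1M, 10% $5M, 1% $0.
#   Gamble 2A: 11% $1M, 89% $0.    2B: 10% $5M, 90% $0.
# Preferring 1A over 1B requires 1 > 0.89 + 0.10*u, i.e. u < 1.1.
# Preferring 2B over 2A requires 0.10*u > 0.11,     i.e. u > 1.1.
from fractions import Fraction as F

for i in range(1000):                       # scan u = 0.00 .. 9.99 exactly
    u = F(i, 100)
    prefers_1A = 1 > F(89, 100) + F(10, 100) * u
    prefers_2B = F(10, 100) * u > F(11, 100)
    assert not (prefers_1A and prefers_2B)  # never both: no u works
print("No value of U($5M) rationalizes both of the common choices.")
```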

For the past few weeks I have been in an environment where casual betting for moderate sized amounts ($1-2 on the low end, $100 on the high end) is common, and disentangling risk-aversion from uncertainty aversion in my decision process has been a constant difficulty.

Comment author: Sarokrae 14 July 2013 09:23:19AM 3 points [-]

In the process of trying to pin down my terminal values, I've discovered at least three subagents of myself with different desires, as well as my conscious self, which doesn't have its own terminal values and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?

What I'm currently using is "the one who yells the loudest wins", but that doesn't seem entirely satisfactory.
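
One alternative I've toyed with, sketched below, is to score options by a fixed, reflectively chosen weighting of the subagents' desires rather than by whoever yells loudest (the subagent names, scores, and weights are all made up for illustration; this is just one possible framing, not an established LW method):

```python
# Weighted aggregation of subagent desires instead of "loudest wins".
subagent_scores = {
    "option_A": {"security": 0.9, "novelty": 0.2, "status": 0.4},
    "option_B": {"security": 0.3, "novelty": 0.8, "status": 0.7},
}
weights = {"security": 0.5, "novelty": 0.2, "status": 0.3}  # set calmly, in advance

def aggregate(scores):
    """Weighted sum of one option's scores across subagents."""
    return sum(weights[k] * v for k, v in scores.items())

best = max(subagent_scores, key=lambda o: aggregate(subagent_scores[o]))
print(best)  # option_A: 0.61 beats option_B's 0.52
```

The appeal over "loudest wins" is that the weights are negotiated once, calmly, rather than renegotiated in the heat of each decision.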

Comment author: kilobug 14 July 2013 08:40:31AM 3 points [-]

With the recent update to HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. It seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to make all the fanfictions facets of a single coherent universe, which of course doesn't work well...

Am I the only one with this kind of problem when reading several fanfictions set in the same base universe? It's the first time I've tried to do it, and I didn't expect to be so confused. Do you have any advice for avoiding the confusion, like "wait at least one week (or a month?) before jumping to a different fanfiction"?

Comment author: David_Gerard 14 July 2013 10:36:36PM 2 points [-]

Write up your understanding of the melange, obviously.

Comment author: Fhyve 13 July 2013 07:57:44PM 3 points [-]

In transparent-box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one-box even if you see that there is nothing in box A?
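
To make the question concrete, here is the formulation I have in mind (one common version, which may not match everyone's): a perfect predictor fills the big box iff it predicts you would one-box even upon seeing it empty. A small Python sketch of the payoffs by policy:

```python
# Transparent Newcomb: enumerate policies against a perfect predictor that
# fills the big box iff the agent one-boxes even when the big box is empty.
policies = {
    "one-box always":           lambda big_full: "one-box",
    "two-box always":           lambda big_full: "two-box",
    "one-box only if box full": lambda big_full: "one-box" if big_full else "two-box",
}

for name, policy in policies.items():
    big_full = policy(False) == "one-box"  # predictor simulates the empty-box case
    big = 1_000_000 if big_full else 0
    payout = big if policy(big_full) == "one-box" else big + 1_000
    print(f"{name}: ${payout}")
# one-box always: $1000000; the other two policies each get only $1000.
```

If that formulation is right, it seems the answer is yes, since only the policy that one-boxes even upon seeing the empty box ever sees the $1M; am I missing something?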

Comment author: Thomas 13 July 2013 06:12:24AM 3 points [-]

What is more precious - the tigers of India, or the lives of all the people eaten every year by the tigers of India?

Comment author: pragmatist 13 July 2013 09:54:06AM *  17 points [-]

A bit of quick Googling suggests that there are around 1500 tigers in India, and about 150 human deaths by tiger attack every year (that's the estimate for the Sundarbans region alone, but my impression is that tiger attack deaths outside the Sundarbans are negligible in comparison). Given those numbers, I would say that if the only way to prevent those deaths was to eliminate the tiger population and there wouldn't be any dire ecological consequences to the extinction, then I would support the elimination of the tiger population. But in actual fact, I am sure there are a number of ways to prevent most of those deaths without driving tigers to extinction, so the comparison of their relative values is a little bit pointless.

Comment author: [deleted] 13 July 2013 06:04:00PM 3 points [-]

Ways as easy as sending a bunch of guys with rifles into the jungle?

Comment author: DanielLC 13 July 2013 08:51:54PM 2 points [-]

You could legalize eating tiger. This would prevent tiger extinction the same way it prevented cow extinction; it would send some guys with rifles into the jungle whom you don't even have to pay; and if that's not enough, you can still send guys with rifles to finish off the wild population, and the tigers will still be less likely to go extinct than if you do nothing.

Comment author: Adele_L 14 July 2013 03:43:27AM 10 points [-]

This will prevent tiger extinction in the same way it prevented cow extinction,

There are lots of reasons why farming cows is significantly easier than farming tigers.

Comment author: DanielLC 14 July 2013 04:52:59AM 2 points [-]

Tiger meat would be much more expensive than beef, but there would still be enough of a market for it to keep tigers from going extinct.

Comment author: OphilaDros 14 July 2013 04:31:39PM *  2 points [-]

Not all animals can be domesticated for meat production. Jared Diamond discusses the question in "Guns, Germs and Steel". He calls it the Anna Karenina principle, and some of the factors influencing this are:

  • Growth rate of the species
  • Breeding habits - do they tend to breed well in closed spaces
  • Nasty disposition
  • Social structure

Comment author: gwern 14 July 2013 04:44:41PM 2 points [-]

All of those just increase the cost; certainly they can make things infeasible for hunter-gatherers with per capita incomes of maybe $300 a year, generously. But they are of little interest to people with per capita incomes closer to $30,000 who are willing to pay for tiger meat.

Comment author: Atelos 14 July 2013 06:26:36PM 5 points [-]

Sharks are legal to eat and this is a major factor in their current risk of extinction.

Comment author: Randy_M 15 July 2013 04:24:12PM 2 points [-]

Isn't extinction risk the goal here? (Not extinction per se, but population reduction down to the level it is no longer a threat. At least in this hypothetical.)

Comment author: pragmatist 13 July 2013 06:12:37PM *  2 points [-]

The effort involved is not the only cost. Tigers are sentient beings capable of suffering. Their lives have value. Plus there is value associated with the existence of the species. The extinction of the Bengal tiger in the wild would be a tragedy, and not just because of all the trouble those guys with guns would have to go to.

Comment author: DanielLC 13 July 2013 08:56:09PM 4 points [-]

While I would agree that their lives have value, it's not clear that it's positive value. Life in the wild is not like life in civilization. It sucks.

Also, the value of the lives they influence will most likely be more important than their own lives. They eat other animals on a regular basis.

Life in the wild being what it is as opposed to what it could be is a tragedy. Life in the wild existing at all may well be a tragedy. Perhaps what we really ought to do is just burn down the wild, and make that way of life end.

Comment author: NancyLebovitz 13 July 2013 06:30:53PM 2 points [-]

Also, tigers are presumably having some ecological effect, so there might be costs to a tigerless region.

Comment author: linkhyrule5 21 July 2013 02:09:24AM 2 points [-]

Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.

Comment author: Rukifellth 13 July 2013 08:09:48PM 2 points [-]

Just now rushes onto Less Wrong to ask about taking advantage of 4chan's current offer of customized ad space to generate donations for MIRI

Sees thread title

Perfect.

So, would it be a good idea? The sheer volume of 4chan's traffic makes it a decent pool for donations, and given the attitude of its demographic, it might be possible to pitch the concept in an appealing way.

Comment author: Tenoke 13 July 2013 08:26:59PM 16 points [-]

Linking to MIRI's donation page might be useful, but please, please don't link to LessWrong on 4chan - it could have some horrible consequences.