# Open Thread, September 30 - October 6, 2013

4 30 September 2013 05:18AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comment author: 03 October 2013 10:18:06PM 10 points [-]

I need some advice. I recently moved to a city and I don't know how to stop myself from giving money to strangers! I consider this charity to be questionable and, at the very least, inefficient. But when someone gets my attention, asks me specifically for a certain amount of money, and tells me about themselves, I won't refuse. I don't even feel annoyed that it happened, but I do want it to not happen again. What can I do?

The obvious precommitment to make is to never carry cash. I am strongly considering this and could probably do it, but it is nice to have at least enough for a bus trip, a quick lunch, or some emergency. I have tried keeping a running tally of the number of people I've refused, planning that when it got to, say, 20, I would donate something to a known legitimate charity. While doing so makes me feel better about passing beggars by, it doesn't help once someone gets me one-on-one. So I've never gotten to that tally without resetting it first by succumbing to someone. Is there some way to not look like an easy mark? Are there any good standard pieces of advice and resources for this?

However, I always find these exchanges really fascinating from the point of view of the Dark Arts used. The most recent time this happened, I was stopped and asked for the time, an answer he promptly ignored. Then he told me that he had seen me around before - entirely plausible, since I walk by there most days, but also likely true of a randomly selected person, so it could just be a shot in the dark. He shook my hand multiple times. He gave me his name and told me to call him by his nickname. He told me about being a veteran and talked to me about any veterans I knew. He tried to guess my current job and missed in a way that implied I was younger than I am - probably his only significant mistake, as that could have annoyed some people. He then acted impressed when I corrected him. He asked where I was from and then said he had an acquaintance from nearby. Then, of course, he asked for train ride money, which started at 8 dollars and ended up being 23.

I could practically check off the chapters of Cialdini's Influence one-by-one on this list and noticed at least two of these tactics while they were being used. Unfortunately, Cialdini's book has laughable excuses for sections on "Defense Against" said dark arts, rarely saying anything more than "just use the fact that they're using these tricks against them since now you know better!" So, here I am, knowing the nature of my foe and yet still being utterly dragged in by it.

Comment author: 04 October 2013 04:29:52PM 8 points [-]

The basic answer is not to talk to these people.

Do not answer questions about what time it is, do not enter any conversations at all. At most say "sorry" and walk on.

Just. Do. Not. Talk. To. Them.

Comment author: 04 October 2013 05:10:33AM 8 points [-]

Assume that they're scamming. It will often be true, and even when they're honest, giving money to panhandlers is an inefficient use of charity. Remind yourself that you already have a budget for charity and that you're sending it to GiveWell or MIRI or whatever.

Comment author: 04 October 2013 11:44:38AM *  6 points [-]

Is there some way to not look like an easy mark?

Keep your head up and your back straight, look towards the horizon, walk with a certain pace.

Avoid places with a high density of scammers, if you can. (For example, in my city that would be around the train station.)

The most recent time this happened, I was stopped and asked for the time, an answer he promptly ignored.

Did you notice immediately that the person was lying to you (pretending to care about the time, but actually not caring), and that you therefore had no social obligation to interact with them?

I keep an attitude that if someone is manipulating me like this, I owe them nothing socially... I give myself permission to just walk away without any explanation or interaction, or to lie to them (even in a very transparent manner: "sorry, I don't have any money"; they did it first, so they have no right to complain). Saying "sorry, I am in a hurry" and walking away without looking at them should work in most cases (and is even socially acceptable, if you care about that aspect).

More meta: I have a problem giving you good advice, because I have no idea why you behave this way. I don't know what precisely happens in your head during the interaction, so I can't be specific about which parts you need to change (because it starts in the head). It is an interaction: they are playing their part of the script, and you are playing yours. The key is to stop playing your part (because obviously, they have no motivation to stop playing theirs).

Is it difficult for you to realize that you are being scammed? Or do you suspect it, but don't feel certain about your judgement? Or are you pretty sure about your judgement, but you don't know how to stop the interaction without... feeling bad about yourself? It seems to me the last one is most likely. If that is true, please explain the details. Do you believe you should feel bad about yourself for not giving money to strangers (because you imagine some person would consider you bad)? Would you also feel guilty about stopping a thief from taking your wallet? Where is the difference? Can you write down the words that go through your head and keep you cooperating in the scenario you described?

Comment author: 04 October 2013 09:19:31PM *  1 point [-]

It just, you know, feels like yes, they could use this money more than I could. I know that there's a good chance they're lying, but they're probably just lying to spice it up and do need the money for one reason or another. It's not an entirely rational choice, I admit, but it always seems like a rather minor favor that really won't hurt me much this time. It's just that it happens far too frequently for my own comfort, which is why I consider it a problem. I don't even feel bad about having given them the money, even in retrospect. I just know that I can't give money to everyone who asks for it, and that by conceding I'm encouraging even more of this exploitation. (But neither do I think I get a significant 'warm fuzzies' feeling for giving, as it seems to be cancelled out by the "what am I doing?" in the back of my head.)

I guess that EY's tale here about holding doors open and letting people know they left the car trunk open is why I keep doing this. If I modify myself to completely ignore these little things... what will I lose? Can I really just never give anyone the time? What about all those times when they really did just need to know the time, or wanted to charge their phone, or whatever? Those happen, probably more often than the times when they're just tricks by scammers. That's why I was looking at solutions like not carrying cash - a way that lets me not ignore people but still makes it impossible to fall for the scam.

For the record, this was the first time I've given out more than a dollar or two. My original post has probably made it seem like I do this more often and more egregiously than I do, partly because I was carried away by that particular exchange and partly because prior to moving this never happened so anything seems like a lot.

Edit: In fact, now I can think of at least one situation in which I had to ask strangers for some quarters in order to be able to pay to park and catch my train. The only difference in that situation was that I obviously had money on me and just didn't expect there to be automated pay booths taking only quarters. (And in retrospect I had some quarters in the car I could have gotten.) But regardless, there is value in having people be generally kind to strangers. And I don't think I looked particularly less like a scammer than the last guy who got me, nor was I in a less scam-likely place (possibly the opposite).

Comment author: 05 October 2013 02:00:19PM 10 points [-]

An idea: next time, try to estimate how much money such a person makes. As a rough estimate, divide the money you gave them by the length of your interaction. (To get a more precise estimate, you would have to follow them and observe how much other people give them, but that could be pretty dangerous for you.)

Years ago I made a similar estimate for a beggar on a street (people dropped money to his cap, so it was easy to stand nearby, watch for a few minutes and calculate), and the conclusion was that his income was above average for my country.
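The back-of-the-envelope estimate above can be sketched in a few lines of Python. The figures here are hypothetical, borrowed from the $23 train-fare story earlier in the thread; this extrapolates a single observed interaction, not a real measurement:

```python
def hourly_rate(dollars_given, interaction_minutes):
    """Money received divided by time spent, scaled to an hourly rate."""
    return dollars_given / interaction_minutes * 60

# e.g. $23 obtained over a roughly 10-minute conversation:
print(hourly_rate(23, 10))  # 138.0 dollars/hour
```

Even if only a fraction of approaches succeed, the per-success rate gives a sense of why the tactic persists.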

By the way, these people destroy a lot of social capital by their actions. They make life more difficult for people who genuinely want to ask for the time, or how to get somewhere, or similar things. They condition people against having small talk with people they don't know. -- So if you value people being generally kind to strangers, remember that these scammers make their money by destroying that value.

Comment author: 07 October 2013 04:28:11PM 6 points [-]

feels like yes they could use this money more than I could

It feels like it, but it's wrong. And you are actively making the situation worse; better to melt your cash and burn your bills. These people could use food, shelter, and some skills to earn an honest living. There are charitable organizations providing these services; find the best ones and donate to them. Next time you give a dollar to a beggar, think of how your selfish feel-good act makes the world a worse place to live.

Comment author: 16 October 2013 08:38:34PM 0 points [-]

Thanks, this is probably the tack I need to take.

Comment author: 07 October 2013 04:13:06PM 5 points [-]

It just, you know, feels like yes they could use this money more than I could.

Don't visit the third world. Ever.

Comment author: 07 October 2013 08:10:43PM 4 points [-]

On the contrary, a visit to an actually poor place might give him the context to reevaluate the first world poor.

Comment author: 16 October 2013 08:39:52PM 0 points [-]

Too late, multiple times over, sorry. Though I haven't been since I was old enough to really have any money on me.

Comment author: 07 October 2013 12:38:34PM *  2 points [-]

I guess that EY's tale here about holding open doors and letting people know they left the car trunk open is why I keep doing this. If I modify myself to completely ignore these little things... what will I lose? Can I really just not ever give anyone the time? What about all those times when they really did just need to know the time, or wanted to charge their phone, or whatever?

People don't leave their car trunks open for deception unless they're kidnappers. If you can't tell if people are lying or not, please just ignore them. Otherwise you're encouraging the dishonest ones to harass other people too.

Comment author: 05 October 2013 04:39:26AM *  1 point [-]

I'll be willing to help out if I hear anything other than a request for money, or if I see an obvious problem I can help with (like a cyclist with a flat tire when I have a patch kit in my pocket). I just categorically don't allow "kindness to strangers" to translate to "giving money to strangers," and as soon as money comes up I say I'm broke (which is not true, but not that far from it either), figuratively close my ears, and walk away.

I suppose it helps that most panhandlers in my area have signs. Most non-sign-carriers who approach me want directions or some such. Maybe I just look like a bad scam target though.

Comment author: 04 October 2013 03:49:49PM *  1 point [-]

Keep your head up and your back straight, look towards the horizon, walk with a certain pace.

Yes, that's what I usually do. (Sometimes I give them trivial amounts of money like €0.50 instead.)

I keep an attitude that if someone is manipulating me like this, I owe them nothing socially...

Yep... People trying to dishonestly manipulate me trigger a heckuva memetic immune response, akin to refusing an offer in the Ultimatum game.

Comment author: 03 October 2013 10:58:40PM 6 points [-]

I hate feeling that I have to walk by a person panhandling and not respond at all - it makes me feel like a bad person. I had been told not to make eye contact unless I was going to give money, but I've recently changed my strategy and started smiling before giving my standard "no, sorry" to the request for cash. Recently I flashed a smile as I strode by a man on the sidewalk. He smiled back and said, "God bless you for that smile." It felt like we connected, which is what people are generally going for when they give money (unless it's just to avoid feeling guilty).

Yvain's take on all this: http://squid314.livejournal.com/340483.html

Comment author: 04 October 2013 05:10:58AM 1 point [-]

This is usually what I do.

Comment author: 08 October 2013 08:32:16PM *  5 points [-]

I was cured after I naively gave money to a street beggar, and was pursued for more money, to the point that I felt threatened.

My usual procedure in the US is to actively pretend that beggars, and those who look like them, don't exist. Phil Collins wouldn't like it, but after that occasion and one or two like it, I feel scared. I truly admire a certain friend who can chit-chat on a friendly basis with a street person.

As I got older and more confident, I developed other practices:

1. Someone asked for money for food, so I handed her a bag of fancy chocolate almonds I had in my hand. She looked like that wasn't what she was expecting.
2. In a friendly way, I told a collector for some ineffective charity that, in honor of his request, I would give 100 NIS more than usual to my regular charity, but not his. Chutzpah.
3. When a collector for some ineffective charity comes up to me, I solicit him, in a friendly way, to give money to my favorite charity before he has a chance to ask. Once I got 1 NIS this way, so I felt obliged to give him a (different) shekel. I then had fun ceremonially taking that 1 NIS coin to the treasurer, along with my usual donation.
4. Once I asked a phone collector for some ineffective charity, in a friendly way, to decide on my behalf: Should I give 100 NIS to a certain truly worthy cause, or deny it to that worthy cause and give it to her charity. She got quite tangled up trying to answer.

In short, I became a little obnoxious. The fact that I regularly give a good amount to charity is probably what gave me the psychological leeway to do this.

(And I wouldn't do any of that to a more-or-less worthy charity, or if a friend asked.)

Comment author: 09 October 2013 05:00:05AM *  2 points [-]

I truly admire a certain friend who can chit-chat on a friendly basis with a street person.

A few months ago I was with a co-worker in the centre of a foreign capital, waiting for some other people, and some guy approached us offering to sell us some marijuana. I told him “I quit smoking five years ago” and we kept talking about that for about half a minute before he left.

My co-worker was very annoyed that I didn't just ignore the guy.

When a collector for some ineffective charity comes up to me, I solicit him, in a friendly way, to give money to my favorite charity before he has a chance to ask. Once I got 1 NIS this way, so I felt obliged to give him a (different) shekel.

That is freakin' awesome.

Comment author: 08 October 2013 06:08:25AM 5 points [-]

However, I always find these exchanges really fascinating from the point of view of the Dark Arts used. The most recent time this happened, I was stopped and asked for the time, an answer he promptly ignored. Then he told me that he had seen me around before - entirely plausible, since I walk by there most days, but also likely true of a randomly selected person, so it could just be a shot in the dark. He shook my hand multiple times. He gave me his name and told me to call him by his nickname. He told me about being a veteran and talked to me about any veterans I knew. He tried to guess my current job and missed in a way that implied I was younger than I am - probably his only significant mistake, as that could have annoyed some people. He then acted impressed when I corrected him. He asked where I was from and then said he had an acquaintance from nearby. Then, of course, he asked for train ride money, which started at 8 dollars and ended up being 23.

I could practically check off the chapters of Cialdini's Influence one-by-one on this list and noticed at least two of these tactics while they were being used. Unfortunately, Cialdini's book has laughable excuses for sections on "Defense Against" said dark arts, rarely saying anything more than "just use the fact that they're using these tricks against them since now you know better!" So, here I am, knowing the nature of my foe and yet still being utterly dragged in by it.

And yet people here are still surprised that gatekeepers could lose at the AI box game.

Comment author: 07 October 2013 11:27:05AM *  5 points [-]

The obvious precommitment to make is to never carry cash.

I'm terribly sorry for my strong reaction, but this whole post reeks of abuser-attracting vulnerability so much that it's making me angry. It's not difficult to imagine that beggars can sense you from a mile away.

What the hell? People are robbing you of your time in the street and lying to your face to get your money too, and you are considering inconveniencing your own life to accommodate them? Just learn to tell a white lie like the rest of humanity; it doesn't even matter if you do it badly in a case like this. All you need is an attitude change, not a bag of tricks.

I'm going to appeal to your altruism. You're making lying for money profitable. When you give away your hard earned money it doesn't hurt just you or the potential charities.

I'm not sure what makes me so angry about this... it just seems that submissiveness is a relatively common failure mode for otherwise smart people.

ETA: in Europe, begging is highly organized so you would likely be financing organized crime.

ETA2:

Then he told me that he had seen me around before - this is entirely plausible since I walk by there most days ... Is there some way to not look like an easy mark?

Yes, stop giving money to these people. Of course they recognize you, it's their job.

Comment author: 04 October 2013 02:45:25AM 5 points [-]

Remind yourself that the panhandler is defecting (in the Prisoner's Dilemma sense) by putting you in that situation. Remind yourself that they are actively and premeditatedly manipulating you through a set of known exploitable psychological levers. There is a strain of Dark Arts to this advice, because you are choosing to preemptively deflect your empathy with a feeling of defensiveness. It is nonetheless true that the panhandler is being rude, definitionally, and that you are being tricked.

Comment author: 09 October 2013 06:19:06PM 4 points [-]

Let me suggest a world view which is much less negative than the other replies: I view panhandlers as vendors of warm fuzzies and therefore treat them as I would any other street vendor whose product I am most likely not interested in. In particular, I have no reason to be hostile to them, or to be disrespectful of their trade.

If they engage me politely, I smile and say "No thanks." I think the second word there is helpful to my mindset and also makes their day a little better. If they become hostile or unpleasant, I feel no guilt about ignoring them; they have given me good reason to suspect their fuzzies are of low quality. If they have a particularly amusing approach, and I feel like treating myself, I give them money. (EG The woman who offered to bet me a dollar that she could "knock down this wall", gesturing at a nearby brick building. It was obviously a setup, but it was worth paying a dollar to learn the punchline, and she delivered it well.)

I developed this mindset while living in Berkeley, CA near Telegraph and walking everywhere, which I suspect means that I was encountering panhandlers at a rate about as high as anyone in the first world.

I also, of course, contribute significant portions of money to charities which can do a lot more good with it. If you are looking for a charity which specifically aids people in situations similar to the ones you are refusing, you may want to consider the HOPE program http://www.thehopeprogram.org/ . In 2007, GiveWell said about them "For donors looking to help extremely disadvantaged adults obtain relatively low-paying jobs, we recommend HOPE." http://www.givewell.org/united-states/charities/HOPE-Program . There is an argument (and GiveWell makes it) that helping extremely disadvantaged adults in the first world obtain relatively low-paying jobs is so much harder than helping poor people in the third world that it should not be attempted. Without taking a side on that, if you feel guilty that you are not helping extremely disadvantaged adults in the first world, contributing to the HOPE program would do more to actually address this issue than giving to panhandlers.

Comment author: 05 October 2013 09:52:33PM 3 points [-]

"Sorry man, I don't have cash."

If you feel bad about lying (given that it's not a good idea to give money to panhandlers, you shouldn't), take a note of how much money you would have given them and donate double that to your nearest food bank/shelter. There, now you actually helped them.

Comment author: 04 October 2013 11:36:34AM 3 points [-]

Instead of thinking about how to stop giving them money, think about how to stop giving them the time to tell you a long story.

Comment author: 04 October 2013 04:54:51AM 3 points [-]

Don't turn your head in their direction. Don't change your pace. Don't make eye contact. It gets easier.

Does your city have transit passes or RFID stored-value cards? It may be possible for you to be prepared to take the bus without carrying cash. As for lunch, is it uncommon for restaurants in your area to accept credit cards?

Comment author: 03 October 2013 11:18:48PM 4 points [-]

It seems like your problem might be in having too much empathy for strangers, which (at least when dealing with panhandlers) shouldn't theoretically be too hard to deal with. If you cultivate a mindset of viewing beggars as parasites and degenerates, you ought to be able to resist any impulses of sympathy which come up, especially since you already know that you're not helping them and that many are in fact con artists. It shouldn't really affect your other charitable giving much either, since my understanding is that EA mostly focuses on giving medical aid to foreigners rather than dealing with poverty in areas with high costs of living like American cities.

On the other hand, it's very possible empathy isn't your real problem here. The feeling of gratitude (even faux-gratitude) and generosity from handing a few bucks to a hobo is a big rush; I certainly get more utility out of my spare change that way than I ever would buying junk food with it. If that's your issue, then it might be smart to do what you're doing now and poison the good feeling by re-framing it as something shameful.

Comment author: 05 October 2013 11:10:16AM 1 point [-]

Others have recommended keeping your eyes away from them; I'll add the possibility of wearing headphones and sunglasses to give you plausible deniability, which will probably make you feel better psychologically even though it should have no impact.

Another idea: you could keep a few quarters in your pocket and give them one as quickly as you can; then at least you are limiting the damage to a trivial amount. I have never tried this idea.

Comment author: 02 October 2013 11:52:57PM *  9 points [-]

I've heard several stories in the last few months of former theists becoming atheists after reading The God Delusion or a similar Four-Horsemen tract. This conflicts with my prior model of those books as mostly paper applause lights that couldn't possibly change anyone's mind.

Insofar as atheism seems like super-low-hanging fruit on the tree of increased sanity, having an accurate model for what gets people to take a bite might be useful.

Has anyone done any research on what makes former believers drop religion? More generally, any common triggers that lead people to try to get more sane?

Edit: Found a book: Deconversion: Qualitative and Quantitative Results from Cross-Cultural Research in Germany and the United States of America. It's recent (2011) and seems to be the best research on the subject available right now. Does anyone have access to a copy?

Comment author: 03 October 2013 02:39:00AM 8 points [-]

I can tell you what triggered me becoming an atheist.

I was reading a lot of Isaac Asimov books, including the non-fiction ones. I gained respect for him. After learning he was an atheist, it started being a possibility I considered. From there, I was able to figure out which possibility was right on my own.

This seems to be a trend. I never seriously worried about animals until joining felicifia.org where a lot of people do. I never seriously considered that wild animals' lives aren't worth living until I found out some of the people on there do. I think it's a lot harder to seriously consider an idea if nobody you respect holds it. Just knowing that a good portion of the population is atheist isn't enough. Once you know one person, it doesn't matter how many people hold the opposite opinion. You are now capable of considering it.

I didn't think unfriendly AI was a serious risk until I came here, but that might have been more about the arguments. I figured that an AI could just be programmed to do what you tell it to and nothing more (and from there can be given Asimov-style laws). It wasn't until I learned more about the nature of intelligence that I realized that that is not likely going to be easy. Intelligence is inherently goal-based, and it will maximize whatever utility function you give it.

Comment author: 03 October 2013 08:14:14AM *  13 points [-]

Theism isn't just about god. It also has social, and therefore strong emotional, consequences. If I stop being a theist, does it mean I will lose my friends, my family will become colder to me, and I will lose access to one of the world's widest social networks?

In such a case, the new required information isn't a disproved miracle or an essay on Occam's razor; that has zero impact on the social consequences. It's more important to get evidence that there are a lot of atheists, that they can be happy, and that some of them are considered very cool even outside atheist circles. (And after having this evidence, somehow, the essays about Occam's razor become more convincing.)

Or let's look at it from the opposite side: even the most stupid demonstrations of faith send the message that it is socially accepted to be religious; that after joining a religion you will never be alone. Religion is so widespread not because the priests are extra cool or extra intelligent. It's because they are extra visible and extra audacious: they have no problem declaring that everyone who disagrees with them is stupid and evil and will go to hell (or some more polite version of this, which still gets the message across) -- and our brains perceive that as a demonstration of social power, and it triggers our instinct to join the winning side.

Complaining that Dawkins is too audacious, too impolite, and too certain is complaining that he is using the winning strategy. Certainly he would be more palatable to his opponents if he chose a losing strategy instead, like most atheists are socially conditioned to do. He should be extra humble and mumble in a quiet voice "we can never know for sure..." until some cocksure priest comes around and says "shut up, you idiot, I am sure, my followers are sure, and you will burn in hell" and all the believers clap their hands at this demonstration of power. Well, Dawkins is smart enough to refuse to play this game, probably because he understands the rules.

(There is a different topic about whether this approach is optimal for epistemic rationality. Probably it isn't. But it simply means that in the middle of a battle it is not the best moment to read your textbooks; you do that in the safety of your home. Religious people are motivated to be wrong -- before that motivation is gone, they are likely to be harmed by the atheists' expressions of humility.)

Comment author: 03 October 2013 06:18:32PM 2 points [-]

That looks like more of a reply to the parent comment than to mine.

Comment author: 04 October 2013 11:21:48AM *  1 point [-]

Under the usual convention that "reply to" means "disagree with", it certainly does. :D

Although the "some of them are considered very cool even outside of atheist circles" part was inspired by you mentioning Asimov. (Only the remaining 99% aren't.)

Comment author: 03 October 2013 11:22:17PM 1 point [-]

My original question was basically asking for evidence for your hypothesis (religion is mostly a social motivated-reasoning thing, and the best way to fix it is to demonstrate (over)confidence and social acceptance) or alternative hypothesis. It sounds plausible, but I don't think anyone has actually tried to check with any degree of rigor.

Comment author: 04 October 2013 08:48:03PM *  3 points [-]

Found a book: Deconversion: Qualitative and Quantitative Results from Cross-Cultural Research in Germany and the United States of America. It's recent (2011) and seems to be the best research on the subject available right now. Does anyone have access to a copy?

Comment author: 07 October 2013 03:15:12AM 1 point [-]

Well, this is anecdata, but when I was an atheist, I found God Delusion frustrating and not worth handing to my Christian friends, since it attacked lowest common denominator Christianity a lot, and my friends tended to be nerdy Thomists. Plus, I find a lot of Four Horseman stuff frustrating because they rarely construct something of their own to defend (though I understand the sense of urgency to knock people out of their current worldview -- if you find it abhorrent enough -- and let them land where they may).

Comment author: 07 October 2013 10:53:30AM 3 points [-]

You say "when I was an atheist". Running into ex-atheists is a rare thing, especially here - may I ask what changed your mind?

Comment author: 03 October 2013 07:17:31PM 1 point [-]

Has anyone done any research on what makes former believers drop religion?

I recently came across this, from the theist perspective (i.e. they tracked down people who had left and interviewed them, with the hope to prevent that in the future), and I remember it hinged mostly on social factors. (The enthusiastic youth pastor quits, and is replaced by someone that doesn't know the Bible as well, etc.)

I'm sure there are some people who deconverted because of reading those books- but it's likely that they also would have deconverted if they moved from Town A to Town B, for example, so that doesn't seem like a terribly effective way to reach everyone.

Comment author: 10 October 2013 12:58:49PM 0 points [-]

I think another thing to remember here is sampling bias. The actual conversion or deconversion is probably mostly the end point of a lengthy intellectual process. People far along that process probably aren't very representative of people not going through it, and it would be much more interesting to know what gets the process started.

To add some more anecdata, my reaction to that style of argumentation was almost diametrically opposed. I suspect this is fairly common on both sides of the divide, but not being convinced by some specific argument just isn't such a catchy story, so you would hear it less.

Comment author: 04 October 2013 10:59:18PM *  8 points [-]

I'm in the process of translating some of the Sequences in French. I have a quick question.

From The Simple Truth:

Mark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”

This is clearly a joke at the expense of some existing philosophical position called pan[something] but I can't find the full name, which may be necessary to make the joke understandable in French. Can anyone help?

Comment author: 04 October 2013 11:09:10PM *  4 points [-]

I initially read it as an allusion to Panpsychism:

the view that mind or soul... is a universal feature of all things

or maybe to a generic pan-x-ism. But, in retrospect, the position that "all pebbles are magical to start with" should be called "panmagism" or something. Panpebblism means that there is a pebble in everything (or everyone). So I am no longer sure what Eliezer meant.

Comment author: 05 October 2013 09:30:26PM 2 points [-]

I think he's just using the prefix "pan-" to mean all, though perhaps pantheism is relevant.

Comment author: 06 October 2013 09:17:47PM 0 points [-]

I'll just keep the prefix/suffix as is and hope for the best then ("pancailloutisme").

Comment author: 01 October 2013 10:27:34PM 7 points [-]

I got an offer of an in-person interview from a tech company on the left coast. They want to know my current salary and expected salary. Position is as a software engineer. Any ideas on the reasonable range? I checked Glassdoor and the numbers for the company in question seem to be 100k and a bit up. I suppose, actually, that this tells me what I need to know, but honestly it feels awfully audacious to ask for twice what I'm making at the moment. On the other hand I don't want to anchor a discussion that may seriously affect my life for the next few years at too small a number. So, I'm seeking validation more than information. Always audacity?

Comment author: 02 October 2013 10:34:25AM *  15 points [-]

Always ask as much as you can. Otherwise you are just donating the money to your boss. If you hate having too much money, consider donating to MIRI or CFAR or GiveWell instead. Or just send it to me. (Possible exception is if you work for a charity, in which case asking less than you could is a kind of donation.)

The five minutes of negotiating your salary are likely to have more impact on your future income than the following years of hard work. Imagine yourself a few years later, trying to get a 10% increase and hearing a lot of bullshit about how the economic situation is difficult (hint: it is always difficult), so you should all just work harder and maybe later, but no promises.

it feels awfully audacious to ask for twice what I'm making at the moment

I know. Been there, twice. (Felt like an idiot after realising that I worked for a quarter of my market price at the first company. Okay, that's exaggerated, because my market price increased with work experience. But it was probably half the market price.)

The first time, I was completely inexperienced at negotiating. It went like: "So tell me how much you want." "Uhm, you tell me how much you give people in my position." "No, you tell me how much you want." Etc. After a few rounds, the frustrated interviewer asked me: "Okay, so how much did you get in your previous job?" And at that moment, as if a light flashed in my head, I realized the number I said now would be the number I would get, and it would likely stay the same for years, so I... lied. I told them double what I had. And the interviewer was like: "So would it be okay to give you the same salary for the beginning, and then after a few months we will increase it?" I tried hard to remain calm, and we signed the papers. (By the way, they lied about the increase, too.) I felt like that was the best day of my life.

The next time I was already more audacious and I just asked for twice what I had at that previous company. The interviewer jumped a bit in their chair. I was like: "Is this a problem for your company?" "Well, it's at the top of the range, but if you really have the skills... By the way, is this what you made at your previous job?" I said, calmly: "No, and that was one of the reasons I left." (Of course the other reason was a desire to work with exactly the technology the new company was using.) Then they gave me a test, I succeeded, and we signed the papers. (I expected the test, and crammed the whole textbook the previous day. It was just like studying for a university exam.)

It always feels good afterwards. And the more you practice it, the easier it gets. You can practice it at home in front of a mirror, if necessary. Repeat it a hundred times, and it will feel natural.

(By the way: human irrationality, halo effect, et cetera... the more money they pay you, the more they respect you. You may feel that by asking less you are doing them a favor, and they will repay the favor somehow. Not gonna happen! It's more like: if he is so cheap, he is probably not too smart.)

Comment author: 05 October 2013 10:58:58PM 2 points [-]

Always ask as much as you can.

Asking for more than all the money is trivial. Don't even get me started on how much someone who is good at math can ask for. This is obviously not a good strategy. There is an optimum amount to ask for. How do you find it?

Comment author: 05 October 2013 11:50:25PM 3 points [-]

There is an optimum amount to ask for. How do you find it?

By looking at the distribution of that industry's or company's wages for someone of your qualifications and asking for something on the high end. They will then either accept or try to bargain you down. Either way, you will most likely end up with more than what you would have gotten otherwise.

In other words, exactly what Viliam_Bur said to begin with.

Comment author: 02 October 2013 04:17:44PM 2 points [-]

Always ask as much as you can.

Well yes, but how much can I ask? :) At any rate I went for 125k, which seems to be in the upper third or so of what Glassdoor reports. Thanks for the encouragement.

Comment author: 02 October 2013 05:01:53PM *  3 points [-]

Well yes, but how much can I ask?

When the first two companies say they would hire you if you asked a bit less, and you refuse, and the third company gives you as much as you asked, then you know you are working for a market salary. Until then you are probably too cheap.

Sorry, I am not from USA, so I am unable to give specific numbers. I guess you should ask for 140k now, and be willing to get down to 125k (prepare some excuse, such as "normally I would insist on 140k, but since this is the work I always wanted to have, and [insert all the benefits your interviewer mentioned], I'd say we have a deal").

Comment author: 02 October 2013 12:51:44AM *  10 points [-]

Don't deliberately screw yourself over. Don't accept less than the average for your position, and either refuse point-blank to give them negotiating leverage by telling them your current salary, or lie.

For better, longer advice see [Salary Negotiation for Software Engineers](http://www.kalzumeus.com/2012/01/23/salary-negotiation).

Comment author: 02 October 2013 04:15:57PM 7 points [-]

I'm afraid I couldn't quite bring myself to follow all the advice in your link, but at any rate I increased my number to 125k. So, it helped a bit. :)

Comment author: 02 October 2013 04:28:16PM *  7 points [-]

Look up what Ramit Sethi has to say about salary negotiation. He really outlines how things look from the other side and how asking for your 100k is not nearly as audacious as it seems.

Comment author: 02 October 2013 07:27:29AM *  5 points [-]

You may feel better about being audacious if you do an explicit cost-of-living calculation given the rent and price differential. If you see that maintaining the same standard of living is going to cost you 80k, then 100k stops seeming like a huge number.
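A back-of-the-envelope version of that calculation might look like this (the rent figures and the flat cost-of-living ratio here are purely illustrative assumptions of mine, not anyone's real numbers):

```python
def equivalent_salary(current_salary, current_monthly_rent,
                      new_monthly_rent, col_ratio):
    """Salary in the new city that roughly preserves your current
    standard of living: rent is handled separately, and all other
    spending is scaled by a cost-of-living ratio."""
    non_rent_spending = current_salary - 12 * current_monthly_rent
    return 12 * new_monthly_rent + non_rent_spending * col_ratio

# E.g. 60k now with $1000/month rent, moving somewhere rent is
# $2500/month and everything else costs 25% more:
print(equivalent_salary(60_000, 1_000, 2_500, 1.25))  # 90000.0
```

With numbers like these, merely maintaining your current standard of living already costs 90k, so a six-figure ask stops looking audacious.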

It's also true that there is only epsilon chance of screwing yourself. Nobody is going to reject you because the expected salary number you suggested was too high; it makes no sense. You could suggest 150k and the only bad thing that will happen is you might only get offered 120k.

Comment author: 02 October 2013 04:03:35AM 4 points [-]

Always audacity! If you ask for a number that's too high, they are extremely unlikely to withdraw the offer. Anecdotally, a very good friend of mine was just able to negotiate a 50% increase in his starting salary in a similar-sounding situation.

Comment author: 02 October 2013 04:14:31PM 2 points [-]

Ok. I took a deep breath, closed my eyes, and said "125000". Hope it wasn't too low.

Comment author: 08 October 2013 08:40:28PM 0 points [-]

Rolf, you work in an industry where people are becoming millionaires and billionaires overnight. Maybe you won't manage that, but no need to be embarrassed for raking it in.

Note that even though you don't need to reveal your salary in negotiations, your current salary often anchors negotiations at your next job as well as your current one, illogical though that may be. So the deal you make now has long-term implications. Also, in yet another of those biases they talk about here, a high salary may, within limits, make your bosses think you are a better worker who deserves higher status.

Comment author: 30 September 2013 05:26:46AM *  6 points [-]

I would like to eventually create a homeschooling repository. Probably with research that might help people in deciding whether or not to homeschool their children, as well as resources and ideas for teaching rationality (and everything else) to children.

I have noticed that there have been several questions in past open threads about homeschooling and unschooling. One of the first things I plan to do is read through all past Less Wrong discussions on the topic. I haven't really started researching yet, but I wanted to start by asking if anyone had anything that they think would belong in such a repository.

I would also be interested in hearing any personal opinions on the matter.

Comment author: 30 September 2013 06:55:43AM *  6 points [-]

Homeschooling is like growing your own food (or doing any other activity where you don't take advantage of division of labor): if you enjoy it, have time for it and are good at it, it's worth trying. Otherwise it's useless frustration.

Comment author: 30 September 2013 07:14:00AM 8 points [-]

I couldn't agree more about division of labor in general, but with the current state of the public school system, I do not trust them to do a good job of teaching anything.

I do not have the time or patience for it, and probably am not good at it, but fortunately my partner would be the one teaching.

Comment author: 30 September 2013 09:10:31AM 4 points [-]

Given the Bloom two-sigma phenomenon, it would not surprise me if unschooling + 1 hour of tuition per day beat regular school. And if you read Lesswrong, there's a reasonable p() that an hour of a grad student's time isn't that expensive.

Comment author: 30 September 2013 01:46:37PM *  5 points [-]

I googled the "Bloom two sigma phenomenon" and... correct me if I am wrong, but I parsed it as:

"If we keep teaching students each lesson until they understand, and only then move to the next lesson (as opposed to, I guess, moving ahead at predetermined time intervals), they will be at top 2 percent of all students".

What exactly is the lesson here? The weaker form seems to be -- if students don't understand their lessons, it really makes a difference on tests. (I guess this is not a big surprise.) The stronger form seems to be -- in standard education, more than 90% of students don't understand the lessons. Which suggests that the huge majority of the money given to education is wasted. Okay, not wasted completely; "worse than those who really understand" does not necessarily mean "understands nothing". But still... I wonder how much additional money would be needed to give decent education to everyone, and how much society would benefit from that.
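As a side note on the "two sigma" label: scoring two standard deviations above the mean puts a student at roughly the 98th percentile of a normal distribution, which is where the "top 2 percent" reading comes from. A quick check in plain Python:

```python
import math

def normal_cdf(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A student two standard deviations above the mean outperforms
# about 97.7% of a normally distributed class.
print(round(normal_cdf(2), 4))  # 0.9772
```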

Based on my experience as a former teacher, the biggest problem is that many students just don't cooperate and do everything they can to disrupt the lesson. (In homeschooling and private tutoring, you don't have these classmates!) And in many schools teachers are completely helpless about this, because the rules don't allow them to do anything that could really help. Any attempt to make education more efficient would have to deal with the disruptive students, perhaps by removing them from the mainstream. And the remaining ones should learn until they understand, perhaps with some option for the smarter ones to move ahead faster.

Comment author: 30 September 2013 08:19:42AM 4 points [-]

I do not trust them to do a good job of teaching anything.

Good compared to what? Compared to other developed countries, compared to what they could do if they spent their resources more wisely, compared to what you could do homeschooling your kid?

A lot of the criticism of US schools is based on the first two criteria, but the third one should be the one that matters for you - even if they do a crappy job compared to Europe or Canada, they might still do a better job than you on your own, especially if you take into account things like learning to get along with peers.

(That being said, I don't know enough about either your situation or even US schools (I live in France), I'm just wary of the jump from "schools are bad" to "I can do better than schools")

Comment author: 02 October 2013 08:12:29PM 1 point [-]

Are you kidding? Did you go to school? Teaching material to a class of 10 (let alone 20 or 50) K-12 kids, selected only by location and socio-economic class, is a ridiculously overconstrained problem. To give one of the main problems: for each concept you teach, you have to choose how long to explain it and give examples. If you move on, then any kid who didn't really get it will become very lost for the rest of the year (I'm thinking of technical subjects, where you have long dependent chains of concepts). If you keep dropping kids, then everyone gets lost. If you wait until everyone gets it, then you go absurdly slow. My little brother has been "learning" basic arithmetic in his (small, private) school for six years.

Comment author: 02 October 2013 06:28:22PM *  16 points [-]

Silk Road drugs market shut down; alleged operator busted.

Bitcoin drops from $125 to $90 in heavy trading.

Edited to add: Well, that was quick. Doesn't look like the bottom fell out.

Edited again: Here's the criminal complaint against the alleged operator. The details at least make sense as a story: in the early days of Silk Road, the alleged operator had really lousy opsec, linking his name to the Silk Road project. Then later, he seems to have got scammed by a guy who first threatened to extort him, then pretended to be a hit-man who would kill the extortionist.

Comment author: 03 October 2013 01:52:50AM 8 points [-]
Comment author: 03 October 2013 02:45:17AM 3 points [-]

Can we trust that all of these sources will eventually be archived on gwern.net?

Comment author: 03 October 2013 03:03:49AM 7 points [-]

Yes, assuming the files aren't too gigantic like 100M+. Right now I just want to collect everything relevant.

Comment author: 02 October 2013 02:35:52PM 4 points [-]

Mindkilling for utilitarians: Discussion of whether it would have made sense to shut down the government to try to prevent the war in Iraq

More generally, every form of utilitarianism I've seen assumes that you should value people equally, regardless of how close they are to you in your social network. How much damage are you obligated to do to your own society for people who are relatively distant from it?

Comment author: 01 October 2013 10:12:49PM 4 points [-]

How can I acquire melatonin without a prescription in the UK? The sites selling it all look very shady to me.

Comment author: 01 October 2013 10:56:19PM 8 points [-]

It's melatonin; melatonin is so cheap that you actually wouldn't save much, if any, money by sending your customers fakes. And the effect is clear enough that they'd quickly call you on fakes.

And they may look shady simply because they're not competently run. To give an example, I've been running an ad from a modafinil seller, and as part of the process, I've gotten some data from them - and they're easily costing themselves half their sales due to basic glaring UI issues in their checkout process. It's not that they're scammers: I know they're selling real modafinil from India and are trying to improve. They just suck at it.

Comment author: 02 October 2013 08:43:27PM 1 point [-]

Have you tried asking for a prescription?

Comment author: 30 September 2013 06:24:00AM 4 points [-]

If I make a target, but instead of making it a circle, I make it an immeasurable set, and you throw a dart at it, what's the probability of hitting the target?

Comment author: 30 September 2013 09:33:58AM *  7 points [-]

If you construct a set in real life, then you have to have some way of judging whether the dart is "in" or "out". I reckon that any method you can think of will in fact give a measurable set.

Alternatively, there are several ways of making all sets measurable. One is to reject the Axiom of Choice; the AoC is what's used to construct immeasurable sets. It's consistent with ZF minus AoC (granted an inaccessible cardinal, as in Solovay's model) that all sets are Lebesgue measurable.

If you like the Axiom of Choice, then another alternative is to only demand that your probability measure be finitely additive. Then you can give a "measure" (such finitely additive measures are actually called "charges") such that all sets are measurable. What's more you can make your probability charge agree with Lebesgue measure on the Lebesgue measurable sets. (I think you need AoC for this though.)

In L.J. Savage's "The Foundations of Statistics" the axioms of probability are justified from decision theory. He only ever manages to prove that probability should be finitely additive; so maybe it doesn't have to be countably additive. One bonus of finite additivity for Bayesians is that lots of improper priors become proper. For example, there's a uniform probability charge on the naturals.

Comment author: 30 September 2013 05:53:09PM *  5 points [-]

1. We only care about probabilities if we can be forced to make a bet.

2. In order for it to be possible to decide who won the bet, we need that (almost always) a measurement to some finite accuracy will suffice to determine whether the dart is in or out of the set.

3. Thus the set has a boundary of measure zero.

4. Thus the set is measurable.

What we have shown is that in any bet we're actually faced with, the sets involved will be measurable.

(The steps from 2 to 3 and 3 to 4 are left as exercises. (I think you need Lebesgue measurable sets rather than just Borel measurable ones))

Note that the converse fails: I believe you can't make a bet on whether or not the dart fell on a rational number, even though the rationals are measurable.

Comment author: 30 September 2013 09:56:58PM *  1 point [-]

Here's a variant which is slightly different, and perhaps stronger since it also allows some operations with "infinite accuracy".

In order to decide who won the bet, we need a referee. A natural choice is to say that the referee is a Blum-Shub-Smale machine, i.e. a program that gets a single real number x∈[0,1] as input, and whose operations are: loading real number constants; (exact) addition, subtraction, multiplication and division; and branching on whether a < b (exactly).

Say you win if the machine accepts x in a finite number of steps. Now, I think it's always the case that the set of numbers which are accepted after n steps is a finite union of (closed or open) intervals. So then the set of numbers that get accepted after any finite number of steps is a countable union of finite unions of intervals, hence Borel.

Comment author: 30 September 2013 06:39:50AM *  6 points [-]

Immeasurable sets are not something in the real world that you can throw a dart at.

I can rephrase your problem to be: "If I have an immeasurable set X in the unit interval, [0,1), and I generate a uniform random variable from that interval, what is the probability that that variable is in X?"

The problem is that a "uniform random variable" on a continuous interval is a more complicated concept than you might think. Let me explain, by first giving an example where X is measurable, let's say X=[0,pi-3). We analyze continuous random variables by reducing them to discrete random variables. We can think of a "uniform random variable" as a sequence of digits in a decimal expansion, determined by rolling a 10-sided die. So for example, we can roll the die and get 1,4,6,2,9,..., which corresponds to .14629..., which is not in the set X. Notice that while in principle we might have to roll the die arbitrarily many times, we actually only had to roll it 3 times in this case, because once we got 1,4,6, we knew the number was too big to be in X. We can use the fact that we almost always only have to roll the die a finite number of times to get a definition of the "probability of being in X". In this case, after 3 die rolls we know that the probability is between .141 and .142, and if we consider more die rolls, we get more accuracy that converges to a single number, pi-3.
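The die-rolling procedure above is easy to sketch in code (my own illustration, using the measurable target X = [0, pi-3) from the example): keep rolling digits until the interval of numbers consistent with the digits so far lies entirely inside or entirely outside X.

```python
import math
import random

def membership_roll(target_upper=math.pi - 3, max_rolls=50, rng=random):
    """Roll decimal digits until the partial expansion pins down whether
    the random number lies in [0, target_upper).  Returns a pair
    (verdict, rolls_used); verdict is None if still undecided after
    max_rolls, which happens with probability ~0."""
    low = 0.0
    for n in range(1, max_rolls + 1):
        digit = rng.randrange(10)      # one roll of the 10-sided die
        low += digit * 10.0 ** -n      # smallest number consistent so far
        high = low + 10.0 ** -n        # largest number consistent so far
        if high <= target_upper:
            return True, n             # whole interval inside X
        if low >= target_upper:
            return False, n            # whole interval outside X
    return None, max_rolls
```

Over many runs, the fraction of hits converges to pi-3, the measure of X. For an immeasurable X there is no analogous stopping rule, which is exactly the point of the comment above.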

Now, let's look at what goes wrong if X is not measurable. The problem here is that the set is so messy that even if we know the first finitely many digits of a random number, we won't be able to tell if the number is in X. This stops us from carrying out a procedure like the one above and defining what we mean.

Is this clear?

Comment author: 30 September 2013 09:22:52AM *  3 points [-]

EDIT: I retract the following. The problem with it is that Coscott is arguing that "something in the real world that you can throw a dart at" implies "measurable" and he does this by arguing that all sets which are "something in the real world that you can throw a dart at" have a certain property which implies measurability. My "counterexamples" are measurable sets which fail to have this property, but this is the opposite of what I would need to disprove him. I'd need to find a set with this property that isn't measurable. In fact, I don't think there is such a set; I think Coscott is right.

The sets with this property (that you can tell whether your number is in or out after only finitely many dice rolls) are the open sets, not the measurable sets. For example, the set [0,pi-3] is measurable but not open. If the die comes up (1,4,1,5,9,...) then you'll never know if your number is in or out until you have all the digits. For an even worse example take the rational numbers: they're measurable (measure zero) but any finite decimal expansion could be leading to a rational or an irrational.

Comment author: 30 September 2013 11:28:20AM *  2 points [-]

The sets with this property (that you can tell whether your number is in or out after only finitely many dice rolls) are the open sets, not the measurable sets.

That doesn't seem right to me. Take as my target the open set (0, pi-3). If I keep rolling zeros I'll never be able to stop. (Edit: I know that the probability of rolling all zeros approaches 0 as the number of die rolls approaches infinity, but I assume that a demon can take over the die and start feeding me all zeros, or the digits of pi-3 or whatever. As I think about this more I'm thinking maybe what you said works if there is no demon. Edit 2: Or not. If there's no demon and my first digit is 0 then I can stop, but that's only because 0 is expressible as an integer divided by a power of ten. If there's no demon and I roll the first few digits of pi-3, I know that I'll eventually go over or under pi-3, but I don't know which, and it doesn't matter whether pi-3 itself is in my target set.)

Every die roll tells me that the random number I'm generating lies in the closed interval [x, x+1/10^n], where x is the decimal expansion I've generated so far and n is how many digits I've generated. If at some point I start rolling all 0s or all 9s I'll be rolling forever if the number I'm generating is a limit point of the target set, even if it's not in the target set.

Comment author: 30 September 2013 01:12:23PM *  2 points [-]

I should have been more accurate and said "If the random number that you'll eventually get does in fact lie in the set, then you'll find out about this fact after a finite number of rolls."

This really does define open sets, since for any point in an open set there's an open ball of radius epsilon about it which is in the set, and then the interval [x, x+1/10^n] has to be in that ball once 1/10^n < epsilon/2.

EDIT: (and the converse also holds, I think, but it requires some painfully careful thinking because of the non-uniqueness of decimal expansions)

I think a more exact representation of what Coscott actually said is the following property: "We almost always only have to roll the die finitely many times to determine whether the point is in or out."

This still doesn't specify measurable sets (because of the counterexample given by the rationals). I think the type of set that this defines is "Sets with boundary of measure zero" where the boundary is the closure minus the interior. Note that the rationals in [0,1) have boundary everywhere (i.e. boundary of measure 1).

Comment author: 30 September 2013 07:07:33AM 9 points [-]

In other words, "what is the measure of an unmeasurable set?". The question is wrong.

Comment author: 30 September 2013 01:21:43PM 5 points [-]

I suppose the question is: What should you do if you're offered a bet on whether the dart will hit the target or not?

There's no way to avoid the question other than arguing somehow that you'll never encounter an immeasurable set.

Comment author: 30 September 2013 04:32:28PM 5 points [-]

Immeasurable objects are physically impossible. The actual target will be measurable, even if the way you came up with it was to try to follow the "instructions" that describe an immeasurable set.

Comment author: 09 October 2013 10:16:00AM *  0 points [-]

Hmm. What is the exact length of your, say, pen? Is it a rational number or a real number... I mean the EXACT length...?

Note that if the answer to the last question is "it is a real number", then it is possible to construct the bet as proposed by the OP.

Before you quote "Planck's Length" in your reply, there is currently no directly proven physical significance of the Planck length (at least according to Wikipedia).

Comment author: 30 September 2013 04:23:53PM 4 points [-]

For the same reasons you outline above, I'm okay with fighting this hypothetical target.

If I must dignify the hypothesis with a strategy: my "buy" and "sell" prices for such a bet correspond to the inner and outer measures of the target, respectively.

Comment author: 30 September 2013 04:40:20PM 10 points [-]

I'll never encounter an immeasurable set.

Comment author: 04 October 2013 04:55:57PM *  3 points [-]

There is too much unwarranted emphasis on ketosis when it comes to keto diets, rather than on hunger satiation. That might sound like a weird claim, since the diet is named after ketosis, but when it comes to the efficacy of the keto diet for weight loss (setting aside potential health or cognitive effects), ketosis has little to do with the weight loss itself. Most attempts to explain the keto diet almost always start with an explanation of what ketosis is, with an emphasis on attaining ketosis rather than on hunger satiation and caloric deficit. Here is the intro excerpt from reddit's r/keto:

A Ketogenic Diet is any diet that causes ketones to be produced by the liver, shifting the body’s metabolism away from glucose towards fat utilization. Typically on a moderate to high carb diet, the body will prefer glucose for fuel (usually from dietary carbs), but by restricting carbs, the body will prefer fat for fuel. By inducing ketosis, a series of adaptations will take place.

Even on Lesswrong, discussions of keto again put too much emphasis on ketosis. Here is a top-level comment in the solved problems repository:

As far as I can tell, ketogenic diets solve the problem of fat loss. I know, anecdotes are not data, but it's worked wonders for everyone I know who's tried it (myself included).

Err on the side of posting solutions which may not be universal but are still likely to be helpful to many people.

This is the sole reason I'm posting this. Keto works for very many people. The short story of keto is that your brain can only eat certain kinds of chemicals. Glycogen from eating carbohydrates is one of them. Ketones generated from fat is another. Your body will preferentially use the first over the second, since turning fat into ketones is expensive. So if you eat few enough carbs (<30g per day is the figure I remember) and plenty enough fat (2:1 fat to protein is what I heard), your body will eventually start doing chemistry that turns dietary and body fat into ketones.

There's some more practical advice about how to induce ketosis quickly (muscles store glycogen, so exercise helps) and how to make low-carb versions of foods you enjoy, but that's pretty much the gist of it.

The vast majority of people on the keto diet who are losing weight are also eating fewer calories and feeling fuller, regardless of whether the body is going through ketosis or not. To the credit of r/keto, there are many there who recognize the keto diet for what it is, but there is still a schism between those who see keto as a neat little hack to eat less and those who see ketosis as the goal. In short, barring any health benefits of the macro composition of your diet, ketosis in and of itself is not necessary to lose weight.

For the vast majority of people's physiology and physique goals, the most efficient thing to do is to find the ways that best let you eat less and feel full. Some people use keto, some use intermittent fasting, some use both. I find that intermittent fasting is the least bullshitty diet and the most honest about how it works -- restricted feeding windows leave fewer chances to overeat, long fasting times decrease hunger pangs, and you still get the satisfaction of a full, "stuffed" feeling of satiety.

Edit: Not bashing keto, I just wish people would focus more on the health benefits of keto and the efficacy of high-protein diets in keeping one satiated, rather than on ketosis in and of itself.

http://examine.com/faq/what-should-i-eat-for-weight-loss.html#ref3

Comment author: 04 October 2013 05:13:53PM 0 points [-]

In short, baring any health benefits of macro composition of your diet, ketosis in and of itself is not necessary to lose weight.

That is true. However for many people "health benefits" beyond and above losing weight are a major advantage of keto diets.

Losing weight isn't the be-all and end-all of the way you eat.

Comment author: 04 October 2013 05:36:11PM 0 points [-]

Yeah, that is why I maintain a keto-esque diet: I believe in the long-term effects of reduced consumption of processed carbs and high-glycemic-index foods.

An analysis of the front page of r/keto turns up only one post about positive health/cognitive effects; almost every post is some variant of "look at how much weight I lost", which is a shame, because you're right when you say that losing weight isn't the be-all and end-all of the way you eat.

Comment author: 30 September 2013 09:55:54AM *  3 points [-]

For some reasons that I don't understand, the Special threads wiki page has a link to this:

...but that page doesn't work well.

Comment author: 30 September 2013 09:08:10PM 3 points [-]

Douglas_Knight just fixed it. (It's a wiki; in the future, just fix it!)

Comment author: 02 October 2013 06:15:45AM *  10 points [-]

Interesting statements I ran into regarding the kabuki-theater aspects of the so-called United States federal government shutdown of 2013. This resulted in, among other things, closing down websites.

A website shouldn't just go down when the people managing it stop working; it's not like they're pedaling away inside the servers. Block the federal highways with army tanks: sorry, the government is closed.

There is a nontrivial set of the voting public who legitimately believe that money makes tech work via magical alchemy.

I was interested to know this kind of thing has a name: Washington Monument Syndrome.

The name derives from the National Park Service's alleged habit of saying that any cuts would lead to an immediate closure of the wildly popular Washington Monument.

Comment author: 19 October 2013 01:44:51PM *  5 points [-]

The Shutdown Wasn’t Pointless. It Revealed Information

Perhaps an analogy to war is useful. War is stupid: you could always take the result of the war, implement it without fighting, and leave everyone better off. But sometimes a weak party believes it is stronger than it really is. This makes it overly optimistic in bargaining, leading to a breakdown and war. However, the process of fighting reveals the weakness, in turn making the weak side willing to sit down at the bargaining table. The Republicans are the weak side. The Democrats are the strong side. The costs of war are the costs of the shutdown.

William Spaniel says on Twitter that he is not sure how he feels about our models of war also explaining U.S. Congress bargaining. Besides war being politics by other means, I say we should obviously expect the models to work to a limited extent. Democracy is a highly ritualized form of civil war, and not just any kind of war but the kind practiced in the 19th century, when democracy began its march. Instead of drafting a mob and ordering it to shoot the opposing mob, you assemble your respective mobs in an orderly fashion and count them via voting. Since Samuel Colt made men equal in the 19th century, you assume the slightly larger mob wins. Some nations even factor in territory held to decide outcomes. After elections both mobs go safely home and about their business, while in theory the government implements the real outcome of the simulated war.

I'm half expecting that sooner or later someone will realize you can, with current technology, win civil wars with drones against mobs, and democracy will be discarded in favor of a more stable equilibrium. On the other hand, early 20th-century thinkers, futurists and fiction writers expected people to realize that air power changed the calculus of war, and for this change to impact politics quite profoundly. Arguably we might even have been better off had they been right. All power to the pilots! Yet they weren't. Evidence against.

Comment author: 02 October 2013 10:31:04PM 13 points [-]

As a sysadmin, if I were to be furloughed indefinitely I would probably spin down any nontrivial servers. A server that goes wrong and can't be accessed is a really, really, really, really terrible-horrible-no-good-very-bad thing. And things go wrong on a regular basis in normal times; when the government is shut down and a million things that get done every day suddenly stop being done, something somewhere is going to break. Some 12-year-old legacy cron job sitting in an obscure corner of an obscure server, written by a long-departed contractor, is going to notice that the foobar queue is empty, which turns out to be undefined behavior because the foobar queue has always had stuff going through it before, so it executes an else branch it's never had occasion to execute, which sends raw debugging information to a production server because the contractor was bad at things, and also included passwords in their debugging because they were really bad at things...

Comment author: 02 October 2013 09:02:17PM 8 points [-]

This is actually a terrible example of Washington Monument Syndrome.

"Hi, server admin here... We cost money, as does our infrastructure. I imagine a site that large costs a good deal; we aren't talking five bucks on Bluehost here.

I am private sector, but if I were to be furloughed for an indeterminate amount of time, you really have two options: leave things on autopilot until the servers inevitably break or the site crashes, at which point parts or all of it will be left broken without notice or explanation; or put up a splash page, spin down 99% of my infrastructure (that splash page can run on a five-dollar Bluehost account), and then leave. I won't be able to come in while furloughed to put the site back up after it crashes.

If you really think web apps keep themselves running 24/7 without intervention, we really have been doing a great job with that illusion, and I guess the sleepless nights have been worth it to be successfully taken for granted."

Comment author: 03 October 2013 04:04:36AM 4 points [-]

I suspect it would be illegal to run those servers. The Anti-Deficiency Act forbids the government from "involving the government in any obligation to pay money before funds have been appropriated". The Army can't purchase new tanks, NASA can't order a new space shuttle, and I bet most agencies can't rack up more obligations with their ISPs and electric companies.

This act, by the way, is the reason nonessential workers are forbidden from volunteering for work.

Comment author: 04 October 2013 11:50:20AM *  4 points [-]

I suspect it would be illegal to run those servers.

http://www.ncbi.nlm.nih.gov/ seems still to run.

Comment author: 05 October 2013 02:30:54PM 1 point [-]

BLAST and PubMed are running automatically but there is no updating of either of them with new materials.

Comment author: 06 October 2013 03:31:46PM *  -2 points [-]
Comment author: 06 October 2013 10:02:36PM 5 points [-]

In the past few hours, my total karma score has dropped by fifteen points. It looks like someone is going back through my old comments and downvoting them. A quick sample suggests that they've hit everything I've posted since some time in August, regardless of topic.

Is this happening to anyone else?

Anyone with appropriate access care to investigate?

To whoever's doing this — Here's the signal that your action sends to me: "Someone, about whom all you know is that they have an LW account that they use to abuse the voting system, doesn't like you." This is probably not what you mean to convey, but it's what comes across.

Comment author: 06 October 2013 11:01:38PM 0 points [-]
Comment author: 07 October 2013 02:53:16AM 1 point [-]

Maybe it's just me not knowing much about website design, but this seems like a problem which could be mitigated with automatic controls on the karma system. Like, for example, you have a limit of +/- n net Karma you can award to any given poster in an arbitrary time limit t. Or even that if your rate of downvoting any given poster cracks some ceiling it sends up an automatic mod flag that there might be an attack going on.

Ideally, of course, we could just abide by the honor system, but from a pragmatic perspective it might make more sense to set up stronger safeguards as an additional measure.
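As a sketch of how the suggested per-poster limit might look (all names and thresholds here are hypothetical, not an actual LW feature): track recent vote timestamps per (voter, target) pair and reject votes past a threshold within a rolling window.

```python
from collections import defaultdict, deque

class VoteRateLimiter:
    """Hypothetical sketch: reject a vote once a voter has cast more than
    `max_votes` on the same target within the last `window_seconds`."""

    def __init__(self, max_votes=5, window_seconds=3600):
        self.max_votes = max_votes
        self.window = window_seconds
        self.history = defaultdict(deque)  # (voter, target) -> vote timestamps

    def allow(self, voter, target, now):
        q = self.history[(voter, target)]
        # drop timestamps that have fallen out of the rolling window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.max_votes:
            return False  # this is also where an automatic mod flag could fire
        q.append(now)
        return True
```

The same counter could implement the softer captcha variant mentioned below it: instead of returning False, demand a captcha once the threshold is crossed.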

Comment author: 07 October 2013 04:33:09AM 1 point [-]

Like, for example, you have a limit of +/- n net Karma you can award to any given poster in an arbitrary time limit t.

That can be also implemented more softly by still allowing anyone to vote anyone as much as they want, but requiring a captcha for each vote after a given limit.

Comment author: 08 October 2013 07:51:00PM *  2 points [-]

Here's a twist on prospect theory.

I installed solar panels, which were pretty expensive, but pay back as they generate electricity.

The common question was "How long will it take to earn your investment back?" I understand why they're asking. The investment is illiquid, even more than a long-term bank deposit. But if I wanted to get my money "back," I'd keep it in my checking account. The question comes from a tendency to privilege a bird in the hand over those that are still in the bush.

If the panels have a total breakdown before I get my investment back, that's bad, but it's just a negative RoI, no worse than losing money to inflation or to a down stock market.

If I get my money back, and then the panels break down right away, I won't say "at least I got my money back," I'll say "I wish my RoI had been greater." If I get my investment back, but it takes far longer than I expected, I won't say "at least I got it back." I'd say "too bad that my annualized RoI is near zero; at least it's not negative."

And if my RoI is super-duper, and I get my money back quick, I won't say "hurrah, I got my money back," I'll say "hurrah, I'm getting better RoI than any other investment I can make today," which is actually what I'll say even before I get my investment back. And then I'll keep smiling so long as the panels keep laying the golden eggs.

Comment author: 08 October 2013 08:07:00PM *  2 points [-]

Correct.

If you want to be even more correct :-) you should estimate your IRR (internal rate of return) and compare it with your opportunity costs for the money invested.

Comment author: 08 October 2013 08:16:14PM 0 points [-]

Yes, good point. It took me a while to figure out on my own the best way to calculate my rate of return on my variable-return investments, before discovering this in Excel.

But in this case, the panels produce (hopefully) a pretty constant annual amount of electricity, and the price I get is a fixed amount, so it seems that calculating IRR is easy.

As long as we are on the topic, maybe the smart folks here can explain, mathematically, why the summation formula for IRR does not admit a closed-form solution? I asked on Quant StackExchange and didn't get much of an answer.
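On the closed-form question: clearing denominators turns the NPV = 0 condition into a polynomial of degree n in 1/(1+r), and by Abel-Ruffini polynomials of degree five and higher have no general solution in radicals, so with more than a handful of cash flows IRR must be found numerically. A minimal bisection sketch (function names are mine, not from any particular library):

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-12):
    """Find the rate where NPV crosses zero by bisection.
    Assumes exactly one sign change of npv on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid  # root is in the lower half
        else:
            lo = mid  # root is in the upper half
    return (lo + hi) / 2
```

For the solar-panel case, the cash flows would be the upfront cost (negative) followed by each year's electricity savings, which is why a constant annual payout makes the calculation easy.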

Comment author: 08 October 2013 08:21:32PM 2 points [-]

But in this case, the panels produce (hopefully) a pretty constant annual amount of electricity, and the price I get is a fixed amount, so it seems that calculating IRR is easy.

Your calculation is presumably for a fairly long term. In the long term prices don't remain fixed, things break down, need maintenance, etc. For example, hail might damage some of your panels. Or your roof might start to leak and the presence of the panels will substantially add to the cost of repairing it.

Comment author: 08 October 2013 08:42:15PM 0 points [-]

Excellent. I had an overly-simplistic mental model in which the panels would last until they fail.

But yes, unexpected costs that are nonetheless below full price of the panels are a real possibility.

Comment author: 06 October 2013 05:57:42AM 2 points [-]

Am I mistaken, or do the Article Navigation buttons only ever take me to posts in Main, even if I start out from a post in Discussion? Is this deliberate? Why?

Comment author: 07 October 2013 01:13:51AM 2 points [-]

You correctly identify a bug. Here is another bug, which is less consistent. Posts in discussion have two URLs, one marked discussion, one not. For this open thread, here is the discussion link. Its tag links in the lower right corner of the post send you to discussion posts (but the tag links in the article navigation don't). I got that discussion link from a page of new discussion articles. But if I instead go to Coscott's submitted page, I get this link, which looks like it's in main, with tag links also in main.

Comment author: 04 October 2013 09:10:15PM 2 points [-]

Another PT:LoS question. In Chapter 8 ("Sufficiency, Ancillarity and all that"), there's a section on Fisher information. I'm very interested in understanding it, because the concept has come up in important places in my statistics classes without any conceptual discussion: it appears in the Cramér-Rao bound and the Jeffreys prior, but it looks so arbitrary to me.

Jaynes's explanation of it as a difference in the information that different parameter values give you about large samples is really interesting, but there's one step of the math that I just can't follow. He does what looks like a second-order Taylor approximation of log p(x|theta), but there's no first-order term, and the second-order term is negative for some reason?! What happened there?

Comment author: 05 October 2013 09:51:44PM *  1 point [-]

but there's no first-order term and the second-order term is negative for some reason?! What happened there?

There's no first-order term because you are expanding around a maximum of the log posterior density. Similarly, the second-order term is negative (well, negative definite) precisely because the posterior density falls off away from the mode. What's happening in rough terms is that each additional piece of data has, in expectation, the effect of making the log posterior curve down more sharply (around the true value of the parameter) by the amount of one copy of the Fisher information matrix (this is all assuming the model is true, etc.). You might also be interested in the concept of "observed information," which represents the negative of the Hessian of the (actual not expected) log-likelihood around the mode.
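To make the sign of the curvature concrete, the Bernoulli model is a handy check: the Fisher information is the expectation of the negative second derivative of the log-likelihood, which for one Bernoulli observation works out to 1/(θ(1−θ)). A minimal sketch of that calculation (function names are mine):

```python
def neg_second_deriv_loglik(x, theta):
    """-d^2/dtheta^2 of log p(x|theta) for one Bernoulli observation,
    where log p(x|theta) = x*log(theta) + (1-x)*log(1-theta)."""
    return x / theta**2 + (1 - x) / (1 - theta) ** 2

def fisher_information(theta):
    """Fisher information: expectation of the negative second derivative
    of the log-likelihood, taken over x ~ Bernoulli(theta)."""
    return sum(prob * neg_second_deriv_loglik(x, theta)
               for x, prob in ((1, theta), (0, 1 - theta)))
```

The vanishing first-order term corresponds to the score having expectation zero at the true parameter, which is the same reason the expansion in the comment above starts at second order.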

Comment author: 07 October 2013 03:03:42AM *  0 points [-]

ah, thank you! It makes me so happy to finally see why that first term disappears.

But now I don't see why you subtract the second-order terms.

I mean, I do see that since you're at a maximum, the value of the function has to decrease as you move away from it.

But, in the single-parameter case, Jaynes's formula becomes

$\log{p(x|\theta)}=\log{p(x|\theta_0)} - \frac{\partial^2 \log{p(x|\theta)}}{\partial \theta^2}(\delta\theta)^2$

But that second derivative there is negative. And since we're subtracting it, the function is growing as we move away from the maximum!

Comment author: 07 October 2013 05:00:19AM 1 point [-]

Yes, that formula doesn't make sense (you forgot the 1/2, by the way). I believe 8.52/8.53 should not have a minus there, and 8.54 should have a minus that it's missing. Also, 8.52 should use expected values or big-O probability notation. This is a frequentist calculation, so I'd suggest a more standard reference like Ferguson.

Comment author: 04 October 2013 06:00:52AM 2 points [-]

As the partial government shutdown enters its third day, many House Republicans are determined to keep fighting, even though they see no plausible way out of the current impasse, because they've come so far they cannot imagine backing down now. "I think there's a sense that for us to do a clean CR now -- then what the hell was this about?" one Republican House member told me. "So I don't think it's going to end anytime soon."

I find it quite possible that what superficially looks like sunk-cost fallacy is in this case actually rational (given certain goals) for deeper game-theoretic reasons. But I am not sure if all the players are conscious of this or if some operate driven by non-consciously-rational feelings that have an underlying logic in the situation they are in. See also.

Comment author: 03 October 2013 05:02:28PM *  2 points [-]

Yet another newbie question. What's the rational way to behave in a prediction market where you suspect that other participants might be more informed than you?

Here's a toy model to explain my question. Let's say Alice has flipped a fair coin and will reveal the outcome tomorrow. You participate in a prediction market over the outcome of the coin. The only participant besides you is Bob. Also you know that Alice has flipped another fair coin to decide whether to tell Bob the outcome of the first coin in advance. What trades should you offer to Bob, and what trades should you accept from Bob?

Bonus points for solving a similar toy model where Bob has a 50% chance of influencing the outcome of the coin instead. Both questions seem relevant to real-world prediction markets, and also other betting situations.

Comment author: 03 October 2013 06:33:55PM 7 points [-]

What's the rational way to behave in a prediction market where you suspect that other participants might be more informed than you?

Stay out of the market.

Alternatively, if you have a strong prior, you can treat the bets of other better-informed participants as evidence and do Bayesian updating. But it will have to be a pretty strong prior to still bet against them.

Of course, if the market has both better-informed and worse-informed participants and you know who they are, you can just bet together with the better-informed participants.

Comment author: 03 October 2013 05:09:55PM 4 points [-]

You will not take a bet with Bob. If he does not know the result of the coin, he will not take anything worse than even odds.

You should clearly not offer him even odds. If you offer him anything else, he will accept if and only if he knows you will lose.

Comment author: 03 October 2013 05:31:43PM *  2 points [-]

Hang on, I just realized there's a much simpler way to analyze the situations I described, which also works for more complicated variants like "Bob gets a 50% chance to learn the outcome, but you get a 10% chance to modify it afterward". Since money isn't created out of nothing, any such situation is a zero-sum game. Both players can easily guarantee themselves a payoff of 0 by refusing all offers. Therefore the value of the game is 0. Nash equilibrium or subgame-perfect equilibrium, it doesn't matter: rational players don't play.

That leads to the second question: which assumptions should we relax to get a nontrivial model of a prediction market, and how do we analyze it?

Comment author: 03 October 2013 05:51:35PM 8 points [-]

Robin Hanson argues that prediction markets should be subsidized by those who want the information. (They can also be subsidized by "noise" traders who are not maximizing their expected money from the prediction market.) Under these conditions, the expected value for rational traders can be positive.

Comment author: 03 October 2013 06:01:44PM *  1 point [-]

Good link, thanks. So Robin knows that zero-sum markets will be "no-trade" in the theoretical limit. Can you explain a little about the mechanism of subsidizing a prediction market? Just give stuff to participants? But then the game stays constant-sum...

Comment author: 03 October 2013 07:20:44PM 4 points [-]

Basically, you'd like to reward everyone according to the amount of information they contribute. The game isn't constant sum overall since the amount of information people bring to the market can vary. Ideally, you'd still like the total subsidy to be bounded so there's no chance for infinite liability.

Depending on how the market is structured, if someone thinks another person has strictly more information than them, they should disclose that fact and receive no payout (at least in expectation). Hanson's market scoring rules reward everyone according to how much they improve on the last person's prediction. If Bob participates in the market before you, you should just match his prediction. If you participate before him, you can give what information you do have and then he'll add his unique information later.

Comment author: 03 October 2013 07:36:23PM *  2 points [-]

Many thanks for the pointer to LMSR! That seems to answer all my questions.

(Why aren't scoring rules mentioned in the Wikipedia article on prediction markets? I had a vague idea of what prediction markets were, but it turns out I missed the most important part and asked a whole bunch of ignorant questions... Anyway, it's a relief to finally understand this stuff.)

Comment author: 03 October 2013 08:46:52PM 3 points [-]

They should be. Just a matter of someone stepping up to write that section. The modern theory on market makers has existed for less than a decade and only matured in the last few years, so it just hasn't had time to percolate out. Even here on Less Wrong, where prediction markets are very salient and Hanson is well known, there isn't a good explanation of the state of the art. I have a sequence in the works on prediction markets, scoring rules, and mechanism design in an attempt to correct that.

Comment author: 03 October 2013 08:52:23PM 1 point [-]

That would be great! If you need someone to read drafts, I'd be very willing :-)

Comment author: 04 October 2013 12:47:34PM 0 points [-]

Good link, thanks. So Robin knows that zero-sum markets will be "no-trade" in the theoretical limit. Can you explain a little about the mechanism of subsidizing a prediction market? Just give stuff to participants? But then the game stays constant-sum...

There's no problem with the game being constant sum.

Comment author: 03 October 2013 08:35:41PM 0 points [-]

I always assumed it was by selling prediction securities for less than they will ultimately pay out.

Comment author: 03 October 2013 07:00:40PM 2 points [-]

The assumption you should relax is that of an objective probability. If you treat probabilities as purely subjective, so that saying P(X)=1/3 means my decision procedure weights the world where X is false twice as heavily as the world where X is true, then we can make a trade.

Let's say I say P(X)=1/3 and you say P(X)=2/3, and I bet you a dollar on not-X. Then I pay you a dollar in the world that I do not care about as much, and you pay me a dollar in the world that you do not care about as much. Everyone wins.

This model of probability is kind of out there, but I am seriously considering that it might be the best model. Wei Dai argues for it here.

Comment author: 01 October 2013 12:23:01AM 2 points [-]

What's the LMSR prediction market scoring rule? We've just started an ad-hoc prediction market at work for whether some system will work, but I can't remember how to score it.

Say I have these bets:

House: 50%
Me: 50%
SD: 75%
AK: 35%

what is the payout/loss for each player?

Comment author: 01 October 2013 02:22:25AM 3 points [-]

The log market scoring rule (LMSR) depends on there being an order to the stated probabilities, so the payoffs would be different for the order NS, SD, AK than for the order AK, SD, NS.

Given a particular order, the payoff for the i-th probability submitted is log(p_i^k) - log(p_{i-1}^k) if event k occurs. For example, if the order is NS, SD, AK and the system does work, AK's payoff is log(.35) - log(.75). If the system doesn't work, AK's payoff is log(.65) - log(.25).

I haven't seen this written about anywhere, but if you just have probabilities submitted simultaneously and you don't want to fix an order, one way to score them would be log(p_i^k) - \frac{1}{n} \sum_{j \ne i} log(p_j^k) (the log of the probability person i gives to event k, minus the average of the log-probabilities everyone else gave, including the house, assuming there are n participants plus the house). This is just averaging over the payoffs of every possible ordering of submission. So, for these probabilities, AK's score if the system worked would be log(.35) - (log(.75) + log(.5) + log(.5))/3.
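For concreteness, the sequential payoff rule described above can be checked numerically. A minimal sketch (function name is mine; natural log, binary outcome):

```python
import math

def lmsr_payoffs(probs, outcome_occurred, house_prob=0.5):
    """Sequential log market scoring rule: each participant's payoff is
    the log of the probability they assigned to the realized outcome,
    minus the log of the previous participant's probability for it."""
    chain = [house_prob] + list(probs)
    def p(q):  # probability a forecast q assigns to the realized outcome
        return q if outcome_occurred else 1 - q
    return [math.log(p(cur)) - math.log(p(prev))
            for prev, cur in zip(chain, chain[1:])]

# order NS, SD, AK with probabilities .5, .75, .35 that the system works
payoffs_if_works = lmsr_payoffs([0.5, 0.75, 0.35], outcome_occurred=True)
payoffs_if_fails = lmsr_payoffs([0.5, 0.75, 0.35], outcome_occurred=False)
```

Note the telescoping: the total payout depends only on the final forecast versus the house prior, which is how the market maker's subsidy stays bounded.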

Comment author: 30 September 2013 08:06:39AM 2 points [-]

Does anyone have any short thought experiments that have caused them to experience viewquakes on their own?

Comment author: 30 September 2013 09:33:26AM *  3 points [-]

From thought experiment to real experiment. I mean really, how could they NOT build it once they thought of it? link

Comment author: 30 September 2013 04:51:15PM 2 points [-]

Topic: Investing

There seems to be a consensus among people who know what they're talking about that the fees you pay on actively managed funds are a waste of money. But I saw some friends arguing about investing on Facebook, with one guy claiming that index funds are not actually the best way to go for diversified investing that doesn't waste money on fees. Does anyone know if there is anything to this? More specifically, are Vanguard's funds really as cheap as advertised, or is there some catch to them?

Comment author: 30 September 2013 05:14:14PM *  4 points [-]

The idea is that you can't, on average and long term, beat the market. So paying extra money for a fund that claims to be able to do that is an unnecessary gamble. Accumulating the expertise to evaluate a fund's ability to perform better than the market would give you the ability to just invest at that level anyway, so you might as well save your time and money and stick it in the cheapest market funds you can manage.

Yes, some strategies beat the market, sometimes (they also sometimes fail catastrophically). But you can do comparably well in the long term with a very low-cost, low-effort strategy that frees up a lot of time and effort for other pursuits.

You can look up expense ratios on Google, Morningstar, etc. Vanguard does pretty well. They're pretty well represented here.

Comment author: 30 September 2013 11:06:51PM 2 points [-]

The issue with an index fund based on something like the S&P 500 is that the S&P 500 changes over time.

If a company loses its S&P 500 spot, all the index funds that track the S&P 500 dump its stock on the market. On average that's not going to be a good trade. The same goes for buying the companies that just made it into the S&P 500. On average you are going to lose some money to the hedge funds or investment banks who take the other side of those trades.

In general you can expect that if you invest money in the stock market, big powerful banks have some way to screw you. But they won't take all your money, and index funds are still a good choice if you don't want to spend too much time thinking about investing.

Comment author: 01 October 2013 01:21:28AM *  2 points [-]

This sounds like a sufficiently obvious failure mode that I'd be extremely surprised to learn that modern index funds operate this way, unless there's some worse downside that they would encounter if their stock allocation procedure was changed to not have that discontinuity.

Comment author: 01 October 2013 04:14:30PM 2 points [-]

They do because their promise is to match the index, not produce better returns.

Moreover, the S&P 500 is cap-weighted, so even apart from membership changes it is rebalanced (the weights of different stocks in the portfolio change) on a regular basis. That also leads to rather predictable trades by the indexers.

Comment author: 01 October 2013 04:38:53PM *  1 point [-]

This sounds like a sufficiently obvious failure mode that I'd be extremely surprised to learn that modern index funds operate this way, unless there's some worse downside that they would encounter if their stock allocation procedure was changed to not have that discontinuity.

Being an index fund is fundamentally about changing your portfolio when the index changes. There's no real way around it if you want to be an index fund.

Comment author: 01 October 2013 05:51:15PM 1 point [-]

If you could consistently make money by shorting stocks that are about to fall off an index, the advantage would be arbitraged to oblivion.

Comment author: 01 October 2013 08:27:10PM 1 point [-]

If you could consistently make money by shorting stocks that are about to fall off an index, the advantage would be arbitraged to oblivion.

The question is whether you know that the stocks are about to fall off the index before other market participants. If your high-frequency trading algorithm is the first to know that a stock is about to fall off an index, then you make money with it.

Using the effect to make money isn't easy because it requires having information before other market participants. That doesn't change anything about whether the index funds on average lose money on trades to update their portfolio to index changes.

Comment author: 02 October 2013 09:00:03PM 1 point [-]

This seems like a really weird question. If your friend is advocating something else, how about you tell us what it is? If your friend is knocking Vanguard, but not specifying what's better, why should I care? Your last sentence suggests that Vanguard is lying about its fees. That would be a reasonable thing to say in isolation, but it's not true.

Comment author: 01 October 2013 12:02:44AM *  1 point [-]

Asset allocation matters too. Vanguard target retirement funds follow the conventional wisdom (more stocks when you're young, more bonds when you're older) and are pretty cheap. Plowing all new investments into a single target-date fund is good advice for most people*.

I implemented a scheme to lower my expenses from 0.17% to 0.09%, but it was not worth the time, hassle, and tax complications.

*People who should do something more complicated include retirees, who should strongly consider buying an annuity, and people who are saving to donate to charity.

Comment author: 30 September 2013 04:29:45PM 2 points [-]
Comment author: 30 September 2013 04:42:08PM 7 points [-]

An earthrise that might be witnessed from the surface of the Moon would be quite unlike moonrises on Earth. Because the Moon is tidally locked with the Earth, one side of the Moon always faces toward Earth. Interpretation of this fact would lead one to believe that the Earth's position is fixed on the lunar sky and no earthrises can occur; however, the Moon librates slightly, which causes the Earth to draw a Lissajous figure on the sky. This figure fits inside a rectangle 15°48' wide and 13°20' high (in angular dimensions), while the angular diameter of the Earth as seen from the Moon is only about 2°. This means that earthrises are visible near the edge of the Earth-observable surface of the Moon (about 20% of the surface). Since a full libration cycle takes about 27 days, earthrises are very slow, and it takes about 48 hours for Earth to clear its diameter.

Comment author: 30 September 2013 05:31:21PM 2 points [-]

Thanks, a correction has been made.

Comment author: 30 September 2013 07:18:32AM 1 point [-]

The occasional phenomenon where people go downvote every comment by someone they disagree with could be limited by only allowing people to downvote comments made within the last week.

Comment author: 30 September 2013 05:13:35PM 11 points [-]

Or limit the number of votes one person can give to another within a time period. I think most vendetta voting happens in the heat of the moment. I don't like the idea of not being able to vote on old comments, or of skewing the voting either way.

Comment author: 30 September 2013 07:08:35PM *  1 point [-]

I like this fix. If the mass voters tend to have low karma, you could also make this a fix that only applies to people below some karma threshold.

Comment author: 02 October 2013 11:09:32AM 1 point [-]

In the only cases I've seen where I've had grounds for suspicion about who was doing the karmassassination, the person I thought was the culprit was a high-karma long-established LWer.

(But there is some bias here; such people are likely to be more salient as candidates for who the culprit is. And in no case have I been very sure who was responsible.)

Comment author: 30 September 2013 07:26:33AM *  4 points [-]

I did not know this was a thing, but I do not think this is a worthwhile fix. If a user experiences a sudden drop in karma and a lot of -1 posts, they should be able to report the user, and a mod should be able to check, punish the offender, and fix the problem. We do not want a fix that is frequently an inconvenience for a problem that only rarely occurs.

Comment author: 30 September 2013 07:40:41AM 4 points [-]

If a user experiences a sudden drop in karma, and a lot of -1 posts, they should be able to report the user, and a mod should be able to check and punish them and fix the problem.

I've never seen a mod capable of checking who downvoted what reacting in any way when this has come up.

Comment author: 30 September 2013 07:46:26AM 5 points [-]

I would guess that a mod being capable of checking that would be an easier or at least not much harder fix than a time limit on voting down.

Comment author: 30 September 2013 04:54:23PM 5 points [-]

Having been on both sides of a flash downvote (guilty!), I can tell you that these vendettas are not very effective on a forum of this size. Enough people read old comments and tend to upvote comments they otherwise wouldn't if they feel those comments have been unfairly penalized, even old ones. It's a lot more effective to post a quality reply which convinces other readers that the comment in question deserves a downvote.

Comment author: 07 October 2013 01:53:55PM 0 points [-]

Hmmm. It's true that one of the cases where I'm most likely to upvote is where I see a comment that looks fine to me that has a negative point total. But flash downvoting won't necessarily produce any negative point totals, and I'm a lot less likely to spring into action for a post that merely doesn't have quite as many positive points as it should (since I rarely have a very firm idea of how many positive points anything should have). Then again, perhaps in cases where nothing actually goes negative, not much harm is really being done anyway. So I may agree with you, but I'm not sure you've got exactly the right reason.

I guess mostly my own feeling is that the karma system seems to work pretty well as is. When I see a comment getting downvoted to oblivion, it usually seems to deserve it, and the quality of conversation around here usually seems above the internet average. I'm sure karma doesn't precisely measure what it's supposed to measure (whatever that is anyway), but I'm inclined to suspect that trying to make it do so is likely to end up being more trouble than it's worth.

Comment author: 30 September 2013 07:32:55AM 5 points [-]

I always wondered if an algorithm could be implemented akin to the Page rank algorithm. A vote from someone counts more if the person votes seldom and it counts more if the person is upvoted frequently by people with high vote weight.
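
For what it's worth, the scheme described here can be sketched as a fixed-point iteration, roughly in the spirit of PageRank. All names and constants below are hypothetical; this is just one way the "seldom-voting, highly-upvoted voters count more" idea might be operationalized, not a proposal for the actual site code:

```python
# Sketch: a voter's weight grows with the weighted upvotes they receive
# and shrinks with how freely they vote, computed by fixed-point
# iteration as in PageRank. Damping constants (0.15/0.85) are arbitrary.

def compute_vote_weights(upvotes_received, votes_cast, iterations=50):
    """upvotes_received: {user: [users who upvoted them]}
       votes_cast: {user: total number of votes that user has cast}"""
    users = set(upvotes_received) | set(votes_cast)
    weight = {u: 1.0 for u in users}
    for _ in range(iterations):
        new = {}
        for u in users:
            # Incoming support, weighted by each voter's current weight
            # and diluted by how often that voter votes; unknown voters
            # are ignored.
            support = sum(weight.get(v, 0.0) / max(votes_cast.get(v, 1), 1)
                          for v in upvotes_received.get(u, []))
            new[u] = 0.15 + 0.85 * support
        weight = new
    return weight
```

In this toy version, an upvote from a selective voter ends up worth more than one from a voter who upvotes a hundred things, which is the stated intent.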

Comment author: 30 September 2013 06:51:00PM 5 points [-]

A vote from someone counts more if the person votes seldom

Could you explain this bit? I'd expect someone who votes seldom to have lower quality votes, because ey're likely to read less of LW.

Comment author: 30 September 2013 07:26:00PM 1 point [-]

The assumption is that we will capture the variable of "how well do they know lesswrong" by measuring how much they are upvoted. I think the most important part is that votes by people with high karma give more karma. The best kind of upvote is one from someone who is very very popular on lesswrong because they say lots of important stuff, but almost never thinks anything is worth upvoting.

Comment author: 30 September 2013 10:42:31PM 1 point [-]

Ah. If that's the goal, I suggest increasing the impact of votes the more upvoted someone is, and increasing the upness of votes the more often she downvotes relative to upvoting. If I'm popular and upvote a whole lot of things, that seems like a possible reason to weight my downvotes more strongly. But if I'm popular and don't vote for much of anything at all, it's not as clear to me why that's a reason to take my vote more seriously than if I were equally popular but participated in the voting system more. The latter just seems to discourage popular people from voting very much.

If we want to encourage our popular people to vote more, we should increase the power of their votes the more votes they make, rather than decreasing it.

Comment author: 01 October 2013 11:07:48AM 0 points [-]

I saw this post from EY a while ago and felt kind of repulsed by it:

I no longer feel much of a need to engage with the hypothesis that rational agents mutually defect in the oneshot or iterated PD. Perhaps you meant to analyze causal-decision-theory agents?

Never mind the factual shortcomings, I'm mostly interested in the rejection of CDT as rational. I've been away from LW for a while and wasn't keeping up on the currently popular beliefs on this site, and I'm considering learning a bit more about TDT (or UDT or whatever the current iteration is called). I have a feeling this might be a huge waste of time though, so before I dive into the subject I would like to confirm that TDT has objectively been proven to be clearly superior to CDT, by which I (intuitively) mean:

• There exist no problems shown to be possible in real life for which CDT yields superior results.
• There exists at least one problem shown to be possible in real life for which TDT yields superior results.

"Shown to be possible in real life" excludes Omega, many-worlds, or anything of similar dubiousness. So has this been proven? Also, is there any kind of reaction from the scientific community in regards to TDT/UDT?

Comment author: 01 October 2013 01:15:14PM *  10 points [-]

The question "which decision theory is superior?" has this flavor of "can my dad beat up your dad?"

CDT is what you use when you want to make decisions from observational data or RCTs (in medicine, and so on).

TDT is what you use when "for some reason" your decisions are linked to what counterfactual versions/copies of yourself decided. Standard CDT doesn't deal with this problem, because it lacks the language/notation to talk about these issues. I argue this is similar to how EDT doesn't handle confounding properly because it lacks the language to describe what confounding even means. (Although I know a few people who prefer a decision algorithm that is in all respects isomorphic to CDT, but which they prefer to call EDT for I guess reasons having to do with the formal epistemology they adopted. To me, this is a powerful argument for not adopting a formal epistemology too quickly :) )

I think it's more fruitful to think about the zoo of decision theories out there in terms of what they handle and what they break on, rather than in terms of anointing some of them with the label "rational" and others with the label "irrational." These labels carry no information. There is probably no total ordering from "best to worst" (for example people claim EDT correctly one boxes on Newcomb, whereas CDT does not. This does not prevent EDT from being generally terrible on the kinds of problems CDT handles with ease due to a worked out theory of causal inference).
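
The Newcomb contrast mentioned in passing here can be illustrated with a toy payoff calculation. To be clear, this is only a sketch of the problem's payoff structure under an imperfect predictor, not an implementation of any decision theory; the accuracy and dollar amounts are the standard illustrative values:

```python
import random

# Toy Newcomb's problem: the predictor guesses the agent's choice with
# the given accuracy. Box B holds $1M iff one-boxing was predicted;
# two-boxers additionally take the transparent $1k box.

def expected_payoff(one_box, accuracy=0.99, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        predicted_one_box = one_box if rng.random() < accuracy else not one_box
        box_b = 1_000_000 if predicted_one_box else 0
        total += box_b if one_box else box_b + 1_000
    return total / trials
```

With a 99%-accurate predictor the one-boxer averages close to $990,000 and the two-boxer close to $11,000, which is the payoff asymmetry the "which theory one-boxes" debate is about.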

Comment author: 01 October 2013 07:43:17PM 1 point [-]

I don't like the notion of using different decision theories depending on the situation, because the very idea of a decision theory is that it is consistent and comprehensive. Now if TDT were formulated as a plugin that seamlessly integrated into CDT in such a way that the resulting decision theory could be applied to any and all problems and would always yield optimal results, then that would be reason for me to learn about TDT. However, from what I gathered this doesn't seem to be the case?

Comment author: 01 October 2013 09:33:59PM 3 points [-]

TDT performs exactly as well as CDT on the class of problems CDT can deal with, because for those problems it essentially is CDT. So in practice you just use normal CDT algorithms except for when counterfactual copies of yourself are involved. Which is what TDT does.

Comment author: 01 October 2013 12:58:17PM *  2 points [-]

This is essentially what the TDT paper argues. It's been a while since I've read it, but at the time I remember being sufficiently convinced that it was strictly superior to both CDT and EDT in the class of problems that those theories work with, including problems that reflect real life.

Comment author: 06 October 2013 03:11:07PM 1 point [-]

What's the relationship between epistemology and ontology? Are both worthy of attention, or do you get one for free when you deal with the other?

Comment author: 07 October 2013 02:24:12PM 1 point [-]

An exceedingly complicated and controversial question! Some have argued that you only need epistemology, or even that epistemology is all you can get; you can only know what you can know, so you might as well confine your attention to the knowable, and not worry whether there might be things that are which are not knowable. Others claim that it's obvious that whether things exist or not surely doesn't depend on whether they're known, and it's even less likely that it could depend on such a suspicious, hypothetical property as knowability. Of course, the latter view doesn't entail that one should favor ontology over epistemology, but trying to balance both introduces very difficult problems of how to tie the two together, so it is fairly common to take one as primary and use it to settle questions about the other.

One might wonder what practical consequences one choice or the other might have, and here again there is much controversy. The pro-ontology faction claims that emphasizing epistemology encourages subjectivism and relativism and weakens our grasp on reality. The pro-epistemology faction replies that emphasizing ontology is exactly as relative (or non-relative) as emphasizing epistemology, it's just that when ontology is emphasized, biases are hidden because the focus is turned away from questions of how actual humans arrive at their ontological conclusions.

Personally, I am tentatively on the side of the epistemologists, but it seems to me that details matter a great deal, and there are far too many details to discuss in a comment (indeed, a book is likely insufficient).

Comment author: 07 October 2013 02:29:31PM 0 points [-]

Personally, I am tentatively on the side of the epistemologists, but it seems to me that details matter a great deal, and there are far too many details to discuss in a comment (indeed, a book is likely insufficient).

Even when insufficient, is there a book or other source that you could recommend?

Comment author: 07 October 2013 03:04:41PM *  0 points [-]

Hmmm. Bas van Fraassen's The Scientific Image takes the side of the epistemologists on scientific questions. I take Kant to be an advocate for the epistemologists in his Critique of Pure Reason, though he makes some effort to be a compromiser. Rae Langton argues that the compromises in Kant are genuinely important, and so advocates a role for both epistemology and ontology, in her Kantian Humility. Heidegger seemed to want to make ontology primary, but I can't really recommend anything he wrote. It's difficult to know exactly what to recommend, because this issue is thoroughly entangled with a host of other issues, and any discussion of it is heavily colored (and perhaps heavily distorted) by whichever other issues are also on the table. Still, those are a few possibilities which come to mind.

Comment author: 07 October 2013 04:17:07PM 0 points [-]

When focusing on a issue such as friendliness of an FAI do you think that's in the domain of epistemology or ontology?

Comment author: 09 October 2013 04:04:20PM 0 points [-]

I feel like it's more epistemological, but then I tend to think everything is. Perhaps it is another symptom of my biases, but I think it more likely that trying to build an AI will help clarify questions about ontology vs. epistemology than that anything in our present knowledge of ontology vs. epistemology will help in devising strategies for building an AI.

Comment author: 09 October 2013 04:07:02PM 0 points [-]

Cyc calls itself an ontology. Doesn't any AI need such an ontology to reason about the world?

Comment author: 09 October 2013 04:44:50PM 0 points [-]

Well, this would be an example of one of the projects that I think may teach us something. But if you are speaking of "an ontology," rather than just "ontology," you may be talking about some theory of relativized ontologies, but more likely you're not speaking about ontology in the same way as those who prioritize it over epistemology. Those who make epistemology primary still talk about things, they just disagree with the ontologists about complicated aspects of our relationship to the things and what our talk about the things means.

Comment author: 09 October 2013 05:07:23PM 0 points [-]

you may be talking about some theory of relativized ontologies, but more likely you're not speaking about ontology in the same way as those who prioritize it over epistemology.

I'm not sure. Barry Smith, who leads Basic Formal Ontology, which gets used for medical informatics, writes in his "Against Fantology" paper sentences like:

It underlies the logical atomism of Bertrand Russell, including the central thesis according to which all form is logical form – a thesis which, be it noted, leaves no room for a discipline of formal ontology as something separate from formal logic.

Bayesianism as described by Yvain seems a bit like what Barry Smith describes as spreadsheet ontology, with probability values instead of logical true/false values.

Even if ontological questions can't be settled in a way that decides which ontology is more correct than another, it seems to me that you have to settle on one ontology to use for your AGI. Different choices of how you structure that ontology will have a substantial effect on the way the AGI reasons.

Comment author: 06 October 2013 03:35:12AM 1 point [-]

I'm requesting recommendations for guides to meditation.

I've had great success in the past with 'sleeping on it' to solve technical problems. This year I've been trying power-napping during lunch to solve the morning's problems in the afternoon; I'm not sure the success of power-naps is any better than the control group. The next step is to see if I can step away from the old hamfisted methods and get results from meditation.

Comment author: 06 October 2013 12:35:01PM 1 point [-]

A local teacher. I think that it's much better taught in person than via a text guide.

Comment author: 01 October 2013 04:50:32PM 1 point [-]

Does anyone have a good resource on learning how to format graphs and diagrams?

What are the effects on the reader of having 90%, 100%, or 110% spacing between letters? When should one center text? What about bold and italics?

Is there a good research-based resource that explains the effects that those choices have on the reader?

Comment author: 02 October 2013 05:06:31AM 5 points [-]

Don't have a formal source, but I can give you a quick rundown of the advice my group ends up giving to every student we work with:

• Label the dang axes.
• Make the axis labels bigger.
• Make histogram lines thicker; make dots larger.
• If the dots are very dense, don't use dots, use a color scale.
• For the sake of the absent gods, don't make your colour scale brown-yellow-lightgray-black-darkbrown-darkgray-darkyellow, as one often-used plotting package did by default. (It was an inheritance from the early nineties, and honestly it was still weird.) Make it something that humans naturally read as a scale, eg blue to red by way of violet, dark green to light green, or blue to red by way of the rainbow.
• On a white background, do not use yellow or bright green unless the individual dots or areas are large. Lines, generally speaking, are not large.
• Put a legend in one corner, explaining what the line styles mean.
• If you're using (eg) triangles for one data type and circles for another, make the points bigger. Yes, it likely looks perfectly clear on your screen, to your young eyes, at a distance of a foot. You will eventually present it on a crappy twenty-year-old projector to men of sixty and seventy sitting at the back of a large auditorium. EMBIGGEN THE DANG POINTS. Also, use colours to further clarify the difference, unless colour is indicating a different dimension of information.
• Make bin sizes a round number - 1, 2, or 5 - in a unit of interest.
• If plotting numbers of something, indicate the bin size by labeling the y axis (for example) "Events / 2 MeV".
• As a general rule, make both a linear and a semilog plot. You can skip the linear if there are no features of interest at high densities, and the semilog if there are no features of interest at low densities.

Comment author: 03 October 2013 04:26:47AM 2 points [-]

blue to red by way of the rainbow.

Here are a few reasons not to do that. (Not to mention the possibility of colour-blind viewers.)

Comment author: 05 October 2013 07:51:34PM *  0 points [-]

Thanks for the link. I recommend reading it to anyone who's interested in how data gets (mis)represented.

Comment author: 02 October 2013 12:31:53PM 1 point [-]

Make the axis labels bigger. Make histogram lines thicker; make dots larger.

How do I know that they are big enough?

Comment author: 02 October 2013 03:47:20PM 2 points [-]

When the seventy-year-old at the back of the large auditorium with the cheap, ancient projector can read them. Alternatively, when your boss stops complaining. Lines are too thick if they overlap; dots are too big when you can't easily tell the difference between high and medium density. (And if this happens at the default dot size, switch to a colour scale.)

If you're doing PowerPoint or similar presentation tools, you want your axis labels to be the same size as your bullet-point text. One trick I sometimes use is to whiteout the axis labels in the image file of my plot, and put them back in using the same text tool that's creating my bullets.

Comment author: 01 October 2013 05:25:04PM *  4 points [-]

Look up Edward Tufte, and in particular his seminal book The Visual Display of Quantitative Information.

Comment author: 03 October 2013 12:51:01AM 2 points [-]

Those sorts of questions are asked in a field called Information Visualization, which is a part of Human Factors Engineering.

Comment author: 30 September 2013 07:40:12AM *  1 point [-]

So the other week I read about viewquakes. I also read about things a CS major could do that aren't covered by the usual curriculum. And then this article about the relationship escalator. Those gave me not quite a viewquake but clarified a few things I already had in mind and showed me some I had not.

What I am wondering is now, can anyone here give me a non-technical viewquake? What non-technical resources can give me the strongest viewquake akin to the CS major answer? With non-technical I mean material that doesn't fall into the usual STEM spectrum people around here should be well versed in.

Not sure this is clear enough.

Comment author: 30 September 2013 10:06:30AM *  4 points [-]

Many non-technical viewquakes are deep in the mindkilling territory. I guess I better refrain from giving specific examples, but it may seem from outside like this:

A: I read this insightful book / article / website and it completely changed the way I see the world.

B: Dude, you are completely brainwashed and delusional.

The lesson is that "dramatically changing one's world view" is not necessarily the same as "corresponding better with the territory". And it can be sometimes difficult to evaluate the latter. Just because many people firmly believe theory X is true, does not make it true. Just because many people firmly believe theory X is false, does not make it false. For many theories you will find both kinds of people.

Comment author: 30 September 2013 09:43:15AM 3 points [-]

I had a viewquake a few years ago when I stayed silent with a group of friends I normally would have interacted with. Their subconscious prodding of me to fulfill my usual social role revealed to me that I even had a specific role in the group in the first place, and subsequently opened me up to a lot of things that I had disregarded before.

Comment author: 06 October 2013 12:34:04PM 0 points [-]

Wikipedia:

In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]

How do you know, when you work on a project like Watson, whether the work you are doing is dangerous and could result in producing a UFAI? Didn't they essentially build an oracle AGI?

What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?

Comment author: 06 October 2013 06:30:29PM 3 points [-]

What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?

Why would they talk to MIRI about it at all?

They're the ones with the actual AI expertise, having built the damn thing in the first place, and have the most to lose from any collaboration (the source code of a commercial or military grade AI is a very valuable secret). Furthermore, it's far from clear that there is any consensus in the AI community about the likelihood of a technological singularity (especially the subset which FOOMs belong to) and associated risks. From their perspective, there's no reason to pay MIRI any attention at all, much less bring them in as consultants.

If you think that MIRI ought to be involved in those decisions, maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.

Comment author: 07 October 2013 01:48:42PM 0 points [-]

If you think that MIRI ought to be involved in those decisions

As far as I understand that's MIRI's position that they ought to be involved when dangerous things might happen.

maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.

But what goes for someone who does accept the site dogma's in principle but still does some work in AI.

Comment author: 07 October 2013 02:39:16PM 1 point [-]

But what goes for someone who does accept the site dogma's in principle but still does some work in AI.

I'm sorry, I didn't get much sleep last night, but I can't parse this sentence at all. Could you rephrase it for me?

Comment author: 06 October 2013 07:47:52PM 2 points [-]

Didn't they essentially build an oracle AGI?

No, they very much didn't.

Comment author: 06 October 2013 04:35:12PM 0 points [-]

Well, step one is ever having heard of MIRI, or thought about UFAI in any context except that of HAL or Skynet.

Comment author: 06 October 2013 05:33:33PM 0 points [-]

I doubt that's enough. If someone still wants to do AI research after having heard of UFAI, he needs some decision criteria to decide when it's time to contact MIRI.

Comment author: 06 October 2013 06:23:11PM 0 points [-]

The decision criteria are easy: talk/listen to the recognized AI research experts with a proven track record. Then weigh their arguments, as well as those of MIRI. It's the weight assignment that's not obvious.

Comment author: 06 October 2013 06:28:55PM *  0 points [-]

If you have a potentially dangerous idea then talking to recognized AI research experts might itself be dangerous.

Comment author: 06 October 2013 07:05:21PM 0 points [-]

No, not really. If the situation is anything like that in math, physics, chemistry or computer science, unless you put in your 10k hours into it, your odds of coming up with a new idea are remote.

Comment author: 07 October 2013 02:06:10PM 0 points [-]

I don't believe that to be true, as ideas can sometimes come from integrating knowledge of different fields.

An anthropologist who learned a new paradigm about human reasoning from studying the way some African tribe reasons about the world can reasonably bring a new idea into computer science. He will need some knowledge of computer science, but no 10k hours.

In http://meaningness.com/metablog/how-to-think David Chapman describes how he tackled AI problems by using various mental tools.

One problem turned out not to be that difficult if you had knowledge of a certain field of logic. He solved another problem through anthropology. According to him, advances are often a function of having access to a particular mental tool to which no one else who tackled the problem had access.

Putting in a lot of time means that you have access to a lot of tools and know of many problems. But if you put all your time into learning the same tools that people in the field already use, you probably don't have many mental tools that few people in a given field possess.

Paradigm changing inventions often come into fields through people who are insider/outsiders. They are enough of an insider to understand the problem but they bring expertise from another field. See "The Economy of Cities" by Jane Jacobs for more on that point.

Comment author: 07 October 2013 08:33:51PM 0 points [-]

I concede that a math expert can start usefully contributing to a math-heavy area fairly quickly. Having expertise in an unrelated area can also be useful as a supplement, not as a substitute. I do not recall a single amateur having contributed to math or physics in the last century or so.

Comment author: 07 October 2013 09:23:42PM 0 points [-]

Do you consider the invention of the Chomsky hierarchy to lie outside the field of math? Do you think that Chomsky had 10k hours of math expertise when he wrote it down?

Regardless having less than 10k hours in a field and being an amateur are two different things.

I don't hold economists in very high regard but I would expect that one of them did contribute at least a little bit in physics.

I remember chatting with a friend who studies math and computer science. My background is bioinformatics. If my memory is right, he was working on a project that an applied mathematics group gave him because he knew something about mathematical technique XY. He needed to find some constants that were useful for another algorithm. He had a way to evaluate the utility of a certain value as a constant. His problem was that he had a 10-dimensional search space and didn't really know how to search it effectively.

In my bioinformatics classes I learned algorithms that you can use for a task like that. I'm no math expert but in that particular problem I still could provide useful input.

I would expect that there are quite a few areas where statistical tools developed within bioinformatics can be useful for people outside of it.

But to come back to the topic of AI: a math expert working in some obscure subfield of math could plausibly do something that advances AI a lot without being an AI expert himself.

Comment author: 07 October 2013 09:36:11PM 0 points [-]

Do you consider the invention of the Chomsky hierarchy to lie outside the field of math?

Don't know. Maybe a resident mathematician would chime in.

I don't hold economists in very high regard but I would expect that one of them did contribute at least a little bit in physics.

I am not aware of any. Possibly something minor, who knows.

But to come back to the topic of AI: a math expert working in some obscure subfield of math could plausibly do something that advances AI a lot without being an AI expert himself.

Yes, indeed, that sounds quite plausible. Whether this something is important enough to be potentially dangerous is a question to be put to an expert in the area.