You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Open thread, Nov. 23 - Nov. 29, 2015

5 Post author: MrMind 23 November 2015 07:59AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (257)

Comment author: Bound_up 23 November 2015 11:45:50AM 1 point [-]

I've heard the Beatles have some recorded songs they never released because they were of too low quality. I think it would be worthwhile to study their material in its full breadth, mediocrity included, to get a sense for the true nature of the minds behind some greatness.

I've saved writings and poetry and raw, potentially embarrassing past creations for the sake of a similar understanding. I wish I had recordings of my initial fumblings with the instruments I now play rather better.

So it is in this general context of seeking fuller understanding that I ask if anyone knows where to find these legendary old writings from Eliezer Yudkowsky, reputed to be embarrassing in their hubris, etc.

Comment author: CellBioGuy 24 November 2015 05:40:35PM *  2 points [-]

These are so much fun to read!

(snapshot times chosen more or less at random, and specific pages are what I consider the highlights)

https://web.archive.org/web/20010204095400/http://sysopmind.com/beyond.html
(contains links to everything below and much more)

https://web.archive.org/web/20010213215810/http://sysopmind.com/sing/plan.html (his original founding plans for the singularity institute, extremely amusing)

https://web.archive.org/web/20010606183250/http://sysopmind.com/singularity.html

http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer (some... exceptional quotes in here and you can follow links)

https://web.archive.org/web/20010309014808/http://sysopmind.com/eliezer.html

https://web.archive.org/web/20010202171200/http://sysopmind.com/algernon.html

More can be found poking around on web archive and youtube and vimeo. Even more via PM.

Comment author: Viliam 23 November 2015 12:06:05PM *  7 points [-]

The "legendary old writings from Eliezer Yudkowsky" are probably easy to find, but I am not going to help you.

I do not like the idea of people (generally, not just EY) being judged for what they wrote many years ago. (The "sense for the true nature" sounds like a judgement is being prepared.)

Okay, I would make an exception in some situations; the rule of thumb being "more extreme things take longer to forget". For example, if someone advocated genocide, or organized the murder of a specific person, then I would be suspicious of them even ten years later. But "embarrassing in their hubris"? Come on.

Comment author: IlyaShpitser 23 November 2015 09:39:12PM 5 points [-]

I don't think EY's ego got any smaller with time.

Comment author: Viliam 24 November 2015 08:47:03AM 1 point [-]

In the meantime he wrote the Sequences and HPMoR, and founded MIRI and CFAR. So maybe the distance between his ego and his real output got smaller.

Also, as Eliezer mentions in the Sequences, he used to have an "affective death spiral" about "intelligence", which is probably visible in his old writings, and contributes to the reader's perception of "big ego".

I don't really mind big egos as long as they drive people to produce something useful. (Yeah, we could have a separate debate about how much MIRI or HPMoR are really useful. But the old writings would be irrelevant for that debate.)

Comment author: IlyaShpitser 24 November 2015 03:17:39PM *  4 points [-]

Here is what you sound like:

"But look at all this awesome fan fiction, and furthermore this 'big ego' is all your perception anyways, and furthermore I don't even mind it."

Why so defensive about EY's very common character flaws (which don't really require any exotic explanation, btw, e.g. think horses not zebras)? They don't reflect poorly on you.


EY's past stuff is evidence.

Comment author: Viliam 25 November 2015 09:10:36AM *  14 points [-]

I'm defensive about digging into people's past, only to laugh that as teenagers they had the usual teenage hubris (and maybe, as highly intelligent people, they kept it for a few more years)... and then using it to hint that even today, 'deep inside', they are 'essentially the same', i.e. not worth taking seriously.

What exactly are we punishing here; what exactly are we rewarding?

Ten or more years ago I also had a few weird ideas. My advantage is that I didn't publish them in visible places in English, and that I didn't become famous enough for people to now spend their time digging into my past. Also, I kept most of my ideas to myself, because I didn't try to organize people into anything. I didn't keep a regular diary, and when I find some old notes, I usually just cringe and quickly destroy them.

(So no, I don't care about any of Eliezer's flaws reflecting on me, or anything like that. Instead I imagine myself in a parallel universe where I was more agenty and perhaps less introverted, so I started spreading my ideas sooner and wider, had the courage to try changing the world, and now people are digging up similar writings of mine. Generally, this is a mechanism for ruining sincere people's reputations: find something they wrote when they were just as sincere as now, only less smart, and make people focus on that instead of what they are saying today.)

I guess I am oversensitive about this, because "pointing out that I failed at something a few years ago, therefore I shouldn't be trusted to do it, ever" was something my mother often did to me while I was a teenager. People grow up, damn it! It's not like once a baby, always a baby.

Everyone was a baby once. The difference is that for some people you have the records, and for other people you don't; so you can imagine that the former are still 'deep inside' baby-like and the latter are not. But that's confusing the map with the territory. As the saying goes, "an expert is a person who came from another city" (so you have never seen their younger self). As the fictional evidence proves, you could have literally godlike powers, and people would still diss you if they knew you as a kid. But on today's internet, everything is one big city, and anything you say can get documented forever. (Knowing this, I will forbid my children to use their real names online. Which probably will not help enough, because twenty years later there will be other methods for easily digging into people's past.)

Ah, whatever. It's already linked here anyway. So if it makes you feel better about yourself (returning the courtesy of online psychoanalysis) to read stupid stuff Eliezer wrote in the past, go ahead!

EDIT: I also see this as a part of a larger trend of intelligent people focusing too much on attacking each other instead of doing something meaningful. I understand the game-theoretical reasons for that (often it is easier to get status by attacking other people's work than presenting your own), but I don't want to support that trend.

Comment author: IlyaShpitser 25 November 2015 06:14:35PM *  2 points [-]

EY is not a baby, and was not a baby in the time period under discussion. He is in his mid-thirties today.


I have zero interest in gaining status in the LW/rationalist community. I already won the status tournament I care about. I have no interest in "crabbing" for that reason. I have no interest in being a "guru" to anyone. I am not EY's competitor, I am involved in a different game.

Whether me being free of the confounding influence of status in this context makes me a more reliable narrator I will let you decide.


What I am very interested in is decomposing cult behavior into constituent pieces to try to understand why it happens. This is what makes LW/rationalists so fascinating to me -- not quite a cult in the standard Scientology sense, but there is definitely something there.

Comment author: OrphanWilde 25 November 2015 06:47:47PM 2 points [-]

Downvote explanation:

Using claim of immunity to status and authority games as evidence to assert a claim.

Which is to say, you are using a claim of immunity to status and authority games to assert status and authority.

Yes, that's right out of my own playbook, too. I welcome anybody who catches me at it to downvote me, and please let me know I've done it, as it is an insidious logical mistake I find it impossible to catch myself at.

Comment author: IlyaShpitser 25 November 2015 07:32:43PM *  0 points [-]

I am not claiming status and authority (I don't want it), I am saying EY has a big ego. I don't think I need status and authority for that, right?

Say I did gain status and authority on LW. What would I do with it? I don't go to meetups, I hardly interact with the rationalist community in real life. What is this supposed status going to buy me, in practice? I am not trying to get laid. I am not looking to lead anybody, or live in a 'rationalist house,' or write long posts read by the community. Forget status, I don't even claim to be a community member, really.

I care about status in the context relevant to me (my academic community, for example, or my workplace).


Or, to put it simply, you guys are not my tribe. I just don't care enough about status here.

Comment author: OrphanWilde 25 November 2015 08:17:29PM 1 point [-]

You're claiming to have status and authority to make a particular claim about reality - "Outsider" status, a status which gains you, with respect to adjudication of insider status and authority games... status and authority.

Now, your argument could stand or fall on its own merits, but you've chosen not to permit this, and instead have argued that you should be taken seriously on the merits of your personal relationship to the group (read: taken to have status and authority relative to the group, at least with respect to this claim).

Comment author: Lumifer 25 November 2015 07:42:27PM *  0 points [-]

What would I do with it?

Bask in the glory? :-)

You might be an exception, but empirically speaking, people tend to value their status in online communities, including communities whose members they will never meet in meatspace and which have no effect on their work/personal/etc. life.

Biologically hardwired instincts are hard to transcend :-/

Comment author: philh 26 November 2015 02:43:10PM *  1 point [-]

I don't understand your objection.

Using claim of immunity to status and authority games as evidence to assert a claim.

Which is to say, you are using a claim of immunity to status and authority games to assert status and authority.

Asserting a claim is not the same thing as asserting status and authority.

I'm not sure what you want from Ilya here. He seems to be describing his motivations in good faith. Do you think he's lying to gain status? Do you think he's telling the truth, but gaining status as a side effect, and he shouldn't do that?

Quick edit: Oh, I should probably have read the rest of the thread. I think I understand your objection now, but I disagree with it.

Comment author: Viliam 25 November 2015 08:53:25PM *  2 points [-]

Mid thirties in 2015 means about twenty in 2001 (the date of most of the linked archives), right? That's halfway to baby from where I am now. Some of my cringeworthy diaries were written in my mid twenties.

Comment author: Lumifer 25 November 2015 07:02:46PM *  2 points [-]

This is what makes LW/rationalists so fascinating to me

Welcome to the zoo! Please do not poke the animals with sticks or throw things at them to attract their attention. Do not push fingers or other objects through the fences. We would also ask you not to feed the animals, as it might lead to digestive problems.

Comment author: OrphanWilde 25 November 2015 07:15:26PM 4 points [-]

It's an interesting zoo, where all the exhibits think they're the ones visiting and observing...

Comment author: Lumifer 25 November 2015 07:21:30PM 1 point [-]
Comment author: CellBioGuy 24 November 2015 05:40:21PM *  4 points [-]

He literally wrote plans for what he would do with the billions of dollars the Singularity Institute would be bringing in by 2005, using the phrase 'silicon crusade' to describe its actions to bring about the singularity and an interstellar supercivilization by 2010, so as to avoid the apocalyptic nanowar that would otherwise have started by then without their guidance. He also went on and on about his SAT scores in middle school (which are lower than those of one of my friends, taken via the same program at the same age) and how they proved he is a mutant supergenius, the only possible person who can save the world.

I am distinctly unimpressed.

Comment author: polymathwannabe 25 November 2015 10:27:40PM *  4 points [-]

Is it at all meaningful to you that EY writes this on his homepage?

You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.

It is true that EY has a big ego, but he also has the ability to renounce past opinions and admit his mistakes.

Comment author: IlyaShpitser 26 November 2015 05:54:22PM 0 points [-]

Absolutely, it is meaningful.

Comment author: CellBioGuy 25 November 2015 02:33:21AM *  2 points [-]

I can hardly wait to look back on his 'shameless blegging' post in a few years and compare it to reality. Pretty sure I know what the result will be.

Comment author: FrameBenignly 23 November 2015 10:14:25PM 4 points [-]

For many types of problems, analyzing how a system changed over time is a more effective method of understanding a problem than comparing one system's present state with another system's present state.

Comment author: NancyLebovitz 24 November 2015 12:02:07AM 1 point [-]

I don't think Eliezer's changes in hubris level are what's interesting -- he's had some influence, and no one seems to think his earliest work is his best. It might make sense to find out how his writing has changed over time.

Comment author: Clarity 28 November 2015 04:47:06AM *  -1 points [-]

The other day I met a woman named [common first name redacted out of respect for a commentator's recommendation] near the train station. I was just sitting and eating lunch, and she came over to chat. She had recently been in hospital with lithium toxicity. She attends the same (mental) health complex as me. She was lovely, lonely, and dates younger guys. She mentioned that her money is controlled by a State Trust to an extent, and that her last boyfriend continues to abuse her financially, and occasionally physically. She mentioned the police have recommended she break up with him, but she says that she loves him. We swapped numbers. Is there anything I can do for her?

Comment author: polymathwannabe 28 November 2015 06:57:31PM 3 points [-]

As a first protective measure, don't publish her name on the internet.

She has already contacted the police, and they have already given her the best advice available.

The rest is up to her.

Comment author: WhyAsk 24 November 2015 11:44:55PM *  -1 points [-]

Any US lawyers here?

A woman who once worked in a law office told me that clients come and go (she used the word "ephemeral"), so a lawyer's real allegiance is to other lawyers, because they will see them again and again.

And Game Theory has something to say about how to treat a person that you are not likely to see again.

Please, folks, do not ask me to justify this "hearsay". I found her credible, so please take this woman's word as gospel, as an axiom, and go from there.

Please confirm, deny, explain or comment on her statement.

TIA.

Comment author: polymathwannabe 25 November 2015 01:11:42AM *  2 points [-]

A "person that you are not likely to see again" is not a complete description of a lawyer's client; it's missing the part where "this person pays me for my services so I need many of this person in order to make a living."

Comment author: WhyAsk 27 November 2015 12:35:50AM 0 points [-]

Your post reminds me of something.

If there is a huge disparity of power between the lawyer and you, Game Theory kind of "goes out the window".

Right?

Comment author: polymathwannabe 27 November 2015 12:56:29AM *  1 point [-]

The fact that I have never hired a lawyer may be a factor in my difficulty imagining a scenario where your lawyer turns into your opponent in a power struggle; I see it more likely to happen between you and your opponent's lawyer.

High-profile lawyers with a lot of power don't tend to be hired by ordinary people with little power. In any case, it is in your lawyer's interests that your interests get served. Besides, what you could lose in the worst scenario is that one lawsuit (and possibly money and/or jail time); what your lawyer has to lose in the worst scenario is reputation, future clients, and the legal ability to practice law.

Comment author: Viliam 27 November 2015 07:00:49AM *  2 points [-]

Imagine the following situation: we are having a lawsuit against each other. Let's say it is already obvious for both of our lawyers which side is going to win, but it is not so obvious for us.

The lawyers have an option to do it quickly and relatively cheaply. But they also have an option to charge each of us for extra hours of work, if they tell us it is necessary. Neither option will change the outcome of the lawsuit. But it will change how much money the lawyers get from us.

In such a case, it would be rational for the lawyers to cooperate with each other, against our interests.
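The incentive structure here is an iterated prisoner's dilemma between the two lawyers: because they expect to meet again and again, cooperating with each other (against the clients) beats the one-shot best response. A toy sketch, with all payoff numbers invented:

```python
# Toy model of the scenario above: two lawyers who face each other
# repeatedly. Each round a lawyer can "pad" (bill extra hours, i.e.
# cooperate with the other lawyer against the clients) or "settle"
# (resolve the case quickly). All payoff numbers are invented.
PAD_BOTH = 3     # both pad: each collects extra fees
SETTLE_BOTH = 1  # both settle quickly: the normal fee only
BETRAY = 4       # you settle while the other pads: you win client goodwill
BETRAYED = 0     # you pad while the other settles: you look like the problem

def payoff(me, other):
    """Per-round payoff to `me`, given both actions ('pad' or 'settle')."""
    if me == "pad" and other == "pad":
        return PAD_BOTH
    if me == "settle" and other == "settle":
        return SETTLE_BOTH
    return BETRAY if me == "settle" else BETRAYED

def total(rounds, strategy_a, strategy_b):
    """Play `rounds` rounds; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Grim trigger: keep padding as long as the other lawyer has always padded.
grim = lambda opp: "pad" if all(x == "pad" for x in opp) else "settle"
# One-shot thinking: always take the short-term best response.
one_shot = lambda opp: "settle"

print(total(10, grim, grim))      # (30, 30): mutual padding is sustained
print(total(10, one_shot, grim))  # (13, 9): betrayal pays once, then collapses
```

With these hypothetical payoffs, mutual padding earns each lawyer 30 over ten cases, while one-shot defection earns only 13: the "person you will never see again" logic from upthread stops applying once the players expect repeated encounters.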

Comment author: WhyAsk 27 November 2015 10:08:45PM 1 point [-]

That's been my experience, and any questions about "How much more is this going to cost me?" are not received well.

Almost every lawyer I've hired or dealt with gave me almost nothing for my money. And good luck trying to get a bad lawyer disbarred.

What I should probably do is solicit bids for a particular legal problem.

Comment author: polymathwannabe 27 November 2015 03:10:03PM 1 point [-]

In this example the obvious culprit is the practice of charging by the hour, which I've always found a terrible idea.

Comment author: polymathwannabe 26 November 2015 08:33:12PM *  0 points [-]

In the news:

Nassim Taleb is an inverse stopped clock.

Comment author: username2 29 November 2015 09:42:29PM *  3 points [-]

When Nassim Taleb's predictions fail and someone points that out, he calls that person a fucking idiot.

Comment author: ChristianKl 26 November 2015 10:37:56PM 3 points [-]

The main complaint seems to be that Taleb violates an orthodoxy, not that he's factually wrong. On the issue of costs, the cited paper says:

This cost-saving potential has been supported by several studies that compared homeopathy with conventional medicine. However, our own health economic evaluations did not show a consistent picture. We observed no differences in costs [15] or additional costs [16,17] in the homeopathic group compared to conventional care depending on the setting or diagnosis. [...] A recent systematic review by Viksveen on the cost-effectiveness of homeopathy showed that in eight out of fourteen studies, the homeopathic treatment was less cost-intensive than the conventional treatment; in four studies, the treatment costs were similar; and in two studies, the homeopathic treatment was more costly than conventional treatment

There are observed cases where homeopathy did lead to cost savings as Taleb suggests.

Interestingly, the cited PLoS paper puts people who don't take homeopathy into the homeopathy group, based on the fact that they could get it for free:

For this analysis, patients belonged to the homeopathy group if they subscribed to the integrated care contract in 2011 and if they were continuously insured through the TK for the observational period (12 months before and 18 months after subscription to the integrated care contract), regardless of whether they used homeopathy during the study period.

Comment author: Clarity 25 November 2015 09:37:21AM -1 points [-]

'Noisy text analytics': has anyone tried applying those algorithms in their head to human conversations or text messaging (say, through Facebook) to filter information in real life? Was it more efficient than your default, non-volitional approach?

Comment author: Clarity 25 November 2015 08:07:11AM -2 points [-]

What, other than an interest in the commercial success of the car lot business, normative social influence and scrupulosity (all tenuous), stops someone from taking a second ticket (on foot) from a gated car park and then immediately paying that one off when leaving, rather than paying the original entry ticket?

Comment author: RichardKennaway 25 November 2015 11:06:38AM 2 points [-]

What, other than an interest in the commercial success of the car lot business, normative social influence and scrupulosity (all tenuous)

These are what holds society together. These are what society is -- including the bit about commercial success.

But have you tried? The entry barriers only issue a ticket when there's a car in front of them. That's how it works at the car parks I'm familiar with that use that system.

And, to continue the discussion of why your karma is so persistently low, this is something you might have thought of before posting. See also.

Comment author: ChristianKl 25 November 2015 10:15:15AM 1 point [-]

Why don't people steal from other people if nobody is looking? General ethics.

Comment author: Elo 25 November 2015 09:50:56AM 4 points [-]

The gates usually only give tickets to large metal objects (like cars), because they have sensors in the road underneath the ticket machine. There was a Mr. Bean sketch about this; he used a large metal rubbish bin to get a ticket.

Comment author: Clarity 23 November 2015 01:15:43PM -2 points [-]

How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?

Comment author: Stingray 23 November 2015 02:27:10PM 2 points [-]

What kind of threats?

Comment author: Clarity 23 November 2015 10:15:35AM -2 points [-]

What would happen if an altcoin was developed where users had to precommit to not forking that coin?

Comment author: Viliam 23 November 2015 11:59:06AM 4 points [-]

How exactly could users of something anonymous precommit to not do something?

Comment author: cleonid 23 November 2015 01:02:52PM 4 points [-]
Comment author: Clarity 24 November 2015 04:09:49AM 8 points [-]

Why is my karma so low? Is there something I'm consistently doing wrong that I can do less wrong? I'm sorry.

Comment author: SanguineEmpiricist 27 November 2015 08:03:44AM *  0 points [-]

Don't buy these comments too much. I'm glancing through them; they're much too critical. Listen to Nancy, if anyone.

Comment author: Lumifer 24 November 2015 03:37:42PM 13 points [-]

You use LW as a dumping ground for whatever crosses your mind at the moment, and that is usually random and transient noise.

Comment author: Clarity 24 November 2015 11:44:10PM 3 points [-]

Thanks. What counts as noise and what as signal to you, and what do you mean by transient?

Comment author: Lumifer 25 November 2015 03:17:02AM 11 points [-]

By "transient" I mean that you mention a topic once and then never show any interest in it again. By "noise" I mean random pieces of text which neither contain useful information nor are interesting.

Comment author: RichardKennaway 25 November 2015 11:05:03AM 4 points [-]

In addition to what everyone else has said, here's a useful article on how to ask smart questions. It's talking about asking technical questions on support forums, but the matter generalises, especially the advice to make your best effort to answer it yourself, before asking it publicly, and when you do, to provide the context and where you have got to already.

while it isn't necessary to already be technically competent to get attention from us, it is necessary to demonstrate the kind of attitude that leads to competence — alert, thoughtful, observant, willing to be an active partner in developing a solution.

Comment author: Clarity 28 November 2015 12:17:26AM 1 point [-]

Thanks, that article is incredible. I hope to see one about how to answer questions, and how to understand answers, too! After reading, some contemplation on the matter, and some chance happenings upon information I feel is relevant to the issue, I believe I've changed a lot:

Recently a highly admired friend of mine said something along the lines of 'I've never said anything that wasn't intentional'. Whereas for me, most of what I say is unintentional, just observed. This got me thinking pretty hard about these things. With this on my mind, I suppose I got the following sliver of personal development when I started looking up some podcasts to comfort myself the following day:

I'm vain. When I listen to things, personal development podcasts or not, I tend to look for what could be about me. I sampled the Danger and Play podcasts and like what I've heard. Inspired by the way he frames self-talk as interpersonal illocution, my mental landscape has changed steeply. One consequence of this has been that I'm no longer held captive to 'believing' the first thought or idea that comes into my head. Rather, it's as if it's just one mental subagent's proposition, to be contested and so on. I am now biased towards reserving my thoughts until a more complex stopping rule is met, like concluding that a certain verbalisation would lead to a certain outcome (e.g. the conclusion is positive emotionally, raises my anxiety to an optimal level, and/or is functional by way of interpersonal compliance), rather than something that just spews from my mind.

Perhaps a precursor to this has been a general dampening of how seriously I take my moral intuitions. I've contextualised them in terms of the fact that they are predated by evolutionary forces, context, and such. Approximately an expressivist position regarding moral language, championed by A. J. Ayer and the logical positivists, if I remember the Wikipedia page correctly... But even, say, an ingrained sense of helplessness then seems no longer to relate to entrenched circumstances, but liable to change depending on the path dependence of my memory - something influenced by the past, but continuously influenced by the ongoing present, even for older memories that are revisited, updated, reframed, etc.

Danger and Play is part of the 'red pill' 'manosphere' of content. The movement is frequently derided as misogynistic. I can't speak to that, since I reckon it would be heterogeneous in people's attitudes towards women, and labelling a broad category critically is misleading (like labelling all Islamists as terrorists, for analogy). Some of my sticking points in gender and sexual relations seem to relate to underdeveloped learned optimism and growth mindset. It seems like some 'red pill' and related 'seduction' movements include elements that are antithetical to developing these:

To illustrate, the prominent RSD company often frames things in ways that don't suggest negative things are situational and temporary, while making global judgements about negative things (e.g. 'life is hard...'). That's a recipe for learned helplessness. Which may very well be good for their business model, combined with all the motivation they spew out. In fact, this observation probably holds for a number of motivational video channels as a way to keep people coming back. There are certainly exceptions - I remember one which started off with that quote from Albert Einstein that closely approximates a pithy summation of a growth mindset and learned optimism, but the details escape me.

One thing that really compels and reminds me to think in this reflective way is simply that a lot of my intuitions are really quite mean to myself. When that podcast instructed me to stand back and think of myself as another person, it just seemed absurd to treat myself like that. I mean, if I find effective altruism compelling because it's nice to do, isn't being nice to myself the most proximate, and therefore likely one of the easier and more reliable, niceties? In turn, it looks like that will lead to:

competence — alert, thoughtful, observant, willing to be an active partner in developing a solution.

The kind of attitude that makes for smarter questions...

Comment author: Elo 24 November 2015 10:32:55PM 1 point [-]

As a hard rule, when posting in the open thread, the ratio of your posts to posts by others should always be below 1:3 (others might want to comment and suggest 1:4). You should post fewer than 1 in 4 of the posts in the open thread. Your posts often read like a stream of consciousness (I think you know this already), and you might be better off sitting on thoughts for a day or so and re-evaluating them for yourself before posting.

As a side note: presentation of an idea can help the reception. We are still human; and do care for delicate wording on some topics.

Comment author: Clarity 24 November 2015 11:49:40PM 1 point [-]

Thanks. I do tend to sit on my ideas, or I like to post and update those posts or reply with reflections upon revisitations of those thoughts so that I and others can see how my thinking changes over time.

My ratio is only that high when there is a new open thread. Since I post in blocks, formulating several posts and then posting them when I next get a chance, it may appear early on that my ratio is high. But by the end of the month, I am certainly nowhere near that ratio.

I am continuously trying to improve my presentation. Unfortunately, to date I have received minimal specific feedback on how to do so. Sometimes I feel the stream-of-consciousness approach better illustrates the way I'm thinking about a certain thing.

Comment author: gjm 25 November 2015 12:03:11AM 1 point [-]

It may well do, but illustrating the way you're thinking about something isn't necessarily a good goal here. Why should anyone else care how you happen to be thinking about something?

There may be special cases in which they do. If you are a world-class expert on something it could be very enlightening to see how you think about it. If you are just a world-class thinker generally, it might be fascinating to see how you think about anything. Otherwise, not so much.

Comment author: polymathwannabe 24 November 2015 04:29:21PM *  2 points [-]

Usually, your questions feel more suited for a general-purpose forum than the narrowly specialized set of interests commonly discussed here. (We do have "Stupid Questions" and "Instrumental Rationality" threads, but even those follow the same standards for comment quality as the rest of LW.)

Also, posting a dozen questions in succession may give users the impression that you're trying to monopolize the discussion. Even if that's not your intention, I would understand it if some users ended up thinking it is.

I would suggest looking for specialized forums on some of the topics that interest you, and using LW only for topics likely to be of interest to rationalists.

Comment author: Clarity 24 November 2015 11:45:31PM 2 points [-]

Thanks. Do you have a suggestion for another forum you recommend I move to?

Comment author: polymathwannabe 25 November 2015 01:19:39AM 3 points [-]

I don't know much about topic-specific forums, but seeing as you like to ask frequent questions, Reddit and Quora come to mind.

Comment author: entirelyuseless 24 November 2015 02:38:58PM 4 points [-]

A large proportion of your comments seem very distracting and sort of off-topic for Less Wrong.

Comment author: Clarity 24 November 2015 11:45:04PM 1 point [-]

Thanks. Can I have an example which is either self-evident as distracting and off-topic or explain why it is?

Comment author: NancyLebovitz 25 November 2015 01:45:06PM 1 point [-]

I looked at a few pages of your comment history to see if I could find a particularly horrible example to base an explanation on (entirelyuseless's link is appropriate), but I was surprised to find that the vast majority of your comments had no karma rather than downvotes.

I'm not sure what you need to do to upgrade or edit out your typical comment. Possibly you could review your upvoted comments to see how they're different from your usual comments.

Comment author: NancyLebovitz 24 November 2015 02:19:39PM 7 points [-]

Thank you for asking. I've been trying to figure out what to say to you, but couldn't figure out quite what the issue is. One possibility in terms of karma is to bundle a number of comments into a single comment, but this doesn't address how the comments could be better.

A possible angle to work on is being more specific. It might be like the difference between a new computer user and a more sophisticated one. The new user says "My computer doesn't work!", and there is no way to help that person from a distance until they say what sort of computer it is, what they were trying to do, and some detail about what happened.

Being specific doesn't come naturally to all people on all subjects, but it's a learnable skill, and highly valued here.

Comment author: ChristianKl 24 November 2015 09:58:29AM 9 points [-]

As I said before, I think it would be good if you get in the habit of trying to predict the votes that your posts get beforehand and then not post when you think that a post would produce negative karma.

One way to do this might be: whenever you write a post, keep it in a text file and wait a day. The next day, ask yourself whether there is anything you can do to improve it. If you feel you can improve it, do it. Then estimate a confidence interval for the karma you expect the post to get and take a note of it in a spreadsheet. If you think it will be positive, post your comment.

If you train that skill I would expect you to raise your karma and learn a generally valuable skill.

If at the end of writing a post you think "I’m not sure where I was going with this anymore." as in http://lesswrong.com/r/discussion/lw/mzx/some_thoughts_on_decentralised_prediction_markets/ , don't publish the post. If you yourself don't see the point in your writing it's unlikely that others will consider it valuable.

Comment author: Tem42 24 November 2015 10:14:10PM 2 points [-]

and learn a generally valuable skill.

I second this. This is also a very important skill for work and personal emails, and anything having to do with social sites like Facebook.

Comment author: moridinamael 24 November 2015 04:16:16PM 4 points [-]

As I said before, I think it would be good if you get in the habit of trying to predict the votes that your posts get beforehand and then not post when you think that a post would produce negative karma.

This is the best advice. The trick to keeping high karma is to cultivate your discernment. Each time you write a post, assess its value, and then delete it if you don't anticipate people appreciating it. View that deletion as a victory equal to the victory of posting a high-karma comment.

Comment author: Viliam 24 November 2015 09:11:30AM 16 points [-]

The first association I have with your username is "spams Open Threads with not really interesting questions".

Note that there are two parts in that objection. Posting a boring question in an Open Thread is not a problem per se -- I don't really want to discourage people from doing that. It's just that when I open any Open Thread, and there are at least five boring top-level comments by the same user, instead of simply ignoring them I feel annoyed.

Many of your comments are very general debate-openers, where you expect others to entertain you, but don't provide anything in return. Choosing your recent downvoted question as an example:

How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?

First, how do you estimate "threats and your ability to cope"? If you ask other people to provide their data, it would be polite to provide your own.

Second, what is your goal here? Are you just bored and want to start a debate that could entertain you? Or are you thinking about a specific problem you are trying to solve? Then maybe being more specific in the question could help you get a more relevant answer. But the thing is, your not being specific seems like evidence for the "I am just bored and want you to entertain me" variant.

Comment author: MrMind 24 November 2015 08:02:47AM 2 points [-]

Many of your comments get downvoted, sometimes heavily. In every open thread you post a lot of questions, some of them completely off topic.
A single good question in the open thread can give you 2-3 karma, but a single bad one can go down to -7 or less. So stop asking so many irrelevant questions and start contributing.

Comment author: helldalgo 24 November 2015 04:46:41AM *  9 points [-]

I think it's that you post a lot of questions and not a lot of content. Less Wrong is predisposed to upvoting high-content responses. I haven't had an account for very long, but I have lurked for ages. That's my impression, anyways. I recognize that since I haven't actually pulled comment karma data from the site and analyzed it, I could be totally off-base.

Maybe when you ask questions, use this form:

[This is a general response to the post] and [This is what is confusing me] but [I thought about it and I think I have the answer, is this correct?] or [I thought about it, came up with these conclusions, but rejected them for reasons listed here, I'm still confused]

EDIT: I just looked at your submitted history. You do post content in Main, apparently, but your posts seem to run counter to the popular ideas here. There is bias, and LessWrong has a lot of ideas deemed "settled." Effective Altruism appears to be one, and you have posted arguments against it. I've also seen some of your posts jump to conclusions without explaining your explicit reasons. LWers seem to appreciate having concepts reduced as much as possible to make reasoning more explicit.

Comment author: ChristianKl 24 November 2015 10:13:07AM 5 points [-]

There is bias, and LessWrong has a lot of ideas deemed "settled."

Any group has a lot of ideas that are settled. If you want to convince any scientifically minded group that Aristotle's theory of the four elements is true, you have to hit a high bar to avoid getting rejected. If anything, LW allows a wide array of contrarian points.

LW's second highest voted post is Holden's post against MIRI, which is contrarian to core ideas of this community in the same sense as a post criticizing EA is. The difference is that the post actually goes deep and makes a substantive argument.

Comment author: helldalgo 24 November 2015 10:35:51AM *  1 point [-]

I want to say that that's what I was trying to imply, but that might be backwards-rationalization. I do have the impression that contrarian ideas are accepted and lauded if and only if they're presented with the reasoning standards of the community. I'll be honest: LW does strike me as far-fetched in some respects BUT I recognize that I haven't done enough reading on those subjects to have an informed opinion. I've lurked but am not an ingrained member of the community and can't give a detailed analysis of the standards. Only my impression.

AND I realize that this sounds defensive, and I know there's no real reason for my ego to be wounded. I appreciate your input! I hope that my advice to Clarity wasn't too far off the mark. I tried to be clear about my advice being based on impressions more than data.

EDIT: removed "biased," replaced with "far-fetched."

Comment author: Nate646 28 November 2015 11:57:08AM 4 points [-]

The prediction market I was using, iPredict is closing. Apparently it represents a money laundering risk and the Government refused to grant an exemption. Does anyone know any good alternatives?

Comment author: Douglas_Knight 30 November 2015 05:10:24PM 1 point [-]

I asked about this recently. I think that the sports bookie Betfair is the best existing option, in terms of liquidity and diversity of topics. The only prediction markets that I know to be open to Americans are the Iowa Electronic Markets and PredictIt, both with smaller limits than iPredict.

Comment author: Elo 30 November 2015 04:46:24AM 0 points [-]

you should post this on the next OT

Comment author: Clarity 28 November 2015 01:43:48AM -2 points [-]

I have a student email account that forwards messages to my personal gmail account. Sometimes I have to send messages from my student gmail account. Can these get automatically moved to my personal gmail sent folder so that I can find them with one search?

Comment author: ike 29 November 2015 12:41:03AM 1 point [-]
Comment author: JoshuaZ 27 November 2015 05:59:59PM 3 points [-]

Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny, at least about 10^-24, then with very high probability humans are not the only civilization to have ever been in the observable universe, and a similar result holds for the Milky Way with around 10^-10 as the relevant probability. Article about paper is here and paper is here.

Comment author: Viliam 26 November 2015 07:21:06PM *  2 points [-]

Facebook question:

I have different types of 'friends' on Facebook, such as "Family", "Rationalists", "English-speaking", etc. Different materials I post are interesting for different groups. There is an option to select visibility of my posts, but that seems not exactly what I want.

What I'd like is to make my posts so that they are available to everyone, including people I don't know (e.g. if anyone clicks on my name, they will see everything I ever posted), but I don't want all my posts to appear automatically on all of my 'friends' home pages, if they follow me. In other words, I don't want to spam my 'friends'' pages with stuff they are unlikely to read, yet I want anyone to be able to read each of my posts if they wish so.

Is there an option "don't push this automatically to all people, but let them see it if they click on a permalink"?

Comment author: ChristianKl 26 November 2015 10:57:42PM 3 points [-]

I don't understand why Facebook messes up the language issue so badly. It seems like the Americans at Facebook headquarters just don't care about bilinguals.

Comment author: solipsist 29 November 2015 06:57:00AM 2 points [-]

Yeah, your explanation sounds absolutely correct. But before you think "silly monoglot Americans", remember that London is closer to Istanbul than New York is to Mexico. Countries where people don't mostly speak English are thousands of kilometers away from most Americans.

Comment author: username2 29 November 2015 09:23:05PM -1 points [-]

Just because they have an excuse that geography made them silly monoglots doesn't mean they aren't silly monoglots :p

Comment author: gjm 29 November 2015 11:14:35PM *  1 point [-]

I think solipsist's point isn't that they have an excuse but that they have a reason -- being monoglot hurts them less than it would if they were e.g. on the European continent, so monoglossy (or whatever the right word is) isn't necessarily silly for them.

[EDITED to add:] Disappointingly, OED suggests that the right word is just "monoglottism".

Comment author: polymathwannabe 29 November 2015 03:59:23PM 0 points [-]

Those are suspiciously convenient examples. A more relevant comparison would be: Los Angeles is closer to Tijuana than London is to Paris.

Comment author: tut 30 November 2015 03:24:19PM *  0 points [-]

Here is a map with London and Istanbul on it. In between them are many countries with at least six majority languages (and that's a low count, where some people would lynch me for saying that their language is the same as the one their neighbor speaks). Los Angeles and Tijuana, on the other hand, are two cities right by a border, and the only languages commonly spoken between them are English, the language of the USA, and Spanish, the language of Mexico.

Comment author: polymathwannabe 30 November 2015 03:42:17PM 0 points [-]

I understood solipsist's argument to mean that Americans can be excused for being ignorant of other languages because most of them live too far from other linguistic communities, and pointed at the mutual closeness of European countries for contrast, implying that it's likelier to find a Turkish-speaking Brit than a Spanish-speaking American.

What I tried to say was that there was no need to artificially inflate the comparison distance by choosing Istanbul. Londoners can find speakers of a completely different language by merely driving to Cardiff. But the U.S. is not a monolingual bloc of homogeneity either: ironically, solipsist chose New York for his example, a multilingual smorgasbord if ever there was one.

Comment author: solipsist 29 November 2015 06:03:53PM *  0 points [-]

Well, I don't know. Some of the US is near Mexico, but most of it isn't. In Europe, the farthest you can get from the border of a foreign-speaking country is perhaps southern Italy. The four US states that border Mexico are each bigger than Italy. Germany is a biggish country in Europe area-wise, but it's less than 3.7% the size of the US. The Mercator projection creates an optical illusion -- the US is huge.

Comment author: polymathwannabe 26 November 2015 08:35:33PM 1 point [-]

The way Facebook works, you decide what's available, but each of your friends has to individually decide how much they want to see of you.

Comment author: Viliam 27 November 2015 06:45:42AM *  0 points [-]

The problem is exactly the "how much they want to see of you" part, namely that there is only the one undifferentiated "you" instead of "your rationality posts", "your family photos", "your posts with kitten videos". I don't want to bother my family with rationality posts, and don't want to bother my LW friends with Slovak posts, but as long as I don't want to limit it all to 'friends of my friends' I don't have a choice.

Technically, the solution would be to create multiple accounts for multiple aspects of my life, and have different sets of 'friends' for each. But this is against Facebook's TOS, and is also technically inconvenient.

Actually, maybe I could use the "Pages" feature for this... That allows people to post under multiple identities, so each of them can have different followers. But officially, "Pages are for businesses, brands and organizations". Not sure if "Viliam's comments on politics in Slovakia" qualifies as any of that.

Comment author: polymathwannabe 27 November 2015 03:06:43PM 0 points [-]

What you seem to be already doing, which is to manually select what group will see each post, seems to be good enough for your purposes. Anyone who actively wants to see more of you can simply go to your profile and see everything.

Comment author: Panorama 26 November 2015 04:45:13PM 6 points [-]

Meta-research: Evaluation and Improvement of Research Methods and Practices by John P. A. Ioannidis , Daniele Fanelli, Debbie Drake Dunne, Steven N. Goodman.

As the scientific enterprise has grown in size and diversity, we need empirical evidence on the research process to test and apply interventions that make it more efficient and its results more reliable. Meta-research is an evolving scientific discipline that aims to evaluate and improve research practices. It includes thematic areas of methods, reporting, reproducibility, evaluation, and incentives (how to do, report, verify, correct, and reward science). Much work is already done in this growing field, but efforts to-date are fragmented. We provide a map of ongoing efforts and discuss plans for connecting the multiple meta-research efforts across science worldwide.

Comment author: Clarity 28 November 2015 01:44:33AM 0 points [-]

Hope this kind of work gets decent funding...

Comment author: Lumifer 25 November 2015 07:16:41PM 5 points [-]

Paper in Nature about differences in gene expression correlated with chronological age.

tl;dr -- "We identified 1,497 genes that are differentially expressed with chronological age."

Quickdraw conclusion: this will require A LOT of silver bullets.

Comment author: ChristianKl 25 November 2015 08:11:24PM 7 points [-]

I don't think we learn a lot from the number alone. It might be that multiple genes are regulated by the same mechanism, and turning that mechanism down brings us forward.

Comment author: CellBioGuy 28 November 2015 07:44:34AM *  2 points [-]

Indeed, not only is this looking at the very broad end results of what is seen to co-vary with age in a regular way, completely agnostic to mechanism, it is also looking only at gene expression in peripheral blood, one very highly specialized (to the point of being a liquid) tissue type.

Comment author: zslastman 26 November 2015 12:33:04PM 0 points [-]

Yeah, it doesn't say much. For one thing, I'd say just about all genes are differentially expressed with age, if you look hard enough. Regardless, that doesn't tell us how many of them really matter with respect to the things we care about, how many causal factors are at work, or how difficult it will be to fix. It doesn't rule out a single silver-bullet aging cure (though other things probably do).

Comment author: RicardoFonseca 25 November 2015 06:07:27PM 3 points [-]

Are there any studies that highlight which biases become stronger when someone "falls in love"? (Assume the love is reciprocated.) I am mainly interested in biases that affect short- and medium-term decisions, since the state of mind in question usually doesn't last long.

One example is the apparent overblown usage of the affect heuristic when judging the goodness of the new partner's perceived characteristics and actions (the halo effect on steroids).

Comment author: RicardoFonseca 27 November 2015 12:46:06PM *  3 points [-]

Here is a study finding that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control": free copy / springer link

Also, while I was searching for studies, I found a news article saying this about a study by Robin Dunbar:

"The research, led by Robin Dunbar, head of the Institute of Cognitive and Evolutionary Anthropology at Oxford University, showed that men and women were equally likely to lose their closest friends when they started a new relationship."

More specifically, the study found the average number of lost friends per new relationship was two.

Except there is no published paper anywhere online, despite what the news article says; there are only quotes by Dunbar at the 2010 British Science Festival. That seems a bit suspicious to me, maybe suggesting that the study was later retracted.

Comment author: [deleted] 27 November 2015 03:31:29PM *  4 points [-]

It's not necessarily that the study was retracted. The news article from the Guardian you linked mentioned that the study was submitted to the journal Personal Relationships; this means it had not yet been accepted for publication. And indeed it looks like that study never got published there despite all the media coverage.

Actually it has finally come out, 5 years later! Burton-Chellew, M.N and Dunbar, Robin I. M. (2015). Romance and reproduction are socially costly. Evolutionary Behavioral Sciences, 9(4), 229-241. http://dx.doi.org/10.1037/ebs0000046

From the abstract

We used an Internet sample of 540 respondents to test and show that the average size of support networks is reduced for individuals in a romantic relationship. We also found approximately 9% of our sample reported having an “extra” romantic partner they could call on for help, however these respondents did not have an even smaller network than those in just 1 relationship. The support network is also further reduced for those who have offspring, however these effects are contingent on age, primarily affecting those under the age of 36 years. Taking into account the acquisition of a new member to the network when entering a relationship, the cost of romance is the loss of nearly 2 members. On average, these social costs are spread equally among related and nonrelated members of the network.

Comment author: RicardoFonseca 28 November 2015 07:07:49PM 0 points [-]

Nice! Good to know the information is (more) reliable after all :)

Comment author: LessRightToo 25 November 2015 08:41:30PM 1 point [-]

A study that relies only on self-reported claims of 'being in love' might be interesting to read, but such a study would be of higher quality if there was an objective way to take a group of people and sort them into one of two groups: "in love" or "not in love." Based on my own experience and experiences reported by others, I wouldn't reject the notion that such a sorting is possible in principle, although it may be beyond our current technological capability. The pain associated with being suddenly separated from someone that you have 'fallen in love with' can rival physical pain in intensity. What type of instrumentation would we need to detect when a person is primed for such a response? I have no idea.

Comment author: ChristianKl 25 November 2015 11:48:05PM 1 point [-]

A study that relies only on self-reported claims of 'being in love' might be interesting to read, but such a study would be of higher quality if there was an objective way to take a group of people and sort them into one of two groups: "in love" or "not in love."

No, not automatically. An objective measurement can be either worse or better than a self-reported measurement. There's no reason to believe that one is inherently better.

Comment author: LessRightToo 28 November 2015 02:01:33PM 0 points [-]

New material added to this thread uses the phrase being in a relationship rather than being in love. I found the latter phrase problematic because it involves a poorly defined mental state that has changed meaning over time. The former phrase is objectively verifiable by external observers.

I have read a book or two on the Design of Experiments over the years purely for intellectual curiosity; I've never actually defined and run a scientific experiment. So I don't have anything worthwhile to say on the general topic of the relative value of objective vs. subjective measurements in scientific studies.

Comment author: RicardoFonseca 27 November 2015 12:35:02PM 0 points [-]

Why do you think "a person being primed for feeling pain when being separated from their new partner" matters here?

Are you thinking about studies that, at the very least, suggest the possibility of such a separation being an option that the subject will experience based on the outcome of some action/decision being studied? :( that's horrible ):

Comment author: LessRightToo 28 November 2015 02:10:04PM *  1 point [-]

An objectively verifiable indication that an animal has pair-bonded would be a visible indication of distress when forcibly separated from his/her mate. I'm not suggesting that this is the best way to determine whether an animal has pair-bonded. For example, an elevated level of some hormone in the blood stream (a "being in love" hormone) that reliably indicates being pair-bonded would be a superior objectively verifiable indication (in my opinion) because it doesn't involve causing distress in an animal.

I'm not a biologist - just an occasional recreational reader of popular works in biology. So, my opinion isn't worth much.

Comment author: RicardoFonseca 28 November 2015 07:06:02PM 1 point [-]

Right now, it seems that "passionate love" is measured in a discrete scale based on answers to a questionnaire. The "Passionate Love Scale" (PLS) is mentioned in this blog post and was introduced by this article in 1986.

In my other reply to my original comment I showed a study that finds that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control", in which they use the PLS.

Comment author: Silver_Swift 25 November 2015 04:12:47PM 2 points [-]

I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.

Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?

For people who haven't read it, I would recommend it only if you are either a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some Cosmere novels and would like a story that touches on some slightly more complex (and more LWish) themes than usual (and don't mind it being a bit darker than usual).

Comment author: NancyLebovitz 25 November 2015 01:54:16PM 1 point [-]

Introverts, Extroverts, and Cooperation

As usual, a small hypothetical social science study, but I'm willing to play with the conclusion, which is that extroverts are more likely to cheat unless they're likely to get caught. It wouldn't surprise the hell out of me if introverts are more likely to internalize social rules (or are people on the autism spectrum getting classified as introverts?).

Could "publicize your charity" be better advice for extroverts and/or majority extrovert subcultures than for introverts?

Comment author: Lumifer 25 November 2015 03:29:20PM 3 points [-]

extroverts are more likely to cheat unless they're likely to get caught

That's not what your link says. First, there is no cheating involved; we are talking about degrees of cooperation without any deceit. And second, it's not about "getting caught", it's about being exposed to the light of public opinion, which, of course, extroverts are more sensitive to.

Comment author: AstraSequi 25 November 2015 02:24:34AM *  2 points [-]

I just found out about the "hot hand fallacy fallacy" (Dan Kahan, Andrew Gelman, Miller & Sanjuro paper), a type of bias that more numerate people are likely more susceptible to, and for whom it's highly counterintuitive. It's described as a specific failure mode of the intuition used to get rid of the gambler's fallacy.

I understand the correct statement like this. Suppose we’re flipping a fair coin.

*If you're predicting future flips of the coin, the next flip is unaffected by the results of your previous flips, because the flips are independent. So far, so good.

*However, if you're predicting the next flip in a finite series of flips that has already occurred, it's actually more likely that you'll alternate between heads and tails.

The discussion is mostly about whether a streak of a given length will end or continue. This is for length of 1 and probability of 0.5. Another example is

...we can offer the following lottery at a $5 ticket price: a fair coin will be flipped 4 times. if the relative frequency of heads on flips that immediately follow a heads is greater than 0.5 then the ticket pays $10; if the relative frequency is less than 0.5 then the ticket pays $0; if the relative frequency is exactly equal to 0.5, or if no flip is immediately preceded by a heads, then a new sequence of 4 flips is generated. While, intuitively, it seems like the expected payout of this ticket is $0, it is actually $-0.71 (see Table 1). Curiously, this betting game may be more attractive to someone who believes in the independence of coin flips, rather than someone who holds the Gambler’s fallacy.
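This can be checked by brute force. Here is a short sketch in Python (my own enumeration, not from the paper) that classifies all 16 equally likely four-flip sequences by the lottery's rule:

```python
from itertools import product

def h_after_h_ratio(seq):
    """Relative frequency of heads among flips that immediately follow a heads.
    Returns None when no flip follows a heads (the undefined case)."""
    follows = [seq[i] for i in range(1, len(seq)) if seq[i - 1] == 'H']
    if not follows:
        return None
    return follows.count('H') / len(follows)

wins = losses = ties = undefined = 0
for seq in product('HT', repeat=4):      # all 16 equally likely sequences
    r = h_after_h_ratio(seq)
    if r is None:
        undefined += 1                   # no flip follows a heads: redo
    elif r > 0.5:
        wins += 1                        # ticket pays $10
    elif r < 0.5:
        losses += 1                      # ticket pays $0
    else:
        ties += 1                        # exactly 0.5: redo

print(wins, losses, ties, undefined)     # prints: 4 6 4 2
```

Among the decided sequences, losses outnumber wins 6 to 4, even though the coin is fair; the exact dollar value of the ticket then depends on how the redo cases are settled, but the bet is unfavorable either way.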

Comment author: gjm 25 November 2015 02:03:18PM 6 points [-]

I think this is not quite right, and it's not-quite-right in an important way. It really isn't true in any sense that "it's more likely that you'll alternate between heads and tails". This is a Simpson's-paradox-y thing where "the average of the averages doesn't equal the average".

Suppose you flip a coin four times, and you do this 16 times, and happen to get each possible outcome once: TTTT TTTH TTHT TTHH THTT THTH THHT THHH HTTT HTTH HTHT HTHH HHTT HHTH HHHT HHHH.

  • Question 1: in this whole sequence of events, what fraction of the time was the flip after a head another head? Answer: there were 24 flips after heads, and of these 12 were heads. So: exactly half the time, as it should be. (Clarification: we don't count the first flip of a group of 4 as "after a head" even if the previous group ended with a head.)
  • Question 2: if you answer that same question for each group of four, and ignore cases where the answer is indeterminate because it involves dividing by zero, what's the average of the results? Answer: it goes 0/0 0/0 0/1 1/1 0/1 0/1 1/2 2/2 0/1 0/1 0/2 1/2 1/2 1/2 2/3 3/3. We have to ignore the first two. The average of the rest is 17/42, or just over 0.4.

What's going on here isn't any kind of tendency for heads and tails to alternate. It's that an individual head or tail "counts for more" when the denominator is smaller, i.e., when there are fewer heads in the sample.

Comment author: AstraSequi 26 November 2015 01:53:56AM *  0 points [-]

My intuition is from the six points in Kahan's post. If the next flip is heads, then the flip after is more likely to be tails, relative to if the next flip is tails. If we have an equal number of heads and tails left, P(HT) > P(HH) for the next two flips. After the first heads, the probability for the next two might not give P(TH) > P(TT), but relative to independence it will be biased in that direction because the first T gets used up.

Is there a mistake? I haven't done any probability in a while.

Comment author: gjm 26 November 2015 02:23:18AM 3 points [-]

If the next flip is heads, then the flip after is more likely to be tails, relative to if the next flip is tails.

No, that is not correct. Have a look at my list of 16 length-4 sequences. Exactly half of all flips-after-heads are heads, and the other half tails. Exactly half of all flips-after-tails are heads, and the other half tails.

The result of Miller and Sanjuro is very specifically about "averages of averages". Here's a key quotation:

We demonstrate that in a finite sequence generated by i.i.d. Bernoulli trials with probability of success p, the relative frequency of success on those trials that immediately follow a streak of one, or more, consecutive successes is expected to be strictly less than p

"The relative frequency [average #1] is expected [average #2] to be ...". M&S are not saying that in finite sequences of trials successes are actually rarer after streaks of success. They're saying that if you compute their frequency separately for each of your finite sequences then the average frequency you'll get will be lower. These are not the same thing. If, e.g., you run a large number of those finite sequences and aggregate the counts of streaks and successes-after-streaks, the effect disappears.

Comment author: Viliam 25 November 2015 09:33:42AM *  1 point [-]

However, if you're predicting the next flip in a finite series of flips that has already occurred, it's actually more likely that you'll alternate between heads and tails.

...because heads occurring separately are on average balanced by heads occurring in long sequences; but limiting the length of the series puts a limit on the long sequences.

In other words, in infinite sequences, "heads preceded by heads" and "heads preceded by tails" would be in balance, but if you cut out a finite subsequence, if the first one was "head preceded by head", by cutting out the subsequence you have reclassified it.

Am I correct, or is there more?

Comment author: gjm 25 November 2015 02:13:29PM 1 point [-]

I don't think this is correct. See my reply to AstraSequi.

(But I'm not certain I've understood what you're proposing, and if I haven't then of course your analysis and mine could both be right.)

Comment author: CellBioGuy 24 November 2015 11:27:42PM *  9 points [-]

More data on Kepler star KIC 8462852.

http://www.nasa.gov/feature/jpl/strange-star-likely-swarmed-by-comets

After going back through Spitzer space telescope infrared images, the star did not have an infrared excess as recently as earlier in 2015, meaning there wasn't some kind of event that generated huge amounts of persistent dust between the last measurements of spectra and the Kepler dataset showing the dips in brightness. This bolsters the 'comet storm / icy body breakup' theory, in that such an event would generate dust close to the star that rapidly goes away, and is positioned such that we are primed to see large fractions of it as it is generated close to the star, rather than a tiny fraction of dust further away.

(This comes after the Allen telescope array, failing to detect anything interesting, put an upper limit on radio radiation coming from the system at 'weaker than 400x the strength we could put out with Arecibo in narrow bands, or 5,000,000x in wide bands', for what that's worth)

Comment author: Elo 24 November 2015 10:13:01PM 1 point [-]

This week on the slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/

  • AI - language/words as a storage-place for meaning.
  • art and media - MGS V, Leviathan, SOMA, Undertale, advertising methods,
  • Business and startups - CACE (Changing Anything Changes Everything) with respect to startups and machine learning. prediction.io. Meetings: [each person speaks, so the length of the meeting is O(n), and there are n people, so the total meeting cost is O(n^2). On the margin, adding one person to the standup means they listen to n people speak, and n people listen to them speak.] and how they cost businesses money. Machine speech ability; data wrangling is tedious; data processing resources: data sources, computing power and blindness. "the whole world is simpler if greed is the primary motivator for everything". "People talk a lot about market failure but government failure is a thing too." VCs and extortionary practices. What is the intention of implementing UBI? (unanswered). "if the game-plan (the economy) changes - i.e. by automation; or basic income. The people with more resources will be able to adapt to it faster..." Wealth distribution.

  • Debating and rhetoric - we break apart the discussions and arguments from other places... We analysed where the first statement of an argument elsewhere shifted from discussion to disagreement (surprisingly early). A two-pronged approach to offence, in cases where:

  • a statement could be taken offensively
  • it was taken offensively by someone.

1: clean up the statement so that it is harder to take offensively (steelman)
2: encourage less personal offence from the original statement
Both sides are needed to make discussions more productive.
Grice's Maxims of communication - https://en.wikipedia.org/wiki/Cooperative_principle this is also interesting: http://www.smart-words.org/linking-words/transition-words.html

  • Effective altruism - EA Global have started hosting videos from this year's conference on their site. Duplicates of what is already up. Nothing at all from the Oxford conference yet. http://eaglobal.org/videos

  • goals of lesswrong - raising the sanity waterline before humanity goes extinct. How could the sanity waterline be raised:

    • Changing the education system
    • Getting enough influential writers
    • Getting enough famous people to be rationalists so that people want to emulate then
    • Creating a movie or TV series about rationalists
    • Get enough rationalists within the population that everyone gains some understanding of rationalist ideas. Also: asking a few teachers about how you might go about teaching the LW ideas to the average person...
  • human relationships - living in different places and the different cultures of doing so. Driving vs public transport and safety concerns. "Youthful optimism" and its contrasting "aging pessimism" as an exploration-exploitation problem. If we make a rough assumption that both things exist and that at some point a youthful optimist transitions into an aging pessimist, what can we learn about that, and how can we benefit from knowing it is a natural process?

  • linguistics - the phrase "If I understand you correctly, you were saying..." followed by what you are saying next. It slows down a conversation, but keeps it clear.

  • Open - so many things! IQ / the sports gene (re: parable of talents), accountability groups, a big disagreement about this thing: http://lo-tho.blogspot.com/2014/12/epistemic-trust.html , http://www.informationisbeautiful.net/visualizations/rhetological-fallacies/ , QS data, case law and its influence on the law and an analogy to edge testing in programming. Some discussion of the state of our Facebook feeds post-Paris events. Some online courses, fighting death, advice about how to think about motivated cognition (clever-arguer) vs intellectual honesty (by which I just mean the lack of motivated cognition) in the case where one person has a really high probability for X and honestly believes that the argument is very one-sided.

The quotation you’re looking for is from Chesterton’s 1929 book, The Thing, in the chapter entitled, “The Drift from Domesticity”:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.

  • Parenting - (uncharacteristically quiet) some talk about video games that we let kids play

  • philosophy - is there a fundamental difference in the peer relationships among men as compared to the peer relationships among women? I've heard often that men by default are indifferent to each other while women by default are adversaries.

    response: sounds like armchair philosophy. What evolutionary characteristics or behaviours did we or did we not pick up? Even if you found a population for which that held true, I doubt it would hold true everywhere. It may have temporarily been true for some people at some point, but evolution is all about gaming the rules. As soon as anything becomes a "rule", in the sense of being a regularly repeated behaviour, some individual who was not winning at the rule would try to generate a different win-condition so that they can continue to win.

in summary: how could we know? and also if it was true for a temporary time and place I doubt it would last more than a handful of generations. by generate I mean: randomly evolve a different pattern of behaviour.

"how should we feel, emotionally, about the real world when the real world kind of sucks, and is there anything we should do about it?" [various ideas; not completely answered]

Feel free to join us. Active meetup time: A time to try to get lots of people online to talk about things is going to be chosen soon, probably a 12 hour window or so.

We have over 130 people who have signed up. Not nearly that many people are active, but each day something interesting happens...

last month on slack: http://lesswrong.com/r/discussion/lw/mwt/open_thread_oct_26_nov_01_2015/cuq5

Comment author: Fluttershy 24 November 2015 05:41:34AM 2 points [-]

Do transhumanist types tend to value years of life lived past however long they'd expect to live anyway linearly (i.e. if they'd pay a maximum of exactly n to live an extra year, would they also be willing to pay a maximum of exactly 100n to live 100 extra years)?

If so, the cost effectiveness of cryonics (in terms of added life years lived) could be compared with the cost effectiveness of other implementable health interventions that would-be cryonicists are on the fence about. What's the marginal disutility a given transhumanist might get from forcing themselves to eat a bit more healthily, and how much would that extend their life expectancy? What about exercise? Or going to the doctor over that odd itch in their throat that they'd like to ignore just one more day?

The point I'm coming to is that if I want my friends to live longer lives (or have more QALYs, or whatever) in expectation, it's probably better for me to pester them about certain lifestyle choices and preventative interventions than it is to pester them to sign up for cryonics. (By the same token, I seem to recall that Hanson or Yudkowsky once pointed out that cryonics would be expected to add more years to one's life than an open heart surgery (?) relative to the cost, or something like that.)

Comment author: Soothsilver 24 November 2015 09:14:37PM 1 point [-]

I also ask myself these questions and I'm unable to answer them. In the end, I exercise and modify my diet as much as my will allows without causing me too much stress.

As for valuing years of life, if I considered that the very best outcome of cryonics (as HungryHobo described) is certain, then, well, even for very small values that will result in cryonics giving me far more utility than exercise. I don't value later years of my life that low.

Yudkowsky believes that cryonics has a greater than 50% chance of working, and that we will be able to have fun for any amount of time, so for him, the expected value of cryonics is ginormous.

I get quite a bit of disutility from forcing myself to eat a bit more healthily. My food diversity is very poor; if I try to ingest one of many foods I don't like, I will throw up. Attempting to eat those foods anyway causes me great discomfort. So that's not a great way for me to increase overall utility.

On the last paragraph, it appears to me that the two basics - avoiding obesity and not smoking - are the best thing you can pester them about. But the other lifestyle choices have the expected benefit of a few years total, if you don't expect any new medical technology to be developed.

Comment author: MarsColony_in10years 25 November 2015 06:41:54PM *  1 point [-]

avoiding obesity

Not to be pedantic, but I thought this might be of interest: As I understand it, amount of exercise is a better predictor of lifespan than weight. That is, I would expect someone overweight but who exercises regularly to outlive someone skinny who never exercises.

For example, this life expectancy calculator outputs 70 years for a 5'6", 25-year-old male who weighs 300lbs but exercises vigorously daily. Changing the weight to 150 lbs and putting in no exercise raised the life expectancy by only 1 year. (a bit less than I was expecting, actually. I was about to significantly update, but then it occurred to me that 300 lbs isn't the definition of obesity. I knew this previously, but apparently hadn't fully internalized that.) EDIT: This calculator may not work well for weights over ~250 lbs. See comment below.

So, my top two recommendations to friends would be quit smoking and exercise regularly. I'd recommend Less Wrongers either do high intensity workouts once a week to minimize the amount of time spent on non-productive activities, or pick a more frequent but lower intensity activity they can read or watch Khan Academy or listen to The Sequences audiobook while doing. I'm not an expert or anything. That's just the impression I've gotten from my own research.

Comment author: Soothsilver 25 November 2015 08:45:48PM 3 points [-]

I'm not sure I would trust that calculator. I'm not used to US units, so I put in 84kg (my weight) and it said "with that BMI you can't be alive", so I put in 840, thinking maybe it wanted the first decimal as well. Now I realize it wanted pounds. And for this, 840lbs, it also outputted 70 years.

I'm not sure where the calculator gets its data from.

Comment author: MarsColony_in10years 26 November 2015 08:31:04AM 2 points [-]

Hmmm, that's worrying. I played with some numbers for a 5'6" male, and got this:

99 lbs yields "Your BMI is way too low to be living"
100lbs yields 74 years
150lbs yields 76 years
200lbs yields 73 years
250lbs yields 69 years
300lbs yields 69 years
500lbs yields 69 years
999lbs yields 69 years

It looks to me like they are pulling data from a table, and the table maxes out under 250lbs?

Comment author: Viliam 25 November 2015 08:23:34PM 1 point [-]

As I understand it, amount of exercise is a better predictor of lifespan than weight.

This seems like good news to me, because I have greater control over my exercise than my weight.

Comment author: Lumifer 25 November 2015 07:08:43PM 3 points [-]

amount of exercise is a better predictor of lifespan than weight

First, there is no reason for you to care about ranking ("better"), you should only care whether something is a good predictor of lifespan. Predictors are not exclusive.

Second, weight effect on lifespan is nonlinear. As far as I remember it's basically a U-shaped curve.

Comment author: gjm 26 November 2015 12:08:07AM 3 points [-]

I think it's only U-shaped if you're plotting mortality rather than lifespan on the y-axis...

Comment author: Lumifer 26 November 2015 12:29:09AM 1 point [-]

Fair point.

Comment author: HungryHobo 24 November 2015 04:29:40PM 2 points [-]

The levels of uncertainty make this really hard to work with.

On the one hand perhaps it works and the person gets to live for billions of deeply fulfilling years, till the heat death of the universe experiencing 10x subjective time giving trillions of QALYs.

Or perhaps they get awoken into a world where life extension is possible but legally limited to a couple hundred years.

Or perhaps they get awoken into a world where they're considered on the same moral level as lab rats and millions of copies of their mind get to suffer in countless interesting ways.

so you end up with a very very wide range of values, negative to trillions of QALYs with no way to assign reasonable probabilities to anything in the range which makes cost effectiveness calculations a little less convincing.

Comment author: John_Maxwell_IV 24 November 2015 04:48:59AM 12 points [-]

MealSquares (the company I'm starting with fellow LW user RomeoStevens) is searching for nutrition experts to join our advisory team. The ideal person has a combination of formally recognized nutrition expertise & also at least a casual interest in things like study methodology and effect sizes (this unfortunately seems to be a rare combination). Advising us will be an opportunity to improve the diets of many people, it should not be much work, you'll get a small stake in our company, and you'll help us earn money for effective giving. Please get in touch with us (ideally using this page) if you or someone you know might be interested!

Comment author: MarsColony_in10years 25 November 2015 04:24:43PM *  3 points [-]

you'll help us earn money for effective giving

I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now. However, 2 questions:

  1. Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate?

  2. What sorts of EA charities are you interested in?

I've been using MealSquares regularly, without realizing that you guys were LWers or EAs. As such, I've been using mostly Soylent because of the cost difference. (A 400 Calorie MealSquare is ~$3, a 400 Calorie jug of Soylent 2.0 is ~$2.83, 400 Calories worth of unmixed Soylent powder is ~$1.83, and the ingredients for 400 Calories worth of DIY People Chow are ~$0.70. All these are slightly cheaper with a subscription/large purchase.)

I ask, because if you happen to be interested in similar EA causes to me, and expect to eventually donate X% of proceeds, then I should be budgeting my expenses to factor that in. If (100%-X%) * MealSquaresCost < soylentCost, then I would buy much less soylent and much (/many?) more MealSquares. I'd be paying a premium to Soylent in order to add a bit more culinary variety. (Also, I realize this X isn't equal to the expected altruistic return on investment, but that would be even harder to estimate.)
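To make that inequality concrete, here's the breakeven donation fraction X for each alternative, using the per-400-Calorie prices quoted above (a rough sketch; the prices and the linear model are just the assumptions in this comment, nothing official):

```python
mealsquares = 3.00  # $ per 400 Calories, as quoted above
alternatives = {
    "Soylent 2.0 jug": 2.83,
    "Soylent powder": 1.83,
    "DIY People Chow": 0.70,
}

# Solve (1 - X) * mealsquares == cost for X: the donation fraction at which
# MealSquares' effective (post-donation) cost matches each alternative.
for name, cost in alternatives.items():
    x = 1 - cost / mealsquares
    print(f"{name}: MealSquares wins on effective cost if X > {x:.0%}")
```

By this rough model, anything above roughly a 6% donation rate makes MealSquares beat jug Soylent on effective cost, while beating the powder would take about 39% and DIY People Chow about 77%.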

Comment author: Lumifer 25 November 2015 04:39:41PM 0 points [-]

I'd be paying a premium to Soylent in order to add a bit more culinary variety.

/chokes on his foie gras X-D

Comment author: MarsColony_in10years 26 November 2015 08:45:09AM 1 point [-]

Someone gave you a downvote. If it was on my behalf or on the behalf of Soylent, then for the record I thought it was funny. :)

Comment author: John_Maxwell_IV 26 November 2015 08:43:46AM *  1 point [-]

I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now.

Yep, that's what we've been doing. (We've been providing free MealSquares to some EA organizations, but we haven't been donating a significant portion of our profits directly.)

Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate?

At least 10%, hopefully significantly more.

What sorts of EA charities are you interested in?

We've been trying to focus on growing our business rather than evaluating EA giving opportunities. If we actually do make a lot of money to donate, it will make sense to spend a lot of time thinking about where to give it. And we'll try & focus on identifying opportunities that we have a comparative advantage in (opportunities that are more suited to large donors, like funding a new organization from scratch).

I'm not exactly sure why, but for some reason the idea of people buying our product because we are EAs makes me uncomfortable. I would much rather people buy it because it's good for you, convenient, tasty, etc. As you point out, we are less than 10% more expensive on a per-calorie basis than jug form Soylent. Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, something else?

Comment author: MarsColony_in10years 26 November 2015 10:16:50AM 0 points [-]

the idea of people buying our product because we are EAs makes me uncomfortable.

In retrospect, I think that would make me uncomfortable too. In your position, I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion. On the other hand, maybe a deep feeling of obligation to charity isn't a bad thing?

Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, something else?

Based on my (fairly limited) understanding of nutrition, I suspect that any marginal difference between your products is fairly small. I suspect humans get strongly diminishing returns (in the form of increased lifespan) once we have our basic nutritional requirements met in bio-available forms and without huge amounts of anything harmful. After that, I'd expect the noise to overpower the signal. For example, perhaps unmeasured factors like my mood or eating habits change as a function of my Soylent/MealSquares choice, and I wind up getting fast food more often, or get less work done or something. Let's say it would take me a month of solid researching and reading nutrition textbooks to make a semi-educated decision of which of two good things is best. Would the added health benefit give me an additional month of life? What if I value my healthy life, here and now, far more than 1 more month spent senile in a nursing home? What if I also apply hyperbolic discounting?

I've probably done more directed health-related reading than most people. (Maybe 24 hours total, over the past year or so?) Enough to minimize the biggest causes of death, and have some vague idea of what "healthy" might look like. Enough to start fooling around with my own DIY soylent, even if I wouldn't want to eat that every day without more research. If someone who sounds knowledgeable sits down and does an independent review, I'd probably read it and scan the comments for critiques of the review.

Comment author: John_Maxwell_IV 26 November 2015 11:04:35PM 1 point [-]

Thanks for the explanation. I wrote up some of the details of our approach here. Nutrition is far from being settled, and major discoveries have been made just in the past 50 years. Therefore we take an approach that's fairly conservative, which means (among other things) getting most of our nutrients from whole foods, the way humans have been eating for virtually all of our species' history. We think the burden of proof should be on Soylent to show that their approach is a good one.

Comment author: Tem42 26 November 2015 05:57:41PM 1 point [-]

the idea of people buying our product because we are EAs makes me uncomfortable.

I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion.

I think many people would run the equation the other way -- buying from a company that gives a portion to charity is a way to pressure competing companies to do the same. In other words, MealSquares gives consumers a way to put pressure on the industry. Of course, there are a lot of ways that that model could be flawed, but you're hardly abusing the people who make that choice.

Comment author: SolveIt 25 November 2015 03:32:20PM 4 points [-]

Do you have any plans for international shipping? (Say, the UK)

Comment author: John_Maxwell_IV 26 November 2015 08:18:53AM 2 points [-]

We've experimented with doing international shipping. It gets expensive, and it's also a bit of a hassle. It makes more sense if you're doing a group buy (90+ squares). If you really want MealSquares and you're willing to pay a bunch extra for international shipping, contact us and we can work out details. Long term we would love to set up production facilities in foreign countries like a regular multinational, but that won't be for a while.

Comment author: passive_fist 24 November 2015 11:07:03PM 3 points [-]

How does your product compare to widely-available meal replacement foods, like, say: http://www.cookietime.co.nz/osm.html ?

Comment author: John_Maxwell_IV 25 November 2015 03:09:22AM *  10 points [-]
  • MealSquares are nutritionally complete--5 MealSquares contain all the vitamins & minerals you need to survive for a day, in the amounts you need them. In principle you could eat only MealSquares and do quite well, although we don't officially recommend this. It's more about having an easy "default meal" that you can eat with confidence once or twice a day when you don't have something more interesting to do like get dinner with friends.

  • MealSquares is made from a variety of whole foods, and almost all of the vitamins and minerals are from whole food sources (as opposed to competing products like Soylent that use dubious vitamin powders). Virtually every nutrition expert in the past century has recommended eating a variety of whole foods, and MealSquares stuffs more than 10 whole food ingredients into a single convenient package, including 3 different fruits and 3 different vegetables.

We've put a lot of research into MealSquares to make it better for you than most or all competing products on the market. For example, the first ingredient in Clif Bar is brown rice syrup (basically a glorified form of sugar), and they get their protein from rice and soy (not as bioavailable as other sources). MealSquares contains only a bit of added sugar (dark chocolate chips) and bioavailable protein sources. I'm having a hard time finding solid nutrition info on the One Square Meal website. But you can see that our 400 calorie bar (120 grams) has only 12 grams of sugar, so 10% sugar by weight, whereas their bar is 17.1% sugar by weight.

Most competing meal bars are similar: non-bioavailable protein sources and lots of sugar, generally added sugar. Clif Bar is basically a candy bar disguised to be healthy: it has 23 grams of sugar in a 230 calorie bar, and a Hershey's Milk Chocolate with Almonds bar has 19 grams of sugar in a 210 calorie bar. Most meal bar makers are doing the nutritional equivalent of taking a Hershey bar, adding in some vitamin powders and soy protein isolate, and telling their customers that it's a healthy snack.

The biggest practical difference between us and One Square Meal is probably that we are available in the US and they are available in New Zealand.

Comment author: passive_fist 25 November 2015 03:36:23AM 2 points [-]

Interesting, thanks for the info. Yes most meal replacement bars seem to be simply soy-augmented candy bars, however there is of course a practical reason for this: sweet foods sell better.

It might be worth mentioning on your site that your product is healthier and has less sugar than the alternatives. Another problem is soy protein. Some research hints at soy protein having undesirable hormone-imitating effects: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074428/ so this could be a selling point as well, as I presume you do not use soy protein.

Comment author: helldalgo 24 November 2015 10:46:54AM *  4 points [-]

I'm not the right person at all, but if you ever want an amateur data enthusiast to help clean and present research results, I'd be willing to donate my time. The project is interesting and I would like to start stretching my skill set. I am pretty good at graphing in R, have a solid understanding of probability theory (undergrad level). I also have a good intuition for cleaning data sets.

All of that evaluation is based on what other math nerds have told me, so I understand if you're not interested!

Comment author: ShardPhoenix 23 November 2015 11:16:36PM *  7 points [-]

What is the optimal amount of attention to pay to political news? I've been trying to cut down to reduce stress over things I can't control, but ignoring it entirely seems a little dangerous. For an extreme example, consider the Jews in Nazi Germany - I'd imagine those who kept an eye on what was going on were more likely to leave the country before the Holocaust. Of course something that bad is unlikely, but it seems like it could still be important to be aware of impactful new laws that are passed - e.g. anti-privacy laws, or internet piracy becoming much more heavily punishable, etc.

So what's the best way to keep up on things that might have an impact on one's life, without getting caught up in the back-and-forth of day-to-day politics?

Comment author: Tem42 24 November 2015 09:32:58PM -1 points [-]

Get weekly updates from light, happy sources (The Daily Show, The News Quiz, Mock the Week), and then specific searches for things that sound important.

Comment author: VoiceOfRa 27 November 2015 04:19:53AM *  2 points [-]

Those strike me as worse than useless for the kind of things ShardPhoenix is interested in, e.g., they are the kinds of shows that would mock the "idiots" who believe the "ridiculous conspiracy theory" that the Nazis are actually planning to systematically exterminate the Jews.

Comment author: Viliam 25 November 2015 06:48:23AM 2 points [-]

I wondered how something called "Mock the Weak" would be considered a "happy source"... then I noticed the two "e"s

Comment author: ChristianKl 24 November 2015 10:12:29AM 1 point [-]

If you live in the US I would guess that if you read LW you will see comments about really important political events.

Comment author: VoiceOfRa 27 November 2015 04:22:26AM 2 points [-]

This is harder than it seems. For example, to find out when you need to withdraw your money ahead of a banking crisis, like what happened in Cyprus and Greece, you need to figure this out ahead of everybody else. Furthermore, the authorities are going to be doing their best to cover up the impending crisis.

Comment author: Elo 24 November 2015 10:38:05PM *  1 point [-]

how I do it -

Things that I care about: local events (likelihood of terrorism, or safety threats nearby)

Things I don't care about: any politics further away than that (and not likely to affect my life); global or country-wide events; natural disasters that are far away.

Comment author: fubarobfusco 24 November 2015 08:42:17PM 9 points [-]

Some things to think about:

Are there actual political threats to you in your own polity (nation, state, etc.)? Do you belong to groups that there's a history of official repression or large-scale political violence against? Are there notable political voices or movements explicitly calling for the government to round you up, kill you, take away your citizenship or your children, etc.? (To be clear: An entertainer tweeting "kill all the lawyers" is not what I mean here.)

Are you engaged in fields of business or hobbies that are novel, scary, dangerous, or offensive to a lot of people in your polity, and that therefore might be subject to new regulation? This includes both things that you acknowledge as possibly harmful (say, working with poisonous chemicals that you take precautions against, but which the public might be exposed to) as well as things that you don't think are harmful, but which other people might disagree. (Examples: Internet; fossil fuels; drones; guns; gambling; recreational drugs; pornography)

Internationally — In the past two hundred years, how often has your country been invaded or conquered? How many civil wars, coups d'état, or failed wars of independence have there been; especially ones sponsored by foreign powers? How much of your country's border is disputed with neighboring nations?

Comment author: Lumifer 24 November 2015 09:05:29PM 2 points [-]

(Examples: Internet; fossil fuels; drones; guns; gambling; recreational drugs; pornography)

I do like the list :-)

Comment author: Lumifer 24 November 2015 01:10:56AM 3 points [-]

What is the optimal amount of attention to pay to political news?

To electioneering, zero would be about right (unless you appreciate the entertainment value). To particular laws and/or regulations which might affect you personally, enough to know the landscape.

Comment author: NancyLebovitz 24 November 2015 12:14:16AM 4 points [-]

For the extreme stuff, I think you'll get clues from things like how people like you are treated on the street -- if it's your country. If you're at risk of being conquered by a government that hates you, the estimate is more complicated.

For the more likely things to keep track of, think about what's likely to affect you (like changes in laws) and use specialist sources.

Comment author: Lumifer 23 November 2015 09:35:51PM 3 points [-]
Comment author: passive_fist 23 November 2015 10:14:00PM 3 points [-]

Present day mathematics is a human construct, where computers are used more and more but do not play a creative role.

It always seemed very strange to me how, despite the obvious similarities and overlaps between mathematics and computer science, the use of computers for mathematics has largely been a fringe movement, and mathematicians mostly still do mathematics the way it was done in the 19th century. This is despite the fact that precision and accuracy are highly valued in mathematics, and decades of experience in computer science have shown us just how prone humans are to making mistakes in programs, proofs, etc., and just how stubbornly those mistakes can evade the eyes of proof-checkers.

Comment author: MrMind 24 November 2015 08:13:20AM 0 points [-]

I think the difficulty is in part due to the fact that mathematicians use classical metalogic (e.g. proof by contradiction) which is not easily implemented in a computer system. The most famous mathematical assistant, Coq, is based on a constructive type theory. Even the univalence program, which is ambitious in its goal to formalize all mathematics, is based on a variant of intuitionistic meta-logic.
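The distinction above can be made concrete with a small (hypothetical) Lean sketch: one direction of contraposition is provable constructively, while the reverse direction needs a classical principle like proof by contradiction, which is exactly the kind of step constructive proof assistants make you invoke explicitly.

```lean
-- Constructively fine: no classical axioms needed.
theorem contrapose {p q : Prop} (h : p → q) : ¬q → ¬p :=
  fun hnq hp => hnq (h hp)

-- The reverse direction requires classical reasoning:
-- Classical.byContradiction derives q from the absurdity of ¬q.
theorem contrapose_rev {p q : Prop} (h : ¬q → ¬p) : p → q :=
  fun hp => Classical.byContradiction (fun hnq => h hnq hp)
```

In Coq the situation is analogous: the second theorem goes through only after importing `Classical`, whereas the first is accepted by the bare constructive core.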

Comment author: Sarunas 24 November 2015 11:57:48AM *  4 points [-]

Correctness is essential, but another highly desirable property of a mathematical proof is its insightfulness, that is, whether it contains interesting and novel ideas that can later be reused in others' work (often these ideas are regarded as more important than the theorem itself). These others are humans, and they desire, let's call it, "human-style" insights. Perhaps if we had AIs that "desired" "computer-style" insights, some people (and AIs) would write their papers to provide them and investigate problems that are most likely to lead to them. Proofs that involve computers are often criticized for being uninsightful.

Proofs that involve steps that require use of computers (as opposed to formal proofs that employ proof assistants) are sometimes also criticized for not being human verifiable, because while both humans make mistakes and computer software can contain bugs, mathematicians sometimes can use their intuition and sanity checks to find the former, but not necessarily the latter.

Mathematical intuition is developed by working in an area for a long time and being exposed to various insights, heuristics, and ideas (mentioned in the first paragraph). Thus not only are computer-based proofs harder to verify, but if an area relies on a lot of proofs that are not human-verifiable, it might be significantly harder to develop an intuition in that area, which in turn makes it harder for humans to create new mathematical ideas. It is probably easier to understand a landscape of ideas that were created to be human-understandable.

That is neither to say that computers have little place in mathematics (they do have a place: they can be used for formal proofs, generating conjectures, or gathering evidence for what approach to use to solve a problem), nor is it to say that computers will never make human mathematicians obsolete (perhaps they will become so good that humans will no longer be able to compete).

However, it should be noted that some people have different opinions.

Comment author: RichardKennaway 24 November 2015 11:53:09AM 2 points [-]

Substantial work has been done on this. The two major systems I know of are Automath (defunct but historically important) and Mizar (still alive). Looking at those articles just now also turns up Metamath. Also of historical interest is QED, which never really got started, but is apparently still inspiring enough that a 20-year anniversary workshop was held last year.

Creating a medium for formally verified proofs is a frequently occurring idea, but no-one has yet brought such a project to completion. These systems are still used only to demonstrate that it can be done, but they are not used to write up new theorems.

Comment author: Vaniver 24 November 2015 02:44:36PM 1 point [-]

I thought there were several examples of theorems that had only been proved by computers, like the Four Color Theorem, but that they're sort of in their own universe because they rely on checking thousands of cases, and so not only could a person not really be sure that they verified the proof (because the odds of them making a mistake would be so high) they couldn't get much in the way of intuition or shared technique from the proof.

Comment author: RichardKennaway 24 November 2015 03:12:41PM 1 point [-]

I thought there were several examples of theorems that had only been proved by computers, like the Four Color Theorem

Yes, although as far as I know things like that, and the FCT in particular, have only been proved by custom software written for the problem.

There's also a distinction between using a computer to find a proof, and using it to formalise a proof found by other means.

Comment author: Douglas_Knight 25 November 2015 06:15:56PM 2 points [-]

Indeed, the computer-generated proofs of 4CT were not only not formal proofs, they were not correct. Once a decade, someone would point out an error in the previous version and code his own. But now there is a version for an off the shelf verifier.

Comment author: username2 23 November 2015 07:17:06PM 2 points [-]

Why are there many LWers from, say, Europe, but not China?

Comment author: Vaniver 23 November 2015 08:15:39PM 12 points [-]

I'm going to guess that English language proficiency is far higher in Europe than it is in China. But Asian Americans seem underrepresented on LW relative to the fields that LW draws heavily from, so that seems unlikely to be a complete explanation.

Comment author: username2 29 November 2015 09:28:01PM *  0 points [-]

Then why are there so few LWers from India, which is an enormous country with English as an official language? Why are so many Indians on Quora, but relatively few here?

Comment author: Vaniver 29 November 2015 10:04:52PM 0 points [-]

There are a lot of LWers from India, relative to the rest of the world? Agreed that there are fewer than we would expect, and in particular there are more East Asian LWers than South Asian LWers.

Comment author: iarwain1 23 November 2015 08:06:56PM 2 points [-]

I'm going to guess it's based on some of the East-West thinking differences outlined by Richard Nisbett in The Geography of Thought (I very highly recommend that book, BTW). I don't remember everything in the book, but I remember he had some stuff in there about why easterners are often less interested in, and have a harder time with, the sort of logical/scientific thinking that LW advocates.

Comment author: g_pepper 24 November 2015 03:33:21PM 1 point [-]

I second the recommendation of The Geography of Thought.

Comment author: MrMind 24 November 2015 08:17:32AM *  1 point [-]

Which is weird because, if you take seriously the ethnic-IQ correlation (which I don't), Asians show a higher average IQ than Westerners.

Comment author: iarwain1 24 November 2015 02:59:18PM 7 points [-]

Nothing to do with IQ, but with modes of thinking. According to Nisbett, Eastern thinking is more holistic and concrete vs. the Western formal and abstract approach. He says that Easterners often make fewer thinking mistakes when dealing with other people, where a more holistic approach is needed (for example, Easterners are much less prone to the Fundamental Attribution Error). But at the same time they tend to make more thinking mistakes when it comes to thinking about scientific questions, as that often requires formal, abstract thinking. Nisbett also speculates that this is why science developed only in the west even though China was way ahead of the west in (concrete-thinking-based) technological progress.

In general there's very little if any correlation between IQ and rationality. A lot of Keith Stanovich's work is on this.

Comment author: Curiouskid 23 November 2015 04:54:33PM 6 points [-]

So, it seems like lots of people advise buying index funds, but how do I figure out which specific ones I should choose?

Comment author: Curiouskid 20 April 2016 06:38:48AM *  2 points [-]

So, I think the correct answer to the question "I have a 5-figure sum of money to invest" is to just go with Betterment/Wealthfront rather than Vanguard, so that you get diversification between asset classes (whereas a specific index fund will get you diversification within an asset class). If I'd known this when I'd asked the question, I would have picked a better mix of Vanguard index funds, and not hesitated as much with figuring out where to put the money. To be fair, Vaniver basically said this, I just think the links below explain it better, so I could feel certain enough to make a decision rather than let the money burn away through inflation.

http://www.mrmoneymustache.com/2012/02/17/book-review-the-intelligent-asset-allocator/

http://www.mrmoneymustache.com/2014/11/04/why-i-put-my-last-100000-into-betterment/

Comment author: Vaniver 20 April 2016 04:16:40PM 0 points [-]

MMM is in general excellent, and that's convinced me to move Betterment above Vanguard in my recommendation list in the future.

Comment author: RichardKennaway 24 November 2015 12:02:28PM 4 points [-]

I have a secondary question to that. These things seem to all operate online only, without bricks and mortar. How do I assure myself that a website that I have never seen before is trustworthy enough to invest, say, 6-figure sums of money in? Are there official ratings or registers, for probity rather than performance?

Comment author: Mac 25 November 2015 12:03:31PM 2 points [-]

You may want to check if the brokerage firm/custodian is a member of SIPC, which provides a level of insurance against misappropriation. I think all the big names are members (Vanguard, Schwab, TD Ameritrade, Fidelity, etc.)

http://www.sipc.org/for-investors/what-sipc-protects

Comment author: Vaniver 24 November 2015 06:25:35PM 5 points [-]

That's easy to answer for Vanguard, which has been around since 1975 and has $3T under management. It's not going anywhere. Both Wealthfront and Betterment were founded in 2008, in Palo Alto and NYC respectively, and have about $2B and $3B under management. I don't think there are any official ratings of probity out there; I'm not sure there's a good source besides trawling through the business press looking for red flags.

Comment author: FrameBenignly 23 November 2015 10:00:41PM 4 points [-]

The best argument for getting an index fund is the expense ratio; not broad versus narrow. Managed mutual funds have higher expense ratios because of the broker's salary. Private trading instead of buy and hold will similarly cost you more because of the transaction cost. To justify their transactions, a broker doesn't just have to beat the market, but to beat the market by a large enough swing to justify those extra costs. Because of the number of brokers out there, even if one has consistently beaten the market, it is impossible to determine whether that is due to skill or luck for any given broker. Large domestic index funds will generally have the lowest expense ratios.
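The expense-ratio argument compounds over time, which a quick sketch makes vivid. The numbers below are hypothetical (a 7% gross annual return, 0.05% vs. 1% expense ratios), not a claim about any particular fund:

```python
def growth(principal, annual_return, expense_ratio, years):
    """Compound a lump sum at a gross annual return, net of a constant expense ratio."""
    net = annual_return - expense_ratio
    return principal * (1 + net) ** years

# Hypothetical: $10,000 at 7% gross over 30 years.
index_fund = growth(10_000, 0.07, 0.0005, 30)  # large index fund, 0.05% fees
managed    = growth(10_000, 0.07, 0.0100, 30)  # managed fund, 1% fees
```

With these assumed numbers the low-fee fund ends up roughly 30% larger, which is the swing an active manager has to beat just to break even.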

Comment author: Vaniver 23 November 2015 06:57:51PM *  13 points [-]

Short version: try something like Vanguard's online recommendation, or check out Wealthfront or Betterment. Probably you'll just end up buying VTSMX.

Long version: The basic argument for index funds over individual stocks is that you think that a <broad class> is going to outperform a <narrow subclass> because of general economic growth and reduced risk through pooling. So if you apply the same logic to index funds, what that argues is that you should find the index fund that covers the largest possible pool.

But it also becomes obvious that this logic only stretches so far--one might think that meta-indexing requires having a stock index fund and a bond index fund that are both held in proportion to the total value of stocks and bonds. So let's start looking at the factors that push in the opposite direction.

First, historically stocks have returned more than bonds long-term, with higher variability. It makes sense to balance your holdings based on your time and risk preferences, rather than the total market's time and risk preferences. (If you're young, preferentially own stocks.)

As well, you might live in the US, for example, and find it more legally convenient to own US stocks than international stocks. The corresponding fund is VTSMX, for the total US stock market. If you want the global fund, it's VTWSX.

You might have beliefs about small caps and large caps, or sectors, and so on and so on. One mistake to avoid here is saying "well, I have three options, so clearly I should put a third of my money into each option," especially because many of these options contain each other--the global fund mentioned earlier is also a US fund, because the US is part of the globe.

Comment author: solipsist 25 November 2015 12:50:38PM 3 points [-]

Asset allocation (what portion of your money is in stocks and bonds) is very important, depends on your age, and will get out of whack unless you rebalance. So use a Vanguard Target Retirement Date fund.
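Rebalancing itself is mechanical, which is why target-date funds can automate it. A minimal sketch of the arithmetic (asset names and the 80/20 target below are illustrative, not a recommendation):

```python
def rebalance(holdings, targets):
    """Dollar trades per asset needed to restore target weights.

    holdings: {asset: current dollar value}
    targets:  {asset: desired weight, summing to 1}
    Positive values mean buy, negative mean sell.
    """
    total = sum(holdings.values())
    return {asset: targets[asset] * total - value for asset, value in holdings.items()}

# Hypothetical 80/20 stock/bond target after a stock run-up:
trades = rebalance({"stocks": 90_000, "bonds": 10_000},
                   {"stocks": 0.8, "bonds": 0.2})
# roughly: sell $10k of stocks, buy $10k of bonds
```

The trades always net to zero, since rebalancing only moves money between assets.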

Comment author: Lumifer 25 November 2015 03:33:01PM 1 point [-]

what portion of your money is in stocks and bonds

There are more financial assets than just stocks and bonds.

Comment author: banx 25 November 2015 09:47:47PM 1 point [-]

Yes, but those are the important ones. Stocks for high expected returns and bonds for stability. You can generalize "bonds" to include other things that return principal plus interest like cash and CDs.

Comment author: Lumifer 25 November 2015 11:43:17PM -1 points [-]

What's the criterion of importance?

...other things that return principal plus interest like cash

Um.... I hate to break it to you...

Comment author: banx 26 November 2015 12:11:16AM *  1 point [-]

What's the criterion of importance?

Important to the goal of increasing one's wealth while managing the risk of losing it. Certainly there are other possible goals (perhaps maximizing the chance of having a certain amount of money at a certain time, for example) but this is the most common, and the one that I assume people on LW discussing basic investing concepts would be interested in.

Um.... I hate to break it to you...

I'm not sure if you're referring to the fact that popular banks are returning virtually zero interest or if you're interpreting "cash" as "physical currency notes". If the former, I have cash in bank accounts that return .01%, 1%, and 4.09% (each serving different purposes). If the latter, I apologize for the confusion. The word is used to mean different things in different contexts. In the context of investing it is standard to include in its meaning checking and savings accounts, and often also CDs.

Comment author: Lumifer 26 November 2015 12:33:00AM 1 point [-]

Important to the goal of increasing one's wealth while managing the risk of losing it.

Given this definition, I don't see why only stocks and bonds qualify.

The word is used to mean different things in different contexts.

True, but given that you said "cash and CDs" I thought your idea of cash excludes deposits. Still, there are more asset classes than equity and fixed income.

Comment author: banx 26 November 2015 12:50:39AM 1 point [-]

Given this definition, I don't see why only stocks and bonds qualify.

My claim is that equity and fixed income are the important pieces for reaching that goal. With a total stock index fund and a total bond index fund you can achieve these goals almost as well as any other more complicated portfolio. Additional asset classes can add additional diversification or hedge against specific risks. What other asset classes do you have in mind? Real estate? Commodities? Currencies?

True, but given that you said "cash and CDs" I thought your idea of cash excludes deposits.

Fair enough. I was unclear.

Comment author: Lumifer 30 November 2015 05:19:56PM *  1 point [-]

My claim is that equity and fixed income are the important pieces for reaching that goal.

They are, of course, important. The question is whether they are the only important pieces.

What other asset classes do you have in mind

Real estate is the most noticeable thing here, given how for a lot of people it is actually their biggest financial investment (and often highly leveraged, too). Commodities and such generally require paying at least some attention to what's happening and the usual context of financial discussions on LW is the "into what can I throw my money so that I can forget about it until I need it?"

Comment author: Lumifer 23 November 2015 06:48:42PM 2 points [-]

You need to figure out things like your own risk tolerance, your own time horizons for investments, and your own ideas about what might happen (or not) in the econo-financial world within your time horizons.

Comment author: ike 29 November 2015 12:39:33AM 0 points [-]

The Guardian had an interesting article on biases. Makes a similar point as http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/

Comment author: Clarity 28 November 2015 02:24:18AM 0 points [-]

I recall a tool, by WeiDao if I'm not mistaken which will display all a users posts and comments ever on one page. I was wondering if anyone had the link. Perhaps we could get a wiki page with all of the LessWrong widgets like this for reference? I am not authorised to make Wiki pages myself.

Comment author: gjm 28 November 2015 12:31:10PM 3 points [-]
Comment author: Clarity 27 November 2015 05:00:48AM 0 points [-]

What are you working on?

Do you need help?

Comment author: [deleted] 27 November 2015 12:11:54PM *  1 point [-]

Are you offering help to people, or just curious about support networks? I'm mainly trying to motivate myself to write up a paper on relatively old data: dealing with my usual problem that I am more excited about newer projects, even though the older ones are not completed. Help would be nice but it's essentially my sole responsibility to prepare a first draft, after which my coauthors will contribute.

What are you working on, and do you need help?

Comment author: Clarity 28 November 2015 12:20:44AM *  1 point [-]

I'm prompting discussion of these things in case any parties would like to help and/or be helped. Sometimes people who want to help don't feel like starting the discussion, and the same goes for those who want help. But if we're all just mentioning what we're doing, perhaps people can help in ways we hadn't even thought of.

I'd be happy to help if my skills and interest set matches your hopes for a coauthor. I highly doubt that since I'm just a lowly grad student.

I'm working on a social enterprise, my rationality, working out some procedural things with two collaborators on two separate projects, and getting my notes and records better organised. Don't really need any help from online for those things except rationality, and I make pleas for help about that here all the time anyway. Thanks for asking.

Comment author: [deleted] 30 November 2015 10:18:21AM 1 point [-]

I see... but buried deep in the open thread it's not likely to be seen by many, and it was not very clear what you were trying to get out of such a brief, open-ended comment when it was originally posted.

For example, I misunderstood your intent, and thought you were talking more generally about problem solving and social support, vs. requesting help from LW's users.

Comment author: Elo 28 November 2015 10:03:35AM 0 points [-]

I am interested in the paper on the topic; if you drop what you have into a google doc and PM me the link I will add my thoughts. (I have similar troubles with old/new projects)

Comment author: [deleted] 30 November 2015 10:10:52AM 0 points [-]

Sorry, my comment was ambiguous - I am not writing a paper on this subject but am struggling with finishing old projects on other topics, while being seduced by novelty. Writing up my thoughts on old/new projects would make the problem worse as this is well outside the field I need to make progress in to keep a desk over my head.

Comment author: Elo 30 November 2015 07:41:13PM 0 points [-]

A suggestion: if you weigh the salience of completion more heavily, you might be able to motivate yourself to complete a half-done project sooner than a zero-done project.

Obviously the draw of the new-shiny project is significant and likely to be more interesting because it is novel. The finishing reward is further away though.

Consider making a list of what is left to do on the existing project. You might be suffering from a difficulty in knowing what to do next (which masks itself as akrasia and new-shiny-project feelings). At some point, after doing all the obviously easy parts of the project, we are left with the not-obviously-easy parts (if all the parts were obvious and easy we would be done with the task).