If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


MealSquares (the company I'm starting with fellow LW user RomeoStevens) is searching for nutrition experts to join our advisory team. The ideal person has a combination of formally recognized nutrition expertise & also at least a casual interest in things like study methodology and effect sizes (this unfortunately seems to be a rare combination). Advising us will be an opportunity to improve the diets of many people, it should not be much work, you'll get a small stake in our company, and you'll help us earn money for effective giving. Please get in touch with us (ideally using this page) if you or someone you know might be interested!

7[anonymous]8y
I'm not the right person at all, but if you ever want an amateur data enthusiast to help clean and present research results, I'd be willing to donate my time. The project is interesting and I would like to start stretching my skill set. I am pretty good at graphing in R, have a solid understanding of probability theory (undergrad level). I also have a good intuition for cleaning data sets. All of that evaluation is based on what other math nerds have told me, so I understand if you're not interested!
5[anonymous]8y
Do you have any plans for international shipping? (Say, the UK)
2John_Maxwell8y
We've experimented with doing international shipping. It gets expensive, and it's also a bit of a hassle. It makes more sense if you're doing a group buy (90+ squares). If you really want MealSquares and you're willing to pay a bunch extra for international shipping, contact us and we can work out details. Long term we would love to set up production facilities in foreign countries like a regular multinational, but that won't be for a while.
3MarsColony_in10years8y
I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now. However, 2 questions: 1. Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate? 2. What sorts of EA charities are you interested in? I've been using MealSquares regularly, without realizing that you guys were LWers or EAs. As such, I've been using mostly Soylent because of the cost difference. (A 400 Calorie MealSquare is ~$3, a 400 Calorie jug of Soylent 2.0 is ~$2.83, 400 Calories worth of unmixed Soylent powder is ~$1.83, and the ingredients for 400 Calories worth of DIY People Chow are ~$0.70. All these are slightly cheaper with a subscription/large purchase.) I ask because if you happen to be interested in similar EA causes to me, and expect to eventually donate X% of proceeds, then I should be budgeting my expenses to factor that in. If (100%-X%) * MealSquares_Cost < soylent_Cost, then I would buy much less Soylent and much (/many?) more MealSquares. I'd be paying a premium to Soylent in order to add a bit more culinary variety. (Also, I realize this X isn't equal to the expected altruistic return on investment, but that would be even harder to estimate.)
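The inequality above can be turned around to find the breakeven donation percentage X. A minimal sketch using the per-400-Calorie prices quoted in the comment (the prices are the commenter's estimates, not official figures):

```python
# Breakeven donation fraction X from the comparison above:
# buy MealSquares when (1 - X) * mealsquares_cost < soylent_cost,
# i.e. when X > 1 - soylent_cost / mealsquares_cost.
MEALSQUARES_PER_400KCAL = 3.00   # USD, as quoted above
SOYLENT_JUG_PER_400KCAL = 2.83   # USD, as quoted above

def breakeven_donation_fraction(own_cost: float, rival_cost: float) -> float:
    """Smallest donated fraction X at which the pricier product breaks even."""
    return 1 - rival_cost / own_cost

x = breakeven_donation_fraction(MEALSQUARES_PER_400KCAL, SOYLENT_JUG_PER_400KCAL)
print(f"breakeven X = {x:.1%}")  # 5.7%
```

So by this comparison, a donation rate anywhere above roughly 6% of proceeds already tips the per-calorie math in MealSquares' favor versus jug-form Soylent.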
1John_Maxwell8y
Yep, that's what we've been doing. (We've been providing free MealSquares to some EA organizations, but we haven't been donating a significant portion of our profits directly.) At least 10%, hopefully significantly more. We've been trying to focus on growing our business rather than evaluating EA giving opportunities. If we actually do make a lot of money to donate, it will make sense to spend a lot of time thinking about where to give it. And we'll try & focus on identifying opportunities that we have a comparative advantage in (opportunities that are more suited to large donors, like funding a new organization from scratch). I'm not exactly sure why, but for some reason the idea of people buying our product because we are EAs makes me uncomfortable. I would much rather people buy it because it's good for you, convenient, tasty, etc. As you point out, we are less than 10% more expensive on a per-calorie basis than jug form Soylent. Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, something else?
0MarsColony_in10years8y
In retrospect, I think that would make me uncomfortable too. In your position, I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion. On the other hand, maybe a deep feeling of obligation to charity isn't a bad thing? Based on my (fairly limited) understanding of nutrition, I suspect that any marginal difference between your products is fairly small. I suspect humans get strongly diminishing returns (in the form of increased lifespan) once we have our basic nutritional requirements met in bio-available forms and without huge amounts of anything harmful. After that, I'd expect the noise to overpower the signal. For example, perhaps unmeasured factors like my mood or eating habits change as a function of my Soylent/MealSquares choice, and I wind up getting fast food more often, or get less work done or something. Let's say it would take me a month of solid researching and reading nutrition textbooks to make a semi-educated decision of which of two good things is best. Would the added health benefit give me an additional month of life? What if I value my healthy life, here and now, far more than 1 more month spent senile in a nursing home? What if I also apply hyperbolic discounting? I've probably done more directed health-related reading than most people. (Maybe 24 hours total, over the past year or so?) Enough to minimize the biggest causes of death, and have some vague idea of what "healthy" might look like. Enough to start fooling around with my own DIY soylent, even if I wouldn't want to eat that every day without more research. If someone who sounds knowledgeable sits down and does an independent review, I'd probably read it and scan the comments for critiques of the review.
1John_Maxwell8y
Thanks for the explanation. I wrote up some of the details of our approach here. Nutrition is far from being settled, and major discoveries have been made just in the past 50 years. Therefore we take an approach that's fairly conservative, which means (among other things) getting most of our nutrients from whole foods, the way humans have been eating for virtually all of our species' history. We think the burden of proof should be on Soylent to show that their approach is a good one.
1Tem428y
I think many people would run the equation the other way -- buying from a company that gives a portion to charity is a way to pressure competing companies to do the same. In other words, MealSquares gives consumers a way to put pressure on the industry. Of course, there are a lot of ways that that model could be flawed, but you're hardly abusing the people who make that choice.
-1Lumifer8y
/chokes on his foie gras X-D
2MarsColony_in10years8y
Someone gave you a downvote. If it was on my behalf or on the behalf of Soylent, then for the record I thought it was funny. :)
3passive_fist8y
How does your product compare to widely-available meal replacement foods, like, say: http://www.cookietime.co.nz/osm.html ?
  • MealSquares are nutritionally complete--5 MealSquares contain all the vitamins & minerals you need to survive for a day, in the amounts you need them. In principle you could eat only MealSquares and do quite well, although we don't officially recommend this. It's more about having an easy "default meal" that you can eat with confidence once or twice a day when you don't have something more interesting to do like get dinner with friends.

  • MealSquares is made from a variety of whole foods, and almost all of the vitamins and minerals are from whole food sources (as opposed to competing products like Soylent that use dubious vitamin powders). Virtually every nutrition expert in the past century has recommended eating a variety of whole foods, and MealSquares stuffs more than 10 whole food ingredients into a single convenient package, including 3 different fruits and 3 different vegetables.

We've put a lot of research into MealSquares to make it better for you than most or all competing products on the market. For example, the first ingredient in Clif Bar is brown rice syrup (basically a glorified form of sugar), and they get their protein from rice and soy (not a...

4passive_fist8y
Interesting, thanks for the info. Yes, most meal replacement bars seem to be simply soy-augmented candy bars; however, there is of course a practical reason for this: sweet foods sell better. It might be worth mentioning on your site that your product is more healthy and has less sugar than the alternatives. Another problem is soy protein. Some research hints at soy protein having undesirable hormone-imitating effects: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074428/ so this could be a selling point as well, as I presume you do not use soy protein.
[-][anonymous]8y130

More data on Kepler star KIC 8462852.

http://www.nasa.gov/feature/jpl/strange-star-likely-swarmed-by-comets

After going back through Spitzer space telescope infrared images, the star did not show an infrared excess as recently as earlier in 2015, meaning that there wasn't some kind of event that generated huge amounts of persistent dust between the last measurements of spectra and the Kepler dataset showing the dips in brightness. This bolsters the 'comet storm / icy-body breakup' theory: such a breakup would generate dust close to the star that rapidly goes away, and we would be primed to see a large fraction of it while it is generated close to the star, rather than a tiny fraction of dust farther away.

(This comes after the Allen Telescope Array, failing to detect anything interesting, put an upper limit on radio radiation coming from the system at 'weaker than 400x the strength we could put out with Arecibo in narrow bands, or 5,000,000x in wide bands', for what that's worth.)

[-][anonymous]8y130

Why is my karma so low? Is there something I'm consistently doing wrong that I can do less wrong? I'm sorry.

The first association I have with your username is "spams Open Threads with not really interesting questions".

Note that there are two parts in that objection. Posting a boring question in an Open Thread is not a problem per se -- I don't really want to discourage people from doing that. It's just that when I open any Open Thread, and there are at least five boring top-level comments by the same user, instead of simply ignoring them I feel annoyed.

Many of your comments are very general debate-openers, where you expect others to entertain you, but don't provide anything in return. Choosing your recent downvoted question as an example:

How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?

First, how do you estimate "threats and your ability to cope"? If you ask other people to provide their data, it would be polite to provide your own.

Second, what is your goal here? Are you just bored and want to start a debate that could entertain you? Or are you thinking about a specific problem you are trying to solve? Then maybe being more specific in the question could help to get you a more relevant answer. But the thing is, your not being specific seems like evidence for the "I am just bored and want you to entertain me" variant.

You use LW as a dumping ground for whatever crosses your mind at the moment, and that is usually random and transient noise.

6[anonymous]8y
Thanks. What counts as noise and what as signal to you, and what do you mean by transient?

By "transient" I mean that you mention a topic once and then never show any interest in it again. By "noise" I mean random pieces of text which neither contain useful information nor are interesting.

As I said before, I think it would be good if you get in the habit of trying to predict the votes that your posts get beforehand and then not post when you think that a post would produce negative karma.

One way to do this might be: whenever you write a post, keep it in a text file and wait a day. The next day, ask yourself whether there is anything you can do to improve it. If you feel you can improve it, do it. Then estimate a confidence interval for the karma you expect your post to get and take a note of it in a spreadsheet. If you think it will be positive, post your comment.

If you train that skill I would expect you to raise your karma and learn a generally valuable skill.

If at the end of writing a post you think "I’m not sure where I was going with this anymore." as in http://lesswrong.com/r/discussion/lw/mzx/some_thoughts_on_decentralised_prediction_markets/ , don't publish the post. If you yourself don't see the point in your writing it's unlikely that others will consider it valuable.
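The prediction-and-review habit described above (estimate a karma interval, record it, check later) could be logged with a few lines of code instead of a spreadsheet. A minimal sketch, with hypothetical example scores:

```python
# Log (predicted_low, predicted_high, actual) karma triples and
# check how often the actual score fell inside the predicted interval.
predictions = []  # each entry: (low, high, actual)

def record(low: int, high: int, actual: int) -> None:
    predictions.append((low, high, actual))

def hit_rate() -> float:
    """Fraction of posts whose actual karma landed in the predicted interval."""
    hits = sum(1 for low, high, actual in predictions if low <= actual <= high)
    return hits / len(predictions)

record(0, 5, 3)    # predicted 0..5, got 3  -> hit
record(2, 8, -1)   # predicted 2..8, got -1 -> miss
print(hit_rate())  # 0.5
```

If the hit rate is far below the coverage you intended (say, you meant the intervals to be 90% confident), that is the calibration signal to post less and revise more.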

7moridinamael8y
This is the best advice. The trick to keeping high karma is to cultivate your discernment. Each time you write a post, assess its value, and then delete it if you don't anticipate people appreciating it. View that deletion as a victory equal to the victory of posting a high-karma comment.
0Elo8y
I would be concerned that you might post with popular opinion rather than with valuable or worthwhile ideas. (If the caveat "worthwhile ideas, even if they sound unpopular" is included, then this is still a good strategy.)
2Tem428y
I second this. This is also a very important skill for work and personal emails, and anything having to do with social sites like Facebook.

Thank you for asking. I've been trying to figure out what to say to you, but couldn't figure out quite what the issue is. One possibility in terms of karma is to bundle a number of comments into a single comment, but this doesn't address how the comments could be better.

A possible angle to work on is being more specific. It might be like the difference between a new computer user and a more sophisticated computer user. The new user says "My computer doesn't work!", and there is no way to help that person from a distance until they say what sort of computer it is, what they were trying to do, and some detail about what happened.

Being specific doesn't come naturally to all people on all subjects, but it's a learnable skill, and highly valued here.

[-][anonymous]8y110

I think it's that you post a lot of questions and not a lot of content. Less Wrong is predisposed to upvoting high-content responses. I haven't had an account for very long, but I have lurked for ages. That's my impression, anyways. I recognize that since I haven't actually pulled comment karma data from the site and analyzed it, I could be totally off-base.

Maybe when you ask questions, use this form:

[This is a general response to the post] and [This is what is confusing me] but [I thought about it and I think I have the answer, is this correct?] or [I thought about it, came up with these conclusions, but rejected them for reasons listed here, I'm still confused]

EDIT: I just looked at your submitted history. You do post content in Main, apparently, but your posts seem to run counter to the popular ideas here. There is bias, and LessWrong has a lot of ideas deemed "settled." Effective Altruism appears to be one, and you have posted arguments against it. I've also seen some of your posts jump to conclusions without explaining your explicit reasons. LWers seem to appreciate having concepts reduced as much as possible to make reasoning more explicit.

7ChristianKl8y
Any group has a lot of ideas that are settled. If you want to convince any scientifically minded group that Aristotle's four elements are real, then you have to hit a high bar to not get rejected. If anything, LW allows a wide array of contrarian points. LW's second highest voted post is Holden's post against MIRI, which is contrarian to core ideas of this community in the same sense as a post criticizing EA is. The difference is that the post actually goes deep and makes a substantive argument.
1[anonymous]8y
I want to say that that's what I was trying to imply, but that might be backwards-rationalization. I do have the impression that contrarian ideas are accepted and lauded if and only if they're presented with the reasoning standards of the community. I'll be honest: LW does strike me as far-fetched in some respects BUT I recognize that I haven't done enough reading on those subjects to have an informed opinion. I've lurked but am not an ingrained member of the community and can't give a detailed analysis of the standards. Only my impression. AND I realize that this sounds defensive, and I know there's no real reason for my ego to be wounded. I appreciate your input! I hope that my advice to Clarity wasn't too far off the mark. I tried to be clear about my advice being based on impressions more than data. EDIT: removed "biased," replaced with "far-fetched."
0ChristianKl8y
Yes, LW does have reasoning standards. That's part of what refining the art of human rationality is about. What do you mean by "biased"? That LW is different than mainstream society in the ideas it values? Do you think it's a bias to treat badly reasoned posts which might result in people dying differently than harmless badly reasoned posts?
4[anonymous]8y
Obviously it has reasoning standards. They are much higher than the average person might expect, because that's one of the goals of the community. Bias was a poor word to use, and I retract my use of the term. I mean that as a relatively new participant, there are ideas that seem far-fetched because I have not examined the arguments for them. I admit that this is nothing more than my visceral reaction. Until I examine each issue thoroughly, I won't be able to say anything but "that viscerally strikes me as biased." Cryonics, for instance, is a conclusion that seems far-fetched because I have a very poor understanding of biology, and no exposure to the discussion around it. Without a better background in the science and philosophy of cryonics, I have no way of incorporating casual acceptance of the idea into my own conclusion. I recognize that, admit it, and am apparently not being clear about that fact. In trying to express empathy with a visceral reaction of disbelief, I misused the word "bias" and will be more clear in the future. On the second point: I understand that there's a cost to treating every post with the same rigor. Posts that are poorly reasoned, and come to potentially dangerous conclusions, should be examined more rigorously. Posts that are just as bad, but whose conclusions are less dangerous, can probably be taken less seriously. Even so...someone who makes many such arguments, with a mix of dangerous and less-dangerous conclusions, might see a lack of negative feedback as positive feedback. That's an issue in itself, but newcomers wouldn't be in a position to recognize that.
3ChristianKl8y
Cryonics is not a discussion that's primarily about biology. A lot of outsiders will want to either think that cryonics works or that it doesn't. On LW there is a current that we don't make binary judgements like that but instead reason with probabilities. So thinking that there is a 20% chance that cryonics works is enough for people to go out and buy cryonics insurance, because of the huge value that cryonics has if it succeeds. That's radically different than how most people outside of LW think.
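The reasoning described here is a plain expected-value comparison. A sketch using the 20% figure from the comment; the payoff and cost numbers are purely hypothetical placeholders, not real quotes:

```python
# Expected-value framing of the decision described above: act when
# probability-of-success times payoff exceeds the cost. The 0.2 is the
# 20% from the comment; the dollar figures are invented for illustration.
def worth_buying(p_works: float, value_if_works: float, cost: float) -> bool:
    """Buy when the expected payoff exceeds the cost."""
    return p_works * value_if_works > cost

# Even at only 20%, a large enough payoff dominates a modest cost:
print(worth_buying(0.2, 1_000_000, 100_000))  # True
print(worth_buying(0.2, 1_000_000, 300_000))  # False
```

The point of the framing is exactly this asymmetry: a sub-50% probability can still justify the purchase if the value conditional on success is large enough relative to the price.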
2Viliam8y
Well, the biological aspect is "where exactly in the body is 'me' located"? For example, many people on LW seem to assume that the whole 'me' is in the head; so you can just freeze the head, and feed the rest to the worms. Maybe that's a wrong idea; maybe the 'me' is much more distributed in the body, and the head is merely a coordinating organ, plus a center of a few things that need to work really fast. Maybe if future science revives the head and connects it to some cloned/artificial average human body, we will see the original personality replaced by more or less an average personality; perhaps keeping the memories of the original, but unable to empathise with the hobbies or values of the original.
0ChristianKl8y
Whether you need to freeze the whole body or whether the head is enough is a meaningful debate, but it has little to do with why a lot of people oppose cryonics.
0NancyLebovitz8y
At this stage, I can see an argument for freezing the gut, or at least samples of the gut, so as to get the microbiome. Anyone know about reviving frozen microbes?
1Lumifer8y
It's not hard. IIRC people revived microbes which had been frozen in permafrost for tens of thousands of years.
1[anonymous]8y
I understand that; I'm still not comfortable enough with the discussion about cryonics to bet on it working.
0ChristianKl8y
Do you have a probability in your head about cryonics working or not working, or do you feel uncomfortable assigning a probability?
1[anonymous]8y
A little of both, I think. 1. There is evidence for and against cryonics that I KNOW exists, but I haven't parsed most of it yet. 2. If I come to the conclusion that cryonics insurance is worth betting on, I am not sure I can get my spouse on board. Since he'd ultimately be in charge of what happens to my remains, AND we have an agreement to be open about our financial decisions, him being on board is mandatory. 3. If I come to the conclusion that cryonics is worth betting on, I might feel morally obligated to proselytize about it. That has massive social costs for me. 4. I'm freaked out by the concept because very intelligent people in my life have dismissed the concept as "idiotic," and apparently cryonics believers make researchers in the field of cryogenics very uncomfortable. Basically, it's a whole mess of things to come to terms with. The spouse thing is the biggest.
4ChristianKl8y
I think those concerns are understandable but the thing that makes LW special is that discourse here often ignores uncomfortable barriers of thought like this. That can feel weird for outsiders.
7entirelyuseless8y
A large proportion of your comments seem very distracting and sort of off-topic for Less Wrong.
2[anonymous]8y
Thanks. Can I have an example which is either self-evidently distracting and off-topic, or an explanation of why it is?
1NancyLebovitz8y
I looked at a few pages of your comment history to see if I could find a particularly horrible example to base an explanation on (entirelyuseless's link is appropriate), but I was surprised to find that the vast majority of your comments had no karma rather than downvotes. I'm not sure what you need to do to upgrade or edit out your typical comment. Possibly you could review your upvoted comments to see how they're different from your usual comments.
0entirelyuseless8y
This is a sufficiently evident example.
5Richard_Kennaway8y
In addition to what everyone else has said, here's a useful article on how to ask smart questions. It's talking about asking technical questions on support forums, but the matter generalises, especially the advice to make your best effort to answer it yourself, before asking it publicly, and when you do, to provide the context and where you have got to already.
0[anonymous]8y
Thanks, that article is incredible. I hope to see one about how to answer questions, and how to understand answers, too! After reading, some contemplation on the matter, and some chance happenings upon information I feel is relevant to the issue, I believe I've changed a lot: Recently a highly admired friend of mine said something along the lines of 'I've never said anything that wasn't intentional'. Whereas for me, most of what I say is unintentional, just observed. So this got me thinking pretty hard about these things. Being on my mind, I suppose I got the following sliver of personal development when I started looking up some podcasts to comfort myself the following day: I'm vain. When I listen to things, personal development podcasts or not, I tend to look for what could be about me. I sampled the Danger and Play podcasts and like what I've heard. Inspired by the way he frames self-talk as interpersonal illocution, my mental landscape has changed steeply. One consequence of this has been that I'm no longer held captive to 'believing' the first thought or idea that comes to my head. Rather, it's as if it's just one mental subagent's proposition, to be contested and such. I am now biased towards reserving my thoughts till a more complex stopping rule, like coming to a conclusion that a certain verbalisation would lead to a certain outcome (e.g. the conclusion is positive emotionally, raises my anxiety to an optimal level, and/or is functional by way of interpersonal compliance), rather than something that just spews from my mind. Perhaps a precursor to this has been a general dampening of how seriously I've been taking my moral intuitions. I've contextualised them in terms of the fact that they are predated by evolutionary forces, context, and such. Approximately an expressivist position, championed sometimes by A. J. Ayer and the logical positivists, regarding moral language, if I remember the Wikipedia page correctly...but even say, in ingratituate
3polymathwannabe8y
Usually, your questions feel more suited for a general-purpose forum than the narrowly specialized set of interests commonly discussed here. (We do have "Stupid Questions" and "Instrumental Rationality" threads, but even those follow the same standards for comment quality as the rest of LW.) Also, posting a dozen questions in succession may give users the impression that you're trying to monopolize the discussion. Even if that's not your intention, I would understand it if some users ended up thinking it is. I would suggest looking for specialized forums on some of the topics that interest you, and using LW only for topics likely to be of interest to rationalists.
4[anonymous]8y
Thanks. Do you have a suggestion for another forum you recommend I move to?
6polymathwannabe8y
I don't know much about topic-specific forums, but seeing as you like to ask frequent questions, Reddit and Quora come to mind.
3MrMind8y
Many of your comments get downvoted, sometimes heavily. In every open thread you post a lot of questions, some of them completely off topic. A single good question in the open thread can give you 2-3 karma, but a single bad one can go down to -7 or less. So stop asking so many irrelevant questions and start contributing.
1Elo8y
As a hard rule: when posting in open, the ratio of your posts to posts by others should always be below 1:3 (others might want to comment and suggest 1:4). You should post less than 1 in 4 of the posts in the open thread. They often read like a stream of consciousness (I think you know this already), and you might be better off taking on board some of the ideas of sitting on thoughts for a day or so and re-evaluating them for yourself before posting. As a side note: presentation of an idea can help its reception. We are still human, and do care for delicate wording on some topics.
2[anonymous]8y
Thanks. I do tend to sit on my ideas, or I like to post and update those posts or reply with reflections upon revisiting those thoughts, so that I and others can see how my thinking changes over time. My ratio is only that high when there is a new open thread. Since I post in blocks, by formulating several posts and then posting them when I next get a chance, it may appear early on that my ratio is high. But by the end of the month, I am certainly nowhere near that ratio. I am continuously trying to improve my presentation. Unfortunately, to date I have received minimal specific feedback on how to improve presentation. Sometimes I feel the stream-of-consciousness approach shows the way I'm thinking about a certain thing more vividly.
2gjm8y
It may well do, but illustrating the way you're thinking about something isn't necessarily a good goal here. Why should anyone else care how you happen to be thinking about something? There may be special cases in which they do. If you are a world-class expert on something it could be very enlightening to see how you think about it. If you are just a world-class thinker generally, it might be fascinating to see how you think about anything. Otherwise, not so much.
0Elo8y
It may be worth releasing the posts gradually over the course of the week so as to not make it look like a clump (and again paying attention to that ratio). I agree that you seem to post in a chunk once a week, but it may serve better to spread out your posts.
0SanguineEmpiricist8y
Don't buy these comments too much. I'm glancing through them, and they're much too critical. Listen to Nancy if anyone.

What is the optimal amount of attention to pay to political news? I've been trying to cut down to reduce stress over things I can't control, but ignoring it entirely seems a little dangerous. For an extreme example, consider the Jews in Nazi Germany - I'd imagine those who kept an eye on what was going on were more likely to leave the country before the Holocaust. Of course something that bad is unlikely, but it seems like it could still be important to be aware of impactful new laws that are passed - eg anti-privacy laws, or internet piracy now much more heavily punishable, etc.

So what's the best way to keep up on things that might have an impact on one's life, without getting caught up in the back-and-forth of day-to-day politics?

Some things to think about:

Are there actual political threats to you in your own polity (nation, state, etc.)? Do you belong to groups that there's a history of official repression or large-scale political violence against? Are there notable political voices or movements explicitly calling for the government to round you up, kill you, take away your citizenship or your children, etc.? (To be clear: An entertainer tweeting "kill all the lawyers" is not what I mean here.)

Are you engaged in fields of business or hobbies that are novel, scary, dangerous, or offensive to a lot of people in your polity, and that therefore might be subject to new regulation? This includes both things that you acknowledge as possibly harmful (say, working with poisonous chemicals that you take precautions against, but which the public might be exposed to) as well as things that you don't think are harmful, but with which other people might disagree. (Examples: Internet; fossil fuels; drones; guns; gambling; recreational drugs; pornography)

Internationally — In the past two hundred years, how often has your country been invaded or conquered? How many civil wars, coups d'état, or failed wars of independence have there been; especially ones sponsored by foreign powers? How much of your country's border is disputed with neighboring nations?

1Lumifer8y
I do like the list :-)
6NancyLebovitz8y
For the extreme stuff, I think you'll get clues from things like how people like you are treated on the street -- if it's your country. If you're at risk of being conquered by a government that hates you, the estimate is more complicated. For the more likely things to keep track of, think about what's likely to affect you (like changes in laws) and use specialist sources.
4VoiceOfRa8y
This is harder than it seems. For example, to find out when you need to withdraw your money ahead of a banking crisis, like what happened in Cyprus and Greece, you need to figure this out ahead of everybody else. Furthermore, the authorities are going to be doing their best to cover up the impending crisis.
4Lumifer8y
To electioneering, zero would be about right (unless you appreciate the entertainment value). To particular laws and/or regulations which might affect you personally, enough to know the landscape.
2ChristianKl8y
If you live in the US I would guess that if you read LW you will see comments about really important political events.
1Elo8y
How I do it -- things I care about: local events (likelihood of terrorism or safety threats nearby). Things I don't care about: any politics further away than that (and not likely to affect my life), including global, country-wide, or natural disasters that are far away.
-1Tem428y
Get weekly updates from light, happy sources (The Daily Show, The News Quiz, Mock the Week), and then specific searches for things that sound important.
4VoiceOfRa8y
Those strike me as worse than useless for the kind of things ShardPhoenix is interested in, e.g., they are the kinds of shows that would mock the "idiots" who believe the "ridiculous conspiracy theory" that the Nazis are actually planning to systematically exterminate the Jews.
4Viliam8y
I wondered how something called "Mock the Weak" would be considered a "happy source"... then I noticed the two "e"s

So, it seems like lots of people advise buying index funds, but how do I figure out which specific ones I should choose?

Short version: try something like Vanguard's online recommendation, or check out Wealthfront or Betterment. Probably you'll just end up buying VTSMX.

Long version: The basic argument for index funds over individual stocks is that you think that a broad basket of stocks is going to outperform any individual stock because of general economic growth and reduced risk through pooling. So if you apply the same logic to index funds, what that argues is that you should find the index fund that covers the largest possible pool.
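The risk-pooling half of that argument can be illustrated with a toy simulation. (The 8%/20% return distribution is made up, and real stocks are correlated rather than independent, so this overstates the benefit -- it's just a sketch of the mechanism.)

```python
import random
import statistics

random.seed(0)

# Toy model: each stock's annual return is drawn independently
# from the same distribution (8% mean, 20% standard deviation).
def stock_return():
    return random.gauss(0.08, 0.20)

def simulate(n_stocks, trials=20000):
    """Std dev of the return of an equal-weighted pool of n_stocks."""
    returns = [statistics.mean(stock_return() for _ in range(n_stocks))
               for _ in range(trials)]
    return statistics.stdev(returns)

# Pooling n independent stocks shrinks volatility roughly as 1/sqrt(n),
# while the expected return stays the same.
print(simulate(1))    # ~0.20
print(simulate(25))   # ~0.04
```

With independent holdings the expected return is unchanged but the volatility falls like 1/sqrt(n); correlation between real stocks is why diversification helps less than this toy model suggests.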

But it also becomes obvious that this logic only stretches so far--one might think that meta-indexing requires having a stock index fund and a bond index fund that are both held in proportion to the total value of stocks and bonds. So let's start looking at the factors that push in the opposite direction.

First, historically stocks have returned more than bonds long-term, with higher variability. It makes sense to balance your holdings based on your time and risk preferences, rather than the total market's time and risk preferences. (If you're young, preferentially own stocks.)

As well, you might live in the US, for example, and find it more legally convenient to own US stocks than international stocks. The co... (read more)

4solipsist8y
Asset allocation (what portion of your money is in stocks and bonds) is very important, depends on your age, and will get out of whack unless you rebalance. So use a Vanguard Target Retirement Date fund.
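Rebalancing itself is mechanically simple -- sell whatever has drifted above its target weight and buy whatever has drifted below -- which is what a target-date fund automates for you. A minimal sketch with hypothetical dollar amounts:

```python
def rebalance_orders(holdings, targets):
    """Return the dollar amount to buy (+) or sell (-) of each asset
    so the portfolio matches its target weights."""
    total = sum(holdings.values())
    return {asset: targets[asset] * total - holdings[asset]
            for asset in holdings}

# A 70/30 stock/bond portfolio after a stock run-up has pushed it to 80/20:
orders = rebalance_orders(
    holdings={"stocks": 80_000, "bonds": 20_000},
    targets={"stocks": 0.70, "bonds": 0.30},
)
print(orders)  # stocks: sell ~$10k; bonds: buy ~$10k
```

The point of doing this on a schedule is that it forces you to sell what has recently risen and buy what has recently fallen, keeping your risk exposure constant.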
0Lumifer8y
There are more financial assets than just stocks and bonds.
2banx8y
Yes, but those are the important ones. Stocks for high expected returns and bonds for stability. You can generalize "bonds" to include other things that return principal plus interest like cash and CDs.
-3Lumifer8y
What's the criterion of importance? Um.... I hate to break it to you...
2banx8y
Important to the goal of increasing one's wealth while managing the risk of losing it. Certainly there are other possible goals (perhaps maximizing the chance of having a certain amount of money at a certain time, for example) but this is the most common, and the one that I assume people on LW discussing basic investing concepts would be interested in. I'm not sure if you're referring to the fact that popular banks are returning virtually zero interest or if you're interpreting "cash" as "physical currency notes". If the former, I have cash in bank accounts that return .01%, 1%, and 4.09% (each serving different purposes). If the latter, I apologize for the confusion. The word is used to mean different things in different contexts. In the context of investing it is standard to include in its meaning checking and savings accounts, and often also CDs.
0Lumifer8y
Given this definition, I don't see why only stocks and bonds qualify. True, but given that you said "cash and CDs" I thought your idea of cash excludes deposits. Still, there are more asset classes than equity and fixed income.
2banx8y
My claim is that equity and fixed income are the important pieces for reaching that goal. With a total stock index fund and a total bond index fund you can achieve these goals almost as well as any other more complicated portfolio. Additional asset classes can add additional diversification or hedge against specific risks. What other asset classes do you have in mind? Real estate? Commodities? Currencies? Fair enough. I was unclear.
0Lumifer8y
They are, of course, important. The question is whether they are the only important pieces. Real estate is the most noticeable thing here, given how for a lot of people it is actually their biggest financial investment (and often highly leveraged, too). Commodities and such generally require paying at least some attention to what's happening and the usual context of financial discussions on LW is the "into what can I throw my money so that I can forget about it until I need it?"
6Richard_Kennaway8y
I have a secondary question to that. These things seem to all operate online only, without bricks and mortar. How do I assure myself that a website that I have never seen before is trustworthy enough to invest, say, 6-figure sums of money in? Are there official ratings or registers, for probity rather than performance?
7Vaniver8y
That's easy to answer for Vanguard, which has been around since 1975 and has $3T under management. It's not going anywhere. Both Wealthfront and Betterment were founded in 2008, in Palo Alto and NYC respectively, and have about $2B and $3B under management. I don't think there are any official ratings of probity out there; I'm not sure there's a good source besides trawling through the business press looking for red flags.
3[anonymous]8y
You may want to check if the brokerage firm/custodian is a member of SIPC, which provides a level of insurance against misappropriation. I think all the big names are members (Vanguard, Schwab, TD Ameritrade, Fidelity, etc.) http://www.sipc.org/for-investors/what-sipc-protects
6[anonymous]8y
The best argument for getting an index fund is the expense ratio; not broad versus narrow. Managed mutual funds have higher expense ratios because of the broker's salary. Private trading instead of buy and hold will similarly cost you more because of the transaction cost. To justify their transactions, a broker doesn't just have to beat the market, but to beat the market by a large enough swing to justify those extra costs. Because of the number of brokers out there, even if one has consistently beaten the market, it is impossible to determine whether that is due to skill or luck for any given broker. Large domestic index funds will generally have the lowest expense ratios.
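The drag from expense ratios compounds over time. A back-of-the-envelope comparison (the 7% gross return and the two fee levels are illustrative assumptions, not figures from the comment):

```python
def final_value(principal, gross_return, expense_ratio, years):
    """Grow principal at the gross return minus the annual expense ratio."""
    return principal * (1 + gross_return - expense_ratio) ** years

cheap = final_value(10_000, 0.07, 0.0005, 30)   # e.g. a 0.05% index fund
pricey = final_value(10_000, 0.07, 0.01, 30)    # e.g. a 1% managed fund
print(round(cheap), round(pricey), round(cheap - pricey))
```

Over 30 years the roughly one-percentage-point fee gap eats a large fraction of the final balance, which is why a manager must beat the market by well more than their fee to be worth it.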
5Curiouskid8y
So, I think the correct answer to the question "I have a 5-figure sum of money to invest" is to just go with Betterment/Wealthfront rather than Vanguard, so that you get diversification between asset classes (whereas a specific index fund will get you diversification within an asset class). If I'd known this when I'd asked the question, I would have picked a better mix of Vanguard index funds, and not hesitated as much with figuring out where to put the money. To be fair, Vaniver basically said this, I just think the links below explain it better, so I could feel certain enough to make a decision rather than let the money burn away through inflation. http://www.mrmoneymustache.com/2012/02/17/book-review-the-intelligent-asset-allocator/ http://www.mrmoneymustache.com/2014/11/04/why-i-put-my-last-100000-into-betterment/
2Vaniver8y
MMM is in general excellent, and that's convinced me to move Betterment above Vanguard in my recommendation list in the future.
1Lumifer8y
You need to figure out things like your own risk tolerance, your own time horizons for investments, and your own ideas about what might happen (or not) in the econo-financial world within your time horizons.

Meta-research: Evaluation and Improvement of Research Methods and Practices by John P. A. Ioannidis, Daniele Fanelli, Debbie Drake Dunne, and Steven N. Goodman.

As the scientific enterprise has grown in size and diversity, we need empirical evidence on the research process to test and apply interventions that make it more efficient and its results more reliable. Meta-research is an evolving scientific discipline that aims to evaluate and improve research practices. It includes thematic areas of methods, reporting, reproducibility, evaluation, and incentives

... (read more)
0[anonymous]8y
Hope this kind of work gets decent funding...

The prediction market I was using, iPredict is closing. Apparently it represents a money laundering risk and the Government refused to grant an exemption. Does anyone know any good alternatives?

1Douglas_Knight8y
I asked about this recently. I think that the sports bookie Betfair is the best existing option, in terms of liquidity and diversity of topics. The only prediction markets that I know to be open to Americans are the Iowa Electronic Markets and PredictIt, both with smaller limits than iPredict.
0Elo8y
you should post this on the next OT

Paper in Nature about differences in gene expression correlated with chronological age.

tl;dr -- "We identified 1,497 genes that are differentially expressed with chronological age."

Quickdraw conclusion: this will require A LOT of silver bullets.

9ChristianKl8y
I don't think we learn a lot through the number. It might be that multiple genes are regulated by the same mechanism and turning that mechanism down brings us forward.
3[anonymous]8y
Indeed, not only is this looking only at the very broad end results of what is seen to co-vary with age in a regular way, completely agnostic to mechanism; it is also looking only at gene expression in peripheral blood, one very highly specialized (to the point of being a liquid) tissue type.
0zslastman8y
Yeah it doesn't say much. For one thing I'd say it's just about all of the genes that are differentially expressed, if you look hard enough. Regardless, that doesn't tell us how many of them really matter with respect to the things we care about, how many causal factors are at work, or how difficult it will be to fix. Doesn't rule out a single silver bullet aging cure (though other things probably do)

Are there any studies that highlight which biases become stronger when someone "falls in love"? (Assume the love is reciprocated.) I am mainly interested in biases that affect short- and medium-term decisions, since the state of mind in question usually doesn't last long.

One example is the apparently overblown use of the affect heuristic when judging the goodness of the new partner's perceived characteristics and actions (the halo effect on steroids).

4RicardoFonseca8y
Here is a study finding that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control": free copy%20Reduced%20cognitive%20control%20in%20passionate%20lovers.pdf) / springer link Also, while I was searching for studies, I found a news article saying this about a study by Robin Dunbar: "The research, led by Robin Dunbar, head of the Institute of Cognitive and Evolutionary Anthropology at Oxford University, showed that men and women were equally likely to lose their closest friends when they started a new relationship." More specifically, the study found the average number of lost friends per new relationship was two. Except there is no publicly published paper anywhere online, despite what the news article says, there are only quotes by Dunbar at the 2010 British Science Festival, which seems a bit suspicious to me, maybe suggesting that the study was retracted later.
5[anonymous]8y
It's not necessarily that the study was retracted. The news article from the Guardian you linked mentioned that the study was submitted to the journal Personal Relationships; this means it had not yet been accepted for publication. And indeed it looks like that study never got published there despite all the media coverage. Actually it has finally come out, 5 years later! Burton-Chellew, M.N and Dunbar, Robin I. M. (2015). Romance and reproduction are socially costly. Evolutionary Behavioral Sciences, 9(4), 229-241. http://dx.doi.org/10.1037/ebs0000046 From the abstract
0RicardoFonseca8y
Nice! Good to know the information is (more) reliable after all :)
1LessRightToo8y
A study that relies only on self-reported claims of 'being in love' might be interesting to read, but such a study would be of higher quality if there was an objective way to take a group of people and sort them into one of two groups: "in love" or "not in love." Based on my own experience and experiences reported by others, I wouldn't reject the notion that such a sorting is possible in principle, although it may be beyond our current technological capability. The pain associated with being suddenly separated from someone that you have 'fallen in love with' can rival physical pain in intensity. What type of instrumentation would we need to detect when a person is primed for such a response? I have no idea.
1ChristianKl8y
No, not automatically. An objective measurement can be both worse and be better than a self-reported measurement. There no reason to believe that one is inherently better.
0LessRightToo8y
New material added to this thread uses the phrase being in a relationship rather than being in love. I found the latter phrase problematic because it involves a poorly defined mental state that has changed meaning over time. The former phrase is objectively verifiable by external observers. I have read a book or two on the Design of Experiments over the years purely for intellectual curiosity; I've never actually defined and run a scientific experiment. So I don't have anything worthwhile to say on the general topic of the relative value of objective vs. subjective measurements in scientific studies.
0RicardoFonseca8y
Why do you think "a person being primed for feeling pain when being separated from their new partner" matters here? Are you thinking about studies that, at the very least, suggest the possibility of such a separation being an option that the subject will experience based on the outcome of some action/decision being studied? :( that's horrible ):
1LessRightToo8y
An objectively verifiable indication that an animal has pair-bonded would be a visible indication of distress when forcibly separated from his/her mate. I'm not suggesting that this is the best way to determine whether an animal has pair-bonded. For example, an elevated level of some hormone in the blood stream (a "being in love" hormone) that reliably indicates being pair-bonded would be a superior objectively verifiable indication (in my opinion) because it doesn't involve causing distress in an animal. I'm not a biologist - just an occasional recreational reader of popular works in biology. So, my opinion isn't worth much.
1RicardoFonseca8y
Right now, it seems that "passionate love" is measured in a discrete scale based on answers to a questionnaire. The "Passionate Love Scale" (PLS) is mentioned in this blog post and was introduced by this article in 1986. In my other reply to my original comment I showed a study%20Reduced%20cognitive%20control%20in%20passionate%20lovers.pdf) that finds that "high levels of passionate love of individuals in the early stage of a romantic relationship are associated with reduced cognitive control", in which they use the PLS.
5passive_fist8y
It always seemed very strange to me how, despite the obvious similarities and overlaps between mathematics and computer science, the use of computers for mathematics has largely been a fringe movement and mathematicians mostly still do mathematics the way it was done in the 19th century. This even though precision and accuracy is highly valued in mathematics and decades of experience in computer science has shown us just how prone humans are to making mistakes in programs, proofs, etc. and just how stubbornly these mistakes can evade the eyes of proof-checkers.
6Sarunas8y
Correctness is essential, but another highly desirable property of a mathematical proof is its insightfulness, that is, whether it contains interesting and novel ideas that can later be reused in others' work (often these ideas are regarded as more important than the theorem itself). These others are humans, and they desire, let's call it, "human-style" insights. Perhaps if we had AIs that "desired" "computer-style" insights, some people (and AIs) would write their papers to provide them and investigate problems that are most likely to lead to them. Proofs that involve computers are often criticized for being uninsightful. Proofs that involve steps that require the use of computers (as opposed to formal proofs that employ proof assistants) are sometimes also criticized for not being human-verifiable, because while humans make mistakes and computer software can contain bugs, mathematicians can sometimes use their intuition and sanity checks to find the former, but not necessarily the latter. Mathematical intuition is developed by working in an area for a long time and being exposed to various insights, heuristics, and ideas (mentioned in the first paragraph). Thus not only are computer-based proofs harder to verify, but if an area relies on a lot of non-human-verifiable proofs, it may also be significantly harder to develop an intuition in that area, which might then make it harder for humans to create new mathematical ideas. It is probably easier to understand the landscape of ideas that were created to be human-understandable. That is neither to say that computers have little place in mathematics (they do: they can be used for formal proofs, generating conjectures, or gathering evidence for what approach to use to solve a problem), nor is it to say that computers will never make human mathematicians obsolete (perhaps they will become so good that humans will no longer be able to compete). However, it should be noted that some people have different opinions.
0passive_fist8y
Automated theorem proving is a different problem entirely and it's obviously not ready yet to take the place of human mathematicians. I'm not in disagreement with you here. However there's no conflict between being 'insightful' and 'intuitive' and being computer-verifiable. In the ideal case you would have a language for expressing mathematics that mapped well to human intuition. I can't think of any reason this couldn't be done. But that's not even necessary -- you could simply write human-understandable versions of your proofs along with machine-verifiable versions, both proving the same statements.
3Richard_Kennaway8y
Substantial work has been done on this. The two major systems I know of are Automath (defunct but historically important) and Mizar (still alive). Looking at those articles just now also turns up Metamath. Also of historical interest is QED, which never really got started, but is apparently still inspiring enough that a 20-year anniversary workshop was held last year. Creating a medium for formally verified proofs is a frequently occurring idea, but no-one has yet brought such a project to completion. These systems are still used only to demonstrate that it can be done, but they are not used to write up new theorems.
2Vaniver8y
I thought there were several examples of theorems that had only been proved by computers, like the Four Color Theorem, but that they're sort of in their own universe because they rely on checking thousands of cases, and so not only could a person not really be sure that they verified the proof (because the odds of them making a mistake would be so high) they couldn't get much in the way of intuition or shared technique from the proof.
3Richard_Kennaway8y
Yes, although as far as I know things like that, and the FCT in particular, have only been proved by custom software written for the problem. There's also a distinction between using a computer to find a proof, and using it to formalise a proof found by other means.
3Douglas_Knight8y
Indeed, the computer-generated proofs of 4CT were not only not formal proofs, they were not correct. Once a decade, someone would point out an error in the previous version and code his own. But now there is a version for an off the shelf verifier.
0IlyaShpitser8y
People are working on changing that (at CMU for example).
0MrMind8y
I think the difficulty is in part due to the fact that mathematicians use classical metalogic (e.g. proof by contradiction) which is not easily implemented in a computer system. The most famous mathematical assistant, Coq, is based on a constructive type theory. Even the univalence program, which is ambitious in its goal to formalize all mathematics, is based on a variant of intuitionistic meta-logic.
0bogus8y
Converting most of existing math into formal developments suitable for computer use would be a huge undertaking, possibly requiring several hundred man-years of work. Most people aren't going to work on such a goal with any seriousness until it's clear to them that the results will in fact be widely used. This in turn requires further work in order to come up with lightweight, broadly-applicable logical foundations/frameworks, as well as more work on the usability of proof environments. Progress on these things has been quite slow, although we have seen some encouraging news lately, such as the recent 'formal proof' of the Kepler conjecture. And even that was actually a bunch of formal proofs developed under quite different systems, that can be argued to solve the conjecture only when they're somehow combined. I think this example makes it abundantly clear that current approaches to this field - even at their most successful - do have non-trivial drawbacks.
0passive_fist8y
You're speaking of unifying all of math under the same system. I don't think that's strictly necessary, or even desirable. The computer science equivalent of that would be a development environment where every algorithm in the literature is implemented as a function. I'm wondering more about why problem-specific computer-verifiable proofs aren't used.
2bogus8y
The problem is, no matter how 'problem-specific' your proofs are, they aren't going to be 'verifiable' unless you specify them all the way down to some reasonable foundation. That's the really big undertaking, so you'll want to unify things as much as possible, if only to share whatever you can and avoid any duplication of effort.
0passive_fist8y
If that's true then it logically follows that most existing mathematics literature is un-verifiable - a statement that I think mathematicians would take issue with. After all, that's not how most mathematics literature is presented.
4Viliam8y
I agree with that. In the future, it would be best to derive everything from the axioms. (Using libraries where the frequently used theorems are already proved.) The problem is, the most simple theorems that we can derive from the axioms quickly are not important enough to pay for the development and use of the software. So a better approach would be for the system to accept a few theorems as (temporary) axioms. Essentially, if it would be okay to use the Pythagorean theorem in a scientific paper without proving it, then in the first version of the program it would be okay to use the Pythagorean theorem as an axiom -- displaying a warning "I have used the Pythagorean theorem without having a proof of it". This first version would already be helpful at verifying current papers. And there is an option to provide the proof of the Pythagorean theorem from first principles later. If you add it later, you can re-run the papers and get the results with fewer warnings. If the Pythagorean theorem happens to be wrong, as long as you have provided the warnings for all papers, you know which ones of them to retract. Actually, I believe such systems would be super helpful e.g. in set theory, when you want to verify whether the proof you used relies on the axiom of choice. Because even if you didn't use it directly, maybe one of the theorems you used was based on it. Generally, using different sets of axioms could become easier.
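The "temporary axiom" workflow described above is roughly what proof assistants already support. A hypothetical sketch in Lean 4, where the commutativity statement is just a stand-in for something like the Pythagorean theorem:

```lean
-- Accept a known result as an axiom for now; a real proof can be
-- supplied later without changing anything built on top of it.
axiom add_comm_ax : ∀ m n : Nat, m + n = n + m

theorem swap_sum (a b : Nat) : a + b = b + a :=
  add_comm_ax a b

-- Lists every unproved assumption `swap_sum` depends on --
-- exactly the kind of warning a paper-verifying system could emit.
#print axioms swap_sum
```

Re-running `#print axioms` after the axiom is replaced by a genuine theorem would show the warning disappear, and the same command reveals dependence on things like the axiom of choice.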
0passive_fist8y
Yes that's an insightful way of looking at how computer verification could assist in real mathematics research. Going back to the CS analogy, programmers started out by writing everything in machine language, then gradually people began to write commonly-used functions as libraries that you could just install and forget about (they didn't even have to be in the same language) and they wrote higher-level languages that could automatically compile to machine code. Higher and higher levels of abstraction were recognized and implemented over the years (for implementing things like parsers, data structures, databases, etc.) until we got to modern languages like python and java where programming almost feels like simply writing out your thoughts. There was very little universal coordination in all of this; it just grew out of the needs of various people. No one in 1960 sat down and said, "Ok, let's write python."
0Lumifer8y
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware. As to "writing out your thoughts", people did design such a language in 1959... P.S. Oh, and do your thoughts flow like this..?

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
2passive_fist8y
That the implementation of python is fairly slow is a different matter, and high-level languages need not be any slower than, say, C or Fortran, as modern JIT languages demonstrate. It just takes a lot of work to make them fast. Lisp was also designed during that same period and probably proves your point even better. But 1960's Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.
2bogus8y
Computerized math is the same today. No one wants to write everything they need from scratch, unless they're working in a genuinely self-contained (i.e. 'synthetic') subfield where the prereqs are inherently manageable. See programming languages (with their POPLmark challenge) and homotopy-type-theory as examples of such where computerization is indeed making quick progress.
0Lumifer8y
Umm... LISP is elegant and expressive -- you can (and people routinely do) construct complicated environments including DSLs on top of it. But that doesn't make it high-level -- it only makes it a good base for high-level things. But if you use "high-level" to mean "abstracted away from the hardware" then yes, it was, but that doesn't have much to do with "writing out your thoughts".
0bogus8y
LISP was definitely a thing in the 1960s, and python is not that different. For a long time, the former was pretty much 'the one' very-high-level, application-oriented language. Much like Python or Ruby today.
0Lumifer8y
8-0 Allow me to disagree. Allow me to disagree again. LISP was lambda calculus made flesh and was very popular in academia. Outside of the ivory towers, the suits used COBOL, and the numbers people used Fortran (followed by a whole lot of Algol-family languages) to write their applications.

Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny (at least about 10^-24), then with very high probability humans are not the only civilization to have ever been in the observable universe, and a similar result holds for the Milky Way with around 10^-10 as the relevant probability. Article about paper is here and paper is here.
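The shape of that claim is easy to sanity-check with a Poisson-style approximation. (The planet count below is a rough illustrative figure, not the paper's exact input.)

```python
import math

def prob_we_are_alone(p_civ, n_planets):
    """P(no other civilization ever arose), treating planets as independent:
    (1 - p)^N is approximately exp(-p * N) for small p."""
    return math.exp(-p_civ * n_planets)

# Assume ~10^24 habitable-zone planets in the observable universe:
print(prob_we_are_alone(1e-24, 1e24))  # e^-1, about 0.37
# Slightly better per-planet odds make company near-certain:
print(prob_we_are_alone(1e-22, 1e24))  # e^-100, vanishingly small
```

The crossover is sharp because the exponent scales linearly in p * N, which is why the paper can state a clean threshold probability.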

Do transhumanist types tend to value years of life lived past however long they'd expect to live anyway linearly (i.e., if they'd pay a maximum of exactly n to live an extra year, would they also be willing to pay a maximum of exactly 100n to live 100 extra years)?

If so, the cost effectiveness of cryonics (in terms of added life years lived) could be compared with the cost effectiveness of other implementable health interventions would-be cryonicists are on the fence on. What's the marginal disutility that a given transhumanist might get from forcing ... (read more)
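One way to see what linearity commits you to is to compare it with an exponentially discounted valuation. (The $10,000-per-year figure and the 3% discount rate below are arbitrary assumptions for illustration.)

```python
def value_linear(n_years, value_per_year):
    """Linear valuation: each extra year is worth the same."""
    return n_years * value_per_year

def value_discounted(n_years, value_per_year, rate=0.03):
    """Discounted valuation: year t is worth value_per_year * (1-rate)^t,
    so the total willingness to pay is bounded even for huge n_years."""
    return sum(value_per_year * (1 - rate) ** t for t in range(n_years))

# If an extra year is worth n = $10,000:
print(value_linear(100, 10_000))             # 1000000, i.e. exactly 100n
print(round(value_discounted(100, 10_000)))  # much less than 100n
```

Under linear valuation, 100 extra years are worth exactly 100n, which is what makes cryonics look cheap per expected life-year; under even mild discounting the sum converges and the comparison with ordinary health interventions changes a lot.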

3HungryHobo8y
The levels of uncertainty make this really hard to work with. On the one hand, perhaps it works and the person gets to live for billions of deeply fulfilling years, till the heat death of the universe, experiencing 10x subjective time, giving trillions of QALYs. Or perhaps they get awoken into a world where life extension is possible but legally limited to a couple hundred years. Or perhaps they get awoken into a world where they're considered on the same moral level as lab rats and millions of copies of their mind get to suffer in countless interesting ways. So you end up with a very wide range of values, from negative to trillions of QALYs, with no way to assign reasonable probabilities to anything in the range, which makes cost-effectiveness calculations a little less convincing.
1Soothsilver8y
I also ask myself these questions and I'm unable to answer them. In the end, I exercise and modify my diet as much as my will allows without causing me too much stress. As for valuing years of life, if I considered that the very best outcome of cryonics (as HungryHobo described) is certain, then, well, even for very small values that will result in cryonics giving me far more utility than exercise. I don't value later years of my life that low. Yudkowsky believes that cryonics has a greater than 50% chance of working, and that we will be able to have fun for any amount of time, so for him, the expected value of cryonics is ginormous. I get quite a bit of disutility from forcing myself to eat a bit more healthily. My food diversity is very poor; if I try to ingest one of many foods I don't like, I will throw up. Attempting to eat those foods anyway causes me great discomfort. So that's not a great way for me to increase overall utility. On the last paragraph, it appears to me that the two basics - avoiding obesity and not smoking - are the best things you can pester them about. But the other lifestyle choices have the expected benefit of a few years total, if you don't expect any new medical technology to be developed.
1MarsColony_in10years8y
Not to be pedantic, but I thought this might be of interest: As I understand it, amount of exercise is a better predictor of lifespan than weight. That is, I would expect someone overweight but who exercises regularly to outlive someone skinny who never exercises. For example, this life expectancy calculator outputs 70 years for a 5'6" 25 year old male who weighs 300 lbs, but exercises vigorously daily. Changing the weight to 150 lbs and putting in no exercise raised the life expectancy by only 1 year. (a bit less than I was expecting, actually. I was about to significantly update, but then it occurred to me that 300 lbs isn't the definition of obesity. I knew this previously, but apparently hadn't fully internalized that.) EDIT: This calculator may not work well for weights over ~250 lbs. See comment below. So, my top two recommendations to friends would be quit smoking and exercise regularly. I'd recommend Less Wrongers either do high intensity workouts once a week to minimize the amount of time spent on non-productive activities, or pick a more frequent but lower intensity activity they can read or watch Khan Academy or listen to The Sequences audiobook while doing. I'm not an expert or anything. That's just the impression I've gotten from my own research.
4Soothsilver8y
I'm not sure I would trust that calculator. I'm not used to US units, so I put in 84 kg (my weight) and it said "with that BMI you can't be alive", so I put in 840, thinking maybe it wanted the first decimal as well. Now I realize it wanted pounds. And for 840 lbs it also output 70 years. I'm not sure where the calculator gets its data from.
3MarsColony_in10years8y
Hmmm, that's worrying. I played with some numbers for a 5'6" male, and got this:

  • 99 lbs yields "Your BMI is way too low to be living"
  • 100 lbs yields 74 years
  • 150 lbs yields 76 years
  • 200 lbs yields 73 years
  • 250 lbs yields 69 years
  • 300 lbs yields 69 years
  • 500 lbs yields 69 years
  • 999 lbs yields 69 years

It looks to me like they are pulling data from a table, and the table maxes out under 250 lbs?
2Lumifer8y
First, there is no reason for you to care about ranking ("better"), you should only care whether something is a good predictor of lifespan. Predictors are not exclusive. Second, weight effect on lifespan is nonlinear. As far as I remember it's basically a U-shaped curve.
5gjm8y
I think it's only U-shaped if you're plotting mortality rather than lifespan on the y-axis...
0Lumifer8y
Fair point.
1Viliam8y
This seems like good news to me, because I can have greater control over my exercise than over my weight.

Why are there many LWers from, say, Europe, but not China?

I'm going to guess that English language proficiency is far higher in Europe than it is in China. But Asian Americans seem underrepresented on LW relative to the fields that LW draws heavily from, so that seems unlikely to be a complete explanation.

0username28y
Then why are there so few LWers from India, which is an enormous country with English as an official language? Why are so many Indians on Quora, but relatively few here?
0Vaniver8y
There are a lot of LWers from India, relative to the rest of the world? Agreed that there are fewer than we would expect, and in particular there are more East Asian LWers than South Asian LWers.
2iarwain18y
I'm going to guess it's based on some of the East-West thinking differences outlined by Richard Nisbett in The Geography of Thought (I very highly recommend that book, BTW). I don't remember everything in the book, but I remember he had some stuff in there about why easterners are often less interested in, and have a harder time with, the sort of logical/scientific thinking that LW advocates.
2MrMind8y
Which is weird because, if you take the ethnic-IQ correlation seriously (which I don't), Asians show a higher-than-Western average IQ.

Nothing to do with IQ, but with modes of thinking. According to Nisbett, Eastern thinking is more holistic and concrete vs. the Western formal and abstract approach. He says that Easterners often make fewer thinking mistakes when dealing with other people, where a more holistic approach is needed (for example, Easterners are much less prone to the Fundamental Attribution Error). But at the same time they tend to make more thinking mistakes when it comes to thinking about scientific questions, as that often requires formal, abstract thinking. Nisbett also speculates that this is why science developed only in the west even though China was way ahead of the west in (concrete-thinking-based) technological progress.

In general there's very little if any correlation between IQ and rationality. A lot of Keith Stanovich's work is on this.

1g_pepper8y
I second the recommendation of The Geography of Thought.

Facebook question:

I have different types of 'friends' on Facebook, such as "Family", "Rationalists", "English-speaking", etc. Different materials I post are interesting for different groups. There is an option to select visibility of my posts, but that seems not exactly what I want.

What I'd like is to make my posts so that they are available to everyone, including people I don't know (e.g. if anyone clicks on my name, they will see everything I ever posted), but I don't want all my posts to appear automatically on all of my 'f... (read more)

4ChristianKl8y
I don't understand why Facebook messes up the language issue so badly. It seems like the Americans at Facebook headquarters just don't care about bilinguals.
3solipsist8y
Yeah, your explanation sounds absolutely correct. But before you think "silly monoglot Americans", remember that London is closer to Istanbul than New York is to Mexico. Countries where people don't mostly speak English are thousands of kilometers away from most Americans.
0polymathwannabe8y
Those are suspiciously convenient examples. A more relevant comparison would be: Los Angeles is closer to Tijuana than London is to Paris.
0tut8y
Here is a map with London and Istanbul on it. In between them are many countries with at least six majority languages (and that's a low count, where some people would lynch me for saying that their language is the same as the one their neighbor speaks). Los Angeles and Tijuana on the other hand are two cities right by a border, and the only languages commonly spoken between them is English, the language of the USA, and Spanish, the language of Mexico.
0polymathwannabe8y
I understood solipsist's argument to mean that Americans can be excused for being ignorant of other languages because most of them live too far from other linguistic communities, and pointed at the mutual closeness of European countries for contrast, implying that it's likelier to find a Turkish-speaking Brit than a Spanish-speaking American. What I tried to say was that there was no need to artificially inflate the comparison distance by choosing Istanbul. Londoners can find speakers of a completely different language by merely driving to Cardiff. But the U.S. is not a monolingual bloc of homogeneity either: ironically, solipsist chose New York for his example, a multilingual smorgasbord if ever there was one.
0solipsist8y
Well, I don't know. Some of the US is near Mexico, but most of it isn't. In Europe the farthest you can get from the border of a country that speaks another language is perhaps southern Italy. The four US states which border Mexico are each bigger than Italy. Germany is a biggish country in Europe area-wise, but it's less than 3.7% the size of the US. The Mercator projection creates an optical illusion -- the US is huge.
-1username28y
Just because they have an excuse that geography made them silly monoglots doesn't mean they aren't silly monoglots :p
1gjm8y
I think solipsist's point isn't that they have an excuse but that they have a reason -- being monoglot hurts them less than it would if they were e.g. on the European continent, so monoglossy (or whatever the right word is) isn't necessarily silly for them. [EDITED to add:] Disappointingly, OED suggests that the right word is just "monoglottism".
1polymathwannabe8y
The way Facebook works, you decide what's available, but each of your friends has to individually decide how much they want to see of you.
0Viliam8y
The problem is exactly the "how much they want to see of you" part, namely that there is only the one undifferentiated "you" instead of "your rationality posts", "your family photos", "your posts with kitten videos". I don't want to bother my family with rationality posts, and don't want to bother my LW friends with Slovak posts, but as long as I don't want to limit it all to 'friends of my friends' I don't have a choice. Technically, the solution would be to create multiple accounts for multiple aspects of my life, and have different sets of 'friends' for each. But this is against Facebook's TOS, and is also technically inconvenient. Actually, maybe I could use the "Pages" feature for this... That allows people to post under multiple identities, so each of them can have different followers. But officially, "Pages are for businesses, brands and organizations". Not sure if "Viliam's comments on politics in Slovakia" qualifies as any of that.
0polymathwannabe8y
What you seem to be already doing, which is to manually select what group will see each post, seems to be good enough for your purposes. Anyone who actively wants to see more of you can simply go to your profile and see everything.

I don't typically read a lot of sci-fi, but I did recently read Perfect State, by Brandon Sanderson (because I basically devour everything that guy writes) and I was wondering how it stacks up to typical post-singularity stories.

Has anyone here read it? If so, what did you think of the world that was presented there, would this be a good outcome of a singularity?

For people that haven't read it: I would recommend it only if you are either a sci-fi fan who wants to try something by Brandon Sanderson, or if you have read some Cosmere novels and would like a story that touches on some slightly more complex (and more LWish) themes than usual (and don't mind it being a bit darker than usual).

I just found out about the “hot hand fallacy fallacy” (Dan Kahan, Andrew Gelman, Miller&Sanjuro paper) as a type of bias that more numerate people are likely more susceptible to, and for whom it's highly counterintuitive. It's described as a specific failure mode of the intuition used to get rid of the gambler's fallacy.

I understand the correct statement like this. Suppose we’re flipping a fair coin.

*If you're predicting future flips of the coin, the next flip is unaffected by the results of your previous flips, because the flips are independent. So ... (read more)

I think this is not quite right, and it's not-quite-right in an important way. It really isn't true in any sense that "it's more likely that you'll alternate between heads and tails". This is a Simpson's-paradox-y thing where "the average of the averages doesn't equal the average".

Suppose you flip a coin four times, and you do this 16 times, and happen to get each possible outcome once: TTTT TTTH TTHT TTHH THTT THTH THHT THHH HTTT HTTH HTHT HTHH HHTT HHTH HHHT HHHH.

  • Question 1: in this whole sequence of events, what fraction of the time was the flip after a head another head? Answer: there were 24 flips after heads, and of these 12 were heads. So: exactly half the time, as it should be. (Clarification: we don't count the first flip of a group of 4 as "after a head" even if the previous group ended with a head.)
  • Question 2: if you answer that same question for each group of four, and ignore cases where the answer is indeterminate because it involves dividing by zero, what's the average of the results? Answer: it goes 0/0 0/0 0/1 1/1 0/1 0/1 1/2 2/2 0/1 0/1 0/2 1/2 1/2 1/2 2/3 3/3. We have to ignore the first two. The average of the rest is 17/42, or just over 0.4.

What's going on here isn't any kind of tendency for heads and tails to alternate. It's that an individual head or tail "counts for more" when the denominator is smaller, i.e., when there are fewer heads in the sample.
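(Not from the thread, but the enumeration above is easy to check mechanically. This short Python sketch walks all 16 equally likely length-4 sequences and computes both quantities: the pooled frequency of heads-after-heads, and the average of the per-sequence frequencies.)

```python
from fractions import Fraction
from itertools import product

pooled_heads = pooled_after = 0
per_group = []

# Enumerate all 16 equally likely sequences of 4 fair-coin flips.
for seq in product("HT", repeat=4):
    # Flips that immediately follow a head within this sequence.
    followers = [seq[i + 1] for i in range(3) if seq[i] == "H"]
    pooled_after += len(followers)
    pooled_heads += followers.count("H")
    if followers:  # skip the 0/0 (indeterminate) groups
        per_group.append(Fraction(followers.count("H"), len(followers)))

print(Fraction(pooled_heads, pooled_after))    # pooled over all 16 groups: 1/2
print(sum(per_group) / len(per_group))         # average of per-group averages: 17/42
```

The pooled count comes out to exactly 12 heads out of 24 flips-after-heads, while the average of the 14 well-defined per-group frequencies is 17/42, matching the hand computation.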

0AstraSequi8y
My intuition is from the six points in Kahan's post. If the next flip is heads, then the flip after is more likely to be tails, relative to if the next flip is tails. If we have an equal number of heads and tails left, P(HT) > P(HH) for the next two flips. After the first heads, the probability for the next two might not give P(TH) > P(TT), but relative to independence it will be biased in that direction because the first T gets used up. Is there a mistake? I haven't done any probability in a while.
5gjm8y
No, that is not correct. Have a look at my list of 16 length-4 sequences. Exactly half of all flips-after-heads are heads, and the other half tails. Exactly half of all flips-after-tails are heads, and the other half tails. The result of Miller and Sanjuro is very specifically about "averages of averages". Here's a key quotation: "The relative frequency [average #1] is expected [average #2] to be ...". M&S are not saying that in finite sequences of trials successes are actually rarer after streaks of success. They're saying that if you compute their frequency separately for each of your finite sequences then the average frequency you'll get will be lower. These are not the same thing. If, e.g., you run a large number of those finite sequences and aggregate the counts of streaks and successes-after-streaks, the effect disappears.
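(A quick Monte Carlo sketch of this distinction, added by the editor; the trial count, sequence length, and seed are arbitrary choices. It simulates many finite sequences and compares the aggregated frequency of heads-after-heads with the mean of the per-sequence frequencies.)

```python
import random

random.seed(0)
trials, length = 100_000, 4
pooled_h = pooled_n = 0
per_seq_rates = []

for _ in range(trials):
    flips = [random.choice("HT") for _ in range(length)]
    followers = [flips[i + 1] for i in range(length - 1) if flips[i] == "H"]
    pooled_n += len(followers)
    pooled_h += followers.count("H")
    if followers:  # drop sequences with no flips after a head
        per_seq_rates.append(followers.count("H") / len(followers))

print(pooled_h / pooled_n)                      # ~0.5: pooled counts show no streak effect
print(sum(per_seq_rates) / len(per_seq_rates))  # ~0.405: the average of averages is biased low
```

Aggregating the raw counts across sequences recovers 1/2, as gjm says; only the average-of-per-sequence-averages shows the Miller & Sanjuro bias (about 17/42 for length 4).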
2Viliam8y
...because heads occurring separately are on average balanced by heads occurring in long sequences; but limiting the length of the series puts a limit on the long sequences. In other words, in infinite sequences, "heads preceded by heads" and "heads preceded by tails" would be in balance, but if you cut out a finite subsequence, if the first one was "head preceded by head", by cutting out the subsequence you have reclassified it. Am I correct, or is there more?
2gjm8y
I don't think this is correct. See my reply to AstraSequi. (But I'm not certain I've understood what you're proposing, and if I haven't then of course your analysis and mine could both be right.)
0Viliam8y
Oops, you're right. Using the words from my previous comment, now the trick seems to be that 'heads occurring separately are on average balanced by heads occurring in long sequences' -- but according to the rules of the game, you get only one point of reward for a long sequence, while you could get multiple punishments for the separately occurring heads, if they appear in different series. Well, approximately.

This week on the slack: http://lesswrong.com/r/discussion/lw/mpq/lesswrong_real_time_chat/

  • AI - language/words as a storage-place for meaning.
  • art and media - MGS V, Leviathan, SOMA, Undertale, advertising methods,
  • Business and startups - CACE (Changing Anything Changes Everything) with respect to startups and machine learning; prediction.io; meetings: [each person speaks, so the length of the meeting is O(n), and there are n people, so the total meeting cost is O(n^2). On the margin, adding one person to the standup means they listen to n peo

... (read more)

Introverts, Extroverts, and Cooperation

As usual, a small hypothetical social science study, but I'm willing to play with the conclusion, which is that extroverts are more likely to cheat unless they're likely to get caught. It wouldn't surprise the hell out of me if introverts are more likely to internalize social rules (or are people on the autism spectrum getting classified as introverts?).

Could "publicize your charity" be better advice for extroverts and/or majority extrovert subcultures than for introverts?

4Lumifer8y
That's not what your link says. First, there is no cheating involved, we are talking about degrees of cooperation without any deceit. And second, it's not about "getting caught", it's about being exposed to the light of the public opinion which, of course, extroverts are more sensitive to.

I've heard the Beatles have some recorded songs they never released because they were too low quality. I think it would be worthwhile to study their material in its full breadth, mediocrity included, to get a sense of the true nature of the minds behind some greatness.

I've saved writings and poetry and raw, potentially embarrassing past creations for the sake of a similar understanding. I wish I had recordings of my initial fumblings with the instruments I now play rather better.

So it is in this general context of seeking fuller understanding, that I ask if anyone knows where to find these legendary old writings from Eliezer Yudkowsky, reputed to be embarrassing in their hubris, etc..

The "legendary old writings from Eliezer Yudkowsky" are probably easy to find, but I am not going to help you.

I do not like the idea of people (generally, not just EY) being judged for what they wrote dozens of years ago. (The "sense for the true nature" seems like the judgement is being prepared.)

Okay, I would make an exception in some situations; the rule of thumb being "more extreme things take longer time to forget". For example if someone would advocate genocide, or organize a murder of a specific person, then I would be suspicious of them even ten years later. But "embarrassing in their hubris"? Come on.

I don't think EY's ego got any smaller with time.

5polymathwannabe8y
Is it at all meaningful to you that EY writes this in his homepage? It is true that EY has a big ego, but he also has the ability to renounce past opinions and admit his mistakes.
0IlyaShpitser8y
Absolutely, it is meaningful.
2[anonymous]8y
I can hardly wait to look back on his 'shameless blegging' post in a few years and compare it to reality. Pretty sure I know what the result will be.
2Viliam8y
In the meantime he wrote the Sequences and HPMoR, and founded MIRI and CFAR. So maybe the distance between his ego and his real output got smaller. Also, as Eliezer mentions in the Sequences, he used to have an "affective death spiral" about "intelligence", which is probably visible in his old writings, and contributes to the reader's perception of "big ego". I don't really mind big egos as long as they drive people to produce something useful. (Yeah, we could have a separate debate about how much MIRI or HPMoR are really useful. But the old writings would be irrelevant for that debate.)
7IlyaShpitser8y
Here is what you sound like: "But look at all this awesome fan fiction, and furthermore this 'big ego' is all your perception anyways, and furthermore I don't even mind it." Why so defensive about EY's very common character flaws (which don't really require any exotic explanation, btw, e.g. think horses not zebras)? They don't reflect poorly on you. ---------------------------------------- EY's past stuff is evidence.

I'm defensive about digging in people's past, only to laugh that as teenagers they had the usual teenage hubris, and maybe as highly intelligent people they kept it for a few more years... and then use it to hint that even today 'deeply inside' they are 'essentially the same', i.e. not worth to be taken seriously.

What exactly are we punishing here; what exactly are we rewarding?

Ten or more years ago I also had a few weird ideas. My advantage is that I didn't publish them on visible places in English, and that I didn't become famous enough so people would now spend their time digging in my past. Also, I kept most of my ideas to myself, because I didn't try to organize people into anything. I didn't keep a regular diary, and when I find some old notes, I usually just cringe and quickly destroy them.

(So no, I don't care about any of Eliezer's flaws reflecting on me, or anything like that. Instead I imagine myself in a parallel universe, where I was more agenty and perhaps less introverted, so I started to spread my ideas sooner and wider, had the courage to try changing the world, and now people are digging up similar kinds of my writings. Generally, this is a mechanism for ruining si... (read more)

6IlyaShpitser8y
EY is not a baby, and was not a baby in the time period under discussion. He is in his mid thirties today. ---------------------------------------- I have zero interest in gaining status in the LW/rationalist community. I already won the status tournament I care about. I have no interest in "crabbing" for that reason. I have no interest in being a "guru" to anyone. I am not EY's competitor, I am involved in a different game. Whether me being free of the confounding influence of status in this context makes me a more reliable narrator I will let you decide. ---------------------------------------- What I am very interested in is decomposing cult behavior into constituent pieces to try to understand why it happens. This is what makes LW/rationalists so fascinating to me -- not quite a cult in the standard Scientology sense, but there is definitely something there.
2Viliam8y
Mid thirties in 2015 means about twenty in 2001 (the date of most of the linked archives), right? That's halfway to baby from where I am now. Some of my cringeworthy diaries were written in my mid twenties.
2Lumifer8y
Welcome to the zoo! Please do not poke the animals with sticks or throw things at them to attract their attention. Do not push fingers or other objects through the fences. We would also ask you not to feed the animals, as it might lead to digestive problems.
5OrphanWilde8y
It's an interesting zoo, where all the exhibits think they're the ones visiting and observing...
0Viliam8y
The true observers we'll never know, because by definition they are not commenting here.
0Lumifer8y
Of course :-)
2OrphanWilde8y
Downvote explanation: Using claim of immunity to status and authority games as evidence to assert a claim. Which is to say, you are using a claim of immunity to status and authority games to assert status and authority. Yes, that's right out of my own playbook, too. I welcome anybody who catches me at it to downvote me, and please let me know I've done it, as it is an insidious logical mistake I find it impossible to catch myself at.
2philh8y
I don't understand your objection. Asserting a claim is not the same thing as asserting status and authority. I'm not sure what you want from Ilya here. He seems to be describing his motivations in good faith. Do you think he's lying to gain status? Do you think he's telling the truth, but gaining status as a side effect, and he shouldn't do that? Quick edit: Oh, I should probably have read the rest of the thread. I think I understand your objection now, but I disagree with it.
1IlyaShpitser8y
I am not claiming status and authority (I don't want it), I am saying EY has a big ego. I don't think I need status and authority for that, right? Say I did gain status and authority on LW. What would I do with it? I don't go to meetups, I hardly interact with the rationalist community in real life. What is this supposed status going to buy me, in practice? I am not trying to get laid. I am not looking to lead anybody, or live in a 'rationalist house,' or write long posts read by the community. Forget status, I don't even claim to be a community member, really. I care about status in the context relevant to me (my academic community, for example, or my workplace). ---------------------------------------- Or, to put it simply, you guys are not my tribe. I just don't care enough about status here.
0OrphanWilde8y
You're claiming to have status and authority to make a particular claim about reality - "outsider" status, a status which gains you, with respect to adjudication of insider status and authority games... status and authority. Now, your argument could stand or fall on its own merits, but you've chosen not to permit this, and instead have argued that you should be taken seriously on the merits of your personal relationship to the group (read: taken to have status and authority relative to the group, at least with respect to this claim).
2IlyaShpitser8y
[edit: I did not downvote anyone in this thread.] ---------------------------------------- I am? Is that how we are evaluating claims now? ---------------------------------------- Here is how this conversation played out (roughly paraphrased): me: EY has a big ego. Viliam: I wish you would stop digging up people's youthful indiscretions like that. Why not go do impressive things instead, why be a hater? me: EY wasn't young in the time period involved. Also, I have my own stuff going on, thanks! Also, I think this EY dynamic isn't healthy. you: Argument from status! me: Don't really want status here, have my own already. you: You are claiming status by signaling you don't want/need status here! And then using that to make claims! (At this point if I claim status I lose, and if I don't claim status I also lose.) ---------------------------------------- Well, look. Grandiose dimensions of EY's ego are not a secret to anyone who actually knows him, I don't think. I think slatestar even wrote something about that. If you don't think I am being straight with you, and I am playing some unstated game, that's ok. If you have time and inclination, you can dig around my post history and try to figure that out if you care. I would be curious what you find. ---------------------------------------- I think it is fair to call myself an outsider. I don't self-identify as rationalist, and I don't get any sort of emotional reaction when people attack rationalists (which is how you know what your tribe is). I don't think rationalists are evil mutants, but I think unhealthy things are going on in this community. You can listen to people like me, or not. I think you should, but ultimately your beliefs are your own business. I am not going to spend a ton of energy convincing you.
3OrphanWilde8y
I think you're being as completely straight and honest as you are humanly capable of being. I think you also overestimate the degree to which you're capable of being straight and honest. What's your straightest and most honest answer to the question of what probability you assign to the possibility that your actions can be influenced by subconscious status concerns? Which is to say: Status games are a bias. You're claiming to be above bias. I believe you believe that, but I don't believe that.
2polymathwannabe8y
Please elaborate.
4IlyaShpitser8y
As I said, I don't think rationalists are actually a cult in the way that Scientology is a cult. But I think there are some cult-like characteristics to the rationalist movement (and a big part of this is EY's position in the movement). And I think it would be a good idea for the movement to become more like colleagues, and less like what they are now. What I think is somewhat disappointing is both EY and a fair bit of rank and file like things as they are.
2Tem428y
I don't know if this matters. I don't particularly care for the Sequences, but that hasn't caused me any problems at all. LessWrong has been an easy site to get into and to learn from, and would be even if I never read anything by EY. (This seems to be true for most aspects of the site; LessWrong is useful even if you don't care about AIs, transhumanism, cybernetics, effective altruism.... there's enough here that you can find plenty to learn.) You may be seeing the problem as bigger than it is because of the lens that you are looking through, although I agree that charisma is an interesting thing to study, and was central to the development of the site.
0IlyaShpitser8y
It's not just LW, it's the invisible social organization around it. ---------------------------------------- "Culty" dynamics matter. It's dangerous stuff to be playing with.
-1Lumifer8y
Bask in the glory? :-) You might be an exception, but empirically speaking people tend to value their status in online communities, including communities members of which they will never meet in meatspace and which have no effect on their work/personal/etc. life. Biologically hardwired instincts are hard to transcend :-/
1IlyaShpitser8y
I think one difference is, I am a bit older than a typical LW member, and have someplace to "hang my hat" already. As one gets older and more successful, one gets less status-anxious.
0OrphanWilde8y
Which is why you're spending time assuring us that you're high-status?
6gjm8y
Ilya's comments about status could indeed be explained by the hypothesis that he's attempting some kind of sneaky second-order status manoeuvre. They could also be explained by his meaning what he says and genuinely not caring much (consciously or otherwise) about status here on LW. To me, the second looks at least as plausible as the first. More precisely: I doubt anyone is ever completely 100% unaffected by status considerations; the question is how much; Ilya's claim is that in this context the answer is "negligibly"; and I suggest that that could well be correct. You may be correct to say it isn't. But if so, it isn't enough just to observe that someone motivated by status might say the things Ilya has, because so might someone who in this context is only negligibly motivated by status. You need either to show us something Ilya's doing that's substantially better explained in status-seeking terms, or else give a reason why we should think him much more likely to be substantially status-seeking than not a priori. [EDITED to add: I have no very strong opinion on whether and to what degree Ilya's comments here are status manoeuvres.]
5[anonymous]8y
He literally wrote plans about what he would do with the billions of dollars the singularity institute would be bringing in by 2005 using the words 'silicon crusade' to describe its actions to bring about the singularity and interstellar supercivilization by 2010 so as to avoid the apocalyptic nanotech war that would have started by then without their guidance. He also went on and on and on about his SAT scores in middle school (which are lower than those of one of my friends, taken via the same program at the same age) and how they proved he is a mutant supergenius who is the only possible person who can save the world. I am distinctly unimpressed.
8[anonymous]8y
For many types of problems, analyzing how a system changed over time is a more effective method of understanding a problem than comparing one system's present state with another system's present state.
0MrMind8y
Is that true even with highly non-linear systems like humans?
2[anonymous]8y
Yes, it is.
3MrMind8y
Very interesting, thanks.
2[anonymous]8y
These are so much fun to read! (snapshot times chosen more or less at random, and specific pages are what I consider the highlights) https://web.archive.org/web/20010204095400/http://sysopmind.com/beyond.html (contains links to everything below and much more) https://web.archive.org/web/20010213215810/http://sysopmind.com/sing/plan.html (his original founding plans for the singularity institute, extremely amusing) https://web.archive.org/web/20010606183250/http://sysopmind.com/singularity.html http://web.archive.org/web/20101227203946/http://www.acceleratingfuture.com/wiki/So_You_Want_To_Be_A_Seed_AI_Programmer (some... exceptional quotes in here and you can follow links) https://web.archive.org/web/20010309014808/http://sysopmind.com/eliezer.html https://web.archive.org/web/20010202171200/http://sysopmind.com/algernon.html More can be found poking around on web archive and youtube and vimeo. Even more via PM.
1NancyLebovitz8y
I don't think Eliezer's changes in hubris level are what's interesting -- he's had some influence, and no one seems to think his earliest work is his best. It might make more sense to find out how his writing has changed over time.

The Guardian had an interesting article on biases. Makes a similar point as http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/

[-][anonymous]8y00

I recall a tool, by Wei Dai if I'm not mistaken, which will display all of a user's posts and comments ever on one page. I was wondering if anyone had the link. Perhaps we could get a wiki page listing all the LessWrong widgets like this for reference? I am not authorised to make wiki pages myself.

6gjm8y
Here.
[-][anonymous]8y00

What are you working on?

Do you need help?

2[anonymous]8y
Are you offering help to people, or just curious about support networks? I'm mainly trying to motivate myself to write up a paper on relatively old data: dealing with my usual problem that I am more excited about newer projects, even though the older ones are not completed. Help would be nice but it's essentially my sole responsibility to prepare a first draft, after which my coauthors will contribute. What are you working on, and do you need help?
2[anonymous]8y
I'm prompting discussion of these things in case any parties would like to help and/or be helped. Sometimes people who want to help don't feel like starting the discussion, and the same goes for those who want help. But if we all just mention what we're doing, perhaps people can help in ways we hadn't even thought of. I'd be happy to help if my skill and interest set matches your hopes for a coauthor. I highly doubt that, since I'm just a lowly grad student. I'm working on a social enterprise, my rationality, working out some procedural things with two collaborators on two separate projects, and getting my notes and records better organised. I don't really need any help from online for those things except rationality, and I make pleas for help about that here all the time anyway. Thanks for asking.
1[anonymous]8y
I see... but buried deep in the open thread it's not likely to be seen by many, and when originally posted it wasn't very clear what you were trying to get out of such a brief, open-ended comment. For example, I misunderstood your intent and thought you were talking more generally about problem solving and social support, vs. requesting help from LW's users.
0Elo8y
I am interested in the paper on the topic; if you drop what you have into a google doc and PM me the link I will add my thoughts. (I have similar troubles with old/new projects)
0[anonymous]8y
Sorry, my comment was ambiguous - I am not writing a paper on this subject but am struggling with finishing old projects on other topics, while being seduced by novelty. Writing up my thoughts on old/new projects would make the problem worse as this is well outside the field I need to make progress in to keep a desk over my head.
0Elo8y
A suggestion: if you weigh the salience of completion more strongly, you might be able to motivate yourself to finish a half-done project sooner than a zero-done project. Obviously the draw of the new shiny project is significant, and it is likely to be more interesting because it is novel; the reward of finishing, though, is further away. Consider making a list of what is left to do on the existing project. You might be suffering from difficulty knowing what to do next (which masks itself as akrasia and new-shiny-project feelings). At some point, after doing all the obviously easy parts of a project, we are left with the not-obviously-easy parts (if all the parts were obvious and easy we would already be done with the task).

In the news:

Nassim Taleb is an inverse stopped clock.

6username28y
When Nassim Taleb's predictions fail and someone points that out, he calls that person a fucking idiot.
3ChristianKl8y
The main complaint seems to be that Taleb violates an orthodoxy, not that he's factually wrong. On the issue of costs, the cited paper says: There are observed cases where homeopathy did lead to cost savings, as Taleb suggests. Interestingly, the cited PLoS paper puts people who don't take homeopathy into the homeopathy group, based on the fact that they could get it for free:
[-][anonymous]8y00

If anybody is interested in Moscow postrationality meetup, please comment here or pm me. Thanks!

If molecular interactions are deterministic, are all universes identical?

3Viliam8y
Depends on what you mean by "deterministic" (and "universe"). 1) Do you assume each interaction has only one outcome, or are multiple outcomes (in different Everett branches) possible? 2) Do you assume all universes started in the same state? Molecular interactions in an existing universe are a different topic than the "creation of the universe".
1polymathwannabe8y
In a universe where molecular interactions are deterministic, I don't see any additional universes emerging.
0MrMind8y
If by deterministic you mean informationally, that is, that with complete information we would have the possibility of predicting any future state (barring complexity), then we most definitely know that molecular interactions are not deterministic. However, even hypothesizing a deterministic universe, you could have different starting conditions that would evolve into different universes; and while you are at it, why not postulate different deterministic laws?
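The "same deterministic rule, but different starting conditions" point can be made concrete with a toy sketch (my own illustrative example, not from the thread): a deterministic update rule run twice from identical initial states produces identical histories, while a different initial state is free to diverge.

```python
# Toy illustration: under a fixed deterministic rule, identical initial
# conditions evolve identically; different initial conditions need not.

def step(state):
    """One deterministic update of a ring of cells (elementary CA rule 110)."""
    n = len(state)
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return tuple(rule[(state[i - 1], state[i], state[(i + 1) % n])]
                 for i in range(n))

def run(initial, steps):
    """Evolve an initial state forward a fixed number of deterministic steps."""
    state = tuple(initial)
    for _ in range(steps):
        state = step(state)
    return state

same_a = run([0, 0, 0, 1, 0, 0, 0, 0], 10)
same_b = run([0, 0, 0, 1, 0, 0, 0, 0], 10)
other = run([0, 0, 1, 0, 0, 0, 0, 0], 10)

assert same_a == same_b  # identical starts: identical "universes"
print(other)             # a different start is free to evolve differently
```

So determinism alone only forces universes to be identical if they also share initial conditions (and laws).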

Do you know of any remedy or prevention for hiccups? I can't get anything trustworthy out of the internet nor out of friends and family. All just anecdotes.

6[anonymous]8y
There's a very extensive medical literature - although mostly focusing upon persistent (>48 hours) or intractable (>1 month) hiccups. One possible remedy jumped out at me from Google Scholar results: title alone gives the game away (albeit N=1): Odeh, M., Bassan, H., & Oliven, A. (1990). Termination of intractable hiccups with digital rectal massage. Journal of internal medicine, 227(2), 145-146. A very recent review by Steger et al (2015) gives good coverage of "state of the art" in acute hiccups: before concluding in case of persistent/intractable hiccups: Steger, M., Schneemann, M., & Fox, M. (2015). Systemic review: the pathogenesis and pharmacological treatment of hiccups. Alimentary pharmacology & therapeutics, 42(9), 1037-1050. Further note: reference [23] above in Steger et al (2015) is "Watterson B. The Complete Calvin and Hobbes. Kansas City, MO: Andrews McMeel Publishing, 2005."
3NancyLebovitz8y
I've got a method that's reliable for me. I pay attention to how I feel between hiccups, observe what seems like a hiccuppy feeling (in the neighborhood of my diaphragm), and make myself stop feeling it.
1Manfred8y
Well, I know of some remedies, but they're also anecdotal :) All the good ones I know are essentially breathing exercises, where you have to pay close attention to your breathing for a while (i.e. take control of your diaphragm). Like the classic "drink a glass of water from the far side of the glass" is actually a breathing exercise, which works just as well if you just do the breathing without the glass of water.
0Elo8y
Agree with others; the diaphragm is the muscle underneath the lungs that controls your breathing, and hiccups are caused by irritation of the diaphragm. Knowing this, you are looking for methods of relaxing the diaphragm: that includes generally trying to work out conscious control of the automatic muscle, and figuring out how to calm it down. As for trustworthy, better-than-anecdote solutions: you can get surgery if it's a long-term (over several months) problem. How do you relax the diaphragm, for your particular human hardware? Likely differently than for other humans' hardware -- so not much luck finding non-anecdote solutions.
0moridinamael8y
This works for me: Pour yourself a glass of water and hold it in one hand. Lift your arms up, reaching for the ceiling - this movement has the consequence of lifting your ribcage. Drink a few swallows from the glass of water without dropping your ribcage from its elevated orientation. Do this a few times.
[-][anonymous]8y-20

The other day I met a woman named common first name redacted out of respect to commentator's recommendation near the train station. I was just sitting and eating lunch, and she came over to chat. She had been ill with Lithium toxicity in hospital lately. She attends the same (mental) health complex as me. She was lovely, lonely, dated younger guys. She mentioned that her money is controlled by a State Trust to an extent, and that her last boyfriend continues to abuse her financially, and occasionally physically. She mentioned the police have recommended she break up with him, but she says that she loves him. We swapped numbers. Anything I can do for her?

5polymathwannabe8y
As a first protective measure, don't publish her name on the internet. She has already contacted the police, and they have already given her the best advice available. The rest is up to her.
[-][anonymous]8y-20

'Noisy text analytics'. Has anyone trialled applying those algorithms in their mind to human conversations or text messaging (say, through Facebook) to filter information in real life? Was it more efficient than your default or non-volitional approach?

[-][anonymous]8y-20

How do you estimate threats and your ability to cope; what advice can you share with others based on your experiences?

4Stingray8y
What kind of threats?
0[anonymous]8y
Any arbitrary threat?
4Dorikka8y
404: Generalized model not found
[-][anonymous]8y-40

I have a student email account that forwards messages to my personal gmail account. Sometimes I have to send messages from my student gmail account. Can these get automatically moved to my personal gmail sent folder so that I can find them with one search?

2ike8y
See https://support.google.com/mail/answer/22370?hl=en
[-][anonymous]8y-40

What will Google's new semantic search mean for search strategy?

[-][anonymous]8y-40

What, other than an interest in the commercial success of the car lot business, normative social influence, and scrupulosity (all tenuous), stops someone from taking a second ticket (on foot) from a gated car park and then immediately paying that one off when leaving, rather than paying the original entry ticket?

5Elo8y
The gates usually only give tickets to large metal objects (like cars), because they have sensors in the road underneath the ticket machine. There was a Mr. Bean sketch about exactly this: he used a large metal rubbish bin to get a ticket.
2Richard_Kennaway8y
These are what holds society together. These are what society is -- including the bit about commercial success. But have you tried? The entry barriers only issue a ticket when there's a car in front of them. That's how it works at the car parks I'm familiar with that use that system. And, to continue the discussion of why your karma is so persistently low, this is something you might have thought of before posting. See also.
1ChristianKl8y
Why don't people steal from other people if nobody is looking? General ethics.

Any US lawyers here?

A woman who once worked in a law office told me that clients come and go (she used the word e·phem·er·al) so the real allegiance for a lawyer is to other lawyers. Because they will see them again and again.

And Game Theory has something to say about how to treat a person that you are not likely to see again.

Please, folks, do not ask me to justify this "hearsay". I found her credible, so please take this woman's word as gospel, as an axiom, and go from there.

Please confirm, deny, explain or comment on her statement.

TIA.

3polymathwannabe8y
A "person that you are not likely to see again" is not a complete description of a lawyer's client; it's missing the part where "this person pays me for my services so I need many of this person in order to make a living."
0WhyAsk8y
Your post reminds me of something. If there is a huge disparity of power between the lawyer and you, Game Theory kind of "goes out the window". Right?
1polymathwannabe8y
The fact that I have never hired a lawyer may be a factor in my difficulty imagining a scenario where your lawyer turns into your opponent in a power struggle; I see it more likely to happen between you and your opponent's lawyer. High-profile lawyers with a lot of power don't tend to be hired by ordinary people with little power. In any case, it is in your lawyer's interests that your interests get served. Besides, what you could lose in the worst scenario is that one lawsuit (and possibly money and/or jail time); what your lawyer has to lose in the worst scenario is reputation, future clients, and the legal ability to practice law.
2Viliam8y
Imagine the following situation: we are having a lawsuit against each other. Let's say it is already obvious for both of our lawyers which side is going to win, but it is not so obvious for us. The lawyers have an option to do it quickly and relatively cheaply. But they also have an option to charge each of us for extra hours of work, if they tell us it is necessary. Neither option will change the outcome of the lawsuit. But it will change how much money the lawyers get from us. In such case, it would be rational for the lawyers to cooperate with each other, against our interests.
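The repeat-play logic behind this can be sketched with toy payoff numbers (my own illustrative numbers, not from the thread): in a one-shot prisoner's dilemma, defection dominates, which is the client's situation; between players who expect many future rounds and can retaliate, sustained cooperation pays more, which is the lawyers' situation with each other.

```python
# Toy one-shot vs. repeated prisoner's dilemma with standard payoffs.
# Payoff to "me" for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def one_shot_best_response(their_move):
    """In a single interaction, defecting dominates whatever they do."""
    return max("CD", key=lambda my: PAYOFF[(my, their_move)])

def repeated_value(my_move, rounds):
    """Crude grim-trigger model: cooperate and earn 3 every round, or
    defect once for 5 and then earn 1 (mutual defection) thereafter."""
    if my_move == "C":
        return PAYOFF[("C", "C")] * rounds
    return PAYOFF[("D", "C")] + PAYOFF[("D", "D")] * (rounds - 1)

# One-shot (a client you'll never see again): defect regardless.
assert one_shot_best_response("C") == "D"
assert one_shot_best_response("D") == "D"

# Repeated (a fellow lawyer you'll face 20 more times): cooperation wins.
print(repeated_value("C", 20))  # 60
print(repeated_value("D", 20))  # 24
```

This is of course a cartoon: it assumes the client interaction is truly one-shot (no reputation effects, no bar discipline), which is exactly what those institutions exist to prevent.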
1WhyAsk8y
That's been my experience, and any questions about "How much more is this going to cost me?" are not received well. Almost every lawyer I've hired or dealt with gave me almost nothing for my money. And good luck trying to get a bad lawyer disbarred. What I should probably do is solicit bids for a particular legal problem.
1polymathwannabe8y
In this example the obvious culprit is the practice of charging by the hour, which I've always found a terrible idea.
[-][anonymous]8y-40

What would happen if an altcoin was developed where users had to precommit not to forking that coin?

6Viliam8y
How exactly could users of something anonymous precommit to not do something?
-6[anonymous]8y
0ChristianKl8y
It wouldn't be much different from the status quo. None of the direct forks of bitcoin currently competes with bitcoin for the core purpose of being a currency rather than just speculation.