Open thread, January 25- February 1

8 Post author: NancyLebovitz 25 January 2014 02:52PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (316)

Comment author: ialdabaoth 02 February 2014 04:54:45AM 1 point [-]

I keep looping through the same crisis lately, which comes up any time someone points out that I'm pretentious / an idiot / full of shit / lebensunwertes Leben / etc.:

Is there a good way for me to know if I'm actually any good at anything? What are appropriate criteria to determine whether I deserve to have pride in myself and my abilities? And what are appropriate criteria to determine whether I have the capacity to determine whether I've met those criteria?

Comment author: Lumifer 04 February 2014 01:00:49AM *  1 point [-]

Is there a good way for me to know if I'm actually any good at anything?

I recommend empirical reality. The kind that exists outside of your (and other people's) head.

Comment author: shminux 04 February 2014 12:29:13AM 1 point [-]

Having followed your posts here and on #lesswrong, I got an impression of your personality as a bizarre mix of insecurities and narcissism (but without any malice), and this comment is no exception. You are certainly in need of a few sessions with a good therapist, but, judging by your past posts, you are not likely to actually go for it, so that's a catch-22. Alternatively, taking a Dale Carnegie course, actually taking its lessons to heart, and putting effort into it might be a good idea. Or a similar interpersonal-relationship course you can find locally and afford.

Comment author: [deleted] 18 February 2014 09:06:02PM *  1 point [-]

bizarre mix of insecurities and narcissism

If you don't mind, I'm gonna use this in my twitter's bio.

Comment author: ialdabaoth 04 February 2014 01:04:55AM 0 points [-]

Yeah, the narcissism is something that I've been trying to come up with a good plan for purging since I first became aware of it. (I sometimes think that some of the insecurities originally started as a botched attempt to undo the narcissism).

The therapy will absolutely happen as soon as I have a reasonable capacity to distinguish "good" therapists from "bad" ones.

Comment author: wedrifid 08 February 2014 12:57:28AM 1 point [-]

The therapy will absolutely happen as soon as I have a reasonable capacity to distinguish "good" therapists from "bad" ones.

Bad plan (and also a transparent, falsely humble excuse to procrastinate). Picking a therapist at random will give you distinctly positive expected value. Picking a therapist recommended by a friend or acquaintance will give you somewhat better expected value.

Incidentally, one of the methods by which you can most effectively boost your ability to distinguish good therapists from bad therapists is by having actual exposure to therapists.

Comment author: gjm 03 February 2014 12:01:24AM 2 points [-]

Some things are easier to tell whether you're good at than others. I guess you aren't talking about the more assessable things (school/university studies, job, competitive sport, weightlifting, ...) but about things with a strong element of judgement (quality as a friend or lover, skill in painting, ...) or a lot of noise mixed with any signal there might be (stock-picking[1], running a successful startup company, ...).

[1] Index funds are the canonical answer to that one, but you know that already.

So, anyway, the answer to "how do I tell if I'm any good at X?" depends strongly on X.

But maybe you really mean not "(know if I'm actually any good at) anything" but know if I'm actually (any good at anything)" -- i.e., the question isn't "am I any good at X?" but "is there anything I'm any good at?". The answer to that is almost certainly yes; if someone is seriously suggesting otherwise then they are almost certainly dishonest or stupid or malicious or some combination of those, and should be ignored unless they have actual power to harm you; if some bit of your brain is seriously suggesting otherwise then you should learn to ignore it.

There are almost certainly specific X you have good evidence of being good at, which will imply a positive answer to "is there anything I'm good at?". Pick a few, inspect them as closely as you feel you have to to be sure you aren't fooling yourself, and remember the answer.

If someone else is declaring publicly that you are a pretentious idiot and full of shit, it is likely that what's going on is not at all that they're trying to make an objective assessment of your capabilities or character, but that they are engaged in some sort of fight over status or influence or something, and are saying whatever seems like it may do damage. I expect you have good reasons for getting into that sort of fight, so I'll just say: bear in mind when you do that this is a thing that happens, and that such comments are usually not useful feedback for self-assessment.

If you want to mention some specific X, I expect you'll get some advice on ways to assess whether you're any good at it/them. But I think the most important thing here is that the thing that's provoking your self-doubt, although it looks like an assessment of your capabilities, really isn't any such thing.

Comment author: NancyLebovitz 02 February 2014 04:28:34PM 2 points [-]

You could take a cognitive psych approach to some of this. What are the other person's qualifications?

I recommend exploring the concept of good enough.

There's a bit in Nathaniel Branden about "a primitive sense of self-affirmation"-- which I take to be the assurance that babies start out with that they get to care about their pain and pleasure. It isn't even a question for them. And animals are pretty much the same.

You don't need to have a right to be on your own side, you can just be on your own side.

Something I've been working on is getting past the idea that the universe is keeping score, and I have to get everything right.

What I believe about your situation is that you've been siding with your internal attack voice, and you need to associate your sense of self with other aspects of yourself like overall physical sensations.

Do you have people who are on your side? If so, can you explore taking their opinion seriously?

The attack voice comes on so strong it seems like the voice of reality, but it's just a voice. I've found that it's hard work to change my relationship to my attack voice, but it's possible.

For what it's worth, I think your prose is good. It's clear, and the style (as distinct from the subject matter) is pleasant.

Comment author: ialdabaoth 02 February 2014 04:37:08PM *  1 point [-]

What are the other person's qualifications?

Generally, their qualifications are that the audience is rallying around them. Also, they don't know me, which makes them less likely to be biased in my favor. (I.e., the old "my mom says I'm great at <X>, so shut up!" problem)

...the assurance that babies start out with that they get to care about their pain and pleasure.

This flies in the face of the political climate I exist within, which talks primarily about the galling "entitlement" of poor people who believe they have the right to food and shelter and work.

Do you have people who are on your side? If so, can you explore taking their opinion seriously?

It's very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.

I.e., people who clearly love me and are willing to share portions of their life with me are willing to go so far as to say "I think you do <X> pretty well." People whom I've never met are willing to go so far as to say "fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone <X>. Fuck it, I'm looking up your address; I'll kill you."

That churns up all sorts of emotional and social reactions, which makes processing the whole thing rationally even harder.

Comment author: NancyLebovitz 04 February 2014 12:04:44AM 1 point [-]

What are the other person's qualifications?

Generally, their qualifications are that the audience is rallying around them. Also, they don't know me, which makes them less likely to be biased in my favor. (I.e., the old "my mom says I'm great at <X>, so shut up!" problem)

On the other hand, they might be more likely to be biased against you, and they certainly don't know a lot about your situation.

...the assurance that babies start out with that they get to care about their pain and pleasure.

This flies in the face of the political climate I exist within, which talks primarily about the galling "entitlement" of poor people who believe they have the right to food and shelter and work.

Can you find a different political environment?

I've noticed that conservatives tend to think that everything bad that happens to a person is the fault of that person, and progressives tend to think that people generally don't have any responsibility for their misfortunes. Both are overdoing it, but you might need to spend some time with progressives for the sake of balance.

Also, I've found it helps to realize that malice is an easy way of getting attention, so there are incentives for people to show malice just to get attention-- and some of them are getting paid for it. The thing is, it's an emotional habit, not the voice of reality.

Unfortunately, people are really vulnerable to insults. I don't have an evo psy explanation, though I could probably whomp one up.

Do you have people who are on your side? If so, can you explore taking their opinion seriously?

It's very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.

It is very difficult, but I think you've made some progress. All I can see is what you write, but it seems like you're getting some distance from your self-attacks in something like the past year or so.

I find it helps to think about times when I've been on my own side and haven't been struck by lightning.

Comment author: satt 07 February 2014 01:16:22AM 0 points [-]

It's very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me.

I.e., people who clearly love me and are willing to share portions of their life with me are willing to go so far as to say "I think you do <X> pretty well." People whom I've never met are willing to go so far as to say "fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone <X>. Fuck it, I'm looking up your address; I'll kill you."

I might be an outlier, but a spiel like "fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone <X>. Fuck it, I'm looking up your address; I'll kill you" doesn't signal casualness to me. The only people I'd expect to say that casually are trolls trying to get a rise out of people. Idle trolling aside, someone laying down a fusillade of abuse like that is someone who cares quite a bit (and doubtless more than they'd like to admit) about my behaviour. Hardly an unbiased commentator! (I recognize that's easier said than internalized.)

Comment author: jaibot 01 February 2014 07:32:38AM 2 points [-]

Following up on http://lesswrong.com/lw/jij/open_thread_for_january_17_23_2014/af90 :

  • I've created a minimally (possibly sub-minimally) viable wiki page: http://wiki.lesswrong.com/wiki/Study_Hall
  • I've started playing with SimpleWebRTC and its component parts
  • I am precommitting to another update by February 10th

This is a minimally-viable update on account of recent travel and imminent job interviews, but the precommitments seem to be succeeding in at least forcing something like progress and keeping some attention on the problem.

Comment author: David_Gerard 31 January 2014 09:30:26PM *  2 points [-]

I hadn't realised before that Max Tegmark's work was actually funded by a massive grant from the Templeton Foundation. $9 million to found FQXI.

The purpose of the Templeton Foundation is to spray around more money than most academics could dream of - $9 million for philosophy! - to try to blur the lines between science and religion and corrupt the public discourse. The best interpretation that can reasonably be put on taking the Templeton shilling is that one is doing so cynically.

This is not pleasing news, not at all.

Comment author: Nornagest 31 January 2014 10:18:13PM *  1 point [-]

The purpose of the Templeton Foundation is [...] to try to blur the lines between science and religion and corrupt the public discourse.

What's your basis for this interpretation? And particularly the "corrupt the public discourse" bit? I read your link, and I remember it getting briefly badmouthed in The God Delusion, but I'd prefer something a little more solid to go on, since this seems to lie on the sharp side of Hanlon's razor.

Comment author: ahbwramc 31 January 2014 04:01:30AM 3 points [-]

Any book recommendations for a good intro to evolutionary psychology? I remember Eliezer suggested The Moral Animal, but I also vaguely remember some other people recommending against it. I'll probably just go with TMA unless some other book gets suggested multiple times.

Comment author: Jayson_Virissimo 01 February 2014 12:05:31AM 1 point [-]

Evolutionary Psychology: The New Science of the Mind, by David Buss is a pretty good, mainstream, and accessible introduction to the field. I don't regret reading it.

Comment author: beoShaffer 01 February 2014 06:18:33PM 1 point [-]

I second the recommendation. It was used as one of two textbooks for my evo-psyc class, and worked quite well.

Comment author: hyporational 31 January 2014 04:33:37AM *  3 points [-]

I found TMA too full of just-so stories. I also think it disturbingly rationalized a particular brand of sexism$ and overemphasized status, which was very unexpected since I don't think I'm squeamish at all on those fronts. I don't think it helped me predict human behavior better.

This said, I'd be interested too if someone could recommend some other book.

$ rigid view of differences between the sexes, incompatible with my experience (which does suggest the sexes are different)

Comment author: fubarobfusco 30 January 2014 10:21:30PM 4 points [-]

A few years back, the Amanda Knox murder case was extensively discussed on LW.

Today, Amanda Knox has been convicted again.

Comment author: Kevin92 30 January 2014 10:42:56PM 1 point [-]

Does anyone have a simple, easily understood definition of "logical fallacy" that can be used to explain the concept to people who have never heard of it before?

I was trying to explain the idea to a friend a few days ago but since I didn't have a definition I had to show her www.yourlogicalfallacyis.com. She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.

Comment author: Qiaochu_Yuan 31 January 2014 07:56:26PM 7 points [-]

She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.

You think she would've understood the concept even more quickly if you had a definition? I think people underestimate the value of showing people examples as a way of communicating a concept (and overestimate the value of definitions).

Comment author: Kevin92 02 February 2014 06:59:46PM 0 points [-]

Well, I know I won't be around a computer 24/7, and I'd like something to explain it if I'm out and about. Although I suppose I could memorize a couple of examples, like strawman arguments and ad hominem.

Comment author: Jayson_Virissimo 31 January 2014 03:47:25AM 1 point [-]

To a "regular person", I might say something like "a logical fallacy is a form of reasoning that seems good to many humans, but actually isn't very good".

Comment author: IlyaShpitser 01 February 2014 06:15:49PM 0 points [-]

I don't think this is so simple to explain, because to really understand logical fallacies you need to understand what a proof is. Not a lot of people understand what a proof is.

Comment author: NancyLebovitz 01 February 2014 06:34:12PM 1 point [-]

On the other hand, I think people can acquire a pretty good ability to recognize fallacies without a formal understanding of what a good proof is.

Comment author: IlyaShpitser 04 February 2014 03:05:05PM *  2 points [-]

I just feel there is a difference between a "fallacy enthusiast" (someone who knows lists of logical fallacies, can spot them, etc.) and a "mathematician" (who realizes a 'logical fallacy' is just 'not a tautology'), in terms of being able to "regenerate the understanding."
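The "not a tautology" characterization can be made concrete with a brute-force truth-table check. A minimal sketch in Python (the propositional forms are encoded as lambdas; this is an illustration, not anything from the original comment):

```python
from itertools import product

def is_tautology(form):
    """Return True if the propositional form holds under every truth assignment."""
    return all(form(p, q) for p, q in product([True, False], repeat=2))

# modus ponens: ((p -> q) and p) -> q  -- a valid inference
modus_ponens = lambda p, q: not ((not p or q) and p) or q

# affirming the consequent: ((p -> q) and q) -> p  -- a classic fallacy
affirm_consequent = lambda p, q: not ((not p or q) and q) or p

print(is_tautology(modus_ponens))       # True
print(is_tautology(affirm_consequent))  # False (fails at p=False, q=True)
```

A "fallacy enthusiast" memorizes the second form by name; the mathematician's move is to run the check and regenerate the answer from scratch.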

This is similar to how you can try to explain to lawyers how they should update their beliefs in particular cases as new evidence comes to light, but to really get them to understand, you have to show them a general method:

http://en.wikipedia.org/wiki/Wigmore_chart

(Yes, belief propagation was more or less invented in 1913 by a lawyer.)
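The kind of sequential updating a Wigmore chart organizes can be sketched as a chain of Bayes-rule updates. The likelihood numbers below are purely hypothetical, chosen only to show the mechanics:

```python
def update(prior, likelihood_if_guilty, likelihood_if_innocent):
    """One Bayes-rule update: P(guilty | evidence)."""
    num = prior * likelihood_if_guilty
    denom = num + (1 - prior) * likelihood_if_innocent
    return num / denom

belief = 0.1  # prior probability of guilt
# each pair: P(evidence | guilty), P(evidence | innocent)
evidence_stream = [(0.8, 0.2), (0.6, 0.5), (0.9, 0.1)]
for lg, li in evidence_stream:
    belief = update(belief, lg, li)

print(round(belief, 3))  # 0.828 after three pieces of evidence
```

Each new item of evidence feeds the previous posterior back in as the prior, which is exactly the bookkeeping the chart method makes explicit.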

Comment author: CAE_Jones 30 January 2014 07:46:48AM 3 points [-]

I don't understand why wireheading is almost universally considered worse than death, or at least really really negative.

Comment author: JQuinton 30 January 2014 06:00:15PM 1 point [-]

I think the big fear is stasis. In each case you're put in a certain state of being without any recourse to get out of it, but wireheading seems to be like a state of living death.

Comment author: skeptical_lurker 31 January 2014 04:30:50PM 1 point [-]

I concur, but I think it wise to draw a distinction between wireheading as an extreme example of a blissed-out opiate haze, where one does nothing but feel content and so has no desire to achieve anything, and wireheading as a state of strongly positive emotions where curiosity, creativity etc. remain intact. Yes, if a rat is given a choice it will keep on pressing the lever, but maybe a human would wedge the lever open and then go and continue with life as normal? To continue the drug analogy, some drugs leave people in a stupor, some make people sociable, some result in weird music. I would say the first type is certainly better than death, and the latter 'hedonistic imperative' wireheading sounds utopian.

Comment author: DefectiveAlgorithm 30 January 2014 04:44:55PM 1 point [-]

Speaking for myself, I consider wireheading to be very negative, but better than information-theoretic death, and better than a number of scenarios I can think of.

Comment author: Slackson 30 January 2014 08:38:18AM 3 points [-]

I would assume that it's considered worse than death by some because with death it's easier to ignore the opportunity cost. Wireheading makes that cost clearer, which also explains why it's considered negative compared to potential alternatives.

Comment author: NancyLebovitz 30 January 2014 12:22:53AM *  5 points [-]

Did someone here ask about the name of a fraud where the fraudster makes a number of true predictions for free, then says "no more predictions, I'm selling my system"? There's no system; instead, the fraudster divides the potential victims into groups, and each group gets different predictions. Eventually, a few people have the impression of an unbroken accurate series.

Anyway, the scam is called The Inverted Pyramid, and the place I'd seen it described was in the thoroughly charming "Adam Had Three Brothers" by R.A. Lafferty.

Edited to add: It turned out that someone had asked at Making Light.
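The arithmetic behind the scam is easy to demonstrate. A minimal simulation, with hypothetical numbers (1024 marks, 10 binary predictions):

```python
import random

def inverted_pyramid(marks, rounds):
    """Each round, send opposite predictions to two halves of the remaining
    marks and keep only the half whose prediction happened to come true."""
    survivors = list(marks)
    for _ in range(rounds):
        half = len(survivors) // 2
        up, down = survivors[:half], survivors[half:]
        outcome = random.choice(["up", "down"])  # the market moves at random
        # only the group that received the correct call keeps trusting us
        survivors = up if outcome == "up" else down
    return survivors

winners = inverted_pyramid(range(1024), 10)
print(len(winners))  # 1 mark has now seen 10 correct calls in a row
```

No prediction skill is involved anywhere; the fraudster simply pays for one impressed victim with 1023 discarded ones.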

Comment author: lukeprog 29 January 2014 08:06:31PM *  6 points [-]

People often ask why MIRI researchers think decision theory is relevant for AGI safety. I, too, often wonder whether it's as likely to be relevant as, say, program synthesis. But the basic argument for the relevance of decision theory was explained succinctly in Levitt (1999):

If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable physical and visual events routinely occur. It will not be practical, or even safe, to halt robotic actions whenever the robot encounters an unexpected event or ambiguous visual interpretation.

Currently, commercial robots determine their actions mostly by control-theoretic feedback. Control-theoretic algorithms require the possibilities of what can happen in the world be represented in models embodied in software programs that allow the robot to pre-determine an appropriate action response to any task-relevant occurrence of visual events. When robots are used in open, uncontrolled environments, it will not be possible to provide the robot with a priori models of all the objects and dynamical events that might occur.

In order to decide what actions to take in response to un-modeled, unexpected or ambiguously interpreted events in the world, robots will need to augment their processing beyond controlled feedback response, and engage in decision processes.

Comment author: ArisKatsaris 30 January 2014 01:43:25AM 1 point [-]

Can anyone recommend a good replacement for flagfic.com ? This was a site that could download stories from various archives (fanfiction.net, fimfiction.net, etc) transform them to various e-reader formats, and email them to you. I used it to email fanfics I wanted to read directly to my Kindle as .mobi files.

Comment author: VAuroch 30 January 2014 12:58:41AM 1 point [-]

http://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement

Some of these ideas are very poorly thought out. Some are interesting.

Comment author: ChristianKl 29 January 2014 11:45:53AM 7 points [-]

A recent experience reminded me that basics are really important. On LW we talk a lot about advanced aspects of rationality.

If you had to describe the basics, what would you say? What things are so obvious to you about rationality that they usually go without saying?

Comment author: edanm 01 February 2014 07:00:06AM 1 point [-]
  1. People can change (e.g. update on beliefs, self-improve).
  2. How to choose your actions - think about your goals, think what steps achieve them in the best way, act on those steps.
  3. There is such a thing as objective truth.

Amazing how the basic pillars of rationality are things other people so often don't agree with, even though they seem so dead obvious to me.

Comment author: NancyLebovitz 31 January 2014 04:54:20PM 6 points [-]

You can frequently make your life better by paying attention to what you're doing, looking for possible improvements, trying your ideas, and observing whether the improvements happen.

Comment author: Qiaochu_Yuan 30 January 2014 10:07:15PM 3 points [-]

I run on hardware that was optimized by millions of years of evolution to do the sort of things my ancestors did tens of thousands of years ago, not the sort of things I do now.

Comment author: Leonhart 29 January 2014 11:17:35PM *  5 points [-]

There is no magic.
I am not in a story.
Words are detachable handles.

Comment author: hyporational 30 January 2014 01:27:52AM *  1 point [-]

This is a fun exercise. The list could be a lot longer than I originally expected.

  • belief is about evidence
  • 0 and 1 are not probabilities
  • Occam's razor
  • strawman and steelman
  • privileging the hypothesis
  • tabooing
  • instrumental-terminal distinction of values
  • don't pull probabilities out of your posterior
  • introspection is often wrong
  • intuitions are often wrong
  • general concept of heuristics and biases
  • confirmation and disconfirmation bias
  • halo effect
  • knowing about biases doesn't unbias you
  • denotations and connotations
  • many more

Comment author: bramflakes 30 January 2014 03:32:33PM 2 points [-]

"not technically lying" is de facto lying

Comment author: ChristianKl 30 January 2014 01:38:54PM 1 point [-]

Nice list, even a bit that's basic enough that I can put it into an Anki deck about teaching rationality (a long-term project of mine, but at the moment I don't have enough cards for release).

Comment author: jkaufman 29 January 2014 03:10:35AM *  5 points [-]

Somewhere I saw the claim that in choosing sperm donors the biggest factor turns out to be how cute the baby pictures are, but at this point it's just a cached thought. Looking now I'm not able to substantiate it. Does anyone know where I might have seen this claim?

Comment author: lukeprog 28 January 2014 06:09:04PM 15 points [-]

Robin Hanson on Facebook:

Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.

Consider the case of Willard Wells and his Springer-published book Apocalypse When?: Calculating How Long the Human Race Will Survive (2009). From a UCSD news story about a talk Wells gave about the book:

Larry Carter, a UCSD emeritus professor of computer science, didn’t mince words. The first time he heard about Wells’s theories, he thought, “Oh my God, is this guy a crackpot?”

But persuaded by Wells's credentials, which include a PhD from Caltech in math and theoretical physics, a career that led him to L-3 Photonics and the Caltech/Jet Propulsion Laboratory, and an invention under his belt, Carter gave the ideas a chance. And was intrigued.

For a taste of the book, here is Wells' description of one specific risk:

When advanced robots arrive... the serious threat [will be] human hackers. They may deliberately breed a hostile strain of androids, which then infects normal ones with its virus. To do this, the hackers must obtain a genetic algorithm and pervert it, probably early in the robotic age before safeguards become sophisticated... Excluding hackers, it seems unlikely that androids will turn against us as they do in some movies... computer code for hostility is too complex... In the very long term, androids will become conscious for the same reasons humans did, whatever those reasons may be... In summary, the androids have powerful instincts to nurture humans, but these instincts will be unencumbered by concerns for human rights. Androids will feel free to impose a harsh discipline that saves us from ourselves while violating many of our so-called human rights.

Now, despite Larry Carter's being "persuaded by Wells' credentials" — which might have been exaggerated or made-up by the journalist, I don't know — I suspect very few people have taken Wells seriously, for good reason. He's clearly just making stuff up, with almost no study of the issue whatsoever. (On this topic, the only people he cites are Joy, Kurzweil, and Posner, despite the book being published in 2009.)

But reading that passage did drive home again what it must be like for most people to read FHI or MIRI on AI risk, or Robin Hanson on ems. They probably can't tell the difference between someone who is making stuff up and an argument that has gone through a gauntlet of 15 years of heated debate and both theoretical and empirical research.

Comment author: RobinHanson 28 January 2014 07:32:47PM 5 points [-]

Yes, by judging someone on their credentials in other fields, you can't tell whether they are just making stuff up on this subject or have studied it for 15 years.

Comment author: VincentYu 28 January 2014 09:25:33PM *  3 points [-]

Wells's book: Apocalypse When?

I took a quick skim through the book. Your focused criticism of Wells's book is somewhat unfair. The majority of the book (ch. 1–4) is about a survival analysis of doomsday risks. The scenario you quoted is in the last chapter (ch. 5), which looks like an afterthought to the main intent of the book (i.e., providing the survival analysis), and is preceded by the following disclaimer:

This set serves as a foil to the balanced discussions by Rees, Leslie, Powell, and others. The choice of eight examples is purely arbitrary. Their purpose is not orderly coverage but merely examples that indicate a range of possibilities. The actual number of such complex unorthodox scenarios is virtually infinite, hence the high risk.

I think it is fair to criticize the crackpot scenario that he gave as an example, but your criticism seems to suggest that his entire book is of the same crackpot nature, which it is not. It is unfortunate that PR articles and public attention focuses on the insubstantial parts of the book, but I am sure you know what that is like as the same occurs frequently to MIRI/SIAI's ideas.

Orthogonal notes on the book's content: Wells seems unaware of Bostrom's work on observation selection effects, and it appears that he implicitly uses SSA. (I have not carefully read enough of his book to form an opinion on his analysis, nor do I currently know enough about survival analysis to know whether what he does is standard.)

Comment author: lukeprog 28 January 2014 09:28:16PM *  1 point [-]

Ah, you're right that I should have quoted the "This set serves as a foil" paragraph as well.

I found chs. 1-4 pretty unconvincing, too, though I'm still glad that analysis exists.

Comment author: James_Miller 28 January 2014 08:02:59PM 2 points [-]

Yes, I'm an academic, and telling people I study the Singularity gets a similar reaction to saying I've signed up for cryonics. Thankfully, I have tenure.

Comment author: Halfwitz 29 January 2014 01:32:38AM 1 point [-]

What happens when you say, "I study the economic implications of advanced artificial intelligence," to people?

Comment author: ChristianKl 28 January 2014 11:21:21PM 1 point [-]

Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.

I don't think that's the case. Most people who are listened to on the future don't tend to speak to an audience primarily consisting of futurists.

There are think tanks that employ people to think about the future, and those think tanks tend to be quite good at influencing the public debate.

I also don't think that academia has any special claim to specialists about the future. When I think about specialists on futurism, names like Stewart Brand or Bruce Sterling come to mind.

Comment author: IlyaShpitser 29 January 2014 12:58:28PM *  1 point [-]

I don't think that's the case. Most people who are listened to on the future don't tend to speak to an audience primarily consisting of futurists.

This is a very important and general point. While it is important to communicate ideas to a general audience, excessive communication to general audiences at the expense of communication to peers should be "bad news" when it comes to evaluating experts. Folks like Witten mostly just get work done; they don't write popular science books.

Comment author: Kawoomba 28 January 2014 09:50:00PM 1 point [-]

It might be a worthwhile endeavor to modify our wiki such that it serves not only as a mostly local reference on current terms and jargon, but also as an independent guide to the various arguments for and against various concepts, where applicable. It could create a lot of credibility and exposure to establish a sort of neutral reference guide / an argument map / the history and iterations an idea has gone through, in a neutral voice. Ideally, neutrality regarding PoV works in favor of those with the balance of arguments in their favor.

This need not be entirely new material, but instead simply a few mandatory / recommended headers in each wiki entry, pertaining to history, counterarguments etc. Could be worth it lifting the wiki from relative obscurity, with a new landing page, and marketed potentially as a reference guide for journalists researching current topics. Kruel's LW interview with Shane Legg got linked to in a NYTimes blog, why not a suitable LW wiki article, too?

Comment author: Oscar_Cunningham 28 January 2014 07:45:10PM 3 points [-]
Comment author: palladias 28 January 2014 04:50:03AM 11 points [-]

Reason #k Why I <3 Pomodoros:

They really help me get over akrasia. I beemind how many pomodoros I do per week, so I'll do tasks I would otherwise procrastinate on if I can do 20 minutes of them (yes, I do short pomodoros) and get to enter a data point at the end. Often I find that the task is much shorter/less awful than it felt in the abstract.

Example: I just moved today, and didn't have that much to unpack, but decided I'd do it tomorrow, because I felt tired and it would presumably be long and unpleasant. But then I realized I could get a pomodoro out of it (plus permission from myself to stop after 20 min and go to bed). Turns out it took 11 minutes and now I'm all set up!

Comment author: Qiaochu_Yuan 28 January 2014 04:51:34AM 2 points [-]

I do this all the time and it's great!

Comment author: D_Malik 28 January 2014 04:32:22AM *  5 points [-]

John_Maxwell_IV and I were recently wondering about whether it's a good idea to try to drink more water. At the moment my practice is "drink water ad libitum, and don't make too much of an effort to always have water at hand". But I could easily switch to "drink ad libitum, and always have a bottle of water at hand". Many people I know follow the second rule, and this definitely seems like something that's worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:

http://www.sciencedirect.com/science/article/pii/S0002822399000486:

Dehydration of as little as 1% decrease in body weight results in impaired physiological and performance responses (4), (5) and (6), and is discussed in more detail below. It affects a wide range of cardiovascular and thermoregulatory responses (7), (8), (9), (10), (11), (12), (13) and (14).

The Nationwide Food Consumption Surveys indicate that a portion of the population may be chronically mildly dehydrated. Several factors may increase the likelihood of chronic, mild dehydration, including a poor thirst mechanism, dissatisfaction with the taste of water, common consumption of the natural diuretics caffeine and alcohol, participation in exercise, and environmental conditions. Dehydration of as little as 2% loss of body weight results in impaired physiological and performance responses. New research indicates that fluid consumption in general and water consumption in particular can have an effect on the risk of urinary stone disease; cancers of the breast, colon, and urinary tract; childhood and adolescent obesity; mitral valve prolapse; salivary gland function; and overall health in the elderly. Dietitians should be encouraged to promote and monitor fluid and water intake among all of their clients and patients through education and to help them design a fluid intake plan.

The effect of dehydration on mental performance has not been adequately studied, but it seems likely that as physical performance is impaired with hypohydration, mental performance is impaired as well (62) and (63). Gopinathan et al (29) studied variation in mental performance under different levels of heat stress-induced dehydration in acclimatized subjects. After recovery from exercise in the heat, subjects demonstrated significant and progressive reductions in the performance of arithmetic ability, short-term memory, and visuomotor tracking at 2% or more body fluid deficit compared with the euhydrated state.

So how much is 2% dehydration? http://en.wikipedia.org/wiki/Dehydration#Differential_diagnosis : "A person's body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]" http://en.wikipedia.org/wiki/Body_water quotes Arthur Guyton's Textbook of Medical Physiology: "the total amount of water in a man of average weight (70 kilograms) is approximately 40 litres, averaging 57 percent of his total body weight." So effects on cognition become apparent after 40l*2% = 800ml of water has been lost, which takes roughly 800ml/(2.5l/24h) ≈ 8 hours. Now, this assumes water is lost at a constant rate, which is false, but it still seems like it would take a while to lose a full 800ml. Which implies that you don't have to make a conscious effort to drink more water, because everybody gets at least mildly thirsty well before that point: say, after half an hour of walking around outside on a warm day, by which time you'd have lost a lot less than 800ml.
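The back-of-the-envelope arithmetic above is easy to check in a few lines (all figures are the assumptions quoted above, not medical fact):

```python
# D_Malik's rough numbers (assumptions taken from the sources quoted above).
total_body_water_l = 40.0   # Guyton: ~40 l of water in a 70 kg man
deficit_fraction = 0.02     # ~2% body-fluid deficit impairs cognition
daily_loss_l = 2.5          # approximate daily water loss, temperate climate

deficit_l = total_body_water_l * deficit_fraction     # litres lost at threshold
hours_to_deficit = deficit_l / (daily_loss_l / 24.0)  # assuming constant loss rate

print(f"deficit threshold: {deficit_l:.1f} l")
print(f"hours to reach it with no intake: {hours_to_deficit:.1f}")
```

As noted, the constant-loss assumption is false, so this is an order-of-magnitude figure at best.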

http://freebeacon.com/michelle-obamas-drink-more-water-campaign-based-on-faulty-science/ : “There really isn’t data to support this,” said Dr. Stanley Goldfarb of the University of Pennsylvania. “I think, unfortunately, frankly, they’re not basing this on really hard science. It’s not a very scientific approach they’ve taken. … To make it a major public health effort, I think I would say it’s bizarre.” Goldfarb, a kidney specialist, took particular issue with White House claims that drinking more water would boost energy. ”The idea drinking water increases energy, the word I’ve used to describe it is: quixotic,” he said. “We’re designed to drink when we’re thirsty. … There’s no need to have more than that.”

http://ask.metafilter.com/166600/Drinking-more-water-should-make-me-less-thirsty-right : When you don't drink a lot of water your body retains liquid because it knows it's not being hydrated. It will conserve and reabsorb liquid. When you start drinking enough water to stay more than hydrated your body will start using the water and then dispensing of it as needed. Your acuity for thirst will be activated in a different way and in a sense work better.

Some thoughts:

  • More frequent water-drinking makes you urinate more often, which is probably a bad thing for productivity.
  • There might be negative effects with chronic mild dehydration at levels less severe than in the studies above.
  • There might also be hormetic effects. (As in, your body functions best under frequent mild dehydration because that's what happened in the EEA, and always giving it as much water as it wants will be bad.)

Thoughts? Please post your own opinion if you're knowledgeable about this or if you've researched it.

Comment author: hyporational 29 January 2014 03:15:33PM *  2 points [-]

While you're at it, you should probably also research how much water is too much, because on the other side of the spectrum lies hyponatremia, and having suboptimal electrolyte levels from overdosing on water could harm your cognition too, although I think it's unlikely anyone here will develop measurable hyponatremia just from drinking too much water. Sweating a lot, for example, might change the situation.

this definitely seems like something that's worth researching more because it literally affects every single day of your life

This doesn't look like a selective enough heuristic alone.

Comment author: ephion 28 January 2014 04:38:16PM 5 points [-]

More frequent water-drinking makes you urinate more often, which is probably a bad thing for productivity.

Extended sedentary periods are bad for you, so if drinking extra water also makes you get up and walk to the bathroom, that's a win-win.

Comment author: hyporational 29 January 2014 06:37:31PM 2 points [-]

Except when you're trying to sleep.

Comment author: ChristianKl 28 January 2014 10:51:46PM 1 point [-]

As far as water consumption goes, I feel the difference between drinking one liter or four liters per day. I just feel much better with four liters.

There were times two years ago when, unless I had drunk 4 liters by the time I entered my salsa dancing location in the evening, my muscle coordination was worse and the dancing didn't flow well.

Does that mean that everyone has to drink 4 liters to be at their optimum? No, it doesn't. Get a feel for how different amounts of water consumption affect you. For me the effect was clear to see without even needing to do QS. Even if it's not as clear for you, do QS.

Comment author: John_Maxwell_IV 28 January 2014 04:47:33AM 1 point [-]

Thanks for writing this up.

this definitely seems like something that's worth researching more because it literally affects every single day of your life

Lots of things fall into this category :)

"A person's body, during an average day in a temperate climate such as the United Kingdom, loses approximately 2.5 litres of water.[citation needed]"

In case it's not obvious: this probably means in the absence of food/fluid consumption. You can't go on losing 2.5 litres of water a day indefinitely.

Comment author: adamzerner 28 January 2014 03:12:28AM 2 points [-]

I'm recalling a Less Wrong post about how rationality only leads to winning if you "have enough of it". Like if you're "90% rational", you'll often "lose" to someone who's only "10% rational". I can't find it. Does anyone know what I'm talking about, and if so can you link to it?

Comment author: ahbwramc 28 January 2014 03:48:39PM 3 points [-]
Comment author: Qiaochu_Yuan 27 January 2014 07:15:00PM *  5 points [-]

A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?

Comment author: pewpewlasergun 01 February 2014 12:20:50AM 1 point [-]

I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.

Comment author: RationalityVienna 27 January 2014 01:58:40PM 12 points [-]

Hello, we are organizing monthly rationality meetups in Vienna - we have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account for creating rationality vienna meetups.

Comment author: pan 27 January 2014 06:33:04PM 5 points [-]

Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I'm interested in seeing if there are any low hanging fruit I'm missing.

I found this previously posted, and a series of posts by gwern, but was wondering if there is anything else?

A quick google will give you a lot of lists but most of them are from news sources that I don't trust.

Comment author: John_Maxwell_IV 29 January 2014 08:28:57AM 3 points [-]

Romeo Stevens made this comprehensive doc.

Comment author: Vladimir_Golovin 28 January 2014 08:41:22AM *  1 point [-]

Eating a handful of nuts a day.

"Scientists from Dana-Farber Cancer Institute, Brigham and Women's Hospital, and the Harvard School of Public Health came to this conclusion after analyzing data on nearly 120,000 people collected over 30 years."

"The most obvious benefit was a reduction of 29 percent in deaths from heart disease - the major killer of people in America. But we also saw a significant reduction - 11% - in the risk of dying from cancer."

http://www.medicalnewstoday.com/articles/269206.php

Comment author: RichardKennaway 28 January 2014 01:46:18PM 1 point [-]

But:

The researchers point out that the study was not designed to examine cause and effect and so cannot conclude that eating more nuts causes people to live longer.

Indeed, the study consists only of observational data, not interventional, so what causal conclusions could be drawn from it?

Comment author: IlyaShpitser 31 January 2014 06:35:48AM 1 point [-]

You act like people never did a valid causal analysis of the data in the Nurses' health study.

Comment author: Qiaochu_Yuan 27 January 2014 07:18:08PM *  3 points [-]

I found this list of causes of death by age and gender enlightening (it doesn't necessarily tell you that a particular action will increase your lifespan, but then again neither do correlations). For example, I was surprised by how often people around my age or a bit older die of suicide and "poisoning" (not sure exactly what this covers but I think it covers stuff like alcohol poisoning and accidentally overdosing on medicine?).

Comment author: Lumifer 27 January 2014 06:45:04PM *  1 point [-]

Is there a reasonably well researched list of behaviors that correlate positively with lifespan?

Depends on what you'd call "well-researched" but, unfortunately, most of it is fuzzy platitudes. For example:

  • Do physical exercise. But not too much.
  • Be happy, avoid stress.
  • Get happily married.
  • Don't get obese.

and most importantly

  • Choose your parents well, their genes matter :-P
Comment author: buybuydandavis 27 January 2014 09:51:44AM 4 points [-]

Daniel Dennett quote to share, on an argument in Sam Harris' book Free Will:

... he has taken on a straw man, and the straw man is beating him

From: http://www.samharris.org/blog/item/reflections-on-free-will#sthash.5OqzuVcX.dpuf

Just thought that was pretty damn funny.

Comment author: DanielLC 04 February 2014 06:39:53AM 0 points [-]

That's known as Strawman Has A Point (Warning: TVTropes).

Comment author: gedymin 27 January 2014 01:23:58PM *  2 points [-]

I'm quite new to LW, and find myself wondering whether Hidden Markov models (HMM) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks?

Perhaps it's because HMMs seem more difficult to grasp?

Or is it because, formally, HMMs are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMMs are widely used in science in their own right.
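For readers who haven't met them: an HMM is a chain of hidden states, each emitting one observation, and filtering with the forward algorithm takes only a few lines. A minimal sketch, with all probabilities invented for illustration:

```python
# Two-state toy HMM; every number here is made up for illustration.
trans = [[0.7, 0.3],   # P(next hidden state | current hidden state)
         [0.4, 0.6]]
emit = [[0.9, 0.1],    # P(observation | hidden state)
        [0.2, 0.8]]
prior = [0.5, 0.5]     # P(initial hidden state)

def forward(observations):
    """Forward algorithm: P(hidden state now | all observations so far)."""
    belief = [prior[s] * emit[s][observations[0]] for s in range(2)]
    for obs in observations[1:]:
        belief = [emit[s][obs] * sum(belief[r] * trans[r][s] for r in range(2))
                  for s in range(2)]
    total = sum(belief)
    return [b / total for b in belief]

print(forward([0, 0, 1]))
```

The same model drawn as a Bayes net is just a chain, which is why HMMs count as a special case of dynamic Bayes nets.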

For comparison, Google search "bayes OR bayesian network OR net" site:lesswrong.com gives 1,090 results.

Google search hidden markov model site:lesswrong.com gives 91 results.

Comment author: ChristianKl 27 January 2014 10:52:30PM 1 point [-]

Hidden Markov models are a reasoning model for solving a specific problem. If you don't face that specific problem, they are of no use.

Most of the problems we discuss aren't modeled well with HMMs.

Comment author: MathiasZaman 27 January 2014 12:50:42PM 2 points [-]

Is there a good way of finding what kind of job might fit a person? Common advice such as "do what you like to do" or "do what you're good at" is relatively useless for finding a specific job or even a broader category of jobs.

I've did some reading on 80000 hours, and most of the advice there is on how to choose between a couple of possible jobs, not on finding a fitting one from scratch.

Comment author: memoridem 28 January 2014 02:46:30AM *  2 points [-]

I think for most people who ask this question, the range of fitting jobs is much wider than they think. You learn to like what you become good at.

If I were to pick a career right now, I'd just take a long list of reasonably complex jobs and remove any that contain an obvious obstacle, like a skill requirement I'm unlikely to improve at. Then, from what is left, I'd narrow the choice by criteria other than perceived fit (income and future employment prospects, for example) and pick one of them either by some additional criteria or randomly. I'm confident I'd learn to like almost any job chosen this way.

If you make money you can do whatever you like in the future even if you chose your job poorly in the first place. So please don't choose to become an English major.

Comment author: ChristianKl 27 January 2014 10:58:39PM 2 points [-]

Is there a good way of finding what kind of job might fit a person?

That's a strange question.

Either you want to know how to pick up the skill of being a career adviser, or you want to find a job for yourself. You might also be a parent trying to find a job that fits your child instead of letting the child decide for themselves.

I think the answers to those three possibilities are very different.

Comment author: gwern 27 January 2014 01:02:36AM *  16 points [-]

Some names familiar to LWers seem to have just made their fortunes (again, in some cases); http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/ (via HN)

Google is shelling out $400 million to buy a secretive artificial intelligence company called DeepMind....Based in London, DeepMind was founded by games prodigy and neuroscientist Demis Hassabis, Skype & Kazaa developer Jaan Tallin and researcher Shane Legg.

I liked Legg's blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.

EDIT: bigger discussion at http://lesswrong.com/r/discussion/lw/jks/google_may_be_trying_to_take_over_the_world/#comments - new aspects: $500m, not $400m; DeepMind proposes an ethics board

Comment author: PECOS-9 27 January 2014 12:22:39AM *  13 points [-]

PSA: You can download from scribd without paying, you just need to upload a file first (apparently any file -- it can be a garbage pdf or even a pdf that's already on scribd). They say this at the very bottom of their pricing page, but I didn't notice until just now.

Comment author: moridinamael 27 January 2014 03:02:25AM 5 points [-]

Has anyone had experiences with virtual assistants? I've been aware of the concept for many years but always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.

I'd like to hear about any positive or negative experiences.

One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That's why I'm asking here.

Comment author: TylerJay 27 January 2014 02:06:19AM 5 points [-]

The MIRI course list bashes on "higher and higher forms of calculus" as not being useful for their purposes and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.

So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?

Comment author: Qiaochu_Yuan 27 January 2014 07:25:49PM *  7 points [-]

Not much. The kind of probability relevant to MIRI's interests is not the kind of probability you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that's more about algorithms. Not an expert on numerical analysis by any means, though.

If you have a general interest in mathematics, I still recommend that you learn some calculus because it's an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.

Comment author: TylerJay 27 January 2014 08:02:19PM 1 point [-]

Thanks. I took single variable calculus, differential equations, and linear algebra in college, but it's been four years since then and I haven't really used any of it since (and I think I really only learned it in context, not deeply). I've just been trying to figure out how much of my math foundations I'm going to need to re-learn.

This was helpful.

Comment author: MarkL 27 January 2014 01:05:45AM 6 points [-]

My meditation blog from a (somewhat) rationalist perspective is now past 40 posts:

http://meditationstuff.wordpress.com/

Comment author: moridinamael 28 January 2014 03:07:29PM 1 point [-]

Do you have any material for dealing with chronic pain? Or material that could conceivably be leveraged to apply to chronic pain management?

Comment author: bramflakes 26 January 2014 05:09:55PM *  17 points [-]

I'm going to do the unthinkable: start memorizing mathematical results instead of deriving them.

Okay, unthinkable is hyperbole. But I've noticed a tendency within myself to regard rote memorization of things to be unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient because it led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing it every time I need to multiply two numbers together is obviously wrong. I don't know how widespread this is, but at least in my school, memorization was something that was left to the lower-status, less able people who couldn't grasp why certain results were true. I had gone along with this idea without thinking about it critically.

So these are the things I'm going to add to my anki decks, with the obligatory rule that I'm only allowed to memorize results if I could theoretically re-derive them (or if the know-how needed to derive them is far beyond my current ability). These will include common trig results, derivatives and integrals of all basic functions, most physical formulae relating heat, motion, pressure and so on. I predict that the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems, though I can't think of a way to measure this. Also, recommendations for other things to memorize are welcome.

Also, relevant

Comment author: shminux 26 January 2014 05:44:58PM 10 points [-]

In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.

Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.

You are right that "the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems", I am not sure that anki is the best way to achieve this reduction, though it is certainly worth a try.

Comment author: ChristianKl 26 January 2014 11:04:45PM 2 points [-]

In general, a core principle of spaced repetition is that you don't put something into the system that you don't already understand.

When trying to memorize mathematical results, make sure that you only add cards when you really have a mental understanding. Using Anki to avoid forgetting basic operations is great. If, however, you add a bunch of complex information, you will forget it and waste a lot of time.

Comment author: whales 26 January 2014 11:56:19PM *  4 points [-]

That's true if you're just using spaced repetition to memorize, although I'd add that it's still often helpful to overlearn definitions and simple results just past the boundaries of your understanding, along the lines of Prof. Ravi Vakil's advice for potential students:

Here's a phenomenon I was surprised to find: you'll go to talks, and hear various words, whose definitions you're not so sure about. At some point you'll be able to make a sentence using those words; you won't know what the words mean, but you'll know the sentence is correct. You'll also be able to ask a question using those words. You still won't know what the words mean, but you'll know the question is interesting, and you'll want to know the answer. Then later on, you'll learn what the words mean more precisely, and your sense of how they fit together will make that learning much easier. The reason for this phenomenon is that mathematics is so rich and infinite that it is impossible to learn it systematically, and if you wait to master one topic before moving on to the next, you'll never get anywhere. Instead, you'll have tendrils of knowledge extending far from your comfort zone. Then you can later backfill from these tendrils, and extend your comfort zone; this is much easier to do than learning "forwards". (Caution: this backfilling is necessary. There can be a temptation to learn lots of fancy words and to use them in fancy sentences without being able to say precisely what you mean. You should feel free to do that, but you should always feel a pang of guilt when you do.)

The second point I'd make is that the spacing effect (distributed practice) works for complex learning goals as well, although it will help if your practice consists of more than rote recall.

Comment author: bramflakes 26 January 2014 11:45:44PM 1 point [-]

Yeah, I'm wary of that fact and I've learned the downsides of it through experience :)

Comment author: whales 26 January 2014 08:56:14PM *  1 point [-]

Nice, and good luck! I'm glad to see that my post resonated with someone. For rhetorical purposes, I didn't temper my recommendations as much as I could have -- I still think building mental models through deliberate practice in solving difficult problems is at the core of physics education.

I treat even "signpost" flashcards as opportunities to rehearse a web of connections rather than as the quiz "what's on the other side of this card?" If an angle-addition formula came up, I'd want to recall the easy derivation in terms of complex exponentials and visualize some specific cases on the unit circle, at least at first. I also use cards like that in addition to cards which are themselves mini-problems.

Comment author: lukeprog 26 January 2014 08:59:03AM *  36 points [-]

Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say "oops" for each of them. Here we go...

There was once a time when the average human couldn't expect to live much past age thirty. (Jul 2012)

That's probably wrong. IIRC, previous eras' low life expectancy was mostly due to high child mortality.

We have not yet mentioned two small but significant developments leading us to agree with Schmidhuber (2012) that "progress toward self-improving AIs is already substantially beyond what many futurists and philosophers are aware of." These two developments are Marcus Hutter's universal and provably optimal AIXI agent model... and Jurgen Schmidhuber's universal self-improving Godel machine models... (May 2012)

This sentence is defensible for certain definitions of "significant," but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Godel machines probably aren't particularly important pieces of progress to AGI worth calling out like that. I added those paragraphs to section 2.4. not long before the submission deadline, and regretted it a couple months later.

one statistical prediction rule developed in 1995 predicts the price of mature Bordeaux red wines at auction better than expert wine tasters do. (Jan 2011)

No, that's a misreading of the study.

On September 26, 1983, Soviet officer Stanislav Petrov saved the world. (Nov 2011)

Eh, not really.

in the U.S., the administering charity need not spend from the donor-advised fund as the donor wishes, though they often do. (Jul 2012)

Silly. Donor-advised funds basically always fund as the donor wishes.

Comment author: ChrisHallquist 27 January 2014 03:17:51AM 1 point [-]

Huh. I followed the link to the correction of the Petrov story, and found I'd already upvoted it.

But if you'd asked me yesterday for examples of heroes, I'd have cited Petrov immediately. Shows how hard it is to unlearn false information once you've learned it.

Comment author: RichardKennaway 26 January 2014 10:12:56AM *  11 points [-]

On September 26, 1983, Soviet officer Stanislav Petrov saved the world. (Nov 2011)

Eh, not really.

The Wiki link in the linked LW post seems to be closer to "Stanislav Petrov saved the world" than "not really":

Petrov judged the report to be a false alarm, and his decision is credited with having prevented an erroneous retaliatory nuclear attack

...

His colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile strike if they had been on his shift.

...

Petrov, as an individual, was not in a position where he could single-handedly have launched any of the Soviet missile arsenal. ... But Petrov's role was crucial in providing information to make that decision. According to Bruce Blair, a Cold War nuclear strategies expert and nuclear disarmament advocate, formerly with the Center for Defense Information, "The top leadership, given only a couple of minutes to decide, told that an attack had been launched, would make a decision to retaliate."

A closely related article says:

Petrov's responsibilities included observing the satellite early warning network and notifying his superiors of any impending nuclear missile attack against the Soviet Union. If notification was received from the early warning systems that inbound missiles had been detected, the Soviet Union's strategy was an immediate nuclear counter-attack against the United States (launch on warning), specified in the doctrine of mutual assured destruction.

That he didn't literally have his finger on the "Smite!" button, or that the SU might still not have retaliated if he'd raised the alarm, is not the point.

Comment author: Gunnar_Zarncke 26 January 2014 07:17:51PM 2 points [-]

Smart move not only to review but to post the results. It shows humility and at the same time prevents being called on it later.

This is an approach I'd like to see more often. Maybe you should add it to the http://lesswrong.com/lw/h7d/grad_student_advice_repository/ or some such.

Comment author: gjm 26 January 2014 10:29:46AM 10 points [-]

previous eras' low life expectancy was mostly due to high child mortality.

I have long thought that the very idea of "life expectancy at birth" is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.

Comment author: TylerJay 26 January 2014 07:18:11PM 2 points [-]

Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live? Or even at a past time?

Comment author: Lumifer 26 January 2014 07:33:42PM 5 points [-]

Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live?

Sure, there is the concept of life expectancy at specific age. For example, there is the "default" life expectancy at birth, there is the life expectancy for a 20 year-old, life expectancy for a 60-year-old, etc. Just google it up.
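A toy cohort shows how far child mortality alone can drag the at-birth number down (all numbers invented for illustration):

```python
# Fake cohort: of 100 births, 30 die at age 1 and 70 die at age 70.
deaths = [(1, 30), (70, 70)]   # (age at death, number of people)

def expected_age_at_death(from_age):
    """Mean age at death among those still alive at from_age."""
    survivors = [(age, n) for age, n in deaths if age >= from_age]
    total = sum(n for _, n in survivors)
    return sum(age * n for age, n in survivors) / total

print(expected_age_at_death(0))   # "life expectancy at birth"
print(expected_age_at_death(5))   # conditional on surviving childhood
```

Here expectancy "at birth" comes out to 49.3 even though every adult in the cohort lives to 70.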

Comment author: fubarobfusco 27 January 2014 07:11:26AM 1 point [-]

It's kind of important to the life insurance business ....

Comment author: TylerJay 26 January 2014 08:08:13PM 1 point [-]

Thanks. Interestingly, my numbers never matched up between any 2 sources.

The US SSA's actuarial tables give me a number that's 5 years different from their own "additional life expectancy" calculator.

Comment author: palladias 26 January 2014 10:23:31PM 3 points [-]

Has anyone paired Beeminder and Project Euler? I'd like to be able to set a goal of doing x problems per week and have it automatically update, instead of me entering the data in manually. Has anyone cobbled together a way to do it, which I could piggyback off of?

Comment author: amacfie 27 January 2014 12:18:42AM 2 points [-]

Is being "sexy" basically signaling promiscuity plus signaling being a fun intercourse partner?

Comment author: ChristianKl 27 January 2014 12:12:54PM 2 points [-]

Sexy is quite a broad word that is probably used by different people in different ways. I think for most people it is about what they feel when looking at the person. Those feelings were set up by evolution over large time frames.

Evolution doesn't really care about whether you get a fun intercourse partner.

But it's not only evolution. It also has a lot to do with culture. Culture also doesn't care about whether you get a fun intercourse partner. People who watch a lot of TV get taught that certain characteristics are sexy.

For myself I would guess that most of my cultural imprint regarding what I find sexy comes from dancing interactions. If a woman moves in a way that suggests that she doesn't dance well, that will reduce her sex appeal to me more than it probably does with the average male.

Comment author: Lumifer 27 January 2014 01:12:33AM 2 points [-]

"Sexy" isn't signaling -- it's a characteristic that people (usually) try to signal, more or less successfully. "I'm sexy" basically means "You want me" : note the difference in subjects :-)

Comment author: ChristianKl 27 January 2014 12:11:08PM 1 point [-]

If a man succeeds in signaling a high sexuality to a woman, the woman might still treat him as a creep. Especially if there's no established trust, signaling really high amounts of sexuality doesn't result in "You want me".

In my own interactions with professional dancers there are plenty of situations where the woman succeeds in signaling a high amount of sexiness. I however know that I'm dancing with a professional dancer who is going to send that signal to a lot of guys, so she doesn't enter my mental category of potential mates.

I think people frequently go wrong when they confuse impressions of characteristics with goals.

Comment author: Lumifer 27 January 2014 03:43:11PM 2 points [-]

If a man succeeds in signaling a high sexuality to a women, the woman might still treat him as a creep.

In which case he failed to signal "sexy" and (a common failure mode) signaled "creepy" instead.

Comment author: ChristianKl 27 January 2014 04:06:51PM 1 point [-]

It depends on how you define the term.

For a reasonable definition of sexy, the term refers to letting a woman feel sexual tension. If you talk about social interactions it's useful to have a word that refers to making another person feel sexual tension.

Of course you can define beautiful, attractive and sexy all the same way. Then you get a one-dimensional model where Bob wants Alice with utility rating X. I don't think that model is very useful for understanding how humans behave in mating situations.

Comment author: Lumifer 27 January 2014 04:14:57PM 1 point [-]

It depends on how you define the term.

I define it as "arousing sexual interest and desire in people of appropriate gender and culture". Note that this is quite different from "beautiful" and is a narrow subset of "attractive".

the term refers to letting a woman feel sexual tension.

"Tension" generally implies conflict or some sort of a counterforce.

Comment author: amacfie 27 January 2014 01:58:49AM 1 point [-]

Ok, I may have been too vague. I was thinking of the exhibition of sexy behavior, e.g. clothes, dancing/gestures, sex-related language.

Comment author: Lumifer 27 January 2014 02:12:57AM *  1 point [-]

Pretty much the same thing. Regardless of an, um, widespread misunderstanding :-D sexy behavior does NOT signal either promiscuity or sexual availability. It signals "I want you to desire me" and being desired is a generally advantageous position to be in.

Comment author: Torello 27 January 2014 01:41:31AM 1 point [-]

Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles.

I would agree with what Lumifer says below, but I think sexy can be signalling when many people are involved: look at the sexy people I hang out with. Being with sexy people brings high status because it's high status.

Comment author: ChristianKl 27 January 2014 11:33:19AM 1 point [-]

Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles.

I think you confuse the label "sexy" with the label "attractive". As far as my reading goes few articles use the term sexy.

Comment author: Daniel_Burfoot 26 January 2014 03:51:21PM *  5 points [-]

Does anyone else experience the feeling of alienation? And does anyone have a good strategy for dealing with it?

Comment author: memoridem 28 January 2014 02:12:15AM *  2 points [-]

I think this feeling arises from social norms feeling unnatural to you. This feeling should be expected if your interests are relevant to this site, since people are not trying to be rational by default.

The difference between a pathetic misfit and an admirable eccentric is their level of awesomeness. If you become good enough at anything relevant to other people, you don't have to live through their social expectations. Conform to the norms or rise above them.

Note that I think most social norms are nice to have, but this doesn't mean there aren't enough of the kind that make me feel alienated. It could be that the feeling of alienation is a necessary side effect of some beneficial cognitive change, in which case I'd try to cherish the feeling. I've found that rising to a leadership position diminishes the feeling significantly, however.

Comment author: MathiasZaman 27 January 2014 10:56:26AM 1 point [-]

I think that feeling is more common than you might think. Especially if you deviate enough from the societal norm (which Less Wrong generally does).

My general strategy for dealing with it is social interaction with people who'll probably understand. Just talk it over with them. It's best if you do this with people you care about. It doesn't have to be in person; if you've got someone relevant on Skype, that works as well.

Comment author: Daniel_Burfoot 27 January 2014 02:38:16PM 2 points [-]

Hmm, this is probably good advice. Part of my problem is that my entire family is made up of people who are both 1) Passionate advocates of an American political tribe and 2) Not very sophisticated philosophically.

Comment author: MathiasZaman 27 January 2014 02:57:29PM 3 points [-]

A common condition with geeks in general and aspiring rationalists in particular, I'd say.

I've recently been expanding my network of like-minded people both by going to the local meetups and also by being invited into a Skype group for tumblr rationalists.

I know that a feeling of alienation isn't conducive to meeting new people, so I'm not sure I can offer other advice. Contact some friends who might be open to new ideas? I'd offer to help myself, but I'm not sure if I'm the right person to talk to. (In any case, I've PM'd my Skype name if you do need a complete stranger to talk to.)

Comment author: Lumifer 26 January 2014 05:13:11PM 7 points [-]

Does anyone else experience the feeling of alienation?

But of course.

And does anyone have a good strategy for dealing with it?

Accept that you're not average and not even typical.

Comment author: ChristianKl 26 January 2014 10:11:31PM 2 points [-]

Feelings usually become a problem when you resist them.

My general approach with feelings:

  1. Find someone to whom you can express the content behind the feeling. This works best in person. Online communication isn't good for resolving feelings. Speak openly about whatever comes to mind.

  2. Track the feeling down in your body. Be aware where it happens to be. Then release it.

Comment author: Kawoomba 26 January 2014 04:18:43PM 4 points [-]

Yes, although it would help if you could be a bit more specific, the term is somewhat overloaded.

As for the strategy, depends. Find a better community (than the one you feel alienated from) in the sense of better matching values? We both seem to feel quite at home in this one (for me, if not for the suffocating supremacy of EA).

Comment author: Daniel_Burfoot 26 January 2014 04:44:47PM *  5 points [-]

I meant alienated from society at large, not from LW, although the influence of society at large obviously affects discussion on LW.

One aspect of my feeling is that I increasingly suspect that the fundamental reason people believe things in the political realm is that they feel a powerful psychological need to justify hatred. The naive view of political psychology is that people form ideological beliefs out of their experience and perceptions of the world, and those beliefs suggest that a certain category of people is harming the world, and so therefore they are justified in feeling hatred against that category of people. But my new view is that causality flows in the opposite direction: people feel hatred as a primal psychological urge, and so their conscious forebrain is forced to concoct an ideology that justifies the hatred while still allowing the individual to maintain a positive pro-social self-image.

This theory is partially testable, because it posits that a basic prerequisite of an ideology is that it identifies an out-group and justifies hatred against that out-group.

Comment author: fubarobfusco 27 January 2014 06:06:41AM *  3 points [-]

There is a quote commonly mis-attributed to August Bebel and indeed to Marx: "Antisemitismus ist der Sozialismus des dummen Kerls." ("Antisemitism is the socialism of the stupid guy", or perhaps colloquially, "Antisemitism is a dumb-ass version of socialism") That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker's view, anyway — would be better remedied by changing the political-economic structure.

Jay Smooth recently put out a video, "Moving the Race Conversation Forward", discussing recent research to the effect that mainstream-media discussions of racial issues tend to get bogged down in talking about whether an individual did or said something racist, as opposed to whether institutions and social structures produce racially biased outcomes.

There are probably other sources for similar ideas from around the political spectra. (I'll cheerfully admit that the above two sources are rather lefter than I am, and I just couldn't be arsed to find two rightish ones to fit the politesse of balance.) People do often look for individuals or out-groups to blame for problems caused by economic conditions, social structures, institutions, and so on. The individuals blamed may have precious little to do with the actual problems.

That said, if someone's looking to place blame for a problem, that does suggest the problem is real. It's not that they're inventing the problem in order to have something to pin on an out-group. (It also doesn't mean that a particular structural claim, Marxist or whatever, is correct on what that problem really is — just that the problem is not itself confabulated.)

Comment author: RichardKennaway 27 January 2014 08:11:10PM *  4 points [-]

There is a quote commonly mis-attributed to August Bebel and indeed to Marx: "Antisemitismus ist der Sozialismus des dummen Kerls." ("Antisemitism is the socialism of the stupid guy", or perhaps colloquially, "Antisemitism is a dumb-ass version of socialism") That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker's view, anyway — would be better remedied by changing the political-economic structure.

Does that make socialism the anti-semitism of the smart? Or perhaps of the ambitious -- they're attracted to it because it gives them an enemy big enough to justify taking over everything?

Comment author: NancyLebovitz 27 January 2014 05:45:22PM 2 points [-]

I've seen it phrased as "Anti-semitism is the socialism of fools".

Comment author: NancyLebovitz 27 January 2014 03:34:58AM 1 point [-]

Tentatively: Look for what "and therefore" you've got associated with the feeling. Possibilities that come to my mind-- and therefore people are frightening, or and therefore I should be angry at them all the time, or and therefore I should just hide, or and therefore I shouldn't be seeing this.

In any case, if you've got an "and therefore" and you make it conscious, you might be able to think better about the feeling.

Comment author: Viliam_Bur 26 January 2014 07:23:59PM *  3 points [-]

The part where the emotional needs come first, and the ideological belief comes later as a way of expressing and justifying them, that feels credible. I just don't think that everyone starts from the position of hatred (or, in the naive view, not everyone ends with hatred). There are other emotions, too.

But maybe the people motivated by hatred make up a large part of the most mindkilled crowd. Because other emotions can also be expressed legitimately outside of politics.

Comment author: maia 26 January 2014 06:36:31PM 1 point [-]

Do you have an in-person community that you feel close to?

What I'm trying to get at is, does it bother you specifically that you are alienated from "society at large," or do you feel alienated in general?

Comment author: falenas108 26 January 2014 07:56:14AM 14 points [-]

I've been systematically downvoted for the past 16 days. Every day or two, I'd lose about 10 karma. So far, I've lost a total of about 160 karma.

It's not just somebody just going through my comments and downvoting the ones they disagree with. Even a comment where I said "thanks" when somebody pointed out a formatting error in my comments is now at -1.

I'm not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.

Comment author: VAuroch 30 January 2014 01:09:30AM *  1 point [-]

I have experienced this also, though roughly a month ago, after an extended debate on trans* issues specifically.

I responded by messaging the person I had argued with, and politely asking that, if it was them who had been downvoting me, they please stop going through my comment history. I got no response, but the stream of downvotes seemed to tail off shortly thereafter.

EDIT: As a side note, the person with whom I had been debating/arguing was the same one that showed up in the thread ChrisHallquist linked. It looks like it's a pattern of behavior for him.

Comment author: Vulture 27 January 2014 06:48:26PM *  4 points [-]

I got a seemingly one-time hit of this about a week ago. For what it's worth I had just been posting comments on the subject of rape, but a whole bunch of my unrelated comments got it too.

(Since then it's been having an obnoxious deterrent effect on my commenting, because I feel so precariously close to just accumulating negative karma every time I post, leaving readers with the impression that my ideas have all been identified as worthless by someone probably cleverer than themselves. I'm now consciously trying to avoid thinking like this)

Comment author: ChrisHallquist 26 January 2014 09:43:33PM 3 points [-]
Comment author: CAE_Jones 26 January 2014 08:49:10AM 12 points [-]

A quick look at the first page of your recent comments shows most of your recent activity to have been in the recent "Is Less Wrong too scary to marginalized groups?" firestorm.

One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).

Comment author: gjm 26 January 2014 10:32:46AM 1 point [-]

I would prefer to see a little less victim-blaming here.

(I'm not sure whether you intended it as such -- but that phrase "participation in flame-bait topics" sounds like it.)

Comment author: drethelin 26 January 2014 09:48:54PM 2 points [-]

How is this victim blaming? As I interpret it the claim is that the person was probably NOT the victim of systematic downvoting but instead made a lot of comments that are counter to what people like to hear, creating the illusion of same.

Comment author: gjm 26 January 2014 10:12:13PM 3 points [-]

Hard to explain getting downvoted for

a comment where I said "thanks" when somebody pointed out a formatting error in my comments

as being about saying things "counter to what people like to hear". Which is why I didn't interpret CAE_Jones as suggesting that that's what was going on.

Comment author: CAE_Jones 26 January 2014 10:45:40AM 9 points [-]

That was not my intention. (If it's any consolation, I participated in the same firestorm.)

Comment author: pragmatist 26 January 2014 09:11:47AM *  4 points [-]

Gah... This is becoming way too common, and it seems like there's pretty good evidence (further supported in this instance) regarding the responsible party. I wish someone with the power to do so would do something about it.

Comment author: Stabilizer 26 January 2014 03:55:26AM 15 points [-]

In this article, Eliezer says:

Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

Recently, a similar phrase popped into my head, which I found quite useful:

Confusion gets curiosity. Does not get anger, disgust or fear. Never. Never ever never for ever.

That's all.

Comment author: [deleted] 26 January 2014 04:28:54PM 2 points [-]

I don't know what you mean precisely by confusion, but I personally can't always control what my immediate primal-level response is to certain situations. If I try to strictly avoid certain feelings, I usually end up convincing myself that I'm not feeling that way when actually I am. I'd rather notice what I'm feeling and then move on from there; it's probably easier to control your thinking that way. Just because you're angry doesn't mean you have to act angry.

Comment author: Stabilizer 26 January 2014 05:57:30PM 1 point [-]

That's basically what I meant. The move is to notice the anger, fear or disgust and then realize that this emotion isn't useful and can be actively detrimental. Then consciously try to switch to curiosity.

Of course, I couldn't condense the full messiness of reality into a pithy saying.

Comment author: Thomas 26 January 2014 09:51:32AM *  5 points [-]

Last night we had a meetup in Ljubljana. It was a good debate, but quite a heretical one by LW standards. Especially when the organizers left us. Which was unfortunate. We mostly don't see ourselves as particularly bonded to LW at all. Especially I.

We discussed personal identity, possible near super-intelligence (sudden hack, if you wish), Universe transformation following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, Einstein's brain (whether it was lighter or heavier than average - I am quite sure it was heavier but it seems that the Cathedral says otherwise).

We discussed Three Worlds Collide, IBM brain simulations, MIRI endeavors and progress, genetics ...

More than 5 hours of an interesting debate.

Comment author: Luke_A_Somers 26 January 2014 03:15:03PM 2 points [-]

Heretical? Well, considering that 'heretic' means 'someone who thinks on their own', I'm not sure how we're supposed to interpret that negatively.

I assume however that you meant 'disagreeing with common positions displayed on LW' - which of those common positions did you differ on, and why, and just how homogeneous do you think LW is on those?

Comment author: banx 26 January 2014 12:27:13AM *  5 points [-]

Is it always correct to choose that action with the highest expected utility?

Suppose I have a choice between action A, which grants -100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, or action B which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decision will involve utility changes on the order of a few utilons.

Intuitively, it seems like action A is too risky. I'll almost certainly end up with a huge decrease in utility, just because there's a remote chance of a windfall. Risk aversion doesn't apply here, since we're dealing in utility, right? So either I'm failing to truly appreciate the chance at getting 1M utilons -- I'm stuck thinking about it as I would money -- or this is a case where there's reason to not take the action that maximizes expected value. Help?

EDIT: Changed the details of action A to what was intended
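The arithmetic in the comment (with the edited numbers) can be checked with a short sketch; the probabilities and utilities are taken directly from the comment above:

```python
# Expected utility of an action given as (probability, utility) pairs.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in outcomes)

action_a = [(0.999, -100), (0.001, 1_000_000)]
action_b = [(1.0, 1)]

eu_a = expected_utility(action_a)  # 0.999 * (-100) + 0.001 * 1000000 = 900.1
eu_b = expected_utility(action_b)  # 1.0
```

So A's expected utility really is +900.1 with the edited numbers; with the original +10000 payoff it would have been -89.9, which is the discrepancy Oscar_Cunningham points out below.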

Comment author: Oscar_Cunningham 26 January 2014 11:48:49AM 3 points [-]

A, which grants -100 utilons with 99.9% chance and +10000 utilons with 0.1%

A has an expected utility of +900.1 utilons

Um, A actually has a utility of -89.9.

That explains why it seems less appealing!

Comment author: Alejandro1 26 January 2014 01:00:27AM *  14 points [-]

I think the non-intuitive nature of the A choice is because we naturally think of utilons as "things". For any valuable thing (money, moments of pleasure, whatever) anybody who is minimally risk averse would choose B. But utilons are not things, they are abstractions defined by one's preferences. So the fact that A is the rational choice is a tautology, in the standard versions of utility theory.

It may help to think it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and 0.1% chance of winning 10000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don't think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to -100 utilons and winning 10000 dollars to +10000 utilons. Then we can refine and extend the "outcomes <=> utilons" map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, there will be a mapping that can be constructed.

ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people's intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated than just considering risk-aversion on a single bet.

Comment author: [deleted] 26 January 2014 01:26:49AM 11 points [-]

Also, it's well possible that your utility function doesn't evaluate to +10000 for any value of its argument, i.e. it's bounded above.

Comment author: ThrustVectoring 26 January 2014 03:48:54AM 3 points [-]

I'd flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It's almost a definition of what utility is - if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.

If the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have (probably related to risk).

I've recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I'm working on working with that to do effective long-term planning -- the best trick I have so far is weighing "unacceptable status quo continues" as a risk. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn't help any, and I've been doing that).

Comment author: iconreforged 25 January 2014 08:34:05PM 9 points [-]

Even if you know that signaling is stupid, it doesn't escape the cost of not signaling.

It's a longstanding trope that Eliezer gets a lot of flack for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it's not very easy to fake (Not so easy to fake that it falls apart as a credential on its own). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not it's a breezy recap of what he already knows. He comes out the other side without the eternal "has no formal education" tagline, and a whole new slew of acquaintances.

Now, I understand that there may be good reasons not to, and I'd very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a "Here's your tuition and an extra sum of money to cover the opportunity cost of your time, I don't care how unfair it is that people won't take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible" scholarship?

Comment author: drethelin 26 January 2014 09:56:22PM 1 point [-]

This might have been a good call 10 years ago but nowadays Eliezer is participating in regular face to face meetings with skilled mathematicians and scientists in the context of constructing and analyzing theorems and decision strategies. This means that for a large amount of the people who are most important to convince, he gets to screen out all the "evidence" of not having a degree. And to a large extent, someone having the respect of a bunch of math phds is more important a qualifier of talent than having that phd themselves.

There's theoretically still the problem of selling Eliezer to the muggles but I don't think that's anywhere near as important as getting serious thinkers on board.

Comment author: jsteinhardt 26 January 2014 06:56:12AM *  5 points [-]

4 years (or even 1 year if you are super hard-core) of time is a pretty non-trivial investment. I was 2 classes away from a second degree and declined to take them, because the ~100 hours of work it would have taken wasn't worth the additional letters after my name. I also just really don't know anyone relevant who thinks that a college degree or lack thereof particularly matters (although the knowledge and skills acquired in the course of pursuing said degree may matter a lot). Good people will judge you by what you've done to demonstrate skill, not based on a college diploma.

I think IlyaShpitser's comment pretty much nails it.

Comment author: IlyaShpitser 25 January 2014 11:06:50PM *  19 points [-]

Has anyone toyed around with the idea of sending him off to get a math degree somewhere?

I think the bigger issue w/ people not taking EY seriously is he does not communicate (e.g. publish peer reviewed papers). Facebook stream of consciousness does not count. Conditional on great papers, credentials don't mean that much (otherwise people would never move up the academic status chain).

Yes it is too bad that writing things down clearly takes a long time.

Comment author: lukeprog 26 January 2014 07:55:11AM *  12 points [-]

Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even still, I'll briefly mention that Eliezer is currently meeting with a "mathematical exposition aimed at math researchers" tutor. I don't know yet what the effects will be, but it seemed (to Eliezer and me) a worthwhile experiment.

Comment author: ciphergoth 26 January 2014 05:02:09PM 3 points [-]

Presumably if MIRI were awash with funding you'd pay experts to make papers out of Eliezer's work, freeing Eliezer up for other things?

Comment author: lukeprog 26 January 2014 05:12:25PM 10 points [-]

That's basically what another of our ongoing experiments is.

Comment author: iconreforged 25 January 2014 11:45:29PM 3 points [-]

True. It seems like the great-papers avenue is being pursued full-steam these days with MIRI, but I wonder if they're going to run out of low-hanging fruit to publish, or if mainstream academia is going to drag their heels replying to them.

Comment author: [deleted] 26 January 2014 01:20:02AM 5 points [-]

Here's your tuition and an extra sum of money to cover the opportunity cost of your time

If you buy into the “crunch time” narrative, that's a lot of opportunity cost.

Comment author: ChristianKl 25 January 2014 09:02:00PM 9 points [-]

I don't think you understand signaling well.

Eliezer managed signaling well enough to get a billionaire to fund him on his project. A billionaire who systematically funds people who drop out of college, through projects like his 20 Under 20 program.

Trying to go the traditional route wouldn't fit into the highly effective image that he already signals.

Comment author: James_Miller 26 January 2014 05:46:10AM *  3 points [-]

Peter Thiel (the billionaire) has the proven ability to spot talent, which is why he is a billionaire. Eliezer has traits that Thiel values, and this is probably much more important than any signal Eliezer sent.

Comment author: fubarobfusco 25 January 2014 09:21:41PM 9 points [-]

Put another way, the purpose of signaling isn't so nobody will give you crap. It's so somebody will help you accomplish your goals.

People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.

Comment author: iconreforged 25 January 2014 11:42:18PM 1 point [-]

Well, yes, there is going to be some inevitable crap, but the purpose of signalling is so that you could impress a much larger pool of people. So it might not be much help with gossip journalists, but it might help with the marginal professional ethicist, mathematician, or public figure. In that area, you might get some additional "Anybody who can do that must be damn impressive." Does the additional damn-impressive outweigh the cost? I don't know, that's why I'm asking.

Comment author: David_Gerard 25 January 2014 10:40:51PM *  1 point [-]

Your last para would imply that not getting crap from gossip journalists means you are not interesting or successful. Eliezer/MIRI gets almost no press. Are you sure that's what you meant?

Comment author: fubarobfusco 25 January 2014 10:54:37PM 3 points [-]

Eliezer gets a lot more press than I do, which is just fine with me.

Comment author: buybuydandavis 26 January 2014 04:37:27AM 2 points [-]

Yes, the autodidact signal can be tremendously effective, particularly in tech/libertarian company.

Comment author: iconreforged 25 January 2014 11:32:45PM 3 points [-]

Impressing Thiel is independent of a future degree or not, because he's already impressed. Where's the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel? Maybe MIRI doesn't need another billionaire, but I don't think they'd turn one away.

Comment author: ChristianKl 26 January 2014 01:06:33AM 5 points [-]

Impressing Thiel is independent of a future degree or not, because he's already impressed.

I think the deal that Eliezer has with Thiel is that Eliezer does MIRI full time. Switching focus to getting a degree might violate the deal. Given that Thiel has a lot of money, impressing Thiel more might also be very useful if they want more money from him.

Where's the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel?

Do you really think that someone who isn't contrarian will put his money into MIRI? The present set up is quite okay. Those who want people with academic credentials can give their money to FHI. Those who want more contrarian people can give their money to MIRI.

Whether or not Eliezer has a degree doesn't change that he's the kind of person who has a public Okcupid profile detailing his sexual habits and the fact that he's polyamorous.

When Steve Jobs was alive and ran around in a sweater, he didn't cause people to disregard him because he wasn't wearing a suit.

People respect a contrarian who's okay with not everyone liking him. The contrarian who tries to get everyone to like them, on the other hand, gets no respect.

Comment author: jimmy 25 January 2014 10:42:28PM 2 points [-]

In addition "getting flak" isn't necessarily a bad thing.

It can be counter-signaling if you can get flak and stay standing.

It can also polarize people and separate those who can evaluate the inside arguments to realize that you're good from those who can't and have to just write you off for having no formal education.

Comment author: iconreforged 25 January 2014 11:25:55PM 1 point [-]

Eddie has some math talent. He can invest some time, money, and effort C to get a degree, which allows other people to discern that he has a higher probability of having that math talent. This higher probability confers some benefit in that other people will more readily take his advice in mathematical matters, or talk with him about his math.

The fun twist is that Eddie lives in a society with many other individuals with varying degrees of math talent, each of whom can expend C to get a degree and the associated benefits. People with almost no mathematical talent have a prohibitively high C, because even if they can pony up the time and money, they have to work very hard to fake their way through. But people with high math ability often choose to stand out by getting the degree, because their C is relatively lower, and a very high proportion of them get degrees. This creates a high association between degrees and mathematical ability, and makes it unlikely to see high mathematical ability in the absence of a degree.

That's the basic idea, plus degrees signal other things which may be completely unrelated to math, but are still nice. Even in the case where the degree has no causal effect on math ability, there are benefits to having one, in that other math people can judge very quickly that they're interested in talking to you.

Hopefully that demonstrates that I understand signalling. My question is about the costs and benefits of a particular signal.

Comment author: Locaha 25 January 2014 03:14:59PM 5 points [-]

Repeating my post from the last open thread, for better visibility:

I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take in the university. The problem is, my mathematical education isn't very good (on the level of Calculus 101). I'm not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.

I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it's even worse with variance...

Any ideas how to proceed?

Comment author: Qiaochu_Yuan 27 January 2014 07:33:13PM *  4 points [-]

I don't think that's really what means are. That intuition might fit the median better. One reason means are nice is that they have really nice properties, e.g. they're linear under addition of random variables. That makes them particularly easy to compute with and/or prove theorems about. Another reason means are nice is related to betting and the interpretation of a mean as an expected value; the theorem justifying this interpretation is the law of large numbers.

Nevertheless in many situations the mean of a random variable is a very bad description of it (e.g. mean income is a terrible description of the income distribution and median would be much more appropriate).

Edit: On the other hand, here's one very undesirable property of means: they're not "covariant under increasing changes of coordinates," which on the other hand is true of medians. What I mean is the following: suppose you decide to compute the mean population of all cities in the US, but later decide this is a bad idea because there are some really big cities. If you suspect that city populations grow multiplicatively rather than additively (e.g. the presence of good thing X causes a city to be 1.2x bigger than it otherwise would, as opposed to 200 people bigger), you might decide that instead of looking at population you should look at log population. But the mean of log population is not the log of mean population!

On the other hand, because log is an increasing function, the median of log population is still the log of median population. So taking medians is in some sense insensitive to these sorts of decisions, which is nice.
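This is easy to check numerically. A quick Python sketch (the populations are made-up numbers for illustration, not real data): the median commutes with the increasing transform `log`, but the mean does not.

```python
import math
import statistics

# Hypothetical city populations (illustrative numbers only)
pops = [5000, 12000, 45000, 150000, 8000000]
log_pops = [math.log(p) for p in pops]

# The median of log population IS the log of the median population...
assert math.isclose(statistics.median(log_pops),
                    math.log(statistics.median(pops)))

# ...but the mean of log population is NOT the log of the mean population.
# (By Jensen's inequality, since log is concave, mean-of-log < log-of-mean.)
print(statistics.mean(log_pops))          # about 11.29
print(math.log(statistics.mean(pops)))    # about 14.31
```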

Comment author: pragmatist 26 January 2014 06:36:57AM *  3 points [-]

As a first step, I suggest Dennis Lindley's Understanding Uncertainty. It's written for the layperson, so there's not much in the way of mathematical detail, but it is very good for clarifying the basic concepts, and covers some surprisingly sophisticated topics.

ETA: Ah, I didn't notice that Benito had already recommended this book. Well, consider this a second opinion then.

Comment author: buybuydandavis 26 January 2014 03:57:23AM 3 points [-]

Read Edwin Jaynes.

The problem with most Probability and Statistics courses is the axiomatic approach. Purely formalism. Here are the rules - you can play by them if you want to.

Jaynes was such a revelation for me, because he starts with something you want, not arbitrary rules and conventions. He builds probability theory on basic desiredata of reason that make sense to you. He had reasons for my "whys?".

Also, standard statistics classes always seemed a bit perverse to me - logically backward. They always just felt wrong. Jaynes's approach replaced that tortured backward thinking with clear, straight lines going forward. You're always asking the same basic question: "What is the probability of A given that I know B?"

And he also had the best notation. Even if I'm not going to do any math, I'll often formulate a problem using his notation to clarify my thinking.

Comment author: Locaha 26 January 2014 01:22:14PM *  5 points [-]

desiredata

I think this is a most awesome mistype of desiderata.

Comment author: solipsist 25 January 2014 04:09:00PM *  14 points [-]

I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here's an explanation:


Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.

Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.

Now spin the ruler on the spike. It's easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity -- how hard you have to twist the ruler to make it spin, or to make it stop spinning.
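For a concrete check of the analogy (a short Python sketch; the positions are made-up numbers): the balance point of equal weights is the arithmetic mean, and the moment of inertia of unit masses about that point is n times the population variance.

```python
import statistics

# Hypothetical positions of identical unit weights glued onto the ruler
positions = [2.0, 3.5, 3.5, 7.0, 9.0]

# Balance point (center of mass of equal weights) = arithmetic mean
balance = sum(positions) / len(positions)
assert balance == statistics.mean(positions)  # 5.0

# Moment of inertia about the balance point = n * variance,
# so variance is the per-weight resistance to spinning
inertia = sum((x - balance) ** 2 for x in positions)
assert abs(inertia / len(positions) - statistics.pvariance(positions)) < 1e-12
```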


"I'd like to understand this more deeply" is a thought that occurs to people at many levels of study, so this explanation could be too high or low. Where did my comment hit?

Comment author: IlyaShpitser 25 January 2014 05:02:02PM 5 points [-]

Moments of mass in physics is a good intro to moments in stats for people who like to visualize or "feel out" concepts concretely. Good post!

Comment author: solipsist 25 January 2014 04:55:51PM *  3 points [-]

A different level explanation, which may or may not be helpful:

Read up on affine space, convex combinations, and maybe this article about torsors.

If you are frustrated with hand waving in calculus, read a Real Analysis textbook. The magic words which explain how the heck you can have a probability distributions over real numbers is measure theory.

Comment author: Benito 25 January 2014 10:47:15PM *  4 points [-]

I asked a similar question a while back, and I was directed to this book, which I found to be incredibly useful. It is written at an elementary level, with minimal maths, yet is still technical, and brings across many central ideas in very clear, Bayesian terms. It is also on Lukeprog's CSA book recommendations for 'Become Smart Quickly'.

Note: this is the only probability textbook I have read. I've glanced through the openings of others, and they've tended to be above my level. I am sixteen.

Comment author: Viliam_Bur 25 January 2014 05:03:05PM *  9 points [-]

When you have thousands of different pieces of data, to grasp it mentally, you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.

What precisely does "somehow the same as the original set" mean? Well, it depends on what the numbers from the original set do; on how exactly they join together.

For example, if we speak about weights, the natural way of "joining together" is to add their weight. Thus the new set of the identical weights is equivalent to the original set if the sum of the new set is the same as sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of the piece in the new set is the sum of the pieces in the original set divided by their number; the "sum/n".

Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that "5 is the mean of the original set" means "the original set behaves (with regards to the natural thing to do, i.e. addition) as if it was composed of the 5's".

There are situations where some other operation is the natural thing to do. Sometimes it is multiplication. For example, if you multiply some original value by 2, and then you multiply it by 8, the result of these two operations is the same as if you had multiplied it twice by 4. In this case it's called the geometric mean, and it's the n-th root of the product.

It can be even more complicated, so it doesn't necessarily have a name, but the idea is always replacing the original set with a set of identical values such that in the original context they would behave the same way. For instance, the example above could be described as 100% growth (multiplication by 2) and 700% growth (multiplication by 8), for which the equivalent uniform growth is 300% (multiplication by 4); in that case the formula would be "root of (product of (Xi + 100%)) - 100%".

If there is no meaningful operation in the original set, if the set can be ordered, we can pick the median. If the set can't even be ordered, if there are discrete values, we can pick the most frequent value as the best approximation of the original set.
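The "equivalent identical value" idea can be checked directly (a small Python sketch using the numbers from the examples above):

```python
import math

# When the natural operation is addition, the arithmetic mean is the
# identical value that gives the same sum: 3 + 4 + 8 = 5 + 5 + 5
values = [3, 4, 8]
mean = sum(values) / len(values)
assert mean * len(values) == sum(values)  # 5.0 each

# When the natural operation is multiplication, the geometric mean is the
# identical factor that gives the same product: 2 * 8 = 4 * 4
factors = [2, 8]
product = math.prod(factors)
gmean = product ** (1 / len(factors))
assert gmean ** len(factors) == product  # 4.0 each
```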

Comment author: Manfred 25 January 2014 07:13:57PM 2 points [-]
Comment author: Locaha 25 January 2014 08:36:09PM 3 points [-]

Actually, I started reading that one and found it too hard.

Comment author: [deleted] 25 January 2014 05:53:08PM *  1 point [-]

I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample.

  • The mean of the sum of two random variables is the sum of the means (ditto with the variances); there's no similarly simple formula for the median. (See ChristianKl's comment for why you'd care about the sum.)

  • The mean is the value of x that minimizes SUM_i (x - x_i)^2; if you have to approximate all elements in your sample with the same value and the cost of an imperfect approximation is the square distance from the exact value (and any smooth function looks like the square when you're sufficiently close to the minimum), then you should use the mean.

  • The mean and variance are jointly sufficient statistics for the normal distribution.

  • Possibly something else which doesn't come to my mind at the moment.
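The second bullet can be verified numerically (a quick Python sketch with a made-up sample): a grid search for the value minimizing the sum of squared distances lands on the mean, while minimizing the sum of absolute distances lands on the median.

```python
import statistics

sample = [1.0, 2.0, 2.0, 3.0, 10.0]

def sq_cost(x):
    """Total squared distance from x to every sample point."""
    return sum((x - xi) ** 2 for xi in sample)

def abs_cost(x):
    """Total absolute distance from x to every sample point."""
    return sum(abs(x - xi) for xi in sample)

# Search a fine grid over [0, 11] for the minimizers
grid = [i / 100 for i in range(0, 1101)]
best_sq = min(grid, key=sq_cost)
best_abs = min(grid, key=abs_cost)

assert abs(best_sq - statistics.mean(sample)) < 0.011    # mean = 3.6
assert abs(best_abs - statistics.median(sample)) < 0.011  # median = 2.0
```

Note how the single outlier (10.0) drags the squared-error minimizer (the mean) toward it, while the absolute-error minimizer (the median) stays put.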

Comment author: gothgirl420666 25 January 2014 06:44:11PM 2 points [-]

I'm in art school and I have a big problem with precision and lack of "sloppiness" in my work. I'm sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit - maybe the size of some area in the cerebellum or something, I don't know. Am I right in thinking this?

Comment author: maia 25 January 2014 08:28:32PM 8 points [-]

Seems to me that that's likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it - perhaps some different kind of exercises - and do your best at those, before drawing any conclusions about your fundamental limits... because those conclusions themselves will limit you even more.

Comment author: Stabilizer 25 January 2014 07:10:14PM 4 points [-]

Just to be clear: you're worried that you aren't sloppy enough?

If so, for us non-artists, can you explain how 'sloppiness' can be a good thing?

Comment author: gothgirl420666 25 January 2014 11:06:53PM *  2 points [-]

Sorry, I communicated poorly. I meant that I want to introduce a lack of sloppiness into my work. Being insufficiently sloppy is not my problem; I'm too sloppy.

Comment author: Stabilizer 26 January 2014 03:51:03AM 5 points [-]

You should edit the original question. People seem to be answering the wrong question below.

Comment author: fubarobfusco 25 January 2014 09:33:28PM 2 points [-]

I have never biked twenty miles in one go.
It could be that this reflects some inherent limit.
Or it could be that I just haven't tried yet.

If I believe that it is an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and succeed, then I will update.

If I believe that it is not an inherent limit, how might I test my belief?
Only by trying anyway.
If I try and fail, then I will update.

In either case, the test of my ability
Is not in contemplating what mechanisms of self might limit me,
But in trying anyway, when I have the opportunity to do so,
And seeing what happens.

Comment author: [deleted] 26 January 2014 01:21:12AM 1 point [-]

Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.

Comment author: RichardKennaway 26 January 2014 09:11:02AM 1 point [-]

Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.

That might mean arranging with a friend to pick you up in their car if you have to bail out, or picking a circular route that never takes you that far from home, or handling the contingency in any other way. Going "but suppose I fail!" and not trying is an even worse piece of wormtonguing than the one fubarobfusco is addressing.

Comment author: EStokes 25 January 2014 09:08:30PM *  1 point [-]

Some guesses on my part-

  • Maybe your tendency towards precision is at the wrong times? If practicing, for example, it might be counterproductive since you probably want quantity instead of quality, or maybe you're trying to get everything down precisely too early on and it's making your work stiff.

  • Manfred's point is good- "metaphor that captures the scene without the need for detail."... If you render background details overmuch, they can distract the viewer from the focal point of the work. Maybe put some effort into looking at how the "metaphors" of different things work? For example, how more skilled artists draw/paint grass in the distance, or whatnot.

  • I think it's a common thing to sort of notice something wrong in an area, and to spend a lot of time on that area in hopes of fixing it, which would make it less sloppy... Maybe sketch that thing a lot for practice.

  • If you're drawing from life, it's possible that lack of sloppiness comes from not making sense of the gestalt, so to speak. I'd think that understanding the form of the subject and how the lighting on it works means you can simplify things away. I don't do much (read: any) figure drawings from life, but I'd imagine that understanding the figure and what's important and what isn't would be helpful. Maybe doing some master copies of skilled, more abstract drawings of the figure would help. Maybe look up a comic artist or cartoonist you like and look at what they do.

ETA:

To address your actual question, I'd say I don't know any particular evidence for why that should be so.

Rationality-technique-wise: It's good that you asked people, since that would bring you evidence of the idea being true or false. In the future it might be even more useful to suppress hypothesizing until some more investigating has gone on- "biological limit" is the sort of thing that feels true if you don't understand how to do something or how to understand how to do something. I think there's a post about this, or something; let me see if I can find it... ETA2: The exact anecdote I was thinking of doesn't apply as much as I thought it did, but maybe the post "Fake Explanations" or something applies?

Comment author: ChristianKl 25 January 2014 09:05:23PM *  1 point [-]

I would guess that you try to exert too much control. The kind of "sloppiness" that's useful for creativity is about letting things go.

Meditation might help.

As you are female, dancing a partner dance where you have to follow and can't control everything might be useful. Letting go of the attempt to control is lesson 101 for a lot of women who pick up Salsa dancing.

Comment author: [deleted] 25 January 2014 10:04:56PM 5 points [-]

As you are female,

He isn't.

Comment author: buybuydandavis 26 January 2014 04:12:07AM *  1 point [-]

As a lead, you learn that you aren't really controlling much of anything in Salsa either. You're setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don't expect.

But I'm guessing that you've hit on the right direction of interpretation of sloppiness as letting go of control. I'd extend that to too much self-conscious control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest of your capabilities for the same intention.

When the self monitoring person in your head tries to do too much, he gets in the way of the rest of you doing it right.

Comment author: gothgirl420666 25 January 2014 10:58:55PM 2 points [-]

I would guess that you try to exert too much control. The kind of "sloppiness" that's useful for creativity is about letting things go.

I'm already good at this part of creativity, but precision is also pretty important. Right now I'm working on a project where I have to trace in pen (can't erase, flaws are obvious) photographs that I took. Letting things go won't help here.

Meditation might help.

I already do meditate.

As you are female

I'm not, sorry.

Comment author: palladias 26 January 2014 06:20:13AM 2 points [-]

Swing classes are pretty good about letting either gender learn to follow, if you'd like.

Comment author: Metus 25 January 2014 05:06:55PM *  1 point [-]

Repost as there were no answers:

Has anyone here done Foundation Training? What is the evidence supporting it?