If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Open thread, January 25 - February 1

Every now and then I like to review my old writings so I can cringe at all the wrong things I wrote, and say "oops" for each of them. Here we go...

There was once a time when the average human couldn't expect to live much past age thirty. (Jul 2012)

That's probably wrong. IIRC, previous eras' low life expectancy was mostly due to high child mortality.

We have not yet mentioned two small but significant developments leading us to agree with Schmidhuber (2012) that "progress toward self-improving AIs is already substantially beyond what many futurists and philosophers are aware of." These two developments are Marcus Hutter's universal and provably optimal AIXI agent model... and Jurgen Schmidhuber's universal self-improving Godel machine models... (May 2012)

This sentence is defensible for certain definitions of "significant," but I think it was a mistake to include this sentence (and the following quotes from Hutter and Schmidhuber) in the paper. AIXI and Godel machines probably aren't particularly important pieces of progress to AGI worth calling out like that. I added those paragraphs to section 2.4 not long before the submission deadline, and re... (read more)

On September 26, 1983, Soviet officer Stanislav Petrov saved the world. (Nov 2011)

Eh, not really.

The Wiki link in the linked LW post seems to be closer to "Stanislav Petrov saved the world" than "not really":

Petrov judged the report to be a false alarm, and his decision is credited with having prevented an erroneous retaliatory nuclear attack

...

His colleagues were all professional soldiers with purely military training and, following instructions, would have reported a missile strike if they had been on his shift.

...

Petrov, as an individual, was not in a position where he could single-handedly have launched any of the Soviet missile arsenal. ... But Petrov's role was crucial in providing information to make that decision. According to Bruce Blair, a Cold War nuclear strategies expert and nuclear disarmament advocate, formerly with the Center for Defense Information, "The top leadership, given only a couple of minutes to decide, told that an attack had been launched, would make a decision to retaliate."

A closely related article says:

Petrov's responsibilities included observing the satellite early warning network and notifying his sup

... (read more)
[-]gjm170

previous eras' low life expectancy was mostly due to high child mortality.

I have long thought that the very idea of "life expectancy at birth" is a harmful one, because it encourages exactly that sort of confusion. It lumps together two things (child mortality and life expectancy once out of infancy) with sufficiently different causes and sufficiently different effects that they really ought to be kept separate.

3TylerJay
Does anybody have a source that separates the two out? For example, to what age can the average X year old today expect to live? Or even at a past time?
7Lumifer
Sure, there is the concept of life expectancy at a specific age. For example, there is the "default" life expectancy at birth, there is the life expectancy for a 20-year-old, life expectancy for a 60-year-old, etc. Just google it up.
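For concreteness, here is a minimal Python sketch of how "life expectancy at age x" falls out of a mortality table. The table below is made of entirely made-up rates for illustration, not real actuarial data, and it uses the simple approximation that each completed year counts as one year.

```python
# Minimal sketch: remaining life expectancy at a given age from a life table.
# qx[a] = P(die between age a and a+1 | alive at age a). Rates below are invented.

def remaining_life_expectancy(qx, age):
    """Expected additional (whole) years of life at `age`."""
    survival = 1.0          # P(still alive), conditional on being alive at `age`
    expected_years = 0.0
    for a in range(age, len(qx)):
        survival *= (1.0 - qx[a])   # survive the year from a to a+1
        expected_years += survival  # add P(alive after this year) = expected completed years
    return expected_years

# Toy table: high infant mortality, then low mortality rising with age (ends at 110).
qx = [0.10] + [0.001] * 19 + [0.005] * 40 + [0.03] * 30 + [0.2] * 20

print(remaining_life_expectancy(qx, 0))        # life expectancy at birth
print(remaining_life_expectancy(qx, 20))       # remaining years for a 20-year-old
print(20 + remaining_life_expectancy(qx, 20))  # expected age at death for a 20-year-old
```

With a table like this, the last number will be noticeably higher than the first, which is exactly the "high child mortality drags down life expectancy at birth" point.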
4fubarobfusco
It's kind of important to the life insurance business ....
2TylerJay
Thanks. Interestingly, my numbers never matched up between any two sources. The US SSA's actuarial tables give me a number that's 5 years different from their own "additional life expectancy" calculator.
3ChrisHallquist
Huh. I followed the link to the correction of the Petrov story, and found I'd already upvoted it. But if you'd asked me yesterday for examples of heroes, I'd have cited Petrov immediately. Shows how hard it is to unlearn false information once you've learned it.
2Gunnar_Zarncke
Smart move not only to review but also to post the results. It shows humility and at the same time prevents being called on it later. This is an approach I'd like to see more often. Maybe you should add it to http://lesswrong.com/lw/h7d/grad_student_advice_repository/ or some such.
0private_messaging
On the AIXI and such... you see, it's just hard to appreciate how much training it takes to properly understand something like that. Very intelligent people, with very high mental endurance, train for decades to be able to mentally manipulate the relevant concepts at their base level. Now, let's say someone only spent a small fraction of that time, either because they pursued the wrong topic through the most critical years or because they have low mental endurance. Unless they're impossibly intelligent, they have no chance of forming even a merely good understanding.

I've been systematically downvoted for the past 16 days. Every day or two, I'd lose about 10 karma. So far, I've lost a total of about 160 karma.

It's not just somebody going through my comments and downvoting the ones they disagree with. Even a comment where I said "thanks" when somebody pointed out a formatting error in my comments is now at -1.

I'm not sure what can/should be done about this, but I thought I should post it here. And if the person who did this is here and there is a reason, I would appreciate it if you would say it here.

A quick look at the first page of your recent comments shows most of your recent activity to have been in the recent "Is Less Wrong too scary to marginalized groups?" firestorm.

One of the most recent users to complain about mass downvoting also cited participation in flame-bait topics (specifically gender).

3gjm
I would prefer to see a little less victim-blaming here. (I'm not sure whether you intended it as such -- but that phrase "participation in flame-bait topics" sounds like it.)

That was not my intention. (If it's any consolation, I participated in the same firestorm.)

1drethelin
How is this victim blaming? As I interpret it the claim is that the person was probably NOT the victim of systematic downvoting but instead made a lot of comments that are counter to what people like to hear, creating the illusion of same.
8gjm
Hard to explain getting downvoted for saying "thanks" as being about saying things "counter to what people like to hear". Which is why I didn't interpret CAE_Jones as suggesting that that's what was going on.
0CAE_Jones
For what it's worth, I agree with gjm that "flame-bait" was a poor choice of words on my part, and I understand how it could have been taken as victim-blaming in spite of my intentions.
9pragmatist
Gah... This is becoming way too common, and it seems like there's pretty good evidence (further supported in this instance) regarding the responsible party. I wish someone with the power to do so would do something about it.
7ChrisHallquist
For context, link to past discussion of mass-downvoting.
6Vulture
I got a seemingly one-time hit of this about a week ago. For what it's worth I had just been posting comments on the subject of rape, but a whole bunch of my unrelated comments got it too. (Since then it's been having an obnoxious deterrent effect on my commenting, because I feel so precariously close to just accumulating negative karma every time I post, leaving readers with the impression that my ideas have all been identified as worthless by someone probably cleverer than themselves. I'm now consciously trying to avoid thinking like this)
4VAuroch
I have experienced this also, though roughly a month ago, after an extended debate on trans* issues specifically. I responded by messaging the person I had argued with, and politely asking that, if it was them who had been downvoting me, they please stop going through my comment history. I got no response, but the stream of downvotes seemed to tail off shortly thereafter. EDIT: As a side note, the person with whom I had been debating/arguing was the same one that showed up in the thread ChrisHallquist linked. It looks like it's a pattern of behavior for him.
-2Gunnar_Zarncke
I have blindly upvoted your 10 most recent comments. This is meant as consolation, but it's likely a one-time action.
-14Dias
-10Dias

Robin Hanson on Facebook:

Academic futurism has low status. This causes people interested in futurism to ignore those academics and instead listen to people who talk about futurism after gaining high status via focusing on other topics. As a result, the people who are listened to on the future tend to be amateurs, not specialists. And this is why "we" know a lot less about the future than we could.

Consider the case of Willard Wells and his Springer-published book Apocalypse When?: Calculating How Long the Human Race Will Survive (2009). From a UCSD news story about a talk Wells gave about the book:

Larry Carter, a UCSD emeritus professor of computer science, didn’t mince words. The first time he heard about Wells’s theories, he thought, “Oh my God, is this guy a crackpot?”

But persuaded by Wells's credentials, which include a PhD from Caltech in math and theoretical physics, a career that led him to L-3 Photonics and the Caltech/Jet Propulsion Laboratory, and an invention under his belt, Carter gave the ideas a chance. And was intrigued.

For a taste of the book, here is Wells' description of one specific risk:

When advanced robots arrive... the serious threat [will be] h

... (read more)
7RobinHanson
Yes; by judging someone on their credentials in other fields, you can't tell whether they are just making stuff up on this subject or have studied it for 15 years.
6VincentYu
Wells's book: Apocalypse when. I took a quick skim through the book. Your focused criticism of Wells's book is somewhat unfair. The majority of the book (ch. 1–4) is about a survival analysis of doomsday risks. The scenario you quoted is in the last chapter (ch. 5), which looks like an afterthought to the main intent of the book (i.e., providing the survival analysis), and is prepended by the following disclaimer: I think it is fair to criticize the crackpot scenario that he gave as an example, but your criticism seems to suggest that his entire book is of the same crackpot nature, which it is not. It is unfortunate that PR articles and public attention focuses on the insubstantial parts of the book, but I am sure you know what that is like as the same occurs frequently to MIRI/SIAI's ideas. Orthogonal notes on the book's content: Wells seems unaware of Bostrom's work on observation selection effects, and it appears that he implicitly uses SSA. (I have not carefully read enough of his book to form an opinion on his analysis, nor do I currently know enough about survival analysis to know whether what he does is standard.)
2lukeprog
Ah, you're right that I should have quoted the "This set serves as a foil" paragraph as well. I found chs. 1-4 pretty unconvincing, too, though I'm still glad that analysis exists.
3James_Miller
Yes, I'm an academic, and I get a similar reaction when I tell people I study the Singularity as when I say I've signed up for cryonics. Thankfully, I have tenure.
2Halfwitz
What happens when you say, "I study the economic implications of advanced artificial intelligence," to people?
0James_Miller
I don't phrase it this way.
0Randy_M
Do you actually say you "study the singularity" or give a more in-depth explanation? I ask because the word study is usually used only in reference to things that do or have existed, rather than to speculative future events.
0James_Miller
I go into more depth, especially when I (unsuccessfully) came up for promotion for full professor.
2Kawoomba
It might be a worthwhile endeavor to modify our wiki such that it serves not only as a mostly local reference on current terms and jargon, but also as an independent guide to the various arguments for and against various concepts, where applicable. It could create a lot of credibility and exposure to establish a sort of neutral reference guide / argument map / history of the iterations an idea has gone through, in a neutral voice. Ideally, neutrality regarding PoV works in favor of those with the balance of arguments in their favor. This need not be entirely new material, but instead simply a few mandatory / recommended headers in each wiki entry, pertaining to history, counterarguments etc. It could be worth lifting the wiki from relative obscurity, with a new landing page, potentially marketed as a reference guide for journalists researching current topics. Kruel's LW interview with Shane Legg got linked to in a NYTimes blog; why not a suitable LW wiki article, too?
1ChristianKl
I don't think that's the case. Most people who are listened to on the future don't tend to speak to an audience primarily consisting of futurists. There are think tanks that employ people to think about the future, and those think tanks generally tend to be quite good at influencing the public debate. I also don't think that academia has any special claim to being specialist about the future. When I think about specialists on futurism, names like Stewart Brand or Bruce Sterling come to mind.
1IlyaShpitser
This is a very important and general point. While it is important to communicate ideas to a general audience, excessive communication to general audiences at the expense of communication to peers should generally be "bad news" when it comes to evaluating experts. Folks like Witten mostly just get work done; they don't write popular science books.
0ChristianKl
Witten doesn't ring a bell with me. Googling the name suggests either Edward Witten or Tarynn Madysyn Witten. Do you mean either of them or someone else?
6IlyaShpitser
I mean Edward Witten, one of the most prominent physicists alive. The fact that his name does not ring a bell is precisely my point. The names that do ring a bell are the names of folks who are "good at the media," not necessarily folks who are the best in their field.
0ChristianKl
Okay, given that the subject is theoretical physics and I'm not much into that field, I understand why I have no recognition. Looking at his Wikipedia page, I see he made the Time 100, so it still might be worth knowing the name.
3Ander
Witten is one of the greatest physicists alive, if not the greatest. He is the one who unified the various string theories into M-theory. He is also the only physicist to receive a Fields Medal.
[-]gwern240

Some names familiar to LWers seem to have just made their fortunes (again, in some cases); http://recode.net/2014/01/26/exclusive-google-to-buy-artificial-intelligence-startup-deepmind-for-400m/ (via HN)

Google is shelling out $400 million to buy a secretive artificial intelligence company called DeepMind....Based in London, DeepMind was founded by games prodigy and neuroscientist Demis Hassabis, Skype & Kazaa developer Jaan Tallin and researcher Shane Legg.

I liked Legg's blog & papers and was sad when he basically stopped in the interests of working on his company, but one can hardly argue with the results.

EDIT: bigger discussion at http://lesswrong.com/r/discussion/lw/jks/google_may_be_trying_to_take_over_the_world/#comments - new aspects: $500m, not $400m; DeepMind proposes an ethics board

I'm going to do the unthinkable: start memorizing mathematical results instead of deriving them.

Okay, unthinkable is hyperbole. But I've noticed a tendency within myself to regard rote memorization of things to be unbecoming of a student of mathematics and physics. An example: I was recently going through a set of practice problems for a university entrance exam, and calculators were forbidden. One of the questions required a lot of trig, and half the time I spent solving the problem was just me trying to remember or re-derive simple things like the arcsin of 0.5 and so on. I knew how to do it, but since I only have a limited amount of working memory, actually doing it was very inefficient because it led to a lot of backtracking and fumbling. In the same sense, I know how to derive all of my multiplication tables, but doing it every time I need to multiply two numbers together is obviously wrong. I don't know how widespread this is, but at least in my school, memorization was something that was left to the lower-status, less able people who couldn't grasp why certain results were true. I had gone along with this idea without thinking about it critically.

So these are the things I'm ... (read more)

[-]Shmi180

In my experience memorization often comes for free when you strive for fluency through repetition. You end up remembering the quadratic formula after solving a few hundred quadratic equations. Same with the trig identities. I probably still remember all the most common identities years out of school, owing to the thousands (no exaggeration) of trig problems I had to solve in high school and uni. And can derive the rest in under a minute.

Memorization through solving problems gives you much more than anki decks, however: you end up remembering the roads, not just the signposts, so to speak, which is important for solving test problems quickly.

You are right that "the reduction in mental effort required on basic operations will rapidly compound to allow for much greater fluency with harder problems", I am not sure that anki is the best way to achieve this reduction, though it is certainly worth a try.

4ChristianKl
In general, the core principle of spaced repetition is that you don't put something into the system that you don't already understand. When trying to memorize mathematical results, make sure that you only add cards when you really have a mental understanding. Using Anki to avoid forgetting basic operations is great. If, however, you add a bunch of information that's complex, you will forget it and waste a lot of time.
5whales
That's true if you're just using spaced repetition to memorize, although I'd add that it's still often helpful to overlearn definitions and simple results just past the boundaries of your understanding, along the lines of Prof. Ravi Vakil's advice for potential students: The second point I'd make is that the spacing effect (distributed practice) works for complex learning goals as well, although it will help if your practice consists of more than rote recall.
1ChristianKl
If you learn definitions it's important to sit down and actually understand the definition. If you write a card before you understand it, that will lead to problems.
2bramflakes
Yeah, I'm wary of that fact and I've learned the downsides of it through experience :)
2whales
Nice, and good luck! I'm glad to see that my post resonated with someone. For rhetorical purposes, I didn't temper my recommendations as much as I could have -- I still think building mental models through deliberate practice in solving difficult problems is at the core of physics education. I treat even "signpost" flashcards as opportunities to rehearse a web of connections rather than as the quiz "what's on the other side of this card?" If an angle-addition formula came up, I'd want to recall the easy derivation in terms of complex exponentials and visualize some specific cases on the unit circle, at least at first. I also use cards like that in addition to cards which are themselves mini-problems.
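For reference, the derivation via complex exponentials alluded to above is only a few lines (a sketch):

```latex
\begin{align*}
e^{i(a+b)} &= e^{ia}\,e^{ib} \\
           &= (\cos a + i\sin a)(\cos b + i\sin b) \\
           &= (\cos a\cos b - \sin a\sin b) + i(\sin a\cos b + \cos a\sin b).
\end{align*}
```

Comparing with $e^{i(a+b)} = \cos(a+b) + i\sin(a+b)$ and matching real and imaginary parts gives $\cos(a+b) = \cos a\cos b - \sin a\sin b$ and $\sin(a+b) = \sin a\cos b + \cos a\sin b$.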

In this article, Eliezer says:

Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

Recently, a similar phrase popped into my head, which I found quite useful:

Confusion gets curiosity. Does not get anger, disgust or fear. Never. Never ever never for ever.

That's all.

3[anonymous]
I don't know what you mean precisely by confusion, but I personally can't always control what my immediate, primal-level response to certain situations is. If I try to strictly avoid certain feelings, I usually end up convincing myself that I'm not feeling that way when actually I am. I'd rather notice what I'm feeling and then move on from there; it's probably easier to control your thinking that way. Just because you're angry doesn't mean you have to act angry.
1Stabilizer
That's basically what I meant. The move is to notice the anger, fear or disgust and then realize that this emotion isn't useful and can be actively detrimental. Then consciously try to switch to curiosity. Of course, I couldn't condense the full messiness of reality into a pithy saying.
0[anonymous]
"Make yourself feel curiosity" is not very concretely actionable in the short term. If you want to coin near-mode actionable advice, instead of a far-mode affirmation of positive emotions, you might say something like, "The proper responses to feelings of confusion are orienting and exploring behaviors". Those behaviors should be unpacked to more specific things like looking at your surroundings, asking questions of nearby people, searching your memory for situation-relevant information, and planning an experiment or other (navigable) (causal) path to sources of information. Those levels should be fleshed out and made more concrete too. Now that I've given some helpful advice, I think I've earned an opportunity to express some cynicism: cheering for curiosity and exploration over anger, disgust, and fear shows a stereotypical value alignment of affluent, socially tolerant people in safe environments. The advice you give will not serve you will in adversarial games like chess. It will not serve you well in combat or social competition. It is in many situations harmful advice. Separate and unrelated, I would not like to see this template for inarticulately expressing advice continued. Mostly I say this for the same reasons that we don't make use of reaction .gifs and image macros on lesswrong. There is also a small concern that variants of familiar phrases are harder to evaluate critically, much as the mnemonic device of rhyme apparently makes some specific phrasings of claims more credible than others phrasings of those same claims.

PSA: You can download from scribd without paying, you just need to upload a file first (apparently any file -- it can be a garbage pdf or even a pdf that's already on scribd). They say this at the very bottom of their pricing page, but I didn't notice until just now.

Hello, we are organizing monthly rationality meetups in Vienna - we have previously used the account of one of our members (ratcourse) but would like to switch to this account (rationalityvienna). Please upvote this account for creating rationality vienna meetups.

Reason #k Why I <3 Pomodoros:

They really help me get over akrasia. I beemind how many pomodoros I do per week, so I do tasks I would otherwise procrastinate on if I can do 20 minutes of them (yes, I do short pomodoros) and get to enter a data point at the end. Often I find that the task is much shorter/less awful than it felt in the abstract.

Example: I just moved today, and didn't have that much to unpack, but decided I'd do it tomorrow, because I felt tired and it would presumably be long and unpleasant. But then I realized I could get a pomodoro out of it (plus permission from myself to stop after 20 min and go to bed). Turns out it took 11 minutes and now I'm all set up!

3Qiaochu_Yuan
I do this all the time and it's great!

Even if you know that signaling is stupid, that doesn't let you escape the cost of not signaling.

It's a longstanding trope that Eliezer gets a lot of flak for having no formal education. Formal education is not the only way to gain knowledge, but it is a way of signaling knowledge, and it's not very easy to fake (not so easy to fake that it falls apart as a credential on its own). Has anyone toyed around with the idea of sending him off to get a math degree somewhere? He might learn something, and if not, it's a breezy recap of what he already knows. He comes out the other side without the eternal "has no formal education" tagline, and with a whole new slew of acquaintances.

Now, I understand that there may be good reasons not to, and I'd very much appreciate someone pointing me to any previous discussion in which this has been ruled out. Otherwise, how feasible does it sound to crowdfund a "Here's your tuition and an extra sum of money to cover the opportunity cost of your time, I don't care how unfair it is that people won't take you seriously without credentials, go study something useful, make friends with your professors, and get out with the minimum number of credits possible" scholarship?

Has anyone toyed around with the idea of sending him off to get a math degree somewhere?

I think the bigger issue w/ people not taking EY seriously is he does not communicate (e.g. publish peer reviewed papers). Facebook stream of consciousness does not count. Conditional on great papers, credentials don't mean that much (otherwise people would never move up the academic status chain).

Yes it is too bad that writing things down clearly takes a long time.

Somehow I doubt I will ever persuade Eliezer to write in a style fit for a journal, but even still, I'll briefly mention that Eliezer is currently meeting with a "mathematical exposition aimed at math researchers" tutor. I don't know yet what the effects will be, but it seemed (to Eliezer and me) a worthwhile experiment.

4Paul Crowley
Presumably if MIRI were awash with funding you'd pay experts to make papers out of Eliezer's work, freeing Eliezer up for other things?

That's basically what another of our ongoing experiments is.

5iconreforged
True. It seems like the great-papers avenue is being pursued full-steam these days with MIRI, but I wonder if they're going to run out of low-hanging fruit to publish, or if mainstream academia is going to drag their heels replying to them.

I don't think you understand signaling well.

Eliezer managed signaling well enough to get a billionaire to fund him on his project. A billionaire who systematically funds people who drop out of college, in projects like his 20 Under 20 program.

Trying to go the traditional route wouldn't fit into the highly effective image that he already signals.

Put another way, the purpose of signaling isn't so nobody will give you crap. It's so somebody will help you accomplish your goals.

People will give you crap, especially if they can get paid to do so. See gossip journalists, for instance. They are not paid to give boring and unsuccessful people crap; they are paid to give interesting and successful people crap.

2David_Gerard
Your last para would imply that not getting crap from gossip journalists means you are not interesting or successful. Eliezer/MIRI gets almost no press. Are you sure that's what you meant?
6fubarobfusco
Eliezer gets a lot more press than I do, which is just fine with me.
1iconreforged
Well, yes, there is going to be some inevitable crap, but the purpose of signalling is to impress a much larger pool of people. So it might not be much help with gossip journalists, but it might help with the marginal professional ethicist, mathematician, or public figure. In that area, you might get some additional "Anybody who can do that must be damn impressive." Does the additional damn-impressive outweigh the cost? I don't know; that's why I'm asking.
0A1987dM
The discussion about mean vs variance in this post may be relevant.
6James_Miller
Peter Thiel (the billionaire) has the proven ability to spot talent, which is why he is a billionaire. Eliezer has traits that Thiel values, and this is probably much more important than any signal Eliezer sent.
5iconreforged
Impressing Thiel is independent of a future degree or not, because he's already impressed. Where's the next billionaire going to come from, and will they coincidentally also be as contrarian as Thiel? Maybe MIRI doesn't need another billionaire, but I don't think they'd turn one away.
9ChristianKl
I think the deal that Eliezer has with Thiel is that Eliezer does MIRI full time. Switching focus to getting a degree might violate the deal. Given that Thiel has a lot of money, impressing Thiel more might also be very useful if they want more money from him. Do you really think that someone who isn't contrarian will put his money into MIRI? The present setup is quite okay: those who want people with academic credentials can give their money to FHI; those who want more contrarian people can give their money to MIRI. Whether or not Eliezer has a degree doesn't change that he's the kind of person who has a public OkCupid profile detailing his sexual habits and the fact that he's polyamorous. When Steve Jobs was alive and ran around in a sweater, he didn't cause people to disregard him because he wasn't wearing a suit. People respect the contrarian who's okay with not everyone liking him. The contrarian who tries to get everyone to like them, on the other hand, gets no respect.
0drethelin
On the other hand if he decides to get a degree and pulls it off in a year or something impressive like that it could just feed into the contrarian genius image.
0ChristianKl
Yes, but that would probably mean either paying someone else to do your homework, which means that you are vulnerable to attack, or making studying the sole focus for a year.
4buybuydandavis
Yes, the autodidact signal can be tremendously effective, particularly in tech/libertarian company.
4jimmy
In addition "getting flak" isn't necessarily a bad thing. It can be counter-signaling if you can get flak and stay standing. It can also polarize people and separate those who can evaluate the inside arguments to realize that you're good from those who can't and have to just write you off for having no formal education.
2iconreforged
Eddie has some math talent. He can invest some time, money, and effort C to get a degree, which allows other people to discern that he has a higher probability of having that math talent. This higher probability confers some benefit, in that other people will more readily take his advice in mathematical matters, or talk with him about his math.

The fun twist is that Eddie lives in a society with many other individuals with varying degrees of math talent, each of whom can expend C to get a degree and the associated benefits. People with almost no mathematical talent have a prohibitively high C, because even if they can pony up the time and money, they have to work very hard to fake their way through. But people with high math ability often choose to stand out by getting the degree, because their C is relatively lower, and a very high proportion of them get degrees. This creates a high association between degrees and mathematical ability, and makes it unlikely to see high mathematical ability in the absence of a degree.

That's the basic idea, plus degrees signal other things which may be completely unrelated to math, but are still nice. Even in the case where the degree has no causal effect on math ability, there are benefits to having one, in that the other math people can judge very quickly that they're interested in talking to you. Hopefully that demonstrates that I understand signalling. My question is about the costs and benefits of a particular signal.
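A compact way to state the story above is the textbook Spence signaling condition; the symbols below are just labels for the quantities in the comment (degree costs for high- and low-ability people, and the benefit of being treated as high-ability), not anything from the original post:

```latex
% C_H = cost of the degree for a high-ability person
% C_L = cost of the degree for a low-ability person, with C_H < C_L
% B   = benefit of being perceived as high-ability
% A separating equilibrium (only high-ability people get the degree,
% so the degree is informative) requires roughly:
C_H \;<\; B \;<\; C_L
```

That is, the degree is worth its cost to high-ability people but not to low-ability people, which is what keeps the signal honest.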
-3ChristianKl
It demonstrates that you don't. Humans make decisions via something called the availability heuristic. If you bring into the awareness of the person you are talking to that you are a mathematician who only has a bachelor's, no master's, no PhD, and no professorship, then you aren't bringing expertise into his mind. If you are, however, a self-taught person who managed to publish multiple papers, among them a paper titled "Complex Value Systems in Friendly AI" in an Artificial General Intelligence volume of Lecture Notes in Computer Science, and who has his own research institute, that's a better picture. Published papers are a lot more relevant to the relevant experts than a degree that verifies basic understanding. If a person really cares whether Eliezer has a math degree, he has already lost that person.
1iconreforged
I'm not certain that getting a degree now counts as the traditional route. Also, I don't think that an additional degree is particularly damaging to his image. People aren't going to lose interest in FAI if he sells out and gets a traditional degree. Or they are and I have no idea what kind of people are involved.
9jsteinhardt
4 years (or even 1 year if you are super hard-core) of time is a pretty non-trivial investment. I was 2 classes away from a second degree and declined to take them, because the ~100 hours of work it would have taken wasn't worth the additional letters after my name. I also just really don't know anyone relevant who thinks that a college degree or lack thereof particularly matters (although the knowledge and skills acquired in the course of pursuing said degree may matter a lot). Good people will judge you by what you've done to demonstrate skill, not based on a college diploma. I think IlyaShpitser's comment pretty much nails it.
0[anonymous]
I came to the same conclusion, and in general a lack of degree has not impacted me as I get employment based on demonstrated skill. The main limitation is that any formal Postgrad study is impossible without a degree and this was a regret for me, prior to getting access to the coursera type courses.
8A1987dM
If you buy into the “crunch time” narrative, that's a lot of opportunity cost.
2drethelin
This might have been a good call 10 years ago, but nowadays Eliezer is participating in regular face-to-face meetings with skilled mathematicians and scientists in the context of constructing and analyzing theorems and decision strategies. This means that for a large number of the people who are most important to convince, he gets to screen out all the "evidence" of not having a degree. And to a large extent, having the respect of a bunch of math PhDs is a more important qualifier of talent than having that PhD oneself. There's theoretically still the problem of selling Eliezer to the muggles, but I don't think that's anywhere near as important as getting serious thinkers on board.
0Viliam_Bur
Different target groups may use different signals. For example, for a scientist the citations may be more important than formal education. For an ordinary person with a university diploma who never published anything anywhere, formal education will probably remain the most important signal, because that's what they use. A smart sponsor may instead consider the ability to get things done. And the New Age fans will debate how much Eliezer fits the definitions of an "indigo child".

If the goal is to impress people for whom a university diploma is the most important signal (they are a majority of the population), the best way would be to find a university which gives the diploma for the minimum time and energy spent. Perhaps one where you just pay some money (hopefully not too much), take a few easy exams, and that's it; you don't have to spend time at the lessons. After this, no one can technically say that Eliezer has "no formal education". (And if they start discussing the quality of the university, then Eliezer can point to his citations.) The idea is to do this as easily as possible... assuming it's even worth doing.

There are also other things to consider, such as the fact that other people working with Eliezer do have formal education... so why exactly is it a problem if Eliezer doesn't? Does MIRI seem from the outside like a one-man show? Maybe that should be fixed.
0kalium
A diploma mill degree like you describe is not going to get any respect from the (large) population that went to a real university.
0[anonymous]
Would getting more citations partly nullify the lack of formal education?

A recent experience reminded me that basics are really important. On LW we talk a lot about advanced aspects of rationality.

If you would have to describe the basics, what would you say? What things are so obvious for you about rationality that they usually go without saying?

You can frequently make your life better by paying attention to what you're doing, looking for possible improvements, trying your ideas, and observing whether the improvements happen.

9Leonhart
There is no magic. I am not in a story. Words are detachable handles.
0Shmi
Brilliant.
5Qiaochu_Yuan
I run on hardware that was optimized by millions of years of evolution to do the sort of things my ancestors did tens of thousands of years ago, not the sort of things I do now.
2edanm
1. People can change (e.g. update on beliefs, self-improve).
2. How to choose your actions - think about your goals, think what steps achieve them in the best way, act on those steps.
3. There is such a thing as objective truth.

Amazing how the basic pillars of rationality are things other people so often don't agree with, even though they seem so dead obvious to me.
2hyporational
This is a fun exercise. The list could be a lot longer than I originally expected.
* belief is about evidence
* 0 and 1 are not probabilities
* Occam's razor
* strawman and steelman
* privileging the hypothesis
* tabooing
* instrumental-terminal distinction of values
* don't pull probabilities out of your posterior
* introspection is often wrong
* intuitions are often wrong
* general concept of heuristics and biases
* confirmation and disconfirmation bias
* halo effect
* knowing about biases doesn't unbias you
* denotations and connotations
* many more
4bramflakes
"not technically lying" is de facto lying
0hyporational
This might be useful for staying honest to yourself and perhaps your allies, but it's also useful to keep in mind that most people give different kinds of lies different degrees of moral weight.
2ChristianKl
Nice list, and even basic enough that I can put some of it into an Anki deck about teaching rationality (a long-term project of mine, but at the moment it doesn't have enough cards for release).
0hyporational
I'd like to hear about the experience if you're willing to share it. How basic are we talking about? This older discussion thread seems to ask a similar question and some answers are relevant to your question. If you think your question phrased in a more specific way would elicit different kinds of responses, it might deserve its own thread.
0ChristianKl
The experience wasn't about the domain of rationality but another subject and the relationships of concepts in that framework. I don't think it's useful for people without experience of the framework. As basic as you can get. What is the most basic thing you can say about rationality? If your reaction is "Duh, I don't know, nothing comes to mind," that's exactly why it might be worthwhile to investigate the issue. Recently there was a discussion about vocabulary for rationality, and someone made the point that things can be said either implicitly or explicitly. Implicitness and explicitness are pretty basic concepts.
[-]MarkL110

My meditation blog from a (somewhat) rationalist perspective is now past 40 posts:

http://meditationstuff.wordpress.com/

2moridinamael
Do you have any material for dealing with chronic pain? Or material that could conceivably be leveraged to apply to chronic pain management?
4MarkL
I'm coming at this from ten years of brain fog, unrefreshing sleep, "feeling sick all the time," etc. Mostly better now; I did a lot of stuff highly specific to my situation. The below mostly helped with enduring it. Remember, I'm just some random idiot on the internet, hope this is helpful, and in no particular order:
* http://www.amazon.com/Awareness-Through-Movement-Easy---Do/dp/0062503227/
* http://www.amazon.com/The-Lover-Within-Opening-Practice/dp/1581770170
* http://www.amazon.com/Male-Multiple-Orgasm-Step---Step/dp/1882899067/
* http://store.breathingcenter.com/books---in-english/buteyko-breathing-manual-download
* http://www.amazon.com/Acceptance-Commitment-Therapy-Second-Edition/dp/1609189620/
* http://www.amazon.com/Get-Your-Mind-Into-Life/dp/1572244259
* http://www.amazon.com/Exposure-Therapy-Anxiety-Principles-Practice/dp/146250969X/
* http://www.coherencetherapy.org/resources/manual.htm
* http://meditationstuff.wordpress.com/2013/07/22/additive-meditation/
* http://www.amazon.com/Compassion-Focused-Therapy-Distinctive-Features/dp/0415448077/
* http://www.butyoudontlooksick.com/wpress/articles/written-by-christine/the-spoon-theory/
* http://www.amazon.com/HIIT-Intensity-Interval-Training-Explained/dp/1477421599/
* Some of John Sarno's stuff

John_Maxwell_IV and I were recently wondering about whether it's a good idea to try to drink more water. At the moment my practice is "drink water ad libitum, and don't make too much of an effort to always have water at hand". But I could easily switch to "drink ad libitum, and always have a bottle of water at hand". Many people I know follow the second rule, and this definitely seems like something that's worth researching more because it literally affects every single day of your life. Here are the results of 3 minutes of googling:

http://www.sciencedirect.com/science/article/pii/S0002822399000486:

Dehydration of as little as 1% decrease in body weight results in impaired physiological and performance responses (4), (5) and (6), and is discussed in more detail below. It affects a wide range of cardiovascular and thermoregulatory responses (7), (8), (9), (10), (11), (12), (13) and (14).

The Nationwide Food Consumption Surveys indicate that a portion of the population may be chronically mildly dehydrated. Several factors may increase the likelihood of chronic, mild dehydration, including a poor thirst mechanism, dissatisfaction with the taste of water, common co

... (read more)
9ephion
Extended sedentary periods are bad for you, so if drinking extra water also makes you get up and walk to the bathroom, that's a win-win.
3hyporational
Except when you're trying to sleep.
2hyporational
While you're at it, you probably should also research how much water is too much, because on the other side of the spectrum lies hyponatremia, and having suboptimal electrolyte levels from overdosing on water could be harmful to your cognition too, although I think it's unlikely anyone here will develop measurable hyponatremia just from drinking too much water. Sweating a lot, for example, might change the situation. This doesn't look like a selective enough heuristic on its own.
2ChristianKl
As far as water consumption goes, I feel the difference between drinking one liter or four liters per day. I just feel much better with four liters. There were times two years ago when, unless I had drunk 4 liters by the time I entered my salsa dancing location in the evening, my muscle coordination was worse and the dancing didn't flow well. Does that mean that everyone has to drink 4 liters to be at their optimum? No, it doesn't. Get a feel for how different amounts of water consumption affect you. For me the effect was clear to see without even needing to do QS. Even if it's not as clear for you, do QS.
2John_Maxwell
Thanks for writing this up. Lots of things fall into this category :) In case it's not obvious: this probably means in the absence of food/fluid consumption. You can't go on losing 2.5 litres of water a day indefinitely.
0Randy_M
I assumed it wasn't net, but the amount of water excreted, regardless of consumption. Though those probably are not unrelated processes.
0A1987dM
Anecdotally, I feel less lazy when I drink lots of water, but for all I know it might well be placebo.
4Gurkenglas
We should do a placebo study on the effects of drinking water.
[-]Locaha100

Repeating my post from the last open thread, for better visibility:

I want to study probability and statistics in a deeper way than the Probability and Statistics course I had to take in the university. The problem is, my mathematical education isn't very good (on the level of Calculus 101). I'm not afraid of math, but so far all the books I could find are either about pure application, with barely any explanations, or they start with a lot of assumptions about my knowledge and introduce reams of unfamiliar notation.

I want a deeper understanding of the basic concepts. Like, mean is an indicator of the central tendency of a sample. Intuitively, it makes sense. But why this particular formula of sum/n? You can apply all kinds of mathematical stuff to the sample. And it's even worse with variance...

Any ideas how to proceed?

I too spent a few years with a similar desire to understand probability and statistics at a deeper level, but we might have been stuck on different things. Here's an explanation:


Suppose you have 37 numbers. Purchase a massless ruler and 37 identical weights. For each of your numbers, find the number on the ruler and glue a weight there. You now have a massless ruler with 37 weights glued onto it.

Now try to balance the ruler sideways on a spike sticking out of the ground. The mean of your numbers will be the point on the ruler where it balances.

Now spin the ruler on the spike. It's easy to speed up or slow down the spinning ruler if the weights are close together, but more force is required if the weights are far apart. The variance of your numbers is proportional to the amount the ruler resists changes to its angular velocity -- how hard you have to twist the ruler to make it spin, or to make it stop spinning.
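In symbols, with n unit weights glued at positions x_1, ..., x_n, the analogy reads as follows (a sketch; note that the moment of inertia matches the variance only up to a factor of n, and Var here is the population variance, dividing by n):

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i
\quad\text{(balance point, i.e. the mean)},
\qquad
I = \sum_{i=1}^{n} (x_i - \bar{x})^2 = n \cdot \mathrm{Var}(x)
\quad\text{(moment of inertia about the balance point)}.
```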


"I'd like to understand this more deeply" is a thought that occurs to people at many levels of study, so this explanation could be too high or low. Where did my comment hit?

8IlyaShpitser
Moments of mass in physics is a good intro to moments in stats for people who like to visualize or "feel out" concepts concretely. Good post!
6solipsist
A different level of explanation, which may or may not be helpful: read up on affine space, convex combinations, and maybe this article about torsors. If you are frustrated with hand waving in calculus, read a Real Analysis textbook. The magic words which explain how the heck you can have probability distributions over real numbers are "measure theory".
1Douglas_Knight
How does that answer the question? It's true that the center of gravity is a mean, but the moment of inertia is not a variance. It's one thing to say something is "proportional to a variance" to mean that the constant is 2 or pi, but when the constant is the number of points, I think it's missing the statistical point. But the bigger problem is that these are not statistical examples! Means and sums of squares occur many places, but why are they are a good choice for the central tendency and the tendency to be central? Are you suggesting that we think of a random variable as a physical rod? Why? Does trying to spin it have any probabilistic or statistical meaning?
4solipsist
I wasn't aiming to answer Locaha's question as much as figure out what question to answer. The range of math knowledge here is high, and I don't know where Locaha stands. I mean, That could be a basic question about the meaning of averages -- the sort of knowledge I internalized so deeply that I have trouble forming it into words. But maybe Locaha's asking a question like: That's a less philosophical question. So if Locaha says "means are like the centers of mass! I never understood that intuition until now!", I'll have a different follow up than if Locaha says "Yes, captain obvious, of course means are like centers of mass. I'm asking about XYZ".
0spxtr
Mean and variance are closely related to center of mass and moment of inertia. This is good intuition to have, and it's statistical. The only difference is that the first two are moments of a probability distribution, and the second two are moments of a mass distribution.
-3Douglas_Knight
Using the word "distribution" doesn't make it statistical.
0[anonymous]
Telegraph to a younger me: If you are frustrated with explanations in calculus, read a Real Analysis textbook. And the magic words that explain how the heck you can have probability distributions over real numbers is measure theory.

When you have thousands of different pieces of data, to grasp it mentally, you need to replace them with some simplification. For example, instead of a thousand different weights you could imagine a thousand identical weights, such that the new set is somehow the same as the original set; and then you would focus on the individual weight from the new set.

What precisely does "somehow the same as the original set" mean? Well, it depends on what did the numbers from the original set do; how exactly they join together.

For example, if we speak about weights, the natural way of "joining together" is to add their weight. Thus the new set of the identical weights is equivalent to the original set if the sum of the new set is the same as sum of the old set. The sum of the new set = number of pieces × weight of one piece. Therefore the weight of the piece in the new set is the sum of the pieces in the original set divided by their number; the "sum/n".

Specifically, if addition is the natural thing to do, the set 3, 4, 8 is equivalent to 5, 5, 5, because 3 + 4 + 8 = 5 + 5 + 5. Saying that "5 is the mean of the original set" means "the original set b... (read more)

6Qiaochu_Yuan
I don't think that's really what means are. That intuition might fit the median better. One reason means are nice is that they have really nice properties, e.g. they're linear under addition of random variables. That makes them particularly easy to compute with and/or prove theorems about. Another reason means are nice is related to betting and the interpretation of a mean as an expected value; the theorem justifying this interpretation is the law of large numbers. Nevertheless in many situations the mean of a random variable is a very bad description of it (e.g. mean income is a terrible description of the income distribution and median would be much more appropriate). Edit: On the other hand, here's one very undesirable property of means: they're not "covariant under increasing changes of coordinates," which on the other hand is true of medians. What I mean is the following: suppose you decide to compute the mean population of all cities in the US, but later decide this is a bad idea because there are some really big cities. If you suspect that city populations grow multiplicatively rather than additively (e.g. the presence of good thing X causes a city to be 1.2x bigger than it otherwise would, as opposed to 200 people bigger), you might decide that instead of looking at population you should look at log population. But the mean of log population is not the log of mean population! On the other hand, because log is an increasing function, the median of log population is still the log of median population. So taking medians is in some sense insensitive to these sorts of decisions, which is nice.
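A quick numerical illustration of that last point; this is a toy Python sketch with made-up city sizes, not real data:

```python
import math

populations = [5_000, 20_000, 80_000, 300_000, 8_000_000]  # invented city sizes

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    ys = sorted(xs)
    return ys[len(ys) // 2]  # middle element of an odd-length list

logs = [math.log(p) for p in populations]

# The mean does not commute with taking logs:
print(math.log(mean(populations)))  # ~14.33
print(mean(logs))                   # ~11.64

# The median does, because log is an increasing function:
print(math.log(median(populations)))  # log(80000) ~ 11.29
print(median(logs))                   # also ~11.29
```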
6Ben Pace
I asked a similar question a while back, and I was directed to this book, which I found to be incredibly useful. It is written at an elementary level, has minimal maths, yet is still technical, and brings across so many central ideas in very clear, Bayesian terms. It is also on Lukeprog's CSA book recommendations for 'Become Smart Quickly'. Note: this is the only probability textbook I have read. I've glanced through the openings of others, and they've tended to be above my level. I am sixteen.
5pragmatist
As a first step, I suggest Dennis Lindley's Understanding Uncertainty. It's written for the layperson, so there's not much in the way of mathematical detail, but it is very good for clarifying the basic concepts, and covers some surprisingly sophisticated topics. ETA: Ah, I didn't notice that Benito had already recommended this book. Well, consider this a second opinion then.
5buybuydandavis
Read Edwin Jaynes. The problem with most Probability and Statistics courses is the axiomatic approach. Purely formalism. Here are the rules - you can play by them if you want to. Jaynes was such a revelation for me, because he starts with something you want, not arbitrary rules and conventions. He builds probability theory on basic desiredata of reason that make sense. He had reasons for my "whys?". Also, standard statistics classes always seemed a bit perverse to me - logically backward. They always just felt wrong. Jaynes's approach replaced that tortured backward thinking with clear, straight lines going forward. You're always asking the same basic question: "What is the probability of A given that I know B?" And he also had the best notation. Even if I'm not going to do any math, I'll often formulate a problem using his notation to clarify my thinking.
8Locaha
I think this is a most awesome mistype of desiderata.
3Manfred
Here, have a book! http://www-biba.inrialpes.fr/Jaynes/prob.html
8Locaha
Actually, I started reading that one and found it too hard.
0edanm
Is this a good book to start with? I know it's the standard "Bayes" intro around here, but is it good for someone with, let's say, zero formal probability/statistics training?
5Kaj_Sotala
I was under the impression that the "this is definitely not a book for beginners" was the standard consensus here: I seem to recall seeing some heavily-upvoted comments saying that you should be approximately at the level of a math/stats graduate student before reading it. I couldn't find them with a quick search, but here's one comment that explicitly recommends another book over it.
0A1987dM
I think it's even better if you're not familiar with frequentist statistics because you won't have to unlearn it first, but I know many people here disagree.
0buybuydandavis
I suppose it's better to never have suffered through frequentist statistics first, but I think you appreciate the right way a lot more after you've had to suffer through the wrong way for a while.
0A1987dM
Well, Jaynes does point out how bad frequentism is as often as he can get away with. I guess the main thing you're missing out if you weren't previously familiar with it is knowing whether he's attacking a strawman.
0jsteinhardt
I agree, that's why I'm glad I learned Bayes first. Makes you appreciate the good stuff more.
2A1987dM
Did you misread the comment you're replying to, are you sarcastic, or am I missing something?
2A1987dM
* The mean of the sum of two random variables is the sum of the means (ditto with the variances); there's no similarly simple formula for the median. (See ChristianKl's comment for why you'd care about the sum.)
* The mean is the value of x that minimizes SUM_i (x - x_i)^2 (a quick derivation is sketched below); if you have to approximate all elements in your sample with the same value and the cost of an imperfect approximation is the square distance from the exact value (and any smooth function looks like the square when you're sufficiently close to the minimum), then you should use the mean.
* The mean and variance are jointly sufficient statistics for the normal distribution.
* Possibly something else which doesn't come to my mind at the moment.
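The derivation behind that second bullet is one line of calculus (a sketch):

```latex
f(x) = \sum_{i=1}^{n} (x - x_i)^2, \qquad
f'(x) = 2\sum_{i=1}^{n} (x - x_i) = 2\Bigl(nx - \sum_{i=1}^{n} x_i\Bigr) = 0
\;\Longrightarrow\;
x = \frac{1}{n}\sum_{i=1}^{n} x_i .
```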
0A1987dM
(Of course, all this means that if you're more likely to multiply things together than add them, the badness of an approximation depends on the ratio between it and the true value rather than the difference, and things are distributed log-normally, you should use the geometric mean instead. Or just take the log of everything.)
0Lumifer
This isn't at introductory level, but try exploring the ideas around Fisher information -- it basically ties together information theory and some important statistical concepts.
0othercriteria
Fisher Information is hugely important in that it lets you go from just treating a family of distributions as a collection of things to treating them as a space with its own meaningful geometry. The wikipedia page doesn't really convey it but this write-up by Roger Grosse does. This has been known for decades but the inferential distance to what folks like Amari and Barndorff-Nielsen write is vast.
0maia
Attending a CFAR workshop and session on Bayes (the 'advanced' session) helped me understand a lot of things in an intuitive way. Reading some online stuff to get intuitions about how Bayes' theorem and probability mass work was helpful too. I took an advanced stats course right after doing these things, and ended up learning all the math correctly, and it solidified my intuitions in a really nice way. (Other students didn't seem to have as good a time without those intuitions.) So that might be a good order to do things in. Some multidimensional calc might be helpful, but other than that, I think you don't need too much other math to support learning more probability and stats.
0Stefan_Schubert
Not really - but I do agree that it's absolutely vital to understand the basic concepts or terms. I think that's a major reason why people fail to learn - they just don't really grasp the most vital concepts. That's especially true of fields with lots of technical terms. If you don't understand the terms you'll struggle to follow even basic lines of reasoning. For this reason I sometimes provide students with a list of central terms, together with comprehensive explanations of what they mean, when I teach.
0ThrustVectoring
I don't have a good resource for you - I've had too much math education to pin down exactly where I picked up this kind of logic. I'd recommend set theory in general for getting an understanding of how math works and how to talk and read precisely in mathematics. For your specific question about the mean, it's the only number such that the sum of all (samples - mean) equals zero. Go ahead and play with the algebra to show it to yourself. What this means is that if you pick any value other than the mean, the deviations of the sample from that value won't balance out: you'll be left with more positive than negative, or more negative than positive.
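The algebra being alluded to is one line; writing $m$ for the candidate value and $\bar{x}$ for the mean of $x_1, \dots, x_n$:

$$\sum_{i=1}^{n}(x_i - m) \;=\; \sum_{i=1}^{n} x_i \;-\; nm \;=\; 0 \quad\Longleftrightarrow\quad m \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i \;=\; \bar{x}.$$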
0Locaha
Can you recommend a place to start learning about set theory?
0ThrustVectoring
http://intelligence.org/courses/ has information on set theory. I also enjoyed reading Bertrand Russell's "Principia Mathematica", but haven't evaluated it as a source for learning set theory.

A few years back, the Amanda Knox murder case was extensively discussed on LW.

Today, Amanda Knox has been convicted again.

Did someone here ask about the name of a fraud where the fraudster makes a number of true predictions for free, then says "no more predictions, I'm selling my system"? There's no system; instead, the fraudster divides the potential victims into groups, and each group gets different predictions. Eventually, a few people have the impression of an unbroken accurate series.

Anyway, the scam is called The Inverted Pyramid, and the place I'd seen it described was in the thoroughly charming "Adam Had Three Brothers" by R.A. Lafferty.

Edited to add: It... (read more)

People often ask why MIRI researchers think decision theory is relevant for AGI safety. I, too, often wonder whether it's as likely to be relevant as, say, program synthesis. But the basic argument for the relevance of decision theory was explained succinctly in Levitt (1999):

If robots are to be put to more general uses, they will need to operate without human intervention, outdoors, on roads and in normal industrial and residential environments where unpredictable physical and visual events routinely occur. It will not be practical, or even safe, t

... (read more)

A year ago, I was asked to follow up on my post about the January 2013 CFAR workshop in a year. The time to write that post is fast approaching. Are there any issues / questions that people would be particularly interested in seeing this post address / answer?

2pewpewlasergun
I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.

Somewhere I saw the claim that in choosing sperm donors the biggest factor turns out to be how cute the baby pictures are, but at this point it's just a cached thought. Looking now I'm not able to substantiate it. Does anyone know where I might have seen this claim?

Does anyone else experience the feeling of alienation? And does anyone have a good strategy for dealing with it?

8Kawoomba
Yes, although it would help if you could be a bit more specific; the term is somewhat overloaded. As for the strategy, it depends. Find a better community (than the one you feel alienated from) in the sense of better matching values? We both seem to feel quite at home in this one (for me, if not for the suffocating supremacy of EA).
9Daniel_Burfoot
I meant alienated from society at large, not from LW, although the influence of society at large obviously affects discussion on LW.

One aspect of my feeling is that I increasingly suspect that the fundamental reason people believe things in the political realm is that they feel a powerful psychological need to justify hatred.

The naive view of political psychology is that people form ideological beliefs out of their experience and perceptions of the world, and those beliefs suggest that a certain category of people is harming the world, and therefore they are justified in feeling hatred against that category of people. But my new view is that causality flows in the opposite direction: people feel hatred as a primal psychological urge, and so their conscious forebrain is forced to concoct an ideology that justifies the hatred while still allowing the individual to maintain a positive pro-social self-image.

This theory is partially testable, because it posits that a basic prerequisite of an ideology is that it identifies an out-group and justifies hatred against that out-group.
8fubarobfusco
There is a quote commonly mis-attributed to August Bebel and indeed to Marx: "Antisemitismus ist der Sozialismus des dummen Kerls." ("Antisemitism is the socialism of the stupid guy", or perhaps colloquially, "Antisemitism is a dumb-ass version of socialism") That is to say, politically naïve people were attracted to antisemitism because it offered them someone to blame for the problems they faced under capitalism, which — to the quoted speaker's view, anyway — would be better remedied by changing the political-economic structure.

Jay Smooth recently put out a video, "Moving the Race Conversation Forward", discussing recent research to the effect that mainstream-media discussions of racial issues tend to get bogged down in talking about whether an individual did or said something racist, as opposed to whether institutions and social structures produce racially biased outcomes.

There are probably other sources for similar ideas from around the political spectra. (I'll cheerfully admit that the above two sources are rather lefter than I am, and I just couldn't be arsed to find two rightish ones to fit the politesse of balance.)

People do often look for individuals or out-groups to blame for problems caused by economic conditions, social structures, institutions, and so on. The individuals blamed may have precious little to do with the actual problems.

That said, if someone's looking to place blame for a problem, that does suggest the problem is real. It's not that they're inventing the problem in order to have something to pin on an out-group. (It also doesn't mean that a particular structural claim, Marxist or whatever, is correct on what that problem really is — just that the problem is not itself confabulated.)
3Richard_Kennaway
Does that make socialism the anti-semitism of the smart? Or perhaps of the ambitious -- they're attracted to it because it gives them an enemy big enough to justify taking over everything?
3NancyLebovitz
I've seen it phrased as "Anti-semitism is the socialism of fools".
0Daniel_Burfoot
Sure, obviously there are real problems in the world. Your examples seem to support my thesis that people believe in ideologies not because those ideologies are capable of solving the problems, but because the ideologies justify their feelings of hatred.
3fubarobfusco
I suppose I see it as more a case of biased search: people have actual problems, and look for explanations and solutions to those problems, but have a bias towards explanations that have to do with blaming someone. The closer someone studies the actual problems, though, the less credibility blame-based explanations have.
3Viliam_Bur
The part where the emotional needs come first, and the ideological belief comes later as a way of expressing and justifying them, that feels credible. I just don't think that everyone starts from the position of hatred (or, in the naive view, not everyone ends with hatred). There are other emotions, too. But maybe the people motivated by hatred make up a large part of the most mindkilled crowd, because other emotions can also be expressed legitimately outside of politics.
2maia
Do you have an in-person community that you feel close to? What I'm trying to get at is, does it bother you specifically that you are alienated from "society at large," or do you feel alienated in general?
1NancyLebovitz
Tentatively: Look for what "and therefore" you've got associated with the feeling. Possibilities that come to my mind: "and therefore people are frightening", "and therefore I should be angry at them all the time", "and therefore I should just hide", or "and therefore I shouldn't be seeing this". In any case, if you've got an "and therefore" and you make it conscious, you might be able to think better about the feeling.
7Lumifer
But of course. Accept that you're not average and not even typical.
4ChristianKl
Feelings usually become a problem when you resist them. My general approach with feelings:

1. Find someone to whom you can express the content behind the feeling. This works best in person; online communication isn't good for resolving feelings. Speak openly about whatever comes to mind.
2. Track the feeling down in your body. Be aware of where it happens to be. Then release it.
3memoridem
I think this feeling arises from social norms feeling unnatural to you. This feeling should be expected if your interests are relevant to this site, since people are not trying to be rational by default. The difference between a pathetic misfit and an admirable eccentric is their level of awesomeness. If you become good enough at anything relevant to other people, you don't have to live through their social expectations. Conform to the norms or rise above them. Note that I think most social norms are nice to have, but this doesn't mean there aren't enough of the kind that make me feel alienated. It could be that the feeling of alienation is a necessary side effect of some beneficial cognitive change, in which case I'd try to cherish the feeling. I've found that rising to a leadership position diminishes the feeling significantly, however.
1MathiasZaman
I think that feeling is more common than you might think, especially if you deviate enough from the societal norm (which Less Wrong generally does). My general strategy for dealing with it is social interaction with people who'll probably understand. Just talk it over with them. It's best if you do this with people you care about. It doesn't have to be in person; if you've got someone relevant on Skype, that works as well.
4Daniel_Burfoot
Hmm, this is probably good advice. Part of my problem is that my entire family is made up of people who are both 1) Passionate advocates of an American political tribe and 2) Not very sophisticated philosophically.
4MathiasZaman
A common condition with geeks in general and aspiring rationalists in particular, I'd say. I've recently been expanding my network of like-minded people, both by going to the local meetups and by being invited into a Skype group for tumblr rationalists. I know that a feeling of alienation isn't conducive to meeting new people, so I'm not sure I can offer other advice. Contact some friends who might be open to new ideas? I'd offer to help myself, but I'm not sure if I'm the right person to talk to. (In any case, I've PM'd my Skype name if you do need a complete stranger to talk to.)
[-]banx80

Is it always correct to choose that action with the highest expected utility?

Suppose I have a choice between action A, which grants -100 utilons with 99.9% chance and +1000000 utilons with 0.1% chance, and action B, which grants +1 utilon with 100% chance. A has an expected utility of +900.1 utilons, while B has an expected utility of +1 utilon. This decision will be available to me only once, and all future decisions will involve utility changes on the order of a few utilons.

Intuitively, it seems like action A is too risky. I'll almost certainly end up with ... (read more)

I think the non-intuitive nature of the A choice is because we naturally think of utilons as "things". For any valuable thing (money, moments of pleasure, whatever), anybody who is minimally risk averse would choose B. But utilons are not things; they are abstractions defined by one's preferences. So that A is the rational choice is a tautology, in the standard versions of utility theory.

It may help to think of it the other way around, starting from the actual preference. You would choose a 99.9% chance of losing ten cents and a 0.1% chance of winning 10000 dollars over winning one cent with certainty, right? So then perhaps, as long as we don't think of other bets and outcomes, we can map winning 1 cent to +1 utilon, losing 10 cents to -100 utilons, and winning 10000 dollars to +10000 utilons. Then we can refine and extend the "outcomes <=> utilons" map by considering your actual preferences under more and more bets. As long as your preferences are self-consistent in the sense of the VNM axioms, there will be a mapping that can be constructed.

ETA: of course, it is possible that your preferences are not self-consistent. The Allais paradox is an example where many people's intuitive preferences are not self-consistent in the VNM sense. But constructing such a case is more complicated than just considering risk-aversion on a single bet.
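In code, the comparison the original question describes is just a probability-weighted sum over outcomes (a minimal sketch, using the utilon numbers from the question):

```python
def expected_utility(lottery):
    """A lottery is a list of (probability, utilons) pairs."""
    return sum(p * u for p, u in lottery)

action_a = [(0.999, -100), (0.001, 1_000_000)]
action_b = [(1.0, 1)]

print(expected_utility(action_a))  # 900.1 (up to float rounding)
print(expected_utility(action_b))  # 1.0
```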

Also, it's quite possible that your utility function doesn't evaluate to +10000 for any value of its argument, i.e. it's bounded above.

0Matt_Simpson
Since utility functions are only unique up to affine transformation, I don't know what to make of this comment. Do you have some sort of canonical representation in mind or something?
-2A1987dM
In the context of this thread, you can consider U(status quo) = 0 and U(status quo, but with one more dollar in my wallet) = 1. (OK, that makes +10000 an unreasonable estimate of the upper bound; pretend I said +1e9 instead.)
-1jsteinhardt
Yes, this seems almost certainly true (and I think is even necessary if you want to satisfy the VNM axioms, otherwise you violate the continuity axiom).
4A1987dM
An unbounded function is one that can take arbitrarily large finite values, not necessarily one that actually evaluates to infinity somewhere.
2jsteinhardt
Yes, I'm quite aware... note that if there's a sequence of outcomes whose values increase without bound, then you could construct a lottery that has infinite value by appropriately mixing the lotteries together, e.g. put probability 2^-k on the outcome with value 2^k. Then this lottery would be problematic from the perspective of continuity (or even of having an evaluable utility function).
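Spelled out, that mixture is a legitimate lottery (the probabilities sum to 1) whose expected value diverges:

$$\sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} \;=\; \sum_{k=1}^{\infty} 1 \;=\; \infty, \qquad \text{while} \qquad \sum_{k=1}^{\infty} 2^{-k} = 1.$$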
0A1987dM
Are lotteries allowed to have infinitely many possible outcomes? (The Wikipedia page about the VNM axioms only says "many"; I might look it up in the original paper when I have time.)
6Oscar_Cunningham
There are versions of the VNM theorem that allow infinitely many possible outcomes, but they either 1) require additional continuity assumptions so strong that they force your utility function to be bounded or 2) they apply only to some subset of the possible lotteries (i.e. there will be some lotteries for which your agent is not obliged to define a utility). The original statement and proof given by VNM are messy and complicated. They have since been neatened up a lot. If you have access to it, try "Follmer H., and Schied A., Stochastic Finance: An Introduction in Discrete Time, de Gruyter, Berlin, 2004" EDIT: It's online.
0Matt_Simpson
See also Kreps, Notes on the Theory of Choice. Note that one of these two restrictions is required specifically to prevent infinite expected utility. So if a lottery spits out infinite expected utility, you broke something in the VNM axioms.

For anyone who's interested, a quick and dirty explanation is that the preference relation is primitive, and we're trying to come up with an index (a utility function) that reproduces the preference relation. In the case of certainty, we want a function U:O->R, where O is the outcome space and R is the real numbers, such that U(o1) > U(o2) if and only if o1 is preferred to o2. In the case of uncertainty, U is defined on the set of probability distributions over O, i.e. U:M(O) -> R. With the VNM axioms, we get U(L) = E_L[u(o)], where L is some lottery (i.e. a probability distribution over O).

U is strictly prohibited from taking the value of infinity in these definitions. Now you probably could extend them a little bit to allow for such infinities (at the cost of VNM utility perhaps), but you would need every lottery with infinite expected value to be tied for the best lottery according to the preference relation.
0jsteinhardt
I'm not sure, although I would expect VNM to invoke the Hahn-Banach theorem, and it seems hard to do that if you only allow finite lotteries. If you find out I'd be quite interested. I'm only somewhat confident in my original assertion (say 2:1 odds).
4Oscar_Cunningham
Um, A actually has a utility of -89.9. That explains why it seems less appealing!
4ThrustVectoring
I'd flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It's almost a definition of what utility is - if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong. If the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have (probably related to risk).

I've recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I'm working on working with that to do effective long-term planning -- the best trick I have so far is weighing "unacceptable status quo continues" as a risk. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn't help any, and I've been doing that).
0TylerJay
I sometimes have the same intuition as banx. You're right that the problem is not in the choice, but in the utility function, and it most likely stems from thinking about utility as money. Let's examine the previous example and make it into money (dollars): -100 [dollars] with 99.9% chance and +10,000 [dollars] with 0.1%, vs. a 100% chance at +1 [dollar].

When doing the math, you have to take future consequences into account as well. For example, if you knew you would be offered 100 loaded bets with an expected payoff of $0.50 in the future, each of which only cost you $1 to participate in, then you have to count this in your original payoff calculation if losing the $100 would prohibit you from being able to take these other bets. Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.

Then when you try to convert this to utility, it's even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn't feel this, it's still likely a factor in most actual humans' utility functions.

ThrustVectoring summed it up well above: if the utility function you think you have prefers B over A, and you prefer A over B, then there's some fact that's missing from the utility function you think you have. If you still prefer picking the +1 option, then most likely your assessment that the first choice only gives a negative utility of 100 is wrong. There are some other factors that make it a less attractive choice.
0Qiaochu_Yuan
Depending on your preferred framework, this is in some sense backwards: utility is, by definition, that thing which it is always correct to choose the action with the highest expected value of (say, in the framework of the von Neumann-Morgenstern theorem).
-2IlyaShpitser
People who play with money don't like high variance, and sometimes trade off some of the mean to reduce variance.

Daniel Dennett quote to share, on an argument in Sam Harris' book Free Will:

... he has taken on a straw man, and the straw man is beating him

From: http://www.samharris.org/blog/item/reflections-on-free-will#sthash.5OqzuVcX.dpuf

Just thought that was pretty damn funny.

0DanielLC
That's known as Strawman Has A Point (Warning: TVTropes).
0Alejandro1
Thanks for the link, that was an excellent exposition and defense of compatibilism. Here is one particularly strong paragraph:
0lmm
Isn't that begging the question?
1Alejandro1
It is common for incompatibilists to say that their conception of free will (as requiring the ability to do otherwise in exactly the same conditions) matches everybody's intuitions and that compatibilism is a philosopher's trick based on changing the definition. Dennett is arguing that, contrary to this, what actual people in actual circumstances do when they want to know if someone was "free to do otherwise" is never to think about global determinism; rather, as compatibilism requires, they think about whether that person (or relevantly similar people) actually does/do different when placed under very similar (but not precisely identical) conditions.
0buybuydandavis
I think the key is considering people "in similar, but not exactly identical, circumstances". It's how the person compares to hypothetical others. Free will is a concept used to sort people for blame based on intention.

The MIRI course list bashes on "higher and higher forms of calculus" as not being useful for their purposes and calculus is not on the list at all. However, I know that at least some kind of calculus is needed for things like probability theory.

So imagine a person wanted to work their way through the whole MIRI course list and deeply understand each topic. How much calculus is needed for that?

Not much. The kind of probability relevant to MIRI's interests is not the kind of probability you need calculus to understand (the random variables are usually discrete, etc.). The closest thing to needing a calculus background is maybe numerical analysis (I suspect it would be helpful to at least have the intuition that derivatives measure the sensitivity of a function to changes in its input), but even then I think that's more algorithms. Not an expert on numerical analysis by any means, though.

If you have a general interest in mathematics, I still recommend that you learn some calculus because it's an important foundation for other parts of mathematics and because people, when explaining things to you, will often assume that you know calculus after a certain point and use that as a jumping-off point.

2TylerJay
Thanks. I took single variable calculus, differential equations, and linear algebra in college, but it's been four years since then and I haven't really used any of it since (and I think I really only learned it in context, not deeply). I've just been trying to figure out how much of my math foundations I'm going to need to re-learn. This was helpful.

Last night we had a meetup in Ljubljana. It was a good debate, but quite a heretical one by LW standards, especially after the organizers left us, which was unfortunate. We mostly don't see ourselves as particularly bonded to LW at all. Especially me.

We discussed personal identity, possible near super-intelligence (sudden hack, if you wish), Universe transformation following this eventuality, and some lighter topics like fracking for gas and oil, language revolutions throughout history, neo-reactionaries and their points, Einstein's brains (whether they were l... (read more)

3Luke_A_Somers
Heretical? Well, considering that 'heretic' means 'someone who thinks on their own', I'm not sure how we're supposed to interpret that negatively. I assume however that you meant 'disagreeing with common positions displayed on LW' - which of those common positions did you differ on, and why, and just how homogeneous do you think LW is on those?
-2Thomas
I can speak mostly for myself. Still, we locals go back a decade and more, discussing some topics.

It is kind of clear to me that there is a race toward superintelligence, just as there has always been a race toward some future technology - flying, the atomic bomb, the Moon race... you name it. Except that this is the final, most important race ever. What can you expect then from the competitors? You can expect them to claim that the Singularity/Transcendence is still far, far away. You can expect that the competition will try to persuade you to abandon your own project, if you have any - for example, by saying that an uncontrollable monster is lurking in the dark, named UFAI. They will say just about anything to persuade you to quit. This works both ways, between almost any two competitors, to be clear.

My view is the following. If you are clever and daring enough, you can write a computer program of about 10000 lines, and there will be the Singularity the very next month. I am not sure if there is a human (group) currently able to accomplish this. There very well might be. It's likely NOT THAT difficult.

We discussed Marilyn vos Savant's toying with Paul Erdos. A smartass against a top scientist is occasionally like a cat and mouse game, where the mouse mistakenly thinks he's a cat. There are many other examples, like Ballard against all the historians and archeologists. Or Moldbug against Dawkins. Of course, that does not automatically mean another smartass is preying upon MIRI and AI academia combined in the real AI case. But it's not impossible. There may be several different big cats in the wild who keep a low profile for the time being. There might be a lion with his pride inhabiting academia, too.

The most interesting outcome would be no Singularity for a few decades.
4Lumifer
That seems an... unusual view. Have you actually tried writing code that exhibits something related to intelligence? 10K lines is not a big program.
0Luke_A_Somers
It depends on your language and coding style, doesn't it? I've seen C style guides that require you to stretch out onto 15 lines what I'd hope to take 4, and what in a good functional language shouldn't take more than 2.
0Lumifer
Yes, and the number of lines is a ridiculously bad metric of the code's complexity anyway. There was a funny moment when someone I know was doing a Java assignment, I got curious, and it turned out that a full page of Java code is three lines in Perl :-)
0Luke_A_Somers
That really depends on coding style, again. I find that common Java coding styles are hideously decompressed, and become far more readable if you do a few things per line instead of maybe half a thing. Even they aren't as bad as the worst C coding styles I've seen, though, where it takes like 7 lines to declare a function. As for Perl vs Java... was it solved in Perl by a Regex? That's one case where if you don't know what you're doing, Java can end up really bloated but it usually doesn't need to be all that bad.
0Lumifer
I don't remember the details by now, but I think that yes, there was a regexp and a map, and a few of Perl's shortcuts turned out to be useful...
0Thomas
I have certain abilities. This is the product of the product of mine from 10 years ago. Smartass I am. Probably not smart enough to really make a difference, though.
6Lumifer
Smartass is good. Saying things which are clearly not true without a hidden smartassy implication behind them -- not so much :-)
[-]pan60

Is there a reasonably well researched list of behaviors that correlate positively with lifespan? I'm interested in seeing if there are any low hanging fruit I'm missing.

I found this previously posted, and a series of posts by gwern, but was wondering if there is anything else?

A quick google will give you a lot of lists but most of them are from news sources that I don't trust.

5John_Maxwell
Romeo Stevens made this comprehensive doc.
0pan
This is really great, do you know if the sources are compiled anywhere?
3Qiaochu_Yuan
I found this list of causes of death by age and gender enlightening (it doesn't necessarily tell you that a particular action will increase your lifespan, but then again neither do correlations). For example, I was surprised by how often people around my age or a bit older die of suicide and "poisoning" (not sure exactly what this covers but I think it covers stuff like alcohol poisoning and accidentally overdosing on medicine?).
1Vladimir_Golovin
Eating a handful of nuts a day. http://www.medicalnewstoday.com/articles/269206.php
1Richard_Kennaway
But: Indeed, the study consists only of observational data, not interventional, so what causal conclusions could be drawn from it?
2IlyaShpitser
You act like people never did a valid causal analysis of the data in the Nurses' health study.
0Richard_Kennaway
I know I overstated things. There are such things as natural experiments, having some causal information already, etc. I'm not familiar with the Nurses' health study, and a quick google only turns up its conclusions. What methods did they use?
2IlyaShpitser
Sorry, there are two separate issues: the data itself (which is a big dataset where they followed a big set of nurses for many years, and recorded lots of things about them), and how the data could be used to maybe get causal conclusions. Plenty of folks at Harvard (e.g. Miguel Hernan, Jamie Robins) used this data in a sensible way to account for confounding (naturally their results are relatively low on the 'hierarchy of evidence', but still!). Trying to draw causal conclusions from observational data is 95% of modern causal inference!
0Lumifer
Depends on what you'd call "well-researched" but, unfortunately, most of it is fuzzy platitudes. For example:

* Do physical exercise. But not too much.
* Be happy, avoid stress.
* Get happily married.
* Don't get obese.

and most importantly

* Choose your parents well, their genes matter :-P

Has anyone had experiences with virtual assistants? I've been aware of the concept for many years but always been wary of what I perceive to be the risks involved in letting a fundamentally unknown party read my email.

I'd like to hear about any positive or negative experiences.

One problem with searching for information about the trustworthiness of entities like these is that one suspects any positive reports one finds via Googling to be astroturfing, and if one finds negative reports, well, negatives are always over-reported in consumer services. That's why I'm asking here.

0Adam Zerner
I don't, but in Tim Ferriss' book The 4-Hour Workweek, I think I recall him recommending them. I think this was the one he recommended: https://www.yourmaninindia.com/. Let me know if you come across some good findings on this. If effective, virtual assistants could be very useful, and thus they're something I'm interested in. On that note, it'd probably be worth writing a post about them.

Has anyone paired Beeminder and Project Euler? I'd like to be able to set a goal of doing x problems per week and have it automatically update, instead of me entering the data in manually. Has anyone cobbled together a way to do it, which I could piggyback off of?
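I don't have an existing integration to point to, but the Beeminder side is just an authenticated POST to its datapoints endpoint; the Project Euler side would have to come from scraping your own progress page, since as far as I know there's no official API. A rough sketch (untested; the progress URL, the page text the regex matches, and the session-cookie handling are all assumptions you'd need to check against the live sites, and the usernames/tokens are hypothetical placeholders):

```python
import re
import requests  # third-party: pip install requests

# --- Assumptions: fill these in from your own accounts ---
BEEMINDER_USER = "yourname"            # hypothetical
BEEMINDER_GOAL = "project-euler"       # hypothetical goal slug (odometer-style goal assumed,
                                       # i.e. one that takes cumulative totals)
BEEMINDER_TOKEN = "your_auth_token"    # from your Beeminder account settings
PROGRESS_URL = "https://projecteuler.net/progress"          # assumption: needs a logged-in session
PE_SESSION_COOKIE = {"PHPSESSID": "your_session_cookie"}    # assumption about the cookie name

def solved_count():
    """Scrape the number of solved problems (assumption: the page contains
    text like 'Solved 123 out of ...')."""
    html = requests.get(PROGRESS_URL, cookies=PE_SESSION_COOKIE).text
    match = re.search(r"Solved\s+(\d+)\s+out of", html)
    if not match:
        raise RuntimeError("Couldn't find the solved count; the page layout may differ.")
    return int(match.group(1))

def post_to_beeminder(value):
    """Add a datapoint through the Beeminder API."""
    url = (f"https://www.beeminder.com/api/v1/users/"
           f"{BEEMINDER_USER}/goals/{BEEMINDER_GOAL}/datapoints.json")
    requests.post(url, data={"auth_token": BEEMINDER_TOKEN, "value": value,
                             "comment": "auto-update from Project Euler"})

if __name__ == "__main__":
    post_to_beeminder(solved_count())
```

Run it from cron (or a similar scheduler) and Beeminder would pick up the cumulative count automatically; if the goal isn't odometer-style, post the difference since the last run instead.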

I hadn't realised before that Max Tegmark's work was actually funded by a massive grant from the Templeton Foundation. $9 million to found FQXI.

The purpose of the Templeton Foundation is to spray around more money than most academics could dream of - $9 million for philosophy! - seeking to blur the lines between science and religion and corrupt the public discourse. The best interpretation that can reasonably be put on taking the Templeton shilling is that one is doing so cynically.

This is not pleasing news, not at all.

0Nornagest
What's your basis for this interpretation? And particularly the "corrupt the public discourse" bit? I read your link, and I remember it getting briefly badmouthed in The God Delusion, but I'd prefer something a little more solid to go on, since this seems to lie on the sharp side of Hanlon's razor.
2ahbwramc
Well, here's Sean Carroll's take on the matter. They don't seem like the worst organization in the world or anything, but I too was disappointed to hear about Max accepting their money.
2Nornagest
Thanks, that's the kind of thing I was looking for. I'd expect (boundedly) rational people to be able to disagree on the utility of promoting secularism, but Carroll's take on it does seem like a reasonable and un-Hanlony approach to the issue.
2David_Gerard
If I was offered $9m, I'd take it! Not that anyone's offering. But it's definitely a significant hit to his credibility.

Any book recommendations for a good intro to evolutionary psychology? I remember Eliezer suggested The Moral Animal, but I also vaguely remember some other people recommending against it. I'll probably just go with TMA unless some other book gets suggested multiple times.

5hyporational
I found TMA too full of just-so stories. I also think it disturbingly rationalized a particular brand of sexism$ and overemphasized status, which was very unexpected since I don't think I'm squeamish at all on those fronts. I don't think it helped me to predict human behavior better. This said, I'd be interested too if someone could recommend some other book.

$ rigid view of differences between the sexes, incompatible with my experience (which does suggest the sexes are different)
2Jayson_Virissimo
Evolutionary Psychology: The New Science of the Mind, by David Buss is a pretty good, mainstream, and accessible introduction to the field. I don't regret reading it.
2beoShaffer
I second the recommendation. It was used as one of two textbooks for my evo-psyc class, and worked quite well.
0drethelin
I think "Evil" by Roy F Baumeister is a really good exploration that includes evo psych elements though is not primarily about evo psych.
0Stabilizer
This is not a book, but looks interesting.
-8Izeinwinter

I don't understand why wireheading is almost universally considered worse than death, or at least really really negative.

5Slackson
I would assume that it's considered worse than death by some because with death it's easier to ignore the opportunity cost. Wireheading makes that cost clearer, which also explains why it's considered negative compared to potential alternatives.
2DefectiveAlgorithm
Speaking for myself, I consider wireheading to be very negative, but better than information-theoretic death, and better than a number of scenarios I can think of.
1JQuinton
I think the big fear is stasis. In each case you're put in a certain state of being without any recourse to get out of it, but wireheading seems to be like a state of living death.
2skeptical_lurker
I concur, but I think it wise to draw a distinction between wireheading as in an extreme example of a blissed-out opiate haze, where one does nothing but feel content and so has no desire to achieve anything, and wireheading as in a state of strongly positive emotions where curiosity, creativity etc. remain intact. Yes, if a rat is given a choice it will keep on pressing the lever, but maybe a human would wedge the lever open and then go and continue with life as normal? To continue the drug analogy, some drugs leave people in a stupor, some make people sociable, some result in weird music. I would say the first type is certainly better than death, and the latter 'hedonistic imperative' wireheading sounds utopian.
0drethelin
Some people value "actual things" being achieved by entities and, as Slackson implied, a society of wireheads takes away resources and has opportunity costs.

How would you define the word "sexy" in terms of signaling?

3Lumifer
"Sexy" isn't signaling -- it's a characteristic that people (usually) try to signal, more or less successfully. "I'm sexy" basically means "You want me" : note the difference in subjects :-)
2amacfie
Would it change for particular behavior, e.g. clothes, dancing/gestures, language?
1Lumifer
Pretty much the same thing. Regardless of an, um, widespread misunderstanding :-D sexy behavior does NOT signal either promiscuity or sexual availability. It signals "I want you to desire me" and being desired is a generally advantageous position to be in.
1ChristianKl
If a man succeeds in signaling high sexuality to a woman, the woman might still treat him as a creep. Especially if there's no established trust, signaling really high amounts of sexuality doesn't result in "You want me". In my own interactions with professional dancers there are plenty of situations where the woman succeeds in signaling a high amount of sexiness. I however know that I'm dancing with a professional dancer who is going to send that signal to a lot of guys, so she doesn't enter my mental category of potential mates. I think people frequently go wrong when they confuse impressions of characteristics with goals.
2Lumifer
In which case he failed to signal "sexy" and (a common failure mode) signaled "creepy" instead.
1ChristianKl
It depends on how you define the term. For a reasonable definition of sexy, the term refers to letting a woman feel sexual tension. If you talk about social interactions, it's useful to have a word that refers to making another person feel sexual tension. Of course you can define beautiful, attractive and sexy all the same way. Then you get a one-dimensional model where Bob wants Alice with utility rating X. I don't think that model is very useful for understanding how humans behave in mating situations.
1Lumifer
I define it as "arousing sexual interest and desire in people of appropriate gender and culture". Note that this is quite different from "beautiful" and is a narrow subset of "attractive". "Tension" generally implies conflict or some sort of a counterforce.
0ChristianKl
Testosterone, which is commonly associated with sexiness in males, is about dominance. It has something to do with power, and that does create tension. Of course a woman can decide to have sex with a shy guy because he's nice and she thinks that he's intelligent or otherwise a good match. Given that there are shy guys who do have sex, that's certainly happening in reality. Does that mean that the behavior of that guy deserves the label "sexy"? I don't think he's commonly given that label.

There are also words like sensual and empathic. A guy can get laid by being very empathic and just making a woman feel really great by interacting with her in a sensual way. I think it's useful to separate that mentally from the kind of behavior that comes from testosterone and commonly gets called sexy.

If you read an exciting thriller you are also feeling tension, even when you aren't in conflict with the book or there's some counterforce. Building up tension and then releasing it is a way for humans to feel pleasure.
2ChristianKl
Sexy is a quite broad word that's probably used by different people in different ways. I think for most people it's about what they feel when looking at the person. Those feelings were set up by evolution over long time frames. Evolution doesn't really care about whether you get a fun intercourse partner. But it's not only evolution. It also has a lot to do with culture. Culture also doesn't care about whether you get a fun intercourse partner. People who watch a lot of TV get taught that certain characteristics are sexy. For myself, I would guess that most of my cultural imprint regarding what I find sexy comes from dancing interactions. If a woman moves in a way that suggests that she doesn't dance well, that will reduce her sex appeal to me more than it probably would for the average male.
2Torello
Being sexy signals health, youth, and fertility. This is quite well supported by evidence and discussed in many books and articles. I would agree with what Lumifer says below, but I think sexy can be signalling when many people are involved: look at the sexy people I hang out with. Being with sexy people brings high status because it's high status.
1ChristianKl
I think you confuse the label "sexy" with the label "attractive". As far as my reading goes few articles use the term sexy.

I keep looping through the same crisis lately, which comes up any time someone points out that I'm pretentious / an idiot / full of shit / lebensunwertes Leben (life unworthy of life) / etc.:

Is there a good way for me to know if I'm actually any good at anything? What are appropriate criteria to determine whether I deserve to have pride in myself and my abilities? And what are appropriate criteria to determine whether I have the capacity to determine whether I've met those criteria?

4Shmi
Having followed your posts here and on #lesswrong, I got an impression of your personality as a bizarre mix of insecurities and narcissism (but without any malice), and this comment is no exception. You are certainly in need of a few sessions with a good therapist, but, judging by your past posts, you are not likely to actually go for it, so that's a catch 22. Alternatively, taking a Dale Carnegie course and actually taking its lessons to heart and putting an effort into it might be a good idea. Or a similar interpersonal relationship course you can find locally and afford.
2[anonymous]
If you don't mind, I'm gonna use this in my twitter's bio.
1ialdabaoth
Yeah, the narcissism is something that I've been trying to come up with a good plan for purging since I first became aware of it. (I sometimes think that some of the insecurities originally started as a botched attempt to undo the narcissism). The therapy will absolutely happen as soon as I have a reasonable capacity to distinguish "good" therapists from "bad" ones.
3wedrifid
Bad plan (and also a transparent, falsely humble excuse to procrastinate). Picking a therapist at random will give you distinctly positive expected value. Picking a therapist recommended by a friend or acquaintance will give you somewhat better expected value. Incidentally, one of the methods by which you can most effectively boost your ability to distinguish good therapists from bad therapists is actual exposure to therapists.
4gjm
Some things are easier to tell whether you're good at than others. I guess you aren't talking about the more assessable things (school/university studies, job, competitive sport, weightlifting, ...) but about things with a strong element of judgement (quality as a friend or lover, skill in painting, ...) or a lot of noise mixed with any signal there might be (stock-picking[1], running a successful startup company, ...).

[1] Index funds are the canonical answer to that one, but you know that already.

So, anyway, the answer to "how do I tell if I'm any good at X?" depends strongly on X. But maybe you really mean not "(know if I'm actually any good at) anything" but "know if I'm actually (any good at anything)" -- i.e., the question isn't "am I any good at X?" but "is there anything I'm any good at?". The answer to that is almost certainly yes; if someone is seriously suggesting otherwise then they are almost certainly dishonest or stupid or malicious or some combination of those, and should be ignored unless they have actual power to harm you; if some bit of your brain is seriously suggesting otherwise then you should learn to ignore it.

There are almost certainly specific X you have good evidence of being good at, which will imply a positive answer to "is there anything I'm good at?". Pick a few, inspect them as closely as you feel you have to to be sure you aren't fooling yourself, and remember the answer.

If someone else is declaring publicly that you are a pretentious idiot and full of shit, it is likely that what's going on is not at all that they're trying to make an objective assessment of your capabilities or character, but that they are engaged in some sort of fight over status or influence or something, and are saying whatever seems like it may do damage. I expect you have good reasons for getting into that sort of fight, so I'll just say: bear in mind when you do that this is a thing that happens, and that such comments are usually not useful feedback f
2NancyLebovitz
You could take a cognitive psych approach to some of this. What are the other person's qualifications?

I recommend exploring the concept of good enough.

There's a bit in Nathaniel Branden about "a primitive sense of self-affirmation" -- which I take to be the assurance that babies start out with that they get to care about their pain and pleasure. It isn't even a question for them. And animals are pretty much the same. You don't need to have a right to be on your own side, you can just be on your own side.

Something I've been working on is getting past the idea that the universe is keeping score, and I have to get everything right.

What I believe about your situation is that you've been siding with your internal attack voice, and you need to associate your sense of self with other aspects of yourself, like overall physical sensations.

Do you have people who are on your side? If so, can you explore taking their opinion seriously? The attack voice comes on so strong it seems like the voice of reality, but it's just a voice. I've found that it's hard work to change my relationship to my attack voice, but it's possible.

For what it's worth, I think your prose is good. It's clear, and the style (as distinct from the subject matter) is pleasant.
3ialdabaoth
Generally, their qualifications are that the audience is rallying around them. Also, they don't know me, which makes them less likely to be biased in my favor. (I.e., the old "my mom says I'm great at , so shut up!" problem)

This flies in the face of the political climate I exist within, which talks primarily about the galling "entitlement" of poor people who believe they have the right to food and shelter and work.

It's very, very difficult, primarily because people who are INTENSELY on my side are never as vocal as people who are casually against me. I.e., people who clearly love me and are willing to share portions of their life with me are willing to go so far as to say "I think you do pretty well." People whom I've never met are willing to go so far as to say "fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I'm looking up your address; I'll kill you."

That churns up all sorts of emotional and social reactions, which makes processing the whole thing rationally even harder.
2NancyLebovitz
On the other hand, they might be more likely to be biased against you, and they certainly don't know a lot about your situation.

Can you find a different political environment? I've noticed that conservatives tend to think that everything bad that happens to a person is the fault of that person, and progressives tend to think that people generally don't have any responsibility for their misfortunes. Both are overdoing it, but you might need to spend some time with progressives for the sake of balance.

Also, I've found it helps to realize that malice is an easy way of getting attention, so there are incentives for people to show malice just to get attention -- and some of them are getting paid for it. The thing is, it's an emotional habit, not the voice of reality.

Unfortunately, people are really vulnerable to insults. I don't have an evo psy explanation, though I could probably whomp one up.

It is very difficult, but I think you've made some progress. All I can see is what you write, but it seems like you're getting some distance from your self-attacks in something like the past year or so.

I find it helps to think about times when I've been on my own side and haven't been struck by lightning.
0satt
I might be an outlier, but a spiel like "fucking kill yourself you fucking loser. Stop acting like you even know how to person, let alone . Fuck it, I'm looking up your address; I'll kill you" doesn't signal casualness to me. The only people I'd expect to say that casually are trolls trying to get a rise out of people. Idle trolling aside, someone laying down a fusillade of abuse like that is someone who cares quite a bit (and doubtless more than they'd like to admit) about my behaviour. Hardly an unbiased commentator! (I recognize that's easier said than internalized.)
0Lumifer
I recommend empirical reality. The kind that exists outside of your (and other people's) head.
[-][anonymous]40

Following up on http://lesswrong.com/lw/jij/open_thread_for_january_17_23_2014/af90 :

  • I've created a minimally (possibly sub-minimally) viable wiki page: http://wiki.lesswrong.com/wiki/Study_Hall
  • I've started playing with SimpleWebRTC and its component parts
  • I am precommitting to another update by February 10th

This is a minimally-viable update on account of recent travel and imminent job interviews, but the precommitments seem to be succeeding in at least forcing something like progress and keeping some attention on the problem.

http://www.edge.org/responses/what-scientific-idea-is-ready-for-retirement

Some of these ideas are very poorly thought out. Some are interesting.

I'm in art school and I have a big problem with precision and lack of "sloppiness" in my work. I'm sort of hesitant to try to improve in this area, however, because I suspect it reflects some sort of biological limit - maybe the size of some area in the cerebellum or something, I don't know. Am I right in thinking this?

[-]maia140

Seems to me that that's likely a self-fulfilling prophecy, which I subjectively estimate is at least as likely to prevent you from doing better as an actual biological problem. Maybe try to think of more ways to get better at it - perhaps some different kind of exercises - and do your best at those, before drawing any conclusions about your fundamental limits... because those conclusions themselves will limit you even more.

6fubarobfusco
I have never biked twenty miles in one go.
It could be that this reflects some inherent limit.
Or it could be that I just haven't tried yet.

If I believe that it is an inherent limit, how might I test my belief?
Only by trying anyway. If I try and succeed, then I will update.

If I believe that it is not an inherent limit, how might I test my belief?
Only by trying anyway. If I try and fail, then I will update.

In either case, the test of my ability
Is not in contemplating what mechanisms of self might limit me,
But in trying anyway, when I have the opportunity to do so,
And seeing what happens.
2A1987dM
Be careful not to find yourself 7 miles away from home on your bike and too tired to keep on cycling.
2Richard_Kennaway
If that means arranging with a friend to pick you up in their car if you have to bail out, or picking a circular route that never takes you that far from home, or any other way of handling the contingency, then do that. Going "but suppose I fail!" and not trying is an even worse piece of wormtonguing than the one fubarobfusco is addressing.
6Stabilizer
Just to be clear: you're worried that you aren't sloppy enough? If so, for us non-artists, can you explain how 'sloppiness' can be a good thing?
3gothgirl420666
Sorry, I communicated poorly. I meant [introducing] lack of sloppiness into my work. That's not what I meant. I'm too sloppy.
8Stabilizer
You should edit the original question. People seem to be answering the wrong question below.
0Manfred
I think it's a metaphor thing. Like, in writing, if you say "The shadow of a lamppost lay on the ground like a spear. He walked and it pierced him like a spear." What more description of the scene do you need than that? In fact, talking about the color of the path or what kind of trousers our character was wearing would be counterproductive to the quality of the writing. One could view sloppiness in art in the same way - use of metaphor that captures the scene without the need for detail. And no, of course it's not a biological limit.
2EStokes
Some guesses on my part:

* Maybe your tendency towards precision is at the wrong times? If practicing, for example, it might be counterproductive since you probably want quantity instead of quality, or maybe you're trying to get everything down precisely too early on and it's making your work stiff.
* Manfred's point is good - "metaphor that captures the scene without the need for detail."... If you render background details overmuch, they can distract the viewer from the focal point of the work. Maybe put some effort into looking at how the "metaphors" of different things work? For example, how more skilled artists draw/paint grass in the distance, or whatnot.
* I think it's a common thing to sort of notice something wrong in an area, and to spend a lot of time on that area in hopes of fixing it, which would make it less sloppy... Maybe sketch that thing a lot for practice.
* If you're drawing from life, it's possible that lack of sloppiness comes from not making sense of the gestalt, so to speak. I'd think that understanding the form of the subject and how the lighting on it works means you can simplify things away. I don't do much (read: any) figure drawings from life, but I'd imagine that understanding the figure and what's important and what isn't would be helpful. Maybe doing some master copies of skilled, more abstract drawings of the figure would help. Maybe look up a comic artist or cartoonist you like and look at what they do.

ETA: To address your actual question, I'd say I don't know any particular evidence for why that should be so. Rationality-technique-wise: It's good that you asked people, since that would bring you evidence of the idea being true or false. In the future it might be even more useful to suppress hypothesizing until some more investigating has gone on - "biological limit" is the sort of thing that feels true if you don't understand how to do something or how to understand how to do something. I think there's a post about this,
2ChristianKl
I would guess that you try to exert too much control. The kind of "sloppiness" that's useful for creativity is about letting things go. Meditation might help. As you are female, dancing a partner dance where you have to follow and can't control everything might be useful. Letting go of trying to control is lesson 101 for a lot of women who pick up Salsa dancing.
9A1987dM
He isn't.
4gothgirl420666
I'm already good at this part of creativity, but precision is also pretty important. Right now I'm working on a project where I have to trace in pen (can't erase, flaws are obvious) photographs that I took. Letting things go won't help here. I already do meditate. I'm not, sorry.
4palladias
Swing classes are pretty good about letting either gender learn to follow, if you'd like.
2buybuydandavis
As a lead, you learn that you aren't really controlling much of anything in Salsa either. You're setting boundary conditions; follows have a fascinating way of exploring the space of those boundaries in ways you often don't expect. But I'm guessing that you've hit on the right direction of interpretation of sloppiness as letting go of control. I'd extend that to too much self conscious control. Great art, and particularly great dancing, is finding a clear intention and a method of focusing your discursive consciousness and voluntary attention that harnesses the rest* of your capabilities for the same intention. When the self monitoring person in your head tries to do too much, he gets in the way of the rest of you doing it right.
0ChristianKl
For advanced dancing that's true. For beginners, not so much. At the beginning Salsa is the guy leading a move and the woman following. If you are a guy and want to learn dancing for the sake of letting go of control, I wouldn't recommend Salsa. I think it took me 1 1/2 years to get to that point.
0buybuydandavis
A whole 1 1/2 years? Took me a lot longer than that. I've been at Salsa mainly for about a decade. Yes, the unfortunate fact is that most leads are taught to "lead moves" when they start. If they were taught to lead movement, they'd make faster progress, IMO. Leading should be leading, to the point of manipulation, and not signaling a choreographed maneuver. I've seen a West Coast instructor teach a beginning class that way, and thought it was the best beginning class I had ever seen.
0ChristianKl
I think one of the turning points for me was my first Bachata Congress in Berlin. I didn't know too many Bachata patterns, and after hours of dancing the brain just switches off and lets the body do its thing. But you are right that it might well take longer for the average guy. That means it's not a good training exercise for a man to pick up the skill of letting go of control. For women, on the other hand, it's something to be learned at the beginning.

At the beginning I mainly thought I didn't understand what teaching dance is all about and that a bunch of teachers have something like real expertise. The more I dance, the more I think that their teaching is very suboptimal. A local Salsa teacher teaches mainly patterns in her lessons. On the other hand she writes on her blog about how it's all in the technique and about traits like confidence. It's also not like she didn't learn dance at formal dance university courses for 5 years, so she should know a bit. Things like telling a guy who dances with a bit of distance to the girl to dance closer just aren't good advice when the girl isn't comfortable with dancing closer. Yes, if they danced closer things would be nicer, but there's usually a reason why a pair has the distance it has.

Manipulation is an interesting choice of words. What do you mean by it? I remember a Kizomba dance a year ago where I didn't know much Kizomba. I did have a lot of partner perception from Bachata. I picked up enough information from my dance partner that I could just follow her movements, in a way where she didn't think she was leading but I was certainly dancing a bunch of steps with her I hadn't learned in a lecture. To use sort of what "manipulation" means in osteopathy, I think you could call that nonmanipulative leading.

In Bachata I think there are a lot of situations where a movement is there in the body but suppressed, and things get good if the lead can "free" the movement and stabilize it. I think such nonmanipulative da
0A1987dM
That seems related to the common observation that it's easier to speak a foreign language when drunk than when sober: in the latter case I feel I'm so worried about saying something grammatically incorrect that I end up speaking in very simple sentences, and very haltingly. (And the widespread use of drugs among rock musicians is well known.)
1Ishaan
If other people working in the same craft have managed to achieve precision, it's very unlikely to be a biological limit, right? The resolution of human fine motor skills is really high. You didn't mention what the craft was or the nature of the sloppiness, but have you considered using simple tools to augment technical skill? Perhaps a magnifying glass, rulers, pieces of string/clay, or other suitably shaped objects to guide the hand, etc.?
0memoridem
You could try doing something that gives immediate feedback on sloppiness: simple math problems, for example. You might gain some generalizable insight, such as how speed affects sloppiness. Since you already practice meditation, it should be easier to become aware of the specific failure modes that contribute to sloppiness, which doesn't seem to be a well-defined thing in itself.

I'm recalling a Less Wrong post about how rationality only leads to winning if you "have enough of it". Like if you're "90% rational", you'll often "lose" to someone who's only "10% rational". I can't find it. Does anyone know what I'm talking about, and if so can you link to it?

4ahbwramc
This, maybe? http://lesswrong.com/lw/7k/incremental_progress_and_the_valley/
0Adam Zerner
I'm like 60% sure that it's not the article I had in mind, but the idea is the same (incremental increases in rationality don't necessarily lead to incremental increases in winning), so I feel pretty satisfied regardless. Thanks!
0ygert
Could the article you had in mind be this? In any case, Eliezer has touched on this point multiple times in the sequences, often as a side note in posts on other topics. (See, for example, Why Our Kind Can't Cooperate.) It's an important point, regardless.
0Adam Zerner
No, that wasn't it. I don't think it was by Eliezer. And I think it was a featured or promoted article in Main.

I'm quite new to LW, and find myself wondering whether Hidden Markov models (HMMs) are underappreciated as a formal reasoning tool in the rationalist community, especially compared to Bayesian networks.

Perhaps it's because HMMs seem more difficult to grasp?

Or is it because, formally, HMMs are just a special case of Bayesian networks (i.e. dynamic Bayes nets)? Still, HMMs are widely used in science in their own right.

For comparison, Google search "bayes OR bayesian network OR net" site:lesswrong.com gives 1,090 results.

Google search hidden markov model site:lesswrong.com gives 91 results.

2ChristianKl
Hidden Markov models are a modeling tool for a specific kind of problem. If you don't face that kind of problem, they are of no use. Most of the problems we discuss aren't modeled well by HMMs.
0moridinamael
Out of curiosity, did you happen to read Kurzweil's recent book on HHMMs? I think the safest answer is that an HMM is just a specific way of mathematically writing down an updating Bayesian network.
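If it helps to see what "an updating Bayesian network" means concretely, here is a minimal sketch of HMM filtering (the forward algorithm) in Python with NumPy; the two-state model and all of the numbers are made up purely for illustration. Each step is just a Bayesian update: propagate the belief through the transition model, then condition on the new observation.

```python
import numpy as np

# Toy 2-state HMM; every number here is made up for illustration.
# Hidden states: 0 = "rainy", 1 = "sunny"; observations: 0 = "umbrella", 1 = "no umbrella".
transition = np.array([[0.7, 0.3],   # transition[i, j] = P(next state j | current state i)
                       [0.3, 0.7]])
emission = np.array([[0.9, 0.1],     # emission[i, k] = P(observation k | state i)
                     [0.2, 0.8]])
belief = np.array([0.5, 0.5])        # prior P(state) before seeing any observations

def forward_step(belief, obs):
    """One filtering step = one Bayesian update:
    predict with the transition model, then reweight by the observation likelihood."""
    predicted = belief @ transition            # P(next state) = sum_i P(state i) * P(next | i)
    unnormalized = predicted * emission[:, obs]
    return unnormalized / unnormalized.sum()   # normalize to get the posterior

for obs in [0, 0, 1]:                          # a short observation sequence
    belief = forward_step(belief, obs)
    print(belief)                              # posterior over the hidden state after each observation
```

Unrolled over time, this is exactly inference in a dynamic Bayes net with one hidden node and one observed node per time step, which is the sense in which an HMM is a special case of a Bayesian network.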
0gedymin
No, never heard of it. I'm not a Utopian, and from what I know of Kurzweil's ideas and arguments, they don't seem to be sound enough.
3moridinamael
Well, Kurzweil is an extremely accomplished inventor aside from being a pie-in-the-sky futurist, so when he says something about a particular algorithm working well, I assume he knows what he's talking about. He seems to think hierarchical hidden Markov models are the best way to represent the hierarchical nature of abstract thought. I'm not saying he's correct; just saying, it seems to be a popular idea.
0Qiaochu_Yuan
There's a proliferation of terminology in this area; I think a lot of these are in some sense equivalent and/or special cases of each other. I guess "Bayesian network" is more consistent with the other Bayes-based vocabulary around here.

Is there a good way of finding what kind of job might fit a person? Common advice such as "do what you like to do" or "do what you're good at" is relatively useless for finding a specific job or even a broader category of jobs.

I've done some reading on 80,000 Hours, and most of the advice there is about how to choose between a couple of possible jobs, not about finding a fitting one from scratch.

3memoridem
I think for most people who ask this question, the range of fitting jobs is much wider than they think. You learn to like what you become good at. If I were to pick a career right now, I'd just take a long list of reasonably complex jobs and remove any that contain an obvious obstacle, like a skill requirement I'm unlikely to be able to meet. Then, from what is left, I'd narrow the choice by criteria other than perceived fit (income and future employment prospects, for example), and then pick one either by some additional criterion or randomly. I'm confident I'd learn to like almost any job chosen this way. If you make money, you can do whatever you like in the future, even if you chose your job poorly in the first place. So please don't choose to become an English major.
2ChristianKl
That's a strange question. Either you want to know how to pick up the skill of being a career adviser, or you want to find a job for yourself. You might also be a parent trying to find a job that fits their child instead of letting the child decide for themselves. I think the answers to those three possibilities are very different.
0MathiasZaman
It's this option, although the general skill of being a career advisor also sounds appealing in the abstract.
-3ChristianKl
You managed to give this answer without using the word "I". If you want to live a self-determined life, don't speak of yourself in the third person. Start associating with yourself. I think that will bring you a huge step in the right direction.

Does anyone have a simple, easily understood definition of "logical fallacy" that can be used to explain the concept to people who have never heard of it before?

I was trying to explain the idea to a friend a few days ago but since I didn't have a definition I had to show her www.yourlogicalfallacyis.com. She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.

She understood the concept quickly, but it would be much more reliable and eloquent to actually define it.

You think she would've understood the concept even more quickly if you had a definition? I think people underestimate the value of showing people examples as a way of communicating a concept (and overestimate the value of definitions).

0Lavender
Well, I know I won't be around a computer 24/7, and I'd like something to explain it if I'm out and about. Although I suppose I could use a couple of examples that I can just memorize, like straw man arguments and ad hominem.
1pragmatist
It's a bad concept, at least the way it's traditionally used in introductory philosophy classes. It encourages people to believe that certain patterns of argument are always wrong, even though there are many cases in which those patterns do constitute good (non-deductive) arguments. Instructors will often try to account for these cases by carving out exceptions ("argument from authority is OK if the authority is actually a recognized expert on the topic at hand"), but if you have to carve out loads of exceptions in order to get a concept to make sense, chances are you're using a crappy concept. Ultimately, I can't find any unifying thread to "logical fallacy" other than "commonly seen bad argument", but even that isn't a very good definition because there are many commonly seen bad arguments that aren't usually considered logical fallacies (the base rate fallacy, for instance). Also, by coming up with cute names to label entire patterns of argument, and by failing to carve out enough exceptions, most expositions of "logical fallacy" end up labeling many good arguments as fallacious. So I guess my advice would be to stop using the concept altogether, rather than trying to explicate it. If you encounter a particular instance of a "logical fallacy" that you think is a bad argument, explain why that particular argument doesn't work, rather than just saying "that's an argumentum ad populum" or something like that.
1fubarobfusco
A logical fallacy is an argument that doesn't hold together. All of its assumptions might be true, but the conclusion doesn't actually follow from them. "Fallacy" is used to mean a few different things, though. Formal fallacies happen when you try to prove something with a logical argument, but the structure of your argument is broken. For instance, "All weasels are furry; Spot is furry; therefore Spot is a weasel." Any argument of this "shape" will have the same problem — regardless of whether it's about weasels, politics, or Java programming. Informal fallacies happen when you try to convince people of your conclusion through arguments that are irrelevant. A lot of informal fallacies try to argue that a statement is true because of something else — like its popularity, or the purported opinion of a smart person; or that its opponents are villains.
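To make the "shape" point concrete, here is the contrast in standard predicate-logic notation (the weasel predicates are just placeholders for the example):

$$\forall x\,(\mathrm{Weasel}(x) \to \mathrm{Furry}(x)),\ \mathrm{Weasel}(\mathrm{Spot}) \;\vdash\; \mathrm{Furry}(\mathrm{Spot}) \quad \text{(valid)}$$

$$\forall x\,(\mathrm{Weasel}(x) \to \mathrm{Furry}(x)),\ \mathrm{Furry}(\mathrm{Spot}) \;\nvdash\; \mathrm{Weasel}(\mathrm{Spot}) \quad \text{(the broken shape)}$$

The second pattern fails because its premises can both be true while the conclusion is false: a world in which Spot is a furry cat satisfies both premises and refutes the conclusion.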
1Jayson_Virissimo
To a "regular person", I might say something like "a logical fallacy is a form of reasoning that seems good to many humans, but actually isn't very good".
0IlyaShpitser
I don't think this is so simple to explain, because to really understand logical fallacies you need to understand what a proof is. Not a lot of people understand what a proof is.
2NancyLebovitz
On the other hand, I think people can acquire a pretty good ability to recognize fallacies without a formal understanding of what a good proof is.
4IlyaShpitser
I just feel there is a difference between a "fallacy enthusiast" (someone who knows lists of logical fallacies, can spot them, etc.) and a "mathematician" (who realizes a 'logical fallacy' is just 'not a tautology'), in terms of being able to "regenerate the understanding." This is similar to how you can try to explain to lawyers how they should update their beliefs in particular cases as new evidence comes to light, but to really get them to understand, you have to show them a general method: http://en.wikipedia.org/wiki/Wigmore_chart (Yes, belief propagation was more or less invented in 1913 by a lawyer.)
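For the simplest version of that general method, Bayes' theorem in odds form shows how each piece of evidence should move a belief (the numbers below are made up for illustration):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

For example, with prior odds of 1:3 on a hypothesis, and a piece of evidence ten times as likely if the hypothesis is true as if it is false, the posterior odds are 10:3, i.e. a probability of about 0.77. Each further (independent) piece of evidence multiplies the odds by its own likelihood ratio.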
0pragmatist
Could you explain why it is necessary to understand what a proof is in order to understand logical fallacies? Most commonly mentioned fallacies are informal. I'm not seeing how understanding the notion of proof is necessary (or even relevant) for understanding informal fallacies.

Can anyone recommend a good replacement for flagfic.com? It was a site that could download stories from various archives (fanfiction.net, fimfiction.net, etc.), transform them into various e-reader formats, and email them to you. I used it to email fanfics I wanted to read directly to my Kindle as .mobi files.

0pengvado
fanficdownloader. I haven't tried the webapp version of it, but I'm happy with the CLI.
0ArisKatsaris
Many thanks for the suggestion! I've started trying it out, and though it doesn't seem to work perfectly for fimfiction.net (half the .mobi files I create from fanfics there get rejected for some reason when I email them to my Kindle), it so far seems to work fine with fanfiction.net at least. An excuse for me to learn Python so I can fix whatever it's doing wrong. :-) EDIT: On second thought, fimfiction.net allows me to get HTML downloads of the stories, which I can then email to the Kindle anyway -- so as long as fanficdownloader works with fanfiction.net, I'm all set. :-) Thanks again.

Repost as there were no answers:

Has anyone here done Foundation Training? How good is the evidence supporting it?

0NancyLebovitz
Corrected URL: Foundation Training. I tried the video at the URL, and it seemed a lot more like straining (a little pun on the mistaken URL), but that might not be a fair test. The basic idea of working on hip mobility seems sound, but I recommend Scott Sonnon's Ageless Mobility and IntuFlow, and the Five Tibetan Rites -- sorry for the cheesy name on the latter, but they're a cross between yoga and calisthenics with a lot of emphasis on getting backwards/forwards pelvic mobility.
0A1987dM
(I think there's a typo in the URL.)
0Metus
You are right. Fixed.