
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: gwern 26 July 2016 01:26:12AM 20 points [-]

I've written an essay criticizing the claim that computational complexity means a Singularity is impossible because of bad asymptotics: http://www.gwern.net/Complexity%20vs%20AI

Comment author: ike 06 August 2016 10:51:28PM *  16 points [-]

I got https://www.reddit.com/r/Futurology/comments/4vhqoc/should_we_wipe_mosquitoes_off_the_face_of_the/ to the front page of Reddit, which probably got somewhere on the order of 50,000 people to read it, or at least think about the idea, which can only help in terms of moving it into the Overton Window.

I know at one point it was number 6 for logged out users.

Comment author: Clarity 27 July 2016 10:01:27PM *  16 points [-]

Thanks for helping me out in some tough times, LessWrong crew. Please keep supporting one another and being positive rather than negative.

In response to Inefficient Games
Comment author: Gram_Stone 23 August 2016 07:15:56PM *  13 points [-]

It's nice to see that someone else has thought about this.

It's a popular rationalist pastime to try coming up with munchkin solutions to social dilemmas. A friend posed one such munchkin solution to me, and I thought he had an unrealistic idea of why regulations work, so I said to him:

Even though it's what you really want, I don't think the fact that you know everyone else will cooperate is the interesting thing per se about regulations, but that this is a consequence of the fact that you have decreased what was once the temptation payoff and thus constructed a different game. You have functionally reduced the expected payoff of the option "Don't pay taxes," by law. If you don't pay taxes, then you get fined or jailed. Now all players are playing a game where the Nash equilibrium is also Pareto optimal: Pay taxes or be fined or jailed. Clearly, one should pay taxes.

Now, ironically, this is good news if we want to cause better outcomes with less or no coercion, because it suggests that it is not coercion in itself that does the good work, but the fact that we have changed the payoffs to construct a different game; we can interpret coercion as just one instantiation of the general process by which 'inefficient games' become 'efficient games'. Coercion is perhaps a simple way to do the thing that all possible solutions to this problem seem to have in common, but there may be others that we can assume to syntactically change the payoffs in the way that coercion does, but which we may semantically interpret as something other than coercion.
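The payoff-change argument in the quote can be made concrete in a few lines. A minimal sketch (all payoff numbers are invented for illustration): a two-player Prisoner's-Dilemma-style "pay taxes" game, where a legal fine subtracted from the defecting player's payoffs flips the Nash equilibrium:

```python
from itertools import product

# payoffs[(a, b)] = (payoff to player 1, payoff to player 2)
# Actions: "pay" (cooperate) or "dodge" (defect), in the usual
# Temptation > Reward > Punishment > Sucker ordering.
def payoffs(fine=0):
    base = {
        ("pay", "pay"): (3, 3),
        ("pay", "dodge"): (0, 5),
        ("dodge", "pay"): (5, 0),
        ("dodge", "dodge"): (1, 1),
    }
    # The fine lowers the payoff of every outcome in which a player dodges,
    # which is the "functionally reduced the temptation payoff" move above.
    return {
        (a, b): (p1 - (fine if a == "dodge" else 0),
                 p2 - (fine if b == "dodge" else 0))
        for (a, b), (p1, p2) in base.items()
    }

def nash_equilibria(game):
    """Pure-strategy Nash equilibria: no player gains by deviating unilaterally."""
    actions = ["pay", "dodge"]
    eqs = []
    for a, b in product(actions, actions):
        p1, p2 = game[(a, b)]
        if (all(p1 >= game[(x, b)][0] for x in actions)
                and all(p2 >= game[(a, y)][1] for y in actions)):
            eqs.append((a, b))
    return eqs

print(nash_equilibria(payoffs(fine=0)))  # [('dodge', 'dodge')]
print(nash_equilibria(payoffs(fine=3)))  # [('pay', 'pay')]
```

With no fine, mutual defection is the unique pure-strategy equilibrium; a large enough fine makes mutual cooperation the equilibrium, i.e. the players are now in a different game.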

A different time, a friend noticed that people building up trust seemed qualitatively similar to a Prisoner's Dilemma but couldn't see exactly how. I was like, "Have you heard of Stag Hunt? That's the whole reason Rousseau came up with it!" PD is just one kind of coordination game.

More generally, isn't it weird that the central objects of study in game theory, despite all of the formalization that has taken place since the beginning of the field, are remembered in the form of anecdotes?! You learn about the Stag Hunt and the Prisoner's Dilemma and Chicken and all other sorts of game, but there doesn't really seem to be any systematic notion of how different games are connected, or if any games are 'closer' to others in some sense (as our intuitions might suggest).

Meditations on Moloch was pretty but in the audience I coughed the words 'mechanism design'. It just seems like pointing out the mainstream academic work makes you boring when you're commenting on something poetic. You also might like Robinson and Goforth's Topology of the 2x2 Games. The math isn't that complex and it provides more insight than a barrage of anecdotes. Note that to my knowledge this is not taught in traditional game theory courses but probably should be one day. They refer to this general class of games as the 'social dilemmas', if I recall correctly.

Comment author: Manfred 21 July 2016 09:48:54PM 12 points [-]

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, in which I go over a paper from this summer that did essentially what I did a few months earlier in my "Space and Time Part II" calculations of our point in star and planet order (which showed we are not early, and are right around when you would expect to find the average biosphere), but extended it to types of stars and their lifetimes in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: ChristianKl 03 October 2016 08:42:52PM 11 points [-]
Comment author: WhySpace 27 September 2016 02:06:25AM *  10 points [-]

Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

Comment author: helldalgo 12 September 2016 06:09:38PM 11 points [-]

I occasionally just forget that I can change things about my environment. If my clothes are uncomfortable, I can change. If there are annoying sounds, I can wear earplugs.

Comment author: Elo 04 August 2016 11:20:15AM -2 points [-]

http://www.abc.net.au/triplej/programs/hack/hack-thursday/7674406

I had the opportunity to be on national radio talking about cryonics - my segment starts at 17:30. And the media will always cut and paste what you say. They did enjoy my tagline, "you only live twice", and gave it an overall positive spin.

Article also here: http://www.abc.net.au/triplej/programs/hack/cheating-death-with-cryonics/7662164

Comment author: James_Miller 10 October 2016 01:59:55PM 10 points [-]

Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.

Comment author: DataPacRat 19 September 2016 06:35:24PM 10 points [-]

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Comment author: ike 07 August 2016 02:45:33AM *  10 points [-]

I just submitted it and was lucky. It's the kind of thing that sub likes. I've had around three posts hit the front page out of probably thousands since I started; there's definitely a large luck factor involved.

We're consequentialists here, so I get all the credit for it even if it wasn't much effort, right?

Comment author: MrMind 25 July 2016 07:22:08AM 9 points [-]

The fact that Adam committed a crime is not unfalsifiable, it's simply unfalsified. There's just not enough probability weight for her to change her mind; she even admitted that, given strong enough evidence, she would change her mind.
Eve is being rational in retaining her current prior in the absence of evidence: it's not that she is assigning 0 to the probability of Adam being the killer, it's just that in the face of uncertainty there's no reason to update.
On the other hand I don't see how you could do this to uphold the belief in God: absence of evidence is evidence of absence.

Comment author: ChristianKl 02 October 2016 04:37:21PM *  8 points [-]

The article misses the point. It doesn't talk about the significance of the story.

A better headline might be "The Chinese government decided that it's in their interest to be public about data fabrication by Chinese scientists."

Given that this comes right after the Chinese government decides that it makes sense to reduce red meat consumption in China, it's a sign of progress and good Chinese leadership.

In response to Linkposts now live!
Comment author: WhySpace 28 September 2016 05:06:25PM 9 points [-]

Awesome! This strikes me as a very good thing, especially with your suggested social norms. I have 3 additional suggestions, though:

  1. Add a social norm where commenters make short summaries, or quote a couple of sentences of new info, without the fluff. The title of the link serves much the same purpose, and gives readers enough info to decide whether or not to click through. This is standard practice on the more intellectual subreddits, since readers there already have the background context and knowledge that 90% of the article is spent explaining.

  2. Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious. I intend to quote or rephrase gems when I find them, and link to them in comments here.

  3. Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Comment author: gwern 19 September 2016 02:40:47AM 9 points [-]

My reading of the behavioral genetics literature is that high intelligence being driven by rare autism variants is looking unlikely. DeFries-Fulker extremes analyses like "Thinking positively: The genetics of high intelligence", Shakeshaft et al 2015 aren't consistent with the (relatively) high end being due to rare variants (but are consistent with the low end being due to rare variants) and current attempts to find rare variants enriched in the very high IQ with large effect sizes have turned up nothing: "A genome-wide analysis of putative functional and exonic variation associated with extremely high intelligence", Spain et al 2015. There is also an autism heritability observed in the GCTAs/LD score regression using only common SNPs (>=1% population frequency), along with a positive autism/intelligence genetic correlation, which undermines that idea.

My speculation at this point is that Spearman's law of diminishing returns - based on all the genetic correlations with intelligence which have piled up, and the current trend in brain imaging studies of finding brain volume/thickness & global connectivity & white-matter integrity & connection speed to be the best predictors of intelligence - is due to intelligence reflecting a bottleneck in communication between all the regions of the brain as they solve problems: as global communication becomes closer to optimal due to better health & development, individual specialized brain regions start to become the bottleneck to higher performance, shrinking the g factor.

Comment author: WalterL 11 August 2016 08:07:53PM 8 points [-]

headdesk

Has this ever worked for you? Seriously? Even once?

The part two sentences later, where they ask you why you want to shoot them and you explain that they aren't smart enough to understand what you mean, must be super persuasive.

You want to get someone to sign up for cryo? Tell them it is cheap and Beyonce is doing it. Tell them Trump will try to take away their right to get the good kind of cryo. Tell them the peace of mind from the policy will help them lose weight. Tell them you will pay them five hundred bucks in cash when you see the bracelet. Tell them anything but what you proposed.

Comment author: Lumifer 05 August 2016 02:29:55PM 7 points [-]

I'm not sure of the point of all this. You're taking a well-defined statistical concept of independence and renaming it 'fairness' which is a very flexible and politically-charged word.

If there is no actual relationship between S and Y, you have no problem and a properly fit classifier will ignore S since it does not provide any useful information. If the relationship between S and Y actually exists, are you going to define fairness as closing your eyes to this information?
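The first case (no actual relationship between S and Y) can be checked numerically. A minimal sketch, with a made-up data-generating process, showing that when S is independent of Y the conditional outcome rates coincide and S carries no usable signal for a classifier:

```python
import random

random.seed(1)
n = 50000
data = []
for _ in range(n):
    s = random.random() < 0.5       # sensitive attribute S, independent of Y
    x = random.gauss(0, 1)          # legitimate feature
    y = x + random.gauss(0, 1) > 0  # outcome Y depends only on x
    data.append((s, x, y))

def positive_rate(group):
    """Empirical P(Y=1) within the subgroup selected by `group(s)`."""
    sub = [y for s, x, y in data if group(s)]
    return sum(sub) / len(sub)

# With no real S->Y relationship, the two conditional rates nearly coincide,
# so a properly fit classifier has nothing to gain from S.
print(round(positive_rate(lambda s: s), 3),
      round(positive_rate(lambda s: not s), 3))
```

The interesting policy question is the second case, where the S-Y relationship is real and "fairness" becomes a choice about which information to discard.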

Comment author: gjm 11 October 2016 03:10:30PM -1 points [-]

100%? Well, your future charitable donations will be markedly curtailed after you starve to death.

Comment author: username2 05 October 2016 06:16:23PM *  8 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.

None of this is important of course because, as is usual for politics, the whole mess degenerates into cheerleading for your team and condemning the other team, and sensitive analysis of the actual evidence would be giving aid and comfort to the hated enemy.

Comment author: Daniel_Burfoot 14 September 2016 01:07:44PM *  8 points [-]

Note that Adams is using persuasion tactics in the interview itself. The most obvious trick is that he describes himself as being objective (he doesn't care about Trump vs Clinton) and altruistic (because he's wealthy and older), making it more likely for us to believe the other things he has to say.

My guess is that Adams is hoping that Trump wins the election, because he will then write a book about persuasion and how Trump's persuasion skills helped him win. He already has a lot of this material on his blog. In that scenario he can capitalize on his correct prediction, which seemed radical at the time, to generate a lot of publicity for the book. Persuasion is a topic of perennial interest, and Adams is a skilled expositor. So there's a good chance that a Trump win will mean a multi-million dollar payoff for Adams.

I actually like Adams and think he's a smart guy, but I doubt he's much more altruistic and objective than everyone else. ;-)

Comment author: Gram_Stone 05 September 2016 10:59:44PM *  8 points [-]

Do you think it would be easier for you to untangle your earphones if the cord had a color gradient? Maybe something like the jack end is violet and the bud end is red, and it goes through the entire visible spectrum along the length of the cord. It could also be something like a grayscale gradient. I imagine it would make it easier for me to simulate untangling strategies. Surprisingly, there are basically no earphones that look like this, even merely for aesthetic reasons.

Also, I'm already aware of ways to prevent tangling.

Comment author: gjm 02 September 2016 05:57:24PM -1 points [-]

I agree with the other commenters about this.

  • Some of what he says is correct: the map is not the territory, having a good model of the universe does not guarantee having any kind of privileged access to The Universe As It Really Is Deep Down, etc.
  • But "rationalism" or "rationality" in, say, the sense commonly used on LW does not in fact mean denying any of that.
  • The video is really long and (at least in the first 25 minutes or so) has awfully little content.
  • The guy in the video comes across (to me at least) as smugly superior even while uttering a succession of trivialities, which doesn't do much to encourage me to watch more.

So I thought "maybe it gets more interesting later on" and skipped to 50:00. At which point he isn't bothering to make any arguments, merely preening over how he understands the world so much more deeply than rationalists, who will come and bother him with their "arguments" and "contradictions" and he can just see that they "haven't got any awareness" and trying to engage with them would be like trying to teach calculus to a dog, and that the mechanisms used to brainwash suicide bombers and fundamentalists are "the exact same mechanism that very intelligent scientists use to prove their theories of space and time and whatever else". OK, then.

Since I obviously wasn't enlightened enough for minute 50 of this thing, I went back to 40:00. He says it's important to connect with your emotions and not deny they're there (OK), and then he says that "rational people just assume that, well, we don't need any of that emotional stuff". OK, then. (And rational people like scientists get emotional when they argue with highly irrational people because they're attached to their rational models of the world and don't want to hear anything contrary to those models because of cognitive dissonance; they close their eyes and ears to the arational because they demonize it as irrational.)

OK, clearly still too advanced for me. Back to 30:00. Apparently, if your "awareness" is low then you think thinking is great (OK...), you think thinking is all there is (huh?), you think thinking is a powerful tool for understanding reality (OK...), but as you gain in "awareness" you realise that thinking is a system of symbols, and "this gulf between the map and the territory just grows wider and wider and wider, until you see that the map is just a complete fiction, a complete illusion", and once you realise this you see "the gross limitations of thinking". Einstein's theory of gravity isn't revealing anything deep about the world, it's just a set of sounds and symbols on paper. "That's what it literally is, except your awareness is too low to actually see that". And then he pulls an interesting move where he complains about people with "low" "awareness" getting "sucked into the content" of a theory because they don't see the "larger context". You might think he's now going to explain what the larger context is and how it should affect our understanding of relativity. Ha, ha. What a silly idea. Only someone with low awareness would expect that. What he actually does is to tell us how when rationalists criticize him they're doing it "on the level of thoughts" while he is "on the level of awareness, which is a much higher level". Bleh.

Oh, wait, he has something resembling an actual point somewhere around 35:00. Rationalists give too much credit to logic, he says, because logic "has no teeth", because it depends on its premises and the premises are doing the real work, and if your premises are dodgy then so are your conclusions, and "most of them are very very wrong". Cool, he's going to tell us what wrong premises we have. ... Oh, no, silly me, he isn't. He just says they're very wrong but gives no specifics.

So far as I can see, he alternates between three main things.

  • Saying things that are true but elementary and not in fact denied by rationalists. For some of these, he actually gives some kind of justification.
  • Saying that rationalists are wrong in various ways (giving too much weight to X, having wrong premises, ...). In every instance of this I heard (though I have not listened to the whole dreary thing) either the claim is flatly wrong, or he offers no sort of support for it, or both.
  • Saying smugly how much more "aware" he is than rationalists are, and how this puts him on a higher level than them.

If there's anything actually useful there, I missed it. And now I've listened to enough of this without any sign that he has anything useful to teach me, and I'm going to go and do something else. My apologies for not sitting through all 82 minutes of it.

Comment author: Viliam 02 September 2016 03:33:24PM *  8 points [-]

I tried listening to the video on the 1.5× speed. Even so, the density of ideas is horribly low. It's something like:

Science is successful, but that makes scientists overconfident. By 'rationalists' I mean people who believe they already understand everything.

Those fools don't understand that "what they understand" is just a tiny fraction of the universe. Also, they don't realize that the universe is not rational; for example the animals are not rational. Existence itself has nothing to do with rationality or logic. Rationalists believe that the universe is rational, but that's just their projection. Rationality is an emergent property. Existence doesn't need logic, but logic needs existence, therefore existence is primary.

You can't use logic to prove whether the sun is shining or not; you have to look out of the window. You can invent an explanation for empirical facts, but there are hundreds of other equally valid explanations.

That was the first 16 minutes, then I became too bored to continue.

My opinion?

Well, of course if you define a "rationalist" as a strawman, you can easily prove the strawman is foolish. You don't need more than one hour to convince me about that. No one in this community is trying to derive whether the sun is shining from the first principles.

I am not sure whether "universe is rational" is supposed to mean that (a) the universe has a relatively short description which could be understood by a mind, or that (b) the universe itself is a mind, specifically a rational one. Seems like the meaning was switched in the middle of an argument, using a sleight of hand.

In summary, my impression is of muddled thinking, and of feeling superior to the imaginary opponents. Actually, maybe the opponents are not imaginary -- there are many fools of various kinds out there -- it just has nothing to do with the kind of "rationality" that we use here, such as described e.g. by Stanovich.

Comment author: niceguyanon 10 August 2016 03:39:27PM *  7 points [-]

I thought that to most LW'ers the weak version of "Calories in, Calories out" was uncontroversial. One can accept that Calories in (the mouth) is not the whole story, and at the same time feel it's pretty much most of the story.

Comment author: Cariyaga 08 August 2016 10:29:16AM 8 points [-]

Man, I had no idea how much effort it takes to actually write, or what a sense of scale there is to five or ten thousand words. I've been working on a fanfic recently and have just breached a thousand words on the first chapter. It takes a LOT of effort to write that much, especially in trying to keep it up to my own standards. Mad respect for authors that put out 10k a week. I've always preferred longer chapters, but damn if trying to write myself doesn't put things in perspective.

Comment author: ChristianKl 08 August 2016 08:34:24AM 7 points [-]

If a human could eat significantly more calories for the same amount of work and not put on weight we would be prodding them in a lab for breaking the laws of physics on conservation of mass and conservation of energy.

That's false. It doesn't break any law of physics. In practice it also doesn't seem to be true.

I have a friend who repeated Dave Asprey's experiment and added 1000 kcal per day via butter. They knew enough about their diet to know that this meant they consumed more calories, and they didn't put on any weight over the month of the experiment.

As far as "resting" metabolism goes, there are also very different resting states. Friday I was lying on my back in bed from 20:00 to 23:00, deeply relaxing. At the end, the space under the right side of my body was wet, from the leg up to the shoulder. The left side of my face was also wet, with a bit of sweat running down, but not enough to wet the blanket under that side. I can't tell you to what extent the warmth was created by brown fat or by muscles, but I was certainly not exercising, just deeply relaxing and releasing tension.

Comment author: entirelyuseless 08 August 2016 04:48:56AM *  6 points [-]

People have had this argument many times on Less Wrong and elsewhere, and you are the one who is wrong here. Calories vs physical exercise is not a physical law. Of course you will only lose as much carbon as you can join to the oxygen that you breathe out of your mouth. But there is no physical law that says there has to be any particular proportion between that and the measurable exercise you perform externally, and in practice there is no fixed proportion -- people have different proportions, just as it seems to them.

(The fact that you bring up conservation of mass and conservation of energy suggests the absurd idea that you lose weight by converting mass directly into energy -- if that was the way you lose weight, you could eat once and live a few years off that, or more.)

Comment author: James_Miller 06 August 2016 09:49:46PM *  8 points [-]

So another reason to exterminate "wild" mosquitoes is that otherwise they are a convenient vector for bio-terrorism.

Comment author: MrMind 19 October 2016 03:40:28PM *  -1 points [-]

I have this weird fanfiction where LessWrong is a monastery/school of magic that was abandoned by its creator a long time ago but is still operating, and that has sometimes been attacked by a disgruntled student who was expelled but somehow learned necromancy and returned with an army of meat-puppets.
Now I'll have to incorporate that, due to some random magic accident, the monastery disappeared, but not the rooms inside it.

Comment author: WalterL 17 October 2016 07:44:21PM 7 points [-]

I'd suggest you prioritize your personal security. Once you have an income that doesn't take up much of your time, a place to live, a stable social circle, etc...then you can think about devoting your spare resources to causes.

The reason I'd make this suggestion is that personal liberty allows you to A/B test your decisions. If you set up a stable state and then experiment, and it turns out badly, you can just chuck the whole setup. If you throw yourself into a cause without setting things up for yourself and it doesn't work out the fallout can be considerable.

Comment author: DanArmak 13 October 2016 11:19:20PM 6 points [-]

Joi Ito said several things that are unpleasant but are probably believed by most people, and so I am glad for the reminder.

JOI ITO: This may upset some of my students at MIT, but one of my concerns is that it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings. A lot of them feel that if they could just make that science-fiction, generalized AI, we wouldn’t have to worry about all the messy stuff like politics and society. They think machines will just figure it all out for us.

Yes, you would expect non-white, older, women who are less comfortable talking to computers to be better suited dealing with AI friendliness! Their life experience of structural oppression helps them formally encode morals!

ITO: [Temple Grandin] says that Mozart and Einstein and Tesla would all be considered autistic if they were alive today. [...] Even though you probably wouldn’t want Einstein as your kid, saying “OK, I just want a normal kid” is not gonna lead to maximum societal benefit.

I should probably get a good daily reminder that most people would not, in fact, want their kid to be as smart, impactful and successful in life as Einstein, and would prefer "normal", not-too-much-above-average kids.

Comment author: Lightwave 12 October 2016 04:48:07PM 5 points [-]
Comment author: skeptical_lurker 10 October 2016 06:26:46PM 7 points [-]

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google are developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.

Comment author: gjm 06 October 2016 06:17:54PM -1 points [-]

20 years ago the very first crude neural nets were just getting started

The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.

In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was 25 years ago.

What's about 20 years old is "deep learning", which really just means neural networks of a kind that were generally too expensive to run in the past and that have become practical as a result of advances in hardware. (That's not quite fair: there's also been plenty of progress in the design and training of these NNs, as a result of having hardware fast enough for them to be worth experimenting with.)

In response to Linkposts now live!
Comment author: Houshalter 28 September 2016 04:24:57PM 7 points [-]

This is really awesome and could change the fate of lesswrong. I really think this will bring people back (at least more than any other easy to implement change.) I personally expect to spend more time here now, at least.

One thing to take note of is that lesswrong, by default, sorts by /new. As the volume of posts increases, it may be necessary to change the default sort to /hot or /top/?t=week. Especially if you want it to be presentable to newcomers or even old timers coming back to the site, you want them to see the best links first.

Comment author: Clarity 28 September 2016 03:20:27AM 7 points [-]
Comment author: CellBioGuy 25 September 2016 06:18:03AM *  7 points [-]

Astrobiology bloggery got interrupted by a SEVERE bout of a sleep disorder, developing systems to measure metabolic states of single yeast cells in order to freaking graduate soonish, and having a bit of a life for a while.

Astrobiology bloggery resumes within 1 week, with my blog moved from thegreatatuin.blogspot.com to thegreatatuin.wordpress.com, blogger being completely unusable when it comes to inserting graphs and the like. Dear gods I'm excited, the last year has seen a massive explosion in origin of life research and study of certain outer solar system bodies. To the point that I'm pretty sure the metabolism of the last universal common ancestor has been figured out and the origin of the ribosome (and therefore protein-coding genetics) as well.

Advice on running personal wordpress account welcomed.

Comment author: Manfred 21 September 2016 03:16:32AM *  7 points [-]

This is only true for simple systems - with more complications you can indeed sometimes deduce causal structure!

Suppose you have three variables: Utopamine concentration, smiling, and reported happiness. And further suppose that there is an independent noise source for each of these variables - causal nodes that we put in as a catch-all for fluctuations and external forcings that are hard to model.

If Utopamine is the root cause of both smiling and reported happiness, then the variation in happiness will be independent of the variation in smiling, conditional on the variation in Utopamine. But conditional on the variation in smiling, the variation in Utopamine and reported happiness will still be correlated!

The AI can now narrow down the causal structure to 2 candidate graphs, and perhaps it can even figure out the right one if there's some time lag in the response and it assumes that causation goes forward in time.
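Manfred's test is easy to demonstrate numerically. A minimal sketch (all variable names and coefficients are invented for illustration): generate data from the "Utopamine is the root cause" structure, then check which conditional correlations vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical common-cause structure: Utopamine drives both observables,
# and each variable gets its own independent noise source.
utopamine = rng.normal(size=n)
smiling = utopamine + 0.5 * rng.normal(size=n)
happiness = utopamine + 0.5 * rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after linearly regressing z out of each."""
    beta_xz = np.cov(x, z)[0, 1] / np.var(z)
    beta_yz = np.cov(y, z)[0, 1] / np.var(z)
    return np.corrcoef(x - beta_xz * z, y - beta_yz * z)[0, 1]

# Conditioning on the common cause makes the effects independent (~0)...
print(partial_corr(smiling, happiness, utopamine))
# ...but conditioning on one effect leaves the cause and the other
# effect clearly correlated.
print(partial_corr(utopamine, happiness, smiling))
```

The asymmetry between the two partial correlations is exactly what lets an observer rule out "smiling causes both" while leaving the remaining candidate structures to be separated by time-lag assumptions.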

Comment author: gwern 19 September 2016 06:20:19PM *  7 points [-]

von Neumann was noted as being social and extraverted long before he began his lobbying and politicking, and was never described as a second Dirac, so I don't think he was simply acting out of expediency. If high intelligence enabled faking extraversion & social skills, which are useful in almost all contexts*, we would see a noted personality correlation with intelligence and increasing with intelligence, which we don't - extraversion is largely independent of IQ, it's Openness in the Big Five which correlates. High-functioning autistic people are also not noted for easily acquiring psychopath-level skills in imitating & manipulating without feeling.

* see for example the correlation of increasing extraversion with increasing lifetime income in the Terman semi-high IQ sample

Comment author: buybuydandavis 19 September 2016 03:51:50AM 6 points [-]

If your lunatic sensor didn't go off reading this, you should get it adjusted.

A funny comment at LW.

Even lunatics can be right.

Gwern said

The assumption here is that both the general population and elite professions are described by a normal distribution (N(100,15) and N(125,6.5), respectively)

Is it? I didn't see that assumption stated. Problem is, they didn't explicitly specify where they got their distributions. At least I don't see it.

Looking again at some of their conclusions in the preceding paragraph, it does look like they're assuming gaussians based on the mean and SD of a small sample, then projecting that out to the tails. Clearly malpractice.

They don't come out and say it, but the "This means that" below shows that they are extrapolating to the tails.

This means that 95% of people in intellectually elite professions have IQs between 112 and 138, and 99.98% have IQs between 99 and 151.

Funny that an article talking about how hard it is to be smart can be so dumb.

Still, my question remains - is there real data out there to support the contention that P(elite career|IQ) has a local max and then decreases for higher IQ?
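For what it's worth, the quoted intervals are exactly the ±2σ and ±4σ bands of the assumed N(125, 6.5) distribution, which is easy to verify with nothing but the normal CDF:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Standard normal CDF, shifted and scaled."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# The elite-profession distribution the article assumes.
mu, sigma = 125, 6.5

# 125 +/- 2*6.5 = [112, 138]; 125 +/- 4*6.5 = [99, 151]
for lo, hi in [(112, 138), (99, 151)]:
    frac = normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
    print(f"{lo}-{hi}: {frac:.2%}")
```

The ±2σ band covers about 95.45% and the ±4σ band about 99.99% (slightly more than the article's 99.98%), which supports the reading above: these figures are pure extrapolation from an assumed gaussian, not data about the tails.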

Comment author: Brillyant 14 September 2016 02:15:56PM *  7 points [-]

I've casually followed his predictions about Trump. It's so silly.

  • Far Away From Election when Trump is the #1 news story every day — Trump will win because he is using advanced persuasion techniques!
  • Post Khan Debacle once Trump is getting beaten badly in polls and looks unelectable—Hillary has now started using advanced persuasion techniques! That is why she has come back! (Hedging...)
  • Polls Tighten — Trump is back to using advanced persuasion techniques!

Um. No. Sounds like phlogiston to me.

At any rate, Adams is the winner here. I'd never heard of the guy who wrote Dilbert. Now I've probably visited his blog 50 times in the last year and clicked around a bit. I'm sure he's sold plenty of books because of his predictions on the election.

Prediction: After the election, Adams will say this whole "Trump will win" gambit was just a meta advanced persuasion project he'd been running on all of his readers. The fact that people believed Trump had special powers was proof that Adams is a Master Persuader. (Unless Trump actually wins... then Adams will say "See! I knew it all along!") Win. Win. For Adams.

Comment author: Anders_H 07 September 2016 11:07:05PM 7 points [-]

Today, I uploaded a sequence of three working papers to my website at https://andershuitfeldt.net/working-papers/

This is an ambitious project that aims to change fundamental things about how epidemiologists and statisticians think about choice of effect measure, effect modification and external validity. A link to an earlier version of this manuscript was posted to Less Wrong half a year ago, the manuscript has since been split into three parts and improved significantly. This work was also presented in poster form at EA Global last month.

I want to give a heads up before you follow the link above: Compared to most methodology papers, the mathematics in these manuscripts is definitely unsophisticated, almost trivial. I do however believe that the arguments support the conclusions, and that those conclusions have important implications for applied statistics and epidemiology.

I would very much appreciate any feedback. I invoke "Crocker's Rules" (see http://sl4.org/crocker.html) for all communication regarding these papers. Briefly, this means that I ask you, as a favor, to please communicate any disagreement as bluntly and directly as possible, without regards to social conventions or to how such directness may affect my personal state of mind.

I have made a standing offer to give a bottle of Johnnie Walker Blue Label to anyone who finds a flaw in the argument that invalidates the paper, and a bottle of 10-year old Single Scotch Malt to anyone who finds a significant but fixable error; or makes a suggestion that substantially improves the manuscript.

If you prefer giving anonymous feedback, this can be done through the link http://www.admonymous.com/effectmeasurepaper .

Comment author: James_Miller 01 September 2016 03:04:27PM 7 points [-]

I interviewed Zoltan Istvan (Transhumanist party Presidential candidate), Greg Cochran (expert on genetics and intelligence), and Phil Torres (founder of the XrisksInstitute) on my future strategist podcast. The Cochran interview has 4,266 listens. I had my best podcast moment when I observed my 11-year-old son texting his best friend saying that his dad interviewed a presidential candidate.

Comment author: WhySpace 30 August 2016 02:42:20PM *  7 points [-]

I think there's also a near/far thing going on. I can't find it now, but somewhere in the rationalist diaspora someone discussed a study showing that people will donate more to help a smaller number of injured birds. That's one reason why charity adds focus on 1 person or family's story, rather than faceless statistics.

Combining this with what you pointed out, maybe a fun place to take the discussion would be to suggest that we start with a specific one of our friends. "Exactly. Let's start with Bob. Alice next, then you. I'll volunteer to go last. After all, I wouldn't want you guys to have to suffer through the loss of all your friends, one by one. No need to thank me, it is its own reward."

EDIT: I was thinking of scope insensitivity, but couldn't remember the name. It's not just a LW concept, but also an empirically studied bias with a Wikipedia page and everything.

However, I mis-remembered it above. It's true that I could cherry-pick numbers and say that donations went down with scope in one case, but I'm guessing that's probably not statistically significant. People are probably willing to donate a little more, not less, to have an impact a hundred times as large. Perhaps there are effects from misleading vividness at a small scale, as I imply. However, on a large scale, the slope is likely positive, even if just barely.

Comment author: WhySpace 30 August 2016 05:01:24AM *  7 points [-]

Here's the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.

Frequently, when discussing the great filter, or averting nuclear war, someone will bring up the notion that it would be a good thing. Humanity has such a bad track record with environmental responsibility or human rights abuses toward less advanced civilizations, that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I've even seen some countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.

Obviously these aren't exactly careful, step by step arguments, where if I refute some point they'll reverse their decision and decide we should spread humanity to the stars. It's a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be "ok sure, but what about [lists a thousand other things that are wrong with the world]". It's like fighting fog, because it's not their true objection, at least not quite. It's not like either of us feels like we're on opposite sides of a debate or anything though, so usually pointing out a few simple facts is enough to get a concession that there are exceptions to the rule "humanity sucks". However, obviously refuting all thousand things, one by one, isn't a sound strategy. There really is a lot of bad stuff that humanity has done, and will continue to do I'm sure.

Usually, I try to point at broad improving trends like infant mortality, war, extreme poverty, etc. I'll argue that the media biases our fears by magnifying all the problems that remain. I paint a rosy future of people fighting debtors' prisons in the past, debating universal healthcare today, and in the future arguing fiercely over whether money and work are needed at all in their post-scarcity Star Trek economy. Political rights for minorities yesterday, social justice today, argue over any minor inconveniences tomorrow. Starvation yesterday, healthy food for all today, gourmet delicacies free next to drinking fountains tomorrow. I figure they're more likely to accept a future where we never stop arguing, but do so over progressively more petty things, and never realize we're in a utopia.

However, I think I might have better luck trying to counter-counter signal. "Yeah, humanity is pretty messed up, but why do you want to put us out of our misery? Shouldn't we be made to suffer through climate change and everything else we've brought on ourselves, instead of getting off easy? Imagine another thousand years of inane cubical work and a dozen more Trump presidencies. Maybe we'll learn our lesson." [Obviously, I'm joking here.]

I think this might have the advantage of aligning their cynicism with their more charitable impulses, at least the way my conversations tend to go. And there's no impulse to counter-counter-counter-signal, because I've gone up a meta-level and made the counter-signaling game explicit, which releases all the fun available from being contrarian, and moves the conversation toward new sources of amusement. I'll bet we could then proceed to have interesting discussions on how to solve the world's problems. If whoever I'm musing with comes up with a few ideas of their own, maybe they'll even take ownership of the ideas, and start to actually care about saving the world in their own way. I can dream, I suppose.

Comment author: Dagon 25 August 2016 08:45:00PM -1 points [-]

I was around back in the day, and can confirm that this is nonsense. NRX evolved separately. There was a period when it was of interest and explored by a number of LW contributors, but I don't think any of the thought leaders of either group were significantly influential on the other.

There is some philosophical overlap in terms of truth-seeking and attempted distinction between universal truths and current social equilibria, but neither one caused nor grew from the other.

Comment author: Fluttershy 23 August 2016 08:09:20AM *  6 points [-]

Several months ago, Ozy wrote a wonderful post on weaponized kindness over at Thing of Things. The principal benefit of weaponized kindness is that you can have more pleasant and useful conversations with would-be adversaries by acknowledging correct points they make, and actively listening to them. The technique sounds like exactly the sort of thing I'd expect Dale Carnegie to write about in How to Win Friends and Influence People.

I think, though, that there's another benefit to both weaponized kindness, and more general extreme kindness. To generalize from my own experience, it seems that people's responses to even single episodes of extreme kindness can tell you a lot about how you'll get along with them, if you're the type of person who enjoys being extremely kind. Specifically, people who reciprocate extreme kindness tend to get along well with people who give extreme kindness, as do people who socially or emotionally acknowledge that an act of kindness has been done, even without reciprocating. On the other hoof, the sort of people who have a habit of using extreme kindness don't tend to get along with the (say) half of the population consisting of people who are most likely to ignore or discredit extreme kindness.

In some sense, this is fairly obvious. The most surprising-for-me thing about using the reaction-to-extreme-kindness heuristic for predicting who I'll be good friends with, though, is how incredibly strong and accurate the heuristic is for me. It seems like 5 of the 6 individuals I feel closest to are all in the top ~1% of people I've met at being good at giving and receiving extreme kindness.

(Partial caveat: this heuristic doesn't work as well when another party strongly wants something from you, e.g. in some types of unhealthy dating contexts).

In response to Identity map
Comment author: gjm 17 August 2016 11:46:54PM -1 points [-]

Add me to the list of people here who think trying to get super-precise about what "identity" means is like trying to get super-precise about (e.g.) where the outer edge of an atom is. Assuming nothing awful happens during the night, me-tomorrow is closely related to me-today in lots of ways I care about; summarizing those by saying that there's a single unitary me that me-today is and me-tomorrow also is is a useful approximation, but no more; for many practical purposes this approximation is good enough, and we have built lots of mental and social structures around the fiction that it's more than an approximation, but if we start having to deal seriously with splitting and merging and the like, or if we are thinking hard about anthropic questions, what we need to do is to lose our dependence on the approximation rather than trying to find the One True Approximation that will make everything make sense, because there isn't one.

Comment author: Lumifer 12 August 2016 02:53:41PM 7 points [-]

I think this article suffers from aggregating all science into one big bin. In reality, different disciplines have a radically different level of problems with replicability and fraud. Classical hard sciences like physics and chemistry don't have much of a problem. Very soft sciences like psychology or anthropology have a huge problem.

Comment author: Viliam 09 August 2016 09:34:35PM *  7 points [-]

I have heard repeatedly the argument about "calories in, calories out" (e.g. here). Seems to me that there are a few unspoken assumptions, and I would like to ask how true they are in reality. Here are the assumptions:

a) all calories in the food you put in your mouth are digested;

b) the digested calories are either stored as fat or spent as work; there is nothing else that could happen with them;

and in some more strawmanish forms of the argument:

c) the calories are the whole story about nutrition and metabolism, and all calories are fungible.

If we assume these things to be true, it seems like a law of physics that if you count the calories in the food you put in your mouth, and subtract the amount of exercise you do, the result exactly determines whether you gain or lose fat. Taken literally, if a healthy and thin person starts eating an extra apple a day, or starts taking a somewhat shorter walk to their work, without changing anything else, they will inevitably get fat. On the other hand, any fat person can become thin if they just start eating less and/or exercising more. If you doubt this, you doubt the very laws of physics.

It's easy to see how (c) is wrong: there are other important facts about food besides calories, for example vitamins and minerals. When a person has food containing less than optimal amount of vitamins or minerals per calorie, they don't have a choice between being fat or thin, but between being fat or sick. (Or alternatively, changing the composition of their diet, not just the amount.)

Okay, some proponents of "calories in, calories out" may now say that this is obvious, and that they obviously meant the advice to apply to a healthy diet. However, what if the problem is not with the diet per se, but with a way the individual body processes the food? For example, what if the food contains enough vitamins and minerals per calorie, but the body somehow extracts those vitamins and minerals inefficiently, so it reacts even to the optimal diet as if it was junk food? Could it be that some people are forced to eat large amounts of food just to extract the right amount of vitamins and minerals, and any attempt to eat less will lead to symptoms of malnutrition?

Ignoring (c), we get a weaker variant of "calories in, calories out", which is, approximately -- maybe you cannot always get thin by eating fewer calories than you spend working; but if you eat more calories than you spend working, you will inevitably get fat.

But is it possible that some of the "calories in (the mouth)" pass through the digestive system undigested and are later excreted? Could people differ in this aspect, perhaps because of their gut flora?

Also, what if some people burn the stored fat in ways we would not intuitively recognize as work? For example, what if some people simply dress less warmly, and spend more calories heating up their bodies? Are there other such non-work ways of spending calories?

In other words, I don't doubt that the "calories in, calories out" model works perfectly for a spherical cow in a vacuum, but I am curious how well such an approximation applies to real cases.

But even for the spherical cow in a vacuum, this model predicts that any constant lifestyle, unless perfectly balanced, should either lead to unlimited weight gain (if "calories in" exceed "calories out") or unlimited weight loss (in the opposite case). Yet reality seems to suggest that most people, both thin and fat, keep their weight stable around some specific value. The weight itself has an impact on how many calories people spend simply moving their own bodies, but I doubt that this is sufficient to balance the whole equation.
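The feedback mentioned in the last paragraph (heavier bodies spend more calories just existing and moving) is actually enough, on its own, to stabilize weight in a toy model. Every constant below is a made-up illustration, not established physiology:

```python
# Toy energy-balance model: daily expenditure grows with body weight,
# so a fixed intake drives weight toward an equilibrium instead of
# diverging without bound. All constants are illustrative assumptions.
KCAL_PER_KG_FAT = 7700   # rough energy density of stored body fat
BASE_KCAL = 500          # weight-independent daily expenditure (assumed)
KCAL_PER_KG = 22         # daily expenditure per kg of body weight (assumed)

def simulate(intake_kcal, start_kg, days):
    w = start_kg
    for _ in range(days):
        burned = BASE_KCAL + KCAL_PER_KG * w
        w += (intake_kcal - burned) / KCAL_PER_KG_FAT
    return w

# Equilibrium: intake = BASE_KCAL + KCAL_PER_KG * w,
# i.e. w* = (intake - BASE_KCAL) / KCAL_PER_KG = 77.3 kg here.
print(round(simulate(2200, 60, 5000), 1))  # -> 77.3 (converges from below)
print(round(simulate(2200, 90, 5000), 1))  # -> 77.3 (converges from above)
```

So even the spherical cow has a set point once expenditure depends on weight; an extra apple a day shifts the equilibrium by a couple of kilograms rather than producing unlimited gain.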

Comment author: Wei_Dai 22 July 2016 03:49:22PM 7 points [-]

That's funny. :) But these people actually sound remarkably sane. See here and here for example.

Comment author: WalterL 19 October 2016 09:21:03PM 6 points [-]

My life places me in a position to observe an uncommon number of people repenting and trying to change. As you might expect, humans being what we are, few accomplish their goal.

A fact that I've observed is that NONE of those who other themselves and blame the shard get it done. If someone says "I've got a terrible temper", he will still hit. If he says "I hit my girlfriend", he might stop. If someone says "I have shitty executive function", he will still be late. If he says "I broke my promise", he might change.

So, when you say "I have an addiction", I'm a bit concerned. A LW truism is that we don't have brains, we are brains. We aren't ghosts manning machines, we are machines.

I think it is some old "devil made me do it" stuff. The "other me" isn't real, so energy spent fighting him is wasted. Effort spent changing my behavior might bear fruit.

I'm reading a lot into phrasing, so if this isn't you, my bad. Just...my advice... be sure to own your stuff man. You either "have an addiction", or "screwed some randos without protection", and my experience suggests that thinking of it as the second one will help you more.

Comment author: username2 10 October 2016 09:23:33AM 6 points [-]

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist ? I just read "Amy Schumer offers you a look into your soul" and I really liked it but I don't have enough time to read all posts on the blog.

Comment author: ChristianKl 07 October 2016 03:33:10PM 6 points [-]

Because the IRS isn't popular and it's not a good move for a politician to speak in favor of the IRS and advocate increase of IRS funding.

Comment author: Lumifer 06 October 2016 05:07:42PM *  5 points [-]

What is the best source for this in your view?

The raw data is plentiful -- look at any standardized test scores (e.g. SAT) by race. For a full-blown argument in favor see e.g. this (I can't check the link at the moment, it might be that you need to go to the Wayback Machine to access it). For a more, um, mainstream discussion see Charles Murray's The Bell Curve. Wikipedia has more links you could pursue.

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

My view is that history is important and that outcomes are path-dependent. Slavery and segregation are crucial parts of the history of American blacks.

open to learning

Your social circles might have a strong reaction to you coming to anything other than the approved conclusions...

Comment author: ChristianKl 05 October 2016 09:00:42PM 6 points [-]

Our biosphere's junk DNA

Junk DNA generally doesn't survive that long in evolutionary timescales because there's nothing that prevents mutations. It seems a bad information storage system.

Comment author: DanArmak 02 October 2016 07:38:28AM *  6 points [-]

I can't figure out how to edit the post description to include a summary paragraph. Help?

... Now the actual link is gone and I can't edit it back in! It's supposed to point here. Mods/admins, can you help? Here is a screenshot of what I see.

Comment author: CellBioGuy 01 October 2016 11:41:19PM *  6 points [-]

Worth noting:

Possibly indicating that the end of the last glaciation rather than new invention drove the more or less simultaneous large-scale agricultural transitions that occurred all across the old and new world ~10k years ago.

Comment author: CellBioGuy 01 October 2016 11:32:37PM *  6 points [-]

My favorite crazy unlikely idea about that is that the Paleocene-Eocene Thermal Maximum 50 megayears ago - a 200k year pulse of high CO2 levels and temperatures in which the CO2 was added over a timescale of less than 10k years (potentially much less) and had an isotopic composition consistent with having been liberated from biogenic deposits - could theoretically be explained by all the coal and oil deposits of Antarctica being burned followed by some positive feedbacks kicking in.

(Most land of Antarctica never having been investigated geologically in any detail at all due to being under kilometers of ice) (And Antarctica at that time being completely unglaciated and relatively temperate despite being where it is now by then) (And subsequent glaciation having scraped most of the surface clean of anything that was on it at the time)

We have an advantage in that we evolved in the tropics - you can take a tropical animal and keep it warm near the poles by wrapping it in clothes. It's much more difficult to take a cold-adapted polar animal and keep it alive in the tropics...

Comment author: hg00 28 September 2016 01:43:41AM *  6 points [-]

My understanding is that a USA programmer would start at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

A pessimistic starting salary for a competent US computer programmer is $60K and senior ones can clear $200K. $100K is a typical starting salary for a computer science student who just graduated from a top university (also the median nationwide salary).

In the US market, foreigners come work as computer programmers by getting H1B visas. The stereotypical H1B visa programmer is from India, speaks mostly intelligible English with a heavy accent, gets hired by a company that wants to save money by replacing their expensive American programmers, and exists under the thumb of their employer (if they lose their job, their visa is jeopardized). I think that the average H1B makes less money than the average American coder. It sounds to me like you'd be a significantly more attractive hire than a typical H1B--you're fluent in English, and you've made contributions to Scheme?

The cost of living in the US is much higher than the Philippines. Raising a family in Silicon Valley is notoriously expensive. Especially if you want your kids to go to a "good school" where they won't be bullied. I don't know what metro has the best job availability/cost of living/school quality tradeoff. It will probably be one of the cities that's referred to as a "startup hub", perhaps Seattle or Austin. If your wife is willing to homeschool, you don't have to worry about school quality.

You can dip your toes in Option 1 without taking a big risk. Just start applying to US software companies. They'll interview you via Skype at first, and if you seem good, the best companies will be willing to pay for your flight to the US to meet the team. To save time you probably want to line up several US interviews for a single visit so you can cut down on the number of flights. Here are some characteristics to look for in companies to apply to:

  • The company has a process in place for hiring foreigners.

  • The company is looking for developers with your skill set.

  • The company's developer team is "clued in". Contributing to Scheme is going to be a big positive signal to the right employer. You can do things like read the company engineering blog, use BuiltWith, look up the employees on LinkedIn to figure out if the company seems clued in. Almost all companies funded by Y Combinator are clued in. If your interviewer's response to seeing Scheme on your resume is "What is Scheme?", then you're interviewing at the wrong company and you'll be offered a higher salary elsewhere.

  • The company is profitable but not sexy. For example, selling software to small enterprises. (You probably don't want to work for a business that sells software to large enterprises, as these firms are generally not "clued in". See above.) Getting a job at a sexy consumer product company like Google or Facebook is difficult because those are the companies that everyone is applying to. You can interview at those companies for fun, as the last places you look at. And you don't want to apply for a startup that's not yet profitable because then you're risking your wife and kids on an unproven business. I'm not going to tell you how to find these companies--if you use the same methods everyone else uses to find companies to apply to, you'll be applying to the same places everyone else is.

Of course you'll be sending out lots of resumes because you don't have connections. Maybe experiment with writing an email cover letter very much like the post you wrote here, including the word "fucking". I've participated in hiring software developers before, and my experience is that attempts at formal cover letters inevitably come across as stuffy and inauthentic. Catch the interviewer's interest with an interesting email subject line+first few sentences and tell a good story.

Actually you might have some connections--consider reaching out to companies that are affiliated with the rationalist community, posting to the Scheme mailing list if that's considered an acceptable thing to do, etc.

Consider donating some $ to MIRI if my advice ends up proving useful.

Comment author: Elo 27 September 2016 02:36:16AM -2 points [-]

yes

Comment author: username2 22 September 2016 12:20:52AM *  6 points [-]

Have you ever taken Adderall? I greatly suspect you have not.

People who fight chronic akrasia because of various degrees of ADHD and related mental disorders have a different response to stimulants than "normal" individuals. For me, Adderall puts me into cool, calm, clear focus. The kind of productive mode of being that most people get into by drinking a cup of coffee (except coffee makes me jittery and unfocused). Being on Adderall is just... "normal." Indeed the first time I tried it I thought the dose was too low because I didn't feel a thing... until 8 hours later when I realized I was still cranking away good code and able to focus instead of my normal bouts of mid-day akrasia. I could probably count on my hands the number of times I had a full day of highly focused work without feeling stress or burn-out afterwards... now it's the new normal :)

For such people low-dose amphetamines don't provide any high, nor are they accompanied by some sort of berserker productivity binge like popular media displays. In the correct dosages they also don't seem to come with any addiction or withdrawal -- I go off of it without any problems, other than reverting to the normal, vicious cycles of distraction and akrasia. (This isn't just anecdotal data -- the incidence rate of Adderall addiction among those following the prescribed plan is lost in the background noise of people who are abusing in these trials.)

Honestly, see a psychiatrist that specializes in these things and talk to them about your inability to focus, your history of trouble in completing complex, long tasks, how this is affecting your career and personal growth goals, etc. Be honest about your shortcomings, and chances are they will work with you to find a treatment plan that truly helps you. You're not manipulating anybody.

Seriously, ADHD is a real mental disorder. Your first step should be to recognize it as such, and accept the fact that you might actually have a real medical condition that needs treatment. You're not manipulating the system, you're exactly the kind of person the system is trying to help! Prescription drugs are for more than just people who hear voices...

Comment author: WhySpace 22 September 2016 12:10:47AM 6 points [-]

Truth is not what you want it to be;

it is what it is,

and you must bend to its power or live a lie.

- Miyamoto Musashi

Comment author: Douglas_Knight 19 September 2016 06:34:47PM *  6 points [-]

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this would mainly be noise, if he correctly reported the results, which he doesn't.

Second, there are some careful studies of high IQ (SMPY etc) by taking the well designed SAT test, which doesn't have a very high ceiling for adults and giving it to children below the age of 13. By giving the test to representative samples, they can well characterize the threshold for the top 3%. Using self-selected samples, they think that they can characterize up to 1/10,000. In any event, within the 3% they find increasing SAT score predicts increasing probability of accomplishments of all kinds, in direct contradiction of these claims.

Comment author: James_Miller 19 September 2016 04:30:35AM 6 points [-]

My reading of the behavioral genetics literature is that high intelligence being driven by rare autism variants is looking unlikely.

I haven't looked at this literature, but people with autism and very high IQs might be able to fake being neurotypical. As Steve Hsu told me, we don't know if von Neumann had a normal personality because he certainly had the intelligence to fake being normal if he felt this suited his interests.

Comment author: Dagon 19 September 2016 02:17:43AM 4 points [-]

Yikes. If your lunatic sensor didn't go off reading this, you should get it adjusted.

From a theoretical standpoint, democratic meritocracies should evolve five IQ defined 'castes', The Leaders, The Advisors, The Followers, The Clueless and The Excluded.

If that doesn't bother you, notice that this guy is putting a lot of weight on really simplistic statistics about the edge cases (the half-percent or less of the population which is very smart and/or successful in one of his preferred "intellectually elite professions"). Oh, I see Gwern actually addressed this in a comment.

Basically, this is a lovely irony of a presumed-high-IQ author jumping to a pretty ridiculous conclusion because he's not willing/able to try to dissolve his questions and do the hard work to be rigorous in his research.

Comment author: gjm 12 September 2016 12:46:28PM -1 points [-]

Whether it's a good thing, and how good, depends on what happens to the people who had those jobs.

Imagine that there are a thousand people doing a horrible job for $20k/year. Now a machine of negligible cost comes along that can do the same job as one of them for $1k/year. Then, oversimplifying in some obvious ways, we have the following:

Option 1: Machines replace humans, workers take the benefits. Each of these people switches from doing the job for $20k/year to doing nothing for $19k/year. They're better off, their employer is exactly the same as before. This is better for some people and not worse for anyone. Of course it will never happen.

Option 2: Machines replace humans, owners take the benefits. Each of these people switches from doing the job for $20k/year to doing nothing for $0k/year. Maybe they can find other jobs, maybe not. Their employer is just $19k/year per worker better off. This is better for some people (the owners of the business) but much worse for the employees, at least in the short term. It is quite likely to happen.

(In this case, what probably happens next -- at least if there is competition -- is that the company lowers its prices somewhat. So now the business owners win and their customers win. In the long run these lower prices may lead to new jobs.)

Option 3: Something in between. A union negotiates a special deal, or the government steps in in the hope of reducing unemployment and disaffection, or something. Exactly what happens will vary but it's probably better for workers than 2 and worse than 1, better for owners than 1 and worse than 2, and probably a non-negligible fraction of the benefits get eaten up by administrative costs.

In the long run, all of these are probably better than leaving things as they are. In the short run -- say, a few decades -- option 2 (which is the most likely of the three, I think) means a thousand people out of work, and quite a lot of them may be unable to find other jobs. This may well be a bigger loss of net utility than the business owners' gain in wealth.

If machines end up taking everyone's jobs, that could be glorious (if it leads to lives of comfortable leisure for all) or terrible (if it leads to lives of comfortable leisure for people who are already wealthy enough not to need to work, and starvation for everyone else).

So: yes, a lot of jobs are pretty terrible and an optimal world without those jobs is much better than an optimal world with them. But we don't have the option of either sort of optimal world, we only get worlds designed by Moloch, and the Moloch-world without those jobs may be even worse than the Moloch-world with them.
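The stylized arithmetic behind gjm's three options can be sketched in a few lines of Python. This is only a toy illustration using the numbers from the comment; the `worker_share` parameter is invented here as a way of interpolating between the options (1.0 = workers keep all the savings, 0.0 = owners keep all of it):

```python
# Stylized payoffs for gjm's scenario: 1000 workers, $20k/year wage,
# and a machine of $1k/year that replaces one worker.
WORKERS = 1000
WAGE = 20_000
MACHINE_COST = 1_000
SAVINGS_PER_WORKER = WAGE - MACHINE_COST  # $19k/year freed up per worker

def payoffs(worker_share):
    """Split the per-worker savings between ex-workers and owners.

    worker_share = 1.0 is Option 1 (workers keep everything),
    worker_share = 0.0 is Option 2 (owners keep everything),
    anything in between is a version of Option 3.
    """
    worker_income = SAVINGS_PER_WORKER * worker_share
    owner_gain = SAVINGS_PER_WORKER * (1 - worker_share)
    return worker_income, owner_gain

for label, share in [("Option 1", 1.0), ("Option 2", 0.0), ("Option 3", 0.5)]:
    w, o = payoffs(share)
    print(f"{label}: each ex-worker gets ${w:,.0f}/yr, owner gains ${o:,.0f}/yr per worker")
```

Note that total surplus is the same $19k/year per worker in every case; the options differ only in who captures it (ignoring the administrative costs and price effects mentioned above).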

Comment author: Houshalter 09 September 2016 10:54:22PM 4 points [-]

No, not really. There is plausible reasoning to believe simulations will someday exist in our future (or, if we are in a simulation, our past). I don't think there is much reason to believe in a creator otherwise, and certainly not the very specific ones that major religions believe in.

Comment author: gjm 08 September 2016 05:17:55PM -1 points [-]

There seems to be quite some denial

I don't think so. What I see is people pointing out that the video is attacking straw men. (Extra-specially strawy, as regards LW in particular; but very strawy even if applied more broadly to people who explicitly aim to be rational.)

I never said that

Some of it is things the video said, and you've said you agree with it. I don't think there's anything in my (admittedly not especially generous) paraphrase that doesn't closely match things said in the video.

So you do agree with the video

Nope. I agree with some of what the video says. You know the old joke about the book review? "This book was both original and good. Unfortunately the parts that were original were not good, and the parts that were good were not original." In the same way, the video seems to me to combine (1) stating things that I think would be obvious to almost everyone here, (2) making less-obvious claims without any sort of justification, which in many cases I think are entirely false, and (3) gloating about how the maker is so much more advanced than those poor deluded rationalists.

Comment author: gjm 08 September 2016 12:04:29PM -1 points [-]

It's also up to the rationalist to consider opening up to the possibility everything they think is true, is wrong.

Gosh, if only someone associated with LW rationalism had ever thought of that.

Seriously, what you've done here is to come to a group of people whose foundational ideas include "the map is not the territory", "human brains are fallible and you need to pay attention to how your thoughts work", and "you should never be literally 100% sure of anything" and say "Hey, losers! Rationality is overrated because you confuse the map with the territory, you aren't aware of your own thoughts and don't distinguish them from reality, and you're 100% confident you're right and therefore can't change your minds!".

Comment author: Elo 05 September 2016 11:05:39PM -2 points [-]

How do you think your first failure at this will come about, before retrying?

Comment author: sdr 05 September 2016 02:35:48PM 6 points [-]

Elo,

You seem to be posting, like, a lot. This is good, this is what we have personal blogs for.

I do have an issue with you syndicating your content straight to here, regardless of state, amount of research, amount of prior discussion with other people, confidence, or epistemic status. This imposes an asymmetric opportunity cost on the LessWrong community: writing these posts takes much less effort than the amount of attention they will collectively soak up, for no gain.

For this reason, I have downvoted this post as is. I will also kindly ask you to introduce a pre-syndication filter that respects other people's limited time and attention, and to cross-post only the pieces where you have (1) a coherent thesis and (2) validated interest from other people (as in, someone explicitly remarked "that's interesting").

Thanks.

Comment author: D_Malik 03 September 2016 06:20:40PM 0 points [-]

Don't let them tell us stories. Don't let them say of the man sentenced to death "He is going to pay his debt to society," but: "They are going to cut off his head." It looks like nothing. But it does make a little difference.

-- Camus

Comment author: D_Malik 03 September 2016 06:13:05PM *  1 point [-]

"From the fireside house, President Reagan suddenly said to me, 'What would you do if the United States were suddenly attacked by someone from outer space? Would you help us?'

"I said, 'No doubt about it.'"

"He said, 'We too.'"

Comment author: Elo 02 September 2016 07:23:12AM -2 points [-]

Oh god. This is really bad.

Someone should tell him about the straw vulcan.

We (LWers) are too tied to the word "Rationality"; that should happen less. If you feel personally affected by the idea that someone says this part of your identity is wrong, then maybe it's time to be more fox and less hedgehog.

https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox

Comment author: WhySpace 30 August 2016 03:38:25AM *  6 points [-]

LWers who liked this may also like: http://sci-hub.bz/

About: https://en.wikipedia.org/wiki/Sci-Hub

Basically, if you search for something and they don't have it, there's a huge network of scientists with access to pay-walled journals, and one of them will add a PDF. They've grown larger than any of the journal subscription companies, and have the world's largest collection of scientific papers.

Comment author: morganism 29 August 2016 09:58:01PM 6 points [-]

Academic Torrents site, for large scale database transfers

http://academictorrents.com/

Comment author: James_Miller 22 August 2016 07:32:21PM 6 points [-]

Excellent. My personal theory is that the universe is fine-tuned both for life and for the Fermi paradox with a late great filter. Across the multiverse, most lifeforms such as us will exist in such universes, in part because without a great filter intelligent life would quickly turn into something outside our reference class, use all the resources of its universe, and so make that universe inhospitable to life in our reference class.
