Open thread, September 8-14, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
I can't count the number of times I didn't do something that would have been beneficial because my social circle thought it would be weird or stupid. Just shows how important it is to choose the people around you carefully.
Someone -- maybe on LW? -- said that their strategy was to choose their friends carefully enough that they didn't have to resist peer pressure.
That has other dangers -- e.g. living in an echo chamber or facing the peer pressure to not change.
Yes, you have to be very careful. (And live in a place where the number of such people is large enough that it's even viable as a strategy, and ignore/isolate yourself from the wider culture or still maintain resistance to it, and so on, which makes it inaccessible to a large number of people, but it seems close to ideal in the rare circumstances where it's possible.)
I don't know if "careful" is the right word -- it's more an issue of finding a good balance and the optimal point isn't necessarily obvious. On the one hand, you should like your friends and not have them annoy you or push you in the directions you don't want to go. On the other hand, being surrounded by the best clones of yourself that you could find doesn't sound too appealing.
It's a bit like an ecosystem -- you want a healthy amount of diversity and not monoculture, but at the same time want to avoid what will poison you or maybe just eat you X-)
Paul Graham wrote about that in A Student's Guide To Startups:
I've always been a huge non-conformist, caring relatively little what others think. I now believe that I went too far and my advice to my younger self would be to try and fit in more.
You have a couple of graduate degrees and are a professor at a liberal arts college in the Northeast... People I would describe as "huge non-conformists" would probably be tailed by campus security if they ever showed up in the area X-D
See this.
Oh, I know you're a conservative in academia and had tenure troubles because of that. But that makes you a conservative in very liberal environment, not a non-conformist.
Of course you can call yourself anything you want to and the label is sufficiently fuzzy and could be defined in many ways. Still, from my perspective you're now a part of the establishment -- Smith did grant you tenure, even if screaming and kicking.
I am not passing judgement on you, it just surprised me that what you mean by a "huge non-conformist" is clearly very different from what I mean by a "huge non-conformist".
It's also stuff like this: I don't like sports, music, fashion, or small talk, and in high school and college I made zero effort to pay attention to them, and it cost me socially. I realize now I should have at least pretended to like them to have had a better social life.
That makes you a fully-conforming geek, as you undoubtedly know. Welcome to the club :-)
I figured out when I was about 15 years old that I had to keep up with things I didn't care about to earn points socially. It helped me a great deal and powers what I do as a writer and talk show presenter.
Such as?
In a great example of serendipity, talking to myself is a case in point. I was observed doing it and people thought it was weird, so I stopped doing it.
When I was younger, some adults told me that "you only understand something when you can teach it to someone", which people in my circle disputed, as they were the kind of people who like to think of themselves as smart.
I didn't go to a couple of parties to socialise because people there were drinking copious amounts of alcohol, and there was a stigma against getting drunk and stupid. While not drinking certainly was a good idea, not socialising was not.
As a child I was extremely interested in everything scientific. Then in school none of the cooler kids were and neither were the friends I actually had, so I started playing video games. Thankfully I later found people interested in scholarship so I started doing that again.
(I am starting to realise most of these are from when I was in school. Might be because I matured, or because distance gives me more perspective.)
Not that peer pressure can't have good effects, it is a tool like any other.
Though that certainly has happened to me as well, it strikes me that the opposite has happened more often: I've done things which turned out to be beneficial, and avoided doing things that would have been bad, because of the opinions of my social circles.
Lots of the time, things that are seen as weird and stupid by the majority actually are weird and stupid.
My go-to catchphrase when I notice this sort of situation is (spoken sarcastically):
"Why be happy when you can be normal?"
If people were a great deal better at coordination, would they refuse to use news sources which are primarily supported by advertising?
I don't think "refusing" news sources is helpful. Even a bad newspaper gives some perspective on some topics that you won't find elsewhere.
The whole idea of "news sources" is problematic. It assumes a certain 20th-century model of learning about the world. If you want to get really informed about a topic, it's often necessary to read primary sources. I don't get scientific news from mainstream media; I either read the papers, discussions on LW, or blogs by scientists.
When I see a claim that I find interesting and don't know whether it's true, I head over to skeptic.stackexchange and open a question. The website is no newspaper, but it also serves the purpose of staying in touch with world events.
Advertising is just one bias among many. If I watch a news video on German public television that's paid for by taxpayer money, then a German public television network pays a production company for that video. Some of those production companies also produce PR for paying customers.
A lot of newspaper articles these days are written by freelance journalists who aren't paid very well and can be hired for other tasks. So even if the newspaper didn't make its money by serving corporate interests, the individual journalist might still serve corporate interests.
Wikipedia illustrates that we are actually quite good at coordination. Much better than anyone would have expected 20 years ago. It just doesn't look like what we would have expected. Cultural development isn't just more of the same.
But reading it takes time that one could spend on something else.
If you make a utility calculation, then the prime concern is whether it makes sense to learn about a topic in the first place. If you do decide to inform yourself about a topic, then you have to choose among the sources that are available. If you really care about an issue, it often makes sense to read multiple perspectives.
It's quite easy to read government-funded Al Jazeera, a commercial newspaper by a publicly traded company that makes money via advertising, and network-driven community websites like Stackexchange or Wikipedia.
In a pluralistic society all those sources of information can exist beside each other. If you don't like corporatist news sources, there are a lot of alternatives these days.
That sounds like a good way to end up with more paywalls.
There would definitely be more paywalls. The question is whether it would be a net loss.
Would the quality of information be better? Advertising gets paid for one way or another-- would no-advertising news (possibly even no-advertising media in general) be a net financial loss for consumers?
Look at the history of cable TV. When it appeared it was also promoted as "no advertising, better shows".
I would argue for the existence of a treadmill effect on these things.
Although this may not have been true at the beginning, it arguably did grow to meet that standard. Cable TV is still fairly young in the grand scheme of things, though, so I would say there isn't enough information yet to conclude whether a TV paywall improved content overall.
Also, it's important to remember that TV relies on the data-weak and fairly inaccurate Nielsen ratings in order to understand its demographics and what they like (and it's even weaker and more inaccurate for pay cable). This leads to generally conservative decisions regarding programming. The internet, on the other hand, is filled with as much data as you wish to pull out regarding the people who use your site, on both a broad and granular level. This allows freedom to make more extreme changes of direction, because there's a feeling that the risk is lower. So the two groups really aren't on the same playing field, and their motivations for improving/shifting content potentially come from different directions.
If people were a great deal better at coordination I suspect advertising wouldn't exist at all.
Can someone point me to estimates given by Luke Muehlhauser and others as to MIRI's chances for success in its quest to ensure FAI? I recall some values (of course these were subjective probability estimates with large error bars) in some lesswrong.com post.
You can see some discussion on "How does MIRI know it has a medium probability of success?"
Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path? Assuming that the speed of light really is the maximum, our interstellar radio messages would outpace any paperclip maximizer. Obviously any such message would complicate future alien contact events as the aliens would worry that our ambassador was just an agent for a paperclipper. The act of warning others would be a good way to self-signal the dangers of AI.
I'd have thought any extraterrestrial civilization capable of doing something useful with the information wouldn't need the explicit warning.
This depends on the solution to the Fermi paradox. An advanced civilization might have decided to not build defenses against a paperclip maximizer because it figured no other civilization would be stupid/evil enough to attempt AI without a mathematical proof that its AI would be friendly. A civilization near our level of development might use the information to accelerate its AI program. If a paperclip maximizer beats everything else an advanced civilization might respond to the warning by moving away from us as fast as possible taking advantage of the expansion of the universe to hopefully get in a different Hubble volume from us.
One response to such a warning would be to build a super-intelligent AI that expands at the speed of light gobbling up everything in its path first.
And when the two (or more) collide, it would make a nice SF story :-)
This wouldn't be a horrible outcome, because the two civilizations' light cones would never fully intersect. Neither civilization would fully destroy the other.
Are you crazy! Think of all the potential paperclips that wouldn't come into being!!
The light cones might not fully intersect, but humans do not expand at close to the speed of light. It's enough to be able to destroy the populated planets.
I love this idea! A few thoughts:
What could the alien civilizations do? Suppose SETI decoded "Hi from the Andromeda Galaxy! BTW, nanobots might consume your planet in 23 years, so consider fleeing for your lives." Is there anything humans could do?
The costs might be high. Suppose our message saves an alien civilization one thousand light-years away, but delays a positive singularity by three days. By the time our colonizers reach the alien planet, the opportunity cost would be a three-light-day deep shell of a thousand light-year sphere. Most of the volume of a sphere is close to the surface, so this cost is enormous. Giving the aliens an escape ark when we colonize their planet would be quintillions of times less expensive. Of course, a paperclipper would do no such thing.
It may be presumptuous to warn about AI. Perhaps the correct message to say is something like "If you think of a clever experiment to measure dark energy density, don't do it."
It depends on your stage of development. You might build a defense, flee at close to the speed of light and take advantage of the universe's expansion to get into a separate Hubble volume from mankind, accelerate your AI program, or prepare for the possibility of annihilation.
Good point, and the resources we put into signaling could instead be used to research friendly AI.
The warning should be honest and give our best estimates.
Quite.
The outer three days of a 1000 Ly sphere account for 0.0025% of its volume.
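For reference, that shell fraction can be checked in a couple of lines (assuming, as above, a 1000-light-year radius and a shell three light-days thick):

```python
# Volume fraction of a sphere's outer shell: 1 - ((R - d) / R)**3
R = 1000.0          # sphere radius, in light-years
d = 3 / 365.25      # shell thickness: three light-days, converted to light-years

fraction = 1 - ((R - d) / R) ** 3
print(f"{fraction:.4%}")  # roughly 0.0025%
```

So the opportunity cost is real but tiny relative to the whole sphere, which is the point of the correction.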
I was recently heartened to hear a very good discussion of effective altruism on BBC Radio 4's statistics programme, More or Less, in response to the "Ice Bucket Challenge". They speak to Neil Bowerman of the Centre for Effective Altruism and Elie Hassenfeld from GiveWell.
They even briefly raise the possibility that large drives of charitable donations to ineffective causes could be net negative as it's possible that people have a roughly fixed charity budget, which such drives would deplete. They admit there's not much hard evidence for such a claim, but to even hear such an unsentimental, rational view raised in the mainstream media is very bracing.
Available here: http://www.bbc.co.uk/podcasts/series/moreorless (click the link to "WS To Ice Or Not To Ice"), or directly here: http://downloads.bbc.co.uk/podcasts/radio4/moreorless/moreorless_20140908-1200a.mp3
Would there be any interest in an iPhone app for LessWrong? I was thinking it might be a fun side project for learning Swift, and I didn't see any search results on the App Store.
I bet some folks would love you forever if you gave them reply notification
What do you see it doing that the web site doesn't?
Imagining:
Welcoming other features that would draw users, too. I have to wonder if there are open source Reddit clients I could adapt, given the forked codebase...
I expect that forking a reddit client is the way to go for UI (if you don't have any in mind, I think AlienBlue and Reddit is Fun are probably worth looking into for this).
For the backend, reddit exposes itself through json, which LW doesn't seem to; e.g. http://www.reddit.com/user/philh/.json works, but http://lesswrong.com/user/philh/.json (and http://lesswrong.com/user/philh/overview/.json ) don't. I expect clients to mostly use this, so you'll need to rewrite those portions of the code.
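For a sense of what a client consumes from those endpoints: a reddit listing is a JSON object whose `data.children` array holds typed "things". A minimal parser might look like this (the payload below is a trimmed, invented sample, not real API output):

```python
import json

# Trimmed, invented sample of a reddit-style user listing: a "Listing"
# wrapping "things", each with a "kind" tag and a "data" dict.
sample = json.loads("""
{
  "kind": "Listing",
  "data": {
    "children": [
      {"kind": "t1", "data": {"author": "philh", "body": "a comment", "score": 5}},
      {"kind": "t1", "data": {"author": "philh", "body": "another", "score": 2}}
    ]
  }
}
""")

def extract_comments(listing):
    """Pull (author, body, score) out of each comment-type child."""
    return [(c["data"]["author"], c["data"]["body"], c["data"]["score"])
            for c in listing["data"]["children"]
            if c["kind"] == "t1"]  # "t1" is reddit's type prefix for comments

print(extract_comments(sample))
```

A client that scrapes LW's HTML instead would need to reconstruct exactly this kind of structure by hand, which is why the missing `.json` endpoint matters.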
Turns out AlienBlue did release their original version as open source, but the code is four years out of date! Hmmm.
Yeah, I would probably end up scraping the HTML. I filed a bug about .json being broken two years ago, but even if it were fixed, it seems that LW has quite a few customizations that the JSON output likely has not caught up to...
I think a predictionbook app, or an app version of the credence game, would be more useful than an app for LessWrong.
There already is one for Android.
I wasn't aware of the Android app.
On the other hand, its existence doesn't mean that a new attempt at the same problem is worthless. I think it's very valuable to have multiple people try to solve the problem.
To me it seems like a much more interesting project than having another go at writing an app to parse an online forum. There are few people thinking in depth about designing apps to teach people to be calibrated.
The fact that you have a smartphone also enables additional calibration questions, such as:
Did John or Joe send you more emails in the last year?
Is the air pressure more or less than X?
Is the temperature of the smartphone battery more or less than X?
Does this arrow point more north or more south?
Is the distance between your work location and where you are at the moment more or less than X?
Is the distance between your home location and where you are at the moment more or less than X?
Is the distance between where John lives and where you are at the moment more or less than X?
What was the average speed at which you were traveling in the last minute (if you sit in public transportation)?
Is the average pitch of the background noise over the last minute more or less than X?
Is the longest email that you received in the past week more or less than X characters long?
What's the chance that you will get a call today?
Is the average of Beeminder value X that you tracked over the last week (or month) more or less than Y?
All those questions are more interesting than whether postmaster general X served before or after postmaster general Y, or the boiling temperatures of various metals. Building an app around the issue might be more complicated than simply providing a new interface for LessWrong, but the payoff for getting credence training right is also so much higher.
Even if you simply focus on building a Beeminder-history credence game, that might not be too complicated but really useful. To me it feels like a waste to have valuable development resources spent on building a LessWrong app when there are much more valuable projects.
Just wanted to say: thanks for the ideas!
A personal prediction book?
Simple version: You provide your own predictions, and state your credence. Later you say whether you were right or wrong. The app displays statistics of your calibration.
This is simple in essence, but there will be many design decisions, and many little details that can make the UI better. For example, I guess you should choose the credence from, say, 50%, 60%, 70%, 80%, 90%, 95%, and 99%, instead of typing your own value, because this way it will be easier to make statistics. Also, choosing one option is easier than typing two digits, although most of the work will be typing the questions. It should be possible to edit the text later (noticing a typo too late would drive me crazy). The app should also remember the date each question was entered, so it can give you statistics like: how well calibrated you are in the last 30 days (compared with the previous 30 days).
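The core statistic here is small enough to sketch. Assuming resolved predictions are stored as (credence, outcome) pairs — an invented record shape, just for illustration — calibration per bucket is the observed hit rate compared with the stated level:

```python
from collections import defaultdict

# Resolved predictions as (stated credence, did it come true) pairs.
predictions = [
    (0.6, True), (0.6, False), (0.6, True),
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

def calibration_table(preds):
    """For each credence bucket, return (number of predictions, observed hit rate)."""
    buckets = defaultdict(list)
    for credence, outcome in preds:
        buckets[credence].append(outcome)
    return {c: (len(outs), sum(outs) / len(outs))
            for c, outs in sorted(buckets.items())}

for credence, (n, rate) in calibration_table(predictions).items():
    print(f"stated {credence:.0%}: {n} predictions, {rate:.0%} came true")
```

Restricting input to fixed buckets (50%, 60%, ..., 99%), as suggested above, is what makes this grouping meaningful: free-typed credences would each land in their own near-empty bucket.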
Maybe the data should be stored online, so you can edit it both from the mobile and from the PC. Although I would prefer if the application worked offline, too. These are two contradictory demands, so you have to find a solution. Perhaps each user should choose in settings whether their data is kept on the mobile or on the web? And perhaps allow changing this setting later, with the data copied across? Or maybe even keep only the recent data on the mobile, and the full archive online? There are many decisions here.
A nice function would be to save some work typing repeated questions. For example, if I want to make a bet every morning "will I exercise today?", there should be an option to repeat one of the recent questions with current date. (By the way, if you always display the date along the question, you can write things like "today" or "this month" without having to always write the specific date.)
A more advanced version (don't do this as the first version; remember the planning fallacy!) would allow some kind of "multiplayer". You could add friends, and offer to share some bets with your friends. Anyone can create a question and offer it to other people; they can accept (by writing their credence) or reject it. Then there would be a summary comparing the members of the group.
Again, here are many design choices and UI improvements. How specifically will you add friends? Will you also have groups of friends, so you share some questions only with some groups? Who can answer the multiplayer question: the person who wrote it, anyone, or the person who wrote it chooses one of the former options?
Integrate the whole thing with Facebook, especially the multiplayer version? That could make the app wildly popular! (But I heard that the Facebook API is less than friendly.)
I would expect most LWers to prefer Android. Certainly I do.
Peter Thiel gave an AMA at Reddit, mentioned friendly AI and such (and even neoreaction :-D).
His answer to "Peter, what's the worst investment you've ever made? What lessons did you learn from it?" is interesting. He focuses on not investing more in Facebook. The shift of focus says a lot about his mindset.
One of the better AMAs I've read.
Peter is an interesting guy. Is his book worth reading?
I read/scanned the predecessor of that book, the transcripts of his Stanford classes where he taught one course. They were quite interesting and worth reading.
Can Bayesian inference be applied to quantum immortality?
I'm writing an odd science fiction story in which I'd like to express an idea, but I'd like to get the details correct. Another redditor suggested that I might find someone here with enough of an understanding of Bayesian theory, the Multiple Worlds interpretation of quantum mechanics, and quantum suicide that I might be able to get some feedback in time:
Assuming the Multiple Worlds Interpretation of quantum theory is true, then buying lottery tickets can be looked at in an interesting way: it can be viewed as an individual funneling money from the timelines where the buyer loses to the timelines where the buyer wins. While there is a great degree of 'friction' in this funneling (if a lottery has an average 45% payout, then 55% of the money is lost to the "friction"), it is the method that has, perhaps, the lowest barrier to entry: it only costs as much as a lottery ticket, and doesn't require significant education into abstruse financial instruments.
While, on the whole, buying a lottery ticket may have a negative expected utility (due to that "friction"), there is at least one set of circumstances where making the purchase is warranted: if a disaster is forthcoming, which requires a certain minimal amount of wealth to survive. As a simplification, if the only future timelines in which you continue to live are ones in which you've won the lottery, then buying tickets increases the portion of timelines in which you live. (Another redditor phrased it thusly: Hypothetically, let's say you have special knowledge that at 5pm next Wednesday the evil future government is going to deactivate the cortical implants of the poorest 80% of the population, killing them all swiftly and painlessly. In that circumstance, there would be positive expected utility, because you wouldn't be alive if you lost.)
Which brings us to the final bit: If you buy a lottery ticket, and /win/, then via Bayesian inference from the previous paragraphs, you have just collected evidence which suggests an increased likelihood that you are about to face a disaster which requires a great deal of resources to survive. That is, according to the idea of quantum immortality, if you never experience a timeline in which you've permanently died, then the only timelines you experience are the ones in which you have sufficient resources to survive; thus implying that whatever resources you have are going to be sufficient to survive.
However, I'm not /quite/ sure that I've got all my inferential ducks lined up in a row there. So if anyone reading this could point out whether anything like the idea I'm trying to describe could be considered reasonably accurate, then I'd appreciate the heads-up. (I'm reasonably confident that it would be trivial to point out some error in the above paragraphs; you could say that I'm trying to figure out the details of the steelmanned version.)
(My original formulation of the question was posted to https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ .)
Just out of curiosity: How (if at all) is this related to your LW post about a year ago?
I think surely the following has to be wrong:
because you can't get that kind of information about the future ("are going to be sufficient") just from the fact that you haven't died in the past.
As for the more central issue:
this also seems terribly wrong to me, at least if the situation I'm supposed to imagine is that I bought a lottery ticket just for fun, or out of habit, or something like that. Because surely the possible worlds that get more likely according to your quantum-immortality argument are ones in which I bought a lottery ticket in the expectation of a disaster. Further, I don't see how winning makes this situation any more likely, at least until the disaster has actually occurred and been surmounted with the help of your winnings.
Imagine 10^12 equal-probability versions of you. 10^6 of them anticipate situations that desperately require wealth and buy lottery tickets. Another 10^9 versions of you buy lottery tickets just for fun. Then one of the 10^6, and 10^3 of the 10^9, win the lottery. OK, so now your odds (conditional on having just bought a lottery ticket) of being about to face wealth-requiring danger are only 10^3:1 instead of 10^6:1 as they were before -- but you need to conditionalize on all the relevant evidence. Let's suppose that you can predict those terrible dangers half the time when they occur; so there are another 10^6 of you facing that situation without knowing it; 10^3 of them bought lottery tickets, and 10^-3 of them won. So conditional on having just bought a lottery ticket for fun, your odds of being in danger are still 10^6:1 (10^9 out of danger, 10^3 in); conditional on having just bought a lottery ticket for fun and won, they're still 10^6:1 (10^3 out of danger, 10^-3 in).
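As a sanity check on that arithmetic, the expected counts can be run through directly (all the numbers are the ones assumed in the scenario above):

```python
total = 10**12
p_win = 10**-6                 # one ticket in a million wins

fun_buyers = 10**9             # versions that buy just for fun
p_fun = fun_buyers / total     # chance a random version buys for fun

# Dangers are only predicted half the time, so another 10^6 versions are in
# danger without knowing it; they buy tickets at the ordinary "fun" rate.
unaware_in_danger = 10**6
danger_fun_buyers = unaware_in_danger * p_fun      # 10^3
safe_fun_buyers = fun_buyers - danger_fun_buyers   # about 10^9

# Odds against danger, conditional on having bought a ticket for fun:
odds_bought = safe_fun_buyers / danger_fun_buyers
# ...and conditional on having bought for fun AND won (p_win cancels out):
odds_won = (safe_fun_buyers * p_win) / (danger_fun_buyers * p_win)

print(f"bought for fun: about {odds_bought:,.0f}:1 against danger")
print(f"bought for fun and won: about {odds_won:,.0f}:1 against danger")
```

The win probability multiplies both sides of the ratio equally, which is why winning carries no extra evidence of danger in this setup.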
Perhaps I'm missing something important; I've never found the idea of "quantum immortality" compelling, and I think the modes of thought that make it compelling involve wrongheadedness about probability and QM, but maybe I'm the one who's wrongheaded...
I think you're leaving out that disasters which require a lot of money to survive are fairly rare and hard to predict.
The character has come uncomfortably close to dying several times in a relatively short period, having had to use one or another rare or unusual skill or piece of equipment just to survive each time. (In other words, she's a Protagonist.)
Y Combinator published a list of requests for startups.
The list makes for interesting ideas. Most of them seem good, but a few make me wonder about Paul Graham. Some of the ideas (e.g., Government) make me wonder if he's starting to drink his own Kool-Aid and it has caused him to forget everything he has learned along the way. With others (e.g., Diversity) one almost gets the impression that the SJ crowd is putting the screws on Silicon Valley and he has to at least throw them some bone (the since-deleted "Female Founders" essay reads similarly).
I am not terribly impressed by that list as it looks like a collection of wouldn't-it-be-nice-to-have wishes.
The Government section looks fine -- the government is a big customer and does have very bad software. But yeah, the Diversity section is... weird. At least there is no Save the Environment section.
It suggests someone at Y Combinator now alieves he has magical superpowers about cutting through government procurement bureaucracy.
Not quite. This is a list of requests -- the Y Combinator would like to find ways to achieve magical superpowers to cut through the government procurement bureaucracy.
Then why did the section talk about how inefficient government software was rather than cutting through procurement bureaucracy?
Because you need to have what's called a "market opportunity" to start with.
I think this list is due to Sam Altman. He has written about wanting to fund breakthrough technologies, and shortly after he became Y Combinator president they invested in a fusion energy company.
Well, that would explain why the list ignores Paul Graham's advice of investing in fields one understands.
Could someone recommend an article (at advanced pop-sci level) providing the best arguments against the multiverse approach to quantum mechanics?
What is the best textbook that explains quantum mechanics from a multiverse perspective (rather than following the Copenhagen school and then bringing in the multiverse as an alternative)? This should be a textbook, not pop-sci, but at as basic a level as possible.
David Wallace's The Emergent Multiverse is an excellent introduction to the many-worlds interpretation, written by its best defender. Most of it should be accessible to a layperson, although there are technical sections. You can't use it to fully learn quantum mechanics from scratch, though. But if you learn the basic formalism from another textbook (I recommend this one; the first eight chapters should suffice) you'll be able to follow almost all of Wallace.
As for criticism, this is the best non-technical article I know of. It does presume some knowledge of quantum mechanics and many-worlds, but not deep technical knowledge.
Has anyone ever worked for Varsity Tutors before? I'm looking at applying to them as an online tutor, but I don't know their track record from a tutor point of view. Has anyone had any experience with them?
Never worked for them in particular, but my experience with such online tutoring businesses hasn't been great: you generally don't get many hours, are expected to commit fully to being available at certain times every week (which when in uni, with tests etc. at unexpected times, isn't too possible - might be possible for you in your situation), and they take a fair chunk of your earnings. On one occasion I put a lot of time into signing up, getting documents etc. to verify myself, and then never got a single student.
On the other hand, signing up for services such as www.firsttutors.com has been great (not sure if this is international, I've been using the NZ site, but I think it is). Basically it's a repository of tutors; people come and leave messages for you to see if you'd be a good fit and if you have times you could both make, and then you each pay a small one-off fee (usually <$20 for the tutor) for the website providing the interface and get each other's contact details. I've set up both online and in-person tutoring through this, online being about a fifth of all requests.
The first year I used it I got about 3 or 4 students through it (each of whom I met for one or two hours a week, and who lasted on average ~6 months). Nowadays, with a few good reviews on there, I've put my fees up to double what they used to be and still get about 15 requests a year, each of which is good for about 2 hours of tutoring a week - I don't take them all, but I could. And the fee the website charges is nothing in comparison to the hours I get out of it; usually it's less than an hour's work to make it back.
Tutoring seems like a great way for lots of LW people to earn extra money. Apparently at least one high end tutor earns $1000 an hour.
Interesting article, but that tutor is in a fairly small niche-- test prep tutoring for the children of very rich parents.
It's notable that, when he tells the reporter how to solve a math problem, he starts with teaching the reporter how to lower his panic level.
What's supposed to happen if an expanding FAI friendly to civilization X collides with an expanding FAI friendly to civilization Y?
If both FAIs use TDT or a comparable decision theory, then (under plausible assumptions), they will both maximize an aggregate of both civilizations' welfare.
Each FAI is friendly to its creators, not necessarily to the rest of the universe. Why would a FAI be interested in the welfare of aliens?
You might need a coalition against less tractable aliens, and you also might need a coalition to deal with something the non-living universe is going to throw at you.
If your creators include an interest in novelty in their CEV, then aliens are going to provide more variety than what your creators can make up on their own.
Heh. The situation is symmetric, so humanity is also a novelty for the aliens. And how much value does novelty have? Is it similar to having some exotic pets? X-D
It's not clear that territory that already has a FAI watching over it can be overtaken by another FAI. A FAI might expand to inhabit territory by sending small probes. I think those probes are unlikely to have any effect in territory already occupied by another FAI.
I'm also not sure to what extent you can call nodes of a FAI of the same origin, with millions of light years between them, the same FAI.
That's a valid point. An AI can rapidly expand across interstellar distances only by replicating and sending out clones. Assuming the speed of light limit, the clones would be essentially isolated from each other and likely to develop independently. So while we talk about "AI expanding through the light cone", it's actually a large set of diverging clones that's expanding. It's an interesting question how far could they diverge from one another.
If their ideas of friendliness are incompatible with each other, perhaps a conflict? Superintelligent war? It may be the case that one will be 'stronger' than the other, and that there will be a winner-take-all(-of-the-universe?) resolution?
If there is some compatibility, perhaps a merge, a la Three Worlds Collide?
Or maybe they co-operate, try not to interfere with each other? This would be more unlikely if they are in competition for something or other (matter?), but more likely if they have difficulties assessing risks to not co-operating, or if there is mutually assured destruction?
It's a fun question, but I mean, Vinge had that event horizon idea, about how fundamentally unpredictable things are for us mere humans when we're talking about hypothetical intelligences of this caliber, and I think he had a pretty good point on that. This question is taking a few extra steps beyond that, even.
Oh, sure, it's much more of a flight-of-fantasy question than a realistic one. An invitation to consider the tactical benefits of bombarding galaxies with black holes accelerated to a high fraction of c, maybe X-D
But the original impetus was the curiosity about the status of intelligent aliens for a FAI mathematically proven to be friendly to humans.
Neither defects?
Why do you think it's going to be a prisoner's dilemma type of situation?
In the intersection of their future light cones, each FAI can either try to accommodate the other (C) or try to get its own way (D). If one plays C and one plays D, the latter's values are enforced in the intersection of light cones; if both play C, they'll enforce some kind of compromise values; if they both play D, they will fight. So the payoff matrix is either PD-like or Chicken-like depending on how bloody the fight would be and how bad their values are by each other's standards.
Or am I missing something?
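For concreteness, whether this game is PD-like or Chicken-like comes down to the payoff ordering; here is a minimal sketch, where the numbers are entirely made up and only their orderings matter:

```python
# Hypothetical sketch of the FAI-contact game as a symmetric 2x2 game.
# T = payoff for defecting against a cooperator, R = mutual cooperation,
# P = mutual defection (fighting), S = cooperating against a defector.
# All concrete values below are illustrative assumptions.

def classify_2x2(T, R, P, S):
    """Classify a symmetric 2x2 game by its payoff ordering."""
    if T > R > P > S:
        return "Prisoner's Dilemma"  # fighting still beats being exploited
    if T > R > S > P:
        return "Chicken"             # mutual fighting is the worst outcome
    return "something else"

# If all-out war between the FAIs is merely costly:
print(classify_2x2(T=3, R=2, P=1, S=0))    # Prisoner's Dilemma
# If war is catastrophic for both sides, worse than being exploited:
print(classify_2x2(T=3, R=2, P=-10, S=0))  # Chicken
```

The point of the check is that "how bloody the fight would be" shows up as the relative position of the mutual-defection payoff P.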
The contact between the FAIs is not a one-decision-to-fight-or-share deal. It's a process that will take some time, and each party will have to make many decisions during that process. Besides, the payoff matrix is quite uncertain -- if one initially cooperates and one initially defects, does the defecting one get more? No one knows. For example, the start of hostilities between Hitler and Stalin was a case where Stalin (initially) cooperated and Hitler (initially) defected. The end result -- not so good for Hitler.
There are many options here -- fully cooperate (and potentially merge), fight till death, divide spheres of influence, set up a DMZ with shared control, modify self, etc.
The first interesting question is, I guess, how friendly to aliens will a FAI be? Will it perceive another alien FAI as an intolerable obstacle in its way to implement friendliness as it understands it?
More questions go along the lines of how likely it is that one FAI will be stronger (or smarter) than the other one. If they fight, what might it look like (assume interstellar distances and speed of light limits). How might an AI modify itself on meeting another AI, etc. etc.
As much as is reasonable in a given situation. If it is stronger, and if conquering the other AI is a net gain, it will fight. If it is not stronger, or if peace would be more efficient than war, it will try to negotiate.
The costs of peace will depend on the differences between those two AIs. "Let's both self-modify to become compatible" is one way to make peace, forever. It has some cost, but it also saves some cost. Agreeing to split the universe into two parts, each governed by one AI, also has some cost. Depending on specific numbers, the utility maximizing choice could be "winner takes all" or "let's split the universe" or "let's merge into one" or maybe something else I didn't think about.
The critical question is, whose utility?
The Aumann theorem will not help here since the FAIs will start with different values and different priors.
Cryonics vs. Investment:
This is a question I have already made a decision on but would like some outside opinions for while it's still fresh. My beliefs have recently changed from "cryonics is not worth the investment" to "cryonics seems to be worth the investment but greater certainty for a decision is still wanting" (CStbWtIbGCoaDiSW for short). I've explored my options with Rudi Hoffman and found that while my primary choice of provider, Alcor, is out of my current range, my options are not unobtainable. CI with the bare basics, lowest pay option is within my budget, and Alcor is likely to be in my budget within a few years if my career plans continue working as they are.
There's the context, here's the question: which seems more effective, applying now with a cryonics provider under conditions I consider less than ideal (for me, CI using a term life policy rather than a whole life policy with Alcor, which is what I want) or saving for a short time (some odd months) so I can open up my mutual funds portfolio?
Why these are at odds: because of my income, starting up even a low pay cryonics plan now would set back my ability to invest likely to my next job. The longer I wait on investing, the less effective the investments will be. If all cryonics plans were equal, this would still be a fairly easy decision, but as my beliefs stand, CI is an option I currently do not favor and term life is a policy I definitely do not favor. Why? Because there is a very real probability that, once the policy expires, renewing or changing will incur very large costs should my health conditions change (probable enough to be a concern). So whole life or universal life with Alcor is, at the moment, what I favor.
So, my question again: invest in a cryonics option I do not want now, or more quickly develop my portfolio, improving my finances and allowing for better options in the near future? You can probably guess I have chosen the latter option, putting my efforts into securing an investment. If no path can take me to the cryonics option I want now, then the best path is to minimize the distance between me and what I consider to be the best path. But I am not the only one who has made decisions like this, so any second thoughts or considerations would be welcome.
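The "longer I wait, the less effective the investments" point can be put in rough numbers. This is only a sketch: the monthly contribution, return rate, delay, and horizon below are all made-up assumptions, not figures from the actual situation.

```python
# Rough sketch of the cost of delaying regular investing by a few months,
# under hypothetical numbers: $200/month, 7% nominal annual return,
# 30-year horizon, 6-month delay. None of these are the poster's figures.

def future_value(monthly, annual_rate, months):
    """Future value of a stream of end-of-month contributions
    compounded at a fixed monthly rate."""
    r = annual_rate / 12
    total = 0.0
    for _ in range(months):
        total = total * (1 + r) + monthly
    return total

horizon = 30 * 12
start_now = future_value(200, 0.07, horizon)
start_later = future_value(200, 0.07, horizon - 6)  # begin 6 months later
print(round(start_now - start_later))  # end-of-horizon cost of the delay
```

Under these assumptions the six-month delay costs several thousand dollars at the horizon, which is the quantity to weigh against locking in a cryonics policy one doesn't actually want.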
Consider other possible tradeoffs, such as engaging in fewer leisure activities so you can take a part-time job that will pay for cryonics, or saving money by reducing consumption.
These are worthwhile tips and ones I've explored. I've reduced consumption down to bare minimums already. Most of my time out of work is spent in activities for work as my position requires time spent with the community and networking, but I still look for opportunities on the side. Still, these are useful and assist with either option. Thanks.
This is a good read: http://www.newrepublic.com/article/119321/harvard-ivy-league-should-judge-students-standardized-tests
Excerpt:
Max L.
Looks like they agree that specialization is for insects :-)
"They"? The author is Steven Pinker.
"They" can be singular or plural.
It is correct in the latter case, incorrect in the former. It largely doesn't matter, but recruiters I know, for example, throw out resumes for this particular error (though one had heard some schools actually encourage the practice, to the student's disservice) and some people (myself included until I thought better of it) think less of authors who make it. Linguistics as a discipline is descriptive, but people who are not linguists treat people differently for making errors.
It's a bit more complicated than correct or incorrect:
http://en.wikipedia.org/wiki/Singular_they
I agree with you as literally stated, and am not a Wikipedia naysayer, but that again is descriptive linguistics. People do say that. People also do say "y'all aints gots no Beefaronis?" (one of my favorite examples, heard with my own ears in a convenience store), and people do think differently of either than they do of what is sometimes called "blackboard grammar." I would recommend John McWhorter as a linguist who describes this better than I can. Or just say to yourself "huh, interesting opinion" and walk away; I swear I won't be offended :-)
That's nuts.
I don't think so, but either way, if one wants a job at GE, to use a recognizable example, one might want to know.
Why? It strikes me as a good way to sort out people who have bad attention to detail, as well as avoiding the SJW-types more interested in accusing everyone in the company of sexism than doing any actual work.
Does anyone have any good ideas about how to be productive while commuting? I'll be starting a program soon where I'll be spending about 2 hours a day commuting, and don't want these hours to go to waste. Note: I have interests similar to a typical LessWrong reader, and am particularly interested in startups.
My brainstorming:
Audio books and podcasts. This sounds like the most promising thing. However, the things I want to learn about are the hard sciences and those require pictures and diagrams to explain (you can't learn biology or math with an audiobook). I'm also in the process of learning web development and design, but these things also seem too visual to work as an audiobook.
Economics audiobooks might work, idk. I could also listen to books about startups/business, but I'm at the point where I know enough about these things that diminishing returns have kicked in.
I've read a good amount about psychology already, and feel like diminishing returns have kicked in. Although psychology seems like it'd work well with an audiobook.
Perhaps sci-fi audiobooks would be good? Would I learn from these or would it just be entertaining? Any suggestions? (I read 1984, Ender's Game and Brave New World. I liked them, but didn't learn too much from them.)
I read HPMOR and loved it. Anything similar to that?
Other than audiobooks, I could spend the time brainstorming. Startup ideas, thought experiments, stuff like that.
Not really what you're looking for, but I feel obligated:
Move or get a different job. Reduce your commute by 1 or 1.5 hours. This is the best way to increase the productivity of your commute.
I read (can't remember the source) that commuting was the worst part of people's day (they were unhappy, or experienced the lowest levels of their self-assessed subjective well-being).
I'm doing a coding bootcamp (Fullstack Academy). It's in NYC and I live with my parents in Long Island now. It's only 13 weeks so it's not that bad, especially if I could make it productive. If it was long term I'd probably agree with you though.
Commuting by car is terrible. Commuting by foot is great. There is not a lot of data on commuting by subway, but it does not look good.
Long distance foot commuting is still pretty bad. In my experience I don't hate the world as much, but burning two plus hours a day commuting sucks no matter what. The subway is definitely much better than car commuting, but not as nice as biking or walking. I think subway commuting is vastly improved by good distractions available through a smartphone, though.
Driving or public transportation?
If driving, don't forget that you have a limited amount of attention available and being "productive" as a driver involves some trade-offs X-)
I should have mentioned that, it's all public transportation (train + subway). If I get a seat on the train and it's not too crowded I could use my laptop to code or to read, but it's difficult to get a seat.
You can read easily enough if you have a tablet or an e-reader.
Given the limitations (that you describe in other replies) I think you've got a good list.
Regarding podcasts, this could be a great time to experiment with new ones & decide which you want to listen to longer term.
Perhaps there are some short activities of value to you, such as Anki (assuming you have a smartphone), mentally reviewing your memory palace, or mindfulness exercises. Mindfulness exercises on public transport may seem a little odd, but the distractions may make it more effective as exercise - just be patient with yourself.
Research about online communities with upvotes and downvotes
I don't think things are quite that bad here.
If I understand correctly, people become utilitarians because they think that global suffering/well-being has such great value that all the other values don't really matter (this is what I see every time someone tries to argue for utilitarianism; please correct me if I'm wrong). I think a lot of people don't share this view, and therefore, before trying to convince them that they should choose utilitarianism as their morality, you first need to convince them of the value of the harm/pleasure axis.
From http://www.preposterousuniverse.com/blog/2013/08/22/the-higgs-boson-vs-boltzmann-brains/
So, before reading the last sentence quoted I had no issue with the idea that I turned up as a random fluctuation, but that last sentence gives me pause - and my brain refuses to cross it and give useful thoughts.
Anyone have any useful comments? Thanks.
Quite a few people will pay $10 in order to not know whether they have herpes.
From Poor Economics by Abhijit Banerjee and Esther Duflo
Thank you, that was very interesting.
It seems to me these people are paying in sanity what they can't pay in money - and the price they're paying is arguably higher than what the rich are paying, not even considering the physical health effects.
This might be one of the ways that being poor is expensive.
Indeed, 'being poor is expensive' is related to how they frame this fact. From the end of the same chapter:
"Whether you have herpes" is not as clearly-defined a category as it sounds. The blood test will tell you which types of HSV antibodies you have. If you're asymptomatic, it won't tell you the site of the infection, if you're communicable, or if you will ever experience an outbreak.
I had an HSV test a while ago (all clear, thankfully), and my impression from speaking to the medical staff was that given the prevalence and relative harmlessness of the disease, (compared to, say, HIV or hepatitis or something), the doubt surrounding a positive test result was enough of a psychological hazard for them to actively dissuade some people from taking it, and many sexual health clinics don't even offer it for this reason.
Thanks to its multiple infection sites, herpes has the unusual property that two people, neither of whom have an STI, can have sex that leads to one of them having an STI. It's a spontaneous creation of stigma! And if you have an asymptomatic infection (very common), there's no way to know whether it's oral (non-stigmatized, not an STI) or genital (stigmatized, STI) since the major strains are only moderately selective.
... and that's why you should prefer to sleep with rationalists. :)
But it might be rational to not find out if you believed you would have a duty to warn potential lovers if you tested positive, or were willing to lie but believed yourself to be a bad actor.
How is it rational to willfully keep others in ignorance of a risk they have every right to know about? The discomfort of honest disclosure is a minor inconvenience when compared to the disease.
A classic example of confusing is with ought...
You are right for the rationalist who gives substantial weight to the welfare of his or her lovers. But being rational doesn't necessarily imply that you care much about other people.
A rationalist who doesn't care about the welfare of their lovers, and yet believes they have a duty to warn them if they tested positive (but no duty to get tested in the first place, even if the cost is nonpositive)?
Are you advocating for prisoner defection?
In my game theory class I teach that rational people will defect in the prisoner's dilemma game, although I stress that you should try to change the game so it is no longer a prisoner's dilemma.
I hope you also talk about Parfit's hitchhiker, credible precommitment and morals (e.g. honor, honesty) as one of its aspects.
Can this situation be modeled as a prisoner's dilemma in a useful way? There seem to be some important differences.
For example, if both 'prisoners' have the same strain of herpes, then the utility for mutual defection is positive for both participants. That is, they get the sex they were looking for, with no further herpes.
Not prisoner's dilemma, but successful coordination to which a decrease in the spread of HIV in the gay community is attributed: serosorting.
The base rate of HSV2 in US adults is ~20%. I would argue that if you're sexually active, and don't get an HSV test between partners (which is typically not part of the standard barrage of STD tests), you're maintaining the same sort of plausible deniability strategy as those who pay to not see the results of their apropos-of-nothing tests.
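Given that ~20% base rate, a quick Bayes update shows what a positive antibody result would actually tell you. Only the prior comes from the comment above; the sensitivity and specificity figures are illustrative assumptions, not real test-kit specifications.

```python
# Illustrative Bayes update for an asymptomatic HSV-2 antibody screen.
# prior = 0.20 is the base rate cited above; sensitivity/specificity
# are hypothetical round numbers chosen for illustration.

def posterior(prior, sensitivity, specificity):
    """P(infected | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

print(round(posterior(prior=0.20, sensitivity=0.97, specificity=0.95), 3))
# → 0.829
```

Even a fairly accurate test leaves a meaningful chance of a false positive at this base rate, which is part of the "psychological hazard" the clinics worry about.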
By Brad Hicks
I think I'd be more inclined to frame this sort of thing as typical mind fallacy. Modeling it in terms of an I Win button seems to violate Hanlon's Razor: we don't need an adversarial model when plain old ignorance will suffice, and I don't think preferred interaction style is a matter of conscious choice for most people.
I'd split the difference-- I believe the typical mind fallacy can shade into believing that other sorts of minds aren't worth respecting.
Alternatively, the situation can be described in terms of tell vs. guess culture.
This model assumes that relationships are adversarial, which need not be the case, and isn't the case in a good relationship.
No, the model applies even if the relationship isn't adversarial. As long as you have different priorities and are not perfect at communicating, it applies.
Is there still a rewards credit card that autodonates to MIRI or CfAR? I've seen them mentioned, but can't find any sign up links that are still live.
Unfortunately the program has been discontinued by Capital One :(
We have it in our queue to look into alternatives.
One thing you might want to look into is that many cards will allow you to donate your reward points etc. to charity. For many credit cards, this generates more value for the charity you choose to donate to.
I think they stopped distributing them. The last I saw, they had that entry struck out on their support page.
How useful would it be to have more people working on AI/FAI? Would it be a big help to have another 1,000 researchers working on it making $200,000 a year? Or does an incredibly disproportionate amount of the contribution come from big names like Eliezer?
What do we want out of AI? Is it happiness? If so, then why not just research wireheading itself and not encounter the risks of an unfriendly AI?
We don't know what we want from AI, beyond obvious goals like survival. Mostly I think in terms of a perfect tutor that would bring us to its own level of intelligence before turning itself off. But quite possibly we don't want that at all. I recall some commenter here seemed to want a long-term ruler AI.
What do you guys think about having an ideas/brainstorming section? I don't see too much brainstorming of ideas here. Most posts seem to be very refined thoughts. What about a place to brainstorm some of the less refined thoughts?
This seems to be LW's collected wisdom on the matter.
http://wiki.lesswrong.com/wiki/Futility_of_chaos
Brainstorming does not rely on chaos. It's a method of using System 1 which delays any censoring by System 2.
Some evidence of LW beliefs about it: here and here. CFAR teaches people to brainstorm more often.
I'm a bit confused by what is meant by futility of chaos so forgive me if I misinterpreted it. Let me try to be a bit more clear about what I'm proposing and let me know if futility of chaos addresses it.
I'm saying that there are ideas that you think are worth brainstorming, and there are ideas that you feel confident enough about to write a post about to get some feedback. Right now it seems that people don't post about the "ideas worth brainstorming" and I suspect that it'd be beneficial if they did and we discussed them.
Futility of chaos seems to be addressing more "chaotic and random" ideas. I don't know enough about math to really know what that means, but I sense that it's different from ideas that smart people on LW judge to be worth brainstorming.
Brainstorming is too unstructured and unpredictable, a form of "creative disorder" that has received more credit than it deserves.
What about discussing ideas that you think have a decent shot at being good and important, but that you can't explain fully and still aren't that confident in?
Sure, that's what the open thread is for.
Could someone please give me some good arguments for a work ethic? I tend to oppose it, but the debate seems too easy so I may be missing something.
Having a work ethic might help you accomplish more things than you would without one.
It's a good reputation boost. "A highly-skilled, hard-working x" might be more flattering than "a highly skilled x."
Work ethic might be a signal/facet of conscientiousness, a desirable trait in many domains.
That makes sense; I hadn't thought of that. Thanks. Perhaps there would be a required critical mass of people to accept laziness as a virtue before it becomes "this good or that good" rather than "this good or lack of this good."
It'll build habits that also make it easier to do things you want when not at work?
That's the big one. I have things I want to do, in far mode, and I find that diligence at work translates to diligence off work. Admittedly I also love my job, but...
Thanks for the reply! My question was unclear, but I meant the other meaning. I strongly believe in doing well whatever one does, but not in seeking to do more work in the first place. I mean the idea that there's something more noble about working 40+ hours a week than not, and that people with sufficient means shouldn't retire in their thirties.
Sure, one can build habits at work, but one can do so more cheaply than 2000 hours of one's life per year, net of compensation. Admittedly this does not apply so much if you love your job, but hypothetically, if someone values leisure more, is there a way in which choosing that leisure is less ethical?
"Work" can mean different things, and so also "work ethic".
The way I use it, "work" is whatever you are serious (or at least want to be) about doing, whether it's something that matters in the larger scheme of things or not, and whether or not it earns money. (But having to earn a living makes it a lot easier to be serious about it.)
"Leisure" is whatever you like doing but choose not to be serious about.
In that sense, I'm not much interested in leisure. Idling one's days away on a tropical island is not my idea of fun, and I do not watch television. Valuing seriousness is what I would mean by "work ethic". What one should be serious about is a separate ethical question.
When other people talk about "work", they might mean service to others, and by "leisure" service to oneself. I score low on the "service to others" metric, but for EA people, that is their work ethic.
To others, "work" is earning a living, and "leisure" is whatever you do when you're not doing that. The work ethic relative to that concept is that the pay you get for your work is a measure of the value you are creating for others. If you are idling then you are neglecting your duty to create value all the years that you can, for time is the most perishable of all commodities: a day unused is a day lost to our future light-cone for ever.
That is an interesting use of "work" and "leisure," and one with which I was not familiar. I am very serious about my leisure (depending how you use serious... I love semantic arguments for fun but not everybody does so I'll cut that here). The more frequent use I have heard is close to its etymology: what one is allowed to do, as opposed to what one has a duty to do. That is anecdotal to the people I know so may not be the standard. I am much more serious about what I am allowed to do, and what others are allowed to do, than even a self-created duty.
Very interesting and I'd be happy to continue, but to restate the original question with help from the noticed ambiguity: is there a strong argument why spending 80,000 hours in a job for the job's sake is ethically superior to selling enough time to meet one's needs and using the rest for one's own goals?
To give a more direct answer, "a job for the job's sake" sounds like a lost purpose. In harder times, everyone had to work hard for as many years as they could, to support themselves, their household, and their community, and the community couldn't afford many passengers. Having broken free of the Malthusian wolves, the pressure is off, but the attitudes remain: idleness is sinful.
And then again, from the transhumanist point of view, the pressure isn't off at all, it's been replaced by a different one. We now have the prospect of a whole universe to conquer. How many passengers can the human race afford in that enterprise, among those able to contribute to it?
The answer really depends on the underlying value system. For example, most varieties of hedonism would find nothing wrong with retiring to the life of leisure at thirty. But if you value, say, self-actualization (a la Maslow), retiring early is a bad idea.
Generally speaking, the experience of the so-called trust fund kids indicates that NOT having to work for a living is bad for you. You can also compare housewives to working women.
If you want to self-actualize in a way that does not (reliably, or soon enough) bring money, retiring early can be useful.
I think there's some lack of clarity in this thread about what it means to "retire". There are two interpretations (see e.g. this post):
(1) Retire means financial independence, not having to work for a living, so that you can focus your energy on what you want to do instead of what you have to do.
(2) Retire means a carefree life of leisure where you maximize your hedonics by doing easy and pleasant things and not doing hard and stressful things.
I think these two ways of retiring are quite different and lead to different consequences.
I meant to imply the former, albeit with the possibility "what you want to do" is not restricted from including leisure/hedonics/pleasure.
Technically, yes, though people mostly use (1) to mean doing something purposeful, an activity after which you can point and say "I made that", while (2) is essentially trying to get as close to wireheading as you currently can :-)
A friend of mine has started going into REM in frequent 5-minute cycles during the day, in order to boost his learning potential. He developed this through multiple acid trips. Is that safe? It seems like there should be some sort of disadvantage to this system, but so far he seems fine.
How does he know that he actually is in REM? How does he know it boosts his learning potential?
How does LSD help you develop an ability to get to sleep faster? LSD makes one less sleepy, so this seems like an improbable ability to ascribe to it. But if it actually works, it's a really useful ability.
You might want to try asking this question to a polyphasic sleeping community BTW.
What is "this"? This ability?
Does he also get a full night's sleep? Eliminating other stages of sleep is almost certainly bad, but supplementing with REM seems to me unlikely to be bad.
People with narcolepsy basically only have REM sleep. Narcolepsy is very bad, but many people who eventually develop it seemed to have only had REM sleep when they were functional with no ill effects. In particular, they greatly benefit from naps (both before and after developing full-blown narcolepsy).
So, I read textbooks "wrong".
The "standard" way of reading a textbook (a math textbook or something) is, at least I imagine, to read it in order. When you get to exercises, do them until you don't think you'd get any value out of the remaining exercises. If you come across something that you don't want to learn, skip forwards. If you come across something that's difficult to understand because you don't fully understand a previous concept, skip backwards.
I almost never read textbooks this way. I essentially read them in an arbitrary order. I tend to start near the beginning and move forwards. If I encounter something boring, I tend to skip it even if it's something I expect to have to understand eventually. If I encounter something I have difficulty understanding because I don't fully understand a previous concept, I skip backwards in order to review the previous concept. Or I skip forwards in the hopes that the previous concept will somehow become clear later. Or I forget about it and skip to an arbitrary different interesting section. I don't do exercises unless either they seem particularly interesting, or I feel like I have to do them in order to understand the material.
I know that I can sometimes get away with the second method even when other people wouldn't be able to. If I were to read a first-year undergraduate physics textbook, I imagine I could read it in essentially any order without trouble, even though I never took undergraduate physics. But I tend to use this method for all textbooks, including textbooks that are at or above my level (Awodey's Category Theory, Homotopy Type Theory, David Tong's Quantum Field Theory, Figure Drawing for All It's Worth).
Is the second method a perfectly good alternative to the "standard" method? Am I completely shooting myself in the foot by using the second method for difficult textbooks? Is the second method actually better than the "standard" method?
This is how I read too, usually. I think it's one of those things that works better for some people but not others. I've tried reading things the standard way, and it works for some books, but for other books I just get too bored trudging through the boring parts.
BTW, I've also been reading HoTT, so if you want to talk about it or something feel free to message me!
On one hand, it's a good sign that you have a keen sense of what you need to know, how and where to look for it, and at what pace. On the other hand, authors who know more about a subject than you do must have had their reasons to choose the order in which they present their material. I'd say keep listening to your gut on what is important to read, but at least try to get acquainted with the other topics you're choosing not to go deeply into.
Searching for genes that make people smart -- we still have no idea...
No, this is an unmitigated triumph. It's amazing how people take such a negative view of this.
So let me get this straight: over the past few decades we have slowly moved from a viewpoint where Gould is a saint, intelligence doesn't exist and has no predictive value since it's a racist made-up concept promoted by incompetent hacks and it has no genetic component and definitely nothing which could possibly differ between any groups at all, to a viewpoint where the validity of intelligence tests in multiple senses have been shown, the amount of genetic contribution has been accurately estimated, the architecture nailed down as highly polygenic & additive, the likely number of variants, and we've started accumulating the sample size to start detecting variants, and not just have we detected 60+ variants with >90% probability* (see the remarks on the Bayesian posterior probability in the supplementary material), we even have 3 which pass the usual (moronic, arbitrary, unjustified) statistical-significance thresholds - and wait, there's more, they also predict IQ out of sample and many of the implicated variants are known to relate to the central nervous system! - and this is a disappointment where 'we still have no idea' and the findings are 'maddeningly small' with 'inconclusive findings'?
* which imply you can predict much better than the article's calculation of 1.8 points
You've got to be kidding me. Or is this how zeitgeists change? They get walked back step by step, and people pretend nothing has changed? When the tests are shown to be unbiased and predictive, we stop talking about them; when twin studies show genetic influences on intelligence in every variation, we talk about how very difficult causal inference is and how twin studies can go wrong; when genetics comes up, suddenly everyone is discussing how nonadditive and gene-environment effects will make identification impossible (never mind that there's no reason to expect them to be large parts of the genetics); when good genetic candidates are found which don't pass arbitrary thresholds, that's taken as evidence they don't exist and genetic influence is irrelevant; and when enough samples are taken to satisfy even those thresholds, each of the hits is deprecated as small and irrelevant? And the changes and refutations quietly go down the memory hole. 'Of course some of intelligence is genetic, everyone knows that - but all the variants are small, so really, this changes nothing at all.'
No, the Rietveld papers this year and last were historic triumphs. The theory has been as proven as it needs to be. The fundamental points no longer need to be debated - the debate is over. In some respects, it's now a pretty boring topic.
All that's left is engineering and application: getting enough samples to infer the rest to sufficiently high posterior probabilities to make good-enough predictions, and exploiting new possibilities like embryo-selection.
We are looking at this in different contexts and using different baselines.
You are talking about how long ago the genetic component of intelligence was considered a malicious fantasy of evil people, whereas now it's just science. Sure (though you still can't discuss it publicly). I'm talking about this particular paper and how big a step it is compared to, say, a couple of years ago.
My baseline is much more narrow and technical. It is "we look at the genome of a baby and have no idea what its IQ will be when it grows up". That is still largely the case, and the paper's ability to forecast does not look impressive to me.
The fact that intelligence is largely genetic and highly polygenic is already "normal" for me -- my attitude is "yeah, sure, we know this, what have you done for me lately".
I appreciate the historical context, which we are not free of by any stretch of the imagination (so, no, I don't see unmitigated triumphs), but I was not commenting on progress over the last half-century. I want out-of-sample predictions of noticeable magnitude, and I think getting there will take a bit more than just engineering.
This paper validates the approach (something a lot of people, for a lot of different reasons, were skeptical of), and even on its own merits we still get some predictive power out of it: the 3 top hits cover a range of ~1.5 points, and the 69 variants with 90% confidence predict even more. (I'm not sure how much since they don't bother to use all their data, but if we assume the 69 are evenly distributed between 0-0.5 points, then the mean is 0.25 and the total predictive power is more than a few points.)
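The back-of-the-envelope estimate in the parenthetical can be checked directly. This is a minimal sketch using only the comment's own assumed numbers (69 variants, effects uniformly spread over 0-0.5 IQ points), not values taken from the paper itself:

```python
# Back-of-envelope check of the comment's numbers. All figures here are
# the comment's assumptions, not estimates from the Rietveld paper.
n_variants = 69        # variants at >90% posterior probability
low, high = 0.0, 0.5   # assumed per-variant effect range, in IQ points

mean_effect = (low + high) / 2   # mean of a uniform spread
total_range = n_variants * mean_effect

print(mean_effect)   # 0.25
print(total_range)   # 17.25
```

Under these (generous) assumptions the 69 variants together would span well over "a few points", consistent with the comment's claim that they predict more than the 3 genome-wide-significant hits alone.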
What use is this result? Well, what use is a new-born baby? As the cryptographers say, 'attacks only get better'.
And, uh, why would you think that? There's no secret sauce here. Just take a lot of samples and run a regression. I don't think they even used anything particularly complex like a lasso or elastic net.
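The "just run a regression" point can be illustrated with a toy GWAS-style scan: one simple per-SNP regression of phenotype on allele count, with no lasso or elastic net. This is a hypothetical sketch on simulated data (all sample sizes, allele frequencies, and effect sizes below are made up for illustration), not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5000, 200                       # toy sample size and SNP count
geno = rng.binomial(2, 0.3, (n, p))    # allele counts 0/1/2 per SNP
beta = np.zeros(p)
beta[:10] = 0.3                        # 10 true additive effects, rest null
pheno = geno @ beta + rng.normal(0, 3, n)   # additive model plus noise

def snp_effect(g, y):
    """Slope of a univariate regression of y on one SNP's allele count."""
    g = g - g.mean()
    return g @ (y - y.mean()) / (g @ g)

# The whole "GWAS": one marginal regression per SNP.
effects = np.array([snp_effect(geno[:, j], pheno) for j in range(p)])
```

With independent SNPs and purely additive effects, the marginal slopes recover the true effects (near 0.3 for the first ten SNPs, near 0 for the rest); given enough samples, detection is indeed mostly an engineering problem of sample size, which is the comment's point.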
Pretend for a second it's a nutrition study and apply your usual scepticism :-) You know quite well that "just run a regression" is, um... rarely that simple.
To give one obvious example, interaction effects are an issue, including interaction between genes and the environment.
No, that's the great thing about genetic associations! First, genes don't change over a lifetime, so every association is in effect a longitudinal study where the arrow of time immediately rules out A<-B, reverse causation in which IQ somehow causes particular variants to be overrepresented; that takes out one of the three causal pathways. Then you're left with confounding - but there's almost no way for a third variable to pick out people with particular alleles and grant them higher intelligence, no greenbeard effect, and population differences are dealt with by using relatively homogeneous samples & controlling for principal components - so you don't have to worry much about A<-C->B. So all you're left with is A->B.
But they're not. They're not a large part of what's going on. And they don't affect the associations you find through a straight analysis looking for additive effects.
But their expression does.
How do you know?
An expression in circumstances dictated by what genes one started with.
Because if they were a large part of what was going on, the estimates would not break down so cleanly and the methods would not work so well.
Keep in mind that the outside view of biological complexity is that
Or to phrase this another way:
I don't think the outside view is relevant here. We have coming up on a century of twin studies and behavioral genetics, with very motivated people coming up with possible problems, and so far the traditional estimates are looking pretty good: for example, when people go and look at genetics directly, the estimates for simple additive heritability look very similar to the traditional estimates. Just the other day there was an example of a SNP study confirming the estimates from twin studies: "Substantial SNP-based heritability estimates for working memory performance", Vogler et al 2014. If all these complexities were real and serious problems and the Outside View advises us to be skeptical, why do we keep finding that the SNP/GCTA estimates look exactly like we would have predicted?
Ok, I confess I have no idea what SNP and GCTA are. As for the study Lumifer linked to, Razib Khan's analysis of it is that it suggests intelligence is a complex polygenic trait. This should not be surprising, as it is certainly an extremely complex trait in terms of phenotype.
My 30 day karma just jumped over 40 points since I checked LW this morning. Either I've said something really popular (and none of my recent comments have karma that high), or there's a bug.
I got about +30 as well, and only a small amount is due to recent upvotes. And despite the jump, I'm out of the top 30-day contributors list, which I've been in and out of the bottom of for some weeks. The other names in that list are regulars there, so they must have got some upvotes as well.
Perhaps some systematic downvoter had all his votes reversed?
My guess is that someone with a similar political ideology to you upvoted forty of your comments on the recent political post.
ETA: Well I've been struck by the mysterious mass-upvoter as well! I'm pretty sure the political motivation hypothesis is wrong now.
The same thing happened to me today - within 12 hours I got at least +1 karma on every single post of mine from the last month and a half or so, which happened to be primarily on the history of life / 'great filter' threads.
I don't think it's ideological. Mysterious mass-upvoter?
Since my political ideology in that debate was trying to steelman both sides, I doubt this is the case, unless there is a fanatical steelmanner out there.
I've seen several unexpected increases on the order of 10 points over the last couple of weeks. (I don't remember the exact dates.) My guess was gradual undoing of prior mass-downvoting, but a Mystery Mass Upvoter is certainly another possibility.
[EDITED to add ...] A possible variant of the Mystery Mass Upvoter hypothesis: we have a Mystery Small-Mass Upvoter, who is upvoting old posts in Main (maybe because s/he is new here and reading through old material). But that only works if everyone affected has old posts in Main, which I don't think is the case.
Hypothesis: we are the subjects of an experiment.
I seem to recall recent instances of a mysterious mass downvoter that produced several threads of people complaining / trying to figure out what could be done.
What if someone is doing the same thing, but with upvotes, to look for bias in community reactions?
Or they're just trolling. Whichever.
My karma's been running higher than I expected, too.
I wish there were some way to track karma diffs. So far as I know, there's no way to do it for older comments and posts.
So it's not just me? I also seemed to see something like that, but I assumed I just misremembered my previous 30-day karma score or something.