Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Open thread, Mar. 20 - Mar. 26, 2017

Okay, so I recently made this joke about a future Wikipedia article about Less Wrong:

[article claiming that LW opposes feelings and support neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.

A few days later I actually looked at the Wikipedia article about Less Wrong:

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea

...
2Elo
can we fix this please? Edit: I will work on it.

I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially, when you make up a strategy, sleep on it, and then try imagining how a person already primed against LW would read your words.

For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted, as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so the next time they are not going to even listen to any of us.

Currently my best idea -- I haven't made any steps yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of being partial, because that's what I would openly disclose first, and because I would plainly say that as a person with a conflict of interest I shouldn't edit the article. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change in the article could simply be reverted by David, but he i...

9ChristianKl
It's worth noting that David Gerard was a LW contributor with a significant amount of karma: http://lesswrong.com/user/David_Gerard/
1David_Gerard
This isn't what "conflict of interest" means at Wikipedia. You probably want to review WP:COI, and I mean "review" it in a manner where you try to understand what it's getting at rather than looking for loopholes that you think will let you do the antisocial thing you're contemplating. Your posited approach is the same one that didn't work for the cryptocurrency advocates either. (And "RationalWiki is a competing website therefore his edits must be COI" has failed for many cranks, because it's trivially obvious that their true rejection is that I edited at all and disagreed with them, much as that's your true rejection.) Being an advocate who's written a post specifically setting out a plan, your comment above would, in any serious Wikipedia dispute on the topic, be prima facie evidence that you were attempting to brigade Wikipedia for the benefit of your own conflict of interest. But, y'know, knock yourself out in the best of faith, we're writing an encyclopedia here after all and every bit helps. HTH! If you really want to make the article better, the guideline you want to take to heart is WP:RS, and a whacking dose of WP:NOR. Advocacy editing like you've just mapped out a detailed plan for is a good way to get reverted, and blocked if you persist.

Is any of the following not true?

  • You are one of the 2 or 3 most vocal critics of LW worldwide, for years, so this is your pet issue, and you are far from impartial.

  • A lot of what the "reliable sources" write about LW originates from your writing about LW.

  • You are cherry-picking facts that describe LW in a certain light: For example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.

-7David_Gerard
-1David_Gerard
(More generally as a Wikipedia editor I find myself perennially amazed at advocates for some minor cause who seem to seriously think that Wikipedia articles on their minor cause should only be edited by advocates, and that all edits by people who aren't advocates must somehow be wrong and bad and against the rules. Even though the relevant rules are (a) quite simple conceptually (b) say nothing of the sort. You'd almost think they don't have the slightest understanding of what Wikipedia is about, and only cared about advocating their cause and bugger the encyclopedia.)
-5David_Gerard
1TheAncientGeek
Yikes. The current version of the WP article is a lot less balanced than the RW one! Also, the edit warring is two-way... someone wholesale deleted the Roko's basilisk section.
1Viliam
Problem is, this is probably not good news for LW. Tomorrow, the RB section will most likely be back, possibly with a warning on the talk page that the evil cultists from LW are trying to hide their scandals.
[anonymous]160

PhD acquired.

5Regex
Now people have to call you doctor CellBioGuy

Link on "discussion" disappeared from the lesswrong.com. Is it planned change? Or only for me?

4Elo
Accidental css pull that caused unusual things. It's being worked on. Apologies.

Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.

Firstly, what do I mean by 'anti-rationalism'? I don't mean that in particular people will criticize LessWrong. I mean it in the general sense of skepticism towards science and logical reasoning, skepticism towards technology, and a hostility to rationalistic methods applied to policy, politics, economics, education, and the like.

And there are a few things I think we will observe first (some of...

3username2
(I thought the post was reasonably written.) Can you say a word on whether (and how) this phenomenon you describe ("populist hostility gets directed towards what is perceived to be the worldview of the elite") is different from the past? It seems to me that this is a force that is always present, often led to "problems" (e.g., the Luddite movement), but usually (though not always) the general population came around to believing the same things as "the elites".
0tristanm
The process is not different from what occurred in the past, and I think this was basically the catalyst for anti-Semitism in the post-Industrial Revolution era. You observe a characteristic of a group of people who seem to be doing a lot better than you -- in that case, a lot of them happened to be Jewish -- and so you then associate their Jewishness with your lack of success and unhappiness. The main difference is that society continues to modernize and technology improves. Bad ideas for why some people are better off than others become unpopular. Actual biases and unfairness in the system gradually disappear. But despite that, inequality remains and in fact seems to be rising. What happens is that the only thing left to blame is instrumental rationality. I imagine that people will look as hard as they can for bias and unfairness for as long as possible, and will want to see it in people who are instrumentally rational. In a free society (and even more so as a society becomes freer and true bigotry disappears), some people will be better off just because they are better at making themselves better off, and the degree to which people vary in that ability is quite staggering. But psychologically it is too difficult for many to accept this, because no one wants to believe in inherent differences. So it's sort of a paradoxical result of our society actually improving.
1satt
Writing style looks fine. My quibbles would be with the empirical claims/predictions/speculations.

Is the elite really more of a cognitive elite than in the past? Strenze's 2007 meta-analysis (previously) analyzed how the correlations between IQ and education, IQ and occupational level, and IQ and income changed over time. The first two correlations decreased and the third held level at a modest 0.2.

Will elite worldviews increasingly diverge from the worldviews of those left behind economically? Maybe, although just as there are forces for divergence, there are forces for convergence. The media can, and do, transmit elite-aligned worldviews just as they transmit elite-opposed worldviews, while elites fund political activity, and even the occasional political movement.

Would increasing inequality really prevent people from noticing economic gains for the poorest? That notion sounds like hyperbole to me. The media and people's social networks are large, and can discuss many economic issues at once. Even people who spend a good chunk of time discussing inequality discuss gains (or losses) of those with low income or wealth. For instance, Branko Milanović, whose standing in economics comes from his studies of inequality, is probably best known for his elephant chart, which presents income gains across the global income distribution, down to the 5th percentile. (Which percentile, incidentally, did not see an increase in real income between 1988 and 2008, according to the chart.)

Also, while the Anglosphere's discussed inequality a great deal in the 2010s, that seems to me a vogue produced by the one-two-three punch of the Great Recession, the Occupy movement, and the economist feeding frenzy around Thomas Piketty's book. Before then, I reckon most of the non-economists who drew special attention to economic inequality were left-leaning activists and pundits in particular. That could become the norm once again, and if so, concerns about poverty would likely becom...
0TheAncientGeek
Rationalists (Bay Area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make; it's a different set of concerns, goals and approaches. And what's worse is that Bay Area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the simple idea of truth, in which he considers the correspondence theory, Tarski's theory, and a few others without resolving on a single correct theory). Bay Area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truths) claims are utterly false, but it's an attitude, not a proof. What's worse still is that sceptical and relativistic claims can be supported using the toolkit of rationality. "Postmodernists" tend to be sceptics and relativists, but you don't have to be a "postmodernist" to be a relativist or sceptic, as non-Bay-Area, mainstream rationalists understand well. If rationalism is to win over "postmodernism", it must win rationally, by being able to demonstrate its superiority. [*] "Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.
2bogus
Not quite. "Poststructuralism" is an ex-post label and many of the thinkers that are most often identified with the emergence of "postmodern" ideas actually rejected it. (Some of them even rejected the whole notion of "postmodernism" as an unhelpful simplification of their actual ideas.) "Continental philosophy" really means the 'old-fashioned' sort of philosophy that Analytic philosophers distanced themselves from; you can certainly view postmodernism as encompassed within continental philosophy, but the notions are quite distinct. Similarly, "critical theory" exists in both 'modernist'/'high modern' and 'postmodern' variants, and one cannot understand the 'postmodern' kind without knowing the 'modern' critical theory it's actually referring to, and quite often criticizing in turn. All of which is to say that, really, it's complicated, and that while describing postmodernism as a "different set of concerns, goals and approaches" may hit significantly closer to the mark than merely caricaturing it as an antithesis to rationality, neither really captures the worthwhile ideas that 'postmodern' thinkers were actually developing, at least when they were at their best. (--See, the big problem with 'continental philosophy' as a whole is that you often get a few exceedingly worthwhile ideas mixed in with heaps of nonsense and confused thinking, and it can be really hard to tell which is which. Postmodernism is no exception here!)
0tristanm
Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth. And the 'goal' of postmodernism is to break apart and criticize everything that claims to be able to do those things. You would be hard pressed to find a better example of something diametrically opposed to rationalism. (I'm going to guess that with high likelihood I'll get accused of not understanding postmodernism by saying that.)

Well yeah, being able to unequivocally define anything is difficult, no argument there. But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things. Then what happens is they get accused by postmodernists of claiming to have the One and Only True and Correct Definition of Truth and Correctness, and of claiming that we have access to the Objective Reality.

The point is that as soon as you allow for any leeway in this at all (any in-between area between an objective reality we have 100% access to and one we have 0% access to), you basically obtain rationalism. Not because the principles it derives from are that there is an objective reality that is possible to Truly Know, or that there are facts that we know to be 100% true, but only that there are sets of claims we have some degree of confidence in, and other sets of claims we might want to calculate a degree of confidence in based on the first set of claims.

It happens to be an attitude that works really well in practice, but the other two attitudes can't actually be used in practice if you were to adhere to them fully. They would only be useful for denying anything that someone else believes. I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict? In probability theory you can have degrees of confidence that are non-zero...
2bogus
The actual ground-level stance is more like: "If you think that you know some sort of objective reality, etc., it is overwhelmingly likely that you're in fact wrong in some way, and being deluded by cached thoughts." This is an eminently rational attitude to take - 'it's not what you don't know that really gets you into trouble, it's what you know for sure that just ain't so.' The rest of your comment has similar problems, so I'm not going to discuss it in depth. Suffice it to say, postmodern thought is far more subtle than you give it credit for.
2tristanm
If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute. It seems like that's setting a very low bar for what postmodernism actually hopes to accomplish.
0bogus
The reason why postmodernism often looks like that superficially is that it specializes in critiquing "gigantic modern philosophical edifice[s]" (emphasis on 'modern'!). It takes a gigantic philosophy to beat a gigantic philosophy, at least in some people's view.
2TheAncientGeek
Citation needed. On the other hand, refraining from condemning others when you have skeletons in your own closet is easy. Engineers use an intuitive and pragmatic definition of truth that allows them to actually do things. Rationalists are more in the philosophy business. For some values of "work". It's possible to argue in detail that predictive power actually doesn't entail correspondence to ultimate reality, for instance. For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't. That's not what I said. There's no such thing as postmodernism and I'm not particularly in favour of it. My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.
0tristanm
Citing it is going to be difficult; even the Stanford Encyclopedia of Philosophy says "That postmodernism is indefinable is a truism." I'm forced to cite philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with. I'm not sure I understand what you're referring to here. That's called lying. You know exactly what I mean when I use that term, otherwise there would be no discussion. It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc. If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense. This is mostly the sense in which I refer to it in my original post. I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?
0TheAncientGeek
To which the glib answer is "that's because it isn't true". Dennett gives a concise definition because he has the same simplistic take on the subject as you. What he is not doing is showing that there is an actual group of people who describe themselves as postmodernists and have those views. The use of the term "postmodernist" is a bad sign: it's a term that works like "infidel" and so on, a label for an outgroup, and an ingroup's views on an outgroup are rarely bedrock reality. When we, the ingroup, can't define something it's OK; when they, the outgroup, can't define something, it shows how bad they are. People are quite psychologically capable of having compartmentalised beliefs; that sort of thing is pretty ubiquitous, which is why I was able to find an example from the rationalist community itself. Relativism without contextualisation probably doesn't make much sense, but who is proposing it? As you surely know, what I mean is that there is no group of people who both call themselves postmodernists and hold the views you are attributing to postmodernists. It's kind of diffuse. But you can talk about scepticism, relativism, etc., if those are the issues. There's some terrible epistemology on the left, and on the right, and even in rationalism. I mean Yudkowsky's approach. Which flies under the flag of Bayesianism, but doesn't make much use of formal Bayesianism.
0Viliam
I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points: if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality. That means that the real risk for rationality is not that everyone will attack it. Once the main political players have all turned against rationality, fighting rationality will become less important for them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things". But if some group of idiots chose "rationality" as their applause light while doing it completely wrong, and everyone else therefore turned against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)
0tristanm
I don't think it's necessary for 'rationality' to be used as an applause light for this to happen. The only things needed, in my mind, are:

  • A group of people who adopt rationality and are instrumentally rationalist become very successful, wealthy and powerful because of it.

  • This group makes up an increasing share of the wealthy and powerful, because they are better at becoming wealthy and powerful than the old elite.

  • The remaining people, who aren't as wealthy or successful or powerful and haven't adopted rationality, make observations about what the successful group does and associate whatever they do / say with the tribal characteristics and culture of the successful group. The fact that they haven't adopted rationality makes them more likely to do this.

And because the final bullet point is always what occurs throughout history, the only difference -- and really the only thing necessary for this to happen -- is that rationalists make up a greater share of the elite over time.
0bogus
Somewhat ironically, this is exactly the sort of cargo-cultish "rationality" that originally led to the emergence of postmodernism, in opposition to it and calling for some much-needed re-evaluation and skepticism around all "cached thoughts". The moral I suppose is that you just can't escape idiocy.
1tristanm
Not exactly. What happened at first was that Marxism - which, in the early 20th century, became the dominant mode of thought for Western intellectuals - was based on rationalist materialism, until it was empirically shown to be wrong by some of the largest social experiments mankind is capable of running. The question for intellectuals who were unwilling to give up Marx after that time was how to save Marxism from empirical reality. The answer to that was postmodernism. You'll find that in most academic departments today, those who identify as Marxists are almost always postmodernists (and you won't find them in economics or political science, but rather in the English, literary criticism and social science departments). Marxists of the rationalist type are pretty much extinct at this point.
2bogus
I broadly agree, but you're basically talking about the dynamics that resulted in postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about - i.e. what people like the often-misunderstood Jacques Derrida were actually trying to say. It's even clearer when you look at Michel Foucault, who was indeed a rather sharp critic of "high modernity", but didn't even consider himself a post-modernist (whereas he's often regarded as one today). Rather, he was investigating pointed questions like "do modern institutions like medicine, psychiatric care and 'scientific' criminology really make us so much better off compared to the past when we lacked these, or is this merely an illusion due to how these institutions work?" And if you ask Robin Hanson today, he will tell you that we're very likely overreliant on medicine, well beyond the point where such reliance actually benefits us.
0Douglas_Knight
So you concede that everyone you're harassing is 100% correct, you just don't want to talk about postmodernism? So fuck off.
0dogiv
This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.
0username2
I agree with your concern, but I think that you shouldn't limit your fear to party-aligned attacks. For example, the Thirty Meter Telescope in Hawaii was delayed by protests from a group of people who are most definitely "liberal" on the "liberal/conservative" spectrum (in fact, "ultra-liberal"). The effect of the protests is definitely significant. While it's debatable how close the TMT came to cancellation, the current plan is to grant no more land to astronomy atop Mauna Kea.
0dogiv
Agreed. There are plenty of liberal views that reject certain scientific evidence for ideological reasons--I'll refrain from examples to avoid getting too political, but it's not a one-sided issue.
0Lumifer
So, do you want to ask the Jews how that theory worked out for them?

Front page being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).

Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze endangered species' seeds or DNA for later resurrection.

9ChristianKl
We do freeze seeds: https://en.wikipedia.org/wiki/Svalbard_Global_Seed_Vault

What is the steelmanned, not-nonsensical interpretation of the phrase "democratize AI"?

8fubarobfusco
One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.
3Lumifer
s/AI/capital/ Now, where have I heard this before...?
4Viliam
And your point is...? From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting against each other, it should be able to not fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, by simply doing things smarter and removing inefficiencies. Unlike the communist bureaucrats who (1) were not superintelligent, and sometimes even not of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so soon the whole system used completely fake data. Not being bound by ideology, if the AI would find out that it is better to leave something to do to humans (quite unlikely IMHO, but let's assume so for the sake of the ar...
0Lumifer
Is it really that difficult to discern? So do you think that if we had real communism, with selfless and competent rulers, it would work just fine? Capital is not just money. You tax, basically, production (= creation of value), and production is not a "benefit of capital". In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly, and that strikes me as a rather bad idea. Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?
0Viliam
You mean this one? For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI... sure. Although calling that "communism" is about as much of a central example as calling the paperclip maximizer scenario "capitalism". Capital is a factor in production, often a very important one. Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And "as always" does not seem like a good argument for Singularity scenarios. Depends on whether you consider the possibility of superintelligent AI to be "realistic".
0Lumifer
That too :-) I am a big fan of this approach. But conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work? In particular, the economy will work? Aaaaand let me quote you yourself from just a sentence back: One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly? Besides, I thought that when Rapture comes...err... I mean, when the Singularity happens, humans will not decide anything any more -- the AI will take over and will make the right decisions for them-- isn't that so?
0gjm
If we're talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it's hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy. (If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you're probably right that that wouldn't suffice.)
0Lumifer
Actually, no, we're (at least, I am) talking about pre-Singularity situations where you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things. Viliam said that the main problem with communism is that the people at the top are (a) incompetent; and (b) corrupt. I don't think that's true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason why a communist economy isn't well-functioning. I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.
2gjm
Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario. (I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)
0Lumifer
I have no idea what this means.
0gjm
It seems you agree with Viliam: see the second paragraph below.
0Lumifer
Right, but I am specifically interested in Viliam's views about the scenario where there is no AI, but we do have honest and competent rulers.
0Viliam
That is completely irrelevant to debates about AI. But anyway, I object against the premise being realistic. Humans run on "corrupted hardware", so even if they start as honest and competent and rational and well-meaning, that usually changes very quickly. In the long term, they also get old and die, so what you would actually need is an honest and competent elite group, able to raise and filter its next generation so that it would be at least equally honest, competent, rational, well-meaning, and skilled at raising and filtering the next generation for the same qualities.

In other words, you would need to have a group of rulers enlightened enough that they are able to impartially and precisely judge whether their competitors are equally good or somewhat better on the relevant criteria, and in such a case would voluntarily transfer their power to the competitors. -- Which goes completely against what evolution teaches us: that if your opponent is better than you, you should use your power to crush him, preferably immediately, while you still have the advantage of power, and before other tribe members notice his superiority and start offering to ally with him against you.

Oh, and this perfect group would also need to be able to overthrow the current power structures and get themselves into the positions of power, without losing any of its qualities in the process. That is, they have to be competent enough to overthrow an opponent with orders of magnitude more power (imagine someone who owns the media and police and army and secret service and can also use illegal methods to kidnap their members, torture them to extract their secrets, and kill them afterwards), without having to compromise on their values. So, in addition, the members of this elite group must have perfect mental resistance against torture and blackmail; and be numerous enough that they can easily replace their fallen brethren and continue with the original plan.

Well... there doesn't seem to be a law of phys...
0gjm
Fair enough; I just wanted to make it explicit that that question has basically nothing to do with anything else in the thread. I mean, Viliam was saying "so it might be a good idea to do such-and-such about superhumanly capable AI" and you came in and said "aha, that kinda pattern-matches to communism. Are you defending communism?" and then said oh, by the way, I'm only interested in communism in the case where there is no superhumanly capable AI. But, well, trolls gonna troll, and you've already said trolling is your preferred mode of political debate.
0Lumifer
Well, the kinda-sorta OP phrased the issue this way: ...and that set the tone for the entire subthread :-P
3fubarobfusco
String substitution isn't truth-preserving; there are some analogies and some disanalogies there.
2bogus
Sure, but capital is a rather vacuous word. It basically means "stuff that might be useful for something". So yes, talking about democratizing AI is a whole lot more meaningful than just saying "y'know, it would be nice if everyone could have more useful stuff that might help em achieve their goals. Man, that's so deeeep... puff", which is what your variant ultimately amounts to!
0Lumifer
Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it's not capital. The $20 bill in your wallet isn't capital either.
0satt
While capital is resources needed for production of value, it's a bit misleading to imply that that's how it's "well-defined" "in economics", since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value. * And sometimes "entrepreneurship", but that's always struck me as a pretty bogus "factor of production" — as economists tacitly admit by omitting it as a variable from their production functions, even though it's as free to vary as labour.
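As a side note, the standard textbook Cobb-Douglas form makes this concrete (a common illustration, not a formula from the comment above): output Y is modeled from capital K and labour L, with no variable set aside for "entrepreneurship":

    % Cobb-Douglas production function; A is total-factor productivity,
    % alpha is capital's output elasticity (0 < alpha < 1).
    % Note: no separate variable for "entrepreneurship".
    Y = A K^{\alpha} L^{1-\alpha}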
0Lumifer
Sure, but that's all Econ 101 territory and LW isn't really a good place to get some education in economics :-/
0g_pepper
The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.
0gjm
None the less, "capital" and "AI" are extremely different in scope and I see no particular reason to think that if "let's do X with capital" turns out to be a bad idea then we can rely on "let's do X with AI" also being a bad idea. In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I'm not sure it's entirely clear), but that hypothetical future is also one so different from the past that past failures of "let's do X with capital" aren't necessarily a good indication of similar future failure.
0bogus
And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency - that makes it "capital" from a strictly individual perspective (indeed, such claims are often called "financial capital"), although it's indeed not real "capital" in an economy-wide sense (because any such claim must be offset by a corresponding liability).
0Lumifer
Sigh. You can, of course, define any word any way you like it, but I have my doubts about the usefulness of such endeavours. Go read.
0qmotus
I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).
2Lumifer
Why do you think one exists?
2moridinamael
I try not to assume that I am smarter than everybody if I can help it, and when there's a clear cluster of really smart people making these noises, I at least want to investigate and see whether I'm mistaken in my presuppositions. To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.
0bogus
Isn't "democratizing smallpox" a fairly widespread practice, starting from the 18th century or so - and one with rather large utility benefits, all things considered? (Or are you laboring under the misapprehension that the kinds of 'AIs' being developed by Google or Facebook are actually dangerous? Because that's quite ridiculous, TBH. It's the sort of thing for which EY and Less Wrong get a bad name in machine-learning- [popularly known as 'AI'] circles.)
1moridinamael
Not under any usual definition of "democratize". Making smallpox accessible to everyone is no one's objective. I wouldn't refer to making smallpox available to highly specialized and vetted labs as "democratizing" it. Google and/or Deepmind explicitly intend on building exactly the type of AI that I would consider dangerous, regardless of whether or not you would consider them to have already done so.
0Lumifer
Links to the noises?
0moridinamael
It's mainly an OpenAI noise, but it's been parroted in many places recently. Definitely seen it in OpenAI materials, and I may have even heard Musk repeat the phrase, but can't find links. Also: YCombinator, which is pretty close to "we don't want only Google and Facebook to have control over smallpox". Microsoft, in the context of its partnership with OpenAI. This is a much more nonstandard interpretation of "democratize". I suppose by this logic, Henry Ford democratized cars?
2Lumifer
Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me. Microsoft means that they want Cortana/Siri/Alexa/Assistant/etc. on every machine and in every home. That's just marketing speak. Both expressions have nothing to do with democracy, of course.
0tristanm
There are other ways that AI research can become a monopoly without any use of patents or purchases of competitors. For example, a fair bit of research can only be done through heavy computing infrastructure. In some sense places like Google will have an advantage no matter how much of their code is open-sourced (and a lot of it is open source already). Another issue is data, which is a type of capital - much unlike money however - where there is a limit to how much value you can extract from it that depends on your computing resources. These are barriers that I think probably can't be lowered even in principle.
0Lumifer
Having advantages in the field of AI research and having a monopoly are very different things. That's not self-evident to me. A fair bit of practical applications (e.g. Siri/Cortana) require a lot of infrastructure. What kind of research can't you do if you have a few terabytes of storage and a couple dozen GPUs? What would a research university be unable to do? Data is an interesting issue. But first, the difference between research and practical applications is relevant again, and second, data control is mostly fought over at the legal/government level.
0tristanm
It's still the case that a lot of problems in AI and data analysis can be broken down into parallel tasks and massively benefit from just having plenty of CPUs/GPUs available. In addition, a lot of the research work at major companies like Google has gone into making sure that the infrastructure advantage is used to the maximum extent possible. But I will grant you that this may not represent an actual monopoly on anything (except perhaps search). Hardware is still easily available to those who can afford it. But in the context of "democratizing AI", I think we should expect that the firms with the most resources will have significant advantages over small startups in the AI space without much capital. If I have a bunch of data I need analyzed, will I want to give that job to a new, untested player who may not even have the infrastructure (depending on how much data I have), or to someone established who I know has the capability and resources? The issue with data isn't so much about control / privacy; it's mainly the fact that if you give me a truckload of a thousand 2 TB hard drives, each containing potentially useful information, there's really not much I can do with it. Now if I happened to have a massive server farm, that would be a different situation. There's a pretty big gulf in value for certain objects depending on my ability to make use of them, and I think data is a good example of those kinds of objects.
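As a minimal sketch of the kind of "embarrassingly parallel" workload described above (all names are illustrative; standard-library Python only), per-chunk statistics scale almost linearly with the number of cores you can afford:

    # Minimal sketch: an analysis job split into independent chunks,
    # mapped across all available CPU cores.
    from multiprocessing import Pool, cpu_count

    def analyze_chunk(chunk):
        # Stand-in for any per-chunk statistic (here, a simple mean).
        return sum(chunk) / len(chunk)

    if __name__ == "__main__":
        # Fake "dataset": 100 independent chunks of 1000 numbers each.
        data = [list(range(i, i + 1000)) for i in range(0, 100000, 1000)]
        with Pool(processes=cpu_count()) as pool:
            results = pool.map(analyze_chunk, data)  # one chunk per worker
        print(len(results), "chunk statistics computed")

Nothing in the sketch is clever; once a problem decomposes this way, the advantage simply goes to whoever can rent or own more workers.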
0Lumifer
So how this is different from, say, manufacturing? Or pretty much any business for the last few centuries?
0tristanm
I think I would update my position here to say that AI is different from manufacturing, in that you can have small-scale manufacturing operations (like 3D printing, as username2 mentioned) that satisfy some niche market, whereas I sort of doubt that there are any niche markets in AI. I've noticed this a lot with "data science" and AI startups - in what way is their product unique? Usually it's not. It's usually a team of highly talented AI researchers and engineers who need to showcase their skills until they get acqui-hired, or they develop a tool that gets really popular for a while and then it also gets bought. You really just don't see "disruption" (in the sense that Peter Thiel defines it) in the AI vertical. And you don't see niches.
0Lumifer
Hold on. Are you talking about niche markets, or are we talking about the capability to do some sort of AI at small-to-medium scale (say, startup to university size)? Um. I don't think the AI vertical exists. And what do you mean about niches? Wouldn't, I dunno, analysis of X-rays be a niche? high-frequency trading another niche? forecasting of fashion trends another niche? etc. etc.
0tristanm
Well, niche markets in AI aren't usually referred to as such; they're usually just companies that do task X with the help of statistics and machine learning. In that sense nearly all technology and finance companies could be considered AI companies. AI in the generalist sense is rare (Numenta, Vicarious, DeepMind), and usually gets absorbed by the bigger companies. In the specialist sense, if task X is already well-known or identified, you still have to go up against the established players who have more data and have people who have been working on only that problem for decades. Thinking more about what YC meant in their "democratize AI" article, it seems they were referring to startups that want to use ML to solve problems that haven't traditionally been solved using ML yet. Or more generally, they want to help tech companies enter markets that usually aren't served by a tech company. That's fine. But I also get the feeling they really mean helping market certain companies by riding the AI / ML hype train even if they don't, strictly speaking, use AI to solve a given task. A lot of "AI" startups just do basic statistical analysis but have a really fancy GUI on top of it.
0tristanm
Well, I don't think it is. If someone said "let's democratize manufacturing" in the same sense as YC, would that sound silly to you?
0Lumifer
Generally speaking, yes, silly, but I can imagine contexts where the word "democratize" is still unfortunate but points to an actual underlying issue -- monopoly and/or excessive power of some company (or e.g. a cartel) over the entire industry.
0username2
No, it would sound like a 3D printing startup (and perfectly reasonable).
1username2
Open sourcing all significant advancements in AI and releasing all code under GNU GPL.
2Viliam
Tiling the whole universe with small copies of GNU GPL, because each nanobot is legally required to contain the full copy. :D
0username2
*GNU AGPL, preferably
0WalterL
"Make multiple AIs that can restrain one another instead of one tyrannical MCP"?

Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. Also, I explicitly ask about activity in the LW community in the survey, so if enough LWers participate I could analyse them as an individual subsample. Would...

0Elo
Look up a group called "The Trading Tribe", by Ed Seykota.

I, for one, welcome our new paperclip Overlord.

Not the first criticism of the Singularity, and certainly not the last. I found this on reddit, just curious what the response will be here:

"I am taking up a subject at university, called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities that... (read more)

I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.

4knb
Like a lot of reddit posts, it seems like it was written by a slightly-precocious teenager. I'm not much of a singularity believer, but the case is very weak. "Declining Energy Returns" is based on the false idea that civilization requires exponential increases in energy input, which has been wrong for decades. Per-capita energy consumption has been stagnant in the first world for decades, and most of these countries have stagnant or declining populations. Focusing on EROI and "quality" of oil produced is a mistake. We don't lack for sources of energy; the whole basis of the peak-oil collapse theory was that other energy sources can't replace oil's vital role as a transport fuel. "Economic feasability" is a non sequitur, concerned with whether gains from technology will go only to the rich, not relevant to whether or not it will happen. "Political resistance and corruption" starts out badly, as the commenter apparently believes in the really dumb idea that electric cars have always been a viable competitor to internal combustion but the idea was suppressed by some kind of conspiracy. If you know anything about the engineering it took to make electric cars semi-viable competitors to ICE, the idea is obviously wrong. Even without getting into the technical aspect, there are lots of countries which had independent car industries and a strong incentive to get off oil (e.g. Germany and Japan before and during WW2).
0dglukhov
This seems relevant. These statistics do not support your claim that energy consumption per capita has been stagnant. Did I miss something? Perhaps you're referring strictly to stagnation in per-capita use of fossil fuels? Do you have different sources of support? After all, this is merely one data point. I'm not particularly sure where I stand with regard to the OP; part of the reason I brought it up was because this post sorely needed evidence to be brought to the table, none of which I see. I suppose this lack of support gives a reader the impression of naiveté, but I was hoping members here would clarify with their own, founded claims. Thank you for the debunks; I'm sure there's plenty of literature to link to as such, which is exactly what I'm after. The engineering behind electric cars, and perhaps its history, will be a topic I'll be investigating myself in a bit. If you have any preferred sources for teaching purposes, I'd love a link.
3knb
Yep, your link is for world energy use per capita; my claim is that it was stagnant for the first world. E.g. in the US it peaked in 1978 and has since declined by about a fifth. The developed world is more relevant because that's where cutting-edge research and technological advancement happen. Edit: here's a graph from the source you provided showing the energy consumption history of the main developed countries, all of which follow the same pattern. I don't really have a single link to sum up the difference between engineering an ICE car with adequate range and refuel time and a battery-electric vehicle with comparable range/recharge time. If you're really interested I would suggest reading about the early history of motor vehicles and then reading about the decades-long development history of lithium-ion batteries before they became a viable product.
4ChristianKl
It seems to me like a long essay for a reasonable position, written by someone who doesn't make a good case. Solar does get exponentially cheaper, at a rate of doubling efficiency every 7 years. It's a valid answer to the question of where the energy will come from if the timeline is long enough. The article gives the impression that the poor in the third world stay poor. That's a popular misconception, and in reality the fight against global poverty is making real progress. Far more than the top 20% of this planet have mobile phones. Most people benefit from technologies like smartphones. The "planned obsolescence" conspiracy-theory narrative also doesn't really help with understanding how technology gets deployed.
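Taking the commenter's 7-year doubling figure at face value (a quick arithmetic check, not a sourced projection), the implied compounding is:

    % compound improvement factor implied by a 7-year doubling time
    I(t) = 2^{t/7}
    % e.g. over 21 years: I(21) = 2^{21/7} = 2^{3} = 8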
0dglukhov
I wouldn't cherry-pick one technological example and make a case from it for the rest of the available technological advancements being conducive to closing the financial gap between people. Tech provides for industry, industry provides for shareholders, shareholders provide for themselves (here's one data point in a field of research exploring the seemingly direct relationship between excess resource acquisition and antisocial tendencies; I will work on finding more, if any). I am necessarily glossing over the extraneous details, but since the corporate incentive system provides for a whole host of advantages, and since it has power over top-level governments (lobbying success statistics come to mind), this incentive system is necessarily prevalent and of major interest when tech advances are the topic of discussion. Those with power get tech benefits first; if any benefits exist beyond that point, fantastic. If not, the obsolescence conspiracy seems the likely next scenario. I have no awareness of an incentive system that dictates that those with money and power must provide for everyone else. If there were one, I wouldn't be the only one unaware of it, since clearly the OP isn't aware of such a thing either. Are there any technological advancements you can think of that necessarily trickle down the socio-economic scale and help the poorest of the poor? My first idea would be agricultural advancements, but then I'd have to go and collect statistics on rates of food acquisition for the poorest subset of the world population, with maybe a start in the world census data for agriculture, which may not even have the data I'd need. Any ideas of your own?
1ChristianKl
That sentence is interesting. The thing I care about is improving the lives of the poor. If you look at Bill Gates and Warren Buffett, they see purpose in helping the poor. In general, employing poor people to do something for you and paying them a wage is also a classic way poor people get helped. The great thing about smartphones is that they allow software to be distributed with little cost for additional copies. Having a smartphone means that you can use Duolingo to learn English for free. We are quite successful in reducing the numbers of the poorest of the poor. We have reduced them both in relative and in absolute numbers. It's debatable how much of that is due to new technology and how much is through other factors, but we now have fewer people in extreme poverty.
0dglukhov
I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it. Both of these examples hold about $80 billion in net worth; these are paltry numbers compared to the amount of money circulating in the world today, with GDP estimates around $74 trillion. I am therefore still unaware of an incentive system that helps the poor, and will be until I see the majority of this amount of money being circulated and distributed in the manner Gates and Buffett propose.

Agreed, and unfortunately utilizing a smartphone to its full benefit isn't necessarily obvious to somebody poor. While one could use it to learn English for free, they could also use it inadvertently as an advertising platform, with firms soliciting sales from the user, or just as a means of contact with others willing to stay in contact with them (other poor people, most likely). A smartphone would be an example of a technology that managed to trickle down the socio-economic ladder and help poor people, but it can do harm as well as good, or have no effect at all.

Please show me these statistics. Are they adjusted for and normalized relative to population increase? A cursory search gave me contradictory statistics: http://www.statisticbrain.com/world-poverty-statistics/ I'd like to know where you get such sources, because a growing income gap between rich and poor necessarily implies one of three things: the rich are getting richer, the poor are getting poorer, or both. Note: are we discussing relative poverty, or absolute poverty? I'd like to keep it to absolute poverty, since meeting basic human needs is a solid baseline as long as you trust nutritional data sources and research with regards to health. If you do not trust our current understanding of human health, then relative poverty is probably the better topic to discuss.

EDIT: found something to support your conclusion; the first chart shows the decrease of the population of people in the l...
0ChristianKl
When basic needs are fulfilled, many humans tend to want to satisfy needs around contributing to making the world a better place. It's a basic psychological mechanism.
0dglukhov
This completely ignores my previous point. A few people who managed to self-actualize within the current global economic system will not change that system. As I previously mentioned, I am not interested in outliers, but rather systematic trends in economic behavior.
0ChristianKl
Bill Gates and Warren Buffett aren't only outliers in respect to donating but also in being the most wealthy people. Both of them basically believe that it makes more sense to use their fortunes for the public good than to leave them to their children. To the extent that this belief spreads (and it does, with the Giving Pledge), you see more money being used this way.
0ChristianKl
The ability to stay in contact with other poor people is valuable. If you can send the person in the next village a message, you don't have to walk there to communicate with them. What have the millennium development goals achieved?
0dglukhov
It is also dangerous: people are unpredictable and, as with my point about phones, can cause good, harm, or nothing at all. A phone is not inherently, intrinsically good; it merely serves as a platform for any number of things, good, bad, or neutral. I hope this initiative continues to make progress and that policy doesn't suddenly turn upside-down anytime soon. Then again, Trump is president, Brexit is a possibility, and economic collapse is an ever-looming threat.
0ChristianKl
That's similar to saying that a car is not intrinsically good. Both technologies enable a lot of other actions.
0dglukhov
Cars also directly involve people in motor vehicle accidents, one of the leading causes of death in the developed world. Cars, and motor vehicles in general, also contribute to an increasingly alarming concentration of emissions in the atmosphere, with adverse effects to follow, most notably global warming. My point still stands: a technology is only inherently good if it solves more problems than it causes, with each problem weighted by its impact on the world.
0Elo
Cars are net positive. Edit: ignoring global warming because it's really hard to quantify. Just comparing deaths to global productivity increase because of cars. Cars are a net positive. Edit 2: Clarification - it's hard to quantify the direct relationship of cars to global warming. Duh there's a relationship, but I really don't want to have a debate here. Ignoring that factor for a moment, net value of productivity of cars vs productivity lost by some deaths. Yea. Let's compare that.
0dglukhov
It is easy to illustrate that carbon dioxide, the major byproduct of the internal combustion engines found in most car models today, contributes directly to global warming. If you look at this graph, you'll notice that solar radiation spans a large range of wavelengths of light. The atmosphere absorbs some of these wavelengths, depending on its chemical composition, and passes others; a portion of the UV that gets through is what most commercial sunscreens are designed to block. Different chemicals have different ranges of wavelengths that can excite their stable forms. Carbon dioxide, as it turns out, absorbs in the IR range, in the band around wavenumber 2351 cm^-1. When carbon dioxide absorbs such light, the molecule is excited into vibration, and that energy is then dissipated as heat. This is why carbon dioxide is considered a greenhouse gas: it absorbs radiant energy (notably the infrared re-emitted by the sun-warmed surface of the Earth) as input and dissipates it as heat after vibrational excitation.

The amount of carbon dioxide in the atmosphere today far exceeds anything in the long natural record we can sample. There are, of course, natural fluctuations of these levels up and down (driven by natural carbon-fixing processes), but the overall trend is distinct, obvious, and significant. We are putting more carbon dioxide into the atmosphere through our combustion processes than the earth can fix out of it.

The relationship has been quantified already. Please understand, there is absolutely no need to obscure this debate with claims that the relationship is hard to quantify. It is not; it has been done, and the body of research surrounding this topic is quite robust, much like the body of research around CFCs. I will not stand idly by while people continue to misunderstand the situation. Your urge to ignore this fact
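(A quick back-of-the-envelope check of the band named above, as a minimal Python sketch; scipy is assumed only for the physical constants.)

    # Convert the CO2 absorption band at wavenumber 2351 cm^-1 into
    # wavelength and photon energy, confirming it sits in the mid-infrared.
    from scipy.constants import h, c

    nu_tilde = 2351 * 100        # wavenumber in m^-1 (2351 cm^-1)
    wavelength = 1 / nu_tilde    # ~4.25e-6 m, i.e. ~4.3 micrometres: mid-IR
    energy = h * c * nu_tilde    # photon energy E = h*c*nu_tilde, ~4.7e-20 J

    print(wavelength, energy)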
0Lumifer
Actually, not that easy because the greenhouse effect is dominated by water vapor. CO2 certainly is a greenhouse gas and certainly contributes to global warming, but the explanation is somewhat more complicated than you make it out to be. This is not true. Demonstrate, please.
0dglukhov
According to what sources, and how did they verify it? Do you distrust the sampling techniques used to gather data on carbon dioxide levels before recorded history? What more could you possibly need? I just showed you evidence pointing to an unnatural amount of carbon dioxide in the atmosphere. Disturb that balance and you cause warming. This cascades into heavier rainfall and higher levels of water vapor and other greenhouse gases, and you get a sort of runaway reaction.
0Lumifer
Will Wikipedia suffice? You did use the word "quantify", did you not? Do you know what it means?
0dglukhov
Putting data on the table to back up claims. Back up your idea of what is going on in the world with observations, notably observations you can put a number on.
0Lumifer
Turns out you don't know. The word means expressing your claims in numbers and, by itself, does not imply support by data. Usually "quantifying" is tightly coupled to being precise about your claims.
0dglukhov
Usually "quantifying" is tightly coupled to being precise about your claims. I'm confused. You wouldn't have claims to make before seeing the numbers in the first place. You communicate this claim to another, they ask you why, you show them the numbers. That's the typical process of events I'm used to, how is it wrong?
0Lumifer
LOL. Are you quite sure this is how humans work? :-) I want you to quantify the claim, not the evidence for the claim.
0dglukhov
They don't; that's something you train yourself to do. Why? Are you asking me to write out my interpretation of the evidence as a mathematical model instead of a sentence in English?
0Lumifer
Not evidence. I want you to make a precise claim. For example, "because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event" is a not-quantified claim. It's not precise enough to be falsifiable (which is how a lot of people like it, but that's a tangent). A quantified equivalent would be something along the lines of "We expect the increase in atmospheric CO2 from 300 to 400 ppmv to lead to the increase of the average global temperature by X degrees spread over the period of Z years so that we forecast the average temperature in the year YYYY as measured by a particular method M to be T with the standard error of E". Note that this is all claim, no evidence (and not a model, either).
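(A minimal sketch of what checking such a quantified claim would look like; the temperature, standard error, and 3-sigma cutoff below are all made-up, hypothetical numbers.)

    # Hypothetical quantified claim: "temperature in year YYYY, measured by
    # method M, will be T with standard error E". Checking it against an
    # observation is then mechanical:
    forecast_T = 15.4      # claimed value T (hypothetical)
    stderr_E = 0.2         # claimed standard error E (hypothetical)
    observed_T = 16.1      # what method M actually measured (hypothetical)

    z = abs(observed_T - forecast_T) / stderr_E  # 3.5 standard errors off
    falsified = z > 3      # by, say, a 3-sigma criterion, the claim fails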
0Good_Burning_Plastic
Yes it is. For example, if CO2 concentrations and/or global temperatures went down by much more than the measurement uncertainties, the claim would be falsified.
0Lumifer
I said: The claim doesn't mention any measurement uncertainties. Moreover, the actual claim is "CO2 cascades into a warming event" and, y'know, it's just an event. Maybe it's an event with a tiny magnitude, maybe another event happens which counterbalances the CO2 effect, maybe the event ends, who knows...
0Good_Burning_Plastic
That's why I said "much more". If I claimed "X is greater than Y" and it turned out that X = 15±1 and Y = 47±1, would my claim not be falsified because it didn't mention measurement uncertainties?
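(The same point in numbers, a minimal sketch: with X = 15±1 and Y = 47±1, the claim "X > Y" misses by far more than the combined uncertainty.)

    from math import sqrt

    X, sigma_X = 15.0, 1.0
    Y, sigma_Y = 47.0, 1.0

    diff = X - Y                                 # -32
    sigma_diff = sqrt(sigma_X**2 + sigma_Y**2)   # ~1.41: uncertainties add in quadrature
    n_sigma = abs(diff) / sigma_diff             # ~22.6 standard deviations: falsified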
0dglukhov
Well, at this point I'd concede it's not easy to make a claim that meets the standard of such an example. I'll see what I can do.
0Lumifer
The general test is whether the claim is precise enough to be falsifiable -- is there an outcome (or a set of data, etc) which will unambiguously prove that claim to be wrong, with no wiggle room to back out? And, by the way, IPCC reports are, of course, full of quantified claims like the one I mentioned. There might be concerns with data quality, model errors, overconfidence in the results, etc. etc, but the claims are well-quantified.
0dglukhov
That is fair, so why was the claim that cars are a net positive not nearly as thoroughly scrutinized as my counterargument? I can't help but notice some favoritism here... Was such an analysis done? Recently? Is it such common knowledge that nobody bothered to refute it? Edit: my imagination only stretches far enough to see climate change as the only heavy counterargument to the virtue of cars. Anything else seems relatively minor, e.g. deaths from motor accidents, etc.
0Lumifer
Because there is a significant prior to overcome. Whenever people get sufficiently wealthy, they start buying cars. Happened in the West, happened in China, Russia, India, etc. etc. Everywhere. And powers-that-be are fine with that. So to assert that cars are a net negative you need to assert that everyone is wrong.
0dglukhov
Just out of curiosity, what is your stance on the impact of cars on climate change? And if cars are too narrow, then what is your stance on fossil fuel consumption and its impact on climate change? You linked to parts of the debate I've never been exposed to, so I'm curious whether there's more to know.
0Lumifer
tl;dr It's complicated :-)

Generally speaking, the issue of global warming is decomposable into several questions with potentially different answers. E.g.:

* Have we observed general warming throughout the XX and early XXI century? That's a question about facts and can be answered relatively easily.

* Does emitting very large amounts of CO2 into the atmosphere affect climate? That's a question about a scientific theory and by now it's relatively uncontested as well (note: quantifying the impact of CO2 on climate is a different thing. For now the issue is whether such an impact exists).

* Are there other factors affecting climate on decade- and century-scales? Also a question about scientific theories and again the accepted answer is "yes", but quantifying the impact (or agreeing on a fixed set of such factors) is not so simple.

* What do we expect the global temperatures to be in 20/50/100 years under certain assumptions about the rate of CO2 emissions? Ah, here we enter the realm of models and forecasts. Note: these are not facts. Also note that here the "complicated" part becomes "really complicated". For myself, I'll just point out that I distrust the confidence put by many people into these models and the forecasts they produce. By the way, there are a LOT of these models.

* What consequences of our temperature forecasts do we anticipate? Forecasting these consequences is harder than forecasting temperatures, since these consequences are conditional on temperature forecasts. Some things here are not very controversial (it's unlikely that glaciers will stop retreating), some are (will hurricanes become weaker? stronger? more frequent? Umm....)

* What should we do in response to global warming? At this point we actually leave the realm of science and enter the world of "should". For some reason many climate scientists decided that they are experts in economics and politics and so know what the response should be. Unfortunately for them, it's not a sc
0gjm
What inference are you expecting readers to draw from that? Inferences I draw from it:

1. Looks like researchers are checking one another's results, developing models that probe different features, improving models as time goes on, etc.; those are all good things.

2. It would be good to know how well these models agree with one another on what questions. I don't know how well they do, but I'm pretty sure that if the answer were "they disagree wildly on central issues" then the denier/skeptic[1] camp would be shouting it from the rooftops. Unless, I guess, the disagreement were because some models predict much worse futures than currently expected. So my guess is that although there are such a lot of models, they pretty much all agree that e.g. under business-as-usual assumptions we can expect quite a lot more warming over the coming decades.

[1] I don't know of any good terms that indicate without value judgement that a given person does or doesn't broadly agree that global warming is real, substantially caused by human activities, and likely to continue in the future.
0Lumifer
That there is no single forecast that "the science" has converged on and that everyone agrees will happen. If you're curious, IPCC reports will provide you with lots of data. A nice example of a non-quantified claim :-P "Sceptic" implies value judgement? I thought being a sceptic was a good thing, certainly better than being credulous or gullible.
0gjm
I confess it never occurred to me that anyone would ever think such a thing. (Other than when reading what you wrote, when I thought "surely he can't be attacking such a strawman".) I mean, I bet there are people who think that or say it. Probably when someone says "we expect 2 degrees of warming by such-and-such a date unless something changes radically" many naive listeners take it to mean that all models agree on exactly 2 degrees of warming by exactly that date. But people seriously claiming that all the models agree on a single forecast? Even halfway-clueful people seriously believing it? Really?

Do please show us all the quantified claims you have made about global warming, so that we can compare. (I can remember ... precisely none, ever. But perhaps I just missed them.) Not that I think there's anything very bad about non-quantified claims -- else I wouldn't go on making them, just like everyone else does. I simply think you're being disingenuous in complaining when other people make such claims, while avoiding making any definite claims to speak of yourself and leaving the ones you do make non-quantified. From the great-grandparent of this comment I quote: "relatively easily", "relatively uncontested", "really complicated", "I distrust the confidence ...", "a LOT of these models", "not very controversial", (implicitly in the same sentence) "very controversial".

But, since you ask: I think it is broadly agreed (among actual climate scientists) that business as usual will probably (let's say p=0.75 or thereabouts) mean that by 2100 global mean surface temperature will be at least about 2 degrees C above what it was before 1900. (Relative to the same baseline, we're currently somewhere around 0.9 degrees up from then.)

I paraphrase: "How silly to suggest that 'sceptic' implies a value judgement. It implies a positive value judgement." Being a skeptic is a good thing. I was deliberately using one word that suggests a positive judgement ("skeptic") alongsid
0Lumifer
I recommend glancing at some popular press. There's "scientific consensus", dontcha know? No need to mention specific numbers, but all right-thinking men, err... persons know that Something Must Be Done. Think of the children! Is this a competition? I don't feel the need to make quantified claims because I'm not asking anyone to reduce their carbon footprint or introduce carbon taxes, or destroy their incandescent bulbs, or tar-and-feather coal companies... Let me quote you some Richard Feynman: "I have approximate answers and possible beliefs and different degrees of uncertainty about different things, but I am not absolutely sure of anything". It is to me and, seems like, to you. I know people who think otherwise: a sceptic is a malcontent, a troublemaker who's never satisfied, one who distrusts what honest people tell him.
0gjm
Yeah, there's all kinds of crap in the popular press. That's why I generally don't pay much attention to it. Anyway, what do the deficiencies of the popular press have to do with the discussion here? No, it's a demonstration of your insincerity. Status quo bias. In (implicitly) asking us not to put effort into reducing carbon footprints, introducing carbon taxes, etc., etc., you are asking us (or our descendants) to accept whatever consequences that may have for the future. I fail to see why causing short-term inconvenience should require quantified claims, but not causing long-term possible disaster. (I am all in favour of the attitude Feynman describes. It is mine too. If there is any actual connection between that and our discussion, other than that you are turning on the applause lights, I fail to see it.) Sure. So what?
0Lumifer
Because my original conversation was with a guy who, evidently, picked up some of his ideas about global warming from there. LOL. I am also implicitly asking not to stop sex-slave trafficking, not to prevent starvation somewhere in Africa, and not to thwart child abuse. A right monster am I! In any case, I could not fail to notice certain... rigidities in your mind with respect to certain topics. Perhaps it will be better if I tap out.
0gjm
Er, no. I'm sorry to hear that. I would gently suggest that you consider the possibility that the rigidity may not be where you think it is, but I doubt there's much point.
0Elo
Fourth clarification: IT IS HARD TO QUANTIFY THE EXACT PROPORTION OF GLOBAL WARMING THAT IS CAUSED BY CARS AS OPPOSED TO OTHER SOURCES OF GLOBAL WARMING, SAY EVERY OTHER REASON THAT CARBON DIOXIDE ENDS UP IN THE ATMOSPHERE, AND, AS AN ABSTRACTION FROM THAT, HOW MUCH OF GLOBAL WARMING IS LITERALLY CAUSED BY CARS AND THEREFORE HOW MUCH DAMAGE TO PRODUCTIVITY CARS CAUSE BY MAKING GLOBAL WARMING SOME FRACTION HIGHER THAN IT WOULD OTHERWISE HAVE BEEN. Tapping out.
0dglukhov
Oh really? Since when? Edit: Just in case you weren't convinced. If you go into the sampling and analysis specifics, the chemistry is sound. There are a few assumptions made, as with any data sampling technique, but if you want to dispute such details, you may as well take your objection up with those technical specifics. Otherwise, I don't see where your claim holds; this is one of the better-documented global disputes (which makes sense, since so much is at stake, both for the industry involved and in the alleged consequences of climate change). I can say that a global productivity increase doesn't mean anything if it cannot be sustained.
0Lumifer
Please illustrate, then. What is the net cost of "cars and motor vehicles in general" with respect to their "emissions into the atmosphere"? Use numbers and show your work.
0dglukhov
Okay, consider this an IOU for a future post with an analysis. I'm assuming you'd want an analysis of emissions relative to automobile use, correct? Wouldn't an emissions analysis based on fossil fuel consumption in general be more comprehensive? Edit: In the meantime, reading this analysis that's already been done may help establish a better understanding of how emissions costs get quantified. Also please understand that what you're asking for is something whole analytical chemistry organizations spend vast amounts of their funding doing. To say that I alone could provide something anywhere close to the quality provided by these organizations would be quite a bit of hubris on my part. That said, my true rejection of Elo's comment wasn't about whether global warming is hard to quantify. My true rejection is that it seems entirely careless to discard global warming from a discussion of the virtue (or lack thereof) of motor vehicles and other forms of transportation.
0Lumifer
We are talking about the cost-benefit analysis of cars and similar motor vehicles (let's define them as anything that moves and has an internal combustion engine). Your point seems to be that cars are not net beneficial -- is that so? A weaker claim -- that cars have costs and not only benefits -- is obvious and I don't think anyone would argue with it. In particular, you pointed out that some of the costs involved have to do with global warming and -- this is the iffy part -- that this cost is easy to quantify. Since I think that such cost would be very-difficult-to-impossible to quantify, I'm curious about your approach. Your link is to an uncritical Gish Gallop ("literature review" might be a more charitable characterization) through all the studies which said something on the topic. Re update: Cost is an economics question. Analytical chemistry is remarkably ill-equipped to answer such questions. As to "careless to discard global warming", well, I believe Elo's point was that it's hard to say anything definite about the costs of cars in this respect (keep in mind, for example, that humans do need transportation so in your alternate history where internal-combustion-engine motor vehicles don't exist or are illegal, what replaces them?)
0dglukhov
Analytical chemistry is well equipped to acquire the data showing, definitively, that global warming is caused by emissions. To go further and say that we cannot use these facts to decide whether the automotive infrastructure is worth augmenting, because a cost-benefit analysis is too hard in light of the potential costs of global warming and air pollution, is careless. Coastal flooding is a major cost (with rising oceans), as are extreme weather patterns (the recent flooding in Peru comes to mind), as well as the inevitable mass migrations (or deaths) resulting from these phenomena. I'm not aware of precise figures, but this is a start. Note that I'm not asking for a replacement of motor vehicles (although electric cars come to mind); I am asking for augmentation. Why take the risk?
0Lumifer
I was not aware that analytical chemists make climate models and causal models, too... You are confused about tenses. Coastal flooding, etc. (note the present tense) is not a major cost. Coastal flooding might become a cost in the future, but that is a forecast. Forecasts are different from facts. Electric batteries do not produce energy, they merely store it. If the energy to charge those batteries comes from fossil fuels, nothing changes. What do you mean by augmentation?
1dglukhov
They can, though the people who came up with the infrared spectroscopy technique may not have been analytical chemists by trade. Mostly physicists, I believe. Why is this relevant? Because the same reason infrared spectroscopy works is also a reason why emissions cause warming. Coastal flooding damages infrastructure built on said coasts (unless that infrastructure was designed to mitigate the damage). That is a fact. I don't see what the problem is here. Agreed, so let me rephrase. Solar energy comes to mind. Given enough time, solar panels built using tools and manpower powered by fossil fuels will eventually outproduce the energy spent to build them. This does change things if that energy can then be stored, transferred, and used for transportation purposes, since our current infrastructure still relies on such transportation technology. This is what I mean by augmentation: change the current infrastructure to support and accept renewable energy sources over fossil fuels. We cannot do this globally yet, though some regions have managed to beat those odds.
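(A toy version of the energy-payback arithmetic implied above; every number here is hypothetical, for illustration only.)

    # Energy payback: how long before a panel has produced as much
    # energy as it took to manufacture it.
    embodied_energy_kwh = 2000.0   # energy to build the panel (hypothetical)
    annual_output_kwh = 500.0      # energy produced per year (hypothetical)
    lifetime_years = 25.0          # service life (hypothetical)

    payback_years = embodied_energy_kwh / annual_output_kwh   # 4 years
    net_energy = annual_output_kwh * lifetime_years - embodied_energy_kwh
    # 10500 kWh net: the panel eventually outproduces the energy spent to build it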
0Lumifer
You are confused between showing that CO2 is a greenhouse gas and developing climate models of the planet Earth. Yes, but coastal flooding is a permanent feature of building on coasts. Your point was that coastal flooding (and mass migrations and deaths) are (note: present tense) the result of global warming. This is (note: present tense) not true. There are people who say that this will become (note: future tense) true, but these people are making a forecast. At which point we are talking about the whole energy infrastructure of the society and not about the costs of cars.
0dglukhov
What other inferential steps does a person need to be shown to tell them that because CO2 is a greenhouse gas, and because there's a lot more of it around than there used to be, that CO2 cascades into a warming event? The recent weather anomalies hitting earth imply the future is here. Indeed, so why not debate at the meta-level of the infrastructure, and see where the results of that debate lead in terms of their impact on the automotive industry? It is a massive industry, worth trillions of dollars globally; any impacts on energy infrastructure will have lasting impacts on the automotive industry.
0Lumifer
Look up the disagreement between two chaps, Svante Arrhenius and Knut Ångström :-) Here is the argument against your position (there is a counter-argument to it, too). Like the remarkable hurricane drought in North America? Or are you going to actually argue that weather is climate? Sure, but it's a different debate.
0dglukhov
What was his counter-argument? I can't read German. Well, clearly we need to establish a time range. Most sources for weather and temperature records I've seen span a couple of centuries. Is that not a range large enough to talk about climate instead of weather? It's a related debate, especially relevant if the conclusions in the debate one meta-level lower are unenlightened.
0Lumifer
Here
0dglukhov
Noted, edited. The description of the link is entirely unfair. It provides a (relatively) short summary of the language of the debate, as well as a slew of data points to review. To frame the source as you describe it is an exercise in poisoning the well.
0Lumifer
The source is a one-guy organization which doesn't even pretend it's unbiased.
0dglukhov
Ironic, since you just asked me to do my own analysis of the subject, yet you are unwilling to read what the "one-guy organization" has to say about it. The merits (or lack thereof) of said organization have nothing to do with how true or false the source is. This is ad hominem.
0Lumifer
I glanced at your source. The size is relevant because you told me that... and the lack of bias (or lack of lack) does have much to do with how one treats sources of information.
0dglukhov
If you'd filter out a one-man firm as a source not worth reading, you'd filter out any attempt at an analysis on my part as well. I am concerned about quality here, not so much about who sources come from. That, necessarily, requires more than just a glance at the material.
0Lumifer
It's called a survival instinct.
0dglukhov
Good luck coalescing that into any meaningful level of resistance. History shows that leaders haven't been very kind to revolutions, and the success rate of such movements isn't necessarily high given the technical limitations. I say this only because I'm seeing a slow tendency towards an absolution of leader-replacement strategies and sentiments.
0Lumifer
Resistance on whose part to what? Revolutions haven't been very kind to leaders, too -- that's the point. When the proles have nothing to lose but their chains, they get restless :-/ ...absolution?
2Viliam
Is this empirically true? I am not an expert, but it seems to me that many revolutions are caused not by consistent suffering -- which makes people adjust to the "new normal" -- but rather by situations where the quality of life increases a bit -- which gives people expectations of improvement -- and then either fails to increase further, or even falls back a bit. That is when people explode. A child doesn't throw a tantrum because she never had a chocolate, but she will if you give her one piece and then take away the remaining ones.
0Lumifer
The issue is not the level of suffering; the issue is what you have to lose. What's the downside to burning the whole system to the ground? If not much, well, why not? The middle class doesn't explode. Arguably that's the reason why revolutions (and popular uprisings) in the West have become much rarer than, say, a couple of hundred years ago.
5gjm
The American revolution seems to have been a pretty middle-class affair. The Czech(oslovakian) "Velvet Revolution" and the Estonian "Singing Revolution" too, I think. [EDITED to add:] In so far as there can be said to be a middle class in a communist state.
0Lumifer
Yeah, Eastern Europe / Russia is an interesting case. First, as you mention, it's unclear to what degree we can speak of a middle class there during Soviet times. Second, some "revolutions" there were velvet primarily because the previous power structures essentially imploded, leaving a vacuum in their place -- there was no one to fight. However, not all of them were, and the notable post-Soviet power struggle in Ukraine (the "orange revolution") was protracted and somewhat violent. So... it's complicated? X-)
2Viliam
More precisely, it is what you believe you have to lose. And humans seem to have a cognitive bias of taking all advantages of the current situation for granted, if those advantages have existed for at least a decade. So when people see more options, they are going to think: "Worst case, we fail and everything stays as it is now. Best case, everything improves. We just have to try." Then they sometimes get surprised, for example when millions of them starve to death, learning too late that they actually had something to lose. In some sense, Brexit or Trump are revolutions converted by the mechanism of democracy into mere dramatic elections. People participating in them seem to have the "we have nothing to lose" mentality. I am not saying they are going to lose something as a consequence, only that the possibility of such an outcome certainly exists. I wouldn't bother trying to convince them of that, though.
0MaryCh
(Yes it does.)
0dglukhov
Resistance of those without resources against those with amassed resources. We can call them rich vs. poor, leaders vs. followers, advantaged vs. disadvantaged. The advantaged groups tend to be characteristically small, the disadvantaged large. Restlessness is useless when it is condensed and exploited to empower those doing the chaining. For example, rebellion is an easily bought commercial product, a socially/tribally recognized garb you can wear. It is far easier to look the part of a revolutionary than to actually do anything that could potentially defy the oppressive regime you might be a part of. There are other examples, which leads me to my next point.

It would be in leaders' best interest to optimize for a situation where rebellion can never arise; that is the single threat any self-interested leader who wants to continue their reign needs to worry about. Whether it involves mass surveillance, economic manipulation, or simply despotic control is largely irrelevant; the idea behind them is what counts. Now, when you bring up the subject of technology, any smart leader with a stake in the length of their reign will immediately seize any opportunity to extend it. Set up a situation that creates technology which necessarily mitigates the potential for rebellion to arise, and you get to rule longer.

This is a theoretical scenario. It is a scary one, and the prevalence of conspiracy theories arising from such a theory simply plays to biases founded in fear. And of course, with bias comes the inevitable rationalist backlash to such ideas. But I'm not interested in this political discourse; I just want to highlight something. The scenario establishes an optimization process: optimization for control. It is always more advantageous for a leader to worry about their reign and extend it than to be benevolent, a sort of tragedy of the commons for leaders. The natural in-system solution for this optimization problem is to eliminate all potential sources of
0Lumifer
And when it's not? Consider Ukraine. Or, if you want to go a bit further back in time, the whole collapse of the USSR and its satellites. I don't see why. It is advantageous for a leader to have satisfied and therefore complacent subjects. Benevolence can be a good tool.
0dglukhov
Outcompeted by economic superpowers. Purge people all you want; if there are advantages to being integrated into the world economic system, the people who explicitly leave will suffer the consequences. China did not choose such a fate, but neither is it rebelling.

Benevolence is expensive. You will always have an advantage in paying your direct subordinates (generals, bankers, policy-makers, etc.) rather than the bottom rung of the economic ladder. If you endorse those who cannot keep you in power, those who would normally keep you in power will simply choose a different leader (one who's probably going to endorse them more than you do). Of course, your subordinates inevitably face the exact same problem, and chances are they too will optimize by supporting those who can keep them in power. There is no in-system incentive to be benevolent. You could argue a traditional republic tries to circumvent this by empowering those at the bottom to work better (which has no choice but to improve living conditions), but the amount of uncertainty for the leader increases, and leaders in such a system do not enjoy extended reigns. To optimize this away, you absolve rebellious sentiment: convince your working populace that they are happy (whether they're happy or not), and your rebellion problem is gone. There is, therefore, still no in-system incentive to be benevolent (that is just a Band-Aid); the true incentive is to get rid of uncertainty about the loyalty of your subordinates.

Side-note: analysis of the human mind scares me in a way. Knowing precisely how to manipulate the human mind makes this goal much easier to attain. For example, take any data analytics firm that sells its services for marketing purposes. They can collaborate with social media companies such as facebook (which currently has over 1.7 billion active monthly users as data points, though perhaps more since this is old data), where you freely give away your person

I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI

2ChristianKl
That sounds naive and gives the impression that you haven't taken the time to understand the AI risk concerns. You provide no arguments besides the fact that you don't see the problem of AI risk. The prevailing wisdom in this community is that most AGI designs are going to be unsafe, and a lot of the unsafety isn't obvious beforehand. There's the belief that if the value alignment problem isn't solved before human-level AGI arrives, that means the end of humanity.
1dogiv
The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced. Your argument rests on the proposition that current research on AI is so specific that its contribution toward human-level AI is very small, so small that the modest efforts of EAs (compared to all the massive corporations working on narrow AI) will speed things up significantly. In support of that, you mainly discuss vision--and I will agree with you that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear whether we could call any system generally intelligent without it. If you accept that mainstream AI research is making some progress toward human-level AI, even though it's not the main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit in working on AI safety, something that mainstream research largely rejects outright.
0MrMind
This is almost the inverse Basilisk argument.
0turchin
If you prove that HLAI is safer than narrow AI turning into a paperclip maximiser, that is a good EA case. If you prove that the risks of synthetic biology are extremely high unless we create HLAI in time, that would also support your point of view.

What do you think of the idea of 'learning all the major mental models' - as promoted by Charlie Munger and FarnamStreet? These mental models also include cognitive fallacies, one of the major foci of Lesswrong.

I personally think it is a good idea, but it doesn't hurt to check.

0ChristianKl
Learning different mental models is quite useful. On the other hand I'm not sure that it makes sense to think that there's one list with "the major mental models". Many fields have their own mental models.

The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.

0TheAncientGeek
Yep.

Something happened to the main page. It no longer contains links to Main and Discussion.

0username2
Preparing for the closure of the discussion forums? "Management" efforts to kickstart things with content-based posts seem to have stalled after the flurry in Nov/Dec.
0Elo
yes, we are working on it.

Suppose there are 100 genes which figure into intelligence, the odds of getting any one being 50%.

The most common result would be for someone to get 50/100 of these genes and have average intelligence.

Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.

And so on, until at the extremes of the scale, such a small number of people get 0 or 100 of them that no one we've ever heard of or has ever been born has had all 100 of them.

As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
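(The model described above is a Binomial(100, 0.5) distribution; a minimal Python sketch with scipy to put numbers on it.)

    from scipy.stats import binom

    n, p = 100, 0.5
    binom.pmf(50, n, p)    # ~0.080: the most common result, 50 of the 100 genes
    binom.pmf(100, n, p)   # 2**-100, ~7.9e-31: all 100 genes; effectively no one ever born
    binom.sf(79, n, p)     # ~5.6e-10: chance of carrying 80 or more of them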

2Viliam
Let me be the one to describe this glass as half-empty: If there are 100 genes that participate in IQ, it means that there exists an upper limit to human IQ, i.e. when you have all 100 of them. (Ignoring the possibility of new IQ-increasing mutations for the moment.) Unlike the mathematical bell curve which -- mathematically speaking -- stretches into infinity, this upper limit of human IQ could be relatively low; maybe IQ 200, but definitely no Anasûrimbor Kellhus. It may turn out that to produce another Einstein or von Neumann, you need a rare combination of many factors, where having IQ close to the upper limit is necessary but not sufficient, and the rest is e.g. nutrition, personality traits, psychological health, and choices made in life. So even if you genetically produce 1000 people with the max IQ, barely one of them becomes functionally another Einstein. (But even then, 1 in 1000 is much better than 1 per generation globally.) (Actually, this is my personal hypothesis of IQ, which -- if true -- would explain why different populations have more or less the same average IQ. Basically, let's assume that having all those IQ genes gives you IQ 200, and that all lower IQ is a result of mutational load, so IQ 100 simply means a person with average mutational load. Then even if you populated a new island with Mensa members, in a few generations some of them would receive bad genes not just by inheritance but also by random non-fatal mutations, gradually lowering the average IQ to 100. On the other hand, if you populated a new island with retards, as long as all the IQ genes are present in at least some of them, in a few generations natural selection would spread those genes through the population, gradually increasing the average IQ to 100.)
5Lumifer
I'm pretty sure that there is an upper limit to the IQ capabilities of a blob of wetware that has to fit inside a skull.
1gathaung
AFAIK (and Wikipedia agrees), this is not how IQ works. For measuring intelligence, we get an "ordinal scale", i.e. a ranking between test subjects. An honest report would be "you are in the top such-and-so percent". For example, testing someone as "one-in-a-billion performant" is not even wrong; it is meaningless, since we have not administered one billion IQ tests over the course of human history, and have no idea what one-in-a-billion performance on an IQ test would look like. Because IQ was designed by people who would try to parse HTML with a regex (I cannot think of a worse insult here), it is normalized to a normal distribution. This means that one applies the inverse error function, with an SD of 15 points, to the percentile data. Hence, IQ is Gaussian-by-definition. To compare, use e.g. python as a handy pocket calculator: for an IQ of 165 the implied tail probability is 4.442300208692339e-10. So we see that a claim of any human having an IQ of 165+ is statistically meaningless. If you extrapolated to all of human history, an IQ of 180+ is meaningless: the implied tail probability is 2.3057198811629745e-14. Yep, by the current definition you would need to test 10^14 humans to get one that manages an IQ of 180. If you test 10^12 humans and one god-like super-intelligence, then the super-intelligence gets an IQ of maybe 175 -- because you should not apply the inverse error function to an ordinal scale, because ordinal scales cannot capture bimodals. Trying to do so invites eldritch horrors onto our plane who will parse HTML with a regex.
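(The python snippet itself appears to have dropped out of this copy of the comment; given the two outputs quoted above and the correction in the reply below, it was presumably along these lines.)

    # Reconstructed: tail probability implied by an IQ score, taking the
    # by-definition Gaussian with mean 100 and SD 15 at face value.
    from math import erfc

    def iqtopercentile(iq):
        return erfc((iq - 100) / 15.) / 2

    iqtopercentile(165)  # -> 4.442300208692339e-10
    iqtopercentile(180)  # -> 2.3057198811629745e-14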
0Good_Burning_Plastic
The 15 should actually be (15.*sqrt(2)), resulting in iqtopercentile(115) = 0.16 as it should be, rather than the 0.079 your expression gives; iqtopercentile(165) = 7.3e-6 (i.e. about 7 such people in a city of 1 million inhabitants, on average); and iqtopercentile(180) = 4.8e-8 (i.e. several hundred such people in the world). (Note also that in Python 2, (x-100)/15 returns an integer whenever x is an integer.)
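(The corrected function, per the comment above; the sqrt(2) converts a count of standard deviations into the argument the error function expects.)

    from math import erfc, sqrt

    def iqtopercentile(iq):
        return erfc((iq - 100) / (15. * sqrt(2))) / 2

    iqtopercentile(115)  # ~0.16: the familiar ~16% above one SD
    iqtopercentile(165)  # ~7.3e-6
    iqtopercentile(180)  # ~4.8e-8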
0Viliam
Yeah, I agree with everything you wrote here. For extra irony, I also have a Mensa-certified IQ of 176. (Which would put me 1 IQ point above the godlike superintelligence. Which is why I am waiting for Yudkowsky to build his artificial intelligence, which will become my apprentice, and together we will rule the galaxy.) Ignoring the numbers, my point, which I probably didn't explain well, was this:

* There is an upper limit to biological human intelligence (ignoring new future mutations), i.e. getting all the intelligence genes right.

* It is possible that people with this maximum biological intelligence are actually less impressive than what we would expect. Maybe they are at an "average PhD" level.

* And what we perceive as geniuses, e.g. Einstein or von Neumann, is actually a combination of high biological intelligence and many other traits.

* Therefore, a genetic engineering program creating a thousand new max-intelligence humans could actually fail to produce a new Einstein.
0gathaung
Congrats! This means that you are a Mensa-certified very one-in-a-thousand-billion-special snowflake! If you believe in the doomsday argument then this ensures either the continued survival of bio-humans for another thousand years or widespread colonization of the solar system! On the other hand, this puts quite the upper limit on the (institutional) numeracy of Mensa... wild guessing suggests that at least one in 10^3 people has sufficient numeracy to be incapable of certifying an IQ of 176 with a straight face, which would give us an upper bound on the NQ (numeracy quotient) of Mensa at 135. (Sorry for the snark; it is not directed at you but at the clowns at Mensa, and I am not judging anyone for having taken these guys seriously at a younger age.)

Regarding your serious points: Obviously you are right, and equally obviously luck (living at the right time and encountering the right problem that you can solve) also plays a pretty important role. It is just that we do not have sensible definitions for "intelligence". IQ is by design incapable of describing outliers, and IMHO mostly nonsense even in the bulk of the distribution (but reasonable people may disagree here). Also, even if you somehow construct a meaningful linear scale for "intelligence", I very strongly suspect that the distribution will be very far from Gaussian at the tails (trivially so at the lower end, nontrivially so at the upper end). Also, applying the inverse error function to ordinal scales... why?
3gjm
On the other hand, any regular reader of LW will (1) be aware that LW folks as a population are extremely smart and (2) notice that Viliam is demonstrably one of the smartest here, so the Mensa test got something right. Of course any serious claim to be identifying people five standard deviations above average in a truly normally-distributed property is bullshit, but if you take the implicit claim behind that figure of 176 to be only "there's a number that kinda-sorta measures brainpower, the average is about 100, about 2% are above 130, higher numbers are dramatically rarer, and Viliam scored 176 which means he's very unusually bright" then I don't think it particularly needs laughing at.
0gathaung
It was not my intention to make fun of Viliam; I apologize if my comment gave this impression. I did want to make fun of the institution of Mensa, and stand by them deserving some good-natured ridicule. I agree with your charitable interpretation about what an IQ of 176 might actually mean; thanks for stating this in such a clear form.
0Viliam
Well, Mensa sucks at numbers since its very beginning. The original plan was to select 1% of the most intelligent people, but by mistake they made it 2%, and when they later found out, they decided to just keep it as it is. "More than two sigma, that means approximately 2%, right?" "Yeah, approximately." Later: "You meant, 2% at both ends of the curve, so 1% at each, right?" "No, I meant 2% at each." "Oh, shit."
0tut
What? 2 sigma means 2.5% at each end.
0Lumifer
That sentence is imprecise. If you divide a standard Gaussian at the +2 sigma boundary, the probability mass to the left will be 97.5% and to the right ("the tail") -- 2.5%. So two sigmas don't mean 2.5% at each end, they mean 2.5% at one end. On the other hand, if you use a 4-sigma interval from -2 sigmas to +2 sigmas, the probability mass inside that interval will be 95% and both tails together will make 5% or 2.5% each.
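(For the record, a quick check with scipy: at exactly 2 sigma the one-sided tail is ~2.28%, not 2.5%; 2.5% per tail corresponds to ~1.96 sigma, which is where the rounded "2 sigma ≈ top 2.5%" convention comes from.)

    from scipy.stats import norm

    norm.sf(2)        # ~0.0228: mass above +2 sigma (one tail)
    2 * norm.sf(2)    # ~0.0455: both tails outside +-2 sigma together
    norm.ppf(0.975)   # ~1.96: the z that leaves exactly 2.5% in one tail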
0Viliam
Apparently, Mensa didn't get any better at math since then. As far as I know, they still use "2 sigma" and "top 2%" as synonyms. Well, at least those of them who know what "sigma" means.
0Lumifer
Only if what makes von Neumanns and Einsteins is not heritable. Once you have a genetic engineering program going, you are not limited to adjusting just IQ genes.
2philh
You're also assuming that the genes are independently distributed, which isn't true if intelligent people are more likely to have kids with other intelligent people.
0MrMind
Well, yes. You have re-discovered the fact that a binomial distribution resembles, in the limit, a normal distribution.
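(Concretely, this is the de Moivre-Laplace limit: Binomial(n, p) ≈ Normal(mean np, variance np(1-p)). For the 100-gene model upthread that is Normal(mean 50, variance 25), i.e. SD 5, so nearly everyone lands within the 35-65 gene range.)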
0Qiaochu_Yuan
I mean, yes, of course. You might be interested in reading about Stephen Hsu.