
Open thread, Mar. 20 - Mar. 26, 2017

3 Post author: MrMind 20 March 2017 08:01AM

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (208)

Comment author: CellBioGuy 24 March 2017 10:20:18PM 10 points [-]

PhD acquired.

Comment author: Regex 25 March 2017 01:05:37AM 3 points [-]

Now people have to call you doctor CellBioGuy

Comment author: Viliam 24 March 2017 11:59:59PM *  7 points [-]

Okay, so I recently made this joke about future Wikipedia article about Less Wrong:

[article claiming that LW opposes feelings and support neoreaction] will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.

A few days later I actually looked at the Wikipedia article about Less Wrong:

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures simulations of those who did not work to bring the system into existence. This idea came to be known as "Roko's basilisk," based on Roko's idea that merely hearing about the idea would give the hypothetical AI system stronger incentives to employ blackmail. Yudkowsky deleted Roko's posts on the topic, calling it "stupid". Discussion of Roko's basilisk was banned on LessWrong for several years before the ban was lifted in October 2015.

The majority of the LessWrong userbase identifies as atheist, consequentialist, white and male.

The neoreactionary movement is associated with LessWrong, attracted by discussions on the site of eugenics and evolutionary psychology. In the 2014 self-selected user survey, 29 users representing 1.9% of survey respondents identified as "neoreactionary". Yudkowsky has strongly repudiated neoreaction.

Well... technically, the article admits that at least Yudkowsky considers the basilisk stupid and disagrees with neoreaction. Connotationally, it suggests that the basilisk and neoreaction are 50% of what is worth mentioning about LW, because that's the fraction of the article these topics got.

Oh, and David Gerard is actively editing this page. Why am I so completely unsurprised? His contributions include:

  • making a link to a separate article for Roko's basilisk (link), which luckily didn't materialize;
  • removing suggested headers "Rationality", "Cognitive bias", "Heuristic", "Effective altruism", "Machine Intelligence Research Institute" (link) saying that "all of these are already in the body text"; but...
  • adding a header for Roko's basilisk (link);
  • shortening a paragraph on LW's connection to effective altruism (link) -- by the way, the paragraph is completely missing from the current version of the article;
  • an edit war emphasising that it is finally okay to talk on LW about the basilisk (link, link, link, link, link);
  • restoring the deleted section on basilisk (link) saying that it's "far and away the single thing it's most famous for";
  • adding neoreaction as one of the topics discussed on LW (link), later removing other topics competing for attention (link), and adding a quote that LW "attracted some readers and commenters affiliated with the alt-right and neoreaction, that broad cohort of neofascist, white nationalist and misogynist trolls" (link);

...in summary, removing or shortening mentions of cognitive biases and effective altruism, and adding or developing mentions of basilisk and neoreaction.

Sigh.

EDIT: So, looking back at my prediction that...

Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games.

...I'd say I was (1) right about the basilisk; (2) partially right about the white supremacism, which at this moment is not mentioned explicitly (yet! growth mindset), but the article says that the userbase is mostly white and male, and discusses eugenics; and (3) wrong about the computer games. 50% success rate!

Comment author: TheAncientGeek 26 March 2017 08:58:56AM *  0 points [-]

Yikes. The current version of the WP article is a lot less balanced than the RW one!

Also, the edit warring is two way...someone wholesale deleted the Rs B section.

Comment author: Viliam 27 March 2017 09:15:06AM 0 points [-]

Also, the edit warring is two way...someone wholesale deleted the Rs B section.

Problem is, this is probably not good news for LW. Tomorrow, the RB section will most likely be back, possibly with a warning on the talk page that the evil cultists from LW are trying to hide their scandals.

Comment author: Elo 25 March 2017 12:45:18AM 0 points [-]

can we fix this please?

Edit: I will work on it.

Comment author: Viliam 25 March 2017 03:14:37PM *  5 points [-]

I'd suggest being careful about your approach. If you lose this battle, you may not get another chance. David Gerard most likely has 100 times more experience with wiki battling than you. Essentially: when you come up with a strategy, sleep on it, and then try to imagine how a person already primed against LW would read your words.

For example, expect that any edit made by anyone associated with LW will be (1) traced back to their identity and LW account, and consequently (2) reverted, as a conflict of interest. And everyone will be like "ugh, these LW guys are trying to manipulate our website", so the next time they are not going to even listen to any of us.

Currently my best idea -- I didn't make any steps yet, just thinking -- is to post a reaction to the article's Talk page, without even touching the article. This would have two advantages: (1) No one can accuse me of being partial, because that's what I would openly disclose first, and because I would plainly say that as a person with a conflict of interest I shouldn't edit the article myself. Kinda establishing myself as the good guy who follows the Wikipedia rules. (2) A change in the article could be simply reverted by David, but he is not allowed to remove my reaction from the talk page, unless I make a mistake and break some other rule. That means, even if I lose the battle, people editing the article in the future will be able to see my reaction. This is a meta move: the goal is not to change the article, but to convince the impartial Wikipedia editors that it should be changed. If I succeed in convincing them, I don't have to do the edit myself; someone else will. On the other hand, if I fail to convince them, any edit would likely be reverted by David, and I have neither time nor will to play wiki wars.

What would be the content of the reaction? Let's start with the assumption that on Wikipedia no one gives a fuck about Less Wrong, rationality, AI, Eliezer, etc.; to most people this is just an annoying noise. By drawing their attention to the topic, you are annoying them even more. And they don't really care about who is right, only who is technically correct. That's the bad news. The good news is that they equally don't give a fuck about RationalWiki or David. What they do care about is Wikipedia, and following the rules of Wikipedia. Therefore the core of my reaction would be this: David Gerard has a conflict of interest about this topic; therefore he should not be allowed to edit it, and all his previous edits should be treated with suspicion. The rest is simply preparing my case, as well as I can, for the judge and the jury, who are definitely not Bayesians, and want to see "solid", not probabilistic arguments.

The argument for David's conflict of interest is threefold. (1) He is a representative (admin? not sure) of RationalWiki, which in some sense is LessWrong's direct competitor, so it's kinda like having a director of Pepsi Cola edit the article on Coca Cola, only at a million times smaller scale. How are these two websites competitors? They both target the same niche, which is approximately "a young intelligent educated pro-science atheist, who cares a lot about his self-image as 'rational'". They have "rational" in their name, we have it pretty much everywhere except in the name; we compete for being the online authorities on the same word. (2) He has a history of, uhm, trying to associate LW with things he does not like. He made (not sure about this? certainly contributed a lot to) the RW article on Roko's Basilisk several years ago; LW complained about RW already in 2012. Note: It does not matter for this point whether RW or LW was actually right or wrong; I am just trying to establish that these two have several years of mutual dislike. (3) This would be most difficult to prove, but I believe that most sensational information about LW was actually inspired by RW. I think most mentions of Roko's Basilisk could be traced back to their article. So what David is currently doing on Wikipedia is somewhat similar to citogenesis... he writes something on his website, media find it and include it in their sensationalist reports, then he "impartially" quotes the media for Wikipedia. On some level, yes, the incident happened (there was one comment, which was once deleted by Eliezer -- as if nothing similar ever happened on any online forum), but the whole reason for its "notability" is, well, David Gerard; without his hard work, no one would give a fuck.

So this is the core, and then there are some additional details. Such as, it is misleading to tell the readers what 1% of LW survey respondents identify as, without even mentioning the remaining 99%. Clearly, "1% neoreactionaries" is supposed to give it a right-wing image, which adding "also, 4% communists, and 20% socialists" (I am just making the numbers up at the moment) would immediately disprove. And there is the general pattern of David's edits: increasing the length of the parts talking about the basilisk and neoreaction, and decreasing the length of everything else.

My thoughts so far. But I am quite a noob as far as wiki wars are concerned, so maybe there is an obvious flaw in this that I haven't noticed. Maybe it would be best if a group of people could cooperate in precise wording of the comment (probably at a bit more private place, so that parts of the debate couldn't be later quoted out of context).

Comment author: ChristianKl 27 March 2017 08:27:00AM 4 points [-]

It's worth noting that David Gerard was a LW contributor with a significant amount of karma: http://lesswrong.com/user/David_Gerard/

Comment author: David_Gerard 23 April 2017 01:06:43AM *  3 points [-]

This isn't what "conflict of interest" means at Wikipedia. You probably want to review WP:COI, and I mean "review" it in a manner where you try to understand what it's getting at rather than looking for loopholes that you think will let you do the antisocial thing you're contemplating. Your posited approach is the same one that didn't work for the cryptocurrency advocates either. (And "RationalWiki is a competing website therefore his edits must be COI" has failed for many cranks, because it's trivially obvious that their true rejection is that I edited at all and disagreed with them, much as that's your true rejection.) Being an advocate who's written a post specifically setting out a plan, your comment above would, in any serious Wikipedia dispute on the topic, be prima facie evidence that you were attempting to brigade Wikipedia for the benefit of your own conflict of interest. But, y'know, knock yourself out in the best of faith, we're writing an encyclopedia here after all and every bit helps. HTH!

If you really want to make the article better, the guideline you want to take to heart is WP:RS, and a whacking dose of WP:NOR. Advocacy editing like you've just mapped out a detailed plan for is a good way to get reverted, and blocked if you persist.

Comment author: Viliam 24 April 2017 10:19:12AM 3 points [-]

Is any of the following not true?

  • You are one of the 2 or 3 most vocal critics of LW worldwide, for years, so this is your pet issue, and you are far from impartial.

  • A lot of what the "reliable sources" write about LW originates from your writing about LW.

  • You are cherry-picking facts that describe LW in a certain light: For example, you mention that some readers of LW identify as neoreactionaries, but fail to mention that some of them identify as e.g. communists. You keep adding Roko's basilisk as one of the main topics about LW, but remove mentions of e.g. effective altruism, despite the fact that there is at least 100 times more debate on LW about the latter than about the former.

Comment author: David_Gerard 25 April 2017 07:27:26AM *  1 point [-]

The first two would suggest I'm a subject-matter expert, and particularly the second if the "reliable sources" consistently endorse my stuff, as you observe they do. This suggests I'm viewed as knowing what I'm talking about and should continue. (Be careful your argument makes the argument you think it's making.) The third is that you dislike my opinion, which is fine, but also irrelevant. The final sentence fails to address any WP:RS-related criterion. HTH!

Comment author: eternal_neophyte 25 April 2017 12:40:38PM 2 points [-]

The first two would suggest I'm a subject-matter expert

Why? Are the two or three most vocal critics of evolution also experts? Does the fact that newspapers quote Michio Kaku or Bill Nye on the dangers of global warming make them climatology experts?

Comment author: Viliam 25 April 2017 09:54:54AM 1 point [-]

Oh, I see, it's one of those irregular words:

I am a subject-matter expert
you have a conflict of interest

Comment author: TheAncientGeek 25 April 2017 11:16:11AM *  1 point [-]

He is a paid shill

Comment author: David_Gerard 25 April 2017 12:08:14PM 0 points [-]

despite hearing that one a lot at Rationalwiki, it turns out the big Soros bucks are thinner on the ground than many a valiant truthseeker thinks

Comment author: gjm 25 April 2017 03:53:59PM 0 points [-]

In case it wasn't obvious (it probably was, in which case I apologize for insulting your intelligence, or more precisely I apologize so as not to insult your intelligence), TheAncientGeek was not in fact making a claim about you or your relationship with deep-pocketed malefactors but just completing the traditional "irregular verb" template.

Comment author: David_Gerard 25 April 2017 11:57:25AM *  0 points [-]

Or just what words mean in the context in question, keeping in mind that we are indeed speaking in a particular context.

[here, let me do your homework for you]

In particular, expertise does not constitute a Wikipedia conflict of interest:

https://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest#External_roles_and_relationships

While editing Wikipedia, an editor's primary role is to further the interests of the encyclopedia. When an external role or relationship could reasonably be said to undermine that primary role, the editor has a conflict of interest. (Similarly, a judge's primary role as an impartial adjudicator is undermined if she is married to the defendant.)

Any external relationship—personal, religious, political, academic, financial or legal—can trigger a COI. How close the relationship needs to be before it becomes a concern on Wikipedia is governed by common sense. For example, an article about a band should not be written by the band's manager, and a biography should not be an autobiography or written by the subject's spouse.

Subject-matter experts are welcome to contribute within their areas of expertise, subject to the guidance on financial conflict of interest, while making sure that their external roles and relationships in that field do not interfere with their primary role on Wikipedia.

Note "the subject doesn't think you're enough of a fan" isn't listed.

Further down that section:

COI is not simply bias

Determining that someone has a COI is a description of a situation. It is not a judgment about that person's state of mind or integrity.[5] A COI can exist in the absence of bias, and bias regularly exists in the absence of a COI. Beliefs and desires may lead to biased editing, but they do not constitute a COI. COI emerges from an editor's roles and relationships, and the tendency to bias that we assume exists when those roles and relationships conflict.[9] COI is like "dirt in a sensitive gauge."[10]

On experts:

https://en.wikipedia.org/wiki/Wikipedia:Expert_editors

Expert editors are cautioned to be mindful of the potential conflict of interest that may arise if editing articles which concern an expert's own research, writings, discoveries, or the article about herself/himself. Wikipedia's conflict of interest policy does allow an editor to include information from his or her own publications in Wikipedia articles and to cite them. This may only be done when the editors are sure that the Wikipedia article maintains a neutral point of view and their material has been published in a reliable source by a third party. If the neutrality or reliability are questioned, it is Wikipedia consensus, rather than the expert editor, that decides what is to be done. When in doubt, it is good practice for a person who may have a conflict of interest to disclose it on the relevant article's talk page and to suggest changes there rather than in the article. Transparency is essential to the workings of Wikipedia.

i.e., don't blatantly promote yourself, run it past others first.

You're still attempting to use the term "conflict of interest" when what you actually seem to mean is "he disagrees with me therefore should not be saying things." That particular tool, the term "conflict of interest", really doesn't do what you think it does.

The way Wikipedia deals with "he disagrees with me therefore should not be saying things" is to look at the sources used. Also, "You shouldn't use source X because its argument originally came from Y which is biased" is not generally a winning argument on Wikipedia without a lot more work.

Before you then claim bias as a reason, let me quote again:

https://en.wikipedia.org/wiki/Wikipedia:Identifying_reliable_sources#Biased_or_opinionated_sources

Wikipedia articles are required to present a neutral point of view. However, reliable sources are not required to be neutral, unbiased, or objective. Sometimes non-neutral sources are the best possible sources for supporting information about the different viewpoints held on a subject.

Common sources of bias include political, financial, religious, philosophical, or other beliefs. Although a source may be biased, it may be reliable in the specific context. When dealing with a potentially biased source, editors should consider whether the source meets the normal requirements for reliable sources, such as editorial control and a reputation for fact-checking. Editors should also consider whether the bias makes it appropriate to use in-text attribution to the source, as in "Feminist Betty Friedan wrote that...", "According to the Marxist economist Harry Magdoff...," or "Conservative Republican presidential candidate Barry Goldwater believed that...".

So if, as you note, the Reliable Sources regularly use me, that would indicate my opinions would be worth taking note of - rather than the opposite. As I said, be careful you're making the argument you think you are.

(I don't self-label as an "expert", I do claim to know a thing or two about the area. You're the one who tried to argue from my opinions being taken seriously by the "reliable sources".)

Comment author: gjm 25 April 2017 05:22:17PM 1 point [-]

No one is actually suggesting that either "expertise" or "not being enough of a fan" constitutes a conflict of interest, nor are those the attributes you're being accused of having.

On the other hand, the accusations actually being made are a little unclear and vary from occasion to occasion, so let me try to pin them down a bit. I think the ones worth taking seriously are three in number. Only one of them relates specifically to conflicts of interest in the Wikipedia sense; the others would (so far as I can see) not be grounds for any kind of complaint or action on Wikipedia even if perfectly correct in every detail.

So, they are: (1) That you are, for whatever reasons, hostile to Less Wrong (and the LW-style-rationalist community generally, so far as there is such a thing) and keen to portray it in a bad light. (2) That as a result of #1 you have in fact taken steps to portray Less Wrong (a.t.Lsr.c.g.s.f.a.t.i.s.a.t.) in a bad light, even when that has required you to be deliberately misleading. (3) That your close affiliation with another organization competing for mindshare, namely RationalWiki, constitutes a WP:COI when writing about Less Wrong.

Note that #3 is quite different in character from a similar claim that might be made by, say, a creationist organization; worsening the reputation of the Institute for Creation Research is unlikely to get more people to visit RationalWiki and admire your work there (perhaps even the opposite), whereas worsening the reputation of Less Wrong might do. RW is in conflict with the ICR, but (at least arguably) in competition with LW.

For the avoidance of doubt, I am not endorsing any of those accusations; just trying to clarify what they are, because it seems like you're addressing different ones.

Comment author: David_Gerard 25 April 2017 06:34:16PM *  0 points [-]

I already answered #3: the true rejection seems to be not "you are editing about us on Wikipedia to advance RationalWiki at our expense" (which is a complicated and not very plausible claim that would need all its parts demonstrated), but "you are editing about us in a way we don't like".

Someone from the IEET tried to seriously claim (COI Noticeboard and all) that I shouldn't comment on the deletion nomination for their article - I didn't even nominate it, just commented - on the basis that IEET is a 501(c)3 and RationalWiki is also a 501(c)3 and therefore in sufficiently direct competition that this would be a Wikipedia COI. It's generally a bad and terrible claim and it's blitheringly obvious to any experienced Wikipedia editor that it's stretching for an excuse.

Variations on #3 are a perennial of cranks of all sorts who don't want a skeptical editor writing about them at Wikipedia, and will first attempt not to engage with the issues and sources, but to stop the editor from writing about them. (My favourite personal example is this Sorcha Faal fan who revealed I was editing as an NSA shill.) So it should really be considered an example of the crackpot offer, and if you find yourself thinking it then it would be worth thinking again.

(No, I don't know why cranks keep thinking implausible claims of COI are a slam dunk move to neutralise the hated outgroup. I hypothesise a tendency to conspiracist thinking, and first assuming malfeasance as an explanation for disagreement. So if you find yourself doing that, it's another one to watch out for.)

Comment author: David_Gerard 23 April 2017 01:26:36AM *  1 point [-]

(More generally as a Wikipedia editor I find myself perennially amazed at advocates for some minor cause who seem to seriously think that Wikipedia articles on their minor cause should only be edited by advocates, and that all edits by people who aren't advocates must somehow be wrong and bad and against the rules. Even though the relevant rules are (a) quite simple conceptually (b) say nothing of the sort. You'd almost think they don't have the slightest understanding of what Wikipedia is about, and only cared about advocating their cause and bugger the encyclopedia.)

Comment author: David_Gerard 23 April 2017 01:29:45AM 0 points [-]

but in the context of Wikipedia, you should after all keep in mind that I am an NSA shill.

Comment author: tristanm 20 March 2017 10:54:00PM 5 points [-]

Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.

Firstly, what do I mean by 'anti-rationality'? I don't mean that in particular people will criticize LessWrong. I mean it in the general sense of skepticism towards science / logical reasoning, skepticism towards technology, and a hostility to rationalistic methods applied to things like policy, politics, economics, education, and things like that.

And there are a few things I think we will observe first (some of which we are already observing) that will act as a catalyst for this. Number one, if economic inequality increases, I think a lot of the blame for this will be placed on the elite (as it always is), but in particular the cognitive elite (which makes up an ever-increasing share of the elite). Whatever the views of the cognitive elite are will become the philosophy of evil from the perspective of the masses. Because the elite are increasingly made up of very high-intelligence people, many of whom have a connection to technology or Silicon Valley, we should expect that the dominant worldview of that environment will increasingly contrast with the worldview of those who haven't benefited, or at least do not perceive themselves to benefit, from the increasing growth and wealth driven by those people. What's worse, it seems that even if economic gains benefit those at the very bottom too, if inequality still increases, that is the only thing that will get noticed.

The second issue is that as technology improves, our powers of inference increase, and privacy defenses become weaker. It's already the case that we can predict a person's behavior to some degree and use that knowledge to our advantage (if you're trying to sell something to them, give them / deny them a loan, judge whether they would be a good employee, or predict whether or not they will commit a crime). There's already a push-back against this, in the sense that certain variables correlate with things we don't want them to, like race. This implies that the standard definition of privacy, in the sense of simply not having access to specific variables, isn't strong enough. What's desired is not being able to infer the values of certain variables, either, which is a much, much stronger condition. This is a deep, non-trivial problem that is unlikely to be solved quickly - and it runs into the same issues as all problems concerning discrimination do, which is how to define 'bias'. Is reducing bias at the expense of truth even a worthy goal? This shifts the debate towards programmers, statisticians and data scientists who are left with the burden of never making a mistake in this area. "Weapons of Math Destruction" is a good example of the way this issue gets treated.

We will also continue to observe a lot of ideas from postmodernism being adopted as part of the political ideology of the left. Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme. And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought. So if a lot of the above problems get worse, I think there is a chance that rationalism will get blamed, as it has been in the framework of postmodernism.

The summary of this is: As politics becomes warfare between worldviews rather than arguments for and against various beliefs, populist hostility gets directed towards what is perceived to be the worldview of the elite. The elite tend to be more rationalist, and so that hostility may get directed towards rationalism itself.

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

Comment author: username2 21 March 2017 03:08:48AM 2 points [-]

(I thought the post was reasonably written.)

Can you say a word on whether (and how) this phenomenon you describe ("populist hostility gets directed towards what is perceived to be the worldview of the elite") is different from the past? It seems to me that this is a force that is always present, often led to "problems" (eg, the Luddite movement), but usually (though not always) the general population came around more in believing the same things as "the elites".

Comment author: tristanm 21 March 2017 08:48:54PM 0 points [-]

The process is not different from what occurred in the past, and I think this was basically the catalyst for anti-Semitism in the post-Industrial Revolution era. You observe a characteristic of a group of people who seem to be doing a lot better than you (in that case, a lot of them happened to be Jewish), and so you then associate their Jewishness with your lack of success and unhappiness.

The main difference is that society continues to modernize and technology improves. Bad ideas for why some people are better off than others become unpopular. Actual biases and unfairness in the system gradually disappear. But despite that, inequality remains and in fact seems to be rising. What happens is that the only thing left to blame is instrumental rationality. I imagine that people will look as hard as they can for bias and unfairness for as long as possible, and will want to see it in people who are instrumentally rational.

In a free society, (and even more so as a society becomes freer and true bigotry disappears) some people will be better off just because they are better at making themselves better off, and the degree to which people vary in that ability is quite staggering. But psychologically it is too difficult for many to accept this, because no one wants to believe in inherent differences. So it's sort of a paradoxical result of our society actually improving.

Comment author: satt 24 March 2017 02:18:56AM 1 point [-]

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

Writing style looks fine. My quibbles would be with the empirical claims/predictions/speculations.

Is the elite really more of a cognitive elite than in the past?

Strenze's 2007 meta-analysis (previously) analyzed how the correlations between IQ and education, IQ and occupational level, and IQ and income changed over time. The first two correlations decreased and the third held level at a modest 0.2.

Will elite worldviews increasingly diverge from the worldviews of those left behind economically?

Maybe, although just as there are forces for divergence, there are forces for convergence. The media can, and do, transmit elite-aligned worldviews just as they transmit elite-opposed worldviews, while elites fund political activity, and even the occasional political movement.

Would increasing inequality really prevent people from noticing economic gains for the poorest?

That notion sounds like hyperbole to me. The media and people's social networks are large, and can discuss many economic issues at once. Even people who spend a good chunk of time discussing inequality discuss gains (or losses) of those with low income or wealth.

For instance, Branko Milanović, whose standing in economics comes from his studies of inequality, is probably best known for his elephant chart, which presents income gains across the global income distribution, down to the 5th percentile. (Which percentile, incidentally, did not see an increase in real income between 1988 and 2008, according to the chart.)

Also, while the Anglosphere's discussed inequality a great deal in the 2010s, that seems to me a vogue produced by the one-two-three punch of the Great Recession, the Occupy movement, and the economist feeding frenzy around Thomas Piketty's book. Before then, I reckon most of the non-economists who drew special attention to economic inequality were left-leaning activists and pundits in particular. That could become the norm once again, and if so, concerns about poverty would likely become more salient to normal people than concerns about inequality.

Will the left continue adopting lots of ideas from postmodernism?

This is going to depend on how we define postmodernism, which is a vexed enough question that I won't dive deeply into it (at least TheAncientGeek and bogus have taken it up). If we just define (however dodgily) postmodernism to be a synonym for anti-rationalism, I'm not sure the left (in the Anglosphere, since that's the place we're presumably really talking about) is discernibly more postmodernist/anti-rationalist than it was during the campus/culture wars of the 1980s/1990s. People tend to point to specific incidents when they talk about this question, rather than try to systematically estimate change over time.

Granted, even if the left isn't adopting any new postmodern/anti-rationalist ideas, the ideas already bouncing around in that political wing might percolate further out and trigger a reaction against rationalism. Compounding the risk of such a reaction is the fact that the right wing can also operate as a conduit for those ideas — look at yer Alex Jones and Jason Reza Jorjani types.

Is politics becoming more a war of worldviews than arguments for & against various beliefs?

Maybe, but evidence is needed to answer the question. (And the dichotomy isn't a hard and fast one; wars of worldviews are, at least in part, made up of skirmishes where arguments are lobbed at specific beliefs.)

Comment author: TheAncientGeek 22 March 2017 11:56:51AM *  0 points [-]

Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme.

Rationalists (Bay area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make; it's a different set of concerns, goals and approaches.

And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought.

And what's worse is that bay area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarski's theory, and a few others without resolving on a single correct theory).

Bay area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truth) claims are utterly false, but it's an attitude, not a proof. What's worse still is that sceptical and relativistic claims can be supported using the toolkit of rationality. "Postmodernists" tend to be sceptics and relativists, but you don't have to be a "postmodernist" to be a relativist or sceptic, as non-bay-area, mainstream rationalists understand well. If rationalism is to win over "postmodernism", then it must win rationally, by being able to demonstrate its superiority.

[*] "Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

Comment author: bogus 22 March 2017 01:41:46PM *  1 point [-]

"Postmodernists" call themselves poststructuralists, continental philosophers, or critical theorists.

Not quite. "Poststructuralism" is an ex-post label and many of the thinkers that are most often identified with the emergence of "postmodern" ideas actually rejected it. (Some of them even rejected the whole notion of "postmodernism" as an unhelpful simplification of their actual ideas.) "Continental philosophy" really means the 'old-fashioned' sort of philosophy that Analytic philosophers distanced themselves from; you can certainly view postmodernism as encompassed within continental philosophy, but the notions are quite distinct. Similarly, "critical theory" exists in both 'modernist'/'high modern' and 'postmodern' variants, and one cannot understand the 'postmodern' kind without knowing the 'modern' critical theory it's actually referring to, and quite often criticizing in turn.

All of which is to say that, really, it's complicated, and that while describing postmodernism as a "different set of concerns, goals and approaches" may hit significantly closer to the mark than merely caricaturing it as an antithesis to rationality, neither really captures the worthwhile ideas that 'postmodern' thinkers were actually developing, at least when they were at their best. (--See, the big problem with 'continental philosophy' as a whole is that you often get a few exceedingly worthwhile ideas mixed in with heaps of nonsense and confused thinking, and it can be really hard to tell which is which. Postmodernism is no exception here!)

Comment author: tristanm 22 March 2017 06:28:32PM 0 points [-]

Rationalists (Bay area type) tend to think of what they call Postmodernism[*] as the antithesis to themselves, but the reality is more complex. "Postmodernism" isn't a short and cohesive set of claims that are the opposite of the set of claims that rationalists make; it's a different set of concerns, goals and approaches.

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth. And the 'goal' of postmodernism is to break apart and criticize everything that claims to be able to do those things. You would be hard pressed to find a better example of something diametrically opposed to rationalism. (I'm going to guess that with high likelihood I'll get accused of not understanding postmodernism by saying that).

And what's worse is that bay area rationalism has not been able to unequivocally define "rationality" or "truth". (EY wrote an article on the Simple idea of Truth, in which he considers the correspondence theory, Tarki's theory, and a few others without resolving on a single correct theory).

Well yeah, being able to unequivocally define anything is difficult, no argument there. But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things. Then what happens is they get accused by postmodernists of claiming to have the One and Only True and Correct Definition of Truth and Correctness, and of claiming that we have access to the Objective Reality. The point is that as soon as you allow for any leeway in this at all (some in-between area between having 100% access and 0% access to a true objective reality), you basically obtain rationalism. Not because the principles it derives from are that there is an objective reality that is possible to Truly Know, or that there are facts that we know to be 100% true, but only that there are sets of claims we have some degree of confidence in, and other sets of claims we might want to calculate a degree of confidence in based on the first set of claims.

Bay area rationalism is the attitude that sceptical (no truth) and relativistic (multiple truth) claims are utterly false, but it's an attitude, not a proof.

It happens to be an attitude that works really well in practice, but the other two attitudes can't actually be used in practice if you were to adhere to them fully. They would only be useful for denying anything that someone else believes. I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict? In probability theory you can have degrees of confidence that are non-zero that add up to one, but it's unclear if this is the same thing as relativism in the sense of "multiple truths". I would guess that it isn't, and multiple truths really means holding two incompatible beliefs to both be true.

If rationalism is to win over "postmodernism", then it must win rationally, by being able to demonstrate its superiority.

Except that you can't demonstrate superiority of anything within the framework of postmodernism. Within rationalism it's very easy and straightforward.

I imagine the reason that some rationalists might find postmodernism to be useful is in the spirit of overcoming biases. This in and of itself I have no problem with - but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

Comment author: bogus 22 March 2017 11:58:46PM *  1 point [-]

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

The actual ground-level stance is more like: "If you think that you know some sort of objective reality, etc., it is overwhelmingly likely that you're in fact wrong in some way, and being deluded by cached thoughts." This is an eminently rational attitude to take - 'it's not what you don't know that really gets you into trouble, it's what you know for sure that just ain't so.' The rest of your comment has similar problems, so I'm not going to discuss it in depth. Suffice it to say, postmodern thought is far more subtle than you give it credit for.

Comment author: tristanm 23 March 2017 12:18:32AM 1 point [-]

If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute. It seems like that's setting a very low bar for what postmodernism actually hopes to accomplish.

Comment author: bogus 23 March 2017 06:55:30AM *  0 points [-]

If someone claims to hold a belief with absolute 100% certainty, that doesn't require a gigantic modern philosophical edifice in order to refute.

The reason why postmodernism often looks like that superficially is that it specializes in critiquing "gigantic modern philosophical edifice[s]" (emphasis on 'modern'!). It takes a gigantic philosophy to beat a gigantic philosophy, at least in some people's view.

Comment author: TheAncientGeek 22 March 2017 08:58:54PM *  1 point [-]

Except that it does make claims that are the opposite of the claims rationalists make. It claims that there is no objective reality, no ultimate set of principles we can use to understand the universe, and no correct method of getting nearer to truth.

Citation needed.

Well yeah, being able to unequivocally define anything is difficult, no argument there

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

But rationalists use an intuitive and pragmatic definition of truth that allows us to actually do things.

Engineers use an intuitive and pragmatic definition of truth that allows them to actually do things. Rationalists are more in the philosophy business.

It happens to be an attitude that works really well in practice,

For some values of "work". It's possible to argue in detail that predictive power actually doesn't entail correspondence to ultimate reality, for instance.

I mean, what would it mean to actually hold two beliefs to be completely true but also that they contradict?

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

Except that you can't demonstrate superiority of anything within the framework of postmodernism

That's not what I said.

but I would ask what you consider postmodern ideas to offer in the quest to remove biases that rationalism doesn't offer, or wouldn't have access to even in principle?

There's no such thing as postmodernism and I'm not particularly in favour of it. My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

Comment author: tristanm 22 March 2017 11:04:11PM 0 points [-]

Citation needed.

Citing it is going to be difficult; even the Stanford Encyclopedia of Philosophy says "That postmodernism is indefinable is a truism." I'm forced to cite philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion. It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense. This is mostly the sense in which I refer to it in my original post.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

Comment author: TheAncientGeek 25 March 2017 07:39:47PM 0 points [-]

Citing it is going to be difficult,

To which the glib answer is "that's because it isn't true".

" I'm forced to site philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

Dennett gives a concise definition because he has the same simplistic take on the subject as you. What he is not doing is showing that there is actually a group of people who describe themselves as postmodernists and have those views. The use of the term "postmodernist" is a bad sign: it's a term that works like "infidel" and so on, a label for an outgroup, and an ingroup's views on an outgroup are rarely bedrock reality.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

When we, the ingroup, can't define something, it's OK; when they, the outgroup, can't define something, it shows how bad they are.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

People are quite psychologically capable of having compartmentalised beliefs; that sort of thing is pretty ubiquitous, which is why I was able to find an example from the rationalist community itself. Relativism without contextualisation probably doesn't make much sense, but who is proposing it?

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion.

As you surely know that I mean there is no group of people who both call themselves postmodernists and hold the views you are attributing to postmodernists.

It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

It's kind of diffuse. But you can talk about scepticism, relativism, etc, if those are the issues.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense.

There's some terrible epistemology on the left, and on the right, and even in rationalism.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

I mean Yudkowsky's approach. Which flies under the flag of Bayesianism, but doesn't make much use of formal Bayesianism.

Comment author: Viliam 21 March 2017 01:31:28PM 0 points [-]

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players all turn against rationality, fighting rationality will become less important for them, because attacking things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as: 1984-style obedience of the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

Comment author: tristanm 21 March 2017 08:27:51PM 0 points [-]

I don't think it's necessary for 'rationality' to be used an applause light for this to happen. The only things needed, in my mind, are:

  • A group of people who adopt rationality and are instrumentally rational become very successful, wealthy and powerful because of it.
  • This group makes up an increasing share of the wealthy and powerful, because they are better at becoming wealthy and powerful than the old elite.
  • The remaining people, who aren't as wealthy or successful or powerful and who haven't adopted rationality, make observations about what the successful group does and associate whatever they do or say with the tribal characteristics and culture of the successful group. The fact that they haven't adopted rationality makes them more likely to do this.

And because the final bullet point is always what occurs throughout history, the only difference - and really the only thing necessary for this to happen - is that rationalists make up a greater share of the elite over time.

Comment author: bogus 21 March 2017 05:54:48PM *  0 points [-]

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as: 1984-style obedience of the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party.

Somewhat ironically, this is exactly the sort of cargo-cultish "rationality" that originally led to the emergence of postmodernism, in opposition to it and calling for some much-needed re-evaluation and skepticism around all "cached thoughts". The moral I suppose is that you just can't escape idiocy.

Comment author: tristanm 21 March 2017 08:09:21PM *  1 point [-]

Not exactly. What happened at first was that Marxism - which, in the early 20th century, became the dominant mode of thought for Western intellectuals - was based on rationalist materialism, until it was empirically shown to be wrong by some of the largest social experiments mankind is capable of running. The question for intellectuals who were unwilling to give up Marx after that time was how to save Marxism from empirical reality. The answer to that was postmodernism. You'll find that in most academic departments today, those who identify as Marxists are almost always postmodernists (and you won't find them in economics or political science, but rather in the English, literary criticism and social science departments). Marxists of the rationalist type are pretty much extinct at this point.

Comment author: bogus 22 March 2017 03:42:40AM *  1 point [-]

I broadly agree, but you're basically talking about the dynamics that resulted in postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about - i.e. what people like the often-misunderstood Jacques Derrida were actually trying to say. It's even clearer when you look at Michel Foucault, who was indeed a rather sharp critic of "high modernity", but didn't even consider himself a post-modernist (whereas he's often regarded as one today). Rather, he was investigating pointed questions like "do modern institutions like medicine, psychiatric care and 'scientific' criminology really make us so much better off compared to the past when we lacked these, or is this merely an illusion due to how these institutions work?" And if you ask Robin Hanson today, he will tell you that we're very likely overreliant on medicine, well beyond the point where such reliance actually benefits us.

Comment author: Douglas_Knight 23 March 2017 05:13:49AM 0 points [-]

postmodernism becoming an intellectual fad, devoid of much of its originally-meaningful content. Whereas I'm talking about what the original memeplex was about

So you concede that everyone you're harassing is 100% correct, you just don't want to talk about postmodernism? So fuck off.

Comment author: dogiv 21 March 2017 05:34:02PM 0 points [-]

This may be partially what has happened with "science" but in reverse. Liberals used science to defend some of their policies, conservatives started attacking it, and now it has become an applause light for liberals--for example, the "March for Science" I keep hearing about on Facebook. I am concerned about this trend because the increasing politicization of science will likely result in both reduced quality of science (due to bias) and decreased public acceptance of even those scientific results that are not biased.

Comment author: username2 22 March 2017 12:31:42AM 0 points [-]

I agree with your concern, but I think that you shouldn't limit your fear to party-aligned attacks.

For example, the Thirty-Meter Telescope in Hawaii was delayed by protests from a group of people who are most definitely "liberal" on the "liberal/conservative" spectrum (in fact, "ultra-liberal"). The effect of the protests is definitely significant. While it's debatable how close the TMT came to cancelation, the current plan is to grant no more land to astronomy atop Mauna Kea.

Comment author: dogiv 22 March 2017 05:06:49PM 0 points [-]

Agreed. There are plenty of liberal views that reject certain scientific evidence for ideological reasons--I'll refrain from examples to avoid getting too political, but it's not a one-sided issue.

Comment author: Lumifer 21 March 2017 05:13:42PM 0 points [-]

As soon as the main political players will all turn against rationality, fighting rationality will become less important for them, because attacking things the others consider sacred will be more effective.

So, do you want to ask the Jews how that theory worked out for them?

Comment author: turchin 23 March 2017 11:12:46AM 4 points [-]

Link on "discussion" disappeared from the lesswrong.com. Is it planned change? Or only for me?

Comment author: Elo 23 March 2017 11:31:11AM 2 points [-]

Accidental CSS pull that caused unusual things. It's being worked on. Apologies.

Comment author: -necate- 25 March 2017 08:53:39AM 2 points [-]

Hello guys, I am currently writing my master's thesis on biases in the investment context. One sub-sample that I am studying is people who are educated about biases in a general context, but not in the investment context. I guess LW is the right place to find some of those, so I would be very happy if some of you would participate, since people who are aware of biases are hard to come by elsewhere. Also, I explicitly ask about activity in the LW community in the survey, so if enough LWers participate I could analyse them as an individual subsample. It would be interesting to know how LWers perform compared to psychology students, for example. Also, I think this is related enough to LW that I could post a link to the survey in Discussion, right? If so, I would be happy about some karma, because I just registered and can't post yet. The link to the survey is: https://survey.deadcrab.de/

Comment author: Elo 25 March 2017 09:12:19AM 0 points [-]

Look up a group called "The Trading Tribe" by Ed Seykota.

Comment author: Vaniver 23 March 2017 08:04:05AM 2 points [-]

Front page being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).

Comment author: Bound_up 22 March 2017 01:36:56PM 2 points [-]

Maybe there could be some high-profile positive press for cryonics if it became standard policy to freeze endangered species' seeds or DNA for later resurrection.

Comment author: ChristianKl 22 March 2017 01:43:00PM *  5 points [-]
Comment author: moridinamael 20 March 2017 03:09:47PM 2 points [-]

What is the steelmanned, not-nonsensical interpretation of the phrase "democratize AI"?

Comment author: fubarobfusco 20 March 2017 05:59:58PM *  4 points [-]

One possibility: Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

Comment author: Lumifer 20 March 2017 06:24:55PM 2 points [-]

Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

s/AI/capital/

Now, where have I heard this before..?

Comment author: Viliam 21 March 2017 04:01:58PM 2 points [-]

And your point is...?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting against each other, it should be able to not fuck up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, by simply doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so that soon the whole system ran on completely fake data. Not being bound by ideology, if the AI found out that it is better to leave something for humans to do (quite unlikely IMHO, but let's assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don't think anyone would be able to get any job in a world where the scalable superintelligence is your direct competitor), the former option seems better to me, and I think even Elon Musk wouldn't mind... especially considering that going for the former option will make people much more willing to cooperate with him.

Comment author: Lumifer 21 March 2017 04:38:58PM *  0 points [-]

And your point is...?

Is it really that difficult to discern?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead.

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone

Capital is not just money. You tax, basically, production (=creation of value) and production is not a "benefit of capital".

In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve

Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?

Comment author: Viliam 21 March 2017 04:57:44PM 0 points [-]

Is it really that difficult to discern?

You mean this one?

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on possibility of creating a Friendly superintelligent AI... sure.

Although calling that "communism" is about as much of a central example, as calling the paperclip maximizer scenario "capitalism".

production is not a "benefit of capital".

Capital is a factor in production, often a very important one.

no one should own AI technology. As always, this means a government monopoly

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And "as always" does not seem like a good argument for Singularity scenarios.

In which realistic scenarios do you thing this will be a choice that someone faces?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Comment author: Lumifer 21 March 2017 05:08:27PM *  0 points [-]

this one

That too :-) I am a big fan of this approach.

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work.

But conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work? In particular, the economy will work?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Aaaaand let me quote you yourself from just a sentence back:

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete.

One of the arms of your choice involves Elon Musk (or equivalent) owning the singularity AI, the other gives every human 1/7B ownership share of the same AI. How does that work, exactly?

Besides, I thought that when Rapture comes...err... I mean, when the Singularity happens, humans will not decide anything any more -- the AI will take over and will make the right decisions for them-- isn't that so?

Comment author: gjm 21 March 2017 06:05:39PM 0 points [-]

conditional on finding selfless and competent rulers (note that I'm not talking about the rest of the population), you think that communism will work?

If we're talking about a Glorious Post-Singularity Future then presumably the superintelligent AIs are not only ruling the country and making economic decisions but also doing all the work, and they probably have magic nanobot spies everywhere so it's hard to lie to them effectively. That probably does get rid of the more obvious failure modes of a communist economy.

(If you just put the superintelligent AIs in charge of the top-level economic institutions and leave everything else to be run by the same dishonest and incompetent humans as normal, you're probably right that that wouldn't suffice.)

Comment author: Lumifer 21 March 2017 06:19:48PM *  0 points [-]

Actually, no, we're (at least, I am) talking about pre-Singularity situations were you still have to dig in the muck to grow crops and make metal shavings and sawdust to manufacture things.

Viliam said that the main problem with communism is that the people at the top are (a) incompetent; and (b) corrupt. I don't think that's true with respect to the economy. That is, I agree that communism leads to incompetent and corrupt people rising to the top, but that is not the primary reason why communist economy isn't well-functioning.

I think the primary reason is that communism breaks the feedback loop in the economy where prices and profit function as vital dynamic indicators for resource allocation decisions. A communist economy is like a body where the autonomic nervous system is absent and most senses function slowly and badly (but the brain can make the limbs move just fine). Just making the bureaucrats (human-level) competent and honest is not going to improve things much.

Comment author: gjm 22 March 2017 01:07:20AM 1 point [-]

Maybe I misunderstood the context, but it looked to me as if Viliam was intending only to say that post-Singularity communism might work out OK on account of being run by superintelligent AIs rather than superstupid meatsacks, and any more general-sounding things he may have said about the problems of communism were directed at that scenario.

(I repeat that I agree that merely replacing the leaders with superintelligent AIs and changing nothing else would most likely not make communism work at all, for reasons essentially the same as yours.)

Comment author: fubarobfusco 20 March 2017 06:36:37PM 2 points [-]

String substitution isn't truth-preserving; there are some analogies and some disanalogies there.

Comment author: bogus 21 March 2017 06:03:21PM *  1 point [-]

Sure, but capital is a rather vacuous word. It basically means "stuff that might be useful for something". So yes, talking about democratizing AI is a whole lot more meaningful than just saying "y'know, it would be nice if everyone could have more useful stuff that might help em achieve their goals. Man, that's so deeeep... puff", which is what your variant ultimately amounts to!

Comment author: Lumifer 21 March 2017 06:22:03PM *  0 points [-]

capital is a rather vacuous word. It basically means "stuff that might be useful for something"

Um. Not in economics where it is well-defined. Capital is resources needed for production of value. Your stack of decade-old manga might be useful for something, but it's not capital. The $20 bill in your wallet isn't capital either.

Comment author: satt 24 March 2017 12:55:43AM 0 points [-]

Um. Not in economics where it is well-defined. Capital is resources needed for production of value.

While capital is resources needed for production of value, it's a bit misleading to imply that that's how it's "well-defined" "in economics", since the reader is likely to come away with the impression that capital = resources needed to produce value, even though not all resources needed for production of value are capital. Economics also defines labour & land* as resources needed for production of value.

* And sometimes "entrepreneurship", but that's always struck me as a pretty bogus "factor of production" — as economists tacitly admit by omitting it as a variable from their production functions, even though it's as free to vary as labour.
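(As a concrete illustration - my own addition, and only one common textbook example of what "production function" means here - the Cobb-Douglas form is

$$Y = A \, K^{\alpha} L^{1-\alpha},$$

where output Y is determined by capital K and labour L alone, scaled by a productivity term A; "entrepreneurship" never appears as a separate variable.)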

Comment author: Lumifer 24 March 2017 03:27:28PM 0 points [-]

Sure, but that's all Econ 101 territory and LW isn't really a good place to get some education in economics :-/

Comment author: g_pepper 24 March 2017 01:43:15AM 0 points [-]

The way I remember it from my college days was that the inputs for the production of wealth are land, labor and capital (and, as you said, sometimes entrepreneurship is listed, although often this is lumped in with labor). Capital is then defined as wealth used towards the production of additional wealth. This formulation avoids the ambiguity that you identified.

Comment author: gjm 22 March 2017 01:11:16AM 0 points [-]

None the less, "capital" and "AI" are extremely different in scope and I see no particular reason to think that if "let's do X with capital" turns out to be a bad idea then we can rely on "let's do X with AI" also being a bad idea.

In a hypothetical future where the benefits of AI are so enormous that the rest of the economy can be ignored, perhaps the two kinda coalesce (though I'm not sure it's entirely clear), but that hypothetical future is also one so different from the past that past failures of "let's do X with capital" aren't necessarily a good indication of similar future failure.

Comment author: bogus 21 March 2017 06:51:58PM *  0 points [-]

Capital is resources needed for production of value.

And that stack of decade-old manga is a resource that might indeed provide value (in the form of continuing enjoyment) to a manga collector. That makes it capital. A $20 bill in my wallet is ultimately a claim on real resources that the central bank commits to honoring, by preserving the value of the currency - that makes it "capital" from a strictly individual perspective (indeed, such claims are often called "financial capital"), although it's indeed not real "capital" in an economy-wide sense (because any such claim must be offset by a corresponding liability).

Comment author: Lumifer 21 March 2017 07:03:33PM *  0 points [-]

Sigh. You can, of course, define any word any way you like it, but I have my doubts about the usefulness of such endeavours. Go read.

Comment author: qmotus 21 March 2017 09:47:44AM 0 points [-]

I feel like it's rather obvious that this is approximately what is meant. The people who talk of democratizing AI are, mostly, not speaking about superintelligence or do not see it as a threat (with the exception of Elon Musk, maybe).

Comment author: username2 20 March 2017 04:17:54PM 1 point [-]

Open sourcing all significant advancements in AI and releasing all code under GNU GPL.

Comment author: Viliam 21 March 2017 04:05:52PM 1 point [-]

Tiling the whole universe with small copies of GNU GPL, because each nanobot is legally required to contain the full copy. :D

Comment author: username2 20 March 2017 10:11:23PM 0 points [-]

*GNU AGPL, preferably

Comment author: Lumifer 20 March 2017 03:17:03PM 1 point [-]

Why do you think one exists?

Comment author: moridinamael 20 March 2017 03:55:33PM *  1 point [-]

I try not to assume that I am smarter than everybody if I can help it, and when there's a clear cluster of really smart people making these noises, I at least want to investigate and see whether I'm mistaken in my presuppositions.

To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.

Comment author: bogus 20 March 2017 06:26:02PM *  0 points [-]

To me, "democratize AI" makes as much sense as "democratize smallpox", but it would be good to find out that I'm wrong.

Isn't "democratizing smallpox" a fairly widespread practice, starting from the 18th century or so - and one with rather large utility benefits, all things considered? (Or are you laboring under the misapprehension that the kinds of 'AIs' being developed by Google or Facebook are actually dangerous? Because that's quite ridiculous, TBH. It's the sort of thing for which EY and Less Wrong get a bad name in machine-learning- [popularly known as 'AI'] circles.)

Comment author: moridinamael 20 March 2017 09:30:57PM 1 point [-]

Not under any usual definition of "democratize". Making smallpox accessible to everyone is no one's objective. I wouldn't refer to making smallpox available to highly specialized and vetted labs as "democratizing" it.

Google and/or Deepmind explicitly intend on building exactly the type of AI that I would consider dangerous, regardless of whether or not you would consider them to have already done so.

Comment author: Lumifer 20 March 2017 03:57:26PM 0 points [-]

Links to the noises?

Comment author: moridinamael 20 March 2017 04:03:12PM *  0 points [-]

It's mainly an OpenAI noise but it's been parroted in many places recently. Definitely seen it in OpenAI materials, and I may have even heard Musk repeat the phrase, but can't find links. Also:

YCombinator.

Our long-term goal is to democratize AI. We want to level the playing field for startups to ensure that innovation doesn’t get locked up in large companies like Google or Facebook. If you’re starting an AI company, we want to help you succeed.

which is pretty close to "we don't want only Google and Facebook to have control over smallpox".

Microsoft, in the context of its partnership with OpenAI.

At Microsoft, we believe everyone deserves to be able to take advantage of these breakthroughs, in both their work and personal lives.

In short, we are committed to democratizing AI and making it accessible to everyone.

This is a much more nonstandard interpretation of "democratize". I suppose by this logic, Henry Ford democratized cars?

Comment author: Lumifer 20 March 2017 04:22:57PM *  1 point [-]

Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.

Microsoft means that they want Cortana/Siri/Alexa/Assistant/etc. on every machine and in every home. That's just marketing speak.

Both expressions have nothing to do with democracy, of course.

Comment author: tristanm 20 March 2017 07:08:04PM 0 points [-]

Well, YC means, I think, that AI research should not become a monopoly (via e.g. software patents or by buying every competitor). That sounds entirely reasonable to me.

There are other ways that AI research can become a monopoly without any use of patents or purchases of competitors. For example, a fair bit of research can only be done through heavy computing infrastructure. In some sense places like Google will have an advantage no matter how much of their code is open-sourced (and a lot of it is open source already). Another issue is data, which is a type of capital - though much unlike money - in that there is a limit to how much value you can extract from it, and that limit depends on your computing resources. These are barriers that I think probably can't be lowered even in principle.

Comment author: Lumifer 20 March 2017 07:23:36PM *  0 points [-]

Having advantages in the field of AI research and having a monopoly are very different things.

a fair bit of research can only be done through heavy computing infrastructure

That's not self-evident to me. A fair bit of practical applications (e.g. Siri/Cortana) require a lot of infrastructure. What kind of research can't you do if you have a few terabytes of storage and a couple dozen GPUs? What would a research university be unable to do?

Another issue is data

Data is an interesting issue. But first, the difference between research and practical applications is relevant again, and second, data control is mostly fought over at the legal/government level.

Comment author: tristanm 20 March 2017 09:06:13PM 0 points [-]

It's still the case that a lot of problems in AI and data analysis can be broken down into parallel tasks and massively benefit from just having plenty of CPUs/GPUs available. In addition, a lot of the research work at major companies like Google has gone into making sure that the infrastructure advantage is used to the maximum extent possible. But I will grant you that this may not represent an actual monopoly on anything (except perhaps search). Hardware is still easily available to those who can afford it. But in the context of "democratizing AI", I think we should expect that the firms with the most resources should have significant advantages over small startups in the AI space with not much capital. If I have a bunch of data I need analyzed, will I want to give that job to a new, untested player who may not even have the infrastructure depending on how much data I have, or someone established who I know has the capability and resources?

The issue with data isn't so much about control / privacy, it's mainly the fact that if you give me a truckload of a thousand 2 TB hard drives, each containing potentially useful information, there's really not much I can do with it. Now if I happened to have a massive server farm, that would be a different situation. There's a pretty big gulf in value for certain objects depending on my ability to make use of it, and I think data is a good example of those kinds of objects.

Comment author: Lumifer 20 March 2017 09:16:56PM 0 points [-]

we should expect that the firms with the most resources should have significant advantages over small startups

So how this is different from, say, manufacturing? Or pretty much any business for the last few centuries?

Comment author: WalterL 20 March 2017 03:43:48PM 0 points [-]

"Make multiple AIs that can restrain one another instead of one tyrannical MCP"?

Comment author: Lumifer 24 March 2017 07:58:29PM 1 point [-]

I, for one, welcome our new paperclip Overlord.

Comment author: dglukhov 21 March 2017 09:05:13PM 1 point [-]

Not the first criticism of the Singularity, and certainly not the last. I found this on reddit, just curious what the response will be here:

"I am taking up a subject at university, called Information Systems Management, and my teacher is a Futurologist! He refrains from even teaching the subject just to talk about technology and how it will solve all of our problems and make us uber-humans in just a decade or two. He has a PhD in A.I. and has already talked to us about nanotechnology getting rid of all diseases, A.I. merging with us, smart cities that are controlled by A.I. like the Fujisawa project, and a 20 minute interview to Ray Kurzweil about how the singularity will make us all immortal by 2045.

Now, I get triggered as fuck whenever my teacher opens his mouth, because not only does he sell these claims with no other basis than "technology is growing exponentially", but he also implies that all of our problems can and will be solved by it, empowering us to keep fucking up things along the way. But I prefer to stay in silence, because most idiots at my class are beyond saving anyway and I don't get off on confronting others, but that is beside the point.

I wanted to make a case for why the singularity is beyond the limits of this current industrial civilization, and I will base my assessment on these pillars:

-Declining Energy Returns: We are living in a world where the return for oil is what, a tenth of what it used to be last century? Not to mention that even this lower-quality oil is facing depletion, at least from profitable sources. Renewables are at far too early a stage to hope they can run an industrial, exponentially growing civilization like ours at this point, and there are some physical laws that limit the amount of energy that can actually be absorbed from the sun, along with what can be efficiently stored in batteries, not to mention intermittency issues, transport costs, etc. One would think that more complex civilizations require more and more energy, especially at exponential growth rates, but the only argument that futurists spew out is some free market bullshit about solar, or, like my teacher did, simply expecting the idea will come true because humans are awesome and technology is increasing at exponential rates. These guys think applied science and technology exist in a vacuum, which brings me to the next point.

-Economic feasibility: I know it is easy to talk about the wonders of tech and the bright future ahead of us, when one lives in the developed world, is part of a privileged socio-economic class, and is thus isolated from 99% of the misery of this planet. There are people today that cannot afford clean water. In fact, most people that are below the top 20% of the population in terms of income probably won't be able to afford many of the new technological developments any more than they do today. In fact, if the wealth gap keeps increasing, only the top 1% would be able to turn into cyborgs or upload their minds into robots or whatever it is that these guys preach. I think the argument of a post-scarcity era is a lot less compelling once you realize it will only benefit a portion of the populations of developed countries.

-Political resistance and corruption: Electric cars have been a thing ever since the 20th century, and who knows what technologies have been hidden and lobbied against by the big corporations that rule this capitalist system. Yet the only hope for the singularity is that it is somehow profitable for the stockholders. Look at planned obsolescence. We could have products that are 100 times more durable, that are more efficient, that are safer, that pollute less, but then where would profits go? Who is to tell you that they won't do the same in the future? In fact, a big premise of smart cities is that they will reduce crime by constant surveillance; in Fujisawa every lightpost triggered a motion camera and houses had centralized information centers that could be easily turned into Orwellian control devices, which sounds terrifying to me. We will have to wait and see how the middle class and below react to automation taking many jobs, and how the UBI experiment is carried out, if at all.

-Time constraints: Finally, people hope for the Singularity to reach us by 2045. That would imply that we need around 30 years of constant technological development, disregarding social decline, resource depletion, global warming, crop failures, droughts, etc. If civilization collapses before 2045, which I think is very likely, then that won't come around and save us, and as far as I know, there is no other hope from futurologists other than a major breakthrough in technology at this point. Plus, like the video "Are humans smarter than bacteria?" very clearly states, humans need time to figure out the problems we face, then we need some more time to design some solution, then we need even more time to debate, lobby and finally implement some form of the original solution, and hope no other problems arise from it, because as we know technology is highly unpredictable and many times it creates more problems than it solves. Until we do all that, on a global scale, without destroying civil liberties, I think we will all be facing severe environmental problems, and developing countries may very well have fallen apart long before that.

What do you think? Am I missing something? What is the main force that will stop us reaching the Singularity in time? "

Comment author: cousin_it 21 March 2017 10:38:36PM *  6 points [-]

I think most people on LW also distrust blind techno-optimism, hence the emphasis on existential risks, friendliness, etc.

Comment author: knb 23 March 2017 04:40:01AM 3 points [-]

Like a lot of reddit posts, it seems like it was written by a slightly-precocious teenager. I'm not much of a singularity believer but the case is very weak.

"Declining Energy Returns" is based on the false idea that civilization requires exponential increases in energy input, which has been wrong for decades. Per capita energy consumption has been stagnant in the first world for decades, and most of these countries have stagnant or declining populations. Focusing on EROI and "quality" of oil produced is a mistake. We don't lack for sources of energy; the whole basis of the peak oil collapse theory was that other energy sources can't replace oil's vital role as a transport fuel.

"Economic feasability" is non-sequitur concerned with whether gains from technology will go only to the rich, not relevant to whether or not it will happen.

"Political resistance and corruption" starts out badly as the commenter apparently believes in the really dumb idea that electric cars have always been a viable competitor to internal combustion but the idea was suppressed by some kind of conspiracy. If you know anything about the engineering it took to make electric cars semi-viable competitors to ICE, the idea is obviously wrong. Even without getting into the technical aspect, there are lots of countries which had independent car industries and a strong incentive to get off oil (e.g. Germany and Japan before and during WW2).

Comment author: dglukhov 23 March 2017 01:37:23PM *  0 points [-]

"Declining Energy Returns" is based on the false idea that civilization requires exponential increases in energy input, which has been wrong for decades. Per capita energy consumption has been stagnant in the first world for decades, and most of these countries have stagnant or declining populations. Focusing on EROI and "quality" of oil produced is a mistake. We don't lack for sources of energy; the whole basis of the peak oil collapse theory was that other energy sources can't replace oil's vital role as a transport fuel.

This seems relevant. These statistics do not support your claim that energy consumption per capita has been stagnant. Did I miss something? Perhaps you're referring strictly to stagnation in per capita use of fossil fuels? Do you have different sources of support? After all, this is merely one data point.

I'm not particularly sure where I stand with regards to the OP, part of the reason I brought it up was because this post sorely needed evidence to be brought up to the table, none of which I see.

I suppose this lack of support gives a reader the impression of naiveté, but I was hoping members here would clarify with their own well-founded claims. Thank you for the debunks; I'm sure there's plenty of literature to link to as such, which is exactly what I'm after. The engineering behind electric cars, and perhaps its history, will be a topic I'll be investigating myself in a bit. If you have any preferred sources for teaching purposes, I'd love a link.

Comment author: knb 25 March 2017 05:54:42AM *  2 points [-]

This seems relevant. These statistics do not support your claim that energy consumption per capita has been stagnant. Did I miss something?

Yep, your link is for world energy use per capita; my claim is that it was stagnant for the first world. E.g. in the US it peaked in 1978 and has since declined by about a fifth. The developed world is more relevant because that's where cutting-edge research and technological advancement happens. Edit: here's a graph from the source you provided showing the energy consumption history of the main developed countries, all of which follow the same pattern.

I don't really have a single link to sum up the difference between engineering an ICE car with adequate range and refuel time and a battery-electric vehicle with comparable range/recharge time. If you're really interested I would suggest reading about the early history of motor vehicles and then reading about the decades long development history of lithium-ion batteries before they became a viable product.

Comment author: ChristianKl 22 March 2017 10:57:47AM 3 points [-]

It seems to me like a long essay for a reasonable position written by someone who doesn't make a good case.

Solar does get exponentially cheaper at a rate of doubling efficiency every 7 years. It's a valid answer to the question of where the energy will come from if the timeline is long enough. The article gives the impression that the poor in the third world stay poor. That's a popular misconception; in reality the fight against global poverty is making steady progress. Much more than the top 20% of this planet has mobile phones. Most people benefit from technologies like smart phones.

The "planned obsolescence" conspiracy theory narrative also doesn't really help with understanding how technology get's deployed.

Comment author: dglukhov 22 March 2017 02:37:08PM *  0 points [-]

Much more than the top 20% of this planet has mobile phones. Most people benefit from technologies like smart phones.

I wouldn't cherry-pick one technological example and make a case for the rest of available technological advancements as conducive to closing the financial gap between people. Tech provides for industry, industry provides for shareholders, shareholders provide for themselves (here's one data point in a field of research exploring the seemingly direct relationship between excess resource acquisition and antisocial tendencies; I will work on finding more, if any). I am necessarily glossing over the extraneous details, but since the corporate incentive system provides for a whole host of advantages, and since it has power over top-level governments (lobbying success statistics come to mind), this incentive system is necessarily prevalent and of major interest when tech advances are the topic of discussion. Those with power get tech benefits first; if any benefits exist beyond that point, fantastic. If not, the obsolescence conspiracy seems the likely next scenario. I have no awareness of an incentive system that dictates that those with money and power need necessarily provide for everyone else. If there were one, I wouldn't be the only unaware one, since clearly the OP isn't aware of such a thing either.

Are there any technological advancements you can think of that necessarily trickle down the socio-economic scale and help those poorest of the poor? My first idea would be agricultural advancements, but then I'd have to go and collect statistics on rates of food acquisition for the poorest subset of the world population, with maybe a start in the world census data for agriculture, which may not even have the data I'd need. Any ideas of your own?

Comment author: ChristianKl 23 March 2017 10:48:14AM 1 point [-]

I wouldn't cherry-pick one technological example and make a case for the rest of available technological advancements as conducive to closing the financial gap between people.

That sentence is interesting. The thing I care about is improving the lives of the poor.

I have no awareness of an incentive system that dictates that those with money and power need necessarily provide for everyone else.

If you look at Bill Gates and Warren Buffet they see purpose in helping the poor.

In general employing poor people to do something for you and paying them a wage is also a classic way poor people get helped.

I wouldn't cherry-pick one technological example and make a case for the rest of available technological advancements as conducive to closing the financial gap between people.

The great thing about smart phones is that they allow for software to be distributed with little cost for additional copies. Having a smart phone means that you can use Duolingo to learn English for free.

Are there any technological advancements you can think of that necessarily trickle down the socio-economic scale and help those poorest of the poor?

We are quite successful in reducing the numbers of the poorest of the poor. We reduced them both in relative and in absolute numbers. It's debatable how much of that is due to new technology and how much is through other factors, but we now have fewer people in extreme poverty.

Comment author: dglukhov 23 March 2017 12:48:28PM *  0 points [-]

If you look at Bill Gates and Warren Buffet they see purpose in helping the poor. In general employing poor people to do something for you and paying them a wage is also a classic way poor people get helped.

I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it. Both of these examples hold about $80 billion in net worth; these are paltry numbers compared to the amount of money circulating in the world today, with GDP estimates around $74 trillion. I am therefore still unaware of an incentive system that helps the poor, and will remain so until I see the majority of this money being circulated and distributed in the manner Gates and Buffett propose.

The great thing about smart phones is that they allow for software to be distributed with little cost for additional copies. Having a smart phone means that you can use Duolingo to learn English for free.

Agreed, and unfortunately utilizing a smartphone to its full benefit isn't necessarily obvious to somebody poor. While one could use it to learn English for free, they could also use it inadvertently as an advertising platform with firms soliciting sales from the user, or just as a means of contact with others willing to stay in contact with them (other poor people, most likely). A smartphone would be an example of a technology that managed to trickle down the socio-economic ladder and help poor people, but it can do harm as well as good, or have no effect at all.

We are quite successful in reducing the numbers of the poorest of the poor. We reduced them both in relative and in absolute numbers. It's debatable how much of that is due to new technology and how much is through other factors but we have now less people in extreme poverty.

Please show me these statistics. Are they adjusted to and normalized relative to population increase?

A cursory search gave me contradictory statistics. http://www.statisticbrain.com/world-poverty-statistics/

I'd like to know where you get such sources, because a growing income gap between rich and poor necessarily implies one of three things: the rich are getting richer, the poor are getting poorer, or both.

Note: we are discussing relative poverty, or absolute poverty? I'd like to keep it to absolute poverty, since meeting basic human needs is a solid baseline as long as you trust nutritional data sources and research with regards to health. If you do not trust our current understanding of human health, then relative poverty is probably the better topic to discuss.

EDIT: found something to support your conclusion; the first chart shows the decrease in the population of people in the lowest economic tier. These are not up to date, only comparing statistics from 2001 to 2011. I'm having a hard time finding anything more recent.

Comment author: ChristianKl 24 March 2017 11:13:48PM 0 points [-]

I'm happy that these people have taken actions to support such stances. However, I'm more interested in the incentive system, not a few outliers within it.

When basic needs are fulfilled many humans tend to want to satisfy needs around contributing to making the world a better place. It's a basic psychological mechanism.

Comment author: dglukhov 25 March 2017 01:50:40AM 0 points [-]

When basic needs are fulfilled many humans tend to want to satisfy needs around contributing to making the world a better place. It's a basic psychological mechanism.

This completely ignores my previous point. A few people who managed to self-actualize within the current global economic system will not change that system. As I previously mentioned, I am not interested in outliers, but rather systematic trends in economic behavior.

Comment author: ChristianKl 25 March 2017 08:51:03AM 0 points [-]

Bill Gates and Warren Buffett aren't only outliers with respect to donating but also in being among the most wealthy people. Both of them basically believe that it makes more sense to use their fortune for the public good than to leave it to their children.

To the extent that this belief spreads (and it does, with the Giving Pledge), you see more money being used this way.

Comment author: ChristianKl 24 March 2017 02:38:34PM *  0 points [-]

they could also use it inadvertently as an advertising platform with firms soliciting sales from the user, or just as a means of contact with others willing to stay in contact with them (other poor people, most likely)

The ability to stay in contact with other poor people is valuable. If you can send the person in the next village a message you don't have to walk to them to communicate with them.

Please show me these statistics. Are they adjusted to and normalized relative to population increase?

What have the millennium development goals achieved?

MDG 1: The number of people living on less than $1.25 a day has been reduced from 1.9 billion in 1990 to 836 million in 2015

Comment author: dglukhov 25 March 2017 01:55:46AM 0 points [-]

The ability to stay in contact with other poor people is valuable.

It is also dangerous, people are unpredictable and, similarly to my point about phones, can cause good, harm, or nothing at all.

A phone is not inherently, intrinsically good, it merely serves as a platform to any number of things, good, bad or neutral.

What have the millennium development goals achieved?

I hope this initiative continues to make progress and that policy doesn't suddenly turn upside-down anytime soon. Then again, Trump is president, Brexit is a possibility, and economic collapse an always probable looming threat.

Comment author: ChristianKl 25 March 2017 08:51:59AM 0 points [-]

A phone is not inherently, intrinsically good, it merely serves as a platform to any number of things, good, bad or neutral.

That's similar to saying that a car is not intrinsically good. Both technologies enable a lot of other actions.

Comment author: dglukhov 27 March 2017 01:44:35PM *  0 points [-]

Cars also directly involve people in motor vehicle accidents, one of the leading causes of death in the developed world. Cars, and motor vehicles in general, also contribute to an increasingly alarming concentration of emissions into the atmosphere, with adverse effects to follow, most notably global warming. My point still stands.

A technology is only inherently good if it solves more problems than it causes, with each problem weighed by their impacts on the world.

Comment author: Elo 27 March 2017 01:58:40PM 0 points [-]

Cars are net positive.

Edit: ignoring global warming because it's really hard to quantify. Just comparing deaths to global productivity increase because of cars. Cars are a net positive.

Edit 2:

Edit: ignoring global warming because it's really hard to quantify

Clarification - it's hard to quantify the direct relationship of cars to global warming. Duh there's a relationship, but I really don't want to have a debate here. Ignoring that factor for a moment, net value of productivity of cars vs productivity lost by some deaths. Yea. Let's compare that.

Comment author: Lumifer 22 March 2017 02:41:23PM 0 points [-]

I have no awareness of an incentive system that dictates that those with money and power need necessarily provide for everyone else.

It's called a survival instinct.

Comment author: dglukhov 22 March 2017 02:55:10PM *  0 points [-]

Good luck coalescing that into any meaningful level of resistance. History shows that leaders haven't been very kind to revolutions, and the success rate for such movements isn't necessarily high given the technical limitations.

I say this only because I'm seeing a slow tendency towards an absolution of leader-replacement strategies and sentiments.

Comment author: Lumifer 22 March 2017 04:18:15PM *  0 points [-]

coalescing that in any meaningful level of resistance

Resistance on whose part to what?

History shows that leaders haven't been very kind to revolutions

Revolutions haven't been very kind to leaders, too -- that's the point. When the proles have nothing to lose but their chains, they get restless :-/

an absolution of leader-replacement strategies

...absolution?

Comment author: Viliam 23 March 2017 10:31:26AM 3 points [-]

When the proles have nothing to lose but their chains, they get restless :-/

Is this empirically true? I am not an expert, but it seems to me that many revolutions are caused not by consistent suffering -- which makes people adjust to the "new normal" -- but rather by situations where the quality of life increases a bit -- which gives people expectations of improvement -- and then either fails to increase further, or even falls back a bit. That is when people explode.

A child doesn't throw a tantrum because she never had a chocolate, but she will if you give her one piece and then take away the remaining ones.

Comment author: Lumifer 24 March 2017 03:25:34PM 0 points [-]

seems to me that many revolutions are caused not by consistent suffering

The issue is not the level of suffering, the issue is what do you have to lose. What's the downside to burning the whole system to the ground? If not much, well, why not?

That is when people explode

Middle class doesn't explode. Arguably that's the reason why revolutions (and popular uprisings) in the West have become much more rare than, say, a couple of hundred years ago.

Comment author: gjm 24 March 2017 06:06:53PM *  3 points [-]

The American revolution seems to have been a pretty middle-class affair. The Czech(oslovakian) "Velvet Revolution" and the Estonian "Singing Revolution" too, I think. [EDITED to add:] In so far as there can be said to be a middle class in a communist state.

Comment author: Lumifer 24 March 2017 07:29:47PM 0 points [-]

Yeah, Eastern Europe / Russia is an interesting case. First, as you mention, it's unclear to what degree we can speak of the middle class there during the Soviet times. Second, some "revolutions" there were velvet primarily because the previous power structures essentially imploded leaving vacuum in their place -- there was no one to fight. However not all of them were and the notable post-Soviet power struggle in the Ukraine (the "orange revolution") was protracted and somewhat violent.

So... it's complicated? X-)

Comment author: Viliam 24 March 2017 10:52:04PM 1 point [-]

The issue is not the level of suffering, the issue is what do you have to lose.

More precisely, it is what you believe you have to lose. And humans seem to have a cognitive bias of taking all advantages of the current situation for granted, if they have existed for at least a decade.

So when people see more options, they are going to be like: "Worst case, we fail and everything stays like it is now. Best case, everything improves. We just have to try." Then they sometimes get surprised, for example when millions of them starve to death, learning too late that they actually had something to lose.

In some sense, Brexit or Trump are revolutions converted by the mechanism of democracy into mere dramatic elections. People participating at them seem to have the "we have nothing to lose" mentality. I am not saying they are going to lose something as a consequence, only that the possibility of such outcome certainly exists. I wouldn't bother trying to convince them about that, though.

Comment author: MaryCh 24 March 2017 04:43:47PM 0 points [-]

(Yes it does.)

Comment author: dglukhov 22 March 2017 05:05:55PM *  0 points [-]

Resistance on whose part to what?

Resistance of those without resources against those with amassed resources. We can call them rich vs. poor, leaders vs. followers, advantaged vs. disadvantaged. the advantaged groups tend to be characteristically small, the disadvantaged large.

Revolutions haven't been very kind to leaders, too -- that's the point. When the proles have nothing to lose but their chains, they get restless :-/

Restlessness is useless when it is condensed and exploited to empower those chaining them. For example, rebellion is an easily bought commercial product, a socially/tribally recognized garb you can wear. It is far easier to look the part of a revolutionary than to actually do anything that could potentially defy the oppressive regime you might be a part of. There are other examples, which leads me to my next point.

...absolution?

It would be in the best interest of leaders to optimize for a situation where rebellion cannot ever arise; that is the single threat any self-interested leader with the goal of continuing their reign needs to worry about. Whether it involves mass surveillance, economic manipulation, or simply despotic control is largely irrelevant; the idea behind them is what counts. Now when you bring up the subject of technology, any smart leader with a stake in their reign time will immediately seize any opportunity to extend it. Set a situation up to create technology that necessarily mitigates the potential for rebellion to arise, and you get to rule longer.

This is a theoretical scenario. It is a scary one, and the prevalence of conspiracy theories arising from such a theory simply plays to biases founded in fear. And of course, with bias comes the inevitable rationalist backlash to such idea. But I'm not interested in this political discourse, I just want to highlight something.

The scenario establishes an optimization process. Optimization for control. It is always more advantageous for a leader to worry more about their reign and extend it than to be benevolent, a sort of tragedy of the commons for leaders. The natural in-system solution for this optimization problem is to eliminate all potential sources of competition. The out-system solution for this optimization problem is mutual cooperation and control-sharing to meet certain needs and goals.

There currently exists no out-system incentive that I am currently aware of. Rationality doesn't count, since it still leads to in-system outcomes (benevolent leaders).

EDIT: I just thought of an ironic situation. The current solution to the tragedy of the commons most prevalent is through the use of government regulation. This is only a Band-Aid, since you get a recursion issue of figuring out who's gonna govern the government.

Comment author: Lumifer 22 March 2017 06:28:42PM *  0 points [-]

Restlessness is useless when it is condensed and exploited to empower those chaining them.

And when it's not? Consider Ukraine. Or if you want to go a bit further in time the whole collapse of the USSR and its satellites.

It is always more advantageous for a leader to worry more about their reign and extend it than to be benevolent

I don't see why. It is advantageous for a leader to have satisfied and so complacent subjects. Benevolence can be a good tool.

Comment author: dglukhov 22 March 2017 08:29:26PM *  0 points [-]

And when it's not? Consider Ukraine. Or if you want to go a bit further in time the whole collapse of the USSR and its satellites.

Outcompeted by economic superpowers. Purge people all you want; if there are advantages to being integrated into the world economic system, the people who explicitly leave will suffer the consequences. China did not choose such a fate, but neither is it rebelling.

I don't see why. It is advantageous for a leader to have satisfied and so complacent subjects. Benevolence can be a good tool.

Benevolence is expensive. You will always have an advantage in paying your direct subordinates (generals, bankers, policy-makers, etc.) rather than the bottom rung of the economic ladder. If you endorse those who cannot keep you in power, those that would normally keep you in power will simply choose a different leader (who's probably going to endorse them more than you are). Of course, your subordinates are inevitably dealing with the exact same problem, and chances are they too will optimize by supporting those who can keep them in power. There is no in-system incentive to be benevolent. You could argue a traditional republic tries to circumvent this by empowering those on the bottom to work better (which has no other choice but to improve living conditions), but the amount of uncertainty for the leader increases, and leaders in this system do not enjoy extended reigns. To optimize around this, you absolve rebellious sentiment.

Convince your working populace that they are happy (whether they're happy or not), and your rebellion problem is gone. There is, therefore, still no in-system incentive to be benevolent (this is just a Band-Aid), the true incentive is to get rid of uncertainty as to the loyalty of your subordinates.

Side-note: analysis of the human mind scares me in a way. To be able to know precisely how to manipulate the human mind makes this goal much easier to attain. For example, take any data analytics firm that sell their services for marketing purposes. They can collaborate with social media companies such as facebook (which currently has over 1.7 billion active monthly users as data points, though perhaps more since this is old data), where you freely give away your personal information, and get a detailed understanding of population clusters in regions with access to such services.

Comment author: markan 20 March 2017 06:29:30PM 1 point [-]

I've been writing about effective altruism and AI and would be interested in feedback: Effective altruists should work towards human-level AI

Comment author: ChristianKl 22 March 2017 12:16:43PM 2 points [-]

A good metaphor is a cliff. A cliff poses a risk in that it is physically possible to drive over it. In the same way, it may be physically possible to build a very dangerous AI. But nobody wants to do that, and—in my view—it looks quite avoidable.

That sounds naive and gives the impression that you haven't taken the time to understand the AI risk concerns. You provide no arguments besides the fact that you don't see the problem of AI risk.

The prevailing wisdom in this community is that most GAI designs are going to be unsafe and a lot of the unsafety isn't obvious beforehand. There's the belief that if the value alignment problem isn't solved before human level AGI, that means the end of humanity.

Comment author: dogiv 20 March 2017 06:58:18PM 1 point [-]

The idea that friendly superintelligence would be massively useful is implicit (and often explicit) in nearly every argument in favor of AI safety efforts, certainly including EY and Bostrom. But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development. I am not convinced.

Your argument rests on the proposition that current research on AI is so specific that its contribution toward human-level AI is very small, so small that the modest efforts of EAs (compared to all the massive corporations working on narrow AI) will speed things up significantly. In support of that, you mainly discuss vision--and I will agree with you that vision is not necessary for general AI, though some form of sensory input might be. However, another major focus of corporate AI research is natural language processing, which is much more closely tied to general intelligence. It is not clear whether we could call any system generally intelligent without it.

If you accept that mainstream AI research is making some progress toward human-level AI, even though it's not the main intention, then it quickly becomes clear that EA efforts would have greater marginal benefit in working on AI safety, something that mainstream research largely rejects outright.

Comment author: MrMind 22 March 2017 11:07:09AM 0 points [-]

But you seem to be making the much stronger claim that we should therefore altruistically expend effort to accelerate its development.

This is almost the inverse Basilisk argument.

Comment author: turchin 20 March 2017 07:12:00PM 0 points [-]

If you prove that HLAI is safer than a narrow AI turning into a paper clip maximiser, that is a good EA case.

If you prove that the risk from synthetic biology is extremely high if we do not create HLAI in time, that would also support your point of view.

Comment author: mortal 25 March 2017 01:11:42PM 0 points [-]

What do you think of the idea of 'learning all the major mental models' - as promoted by Charlie Munger and FarnamStreet? These mental models also include cognitive fallacies, one of the major foci of LessWrong.

I personally think it is a good idea, but it doesn't hurt to check.

Comment author: ChristianKl 27 March 2017 08:31:18AM 0 points [-]

Learning different mental models is quite useful.

On the other hand I'm not sure that it makes sense to think that there's one list with "the major mental models". Many fields have their own mental models.

Comment author: PhilGoetz 25 March 2017 04:37:54AM 0 points [-]

The main page lesswrong.com no longer has a link to the Discussion section of the forum, nor a login link. I think these changes are both mistakes.

Comment author: TheAncientGeek 26 March 2017 09:00:57AM 0 points [-]

Yep.

Comment author: username2 24 March 2017 11:06:18AM 0 points [-]

Something happened to the mainpage. It no longer contains links to Main and Discussion.

Comment author: username2 24 March 2017 12:14:55PM 0 points [-]

Preparing for the closure of the discussion forums? "Management" efforts to kickstart things with content-based posts seem to have stalled after the flurry in Nov/Dec.

Comment author: Elo 24 March 2017 12:14:36PM 0 points [-]

yes, we are working on it.

Comment author: Bound_up 20 March 2017 11:49:25PM 0 points [-]

Suppose there are 100 genes which figure into intelligence, the odds of getting any one being 50%.

The most common result would be for someone to get 50/100 of these genes and have average intelligence.

Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.

And so on, until at the extremes of the scale, such a small number of people get 0 or 100 of them that no one we've ever heard of or has ever been born has had all 100 of them.

As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
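
(A quick back-of-the-envelope sketch of this toy model in Python, assuming exactly the setup above: 100 independent genes, each inherited with probability 50%. It only illustrates how fast the extreme counts become rare; it is not a claim about real genetics.)

from math import factorial

def prob_exactly(k, n=100, p=0.5):
    # binomial probability of inheriting exactly k of the n hypothetical genes
    return factorial(n) // (factorial(k) * factorial(n - k)) * p**k * (1 - p)**(n - k)

print(prob_exactly(50))   # ~0.080, the most common outcome
print(prob_exactly(60))   # ~0.011, an order of magnitude rarer
print(prob_exactly(100))  # 2**-100, about 7.9e-31: effectively nobody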

Comment author: Viliam 21 March 2017 04:19:33PM *  1 point [-]

Let me be the one to describe this glass as half-empty:

If there are 100 genes that participate in IQ, it means that there exists an upper limit to human IQ, i.e. when you have all 100 of them. (Ignoring the possibility of new IQ-increasing mutations for the moment.) Unlike the mathematical bell curve which -- mathematically speaking -- stretches into infinity, this upper limit of human IQ could be relatively low; like maybe IQ 200, but definitely no Anasûrimbor Kellhus.

It may turn out that to produce another Einstein or von Neumann, you need a rare combination of many factors, where having IQ close to the upper limit is necessary but not sufficient, and the rest is e.g. nutrition, personality traits, psychological health, and choices made in life. So even if you genetically produce 1000 people with the max IQ, barely one of them becomes functionally another Einstein. (But even then, 1 in 1000 is much better than 1 per generation globally.)

(Actually, this is my personal hypothesis of IQ, which -- if true -- would explain why different populations have more or less the same average IQ. Basically, let's assume that having all those IQ genes gives you IQ 200, and that all lower IQ is a result of mutational load, and IQ 100 simply means a person with average mutational load. So even if you populated a new island with Mensa members, in a few generations some of them would receive bad genes not just by inheritance but also by random non-fatal mutations, gradually lowering the average IQ to 100. On the other hand, if you populated a new island with retards, as long as all the IQ genes are present in at least some of them, in a few generations natural selection would spread those genes in the population, gradually increasing the average IQ to 100.)

Comment author: Lumifer 21 March 2017 04:26:16PM 3 points [-]

it means that there exists an upper limit to human IQ

I'm pretty sure that there is an upper limit to the IQ capabilities of a blob of wetware that has to fit inside a skull.

Comment author: gathaung 27 March 2017 04:04:15PM *  1 point [-]

AFAIK (and Wikipedia tells us), this is not how IQ works. For measuring intelligence, we get an "ordinal scale", i.e. a ranking between test subjects. An honest report would be "you are in the top such-and-so percent". For example, testing someone as "one-in-a-billion performant" is not even wrong; it is meaningless, since we have not administered one billion IQ tests over the course of human history, and have no idea what one-in-a-billion performance on an IQ test would look like.

Because IQ is designed by people who would try to parse HTML with a regex (I cannot think of a worse insult here), it is normalized to a normal distribution. This means that one applies the inverse error function, with an SD of 15 points, to the percentile data. Hence, IQ is Gaussian by definition. To compare, use e.g. Python as a handy pocket calculator:

from math import *

iqtopercentile = lambda x: erfc((x-100)/15)/2

iqtopercentile(165)

4.442300208692339e-10

So we see that a claim of any human having an IQ of 165+ is statistically meaningless. Even extrapolating to all of human history, an IQ of 180+ is meaningless:

iqtopercentile(180)

2.3057198811629745e-14

Yep, by the current definition you would need to test 10^14 humans to get one that manages an IQ of 180. If you test 10^12 humans and one god-like super-intelligence, then the super-intelligence gets an IQ of maybe 175 -- because you should not apply the inverse error function to an ordinal scale, because ordinal scales cannot capture bimodal distributions. Trying to do so invites eldritch horrors onto our plane who will parse HTML with a regex.

Comment author: Good_Burning_Plastic 28 March 2017 03:47:01PM *  0 points [-]

iqtopercentile = lambda x: erfc((x-100)/15)/2

The 15 should be (15.*sqrt(2)) actually, resulting in iqtopercentile(115) = 0.16 as it should be rather than the 0.079 your expression gives, iqtopercentile(165) = 7.3e-6 (i.e. about 7 such people on average in a city of 1 million inhabitants), and iqtopercentile(180) = 4.8e-8 (i.e. several hundred such people in the world).

(Note also that in Python 2, (x-100)/15 returns an integer whenever x is an integer.)
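
(Putting the correction together, a runnable version of the little calculator under the same convention: IQ scores defined as Gaussian with mean 100 and SD 15. The trailing dots force float division in Python 2.)

from math import erfc, sqrt

def iqtopercentile(x):
    # fraction of the population at or above IQ x, under the Gaussian convention
    return erfc((x - 100.) / (15. * sqrt(2))) / 2

print(iqtopercentile(115))  # ~0.159, roughly one person in six
print(iqtopercentile(165))  # ~7.3e-6
print(iqtopercentile(180))  # ~4.8e-8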

Comment author: Viliam 28 March 2017 10:50:36AM *  0 points [-]

Yeah, I agree with everything you wrote here. For extra irony, I also have Mensa-certified IQ of 176. (Which would put me 1 IQ point above the godlike superintelligence. Which is why I am waiting for Yudkowsky to build his artificial intelligence, which will become my apprentice, and together we will rule the galaxy.)

Ignoring the numbers, my point, which I probably didn't explain well, was this:

  • There is an upper limit to biological human intelligence (ignoring new future mutations), i.e. getting all the intelligence genes right.

  • It is possible that people with this maximum biological intelligence are actually less impressive than what we would expect. Maybe they are at an "average PhD" level.

  • And what we perceive as geniuses, e.g. Einstein or von Neumann, that's actually a combination of high biological intelligence and many other traits.

  • Therefore, a genetic engineering program creating thousand new max-intelligence humans could actually fail to produce a new Einstein.

Comment author: gathaung 28 March 2017 03:20:01PM 0 points [-]

Congrats! This means that you are a Mensa-certified very one-in-a-thousand-billion-special snowflake! If you believe in the doomsday argument then this ensures either the continued survival of bio-humans for another thousand years or widespread colonization of the solar system!

On the other hand, this puts quite the upper limit on the (institutional) numeracy of Mensa... wild guessing suggests that at least one in 10^3 people has sufficient numeracy to be incapable of certifying an IQ of 176 with a straight face, which would give us an upper bound on the NQ (numeracy quotient) of Mensa at 135.

(sorry for the snark; it is not directed at you but at the clowns at Mensa, and I am not judging anyone for having taken these guys seriously at a younger age)

Regarding your serious points: Obviously you are right, and equally obviously luck (living at the right time and encountering the right problem that you can solve) also plays a pretty important role. It is just that we do not have sensible definitions for "intelligence".

IQ is by design incapable of describing outliers, and IMHO mostly nonsense even in the bulk of the distribution (but reasonable people may disagree here). Also, even if you somehow construct a meaningful linear scale for "intelligence", then I very strongly suppose that the distribution will be very far from Gaussian at the tails (trivially so at the lower end, nontrivially so at the upper end). Also, applying the inverse error-function to ordinal scales... why?

Comment author: gjm 29 March 2017 12:20:32PM 2 points [-]

On the other hand, any regular reader of LW will (1) be aware that LW folks as a population are extremely smart and (2) notice that Viliam is demonstrably one of the smartest here, so the Mensa test got something right.

Of course any serious claim to be identifying people five standard deviations above average in a truly normally-distributed property is bullshit, but if you take the implicit claim behind that figure of 176 to be only "there's a number that kinda-sorta measures brainpower, the average is about 100, about 2% are above 130, higher numbers are dramatically rarer, and Viliam scored 176 which means he's very unusually bright" then I don't think it particularly needs laughing at.

Comment author: gathaung 16 May 2017 03:49:51PM 0 points [-]

It was not my intention to make fun of Viliam; I apologize if my comment gave this impression.

I did want to make fun of the institution of Mensa, and stand by them deserving some good-natured ridicule.

I agree with your charitable interpretation about what an IQ of 176 might actually mean; thanks for stating this in such a clear form.

Comment author: Viliam 29 March 2017 12:08:02AM 0 points [-]

Well, Mensa has sucked at numbers since its very beginning. The original plan was to select the 1% most intelligent people, but by mistake they made it 2%, and when they later found out, they decided to just keep it as it is.

"More than two sigma, that means approximately 2%, right?" "Yeah, approximately." Later: "You meant, 2% at both ends of the curve, so 1% at each, right?" "No, I meant 2% at each." "Oh, shit."

Comment author: tut 29 March 2017 11:51:26AM 0 points [-]

What? 2 sigma means 2.5% at each end.

Comment author: Lumifer 29 March 2017 03:07:52PM 0 points [-]

2 sigma means 2.5% at each end

That sentence is imprecise.

If you divide a standard Gaussian at the +2 sigma boundary, the probability mass to the left will be 97.5% and to the right ("the tail") -- 2.5%.

So two sigmas don't mean 2.5% at each end, they mean 2.5% at one end.

On the other hand, if you use a 4-sigma interval from -2 sigmas to +2 sigmas, the probability mass inside that interval will be 95% and both tails together will make 5% or 2.5% each.
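
(For the exact figures under this Gaussian convention, a quick sketch: the mass beyond +2 sigma is about 2.3% per tail; the round 2.5%-per-tail number actually corresponds to 1.96 sigma.)

from math import erfc, sqrt

def upper_tail(z):
    # probability mass above +z sigma for a standard Gaussian
    return erfc(z / sqrt(2)) / 2

print(upper_tail(2.0))      # ~0.0228, one tail beyond +2 sigma
print(2 * upper_tail(2.0))  # ~0.0455, both tails beyond 2 sigma
print(upper_tail(1.96))     # ~0.0250, the conventional one-tail cutoff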

Comment author: Viliam 29 March 2017 01:34:47PM 0 points [-]

Apparently, Mensa hasn't gotten any better at math since then. As far as I know, they still use "2 sigma" and "top 2%" as synonyms. Well, at least those of them who know what "sigma" means.

Comment author: Lumifer 28 March 2017 02:47:00PM *  0 points [-]

Therefore, a genetic engineering program creating thousand new max-intelligence humans could actually fail to produce a new Einstein.

Only if what makes von Neumanns and Einsteins is not heritable. Once you have a genetic engineering program going, you are not limited to adjusting just IQ genes.

Comment author: philh 21 March 2017 12:36:24PM 1 point [-]

You're also assuming that the genes are independently distributed, which isn't true if intelligent people are more likely to have kids with other intelligent people.

Comment author: MrMind 21 March 2017 08:20:44AM 0 points [-]

Well, yes. You have re-discovered the fact that a binomial distribution resembles, in the limit, a normal distribution.

Comment author: Qiaochu_Yuan 21 March 2017 04:33:40AM 0 points [-]

I mean, yes, of course. You might be interested in reading about Stephen Hsu.