
Comment author: Lumifer 22 March 2017 04:18:15PM *  0 points [-]

coalescing that in any meaningful level of resistance

Resistance on whose part to what?

History shows that leaders haven't been very kind to revolutions

Revolutions haven't been very kind to leaders either -- that's the point. When the proles have nothing to lose but their chains, they get restless :-/

an absolution of leader-replacement strategies

...absolution?

Comment author: Viliam 23 March 2017 10:31:26AM 2 points [-]

When the proles have nothing to lose but their chains, they get restless :-/

Is this empirically true? I am not an expert, but it seems to me that many revolutions are caused not by consistent suffering -- which makes people adjust to the "new normal" -- but rather by situations where the quality of life increases a bit -- which gives people expectations of improvement -- and then either fails to increase further, or even falls back a bit. That is when people explode.

A child doesn't throw a tantrum because she never had a chocolate, but she will if you give her one piece and then take away the remaining ones.

Comment author: Lumifer 22 March 2017 02:38:52PM *  0 points [-]

Right, but I am specifically interested in Viliam's views about the scenario where there is no AI, but we do have honest and competent rulers.

Comment author: Viliam 23 March 2017 10:17:26AM 0 points [-]

That is completely irrelevant to debates about AI.

But anyway, I object to the premise being realistic. Humans run on "corrupted hardware", so even if they start out honest, competent, rational, and well-meaning, that usually changes very quickly. In the long term, they also get old and die, so what you would actually need is an honest and competent elite group, able to raise and filter its next generation so that it is at least equally honest, competent, rational, well-meaning, and skilled at raising and filtering the next generation for the same qualities.

In other words, you would need to have a group of rulers enlightened enough that they are able to impartially and precisely judge whether their competitors are equally good or somewhat better on the relevant criteria, and in such a case would voluntarily transfer their power to those competitors. -- Which goes completely against what evolution taught us: that if your opponent is better than you, you should use your power to crush him, preferably immediately, while you still have the advantage of power, and before other tribe members notice his superiority and start offering to ally with him against you.

Oh, and this perfect group would also need to be able to overthrow the current power structures and get themselves into positions of power, without losing any of their qualities in the process. That is, they have to be competent enough to overthrow an opponent with orders of magnitude more power (imagine someone who owns the media and police and army and secret service, and can also use illegal methods to kidnap their members, torture them to extract their secrets, and kill them afterwards), without having to compromise on their values. So, in addition, the members of this elite group must have perfect mental resistance to torture and blackmail, and be numerous enough that they can easily replace their fallen brethren and continue with the original plan.

Well... there doesn't seem to be a law of physics that would literally prevent this, it just seems very unlikely.

With a less elite group, there are many things that can possibly go wrong, and evolutionary pressures in favor of things going wrong as quickly as possible.

Comment author: James_Miller 18 March 2017 07:31:27PM 6 points [-]

"Elsewhere on the internet, another fearsomely intelligent group of thinkers prepared to assault the secular religions of the establishment: the neoreactionaries, also known as #NRx."

"Neoreactionaries appeared quite by accident, growing from debates on LessWrong.com, a community blog set up by Silicon Valley machine intelligence researcher Eliezer Yudkowsky. The purpose of the blog was to explore ways to apply the latest research on cognitive science to overcome human bias, including bias in political thought and philosophy."

"LessWrong urged its community members to think like machines rather than humans. Contributors were encouraged to strip away self-censorship, concern for one’s social standing, concern for other people’s feelings, and any other inhibitors to rational thought. It’s not hard to see how a group of heretical, piety-destroying thinkers emerged from this environment — nor how their rational approach might clash with the feelings-first mentality of much contemporary journalism and even academic writing."

This article currently has 32,760 Facebook shares.

Comment author: Viliam 21 March 2017 05:05:35PM *  0 points [-]

Sigh.

It makes sense for NRs to associate themselves with rationalists. For a fringe movement, any (fiction of) support is good support, and "rationality" seems like a reasonable applause light.

It makes sense for SJWs to associate NRs with rationalists. It supports the homogeneity-of-outgroup narrative about evil white nerdy males.

No one gives a fuck about what LW says, or what actually happened.

Welcome to the future of journalism!

Later, this article will probably be used as a "reliable source" by Wikipedia. Explanations that LW didn't actually "urge its members to think like machines and strip away concern for other people's feelings" will be dismissed as "original research", and people who made such arguments will be banned. Less Wrong will be officially known as a website promoting white supremacism, Roko's Basilisk, and removing female characters from computer games. This Wikipedia article will be quoted by all journals, and your families will be horrified by what kind of a monster you have become. All LW members will be fired from their jobs.

Comment author: Lumifer 21 March 2017 04:38:58PM *  0 points [-]

And your point is...?

Is it really that difficult to discern?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead.

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone

Capital is not just money. You tax, basically, production (=creation of value) and production is not a "benefit of capital".

In any case, the underlying argument here is that no one should own AI technology. As always, this means a government monopoly and that strikes me as a rather bad idea.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve

Can we please not make appallingly stupid arguments? In which realistic scenarios do you think this will be a choice that someone faces?

Comment author: Viliam 21 March 2017 04:57:44PM 0 points [-]

Is it really that difficult to discern?

You mean this one?

So do you think that if we had real communism, with selfless and competent rulers, it would work just fine?

For the obvious reasons I don't think you can find selfless and competent human rulers to make this really work. But conditional on the possibility of creating a Friendly superintelligent AI... sure.

Although calling that "communism" is about as much of a central example as calling the paperclip maximizer scenario "capitalism".

production is not a "benefit of capital".

Capital is a factor in production, often a very important one.

no one should own AI technology. As always, this means a government monopoly

Making a superintelligent AI will make our definitions of ownership (whether private or government) obsolete. And "as always" does not seem like a good argument for Singularity scenarios.

In which realistic scenarios do you think this will be a choice that someone faces?

Depends on whether you consider the possibility of superintelligent AI to be "realistic".

Comment author: Viliam 21 March 2017 04:36:35PM 0 points [-]

An interesting way of solving the chicken-or-egg problem of a new content-publishing service.

Comment author: Bound_up 20 March 2017 11:49:25PM 0 points [-]

Suppose there are 100 genes which figure into intelligence, the odds of getting any one being 50%.

The most common result would be for someone to get 50/100 of these genes and have average intelligence.

Some smaller number would get 51 or 49, and a smaller number still would get 52 or 48.

And so on, until at the extremes of the scale, so few people get 0 or 100 of them that no one we've ever heard of, and perhaps no one who has ever been born, has had all 100 of them.

As such, incredible superhuman intelligence would be manifest in a human who just got lucky enough to have all 100 genes. If some or all of these genes could be identified and manipulated in the genetic code, we'd have unprecedented geniuses.
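A quick sketch of the arithmetic behind this, assuming (as the comment does) 100 independent genes each inherited with probability 0.5, i.e. a binomial distribution:

```python
from math import comb

N = 100      # number of hypothetical intelligence-boosting gene variants
p = 0.5      # assumed chance of inheriting each one independently

def prob_exactly(k: int) -> float:
    # Probability of inheriting exactly k of the N variants (binomial distribution)
    return comb(N, k) * p**k * (1 - p)**(N - k)

print(prob_exactly(50))   # ~0.0796  -- the modal, "average" outcome
print(prob_exactly(60))   # ~0.0108  -- already almost an order of magnitude rarer
print(prob_exactly(100))  # ~7.9e-31 -- all 100 variants: effectively never observed
```

At 50/50 odds, the all-100 combination shows up roughly once per 10^30 births, vastly more people than have ever lived.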

Comment author: Viliam 21 March 2017 04:19:33PM *  1 point [-]

Let me be the one to describe this glass as half-empty:

If there are 100 genes that participate in IQ, it means that there exists an upper limit to human IQ, i.e. when you have all 100 of them. (Ignoring the possibility of new IQ-increasing mutations for the moment.) Unlike the mathematical bell curve which -- mathematically speaking -- stretches into infinity, this upper limit of human IQ could be relatively low; like maybe IQ 200, but definitely no Anasûrimbor Kellhus.

It may turn out that to produce another Einstein or von Neumann, you need a rare combination of many factors, where having an IQ close to the upper limit is necessary but not sufficient, and the rest is e.g. nutrition, personality traits, psychological health, and choices made in life. So even if you genetically produce 1000 people with the maximum IQ, maybe only one of them becomes functionally another Einstein. (But even then, 1 in 1000 is much better than 1 per generation globally.)

(Actually, this is my personal hypothesis about IQ, which -- if true -- would explain why different populations have more or less the same average IQ. Basically, let's assume that having all those IQ genes gives you IQ 200, and that all lower IQ is a result of mutational load, so IQ 100 simply means a person with an average mutational load. Then even if you populated a new island with Mensa members, in a few generations some of their descendants would receive bad genes not just by inheritance but also by random non-fatal mutations, gradually lowering the average IQ to 100. On the other hand, if you populated a new island with retards, as long as all the IQ genes are present in at least some of them, in a few generations natural selection would spread those genes through the population, gradually increasing the average IQ to 100.)
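A toy simulation of the downward half of this hypothesis, under made-up assumptions: 100 binary IQ genes, a small island population, random knock-out mutations, and mild selection for higher IQ. The population size, mutation rate, and selection scheme are illustrative only, not empirical, and the sketch does not model the upward case of selection spreading rare intact genes.

```python
import random

N_GENES = 100          # hypothetical IQ genes; all intact corresponds to "IQ 200" in the comment's model
POP = 200              # island population size (arbitrary)
GENERATIONS = 100
MUTATION_RATE = 0.005  # chance per gene per generation of a damaging mutation (made-up number)

def iq(genome):
    # Map the fraction of intact genes onto the comment's 0-200 IQ scale
    return 200 * sum(genome) / N_GENES

def next_generation(pop):
    # Mild selection: higher-IQ individuals are proportionally more likely to reproduce
    weights = [iq(g) + 1 for g in pop]
    children = []
    for _ in range(POP):
        mother, father = random.choices(pop, weights=weights, k=2)
        child = [random.choice(pair) for pair in zip(mother, father)]
        # Random non-fatal mutations knock out intact genes; there is no back-mutation here
        child = [0 if gene == 1 and random.random() < MUTATION_RATE else gene
                 for gene in child]
        children.append(child)
    return children

population = [[1] * N_GENES for _ in range(POP)]   # start as an "island of Mensa members"
for _ in range(GENERATIONS):
    population = next_generation(population)

print(sum(iq(g) for g in population) / POP)        # mean IQ has drifted down from the starting 200
```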

Comment author: username2 20 March 2017 04:17:54PM 1 point [-]

Open sourcing all significant advancements in AI and releasing all code under GNU GPL.

Comment author: Viliam 21 March 2017 04:05:52PM 1 point [-]

Tiling the whole universe with small copies of GNU GPL, because each nanobot is legally required to contain the full copy. :D

Comment author: Lumifer 20 March 2017 06:24:55PM 2 points [-]

Ensure that the benefits of AI accrue to everyone generally, rather than exclusively to the teeny-tiny fraction of humanity who happen to own their own AI business.

s/AI/capital/

Now, where have I heard this before..?

Comment author: Viliam 21 March 2017 04:01:58PM 1 point [-]

And your point is...?

From my point of view, the main problem with "making the benefits of capital accrue to everyone generally" is that... well, people who use these words as an applause light typically do something else instead. First, they take most of the benefits of capital to themselves (think: all those communist leaders with golden watches and huge dachas). Second, as a side-effect of incompetent management (where signalling political loyalty trumps technical competence), even the capital that isn't stolen is used very inefficiently.

But on a smaller scale... companies paying taxes, and those taxes being used to build roads or pay for universal healthcare... is an example of providing the benefits of capital to everyone. Just not all the capital; and besides the more-or-less neutral taxation, the use of the capital is not micromanaged by people chosen for their political loyalty. So the costs to the economy are much smaller, and arguably the social benefits are larger (some libertarians may disagree).

Assuming that the hypothetical artificial superintelligence will be (1) smarter than humans, and (2) able to scale, e.g. to increase its cognitive powers a thousandfold by creating 1000 copies of itself which will not immediately start feeding Moloch by fighting each other, it should be able to avoid fucking up the whole economy, and could quite likely increase production, even without increasing the costs to the environment, simply by doing things smarter and removing inefficiencies. Unlike the communist bureaucrats, who (1) were not superintelligent, and sometimes not even of average intelligence, (2) each optimized for their own personal goals, and (3) routinely lied to each other and to their superiors to avoid irrational punishments, so that soon the whole system ran on completely fake data. Not being bound by ideology, if the AI found that it is better to leave some work to humans (quite unlikely IMHO, but let's assume so for the sake of the argument), it would be free to do exactly that. Unlike a hypothetical enlightened communist bureaucrat, who after making the same observation would probably be shot as a traitor and replaced by a less enlightened one.

If the choice is between giving each human a 1/7000000000 of the universe, or giving the whole universe to Elon Musk (or some other person) and letting everyone else starve (because I don't think anyone would be able to get any job in a world where a scalable superintelligence is their direct competitor), the former option seems better to me, and I think even Elon Musk wouldn't mind... especially considering that going for the former option would make people much more willing to cooperate with him.

Comment author: tristanm 20 March 2017 10:54:00PM 4 points [-]

Should we expect more anti-rationalism in the future? I believe that we should, but let me outline what actual observations I think we will make.

Firstly, what do I mean by 'anti-rationality'? I don't mean that in particular people will criticize LessWrong. I mean it in the general sense of skepticism towards science / logical reasoning, skepticism towards technology, and a hostility to rationalistic methods applied to things like policy, politics, economics, education, and things like that.

And there are a few things I think we will observe first (some of which we are already observing) that will act as a catalyst for this. Number one, if economic inequality increases, I think a lot of the blame for this will be placed on the elite (as it always is), but in particular the cognitive elite (which makes up an ever-increasing share of the elite). Whatever the views of the cognitive elite are will become the philosophy of evil from the perspective of the masses. Because the elite are increasingly made up of very high intelligence people, many of whom with a connection to technology or Silicon Valley, we should expect that the dominant worldview of that environment will increasingly contrast with the worldview of those who haven't benefited or at least do not perceive themselves to benefit from the increasing growth and wealth driven by those people. What's worse, it seems that even if economic gains benefit those at the very bottom too, if inequality still increases, that is the only thing that will get noticed.

The second issue is that as technology improves, our powers of inference increase, and privacy defenses become weaker. It's already the case that we can predict a person's behavior to some degree and use that knowledge to our advantage (if you're trying to sell something to them, give them / deny them a loan, judge whether they would be a good employee, or predict whether or not they will commit a crime). There's already a push-back against this, in the sense that certain variables correlate with things we don't want them to, like race. This implies that the standard definition of privacy, in the sense of simply not having access to specific variables, isn't strong enough. What's desired is not being able to infer the values of certain variables, either, which is a much, much stronger condition. This is a deep, non-trivial problem that is unlikely to be solved quickly - and it runs into the same issues as all problems concerning discrimination do, which is how to define 'bias'. Is reducing bias at the expense of truth even a worthy goal? This shifts the debate towards programmers, statisticians and data scientists who are left with the burden of never making a mistake in this area. "Weapons of Math Destruction" is a good example of the way this issue gets treated.
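A minimal sketch of why "not being able to infer" is a much stronger condition than "not having access": every variable and number below is made up for illustration. Even after the sensitive column is dropped, a single correlated proxy recovers it fairly well.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: a sensitive attribute that we "remove" from the dataset,
# plus an innocuous-looking proxy that happens to correlate with it
sensitive = rng.integers(0, 2, n)              # e.g. group membership; this column is dropped
proxy = sensitive + rng.normal(0.0, 0.7, n)    # e.g. zip code or purchasing pattern; this column stays

# Even without access to the sensitive column, a trivial threshold on the proxy recovers it
inferred = (proxy > 0.5).astype(int)
print((inferred == sensitive).mean())          # ~0.76 accuracy from one proxy variable alone
```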

We will also continue to observe a lot of ideas from postmodernism being adopted as part of the political ideology of the left. Postmodernism is basically the antithesis of rationalism, and is particularly worrying because it is a very adaptable and robust meme. And an ideology that essentially claims that rationality and truth are not even possible to define, let alone discover, is particularly dangerous if it is adopted as the mainstream mode of thought. So if a lot of the above problems get worse, I think there is a chance that rationalism will get blamed, as it has been within the framework of postmodernism.

The summary of this is: As politics becomes warfare between worldviews rather than arguments for and against various beliefs, populist hostility gets directed towards what is perceived to be the worldview of the elite. The elite tend to be more rationalist, and so that hostility may get directed towards rationalism itself.

I think a lot more can be said about this, but maybe that's best left to a full post, I'm not sure. Let me know if this was too long / short or poorly worded.

Comment author: Viliam 21 March 2017 01:31:28PM 0 points [-]

I have a feeling that perhaps in some sense politics is self-balancing. You attack things that are associated with your enemy, which means that your enemy will defend them. Assuming you are an entity that only cares about scoring political points, if your enemy uses rationality as an applause light, you will attack rationality, but if your enemy uses postmodernism as an applause light, you will attack postmodernism and perhaps defend (your interpretation of) rationality.

That means that the real risk for rationality is not that everyone will attack it. As soon as the main political players all turn against rationality, fighting it will become less important for them, because attacking the things the others consider sacred will be more effective. You will soon get rationality apologists saying "rationality per se is not bad, it's only rationality as practiced by our political opponents that leads to horrible things".

But if some group of idiots chooses "rationality" as their applause light and does it completely wrong, and everyone else therefore turns against rationality, that would cause much more damage. (Similarly to how Stalin is often used as an example against "atheism". Now imagine a not-so-implausible parallel universe where Stalin used "rationality" -- interpreted as 1984-style obedience to the Communist Party -- as the official applause light of his regime. In such a world, non-communists hate the word "rationality" because it is associated with communism, and communists insist that the only true meaning of rationality is blind obedience to the Party. Imagine trying to teach people x-rationality in that universe.)

Comment author: username2 18 March 2017 04:44:04PM *  3 points [-]

As the other anonymous said, this doesn't follow at all. A group living situation creates a larger field of "trusted adults" per child. Unless all the adults are mindful of these risks, a situation arises where any adult may at any time be put in charge of watching any child or children. This is frankly the textbook definition of what not to do.

If the adults are mindful of the risk, then they can be open about it, and ensure that two or more adults are always tasked with watching children, so that the adults can watch each other. And even this may eventually cease to be necessary.

Also, I find that your definition of paranoid must be different from mine if you look at those statistics and think "nothing risky going on here". I have to assume you have no personal experience with this issue. I can't help but feel like people in this thread are conflating a feeling of "I don't want this to be true and I don't want to have to think about it" with "this is obviously overly paranoid".

Comment author: Viliam 20 March 2017 10:41:50AM 2 points [-]

ensure that two or more adults are always tasked with watching children, so that the adults can watch each other.

This may feel exaggerated, because many people not living in communities are not following this rule consistently either. People often leave their children alone with grandparents or babysitters. Sure, there is a risk involved, but... life sometimes gives you constraints.
