
Comment author: cousin_it 16 May 2017 04:32:25PM *  1 point [-]

There's a free market idea that the market rewards those who provide value to society. I think I've found a simple counterexample.

Imagine a loaf of bread is worth 1 dollar to consumers. If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 99 dollars for yourself. If you make 100 loaves and give them away to those who can't afford it, you've provided 100 dollars of value to society, but made zero for yourself. Since the relationship is inverted, we see that the market doesn't reward those who provide value. Instead it rewards those who provide value to those who provide value! It's recursive, like PageRank!

That's the main reason why we have so much inequality. Recursive systems will have attractors that concentrate stuff. That's also why you can't blame people for having no jobs. They are willing to provide value, but they can't survive by providing to non-providers, and only the best can provide to providers.
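
A toy numeric sketch of that recursion, in the style of PageRank's power iteration (the people and the "who provides value to whom" matrix below are invented purely for illustration):

    import numpy as np

    # provides[i][j] = value that person j provides to person i (made-up numbers)
    provides = np.array([
        [0.0, 1.0, 0.0, 0.0],  # person 0 is served by person 1
        [1.0, 0.0, 0.0, 0.0],  # person 1 is served by person 0
        [0.0, 0.0, 0.0, 0.0],  # person 2 is served by nobody
        [0.0, 0.0, 1.0, 0.0],  # person 3 is served by person 2, but provides nothing
    ])

    reward = np.ones(4) / 4
    for _ in range(20):
        reward = provides.T @ reward  # reward flows to those who serve the already-rewarded
        reward /= reward.sum()        # track each person's share of the total

    print(reward.round(2))  # shares converge to 0.5, 0.5, 0, 0

Persons 0 and 1 keep rewarding each other, while person 2 ends up with nothing despite providing just as much value, because the only person they serve provides nothing in return.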

Comment author: Val 19 May 2017 09:29:10AM 1 point [-]

If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 100 dollars for yourself.

Not 99 dollars?

Reaching out to people with the problems of friendly AI

4 Val 16 May 2017 07:30PM

There have been a few attempts to reach out to broader audiences in the past, but mostly on very politically/ideologically loaded topics.

After seeing several examples of how little understanding people have of the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some hidden ranch, but about people who should know about these things: researchers, software developers meddling with AI research, and so on.

What made me write this post was a highly upvoted answer on stackexchange.com, which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.

I've also seen a TEDx talk about AIs. The speaker had never even heard of the paperclip maximizer. The talk was about the dangers presented by AIs as depicted in movies like the Terminator, where an AI "rebels"; the reassurance offered was that AIs cannot feel emotion and therefore would not rebel, so the events depicted in such movies will hopefully not happen, and all we have to do is be ethical ourselves and not deliberately write malicious AI, and then everything will be OK.

The sheer and mind-boggling stupidity of this makes me want to scream.

We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should increase awareness of old fairy tales about a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.

There are many people graduating every year who've never heard about these problems. Or if they have, they dismiss them as a non-issue, a contradictory thought experiment which can be waved away without a second thought:

A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.

We don't want our future AI researchers to start working with such a mentality.

 

What can we do to raise awareness? We don't have the funding to make a movie which becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much, if anything.



Comment author: Val 28 April 2017 02:45:10PM 0 points [-]

Anyone who is reading this should take this survey, even if you don't identify as an "effective altruist".

Why? The questions are centered too heavily not only on effective altruists, but also on left- or far-left-leaning ideologies. I stopped filling it out when it assumed that only movements from that single part of the political spectrum count as social movements.

Comment author: Val 09 March 2017 07:59:48PM *  2 points [-]

Even with a limited AGI with very specific goals (build 1000 cars), the problem is not automatically solved.

The AI might deduce that if humans still exist, there is a nonzero probability that a human will prevent it from finishing the task, so to be completely safe, all humans must be killed.

Comment author: username2 15 February 2017 12:01:43AM *  0 points [-]

Because there are plenty of all-seeing eye superpowers in this world. Not everyone is convinced that the very real, very powerful security regimes around the world would be suddenly left inept when the opponent is a computer instead of a human being.

My comment didn't contribute any less than yours to the discussion, which is rather the point. The validity of an allegory depends on the accuracy of the setup and rules, not the outcome. You seemed happy to engage until it was pointed out that the outcome was not what you expected.

Comment author: Val 15 February 2017 06:21:48PM 1 point [-]

Those "very real, very powerful security regimes around the world" are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.

And if you underestimate how much of a threat a mere "computer" could be, read the "Friendship is Optimal" stories.

Comment author: Val 08 January 2017 11:44:42PM *  4 points [-]

This is a well-presented article, and even though most (or maybe all) of the information is easily available elsewhere, it is a well-written summary. It also includes aspects which are not talked about much, or which are often misunderstood, especially the following one:

Debating the beliefs is a red herring. There could be two groups worshiping the same sacred scripture, and yet one of them would exhibit the dramatic changes in its members, while the other would be just another mainstream faith with boring compartmentalizing believers; so the difference is clearly not the scripture itself.

Indeed, the beliefs are not even close to being among the most important aspects of a cult. A cult is not merely a group which believes in something you personally find ridiculous. A cult can even have a stated core belief which is objectively true, or which is a universally accepted good thing, like protecting the environment or world peace.

Comment author: Fluttershy 05 January 2017 12:20:00AM 5 points [-]

It helps that you shared the dialogue. I predict that Jane doesn't System-2-believe that Trump is trying to legalize rape; she's just offering the other conversation participants a chance to connect over how much they don't like Trump. This may sound dishonest to rationalists, but normal people don't frown upon this behavior as often, so I can't tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do in this situation, if she's looking to strengthen bonds with others.

Jane's System 1 is a good Bayesian, and knows that Trump supporters are more likely to rebuff her, and that Trump supporters aren't social allies. She's testing the waters, albeit clumsily, to see who her social allies are.

Jane could have put more effort into her thoughts, and chosen a factually correct insult to throw at Trump. You could have said that even if he doesn't try to legalize rape, he'll do some other specific thing that you don't approve of (and you'd have gotten bonus points for proactively thinking of a bad thing to say about him). Either of these changes would have had a roughly similar effect on the levels of nonviolence and agreeability of the conversation.

This generalizes to most conversations about social support. When looking for support, many people switch effortlessly between making low-effort claims they don't believe, and making claims that they System-2-endorse. Agreeing with their sensible claims, and offering supportive alternatives to their preposterous claims, can mark you as a social ally while letting you gently, nonviolently nudge them away from making preposterous claims.

Comment author: Val 05 January 2017 09:56:12PM *  1 point [-]

This comment was very insightful, and made me think that the young-earth creationist I talked about had a similar motivation. Despite this outrageous argument, she is a (relatively speaking) smart and educated person; not academic-level, but not grown-up-on-the-streets level either.

Comment author: Val 03 January 2017 09:36:27PM *  2 points [-]

I always thought the talking snakes argument was very weak, and being confronted with a very weird argument from a young-earth creationist provided a great illustration of why:

If you believe in evolution, why don't you grow wings and fly away?

The point here is not about the appeal to ridicule (although it contains a hefty dose of that too). It's about a gross misrepresentation of a viewpoint. Compare the following flows of reasoning:

  • Christianity means that snakes can talk.
  • We can experimentally verify that snakes cannot talk.
  • Therefore, Christianity is false.

and

  • Evolution means people can spontaneously grow wings.
  • We can experimentally verify that people cannot spontaneously grow wings.
  • Therefore, evolution is false.

The big danger in this reasoning is that one can convince oneself of having used the experimental method, of being a rationalist (because hey, we can scientifically verify the claim!), without realizing that the verified claim is very different from the claims the discussed viewpoint actually holds.

I've even seen many self-proclaimed "rationalists" fall into this trap. Just as many religious people are reinforced by a "pat on the back" from their peers when they say something their community likes, people can feel motivated to claim they are rationalists if that earns them a pat on the back from the people they interact with the most.

Comment author: Viliam 02 January 2017 03:45:23PM *  9 points [-]

A consequence of availability bias: the less you understand what other people do, the easier "in principle" it seems.

By "in principle" I mean that you wouldn't openly call it easy, because the work obviously requires specialized knowledge you don't have, and cannot quickly acquire. But it seems like for people who already have the specialized knowledge, it should be relatively straightforward.

"It's all just a big black box for me, but come on, it's only one black box, don't act like it's hundreds of boxes."

as opposed to:

"It's a transparent box with hundreds of tiny gadgets. Of course it takes a lot of time to get it right!"

Comment author: Val 03 January 2017 08:58:11PM 2 points [-]

Isn't this very closely related to the Dunning-Kruger effect?

Comment author: Val 29 November 2016 03:37:03PM *  2 points [-]

I'm not surprised Dawkins makes a cameo in it. The theist in the discussion is a very blunt strawman, much as Dawkins usually likes to invite the dumbest theists he can find, who say the stupidest things about evolution or global warming, thereby allegedly proving all theists wrong.

I'm sorry if I have offended anyone, as I know many readers here are fans of Dawkins. However, I have to state that although I have no doubts about the value of his scientific work and his competence in his field, he does make a clown of himself with all those strawman attacks against theism.
