
In response to comment by Val on Mini map of s-risks
Comment author: turchin 08 July 2017 09:23:39PM *  1 point [-]

Indexical blackmail was discussed somewhere on LessWrong; the idea is that an AI in the box creates many copies of me and informs me about it. Because of this I can't be sure that I am not one of those copies, and thus I will release it from the box (or face torture with probability 999 in 1000: if the AI runs 999 copies, only 1 of the 1000 instances is the original). I can't find the link.

The idea is based on the concept of "indexical uncertainty", which is googlable, for example, here: https://books.google.ru/books?id=1mMJBAAAQBAJ&pg=PT138&lpg=PT138&dq=indexical+uncertainty&source=bl&ots=Pmy8RXDflh&sig=7RT7DQIKidIN-Q6Po6seizyNGYw&hl=en&sa=X&ved=0ahUKEwi285nQzfrUAhXjZpoKHQJ2CQM4ChDoAQgjMAA#v=onepage&q=indexical%20uncertainty&f=false

Hacker's joke is a hypothetical situation in which the first and last AI creator is just a random 15-year-old boy who wants to play with the AI by giving it stupid goals. Nothing to google here.

Comment author: Val 09 July 2017 09:15:38AM 0 points [-]

I know the first one has been mentioned on this site; I've read about it plenty of times, but it was not named as such. Therefore, if you use a rare term (especially one you made up yourself), it's advisable to also explain what it means.

In response to Mini map of s-risks
Comment author: Val 08 July 2017 08:46:37PM 1 point [-]

Could you please put some links to "Hacker's joke" and "Indexical blackmail"? Both use words common enough to not yield obvious results for a google search.

In response to Any Christians Here?
Comment author: Val 22 June 2017 04:54:37PM *  3 points [-]

Another Christian here, raised as a Calvinist, but consider myself more of a non-denominational, ecumenical one, with some very slight deist tendencies.

I don't want to sound rude, but I don't know how to formulate it in a better way: if you think you have to choose between Christianity and science, you have very incomplete information about what Christianity is about, and also incomplete knowledge about the history of science itself. I wonder how many who call themselves Bayesians know that Bayes was a very devout Christian, as were many other founders of modern science who were also philosophers and theologians.

This "Christianity is the enemy of rational thought" idea seems to be relatively recent, and is probably caused or at least magnified by the handful young earth creationists being very loud.

Why there are so few committed Christians here on this site can be attributed, among other factors, to how this community started. Reading the earliest posts, it seems that almost every single one of them was a rant against Christianity. No wonder this community mostly attracted atheists, at least in the beginning.

Christianity doesn't mean, and shouldn't mean, attempt after attempt to find a mathematical proof of God's existence, and a vicious fight against those who claim to have found mathematical proofs of God's non-existence.

I want to converse and debate with rationalists who despite their Bayesian enlightenment choose to remain in the flock.

I would love to speak with them, to know exactly why they still believe, and how.

I'll try an example to give back at least some part of the feeling. Let's say you enjoy listening to the songs of birds at dawn. (If you actually don't, then imagine something else you enjoy which is not based on rationality, like the smell of fresh flowers, your favorite musical instrument, or looking at a great painting.)

Would you stop enjoying listening to the singing birds, would you stop finding it beautiful, if someone explained to you that, scientifically, the songs are just waves formed by ordinary molecules bumping into each other, just mechanical vibrations, and that you shouldn't find anything more in them? Or would you stop enjoying it if someone pointed out to you that there were some horrible criminals hundreds of years ago on the other side of the planet who also claimed to enjoy listening to the songs of birds? Would you stop enjoying it if someone pointed out to you that there is no rational explanation for why you would find this vibration of the air more beautiful than any other vibration of the air? And, more importantly, would you suddenly find the singing of birds horrible and disgusting, just because you developed a greater understanding of a scientific topic? (I'm not claiming Christianity is merely a form of thought to find pleasure or refuge in; this was only an example of how something which is not based on rationality can be compatible with rationality.)

Comment author: cousin_it 16 May 2017 04:32:25PM *  2 points [-]

There's a free market idea that the market rewards those who provide value to society. I think I've found a simple counterexample.

Imagine a loaf of bread is worth 1 dollar to consumers. If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 99 dollars for yourself. If you make 100 loaves and give them away to those who can't afford them, you've provided 100 dollars of value to society, but made zero for yourself. Since the relationship is inverted, we see that the market doesn't reward those who provide value. Instead it rewards those who provide value to those who provide value! It's recursive, like PageRank!
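
In Python, the arithmetic looks like this (a minimal sketch using the comment's hypothetical numbers; reading "value to society" as consumer surplus is an assumption of the sketch, not something stated in the comment):

    # Consumer surplus vs. seller revenue for the hypothetical bread example.
    # Work in cents so the arithmetic stays exact.
    LOAVES = 100
    WORTH_CENTS = 100  # each loaf is worth 1 dollar to a consumer

    def outcome(price_cents):
        revenue = LOAVES * price_cents                  # what the baker makes
        surplus = LOAVES * (WORTH_CENTS - price_cents)  # value kept by buyers
        return revenue / 100, surplus / 100             # back to dollars

    print(outcome(99))  # (99.0, 1.0): 99 dollars for the baker, 1 for buyers
    print(outcome(0))   # (0.0, 100.0): nothing for the baker, 100 for buyers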

That's the main reason why we have so much inequality. Recursive systems will have attractors that concentrate stuff. That's also why you can't blame people for having no jobs. They are willing to provide value, but they can't survive by providing to non-providers, and only the best can provide to providers.
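
The PageRank analogy can be made concrete with a toy power iteration; everything below (the actors, who provides to whom) is invented for illustration and is not from the comment. Reward flows only from people who themselves have reward, so providing solely to non-providers earns nothing:

    # Toy "recursive reward", loosely PageRank-flavored: each person's new
    # reward is the summed, renormalized reward of those they provide value
    # to. All actors and edges below are made up.
    provides_to = {
        "baker":     ["farmer"],  # sells bread to a provider
        "farmer":    ["baker"],   # sells grain to a provider
        "volunteer": [],          # gives bread only to non-providers
    }

    reward = {name: 1.0 for name in provides_to}
    for _ in range(50):
        new = {name: sum(reward[m] for m in provides_to[name])
               for name in provides_to}
        total = sum(new.values()) or 1.0  # guard against an all-zero round
        reward = {name: value / total for name, value in new.items()}

    print(reward)  # {'baker': 0.5, 'farmer': 0.5, 'volunteer': 0.0}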

Comment author: Val 19 May 2017 09:29:10AM 1 point [-]

If you make 100 loaves and sell them for 99 cents each, you've provided 1 dollar of value to society, but made 100 dollars for yourself.

Not 99 dollars?

Reaching out to people with the problems of friendly AI

4 Val 16 May 2017 07:30PM

There have been a few attempts to reach out to broader audiences in the past, but mostly on very politically or ideologically loaded topics.

After seeing several examples of how little understanding people have of the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some hidden ranch, but about people who should know about these things: researchers, software developers dabbling in AI research, and so on.

What made me write this post was a highly voted answer on stackexchange.com, which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.

I've also seen a TEDx talk about AI. The speaker had never even heard of the paperclip maximizer; the talk was about the dangers presented by AIs as depicted in the movies, like the Terminator, where an AI "rebels". The message was that we can hope AIs will not rebel, as they cannot feel emotion, so the events depicted in such movies should not happen, and all we have to do is be ethical ourselves and not deliberately write malicious AI, and then everything will be OK.

The sheer and mind-boggling stupidity of this makes me want to scream.

We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should increase awareness of old fairy tales with a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.

There are many people graduating every year who have never heard about these problems. Or if they have, they dismiss them as a non-issue, a contradictory thought experiment which can be rejected without a second thought:

A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.

We don't want our future AI researchers to start working with such a mentality.

 

What can we do to raise awareness? We don't have the funding to make a movie which becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much, if anything.



Comment author: Val 28 April 2017 02:45:10PM 1 point [-]

Anyone who is reading this should take this survey, even if you don't identify as an "effective altruist".

Why? The questions are centered not only on effective altruists, but also on left- or far-left-leaning ideologies. I stopped filling it out when it assumed that only movements from that single part of the political spectrum count as social movements.

Comment author: Val 09 March 2017 07:59:48PM *  2 points [-]

Even with a limited AGI with very specific goals (build 1000 cars), the problem is not automatically solved.

The AI might deduce that as long as humans exist, there is a nonzero probability that a human will prevent it from finishing the task; so, to be completely safe, all humans must be killed.

Comment author: username2 15 February 2017 12:01:43AM *  0 points [-]

Because there are plenty of all-seeing eye superpowers in this world. Not everyone is convinced that the very real, very powerful security regimes around the world would be suddenly left inept when the opponent is a computer instead of a human being.

My comment didn't contribute any less than yours to the discussion, which is rather the point. The validity of an allegory depends on the accuracy of the setup and rules, not the outcome. You seemed happy to engage until it was pointed out that the outcome was not what you expected.

Comment author: Val 15 February 2017 06:21:48PM 1 point [-]

Those "very real, very powerful security regimes around the world" are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout generated by it.

And if you underestimate how much of a threat a mere "computer" could be, read the "Friendship is Optimal" stories.

Comment author: Val 08 January 2017 11:44:42PM *  4 points [-]

This is a well-presented article, and even though most (or maybe all) of the information is easily available elsewhere, this is a well-written summary. It also includes aspects which are not talked about much, or which are often misunderstood. Especially the following one:

Debating the beliefs is a red herring. There could be two groups worshiping the same sacred scripture, and yet one of them would exhibit the dramatic changes in its members, while the other would be just another mainstream faith with boring compartmentalizing believers; so the difference is clearly not the scripture itself.

Indeed, the beliefs are not even close to being among the most important aspects of a cult. A cult is not merely a group which believes in something you personally find ridiculous. A cult can even have a stated core belief which is objectively true, or is a universally accepted good thing, like protecting the environment or world peace.

Comment author: Fluttershy 05 January 2017 12:20:00AM 5 points [-]

It helps that you shared the dialogue. I predict that Jane doesn't System-2-believe that Trump is trying to legalize rape; she's just offering the other conversation participants a chance to connect over how much they don't like Trump. This may sound dishonest to rationalists, but normal people don't frown upon this behavior as often, so I can't tell if it would be epistemically rational of Jane to expect to be rebuffed in the social environment you were in. Still, making claims like this about Trump may be an instrumentally rational thing for Jane to do in this situation, if she's looking to strengthen bonds with others.

Jane's System 1 is a good Bayesian, and knows that Trump supporters are more likely to rebuff her, and that Trump supporters aren't social allies. She's testing the waters, albeit clumsily, to see who her social allies are.
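
Written out explicitly, the update being attributed to Jane's System 1 is one application of Bayes' rule; the prior and likelihoods below are invented purely for illustration:

    # Being rebuffed as evidence that the listener is a Trump supporter.
    # All numbers are made up for the example.
    p_supporter = 0.4              # prior that a listener supports Trump
    p_rebuff_given_supporter = 0.7
    p_rebuff_given_other = 0.1

    p_rebuff = (p_rebuff_given_supporter * p_supporter
                + p_rebuff_given_other * (1 - p_supporter))
    posterior = p_rebuff_given_supporter * p_supporter / p_rebuff
    print(round(posterior, 2))  # 0.82: a rebuff sharply raises the estimate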

Jane could have put more effort into her thoughts, and chosen a factually correct insult to throw at Trump. You could have said that even if he doesn't try to legalize rape, he'll do some other specific thing that you don't approve of (and you'd have gotten bonus points for proactively thinking of a bad thing to say about him). Either of these changes would have had a roughly similar effect on the levels of nonviolence and agreeability of the conversation.

This generalizes to most conversations about social support. When looking for support, many people switch effortlessly between making low effort claims they don't believe, and making claims that they System-2-endorse. Agreeing with their sensible claims, and offering supportive alternative claims to their preposterous claims, can mark you as a social ally while letting you gently, nonviolently nudge them away from making preposterous claims.

Comment author: Val 05 January 2017 09:56:12PM *  1 point [-]

This comment was very insightful, and made me think that the young-earth creationist I talked about had a similar motivation. Despite this outrageous argument, she is a (relatively speaking) smart and educated person. Not academic level, but not grown-up-on-the-streets level either.
