
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: Houshalter 20 October 2016 08:48:36PM 1 point [-]

Most AI researchers have not done any research into the topic of AI risk, so their opinions are of limited relevance. That's like pointing to the opinions of sailors on global warming: global warming is about oceans, so sailors should be experts on that kind of thing.

I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche thing that no one had ever heard of. Now it's gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers that take AI risk seriously.

Personally my favorite name on there is Schmidhuber, who is very well known and, I think, has been ahead of his time in many areas of AI, with a particular focus on general intelligence and on more general methods like reinforcement learning and recurrent nets, rather than standard machine learning. His opinions on AI risk are nuanced, though: I think he expects AIs to leave Earth and go into space, but he does accept most of the premises of AI risk.

Bostrom did a survey back in 2014 that found AI researchers assign at least a 30% probability to AI being "bad" or "extremely bad" for humanity. I imagine that opinion has shifted since then, as AI risk has become better known, and it will only increase with time.

Lastly, this is not an outlier or 'extremist' view on this website. It is the majority opinion here and has been discussed to death in the past, and I think it's as settled as can be expected. If you have any new points to make or share, please feel free. Otherwise you aren't adding anything at all; there is literally no argument in your comment, just an appeal to authority.

Comment author: philosophytorres 20 October 2016 08:06:02PM 1 point [-]

Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)

Comment author: turchin 20 October 2016 08:39:31PM *  0 points [-]

I would also add Doomsday blackmailers: rational agents who would create a Doomsday Machine to blackmail the world, with the goal of world domination.

Another option worth considering is arrogant scientists who benefit personally from dangerous experiments. For example, CERN proceeded with the LHC before its safety was proven. A group of bioscientists excavated the 1918 pandemic flu, sequenced it, and posted the sequence on the internet; another scientist deliberately created a new superflu while studying genetic variations that could make bird flu stronger. We could imagine a scientist who would try to increase his personal longevity by gene therapy even if it posed a 1 percent pandemic risk. And if there are many of them...

Also there is a possible class of agents who try to create a smaller catastrophe in order to prevent a larger one. The recent movie "Inferno" is about this: a character creates a virus to kill half of humanity in order to save all of humanity later.

I listed all my ideas in my agent map, which is here on Less Wrong: http://lesswrong.com/r/discussion/lw/o0m/the_map_of_agents_which_may_create_xrisks/

Comment author: faul_sname 20 October 2016 08:36:10PM 0 points [-]

How many seconds have you been in the room?

Let's say the time between t1 and t2 is 1 trillion seconds. Let us further assume that everyone passes through a given room in the same amount of time (so people spend 1 second each in room A, and 1 million seconds each in room B).

100 trillion of the 100.1 trillion observer moments between 0 and 1 seconds in a room occur in room A, and all of the observer moments past 1 second occur in room B. (This is somewhat flawed in that the observers may not all spend the same amount of time in a given room; but even in the case where 100 million people stay in room A for 1 million seconds each and the rest spend zero time there, an observer who's been in a room for 1 million seconds is still overwhelmingly likely to be in room B.) So basically, the longer you've been in the room, the more probable you should consider it that you're in room B.
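The counting can be sketched with concrete numbers. These figures are illustrative assumptions, since the thread doesn't fully pin them down: 100 trillion people pass through room A at 1 second each, and 0.1 trillion pass through room B at 1 million seconds each (matching the 100.1 trillion first-second observer moments above).

```python
# Hypothetical headcounts and durations for the two-room thought experiment.
N_A, T_A = 100 * 10**12, 1       # room A: 100 trillion people, 1 second each
N_B, T_B = 100 * 10**9, 10**6    # room B: 0.1 trillion people, 1 million seconds each

def p_room_b(elapsed):
    """P(you are in room B | you've been in a room for `elapsed` seconds),
    counting observer-moments uniformly across all occupants."""
    moments_a = N_A if elapsed <= T_A else 0  # room-A stays still ongoing at `elapsed`
    moments_b = N_B if elapsed <= T_B else 0
    total = moments_a + moments_b
    return moments_b / total if total else None

print(p_room_b(0.5))  # ~0.001: early on, almost certainly room A
print(p_room_b(100))  # 1.0: past 1 second, certainly room B
```

Under these assumed numbers, an observer who has been in a room for more than 1 second can be certain they're in room B, which is the point of the argument above.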

If an observer doesn't know how long they've been in a given room, I'm not sure how meaningful it is to call them "an" observer.

Comment author: Houshalter 20 October 2016 08:31:48PM 0 points [-]

I agree. But it's worthwhile to try to get AI researchers on our side, and get them researching things relevant to FAI. Perhaps lesswrong could have some influence on this group. If nothing else it's interesting to keep an eye on how AI is progressing.

Comment author: Houshalter 20 October 2016 08:29:43PM 0 points [-]

See my other comment for more clarification on how CEV would eliminate negative values.

Comment author: Houshalter 20 October 2016 08:27:59PM 0 points [-]

how will you create or choose a process which will build a FAI?

You are literally asking me to solve the FAI problem right here and now. I understand that FAI is a very hard problem, and I don't expect to solve it instantly. But just because a problem is hard doesn't mean it can't have a solution.

First of all, let me adopt some terminology from Superintelligence. I think FAI requires solving two somewhat different problems: Value Learning and Value Loading.

You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want. I think that's the easy problem, and any intelligent AI will form a model of humans and understand what we want. Getting it to care about what we want seems like the hard problem to me.

But I do see some promising ideas for approaching the problem. For instance, have AIs that predict what choices a human would make in each situation; you basically get an AI which is just a human, but sped up a lot. Or have an AI which presents arguments for and against each choice, so that humans can make more informed choices. It could then predict what choice a human would make after hearing all the arguments, and do that.

More complicated ideas were mentioned in Superintelligence. I like the idea of "motivational scaffolding": somehow train an AI that can learn how the world works and can generate an "interpretable model", e.g. one able to understand English sentences and translate their meanings into representations the AI can use. Then you can explicitly program a utility function into the AI using its learned model.

That doesn't make much sense. What do you mean by "negative" and from which point of view?

From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.

Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of "improve".

Your stated example was ISIS. ISIS is so bad because they incorrectly believe that God is on their side and wants them to do the things they do, and that the people who die will go to heaven, so loss of life isn't so bad. If they were more intelligent, informed, and rational, and knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.

The second thing CEV does is average everyone's values together. So even if ISIS really does value killing people, their victims value not being killed even more. A CEV of all of humanity would therefore still value life, even if evil people's values are included. Even if everyone were a sociopath, their CEV would still be the best compromise possible between everyone's values.

Comment author: NancyLebovitz 20 October 2016 08:18:45PM 0 points [-]

I'm improving the subject line. I agree that I need to do better with the summary.

Comment author: philosophytorres 20 October 2016 08:08:23PM 0 points [-]

What do you mean? How is mitigating climate change related to blackmail?

Comment author: ingive 20 October 2016 08:08:04PM *  0 points [-]

LOGIC NATION: A Psychological Revolution https://www.youtube.com/watch?v=drcseH-7hpw

What does LW think, either after trying it or after deciding not to try?

Comment author: philosophytorres 20 October 2016 08:07:56PM 0 points [-]

I actually think most historical groups wanted to vanquish the enemy, but not destroy either themselves or the environment to the point at which it's no longer livable. This is one of the interesting things that shifts to the foreground when thinking about agents in the context of existential risks. As for people fighting to the death, often this was done for the sake of group survival, where the group is the relevant unit here. (Thoughts?)

Comment author: philosophytorres 20 October 2016 08:04:53PM 0 points [-]

(2) is quite different in that it isn't motivated by supernatural eschatologies. Thus, the ideological and psychological profiles of ecoterrorists are quite different from those of apocalyptic terrorists, who are bound together by certain common worldview-related threads.

Comment author: philosophytorres 20 October 2016 08:02:50PM 0 points [-]

I think my language could have been more precise: it's not merely genocidal but humanicidal or omnicidal groups that we're talking about in the context of x-risks. Also, the Khmer Rouge weren't suicidal to my knowledge. Am I less right?
