Oligopsony comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: Oligopsony 15 August 2010 11:20:09AM 20 points [-]

I'm new to all this singularity stuff - and as an anecdotal data point, I'll say a lot of it does make my kook bells go off - but with an existential threat like uFAI, what does the awareness of the layperson count for? With global warming, even if most of any real solution involves the redesign of cities and development of more efficient energy sources, individuals can take some responsibility for their personal energy consumption or how they vote. uFAI is a problem to be solved by a clique of computer and cognitive scientists. Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

There is, of course, the question of fundraising. ("This problem is too complicated for you to help with directly, but you can give us money..." sets off further alarm bells.) But from that perspective someone who thinks you're nuts is no worse than someone who hasn't heard of you. You can ramp up the variance of people's opinions and come out better financially.

Comment author: CarlShulman 15 August 2010 11:26:44AM 15 points [-]

Awareness on the part of government funding agencies (and the legislators and executive branch people with influence over them), technology companies and investors, and political and military decisionmakers (eventually) could all matter quite a lot. Not to mention bright young people deciding on their careers and research foci.

Comment author: wedrifid 15 August 2010 11:43:40AM 5 points [-]

Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

The people who do the real work. Ultimately it doesn't matter whether the people who do the AI research care about existential risk or not (if we make some rather strong economic assumptions). But you've noticed this already, and you are right about the 'further alarm bells'.

Ultimately, the awareness of the layperson matters for the same reason it matters for any other political issue. With AI, people can't get their idealistic warm fuzzies from barely relevant gestures like 'turning off a light bulb', but things like 'how they vote' do matter - even if the 'voting' happens at the lower level of 'which institutions do you consider more prestigious?'

You can ramp up the variance of people's opinions and come out better financially.

Good point!

Comment author: jacob_cannell 25 August 2010 03:57:50AM *  0 points [-]

Don't you realize the default scenario?

The default scenario is that some startup or big company (or mix thereof) develops strong AGI for commercialization, attempts to 'control' it, fails, and inadvertently unleashes a god upon the earth. To a first approximation, the kind of AGI we are discussing here could simply be called a god. Nanotechnology is based on science, but it will seem like magic.

The question then is what kind of god do we want to unleash.

Comment author: ata 25 August 2010 04:10:39AM *  8 points [-]

While we're in a thread with "Public Relations" in its title, I'd like to point out that calling an AGI a "god", even metaphorically or by (some) definition, is probably a very bad idea. Calling anything a god will (obviously) tend to evoke religious feelings (an acute mind-killer), not to mention that sort of writing isn't going to help much in combating the singularity-as-religion pattern completion.

Comment author: jacob_cannell 25 August 2010 07:00:38AM *  -2 points [-]

Religions are worldviews. The Singularity is also a worldview, and one whose prediction of the future is quite different from the older, more standard linear atheist scientific worldview, in which the future is unknown but probably like the past, AI plays no role, etc. etc.

As for the "by (some) definition": I find it actually supports the cluster-mapping utility of the god term as applied to AIs. "Scary powerful optimization process" just doesn't instantly convey the proper power relation.

Nonetheless, I do consider your public relations point important. But I'm not convinced that one needs to hide fully behind the accepted confines of the scientific magisterium and avoid the unspoken words.

Science tells us how the world was, is, and can become. Religion/Mythology/Science Fiction tells us what people want the world to be.

Understanding the latter domain is important for creating good AI and CEV and all that.

Comment author: Pavitra 25 August 2010 07:23:23AM 2 points [-]

Calling an AGI a god too easily conjures up visions of a benevolent force. Even those who consider that it might not have our best interests at heart tend to think of dystopian science fiction.

I use the phrase "robot Cthulhu", because the Singularity will probably eat the world without particularly noticing or caring that there's someone living on it.

Comment author: kodos96 25 August 2010 08:56:03AM 2 points [-]

Calling an AGI a god too easily conjures up visions of a benevolent force

That really depends on how you feel about religion/god in the first place. To a guy like me, who is, as Hitchens is fond of describing himself, "not just an atheist, but an anti-theist", the uFAI/god connection makes me want to donate everything I have to SIAI to make sure it doesn't happen.

Maybe that's just me.

Comment author: timtyler 25 August 2010 05:58:35AM 0 points [-]

The default scenario is some startup or big company or mix therein develops strong AGI for commercialization, attempts to 'control it', fails,

You assume incompetent engineers?!? What's the best case for engineers predictably failing at safety-critical tasks?

Comment author: khafra 25 August 2010 02:08:04PM *  1 point [-]

Incompetence is not a necessary condition for failure. Building something new is pretty near a sufficient condition for it, though. For instance, bridge design has been well understood by engineers for millennia, but a slight variation on it brought catastrophic failure.

Comment author: timtyler 25 August 2010 04:19:22PM *  0 points [-]

Moon landings? Man in space?

http://en.wikipedia.org/wiki/Transatlantic_flight#Early_notable_transatlantic_flights

...shows that after the first success there were some failures - but nobody died up until The White Bird in 1927.

Engineers are pretty good at not killing people. In fact their efforts have created lives on a large scale.

Major sources of lives lost to engineering are automobile accidents and weapons of war. Automobile accidents are due to machines being too stupid - and intelligent machines should help fix that.

The "bug that destroyed the world" scenario seems pretty incredible to me - and I don't see a case for describing it as the "default scenario".

It seems, if anything - based on what we have seen so far - that it is slightly more likely that a virus might destroy the world - not that the chances of that happening are very high either.

Comment author: thomblake 25 August 2010 04:23:26PM 0 points [-]

...shows that after the first success there were some failures - but nobody died.

"Notable attempt (3)" - "lost" likely means "died".

Comment author: timtyler 25 August 2010 04:39:10PM *  0 points [-]

Thanks. I had edited my post before seeing your reply.

Powered flight had a few associated early deaths: Otto Lilienthal died in a glider in 1896. Percy Pilcher in another hang gliding crash in 1899. Wilbur Wright almost came to a sticky end himself.

Comment author: thomblake 25 August 2010 04:20:42PM 0 points [-]

It seems, if anything, slightly more likely that a virus might destroy the world - not that the chances of that happening are very high either.

I'd never compared the likelihood of those two events before; is this comparison discussed anywhere prominent?

Comment author: timtyler 25 August 2010 04:43:59PM *  1 point [-]

I don't know. Looking at the current IT scene, viruses, trojans and malware are probably the most prominent source of damage.

Bugs which are harmful are often the ones that allow viruses and malware to be produced.

We kind-of know how to avoid most harmful bugs. But either nobody cares enough to bother - or else the NSA likes people to be using insecure computers.