timtyler comments on Self-improving AGI: Is a confrontational or a secretive approach favorable? - Less Wrong

7 Post author: Friendly-HI 11 July 2011 03:29PM


Comment author: timtyler 06 July 2011 09:00:24AM *  2 points [-]

Once robots become more commonplace in our lives, I think we can reasonably expect people to begin placing their trust in simple AIs - and they will hopefully become less suspicious of AGI, simply assuming (like a lot of current AI researchers, apparently) that it is somehow trivial to make it behave in a friendly way towards humans.

Step one is to use machine intelligence to stop the carnage on the roads. With machines regularly brutally killing and maiming people, trust in machines is not going to get very high.

Comment author: Wilka 11 July 2011 08:48:35PM *  1 point [-]

I think http://inhabitat.com/google-succeeds-in-making-driverless-cars-legal-in-nevada/ was a big step towards improving that. Provided it works out, once people start to notice the (hopefully) massive drop in traffic accidents for autonomous cars, they should push for them to become more widespread.

Still, it will be a while before they're actually in use on the roads.

Comment author: Raw_Power 06 July 2011 11:37:07AM 1 point [-]

Car accidents take more lives in developed countries than actual wars. This is depressing.

Comment author: JoshuaZ 06 July 2011 01:47:22PM *  7 points [-]

Car accidents take more lives in developed countries than actual wars. This is depressing.

It tells us where we should concentrate our work. But this isn't depressing: it's a sign that as a society we've become a lot more peaceful over the last few hundred years. Incidentally, the number of traffic fatalities in the US has shown a general downward trend for the last fifty years, even as the US population has increased. Moreover, I think the same is true in much of Europe. (I don't have a citation for this part, though.)

Comment author: fubarobfusco 07 July 2011 09:16:55AM 0 points [-]

I'd expect a lot of that downward trend is due to better engineering, or to be specific, more humane engineering — designing cars in a way that takes into account the human preference that survival of the humans inside the car is a critical concern.

A 1950s car is designed as a machine for going fast. A modern car is designed as a machine to protect your life at high speed. The comparison is astounding.

It is arguably an example of rationality failure that automobile safety had to become a political issue before this change in engineering values was made.

Comment author: timtyler 06 July 2011 11:53:41AM *  4 points [-]

Right - and we mostly know how to fix it: smart cars. We pretty much have the technology today; it just needs to be worked on and deployed.

Comment author: Raw_Power 06 July 2011 01:58:06PM 1 point [-]

Linkies?

If only we could say the same of the accident victims... "We have the technology. We can rebuild them..."

Comment author: timtyler 06 July 2011 02:54:28PM 0 points [-]

Well, search if you are interested. There's a lot of low-hanging fruit in the area. From slamming on the brakes when you are about to hit something, to pointing a camera at the driver to see if they are awake.
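The "slamming on the brakes" example above can be illustrated with a toy time-to-collision check. This is purely a hypothetical sketch - the function names, the 1.5-second threshold, and the scenario numbers are all invented for illustration and are not taken from any real vehicle system:

```python
# Toy sketch of an automatic emergency braking decision.
# All names and thresholds here are illustrative assumptions.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing on the obstacle; no collision course
    return gap_m / closing_speed_mps

def should_brake(gap_m: float, closing_speed_mps: float,
                 threshold_s: float = 1.5) -> bool:
    """Trigger emergency braking when time-to-collision drops below threshold."""
    return time_to_collision(gap_m, closing_speed_mps) < threshold_s

# 30 m gap, closing at 25 m/s: TTC = 1.2 s, below the threshold -> brake.
print(should_brake(30.0, 25.0))  # True
# Same gap, closing at only 10 m/s: TTC = 3.0 s -> no intervention.
print(should_brake(30.0, 10.0))  # False
```

A real system would of course also account for sensor noise, road conditions, and driver reaction time, but the low-hanging fruit is exactly this kind of simple rule.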

Comment author: Raw_Power 06 July 2011 03:01:47PM 3 points [-]

Any reason why that sector isn't developing explosively then? Or is it actually developing explosively and we just don't notice?

Comment author: timtyler 06 July 2011 03:18:19PM *  2 points [-]

Safety is improving - despite there being more vehicles on the roads. ...and yes, there are developments taking place with smart cars. e.g. Volvo Pedestrian Detection. Of course one issue is how to sell additional safety to consumer. It is often most visible in the price tag.

Comment author: Raw_Power 06 July 2011 03:27:17PM 2 points [-]

I suggest legislation. It's hard to get someone to pay additional money to protect others, especially from themselves. It's much easier to get them to feel the fuzzy moral righteousness of supporting a law that forces them to do so by making those measures compulsory.

Comment author: timtyler 06 July 2011 04:17:14PM *  3 points [-]

I suggest legislation.

That might take a while, though. What might help a little in the mean time is recognition and support from insurance companies.

Comment author: p4wnc6 12 July 2011 12:29:57AM *  2 points [-]

This seems to suffer from the same problem as robotics in surgery. Not only can't people readily understand the expected-utility benefit of having a robot assist a doctor with difficult incisions, they go further and demand that we not act or talk about medical risk as if it were quantifiable. Most people tend to think that if you reduce their medical care to a numeric risk - even if that number is very accurate and it is really quite beneficial to have it - then you are somehow cold and don't care about them. I think an insurance company would have a hard time not alienating its customers (who are mostly non-rationalists) by showing interest in any procedure that takes control of human lives out of the hands of humans - even if doing so were undeniably statistically safer. People don't care much about what actually is safer, but rather about what is "safer" in some flowery model that includes religion and apple pie and the American dream, etc. I think getting societies at large to adopt technologies like these will either have to be enforced through unpopular legislation or come from a massive grassroots campaign that gets younger generations to accept the methods of rationality.
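The "expected utility benefit" mentioned above amounts to a very small calculation. The numbers below are entirely made up for illustration - they are not real surgical complication rates - but they show the shape of the quantified-risk comparison people resist:

```python
# Toy expected-harm comparison with invented numbers, purely to show
# how quantified-risk reasoning works.

def expected_harm(p_complication: float, harm_if_complication: float) -> float:
    """Expected harm = probability of a bad outcome times its severity."""
    return p_complication * harm_if_complication

# Hypothetical: 2% complication rate unassisted vs. 0.8% robot-assisted,
# same severity either way.
manual = expected_harm(0.02, 100.0)           # ~2.0 expected harm units
robot_assisted = expected_harm(0.008, 100.0)  # ~0.8 expected harm units

assert robot_assisted < manual  # the numeric comparison favours assistance
```

The point is that the comparison is trivial once the risks are quantified; the resistance is to quantifying them at all.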

Comment author: Raw_Power 06 July 2011 04:44:54PM 1 point [-]

Yeah, definitely! They'd certainly like that, if they could be convinced it'd be cost-effective for them. Then lobbying occurs.

Comment author: Friendly-HI 04 November 2011 01:45:31PM 0 points [-]

...except if safe cars become so abundant one day that no one will want to pay insurance for such an unlikely incident as dying inside a car.