lukeprog comments on Will the world's elites navigate the creation of AI just fine? - Less Wrong

Post author: lukeprog 31 May 2013 06:49PM




Comment author: lukeprog 24 November 2013 01:27:32PM 1 point

From Singer's Wired for War:

people have long peered into the future and then gotten it completely and utterly wrong. My favorite example took place on October 9, 1903, when the New York Times predicted that “the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years.” That same day, two brothers who owned a bicycle shop in Ohio started assembling the very first airplane, which would fly just a few weeks later.

Similarly botched predictions frequently happen in the military field. General Giulio Douhet, the commander of Italy’s air force in World War I, is perhaps the most infamous. In 1921, he wrote a best-selling book called The Command of the Air, which argued that the invention of airplanes made all other parts of the military obsolete and unnecessary. Needless to say, this would be news both to my granddaddy, who sailed out to another world war just twenty years later, and to the soldiers slogging through the sand and dust of Iraq and Afghanistan today.

Comment author: lukeprog 24 November 2013 01:54:15PM 1 point

More (#7) from Wired for War:

if a robot vacuum cleaner started sucking up infants as well as dust, because of some programming error or design flaw, we can be sure that the people who made the mistakes would be held liable. That same idea of product liability can be taken from civilian law and applied over to the laws of war. While a system may be autonomous, those who created it still hold some responsibility for its actions. Given the larger stakes of war crimes, though, the punishment shouldn’t be a lawsuit, but criminal prosecution. If a programmer gets an entire village blown up by mistake, the proper punishment is not a monetary fine that the firm’s insurance company will end up paying. Many researchers might balk at this idea and claim it will stand in the way of their work. But as Bill Joy sensibly notes, especially when the consequences are high, “Scientists and technologists must take clear responsibility for the consequences of their discoveries.” Dr. Frankenstein should not get a free pass for his monster’s work, just because he has a doctorate.

The same concept could apply to unmanned systems that commit some war crime not because of manufacturer’s defect, but because of some sort of misuse or failure to take proper precautions. Given the different ways that people are likely to classify robots as “beings” when it comes to expectations of rights we might grant them one day, the same concept might be flipped across to the responsibilities that come with using or owning them. For example, a dog is a living, breathing animal totally separate from a human. That doesn’t mean, however, that the law is silent on the many legal questions that can arise from dogs’ actions. As odd as it sounds, pet law might then be a useful resource in figuring out how to assess the accountability of autonomous systems.

The owner of a pit bull may not be in total control of exactly what the dog does or even who the dog bites. The dog’s autonomy as a “being” doesn’t mean, however, that we just wave our hands and act as if there is no accountability if that dog mauls a little kid. Even if the pit bull’s owner was gone at the time, they still might be criminally prosecuted if the dog was abused or trained (programmed) improperly, or because the owner showed some sort of negligence in putting a dangerous dog into a situation where it was easy for kids to get harmed.

Like the dog owner, some future commander who deploys an autonomous robot may not always be in total control of their robot’s every operation, but that does not necessarily break their chain of accountability. If it turns out that the commands or programs they authorized the robot to operate under somehow contributed to a violation of the laws of war or if their robot was deployed into a situation where a reasonable person could guess that harm would occur, even unintentionally, then it is proper to hold them responsible. Commanders have what is known as responsibility “by negation.” Because they helped set the whole situation in process, commanders are equally responsible for what they didn’t do to avoid a war crime as for what they might have done to cause it.

And:

Today, the concept of machines replacing humans at the top of the food chain is not limited to stories like The Terminator or Maximum Overdrive (the Stephen King movie in which eighteen-wheeler trucks conspire to take over the world, one truck stop at a time). As military robotics expert Robert Finkelstein projects, “within 20 years” the pairing of AI and robotics will reach a point of development where a machine “matches human capabilities. You [will] have endowed it with capabilities that will allow it to outperform humans. It can’t stay static. It will be more than human, different than human. It will change at a pace that humans can’t match.” When technology reaches this point, “the rules change,” says Finkelstein. “On Monday you control it, on Tuesday it is doing things you didn’t anticipate, on Wednesday, God only knows. Is it a good thing or a bad thing, who knows? It could end up causing the end of humanity, or it could end war forever.”

Finkelstein is hardly the only scientist who talks so directly about robots taking over one day. Hans Moravec, director of the Robotics Institute at Carnegie Mellon University, believes that “the robots will eventually succeed us: humans clearly face extinction.” Eric Drexler, the engineer behind many of the basic concepts of nanotechnology, says that “our machines are evolving faster than we are. Within a few decades they seem likely to surpass us. Unless we learn to live with them in safety, our future will likely be both exciting and short.” Freeman Dyson, the distinguished physicist and mathematician who helped jump-start the field of quantum mechanics (and inspired the character of Dyson in the Terminator movies), states that “humanity looks to me like a magnificent beginning, but not the final word.” His equally distinguished son, the science historian George Dyson, came to the same conclusion, but for different reasons. As he puts it, “In the game of life and evolution, there are three players at the table: human beings, nature and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” Even inventor Ray Kurzweil of Singularity fame gives humanity “a 50 percent chance of survival.” He adds, “But then, I’ve always been accused of being an optimist.”

...Others believe that we must take action now to stave off this kind of future. Bill Joy, the cofounder of Sun Microsystems, describes himself as having had an epiphany a few years ago about his role in humanity’s future. “In designing software and microprocessors, I have never had the feeling I was designing an intelligent machine. The software and hardware is so fragile, and the capabilities of a machine to ‘think’ so clearly absent that, even as a possibility, this has always seemed very far in the future.... But now, with the prospect of human-level computing power in about 30 years, a new idea suggests itself: that I may be working to create tools which will enable the construction of technology that may replace our species. How do I feel about this? Very uncomfortable.”

Comment author: Clarity 17 October 2015 04:48:23PM 0 points

The army recruiters say that soldiers on the ground still win wars. I reckon Douhet's prediction will come true, however crudely, in the form of drones.

Comment author: lukeprog 24 November 2013 01:45:52PM 0 points

More (#5) from Wired for War:

The challenge for the United States is that stories like that of the Blues and Predator, where smart, innovative systems are designed at low costs, are all too rare. The U.S. military is by far the biggest designer and purchaser of weapons in the world. But it is also the most inefficient. As David Walker, the head of the Government Accountability Office (GAO), puts it, “We’re number 1 in the world in military capabilities. But on the business side, the Defense Department gets a D-minus, giving them the benefit of the doubt. If they were a business, they wouldn’t be in business.”

The Department of Justice once found that as much as 5 percent of the government’s annual budget is lost to old-fashioned fraud and theft, most of it in the defense realm. This is not helped by the fact that the Pentagon’s own rules and laws for how it should buy weapons are “routinely broken,” as one report in Defense News put it. One 2007 study of 131 Pentagon purchases found that 117 did not meet federal regulation standards. The Pentagon’s own inspector general also reported that not one person had been fired or otherwise held accountable for these violations.

...Whenever any new weapon is contemplated, the military often adds wave after wave of new requirements, gradually creeping the original concept outward. It builds in new design mandates, asks for various improvements and additions, forgetting that each new addition means another delay in delivery (and for robots, at least, forgetting that the systems were meant to be expendable). In turn, the makers are often only too happy to go along with what transforms into a process of gold-plating, as adding more bells, more whistles, and more design time means more money. These sorts of problems are rife in U.S. military robotics today. The MDARS (Mobile Detection Assessment Response System) is a golf-cart-sized robot that was planned as a cheap sentry at Pentagon warehouses and bases. It is now fifty times more expensive than originally projected. The air force’s unmanned bomber design is already projecting out at more than $2 billion a plane, roughly three times the original $737 million cost of the B-2 bomber it is to replace.

These costs weigh not just in dollars and cents. The more expensive the systems are, the fewer can be bought. The U.S. military becomes more heavily invested in those limited numbers of systems, and becomes less likely to change course and develop or buy alternative systems, even if they turn out to be better. The costs also change what doctrines can be used in battle, as the smaller number makes the military less likely to endanger systems in risky operations. Many worry this is defeating the whole purpose of unmanned systems. “We become prisoners of our very expensive purchases,” explains Ralph Peters. He worries that the United States might potentially lose some future war because of what he calls “quantitative incompetence.” Norm Augustine even jokes, all too seriously, that if the present trend continues, “In the year 2054, the entire defense budget will purchase just one tactical aircraft. This aircraft will have to be shared by the Air Force and Navy, three and one half days per week, except for the leap year, when it will be made available to the Marines for the extra day.”

Comment author: lukeprog 24 November 2013 01:43:25PM 0 points

More (#4) from Wired for War:

The force soon centered on a doctrine that would later be called the blitzkrieg, or “lightning war.” Tanks would be coordinated with air, artillery, and infantry units to create a concentrated force that could punch through enemy lines and spread shock and chaos, ultimately overwhelming the foe. This choice of doctrine influenced the Germans to build tanks that emphasized speed (German tanks were twice as fast) and reliability (the complicated French and British tanks often broke down), and that could communicate and coordinate with each other by radio. When Hitler later took power, he supported this mechanized way of warfare not only because it melded well with his vision of Nazism as the wave of the future, but also because he had a personal fear of horses.

When war returned to Europe, it seemed unlikely that the Germans would win. The French and the British had won the last war in the trenches, and seemed well prepared for this one with the newly constructed Maginot Line of fortifications. They also seemed better off with the new technologies as well. Indeed, the French alone had more tanks than the Germans (3,245 to 2,574). But the Germans chose the better doctrine, and they conquered all of France in just over forty days. In short, both sides had access to roughly the same technology, but made vastly different choices about how to use it, choices that shaped history.

And:

the [air] force will still sometimes put pilots’ career interests ahead of military efficiency, especially when those making the decisions are fighter jocks themselves. For example, many believe that the air force canceled its combat drone, Boeing’s X-45, before it could even be tested, in order to keep it from competing with its manned fighter jet of the future, the Joint Strike Fighter (JSF, a program now $38 billion over its original budget, and twenty-seven months past its schedule). One designer recalls, “The reason that was given was that we were expected to be simply too good in key areas and that we would have caused massive disruption to the efforts to ‘keep . . . JSF sold.’ If we had flown and things like survivability had been evenly assessed on a small scale and Congress had gotten ahold of the data, JSF would have been in serious trouble.”

Military cultural resistance also jibes with problems of technological “lock-in.” This is where change is resisted because of the costs sunk in the old technology, such as the large investment in infrastructure supporting it. Lock-in, for example, is why so many corporate and political interests are fighting the shift away from gas-guzzling cars.

This mix of organizational culture and past investment is why militaries will go to great lengths to keep their old systems relevant and old institutions intact. Cavalry forces were so desperate to keep horses relevant when machine guns and engines entered twentieth-century warfare that they even tried out “battle chariots,” which were basically machine guns mounted on the kind of chariots once used by ancient armies. Today’s equivalent is the development of a two-seat version of the Air Force’s F-22 Raptor (which costs some $360 million per plane, when you count the research and development). A sell of the idea described how the copilot is there to supervise an accompanying UAV that would be sent to strike guarded targets and engage enemy planes in any dogfights, as the drone could “perform high-speed aerobatics that would render a human pilot unconscious.” It’s an interesting concept, but it begs the question of what the human fighter pilot would do.

Akin to the baseball managers who couldn’t adapt to change like Billy Beane, such cultural resistance may prove another reason why the U.S. military could fall behind others in future wars, despite its massive investments in technologies. As General Eric Shinseki, the former U.S. Army chief of staff, once admonished his own service, “If you dislike change, you’re going to dislike irrelevance even more.” It is not a good sign then that the last time Shinseki made such a warning against the general opinion—that the invasion of Iraq would be costly—he was summarily fired by then secretary of defense Rumsfeld.

Comment author: lukeprog 24 November 2013 01:39:45PM 0 points

More (#3) from Wired for War:

Congress ordered the Pentagon to show a “preference for joint unmanned systems in acquisition programs for new systems, including a requirement under any such program for the development of a manned system for a certification that an unmanned system is incapable of meeting program requirements.” If the U.S. military was going to buy a new weapon, it would now have to justify why it was not a robotic one.

And:

In Steven Spielberg’s movie Minority Report, for instance, Tom Cruise wears gloves that turn his fingers into a virtual joystick/mouse, allowing him to call up and control data, including even video, without ever touching a computer. He literally can “point and click” in thin air. Colonel Bruce Sturk, who runs the high-tech battle lab at Langley Air Force Base, liked what he saw in the movie. “As a military person, I said, ‘My goodness, how great would it be if we had something similar to that?’ ” So the defense contractor Raytheon was hired to create a real version for the Pentagon. Bringing it full circle, the company then hired John Underkoffler, the technology guru who had first proposed the fictional idea to Spielberg. The result is the “G-Speak Gestural Technology System,” which lets users type and control images on a projected screen (including even a virtual computer keyboard projected in front of the user). Movie magic is made real via sensors inside the gloves and cameras that track the user’s hand movements.

And:

it is easy to see the attraction of building increasing levels of autonomy into military robots. The more autonomy a robot has, the less human operators have to support it. As one Pentagon report put it, “Having a dedicated operator for each robot will not pass the common sense test.” If robots don’t get higher on the autonomy scale, they don’t yield any cost or manpower savings. Moreover, it is incredibly difficult to operate a robot while trying to interpret and use the information it gathers. It can even get dangerous as it’s hard to operate a complex system while maintaining your own situational awareness in battle. The kid parallel would be like trying to play Madden football on a PlayStation in the middle of an actual game of dodgeball.

With the rise of more sophisticated sensors that better see the world, faster computers that can process information more quickly, and most important, GPS that can give a robot its location and destination instantaneously, higher levels of autonomy are becoming more attainable, as well as cheaper to build into robots. But each level of autonomy means more independence. It is a potential good in moving the human away from danger, but also raises the stakes of the robot’s decisions.

Comment author: lukeprog 24 November 2013 01:35:22PM 0 points

More (#2) from Wired for War:

Beyond just the factor of putting humans into dangerous environments, technology does not have the same limitations as the human body. For example, it used to be that when planes made high-speed turns or accelerations, the same gravitational pressures (g-forces) that knocked the human pilot out would also tear the plane apart. But now, as one study described of the F-16, the machines are pushing far ahead. “The airplane was too good. In fact, it was better than its pilots in one crucial way: It could maneuver so fast and hard that its pilots blacked out.”

If, as an official at DARPA observed, “the human is becoming the weakest link in defense systems,” unmanned systems offer a path around those limitations. They can fly faster and turn harder, without worrying about that squishy part in the middle. Looking forward, a robotics researcher notes that “the UCAV [the unmanned fighter jet] will totally trump the human pilot eventually, purely because of physics.” This may prove equally true at sea, and not just in underwater operations, where humans have to worry about small matters like breathing or suffering ruptured organs from water pressure. For example, small robotic boats (USV) have already operated in “sea state six.” This is when the ocean is so rough that waves are eighteen feet high or more, and human sailors would break their bones from all the tossing about.

Working at digital speed is another unmanned advantage that’s crucial in dangerous situations. Automobile crash avoidance technologies illustrate that a digital system can recognize a danger and react in about the same time that the human driver can only get to mid-curse word. Military analysts see the same thing happening in war, where bullets or even computer-guided missiles come in at Mach speed and defenses must be able to react against them even quicker. Humans can only react to incoming mortar rounds by taking cover at the last second, whereas “R2-D2,” the CRAM system in Baghdad, is able to shoot them down before they even arrive. Some think this is only the start. One army colonel says, “The trend towards the future will be robots reacting to robot attack, especially when operating at technologic speed. . . . As the loop gets shorter and shorter, there won’t be any time in it for humans.”

Comment author: lukeprog 24 November 2013 01:32:44PM 0 points

More (#1) from Wired for War:

nothing in this book is classified information. I only include what is available in the public domain. Of course, a few times in the course of the research I would ask some soldier or scientist about a secretive project or document and they would say, “How did you find out about that? I can’t even talk about it!” “Google” was all too often my answer, which says a lot both about security as well as what AI search programs bring to modern research.

And:

On August 12, 1944, the naval version of one of these planes, a converted B-24 bomber, was sent to take out a suspected Nazi V-3, an experimental 300-foot-long “supercannon” that supposedly could hit London from over 100 miles away (unbeknownst to the Allies, the cannon had already been knocked out of commission in a previous air raid). Before the plane even crossed the English Channel, the volatile Torpex exploded and killed the crew.

The pilot was Joseph Kennedy Jr., older brother of John Fitzgerald Kennedy, thirty-fifth president of the United States. The two had spent much of their youth competing for the attention of their father, the powerful businessman and politician Joseph Sr. While younger brother JFK was often sickly and decidedly bookish, firstborn son Joe Jr. had been the “chosen one” of the family. He was a natural-born athlete and leader, groomed from birth to become the very first Catholic president. Indeed, it is telling that in 1940, just before war broke out, JFK was auditing classes at Stanford Business School, while Joe Jr. was serving as a delegate to the Democratic National Convention. When the war started, Joe Jr. became a navy pilot, perhaps the most glamorous role at the time. John was initially rejected for service by the army because of his bad back. The navy relented and allowed John to join only after his father used his political influence.

When Joe Kennedy Jr. was killed in 1944, two things happened: the army ended the drone program for fear of angering the powerful Joe Sr. (setting the United States back for years in the use of remote systems), and the mantle of “chosen one” fell on JFK. When the congressional seat in Boston opened up in 1946, what had been planned for Joe Jr. was handed to JFK, who had instead been thinking of becoming a journalist. He would spend the rest of his days not only carrying the mantle of leadership, but also trying to live up to his dead brother’s carefree and playboy image.

Comment author: lukeprog 24 November 2013 01:50:06PM 0 points

More (#6) from Wired for War:

Perhaps the best illustration of how the bar is being lowered for groups seeking to develop or use such sophisticated systems comes in the form of “Team Gray,” one of the competitors in the 2005 DARPA Grand Challenge. Gray Insurance is a family-owned insurance company from Metairie, Louisiana, just outside New Orleans. As Eric Gray, who owns the firm along with his brother and dad, explained, the firm’s entry into robotics came on a lark. “I read an article in Popular Science about last year’s race and then threw the magazine in the back of my office. Later on, my brother came over and read the article, and he yelled over to me, ‘Hey did you read about this race?’ And I said, ‘Yeah,’ and he said, ‘You wanna try it?’ And I said, ‘Yeah, heck, let’s give it a try.’ ”

The Grays didn’t have PhDs in robotics, billion-dollar military labs backing them, or even much familiarity with computers. Instead, they brought in the head of their insurance company’s ten-person IT department for guidance on what to do. He then went out and bought some of the various parts and components described in the magazine article. They got their ruggedized computer, for example, at a boat show. The Grays then began reading up on video game programming, thinking that programming a robot car to drive through the real-world course had many parallels with “navigating an animated monster through a virtual world.” Everything was loaded into a Ford Escape Hybrid SUV, which they called Kat 5, after the category 5 Hurricane Katrina that hit their hometown just a few months before the race.

When it came time for the race to see who could design the best future automated military vehicle, Team Gray’s entry lined up beside robots made by some of the world’s most prestigious universities and companies. Kat 5 then not only finished the racecourse (recall that no robot contestant had even been able to go more than a few miles the year before), but came in fourth out of the 195 contestants, just thirty-seven minutes behind Sebastian Thrun’s Stanley robot. Said Eric Gray, who spent only $650,000 to make a robot that the Pentagon and nearly every top research university had been unable to build just a year before, “It’s a beautiful thing when people are ignorant that something is impossible.”

And:

When we think of the terrorist risks that emanate from unmanned systems, robotics expert Robert Finkelstein advises that we shouldn’t just look at organizations like al-Qaeda. “They can make a lone actor like Timothy McVeigh even more scary.” He describes a scenario in which “a few amateurs could shut down Manhattan with relative ease.” (Given that my publisher is based in Manhattan, we decided to leave the details out of the book.) Washington Post technology reporter Joel Garreau similarly writes, “One bright but embittered loner or one dissident grad student intent on martyrdom could—in a decent biological lab for example—unleash more death than ever dreamed of in nuclear scenarios. It could even be done by accident.”

In political theory, noted philosophers like Thomas Hobbes argued that individuals have always had to grant their obedience to governments because it was only by banding together and obeying some leader that people could protect themselves. Otherwise, life would be “nasty, brutish and short,” as he famously described a world without governments. But most people forget the rest of the deal that Hobbes laid out. “The obligation of subjects to the sovereign is understood to last as long and no longer than the power lasteth by which he is able to protect them.”

As a variety of scientists and analysts look at such new technologies as robotics, AI and nanotech, they are finding that massive power will no longer be held only by states. Nor will it even be limited to nonstate organizations like Hezbollah or al-Qaeda. It is also within the reach of individuals. The playing field is changing for Hobbes’s sovereign.

Even the eternal optimist Ray Kurzweil believes that with the barriers to entry being lowered for violence, we could see the rise of superempowered individuals who literally hold humanity’s future in their hands. New technologies are allowing individuals with creativity to push the limits of what is possible. He points out how Sergey Brin and Larry Page were just two Stanford kids with a creative idea that turned into Google, a mechanism that makes it easy for anyone to search almost all the world’s knowledge. However, their $100 billion idea is “also empowering for those who are destructive.” Information on how to build your own remote bomb or the genetic code for the 1918 flu bug are as searchable as the latest news on Britney Spears. Kurzweil describes the looming period in human history that we are entering, just before his hoped-for Singularity: “It feels like all ten billion of us are standing in a room up to our knees in flammable fluid, waiting for someone—anyone—to light a match.”

Kurzweil thinks we have enough fire extinguishers to avoid going up in flames before the Singularity arrives, but others aren’t so certain. Bill Joy, the so-called father of the Internet, for example, fears what he calls “KMD,” individuals who wield knowledge-enabled mass destruction. “It is no exaggeration to say that we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation states, on to a surprising and terrible empowerment of individuals.”

The science fiction writers concur. “Single individual mass destruction” is the biggest dilemma we have to worry about with our new technologies, warns Greg Bear. He notes that many high school labs now have greater sophistication and capability than the Pentagon’s top research labs did in the cold war. Vernor Vinge, the computer scientist turned award-winning novelist, agrees: “Historically, warfare has pushed technologies. We are in a situation now, if certain technologies become cheap enough, it’s not just countries that can do terrible things to millions of people, but criminal gangs can do terrible things to millions of people. What if for 50 dollars you buy something that could destroy everybody in a country? Then, basically, anybody who’s having a bad hair day is a threat to national survival.”

Comment author: Clarity 17 October 2015 04:56:00PM -2 points

Inequality doesn't seem so bad now, huh?