RSI capabilities could be charted, and are likely to be AI-complete.
What does RSI stand for?
Lately I've been listening to audiobooks (at 2x speed) in my downtime, especially ones that seem likely to contain passages relevant to the question of how well policy-makers will deal with AGI; basically, I'm continuing this project, but doing only the "collection" stage, not the "analysis" stage.
I'll post quotes from the audiobooks I listen to as replies to this comment.
More (#3) from The Better Angels of Our Nature:
...let’s have a look at political discourse, which most people believe has been getting dumb and dumber. There’s no such thing as the IQ of a speech, but Tetlock and other political psychologists have identified a variable called integrative complexity that captures a sense of intellectual balance, nuance, and sophistication. A passage that is low in integrative complexity stakes out an opinion and relentlessly hammers it home, without nuance or qualification. Its minimal complexity can be quantified by counting words like absolutely, always, certainly, definitively, entirely, forever, indisputable, irrefutable, undoubtedly, and unquestionably. A passage gets credit for some degree of integrative complexity if it shows a touch of subtlety with words like usually, almost, but, however, and maybe. It is rated higher if it acknowledges two points of view, higher still if it discusses connections, tradeoffs, or compromises between them, and highest of all if it explains these relationships by reference to a higher principle or system. The integrative complexity of a passage is not the same as the intelligence of the person who wrote it, but the
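As a toy illustration of the word-counting idea in that passage, here's a minimal sketch in Python. To be clear, this is not Tetlock's actual integrative-complexity measure (which uses trained human raters scoring passages on a 1-7 scale); the word lists are just the examples the quote gives.

```python
import re

# A toy scorer based only on the two word lists quoted above. This is NOT
# Tetlock's actual integrative-complexity coding scheme; it merely
# illustrates the word-counting idea the passage describes.

LOW_MARKERS = {
    "absolutely", "always", "certainly", "definitively", "entirely",
    "forever", "indisputable", "irrefutable", "undoubtedly", "unquestionably",
}
HIGH_MARKERS = {"usually", "almost", "but", "however", "maybe"}

def complexity_signal(passage: str) -> float:
    """Crude signal: positive means more hedged/nuanced wording."""
    words = re.findall(r"[a-z]+", passage.lower())
    if not words:
        return 0.0
    low = sum(w in LOW_MARKERS for w in words)
    high = sum(w in HIGH_MARKERS for w in words)
    return (high - low) / len(words)

print(complexity_signal("This is absolutely, unquestionably, forever true."))  # negative
print(complexity_signal("This is usually true, but maybe not always."))        # positive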
Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.
Outstanding:
Worthwhile if you care about the subject matter:
A process for turning ebooks into audiobooks for personal use, at least on Mac:
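(The post's specific steps aren't reproduced here. As one hypothetical sketch, assuming you've already exported the ebook to plain text, e.g. with Calibre, macOS's built-in `say` command can render text straight to an audio file; the voice name and file paths below are placeholders.)

```python
import subprocess

# Hypothetical sketch only -- not the original post's process. Assumes a
# plain-text export of the ebook and macOS's built-in `say` command, which
# can write its speech output to an audio file.

def text_to_audiobook(txt_path: str, out_path: str, voice: str = "Alex") -> None:
    """Read a plain-text file aloud into an AIFF file via macOS `say`."""
    subprocess.run(
        ["say", "-v", voice, "-f", txt_path, "-o", out_path],
        check=True,
    )

text_to_audiobook("book.txt", "book.aiff")
```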
Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change.
This seems obviously false. Local expenditures (of money, pride, the possibility of not being the first to publish, etc.) are still local, and global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
RSI capabilities could be charted, and are likely to be AI-complete.
This is to be taken as an arguendo, not as the author's opinion, right? See IEM on the minimal conditions for takeoff. Albeit if "...
(I don't have answers to your specific questions, but here are some thoughts about the general problem.)
I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (though I haven't thought about this enough to put numbers on it), but I too am not comforted by these parts, because I also assign a non-small chance to them going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".
That said, there are some steps in...
I personally am optimistic about the world's elites navigating AI risk as well as possible, subject to the inherent limitations I would expect any humans to have, and to the inherent risk. Some points:
I've been surprised by people's ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
AI risk is a Global Catastrophic Risk i...
The argument from hope or towards hope or anything but despair and grit is misplaced when dealing with risks of this magnitude.
Don't trust God (or semi-competent world leaders) to make everything magically turn out all right. The temptation to do so is either a rationalization of wanting to do nothing, or based on a profoundly miscalibrated optimism about how the world works.
/doom
I think there's a >15% chance AI will not be preceded by visible signals.
Aren't we seeing "visible signals" already? Machines are better than humans at lots of intelligence-related tasks today.
Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
Cryptography and cryptanalysis are obvious precursors of supposedly-dangerous tech within IT.
Looking at that history, we can plausibly expect governments to attempt to delay others' development of "weaponizable" technology.
These days, cryptography facilitates international trade. It seems like a mostly-positive force overall.
One question is whether AI is like CFCs, or like CO2, or like hacking.
With CFCs, the solution was simple: ban CFCs. The cost was relatively low, and the benefit relatively high.
With CO2, the solution is equally simple: cap and trade. It's just not politically palatable, because the problem is slower-moving, and the cost would be much, much greater (perhaps great enough to really mess up the world economy). So, we're left with the second-best solution: do nothing. People will die, but the economy will keep growing, which might balance that out, because ...
Here are my reasons for pessimism:
There are likely to be effective methods of controlling AIs of subhuman or even roughly human-level intelligence that do not scale up to superhuman intelligence. These include, for example, reinforcement via reward/punishment, mutually beneficial trade, and legal institutions. Controlling a superhuman intelligence will likely require qualitatively different methods, such as having the superintelligence share our values. Unfortunately, the existence of effective but unscalable methods of AI control will probably lull elites...
Congress' non-responsiveness to risks to critical infrastructure from geomagnetic storms, despite scientific consensus on the issue, is also worrying.
Even if one organization navigates the creation of friendly AI successfully, won't we still have to worry about preventing anyone from ever creating an unsafe AI?
Unlike nuclear weapons, a single AI might have world-ending consequences, and an AI requires no special resources. Theoretically, a seed AI could be uploaded to Pirate Bay, from where anyone could download and compile it.
The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI."
What kind of "AI safety problems" are we talking about here? If they are like the "FAI Open Problems" that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could "early AIs" be of much help?
If "AI safety problems" here do not refer to FAI problems, then how do those problems get solved, according to this argument?
@Lukeprog, can you (1) briefly update us on your working answers to the posed questions, and (2) give your current confidence (and, if you'd like, by proxy, MIRI's confidence as an organisation) in each of these three claims:
Elites often fail to take effective action despite plenty of warning.
I think there's a >10% chance AI will not be preceded by visible signals.
I think the elites' safety measures will likely be insufficient.
Thank you for your diligence.
There's another reason for hope here, compared to global warming: the idea of a dangerous AI is already common in the public eye as something "we need to be careful about." A big problem the global warming movement had, and is still having, is convincing the public that there's a threat in the first place.
Whom do you mean by "elites"? Keep in mind that major disruptive technical progress of the type likely to precede the creation of a full AGI tends to cause the kind of social change that shakes up the social hierarchy.
Combining the beginning and the end of your questions reveals an answer.
Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of [nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars] just fine?
Answer how "just fine" any of these turned out, and you have analogous answers.
You might also clarify whether you are interested in what is just fine for everyone, or just fine for the elites, or just fine for the AI in question. The answer will change accordingly.
More (#1) from The Big Short:
[Meredith] Whitney was an obscure analyst of financial firms for an obscure financial firm, Oppenheimer and Co., who, on October 31, 2007, ceased to be obscure. On that day she predicted that Citigroup had so mismanaged its affairs that it would need to slash its dividend or go bust. It's never entirely clear on any given day what causes what inside the stock market, but it was pretty clear that, on October 31, Meredith Whitney caused the market in financial stocks to crash. By the end of the trading day, a woman whom basically no one had ever heard of, and who could have been dismissed as a nobody, had shaved 8 percent off the shares of Citigroup and $390 billion off the value of the U.S. stock market. Four days later, Citigroup CEO Chuck Prince resigned. Two weeks later, Citigroup slashed its dividend.
From that moment, Meredith Whitney became E. F. Hutton: When she spoke, people listened. Her message was clear: If you want to know what these Wall Street firms are really worth, take a cold, hard look at these crappy assets they're holding with borrowed money, and imagine what they'd fetch in a fire sale. The vast assemblages of highly paid people inside them were worth, in her view, nothing. All through 2008, she followed the bankers' and brokers' claims that they had put their problems behind them with this write-down or that capital raise with her own claim: You're wrong. You're still not facing up to how badly you have mismanaged your business. You're still not acknowledging billions of dollars in losses on subprime mortgage bonds. The value of your securities is as illusory as the value of your people. Rivals accused Whitney of being overrated; bloggers accused her of being lucky. What she was, mainly, was right. But it's true that she was, in part, guessing. There was no way she could have known what was going to happen to these Wall Street firms, or even the extent of their losses in the subprime mortgage market. The CEOs themselves didn't know. "Either that or they are all liars," she said, "but I assume they really just don't know."
Now, obviously, Meredith Whitney didn't sink Wall Street. She'd just expressed most clearly and most loudly a view that turned out to be far more seditious to the social order than, say, the many campaigns by various New York attorneys general against Wall Street corruption. If mere scandal could have destroyed the big Wall Street investment banks, they would have vanished long ago. This woman wasn't saying that Wall Street bankers were corrupt. She was saying that they were stupid. These people whose job it was to allocate capital apparently didn't even know how to manage their own.
And:
"Here's this database," Eisman said simply. "Go into that room. Don't come out until you've figured out what it means."...
What first caught Vinny's eye were the high prepayments coming in from a sector called "manufactured housing." ("It sounds better than 'mobile homes.'") Mobile homes were different from the wheel-less kind: Their value dropped, like cars', the moment they left the store. The mobile home buyer, unlike the ordinary home buyer, couldn't expect to refinance in two years and take money out. Why were they prepaying so fast? Vinny asked himself. "It made no sense to me. Then I saw that the reason the prepayments were so high is that they were involuntary." "Involuntary prepayment" sounds better than "default." Mobile home buyers were defaulting on their loans, their mobile homes were being repossessed, and the people who had lent them money were receiving fractions of the original loans. "Eventually I saw that all the subprime sectors were either being prepaid or going bad at an incredible rate," said Vinny. "I was just seeing stunningly high delinquency rates in these pools." The interest rate on the loans wasn't high enough to justify the risk of lending to this particular slice of the American population. It was as if the ordinary rules of finance had been suspended in response to a social problem. A thought crossed his mind: How do you make poor people feel wealthy when wages are stagnant? You give them cheap loans.
To sift every pool of subprime mortgage loans took him six months, but when he was done he came out of the room and gave Eisman the news. All these subprime lending companies were growing so rapidly, and using such goofy accounting, that they could mask the fact that they had no real earnings, just illusory, accounting-driven ones. They had the essential feature of a Ponzi scheme: To maintain the fiction that they were profitable enterprises, they needed more and more capital to create more and more subprime loans. "I wasn't actually a hundred percent sure I was right," said Vinny, "but I go to Steve and say, 'This really doesn't look good.' That was all he needed to know. I think what he needed was evidence to downgrade the stock."
The report Eisman wrote trashed all of the subprime originators; one by one, he exposed the deceptions of a dozen companies. "Here is the difference," he said, "between the view of the world they are presenting to you and the actual numbers." The subprime companies did not appreciate his effort. "He created a shitstorm," said Vinny. "All these subprime companies were calling and hollering at him: You're wrong. Your data's wrong. And he just hollered back at them, 'It's YOUR fucking data!'" One of the reasons Eisman's report disturbed so many is that he'd failed to give the companies he'd insulted fair warning. He'd violated the Wall Street code. "Steve knew this was going to create a shitstorm," said Vinny. "And he wanted to create the shitstorm. And he didn't want to be talked out of it. And if he told them, he'd have had all these people trying to talk him out of it."
"We were never able to evaluate the loans before because we never had the data," said Eisman later. "My name was wedded to this industry. My entire reputation had been built on covering these stocks. If I was wrong, that would be the end of the career of Steve Eisman."
Eisman published his report in September 1997, in the middle of what appeared to be one of the greatest economic booms in U.S. history. Less than a year later, Russia defaulted and a hedge fund called Long-Term Capital Management went bankrupt. In the subsequent flight to safety, the early subprime lenders were denied capital and promptly went bankrupt en masse. Their failure was interpreted as an indictment of their accounting practices, which allowed them to record profits before they were realized. No one but Vinny, so far as Vinny could tell, ever really understood the crappiness of the loans they had made. "It made me feel good that there was such inefficiency to this market," he said. "Because if the market catches on to everything, I probably have the wrong job. You can't add anything by looking at this arcane stuff, so why bother? But I was the only guy I knew who was covering companies that were all going to go bust during the greatest economic boom we'll ever see in my lifetime. I saw how the sausage was made in the economy and it was really freaky."
One open question in AI risk strategy is: Can we trust the world's elite decision-makers (hereafter "elites") to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don't actually endorse this argument):
The basic structure of this 'argument for hope' is due to Carl Shulman, though he doesn't necessarily endorse the details. (Also, it's just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
Obviously, there's a lot more for me to spell out here, and some of it may be unclear. The reason I'm posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.
In particular, I'd like to know: