Discussion of concrete near-to-middle-term trends in AI

13 Punoxysm 08 February 2015 10:05PM

Instead of prognosticating on AGI/Strong AI/Singularities, I'd like to discuss more concrete advancements to expect in the near-term in AI. I invite those who have an interest in AI to discuss predictions or interesting trends they've observed.

This discussion should be useful for anyone looking to research or work in companies involved in AI, and might guide longer-term predictions.

With that, here are my predictions for the next 5-10 years in AI. This is mostly straightforward extrapolation, so it won't excite those who know about these areas but may interest those who don't:

  • Speech processing, the task of turning spoken words into text, will continue to improve until it is essentially a solved problem. Smartphones and even weaker devices will be capable of quite accurately transcribing heavily-accented speech in many languages and in noisy environments. This is the simple continuation of the rapid improvements in speech processing that have brought us from Dragon NaturallySpeaking to Google Now and Siri.
  • Assistant and intent-based systems like Siri, which have to interpret a sentence as one of the particular commands they are capable of (they try to figure out the "intent" of your input), will become substantially more accurate and varied, and will take cues like tone and emphasis into account. So, for example, if you're looking for directions you won't have to repeat yourself in an increasingly loud, slowed, and annoyed voice. You'll be able to phrase your requests naturally and conversationally. New tasks like "Should I get this rash checked out?" will be available. A substantial degree of personalization and use of your personal history might also allow "show me something funny/sad/stimulating [from the internet]".
  • Natural language processing, the task of parsing the syntax and semantics of language, will improve substantially. Look at the list of traditional tasks with standard benchmarks on Wikipedia: every one of these tasks will see an improvement of several percentage points, particularly in the understudied area of informal text (chat logs, tweets, anywhere grammar and vocabulary are less rigorous). It won't get so good that it can be confused with solving the AI-complete aspects of NLP, but it will allow vast improvements in text mining and information extraction. For instance, search queries like "What papers are critical of VerHoeven and Michaels '08" or "Summarize what Twitter thinks of the 2018 Super Bowl" will be answerable. Open-source libraries will continue to improve from their current just-above-boutique state (NLTK, CoreNLP; see the NLTK sketch after this list). Medical diagnosis based on analysis of medical texts will be a major area of research. Large-scale analysis of scientific literature, in areas where it is difficult for researchers to read all relevant texts, will be another. Machine translation will not be ready for most diplomatic business, but it will be very, very good across a wide variety of languages.
  • Computer vision, interpreting the geometry and contents of images and video, will undergo tremendous advances. In fact, it already has in the past 5 years, but now it makes sense for major efforts, academic, military, and industrial, to try to integrate different modules that have been developed for subtasks like object recognition, motion/gesture recognition, segmentation, etc. I think the single biggest impact this will have will be as the foundation for robotics development, since a lot of the arduous work of interpreting sensor input will be partly taken care of by excellent vision libraries (see the OpenCV sketch after this list). Those general foundations will make it easy to program specialist tasks (like differentiating weeds from crops in an image, or identifying activity associated with crime in a video). This will be complemented by a general proliferation of cheap, high-quality cameras and other sensors. Augmented reality also rests on computer vision, and the promise of the most fanciful tech demo videos will be realized in practice.
  • Robotics will advance rapidly. The foundational factors of computer vision, the growing availability of cheap platforms, and fast progress on tasks like motion planning and grasping have the potential to fuel an explosion of smarter industrial and consumer robots that can perform more complex and unpredictable tasks than most current robots. Prototype ideas like search-and-rescue robots, more complex drones, and autonomous vehicles will come to fruition (though 10 years may be too short a time frame for ubiquity). Simpler robots with exotic chemical sensors will have important applications in medical and environmental research.
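
To make the current state of those open-source NLP libraries concrete, here is a minimal sketch using NLTK for two of the standard benchmark tasks mentioned above, part-of-speech tagging and named-entity recognition. The example sentence and the outputs shown in comments are just illustrations:

```python
# A minimal sketch of two classic NLP benchmark tasks using NLTK.
# Assumes the relevant NLTK data packages have been downloaded first, e.g.:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
#   nltk.download('maxent_ne_chunker'); nltk.download('words')
import nltk

sentence = "Google released a new speech API in California last week."

tokens = nltk.word_tokenize(sentence)   # tokenization
tagged = nltk.pos_tag(tokens)           # part-of-speech tagging
entities = nltk.ne_chunk(tagged)        # named-entity recognition

print(tagged)    # e.g. [('Google', 'NNP'), ('released', 'VBD'), ...]
print(entities)  # a tree with chunks like (GPE California/NNP)
```

The point is that this much already works off the shelf; the predicted improvements are about accuracy on informal text and harder queries, not about basic availability.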
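
Similarly, for computer vision, off-the-shelf libraries already wrap subtasks like object detection in a few calls. Here is a minimal sketch with OpenCV's pretrained Haar-cascade face detector; the image file name is a hypothetical placeholder, and the cascade XML ships with OpenCV:

```python
# A minimal sketch of off-the-shelf object detection with OpenCV.
# "photo.jpg" is a hypothetical input; the cascade file comes with OpenCV.
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) bounding boxes for detected faces.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_annotated.jpg", img)
```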

 

What false beliefs have you held and why were you wrong?

28 Punoxysm 16 October 2014 05:58PM

What is something you used to believe, preferably something concrete with direct or implied predictions, that you now know was dead wrong? Was your belief rational given what you knew and could know back then, or was it irrational, and why?

 

Edit: I feel like some of these are getting a bit glib and political. Please try to explain what false assumptions or biases were underlying your beliefs - be introspective - this is LW after all.

HPMOR hypothesis: Harry will use Timeless Decision Theory to resolve some of the time-turner paradoxes

-7 Punoxysm 08 April 2014 03:38AM

Evidence:

 

  1. The obsession with precise times in the last few chapters, the prominence of time-turners in the plot in general, and Harry's vow to revive Hermione all indicate use of time-turners in the final arc.
  2. EY has worked many of his favorite ideas and themes (especially from the Sequences) into HPMOR already. Timeless Decision Theory is without a doubt among his most prominent interests.
  3. Harry has already gained two superpowers (super-patronus and partial transfiguration) essentially by virtue of, well, being a proponent of EY's favorite themes. Why not a third?
  4. One specific concrete use would be to coordinate an indefinite number of selves in the way that Harry failed to during the prime-factoring experiment in the early chapters. Why did that experiment fail? Not because time is impossible to mess with, but because one of the Harrys messed up. But since then, Harry has been pushing the bounds of paradox. If he could firmly pre-commit to follow through on a course of action (perhaps with an Unbreakable Vow?), he could have an indefinite number of Harrys coordinate on some action. There are many ways this could be useful.
Comments? Criticism?

 

What legal ways do people make a profit that produce the largest net loss in utility?

2 Punoxysm 25 March 2014 01:53AM

This is an offshoot of a thread I made earlier, which wasn't eliciting the sort of responses I'd hoped for.

So let me pose a clearer question with less potential to get people on watchlists.

What legal ways of making a profit are the most anti-altruistic, the most damaging to society, the opposite of effective altruism in result?

I am using utility loosely. The answers need not be given from a utilitarian perspective at all; they may instead deal with any means of making a profit that seems to you clearly wretched, such that the world would be better if nobody participated in it.

I'd also like to emphasize that these things should be legal. There are some obviously wretched illegal businesses that would top the list otherwise. If something is legal but only in a particular jurisdiction, then you should only discuss it within the context of the jurisdiction where it is legal.

If it's a grey area, go for it, but extra points for society-harming enterprises definitely legal in both the letter and the spirit of the law.

This is NOT about whether the enterprise in question should be illegal, just whether it causes a net loss of utility (deal with counterfactuals however you see fit).

What is the most anti-altruistic way to spend a million dollars?

-4 Punoxysm 24 March 2014 09:50PM

Edit: The purpose of this question is not to make the world worse, but to see whether we actually have concrete ideas of what would, and my guess is that most of us don't, not in a really concrete way. From the downvotes I'm wondering if everyone else is thinking way darker directions than I am. If so please share.

There is a lot of discussion here about effective altruism: organizations like GiveWell evaluating donations using criteria like quality-adjusted life-years saved per dollar. People distinguish warm-and-fuzzy giving from the most effective use of dollars from various utilitarian perspectives.

But I want to ask a different question: What would effective anti-altruism be?

To make it more concrete:

I am an eccentric multimillionaire, proposing a contest to all of you, who will for the purposes of this exercise play greedy and callous, yet honest and efficient, contest entrants.

Whoever can propose the most negative possible use for my money, in the sense that it causes the greatest amount of global misery (feel free to argue for your own interpretation of the details of what this means), will receive $1 million to carry out his or her proposal and $1 million to keep for him or herself to do with as desired.

A few rules:

1) Everything must be 100% legal in whatever jurisdiction you propose. Edit: People had trouble with the old phrasing, so I'll add that it should not only be legal in the letter of the law, but also in some reasonable interpretation of the spirit of the law.

1a) In fact, I encourage you to think of things that aren't merely legal but that would also be legal under whatever your favorite hypothetical laws are. Maybe that means non-coercive, non-violent, or something else in that vein.

2) This money may be used as seed funding for a non-profit or for-profit anti-altruistic venture, but I will take into account both the risk and the marginal impact of only the first million dollars.

3) Risk and plausibility are factors, just as they would be in any investment for effective altruism.

4) If you're going to propose that you keep and embezzle the first million dollars, you should have an extremely good justification for why such a mundane plan would match my standards for anti-altruism.

 

I hope this pushes you all to think of truly anti-altruistic means of spending this money. I think you may find that effective anti-altruism is a good deal harder than you'd believe.

How to Study Unsafe AGIs safely (and why we might have no choice)

10 Punoxysm 07 March 2014 07:24AM

TL;DR

A serious possibility is that the first AGI(s) will be developed in a Manhattan Project-style setting before any sort of friendliness/safety constraints can be integrated reliably. They will also be substantially short of the intelligence required to exponentially self-improve. Within a certain range of development and intelligence, containment protocols can make them safe to interact with. This means they can be studied experimentally, and the architecture(s) used to create them better understood, furthering the goal of safely using AI in less constrained settings.

Setting the Scene

The year is 2040, and in the last decade a series of breakthroughs in neuroscience, cognitive science, machine learning, and computer hardware have put the long-held dream of a human-level artificial intelligence within our grasp. Following the wild commercial success of lifelike robotic pets, the integration of AI assistants and concierges into everyday work and leisure, and STUDYBOT's graduation from Harvard's online degree program with an octuple major and full honors, DARPA, the NSF, and the European Research Council have announced joint funding of an artificial intelligence program that will create a superhuman intelligence in 3 years.

Safety was announced as a critical element of the project, especially in light of the self-modifying LeakrVirus that catastrophically disrupted markets in '36 and '37. The planned protocols have not been made public, but it seems they will be centered on traditional computer security rather than techniques from the nascent field of Provably Safe AI, which were deemed impossible to integrate on the current project timeline.

Technological and/or political issues could force the development of AI without the theoretical safety guarantees we'd certainly like, but there is a silver lining

A lot of the discussion around LessWrong and MIRI that I've seen (and I haven't seen all of it, please send links!) seems to focus very strongly on the situation of an AI that can self-modify or construct further AIs, resulting in an exponential explosion of intelligence (FOOM/Singularity). The focus of FAI work is on finding an architecture that can be explicitly constrained (and a constraint set that won't fail to do what we desire).

My argument is essentially that there could be a critical multi-year period, preceding any possible exponentially self-improving intelligence, during which a series of AGIs of varying intelligence, flexibility, and architecture will be built. This period will be fast and frantic, but it will be incredibly fruitful and vital, both for figuring out how to make an AI strong enough to exponentially self-improve and for figuring out how to make it safe and friendly (or for developing protocols to bridge the even riskier period between when we can develop FOOM-capable AIs and when we can ensure their safety).

I'll break this post into three parts.
  1. First, why is a substantial period of proto-singularity more likely than a straight-to-singularity situation?
  2. Second, what strategies will be critical to developing, controlling, and learning from these pre-FOOM AIs?
  3. Third, what are the political challenges that will develop immediately before and during this period?
Why is a proto-singularity likely?

The requirement for a hard singularity, an exponentially self-improving AI, is that the AI can substantially improve itself in a way that enhances its ability to further improve itself, which requires the ability to modify its own code; access to resources like time, data, and hardware to facilitate these modifications; and the intelligence to execute a fruitful self-modification strategy.

The first two conditions can (and should) be directly restricted. I'll elaborate more on that later, but basically any AI should be very carefully sandboxed (unable to affect its software environment) and should have its access to resources strictly controlled. Perhaps no data goes in without human approval, or none while the AI is running. Perhaps nothing comes out either. Even a hyperpersuasive hyperintelligence will be slowed down (at least) if it can only interact with prespecified tests (how do you test an AGI? No idea, but it shouldn't be harder than friendliness). This isn't a perfect situation. Eliezer Yudkowsky presents several arguments for why an intelligence explosion could happen even when resources are constrained (see Section 3 of Intelligence Explosion Microeconomics), not to mention ways that those constraints could be defied even if engineered perfectly (by the way, I would happily run the AI box experiment with anybody; I think it is absurd that anyone would fail it! [I've read Tuxedage's accounts, and I think I actually do understand how a gatekeeper could fail, but I also believe I understand how one could be trained to succeed even against a much stronger foe than any person who has played the part of the AI]).
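
To make the resource-gating idea concrete, here is a minimal sketch of the kind of wrapper I have in mind. All names here are hypothetical placeholders, and a real containment layer would of course sit at the OS/hardware level rather than in application code; the point is just the shape of the protocol: the AI only ever sees prespecified test inputs, and nothing crosses the boundary in either direction without explicit human sign-off.

```python
# A minimal sketch of the gated-I/O containment protocol described above.
# run_agi_step is a hypothetical handle to the sandboxed AI process.

PRESPECIFIED_TESTS = ["test_input_1", "test_input_2"]  # fixed before the run

def human_approves(item: str) -> bool:
    """A human gatekeeper signs off on every item crossing the boundary."""
    return input(f"Approve transfer of {item!r}? [y/N] ").strip().lower() == "y"

def gated_session(run_agi_step):
    """Feed the AI only prespecified tests; release outputs only on approval."""
    released = []
    for test in PRESPECIFIED_TESTS:
        if not human_approves(f"input: {test}"):
            continue                  # nothing goes in without approval
        output = run_agi_step(test)   # executes inside the sandbox
        if human_approves(f"output: {output}"):
            released.append(output)   # nothing comes out without approval
    return released
```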

But the third emerges from the way technology typically develops. I believe it is incredibly unlikely that an AGI will develop in somebody's basement, or even in a small national lab or top corporate lab. When there is no clear notion of what a technology will look like, it is usually not developed. Positive, productive accidents are somewhat rare in science, but they are remarkably rare in engineering (please, give counterexamples!). The creation of an AGI will likely not happen by accident; there will be a well-funded, concrete research and development plan that leads up to it: an AI Manhattan Project like the one described above. But even when a good plan is successfully executed, prototypes are slow, fragile, and poor-quality compared to what is eventually possible with approaches using the same underlying technology. It seems very likely to me that the first AGI will be a Chicago Pile, not a Trinity; recognizably a breakthrough, but with proper consideration not immediately dangerous or unmanageable. [Note: you don't have to believe this to read the rest of this post. If you disagree, consider the virtues of redundancy and the question of what safety measures an AI development effort should implement if it can't be persuaded to delay long enough for theoretically sound methods to become available.]

A Manhattan Project-style effort makes a relatively weak, controllable AI even more likely: not only can such a project implement substantial safety protocols that are explicitly researched in parallel with primary development, but the total resources, in hardware and brainpower, devoted to the AI will be much greater than those of a smaller project, setting a correspondingly higher bar the AGI thus created must clear in order to successfully self-modify exponentially and also break the security procedures.

Strategies to handle AIs in the proto-Singularity, and why they're important

First, take a look at the External Constraints section of this MIRI report and/or this article on AI Boxing. I will be talking mainly about these approaches. There are certainly others, but these are the easiest to extrapolate from current computer security.

These AIs will provide us with the experimental knowledge to better handle the construction of even stronger AIs. If careful, we will be able to use these proto-Singularity AIs to learn about the nature of intelligence and cognition, to perform economically valuable tasks, and to test theories of friendliness (not perfectly, but well enough to start). 

"If careful" is the key phrase. I mentioned sandboxing above. And computer security is key to any attempt to contain an AI. Monitoring the source code, and setting a threshold for too much changing too fast at which point a failsafe freezes all computation; keeping extremely strict control over copies of the source. Some architectures will be more inherently dangerous and less predictable than others. A simulation of a physical brain, for instance, will be fairly opaque (depending on how far neuroscience has gone) but could have almost no potential to self-improve to an uncontrollable degree if its access to hardware is limited (it won't be able to make itself much more efficient on fixed resources). Other architectures will have other properties. Some will be utility optimizing agents. Some will have behaviors but no clear utility. Some will be opaque, some transparent.

All will have a theory of how they operate, which can be refined by actual experimentation. This is what we can gain! We can set up controlled scenarios like honeypots to catch malevolence. We can evaluate our ability to monitor and read the thoughts of the AGI. We can develop stronger theories of how damaging self-modification actually is to imposed constraints. We can test our ability to add constraints even to the base state. But do I really have to justify the value of experimentation?

I am familiar with criticisms based on absolutely incomprehensibly perceptive and persuasive hyperintelligences being able to overcome any security, but I've tried to outline above why I don't think we'd be dealing with that case.

Political issues

Right now AGI is really a political non-issue: blue-sky even compared to space exploration and fusion, both of which actually receive substantial government funding. I think that this will change in the period immediately leading up to my hypothesized AI Manhattan Project. The AI Manhattan Project can only happen with a lot of political will behind it, which will probably mean a spiral of scientific advancements, hype, and the threat of competition from external unfriendly sources. Think space race.

So suppose that the first few AIs are built under well-controlled conditions. Friendliness is still not perfected, but we think/hope we've learned some valuable basics. But now people want to use the AIs for something. So what should be done at this point?

I won't try to speculate about what happens next (well, you can probably persuade me to, but it might not be as valuable) beyond extensions of the protocols I've already laid out, hybridized with notions like Oracle AI. It certainly gets a lot harder, but hopefully experimentation on the first, highly controlled generation of AIs, to get a better understanding of their architectural fundamentals, combined with more direct research on friendliness in general, would provide the groundwork for this.