Deepmind Plans for Rat-Level AI

20 moridinamael 18 August 2016 04:26PM

Demis Hassabis gives a great presentation on the state of Deepmind's work as of April 20, 2016. Skip to 23:12 for the statement of the goal of creating a rat-level AI -- "An AI that can do everything a rat can do," in his words. From his tone, it sounds like this is more of a short-term goal than a long-term one.

I don't think Hassabis is prone to making unrealistic plans or overly bold predictions. I strongly encourage you to scan through Deepmind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because they seem to add a new paper about twice a month.) The outfit seems to be systematically knocking down all the "Holy Grail" milestones on the way to GAI, and this is just Deepmind. The papers they've put out in just the last year or so cover successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and those are just the most stand-out achievements. That list even includes a paper co-authored by Stuart Armstrong concerning Friendliness concepts.

If we really do have a genuinely rat-level AI within the next couple of years, I think that would justify radically moving forward expectations of AI development timetables. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would view it as a form of proof that "general" intelligence does not require some mysterious ingredient that we haven't discovered yet.

An EPub of Eliezer's blog posts

40 ciphergoth 11 August 2011 02:20PM

Update 2015-03-21: I would now strongly recommend reading Rationality: From AI to Zombies instead of this. Though the blog posts I collected here are the starting point for that book, considerable work went into selecting and arranging its essays, as well as adding thoughtful new material along with useful material not found in this collection. Only if you've already read that should you consider starting on this; you can always skip the essays you've already read.

This is all of Eliezer's posts to Less Wrong up to the end of 2010, as an EPub. It can be read with Aldiko and other eBook readers, though you might have to jump through some hoops on the Kindle (I haven't tried it). I had shared it privately with a few friends in the past, but thought it might be more generally useful. One highlight: all the screwed-up Unicode is fixed, AFAIK.

Source code.

Update: have now made a MOBI for the Kindle too.

Updated 2011-08-13 17:20 BST: Now with images!

The Library of Scott Alexandria

45 RobbBB 14 September 2015 01:38AM

I've put together a list of what I think are the best Yvain (Scott Alexander) posts for new readers, drawing from SlateStarCodex, LessWrong, raikoth.net, and Scott's LiveJournal.

The list should make the most sense to people who start from the top and read through it in order, though skipping around is encouraged too. Rather than making a chronological list, I’ve tried to order things by a mix of "where do I think most people should start reading?" plus "sorting related posts together."

This is a work in progress; you’re invited to suggest things you’d add, remove, or shuffle around. Since many of the titles are a bit cryptic, I'm adding short descriptions. See my blog for a version without the descriptions.

 


I. Rationality and Rationalization


II. Probabilism


III. Science and Doubt


IV. Medicine, Therapy, and Human Enhancement


V. Introduction to Game Theory


VI. Promises and Principles


VII. Cognition and Association


VIII. Doing Good


IX. Liberty


X. Progress


XI. Social Justice


XII. Politicization


XIII. Competition and Cooperation


 

If you liked these posts and want more, I suggest browsing the SlateStarCodex archives.

Linkposts now live!

26 Vaniver 28 September 2016 03:13PM

 

You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.

Some general norms, subject to change:

 

  1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
  2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
  3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
  4. It's not okay to post duplicates.

As before, everything will go into discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how we need to divide things up, be it separate subreddits or using tags to promote or demote the attention level of links and posts.

(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.)

Would Your Real Preferences Please Stand Up?

42 Yvain 08 August 2009 10:57PM

Related to: Cynicism in Ev Psych and Econ

In Finding the Source, a commenter says:

I have begun wondering whether claiming to be victim of 'akrasia' might just be a way of admitting that your real preferences, as revealed in your actions, don't match the preferences you want to signal (believing what you want to signal, even if untrue, makes the signals more effective).

I think I've seen Robin put forth something like this argument [EDIT: Something related, but very different], and TGGP points out that Bryan Caplan explicitly believes pretty much the same thing:

I've previously argued that much - perhaps most - talk about "self-control" problems reflects social desirability bias rather than genuine inner conflict.

Part of the reason why people who spend a lot of time and money on socially disapproved behaviors say they "want to change" is that that's what they're supposed to say.

Think of it this way: A guy loses his wife and kids because he's a drunk. Suppose he sincerely prefers alcohol to his wife and kids. He still probably won't admit it, because people judge a sinner even more harshly if he is unrepentant. The drunk who says "I was such a fool!" gets some pity; the drunk who says "I like Jack Daniels better than my wife and kids" gets horrified looks. And either way, he can keep drinking.

I'll call this the Cynic's Theory of Akrasia, as opposed to the Naive Theory. I used to think it was plausible. Now that I think about it a little more, I find it meaningless. Here's what changed my mind.

continue reading »

Special Status Needs Special Support

20 Eliezer_Yudkowsky 04 May 2009 10:59PM

I just recorded another BHTV with Adam Frank, though it's not out yet, and I had a thought that seems worth recording.  At a certain point in the dialogue, Adam Frank was praising the wisdom and poetry in religion.  I retorted, "Tolkien's got great poetry, and some parts that are wise and some that are unwise; but you don't see people wearing little rings around their neck in memory of Frodo."

(I don't remember whether this observation is original to me, so if anyone knows a prior source for this exact wording, please comment it!)

The general structure of this critique is that Frank wants to assign a special status to the Book of Job, but he gives a reason that would be equally applicable to The Lord of the Rings (good poetry and some wise parts).  So if those are his real reasons, he should feel just the same way about God and Gandalf.  Or if not that exact particular book, then some other work of poetic fiction that was always understood to be poetic fiction.

continue reading »

Atheism = Untheism + Antitheism

86 Eliezer_Yudkowsky 01 July 2009 02:19AM

One occasionally sees such remarks as, "What good does it do to go around being angry about the nonexistence of God?" (on the one hand) or "Babies are natural atheists" (on the other).  It seems to me that such remarks, and the rather silly discussions that get started around them, show that the concept "Atheism" is really made up of two distinct components, which one might call "untheism" and "antitheism".

A pure "untheist" would be someone who grew up in a society where the concept of God had simply never been invented - where writing was invented before agriculture, say, and the first plants and animals were domesticated by early scientists.  In this world, superstition never got past the hunter-gatherer stage - a world seemingly haunted by mostly amoral spirits - before coming into conflict with Science and getting slapped down.

Hunter-gatherer superstition isn't much like what we think of as "religion".  Early Westerners often derided it as not really being religion at all, and they were right, in my opinion.  In the hunter-gatherer stage the supernatural agents aren't particularly moral, or charged with enforcing any rules; they may be placated with ceremonies, but not worshipped.  But above all - they haven't yet split their epistemology.  Hunter-gatherer cultures don't have special rules for reasoning about "supernatural" entities, or indeed an explicit distinction between supernatural entities and natural ones; the thunder spirits are just out there in the world, as evidenced by lightning, and the rain dance is supposed to manipulate them - it may not be perfect but it's the best rain dance developed so far, there was that famous time when it worked...

If you could show hunter-gatherers a raindance that called on a different spirit and worked with perfect reliability, or, equivalently, a desalination plant, they'd probably chuck the old spirit right out the window.  Because there are no special rules for reasoning about it - nothing that denies the validity of the Elijah Test that the previous rain-dance just failed.  Faith is a post-agricultural concept.  Before you have chiefdoms where the priests are a branch of government, the gods aren't good, they don't enforce the chiefdom's rules, and there's no penalty for questioning them.

And so the Untheist culture, when it invents science, simply concludes in a very ordinary way that rain turns out to be caused by condensation in clouds rather than rain spirits; and at once they say "Oops" and chuck the old superstitions out the window; because they only got as far as superstitions, and not as far as anti-epistemology.

The Untheists don't know they're "atheists" because no one has ever told them what they're supposed to not believe in - nobody has invented a "high god" to be chief of the pantheon, let alone monolatry or monotheism.

continue reading »

Skill: The Map is Not the Territory

49 Eliezer_Yudkowsky 06 October 2012 09:59AM

Followup to: The Useful Idea of Truth (minor post)

So far as I know, the first piece of rationalist fiction - one of only two explicitly rationalist fictions I know of that didn't descend from HPMOR, the other being "David's Sling" by Marc Stiegler - is the Null-A series by A. E. van Vogt. In van Vogt's story, the protagonist, Gilbert Gosseyn, has mostly non-duplicable abilities that you can't pick up and use even if they're supposedly mental - e.g. the ability to use all of his muscular strength in emergencies, thanks to his alleged training. The main explicit-rationalist skill someone could actually pick up from Gosseyn's adventure is embodied in his slogan:

"The map is not the territory."

Sometimes it still amazes me to contemplate that this proverb was invented at some point, and that some fellow named Korzybski invented it, and this happened as late as the 20th century. I read van Vogt's story and absorbed that lesson when I was rather young, so to me this phrase sounds like a sheer background axiom of existence.

But as the Bayesian Conspiracy enters into its second stage of development, we must all accustom ourselves to translating mere insights into applied techniques. So:

Meditation: Under what circumstances is it helpful to consciously think of the distinction between the map and the territory - to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly?  How exactly does it help, on what sort of problem?

continue reading »

Rationality: Appreciating Cognitive Algorithms

37 Eliezer_Yudkowsky 06 October 2012 09:59AM

Followup to: The Useful Idea of Truth

It is an error mode, and indeed an annoyance mode, to go about preaching the importance of the "Truth", especially if the Truth is supposed to be something incredibly lofty instead of some boring, mundane truth about gravity or rainbows or what your coworker said about your manager.

Thus it is a worthwhile exercise to practice deflating the word 'true' out of any sentence in which it appears. (Note that this is a special case of rationalist taboo.) For example, instead of saying, "I believe that the sky is blue, and that's true!" you can just say, "The sky is blue", which conveys essentially the same information about what color you think the sky is. Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.

Try it with these:

  • I believe Jess just wants to win arguments.
  • It’s true that you weren’t paying attention.
  • I believe I will get better.
  • In reality, teachers care a lot about students.

If 'truth' is defined by an infinite family of sentences like 'The sentence "the sky is blue" is true if and only if the sky is blue', then why would we ever need to talk about 'truth' at all?
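The infinite family of sentences described above is Tarski's T-schema; as a gloss (my notation, not part of the original post), each instance has the form:

```latex
% Tarski's T-schema: one instance for each sentence \varphi of the
% object language, with corner quotes naming the sentence itself.
\mathrm{True}(\ulcorner \varphi \urcorner) \iff \varphi
```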

Well, you can't deflate 'truth' out of the sentence "True beliefs are more likely to make successful experimental predictions" because it states a property of map-territory correspondences in general. You could say 'accurate maps' instead of 'true beliefs', but you would still be invoking the same concept.

It's only because most sentences containing the word 'true' are not talking about map-territory correspondences in general, that most such sentences can be deflated.

Now consider - when are you forced to use the word 'rational'?

continue reading »

The Useful Idea of Truth

77 Eliezer_Yudkowsky 02 October 2012 06:16PM

(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI.  For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows.  And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to know a compact explanation.  Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)


I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan

I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico

What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche


The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:

  1. The child sees Sally hide a marble inside a covered basket, as Anne looks on.

  2. Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.

  3. Anne leaves the room, and Sally returns.

  4. The experimenter asks the child where Sally will look for her marble.

Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.
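The structure of the task can be sketched in code (a hypothetical illustration, not from the original post): the "territory" is the marble's actual location, while Sally's "map" is her belief, which goes stale when the world changes while she is out of the room.

```python
# Minimal sketch of the Sally-Anne false-belief task: a child who passes
# the task reports Sally's (outdated) belief, not the actual world state.
def sally_anne():
    world = {"marble": "basket"}   # Sally hides the marble and sees this.
    sally_belief = dict(world)     # Sally's map matches the territory.
    world["marble"] = "box"        # Anne moves it while Sally is away;
                                   # Sally's belief is never updated.
    return {
        "sally_looks_in": sally_belief["marble"],   # basket (her map)
        "marble_actually_in": world["marble"],      # box (the territory)
    }

print(sally_anne())
```

A child under four, in effect, answers with `world["marble"]`; a child over four has learned to track `sally_belief` as a separate thing from the world.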

continue reading »
