
[LINK] AI risk summary published in "The Conversation"

8 Stuart_Armstrong 14 August 2014 11:12AM

A slightly edited version of "AI risk - executive summary" has been published in "The Conversation", titled "Your essential guide to the rise of the intelligent machines":

The risks posed to human beings by artificial intelligence in no way resemble the popular image of the Terminator. That fictional mechanical monster is distinguished by many features – strength, armour, implacability, indestructibility – but Arnie’s character lacks the one characteristic that we in the real world actually need to worry about – extreme intelligence.

Thanks again to those who helped forge the original article. You can use this link, or the Less Wrong one, depending on the audience.

My book: Simulating Dennett - This Wednesday in Sao Paulo

3 diegocaleiro 17 March 2014 08:15AM

There's been somewhat frequent coverage of Daniel Dennett on Lesswrong:


How not to be a Naïve Computationalist

Dennett's "Consciousness Explained": Prelude

"Where Am I?", by Daniel Dennett

Dennett's heterophenomenology

My personal favorite: Zombies: The Movie

I wrote a book called Simulating Dennett nearly five years ago now (if you are considering an academic career, keep that slow pace in mind, for good or ill). It summarizes Dennett's philosophy while trying to make the reader able to think like Dennett. It seemed to me at the time, and still does now, that Dennett's kind of mind is very interesting and we should have more of those, so I tried my best to create a Dennett installer in book form.

Simulating Dennett: Tools and Constructions of a Naturalist


is the 244-page result. Portuguese or Spanish reading skills advised. Or use it to learn Portuguese prior to your trip to Rio, Pantanal, Iguaçu Falls and the Amazon Forest. (For legal reasons I've chopped out the second half of the file, but there are instructions on how to get it when you reach the end of the first half.)

Abstract

This dissertation aims to provide the reader with an inner simulation of Daniel Dennett’s form of reasoning. It spans his whole philosophy, emphasizing his treatment of patterns, the evolutionary algorithm, consciousness, and his use of illata, abstracta, semantics, and syntax to carve nature at its joints, especially in biology and the human mind. It recasts, in a new light, a great part of his most important ideas, and reverse engineers what made him think in particular ways, walking the reader through similar pathways and fostering active learning of a thinking style, above and beyond a mere exposition of the results that style has produced over the years.

Keywords: Daniel Dennett, Consciousness, Memetics, Intentional stance, Evolution, Algorithm.

This Wednesday, 2014-03-19, at 14:00 I’ll be presenting it as a thesis at the University of São Paulo. Lesswrongers passing through Brazil, or the 20 of us who actually live here, are welcome to join.


Here is the Facebook event.


Cryonics Presentation [help request]

2 MathieuRoy 09 November 2013 08:51PM

This Monday (November 11th 2013 EDIT: it has been postponed to November 18th 2013) I will participate in a 'scientific communication' competition at Laval University. If I win, I will go to the Quebec Engineering Competition (http://cqi-qec.qc.ca/). If I win again, I will go to the Canadian Engineering Competition (http://cec.cfes.ca/).

I need to do a presentation of 15 to 20 minutes. Then the judges can ask me questions for 10 minutes. I will do my presentation on cryonics. I want to invest approximately 12 to 15 hours in preparing it. I will not have time to read everything there is on the Internet about cryonics in that time period, so if some of you are familiar with the subject, I would appreciate it if you could link me to the best resources on the scientific and ethical aspects of cryonics.

I will do my presentation with Google Drive Presentation. It will be in French. I will put the link here later on if someone wants to review the presentation (EDIT: the presentation is done; you can see it and comment on it on Google Drive). Moreover, I would like to practice my presentation tomorrow in a Google+ hangout if some people want to watch it and give feedback.

Thank you.

P.S.: If there are any Canadian engineering students reading this, check out the competition: there are 7 categories and it's a really interesting competition in my opinion.

Advice needed for a presentation on rationality

2 Worthstream 13 June 2012 10:41AM

Hi, next month I'm going to be doing an hour-long presentation on rationality to Mensa members. It needs to be rather introductory, since high IQ != rationality and most of them are not familiar with the concepts discussed here.

I'm planning to talk about what rationality is (any good quotes?), the difference between the brain and the conscious mind, why being rational does not mean having perfect willpower, and some common and easily avoided fallacies (sunk cost, scope insensitivity).

I did a search on the site for this kind of introductory post and have quite a large pool of interesting arguments to touch on. Does anyone have suggestions on which topics should be included, pointers to interesting posts that could be summarized or used as source material, etc.?

AI risk: the five minute pitch

9 Stuart_Armstrong 08 May 2012 04:28PM

I gave a talk at the 25th Oxford Geek Night, in which I had five minutes to present the dangers of AI. The talk is now online. Though it doesn't contain anything people at Less Wrong would find new, I feel it does a reasonable job of pitching some of the arguments in a very brief format.

Summary of "The Straw Vulcan"

30 alexvermeer 26 December 2011 04:29PM

Followup to: Communicating rationality to the public: Julia Galef's "The Straw Vulcan"

I wrote a summary of Julia Galef's "The Straw Vulcan" presentation from Skepticon 4. Note that it is written in my own words, but all of the ideas should be credited to Julia and her presentation (unless I unintentionally misrepresent any of them!).

---

The classic Hollywood example of rationality is the Vulcans from Star Trek. They are depicted as an ultra-rational race that has eschewed all emotion from their lives.

But is this truly rational? What is rationality?

A “Straw Vulcan”—an idea originally defined on TV Tropes—is a straw man used to show that emotion is better than logic. Traditionally, you have your ‘rational’ character who thinks perfectly ‘logically’, but then ends up running into trouble, having problems, or failing to achieve what they were trying to achieve.

These characters have a sort of fake rationality. They don’t fail because rationality failed, but because they aren’t actually being rational. Straw Vulcan rationality is not the same thing as actual rationality.

What is real rationality?

There are two different concepts that we refer to when we use the word ‘rationality’:

1. The method of obtaining an accurate view of reality. (Epistemic Rationality) — Learning new things, updating your beliefs based on the evidence, being as accurate as possible, being as close to what is true as possible, etc.

2. The method of achieving your goals. (Instrumental Rationality) — Whatever your goals are, be they selfish or altruistic, there are better and worse ways to achieve them, and instrumental rationality helps you figure this out.

These two concepts are obviously related. You want a clear model of the world to be able to achieve your goals. You also may have goals related to obtaining an accurate model of the world.

How do these concepts of rationality relate to Straw Vulcan rationality? What is the Straw Vulcan conception of rationality?

“Straw Vulcan” Rationality Principles

Straw Vulcan Principle #1: Being rational means expecting other people to be rational too.

Galef uses an example from Star Trek where Spock, in an attempt to protect the crew of the crashed ship, decides to show aggression against the local aliens so that they will be scared and run away. Instead, they are angered by the display of aggression and attack even more fiercely, much to Spock’s dismay and confusion.

But this isn’t being rational! Spock’s model of the world is severely tarnished by his silly expectation for everyone else to be as rational as he would be. Real rationality would require you to try to understand all aspects of the situation and act accordingly.

Straw Vulcan Principle #2: Being rational means never making a decision until you have all the information.

This seems to assume that the only important criterion for making decisions is that you make the best one given all the information. But what about things like time and risk? Surely those should factor into your decisions too.

We know intuitively that this is true. If you want a really awesome sandwich you may be willing to pay an extra $1.00 for some cheese, but you wouldn’t pay $300 for a small increase in the quality of a sandwich. You want the best possible outcome, but this requires simultaneously weighing various things like time, cost, value, and risk.

What is the most rational way to find a partner? Take this example from Gerd Gigerenzer, a well-respected psychologist, describing how a rationalist would find a partner:

“He would have to look at the probabilities of various consequences of marrying each of them—whether the woman would still talk to him after they’re married, whether she’d take care of their children, whatever is important to him—and the utilities of each of these…After many years of research he’d probably find out that his final choice had already married another person who didn’t do these computations, and actually just fell in love with her.”

But clearly this isn’t optimal decision making. The rational thing to do isn’t to merely wait until you have as much information as you can possibly have. You need to factor in things like how long the research is taking, the decreasing number of available partners as time passes, etc.

Straw Vulcan Principle #3: Being rational means never relying on intuition.

Straw Vulcan rationality says that anything intuition-based is illogical. But what is intuition?

We have two systems in our brains, which have been unexcitingly called System 1 and System 2.

System 1—the intuitive system—is the older of the two and allows us to make quick, automatic judgments using shortcuts (i.e. heuristics) that work well most of the time, all while requiring very little of your time and attention.

System 2—the deliberative system—is the newer of the two and allows us to do things like abstract hypothetical thinking and make models that explain unexpected events. System 2 tends to do better when you have more resources and more time and worse when there are many factors to consider and you have limited time.

Take a sample puzzle: A bat and ball together cost $1.10. If the bat costs $1 more than the ball, how much does the ball cost?

When a group of Princeton students was given this question, about 50% of them got it wrong. The correct answer is $0.05, since then the bat would cost $1.05, for a total of $1.10. The wrong answer of $0.10 is easily generated by our System 1, and our System 2 accepts it without question.
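The substitution that System 2 should have performed takes only two lines to check (a toy verification, not part of the original talk): since bat + ball = 1.10 and bat = ball + 1.00, we get 2 × ball = 0.10.

```python
# bat + ball = 1.10 and bat = ball + 1.00
# => (ball + 1.00) + ball = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

# Both original constraints hold (within floating-point tolerance):
assert abs(bat + ball - 1.10) < 1e-9
assert abs(bat - ball - 1.00) < 1e-9
```

The intuitive answer of $0.10 fails the second constraint: a $0.10 ball and a $1.00 bat differ by only $0.90, not $1.00.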

Your System 1 is prone to biases, and it is also incredibly powerful. Our intuition tends to do well with purchasing decisions or other choices about our personal lives. System 1 is also very powerful for an expert. Chess grandmasters can glance at a chessboard and say, “white checkmates in three moves,” because of the vast amount of time and mental effort spent playing chess and building up a mental knowledge base about it.

Intuition can be less reliable when it is based on something not relevant to the task at hand, or when you don't have expert knowledge of the topic. Your opinions of AI may be heavily influenced by sci-fi movies that have little basis in reality.

The main thing to take away from this System 1 and 2 split is that both systems have strengths and weaknesses, and rationality is about finding the best path—using both systems at the right times—to epistemic and instrumental rationality.

Being “too rational” usually means you are using your System 2 brain intentionally but poorly. For example, teenagers were criticized in an article for being “too rational” because they could reason themselves into things like drugs and speeding. But this isn’t a problem with being too rational; it’s a problem with being very bad at System 2 reasoning!

Straw Vulcan Principle #4: Being rational means not having emotions.

Straw Vulcan portrayals pit rationality and emotion against each other, such as when Spock is excited to see that Captain Kirk isn’t dead, and then quickly covers up his emotions. The simplistic Hollywood view treats the two as opposites: the more emotional you are, the less rational you can be.

Note that emotions can get in the way of acting on our goals. For example, anxiety causes us to overestimate risks; depression causes us to underestimate how much we will enjoy an activity; and feeling threatened or vulnerable causes us to exhibit more superstitious behavior and makes us more likely to see patterns that don’t exist.

But emotions are also important for making the decisions themselves. Without having any emotional desires we would have no reason to have goals in the first place. You would have no motivations to choose between a calm beach and a nuclear waste site for your vacation. Emotions are necessary for forming goals; rationality is lame without them!

[Galef noted in a comment that the intended meaning is in line with “Emotions are necessary for forming goals among humans, rationality has no normative value to humans without goals.”]

This leaves us with a more accurate portrayal of the relationship between emotions and rationality.

How do emotions make us irrational? Emotions can be epistemically irrational if they are based on a false model of the world. You can be angry at your husband for not asking how your presentation at work went, but then upon reflection realize you never told him about it so how would he know it happened? Your anger was based on a false model of reality.

Emotions can be instrumentally irrational if they get in the way of you achieving your goals. If you feel things are hopeless and there are no ways to change the situation, you may be wrong about that. Your emotions may prevent you from taking necessary actions.

Our emotions also influence each other. If you have a desire to be liked by others and a desire to sit on a couch all day, you may run into problems. These desires may influence and conflict with each other.

We can also change our emotions. For example, cognitive behavioral therapy has many exercises and techniques (e.g. Thought Records) for changing your emotions by changing your beliefs.

Straw Vulcan Principle #5: Being rational means valuing only quantifiable things, like money, efficiency, or productivity.

If it isn’t concrete and measurable then there is no reason to value it, right? Things like beauty, love, or joy are just irrational emotions, right?

What are the problems with this? For starters, money can’t be valuable in and of itself, because it is only a means to obtain other valued things. Also, there is no reason to assume that money and productivity are the only things of value.

The Main Takeaway

Galef finishes off with this final message:

“If you think you’re acting rationally but you consistently keep getting the wrong answer, and you consistently keep ending worse off than you could be, then the conclusion you should draw from that is not that rationality is bad, it’s that you’re bad at rationality.

In other words, you’re doing it wrong!

You're Doing It Wrong!

First three images are from measureofdoubt.com > The Straw Vulcan: Hollywood’s illogical approach to logical decisionmaking.
You're Doing It Wrong image from evilbomb.com.

Presentation on Learning

3 datadataeverywhere 17 November 2011 05:30PM

In order to do a better job putting together my thoughts and knowledge on the subject, I precommitted to giving a presentation on learning. My specific goal for the presentation is to inform audience members about how humans actually learn, and to teach them how to leverage this knowledge to efficiently learn and maintain factual and procedural knowledge and create desired habits.

I will be focusing a little on background neuroscience, borrowing especially from A Crash Course in the Neuroscience of Human Motivation. I will heavily discuss spaced repetition, and I will also talk about the relevance of System 1 and System 2 thinking. I will not be talking about research, or about how to discover what to learn; for the purposes of my presentation, people already know what they want or need to learn, and have a fairly accurate picture of what that knowledge or those behaviors look like.

Given that I will only have an hour to speak, I will be unable to explore everything I might like to in depth. Less Wrong (both the site and the community) are my most valuable resource here, so I am asking two things:

  1. In one hour, what would you cover if you earnestly wanted to improve people's ability to learn?
  2. What background material do I need to ensure fluency with? This should be material that I need to have adequate familiarity with or else risk presenting an error, even if I don't need to present the material itself in any depth.

The audience will be students and faculty in a Computer Science department. In decreasing order of numbers, the audience will be Master's students, seniors, Ph.D. candidates, and professors; there will be no junior or lower-level undergraduates, so I will probably use computing analogies that wouldn't make sense in other contexts. Because of the audience, I'm also comfortable giving a fairly information-dense presentation, but since I intend to persuade as well as inform, the presentation will not be a report.