Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human asks it to pursue a certain goal, they would not want that goal pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that this idea of an AI interpreting goals so literally, without modelling a human mind, constitutes an "autistic AI", and that only autistic people would assume an AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pierce)

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pierce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies who lack subjective experience, but that does also seem implied.

Comment author: ruelian 04 October 2016 02:08:04PM *  0 points [-]

I think the basic problem here is an undissolved question: what is 'intelligence'? Humans, being human, tend to imagine a superintelligence as a highly augmented human intelligence, so the natural assumption is that regardless of the 'level' of intelligence, skills will cluster roughly the way they do in human minds, i.e. having the ability to take over the world implies a high posterior probability of having the ability to understand human goals.

The problem with this assumption is that mind-design space is large (<--understatement), and the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)

In fact, autistic people are an example of non-human-standard ability clusters, and even that deviation is tiny on the scale of mind-design space.

As for an elevator pitch of this concept, something like: "just because evolution happened to design our brains to be really good at modeling human goal systems doesn't mean all intelligences are good at it, regardless of how good they might be at destroying the planet".

Comment author: ruelian 16 March 2016 08:21:29AM 0 points [-]

Looking for advice with something it seems LW can help with.

I'm currently part of a program that trains highly intelligent people to be more effective, particularly with regards to scientific research and effecting change within large systems of people. I'm sorry to be vague, but I can't actually say more than that.

As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?

Naturally, I'm of the opinion that rationalist thought has a lot to offer on all of those questions. (I also have ulterior motives here, because I think it would be really cool to get some of these people on board with rationalism in general.) I'm having a hard time narrowing down this idea to a lesson plan I can submit to the organizers, so I thought I'd ask for suggestions.

The possible formats I have open for an activity are a lecture, a workshop/discussion in small groups, and some sort of guided introspection/reading activity (for example just giving people a sheet with questions to ponder on it, or a text to reflect on).

I've also come up with several possible topics: How to Actually Change Your Mind (ideas on how to go about condensing it are welcome), practical mind-hacking techniques and/or techniques for self-transparency, or just information on heuristics and biases because I think that's useful in general.

You can also assume the intended audience already know each other pretty well, and are capable of rather more analysis and actual math than is average.

Ideas for topics or activities are welcome, particularly ones that include a strong affective experience, since those are generally better at getting people to think about this sort of thing for the first time.

Comment author: Steven_Bukal 29 October 2014 06:49:58AM 0 points [-]

I've read that the CEO of Levi's recommends washing jeans very infrequently.

Won't they smell? I have a pretty clean white-collar lifestyle, but I'm concerned about wearing mine even once or twice between machine washing. Is it considered socially acceptable to re-wear jeans?

Comment author: ruelian 30 October 2014 03:54:49PM 0 points [-]

It also depends on the jeans. Some jeans are, for some reason, more likely to smell after being worn just once. I have no idea why, but several people I know have corroborated this independently.

Comment author: MathiasZaman 27 October 2014 09:28:54PM *  7 points [-]

I've recently started a tumblr dedicated to teaching people what amounts to Rationality 101. This post isn't about advertising that blog, since the sort of people that actually read Less Wrong are unlikely to be the target audience. Rather, I'd like to ask the community for input on what are the most important concepts I could put on that blog.

(For those that would like to follow this endeavor, but don't like tumblr, I've got a parallel blog on wordpress)

Comment author: ruelian 28 October 2014 07:56:44PM 1 point [-]

Map and territory - why is rationality important in the first place?

Comment author: lmm 28 October 2014 07:00:46PM 1 point [-]

It makes sense but it doesn't match my subjective experience.

Comment author: ruelian 28 October 2014 07:33:15PM 1 point [-]

Alright, that works too. We're allowed to think differently. Now I'm curious, could you define your way of thinking more precisely? I'm not quite sure I grok it.

Comment author: wadavis 27 October 2014 10:02:24PM 1 point [-]

Doing everything both ways, nonverbal and verbal, has lent itself to a better understanding of the reasoning. This touches on the anecdote problem: if you test every nonverbal result, you get something statistically relevant. If your odds are better than even with nonverbal reasoning, testing every result and digging for the mistakes will increase your understanding (disclaimer: this is hard work).

Comment author: ruelian 28 October 2014 05:23:58PM 2 points [-]

So, essentially, there isn't actually any way of getting around the hard work. (I think I already knew that and just decided to go on not acting on it for a while longer.) Oh well, the hard work part is also fun.

Comment author: Luke_A_Somers 27 October 2014 10:40:48PM 3 points [-]

I don't tend to do a lot of proofs anymore. When I think of math, I find it most important to be able to flip back and forth between symbol and referent freely - look at an equation and visualize the solutions, or (to take one example of the reverse) see a curve and think of ways of representing it as an equation. Since actual numbers will often not be available when visualizing, I tend to think of properties of a Taylor or Fourier series for that graph. I do a visual derivative and integral.

That way, the visual part tells me where to go with the symbolic part. Things grind to a halt when I have trouble piecing that visualization together.
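The symbol-to-referent flipping described above can be made concrete with a toy sketch (the choice of e^x and the function name are my own illustration, not the comment's):

```python
import math

def taylor_exp(x, n_terms=10):
    # Partial sum of the Taylor series for e^x around 0: sum of x^k / k!
    # The "symbolic" side: each term is written down explicitly.
    return sum(x**k / math.factorial(k) for k in range(n_terms))

# The "referent" side: the partial sum should hug the true curve
# near the expansion point, which a quick numeric check confirms.
error = abs(taylor_exp(1.0, 10) - math.exp(1.0))
print(error < 1e-5)
```

The point of the exercise is the round trip: you can picture the polynomial bending toward the exponential as terms are added, then verify that picture symbolically.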

Comment author: ruelian 28 October 2014 04:55:11PM 1 point [-]

This appears to be a useful skill that I haven't practiced enough, especially for non-proof-related thinking. I'll get right on that.

Comment author: RichardKennaway 28 October 2014 09:51:03AM 1 point [-]

Which of those, if any, sounds closer to the way you think about math?

Each serves its own purpose. It is like the technical and artistic sides of musical performance: the technique serves the artistry. In a sense the former is subordinate to the latter, but only in the sense that the foundation of a building is subordinate to its superstructure. To perform well enough that someone else would want to listen, you need both.

This may be useful reading, and the essays here (from which the former is linked).

Comment author: ruelian 28 October 2014 04:53:12PM 1 point [-]

reads the first essay and bookmarks the page with the rest

Thanks for that, it made for enjoyable and thought-provoking reading.

Comment author: lmm 27 October 2014 10:26:07PM 1 point [-]

I'm not really conscious of the distinction, unless you're talking about outright auditory things like rehearsing a speech in my head. The overwhelming majority of my thinking is in a format where I'm thinking in terms of concepts that I have a word for, but probably not consciously using the word until I start thinking about what I'm thinking about. Do you have a precise definition of "verbal"? But whether you call it verbal or not, it feels like it's all the same thing.

Comment author: ruelian 28 October 2014 04:50:20PM 1 point [-]

I don't really have good definitions at this point, but in my head the distinction between verbal and nonverbal thinking is a matter of order. When I'm thinking nonverbally, my brain addresses the concepts I'm thinking about and the way they relate to each other, then puts them to words. When I'm thinking verbally, my brain comes up with the relevant word first, then pulls up the concept. It's not binary; I tend to put it on a spectrum, but one that has a definite tipping point. Kinda like a number line: it's ordered and continuous, but at some point you cross zero and switch from positive to negative. Does that even make sense?

Comment author: wadavis 27 October 2014 08:14:41PM 1 point [-]

Correct and Precise may have been better terms. By correct I mean a result that I have very high confidence in, but that is not precise enough to be usable. By precise I mean a result that is very specific, but with far less confidence that it is correct.

As an example, consider a damped oscillation word problem from first year. Just by looking at it, you are very confident that as time approaches infinity the displacement will approach some value, but you don't know that value. When you crunch the numbers (the verbal process in the extreme) you get a very specific value that the function approaches, but you have less confidence that the value is correct: you could have made any of a number of mistakes. In this example, the classic wrong result is a displacement in the opposite direction to the applied force.

This is a very simple example so it may be hard to separate the non-verbal process from the verbal, but there are many cases where you know what the result should look like but deriving the equations and relations can turn into a black box.
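The damped-oscillation example can be sketched numerically. The system, parameter values, and function name below are my own illustration (a mass-spring-damper m*x'' + c*x' + k*x = F, starting from rest), not taken from any particular textbook problem:

```python
def damped_response(F=2.0, m=1.0, c=0.8, k=4.0, dt=0.001, steps=50000):
    # Semi-implicit Euler integration of m*x'' + c*x' + k*x = F,
    # starting from rest; returns the displacement after steps*dt seconds.
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (F - c * v - k * x) / m
        v += a * dt
        x += v * dt
    return x

x_final = damped_response()
# Nonverbal check: the displacement must share the sign of the applied force.
# Verbal (number-crunching) check: it should settle near F/k = 0.5.
print(x_final > 0, abs(x_final - 0.5) < 1e-3)
```

The nonverbal intuition (force pushes one way, so the mass ends up displaced that way) catches the classic sign error; crunching the numbers is what pins down the specific value F/k.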

Comment author: ruelian 27 October 2014 08:40:03PM *  2 points [-]

Right, that makes much more sense now, thanks.

One of my current problems is that I don't understand my brain well enough for nonverbal thinking not to turn into a black box. I think this might be a matter of inexperience, as I only recently managed intuitive, nonverbal understanding of math concepts, so I'm not always entirely sure what my brain is doing. (Anecdotally, my intuitive understanding of a problem produces good results more often than not, but any time my evidence is anecdotal there's this voice in my head that yells "don't update on that, it's not statistically relevant!")

Does experience in nonverbal reasoning on math actually lend itself to a better understanding of said reasoning, or is that just a cached thought of mine?
