
What It's Like to Notice Things

26 BrienneStrohl 17 September 2014 02:19PM

Phenomenology

Phenomenology is the study of the structures of experience and consciousness. Literally, it is the study of "that which appears". The first time you look at a twig sticking up out of the water, you might be curious and ask, "What forces cause things to bend when placed in water?" If you're a curious phenomenologist, though, you'll ask things like, "Why does that twig in water appear as though bent? Do other things appear to bend when placed in water? Do all things placed in water appear to bend to the same degree? Are there things that do not appear to bend when placed in water? Does my perception of the bending depend on the angle or direction from which I observe the twig?"

Phenomenology means breaking experience down into its more basic components, and being precise in our descriptions of what we actually observe, free of further speculation and assumption. A phenomenologist recognizes the difference between observing "a six-sided cube", and observing the three faces, at most, from which we extrapolate the rest.

I consider phenomenology to be a central skill of rationality. The most obvious example: You're unlikely to generate alternative hypotheses when the confirming observation and the favored hypothesis are one and the same in your experience of experience. The importance of phenomenology to rationality goes deeper than that, though. Phenomenology trains especially fine-grained introspection. The tinier and subtler the thoughts you're aware of, the more precise the control you can gain over the workings of your mind, and the faster your cognitive reflexes can become.

(I do not at all mean to say that you should go read Husserl and Heidegger. Despite their apparent potential for unprecedented clarity, the phenomenologists, without exception, seem to revel in obfuscation. It's probably not worth your time to wade through all of that nonsense. I've mostly read about phenomenology myself for this very reason.)

I've been doing some experimental phenomenology of late.

Noticing

I've noticed that rationality, in practice, depends on noticing. Some people have told me this is basically tautological, and therefore uninteresting. But if I'm right, I think it's likely very important to know, and to train deliberately.

The difference between seeing the twig as bent and seeing the twig as seeming bent may seem inane. It is not news that things that are bent tend to seem bent. Without that level of granularity in your observations, though, you may not notice that it could be possible for things to merely seem bent without being bent. When we're talking about something that may be ubiquitous to all applications of rationality, like noticing, it's worth taking a closer look at the contents of our experiences.

Many people talk about "noticing confusion", because Eliezer's written about it. Really, though, every successful application of a rationality skill begins with noticing. In particular, applied rationality is founded on noticing opportunities and obstacles. (To be clear, I'm making this up right this moment, so as far as I know it's not a generally agreed-upon thing. That goes for nearly everything in this post. I still think it's true.) You can be the most technically skilled batter in the world, and it won't help a bit if you consistently fail to notice when the ball whizzes by you--if you miss the opportunities to swing. And you're not going to run very many bases if you launch the ball straight at an opposing catcher--if you're oblivious to the obstacles.

It doesn't matter how many techniques you've learned if you miss all the opportunities to apply them, and fail to notice the obstacles when they get in your way. Opportunities and obstacles are everywhere. We can only be as strong as our ability to notice the ones that will make a difference.

Inspired by Whales' self-experiment in noticing confusion, I've been practicing noticing things. Not difficult or complicated things, like noticing confusion, or noticing biases. I've just been trying to get a handle on noticing, full stop. And it's been interesting.

Noticing Noticing

What does it mean to notice something, and what does it feel like?

I started by checking to see what I expected it to feel like to notice that it's raining, just going from memory. I tried for a split-second prediction, to find what my brain automatically stored under "noticing rain". When I thought about noticing rain, I got this sort of vague impression of rainyness, which included few sensory details and was more of an overall rainy feeling. My brain tried to tell me that "noticing rain" meant "being directly acquainted with rainyness", in much the same way that it tries to tell me it's experiencing a cube when it's actually only experiencing a pattern of light and shadows I interpret as three faces.

Then, I waited for rain. It didn't take long, because I'm in North Carolina for the month. (This didn't happen last time I was in North Carolina, so perhaps I just happened to choose The One Valley of Eternal Rain.)

The real "noticing rain" turned out to be a response to the physical sensations concurrent with the first raindrop falling on my skin. I did eventually have an "abstract rainyness feeling", but that happened a full two seconds later. My actual experience went like this.

It was cloudy and humid. This was not at the forefront of my attention, but it slowly moved in that direction as the temperature dropped. I was fairly focused on reading a book.

(I'm a little baffled by the apparent gradient between "not at all conscious of x" and "fully aware of x". I don't know how that works, but I experience the difference between being a little aware of the sky being cloudy and being focused on the patterns of light in the clouds, as analogous to the difference between being very-slightly-but-not-uncomfortably warm and burning my hand on the stove.)

My awareness of something like an "abstract rainyness feeling" moved further toward consciousness as the wind picked up. Suddenly--and the suddenness was an important part of the experience--I felt something like a cool, dull pin-prick on my arm. I looked at it, saw the water, and recognized it as a raindrop. Over the course of about half a second, several sensations leapt forward into full awareness: the darkness of my surroundings, the humidity in the air, the dark grey-blueness of the sky, the sound of rain on leaves like television static, the scent of ozone and damp earth, the feeling of cool humid wind on my face, and the word "rain" in my internal monologue.

I think it is that sudden leaping forward of many associated sensations that I would call "noticing rain".

After that, I felt a sort of mental step backward--though it was more like a zooming out or sliding away than a discrete step--from the sensations, and then a feeling of viewing them from the outside. There was a sensation of the potential to access other memories of times when it's rained.

(Sensations of potential are fascinating to me. I noticed a few weeks ago that after memorizing a list of names and faces, I could predict in the first half second of seeing the face whether or not I'd be able to retrieve the name in the next five seconds. Before I actually retrieved the name. What??? I don't know either.)

Only then did all of it resolve into the more distant and abstract "feeling of rainyness" that I'd predicted before. The resolution took four times as long as the simultaneous-leaping-into-consciousness-of-related-sensations that I now prefer to call "noticing", and ten times as long as the first-raindrop-pin-prick, which I think I'll call the "noticing trigger" if it turns out to be a general class of pre-noticing experiences.

("Can you really distinguish between 200 and 500 milliseconds?" Yes, but it's an acquired skill. I spent a block of a few minutes every day for a month, then several blocks a day for about a week, doing this Psychomotor Vigiliance Task when I was gathering data for the polyphasic sleep experiment. (No, I'm sorry, to the best of my knowledge Leverage has not yet published anything on the results of this. Long story short: Everyone who wasn't already polyphasic is still not polyphasic today.) It gives you fast feedback on simple response time. I'm not sure if it's useful for anything else, but it comes in handy when taking notes on experiences that pass very quickly.)

Noticing Environmental Cues

My second experiment was in repeated noticing. This is more closely related to rationality as habit cultivation.

Can I get better at noticing something just by practicing?

I was trying to zoom in on the experience of noticing itself, so I wanted something as simple as possible. Nothing subtle, nothing psychological, and certainly nothing I might be motivated to ignore. I wanted a straightforward element of my physical environment. I'm out in the country and driving around for errands and such about once a day, so I went with "red barn roofs".

I had an intuition that I should give myself some outward sign of having noticed, lest I not notice that I noticed, and decided to snap my fingers every time I noticed a red barn roof.

On the first drive, I noticed one red barn roof. That happened when I was almost at my destination and I thought, "Oh right, I'm supposed to be noticing red barn roofs, oops," and then started actively searching for them.

Noticing a red barn roof while searching for it feels very different from noticing rain while reading a book. With the rain, it felt sort of like waking up, or like catching my name in an overheard conversation. There was a complete shift in what my brain was doing. With the barn roof, it was like I had a box with a red-barn-roof-shaped hole, and it felt like completion when I grabbed a roof and dropped it through the hole. I was prepared for the roof, and it was a smaller change in the contents of consciousness.

I noticed two on the way back, also while actively searching for them, before I started thinking about something else and became oblivious.

I thought that maybe there weren't enough red barn roofs, and decided to try noticing red roofs of all sorts of buildings the next day. This, it turns out, was the correct move.

On day two of red-roof-noticing, I got lots of practice. I noticed around fifteen roofs on the way to the store, and around seven on the way back. By the end, I was not searching for the roofs as intently as I had been the day before, but I was still explicitly thinking about the project. I was still aware of directing my eyes to spend extra time at the right level in my field of vision to pick up roofs. It was like waving the box around and waiting for something to fall in, while thinking about how to build boxes.

I went out briefly again on day two, and on the way back, I noticed a red roof while thinking about something else entirely. Specifically, I was thinking about the possibility of moving to Uruguay, and whether I knew enough Spanish to survive. In the middle of one of those unrelated thoughts, my eyes moved over a barn roof and stayed there briefly while I had the leaping-into-consciousness experience with respect to the sensations of redness, recognizing something as shaped like a building, and feeling the impulse to snap my fingers. It was like I'd been wearing the box as a hat to free up my hands, and I'd forgotten about it. And then, with a heavy ker-thunk, the roof became my new center of attention.

And oh my gosh, it was so exciting! It sounds so absurd in retrospect to have been excited about noticing a roof. But I was! It meant I'd successfully installed a new cognitive habit to run in the background. On purpose. "Woo hoo! Yeah!" (I literally said that.)

On the third day, I noticed TOO MANY red roofs. I followed the same path to the store as before, but I noticed somewhere between twenty and thirty red roofs. I got about the same number going back, so I think I was catching nearly all the opportunities to notice red roofs. (I'd have to do it for a few days to be sure.) There was a pattern to the noticing: I'd notice the first roof in the background, while thinking about something else, and then I'd be more specifically on the lookout for a minute or two after that, before my mind wandered back to something other than roofs. I got faster over time at returning to my previous thoughts after snapping my fingers, but there were still enough noticed roofs to intrude uncomfortably upon my thoughts. It was getting annoying.

So I decided to switch back to only noticing the red roofs of barns in particular.

Extinction of the more general habit didn't take very long. It was over by the end of my next fifteen-minute drive. The first three times I saw a roof, I raised my hand a little to snap my fingers before reminding myself that I don't care about non-barns anymore. The next couple of times I didn't raise my hand, but still forcefully reminded myself of my disinterest in non-barns. The promotion of red roofs into consciousness got weaker with each roof, until the difference between seeing a non-red non-barn roof and a red non-barn roof was barely perceptible. That was my drive to town today.

On the drive back, I noticed about ten red barn roofs. Three I noticed while thinking about how to install habits, four while thinking about the differences between designing exercises for in-person workshops and designing exercises to put in books, and three soon enough after the previous barn to probably count as "searching for barns".

So yes, for at least some things, it seems I can get better at noticing them by practicing.

What These Silly Little Experiments Are Really About

My plan is to try noticing an internal psychological phenomenon next, but still something straightforward that I wouldn't be motivated not to notice. I probably need to try a couple things to find something that works well. I might go with "thinking the word 'tomorrow' in my internal monologue", for example, or possibly "wondering what my boyfriend is thinking about". I'll probably go with something more like the first, because it is clearer, and zooms in on "noticing things inside my head" without the extra noise of "noticing things that are relatively temporally indiscrete", but the second is actually a useful thing to notice.

Most of the useful things to notice are a lot less obvious than "thinking the word 'tomorrow' in my internal monologue". From what I've learned so far, I think that for "wondering what my boyfriend is thinking about", I'll need to pick out a couple of very specific, instantaneous sensations that happen when I'm curious what my boyfriend is thinking about. I expect that to be a repetition of the rain experiment, where I predict what it will feel like, then wait 'til I can gather data in real time. Once I have a specific trigger, I can repeat the red roof experiment to catch the tiny moments when I wonder what he's thinking. I might need to start with a broader category, like "notice when I'm thinking about my boyfriend", get used to noticing those sensations, and then reduce the set of sensations I'm watching out for to things that happen only when I'm curious what my boyfriend is thinking.

After that, I imagine I'll want to practice with different kinds of actions I can take when I notice a trigger. (If you've never heard of Implementation Intentions, I suggest trying them out.) So far, I've used the physical action of snapping my fingers. That was originally for clarity in recognizing the noticing, but it's also a behavioral response to a trigger. I could respond with a psychological behavior instead of a physical one, like "imagining a carrot". A useful response to noticing that I'm curious about what my boyfriend is thinking would be "check to see if he's busy" and then "say, 'What are you thinking about?'"

See, this "noticing" thing sounds boringly simple at first, and not worth much consideration in the art of rationality. Even in his original "noticing confusion" post, Eliezer really talked more about recognizing the implications of confusion than about the noticing itself.

Noticing is more complicated than it seems at first, and it's easy to mix it up with responding. There's a whole sub-art to noticing, and I really think that deliberate practice is making me better at it. Responses can be hard. It's essential to make noticing as effortless as possible. Then you can break the noticing and the responding apart, and you can recognize reality even before you know what to do with it.

Overcoming Decision Anxiety

14 TimMartin 11 September 2014 04:22AM

I get pretty anxious about open-ended decisions. I often spend an unacceptable amount of time agonizing over things like what design options to get on a custom suit, or what kind of job I want to pursue, or what apartment I want to live in. Some of these decisions are obviously important ones, with implications for my future happiness. However, in general my sense of anxiety is poorly calibrated with the importance of the decision. This makes life harder than it has to be, and lowers my productivity.


I moved apartments recently, and I decided that this would be a good time to address my anxiety about open-ended decisions. My hope is to present some ideas that will be helpful for others with similar anxieties, or to stimulate helpful discussion.


Solutions

 

Exposure therapy

One promising way of dealing with decision anxiety is to practice making decisions without worrying about them quite so much. Match your clothes together in a new way, even if you're not 100% sure that you like the resulting outfit. Buy a new set of headphones, even if it isn't the “perfect choice.” Aim for good enough. Remind yourself that life will be okay if your clothes are slightly mismatched for one day.

This is basically exposure therapy – exposing oneself to a slightly aversive stimulus while remaining calm about it. Doing something you're (mildly) afraid to do can have a tremendously positive impact when you try it and realize that it wasn't all that bad. Of course, you can always start small and build up to bolder activities as your anxieties diminish.

For the past several months, I had been practicing this with small decisions. With the move approaching in July, I needed some more tricks for dealing with a bigger, more important decision.

Reasoning with yourself

It helps to think up reasons why your anxieties aren't justified. As in actual, honest-to-goodness reasons that you think are true. Check out this conversation between my System 1 and System 2 that happened just after my roommates and I made a decision on an apartment:

System 1: Oh man, this neighborhood [the old neighborhood] is such a great place to go for walks. It's so scenic and calm. I'm going to miss that. The new neighborhood isn't as pretty.
System 2: Well that's true, but how many walks did we actually take in five years living in the old neighborhood? If I recall correctly, we didn't even take two per year.
System 1: Well, yeah... but...
System 2: So maybe “how good the neighborhood is for taking walks” isn't actually that important to us. At least not to the extent that you're feeling. There were things that we really liked about our old living situation, but taking walks really wasn't one of them.
System 1: Yeah, you may be right...

Of course, this “conversation” took place after the decision had already been made. But making a difficult decision often entails second-guessing oneself, and this too can be a source of great anxiety. As in the above, I find that poking holes in my own anxieties really makes me feel better. I do this by being a good skeptic and turning on my critical thinking skills – only instead of, say, debunking an article on pseudoscience, I'm debunking my own worries about how bad things are going to be. This helps me remain calm.

Re-calibration

The last piece of this process is something that should help when making future decisions. I reasoned that if my System 1 feels anxiety about things that aren't very important – if it is, as I said, poorly calibrated – then perhaps I can re-calibrate it.

Before moving apartments, I decided to make predictions about what aspects of the new living situation would affect my happiness. “How good the neighborhood is for walks” may not be important to me, but surely there are some factors that are important. So I wrote down things that I thought would be good and bad about the new place. I also rated them on how good or bad I thought they would be.

In several months, I plan to go back over that list and compare my predicted feelings to my actual feelings. What was I right about? This will hopefully give my System 1 a strong impetus to re-calibrate, and only feel anxious about aspects of a decision that are strongly correlated with my future happiness.
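For anyone who wants to try the same exercise, here is a minimal sketch of the record-then-compare workflow in Python. The aspects, scores, and scoring scale are all hypothetical, invented purely for illustration:

```python
# Hypothetical prediction log for re-calibrating decision anxiety.
# Scores run from -5 (much worse than now) to +5 (much better than now).
predictions = {
    "neighborhood walkability": -2,
    "commute time": +3,
    "kitchen size": +1,
}

# Months later, record how each aspect actually feels.
actuals = {
    "neighborhood walkability": 0,   # barely noticed the change
    "commute time": +3,              # as good as hoped
    "kitchen size": -1,              # worse than expected
}

# Large gaps mark the aspects where System 1's anxiety was miscalibrated.
for aspect, predicted in predictions.items():
    actual = actuals[aspect]
    print(f"{aspect}: predicted {predicted:+d}, actual {actual:+d}, "
          f"error {actual - predicted:+d}")
```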

Future Benefits

I think we each carry in our heads a model of what is possible for us to achieve, and anxiety about the choices we make limits how bold we can be in trying new things. As a result, I think that my attempts to feel less anxiety about decisions will be very valuable to me, and allow me to do things that I couldn't do before. At the same time, I expect that making decisions of all kinds will be a quicker and more pleasant process, which is a great outcome in and of itself.

Why the tails come apart

110 Thrasymachus 01 August 2014 10:41PM

[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]

Many outcomes of interest have pretty good predictors. It seems that height correlates with performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.

What is interesting is that the strength of these relationships appears to deteriorate as you advance far along the right tail. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player, yet they are not in the NBA. Although elite tennis players have very fast serves, if you look at the players serving the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is much higher than this) (1).

The trend seems to be that although we know the predictors are correlated with the outcome, freakishly extreme outcomes do not go together with similarly freakishly extreme predictors. Why?

Too much of a good thing?

One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller is good for basketball up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe, although having a faster serve is better all things being equal, focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ carries an increased risk of productivity-reducing mental illness. Or something along those lines.

I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.

The simple graphical explanation

[Inspired by this essay from Grady Towers]

Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off google, comparing the speed of a ball out of a baseball pitcher's hand with its speed crossing the plate:

It is unsurprising to see these are correlated (I'd guess the R-square is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience sampled from googling 'scatter plot') of quiz time versus test score:

Or this:

Or this:

Given a correlation, the envelope of the distribution should form some sort of ellipse, narrower as the correlation gets stronger, and more circular as it gets weaker:

[figure: correlations (scatter plots at varying correlation strengths)]

The thing is, as one approaches the far corners of this ellipse, we see 'divergence of the tails': as the ellipse doesn't sharpen to a point, there are bulges where the maximum x and y values lie with sub-maximal y and x values respectively:

[figure: diffmaxes (maximum x and y values lie off the tip of the ellipse)]

So this offers an explanation why divergence at the tails is ubiquitous. Provided the sample size is largish and the correlation not too tight (the tighter the correlation, the larger the sample size required), one will observe ellipses with the bulging sides of the distribution (2).

Hence the very best basketball players aren't the tallest (and vice versa), the very wealthiest not the smartest, and so on and so forth for any correlated X and Y. If X and Y are "Estimated effect size" and "Actual effect size", or "Performance at T" and "Performance at T+n", then you have a graphical display of the winner's curse and regression to the mean.
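A quick simulation makes the graphical story concrete. The sketch below (Python; the correlation of 0.7, the sample size, and the seed are arbitrary choices for illustration) samples a correlated bivariate normal and checks how the top individual on one variable ranks on the other:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.7  # sample size and correlation, chosen arbitrarily

# Draw (x, y) from a standard bivariate normal with correlation rho.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

best_x = np.argmax(x)           # individual with the maximal predictor
beaten = (y > y[best_x]).sum()  # how many beat them on the outcome

print(f"The top-x individual is beaten on y by {beaten} of {n} others.")
print(f"The top-y individual outranks {(x < x[np.argmax(y)]).mean():.1%} "
      f"of the population on x.")
```

Even with a correlation this strong, the individual maximal on x is reliably excellent on y but almost never the single best: the bulge at the corner of the ellipse is where they live.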

An intuitive explanation of the graphical explanation

It would be nice to have an intuitive handle on why this happens, even if we can be convinced that it happens. Here's my offer towards an explanation:

The fact that a correlation is less than 1 implies that other things matter to an outcome of interest. Although being tall matters for being good at basketball, strength, agility, and hand-eye coordination matter as well (to name but a few). The same applies to other outcomes where multiple factors play a role: being smart helps in getting rich, but so does being hard working, being lucky, and so on.

For a toy model, pretend that height, strength, agility and hand-eye coordination are independent of one another, gaussian, and contribute additively to basketball ability with equal weight.(3) So, ceteris paribus, being taller will make one better at basketball, and the toy model stipulates there aren't 'hidden trade-offs': there's no negative correlation between height and the other attributes, even at the extremes. Yet the graphical explanation suggests we should still see divergence of the tails: the very tallest shouldn't be the very best.

The intuitive explanation would go like this: Start at the extreme tail - +4SD above the mean for height. Although their 'basketball-score' gets a massive boost from their height, we'd expect them to be average with respect to the other basketball-relevant abilities (we've stipulated they're independent). Further, as this ultra-tall population is small, it won't show much spread in the other factors: with 10 people at +4SD, you wouldn't expect any of them to be +2SD in another factor like agility.

Move down the tail to slightly less extreme values - +3SD, say. These people don't get such a boost to their basketball score from their height, but there should be a lot more of them (if 10 at +4SD, around 500 at +3SD); this means there is a lot more expected variance in the other basketball-relevant abilities - it is much less surprising to find someone +3SD in height and also +2SD in agility, and in a world where these things were equally important, they would 'beat' someone +4SD in height but average in the other attributes. Although a +4SD-height person will likely be better than a given +3SD-height person, the best of the +4SDs will not be as good as the best of the much larger number of +3SDs.

The trade-off will vary depending on the exact weighting of the factors and how much of the variance each explains, but the point seems to hold in the general case: when looking at a factor known to be predictive of an outcome, the largest outcome values will occur with sub-maximal factor values, as the larger population increases the chances of 'getting lucky' with the other factors:

[figure: maxisubmax (largest outcomes occur at sub-maximal factor values)]
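The toy model itself is also easy to check by simulation. Here is a sketch under exactly the stipulations above (four independent gaussian factors, equal weights, no trade-offs); the population size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # arbitrary population size

# Four independent, equally weighted gaussian factors, as stipulated.
height, strength, agility, coordination = rng.standard_normal((4, n))
ability = height + strength + agility + coordination

tallest, best = np.argmax(height), np.argmax(ability)

print(f"People better at basketball than the tallest person: "
      f"{(ability > ability[tallest]).sum()}")
print(f"Best player's height: {height[best]:+.2f} SD "
      f"(the tallest is at {height[tallest]:+.2f} SD)")
```

On a typical run the best player is a couple of standard deviations above the mean in height, well short of the extreme, even though height is a genuine, equally weighted cause of ability.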

So that's why the tails diverge.

Endnote: EA relevance

I think this is interesting in and of itself, but it has relevance to Effective Altruism, given it generally focuses on the right tail of various things (What are the most effective charities? What is the best career? etc.) It generally vindicates worries about regression to the mean or winner's curse, and suggests that these will be pretty insoluble in all cases where the populations are large: even if you have really good means of assessing the best charities or the best careers, so that your assessments correlate really strongly with which ones actually are the best, the very best ones you identify are unlikely to be actually the very best, as the tails will diverge.

This probably has limited practical relevance. Although you might expect that one of the 'not estimated as the very best' charities is in fact better than your estimated-to-be-best charity, you don't know which one, and your best bet remains your estimate (in the same way - at least in the toy model above - you should bet a 6'11" person is better at basketball than someone who is 6'4".)

There may be spread betting or portfolio scenarios where this factor comes into play - perhaps instead of funding AMF to diminishing returns until its marginal effectiveness dips below charity #2's, we should be willing to spread funds sooner.(4) Mainly, though, it should lead us to be less self-confident.


1. One might look at the generally modest achievements of people in high-IQ societies as further evidence, but there are worries about adverse selection.

2. One needs a large enough sample to 'fill in' the elliptical population density envelope, and the tighter the correlation, the larger the sample needed to fill in the sub-maximal bulges. The Old Faithful case is an example where you actually do get a 'point', although it is likely an outlier.

 

3. If you want to apply it to cases where the factors are positively correlated - which they often are - just use the components of the other factors that are independent of the factor of interest. I think, but I can't demonstrate, the other stipulations could also be relaxed.

4. I'd intuit, but again I can't demonstrate, that the case for this becomes stronger with highly skewed interventions where almost all the impact is focused in relatively low-probability channels, like averting a very specific existential risk.

Too good to be true

23 PhilGoetz 11 July 2014 08:16PM

A friend recently posted a link on his Facebook page to an informational graphic about the alleged link between the MMR vaccine and autism. It said, if I recall correctly, that out of 60 studies on the matter, not one had indicated a link.

Presumably, with 95% confidence.

This bothered me. What are the odds, supposing there is no link between X and Y, of conducting 60 studies of the matter, and of all 60 concluding, with 95% confidence, that there is no link between X and Y?

Answer: .95 ^ 60 = .046. (Use the first term of the binomial distribution.)
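To spell out the arithmetic: under the null of no link, each study independently avoids a false positive with probability 0.95, so the chance that all 60 do is the k = 0 term of a Binomial(60, 0.05) distribution. A quick check in Python (scipy is just one convenient way to compute it):

```python
from scipy.stats import binom

# P(zero false positives across 60 independent studies at alpha = 0.05)
print(0.95 ** 60)              # 0.0461...
print(binom.pmf(0, 60, 0.05))  # the same value, via the binomial pmf
```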

So if it were in fact true that 60 out of 60 studies failed to find a link between vaccines and autism at 95% confidence, this would prove, with 95% confidence, that studies in the literature are biased against finding a link between vaccines and autism.


Failures of an embodied AIXI

28 So8res 15 June 2014 06:29PM

Building a safe and powerful artificial general intelligence seems a difficult task. Working on that task today is particularly difficult, as there is no clear path to AGI yet. Is there work that can be done now that makes it more likely that humanity will be able to build a safe, powerful AGI in the future? Benja and I think there is: there are a number of relevant problems that it seems possible to make progress on today using formally specified toy models of intelligence. For example, consider recent program equilibrium results and various problems of self-reference.

AIXI is a powerful toy model used to study intelligence. An appropriately-rewarded AIXI could readily solve a large class of difficult problems. This includes computer vision, natural language recognition, and many other difficult optimization tasks. That these problems are all solvable by the same equation — by a single hypothetical machine running AIXI — indicates that the AIXI formalism captures a very general notion of "intelligence".

However, AIXI is not a good toy model for investigating the construction of a safe and powerful AGI. This is not just because AIXI is uncomputable (and its computable counterpart AIXItl infeasible). Rather, it's because AIXI cannot self-modify. This fact is fairly obvious from the AIXI formalism: AIXI assumes that in the future, it will continue being AIXI. This is a fine assumption for AIXI to make, as it is a very powerful agent and may not need to self-modify. But this inability limits the usefulness of the model. Any agent capable of undergoing an intelligence explosion must be able to acquire new computing resources, dramatically change its own architecture, and keep its goals stable throughout the process. The AIXI formalism lacks tools to study such behavior.
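To make the point concrete, here is one standard presentation of AIXI's action-selection rule (following Hutter's formulation; notation varies between sources, and this equation is supplied here rather than taken from the post), where U is the universal machine and ℓ(q) is the length of program q:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \; \max_{a_m} \sum_{o_m r_m}
  \left( r_t + \cdots + r_m \right)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The nested max operators over the future actions are exactly where the assumption lives: AIXI evaluates each action by supposing that all later actions will be chosen by this same maximization, i.e., that its future self is still AIXI.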

This is not a condemnation of AIXI: the formalism was not designed to study self-modification. However, this limitation is neither trivial nor superficial: even though an AIXI may not need to make itself "smarter", real agents may need to self-modify for reasons other than self-improvement. The fact that an embodied AIXI cannot self-modify leads to systematic failures in situations where self-modification is actually necessary. One such scenario, made explicit using Botworld, is explored in detail below.

In this game, one agent will require another agent to precommit to a trade by modifying its code in a way that forces execution of the trade. AIXItl, which is unable to alter its source code, is not able to implement the precommitment, and thus cannot enlist the help of the other agent.

Afterwards, I discuss a slightly more realistic scenario in which two agents have an opportunity to cooperate, but one agent has a computationally expensive "exploit" action available and the other agent can measure the waste heat produced by computation. Again, this is a scenario where an embodied AIXItl fails to achieve a high payoff against cautious opponents.

Though scenarios such as these may seem improbable, they are not strictly impossible. Such scenarios indicate that AIXI — while a powerful toy model — does not perfectly capture the properties desirable in an idealized AGI.


On Terminal Goals and Virtue Ethics

62 Swimmer963 18 June 2014 04:00AM

Introduction

A few months ago, my friend said the following thing to me: “After seeing Divergent, I finally understand virtue ethics. The main character is a cross between Aristotle and you.”

That was an impossible-to-resist pitch, and I saw the movie. The thing that resonated most with me–also the thing that my friend thought I had in common with the main character–was the idea that you could make a particular decision, and set yourself down a particular course of action, in order to make yourself become a particular kind of person. Tris didn’t join the Dauntless caste because she thought they were doing the most good in society, or because she thought her comparative advantage to do good lay there–she chose it because they were brave, and she wasn’t, yet, and she wanted to be. Bravery was a virtue that she thought she ought to have. If the graph of her motivations even went any deeper, the only node beyond ‘become brave’ was ‘become good.’

(Tris did have a concept of some future world-outcomes being better than others, and wanting to have an effect on the world. But that wasn't the causal reason why she chose Dauntless; as far as I can tell, it was unrelated.)

My twelve-year-old self had a similar attitude. I read a lot of fiction, and stories had heroes, and I wanted to be like them–and that meant acquiring the right skills and the right traits. I knew I was terrible at reacting under pressure–that in the case of an earthquake or other natural disaster, I would freeze up and not be useful at all. Being good at reacting under pressure was an important trait for a hero to have. I could be sad that I didn’t have it, or I could decide to acquire it by doing the things that scared me over and over and over again. So that someday, when the world tried to throw bad things at my friends and family, I’d be ready.

You could call that an awfully passive way to look at things. It reveals a deep-seated belief that I’m not in control, that the world is big and complicated and beyond my ability to understand and predict, much less steer–that I am not the locus of control. But this way of thinking is an algorithm. It will almost always spit out an answer, when otherwise I might get stuck in the complexity and unpredictability of trying to make a particular outcome happen.


Virtue Ethics

I find the different houses of the HPMOR universe to be a very compelling metaphor. It’s not because they suggest actions to take; instead, they suggest virtues to focus on, so that when a particular situation comes up, you can act ‘in character.’ Courage and bravery for Gryffindor, for example. It also suggests the idea that different people can focus on different virtues–diversity is a useful thing to have in the world. (I'm probably mangling the concept of virtue ethics here, not having any background in philosophy, but it's the closest term for the thing I mean.)

I’ve thought a lot about the virtue of loyalty. In the past, loyalty has kept me with jobs and friends that, from an objective perspective, might not seem like the optimal things to spend my time on. But the costs of quitting and finding a new job, or cutting off friendships, wouldn’t just have been about direct consequences in the world, like needing to spend a bunch of time handing out resumes or having an unpleasant conversation. There would also be a shift within myself, a weakening in the drive towards loyalty. It wasn’t that I thought everyone ought to be extremely loyal–it’s a virtue with obvious downsides and failure modes. But it was a virtue that I wanted, partly because it seemed undervalued. 

By calling myself a ‘loyal person’, I can aim myself in a particular direction without having to understand all the subcomponents of the world. More importantly, I can make decisions even when I’m rushed, or tired, or under cognitive strain that makes it hard to calculate through all of the consequences of a particular action.

 

Terminal Goals

The Less Wrong/CFAR/rationalist community puts a lot of emphasis on a different way of trying to be a hero–where you start from a terminal goal, like “saving the world”, and break it into subgoals, and do whatever it takes to accomplish it. In the past I’ve thought of myself as being mostly consequentialist, in terms of morality, and this is a very consequentialist way to think about being a good person. And it doesn't feel like it would work. 

There are some bad reasons why it might feel wrong–i.e. that it feels arrogant to think you can accomplish something that big–but I think the main reason is that it feels fake. There is strong social pressure in the CFAR/Less Wrong community to claim that you have terminal goals, that you’re working towards something big. My System 2 understands terminal goals and consequentialism, as a thing that other people do–I could talk about my terminal goals, and get the points, and fit in, but I’d be lying about my thoughts. My model of my mind would be incorrect, and that would have consequences on, for example, whether my plans actually worked.

 

Practicing the art of rationality

Recently, Anna Salamon brought up a question with the other CFAR staff: “What is the thing that’s wrong with your own practice of the art of rationality?” The terminal goals thing was what I thought of immediately–namely, the conversations I've had over the past two years, where other rationalists have asked me "so what are your terminal goals/values?" and I've stammered something and then gone to hide in a corner and try to come up with some. 

In Alicorn’s Luminosity, Bella says about her thoughts that “they were liable to morph into versions of themselves that were more idealized, more consistent - and not what they were originally, and therefore false. Or they'd be forgotten altogether, which was even worse (those thoughts were mine, and I wanted them).”

I want to know true things about myself. I also want to impress my friends by having the traits that they think are cool, but not at the price of faking it–my brain screams that pretending to be something other than what you are isn’t virtuous. When my immediate response to someone asking me about my terminal goals is “but brains don’t work that way!” it may not be a true statement about all brains, but it’s a true statement about my brain. My motivational system is wired in a certain way. I could think it was broken; I could let my friends convince me that I needed to change, and try to shoehorn my brain into a different shape; or I could accept that it works, that I get things done and people find me useful to have around and this is how I am. For now. I'm not going to rule out future attempts to hack my brain, because Growth Mindset, and maybe some other reasons will convince me that it's important enough, but if I do it, it'll be on my terms. Other people are welcome to have their terminal goals and existential struggles. I’m okay the way I am–I have an algorithm to follow.

 

Why write this post?

It would be an awfully surprising coincidence if mine was the only brain that worked this way. I’m not a special snowflake. And other people who interact with the Less Wrong community might not deal with it the way I do. They might try to twist their brains into the ‘right’ shape, and break their motivational system. Or they might decide that rationality is stupid and walk away.

Willpower Depletion vs Willpower Distraction

61 Academian 15 June 2014 06:29PM

I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that willpower depletion "is" glucose depletion in the prefrontal cortex, but some more recent experiments have failed to replicate this, e.g. by finding that the mere taste of sugar is enough to "replenish" willpower faster than the time it takes blood to move from the mouth to the brain:

Carbohydrate mouth-rinses activate dopaminergic pathways in the striatum–a region of the brain associated with responses to reward (Kringelbach, 2004)–whereas artificially-sweetened non-carbohydrate mouth-rinses do not (Chambers et al., 2009). Thus, the sensing of carbohydrates in the mouth appears to signal the possibility of reward (i.e., the future availability of additional energy), which could motivate rather than fuel physical effort.

-- Molden, D. C. et al, The Motivational versus Metabolic Effects of Carbohydrates on Self-Control. Psychological Science.

Stanford's Carol Dweck and Greg Walton even found that hinting to people that using willpower is energizing might actually make them less depletable:

When we had people read statements that reminded them of the power of willpower like, “Sometimes, working on a strenuous mental task can make you feel energized for further challenging activities,” they kept on working and performing well with no sign of depletion. They made half as many mistakes on a difficult cognitive task as people who read statements about limited willpower. In another study, they scored 15 percent better on I.Q. problems.

-- Dweck and Walton, Willpower: It’s in Your Head? New York Times.

While these are all interesting empirical findings, there’s a very similar phenomenon that’s much less debated and which could explain many of these observations, but I think gets too little popular attention in these discussions:

Willpower is distractible.

Indeed, willpower and working memory are both strongly mediated by the dorsolateral prefrontal cortex, so “distraction” could just be the two functions funging against one another. To use the terms of Stanovich popularized by Kahneman in Thinking: Fast and Slow, "System 2" can only override so many "System 1" defaults at any given moment.

So what’s going on when people say "willpower depletion"? I’m not sure, but even if willpower depletion is not a thing, the following distracting phenomena clearly are:

  • Thirst
  • Hunger
  • Sleepiness
  • Physical fatigue (like from running)
  • Physical discomfort (like from sitting)
  • That specific-other-thing you want to do
  • Anxiety about willpower depletion
  • Indignation at being asked for too much by bosses, partners, or experimenters...

... and "willpower depletion" might be nothing more than mental distraction by one of these processes. Perhaps it really is better to think of willpower as power (a rate) than energy (a resource).

If that’s true, then figuring out what processes might be distracting us might be much more useful than saying “I’m out of willpower” and giving up. Maybe try having a sip of water or a bit of food if your diet permits it. Maybe try reading lying down to see if you get nap-ish. Maybe set a timer to remind you to call that friend you keep thinking about.

The last two bullets,

  • Anxiety about willpower depletion
  • Indignation at being asked for too much by bosses, partners, or experimenters...

are also enough to explain why being told willpower depletion isn’t a thing might reduce the effects typically attributed to it: we might simply be less distracted by anxiety or indignation about doing “too much” willpower-intensive work in a short period of time.

Of course, any speculation about how human minds work in general is prone to the "typical mind fallacy". Maybe my willpower is depletable and yours isn’t. But then that wouldn’t explain why you can cause people to exhibit less willpower depletion by suggesting otherwise. But then again, most published research findings are false. But then again the research on the DLPFC and working memory seems relatively old and well established, and distraction is clearly a thing...

All in all, more of my chips are falling on the hypothesis that willpower “depletion” is often just willpower distraction, and that finding and addressing those distractions is probably a better strategy than avoiding activities altogether in order to "conserve willpower".

LW Women- Female privilege

23 [deleted] 05 May 2013 01:58AM

Daenerys' Note: This is the last item in the LW Women series. Thanks to all who participated. :)


Standard Intro

The following section will be at the top of all posts in the LW Women series.

Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post.  There is a LOT of material, so I am breaking them down into more manageable-sized themed posts.

Seven women replied, totaling about 18 pages. 

Standard Disclaimer- Women have many different viewpoints, and just because I am acting as an intermediary to allow for anonymous communication does NOT mean that I agree with everything that will be posted in this series. (It would be rather impossible to, since there are some posts arguing opposite sides!)

To the submitters- If you would like to respond anonymously to a comment (for example if there is a comment questioning something in your post, and you want to clarify), you can PM your message and I will post it for you. If this happens a lot, I might create a LW_Women sockpuppet account for the submitters to share.

Please do NOT break anonymity, because it lowers the anonymity of the rest of the submitters.




Submitter E

 

I'm a girl, and by me that's only great.

No seriously. I've grown up and lived in the social circles where female privilege way outweighs male privilege. I've never been sexually assaulted, nor been denied anything because of my gender. I study a male-dominated subject, and most of my friends are polite, deferential feminism-controlled men. I have, however, been able to flirt and sympathise and generally girl-game my way into getting what I want. (Charming guys is fun!) Sure, there will eventually come a point where I'll be disadvantaged in the job market because of my ability to bear children; but I've gotta balance that against the fact that I have the ability to bear children.

In fact, most of the gender problems I personally face stem from biology, so there's not much I can do about them. It sucks that I have to be the one responsible for contraception, and that my attractiveness to men depends largely on my looks but the inverse is not true. But there's not much society can do to change biological facts, so I live with them.

I don't think it's a very disputed fact that women, in general, tend to be more emotional than men. I'm an INFJ, most of my (male) friends are INTJ. With the help of Less Wrong's epistemology and a large pinch of Game, I've achieved a fair degree of luminosity over my inner workings. I'm complicated. I don't think my INTJ friends are this complicated, and the complicatedness is part of the reason why I'm an "F": my intuition system is useful. It makes me really quite good at people, especially when I can introspect and then apply my conscious mind to my instincts as well. I don't know how many of the people here are F instead of T, but for anyone who uses intuition a lot, applying proper rationality to introspection (a.k.a. luminosity) is essential. It is so so so easy to rationalise, and it takes effort to just know my instinct without rationalising false reasons for it. I'm not sure the luminosity sequence helps everyone, because everyone works differently, but just being aware of the concept and being on the lookout for ways that work is good.

There's a problem with strong intuition though, and that's that I have less conscious control over my opinions - it's hard enough being aware of them and not rationalising additional reasons for them. I judge ugly women and unsuccessful men. I try to consciously adjust for the effect, but it's hard.

Onto the topic of gender discussions on Less Wrong - it annoys me how quickly things get irrational. The whole objectification debacle of July 2009 proved that even the best can get caught up in it (though maybe things have got better since 2009?). I was confused in the same way Luke was: I didn't see anything wrong with objectification. I objectify people all the time, but I still treat them as agents when I need to. Porn is great, but it doesn't mean I'm going to find it harder to befriend a porn star. I objectify Eliezer Yudkowsky because he's a phenomenon on the internet more than a flesh-and-blood person to me, but that doesn't mean I'd have difficulty interacting with a flesh-and-blood Eliezer. On the whole, Less Wrong doesn't do well at talking about controversial topics, even though we know how to. Maybe we just need to work harder. Maybe we need more luminosity. I would love for Less Wrong to be a place where all things could just be discussed rationally.

There's another reason that I come out on a different side to most women in feminism and gender discussions though, and this is the bit I'm only saying because it's anonymous. I'm not a typical woman. I act, dress and style feminine because I enjoy feeling like a princess. I am most fulfilled when I'm in a M-dom f-sub relationship. My favourite activity is cooking and my honest-to-god favourite place in the house is the kitchen. I take pride in making awesome sandwiches. I just can't alieve it's offensive when I hear "get in the kitchen", because I'd just be like "ok! :D". I love sex, and I value getting better at it. I want to be able to have sex like a porn star. Suppressing my gag reflex is one of the most useful things I learned all year. I love being hit on and seduced by men. When I dress sexy, it is because male attention turns me on. I love getting wolf whistles. Because of luminosity and self-awareness, I'm ever-conscious of the vagina tingle. I'm aware of when I'm turned on, and I don't rationalise it away. And the same testosterone that makes me good at a male-dominated subject, makes sure I'm really easily turned on.

I understand that all these things are different when I'm consenting and I'm viewed as an agent and all that. But it's just hard to understand other girls being offended when I'm not, because it's much harder to empathise with someone you don't agree with. Not generalising from one example is hard.

Understanding other girls is hard.

 

Too busy to think about life

78 Academian 22 April 2010 03:14PM

Many adults maintain their intelligence through a dedication to study or hard work.  I suspect this is related to sub-optimal levels of careful introspection among intellectuals.

If someone asks you what you want for yourself in life, do you have the answer ready at hand?  How about what you want for others?  Human values are complex, which means your talents and technical knowledge should help you think about them.  Just as in your work, complexity shouldn't be a curiosity-stopper.  It means "think", not "give up now."

But there are so many terrible excuses stopping you...

Too busy studying?  Life is the exam you are always taking.  Are you studying for that?  Did you even write yourself a course outline?

Too busy helping?  Decision-making is the skill you are always using, or always lacking, as much when you help others as yourself.  Isn't something you use constantly worth improving on purpose?

Too busy thinking to learn about your brain?  That's like being too busy flying an airplane to learn where the engines are.  Yes, you've got passengers in real life, too: the people whose lives you affect.

Emotions too irrational to think about them?  Irrational emotions are things you don't want to think for you, and therefore are something you want to think about.  By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist.

So set aside a date.  Sometime soon.  Write yourself some notes.  Find that introspective friend of yours, and start solving for happiness.  Don't have one?  For the first time in history, you've got LessWrong.com!

Reasons to make the effort:

Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you.

You are constantly changing your disposition.  The question is whether you'll do it with a purpose.  Your experiences change you, and you affect those, as well as how you think about them, which also changes you.  It's going to happen.  It's happening now.  Do you even know how it works?  Put your intelligence to work and figure it out!

The road to harm is paved with ignorance.  Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too.  It makes you better able to be a better friend.

You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning.  The issues most in need of your attention are often ones you just happen not to think about for reasons undetectable to you.

How not to waste the effort:

Don't wait till you're sad.  Only thinking when you're sad gives you a skewed perspective.  Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful.  Sadness often makes it harder to think: you're farther from happiness, which can make happiness more difficult to empathize with and understand.  Nonetheless, we often have to think when sad, because something bad may have happened that needs addressing.

Introspect carefully, not constantly.  Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life.  Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time.  In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were.

Re-evaluate on an appropriate time-scale.  Try devoting a few days before each semester or work period to look at your life as a whole.  At these times you'll have accumulated experience data from the last period, ripe and ready for analysis.  You'll have more ideas per hour that way, and feel better about it.  Before starting something new is also the most natural and opportune time to affirm or change long term goals.  Then, barring large unexpected opportunities, stick to what you decide until the next period when you've gathered enough experience to warrant new reflection.

(The absent minded driver is a mathematical example of how planning outperforms constant re-evaluation.  When not engaged in a deep and careful introspection, we're all absent minded drivers to a degree.)
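For the curious, the planning calculation in that example can be verified in a few lines. This sketch assumes the standard payoffs from Piccione and Rubinstein's formulation of the problem: exiting at the first intersection pays 0, exiting at the second pays 4, and never exiting pays 1. The absent-minded driver, unable to tell the intersections apart, must commit in advance to a single exit probability p:

```python
import numpy as np

# Absent-minded driver: exit 1st junction -> 0, exit 2nd -> 4, never exit -> 1.
# The driver can't distinguish junctions, so commits to one exit probability p.
p = np.linspace(0, 1, 10_001)
expected = p * 0 + (1 - p) * p * 4 + (1 - p) ** 2 * 1

best = np.argmax(expected)
print(f"optimal p = {p[best]:.3f}, expected payoff = {expected[best]:.3f}")
# -> p ≈ 1/3, expected payoff ≈ 4/3: the planning-stage optimum the post
#    alludes to, fixed once in advance rather than re-derived at each junction.
```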

Lost about where to start?  I think Alicorn's story is an inspiring one.  Learn to understand and defeat procrastination/akrasia.  Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end).  Foster an everyday awareness that you are a brain, and in fact more like two half-brains.

These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals.  But you have your own task ahead of you, that only you can fulfill.

So don't give up.  Don't procrastinate it.  If you haven't done it already, schedule a day and time right now when you can realistically assess

  • how you want your life to affect you and other people, and
  • what you must change to better achieve this.

Eliezer has said I want you to live.  Let me say:

I want you to be better at your life.

Calling all MIRI supporters for unique May 6 giving opportunity!

18 lukeprog 04 May 2014 11:45PM

(Cross-posted from MIRI's blog. MIRI maintains Less Wrong, with generous help from Trike Apps, and much of the core content is written by salaried MIRI staff members.)

Update: I'm liveblogging the fundraiser here.

Read our strategy below, then give here!

As previously announced, MIRI is participating in a massive 24-hour fundraiser on May 6th, called SV Gives. This is a unique opportunity for all MIRI supporters to increase the impact of their donations. To be successful we'll need to pre-commit to a strategy and see it through. If you plan to give at least $10 to MIRI sometime this year, during this event would be the best time to do it!


The plan

We need all hands on deck to help us win the following prize as many times as possible:

$2,000 prize for the nonprofit that has the most individual donors in an hour, every hour for 24 hours.

To paraphrase, every hour, there is a $2,000 prize for the organization that has the most individual donors during that hour. That's a total of $48,000 in prizes, from sources that wouldn't normally give to MIRI.  The minimum donation is $10, and an individual donor can give as many times as they want. Therefore we ask our supporters to:

  1. give $10 an hour, during every hour of the fundraiser that they are awake (I'll be up and donating for all 24 hours!);
  2. for those whose giving budgets won't cover all those hours, see below for a list of which hours you should privilege; and
  3. publicize this effort as widely as possible.

International donors, we especially need your help!

MIRI has a strong community of international supporters, and this gives us a distinct advantage! While North America sleeps, you'll be awake, ready to target all of the overnight $2,000 hourly prizes.

