You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

A collection of Stubs.

-6 Elo 06 September 2016 07:24AM

In light of SDR's comment yesterday, instead of writing a new post today I compiled the list of ideas I want to write about - partly to lay them out there and see if any stand out as better than the rest, and partly so that maybe they will be a little more out in the wild than if I hold them until I get around to them.  I realise there is no thesis in this post, but I figured it would be better to write one of these than to write each idea in its own post with the potential to be good or bad.

Original post: http://bearlamp.com.au/many-draft-concepts/

I create ideas at a rate of about 3 a day, without trying to.  I write at a rate of about 1.5 a day, which leaves me always behind.  Even if I write about the best ideas I can think of, some good ones might never be covered.  This is an effort to draft out a good stack of them so that maybe I don't have to write them all out, by better defining which ones are good and which ones are a bit more useless.

With that in mind, in no particular order - a list of unwritten posts:


From my old table of contents

Goals of your lesswrong group – A guided, worked-through exercise in deciding why the group exists and what it should do.  Help people work out what they want out of it (do people know?): setting goals, doing something particularly interesting or routine, having fun, changing your mind, being activists in the world around you.  Whatever the reasons you care about, work them out and move towards them.  Nothing particularly groundbreaking in the process here: sit down with the group with pens and paper, maybe run a resolve cycle, maybe talk about ideas and settle on a few, then decide how to carry them out.  Relevant links: Sydney meetup, group resources. (estimate 2hrs to write)

Goals interrogation + Goal levels – Goal interrogation is about asking "is this thing I want to do actually a goal of mine?" and "is my current plan the best way to achieve that?".  Goal levels are something out of Sydney Lesswrong that help you have mutual long term goals and supporting short term goals.  There are 3 main levels: Dream, Year, Daily (or approximate).  You want dream goals like going to the moon, yearly goals like getting another year further in your degree, and daily goals like studying today, each contributing to the level above.  Any time you are feeling lost you can look at the guide you set out for yourself and use it to direct you. (3hrs)

How to human – A zero-to-human guide; a guide for basic functionality of a humanoid system.  Something of a conglomeration of Maslow, mental health, so you feel like shit and systems thinking.  Am I conscious?  Am I breathing?  Am I bleeding or injured (major or minor)?  Am I falling or otherwise in danger and about to cause the earlier questions to return false?  Do I know where I am?  Am I safe?  Do I need to relieve myself (or other bodily functions, i.e. itchy)?  Have I had enough water?  Sleep?  Food?  Is my mind altered (alcohol or other drugs)?  Am I stuck with sensory input I can't control (noise, smells, things touching me)?  Am I too hot or too cold?  Is my environment too hot or too cold?  Or unstable?  Am I with people or alone?  Is this okay?  Am I clean (showered, teeth, other personal cleaning rituals)?  Have I had some sunlight and fresh air in the past few days?  Have I had too much sunlight or wind in the past few days?  Do I feel stressed?  Okay?  Happy?  Worried?  Suspicious?  Scared?  Was I doing something?  What am I doing?  Do I want to be doing something else?  Am I being watched (is that okay?)?  Have I interacted with humans in the past 24 hours?  Have I had alone time in the past 24 hours?  Do I have any existing conditions I can run a check on, i.e. depression?  Are my valuables secure?  Are the people I care about safe?  (4hrs)
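A checklist like this could be sketched as a simple status-check routine.  This is a toy illustration only - the check wording is a shortened, paraphrased subset of the questions above, not the full list:

```python
# A minimal sketch of the "how to human" checklist as a status check.
# The checks and their wording are illustrative, not the full list.

BASIC_CHECKS = [
    "I am breathing",
    "I am not bleeding or injured",
    "I know where I am and I am safe",
    "I have had enough water, sleep and food",
    "I am not too hot or too cold",
    "I have had sunlight and fresh air in the past few days",
    "I have interacted with humans in the past 24 hours",
]

def failed_checks(status):
    """Given a {check: bool} mapping, return the checks that came back False."""
    return [check for check, ok in status.items() if not ok]

# Example: everything fine except water/sleep/food.
status = {check: True for check in BASIC_CHECKS}
status["I have had enough water, sleep and food"] = False
```

Running `failed_checks(status)` then tells you which basic need to attend to first.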

List of common strategies for getting shit done – things like scheduling/allocating time, pomodoros, committing to things externally, complice, beeminder, other trackers. (4hrs)

List of superpowers and kryptonites – Asking the questions "what are my superpowers?" and "what are my kryptonites?".  Knowledge is power; working with your powers and working out how to avoid your kryptonites is a method of improving yourself.  What are you really good at, and what do you absolutely suck at and would be better delegating to other people?  The more you know about yourself, the more you can do the right thing by your powers or weaknesses and save yourself trouble.

List of effective behaviours – small life-improving habits that add together to make awesomeness from nothing, and how to pick them up.  Short list: toothbrush in the shower, scales in front of the fridge, healthy food in the most accessible position in the fridge, make the unhealthy stuff a little more inaccessible, keep some clocks fast - i.e. the clock in your car (so you get there early), prepare for expected barriers ahead of time (i.e. packing the gym bag and leaving it at the door), and more.

Stress prevention checklist – feeling off? You want to have already outsourced the hard work for “things I should check on about myself” to your past self. Make it easier for future you. Especially in the times that you might be vulnerable.  Generate a list of things that you want to check are working correctly.  i.e. did I drink today?  Did I do my regular exercise?  Did I take my medication?  Have I run late today?  Do I have my work under control?

Make it easier for future you. Especially in the times that you might be vulnerable. – As its own post, on curtailing bad habits that you can expect to happen when you are compromised.  Inspired by candy-bar moments and turning them into carrot-moments or other more productive things.  This applies beyond diet, and might involve turning TV-hour into book-hour (for other tasks you want to do instead of tasks you automatically do).

A P=NP approach to learning – Sometimes you have to learn things the long way, but sometimes there is a shortcut - where you could say, "I wish someone had just taken me on the easy path early on".  It's not a perfect idea, but start looking for the shortcuts where you might be saying "I wish someone had told me sooner".  Of course the answer is, "but I probably wouldn't have listened anyway", which is something that can be worked on as well. (2hrs)

Rationalist's guide to dating – Attraction. Relationships. Doing things with a known preference. Don't like unintelligent people? Don't try to date them. Think first, then act - and iteratively experiment; an exercise in thinking hard about things before trying trial-and-error on the world. Think about places where you might meet the kinds of people you want to meet, then use strategies that go there instead of strategies that flop in the general direction of progress.  (half written)

Training inherent powers (weights, temperatures, smells, estimation powers) – practice makes perfect right? Imagine if you knew the temperature always, the weight of things by lifting them, the composition of foods by tasting them, the distance between things without measuring. How can we train these, how can we improve.  Probably not inherently useful to life, but fun to train your system 1! (2hrs)

Strike to the heart of the question. The strongest one; not the one you want to defeat – Steelman not Strawman. Don’t ask “how do I win at the question”; ask, “am I giving the best answer to the best question I can give”.  More poetic than anything else - this post would enumerate the feelings of victory and what not to feel victorious about, as well as trying to feel what it's like to be on the other side of the discussion to yourself, frustratingly trying to get a point across while a point is being flung at yourself. (2hrs)

How to approach a new problem – similar to the “How to solve X” post.  But considerations for working backwards from a wicked problem, as well as trying “The least bad solution I know of”, Murphy-jitsu, and known solutions to similar problems.  Step 0. I notice I am approaching a problem.

Turning stimming into a flourish – For autists: making something presentable out of a flaw.

How to manage time – Estimating the length of future tasks (and more), covered in the notch system and in doing tasks in a different order, but presented on its own.

Spices – Adventures in sensory experience land.  I ran an event of spice-smelling/guessing for a group of 30 people.  I wrote several documents in the process about spices and how to run the event.  I want to publish these.  As an exercise - it's a fun game of guess-the-spice.

Wing it VS Plan – All of the what, why, who, and what you should do of the two.  Some people seem to be the kind of person who is always just winging it.  In contrast, some people make ridiculously complicated plans that work.  Most of us are probably somewhere in the middle.  I suggest that the more of a planner you can be the better because you can always fall back on winging it, and you probably will.  But if you don't have a plan and are already winging it - you can't fall back on the other option.  This concept came to me while playing ingress, which encourages you to plan your actions before you make them.

On-stage bias – The changes we make when we go onto a stage include extra makeup to adjust for the bright lights, and speaking louder to adjust for the audience which is far away. When we consider the rest of our lives, maybe we want to appear specifically X (i.e. confident, friendly), so we should change ourselves to suit the natural skews in how we present based on the "stage" we are appearing on.  Appear as the person you want to appear as, not the person you naturally appear as.

Creating a workspace – considerations when thinking about a “place” of work, including desk, screen, surrounding distractions, and basically any factors that come into it.  Similar to how the very long list of sleep maintenance suggestions covers environmental factors in your sleep environment but for a workspace.


Posts added to the list since then

Doing a cost-benefit analysis - This is something we rely on when enumerating the options and choices ahead of us, but something I have never explicitly looked into.  Some costs that get overlooked include: time, money, energy, emotions, space, clutter, distraction/attention, memory, side effects, and probably more.  I'd like to see a "How to X" guide for CBA. (wikipedia)

Extinction learning at home - A cross between intermittent reward (the worst kind of addiction) and what we know about extinguishing it, then applying that to "convincing" yourself to extinguish bad habits by experiential learning.  Uses the CFAR internal Double Crux technique: precommit yourself to a challenge, for example - "If I scroll through 20 facebook posts in a row and they are all not worth my time, I will be convinced that I should spend less time on facebook because it's not worth my time".  Adjust 20 to whatever position your double crux believes to be true, then run a test and iterate.  You have to genuinely agree with the premise before running the test.  This can work for a number of committed habits which you want to extinguish.  (new idea as at the writing of this post)

How to write a dating ad - A suggestion to include information that is easy to ask questions about (this is hard).  For example; don't write, "I like camping", write "I like hiking overnight with my dog", giving away details in a way that makes them worth inquiring about.  The same reason applies to why writing "I'm a great guy" is really not going to get people to believe you, as opposed to demonstrating the claim. (show, don't tell)

How to give yourself aversions - an investigation into aversive actions and potentially how to avoid collecting them when you have a better understanding of how they happen.  (I have not done the research and will need to do that before publishing the post)

How to give someone else an aversion - similar to above, we know we can work differently to other people, and at the intersection of that is a misunderstanding that can leave people uncomfortable.

Lists - Creating lists is a great thing, currently in draft - some considerations about what lists are, what they do, what they are used for, what they can be used for, where they come in handy, and the suggestion that you should use lists more. (also some digital list-keeping solutions)

Choice to remember the details - this stems from choosing to remember names, a point in the conversation where people sometimes tune out.  As a mindfulness concept you can choose to remember the details. (short article, not exactly sure why I wanted to write about this)

What is a problem - On the path of problem solving, understanding what a problem is will help you to understand how to attack it.  Nothing more complicated than this picture to explain it.  The barrier is a problem.  This doesn't seem important on its own, but as a foundation for thinking about problems it's good to have sitting around somewhere.

[image: what is a problem]

How to/not attend a meetup - for anyone who has never been to a meetup, and anyone who wants the good tips on etiquette for being the new guy in a room of friends.  First meetup: shut up and listen, try not to be too much of an impact on the existing meetup group or you might misunderstand the culture.

Noticing the world, Repercussions and taking advantage of them - There are regularly world events that I notice.  Things like the olympics, Pokemon Go coming out, the (recent) SpaceX rocket failure.  I try to notice when big events happen and think about how to take advantage of the event or the repercussions caused by that event.  Motivated to think not only about all the olympians (and the fuss leading up to the olympics), but all the people at home who signed up to a gym because of the publicity of the competitive sport.  If only I could get in on the profit of gym signups...

Least-good but only solution I know of - So you know of a solution, but it's rubbish.  Or probably is.  Also you have no better solutions.  Treat this solution as the best solution you have (because it is) and start implementing it; as you do that, keep looking for other solutions.  But at least you have a solution to work with!

Self-management thoughts - When you ask yourself, "am I making progress?", "do I want to be in this conversation?" and other self management thoughts.  And an investigation into them - it's a CFAR technique but their writing on the topic is brief.  (needs research)

instrumental supply-hoarding behaviour - A discussion about the benefits of hoarding supplies for future use.  Covering also - what supplies are not a good idea to store, and what supplies are.  Maybe this will be useful for people who store things for later days, and hopefully help to consolidate and add some purposefulness to their process.

list of sub groups that I have tried - Before running my local lesswrong group I partook in a great deal of other groups.  This was meant as a list with comments on each group.

If you have nothing to do – make better tools for use when real work comes along - This was probably going to be a poetic style motivation post about exactly what the title suggests.  Be Prepared.

what other people are good at (as support) - When reaching out for support, some people will be good at things that other people are not.  For example - emotional support, time to spend on each other, ideas for solving your problems.  Different people might be better or worse than others.  Thinking about this can make your strategies towards solving your problems a bit easier to manage.  Knowing what works and what does not work, or what you can reliably expect when you reach out for support from some people - is going to supercharge your fulfilment of those needs.

Focusing - An already-written guide to Eugene Gendlin's focusing technique that needs polishing before publishing.  The short form: treat your system 1 as a very powerful machine that understands your problems and their solutions more than you do; use your system 2 to ask it questions and see what it returns.

Rewrite: how to become a 1000 year old vampire - I got as far as breaking down this post and got stuck at draft form before rewriting.  Might take another stab at it soon.

Should you tell people your goals? This thread in a post.  In summary: It depends on the environment, the wrong environment is actually demotivational, the right environment is extra motivational.


Meta: this took around 4 hours to write up, which is ridiculously longer than usual.  I noticed a substantial number of breaks being taken - not sure if that relates to the difficulty of creating so many summaries or just to me today.  Still, this experiment might help my future writing focus/direction, so I figured I would try it out.  If you see an idea of particularly high value I will be happy to try to cover it in more detail.

People who lie about how much they eat are jerks

-10 Elo 08 August 2016 03:45AM

Originally posted here: http://bearlamp.com.au/people-who-lie-about-how-much-they-eat-are-jerks/


A weight loss journey is a long and complicated problem-solving adventure, and this is one small factor that adds to the confusion.  You probably have that one friend who appears to eat a whole bunch and yet doesn't put on weight.  If you have ever had that conversation, it goes something like:

"How are you so thin?"
"raah raah metabolism"
"raah raah I don't know why I don't put on weight"
"Take advantage of the habit"

Well, I have had enough.  You're wrong.  You're lying, and you probably don't even know it.  It's not possible (within a reasonable scope of human variation).  Calories and energy are a black box system: calories in, work out; the leftovers become weight gain, and a deficit is weight loss.  If a human could eat significantly more calories for the same amount of work and not put on weight, we would be prodding them in a lab for breaking the laws of physics on conservation of mass and conservation of energy.

So this is you: you say you gain weight no matter what you eat, and that's scientifically impossible.  Now what?  You probably don't mean to break the laws of physics (and you probably don't actually break them).  You genuinely, absentmindedly don't notice when you scoff down whole plates of food, or when you skip dinner because you didn't feel like it (and absentmindedly balance the calories automatically).  It's all the same to you because you naturally do that.

This is very likely about habits, and the natural habits that people have.  John, for example, has the habit of getting home, going to the fridge and making dinner, because it's usually the evening.  Wendy doesn't have the habit; she eats when she is hungry.  Not having a set mealtime sometimes means that she gets tired-hungry, ending up too exhausted to decide what to eat and too hungry to do anything else that would help solve the problem.  But Wendy doesn't get home and automatically cook dinner.  (Good things and bad things come from habits.)

Wendy and John go to a big lunch together.  They both eat 150% of the calories they should be eating for that meal, and they don't mind - enjoying food is part of enjoying life, and it was a fancy restaurant with good food.  Later that evening when Wendy gets home she doesn't feel hungry, and goes off to read a book or talk to friends on the internet.  Eventually she has a light snack (of 10% of her "dinner" calories) and heads off to bed, totalling 160% of the calories for the two meals - effectively under-eating for the day.  John, on the other hand, has his habit of heading home and making dinner.  Even after the big lunch, his automatic systems take over and he makes an ordinary dinner of 100% of his calories for that meal.  John's total for the day is 250% for two meals, or effectively half a meal extra for that day.

If Wendy and John do this every week (assuming the rest of their diets are perfectly balanced), John will have an upwards trajectory and Wendy a downwards one.  John might ask Wendy how she stays so skinny, and Wendy wouldn't know - after all, they eat about the same amount when they are together.
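The arithmetic of the example can be laid out explicitly, with meals measured as percentages of one ideal meal's calories (the figures are the ones assumed in the story):

```python
# Wendy and John's two-meal day, measured in percent of one ideal meal.
# The figures are the ones assumed in the story above.
MEAL = 100  # one ideal meal's calories = 100%

lunch = 150           # both of them overeat at the fancy lunch
wendy_dinner = 10     # Wendy's light snack later in the evening
john_dinner = 100     # John's ordinary, habit-driven dinner

wendy_total = lunch + wendy_dinner  # 160% across two meals
john_total = lunch + john_dinner    # 250% across two meals

wendy_balance = wendy_total - 2 * MEAL  # -40: under-eating for the day
john_balance = john_total - 2 * MEAL    # +50: half a meal extra
```

Repeated weekly, those small balances are exactly the upwards and downwards trajectories described above.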

No one understands this.  


What can we do about it?

1. We can hire scientists to follow both J and W around for a week and write down every time they eat something. (this is impractical - maybe if we are in an isolated environment like a weekend retreat it would be easier to do this)
2. We can get them to self report via an app (but people are usually pretty bad at that)
  3. We can try asking more specifically - "what do you eat in a day?", or "what have you eaten since this time yesterday?" - and gather data points to try to build a picture of what a person eats.
4. We can search for people with similar habits around food to us and ask them how they stay healthy.
5. We can look for people with successful habits around food, ask them for advice and then figure out why that advice works, and how to make that advice work for us.

On the noticing level: you should notice that every single thing you eat adds to your caloric intake, and every single piece of work you do adds to your burn.  It's easier to eat another piece of chocolate (for 5 seconds) than to run another 15 minutes to burn that chocolate off.  If something is not working towards your dieting success, it's probably working against it.
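The asymmetry in that last point is stark when you put numbers on it.  Using the figures from the paragraph above (5 seconds to eat, 15 minutes of running to burn it off - the exact calorie cost varies by person and chocolate, so treat this as illustrative):

```python
# Eating vs. burning: timing figures taken from the paragraph above.
# Actual calorie costs vary; this is an illustration of the asymmetry.
seconds_to_eat_chocolate = 5
minutes_to_run_it_off = 15

seconds_to_run_it_off = minutes_to_run_it_off * 60
# Undoing the chocolate takes far longer than eating it.
effort_ratio = seconds_to_run_it_off / seconds_to_eat_chocolate
```

The ratio is 180 to 1 in favour of eating, which is why "don't eat it" beats "burn it off later" as a default strategy.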


Meta: this took one hour to write.

The meta-strategy

-6 Elo 02 August 2016 11:08PM

Original post:  http://bearlamp.com.au/against-the-five-love-languages/


You are in a relationship; someone makes some objection about communication; you don't seem to understand what's going on.  Many years later you find yourself looking back at the relationship and reflecting with friends.  That's when someone brings up The Five Love Languages.  Oh, deep and great and meaningful secrets encoded into a book.

The 5 languages are: 

  1. Gifts
  2. Quality time
  3. Words of affirmation
  4. Acts of service (devotion)
  5. Physical touch (intimacy)

Oooooh, if only you had spent more energy trying to get quality time, and less effort on gifts, that relationship could have been saved.  Or the other way - the relationship was doomed because you wanted quality time and they wanted gifts as a show of love.

You start seeing the world in 5 languages: your coworker offering to get you a coffee is a gift; your boss praising your good work is words of affirmation.  You start thinking like a man with a hammer.  Strictly speaking, I enjoy man-with-a-hammer syndrome - I like to use a model to death, and then pick a new model and do it all again.


What I want you to do now is imagine you didn't do that.  Imagine we cloned the universe.  In one universe we gave you the love-languages book and locked you in a room to read it.  In the second universe we offered to run you through a new relationship-training exercise: "It's no guide book on how to communicate with your partner, but it's a pretty good process".  We lock you in a room with a chair, a desk, some paper and pens (few distractions) and order you to derive some theory and ideas about how to communicate with your partner.

Which one do you predict will yield the best result?


When I ask my system 2, it is fairly happy with the idea that using someone else's model is a shortcut to finding the answers.  After all they pre-derived the model.  No need to spend hours working on it myself when it's all in a book.

When I ask my system 1, it thinks that the self-derived system is about a billion times better than the one I found in a book.  It's going to be personally suited, it's going to be sharp and accurate, and bend to my needs.


Meta-strategy

Which is going to yield the best result for the problem? Self-derived solutions to all future problems? Book-derived solutions for all problems?

I propose that the specific strategy used to answer the problem - which depends on the problem (obviously sometimes 1+1 will only be solved with addition, and solving it with subtraction is going to be difficult) - is mostly irrelevant compared to having the meta-strategy.

In the original example:

My relationship has bad communication, so we end the relationship.

The meta-strategy for this case:

My relationship has bad communication, how do we find more information about that and solve that problem.

In the general case:

I have a problem, I will fix the problem.

the meta strategy for the general case:

I have a problem, what is the best way to solve the problem? 

Or the meta-meta strategy:

I have a problem, how will I go about finding what is the best way to solve the problem? 


I propose that having the meta-strategy, and the meta-meta-strategy, is almost as powerful as the true strategy.  On the object level for the example problem, instead of searching for the specific book that is The Five Love Languages you could instead search for any book about the problem area.  Any book is better than no book.  In fact I would make a hierarchy:

The best strategy > a good strategy > any strategy > no strategy
The best book > a good book > any book on the topic > no book on the topic

You encounter a problem in the wild - what should you do?

  1. Try to just solve the problem
  2. Try any strategy (with a small amount of thinking - a few seconds or minutes)
  3. Search for a better strategy

Depending on the problem, the time, the real factors - the best path forward may be to just "think of what to do then do that", or it may be to "stop and write out a 10 page plan before executing 10 pages worth of instructions".


Should you read the five love languages book?  That depends.  What is the problem?  and have you tried solving the problem on your own first?

Meta: this took an hour to write.

My table of contents: lesswrong.com/r/discussion/lw/mp2/my_future_posts_a_table_of_contents/ (which needs updating)

The Problem (TM) - Part 2

-6 Elo 02 August 2016 08:01AM

From part 1: http://bearlamp.com.au/the-problem-tm-analyse-a-conversation/
part 1 on lesswrong: http://lesswrong.com/r/discussion/lw/nsn/

(this) part: http://bearlamp.com.au/the-problem-analyse-a-conversation-part-2/

I had a chat with a person who admitted to having many problems.  I offered my services as a problem-solving amateur, willing to try to get to the bottom of this.  Presented is the conversation (with details changed for privacy).

I had my first shot at analysing the person's problems and drilling down to the bottom.  I am interested in what other people have to say is the problem.  Here we study the meta-strategy of how to solve the problem, which I find much more interesting than the object level analysis of the problem and how to solve it.

I don't think I got to the bottom of the problem, and I don't think I conducted myself in a top-notch capacity, but nevertheless I wonder if you have any comments about what IS TheProblem(tm), how you came to that conclusion, and what can be done about it (for the benefit of this person and anyone with a similar problem).


What is actually the problem?  I have a theory, but I also wanted to publish this without declaring my answer.  I will share my ideas in a few weeks but I want to know what you think and how you came to that answer.


This is a new style of post so I expected some responses along the lines of:

I considered downvoting. I opted instead to ignore after reading the preamble. - buybuydandavis

That's fine.  It was literally a chat log.  Not for everyone.

I also got some interesting and relevant responses.  There are several and they overlap so I decided it's best to answer with another post.


Many people narrowed down to a few particularly alarming examples:

  • The most alarming part of that conversation for me was "A few weeks ago I punched a housemate in the face ten times, breaking her nose;" - Strangeattractor
  • Is it really the most alarming part? I would think suicide ideation more so. - Romashka
  • Treatment for mental illness (and possibly organic brain trauma) seems priority #1 here... -CronoDAS
  • Zebra was extremely bad at imagining good outcomes in a way which led to him taking action-- in other words, probably depression. - NancyLebovitz 

And then this:

There are lots of problems. If I had to pick only one, it would be that you seem to think there is a single, simple problem that can be identified from this transcript. - Dagon

It sounds a bit as if you are implicitly proposing a principle like "there is always a single underlying problem, if you can only find it" - gjm


To gjm first:

I present "The Problem (TM)" because I suspect in this case there is an underlying problem - not always.  Often when problem solving we try to figure out what the lowest-hanging fruit is, or what one thing can be changed first.

There was a scene in Doctor Who - The End of Time, Part 2 - where the Doctor is trapped in space on a spaceship that doesn't work.  Instead of giving up he just starts fiddling with the heating (knowing what he is doing).  Other characters insist that everything is hopeless, and lo and behold, as he fixes the heating, that fixes the engine and the computers, and everything whirs back to life and we continue to the next epic fight scene!

Now, generalising from one fictional example: as rationalists we wish that there really was one thing you could fix, which would cause the fixing of the next thing, and a chain of events that fixes everything.  When we look at the accelerating factors, we wish this is how it happens:


We'd be dreaming to think that such a thing can actually happen.  After 41 days we are at 1.5x where we started; after 70 days, 2x; and after 111, 3x.  Which is just nuts.  What if I told you that in a month of nudging 1% a day you'd be nearly 1.35x where you started?  Not likely, not going to happen.

Day  1.01^Day       Day  1.01^Day
1 1.01 51 1.6610781401
2 1.0201 52 1.6776889215
3 1.030301 53 1.6944658107
4 1.04060401 54 1.7114104688
5 1.0510100501 55 1.7285245735
6 1.0615201506 56 1.7458098192
7 1.0721353521 57 1.7632679174
8 1.0828567056 58 1.7809005966
9 1.0936852727 59 1.7987096025
10 1.1046221254 60 1.8166966986
11 1.1156683467 61 1.8348636655
12 1.1268250301 62 1.8532123022
13 1.1380932804 63 1.8717444252
14 1.1494742132 64 1.8904618695
15 1.1609689554 65 1.9093664882
16 1.1725786449 66 1.9284601531
17 1.1843044314 67 1.9477447546
18 1.1961474757 68 1.9672222021
19 1.2081089504 69 1.9868944242
20 1.2201900399 70 2.0067633684
21 1.2323919403 71 2.0268310021
22 1.2447158598 72 2.0470993121
23 1.2571630183 73 2.0675703052
24 1.2697346485 74 2.0882460083
25 1.282431995 75 2.1091284684
26 1.295256315 76 2.130219753
27 1.3082088781 77 2.1515219506
28 1.3212909669 78 2.1730371701
29 1.3345038766 79 2.1947675418
30 1.3478489153 80 2.2167152172
31 1.3613274045 81 2.2388823694
32 1.3749406785 82 2.2612711931
33 1.3886900853 83 2.283883905
34 1.4025769862 84 2.306722744
35 1.416602756 85 2.3297899715
36 1.4307687836 86 2.3530878712
37 1.4450764714 87 2.3766187499
38 1.4595272361 88 2.4003849374
39 1.4741225085 89 2.4243887868
40 1.4888637336 90 2.4486326746
41 1.5037523709 91 2.4731190014
42 1.5187898946 92 2.4978501914
43 1.5339777936 93 2.5228286933
44 1.5493175715 94 2.5480569803
45 1.5648107472 95 2.5735375501
46 1.5804588547 96 2.5992729256
47 1.5962634432 97 2.6252656548
48 1.6122260777 98 2.6515183114
49 1.6283483385 99 2.6780334945
50 1.6446318218 100 2.7048138294
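The table is just 1.01 raised to the power of the day number; the milestones quoted above (1.5x at day 41, 2x at day 70, 3x at day 111) can be checked directly:

```python
# Daily 1% compounding: value after n days of improving by 1% a day.
def compound(days, daily_gain=0.01):
    return (1 + daily_gain) ** days

# Milestones from the text: ~1.5x at day 41, ~2x at day 70, ~3x at day 111.
day_41 = compound(41)    # ~1.504
day_70 = compound(70)    # ~2.007
day_111 = compound(111)  # ~3.018
```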

Nonetheless we pursue it.  It might be important, too, to look for the problem at the bottom; otherwise we might find ourselves bikeshedding about the trivial problems.

This week, while making the emergency room project, I spent some time looking at other data - specifically the (Australian) National Drug Strategy Household Survey data, where the first question on the survey was "When people talk about “a drug problem”, which is the first drug you think of?".  What kind of information is that likely to yield?  Is it going to return the drug which is the biggest problem in the country?  Or maybe it's going to yield whatever the media feels makes a good story (say, ice because it's dangerous, weed because it's controversial, or alcohol because it's the most common)?  Or is it going to yield the one with the most personally damaging reputation (tobacco > alcohol)?

In reality, is the government going to take action on what people think is the biggest problem drug?  Or should the government instead take action on the drug actually killing people?  Are we bikeshedding on this issue?

What actually is the biggest problem?  It's a relevant question - certainly not worth asking every time, but sometimes it's worth digging into.


To Strangeattractor, Romashka, CronoDAS, NancyLebovitz:

You are not wrong.  The violence, mental health, potential head wound, depression, inability to leave the house, lack of friends, weight problems and exercise problems are all very important problems to tackle.  And I will come back to this.


Some analysis:

I started with simple background questions - history, etc. - knowing that anything being brought up is probably brought up because it has special relevance to the topic.  It's almost like a job interview: when they ask you for your top 10 characteristics, they don't expect you to tell them how you can fry a perfect egg (if that's not relevant to the task at hand).  There is a need to make certain assumptions about the truth and validity of the information.

I was previously very depressed, and then recovered for a few years.

Definitely relevant, sets the scene.  I asked, "So you are currently feeling depressed"

Yes. Possibly as a symptom of bipolar disorder (I’ve recently started having manic episodes), or possibly not–I’ve never been diagnosed with that, and until recently had never had issues with mania.

A while back I tried reading the DSM.  While it really doesn't tell you much about reality, it is one instance of a map of the world - just like legislation, instruction manuals, guide books, guides to how a project was done (this is an example of a map of how my process works), and more.  The interesting thing about what the DSM has to say about bipolar disorder is that there is a requirement for episodes in both the upwards (manic) and downwards (depressive) directions, with mania often affecting sleep and giving people feelings of godliness or invincibility.

So who cares?  Well, on the one hand, using this knowledge to ask about sleep is a signal that I at least know a little of what is being talked about.  On the other hand, I think I got lucky about whether sleep was relevant.  (And on the third hand, sleep is a very common problem for people generally, and worth asking about.)

I guess immediately I feel quite isolated, very stressed, and don’t know how to proceed forward.

The idea of "feeling stressed" is a complicated one.  On some level, you have an understanding of what "I feel stressed" means.  But on another level - if you spend enough time around diversely stressed people - you get the feeling that there is some kind of miscommunication going on.

[Image: "sorry for the mess" meme]

Kind of like this one here.  It's a map and territory problem.  One person's map of stress is not the same as another person's map of stress.


ELiot: Is there a specific stress?

I guess; loneliness, numerous tensions with my girlfriend, some financial issues (to a large extent a symptom of the recent mania), extreme dissatisfaction with myself and especially my own appearance, frustrations with daily life, and a general dissatisfaction with the world.

So there's a list.  But the problem I find with vague lists is that it's easy to see there's a problem but harder to address any one part and make a difference.  I personally have list-making habits - something I will one day make a post about - which is where this comes from:

I am going to write the list out

1. Loneliness
2. Girlfriend tension
3. Financial issues
4. Self + appearance
5. Daily life
6. Dissatisfied with the world

Which grew a bit by the end of the conversation.


ChristianKl rightly criticised me for saying this:

ELiot
Would you like to pick a specific one from the list to talk about?

I can pick one if you like

(Why do you offer to pick the specific issue? Agency is important for getting out of depression.)  Unfortunately I stripped out the time-stamps, which would explain why I offered to pick one.  There are three parts to this problem.

  1. Not picking one would likely lead to more complaining about the issues without solving anything.  If Z was unmotivated enough to be unable to pick one (a worse failure mode) then picking any would be better than nothing.
  2. Leading with any of them would be fine, because I planned to cover a few of them and the conversation would naturally tend to flow onto the bigger problems anyway (as it did)
  3. If one cannot decide between them, they are probably all equally relevant-challenging-problematic and equal gains would be made on any of them from a bit of effort.

As it was, it was relatively easy for Z to pick one.  I generally wouldn't pick one, even if I suggested that I would.  And well done ChristianKl for spotting this.


Is that bad?

I often find myself not eating until nighttime, or sometimes not eating at all, due to wanting to avoid those stressors.

An important question - is that a bad thing?  I repeat this whenever I see an unjustified badness, in the sense that it should be up to the individual to decide what is or is not bad.  In theory, not eating is a bad thing - possibly leading to mood swings from hunger or sugar levels, and who knows what else.  But that's what I think, not what Z had to say about why it's bad.


Is this correct: you feel stressed about not wanting to leave to go buy food. Then you feel stressed about not buying food as well.

And I guess I’m kind of lonely.

later

When I go out sometimes it’s ok, and sometimes I realise the people around me are crap and I am too and I get even sadder.

later still

And really I don’t want to be staying at home, as that’s also very stressful.

later still.

There’s nothing much I can identify that I really want to do.

and then:

Also I’m frequently very exhausted, and it’s often hard to work up the energy to do those things.

and on:

Well, I really dislike being alone, but I don’t much like most people.

(I think that's enough for now)

So Z is lonely, but doesn't want to go out because sometimes it's crap, but doesn't want to stay home, but doesn't have anything they really want to do, but is also very exhausted and doesn't have the energy to do things, but really dislikes being alone...

If we looked at the loneliness it wouldn't really improve the state of the problem, because the loneliness isn't the big deal.  If we looked at the going-out problem, that wouldn't be it, because Z wants to go out but also doesn't like staying home.  And even if we solved the going-out problem, that wouldn't do it, because they don't really have anything they want to do; and if we found something they want to do, that wouldn't fix it, because they don't have the energy to do that thing.  So what if we solve the energy problem?

In the hope that once we solve the energy problem they will have the energy to go out, and they won't need to be stuck at home, and they won't feel alone, and they can go on to live a happy and prosperous life.  No.  That's not it.  Once we dig to the bottom of the energy problem we get to an absence problem:

I kind of zone out, frequently. People find that scary.

And down the rabbit hole we go.

I want to be clear that each of these problems is valid, each is the most important problem, and each needs to be solved to dig Z out of the hole.  I don't want to disparage the ongoing discussion and identification of problems until we can really get to the root of all things - fix the heating and whirr the spaceship into action!

That's not how problems work.  Or at least, not how this one works.  At the bottom of every problem is another problem (this reminds me of the song "There's a Hole in My Bucket" - not a coincidence).  We also have a term for getting sidetracked from the real work at hand - yak shaving.

But wait!  What is the real problem we should be working on?  If all this talk is just yak shaving our way down the river - how do we know what to actually work on?


The problem

In this case the pattern is certainly not repeatable - I can't say how often it happens - but I wanted to identify this very clear problem, as it sneakily tries to evade capture.  The problem is exactly this: the process of solving the problem has become part of the problem.  We can't solve the loneliness without first solving the home problem; but first, having nothing to do; but first, energy; but first, the absence feelings.  It's a problem spiral.

What next?

Let's say you or a friend has a problem spiral.  You start talking about it and you spiral downwards, every problem being worse than the one before, until you feel absolutely terrible, develop an ugh field, and resolve to do nothing at all.  (Probably a familiar pattern.)

You get into this pattern and nothing gets solved.  To break out of it, I propose a known solution: the scientific method.  Pick one of the problems and set a 5-minute timer (or a 20-minute pomodoro, or a whole day to work on it).  Your task is to improve the state of this problem: conduct tests, observe what happens.  Say it's the loneliness problem, and it sucks because you don't want to leave the house.  That's okay.  Keep trying.  Don't try to solve the house-leaving problem right now; just work on the loneliness.  Try talking to people about it, try therapists, try leaving the house, try online forums, try anything and everything you can think of.  Take notes.

Notes are evidence, evidence is how we make progress.

Your task, should you choose to accept it, is to focus on making some kind of progress on any one of the many problems.  Then, when you are sick of this one, or tired, or done, or successful, pick the next one.  Repeat, fail, repeat, succeed, repeat.  Iterate.

I propose a 3-part solution to this one meta-problem:

  1. pick something to work on
  2. work on it
  3. iterate

It's unlikely that you will solve any one problem the first time around.  If you did - take your winnings!  Walk away!  On to the next one.  But if the situation is (as can be expected) a complicated problem - one that you already couldn't just solve, which led to the stacking up of layer upon layer of problems - it's going to take some time.

Keep at it.  Good luck.


Credit goes to Dagon: "There are lots of problems. If I had to pick only one, it would be that you seem to think there is a single, simple problem that can be identified from this transcript."

Well done.


Meta: this took two days to write, and the better part of 3+ hours.

If you are interested in a conversation, send me a message.  No guarantees we can solve your problems, but maybe we can try.

This has been a new style of post, not for all - thanks for reading.

The Problem (TM) - Analyse a conversation

-6 Elo 26 July 2016 11:03AM

Originally published here: http://bearlamp.com.au/the-problem-tm-analyse-a-conversation/
Part 2: http://lesswrong.com/r/discussion/lw/nt8/the_problem_tm_part_2/
Part 2: http://bearlamp.com.au/the-problem-analyse-a-conversation-part-2/

I had a chat with a person who admitted to having many problems themselves.  I offered my services as a problem solving amateur, willing to try to get to the bottom of this.  Presented is the conversation (With details changed for privacy).

I had my first shot at analysing the person's problems and drilling down to the bottom.  I am interested in what other people have to say is the problem.  Here we study the meta-strategy of how to solve the problem, which I find much more interesting than the object level analysis of the problem and how to solve it.

I don't think I got to the bottom of the problem, and I don't think I conducted myself in a top-notch capacity.  Needless to say, I wonder if you have any comments about what IS TheProblem(tm), how you came to that conclusion, and what can be done about it (for the benefit of this person and anyone with a similar problem).


Zebra
Hey

ELiot
Where would you like to start?
Do you want to share about your history?

Zebra
I was previously very depressed, and then recovered for a few years. While I'm glad I was able to have those couple years, I don't think they were worth suffering through the depression, and I didn't at the time, when I didn't think it would return.

Zebra
(Though it hasn't returned as bad as it was.)

ELiot
So you are currently feeling depressed

Zebra
Yes. Possibly as a symptom of bipolar disorder (I've recently started having manic episodes), or possibly not--I've never been diagnosed with that, and until recently had never had issues with mania.

ELiot
How much are you sleeping? One Indication of bipolar swings is total sleep

Zebra
The last couple days I've slept okay, but when I had more manic symptoms sleep was very intermittent. A few weeks ago I punched a housemate in the face ten times, breaking her nose; at that point, I'd not slept in two days.

ELiot
Sounds like a bad event.

Zebra
I guess immediately I feel quite isolated, very stressed, and don't know how to proceed forward.

ELiot
Is there a specific stress?

Zebra
I guess; loneliness, numerous tensions with my girlfriend, some financial issues (to a large extent a symptom of the recent mania), extreme dissatisfaction with myself and especially my own appearance, frustrations with daily life, and a general dissatisfaction with the world.

ELiot
Manic up should correlate with little sleep, manic down with extra sleep. Manic up should also come with a variation on _feeling invincible_

Which of the things in that list do you think can't change?

Zebra
I suppose they're all changeable if you apply enough effort, but that seems like a lot of work, and frankly I've never seen much in the world that seems worth it.

As I said, I've gotten better, to some extent, previously.

Even after I had already gotten better and I no longer wanted to suicide, I wished I had previously, because even though life then was fine, it just wasn't worth what had gone before.

I don't feel invincible really.

ELiot
When in manic up states?

Zebra
When in manic states I still don't feel invincible.

ELiot
If you could remove the problems listed do you think you would want to live?

Zebra
All of them? Yes, if I could do some magically, or at a reasonable cost.

ELiot
I would say that is a good thing. But it depends on your goals.

I can offer ideas about working with those problems to make them better, but not if you don't want that.

Zebra
Well those would be good.

ELiot
Would you like to pick a specific one from the list to talk about?

I can pick one if you like

Zebra
Uhm, you can pick. I'm not sure which one would be most imminently solvable.

ELiot
I am going to write the list out

1. Loneliness
2. Girlfriend tension
3. Financial issues
4. Self + appearance
5. Daily life
6. Dissatisfied with the world

Zebra
Yep, that's most of it.

ELiot
What burdens do you currently have on your life? I. E. Supporting a child, have to show up at work each day. Etc.

Thinking about number 5 - Regular commitments

Zebra
Not a whole lot really. I've no job or school (family money, though not a large amount). My girlfriend is financially dependent on me at this point, though she's supposed to be starting a job this month.

To be honest even going downstairs to buy food, or really even to talk to a delivery person on the phone, feels like a huge burden.

ELiot
So in terms of pressure on your daily life?

Zebra
I often find myself not eating until nighttime, or sometimes not eating at all, due to wanting to avoid those stressors.

ELiot
Is that bad?

Zebra
Well, yeah. It feels very negative and causes me stress and I really don't feel life has much to offer in return for even minor inconveniences.

ELiot
is there a reason that not eating is a bad thing to do for you?

Zebra
I don't see life as particularly positive, really, and just want it to be over with so I don't have to bother with this crap every day. On the other hand, actually going about killing yourself is fucking scary.

So I guess I'm trying to find some way out of that conclusion so I won't have to face the immediately distasteful action of actually offing myself, even though it's probably preferable to suffering through a lifetime of even minor annoyances.

ELiot
Is this correct: you feel stressed about not wanting to leave to go buy food. Then you feel stressed about not buying food as well.

Zebra
Yes.

And I guess I'm kind of lonely, and even minor inconveniences, when they have no positive aspects in between, eventually get you really, really down.

I feel like what I do most days is just wait, be sad and lonely, be slightly annoyed, and wait and cry and be lonely more.

When I go out sometimes it's ok, and sometimes I realise the people around me are crap and I am too and I get even sadder.

ELiot
Here is how I see this very limited problem. Without looking at other things just yet.

When making the first choice, either stay home and not buy food or leave and buy food you choose the less stress option. To stay home. I see that as a win. You successfully made the right choice to avoid the immediate stress. Then later you decide that going out is more important/useful/(Less stress) than staying home and not having food. Seems like you also win by carrying out the choice to leave and get food have less stress.

You appear to be stressing yourself out over two reasonable choices. I would suggest that you have done well to make both the choice of staying home and later the choice to leave for food.

Zebra
The stress of not going is physiological rather than psychological, so I don't think looking at that differently can really fix it.

And really I don't want to be staying at home, as that's also very stressful.

I'm just not sure what else to do...

ELiot
In terms of where to go? Or in terms of how to spend your time?

Zebra
Both

There's nothing much I can identify that I really want to do.

ELiot
I can suggest options down those paths

Zebra
ok.

ELiot
I don't know where you are geographically, but if we consider specifically where to go and what to do near where you are;

I would look at; google, "things to do in *city*" as well as looking at meetups in city. As well as looking for parks, museums, monuments, walks, local history, pretty geography, public spaces I. E. Libraries, evening classes, sports to play

Zebra
I'm in Hong Kong.

I go to meetups sometimes.

ELiot
Generally the idea of exploration of the place

Also temples, religious places, hikes

Zebra
As for meetups, sometimes you meet interesting people, but often it's stressful dealing with idiots. And most people are idiots.

ELiot
You are mostly allowed to do what you like with your time. In terms of going places and later going home to sleep etc.

A large fraction of people are idiots

Zebra
And the more interesting people are often difficult to connect with more than superficially.

ELiot
"Allowed to" is a funny idea. No one needs to give you permission to do what you like.

Going to add 7. Social strategy

Zebra
True. I just don't feel like I _like_ much.

Also I'm frequently very exhausted, and it's often hard to work up the energy to do those things.

ELiot
Do you think you have tried to find many things you like or do you think the bottle neck lies before that? In trying to find them?

If you do nothing (because you are tired) is that a problem?

Zebra
Yeah, doing nothing all the time sucks. If I stay home I feel like I'm in jail...

but if I go out I feel like I've been sent as a labourer to Australia.

ELiot
At some point the desire to stay home because you are tired should weigh up against the desire to go out and feel like you are not in jail. That is a fine time to leave, feeling bad about both staying at home and leaving the house sounds like a recipe for displeasure either way... Does that make sense?

Zebra
Well it is, obviously, which is why I feel like I'm in a no-win situation, and want to die.

(or at least part of it)

I mean, occasionally there are meetups and stuff which I go to, and those are ok, but really I have so much free time and since my mental health issues started I've alienated almost everyone I knew.

And that just increases the stress and makes it difficult to make new friends.

ELiot
I would be going down the path of tracing that feeling of bad to its source because it's not really about staying or going it's about that bad pressure that appears self imposed.

Do you feel like you _should be doing_ things?

I.e. Going out

Zebra
Well, I really dislike being alone, but I don't much like most people.

I think that's what it boils down to.

And yes, I get that that might not be a healthy state to be in, but again, that I'm not in a healthy state has already been established.

ELiot
Do you know what part or kind of social interaction you like? When you say "dislike alone" what is "not alone"

Zebra
Well, I like talking with friends and drinking and doing stuff, but often it's difficult to make new friends.

ELiot
Conversation with new people is "not alone"

And you sometimes feel alone when you hang around old friends

Zebra
Yes, that's true.

ELiot
Can you financially afford to go drinking and doing stuff?

Zebra
I guess new people I meet are often very disappointing, and more than that, even when they aren't, I myself have a lot of recently developed mental issues it takes a lot of effort to control.

I kind of zone out, frequently. People find that scary.

ELiot
What kind of new people would you like to meet?

Zebra
Uhm, I dunno. It's hard to specify really.

ELiot
Is your zoning out actually absence or is it more like daydreaming?

Zebra
Absence

Or sometimes I just sort of feel sad.

But usually no internal thoughts associated.

I can sort of afford to go drinking and stuff.

ELiot
Do you recall things that happen while you are absent?

Zebra
Mostly not. I can sort of remember it happening but super vague.

ELiot
Do you feel like you are an automaton - following a path you were on, and then you zone back in?

Zebra
It's not like in the middle of a sentence, but people notice that I look dead and then sometimes I don't respond until they call me a couple times, though sometimes I can respond immediately.

More like my energy's just gone, I guess.

Sometimes I'll lose track of the conversation, even when I myself am speaking.

That's not as common recently, though.

ELiot
I was going to say I suspect an absence seizure. It came up in the LW open thread this week. Let me get you a link

Zebra
My mother claims I told her I was hospitalised for a head injury around the time my mental health problems started worsening.

I can't remember the incident, though, and she has not much in the way of specific details.

ELiot
http://lesswrong.com/r/discussion/lw/niv/open_thread_apr_18_apr_24_2016/d8ow

If you have something like that I am sure it makes everything worse

Zebra
I had depression before that, but as I said, it had mostly gotten better. On the other hand, there were a lot of issues in my life around the same time which may have led to the recurrence of symptoms as well.

ELiot
Okay, what kind of person would you like to meet?

Zebra
Hmm, previously I wanted to see a neurologist because my symptoms were much worse, but they've lessened now.

ELiot
There is medication to reduce seizures to nearly nothing

Which might help

Zebra
Well, an intelligent person, but those are rare; or someone who's fun, but finding one who's willing to put up with my lethargy and depression is hard; or someone who's nice and not a complete idiot.

ELiot
It also might help to keep a diary of what you do each day to try to keep track of how often they happen

Zebra
Maybe. I'm not at all certain I'm having seizures, though.

I have pretty bad memory, too.

ELiot
Where might you find intelligent people?

A brain scan would tell you if you are or are not having seizures

Zebra
I have no idea. I guess some of the intellectually-focused meetup groups have some, but not all that many.

Yeah, I've been meaning to go to a neurologist, but I frequently fail to get around to stuff.

ELiot
I would suggest university campus as a viable place

To find smart ones

Zebra
Maybe, but I'm not in university and probably don't have the effort to enter.

Also, somewhat smart, but not very smart, people really annoy me.

Universities have a lot of those.

ELiot
Campuses here are just places you can walk into, not sure what it's like there

If you want to get out of the house and see something, universities are a nice place to visit

Zebra
Hmm, I guess I could try.

Many offer classes to the public very cheaply.

ELiot
You can probably also work out how to sneak into a lecture anyway - they usually don't check the roll

Any topic of study fancy your interest? To sneak into a lecture about

Zebra
Hmm, not sure. Linguistics or history might be fun.

CS would probably just be a recap of basic material.

ELiot
You can usually find course details online and work out where the lectures are and just kinda walk in and sit down - For a bit of fun

Zebra
How does that translate to meeting people though?

(If it's not obvious, I've never been to uni.)

ELiot
Chat to people if you want to. Lectures have breaks, uni tries to encourage social groups too usually, barbecues and stuff

If you make yourself look approachable and friendly people will talk to you. It's how I avoid approaching others. I wear funny hats and strangers talk to me

Zebra
Really? Haha, what sort of hats?

ELiot
Pirate hat, top hats, Stetson,

I have about 50 hats

Different ones all the time

That's on the topic of appearance tho

Zebra
I don't look very approachable now :( Since I became ill again my personal health and hygiene have done very poorly.

ELiot
Do you like being hygienic? Indifferent?

Zebra
Well, I like being hygienic, but getting to that state is difficult.

Also, I've probably gained 40kg since then, so even if I was it's probably all for naught.

ELiot
What contributes to that state? For me it's having a shower and brushing my teeth.

Maybe deodorant too. And clean clothes

Zebra
Well, those things.

Now I've got so fat it's hard to buy clothes :-/

ELiot
I would say you can work on that

Both the fat and the clothes

If you want to

Exercise would help you, leaving the house to go for a walk would help you, you don't need anywhere to go other than around a block or something

Do you track your weight?

Zebra
Yes, but it's very difficult.

ELiot
Is it still climbing or staying where it is?

Zebra
I've tried some stuff. Fasting, methamphetamine, etc., but I was never able to really reduce it.

ELiot
Difficult to track? To walk? To exercise? To buy clothes?

Weight loss is difficult, Yes

Zebra
I just don't have the energy to exercise. Even when I was taking methamphetamine I didn't have the energy for it.

ELiot
Would you consider paying for a service that helped you lose weight?

Zebra
Right now I think it's not climbing, but I didn't buy a new scale when my last one broke.

Yes, if I thought it had reasonable chance of being effective.

ELiot
An option would be to look at what is available

Near to where you live

Zebra
I don't think there are any drugs that work as well for weight loss as meth, though, and that was not effective enough.

I don't know what else such a service could provide really.

I mean, I _know_ you need to exercise and eat healthy, but I just haven't been able to do it.

ELiot
Commitment, a gym, a trainer setting a program

There are greater experts in the field of weight loss than I

Zebra
Honestly, I've tried so much, I do not realistically think I would continue to follow through with that.

ELiot
Okay

Zebra
Other than the very deepest depths of depression (which I still haven't fallen to this time around), I've never experienced anything as unpleasant as exercise.

ELiot
I can offer ideas about weight loss and exercise but maybe another time.

What types of exercise?

Zebra
I suppose there are some illnesses which might do better than meth, but trying to induce those makes me feel very squeamish.

Pretty much anything.

It's just so hot and icky and tiring.

ELiot
Oh! Yes, a problem with your geography

Other geographies are not as hot and sticky. Even that has solutions. My exercise is walking, running, swimming, unicycle, circus skills, rock climbing, ice skating, laser tag, and trampoline, I also did pole dance for a while. Also I would kayak and hike more if I had more opportunities...

Zebra
True. I had some fun doing outdoor type stuff in the Southwestern US.

Moving has its own host of problems, though.

ELiot
Other sports I have done include table tennis, actual tennis, archery...

Zebra
The primary one being that I don't know anyone anywhere else.

ELiot
I don't imagine moving will solve all your problems

Zebra
Except my mother in Florida, USA.

ELiot
Yes I was going to say, it would certainly make loneliness harder

Especially when you don't currently know how to make new friends very well

You can exercise at night, find an indoor pool to swim in maybe.

Zebra
Yes. I did the moving thing once, and it was probably good at the time, but I had fairly exceptional circumstances then which I don't have now.

There's a pretty nice pool in my condo, but I get tired. Swimming is exhausting.

And I'm very self-conscious doing exercise around others.

ELiot
Yes.

Zebra
That's probably equally as serious an issue as the exhaustion.

ELiot
Night time for self conscious

Take a friend or girlfriend?

Moral support?

Zebra
Makes me more self conscious :(

ELiot
You need support network not criticism

Do you trust these people?

Do you think you could track how far you swim and try to increase laps or so?

The idea being to measure progress and feel like you are going somewhere

Zebra
I don't think I've ever actually trusted anyone, even as a child.

ELiot
That is a different problem

Zebra
Yeah. I have a lot of problems. :(

ELiot
That is okay for a place to be

Better to know than not know.

To be more specific you have a lot of problems *at the same time*

Which is making it hard to work out what the biggest one is, and where to start

Zebra
Yes. That kind of sucks.

ELiot
It appears that at the bottom of each problem there is a slightly different problem, also with a solution but one that too needs implementation

I am confident that this can all be fixed, I am also confident that you can enjoy the journey of doing so.

Perhaps you might benefit from writing down the problems until you have a clearer picture for yourself

Zebra
Yes, that's how it feels to me too. There's a large web of problems which are fixable with enough effort, but inter-related so hard to fix one at a time, and I don't really feel like I have the effort to do it all at once, nor that it would be worth it.

ELiot
As you talk to me you are clarifying the problems, I imagine that can help to identify them to help solve them.

If I were in your position I would pick the first one that I encountered and try to make a little progress on it before the next one hit me, and trying to make progress on the next one too.

I firmly believe in the concept of _making it easier for future you_.

Zebra
Sometimes I feel that all of them could be fixed in one go with a more radical change, but that's a rather scary thing to do.

ELiot
It is. Especially without experience in radical changes.

Zebra
Well, I moved alone to a country I'd never been when I was 18. So I guess it's not entirely unfamiliar.

ELiot
A change of scenery would probably change the problems. Not necessarily fix them

Zebra
Yeah.

ELiot
It could be the motivation you need to help make it easier for you to make progress

But it could also leave you exhausted and worse off

Zebra
I've looked some into moving to the Republic of Georgia.

But I do have friends here, even if there are only a few remaining and I feel increasingly alienated from them.

ELiot
You might benefit from a time management system

Zebra
Why? I don't have enough to even fill one activity per day...

ELiot
A list of problems, followed by a list of ways to solve the problems followed by a plan of how to spend your next 168 hours towards solving those problems while also not making new ones...

Each week

Energy limited? That's also a problem. With a solution. You do need sleep and rest

Zebra
I usually sleep a lot, but it doesn't feel restful.

I try to go on holidays, but again, usually come back more stressed than before.

ELiot
That too has a solution. Are you getting enough light when you wake up?

Zebra
I typically keep the blinds closed.

I don't like light :(

ELiot
Bright light when you wake up will help you feel awake more. Only when you wake up.

Zebra
But then what do I do?

ELiot
Pick something you want to change and go for it.

The strategy of: "Try X"

It might help to have a notebook paper trail of ideas you have tried

Or thoughts you have had about each problem and how to solve it

Zebra
Most of the things I want to change are hard to change, computer related (and thus not really helpful for not feeling terrible and alone), or things I don't have a good plan for how to change.

ELiot
You have as much to do as you want to. You can make a plan.

Zebra
I guess if I did something computer related it could make money, maybe, but I'd still feel awful. In the longer term it may be helpful, but I've tried this before and it is difficult to not get depressed and quit to go cry all day after 30 minutes.

ELiot
Even the meta strategy of "trying to plan" can help

You should write down that idea

It also seems like you apply pressure and expectations above what you have evidence of yourself being capable of.

Zebra
The idea of trying to plan, or?

ELiot
Yes and the "computer thing" idea

You should update on the estimation of your capabilities to be more of a reflection on what you have recently observed you are able to do

Zebra
I have a lot of computer thing ideas. I know pretty specifically how to do them, but sitting down and typing them out is harder.

Well, I can walk to 7/11 if I put a lot of effort into it.

That's about it...

ELiot
Which is a way of saying to start small. Reset from the beginning (which is not easy)

Zebra
That doesn't seem helpful.

ELiot
That's what your baseline is

Anything upwards is now impressive.

Including this conversation

You have come a long way already

Zebra
Doesn't feel like it. Starting from walking to 7/11 sounds kind of exhausting and not very enjoyable.

ELiot
But that's where you are right now

I would say try HabitRPG, but I never found it useful to me

Zebra
Yeah, but I mean, back on to the original point, all this seems much harder than trying to work through my hangups about suicide.

ELiot
Possibly, Yes.

All these problems are solvable. But perhaps...

What about the possibility of solving the most immediate discomfort at any time?

What is the most immediate discomfort right now?

Zebra
I feel stressed about life being shit generally, I guess.

Which is generally how I feel when I have nothing specific to be stressed about.

ELiot
What can you do about that right now? How can you make life less generally shit for the you that lives 10 minutes in the future?

Or maybe make yourself feel less stressed about it

Zebra
I guess I could try to do some meditation. That used to work, but hasn't been so much recently.

For the stress part, at least.

I have no idea how to make life immediately less crap in the next ten minutes.

ELiot
I would suggest your environment or hygiene

As they are usually quick low hanging ideas.

Zebra
What sorts of things are you thinking of specifically with regard to those that could be accomplished within 10 minutes?

ELiot
A shower, a little cleaning up your space, changing clothes

Taking out the trash

Zebra
I guess that's doable.

Zebra
Welp, done that. I suppose I do feel mildly better...

ELiot
That particular strategy is called success spirals. Successfully doing a thing to help the you of the future slightly. One bit at a time.

I should add - if you want to talk about death we should have that talk too

Death, dying, pain

Zebra
Well, death seems somewhat scary in the immediate sense.

Especially death by falling, which is the most low-effort solution for someone living in a high rise building.

ELiot
You need at least 10 floors to be confident of a sudden death

Zebra
More high-effort strategies, like pentobarbital or such, seem more palatable, but not quite as immediately actionable.

I'm on the tenth floor, and I think there's 20 something.

ELiot
And it depends whether you want to impact others, i.e. seeing you fall and/or the body

Zebra
I don't really care, though obviously I wouldn't want anyone seeing me "on the ledge" if I couldn't go through with it.

OTOH, nighttime is a thing.

ELiot
Yes

Zebra
But it's ... scary.

Have you ever been with someone during suicide?

ELiot
No, I recently discouraged someone from taking action in person. They were making rash decisions at the time

Zebra
Ah

ELiot
At least 3 people in my life have come close. They are not all better yet, still in limbo of up and down

I would still encourage you to do the things that you want. Have you read the Guilt series by Nate Soares?

Zebra
No. What is it about?

ELiot
Why we have guilt and defeating it where it's not appropriate

Zebra
I don't think I experience a significant amount of guilt.

ELiot
Guilt in the sense of, "should be going out" but "should stay in". The conflicting desire of parts of you to do different things. And sorting it out

Zebra
Ah, hmm

I will read the Guilt series then...

ELiot
I also went through a period of time when I felt purposeless. I described it as "everything is meaningless, and it's bothering me", as distinctly different from "everything is meaningless and it doesn't matter"

Zebra
Everything being meaningless doesn't bother me. I don't think meaningfulness is a possible thing in any universe. Everything being shitty and empty bothers me, but that's rather different.

http://mindingourway.com/dont-steer-with-guilt/ <- this?

ELiot
Yes, but that's the middle of the series, better to start in the beginning

http://mindingourway.com/guilt/

That's the table of contents

Zebra
Hmm, it's a pretty good read.

------------------------------------ Later in time...............
Zebra
Finished it. It was long!

I liked it more than Eliezer's writing. It may even have been potentially useful irl, maybe.

ELiot
do you think you can apply things to your life?

Zebra
Maybe. I've been trying to do the breaking things up part.

I made a small amount of money with stupid computer things... I guess that's a modicum of progress, maybe.

I liked the last part about changing goals. That might be useful.

Visualising bad things seems like a potentially helpful strategy as well.

Zebra
A lot of the techniques do seem effective. Hopefully it will make a positive difference.

---------------------A long time later--------------
ELiot
hey

I promised to get back to you.

how are things?

Zebra
Hi

ELiot
it's been a while..

Zebra
I'm doing somewhat better. Got on meds for bipolar disorder, which has helped a lot.

Yeah. Been trying to actually do things now, so I feel less stagnant.

ELiot
Oh! great!

Zebra
Hopefully life will end up in a better place than before.


The Problem TM

What is actually the problem?  I have a theory, but I also wanted to publish this without declaring my answer.  I will share my ideas in a few weeks but I want to know what you think and how you came to that answer.


Meta: this conversation happened over 6 months ago; collating, tidying and publishing it took 2 hours.

Originally published here: http://bearlamp.com.au/the-problem-tm-analyse-a-conversation/

Notes on the Safety in Artificial Intelligence conference

25 UmamiSalami 01 July 2016 12:36AM

These are my notes and observations after attending the Safety in Artificial Intelligence (SafArtInt) conference, which was co-hosted by the White House Office of Science and Technology Policy and Carnegie Mellon University on June 27 and 28. This isn't an organized summary of the content of the conference; rather, it's a selection of points which are relevant to the control problem. As a result, it suffers from selection bias: it looks like superintelligence and control-problem-relevant issues were discussed frequently, when in reality those issues were discussed less and I didn't write much about the more mundane parts.

SafArtInt was the third in a planned series of four conferences. The purpose of the conference series was twofold: the OSTP wanted to get other parts of the government moving on AI issues, and they also wanted to inform public opinion.

The other three conferences are about near term legal, social, and economic issues of AI. SafArtInt was about near term safety and reliability in AI systems. It was effectively the brainchild of Dr. Ed Felten, the deputy U.S. chief technology officer for the White House, who came up with the idea for it last year. CMU is a top computer science university and many of their own researchers attended, as well as some students. There were also researchers from other universities, some people from private sector AI including both Silicon Valley and government contracting, government researchers and policymakers from groups such as DARPA and NASA, a few people from the military/DoD, and a few control problem researchers. As far as I could tell, everyone except a few university researchers was from the U.S., although I did not meet many people. There were about 70-100 people watching the presentations at any given time, and I had conversations with about twelve of the people who were not affiliated with existential risk organizations, as well as, of course, all of those who were. The conference was split, with a few presentations on the 27th and the majority on the 28th. Not everyone was there for both days.

Felten believes that neither "robot apocalypses" nor "mass unemployment" are likely. It soon became apparent that the majority of others present at the conference felt the same way with regard to superintelligence. The general intention among researchers and policymakers at the conference could be summarized as follows: we need to make sure that the AI systems we develop in the near future will not be responsible for any accidents, because if accidents do happen then they will spark public fears about AI, which would lead to a dearth of funding for AI research and an inability to realize the corresponding social and economic benefits. Of course, that doesn't change the fact that they strongly care about safety in its own right and have significant pragmatic needs for robust and reliable AI systems.

Most of the talks were about verification and reliability in modern day AI systems. So they were concerned with AI systems that would give poor results or be unreliable in the narrow domains where they are being applied in the near future. They mostly focused on "safety-critical" systems, where failure of an AI program would result in serious negative consequences: automated vehicles were a common topic of interest, as well as the use of AI in healthcare systems. A recurring theme was that we have to be more rigorous in demonstrating safety and do actual hazard analyses on AI systems, and another was that we need the AI safety field to succeed in ways that the cybersecurity field has failed. Another general belief was that long term AI safety, such as concerns about the ability of humans to control AIs, was not a serious issue.

On average, the presentations were moderately technical. They were mostly focused on machine learning systems, although there was significant discussion of cybersecurity techniques.

The first talk was given by Eric Horvitz of Microsoft. He discussed some approaches for pushing into new directions in AI safety. Instead of merely trying to reduce the errors spotted according to one model, we should look out for "unknown unknowns" by stacking models and looking at problems which appear on any of them, a theme which would be presented by other researchers as well in later presentations. He discussed optimization under uncertain parameters, sensitivity analysis to uncertain parameters, and 'wireheading' or short-circuiting of reinforcement learning systems (which he believes can be guarded against by using 'reflective analysis'). Finally, he brought up the concerns about superintelligence, which sparked amused reactions in the audience. He said that scientists should address concerns about superintelligence, which he aptly described as the 'elephant in the room', noting that it was the reason that some people were at the conference. He said that scientists will have to engage with public concerns, while also noting that there were experts who were worried about superintelligence and that there would have to be engagement with the experts' concerns. He did not comment on whether he believed that these concerns were reasonable or not.

An issue which came up in the Q&A afterwards was that we need to deal with mis-structured utility functions in AI, because it is often the case that the specific tradeoffs and utilities which humans claim to value often lead to results which the humans don't like. So we need to have structural uncertainty about our utility models. The difficulty of finding good objective functions for AIs would eventually be discussed in many other presentations as well.

The next talk was given by Andrew Moore of Carnegie Mellon University, who claimed that his talk represented the consensus of computer scientists at the school. He claimed that the stakes of AI safety were very high - namely, that AI has the capability to save many people's lives in the near future, but if there are any accidents involving AI then public fears could lead to freezes in AI research and development. He highlighted the public's irrational tendencies wherein a single accident could cause people to overlook and ignore hundreds of invisible lives saved. He specifically mentioned a 12-24 month timeframe for these issues.

Moore said that verification of AI system safety will be difficult due to the combinatorial explosion of AI behaviors. He talked about meta-machine-learning as a solution to this, something which is being investigated under the direction of Lawrence Schuette at the Office of Naval Research. Moore also said that military AI systems require high verification standards and that development timelines for these systems are long. He talked about two different approaches to AI safety, stochastic testing and theorem proving - the process of doing the latter often leads to the discovery of unsafe edge cases.

He also discussed AI ethics, giving an example 'trolley problem' where AI cars would have to choose whether to hit a deer in order to provide a slightly higher probability of survival for the human driver. He said that we would need hash-defined constants to tell vehicle AIs how many deer a human is worth. He also said that we would need to find compromises in death-pleasantry tradeoffs, for instance where the safety of self-driving cars depends on the speed and routes on which they are driven. He compared the issue to civil engineering where engineers have to operate with an assumption about how much money they would spend to save a human life.

He concluded by saying that we need policymakers, company executives, scientists, and startups to all be involved in AI safety. He said that the research community stands to gain or lose together, and that there is a shared responsibility among researchers and developers to avoid triggering another AI winter through unsafe AI designs.

The next presentation was by Richard Mallah of the Future of Life Institute, who was there to represent "Medium Term AI Safety". He pointed out the explicit/implicit distinction between different modeling techniques in AI systems, as well as the explicit/implicit distinction between different AI actuation techniques. He talked about the difficulty of value specification and the concept of instrumental subgoals as an important issue in the case of complex AIs which are beyond human understanding. He said that even a slight misalignment of AI values with regard to human values along one parameter could lead to a strongly negative outcome, because machine learning parameters don't strictly correspond to the things that humans care about.

Mallah stated that open-world discovery leads to self-discovery, which can lead to reward hacking or a loss of control. He underscored the importance of causal accounting, which is distinguishing causation from correlation in AI systems. He said that we should extend machine learning verification to self-modification. Finally, he talked about introducing non-self-centered ontology to AI systems and bounding their behavior.

The audience was generally quiet and respectful during Richard's talk. I sensed that at least a few of them labelled him as part of the 'superintelligence out-group' and dismissed him accordingly, but I did not learn what most people's thoughts or reactions were. In the next panel featuring three speakers, he wasn't the recipient of any questions regarding his presentation or ideas.

Tom Mitchell from CMU gave the next talk. He talked about both making AI systems safer, and using AI to make other systems safer. He said that risks to humanity from other kinds of issues besides AI were the "big deals of 2016" and that we should make sure that the potential of AIs to solve these problems is realized. He wanted to focus on the detection and remediation of all failures in AI systems. He said that it is a novel issue that learning systems defy standard pre-testing ("as Richard mentioned") and also brought up the purposeful use of AI for dangerous things.

Some interesting points were raised in the panel. Andrew did not have a direct response to the implications of AI ethics being determined by the predominantly white people of the US/UK where most AIs are being developed. He said that ethics in AIs will have to be decided by society, regulators, manufacturers, and human rights organizations in conjunction. He also said that our cost functions for AIs will have to get more and more complicated as AIs get better, and he said that he wants to separate unintended failures from superintelligence type scenarios. On trolley problems in self driving cars and similar issues, he said "it's got to be complicated and messy."

Dario Amodei of Google Brain, who co-authored the paper on concrete problems in AI safety, gave the next talk. He said that the public focus is too much on AGI/ASI and wants more focus on concrete/empirical approaches. He discussed the same problems that pose issues in advanced general AI, including flawed objective functions and reward hacking. He said that he sees long term concerns about AGI/ASI as "extreme versions of accident risk" and that he thinks it's too early to work directly on them, but he believes that if you want to deal with them then the best way to do it is to start with safety in current systems. Mostly he summarized the Google paper in his talk.

In her presentation, Claire Le Goues of CMU said "before we talk about Skynet we should focus on problems that we already have." She mostly talked about analogies between software bugs and AI safety, the similarities and differences between the two and what we can learn from software debugging to help with AI safety.

Robert Rahmer of IARPA discussed CAUSE, a cyberintelligence forecasting program which promises to help predict cyber attacks. It is a program which is still being put together.

In the panel of the above three, autonomous weapons were discussed, but no clear policy stances were presented.

John Launchbury gave a talk on DARPA research and the big picture of AI development. He pointed out that DARPA work leads to commercial applications and that progress in AI comes from sustained government investment. He classified AI capabilities into "describing," "predicting," and "explaining" in order of increasing difficulty, and he pointed out that old fashioned "describing" still plays a large role in AI verification. He said that "explaining" AIs would need transparent decisionmaking and probabilistic programming (the latter would also be discussed by others at the conference).

The next talk came from Jason Gaverick Matheny, the director of IARPA. Matheny talked about four requirements in current and future AI systems: verification, validation, security, and control. He wanted "auditability" in AI systems as a weaker form of explainability. He talked about the importance of "corner cases" for national intelligence purposes, the low probability, high stakes situations where we have limited data - these are situations where we have significant need for analysis but where the traditional machine learning approach doesn't work because of its overwhelming focus on data. Another aspect of national defense is that it has a slower decision tempo, longer timelines, and longer-viewing optics about future events.

He said that assessing local progress in machine learning development would be important for global security and that we therefore need benchmarks to measure progress in AIs. He ended with a concrete invitation for research proposals from anyone (educated or not), for both large scale research and for smaller studies ("seedlings") that could take us "from disbelief to doubt".

The difference in timescales between different groups was something I noticed later on, after hearing someone from the DoD describe their agency as having a longer timeframe than the Homeland Security Agency, and someone from the White House describe their work as being crisis reactionary.

The next presentation was from Andrew Grotto, senior director of cybersecurity policy at the National Security Council. He drew a close parallel from the issue of genetically modified crops in Europe in the 1990's to modern day artificial intelligence. He pointed out that Europe utterly failed to achieve widespread cultivation of GMO crops as a result of public backlash. He said that the widespread economic and health benefits of GMO crops were ignored by the public, who instead focused on a few health incidents which undermined trust in the government and crop producers. He had three key points: that risk frameworks matter, that you should never assume that the benefits of new technology will be widely perceived by the public, and that we're all in this together with regard to funding, research progress and public perception.

In the Q&A between Launchbury, Matheny, and Grotto after Grotto's presentation, it was mentioned that the economic interests of farmers worried about displacement also played a role in populist rejection of GMOs, and that a similar dynamic could play out with regard to automation causing structural unemployment. Grotto was also asked what to do about bad publicity which seeks to sink progress in order to avoid risks. He said that meetings like SafArtInt and open public dialogue were good.

One person asked what Launchbury wanted to do about AI arms races with multiple countries trying to "get there" and whether he thinks we should go "slow and secure" or "fast and risky" in AI development, a question which provoked laughter in the audience. He said we should go "fast and secure" and wasn't concerned. He said that secure designs for the Internet once existed, but the one which took off was the one which was open and flexible.

Another person asked how we could avoid discounting outliers in our models, referencing Matheny's point that we need to include corner cases. Matheny affirmed that data quality is a limiting factor for many of our machine learning capabilities, and said that at IARPA they generally try to include outliers until they are sure those outliers are erroneous.

Another presentation came from Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence. He said that we have not focused enough on safety, reliability and robustness in AI and that this must change. Much like Eric Horvitz, he drew a distinction between robustness against errors within the scope of a model and robustness against unmodeled phenomena. On the latter issue, he talked about solutions such as expanding the scope of models, employing multiple parallel models, and doing creative searches for flaws - the latter doesn't enable verification that a system is safe, but it nevertheless helps discover many potential problems. He talked about knowledge-level redundancy as a method of avoiding misspecification - for instance, systems could identify objects by an "ownership facet" as well as by a "goal facet" to produce a combined concept with less likelihood of overlooking key features. He said that this would require wider experiences and more data.

There were many other speakers who brought up a similar set of issues: the use of cybersecurity techniques to verify machine learning systems, the failures of cybersecurity as a field, opportunities for probabilistic programming, and the need for better success in AI verification. Inverse reinforcement learning was extensively discussed as a way of assigning values. Jeanette Wing of Microsoft talked about the need for AIs to reason about the continuous and the discrete in parallel, as well as the need for them to reason about uncertainty (with potential meta levels all the way up). One point which was made by Sarah Loos of Google was that proving the safety of an AI system can be computationally very expensive, especially given the combinatorial explosion of AI behaviors.

In one of the panels, the idea of government actions to ensure AI safety was discussed. No one was willing to say that the government should regulate AI designs. Instead they stated that the government should be involved in softer ways, such as guiding and working with AI developers, and setting standards for certification.

Pictures: https://imgur.com/a/49eb7

In between these presentations I had time to speak to individuals and listen in on various conversations. A high ranking person from the Department of Defense stated that the real benefit of autonomous systems would be in terms of logistical systems rather than weaponized applications. A government AI contractor drew the connection between Mallah's presentation and the recent press revolving around superintelligence, and said he was glad that the government wasn't worried about it.

I talked to some insiders about the status of organizations such as MIRI, and found that the current crop of AI safety groups could use additional donations to become more established and expand their programs. There may be some issues with the organizations being sidelined; after all, the Google Brain paper was essentially similar to a lot of work by MIRI, just expressed in somewhat different language, and was more widely received in mainstream AI circles.

In terms of careers, I found that there is significant opportunity for a wide range of people to contribute to improving government policy on this issue. Working at a group such as the Office of Science and Technology Policy does not necessarily require advanced technical education, as you can just as easily enter straight out of a liberal arts undergraduate program and build a successful career as long as you are technically literate. (At the same time, the level of skepticism about long term AI safety at the conference hinted to me that the signalling value of a PhD in computer science would be significant.) In addition, there are large government budgets in the seven or eight figure range available for qualifying research projects. I've come to believe that it would not be difficult to find or create AI research programs that are relevant to long term AI safety while also being practical and likely to be funded by skeptical policymakers and officials.

I also realized that there is a significant need for people who are interested in long term AI safety to have basic social and business skills. Since there is so much need for persuasion and compromise in government policy, there is a lot of value to be had in being communicative, engaging, approachable, appealing, socially savvy, and well-dressed. This is not to say that everyone involved in long term AI safety is missing those skills, of course.

I was surprised by the refusal of almost everyone at the conference to take long term AI safety seriously, as I had previously held the belief that it was more of a mixed debate given the existence of expert computer scientists who were involved in the issue. I sensed that the recent wave of popular press and public interest in dangerous AI has made researchers and policymakers substantially less likely to take the issue seriously. None of them seemed to be familiar with actual arguments or research on the control problem, so their opinions didn't significantly change my outlook on the technical issues. I strongly suspect that the majority of them had their first or possibly only exposure to the idea of the control problem after seeing badly written op-eds and news editorials featuring comments from the likes of Elon Musk and Stephen Hawking, which would naturally make them strongly predisposed to not take the issue seriously. In the run-up to the conference, websites and press releases didn't say anything about whether this conference would be about long or short term AI safety, and they didn't make any reference to the idea of superintelligence.

I sympathize with the concerns and strategy given by people such as Andrew Moore and Andrew Grotto, which make perfect sense if (and only if) you assume that worries about long term AI safety are completely unfounded. For the community that is interested in long term AI safety, I would recommend that we avoid competitive dynamics by (a) demonstrating that we are equally strong opponents of bad press, inaccurate news, and irrational public opinion which promotes generic uninformed fears over AI, (b) explaining that we are not interested in removing funding for AI research (even if you think that slowing down AI development is a good thing, restricting funding yields only limited benefits in terms of changing overall timelines, whereas those who are not concerned about long term AI safety would see a restriction of funding as a direct threat to their interests and projects, so it makes sense to cooperate here in exchange for other concessions), and (c) showing that we are scientifically literate and focused on the technical concerns. I do not believe that there is necessarily a need for the two "sides" on this to be competing against each other, so it was disappointing to see an implication of opposition at the conference.

Anyway, Ed Felten announced a request for information from the general public, seeking popular and scientific input on the government's policies and attitudes towards AI: https://www.whitehouse.gov/webform/rfi-preparing-future-artificial-intelligence

Overall, I learned quite a bit and benefited from the experience, and I hope the insight I've gained can be used to improve the attitudes and approaches of the long term AI safety community.

Presidents, asteroids, natural categories, and reduced impact

1 Stuart_Armstrong 06 July 2015 05:44PM

A putative new idea for AI control; index here.

EDIT: I feel this post is unclear, and will need to be redone again soon.

This post attempts to use the ideas developed about natural categories in order to get high impact from reduced impact AIs.

 

Extending niceness/reduced impact

I recently presented the problem of extending AI "niceness" given some fact X, to niceness given ¬X, choosing X to be something pretty significant but not overwhelmingly so - the death of a president. By assumption we had a successfully programmed niceness, but no good definition (this was meant to be "reduced impact" in a slight disguise).

This problem turned out to be much harder than expected. It seems that the only way to do so is to require the AI to define values dependent on a set of various (boolean) random variables Zj that did not include X/¬X. Then as long as the random variables represented natural categories, given X, the niceness should extend.

What did we mean by natural categories? Informally, it means that X should not appear in the definitions of these random variables. For instance, nuclear war is a natural category; "nuclear war XOR X" is not. Actually defining this was quite subtle; diverting through the grue and bleen problem, it seems that we had to define how we update X and the Zj given the evidence we expected to find. This was put in equation as picking Zj's that minimize

  • Variance{ log[ ( P(X∧Zj|E) * P(¬X∧¬Zj|E) ) / ( P(X∧¬Zj|E) * P(¬X∧Zj|E) ) ] }

where E is the random variable denoting the evidence we expected to find. Note that if we interchange X and ¬X, the ratio inverts, the log changes sign - but this makes no difference to the variance. So we can equally well talk about extending niceness given X to ¬X, or niceness given ¬X to X.
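A toy illustration of how this variance criterion separates natural from grue-like variables (all probabilities here are invented for the example): if the evidence shifts P(nuclear war) while leaving the war independent of X, the log cross-ratio is constant (variance zero) for Z = war, but varies for the grue-like Z = war XOR X.

```python
import math

# Toy world (all numbers invented): X = "president dies", W = "nuclear war"
# (a natural category), and the grue-like variable is W XOR X.
# Evidence E shifts P(W) but leaves W independent of X given E.
evidence = {"e1": 0.5, "e2": 0.5}        # P(E = e)
p_w_given_e = {"e1": 0.1, "e2": 0.5}     # P(W | e)
p_x = 0.3                                # P(X | e), same for both e

def joint(e, x, z_of):
    """Return {z: P(X=x, Z=z | e)}, where z_of(x, w) defines Z from the world."""
    total = {True: 0.0, False: 0.0}
    for w in (True, False):
        pw = p_w_given_e[e] if w else 1 - p_w_given_e[e]
        px = p_x if x else 1 - p_x
        total[z_of(x, w)] += px * pw
    return total

def log_cross_ratio(e, z_of):
    """log[ P(X,Z|e) P(notX,notZ|e) / (P(X,notZ|e) P(notX,Z|e)) ]"""
    num = joint(e, True, z_of)[True] * joint(e, False, z_of)[False]
    den = joint(e, True, z_of)[False] * joint(e, False, z_of)[True]
    return math.log(num / den)

def variance_over_evidence(z_of):
    vals = {e: log_cross_ratio(e, z_of) for e in evidence}
    mean = sum(evidence[e] * v for e, v in vals.items())
    return sum(evidence[e] * (v - mean) ** 2 for e, v in vals.items())

natural = variance_over_evidence(lambda x, w: w)       # Z = W: variance 0
grue = variance_over_evidence(lambda x, w: w != x)     # Z = W XOR X: variance > 0
print(natural, grue)
```

The natural variable scores zero because the odds ratio between X and W is 1 under every piece of evidence; the XOR variable's cross-ratio swings with P(W|e), so its variance is strictly positive and the criterion rejects it.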

 

Perfect and imperfect extensions

The above definition would work for a "perfectly nice AI": one that would be nice given any combination of estimates of X and the Zj. In practice, because we can't consider every edge case, we would only have an "expectedly nice AI". That means the AI can fail to be nice in certain unusual and unlikely edge cases, for certain strange sets of values of the Zj that almost never come up...

...or at least, that almost never come up given X. Since the "expected niceness" was calibrated given X, such an expectedly nice AI may fail to be nice if ¬X results in a substantial change in the probability of the Zj (see the second failure mode in this post; some of the Zj may be so tightly coupled to the value of X that an expected-niceness AI considers them fixed, and this causes problems if ¬X happens and their values change).

One way of fixing this is to require that the "swing" of the Zj be small upon changing X to ¬X or vice versa. Something like, for all values of {aj}, the ratio P({Zj=aj} | X) / P({Zj=aj} | ¬X) is contained between 100 and 1/100. This means that a reasonably good "expected niceness" calibrated on the Zj will transfer from X to ¬X (though the error may grow). This approach has some other advantages, as we'll see in the next section.
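The "swing" restriction is easy to state as a concrete check; a minimal sketch with invented numbers (the variable names and probabilities are purely illustrative):

```python
def swing_ok(p_given_x, p_given_not_x, bound=100.0):
    """True iff P(Zj=a | X) / P(Zj=a | notX) lies in [1/bound, bound] for every a."""
    return all(
        1.0 / bound <= p_given_x[a] / p_given_not_x[a] <= bound
        for a in p_given_x
    )

# "Nuclear war" barely swings on the president's death: passes.
print(swing_ok({"war": 0.02, "peace": 0.98},
               {"war": 0.01, "peace": 0.99}))        # True
# A variable tightly coupled to X swings enormously: fails.
print(swing_ok({"seen": 0.999, "unseen": 0.001},
               {"seen": 1e-6, "unseen": 1 - 1e-6}))  # False
```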

Of course, problems arise if the Zj are defined by smashing together events relatively independent of X with some that are very strongly dependent on X, into the same variable. Thus we'll further require that Zj cannot be decomposed into natural subvariables whose values swing more under changes from X to ¬X.

 

High impact from reduced impact

With this, we are finally in a position to define a reduced impact AI design that can have a large impact. Imagine that an asteroid is coming to hit the Earth. Two AIs are charged with targeting a laser at it. The laser isn't perfectly effective, and the asteroid isn't perfectly certain to hit the Earth anyway: let's say aiming the laser correctly will reduce the odds of impact by a factor of 10 or so (well within the probability swing restrictions of the previous section).

In the original formulation of this, the AIs were tasked with outputting the x and y coordinates to aim the laser. Here, instead, they'll be tasked with constructing the motivation of a robotic subagent that will move around for a while, see things and interact, before physically aiming the laser in the right direction. This task is harder.

Then the top-level AI charged with aiming the y-coordinate is given the following task:

  • It is motivated to cause the correct y-coordinate to be outputted.
  • If the x-coordinate AI is not turned on (call this event ¬X), it is strongly motivated to have reduced impact.
  • It must phrase the robotic subagent's utility function in terms of the Zj that obey all the properties above.
  • The evidence E considered for the equation governing the naturalness of Zj is considered (for example) to be a random sampling of space at the time both AIs are turned on. Most importantly, this is defined to be a theoretical process that the AI cannot affect physically.

So what will the AI do? It will produce a robot that aims the y-coordinate of the laser correctly given ¬X, without expanding dangerously. However, we've set up things so that X is virtually certain, so the robot will see another, x-aiming, robot wandering around. The chances of this happening given ¬X are minuscule, so the swing in probability from ¬X to X is huge. So, by the restrictions we've put on the Zj, the robot will not be motivated to change its behaviour just because it sees the other robot.

And if the laser is fired, the impact on humanity will, conversely, not be that large, so well within the parameters of the Zj. Thus the "general reduced impact" portion of the robot's motivation will be preserved.

And, of course, the x-robot will have the mirror image motivations, so they will both aim correctly, it seems. This should evade the problem described here.

 

Failure modes

There are three ways the approach could fail: one bad way and two graceful ways. The bad way is if the "no decomposition" requirements (one of them is presented above, the other is in this post) fail. In this case, the niceness or reduced impact is not guaranteed to extend to ¬X, and the AI's behaviour can become undefined. Note that this is not bad optimisation pressure, just an absence of optimisation pressure - but that could still be potentially dangerous.

The first graceful failure happens if the AI is not able to create a reduced impact sub-agent given the restrictions on the Zj above. In that case, the AI does nothing. The second graceful failure happens if the AI evades our attempts to increase its impact, given ¬X. In that case, it simply becomes a reduced impact AI that does little. Not ideal, but not deadly.

 

Overall status: I'm not sure the idea is sound, at least not yet. Critiques welcome.

Help needed: nice AIs and presidential deaths

1 Stuart_Armstrong 08 June 2015 04:47PM

A putative new idea for AI control; index here.

This is a problem that developed from the "high impact from low impact" idea, but is a legitimate thought experiment in its own right (it also has connections with the "spirit of the law" idea).

Suppose that, next 1st of April, the US president may or may not die of natural causes. I chose this example because it's an event of potentially large magnitude, but not overwhelmingly so (neither a butterfly wing nor an asteroid impact).

Also assume that, for some reason, we are able to program an AI that will be nice, given that the president does die on that day. Its behaviour if the president doesn't die is undefined and potentially dangerous.

Is there a way (either at the initial stages of programming or later on) to extend the "niceness" from the "presidential death world" into the "presidential survival world"?

To focus on how tricky the problem is, assume for argument's sake that the vice-president is a warmonger who will start a nuclear war if they become president. Then "launch a coup on the 2nd of April" is a "nice" thing for the AI to do, conditional on the president dying. However, if you naively import that requirement into the "presidential survival world", the AI will launch a pointless and counterproductive coup. This is illustrative of the kind of problems that could come up.

So the question is, can we transfer niceness in this way, without needing a solution to the full problem of niceness in general?

EDIT: Actually, this seems ideally setup for a Bayes network (or for the requirement that a Bayes network be used).

EDIT2: Now the problem of predicates like "Grue" and "Bleen" seem to be the relevant bit. If you can avoid concepts such as "X={nuclear war if president died, peace if president lived}", you can make the extension work.

Does the Utility Function Halt?

3 OrphanWilde 28 January 2015 04:08AM

Suppose, for a moment, that somebody has written the Utility Function.  It takes, as its input, some Universe State, runs it through a Morality Modeling Language, and outputs a number indicating the desirability of that state relative to some baseline, and more importantly, other Universe States which we might care to compare it to.

Can I feed the Utility Function the state of my computer right now, as it is executing a program I have written?  And is a universe in which my program halts superior to one in which my program wastes energy executing an endless loop?

If you're inclined to argue that's not what the Utility Function is supposed to be evaluating, I have to ask what, exactly, it -is- supposed to be evaluating?  We can reframe the question in terms of the series of keys I press as I write the program, if that is an easier problem to solve than what my computer is going to do.
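One technical reason to suspect the answer is "no", at least for a computable utility function evaluated on program source: strictly preferring "this program halts" to "this program loops" would amount to deciding the halting problem. A sketch of the standard diagonal argument (illustrative, not from the post; `halts` stands in for the hypothetical utility function's verdict on a program):

```python
def diagonalize(halts):
    """Given a claimed halting-decider halts(f) -> bool, build a function
    on which the decider must be wrong (the classic reductio)."""
    def trouble():
        if halts(trouble):   # the decider says trouble halts...
            while True:      # ...so loop forever, refuting it
                pass
        # the decider says trouble loops, so halt immediately, refuting it
    return trouble

# A decider that claims every program loops is refuted by a single call:
t = diagonalize(lambda f: False)
t()  # halts immediately, contrary to the claim
# A decider that claims every program halts is refuted too -- but only by
# a call that never returns, so we don't make it here.
```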

Human Memory: Problem Set

13 BrienneYudkowsky 31 October 2013 04:08AM

I'm working on a post about how best to use human memory: when it's good to store things in your own brain and why, when it's best to outsource your memory, what memory upgrades are worthwhile in what contexts, and how to integrate and apply memory systems in real life. I'm hoping the following set of memory problems will draw out approaches that haven't occurred to me, so I can compare a wider range of methods.

I'll post the first solutions I thought of myself later on, but for now I'd like to hear what you would do in each of these situations and what you believe to be the pros and cons of your answers. Can you think of ways to improve upon your first thoughts and the answers of others?

(You don't have to respond to all of the questions; feel free to post as little or as much as comes to mind.)


continue reading »

Why one-box?

7 PhilosophyStudent 30 June 2013 02:38AM

I have sympathy with both one-boxers and two-boxers in Newcomb's problem. Many people on Less Wrong, however, seem to be staunch and confident one-boxers. So I'm turning to you guys to ask for help figuring out whether I should be a staunch one-boxer too. Below is an imaginary dialogue setting out my understanding of the arguments normally advanced on LW for one-boxing; I was hoping to get help filling in the details and extending this argument so that I (and anyone else who is uncertain about the issue) can develop an understanding of the strongest arguments for one-boxing.

One-boxer: You should one-box because one-boxing wins (that is, a person that one-boxes ends up better off than a person that two-boxes). Not only does it seem clear that rationality should be about winning generally (that a rational agent should not be systematically outperformed by irrational agents) but Newcomb's problem is normally discussed within the context of instrumental rationality, which everyone agrees is about winning.

Me: I get that, and that's one of the main reasons I'm sympathetic to the one-boxing view, but the two-boxer has a response to these concerns. The two-boxer agrees that rationality is about winning, and they agree that winning means ending up with the most utility. The two-boxer should also agree that the rational decision theory to follow is one that will one-box on all future Newcomb's problems (those where the prediction has not yet occurred), and can also agree that the best timeless agent type is a one-boxing type. However, the two-boxer also claims that two-boxing is the rational decision.

O: Sure, but why think they're right? After all, two-boxers don't win.

M: Okay, those with a two-boxing agent type don't win but the two-boxer isn't talking about agent types. They're talking about decisions. So they are interested in what aspects of the agent's winning can be attributed to their decision and they say that we can attribute the agent's winning to their decision if this is caused by their decision. This strikes me as quite a reasonable way to apportion the credit for various parts of the winning. (Of course, it could be said that the two-boxer is right but they are playing a pointless game and should instead be interested in winning simpliciter rather than winning decisions. If this is the claim then the argument is dissolved and there is no disagreement. But I take it this is not the claim).

O: But this is a strange convoluted definition of winning. The agent ends up worse off than one-boxing agents so it must be a convoluted definition of winning that says that two-boxing is the winning decision.

M: Hmm, maybe... But I'm worried that relevant distinctions aren't being made here (you've started talking about winning agents rather than winning decisions). The two-boxer relies on the same definition of winning as you and so agrees that the one-boxing agent is the winning agent. They just disagree about how to attribute winning to the agent's decisions (rather than to other features of the agent). And their way of doing this strikes me as quite a natural one. We credit the decision with the winning that it causes. Is this the source of my unwillingness to jump fully on board with your program? Do we simply disagree about the plausibility of this way of attributing winning to decisions?

Meta-comment (a): I don't know what to say here? Is this what's going on? Do people just intuitively feel that this is a crazy way to attribute winning to decisions? If so, can anyone suggest why I should adopt the one-boxer perspective on this?

O: But then the two-boxer has to rely on the claim that Newcomb's problem is "unfair" to explain why the two-boxing agent doesn't win. It seems absurd to say that a scenario like Newcomb's problem is unfair.

M: Well, the two-boxing agent means something very particular by "unfair". They simply mean that in this case the winning agent doesn't correspond to the winning decision. Further, they can explain why this is the case without saying anything that strikes me as crazy. They simply say that Newcomb's problem is a case where the agent's winnings can't entirely be attributed to the agent's decision (ignoring a constant value). But if something else (the agent's type at time of prediction) also influences the agent's winning in this case, why should it be a surprise that the winning agent and the winning decision come apart? I'm not saying the two-boxer is right here but they don't seem to me to be obviously wrong either...

Meta-comment (b): Interested to know what response should be given here.

O: Okay, let's try something else. The two-boxer focuses only on causal consequences but in doing so they simply ignore all the logical non-causal consequences of their decision algorithm outputting a certain decision. This is an ad hoc, unmotivated restriction.

M: Ad hoc? I'm not sure I see why. Think about the problem with evidential decision theory. The proponent of EDT could say a similar thing (that the proponent of two-boxing ignores all the evidential implications of their decision). The two-boxer will respond that these implications just are not relevant to decision making. When we make decisions we are trying to bring about the best results, not get evidence for these results. Equally, they might say, we are trying to bring about the best results, not derive the best results in our logical calculations. Now I don't know what to make of the point/counter-point here, but it doesn't seem to me that the one-boxing view is obviously correct, and I'm worried that we're again going to end up just trading intuitions (and I can see the force of both intuitions here).

Meta-comment: Again, I would love to know whether I've understood this argument and whether something can be said to convince me that the one-boxing view is the clear cut winner here.

End comments: That's my understanding of the primary argument advanced for one-boxing on LW. Are there other core arguments? How can these arguments be improved and extended?
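For concreteness, the arithmetic behind the claim that "one-boxing wins": the standard payoffs as a function of the predictor's accuracy. With a perfect (or even decent) predictor, one-boxers end up far richer; the break-even accuracy works out to just over 50% (1,001,000 / 2,000,000 = 0.5005).

```python
def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected winnings in Newcomb's problem, given the predictor's accuracy.
    Box A always holds $1,000; box B holds $1,000,000 iff one-boxing was predicted."""
    if one_box:
        return accuracy * 1_000_000                 # B is full with prob. `accuracy`
    return 1_000 + (1 - accuracy) * 1_000_000       # B is full only if mispredicted

for p in (1.0, 0.9, 0.5):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
```

Note that at accuracy 0.5 (a predictor no better than chance) two-boxing pulls ahead, which is one way of seeing why the predictor's reliability carries all the weight in the thought experiment.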

The autopilot problem: driving without experience

23 Stuart_Armstrong 13 May 2013 12:42PM

Consider a mixed system, in which an automated system is paired with a human overseer. The automated system handles most of the routine tasks, while the overseer is tasked with looking out for errors and taking over in extreme or unpredictable circumstances. Examples of this could be autopilots, cruise control, GPS direction finding, high-frequency trading – in fact nearly every automated system has this feature, because they nearly all rely on humans "keeping an eye on things".

But often the human component doesn't perform as well as it should do – doesn't perform as well as it did before part of the system was automated. Cruise control can impair driver performance, leading to more accidents. GPS errors can take people far more off course than following maps did. When the autopilot fails, pilots can crash their planes in rather conventional conditions. Traders don't understand why their algorithms misbehave, or how to stop this.

There seem to be three factors at work here:

  1. Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.
  2. This goes along with a general deskilling of the overseer. When the autopilot controls the plane for most of its trip, pilots get far less hands-on experience of actually flying the plane. Paradoxically, less efficient automation can help with both these problems: if the system fails 10% of the time, the overseer will watch and understand it closely.
  3. And when the automation does fail, the overseer will typically lack situational awareness of what's going on. All they know is that something extraordinary has happened, and they may have the (possibly flawed) readings of various instruments to guide them – but they won't have a good feel for what happened to put them in that situation.

So, when the automation fails, the overseer is generally dumped into an emergency situation, whose nature they are going to have to deduce, and, using skills that have atrophied, they are going to have to take on the task of the automated system that has never failed before and that they have never had to truly understand.

And they'll typically get blamed for getting it wrong.

Similarly, if we design AI control mechanisms that rely on the presence of a human in the loop (such as tool AIs, Oracle AIs, and, to a lesser extent, reduced impact AIs), we'll need to take the autopilot problem into account, and design the role of the overseer so as not to deskill them, and not count on them being free of error.

The Worst Problem You've Ever Encountered and Solved. And the One You Didn't, Yet!

2 diegocaleiro 04 December 2012 08:43PM

EDIT: No one was doing what the post suggests, so I accepted an idea from one of the comments, and embedded my response in a comment, not the post itself

 

I'd like to ask this question to you, and I'll respond it myself as well.

What Is The Worst Problem You've Ever Encountered and Solved? And the One You Didn't, Yet!

Some prior considerations:

1) I mean "problem" in a very general sense: it could be a math problem, an existential problem, a social problem, an akrasia problem, a disease problem, etc.

2) I'd like people to give informative/didactic responses.  Try not only to state the facts, but also to help someone who'd encounter similar situations to be able to deal with them.

3) When talking about the one you didn't, give enough specifics that someone would actually be able to help you.

The general idea is to teach people how to Win by example, taking into consideration all the shortcomings of biases, etc.

 

Well, that is all. One solved, one not yet solved. State your own issues and help others here. Someone else's rationality is always welcome.

The hundred-room problem

0 APMason 21 January 2012 06:12PM

This thought-experiment has been on my mind for a couple of days, and no doubt it's a special case of a more general problem identified somewhere by some philosopher that I haven't heard of yet. It goes like this:

You are blindfolded, and then scanned, and ninety-nine atom-for-atom copies of you are made, each blindfolded, meaning a hundred in all. To each one it is explained (and for the sake of the thought experiment, you can take this explanation as true (p is approx. 1)) that earlier, a fair coin was flipped. If it came down heads, ninety-nine out of a hundred small rooms were painted red, and the remaining one was painted blue. If it came down tails, ninety-nine out of a hundred small rooms were painted blue, and the remaining one was painted red. Now, put yourself in the shoes of just one of these copies. When asked what the probability is that the coin came down tails, you of course answer “.5”. It is now explained to you that each of the hundred copies is to be inserted into one of the hundred rooms, and will then be allowed to remove their blindfolds. You feel yourself being moved, and then hear a voice telling you you can take your blindfold off. The room you are in is blue. The voice then asks you for your revised probability estimate that the coin came down tails.

It seems at first (or maybe at second, depending on how your mind works) that the answer ought to be .99 – ninety-nine out of the hundred copies will, if they follow the rule “if red, then heads, if blue then tails”, get the answer right.

However, it also seems like the answer ought to be .5, because you have no new information to update on. You already knew that at least one copy of you would, at this time, remove their blindfold and find themselves in a blue room. What have you discovered that should allow you to revise your probability of .5 to .99?

And the answer, of course, cannot be both .5 and .99. Something has to give.

Is there something basically quite obvious that I'm missing that will resolve this problem, or is it really the mean sonofabitch it appears to be? As it goes, I'm inclined to say the probability is .5 – I'm just not quite sure why. Thoughts?
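A Monte Carlo sketch of the per-copy frequency is easy to write; note that this only computes which fraction of blue-room awakenings occur in tails-worlds, which is exactly the point in dispute (whether that frequency is the right credence for any single copy):

```python
import random

def simulate(trials=100_000, seed=0):
    """Fraction of blue-room awakenings for which the coin came down tails."""
    rng = random.Random(seed)
    blue_copies = 0
    blue_and_tails = 0
    for _ in range(trials):
        tails = rng.random() < 0.5
        blue_rooms = 99 if tails else 1   # copies that wake up in a blue room
        blue_copies += blue_rooms
        if tails:
            blue_and_tails += blue_rooms
    return blue_and_tails / blue_copies

print(simulate())  # close to 0.99
```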

Which fields of learning have clarified your thinking? How and why?

12 [deleted] 11 November 2011 01:04AM

Did computer programming make you a clearer, more precise thinker? How about mathematics? If so, what kind? Set theory? Probability theory?

Microeconomics? Poker? English? Civil Engineering? Underwater Basket Weaving? (For adding... depth.)

Anything I missed?

Context: I have a palette of courses to dab onto my university schedule, and I don't know which ones to choose. This much is for certain: I want to come out of university as a problem-solving beast. If there are fields of inquiry whose methods easily transfer to other fields, it is those fields that I want to learn in, at least initially.

Rip apart, Less Wrong!

Marsh et al. "Serotonin Transporter Genotype (5-HTTLPR) Predicts Utilitarian Moral Judgments"

10 Jack 07 October 2011 07:08AM

The whole paper is here.  In short, they found a genotype that predicts people's response to the original trolley problem:

A trolley (i.e. in British English a tram) is running out of control down a track. In its path are five people who have been tied to the track by a mad philosopher. Fortunately, you could flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch or do nothing?

Participants with one kind of serotonin transporter genotype (LL homozygotes) judged flipping the switch to be better than a morally neutral action. Participants with the other kind (S-allele carriers) judged flipping the switch to be no better than a morally neutral action. The groups responded equally to the "fat man" scenario, both rejecting the 'push' option.


Some quotes:

We hypothesized that 5-HTTLPR genotype would interact with intentionality in respondents who generated moral judgments. Whereas we predicted that all participants would eschew intentionally harming an innocent for utilitarian gains, we predicted that participants' judgments of foreseen but unintentional harm would diverge as a function of genotype. Specifically, we predicted that LL homozygotes would adhere to the principle of double effect and preferentially select the utilitarian option to save more lives despite unintentional harm to an innocent victim, whereas S-allele carriers would be less likely to endorse even unintentional harm. Results of behavioral testing confirmed this hypothesis.

Participants in this study judged the acceptability of actions that would unintentionally or intentionally harm an innocent victim in order to save others' lives. An analysis of variance revealed a genotype × scenario interaction, F(2, 63) = 4.52, p = .02. Results showed that, relative to long allele homozygotes (LL), carriers of the short (S) allele showed particular reluctance to endorse utilitarian actions resulting in foreseen harm to an innocent individual. LL genotype participants rated perpetrating unintentional harm as more acceptable (M = 4.98, SEM = 0.20) than did SL genotype participants (M = 4.65, SEM = 0.20) or SS genotype participants (M = 4.29, SEM = 0.30).

...

The results indicate that inherited variants in a genetic polymorphism that influences serotonin neurotransmission influence utilitarian moral judgments as well. This finding is interpreted in light of evidence that the S allele is associated with elevated emotional responsiveness.

 

Omega can be replaced by amnesia

15 Bongo 26 January 2011 12:31PM

Let's play a game. Two times, I will give you an amnesia drug and let you enter a room with two boxes inside. Because of the drug, you won't know whether this is the first time you've entered the room. On the first time, both boxes will be empty. On the second time, box A contains $1000, and box B contains $1,000,000 iff you took only box B the first time. You're in the room: do you take both boxes, or only box B?

This is equivalent to Newcomb's Problem in the sense that any strategy does equally well on both, where by "strategy" I mean a mapping from info to (probability distributions over) actions.
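The payoff equivalence can be checked directly. Because amnesia makes the two visits indistinguishable, a deterministic strategy is a single action applied on both visits, and the total winnings match Newcomb's problem with a perfect predictor:

```python
def amnesia_game(strategy):
    """strategy: 'one' (take only box B) or 'two' (take both boxes).
    Amnesia makes the visits indistinguishable, so the same action is
    taken both times; only the second visit yields money."""
    first_took_only_b = (strategy == 'one')   # visit 1: both boxes empty anyway
    # Visit 2: A holds $1,000; B holds $1,000,000 iff only B was taken on visit 1.
    box_a = 1_000
    box_b = 1_000_000 if first_took_only_b else 0
    return box_b if strategy == 'one' else box_a + box_b

print(amnesia_game('one'), amnesia_game('two'))  # 1000000 1000
```

These are exactly the perfect-predictor Newcomb payoffs: $1,000,000 for the one-boxing strategy, $1,000 for the two-boxing strategy.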

I suspect that any problem with Omega can be transformed into an equivalent problem with amnesia instead of Omega.

Does CDT return the winning answer in such transformed problems?

Discuss.

 

Not exactly the trolley problem

3 NancyLebovitz 24 October 2010 02:21PM

An unusual incident. Are you obligated to be on the side of the plane with the crocodile if the other passengers are overbalancing the plane? To push other passengers over to the side with the crocodile?

Everyday Questions Wanting Rational Answers

5 Relsqui 05 October 2010 06:04AM

I'm working on a list of question types which come up frequently in day-to-day life but which I haven't yet found a reliable, rational way to answer. Here are some examples, including summaries of any progress made in the comments.

continue reading »