[SEQ RERUN] Sympathetic Minds

2 MinibearRex 06 February 2013 05:59AM

Today's post, Sympathetic Minds, was originally published on 19 January 2009. A summary (taken from the LW wiki):

 

Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action - for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like "boredom", the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Most such agents would regard any agents in its environment as a special case of complex systems to be modeled or optimized; it would not feel what they feel.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was In Praise of Boredom, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[LINK] The Cryopreservation of Kim Suozzi

16 [deleted] 01 February 2013 05:39AM

http://www.alcor.org/blog/?p=2716

With the inevitable end in sight – and with the cancer continuing to spread throughout her brain – Kim made the brave choice to refuse food and fluids. Even so, it took around 11 days before her body stopped functioning. Around 6:00 am on Thursday January 17, 2013, Alcor was alerted that Kim had stopped breathing. Because Kim’s steadfast boyfriend and family had located Kim just a few minutes away from Alcor, Medical Response Director Aaron Drake arrived almost immediately, followed minutes later by Max More, then two well-trained Alcor volunteers. As soon as a hospice nurse had pronounced clinical death, we began our standard procedures. Stabilization, transport, surgery, and perfusion all went smoothly. A full case report will be forthcoming.

Previously on LW: Aug 18, Aug 25, Aug 27, Jan 22.

Thoughts on designing policies for oneself

74 John_Maxwell_IV 28 November 2012 01:27AM

Note: This was originally written in relation to this rather scary comment of lukeprog's on value drift.  I'm now less certain that operant conditioning is a significant cause of value drift (leaning towards near/far type explanations), but I decided to share my thoughts on the topic of policy design anyway.


Several years ago, I had a reddit problem.  I'd check reddit instead of working on important stuff.  The more I browsed the site, the shorter my attention span got.  The shorter my attention span got, the harder it was for me to find things that were enjoyable to read.  Instead of being rejuvenating, I found reddit to be addictive, unsatisfying, and frustrating.  Every time I thought to myself that I really should stop, there was always just one more thing to click on.

So I installed LeechBlock and blocked reddit at all hours.  That worked really well... for a while.

Occasionally I wanted to dig up something I remembered seeing on reddit.  (This wasn't always bad--in some cases I was looking up something related to stuff I was working on.)  I tried a few different policies for dealing with this.  All of them basically amounted to inconveniencing myself in some way or another whenever I wanted to dig something up.

After a few weeks, I no longer felt the urge to check reddit compulsively.  And after a few months, I hardly even remembered what it was like to be an addict.

However, my inconvenience barriers were still present, and they were, well, inconvenient. It really was pretty annoying to make an entry in my notebook describing what I was visiting for, and to start up a different browser, just to check something. I figured I could always turn LeechBlock on again if necessary, so I removed my self-imposed barriers. And I slid back into addiction.

After a while, I got sick of being addicted again and decided to do something about it (again).  Interestingly, I forgot my earlier thought that I could just turn LeechBlock on again easily.  Instead, thinking about LeechBlock made me feel hopeless because it seemed like it ultimately hadn't worked.  But I did try it again, and the entire cycle then finished repeating itself: I got un-addicted, I removed LeechBlock, I got re-addicted.

This may seem like a surprising lack of self-awareness.  All I can say is: Every second my brain gathers tons of sensory data and discards the vast majority of it.  Narratives like the one you're reading right now don't get constructed on the fly automatically.  Maybe if I had been following orthonormal's advice of keeping and monitoring a record of life changes attempted, I would've thought to try something different.


Random LW-parodying Statement Generator

59 Armok_GoB 11 September 2012 07:57PM

So, I was looking at this, and then suddenly this thing happened.

EDIT:

New version! I updated the link above to it as well. Added LOADS and LOADS of new content, although I'm not entirely sure it's actually more fun (my guess is there's more total fun due to variety, but that it's more diluted).

I ended up working on this basically the entire day today, and implemented practically all the ideas I have so far, except for some grammar issues that'd require disproportionately much work. So unless there are loads of suggestions, or my brain comes up with lots of new ideas over the next few days, this may be the last version for a while; I may call it beta and ask for spell-checking. Still alpha as of writing this, though.

Since there were some close calls already, I'll restate this explicitly: it'd be easier for everyone if there weren't any forks for at least a few more days, even ones just for spell-checking. After that, or once I move this to beta, feel more than free to do whatever you want.

Thanks to everyone who commented! ^_^

old Source, old version, latest source

Credits: http://lesswrong.com/lw/d2w/cards_against_rationality/ , http://lesswrong.com/lw/9ki/shit_rationalists_say/ , various people commenting on this article with suggestions, and random people on the bay12 forums who, ages ago, helped me with the engine this is a descendant of.

Punctuality - Arriving on Time and Math

81 Xachariah 03 May 2012 01:35AM

In hindsight, this post seems incredibly obvious. The meat of it already exists in sayings we all know we ought to listen to: "Always arrive 10 minutes earlier than you think early is," "If you arrive on time, then you're late," or "Better three hours too soon than one minute too late." Yet even with these sayings, I never trusted them, nor did I arrive on time. I'd miss deadlines, show up late, and just be generally tardy. The reason is that I never truly understood what it took to arrive on time until I grokked the math of it. So, while this may be remedial reading for most of you, I'm posting it because maybe there's someone out there who missed the same obvious thing I missed.

 

 

Statistical Distributions

Everyone here understands that our universe is controlled and explained by math. Math describes how heavenly bodies move. Math describes how our computers run. Math describes how other people act in aggregate. Wait a second, something's not right with that statement... "other people". The way it comes out, it's natural to think that math controls the way other people act, but not the way I act. Intellectually, I am aware that I am not a special snowflake exempt from the laws of math. While I had managed to propagate this thought far enough to crush my belief in libertarian free will, I hadn't propagated it fully through my mind. Specifically, I hadn't realized I could also use math to describe my own actions and reap the benefit of understanding them mathematically. I was still arriving late and missing deadlines, and nothing seemed to help.

 

But wait, I'm a rationalist! I know all about the planning fallacy; I know to take the outside view! That's enough to save me, right? Well, not quite. It seemed I had missed one last piece of the puzzle... bell curves.

 

When I go to work every day, the time from when I start doing nothing but getting ready for work until the time I actually arrive (I'll just call this prep time) usually takes 45 minutes, but sometimes it takes more or less. Weirdly and crazily enough, if you plot all the prep times on a graph, the shape ends up looking roughly like a bell. Well, that's funny. Math is for other people, but my behavior appears like it can be described statistically. Some days I have deviations from the normal routine that help me arrive faster, while other days have things that slow me down. Some happen more often, some less often. If I were describable by math, I could almost call these things standard deviations: days with almost zero traffic take 1 standard deviation less, days when I can't find my car keys take 1 standard deviation more, days when I realize I'd be late and skip showering take 2 standard deviations less, and days when there is a terrible accident on the freeway require 2 or 3 standard deviations more. To put it in other words, my prep time is a bell curve, and I've got 1-sigma and 2-sigma (and occasionally 3-sigma) events speeding me up and slowing me down.

 

This holds true for more than just going to work. The time-until-completion of everything can be described this way: project completion, homework, getting to the airport, the duration of foreplay and sex. Everything. It's not always a bell curve, but it is a probability distribution over completion times, and that can give useful insights.

 

Starting 'On Time' Means You Won't be On Time

What do we gain by understanding that our actions are described by a probability distribution?  The first and most important take away is this: If you only allocate the exact amount of time to do something, you'll be late 50% of the time.  I'm going to repeat it and italicize because I think it's that important of a point.  If you only allocate the exact amount of time to do something, you'll be late 50% of the time.  That's the way bell curves work.
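That claim is easy to sanity-check with a quick Monte Carlo sketch, assuming prep time is normally distributed (the 45-minute mean and 10-minute standard deviation below are illustrative numbers, not measured data):

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible
MEAN, SD, TRIALS = 45.0, 10.0, 100_000

# Simulate many mornings where exactly the average prep time is allocated.
# A morning is "late" whenever the sampled prep time exceeds the allocation.
late = sum(1 for _ in range(TRIALS) if random.gauss(MEAN, SD) > MEAN)
print(f"Late on {late / TRIALS:.1%} of mornings")  # hovers around 50%
```

By symmetry of the bell curve, half the sampled mornings land above the mean, so allocating the mean guarantees roughly 50% lateness.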

 

I know I've heard jokes about how 90% of the population has above-average children, but it wasn't until I really looked at the math of my behavior that I realized I was doing the exact same thing. I'd say, "Oh, it takes me 45 minutes on average to get to work, so I'll leave at 7:15." Yet I never noticed that I was completely ignoring that half the time it would take longer than average. So half the time, I'd end up pressed for time and have to skip shaving (or something), or I'd end up late. I was terribly unpunctual until I realized that I had to arrive early to always arrive on time. "If you arrive on time, then you are late." Hmm. You win this one, folk wisdom.

 

Still, the question remained: how much earlier would I have to start to never be late? The answer lay in bell curves.

 

 

Acceptable Lateness and Standard Deviation

Looking at time requirements as a bell curve implies another thing: One can never completely eliminate all lateness; the only option is to make a choice about what probability of lateness is acceptable.  A person must decide what lateness ratio they're willing to take, and then start prepping that many standard deviations beforehand.  And, despite what employers say, 0% is not a probability.

If my prep time averages 45 minutes with a standard deviation of 10 minutes then that means...

  • Starting 45 minutes beforehand will force me to be late or miss services (e.g., shaving) around 50% of the time, or about 10 workdays a month.
  • Starting 55 minutes beforehand will force me to be late or miss services around 16% of the time, or about 3 workdays a month.
  • Starting 65 minutes beforehand will force me to be late or miss services around 2.3% of the time, or about 1 workday every other month.

That's really good risk reduction for a small amount of time spent. (NB: remember that averages are dangerous little things. Taking this to a meta level, consider that being late to work about 3 times a month on average isn't helpful if you arrive late only once the first month, then get fired the next month when you arrive late 5 times. Hence, "Always arrive 10 minutes earlier than you think early is." God, I hate folk wisdom, especially when it's right.)
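Those percentages fall straight out of the normal distribution's tail. A minimal check using only the standard library (the 45/10 figures are the post's illustrative prep-time parameters):

```python
from math import erf, sqrt

MEAN, SD = 45.0, 10.0  # illustrative prep-time mean and standard deviation

def p_late(minutes_allocated: float) -> float:
    """Probability that prep time exceeds the allocated time, under a normal model.

    1 - CDF of the normal distribution, written via the error function.
    """
    z = (minutes_allocated - MEAN) / SD
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

for alloc in (45, 55, 65):
    print(f"start {alloc} min early -> late {p_late(alloc):.1%}")
# 45 -> 50.0%, 55 -> 15.9%, 65 -> 2.3%
```

Allocating the mean gives the 50% figure, one extra standard deviation gives ~16%, and two give ~2.3%, matching the bullets above.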

 

The risk level you're comfortable with dictates how much padding time you need. For job interviews, I'm only willing to arrive late 1 time in 1,000, so I now prepare 3 standard deviations early. For first dates, I'm willing to miss about 5%. For dinners with the family, I'm okay with being late half the time. It feels similar to the algorithm I used before, a sort of ad-hoc thing where I'd prepare earlier for important things. The main difference is that now I can quantify the risk I'm assuming when I procrastinate. That makes each procrastination more concrete, and drastically reduces the chance that I'll be willing to make those tradeoffs. Instead of being willing to read Less Wrong for 10 more minutes in exchange for "oh, I might have to rush", I can now see that it would increase my chance of being late from 16% to 50%, which is flatly unacceptable. Viewing procrastination in terms of the latter tradeoff makes it much easier to get myself moving.
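Going from an acceptable lateness probability back to a start time is just the inverse of the normal CDF. A sketch with the standard library's `statistics.NormalDist` (again using the post's illustrative 45/10 parameters; the event labels are made up for the example):

```python
from statistics import NormalDist

prep = NormalDist(mu=45.0, sigma=10.0)  # illustrative prep-time distribution

def minutes_to_allocate(max_late_prob: float) -> float:
    """Minutes of prep time to allocate so that P(late) <= max_late_prob."""
    return prep.inv_cdf(1.0 - max_late_prob)

for label, p in [("job interview", 0.001), ("first date", 0.05), ("family dinner", 0.5)]:
    print(f"{label}: start {minutes_to_allocate(p):.0f} minutes early")
# job interview -> ~76 min (about 3 sigma), first date -> ~61 min, family dinner -> 45 min
```

The 1-in-1,000 interview threshold lands at roughly 3 standard deviations above the mean, which is exactly the "prepare 3 standard deviations early" rule in the text.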

 

The last saying is "Better three hours too soon than one minute too late." I'm glad that at least that one's wrong; I'm sure Umesh would have some stern words for it. The key to arriving on time is locating your acceptable risk threshold and making an informed decision about how much risk you are willing to take.

 

Summary

The time it takes you to complete any task is (usually) described by a bell curve. How much time you think you'll take is a lie, and not just because of the planning fallacy. Even if you do the sciency thing and take the outside view, it's still not enough to keep you from getting fired or showing up late to your interview. To consistently show up on time, you must incorporate padding time.

 

So I've got a new saying: "If you wish to be late only 2.3% of the time, you must start getting ready at least two standard deviations before the average prep time you have needed historically." I wish my mom had told me this one. It's so much easier to understand than all those other sayings!


(Also, this is my first actual article-thingy, so any comments or suggestions are welcome.)

[LINK] '3 Secrets of Wise Decision Making'

1 Voltairina 20 April 2012 08:53AM

Personal Decision Making (textbook about applied decision-making, for the class I'm taking right now)

The book struck me as interesting because the author is employed at the university I'm attending (so if I get stuck, it's possible I could go and talk to him myself), and because it's based on experimental psychology and the psychology of decision-making research.

The "3 Secrets" the book talks about are various techniques for addressing and recognizing biases, recognizing and overcoming failures of creativity, and developing the courage necessary to make and commit to rational choices. It covers various techniques for dealing with each of these dimensions of decision making, such as forced fit and stimulus variation for creativity.

From his blurb at the uni website:

"Dr. Anderson has been teaching at Portland State University since 1968. He received his B.A. in Psychology from Stanford University in 1957 and his Ph.D. in Experimental Psychology from The Johns Hopkins University in 1963. His current interests are in applications of decision psychology and decision analysis to personal decision making and public policy decision making."

From the book sleeve:

"Barry F. Anderson is professor emeritus of Psychology at Portland State University. He teaches courses on Personal Decision Making,
Decision Psychology, Conflict Resolution, and Ethical Decision Making. He also publishes in the areas of cognitive psychology and judgment and decision making and consults on personal decisions and on public and private policy decisions. In The Three Secrets of Wise Decision Making he uses his many years of varied experience to bring the technology of rational decision making to the lay reader in a manner that is understandable, engaging, and readily applicable to real-life decision making."

from the website:

"As the world has become more complex and information more abundant, decisions have become more difficult. As the pace of change and the range of choice have increased, decisions have to be made more often. Yet most of us still make decisions with no more knowledge about decision processes than our ancestors had in a simpler age, hundreds of years ago.

    Mathematicians, economists, psychologists, and practitioners have developed a variety of powerful and easily applied tools for decision making. Evidence is accumulating that better decision processes lead to better outcomes and that unaided human decision processes are not good enough for many decisions. More to the present point, evidence is also accumulating that learning better decision processes can make people better decision makers in their daily lives. The Three Secrets of Wise Decision Making brings the best of the new methods to the intelligent reader.

    The Three Secrets is designed expressly to help people make better decisions. It has been repeatedly tested in a course on personal decision making. The approach of the book is unabashedly practical. Except for portions of the second chapter, the emphasis is consistently on what to do. What the second chapter does is provide a brief overview of basic cognitive processes and the ways in which they tend to limit decision quality and also a brief explanation of the basic decision aids and the ways in which each supplements basic cognitive processes to enhance rationality, creativity, or judgment—the "three secrets". Some understanding of why the techniques are needed and how they work should enable the reader to apply them with greater effectiveness and satisfaction.

    The Three Secrets is organized around the Decision Ladder, a structured array of techniques to suit all decision problems and all decision makers. The Ladder extends from largely intuitive approaches, at the bottom, to decision trees, at the top. The key rung on the Ladder is the decision table and its variants: fact tables, plusses-and-minuses value tables, and 1-to-10 value tables. In the last chapter, the decision tree is introduced as a more sophisticated way of dealing with risky decisions and sequences of decisions. It is recommended that the reader start at the bottom of the Decision Ladder when beginning work on any decision problem and work up only so far as necessary. This keeps the process of decision making from becoming more complicated than would be appropriate for either the decision problem or the decision maker.

    The Three Secrets is richly provided with examples taken from life. One of the examples, Amelia’s career decision, runs through the entire book, adding human interest and conceptual continuity."

 

Against the Bottom Line

5 gRR 21 April 2012 10:20AM

In the spirit of contrarianism, I'd like to argue against The Bottom Line.

As I understand the post, its idea is that a rationalist should never "start with a bottom line and then fill out the arguments".

It sounds neat, but I think it is not psychologically feasible. I find that whenever I actually argue, I always have the conclusion already written. Without it, it is impossible to have any direction, and an argument without any direction does not go anywhere.

What actually happens is:

  1. I arrive at a conclusion, intuitively, as a result of a process which is usually closed to introspection.
  2. I write the bottom line, and look for a chain of reasoning that supports it.
  3. I check the argument and modify/discard it or parts of it if any are found defective.

It is at point 3 that the biases really strike. Motivated Stopping makes me stop checking too early, and Motivated Continuation makes me look for better arguments when defective ones are found for the conclusion I seek, but not for alternatives, resulting in Straw Men.

How can we get more and better LW contrarians?

58 Wei_Dai 18 April 2012 10:01PM

I'm worried that LW doesn't have enough good contrarians and skeptics, people who disagree with us or like to find fault in every idea they see, but do so in a way that is often right and can change our minds when they are. I fear that when contrarians/skeptics join us but aren't "good enough", we tend to drive them away instead of improving them.

For example, I know a couple of people who occasionally had interesting ideas that were contrary to the local LW consensus, but were (or appeared to be) too confident in their ideas, both good and bad. Both people ended up being repeatedly downvoted and left our community a few months after they arrived. This must have happened more often than I have noticed (partly evidenced by the large number of comments/posts now marked as written by [deleted], sometimes with whole threads written entirely by deleted accounts). I feel that this is a waste that we should try to prevent (or at least think about how we might). So here are some ideas:

  • Try to "fix" them by telling them that they are overconfident and give them hints about how to get LW to take their ideas seriously. Unfortunately, from their perspective such advice must appear to come from someone who is themselves overconfident and wrong, so they're not likely to be very inclined to accept the advice.
  • Create a separate section with different social norms, where people are not expected to maintain the "proper" level of confidence and niceness (on pain of being downvoted), and direct overconfident newcomers to it. Perhaps through no-holds-barred debate we can convince them that we're not as crazy and wrong as they thought, and then give them the above-mentioned advice and move them to the main sections.
  • Give newcomers some sort of honeymoon period (marked by color-coding of their usernames or something like that), where we ignore their overconfidence and associated social transgressions (or just be extra nice and tolerant towards them), and take their ideas on their own merits. Maybe if they see us take their ideas seriously, that will cause them to reciprocate and take us more seriously when we point out that they may be wrong or overconfident.
I guess these ideas sounded better in my head than written down, but maybe they'll inspire other people to think of better ones. And it might help a bit just to keep this issue in the back of one's mind and occasionally think strategically about how to improve the person you're arguing against, instead of only trying to win the particular argument at hand or downvoting them into leaving.
P.S., after writing most of the above, I saw this post:
OTOH, I don’t think group think is a big problem. Criticism by folks like Will Newsome, Vladimir Slepnev and especially Wei Dai is often upvoted. (I upvote almost every comment of Dai or Newsome if I don’t forget it. Dai makes always very good points and Newsome is often wrong but also hilariously funny or just brilliant and right.) Of course, folks like this Dymytry guy are often downvoted, but IMO with good reason.
To be clear, I don't think "group think" is the problem. In other words, it's not that we're refusing to accept valid criticisms, but more like our group dynamics (and other factors) cause there to be fewer good contrarians in our community than is optimal. Of course what is optimal might be open to debate, but from my perspective, it can't be right that my own criticisms are valued so highly (especially since I've been moving closer to the SingInst "inner circle" and my critical tendencies have been decreasing). In the spirit of making oneself redundant, I'd feel much better if my occasional voice of dissent is just considered one amongst many.

Waterfall Ethics

9 calef 30 January 2012 09:14PM

I recently read Scott Aaronson's "Why Philosophers Should Care About Computational Complexity" (http://arxiv.org/abs/1108.1791), which has a wealth of interesting thought-food. Having chewed on it for a while, I've been thinking through some of the implications and commitments of a computationalist worldview, which I don't think is terribly controversial around here (there's a brief discussion in the paper about the Waterfall Argument, and it's worth reading if you're unfamiliar with either it or the Chinese Room thought experiment).

That said, suppose we subscribe to a computationalist worldview. Further suppose that we have a simulation of a human running on some machine. Even further suppose that this simulation is torturing the human through some grisly means.

By our supposed worldview, our torture simulation is reducible to some formal machine, say a one-tape Turing machine. This one-tape Turing machine representation, then, must have some initial state.

 

My first question: Is more 'harm' done in actually carrying out the computation of the torture simulation on our one-tape Turing machine than in simply writing out the initial state of the torture simulation on the Turing machine's tape?

 

The computation, and thus the simulation itself, is uniquely specified by that initial encoding. My gut feeling here is that no, no more harm is done in actually carrying out the computation, because the 'torture' that occurs is a structural property of the encoding. This might lead to perhaps ill-formed questions like "But when does the 'torture' actually 'occur'?", for some definition of those words. But, like I said, I don't think that question makes sense; it is more indicative of the difficulty of thinking about our subjective experience as something reducible to deterministic processes than it is a criticism of my answer.

If one thinks more harm is done in carrying out the simulation, then is twice as much harm done by carrying out the simulation twice?  Does the representation of the simulation matter?  If I go out to the beach and arrange sea shells in a way that mimics the computation of the torture, has the torture 'occurred'?

 

My second question:  If the 'harm' occurring in the simulation is uniquely specified by the initial state of the Turing machine, how are we to assign moral weight (or positive/negative utility, if you prefer) to actually carrying out this computation, or even the existence of the initial state?

 

As computationalists, we agree that the human being represented by the one-tape Turing machine is feeling pain just as real as ours. But (correct me if I'm wrong), it seems we're committed to the idea that the 'harm' occurring in the torture simulation is a property of the initial state, and that this initial state exists independent of us actually enumerating it. That is, there is some space of all possible simulations of a human as represented by encodings on a one-tape Turing machine.

Is the act of specifying one of those states 'wrong'? Does the act of recognizing such a space of possible encodings realize all of them, and thus cause an uncountable number of tortures and pleasures?

 

I don't think so. That just seems silly. But this also seems to rob a simulated human of any moral worth, which is kind of contradictory: we recognize that the pain a simulated human feels is real, yet we don't assign any utility to it. Again, I don't think my answers are *right*; they were just my initial reactions. Regardless of how we answer either of my questions, we seem committed to strange positions.

Initially, the whole exercise was about looking for a way to dodge the threats of some superintelligent malevolent AI simulating the torture of copies of me. I don't think I've actually dodged that threat, but it was interesting to think about.

[LINK] "We have a new form of knowing."

6 [deleted] 05 January 2012 12:36AM

Interesting corollary to Tyler Cowen's TED talk:

Models this complex -- whether of cellular biology, the weather, the economy, even highway traffic -- often fail us, because the world is more complex than our models can capture. But sometimes they can predict accurately how the system will behave. At their most complex these are sciences of emergence and complexity, studying properties of systems that cannot be seen by looking only at the parts, and cannot be well predicted except by looking at what happens.

[...]

With the new database-based science, there is often no moment when the complex becomes simple enough for us to understand it. The model does not reduce to an equation that lets us then throw away the model. You have to run the simulation to see what emerges. For example, a computer model of the movement of people within a confined space who are fleeing from a threat--they are in a panic--shows that putting a column about one meter in front of an exit door, slightly to either side, actually increases the flow of people out the door. Why? There may be a theory or it may simply be an emergent property. We can climb the ladder of complexity from party games to humans with the single intent of getting outside of a burning building, to phenomena with many more people with much more diverse and changing motivations, such as markets. We can model these and perhaps know how they work without understanding them. They are so complex that only our artificial brains can manage the amount of data and the number of interactions involved.

[...]

Model-based knowing has many well-documented difficulties, especially when we are attempting to predict real-world events subject to the vagaries of history; a Cretaceous-era model of that era's ecology would not have included the arrival of a giant asteroid in its data, and no one expects a black swan. Nevertheless, models can have the predictive power demanded of scientific hypotheses. We have a new form of knowing.
