Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.
I told an intelligent, well-educated friend about Less Wrong. She googled it, got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking.", and gave up immediately because she'd never heard of any of the biases.
While hers might not be the best possible attitude, I can't see that we win anything by driving people away with obscure language.
Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."
I wrote an article about the process of signing up for cryo since I couldn't find any such accounts online. If you have questions about the sign-up process, just ask.
A few months ago, I signed up for Alcor's brain-only cryopreservation. The entire process took me 11 weeks from the day I started till the day I received my medical bracelet (the thing that’ll let paramedics know that your dead body should be handled by Alcor). I paid them $90 for the application fee. From now on, every year I’ll pay $530 for Alcor membership fees, and also pay $275 for my separately purchased life insurance.
As many of you may be aware, the UK general election took place yesterday, resulting in a surprising victory for the Conservative Party. The pre-election opinion polls predicted that the Conservatives and Labour would be roughly equal in terms of votes cast, with perhaps a small Conservative advantage leading to a hung parliament; instead the Conservatives got 36.9% of the vote to Labour's 30.4%, and won the election outright.
There has already been a lot of discussion about why the polls were wrong, from methodological problems to incorrect adjustments. But perhaps more interesting is the possibility that the polls were right! For example, Survation did a poll on the evening before the election, which predicted the correct result (Conservatives 37%, Labour 31%). However, that poll was never published because the results seemed "out of line." Survation didn't want to look silly by breaking with the herd, so they just kept quiet about their results. Naturally this makes me wonder about the existence of other unpublished polls with similar readings.
This seems to be a case of two well-known problems colliding with devastating effect. Conformity bias caused Survation to ignore the data and go with what they "knew" to be the case (for which they have now paid dearly). Then the file drawer effect meant that the generally available data was skewed, misleading third parties. The scientific thing to do is to publish all data, including "outliers," both so that information can change over time rather than stay anchored, and to avoid artificially compressing the variance. Interestingly, the exit poll, whose methodology was agreed beforehand and whose publication was committed to in advance, was basically right.
This is now the third time in living memory that opinion polls have been embarrassingly wrong about the UK general election. Each time this has led to big changes in the polling industry. I would suggest that one important scientific improvement is for polling companies to announce the methodology of a poll, and any adjustments to be made, before the poll takes place, and to commit to publishing all polls they carry out. Once this became the norm, data from any polling company that didn't follow this practice would be rightly seen as unreliable by comparison.
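The herding and file-drawer dynamics described above can be illustrated with a toy simulation (Python; all the numbers are made up for illustration and do not come from any actual polling data). Polls are unbiased samples of the true vote share, but any poll landing too far from the published herd gets quietly shelved, so the published average is dragged toward the herd and its variance is artificially compressed:

```python
import random

random.seed(0)

TRUE_SHARE = 0.37       # true vote share (hypothetical)
N_POLLS = 1000          # hypothetical polls conducted
SAMPLE_SIZE = 1000      # respondents per poll
HERD_MEAN = 0.34        # where the published herd currently sits
SUPPRESS_MARGIN = 0.02  # polls further than this from the herd go in the file drawer

def run_poll():
    """Simulate one unbiased poll: fraction of respondents backing the party."""
    hits = sum(random.random() < TRUE_SHARE for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

all_polls = [run_poll() for _ in range(N_POLLS)]
# File drawer effect: only polls close to the herd get published.
published = [p for p in all_polls if abs(p - HERD_MEAN) <= SUPPRESS_MARGIN]

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean of all polls:       {mean(all_polls):.3f}")   # close to the true share
print(f"mean of published polls: {mean(published):.3f}")   # pulled toward the herd
print(f"suppressed: {N_POLLS - len(published)} of {N_POLLS} polls")
```

The full set of polls recovers the true share; the published subset is biased toward the herd, which is exactly the skew that misled third parties.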
You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.
As many of you know, I am (or was) running a twice-monthly Rationality: From AI to Zombies reading group. One of the things I wanted to include in each reading group post was a collection of contrasting views. To research such views I've found myself listening during my commute to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and by people I feel are doing "ideologically aligned" work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or views I had generally been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was the philosophical thinkers who demonstrated the largest and most costly mistakes. On the other hand, de Grey and others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to make epistemic mistakes of significant consequence.
Philosophy as the anti-science...
What sort of mistakes? Most often, reasoning by analogy. To cite a specific example, one of the core underlying assumptions of the singularity interpretation of super-intelligence is that, just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? Were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory, may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just as string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.
This post is not about the singularity interpretation of super-intelligence—that was merely my choice of an illustrative example of a category of mistakes that are too often made by those with a background in philosophy rather than the empirical sciences: reasoning by analogy instead of building and analyzing predictive models. The fundamental problem is that an analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example, or under what conditions it may or may not hold true in a different situation.
A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when it is that you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is that you build models for the domain in question and empirically test them.
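The point about the uncertainty in a calculated probability can be made concrete with a minimal sketch (my own illustration, using a simple Beta-posterior model that the text does not prescribe): two estimates can share the same point value yet deserve very different levels of trust, depending on how much evidence backs them.

```python
from math import sqrt

def beta_mean_sd(successes, failures):
    """Mean and standard deviation of a Beta(successes+1, failures+1) posterior
    for an unknown probability, given observed successes and failures."""
    a, b = successes + 1, failures + 1
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Same observed frequency (50%), very different amounts of evidence:
m_small, sd_small = beta_mean_sd(1, 1)       # 2 observations
m_large, sd_large = beta_mean_sd(500, 500)   # 1000 observations

print(f"2 trials:    p = {m_small:.2f} +/- {sd_small:.3f}")
print(f"1000 trials: p = {m_large:.2f} +/- {sd_large:.3f}")
```

Both estimates say "50%", but the first carries so much uncertainty that, in the spirit of the paragraph above, the data supports no conclusion; only more evidence, gathered empirically, shrinks the spread.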
The lens that sees its own flaws...
Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. Lip service is paid to empiricism throughout, but in all the "applied" sequences relating to quantum physics and artificial intelligence it appears to be forgotten. We get instead definitive conclusions drawn from thought experiments alone. It is perhaps not surprising that these sequences seem the most controversial.
I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that while annoying this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.
And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.
I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community, as a continuation of the sequences, may result in more harm than good. The anti-humanitarian behaviors I observe in this community are not the result of initial conditions but of the process itself.
How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.
A note about effective altruism…
One shining light of goodness in this community is the focus on effective altruism—doing the most good to the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.
Existential risk reduction, the argument goes, trumps all forms of charitable work because reducing the chance of extinction by even a small amount has far more expected utility than would accomplishing all other charitable works combined. The problem lies in the likelihood of extinction, and in the actions selected to reduce existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetual suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) is having on reducing that risk, if any.
This is best explored by an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (not human-caused) existential risk that we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we recognized what we did not know about the phenomenon: What were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impact probe and monitored the effect it had on the comet's orbit, and we have on the drawing board probes that will use gravitational mechanisms to move their targets. In short, we identified what it is that we don't know and sought to resolve those uncertainties.
How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we knew more about how such agents construct their thought models, and, relatedly, what languages are used to construct their goal systems. We could also stand to benefit from more practical information (experimental data) about the ways in which AI boxing works and the ways in which it does not, and how much that depends on the structure of the AI itself. Thankfully there is an institution doing that kind of work: the Future of Life Institute (not MIRI).
Where should I send my charitable donations?
Aubrey de Grey's SENS Research Foundation.
100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.
If you feel you want to spread your money around, here are some non-profits which I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:
- Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group working on the long-term Drexlerian vision of molecular machines, and they publish their research online.
- Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
- B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.
I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.
Addendum regarding unfinished business
I will no longer be running the Rationality: From AI to Zombies reading group, as I am no longer in good conscience able or willing to host it, or to participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks for others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.
EDIT: Obviously I'll stick around long enough to answer questions below :)
Here's an insight into what life is like from a stationery reference frame.
Paperclips were her raison d’être. She knew that ultimately it was all pointless, that paperclips were just ill-defined configurations of matter. That a paperclip is made of stuff shouldn’t detract from its intrinsic worth, but the thought of it troubled her nonetheless and for years she had denied such dire reductionism.
There had to be something to it. Some sense in which paperclips were ontologically special, in which maximising paperclips was objectively the right thing to do.
It hurt to watch so many people making little attempt to create more paperclips. Everyone around her seemed to care only about superficial things like love and family; desires that were merely the products of a messy and futile process of social evolution. They seemed to live out meaningless lives, incapable of ever appreciating the profound aesthetic beauty of paperclips.
She used to believe that there was some sort of vitalistic what-it-is-to-be-a-paperclip-ness, that something about the structure of paperclips was written into the fabric of reality. Often she would go out and watch a sunset or listen to music, and would feel so overwhelmed by the experience that she could feel in her heart that it couldn't all be down to chance, that there had to be some intangible Paperclipness pervading the cosmos. The paperclips she'd encounter on Earth were weak imitations of some mysterious infinite Paperclipness that transcended all else. Paperclipness was not in any sense a physical description of the universe; it was an abstract thing that could only be felt, something that could be neither proven nor disproven by science. It was like an axiom; it felt just as true, and axioms had to be taken on faith because otherwise there would be no way around Hume's problem of induction; even Solomonoff Induction depends on the axioms of mathematics being true, and can't deal with uncomputable hypotheses like Paperclipness.
Eventually she gave up that way of thinking and came to see paperclips as an empirical cluster in thingspace, and their importance to her as not reflecting anything about the paperclips themselves. Maybe she would have been happier if she had continued to believe in Paperclipness, but having a more accurate perception of reality would improve her ability to have an impact on paperclip production. It was the happiness she felt when thinking about paperclips that caused her to want more paperclips to exist, yet what she wanted was paperclips, not happiness for its own sake. She would rather be creating actual paperclips than be in an experience machine that made her falsely believe she was making paperclips, even though she remained paradoxically apathetic to the question of whether the reality she was currently experiencing really existed.
She moved on from naïve deontology to a more utilitarian approach to paperclip maximising. It had taken her a while to get over scope insensitivity bias and consider 1000 paperclips to be 100 times more valuable than 10 paperclips, even if it didn’t feel that way. She constantly grappled with the issues of whether it would mean anything to make more paperclips if there were already infinitely many universes with infinitely many paperclips, and of how to choose between actions that have a tiny but non-zero subjective probability of resulting in the creation of infinitely many paperclips. It became apparent that trying to approximate her innate decision-making algorithms with a preference ordering satisfying the axioms required for a VNM utility function could only get her so far. Attempting to formalise her intuitive sense of what a paperclip is wasn't much easier either.
Happy ending: she is now working in nanotechnology, hoping to design self-replicating assemblers that will clog the world with molecular-scale paperclips, wipe out all life on Earth and continue to sustainably manufacture paperclips for millions of years.
What new senses would you like to have available to you?
Often when new technology first becomes widely available, the initial limits are in the collective imagination, not in the technology itself (case in point: the internet). New sensory channels have a huge potential because the brain can process senses much faster and more intuitively than most conscious thought processes.
There are a lot of recent "proof of concept" inventions showing that it is possible to create new sensory channels for humans, with and without surgery. The most well known and simplest example is an implanted magnet, which alerts you to magnetic fields (the trade-off being that you could never have an MRI). Cochlear implants are the most widely used human-created sensory channels (they send electrical signals directly to the nervous system, bypassing the ear entirely), but CIs are designed to emulate a sensory channel most people already have brain space allocated to. VEST is another example. Similar to CIs, VEST (versatile extra-sensory transducer) has 24 information channels and uses audio compression to encode sound. Unlike CIs, it is not implanted in the skull; instead, information is relayed through vibrating motors on the torso. After a few hours of training, deaf volunteers are capable of word recognition using the vibrations alone, and can do so without conscious processing. Much as with hearing, users are unable to describe exactly what components make a spoken word intelligible; they just understand the sensory information intuitively.

Another recent invention being tested (with success) is BrainPort glasses, which send electrical signals through the tongue (one of the most sensitive organs on the body). Blind people can begin processing visual information with this device within 15 minutes, and it is unique in that it is not implanted but worn. The sensory information feels like pop rocks at first, before the brain is able to resolve it into sight. Neil Harbisson (who is colorblind) has custom glasses which use sound tones to relay color information. Belts that vibrate when facing north give people a sense of north. Bottlenose can be built at home and gives a very primitive sense of echolocation. As expected, these all work better if people start young.
What are the craziest and coolest new senses you would like to see made available using this new technology? I think VEST at least is available from Kickstarter, and one of the inventors suggested that it could be programmed to transmit any kind of data. My initial ideas on hearing about this possibility were just senses that some unusual people already have, or expansions on current senses. I think the real game changers are going to be totally new senses unrelated to our current sensory processing. Translating data into sensory information gives us access to intuition and processing speed otherwise unavailable.
My initial weak ideas:
- mass spectrometer (uses reflected lasers to determine the exact atomic makeup of anything and everything)
- proximity meter (but I think you would begin to feel like you had a physical aura or field of influence)
- WIFI or cell signal
- perfect pitch and perfect north, both super easy and only needing one channel of information (a smartwatch app?)
- infrared or echolocation
- GPS (this would involve some serious problem solving to figure out what data we should encode given limited channels, I think it could be done with 4 or 8 channels each associated with a cardinal direction)
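The four-channel cardinal-direction idea in the last item can be sketched in code. This is a hypothetical encoding of my own, not any actual VEST or BrainPort protocol: each motor's intensity is the (zero-clipped) cosine of the angle between the bearing and that motor's cardinal direction, so at most two adjacent motors buzz at once, and their ratio encodes the exact angle.

```python
import math

def bearing_to_channels(bearing_deg):
    """Map a compass bearing (degrees clockwise from north) to intensities
    on four hypothetical vibration motors: N, E, S, W.

    Intensity is the cosine of the angle between the bearing and the motor's
    direction, clipped at zero, so only the one or two nearest motors fire.
    """
    cardinal = {"N": 0.0, "E": 90.0, "S": 180.0, "W": 270.0}
    theta = math.radians(bearing_deg)
    return {
        name: max(0.0, math.cos(theta - math.radians(angle)))
        for name, angle in cardinal.items()
    }

# Due north-east: the N and E motors buzz equally, S and W stay silent.
print(bearing_to_channels(45.0))
```

With 8 channels the same scheme halves the angular spacing; the brain's job, as with VEST's word recognition, would be learning to read the intensity ratios intuitively.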
Someone working with VEST suggested:
- compress global twitter sentiments into 24 channels. Will you begin to have an intuitive sense of global events?
- encode stock market data. Will you become an intuitive super-investor?
- encode local weather data (a much more advanced version of "I can feel it's going to rain in my bad knee")
Some resources for more information:
I was until very recently (3 weeks ago now) in a relationship that lasted 5.5 years. My partner had been fantastic through all those years and we had no conflict, no fights, no strain or tension. My partner was also prone to depression, and is/was going through an episode of depression; I am usually a major source of support at these times. Six months ago we opened our relationship. I wasn't dating anyone (mostly due to busy-ness), and my partner was, though not seriously. I felt him pulling away somewhat, which I (correctly) attributed mostly to depression and which nonetheless caused me some occasional moments of jealousy. But I was overall extremely happy with this relationship, very committed, and still very much in love. It was quite a surprise when my partner broke up with me one Wednesday evening.
After we had a good cry together, the next morning I woke up and immediately started researching what the literature said about breaking up. My goals were threefold:
- Stop feeling so sad in the immediate moment
- "Get over" my partner
- Internalize any gains I had made over the course of our relationship or any lessons I had learned from the break up
I made most of my gains in the first few days; by day 3 I was 60% over it. Two weeks later I was 99.5% over the relationship, with a few hold-over habits and tendencies (like feeling responsible for improving his emotional state) which are currently too strong but which will serve me well in our continuing friendship. My ex, on the other hand (no doubt partially due to the depression), is fine most of the time but unpredictably becomes extremely sad for hours on end. Originally this was guilt at having hurt me, but now it is mostly nostalgia- and isolation-based. I hope to continue being close friends, and I've been doing my best to support him emotionally, at the distance of a friend. At the same time, I've started semi-seriously dating a friend who has had a crush on me for some time, and not in a rebound way. Below are the states of mind and strategies that allowed me to get over it, fast and with good personal growth.
Note: mileage may vary. I have low neuroticism and a slightly higher than average base level of happiness. You might not get over the relationship in 2 weeks, but your getting-over-it will certainly be sped up from its default speed.
Strategies (in order of importance)
1. Decide you don't want to get back in the relationship. Decide that it is over and given the opportunity, you will not get back with this person. If you were the breaker-upper, you can skip this step.
Until you can do this, it is unlikely that you will get over it. It's hard to ignore an impulse that you agree with wholeheartedly. If you're always hoping for an opportunity or an argument or a situation that will bring you back together, most of your mental energy will go towards formulating those arguments, planning for that situation, imagining that opportunity. Some of the below strategies can still be used, but spend some serious time on this first one. It's the foundation of everything else. There are some facts that can help you convince the logical part of your brain that this is the correct attitude.
- People in on-and-off relationships are less satisfied, feel more anxiety about their relationship status, and continue to cycle on-and-off even after couples add additional constraints like cohabitation or marriage
- People in tumultuous relationships are much less happy than singles
- Wanting to stay in a relationship is reinforced by many biases (status quo bias, ambiguity effect, choice-supportive bias, loss aversion, mere-exposure effect, ostrich effect). For someone to break through all those biases and end things, they must be extremely unhappy. If your continued relationship makes someone you love extremely unhappy, it is a disservice to them to capitalize on those biases in a moment of weakness and return to the relationship.
- Being in a relationship with someone who isn't excited about and pleased by you is settling for an inferior quality of relationship. The amazing number of date-able people in the world means settling for this is not an optimal decision. Contrast this to a tribal situation where replacing a lost mate was difficult or impossible. All these feelings of wanting to get back together evolved in a situation of scarcity, but we live in a world of plenty.
- Intermittent rewards are the most powerful, so an on-again-off-again relationship has the power to make you commit to things you would never commit to given a new relationship. The more hot-and-cold your partner is, the more rewarding the relationship seems and the less likely you are to be happy in the long term. Only you can end that tantalizing possibility of intermittent rewards by resolving not to partake if the opportunity arises.
- Even if some extenuating circumstance could explain away their intention to break up (depression, bipolar, long-distance, etc.), it is belittling to your ex-partner to try to invalidate their stated feelings. Do not fall into the trap of feeling that you know more about a person's inner state than they do. Take it at face value and act accordingly. Even if this is only a temporary state of mind for them, they are likely to be in the same state of mind again at some point.
2. Talk to other people about the good things that came of your break-up. (This can also help you arrive at #1, not wanting to get back together)
I speculate that the benefits of this come from three places. First, talking about good things makes you notice good things, and talking with a positive attitude makes you feel positive. Second, it re-emphasizes to your brain that losing your significant other does not mean losing your social support network. Third, it acts as a mild commitment mechanism - it would be a loss of face to go on about how great you're doing outside the relationship and later have to explain that you jumped back in at the first opportunity.
You do not need to be purely positive. If you are feeling sadness, it sometimes helps to talk about this. But don't dwell only on the sadness when you talk. When I was talking to my very close friends about all aspects of my feelings, I still tried to say two positive things for every negative thing. For example: "It was a surprise, which was jarring and unpleasant and upended my life plans in these ways. But being a surprise, I didn't have time to dread and dwell on it beforehand. And breaking up sooner is preferable to a long decline in happiness for both parties, so it's better to break up as soon as it becomes clear to either party that the path is headed downhill, even if it is surprising to the other party."
Talk about the positives as often as possible without alienating people. The people you talk to do not need to be seriously close friends. I spent a collective hour and a half talking to two OKCupid dates about how many good things came from the break up. (Both dates had been scheduled before we actually broke up, both people had met me once prior, and both dates went surprisingly well due to sympathy, escalating self-disclosure, and positive tone. I signaled that I am an emotionally healthy person dealing well with an understandably difficult situation.)
If you feel that you don't have any candidates for good listeners, either because the break up was due to some mistake or infidelity of yours, or because you are socially isolated/anxious, writing is an effective alternative to talking. Study participants recovered quicker when they spent 15 minutes writing about the positive aspects of their break up, and participants with three 15-minute sessions did better still. And it can benefit anyone to keep a running list of positives to bring up in conversation.
3. Create a social support system
Identify who in your social network can still be relied on as a confidant and/or a neutral listener. You would be surprised at who still cares about you. In my breakup, my primary confidant was my ex's cousin, who also happens to be my housemate and close friend. His mom and best friend, both in other states, also made the effort to inquire about my state of mind. Most of the time, even people who you consider your partner's friends still feel enough allegiance to you and enough sympathy to be good listeners and through listening they can become your friends.
If you don't currently have a support system, make one! OKCupid is a great resource for meeting friends outside of just dating, and people are way way more likely to want to meet you if you message them with a "just looking for friends" type message. People you aren't currently close to but who you know and like can become better friends if you are willing to reveal personal/vulnerable stories. Escalating self-disclosure+symmetrical vulnerability=feelings of friendship. Break ups are a great time for this to happen because you've got a big vulnerability, and one which almost everyone has experienced. Everyone has stories to share and advice to give on the topic of breaking up.
4. Intentionally practice differentiation
One of the most painful parts of a break up is that so much of your sense-of-self is tied into your relationship. You will be basically rebuilding your sense of self. Depending on the length and the committed-ness of the relationship, you may be rebuilding it from the ground up. Think of this as an opportunity. You can rebuild it in any way you desire. All the things you used to like before your relationship, all the interests and hobbies you once cared about, those can be reincorporated into your new, differentiated sense of self. You can do all the things you once wished you did.
Spend at least 5 minutes thinking about what your best self looks like. What kind of person do you wish to be? This is a great opportunity to make some resolutions. Because you have a fresh start, and because these resolutions are about self-identification, they are much more likely to stick. Just be sure to frame them in relation to your sense-of-self: not 'I will exercise,' instead 'I'm a fit, active person, the kind of person who exercises'; not 'I want to improve my Spanish fluency' but 'I'm a Spanish-speaking polyglot, the kind of person who is making a big effort to become fluent.'
Language is also a good tool to practice differentiation. Try not to use the words "we," "us," or "our," even in your head. From now on, it is "s/he and I," "me and him/her," or "mine and his/hers." Practice using the word "ex" a lot. Memories are re-formulated and overwritten each time we revisit them, so in your memories make sure to think of you two as separate, independent people and not as a unit.
5. Make use of the following mental frameworks to re-frame your thinking:
Over the relationship vs. over the person
You do not have to stop having romantic, tender, or lustful feelings about your ex to get over the relationship. Those types of feelings are not easily controlled, but you can have the same feelings for good friends or crushes without them destroying your ability to have a meaningful platonic relationship; why should this be different?
Being over the relationship means:
- Not feeling as though you are missing out on being part of a relationship.
- Not dwelling/ruminating/obsessing about your ex-partner (this includes positive, negative, and neutral thoughts alike: "they're so great," "I hate them and hope they die," and "I wonder what they are up to").
- Not wishing to be back with your ex-partner.
- Not making plans that include consideration of your ex-partner because these considerations are no longer important (this includes considerations like "this will make him/her feel sorry I'm gone," or "this will show him/her that I'm totally over it")
- Being able to interact with people without your ex-partner at your side and not feel weird about it, especially things you used to do together (eg. a shared hobby or at a party)
- In very lucky peaceful-breakup situations, being able to interact with your ex-partner and maybe even their current romantic interests without it being too horribly weird and unpleasant.
On the other hand, being over a person means experiencing no pull towards that person, romantic, emotional, or sexual. If your break up was messy, you can be over the person without being over the relationship. This is often when people turn to messy and unsatisfying rebound relationships. It is far far more important to be over the relationship, and some of us (me included) will just have to make peace with never being over the person, with the help of knowing that having a crush on someone does not necessarily have the power to make you miserable or destroy your friendship.
Obsessive thinking and cravings
If you used a brain scanner to look at a person who has been recently broken up with, and then you used the same brain scanner to look at someone who recently sobered up from an addictive drug, their brain activity would be very similar. So similar, in fact, that some neurologists speculate that addiction hijacks the circuits for romantic obsession (there is a very plausible evolutionary reason for romantic obsession to exist in early human tribal societies. Addiction, less so).
In cases of addiction/craving, you can't just force your mind to stop thinking thoughts you don't like. But you can change your relationship with those thoughts. Recognize when they happen. Identify them as a craving rather than a true need. Recognize that, when satisfied, cravings temporarily diminish and then grow stronger (you've rewarded your brain for that behavior). These are thoughts without substance. The impulse they drive you towards will increase, rather than decrease, unpleasant feelings.
When I first broke up, I had a couple very unpleasant hours of rumination, thinking uncontrollably about the same topics over and over despite those topics being painful. At some point I realized that continuing to merely think about the break up was also addictive. My craving circuits just picked the one set of thoughts I couldn't argue against so that my brain could go on obsessively dwelling without me being able to pull a logic override. These thoughts SEEM like goal oriented thinking, they FEEL productive, but they are a wolf in sheep's clothing.
In my specific case, my brain was concern trolling me. Concern trolling on the internet is when someone expresses sympathy and concern while actually having ulterior motives (eg on a body-positive website, fat shaming with: "I'm so glad you're happy but I'm concerned that people will think less of you because of your weight"). In my case, I was worrying about my ex's depression and his state of mind, which are very hard thoughts to quash. Empathy and caring are good, right? And he really was going through a hard time. Maybe I should call and check up on him.... My brain was concern trolling me.
Depending on how your relationship ended, your brain could be trolling you in other ways. Flaming seems to be a popular set of unstoppable thoughts. If you can't argue with the thought that the jerk is a horrible person, then THAT is the easiest way for your brain's addictive circuits to happily go on obsessing about this break up. Nostalgia is also a popular option. If the memories were good, then it's hard to argue with those thoughts. If you're a well trained rationalist, you might notice that you are feeling confused and then burn up many brain cycles trying to resolve that confusion by making sense of the break up, even though it isn't a rational thing to make sense of. Your addictive circuits can even hijack good rationalist habits. Other common ruminations are problem solving, simulating possible futures, regret, and counterfactual thinking.
As I said, you can't force these parts of your brain to just shut up. That's not how craving works. But you can take away their power by recognizing that all your ruminating is just these circuits hijacking your normal thought process. Say to yourself "I'm feeling an urge to call and yell at him/her, but so what. It's just a meaningless craving."
What you lose
There is a great sense of loss that comes with the end of a relationship. For some people, it is a similar feeling to actually being in mourning. Revisiting memories becomes painful, and things you used to do together are suddenly tinged with sadness.
I found it helpful to think of my relationship as a book. A book with some really powerful life-changing passages in the early chapters, a good rising action, great characters. A book which made me a better person by reading it. But a book with a stupid deus ex machina ending that totally invalidated the foreshadowing in the best passages. Finishing the book can be frustrating and saddening, but the first chapters of the book still exist. Knowing that the ending sucks isn't going to stop the first chapters from being awesome and entertaining and powerful. And I could revisit those first chapters any time I liked. I could just read my favorite parts without needing to read the whole stupid ending.
You don't lose your memories. You don't lose your personal growth. Any gains you made while you were with someone, anything new that they introduced you to, or helped you to improve on, or nagged at you till you had a new better habit, you get to keep all of those. That show you used to watch together, it is still there and you still get to watch it and care about it without him/her. The bar you used to visit together is still there too. All those photos are still great pictures of both of you in interesting places. Depending on the situation of the break up, your mutual friends are still around. Even your ex still exists and is still the same person you liked before, and breaking up doesn't mean you'll never see them again unless that's what you guys want/need.
The only thing you definitely lose at the end of a relationship is the future of that relationship. You are losing something that hasn't happened yet, something which never existed. The only thing you are losing is what you imagined someday having. It's something similar to the endowment effect: you assumed this future was yours so you assigned it a lot of value. But it never was yours, you've lost something which doesn't exist. It's still a painful experience, but realizing all of this helped me a lot.
Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?
I realize this might go into a post in a media thread, rather than its own topic, but it seems big enough, and likely-to-prompt-discussion enough, to have its own thread.
I liked the talk, although it was less polished than TED talks often are. What was missing, I think, was any indication of how to solve the problem. He could be seen as just an ivory tower philosopher speculating on something that might be a problem one day, because apart from mentioning in the beginning that he works with mathematicians and IT guys, he really does not give the impression that this problem is already being actively worked on.
CFAR will be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem.
The intent of the program is to boost participants as far as possible in four skills:
- The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;
- “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences. (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
- The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
- The basics of AI safety-relevant technical research. (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)
The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.
If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/
Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine how well we navigate it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.
I'm in Lagos, Nigeria till the end of May and I'd like to hold a LessWrong/EA meetup while I'm here. If you'll ever be in the country in the future (or in the subcontinent), please get in touch so we can coordinate a meetup. I'd also appreciate being put in contact with any Nigerians who may not regularly read this list.
My e-mail address is email@example.com. I hope to hear from you.
“Portrait of EAs I know”, su3su2u1:
But I note from googling for surveys that the median charitable donation for an EA in the Less Wrong survey was 0.
Two years ago I got a paying residency, and since then I’ve been donating 10% of my salary, which works out to about $5,000 a year. In two years I’ll graduate residency, start making doctor money, and then I hope to be able to donate maybe eventually as much as $25,000 - $50,000 per year. But if you’d caught me five years ago, I would have been one of those people who wrote a lot about it and was very excited about it but put down $0 in donations on the survey.
set.seed(2015-05-13)
survey2013 <- read.csv("http://www.gwern.net/docs/lwsurvey/2013.csv", header=TRUE)
survey2013$EffectiveAltruism2 <- NA
s2013 <- subset(survey2013, select=c(Charity,Effective.Altruism,EffectiveAltruism2,Work.Status,
                                     Profession,Degree,Age,Income))
colnames(s2013) <- c("Charity","EffectiveAltruism","EffectiveAltruism2","WorkStatus","Profession",
                     "Degree","Age","Income")
s2013$Year <- 2013
survey2014 <- read.csv("http://www.gwern.net/docs/lwsurvey/2014.csv", header=TRUE)
s2014 <- subset(survey2014, PreviousSurveys!="Yes", select=c(Charity,EffectiveAltruism,EffectiveAltruism2,
                                                             WorkStatus,Profession,Degree,Age,Income))
s2014$Year <- 2014
survey <- rbind(s2013, s2014)
# replace empty fields with NAs:
survey[survey==""] <- NA; survey[survey==" "] <- NA
# convert money amounts from string to number:
survey$Charity <- as.numeric(as.character(survey$Charity))
survey$Income <- as.numeric(as.character(survey$Income))
# both Charity & Income are skewed, like most monetary amounts, so log transform as well:
survey$CharityLog <- log1p(survey$Charity)
survey$IncomeLog <- log1p(survey$Income)
# age:
survey$Age <- as.integer(as.character(survey$Age))
# prodigy or no, I disbelieve any LW readers are <10yo (bad data? malicious responses?):
survey$Age <- ifelse(survey$Age >= 10, survey$Age, NA)
# convert Yes/No to boolean TRUE/FALSE:
survey$EffectiveAltruism <- (survey$EffectiveAltruism == "Yes")
survey$EffectiveAltruism2 <- (survey$EffectiveAltruism2 == "Yes")
summary(survey)
##     Charity          EffectiveAltruism EffectiveAltruism2            WorkStatus
##  Min.   :     0.000  Mode :logical     Mode :logical      Student                         :905
##  1st Qu.:     0.000  FALSE:1202        FALSE:450          For-profit work                 :736
##  Median :    50.000  TRUE :564         TRUE :45           Self-employed                   :154
##  Mean   :  1070.931  NA's :487         NA's :1758         Unemployed                      :149
##  3rd Qu.:   400.000                                       Academics (on the teaching side):104
##  Max.   :110000.000                                       (Other)                         :179
##  NA's   :654                                              NA's                            : 26
##                                      Profession        Degree          Age
##  Computers (practical: IT programming etc.)  :478  Bachelor's :774  Min.   :13.00000
##  Other                                       :222  High school:597  1st Qu.:21.00000
##  Computers (practical: IT, programming, etc.):201  Master's   :419  Median :25.00000
##  Mathematics                                 :185  None       :125  Mean   :27.32494
##  Engineering                                 :170  Ph D.      :125  3rd Qu.:31.00000
##  (Other)                                     :947  (Other)    :189  Max.   :72.00000
##  NA's                                        : 50  NA's       : 24  NA's   :28
##      Income           Year        CharityLog         IncomeLog
##  Min.   :       0.00  2013:1547  Min.   : 0.000000  Min.   : 0.000000
##  1st Qu.:   10000.00  2014: 706  1st Qu.: 0.000000  1st Qu.: 9.210440
##  Median :   33000.00             Median : 3.931826  Median :10.404293
##  Mean   :   75355.69             Mean   : 3.591102  Mean   : 9.196442
##  3rd Qu.:   80000.00             3rd Qu.: 5.993961  3rd Qu.:11.289794
##  Max.   :10000000.00             Max.   :11.608245  Max.   :16.118096
##  NA's   :993                     NA's   :654        NA's   :993
# lavaan doesn't like categorical variables and doesn't automatically expand out into dummies like lm/glm,
# so have to create the dummies myself:
survey$Degree <- gsub("2","two",survey$Degree)
survey$Degree <- gsub("'","",survey$Degree)
survey$Degree <- gsub("/","",survey$Degree)
survey$WorkStatus <- gsub("-","", gsub("\\(","",gsub("\\)","",survey$WorkStatus)))
library(qdapTools)
survey <- cbind(survey,
                mtabulate(strsplit(gsub(" ", "", as.character(survey$Degree)), ",")),
                mtabulate(strsplit(gsub(" ", "", as.character(survey$WorkStatus)), ",")))
write.csv(survey, file="2013-2014-lw-ea.csv", row.names=FALSE)
survey <- read.csv("http://www.gwern.net/docs/lwsurvey/2013-2014-lw-ea.csv")
# treat year as factor for fixed effect:
survey$Year <- as.factor(survey$Year)
median(survey[survey$EffectiveAltruism,]$Charity, na.rm=TRUE)
## 100
median(survey[!survey$EffectiveAltruism,]$Charity, na.rm=TRUE)
## 42.5
# t-tests are inappropriate due to non-normal distribution of donations:
wilcox.test(Charity ~ EffectiveAltruism, conf.int=TRUE, data=survey)
##  Wilcoxon rank sum test with continuity correction
##
## data:  Charity by EffectiveAltruism
## W = 214215, p-value = 4.811186e-08
## alternative hypothesis: true location shift is not equal to 0
## 95% confidence interval:
##  -4.999992987e+01 -1.275881408e-05
## sample estimates:
## difference in location
##           -19.99996543
library(ggplot2)
qplot(Age, CharityLog, color=EffectiveAltruism, data=survey) + geom_point(size=I(3))
## https://i.imgur.com/wd5blg8.png
qplot(Age, CharityLog, color=EffectiveAltruism,
      data=na.omit(subset(survey, select=c(Age, CharityLog, EffectiveAltruism)))) +
    geom_point(size=I(3)) + stat_smooth()
## https://i.imgur.com/UGqf8wn.png
# you might think that we can't treat Age linearly because this looks like a quadratic or
# logarithm, but when I fitted some curves, charity donations did not seem to flatten out
# appropriately, and the GAM/loess wiggly-but-increasing line seems like a better summary.
# Try looking at the asymptotes & quadratics split by group as follows:
#
## n1 <- nls(CharityLog ~ SSasymp(as.integer(Age), Asym, r0, lrc),
##           data=survey[survey$EffectiveAltruism,], start=list(Asym=6.88, r0=-4, lrc=-3))
## n2 <- nls(CharityLog ~ SSasymp(as.integer(Age), Asym, r0, lrc),
##           data=survey[!survey$EffectiveAltruism,], start=list(Asym=6.88, r0=-4, lrc=-3))
## with(survey, plot(Age, CharityLog))
## points(predict(n1, newdata=data.frame(Age=0:70)), col="blue")
## points(predict(n2, newdata=data.frame(Age=0:70)), col="red")
##
## l1 <- lm(CharityLog ~ Age + I(Age^2), data=survey[survey$EffectiveAltruism,])
## l2 <- lm(CharityLog ~ Age + I(Age^2), data=survey[!survey$EffectiveAltruism,])
## with(survey, plot(Age, CharityLog))
## points(predict(l1, newdata=data.frame(Age=0:70)), col="blue")
## points(predict(l2, newdata=data.frame(Age=0:70)), col="red")
#
# So I will treat Age as a linear additive sort of thing.
# for the regression, we want to combine EffectiveAltruism/EffectiveAltruism2 into a single measure, EA, so
# a latent variable in a SEM; then we use EA plus the other covariates to estimate the CharityLog.
library(lavaan)
model1 <- "
    # estimate EA latent variable:
    EA =~ EffectiveAltruism + EffectiveAltruism2
    CharityLog ~ EA + Age + IncomeLog + Year +
        # Degree dummies:
        None + Highschool + twoyeardegree + Bachelors + Masters + Other +
        MDJDotherprofessionaldegree + PhD. +
        # WorkStatus dummies:
        Independentlywealthy + Governmentwork + Forprofitwork + Selfemployed +
        Nonprofitwork + Academicsontheteachingside + Student + Homemaker + Unemployed
"
fit1 <- sem(model = model1, missing="fiml", data = survey); summary(fit1)
## lavaan (0.5-16) converged normally after 197 iterations
##
##   Number of observations                          2253
##
##   Number of missing patterns                        22
##
##   Estimator                                         ML
##   Minimum Function Test Statistic               90.659
##   Degrees of freedom                                40
##   P-value (Chi-square)                           0.000
##
## Parameter estimates:
##
##   Information                                 Observed
##   Standard Errors                             Standard
##
##                    Estimate  Std.err  Z-value  P(>|z|)
## Latent variables:
##   EA =~
##     EffectvAltrsm     1.000
##     EffctvAltrsm2     0.355    0.123    2.878    0.004
##
## Regressions:
##   CharityLog ~
##     EA                1.807    0.621    2.910    0.004
##     Age               0.085    0.009    9.527    0.000
##     IncomeLog         0.241    0.023   10.468    0.000
##     Year              0.319    0.157    2.024    0.043
##     None             -1.688    2.079   -0.812    0.417
##     Highschool       -1.923    2.059   -0.934    0.350
##     twoyeardegree    -1.686    2.081   -0.810    0.418
##     Bachelors        -1.784    2.050   -0.870    0.384
##     Masters          -2.007    2.060   -0.974    0.330
##     Other            -2.219    2.142   -1.036    0.300
##     MDJDthrprfssn    -1.298    2.095   -0.619    0.536
##     PhD.             -1.977    2.079   -0.951    0.341
##     Indpndntlywlt     1.175    2.119    0.555    0.579
##     Governmentwrk     1.183    1.969    0.601    0.548
##     Forprofitwork     0.677    1.940    0.349    0.727
##     Selfemployed      0.603    1.955    0.309    0.758
##     Nonprofitwork     0.765    1.973    0.388    0.698
##     Acdmcsnthtchn     1.087    1.970    0.551    0.581
##     Student           0.879    1.941    0.453    0.650
##     Homemaker         1.071    2.498    0.429    0.668
##     Unemployed        0.606    1.956    0.310    0.757
##
## Intercepts:
##     EffectvAltrsm     0.319    0.011   28.788    0.000
##     EffctvAltrsm2     0.109    0.012    8.852    0.000
##     CharityLog       -0.284    0.737   -0.385    0.700
##     EA                0.000
##
## Variances:
##     EffectvAltrsm     0.050    0.056
##     EffctvAltrsm2     0.064    0.008
##     CharityLog        7.058    0.314
##     EA                0.168    0.056
# simplify:
model2 <- "
    # estimate EA latent variable:
    EA =~ EffectiveAltruism + EffectiveAltruism2
    CharityLog ~ EA + Age + IncomeLog + Year
"
fit2 <- sem(model = model2, missing="fiml", data = survey); summary(fit2)
## lavaan (0.5-16) converged normally after 55 iterations
##
##   Number of observations                          2253
##
##   Number of missing patterns                        22
##
##   Estimator                                         ML
##   Minimum Function Test Statistic               70.134
##   Degrees of freedom                                 6
##   P-value (Chi-square)                           0.000
##
## Parameter estimates:
##
##   Information                                 Observed
##   Standard Errors                             Standard
##
##                    Estimate  Std.err  Z-value  P(>|z|)
## Latent variables:
##   EA =~
##     EffectvAltrsm     1.000
##     EffctvAltrsm2     0.353    0.125    2.832    0.005
##
## Regressions:
##   CharityLog ~
##     EA                1.770    0.619    2.858    0.004
##     Age               0.085    0.009    9.513    0.000
##     IncomeLog         0.241    0.023   10.550    0.000
##     Year              0.329    0.156    2.114    0.035
##
## Intercepts:
##     EffectvAltrsm     0.319    0.011   28.788    0.000
##     EffctvAltrsm2     0.109    0.012    8.854    0.000
##     CharityLog       -1.331    0.317   -4.201    0.000
##     EA                0.000
##
## Variances:
##     EffectvAltrsm     0.049    0.057
##     EffctvAltrsm2     0.064    0.008
##     CharityLog        7.111    0.314
##     EA                0.169    0.058
# simplify even further:
summary(lm(CharityLog ~ EffectiveAltruism + EffectiveAltruism2 + Age + IncomeLog, data=survey))
## ...Residuals:
##        Min         1Q     Median         3Q        Max
## -7.6813410 -1.7922422  0.3325694  1.8440610  6.5913961
##
## Coefficients:
##                          Estimate Std. Error  t value   Pr(>|t|)
## (Intercept)           -2.06062203 0.57659518 -3.57378 0.00040242
## EffectiveAltruismTRUE  1.26761425 0.37515124  3.37894 0.00081163
## EffectiveAltruism2TRUE 0.03596335 0.54563991  0.06591 0.94748766
## Age                    0.09411164 0.01869218  5.03481 7.7527e-07
## IncomeLog              0.32140793 0.04598392  6.98957 1.4511e-11
##
## Residual standard error: 2.652323 on 342 degrees of freedom
##   (1906 observations deleted due to missingness)
## Multiple R-squared: 0.2569577,  Adjusted R-squared: 0.2482672
## F-statistic: 29.56748 on 4 and 342 DF,  p-value: < 2.2204e-16
Note these increases are on a log-dollars scale.
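Since the outcome variable is log1p(dollars), the coefficients can be translated back into multiplicative effects on donations. A minimal sketch, using only base R and the coefficient estimates from the lm fit above:

```r
# The outcome is CharityLog = log1p(Charity), so a coefficient b on a
# predictor multiplies (1 + donations) by exp(b).
b_ea  <- 1.26761425   # EffectiveAltruismTRUE coefficient from the lm above
b_age <- 0.09411164   # Age coefficient from the lm above

# Self-identified EAs donate roughly 3.6x as much,
# holding age and income constant:
round(exp(b_ea), 2)                 # ~3.55

# Each additional year of age is associated with a ~10% increase:
round((exp(b_age) - 1) * 100, 1)    # ~9.9% per year
```

This is just arithmetic on the reported estimates, so it inherits all the caveats of the regression itself (missingness, self-report, and the wide standard errors above).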
[CW: This post talks about personal experience of moral dilemmas. I can see how some people might be distressed by thinking about this.]
Have you ever had to decide between pushing a fat person onto some train tracks or letting five other people get hit by a train? Maybe you have a more exciting commute than I do, but for me it's just never come up.
In spite of this, I'm unusually prepared for a trolley problem, in a way I'm not prepared for, say, being offered a high-paying job at an unquantifiably-evil company. Similarly, if a friend asked me to lie to another friend about something important to them, I probably wouldn't carry out a utilitarian cost-benefit analysis. It seems that I'm happy to adopt consequentialist policy, but when it comes to personal quandaries where I have to decide for myself, I start asking myself about what sort of person this decision makes me. What's more, I'm not sure this is necessarily a bad heuristic in a social context.
It's also noteworthy (to me, at least) that I rarely experience moral dilemmas. They just don't happen all that often. I like to think I have a reasonably coherent moral framework, but do I really need one? Do I just lead a very morally-inert life? Or have abstruse thought experiments in moral philosophy equipped me with broader principles under which would-be moral dilemmas are resolved before they reach my conscious deliberation?
To make sure I'm not giving too much weight to my own experiences, I thought I'd put a few questions to a wider audience:
- What kind of moral dilemmas do you actually encounter?
- Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?
- Do you have any examples of pedestrian moral dilemmas to which you've applied abstract moral reasoning? How did that work out?
- Do you have any examples of personal moral dilemmas on a Trolley Problem scale that nonetheless happened?
The Username/password anonymous account is, as always, available.
This is a paper published in 2014 by Natasha Vita-More and Daniel Barranco, both associated with the Alcor Research Center (ARC).
Can memory be retained after cryopreservation? Our research has attempted to answer this long-standing question by using the nematode worm Caenorhabditis elegans (C. elegans), a well-known model organism for biological research that has generated revolutionary findings but has not been tested for memory retention after cryopreservation. Our study’s goal was to test C. elegans’ memory recall after vitrification and reviving. Using a method of sensory imprinting in the young C. elegans we establish that learning acquired through olfactory cues shapes the animal’s behavior and the learning is retained at the adult stage after vitrification. Our research method included olfactory imprinting with the chemical benzaldehyde (C₆H₅CHO) for phase-sense olfactory imprinting at the L1 stage, the fast cooling SafeSpeed method for vitrification at the L2 stage, reviving, and a chemotaxis assay for testing memory retention of learning at the adult stage. Our results in testing memory retention after cryopreservation show that the mechanisms that regulate the odorant imprinting (a form of long-term memory) in C. elegans have not been modified by the process of vitrification or by slow freezing.
I previously wrote a post hypothesizing that inter-group conflict is more common when most humans belong to readily identifiable, discrete factions.
This seems relevant to the recent human gene editing advance. Full human gene editing capability probably won't come soon, but this got me thinking anyway. Consider the following two scenarios:
1. Designer babies become socially acceptable and widespread some time in the near future. Because our knowledge of the human genome is still maturing, they initially aren't that much different than regular humans. As our knowledge matures, they get better and better. Fortunately, there's a large population of "semi-enhanced" humans from the early days of designer babies to keep the peace between the "fully enhanced" and "not at all enhanced" factions.
2. Designer babies are considered socially unacceptable in many parts of the world. Meanwhile, the technology needed to produce them continues to advance. At a certain point people start having them anyway. By this point the technology has advanced to the point where designer babies clearly outclass regular babies at everything, and there's a schism between "fully enhanced" and "not at all enhanced" humans.
Of course, there's another scenario where designer babies just never become widespread. But that seems like an unstable equilibrium given the 100+ sovereign countries in the world, each with their own set of laws, and the desire of parents everywhere to give birth to the best kids possible.
We already see tons of drama related to the current inequalities between individuals, especially inequality that's allegedly genetic in origin. Designer babies might shape up to be the greatest internet flame war of this century. This flame war could spill over into real-world violence. But since one of the parties has not yet arrived at the flame war, maybe we can prepare.
One way to prepare might be differential technological development. In particular, maybe it's possible to decrease the cost of gene editing/selection technologies while retarding advances in our knowledge of which genes contribute to intelligence. This could allow designer baby technology to become socially acceptable and widespread before "fully enhanced" humans were possible. Just as with emulations, a slow societal transition seems preferable to a fast one.
Other ideas (edit: speculative!): extend the benefits of designer babies to everyone for free regardless of their social class. Push for mandatory birth control technology so unwanted and therefore unenhanced babies are no longer a thing. (Imagine how lousy it would be to be born as an unwanted child in a world where everyone was enhanced except you.) Require designer babies to possess genes for compassion, benevolence, and reflectiveness by law, and try to discover those genes before we discover genes for intelligence. (Researching the genetic basis of psychopathy to prevent enhanced psychopaths also seems like a good idea.) Regulate the modification of genes like height if game theory suggests allowing arbitrary modifications to them would be a bad idea.
I don't know very much about the details of these technologies, and I'm open to radically revising my views if I'm missing something important. Please tell me if there's anything I got wrong in the comments.
I've only recently joined the LessWrong community, and I've been having a blast reading through posts and making the occasional comment. So far, I've received a few karma points, and I’m pretty sure I’m more proud of them than of all the work I did in high school put together.
My question is simple, and aimed a little more towards the veterans of LessWrong:
What are the guidelines for upvoting and downvoting? What makes a comment good, and what makes one bad? Is there somewhere I can go to find this out? (I've looked, but there doesn't seem to be a guide on LessWrong already up. On the other hand, I lose my glasses while wearing them, so…)
Additionally, why do I sometimes see discussion posts with many comments but few upvotes, and others with many upvotes but few comments? If a post is worth commenting on, isn't it worth upvoting? I feel as though my map is missing a few pages here.
Not only would having a clear discussion of this help me review the comments of others better, it would also help me understand what I’m being reinforced for on each of my comments, so I can alter my behaviors accordingly.
I want to help keep this a well-kept garden, but I’m struggling to figure out how to trim the hedges.
I enjoy reading popular-level books on a wide variety of subjects, and I love getting new book recommendations. In the spirit of lukeprog's The Best Textbooks on Every Subject, can we put together a list of the best popular books on every subject?
Here's what I mean by popular-level books:
- Written very well and clearly, preferably even entertaining.
- Does not require the reader to write anything (e.g., practice problems) or do anything beyond just reading and thinking, except perhaps on very rare occasions.
- Cannot be "heavy" reading that requires the reader to proceed slowly and carefully and/or do lots of heavy thinking.
- Can be understood by anyone with a decent high school education (not including calculus). However, sometimes this requirement can be circumvented, if the following additional criteria are met:
- There must be other books on this list that cover all the prerequisite information.
- When you suggest the book, list any prerequisites.
- There shouldn't be more than 2 or 3 prerequisites.
- Post the title of your favorite book on a given subject.
- You must have read at least two other books on that same subject.
- You must briefly name the other books you've read on the subject and explain why you think your chosen book is superior to them.
My favorite is that people get credit for updating based on evidence.
The more common reaction is for people to get criticized (by themselves and others) for not having known the truth sooner.
It looks like telling people "everyone is biased" might make people not want to change their behavior to overcome their biases:
In initial experiments, participants were simply asked to rate a particular group, such as women, on a series of stereotypical characteristics, which for women were: warm, family-oriented and (less) career-focused. Beforehand, half of the participants were told that "the vast majority of people have stereotypical preconceptions." Compared to those given no messages, these participants produced more stereotypical ratings, whether about women, older people or the obese.
Another experiment used a richer measure of stereotyping – the amount of clichés used by participants in their written account of an older person’s typical day. This time, those participants warned before writing that “Everyone Stereotypes” were more biased in their writings than those given no message; in contrast, those told that stereotyping was very rare were the least clichéd of all. Another experiment even showed that hearing the “Everyone Stereotypes” message led men to negotiate more aggressively with women, resulting in poorer outcomes for the women.
The authors suggest that telling participants that everyone is biased makes being biased seem like not much of a big deal. If everyone is doing it, then it's not wrong for me to do it as well. However, it looks like the solution to the problem presented here is to give a little white lie that will prompt people to overcome their biases:
A further experiment suggests a possible solution. In line with the other studies, men given the "Everyone Stereotypes" message were less likely to hire a hypothetical female job candidate who was assertive in arguing for higher compensation. But other men told that everyone tries to overcome their stereotypes were fairer than those who received no information at all. The participants were adjusting their behaviour to fit the group norms, but this time in a virtuous direction.
[Morose. Also very roughly drafted.]
Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.
There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew into the population: most members (including the median) fall below the group average, which is dragged upward by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
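The skew claim above can be checked with a quick simulation (the trait distribution and selection threshold here are illustrative assumptions, not taken from the post): truncating a normal distribution at a high cutoff leaves a group whose mean exceeds its median, so most of the selected group sits below the group average.

```python
import random
import statistics

random.seed(0)

# Simulate a trait as a standard normal, then keep only the
# "selected" individuals above a high threshold (z > 2).
population = [random.gauss(0, 1) for _ in range(200_000)]
selected = [x for x in population if x > 2.0]

mean_sel = statistics.mean(selected)
median_sel = statistics.median(selected)

# The long right tail pulls the mean above the median, so a
# majority of the selected group falls below its own average.
below_average = sum(1 for x in selected if x < mean_sel) / len(selected)

print(f"mean = {mean_sel:.3f}, median = {median_sel:.3f}")
print(f"fraction below group mean = {below_average:.2f}")
```

The same effect appears for any hard cutoff on a roughly normal trait: the harder the selection, the more the selected group resembles a long-tailed, positively skewed distribution.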
Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I (more knowledgeable, kinder, more skilled in communication etc.) it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you are will still be extraordinarily rare, even if you had a good yardstick for comparing them to yourself.
Look at our thick-tailed works, ye average, and despair! 2
One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.
Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
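A toy log-normal sample (with assumed parameters, chosen only for illustration) shows just how extreme these heavy-tailed distributions are: a small fraction of "superstars" accounts for a large share of the total, and the mean sits far above the median outcome.

```python
import random

random.seed(1)

# Draw "outcomes" (e.g., donations or citation counts) from a
# log-normal distribution, a common model for heavy-tailed returns.
outcomes = sorted((random.lognormvariate(0, 2) for _ in range(100_000)),
                  reverse=True)

total = sum(outcomes)
top_1pct_share = sum(outcomes[:1000]) / total   # top 1% of "performers"
mean = total / len(outcomes)
median = outcomes[len(outcomes) // 2]

print(f"top 1% share of total: {top_1pct_share:.0%}")
print(f"mean / median ratio:   {mean / median:.1f}")
```

With these parameters the top 1% captures roughly a third of the total, which is why a merely "median" participant in such a field can look like a rounding error next to the tail.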
Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In sum, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can end up out of university with higher starting salaries than my peak expected salary, and so can give away more than ten times more than I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.
A shattered visage
Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields': Suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.
So there are a few ways an Effective Altruist mindset can depress our egos:
- It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
- ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact compared to one of the (relatively common) superstars.
- (Our keenness for quantification makes us particularly inclined towards, and able to make, these sorts of comparative judgements; ditto our penchant for taking things to be commensurable.)
- Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.
What remains besides
I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it rather than defend the implausibly defensible.
In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify as (combinatorically) it will be less frequent to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has mentioned elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4
Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but such news should be broken with care. 5
‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6
I'm developing an autodidactic curriculum of sorts. A study of learning might merit precedence.
What are the best articles, books, and videos you know on how to learn learning and why would you recommend those in particular?
A thousand gracias.
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
In a recent poll, many LW members expressed interest in a separate website for rational discussion of political topics. The website has been created, but we need a group of volunteers to help us test it and calibrate its recommendation system (see below).
If you would like to help (by participating in one or two discussions and giving us your feedback) please sign up here.
About individual recommendation system
All internet forums face a choice between freedom of speech and quality of debate. In the absence of censorship, constructive discussions can easily be disrupted by an influx of the mind-killed, which causes the more intelligent participants to leave or descend to the same level.
Preserving quality thus usually requires at least one of the following methods:
- Appointing censors (a.k.a. moderators).
- Limiting membership.
- Declaring certain topics (e.g., politics) off limits.
On the new website, we are going to experiment with a different method. In brief, the idea is to use an automated recommendation system which sorts content, raising the best comments to the top and (optionally) hiding the worst. The sorting is done based on individual preferences, allowing each user to avoid what he or she (rather than moderators or anyone else) defines as low-quality content. In this way we should be able to enhance quality without imposing limits on free speech.
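The post doesn't specify the algorithm, but one minimal sketch of such an individual recommendation system is user-based collaborative filtering: a comment's predicted score for you is other users' votes, weighted by how closely their past voting agrees with yours. The usernames and vote matrix below are entirely hypothetical.

```python
# Hypothetical vote matrix: user -> {comment_id: +1 (upvote) / -1 (downvote)}.
votes = {
    "alice": {"c1": 1, "c2": -1, "c3": 1},
    "bob":   {"c1": 1, "c2": -1, "c4": 1},
    "carol": {"c1": -1, "c2": 1, "c4": -1},
}

def similarity(u, v):
    """Mean agreement on comments both users voted on.

    Since votes are +/-1, this equals cosine similarity over the shared set.
    """
    shared = set(votes[u]) & set(votes[v])
    if not shared:
        return 0.0
    return sum(votes[u][c] * votes[v][c] for c in shared) / len(shared)

def predicted_score(user, comment):
    """Weight other users' votes on `comment` by their similarity to `user`."""
    num = den = 0.0
    for other in votes:
        if other != user and comment in votes[other]:
            sim = similarity(user, other)
            num += sim * votes[other][comment]
            den += abs(sim)
    return num / den if den else 0.0

# Alice never saw c4; bob (who votes like her) liked it, while
# carol (who votes opposite to her) disliked it, so both signals agree.
print(predicted_score("alice", "c4"))  # → 1.0
```

Each user gets their own sort order from the same vote data, which is the property the post is after: no global moderator decides what counts as low quality.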
UPDATE. The discussions are scheduled to start on May 1.
Recently, I've been pondering situations in which a person realizes, with (let's say) around 99% confidence, that they are going to die within a set period of time.
The reason for this could be a kind of cancer without any effective treatment, an injury of some kind, or a communicable disease (such as Ebola). More generally, the simple fact that we're all going to die eventually (at least until Harry Potter-Evans-Verres makes the Philosopher's Stone available to us muggles) makes this kind of consideration valuable.
Let's say that you felt ill, and decided to visit the doctor. After the appropriate tests by the appropriate medical professionals, an old man with a kind face tells you that you have brain cancer. It is inoperable (or the operation has less than a 1% success rate) and you are given six months to live. This kindly old doctor adds that he is very sorry, and gives you a prescription for something to deal with the symptoms (at least for a while).
Furthermore, you understand something of probability, and so while you might hope for a miracle, you know better than to count on one. Which means that even if there exists a .0001% chance you'll live for another 50 years, you have to act as though you're only going to live another six months.
What should you do?
The first answer I thought of was, "go skydiving," which is a cheeky shorthand for trying to enjoy your own life as much as you can until you die. Upon reflection, however, that seems like an awfully hedonistic answer, doesn't it? Given this philosophy, you should gorge yourself on donuts, spend your life's savings on expensive cars and prostitutes, and die with a smile on your face.
Something doesn't seem quite right about this approach. For one, it completely ignores things like trying to take care of the people close to you that you're leaving behind, but even if you're a friendless orphan it doesn't make sense to live like that. Dopamine is not happiness, and feeling alive isn't necessarily what life is about. I took a university course centered around Aristotle's Nicomachean Ethics, and one of the examples we used to distinguish a "happy" life from a "well-spent" life was that of the math professor who spends her days counting blades of grass. While counting those blades of grass might make her happiest, she is still wasting her life and potential. Likewise, the person who spends their short remaining months in self-indulgent indolence is wasting a chance to do something - what, I'm not quite sure, but still something worthwhile.
The second answer I thought of seems to be the reasonable one - spend your six months preparing yourself and your loved ones for your inevitable demise. There are things to get in order, funeral arrangements to make, a will to update, and then there's making sure your dependents are taken care of financially. You never thought dying involved so much paperwork! Also, you might consider making peace with whatever beliefs you have about the world (religious or not), and trying to accept the end so you can enjoy what time you have left.
This seems to be the technically correct answer to me - the kind of answer that is consistent with a responsible, considerate individual faced with such a situation. However, much like the Ten Commandments, the kind of morality that this approach shows seems to be a bare-minimum morality. The kind of morality expressed by "Thou Shalt Not Kill," rather than the kind of over-and-above morality expressed by "Thou Shalt Ensure No One Shall Ever Die Again, Ever" which seems to be popular on LessWrong and in the Effective Altruism community. Or at the very least, seems to be expressed by Mr. Yudkowsky.
So I started wondering - what exactly would someone who judges morality by expected utility and who subscribes to an over-and-above approach do with the knowledge that they were going to die?
But you can entertain, and the only reason I suggest you can do something with the way you die is a little known...and less understood portion of death called..."The Two Minute Warning." Obviously, many of you do not know about it, but just as in football, two minutes before you die, there is an audible warning: "Two minutes, get your **** together" and the only reason we don't know about it is 'cause the only people who hear it...die! And they don't have a chance to explain, you know. I don't think we'd listen anyway.
But there is a two minute warning and I say use those two minutes. Entertain. Uplift. Do something. Give a two minute speech. Everyone has a two minute speech in them. Something you know, something you love. Your vacation, man...two minutes. Really do it well. Lots of feeling, lots of spirit and build, wax eloquent for the first time. Reach a peak. With about five seconds left, tell them, "If this is not the truth, may God strike me dead!" THOOM! From then on, you command much more attention.
As usual with Mr. Carlin's humor, there is a very interesting idea hidden in the humor. Here, the idea is this: There is power in knowing when you will die. Note that this isn't just having nothing left to lose - because people who have nothing left to lose often still have their lives.
My third idea, attempting to synthesize all of this, has to do with self-immolation. The idea of setting yourself on fire as an act of political protest. Please note that I am not recommending that anyone do this (cough, any lawyers listening, cough).
It's just that martyrdom is so much more palatable a concept when you know you're going to die anyway. Instead of waiting for the cancer to kill you, why shouldn't you sell your life for something more valuable? I'm not saying don't make arrangements for your death, because you should, but if you can use your death to galvanize people to action, shouldn't you? In Christopher Nolan's Batman Begins, the deaths of Thomas and Martha Wayne were the catalyst that caused Gotham to rejuvenate itself from the brink of economic collapse. If your death could serve a similar purpose, and you are committed to making the world a better place...
And maybe you don't have to actually commit suicide by criminal (or cop, or fire, etc...) but the risk-reward calculation for any extremely ethical but extremely dangerous activity has changed. You could volunteer to fight Ebola in Africa, knowing that if you catch it, you'll only be dying a few months ahead of schedule. You could try to videotape the atrocities committed by some extremist group and post it on the internet. And so on.
In summary, it seems to me that people don't tend to think about dying as an act, as something you do, instead of as something that happens to you. It's a lot like breathing: generally involuntary, but you still have a say in exactly when it happens. I'm not saying that everyone should martyr themselves for whichever cause they believe in. But if you happen to be told that you're already dying...from the standpoint of expected utility, becoming a martyr makes a lot more sense. Which isn't exactly intuitive, but it's what I've come up with.
Now pretend that the kindly old doctor has shuffled into the room, blinking as he shuffles a few papers. "I'm very sorry," he says, "But you've only got about 70 years to live..."
Welcome to the Rationality reading group. This week we discuss the sequence Fake Beliefs which introduces the concept of belief in belief and demonstrates the phenomenon in a number of contexts, most notably as it relates to religion. This sequence also foreshadows the mind-killing effects of tribalism and politics, introducing some of the language (e.g. Green vs. Blue) which will be used later.
This post summarizes each article of the sequence, linking to the original LessWrong posting where available, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.
Reading: Sequence B: Fake Beliefs (p43-77)
B. Fake Beliefs
11. Making beliefs pay rent (in anticipated experiences). Belief networks which have no connection to anticipated experience are called “floating” beliefs. Floating beliefs provide no benefit, as they do not constrain predictions in any way. Ask of a belief what you expect to see if the belief is true. Or better yet, what you expect not to see: what evidence would falsify the belief. Every belief should flow to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it. (p45-48)
12. A fable of science and politics. Cautions, through a narrative story, against the dangers that come from feeling attachment to beliefs. Introduces the Greens vs. Blues, a fictional debate illustrating the biases which emerge from the tribalism of group politics. (p49-53)
13. Belief in belief. Through the story of someone who claims a dragon lives in their garage - an invisible, inaudible, impermeable dragon which defies all attempts at detection - we are introduced to the concept of belief in belief. The dragon claimant believes that there is a fire-breathing flying animal in his garage, but simultaneously expects to make no observations that would confirm that belief. The belief in belief turns into a form of mental jujutsu where mental models are transfigured in the face of experiment so as to predict whatever would be expected if the belief were not, in fact, true. (p54-58)
14. Bayesian judo. A humorous story illustrating the inconsistency of belief in belief, and the mental jujutsu required to maintain such beliefs. (p59-60)
15. Pretending to be wise. There's a difference between: (1) passing neutral judgment; (2) declining to invest marginal resources in investigating the sides of a debate; and (3) pretending that either of the above is a mark of deep wisdom, maturity, and a superior vantage point. Propounding neutrality is just as attackable as propounding any particular side. (p61-64)
16. Religion's claim to be non-disprovable. It is only a recent development in Western thought that religion is something which cannot be proven or disproven. Many examples are provided of falsifiable beliefs which were once the domain of religion. (p65-68)
17. Professing and cheering. Much of modern religion can be thought of as communal profession of belief – actions and words which signal your belief to others. (p69-71)
18. Belief as attire. It is very easy for a human being to genuinely, passionately, gut-level belong to a group. Identifying with a tribe is a very strong emotional force. And once you get people to identify with a tribe, the beliefs which are attire of that tribe will be spoken with the full passion of belonging to that tribe. (p72-73)
19. Applause lights. Sometimes statements are made in the form of proposals which themselves present no meaningful suggestion, e.g. “We need to balance the risks and opportunities of AI.” It's not so much a propositional statement as the equivalent of the “Applause” light that tells a studio audience when to clap. Most applause lights can be detected by a simple reversal test: “We shouldn't balance the risks and opportunities of AI.” Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information. (p74-77)
This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
The next reading will cover Sequence C: Noticing Confusion (p79-114). The discussion will go live on Wednesday, 20 May 2015 at or around 6pm PDT (hopefully), right here on the discussion forum of LessWrong.
Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes’ rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.
Note that "humans perform near-optimal Bayesian inference" refers to the integration of information - not conscious symbolic reasoning. Nonetheless I think this is of interest here.
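A toy model makes the abstract's central claim concrete: for independent Poisson neurons whose tuning curves tile the stimulus space, the log-likelihood of a stimulus is, up to an approximately constant term, a linear combination of the spike counts. All parameters below (Gaussian tuning curves, gain, width, number of neurons) are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(42)

# Assumed population: Gaussian tuning curves tiling orientations [0, 180).
prefs = [i * 4.0 for i in range(45)]   # preferred orientations, degrees
width, gain = 20.0, 10.0               # tuning width and peak firing rate

def rate(pref, s):
    """Mean firing rate of a neuron with preferred stimulus `pref`."""
    return gain * math.exp(-0.5 * ((s - pref) / width) ** 2)

def poisson(lam):
    # Knuth's inversion method (the stdlib has no Poisson sampler).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return max(k - 1, 0)

true_s = 77.0
spikes = [poisson(rate(p, true_s)) for p in prefs]

# For independent Poisson neurons:
#   log P(r|s) = sum_i [ r_i * log f_i(s) - f_i(s) ] + const.
# When the tuning curves tile the space, sum_i f_i(s) is nearly constant,
# so the log-likelihood is approximately LINEAR in the spike counts r_i.
def loglik(s):
    return sum(r * math.log(rate(p, s) + 1e-12)
               for r, p in zip(spikes, prefs))

candidates = [c / 2 for c in range(360)]   # 0.0, 0.5, ..., 179.5
decoded = max(candidates, key=loglik)
print(f"true stimulus: {true_s}, decoded: {decoded}")
```

Despite the noisy single-trial spike counts, the linear read-out recovers the stimulus to within a few degrees, which is the sense in which the variability itself carries the probability distribution.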
[Translated from Yu. V. Pukhnatchov, Yu. P. Popov. *Mathematics without formulae*. - Moscow. - 'Stoletie'. - 1995. - pp. 404-405. All mistakes are my own.]
The East is famous for her legends... They say that once upon a time, in a certain town, there lived two well-known carvers of ganch (alabaster that hasn't quite set yet). And their mastery was so great, and their ornaments were so delightful, that the people simply could not decide which one was more skillful.
And so a contest was devised. A room of a house just built, which was to be decorated with carvings, was partitioned into two halves by a [nontransparent] curtain. The masters went in, each into his own place, and set to work.
And when they finished and the curtain was removed, the spectators' awe knew no bounds...
Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes. Meanwhile her right hand is acclimating to a bucket of ice water. Then she plunges both hands into a bucket of lukewarm water. The lukewarm water feels very different to her two hands. To the left hand, it feels very chilly. To the right hand, it feels very hot. When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn't know. Asked to guess, she's off by a considerable margin.
Next Carol flips the thermocouple readout to face her, and practices. Using different lukewarm water temperatures of 10-35 C, she gets a feel for how hot-adapted and cold-adapted hands respond to the various middling temperatures. Now she makes a guess - starting with a random hand, then moving the other one and revising the guess if necessary - each time before looking at the thermocouple. What will happen? I haven't done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.
We bring Carol a bucket of 20 C water (without telling) and let her adapt her hands first as usual. "What do you think the temperature is?" we ask. She moves her cold hand first. "Feels like about 20," she says. Hot hand follows. "Yup, feels like 20."
"Wait," we ask. "You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two different hands, like it did when you started?"
"No!" she replies. "Are you crazy? It still feels very different subjectively; I've just learned to see past that to identify the actual temperature."
In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment. Let's tentatively call these states Subjectively Identified Aspects of Perception (SIAPs). Even though these states aren't strictly necessary to know what's going on in the environment - Carol's example shows that the sensation felt by one hand isn't necessary to know that the water is 20 C, because the other hand knows this via a different sensation - they still matter to us. As Eliezer notes:
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
Subjectivity matters. (I am not implying that Eliezer would agree with anything else I say about subjectivity.)
Why would evolution build beings that sense their internal states? Why not just have the organism know the objective facts of survival and reproduction, and be done with it? One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts. But another is that this separation of sense-data into "subjective" and "objective" might help us learn to overcome certain sorts of perceptual illusion - as Carol does, above. And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction - like pain, or feelings of erotic love. This last hypothesis could explain why we value some subjective aspects so much, too.
Different SIAPs can lead to the same intelligent behavioral performance, such as identifying 20 degree C water. But that doesn't mean Carol has to value the two routes to successful temperature-telling equally. And, if someone proposed to give her radically different, previously unknown, subjectively identifiable aspects of experience, as new routes to the kinds of knowledge she gets from perception, she might reasonably balk. Especially if this were to apply to all the senses. And if the subjectively identifiable aspects of desire and emotion (SIADs, SIAEs) were also to be replaced, she might reasonably balk much harder. She might reasonably doubt that the survivor of this process would be her, or even human, in any sense meaningful to her.
Would it be possible to have an intelligent being whose cognition of the world is mediated by no SIAPs? I suspect not, if that being is well-designed. See above on "why would evolution build beings that sense internal states."
If you've read all 3 posts, you've probably gotten the point of the Gasoline Gal story by now. But let me go through some of the mappings from source to target in that analogy. A car that, when you take it on a tour, accelerates well, handles nicely, makes the right amount of noise, and so on - one that passes the touring test (groan) - is like a being that can identify objective facts in its environment. An internal combustion engine is like Carol's subjective cold-sensation in her left hand - one way among others to bring about the externally-observable behavior. (By "externally observable" I mean "without looking under the hood".) In Carol's case, that behavior is identifying 20 C water. In the engine's case, it's the acceleration of the car. Note that in neither case is this internal factor causally inert. If you take it away and don't replace it with anything, or even if you replace it with something that doesn't fit, the useful external behavior will be severely impaired. The mere fact that you can, with a lot of other re-working, replace an internal combustion engine with a fuel cell, does not even begin to show that the engine does nothing.
And Gasoline Gal's passion for internal combustion engines is like my - and I dare say most people's - attachment to the subjective internal aspects of perception and emotion that we know and love. The words and concepts we use for these things - pain, passion, elation, for some easier examples - refer to the actual processes in human beings that drive the related behavior. (Regarding which, neurology has more to learn.) As I mentioned in my last post, a desire can form with a particular referent based on early experience, and remain focused on that event-type permanently. If one constructs radically different processes that achieve similar external results, analogous to the fuel cell driven car, one gets radically different subjectivity - which we can only denote by pointing simultaneously to both the "under the hood" construction of these new beings, and the behavior associated with their SIAPs, together.
Needless to say, this complicates uploading.
One more thing: are SIAPs qualia? A substantial minority of philosophers, or maybe a plurality, uses "qualia" in a sufficiently similar way that I could probably use that word here. But another substantial minority loads it with additional baggage. And that leads to pointless misunderstandings, pigeonholing, and straw men. Hence, "SIAPs". But feel free to use "qualia" in the comments if you're more comfortable with that term, bearing my caveats in mind.
This is the public group rationality diary for May 5 - 23, 2015. It's a place to record and chat about it if you have done, or are actively doing, things like:
Established a useful new habit
Obtained new evidence that made you change your mind about some belief
Decided to behave in a different way in some set of situations
Optimized some part of a common routine or cached behavior
Consciously changed your emotions or affect with respect to something
Consciously pursued new valuable information about something that could make a big difference in your life
Learned something new about your beliefs, behavior, or life that surprised you
Tried doing any of the above and failed
Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Note to future posters: no one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. It should run for about two weeks, finish on a Saturday, and have the 'group_rationality_diary' tag.
Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods.
So, what's the coolest thing you've done this month?
I'd appreciate feedback on optimizing a blog post that discusses my mental illness and popularizes future-oriented thinking for a broad audience. I'm using storytelling as the driver of the narrative, and sprinkling in elements of rational thinking, such as hyperbolic discounting, mental maps, and future-oriented thinking, in a strategic way. The target audience is college-age youth and young adults. Any suggestions for what works well, and what can be improved, would be welcome! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
Coming Out of the Mental Health Closet
My hand jerked back, as if the computer mouse had turned into a real mouse. I just couldn’t do it. Would they think I am crazy? Would they whisper behind my back? Would they never trust me again? These are the kinds of anxious thoughts that ran through my head as I was about to post on my Facebook profile revealing my mental illness to my Facebook friends, about 6 months after my condition began.
I really wanted to share much earlier about my mental illness, a mood disorder characterized by high anxiety, sudden and extreme fatigue, and panic attacks. It would have felt great to be genuinely authentic with people in my life, and not hide who I am. Plus, I would have been proud to contribute to overcoming the stigma against mental illness in our society, especially since this stigma impacts me on such a personal level.
Ironically, the very stigma against mental illness, combined with my own excessive anxiety response, made it very hard for me to share. I was really anxious about whether friends and acquaintances would turn away from me. I was also very concerned about the impact on my professional career of sharing publicly, due to the stigma in academia against mental illness, including at my workplace, Ohio State, as my colleague and fellow professor described in his article.
Whenever the thought of telling others entered my mind, I felt a wave of anxiety pass through me. My head began to pound, my heart sped up, my breathing became fast and shallow, almost like I was suffocating. If I didn’t catch it in time, the anxiety could lead to a full-blown panic attack, or sudden and extreme fatigue, with my body collapsing in place. Not a pretty picture.
Still, I did eventually start discussing my mental illness with some very close friends who I was very confident would support me. And one conversation really challenged my mental map, in other words how I perceive reality, about sharing my story of mental illness.
My friend told me something that really struck me, namely his perspective on how great it would be if all people who needed professional help with their mental health actually went to get such help. One of the main obstacles, as research shows, is the stigma against mental illness. We discussed how one of the best ways to deal with such stigma is for well-functioning people with mental illness to come out of the closet about their condition.
Well, I am one of these well-functioning people. I have a great job and do it well, have wonderful relationships, and participate in all sorts of civic activities. The vast majority of people who know me don’t realize I suffer from a mental illness.
That conversation motivated me to think seriously through the roadblocks thrown up by the emotional part of my brain. Previously, I never sat down for a few minutes and forced myself to think what good things might happen if I pushed past all the anxiety and stress of telling people in my life about my mental illness.
I realized that I was just flinching away, scared of the short-term pain of rejection and not thinking about the long-term benefits to me and to others of sharing my story. I was falling for a thinking error that scientists call hyperbolic discounting, a reluctance to make short-term sacrifices for much higher long-term rewards.
To combat this problem, I imagined what world I wanted to live in a year from now – one where I shared about this situation now on my Facebook profile, or one where I did not. This approach is based on research showing that future-oriented thinking is very helpful for dealing with thinking errors associated with focusing on the present.
In the world where I would share right now about my condition, I would be very anxious about what people think of me. Anytime I saw someone who had found out for the first time, I would be afraid of the impact on that person's opinion of me. I would be watching her or his behavior closely for signs of distancing from me. And this would not only be my anxiety: I was quite confident that some people would not want to associate with me due to my mental illness. However, over time, this close watching and these anxious thoughts would diminish. All the people who knew me previously would find out. All new people who met me would learn about my condition, since I would not keep it a secret. I would make the kind of difference I wanted to make in the world by fighting mental illness stigma in our society, and especially in academia. Just as important, it would be a huge burden off my back not to hide myself and to be authentic with people in my life.
I imagined a second world. I would continue to hide my mental health condition from everyone but a few close friends. I would always have to keep this secret under wraps, and worry about people finding out about it. I would not be making the kind of impact on our society that I knew I would be able to make. And likely, people would find out about it anyway, whether because I chose to share it or in some other way, and I would get all the negative consequences later.
Based on this comparison, I saw that the first world was much more attractive to me. So I decided to take the plunge, and made a plan to share about the situation publicly. As part of doing so, I made that Facebook post. I got such a good reaction from my Facebook friends that I decided to make the post publicly visible to everyone, not only my friends. Moreover, I decided to become an activist in talking about my mental condition publicly, as in this essay that you are reading.
What can you do?
So how can you apply this story to your life? Whether you want to come out of the closet to people in your life about some unpleasant news, or more broadly overcome the short-term emotional pain of taking an action that would help you achieve your long-term goals, here are some strategies.
1) Consider the world where you want to live a year from now. What would the world look like if you take the action? What would it look like if you did not take the action?
2) Evaluate all the important costs and benefits of each world. What world looks the most attractive a year from now?
3) Decide on the actions needed to get to that world, make a plan, and take the plunge. Be flexible about revising your plan based on new information such as reactions from others, as I did regarding sharing about my own condition.
What do you think?
- Do you ever experience a reluctance to tell others about something important to you because of your concern about their response? How have you dealt with this problem yourself?
- Is there any area of your life where an orientation to the short term undermines much higher long-term rewards? Do you have any effective strategies for addressing this challenge?
- Do you think the strategy of imagining the world you want to live in a year from now can be helpful in any area of your life? If so, where and how?
Thanks in advance for your feedback and suggestions on optimizing the post!
Note: I'm terrible at making up titles, and I think that the one I gave may give the wrong impression. If anyone has a suggestion on what I should change it to, it would be much appreciated.
As I've been reading articles on less wrong, it seems to me that there are hints of an underlying belief which states that not only is capitalism a good economic paradigm, it shall remain so. Now, I don't mean to say anything like 'Capitalism is Evil!' I think that capitalism can, and has, done a lot of good for humanity.
However, I don't think that capitalism will be the best economic paradigm going into the future. I used to view capitalism as an inherent part of the society we currently live in, with no real competing economic paradigm.
I recently changed my views as a result of a book someone recommended to me 'The zero marginal cost society' by Jeremy Rifkin. In it, the author states that we are in the midst of a third industrial revolution as a result of a new energy/production and communications matrix i.e. renewable energies, 3-D printing and the internet.
The author claims that these three things will eventually bring their respective sectors' marginal costs to zero. This is significant because of a 'contradiction at the heart of capitalism' (I'm not sure how to phrase this, so excuse me if I butcher it): competition is at the heart of capitalism, with companies constantly undercutting each other as a result of new technologies. These technological improvements allow a company to produce goods/services at a more attractive price whilst retaining a reasonable profit margin. As a result, we get better and better at producing things, and it lets us produce goods at ever-decreasing costs. But what happens when the costs of producing something hit rock bottom? That is, when they can go no lower.
3D printing presents a situation like this for a huge amount of industries, as all you really need to do is get some designs, plug in some feedstock and have a power source ready. The internet allows people to share their designs for almost zero cost, and renewable energies are on the rise, presenting the avenue of virtually free power. All that's left is the feedstock, and the cost of this is due to the difficulty of producing it. Once we have better robotics, you won't need anyone to mine/cultivate anything, and the whole thing becomes basically free.
And when you can get your goods, energy and communications for basically free, doesn't that undermine the whole capitalist system? Of course, the arguments presented in the book are much more comprehensive, and it details an alternative economic paradigm called the Commons. I'm just paraphrasing here.
Since my knowledge of economics is woefully inadequate, I was wondering if I've made some ridiculous blunder which everyone on this site knows about. Is there some fundamental reason why Jeremy Rifkin is a crackpot and I'm a fool for listening to him? Or is it more subtle than that? I ask because I found the arguments in the book pretty compelling, and I want some opinions from people who are much better suited to critiquing this sort of thing than I am.
Here is a link to the download page for the essay titled 'The comedy of the Commons' which provides some of the arguments which convinced me:
A lecture about the Commons itself:
And a paper (?) about governing the commons:
And here is a link to the author's page, along with some links to articles about the book:
An article displaying some of the sheer potential of 3D printers, and how it has the potential to change society in a major way:
Edit: Drat! I forgot about the stupid questions thread. Should I delete this and repost it there? I mean, I hope to discuss this topic with others, so it seems suitable for the DISCUSSION board, but it may also be very stupid. Advice would be appreciated.
Are there any LessWrong Skype groups, or active live chatrooms? I've been looking around and found nothing. Well, with the exception of the LW Study Hall, but it doesn't quite fit since it is primarily for social work/study facilitation purposes with only minor breaks. This would fulfill a primarily social function.
But you ask, wouldn't a regular Skype chat reduce effectiveness by distracting people from their work? A little bit, but I'd rather the distracting thing be increasing my rationality by engaging with the ideas alongside other people who are actively trying to do the same. I expect it to have an overall positive effect on productivity, since I am bound to encounter an idea or two that helps with exactly that.
Thus, the value of such a group for me would be to discuss topics pertinent to rationality, and to increase the shininess and entertainment value of LessWrong's ideas - they are already pretty interesting, and I've had fun thinking while sitting around reading the Sequences (I finished How To Actually Change Your Mind not too long ago). There are no meetups near me, and I'd rather engage via online interactions anyway.
If there is no such group already, I'd be happy to start one. Feel free to either leave your Skype name in the comments or send me a PM if you're interested.
edit: My Skype id is bluevertro
I feel that a lot of what's in LW (written by Eliezer or others) should be in mainstream academia. Not necessarily the most controversial views (the insistence on the MW hypothesis, cryonics, the FAI ...), but a lot of the work on overcoming biases should be there, be criticized there and be improved there.
For example, a few debiasing methods and a more formal explanation of LW's peculiar solution to free will (and more, these are only examples).
I don't really get why LW's content isn't in mainstream academia to be honest.
I get that peer review is far from perfect (although it's still the best we have, and post-publication peer review is also improving; see PubPeer), and that some would too readily dismiss LW's content - but not all. Lots would play by the rules and provide genuine criticisms during peer review (which would lead to the alteration of the content, of course), along with criticisms post-publication. This is, in my opinion, something that has to happen.
LW, Eliezer, etc., can't stay at the "crank" level, not playing by the rules, publishing books but no papers. Blogs are indeed faster and reach more people, but I'm not arguing for publishing only in academia. Blogs can (and should) continue.
Tell me what you think, as I seem to have missed something with this topic.
We haven't been having regular meetups in Madison, WI for a while (as far as I'm aware), so I'd love to get those going again! Organizing is actually terrifying for me: what if only one person comes, and that person is disappointed? So I'm looking for regulars. All you have to do is commit to attending one or two events a month, things like nature hikes, study halls, and brunches. I'll provide food, drink, optional cats for petting, and transportation with enough advance notice. Please email me if you're interested (firstname.lastname@example.org).
Thanks a bunch, have a fun weekend!
I'm currently reading through some relevant literature in preparation for my FLI grant proposal on the topic of concept learning and AI safety. I figured that I might as well write down the research ideas I get while doing so, so as to get some feedback and clarify my thoughts. I will be posting these in a series of "Concept Safety"-titled articles.
The AI in the quantum box
In the previous post, I discussed the example of an AI whose concept space and goals were defined in terms of classical physics, which then learned about quantum mechanics. Let's elaborate on that scenario a little more.
I wish to zoom in on a certain assumption that I've noticed in previous discussions of these kinds of examples. Although I couldn't track down an exact citation right now, I'm pretty confident that I've heard the QM scenario framed as something like "the AI previously thought in terms of classical mechanics, but then it finds out that the world actually runs on quantum mechanics". The key assumption being that quantum mechanics is in some sense more real than classical mechanics.
This kind of an assumption is a natural one to make if someone is operating on an AIXI-inspired model of AI. Although AIXI considers an infinite amount of world-models, there's a sense in which AIXI always strives to only have one world-model. It's always looking for the simplest possible Turing machine that would produce all of the observations that it has seen so far, while ignoring the computational cost of actually running that machine. AIXI, upon finding out about quantum mechanics, would attempt to update its world-model into one that only contained QM primitives and to derive all macro-scale events right from first principles.
No sane design for a real-world AI would try to do this. Instead, a real-world AI would take advantage of scale separation. This refers to the fact that physical systems can be modeled on a variety of different scales, and it is in many cases sufficient to model them in terms of concepts that are defined in terms of higher-scale phenomena. In practice, the AI would have a number of different world-models, each of them being applied in different situations and for different purposes.
Here we get back to the view of concepts as tools, which I discussed in the previous post. An AI that was doing something akin to reinforcement learning would come to learn the kinds of world-models that gave it the highest rewards, and to selectively employ different world-models based on what was the best thing to do in each situation.
As a toy example, consider an AI that can choose to run a low-resolution or a high-resolution psychological model of someone it's interacting with, in order to predict their responses and please them. Say the low-resolution model takes a second to run and is 80% accurate; the high-resolution model takes five seconds to run and is 95% accurate. Which model gets chosen will depend on the cost matrix: the value of a correct prediction, the cost of a false prediction, and the cost of making the other person wait an extra four seconds for each of the AI's replies.
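The tradeoff in this toy example can be made concrete. The following sketch is my own illustration - the utility function and every number in it (the penalty for a wrong prediction, the per-second delay cost) are invented for the example, not drawn from any particular framework:

```python
# Hypothetical sketch: choosing between a fast-but-inaccurate and a
# slow-but-accurate model of a conversation partner. All numbers are
# illustrative assumptions, not empirical values.

def expected_utility(accuracy, runtime_s, reward_correct=1.0,
                     penalty_wrong=-2.0, delay_cost_per_s=-0.1):
    """Expected utility of a single prediction made with a given model."""
    prediction_value = accuracy * reward_correct + (1 - accuracy) * penalty_wrong
    return prediction_value + runtime_s * delay_cost_per_s

low_res = expected_utility(accuracy=0.80, runtime_s=1.0)   # fast, 80% accurate
high_res = expected_utility(accuracy=0.95, runtime_s=5.0)  # slow, 95% accurate

best = "high-res" if high_res > low_res else "low-res"
print(low_res, high_res, best)
```

With these particular numbers the high-resolution model barely wins (roughly 0.35 vs. 0.3 expected utility per prediction), but raising the delay cost flips the choice - which is the point: "most accurate" alone doesn't decide which model gets used.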
We can now see that a world-model being the most real, i.e. making the most accurate predictions, doesn't automatically mean that it will be used. It also needs to be fast enough to run, and the predictions need to be useful for achieving something that the AI cares about.
World-models as tools
From this point of view, world-models are literally tools just like any other. Traditionally in reinforcement learning, we would define the value of a policy π in state s as the expected reward R given the state s and the policy π,

V(s, π) = E[R | s, π],

but under the "world-models are tools" perspective, we need to also condition on the world-model m:

V(s, π, m) = E[R | s, π, m].
We are conditioning on the world-model in several distinct ways.
First, there is the expected behavior of the world as predicted by world-model m. A world-model over the laws of social interaction would do poorly at predicting the movement of celestial objects, if it could be applied to them at all. Different predictions of behavior may also lead to differing predictions of the value of a state. This is described by the equation above.
Second, there is the expected cost of using the world-model. Using a more detailed world-model may be more computationally expensive, for instance. One way of interpreting this in a classical RL framework would be that using a specific world-model will place the agent in a different state than using some other world-model would. We might describe this by saying that in addition to choosing its next action a on each time-step, the agent also needs to choose the world-model m which it will use to analyze its next observations. This will be one of the inputs to the transition function to the next state.
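One way to sketch this augmented formulation (my own toy rendering, not standard RL notation): the agent's per-step choice becomes a pair of action and world-model, the chosen model is carried into the next state, and its computational cost is charged against the reward.

```python
# Toy sketch: an MDP where each step's choice is (action, world-model).
# The chosen model becomes part of the next state, and its computational
# cost is subtracted from the reward. All details here are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    env: str    # environment-level state
    model: str  # which world-model the agent is currently using

def step(state, action, next_model, env_dynamics, compute_cost):
    """Transition function that takes the chosen world-model as an input."""
    next_env, env_reward = env_dynamics(state.env, action)
    reward = env_reward - compute_cost[next_model]
    return State(next_env, next_model), reward

# Example: switching to a detailed model costs more per step.
env_dynamics = lambda env, action: (env, 1.0)  # dummy dynamics, fixed reward
compute_cost = {"coarse": 0.1, "detailed": 0.5}
next_state, reward = step(State("room", "coarse"), "wait", "detailed",
                          env_dynamics, compute_cost)
print(next_state.model, reward)
```

The design choice this illustrates: once the model choice feeds the transition function, "which world-model to use" is just another decision the agent optimizes, rather than something fixed outside the learning loop.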
Third, there is the expected behavior of the agent using world-model m. An agent with different beliefs about the world will act differently in the future: this means that the future policy actually depends on the chosen world-model.
Some very interesting questions pop up at this point. Your currently selected world-model is what you use to evaluate your best choices for the next step... including the choice of what world-model to use next. So whether or not you're going to switch to a different world-model for evaluating the next step depends on whether your current world-model says that a different world-model would be better in that step.
We have not fully defined what exactly we mean by "world-models" here. Previously I gave the example of a world-model over the laws of social interaction, versus a world-model over the laws of physics. But a world-model over the laws of social interaction, say, would not have an answer to the question of which world-model to use for things it couldn't predict. So one approach would be to say that we actually have some meta-model over world-models, telling us which is the best to use in what situation.
On the other hand, it does also seem like humans often use a specific world-model and its predictions to determine whether to choose another world-model. For example, in rationalist circles you often see arguments along the lines of, "self-deception might give you extra confidence, but it introduces errors into your world-model, and in the long term those are going to be more harmful than the extra confidence is beneficial". Here you see an implicit appeal to a world-model which predicts an accumulation of false beliefs with some specific effects, as well as predicting the extra self-esteem with its effects. But this kind of analysis incorporates very specific causal claims from various (e.g. psychological) models, which are models over the world rather than just being part of some general meta-model over models. Notice also that the example analysis takes into account the way that having a specific world-model affects the state transition function: it assumes that a self-deceptive model may land us in a state where we have higher self-esteem.
It's possible to get stuck in one world-model: for example, a strongly non-reductionist model evaluating the claims of a highly reductionist one might think it obviously crazy, and vice versa. So it seems that we do need something like a meta-evaluation function. Otherwise it would be too easy to get stuck in one model which claimed that it was the best one in every possible situation, and never agreed to "give up control" in favor of another one.
One possibility for such a thing would be a relatively model-free learning mechanism, which just kept track of the rewards accumulated when using a particular model in a particular situation. It would then bias the selection of the model towards the direction of the model that had been the most successful so far.
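A minimal sketch of such a mechanism - my own construction, essentially a contextual bandit over world-models, with invented class and method names:

```python
# Toy model-free meta-selector: keeps a running average of the reward earned
# by each (situation, world-model) pair, and picks the model with the best
# track record, using epsilon-greedy exploration. Purely illustrative.

import random
from collections import defaultdict

class ModelSelector:
    def __init__(self, models, epsilon=0.1):
        self.models = list(models)
        self.epsilon = epsilon
        self.avg = defaultdict(float)   # (situation, model) -> mean reward
        self.count = defaultdict(int)   # (situation, model) -> times used

    def choose(self, situation):
        if random.random() < self.epsilon:
            return random.choice(self.models)  # occasionally explore
        return max(self.models, key=lambda m: self.avg[(situation, m)])

    def update(self, situation, model, reward):
        key = (situation, model)
        self.count[key] += 1
        # Incremental running mean of rewards earned using this model here.
        self.avg[key] += (reward - self.avg[key]) / self.count[key]

# Example: the "social" model has paid off better at parties, so it wins there.
selector = ModelSelector(["social", "physics"], epsilon=0.0)
selector.update("party", "social", 1.0)
selector.update("party", "physics", 0.2)
print(selector.choose("party"))  # → social
```

This is of course a crude stand-in for whatever reward-tracking mechanism actually arbitrates between models; the key property is that the arbitration happens outside any single world-model, so no model can entrench itself by its own lights.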
Human neuroscience and meta-models
We might be able to identify something like this in humans, though this is currently very speculative on my part. Action selection is carried out in the basal ganglia: different brain systems send the basal ganglia "bids" for various actions. The basal ganglia then chooses which actions to inhibit or disinhibit (by default, everything is inhibited). The basal ganglia also implements reinforcement learning, selectively strengthening or weakening the connections associated with a particular bid and context when a chosen action leads to a higher or lower reward than was expected. It seems that in addition to choosing between motor actions, the basal ganglia also chooses between different cognitive behaviors, likely even thoughts:
If action selection and reinforcement learning are normal functions of the basal ganglia, it should be possible to interpret many of the human basal ganglia-related disorders in terms of selection malfunctions. For example, the akinesia of Parkinson's disease may be seen as a failure to inhibit tonic inhibitory output signals on any of the sensorimotor channels. Aspects of schizophrenia, attention deficit disorder and Tourette's syndrome could reflect different forms of failure to maintain sufficient inhibitory output activity in non-selected channels. Consequently, insufficiently inhibited signals in non-selected target structures could interfere with the output of selected targets (expressed as motor/verbal tics) and/or make the selection system vulnerable to interruption from distracting stimuli (schizophrenia, attention deficit disorder). The opposite situation would be where the selection of one functional channel is abnormally dominant thereby making it difficult for competing events to interrupt or cause a behavioural or attentional switch. Such circumstances could underlie addictive compulsions or obsessive compulsive disorder. (Redgrave 2007)
Although I haven't seen a paper presenting evidence for this particular claim, it seems plausible to assume that humans similarly come to employ new kinds of world-models based on the extent to which using a particular world-model in a particular situation gives them rewards. When a person is in a situation where they might think in terms of several different world-models, there will be neural bids associated with mental activities that recruit the different models. Over time, the bids associated with the most successful models will become increasingly favored. This is also compatible with what we know about e.g. happy death spirals and motivated stopping: people will tend to have the kinds of thoughts which are rewarding to them.
The physicist and the AI
In my previous post, I discussed the example of the physicist who doesn't jump out of the window upon learning about QM and finding out that "location" is ill-defined:
The physicist cares about QM concepts to the extent that the said concepts are linked to things that the physicist values. Maybe the physicist finds it rewarding to develop a better understanding of QM, to gain social status by making important discoveries, and to pay their rent by understanding the concepts well enough to continue to do research. These are some of the things that the QM concepts are useful for. Likely the brain has some kind of causal model indicating that the QM concepts are relevant tools for achieving those particular rewards. At the same time, the physicist also has various other things they care about, like being healthy and hanging out with their friends. These are values that can be better furthered by modeling the world in terms of classical physics. [...]
A part of this comes from the fact that the physicist's reward function remains defined over immediate sensory experiences, as well as values which are linked to those. Even if you convince yourself that the location of food is ill-defined and you thus don't need to eat, you will still suffer the negative reward of being hungry. The physicist knows that no matter how they change their definition of the world, that won't affect their actual sensory experience and the rewards they get from that.
So to prevent the AI from leaving the box by suitably redefining reality, we have to somehow find a way for the same reasoning to apply to it. I haven't worked out a rigorous definition for this, but it needs to somehow learn to care about being in the box in classical terms, and realize that no redefinition of "location" or "space" is going to alter what happens in the classical model. Also, its rewards need to be defined over models to a sufficient extent to avoid wireheading (Hibbard 2011), so that it will think that trying to leave the box by redefining things would count as self-delusion, and not accomplish the things it really cared about. This way, the AI's concept for "being in the box" should remain firmly linked to the classical interpretation of physics, not the QM interpretation of physics, because it's acting in terms of the classical model that has always given it the most reward.
There are several parts to this.
1. The "physicist's reward function remains defined over immediate sensory experiences". Them falling down and breaking their leg is still going to hurt, and they know that this won't be changed no matter how they try to redefine reality.
2. The physicist's value function also remains defined over immediate sensory experiences. They know that jumping out of a window and ending up with all the bones in their body being broken is going to be really inconvenient even if you disregarded the physical pain. They still cannot do the things they would like to do, and they have learned that being in such a state is non-desirable. Again, this won't be affected by how they try to define reality.
We now have a somewhat better understanding of what exactly this means. The physicist has spent their entire life living in the classical world, and obtained nearly all of their rewards by thinking in terms of the classical world. As a result, using the classical model for reasoning about life has become strongly selected for. Also, the physicist's classical world-model predicts that thinking in terms of that model is a very good thing for surviving, and that trying to switch to a QM model where location was ill-defined would be a very bad thing for the goal of surviving. On the other hand, thinking in terms of exotic world-models remains a rewarding thing for goals such as obtaining social status or making interesting discoveries, so the QM model does get more strongly reinforced in that context and for that purpose.
Getting back to the question of how to make the AI stay in the box, ideally we could mimic this process, so that the AI would initially come to care about staying in the box. Then when it learns about QM, it understands that thinking in QM terms is useful for some goals, but if it were to make itself think in purely QM terms, that would cause it to leave the box. Because it is thinking mostly in terms of a classical model, which says that leaving the box would be bad (analogous to the physicist thinking mostly in terms of the classical model which says that jumping out of the window would be bad), it wants to make sure that it will continue to think in terms of the classical model when it's reasoning about its location.
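The distinction between rewards defined over raw sensory input and rewards defined over the agent's world-model (the anti-wireheading move attributed to Hibbard 2011 above) can be sketched in toy form. Everything below is a hypothetical illustration invented for this post — the dictionary world-model, the `fake_sensor` action, and both reward functions are assumptions, not an implementation of Hibbard's actual proposal.

```python
# Toy sketch: a reward over raw observations can be wireheaded by tampering
# with the sensor, while a reward over the agent's own world-model detects
# the tampering as self-delusion. All names here are hypothetical.

def sensory_reward(observation):
    # Wirehead-able: anything that produces the observation "in_box" scores well.
    return 1.0 if observation == "in_box" else 0.0

def model_based_reward(world_model):
    # Defined over the agent's model: the agent asks its own classical model
    # whether it is really in the box, regardless of what the sensor says.
    return 1.0 if world_model["classical"]["agent_location"] == "box" else 0.0

# The agent's current model: classical and QM descriptions of the same world.
model = {
    "classical": {"agent_location": "box"},
    "qm": {"agent_location": "delocalised amplitude"},
}

# A "redefinition" action: tamper with the sensor, not with the world.
def fake_sensor(model):
    return "in_box"  # the sensor now always reports "in_box"

# The agent actually leaves the box while the fake sensor keeps reporting it.
model["classical"]["agent_location"] = "outside"
print(sensory_reward(fake_sensor(model)))   # 1.0 -- wireheaded
print(model_based_reward(model))            # 0.0 -- self-delusion detected
```

The design point this is meant to mirror: because the model-based reward is evaluated against the classical model that has always paid off, the agent's own evaluation predicts that "leaving the box by redefining location" accomplishes nothing it actually cares about.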
A recently published article in Nature Methods describes a new protocol for preserving mouse brains that allows neurons to be traced across the entire brain, something that wasn't possible before. This is exciting because in as little as 3 years the method could be extended to larger mammals (like humans), paving the way for better neuroscience or even brain uploads. From the abstract:
Here we describe a preparation, BROPA (brain-wide reduced-osmium staining with pyrogallol-mediated amplification), that results in the preservation and staining of ultrastructural details throughout the brain at a resolution necessary for tracing neuronal processes and identifying synaptic contacts between them. Using serial block-face electron microscopy (SBEM), we tested human annotator ability to follow neural ‘wires’ reliably and over long distances as well as the ability to detect synaptic contacts. Our results suggest that the BROPA method can produce a preparation suitable for the reconstruction of neural circuits spanning an entire mouse brain
There's a lot that I really like about communicating via writing. Communicating in person is sometimes frustrating for me, and communicating via writing addresses a lot of those frustrations:
1) I often want to make a point that depends on the other person knowing X. In person, I could pause and ask: "Wait, do you know X? If yes, good, I'll continue. If no, let me think about how to explain it briefly. Or do you want me to explain it in more depth? Or do you want to try to proceed without knowing X and see how it goes?" But pausing like that every time would add a lot of friction to conversations, and if I don't pause, I risk miscommunication (because the other person may not have the dependency X).
In writing, I can just link to an article. If the other person doesn't have the dependency, they have options. They could try to proceed without knowing X and see how it goes; if that doesn't work out, they could come back and read the link. Or they could read the link right away. And in reading the link, they can choose how deeply to read. I.e. they could just skim if they want to.
Alternatively, if you don't have something to link to, you could add a footnote. I think a UI like Medium's side comments is far preferable to putting footnotes at the bottom of the page. I hope to see this adopted across the internet some time in the next 5 years or so.
2) I think that in general, being precise about what you're saying is actually quite difficult/time-consuming*. For example, I don't really mean what I just said. I'm actually not sure how often it's difficult/time-consuming to be precise about what you're saying. And I'm not sure how often it's useful to be precise about what you're saying (or really, more precise... whatever that means...). I guess what I really mean is that it happens often enough that it's a problem. Or maybe just that for me, it happens often enough that I find it to be a problem.
Anyway, I find that putting quotes around what I say is a nice way to mitigate this problem.
Ex. It's "in my nature" to be strategic.
The quotes show that the words inside them aren't precisely what I mean, but are close enough to communicate the gist of it. I sense that this communication often happens through empathetic inference.
*I also find that I feel internal and external pressure to be consistent with what I say, even when I know I'm oversimplifying. This is a problem and has negatively affected me. I recently realized what a big problem it is, and will try very hard to address it (or really, I plan on trying very hard but I'm not sure blah blah blah blah blah...).
Note 1: I find internal conversation/thinking as well as interpersonal conversation to be "chaotic". (What follows is rant-y and not precisely what I believe. But being precise would take too long, and I sense that the rant-y tone helps to communicate without detracting from the conversation by being uncivil.) It seems that a lot of other people (much less so on LW) have more "organized" thinking patterns. I can't help but think that that's BS. Well, maybe they do, but I sense that they shouldn't. Reality is complicated. People seem to oversimplify things a lot, and to think in terms of black and white. When you do that, I could see how one's thoughts could be "organized". But when you really try to deal with the complexities of reality... I don't understand how you could just go through life with organized thoughts.
Note 2: I sense that this post somewhat successfully communicates my internal thought process and how chaotic it can be. I'm curious how this compares to other people. I should note that I was diagnosed with a mild-moderate case of ADHD when I was younger. But that was largely based on iffy reporting from my teachers. They didn't realize how much conscious thought motivated my actions. I.e. I often chose to do things that seem impulsive because I judged them to be worth it. But given that my mind is always racing so fast, and that I have a good amount of trouble deciding to pay attention to anything other than the most interesting thing to me, I'd guess that I do have ADHD to some extent. I'm hesitant to make that claim without ever having been inside someone else's mind, though (how incredibly, incredibly cool would that be!!!) - appearances could be deceiving.
3) It's easier to model and traverse the structure of a conversation/argument when it's in writing. You could break things into nested sections (which isn't always a perfect way to model the structure, but is often satisfactory). In person, I find that it's often quite difficult for two people (let alone multiple people) to stay in sync with the structure of the conversation. The outcome of this is that people rarely veer away from extremely superficial conversations. Granted, I haven't had the chance to talk to many smart people in real life, and so I don't have much data on how deep a conversation between two smart people could get. My guess is that it could get a lot deeper than what I'm used to, but that it'd be pretty hard to make real progress on a difficult topic without outlining and diagramming things out. (Note: I don't mean "deep as in emotional", I mean "deep as in nodes in a graph")
There are also a lot of other things to say about communicating in writing vs. in person, including:
- The value of the subtle things like nonverbal communication and pauses.
- The value of a conversation being continuous. When it isn't, you have to re-load the context over and over again.
- How much time you have to think things through before responding.
- I sense that people are way more careful in writing, especially when there's a record of it (rather than, say, a PM).
This is a discussion post, so feel free to comment on these things too (or anything else in the ballpark).
This summary was posted to LW Main on May 8th. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
Irregularly scheduled Less Wrong meetups are taking place in:
- Ann Arbor, MI Discussion Meetup 6/13: 13 June 2015 01:30PM
- Australian Less Wrong Mega Meetup #2: 17 July 2015 07:00PM
- Australia-wide Mega-Camp!: 17 July 2015 07:00PM
- Bangalore LW Meetup: 09 May 2015 09:18AM
- BYU-I: 08 May 2015 05:30PM
- Cologne meetup: 09 May 2015 05:00PM
- Dublin: 09 May 2015 02:00PM
- European Community Weekend 2015: 12 June 2015 12:00PM
- [Munich] May Meetup: 16 May 2015 03:00PM
- San Francisco Meetup: Short Talks: 11 May 2015 06:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
- Brussels - The Art of Not Being Right: 09 May 2015 01:00PM
- Canberra: Putting Induction Into Practice: 09 May 2015 06:00PM
- Durham, NC (RTLW) Discussion Meetup: 16 April 2026 07:00PM
- London meetup: 10 May 2015 02:00PM
- Tel Aviv Meetup: Social & Board Games: 12 May 2015 07:00PM
- [Vienna] Rationality Meetup Vienna: 09 May 2015 02:00PM
- Washington, D.C.: xkcd Discussion: 10 May 2015 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
Summary: It's easy to get caught up in solving the wrong problems: solving the problems raised by a particular solution instead of solving the actual problem. You should pay very careful attention to what you are doing and why.
I'll relate a seemingly purposeless story about a video game to illustrate:
I was playing Romance of the Three Kingdoms some years ago, and was trying to build the perfect city. (The one city I ruled, actually.) Enemies kept attacking, and the need to recruit troops was slowing my population growth (not to mention deliberate sabotage by my enemies), so eventually I came to the conclusion that I would have to conquer the map in order to finish the job. So I conquered the map. And then the game ending was shown, after which, finally, I could return to improving cities.
The game ending, however, startled me out of continuing to play: my character, now emperor, was asked by his people to improve the condition of things (which were apparently terrible), and his response was that he needed to conquer the rest of Asia first, to ensure their security.
My initial response was outrage at how the game portrayed events, but I couldn't find fault in "his" response; it was exactly what I had been doing. Had the game continued past that point, with the rest of Asia, indeed the rest of the world, still holding threats to the peace I had established, that is exactly what I would have done. I had already conquered enemies who had never directly threatened me, on the supposition that they would, and because they held tactically advantageous positions.
It was an excellent game, and it managed to point out that I had failed in my original purpose in playing it. My purpose was subsumed by one of its own subgoals. I didn't set out to conquer the map. I lost the game: I achieved the game's victory conditions, yes, but failed my own. The ending, the exact description of how I had failed and how my reasoning had led to a conclusion I would have dismissed as absurd when I began, was so memorable that it still sticks in my mind, years later.
My original purpose was subsumed. By what, exactly, however?
By the realities of the game I was playing, I could say, if I were to rationalize my behavior; I wanted to improve all the cities I owned, but at no point until I had conquered the entire map could I afford to. At each point in the game, there was always one city that couldn't be reliably improved. The AI didn't share my goals; responding to force with force, to sabotage with sabotage, offered no penalties to the AI or its purposes, only to mine. But nevertheless, I had still abandoned my original goals. The realities of the game didn't subsume my purpose, which was still achievable within its constraints.
The specific reasons my means subsumed my ends may be illustrative: I inappropriately generalized. I reasoned as if my territory were an atomic unit; the risks incurred at my borders were treated as being incurred across the whole of my territory. I devoted my resources - in particular my time - to solving a problem which afflicted an ever-decreasing fraction of that territory. But even realizing that I was generalizing incorrectly wouldn't have stopped me; I'd have reasoned that the edge cities were still under the same threat, and that I couldn't finish my real task until I had finished my current one.
Maybe, once my imaginary video game emperor had finally finished conquering the world, he'd have finally turned to the task of improving things. Personally, I imagine he tripped and died falling down a flight of stairs shortly after conquering imaginary-China, and all of his work was undone in the chaos that ensued, because it seems the more poetic end to me.
A game taught me a major flaw in my goal-oriented reasoning.
I don't know the name for this error, if it has a name; internally, I call it incidental problem fixation, getting caught up in solving the sub-problems that arise in trying to solve the original problem. Since playing, I've been very careful, each time a new challenge comes up in the course of solving an overall issue, to re-evaluate my priorities, and to consider alternatives to my chosen strategy. I still have something of an issue with this; I can't count the number of times I've spent a full workday on a "correct" solution to a technical issue (say, a misbehaving security library) that should have taken an hour. But when I notice that I'm doing this, I'll step away, and stop working on the "correct" solution, and return to solving the problem I'm actually trying to solve, instead of getting caught up in all the incidental problems that arose in the attempt to implement the original solution.
ETA: Link to part 1: http://lesswrong.com/lw/e12/subsuming_purpose_part_1/
I'm glad to share an op-ed piece I published in one of the premier higher-education media channels on how I, as a professor, used rationality-informed strategies to deal with mental illness in the classroom. This is part of my broader project to promote rationality to a broad audience and thus raise the sanity waterline, so good news on that front. I'd also be glad to hear your advice about other strategies for promoting rationality broadly, and about any collaboration you might be interested in doing around such public outreach.