Rationality test: Vote for Trump

-18 pwno 16 June 2016 08:33AM

If there's such a small chance of your vote making a difference in the election, you should be comfortable voting for Trump.
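
The claim leans on expected-value reasoning: the expected impact of a single vote is the probability of being decisive times the value at stake. A toy calculation in Python, with every number invented purely for illustration:

```python
# Toy expected-value arithmetic behind "one vote barely matters."
# All numbers below are made-up assumptions, not estimates.

p_decisive = 1e-7        # assumed probability one vote swings the election
stakes = 1e6             # assumed utility difference between the outcomes

expected_impact = p_decisive * stakes
print(expected_impact)   # 0.1 -- tiny per-vote impact under these assumptions
```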

Ethicality of Denying Agency

9 pwno 07 July 2014 05:40AM

If your 5-year-old seems to have an unhealthy appetite for chocolate, you’d take measures to prevent them from consuming it. Any time they’d ask you to buy them some, you’d probably refuse their request, even if they begged. You might make sure that any chocolate in the house is well-hidden and out of their reach. You might even confiscate chocolate they already have, like forcing them to throw out half their Halloween candy. You’d almost certainly trigger a temper tantrum and considerably worsen their mood. But no one would label you an unrelenting tyrant. Instead, you’d be labeled a good parent.
 
Your 5-year-old isn’t expected to have the capacity to understand the consequences of their actions, let alone the efficacy to accomplish the actions they know are right. That’s why you’re a good parent when you force them to take the right actions, even against their explicit desires.
 
You know chocolate is a superstimulus and that 5-year-olds have underdeveloped mental executive functions. You have good reasons to believe that your child’s chocolate obsession isn’t caused by their agency, and instead caused by an obsolete evolutionary adaptation. But from your child’s perspective, desiring and eating chocolate is an exercise in agency. They’re just unaware of how their behaviors and desires are suboptimal. So by removing their ability to act upon their explicit desires, you’re denying their agency.  
 
So far, denying agency doesn’t seem so bad. You have good reason to believe your child isn’t capable of acting rationally and you’re only helping them in the long run. But the ethicality gets murky when your assessment of their rationality is questionable.
 
Imagine you and your mother have an important flight to catch 2 hours from now. You realize you have to leave for the airport now in order to make it on time. As you’re about to leave, you recall the 2 beers you recently consumed. You feel the alcohol left in your system will barely affect your driving, if at all. The problem is that if your mother found out about your beer consumption, she’d refuse to be your passenger until you completely sobered up - as she’s done in the past. You know this would cause you to miss your flight because she can’t drive and there are no other means of transportation.
 
A close family member died in a drunk driving accident several years ago and, ever since, she overreacts to drinking and driving risks. You think her reaction is irrational and reveals inconsistent preferences. For example, one time she was content to be your passenger after you warned her you were sleep deprived and your driving might be affected. Another time she refused to be your passenger after finding out you had one cocktail that hardly affected you. She’s generally a rational person, but given the tragedy and her past behavior, you deem her incapable of having a calibrated reaction. With all this in mind, you contemplate the ethicality of denying her agency by not telling her about your beer drinking.
 
Similar to the scenario with your 5-year-old, your intention is to ultimately help the person whose agency you’re denying. But in the scenario with your mother, it’s less clear whether you have enough information or are rational enough yourself to assess your mother’s capacity to act within her preferences. Humans are notoriously good at self-deception and rationalizing their actions. Your motivation to catch your flight might be making you irrational about how much alcohol affects your driving. Or maybe the evidence you collected against her rationality is skewed by confirmation bias. If you’re wrong about your assessment, you’d be disrespecting her wishes.  
 
I can modify the scenario to make its ethicality even murkier. Imagine your mother wasn’t catching the plane with you. Instead, you promised to drive her back to her retirement home before your flight. You don’t want to break your promise nor miss your flight, so you contemplate not telling her about your beer consumption.
 
In this modified version, you’re not actually making your mother better off by denying her agency - you’re only benefiting yourself. You just believe her reaction to your beer consumption isn’t calibrated, and it would cause you to miss your flight. Even if you had plenty of evidence to back up your assessment of her rationality, would it be ethical to deny her agency when it’s only benefiting you?


What are some times you’ve denied someone’s agency? What are your justifications for doing so?

Biases of Intuitive and Logical Thinkers

27 pwno 13 August 2013 03:50AM

Any intuition-dominant thinker who's struggled with math problems, or logic-dominant thinker who's struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style, and soon it ended up dominating -- granting me the luxury of experiencing both kinds of struggles. I eventually learned to apply the thinking style better optimized for the problem I was facing. Looking back, I realized why I kept sticking to one extreme.

I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.

The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to foster a thinking style better for learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better for navigating social situations. Environment can be changed to help develop certain thinking styles, but doing so should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful and frustrating if you're unprepared. (Such a painful experience is part of why these biases cause a positive feedback loop: they make us avoid environments that require the opposite thinking style.)

Despite genetic predisposition and environmental circumstances, there's room for improvement, and exposing these biases and learning to account for them is a great first step.

Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.


Intuition-dominant Biases


Overlooking crucial details

Details matter for understanding technical concepts. Overlooking a single word or sentence structure can cause complete misunderstanding -- a common blunder for intuition thinkers.

Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to experience the feeling we understand something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes dangerous. When learning a technical concept, every detail matters and the premature feeling of understanding stops us from examining them.

This bias is one that's more likely to go away once you realize it's there. You often don't know what details you're missing after you've missed them, so merely remembering that you tend to miss important details should prompt you to examine things more closely in the future.

Expecting solutions to sound a certain way

The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google's selection process has the two men participate in several technical challenges. One challenge required the men and their team to find a software bug. In a flash of insight, Vince Vaughn's character, Billy, shouts "Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!" After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.

Why is it believable to the audience that Billy can be so confident about his answer?

Billy's intuition made an association between the challenge question and riddle-like questions he's heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop into your mind, it's a good idea to legitimize them with supporting reasons.

Not recognizing precise language

Intuition thinkers are multi-channel learners -- all senses, thoughts and emotions are used to construct a complex database of clustered knowledge to predict and understand the world. With such robust information-extracting abilities, correct grammar and word usage are, more often than not, unnecessary for meaningful communication.

Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.

This bias explains part of why so many intuition thinkers dread math "word problems". Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It's hard for them to find correspondences between words in the problem and variables in the theorems and formulas they've learned.

The noise intuition brings makes it hard to think clearly. It's hard for intuition thinkers to tell whether their automatic associations should be taken seriously. Without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement "Matter can have both wave and particle properties at once" and believe they completely understand it. Unrelated associations with what "matter", "wave" and "particle" mean blindly take precedence over technical definitions.

The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding correspondence between each word and how it fits into a technical framework will eliminate any uncertainty.

Believing their level of understanding is deeper than what it is

Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool to learn what it means to understand is intuition. The concept "understanding" is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.

When intuition thinkers optimize for understanding, they're really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they've done is memorize some disconnected facts. Not knowing what it feels like to have deeper understanding, they become conditioned to always expect some amount of surprise. Even at their peak feeling of understanding, they're less confident than logical thinkers are at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.

One way I overcame this tendency was to constantly ask myself "why" questions, like a curious child bothering their parents. The technique helped me uncover what used to be unknown unknowns that made me feel overconfident in my understanding.


Logic-dominant Biases


Ignoring information they cannot immediately fit into a framework

Logical thinkers have and use intuition -- the problem is they don't feed it enough. They tend to ignore valuable intuition-building information if it doesn't immediately fit into a predictive model they deeply understand. While intuition thinkers don't filter out enough noise, logical thinkers filter out too much.

For example, if a logical thinker doesn't have a good framework for understanding human behavior, they're more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; no framework to date can make perfectly accurate predictions about it. Intuition can build powerful models despite working with many confounding variables.

Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
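
To make that concrete, here is a minimal single-step Bayesian update in Python; the hypothesis and all probabilities are invented for illustration:

```python
# One Bayesian update: P(H | E) = P(E | H) P(H) / P(E).
# The hypothesis and numbers are illustrative assumptions.

prior = 0.30               # assumed prior: P(hypothesis about someone's behavior)
p_e_given_h = 0.80         # assumed P(observed cue | hypothesis true)
p_e_given_not_h = 0.40     # assumed P(observed cue | hypothesis false)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 3))  # 0.462 -- but none of this works without
                            # first collecting the noisy observation
```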

Combating this tendency requires you to pay attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learn the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try matching that sensory input to the storytelling elements you've learned about. Once the basics are subconsciously picked up by habit, your conscious attention will be freed up to make new and more subtle observations.

Ignoring their emotions

Emotional input is difficult to factor, especially because you're emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to "go with your gut feelings" is a major function of intuition that logical thinkers tend to miss out on.

Your gut can predict if you'll get along long-term with a new SO, or what kind of outfit would give you more confidence in your workplace, or if learning tennis in your free time will make you happier, or whether you prefer eating a cheeseburger over tacos for lunch. Logical thinkers don't have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by the objective yet trivial details they do manage to factor in. A weak understanding of their own emotions also leads to a weaker understanding of others' emotions. You can become a better empathizer by better understanding yourself.

You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling and I'm sure CFAR teaches some good ones too. You can improve your gut feelings too. One way is making sure you're always consciously aware of the circumstances you're in when experiencing an emotion.

Making rules too strict

Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there's motivation to make the rule strict. The stricter the rule, the more predictive power, the better the framework. But when the domain you're trying to understand contains multivariable chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with models and theories that stay elegant only by being useless in practice.

Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcome the first time they meet him. One day he makes a business trip to Russia to meet with a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn't trustworthy at first and almost called off the meeting. It turns out that in Russia, smiling at strangers is a sign of insincerity. John's strict rule didn't account for cultural differences, preventing him from updating on his client's reactions and putting him in a risky situation.

The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he'd feel less confident about his knowledge of making first impressions, subsequently making him feel bad. He may also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.

When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule's predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other as you gather more evidence.
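
One way to picture that weight-shifting is a multiplicative-weights scheme: keep several conflicting rules around and penalize each one by how badly it predicted the latest observation. A sketch in Python; the rule names, losses, and learning rate are all invented:

```python
# Sketch: entertain conflicting rules and shift weight toward better predictors.
# Rule names, losses, and the learning rate are illustrative assumptions.

def reweight(weights, losses, eta=0.5):
    """Multiplicative-weights step: each rule's weight decays with its loss in [0, 1]."""
    for rule, loss in losses.items():
        weights[rule] *= (1 - eta) ** loss
    total = sum(weights.values())
    return {rule: w / total for rule, w in weights.items()}

weights = {"smiling always helps": 1.0, "smiling depends on culture": 1.0}

# One observation (say, the Russian client) where the cultural rule did better:
weights = reweight(weights, {"smiling always helps": 1.0,
                             "smiling depends on culture": 0.2})
print(weights)  # weight shifts toward the culture-sensitive rule
```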


How to understand people better

76 pwno 14 October 2011 07:53PM

I’ve been taking notes on how I empathize, considering I seem to be more successful at it than others. I broke down my thought-patterns, implied beliefs, and techniques, hoping to unveil the mechanism behind the magic. I shared my findings with a few friends and noticed something interesting: They were becoming noticeably better empathizers. 

I realized the route to improving one’s ability to understand what people feel and think is not a foreign one. Empathy is a skill; with some guidance and lots of practice, anyone can make drastic improvements. 

I want to impart the more fruitful methods/mind-sets and exercises I’ve collected over time. 

Working definitions:
Projection: The belief that others feel and think the same as you would under the same circumstances
Model: Belief or “map” that predicts and explains people’s behavior


Stop identifying as a non-empathizer

This is the first step towards empathizing better—or developing any skill for that matter. Negative self-fulfilling prophecies are very real and very avoidable. Brains are plastic; there’s no reason to believe an optimal path-to-improvement doesn’t exist for you. 

Not understanding people's behavior is your confusion, not theirs

When we learn our housemate spent 9 hours cleaning the house, we should blame our flawed map for being confused by their behavior. Maybe they’re deathly afraid of cockroaches and found a few that morning, maybe they’re passive-aggressively telling you to clean more, or maybe they just procrastinate by cleaning. Our model of the housemate has yet to account for these tendencies.

The strongest status signals

-1 pwno 06 March 2010 08:13AM

The community is clearly aware of, and understands well, status-motivated behavior in humans. However, I still believe the community focuses too much on a small subset of observable status transactions; namely, transactions that occur between people of approximately the same status level. My goal is to bring attention to the rest of the status game.

Because your attention is a limited resource and carries an opportunity cost, your mind evolved to be constantly on the lookout for stimuli that may affect your survival and reproductive success, and to ignore stimuli that don’t. Of course, the stimulus doesn’t really have to affect your fitness; it just needs some experienceable property that correlates with an experience in the ancestral environment that did. But when a stimulus repeatedly proves non-threatening, we eventually become desensitized and stop reacting, much like how first-time drivers are more reactive to stimuli than experienced drivers: the majority of once-executive mental processes become automated. So it’s safe to posit a sort of adaptive mechanism that filters sensory input to keep your attention-resources spent efficiently. This attention-conserving mechanism is the crux of status transactions.

When someone is constantly surrounded by people who don’t have power (i.e., status) over them, their attention-conserving mechanism goes to work. In this case, the stimulus they’re filtering out is “people who share experienceable characteristics with the low-status people they’re constantly surrounded by.” Over time, the stimulus has proven unworthy of attention. And just like an experienced driver, the person devotes substantially less attention towards the uninteresting stimuli.

The important thing to note is the behavior that results from how much attention is being spent. These behaviors can be interpreted as evidence of the relative status levels in an interaction. And because it’s evolutionarily advantageous to recognize your own status level, we’ve evolved a mechanism that detects these behaviors in order to help us figure out our status level. [Notice how this isn’t a chicken-or-egg problem.]

This behavior manifests itself in all sorts of ways in humans. Instead of enumerating all the behaviors, think of such behaviors like this:

Assume an individual optimizes for their comfort in a given experienceable environment. If an additional stimulus (in terms of status, the relevant stimuli are other people) enters their environment and causes them to change their previous behavior, that stimulus has non-zero expected power over the individual. Why else would they change their most comfortable state if the stimulus offered nothing of value and posed no threat? Of course, every stimulus will cause some change in behavior (at least initially), so the interesting question is how much behavior changed. The greater the reactivity to the stimulus, the more expected power the stimulus has over the individual.

The strongest status signal is observable reactivity; not only because we naturally react to interesting stimuli, but also because we’re evolved to interpret reactivity as evidence for status.

Most status signaling discussed on LessWrong is about certain stuff people wear, say, associate with, argue about, etc. What LessWrongers may not realize is that bothering to change your behavior at all towards other people is inherently status-lowering. For instance, if you engage in an argument with someone, you’re telling them they’re important enough to warrant so much of your attention and effort—even if you “act” high status the whole time. If a rock star simply gazes at their biggest fan, the fan will feel higher status. That’s because just getting the rock star’s attention is an accomplishment.

By engaging in a high-involvement activity with others, like having a conversation, participants assume a plausible upper and lower bound on each other’s status level. The fact that they both care enough to engage in an activity together is evidence they’re approximately the same status level. Because of this, they can’t send any signals that reliably indicate they’re much higher status than the other. So most of the status signaling they do to each other won’t influence their status much.

The behavior induced by indifference and reactivity to stimuli is where the strong evidence resides. Everything else merely budges what’s already been proven by indifference and reactivity. In short, the sort of status signaling LessWrong has been concerned with is only the tip of the iceberg.

Lie to me?

-1 pwno 24 June 2009 09:56PM

I used to think that, given two equally capable individuals, the person with more true information can always do at least as well as the other person, and hence that one can only gain from having true information. There is one implicit assumption that keeps this line of reasoning from being true in all cases. We are not perfectly rational agents; our mind isn’t stored in a vacuum, but in a Homo sapiens brain. There are certain false beliefs that benefit you by exploiting your primitive mental warehouse, e.g., self-fulfilling prophecies.

Despite the benefits, adopting false beliefs is an irrational practice. If people never acquire the maps that correspond the best to the territory, they won’t have the most accurate cost-benefit analysis for adopting false beliefs. Maybe, in some cases, false beliefs make you better off. The problem is you'll have a wrong or sub-optimal cost-benefit analysis, unless you first adopt reason.

Also, it doesn’t make sense to say that the rational decision could be to “have a false belief” because in order to make that decision, you would have to compare that outcome against “having a true belief.” But in order for a false belief to work, you must truly believe in it — you cannot deceive yourself into believing the false belief after knowing the truth! It’s like figuring out that taking a placebo leads to the best outcome, yet knowing it’s a placebo no longer makes it the best outcome.

Clearly, it is not in your best interest to choose to believe in a falsity—but what if someone else did the choosing? Can’t someone whose advice you rationally trust be the decider of whether to give you false information or not (e.g. a doctor deciding whether you receive a placebo)? They could perform a cost-benefit analysis without diluting the effects of the false belief. We want to know the truth, but in some cases we’d prefer to be unknowingly lied to.
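
The decider's job reduces to an expected-utility comparison. A toy version in Python, with every probability and utility invented for illustration:

```python
# Toy cost-benefit analysis for a trusted advisor: truth vs. beneficial lie.
# All probabilities and utilities are illustrative assumptions.

p_benefit = 0.7       # assumed chance the false belief delivers its benefit
u_benefit = 10.0      # assumed utility of the placebo-like benefit
p_discovery = 0.1     # assumed chance the lie is later discovered
u_discovery = -8.0    # assumed utility hit if it is

u_truth = 0.0         # baseline: simply being told the truth

eu_lie = p_benefit * u_benefit + p_discovery * u_discovery  # 7.0 - 0.8 = 6.2
print("lie" if eu_lie > u_truth else "truth")  # -> "lie" under these assumptions
```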

Which brings me to my question: do we program an AI to only tell us the truth or to lie when the AI believes (with high certainty) the lie will lead us to a net benefit over our expected lifetime?

Added: Keep in mind that knowledge of the truth, even for a truth-seeker, is finite in value. The AI can believe that the benefit of a lie would outweigh a truth-seeker's cost of being lied to. So unless someone values the truth above anything else (which I highly doubt), would a truth-seeker ever choose to be told only the truth by the AI?

"Self-pretending" is not as useful as we think

1 pwno 25 April 2009 11:01PM

A few weeks ago I made a draft of a post that was originally intended to be about the same issue addressed in MBlume’s post regarding beneficial false beliefs. Coincidentally, my draft included the exact same hypothetical about entering a club believing you’re the most attractive person in the room in order to increase your chances of attracting women. There seems to be general agreement with MBlume’s “it’s ok to pretend because it’s not self-deception and produces similar results” conclusion. I was surprised to see so much agreement, considering that when I made my original draft I reached a completely different conclusion.

I do agree, however, that pretending may have some benefits, but those benefits are much more limited than MBlume makes them out to be. He brings up a time when pretending helped him better fit into his character in a play. Unfortunately, his anecdote is not an appropriate example of overcoming vestigial evolutionary impulses by pretending. His mind wasn’t evolutionarily programmed to “be afraid” when pretending to be someone else; it was programmed to “be afraid” when hitting on attractive women. When I am alone in my room I can act like a real alpha male all day long, but put me in front of attractive women (or people in general) and I will retreat to my stifled self.

The only way false beliefs can overcome your obsolete evolutionary impulses is to truly believe in those false beliefs. And we all know why that would be a bad idea. Furthermore, pretending can be dangerous just like reading fiction can be dangerous. So the small benefit that pretending might give may not even be worth the cost (at times).

But there is something we can learn from these (sometimes beneficial) false beliefs.

Obviously, there is no direct causal chain from self-fulfilling beliefs to real-world success. Beliefs, per se, are not the key variables in causing success; instead, these beliefs give rise to whatever the key variables are. We should figure out what those key variables are and find a systematic way of producing them.

With the club example, we should instead figure out what behavior changes may result from believing that every girl is attracted to you. Then, figure out which of those behaviors attract women and find a way to perfect those behaviors. This is the approach the seduction community adopts for learning how to attract women—and it works.

Same goes with public speaking. If you have a fear of public speaking, you can’t expect to pretend your fear away. There are ways of reducing unnecessary emotions; the ways that work, however, don’t depend on pretending.

 

Thoughts on status signals

7 pwno 23 March 2009 09:25PM

The LW community knows all too well about the status-seeking tendencies everyone has, not excluding themselves. However, the discussion on status signaling needs to be developed further. Here are some questions I don’t think have been addressed: what can we conclude about people who are blatantly signaling higher status? Should we or can we stop people from signaling?

First, let me clarify what I believe to be the nature of status signals. A status signal only exists in certain contexts. A signal in one community may not be effective in another simply because the other community has a different value system. Driving up to a Singularity Summit with 24-inch spinning rims on your car will signal low status, if anything.

An interesting property of status signals is that they expire. If everybody knows that everybody knows that a certain behavior has been used as a status signal in the past, it no longer works. One example of a status signal that is nearing expiration is buying an unacquainted woman a drink at the bar (note the context I am referring to; buying someone a drink may signal high status in other contexts). There is nothing inherently wrong with this act; it’s just that women know most men are merely trying to signal high status—therefore, the signal won’t work. Some men know that women know about this signal and thus stop using it.

On LW, one signal on the verge of expiring is being a contrarian about everything or always finding faults with another’s arguments. This, however, could lead to a new anti-signal signal: agreeing too much.  

Signals that have completely expired are far more numerous. For example, showing your resume or college transcript in most contexts is unacceptable. Even when applying for a job, the resume is no longer sufficient—several interviews are now necessary. Of course, in the interviews, the interviewer is just looking for unexpired signals, i.e., signals they don’t know are signals.

This discussion on the expiration of signals raises this question: why do signals expire?

When A realizes that B is signaling, B’s incentive scheme is exposed. A knows that B is trying to make himself appear higher status in the eyes of A or anyone else he is signaling to. Furthermore, A knows that B thinks A doesn’t know the signal is, in fact, a signal. Otherwise, B wouldn’t have sent the signal. A now knows that B is trying to impress (a low-status behavior, by the way) and therefore that B has an incentive to lie. Since A knows that B doesn’t know that A knows he is signaling, A figures B thinks he can get away with lying or exaggerating the truth. And since A knows that B has an incentive to lie, A will find the signal not credible. In short, a signal expires once it’s common knowledge that the signal is a signal.

In an ideal world, we would all just cooperate and tell the truth about ourselves, and we wouldn’t have to play this silly signaling game. Unfortunately, if people start cooperating, the incentive to defect only gets higher. As you can see, this is a classic Prisoner’s Dilemma.
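
For readers who want the payoff structure spelled out, here is the standard Prisoner's Dilemma matrix mapped onto signaling; the payoff numbers are the usual textbook values, not anything from this post:

```python
# Standard Prisoner's Dilemma payoffs mapped onto signaling:
# "cooperate" = report your status honestly, "defect" = inflate your signals.
# Payoff tuples are (row player, column player); numbers are textbook values.

payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # everyone honest: signals stay reliable
    ("cooperate", "defect"):    (0, 5),  # the inflater exploits the honest player
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # arms race of inflated signals
}

# Defecting strictly dominates: whatever the other player does,
# inflating pays more -- which is why honesty fails to stick.
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]
```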

How can we get people to tell the truth?

Easy: everyone needs to learn about status-seeking behavior in order to weed out unreliable signals. The signaling game may never end, but with everyone’s knowledge of status-seeking behaviors, the signals that aren’t yet weeded out will correspond more accurately to one’s true status.

Is it rational to take psilocybin?

8 pwno 06 March 2009 04:44AM


Just to make my definition of rational clear:

Rationality is only intelligible in the context of a goal (whether that goal is rational or irrational). One who acts rationally will, given their information set, choose the best plan of action for achieving their goal. Part of being rational is knowing which goals will maximize one’s utility function.
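
Put formally, that definition says a rational agent picks the plan with the highest expected utility given its information. A minimal sketch in Python; the plans, probabilities, and utilities are placeholders, not a model of the actual question:

```python
# Rationality relative to a goal: choose the plan maximizing expected utility.
# Plans, outcome probabilities, and utilities are illustrative assumptions.

plans = {
    "plan_a": {"good outcome": (0.6, 8.0), "bad outcome": (0.4, -5.0)},
    "plan_b": {"status quo":   (1.0, 1.0)},
}

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over a plan's outcomes."""
    return sum(p * u for p, u in outcomes.values())

best = max(plans, key=lambda plan: expected_utility(plans[plan]))
print(best)  # "plan_a": 0.6*8.0 + 0.4*(-5.0) = 2.8 > 1.0
```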

According to Discovery:

“Scientists released their findings on a recent survey of volunteer psilocybin users 14 months after they took the drug.”

“Sixty-four percent of the volunteers said they still felt at least a moderate increase in well-being or life satisfaction, in terms of things like feeling more creative, self-confident, flexible and optimistic. And 61 percent reported at least a moderate behavior change in what they considered positive ways.”

Assuming you won’t get a bad trip, is it rational to take the drug?

I doubt a psychedelic experience can help me optimize my current utility function better than my sober self can. How could a drug make me more rational? I conclude that it must, in fact, change my preference ordering—make me care about things more than I would have otherwise. I prefer my current preferences and would rather keep them as they are.

If you were guaranteed to have all these positive results from taking the drug, would you take it?