
Comment author: Vaniver 10 June 2017 07:42:43PM 2 points [-]

I've moved a post about an ongoing legal issue to its author's drafts. They can return it to public discussion when the trial concludes.

Comment author: Zack_M_Davis 11 June 2017 07:58:20PM 2 points [-]

What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the standard practice of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)

Comment author: Viliam 08 June 2017 03:15:22PM 4 points [-]

The mature way to handle suicidal people is to call for professional help, as soon as possible. If the suicidal thinking is caused by some kind of hormonal imbalance -- which the person will report as "I have logically concluded that it is better for me to die", because that is how it feels from inside -- you cannot fix the hormonal imbalance with a clever argument; that would be magical thinking. Most likely, you will talk to the person until their hormonal spike passes, and the person will say "uhm, what you said makes a lot of sense, I already feel better, thanks!" Then the next day you will find them hanging from a noose in their room, because another hormonal spike hit them later in the evening and they "logically concluded" that life actually is meaningless, there is no hope, and there is no reason to delay the inevitable, so they wouldn't even call you or wait until morning, because that also would be pointless.

(Been there, failed to do the right thing, lost a friend.)

Sure, this seems like an unfalsifiable hypothesis "you believe it is not caused by hormones because that belief is caused by hormones". But that's exactly the reason to seek professional help instead of debating it; to measure your actual level of hormones, and if necessary, to fix it. Body and mind are connected more than most people admit.

That's all from my side. If you are sincere, I wish you luck. Any meaningful help I could offer is exactly what you refuse, so I have nothing more to add.

Comment author: Zack_M_Davis 08 June 2017 05:54:05PM 1 point [-]

The mature way to handle suicidal people is to call for professional help, as soon as possible.

It's worth noting that this creates an incentive to never talk about your problems.

My advice for people who value not being kidnapped and forcibly drugged by unaccountable authority figures who won't listen to reason is to never voluntarily talk to psychiatrists, for the same reason you should never talk to cops.

Comment author: Viliam 07 June 2017 09:27:45PM *  10 points [-]

WTF is this? Please take a step back, and look at what you did here.

Literally your first words on this website are about suicide. Then you say no suicide, and then you explain in detail how people are not supposed to talk about your possible suicide. Half of your total contribution to this website is about your suicide-not-suicide. Thanks; now everyone understands they are not supposed to think about the pink elephant in the room. So... why did you mention it in the first place? Three times in a row, using a bold font once, just to be sure. It seems like you actually want people to think about your possible suicide, but also to feel guilty if they mention it. Because the same comment, without this mind game, could have been written like this:

I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, I am interested in your opinions about this topic.

Much less drama, right?

Next, you provide zero information about yourself. You are a stranger here, and you use an anonymized e-mail. And I guess we will not learn more about you here, because you prefer private conversations anyway. However, you "urge" people to contact you, and to provide an "appropriately genuine introduction", a brief explanation of their beliefs, and their intent to help you. But they are not supposed to mention your suicide-not-suicide, right? But they are supposed to want to help you. But they are not allowed to suggest seeking expert help. And they are supposed to tell you things about themselves, without knowing anything about you. And all of this is supposed to happen off-site, without any observers, inter alia because the word limit on LW messages is problematic. Right. How strange that no one else has realized yet how much this problematic word limit prevents us from debating AI-related topics here.

More red flags than in China on Mao's birthday.

I don't think you are at risk of suicide. Instead, I think that people who contact you are at serious risk of being emotionally exploited (and reminded of your suicide-not-suicide, and of their intent to help). Something like: "I told you that I am ready to die unless you convince me not to; and you promised you would help me; and you know that I will never seek expert help; and you don't know whether anyone else talks to me; so... if you stop interacting with me, you might be responsible for my death; is that really okay for you as a utilitarian?"

If anyone wants to play this game, go ahead. I have already seen my share of "suicidal" people giving others detailed instructions on how to interact with them, and, unsurprisingly, decades later all of them are still alive; and the people who interacted with them regret the experience.

Comment author: Zack_M_Davis 08 June 2017 03:54:39AM 3 points [-]

I corresponded with sad_dolphin. It added a little bit of gloom to my day, but I don't regret doing it: having suffered from similar psychological problems in the past, I want to be there with my hard-won expertise for people working through the same questions. I agree that most people who talk about suicide in such a manner are unlikely to go through with it, but that doesn't mean they're not being subjectively sincere. I'd rather such cries for help not be disincentivized here (as you seem to be trying to do); I'd rather people be able to seek and receive support from people who actually understand their ideas, rather than be callously foisted off onto alleged "experts" who don't understand.

Comment author: Lumifer 08 June 2017 01:13:11AM *  0 points [-]

but you can do that using standard probability theory

Of course I can. I can represent my beliefs about the probability as a distribution, a meta- (or a hyper-) distribution. But I'm being told that this is "meta-uncertainty" which right-thinking Bayesians are not supposed to have.

No one is talking about inventing new fields of math

say it's normally distributed

Clearly not since the normal distribution goes from negative infinity to positive infinity and the probability goes merely from 0 to 1.

the probability of r having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5

That 0.5 is conditional on the distribution of r, isn't it? That makes it not a different question at all.

Notably, if I'm risk-averse, the risk of betting on Coin 1 looks different to me from the risk of betting on Coin 2.

St. Cox probably does.

Can you elaborate? It's not clear to me.

Comment author: Zack_M_Davis 08 June 2017 03:19:18AM 0 points [-]

But I'm being told that this is "meta-uncertainty" which right-thinking Bayesians are not supposed to have.

Hm. Maybe those people are wrong??

Clearly not since the normal distribution goes from negative infinity to positive infinity

That's right; I should have either said "approximately", or chosen a different distribution.

That 0.5 is conditional on the distribution of r, isn't it? That makes it not a different question at all.

Yes, it is averaging over your distribution for r. Does it help if you think of probability as relative to subjective states of knowledge?

Can you elaborate?

(Attempted humorous allusion to how Cox's theorem derives probability theory from simple axioms about how reasoning under uncertainty should work, less relevant if no one is talking about inventing new fields of math.)

Comment author: Lumifer 07 June 2017 08:24:06PM 0 points [-]

That would be missing the point.

Would it? My interest is in constructing a framework which provides useful, insightful, and reasonably accurate models of actual human decision-making. The vNM theorem is quite useless in this respect -- I don't know what my (or other people's) utility function is, I cannot calculate or even estimate it, many important choices can be expressed as a set of lotteries only in very awkward ways, etc. And this is all besides the fact that empirical human preferences tend not to be coherent and that they change over time.

Risk aversion is an easily observable fact. Every day in financial markets people pay very large amounts of money in order to reduce their risk (for the same expected return). If you think they are all wrong, by all means, go and become rich off these misguided fools.

But your different beliefs about the coins don't need to show up in your probability for a single coinflip.

Why not? As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it's a bad idea? Does St. Bayes frown upon it?

Comment author: Zack_M_Davis 07 June 2017 11:45:53PM 4 points [-]

As I said, I want a richer way to talk about probabilities, more complex than taking them as simple scalars. Do you think it's a bad idea?

That's right, I think it's a bad idea: it sounds like what you actually want is a richer way to talk about your beliefs about Coin 2, but you can do that using standard probability theory, without needing to invent a new field of math from scratch.

Suppose you think Coin 2 is biased and lands heads some unknown fraction r of the time. Your uncertainty about the parameter r will be represented by a probability distribution: say it's normally distributed with a mean of 0.5 and a standard deviation of 0.1. The point is, the probability of r having a particular value is a different question from the probability of getting heads on your first toss of Coin 2, which is still 0.5. You'd have to ask a different question than "What is the probability of heads on the first flip?" if you want the answer to distinguish the two coins. For example, the probability of getting exactly k heads in n flips is C(n, k)(0.5)^k(0.5)^(n−k) for Coin 1, but (I think?) ∫₀¹ (1/√(0.02π))e^(−(r−0.5)²/0.02) C(n, k)(r)^k(1−r)^(n−k) dr for Coin 2.
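(A rough numeric sketch of that comparison, in case the formulas are easier to check as code. This isn't part of the original argument; the function names and step count are mine, and the normal prior is simply truncated to [0, 1], which is a harmless approximation here.)

```python
# Rough numeric check of the two formulas above (midpoint-rule integration).
from math import comb, exp, pi, sqrt

def prob_k_heads_coin1(k, n):
    """P(exactly k heads in n flips) for the fair Coin 1."""
    return comb(n, k) * 0.5**k * 0.5**(n - k)

def prob_k_heads_coin2(k, n, mean=0.5, sd=0.1, steps=100_000):
    """P(exactly k heads in n flips) for Coin 2, averaging the binomial
    probability over a normal prior on r, truncated to [0, 1]."""
    total = 0.0
    dr = 1.0 / steps
    for i in range(steps):
        r = (i + 0.5) * dr
        prior = exp(-(r - mean) ** 2 / (2 * sd**2)) / (sd * sqrt(2 * pi))
        total += prior * comb(n, k) * r**k * (1 - r) ** (n - k) * dr
    return total

print(prob_k_heads_coin1(1, 1))   # 0.5
print(prob_k_heads_coin2(1, 1))   # ~0.5: a single flip doesn't distinguish the coins
print(prob_k_heads_coin1(8, 10))  # ~0.044
print(prob_k_heads_coin2(8, 10))  # ~0.063: longer sequences do
```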

Does St. Bayes frown upon it?

St. Cox probably does.

Comment author: Lumifer 07 June 2017 06:56:49PM 0 points [-]

expected utility maximization

You are just rearranging the problem without solving it. Can my utility function include risk aversion? If it can, we're back to square one: a risk-averse Bayesian rational agent.

And that's even besides the observation that being Bayesian and being committed to expected utility maximization are orthogonal things.

The kind of meta-uncertainty you seem to want, that gets you out of uncomfortable bets, doesn't exist for Bayesians.

I have no need for something that can get me out of uncomfortable bets since I'm perfectly fine with not betting at all. What I want is a representation for probability that is more rich than a simple scalar.

In my hypothetical the two 50% probabilities are different. I want to express the difference between them. There are no sequences involved.

Comment author: Zack_M_Davis 07 June 2017 07:49:30PM 4 points [-]

Can my utility function include risk aversion?

That would be missing the point. The vNM theorem says that if you have preferences over "lotteries" (probability distributions over outcomes; like, 20% chance of winning $5 and 80% chance of winning $10) that satisfy the axioms, then your decisionmaking can be represented as maximizing expected utility for some utility function over outcomes. The concept of "risk aversion" is about how you react to uncertainty (how you decide between lotteries) and is embodied in the utility function; it doesn't apply to outcomes known with certainty. (How risk-averse are you about winning $5?)
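(A minimal sketch of what "embodied in the utility function" means, using the lottery above and made-up utility functions; the numbers and names here are mine, not part of the original comment. The point is that risk attitude falls out of the curvature of the utility function rather than being a separate knob.)

```python
# Sketch: risk attitude as curvature of u, compared on lotteries with equal expected dollars.
from math import sqrt

lottery = [(0.20, 5.0), (0.80, 10.0)]   # the example lottery from above (expected value = $9)
sure_thing = [(1.0, 9.0)]               # $9 with certainty (same expected dollars)

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

u_concave = sqrt            # diminishing marginal utility -> risk-averse
u_linear = lambda x: x      # linear utility -> risk-neutral

print(expected_utility(lottery, u_concave), expected_utility(sure_thing, u_concave))
# ~2.98 vs 3.00 -> the concave agent prefers the sure $9
print(expected_utility(lottery, u_linear), expected_utility(sure_thing, u_linear))
# 9.0 vs 9.0 -> the linear agent is indifferent
```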

See "The Allais Paradox" for how this was covered in the vaunted Sequences.

In my hypothetical the two 50% probabilities are different. I want to express the difference between them. There are no sequences involved.

Obviously you're allowed to have different beliefs about Coin 1 and Coin 2, which could be expressed in many ways. But your different beliefs about the coins don't need to show up in your probability for a single coinflip. The reason for mentioning sequences of flips is that that's when your beliefs about Coin 1 vs. Coin 2 would start making different predictions.

Comment author: lmn 05 June 2017 10:28:22PM 0 points [-]

A stereotype is a relation of the form X => Y. It maps a class of people/individuals/what have you to a property Y. For example, people who wear glasses are smart. Occasionally, some individuals may conceive of the relation as Y <=> X. E.g. smart people wear glasses. I suspect this is due to reasons unrelated to the stereotype (e.g. an inability to distinguish between '=>' and '<=>'). I hope this is not common among the general population—the average human can't be that irrational, right? I shall give a charitable interpretation of the masses, and discuss only the relation 'X => Y'.

It would be better to think of it as X correlates with Y, or X is evidence for Y. And unlike your => relation, which you never adequately specified, these two relations are symmetric.

In response to comment by lmn on Birth of a Stereotype
Comment author: Zack_M_Davis 05 June 2017 10:52:02PM 0 points [-]

Correlations are symmetric, but "is evidence for" may not be (depending on how you interpret the phrase): P(A|B) ≠ P(B|A) (unless P(A) = P(B)).
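(A made-up head-count example of how the two conditional probabilities come apart when the base rates differ; all of the numbers below are hypothetical, chosen only to make the arithmetic easy.)

```python
# Hypothetical population of 1,000 people.
total = 1000
smart = 100                      # P(smart) = 0.1
smart_with_glasses = 60          # P(glasses | smart) = 0.6
not_smart_with_glasses = 270     # P(glasses | not smart) = 0.3

glasses = smart_with_glasses + not_smart_with_glasses   # 330 glasses-wearers

p_glasses_given_smart = smart_with_glasses / smart      # 0.60
p_smart_given_glasses = smart_with_glasses / glasses    # ~0.18

print(p_glasses_given_smart, p_smart_given_glasses)
# The two conditionals differ; they would coincide only if P(smart) equaled P(glasses).
```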

Comment author: Zack_M_Davis 05 June 2017 04:59:32AM 3 points [-]

Expected utility is not the same thing as expected dollars. As AgentStonecutter explained to you on Reddit last month, the standard assumption of diminishing marginal utility of money is entirely sufficient to account for preferring the guaranteed $250,000; no need to patch standard decision theory. (The von Neumann–Morgenstern theorem doesn't depend on decisions being repeated; if you want to escape your decisions being describable as the maximization of some utility function, you have to reject one of the axioms, even if your decision is the only decision in the universe.)
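(For concreteness, a hedged sketch of the diminishing-marginal-utility point: the gamble's exact terms aren't reproduced here, so the 50% shot at $600,000 below is a stand-in, and square-root utility is just one convenient concave choice.)

```python
# Sketch: expected dollars vs. expected utility under a concave (sqrt) utility.
from math import sqrt

sure_thing = [(1.0, 250_000)]          # guaranteed $250,000
gamble = [(0.5, 600_000), (0.5, 0)]    # stand-in gamble with higher expected dollars

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u=sqrt):
    return sum(p * u(x) for p, x in lottery)

print(expected_value(sure_thing), expected_value(gamble))       # 250000 vs 300000
print(expected_utility(sure_thing), expected_utility(gamble))   # 500.0 vs ~387.3
# The gamble has more expected dollars but less expected utility, so preferring
# the sure $250,000 is already consistent with expected utility maximization.
```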

Comment author: Zack_M_Davis 05 June 2017 04:43:30AM 2 points [-]

more than I’m willing to commit to an article I am writing out of boredom

As a reader, this gives me pause. If you didn't have any more compelling reason to write than that, you shouldn't expect anyone to have a compelling reason to read. Maybe give yourself more credit: you weren't merely bored; the fact that you may have felt bored is incidental to the fact that you had something to say!

there is no way I’m going to go through the rigours of Solomonoff induction

Solomonoff induction is uncomputable; it's great to be aware that the theoretical foundations exist, but it's also important to be aware of what the theoretical foundations are and aren't good for. (Imagine saying "there's no way I'm going to go through the rigors of predicting the future state of all air molecules here given their current state" when what you actually want is a thermometer.)

On first encountering the glasses users that were smart, the fact that they wore glasses might have left a deep impression on the people who encountered them and may have been associated with their perceived intelligence.

But "On first encountering X that were Y, the fact that they were X might have left a deep impression on the people who encountered them" works for any X and Y; it can't explain why the stereotype links glasses and intelligence in particular.

A more specific hypothesis: people are more likely to need glasses while reading, and reading is mentally associated with intelligence because it is in fact the case that P(likes to read | intelligent) > P(likes to read | not intelligent).

The above is the charitable hypothesis. I decline—at this juncture—to mention the less charitable one.

Don't leave your readers in suspense like that; it's cruel! (Also, what makes a hypothesis "charitable", exactly?)

Alas, the stereotypes seem to be unfounded.

Are they?

I am not Yudkowsky, and so I would not proffer an evolutionary psychology hypothesis

Eliezer doesn't have a magic license authorizing him in particular to tell just-so stories: if he can do it, anyone can! (Some argue that we shouldn't, but I don't think I agree.)

Comment author: tcheasdfjkl 01 June 2017 02:32:26AM 4 points [-]

Zack, I think the problem (from my perspective) is that you tried being respectful in private, and by the time you started talking about this publicly, you were already being really harsh and difficult to talk to. I never got to interact with careful/respectful you on this topic.

(I understand this may have been emotionally necessary/unavoidable for you. But still, from my perspective there was a missing step in your escalation process. Though I should acknowledge that you spurred me to do some reading & writing I would not otherwise have done, and it's not impossible that your harshness jolted me into feeling the need to do that.)

Comment author: Zack_M_Davis 01 June 2017 02:49:07AM 1 point [-]

Yeah, that makes sense. Sorry. Feel free to say more or PM me if you want to try to have a careful-and-respectful discussion now (if you trust me).
