All of Isnasene's Comments + Replies

Thanks for clarifying. To the extent that you aren't particularly sure about how consciousness comes about, it makes sense to reason about all sorts of possibilities related to capacity for experience and intensity of suffering. In general, I'm just kinda surprised that Eliezer's view is so unusual given that he is the Eliezer Yudkowsky of the rationalist community.

My impression is that the justification for the argument you mention is something along the lines of "the primary reason one would develop a coherent picture of their own mind is so they could conv... (read more)

2Rob Bensinger
I think things like 'the ineffable redness of red' are a side-effect or spandrel. On my account, evolution selected for various kinds of internal cohesion and temporal consistency, introspective accessibility and verbal reportability, moral justifiability and rhetorical compellingness, etc. in weaving together a messy brain into some sort of unified point of view (with an attendant unified personality, unified knowledge, etc.). This exerted a lot of novel pressures and constrained the solution space a lot, but didn't constrain it 100%, so you still end up with a lot of weird neither-fitness-improving-nor-fitness-reducing anomalies when you poke at introspection. This is not a super satisfying response, and it has basically no detail to it, but it's the least-surprising way I could imagine things shaking out when we have a mature understanding of the mind.

If one accepts Eliezer Yudkowsky's view on consciousness, the complexity of suffering in particular is largely irrelevant. The claim "qualia requires reflectivity" implies all qualia require reflectivity. This includes qualia like "what is the color red like?" and "how do smooth and rough surfaces feel different?" These experiences seem like they have vastly different evolutionary pressures associated with them that are largely unrelated to social accounting.

If you find the question of whether suffering in particular is sufficiently complex that it exists ... (read more)

3Rob Bensinger
I don't know what Eliezer's view is exactly. The parts I do know sound plausible to me, but I don't have high confidence in any particular view (though I feel pretty confident about illusionism). My sense is that there are two popular views of 'are animals moral patients?' among EAs:

1. Animals are obviously moral patients; there's no serious doubt about this.
2. It's hard to be highly confident one way or the other about whether animals are moral patients, so we should think a lot about their welfare on EV grounds. E.g., even if the odds of chickens being moral patients is only 10%, that's a lot of expected utility on the line.

(And then there are views like Eliezer's, which IME are much less common.) My view is basically 2. If you ask me to make my best guess about which species are conscious, then I'll extremely tentatively guess that it's only humans, and that consciousness evolved after language. But a wide variety of best guesses are compatible with the basic position in 2.

"The ability to reflect, pass mirror tests, etc. is important for consciousness" sounds relatively plausible to me, but I don't know of a strong positive reason to accept it -- if Eliezer has a detailed model here, I don't know what it is. My own argument is different, and is something like: the structure, character, etc. of organisms' minds is under very little direct selection pressure until organisms have language to describe themselves in detail to others; so if consciousness is any complex adaptation that involves reshaping organisms' inner lives to fit some very specific set of criteria, then it's likely to be a post-language adaptation. But again, this whole argument is just my current best guess, not something I feel comfortable betting on with any confidence.

I haven't seen an argument for any 1-style view that seemed at all compelling to me, though I recognize that someone might have a complicated nonstandard model of consciousness that implies 1 (just as Eliezer has a com

Forgive me if I engage with only part of this, I believe that the OP already acknowledges most of the problem you've described.

No forgiveness needed! I agree that the OP addresses this portion -- I read the OP somewhat quickly the first time and didn't fully process that part of it. And, as I've said, I do appreciate the thought you've put into all this.

I think I differ from the text of the OP in that social-shaming/lack-of-protest-method in rituals is often an okay and sensible thing. It is only when this property is combined with a serious problem with the... (read more)

You seem to have put a lot of thought into this ritual and I appreciate the consideration you, Ben, and others are giving it. Anyway, here's some raw unfiltered (potentially overly-harsh) criticism/commentary on Petrov Day -- take what you need from it: 

In addition to Lethriloth's criticism of LW Petrov Day failing to match the incentives/dynamics associated with Petrov (an important consideration indeed given the importance of incentive consideration in the LW canon), it is also important to consider that Community Rituals may serve ends wildly disp... (read more)

8Ruby
"Forgive me if I engage with only part of this, I believe that the OP already acknowledges most of the problem you've described." speaks to half of this. To engage with the point that is novel (epistemic status: haven't thought that hard about this), this makes me realize that there are different frames you could approach the ritual creation with:

1. It's a ritual for "the" community and therefore the entire community should be involved in it.

This seems very reasonable to me. If nothing else, the ritual design to date hasn't allowed for much active participation by the general community. Ben Pace sketched out an alternative, more communal ritual we could do next year. I'm not really sure what you mean about "true opinion of the community": you mean true opinion as to whether the ritual is any good? Or as to what action should be taken?

2. The ritual is for identifying a [sub]community who are willing to rally around a flag of "cooperate" and "do not destroy".

I care about the entire LessWrong community. I'm not sure where the exact boundaries lie–it's more than posters/commenters and probably short of anyone who's ever read a LW post–but I'm especially interested in the group who I feel like I can trust to work with me when the stakes are real and high. The Petrov Day ritual to date was designed to show that this group exists and trusts each other, and I think that's a powerful and valuable thing to do, if you can do it. Naturally, an ideal Petrov Day design would be both something for the entire community and perhaps also something that strengthens the trust between an especially devoted community core.

This is cool! I like speedrunning! There's definitely a connection between speed-running and AI optimization/misalignment (see When Bots Teach Themselves to Cheat, for example). Some specific suggestions:

  • Speedrun times have a defined lower bound on the minimization problem (zero seconds). So over an infinite amount of time, the time vs speedrun time plot necessarily converges to a flat line. You can avoid this by converting to an unbounded maximization problem. For example, you might wanna try plotting Speed-Run-Time-on-Game-Release divided by Speed-Run-Ti
... (read more)
8Jsevillamol
Those are good suggestions! Here is what happens when we align the start dates and plot the improvements relative to the time of the first run.

Relative improvement vs days since first run for most popular categories

I am slightly nervous about using the first run as the reference, since early data in a category is quite unreliable and basically reflects when the first person thought to submit a run. But I think it should not create any problems. Interestingly, plotting the relative improvement reveals some S-curve patterns, with phases of increasing returns followed by phases of diminishing returns.

I also did not manage to beat the baseline by extrapolating the relative improvement times. Interestingly, using a grid to count non-improvements as observations made the extrapolation worse, so this time the best fit was achieved with log linear regression over the last 8 weeks of data in each category.

Log linear extrapolation of relative improvements

As before, the code to replicate my analysis is available here. I haven't had time yet to include logistic models or do analysis of the derivative of the improvements – if you feel so inclined, feel free to reuse my code to perform the analysis yourself, and if you share the results here we can comment on them!

PS: there is a sentence missing an ending in your comment
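A minimal sketch of the kind of log-linear extrapolation described above (this is not Jsevillamol's actual code; the `window_days` and `horizon` parameters are hypothetical choices, with 56 days matching the "last 8 weeks" of data mentioned):

```python
import numpy as np

def loglinear_forecast(days, times, window_days=56, horizon=28):
    """Fit log(record time) ~ a + b * day over the trailing window,
    then extrapolate the record trajectory `horizon` days ahead."""
    days = np.asarray(days, dtype=float)
    times = np.asarray(times, dtype=float)
    # Keep only the trailing window of observations.
    mask = days >= days.max() - window_days
    # polyfit returns [slope, intercept] for degree 1.
    b, a = np.polyfit(days[mask], np.log(times[mask]), 1)
    return float(np.exp(a + b * (days.max() + horizon)))
```

On synthetic exponentially improving record times the fit recovers the decay rate exactly; on real speedrun data the S-curve phases noted above would make the trailing-window choice matter a lot.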

Thanks, I appreciate the concrete examples of untrustworthiness that don't rely on inferences made about reputation. I am specifically concerned about things like this (which seems like a weird and bad direction to take a conversation): https://sinceriously.fyi/net-negative/. It also seems hard to recount falsely without active deception or complete detachment from reality, and I doubt Ziz is completely detached from reality:

They asked if I’d rape their corpse. Part of me insisted this was not going as it was supposed to. But I decided inflicting discomfort

... (read more)
habryka230

Do you have any sense of why Ziz interpreted you as saying that?

I don't know. I think part of the conversation was about some meta-level stuff on when it's just and fair to attack MIRI and other institutions if they do something terrible. I don't think I remember the details, but I might have said something like "I generally think it would be bad to make up outright lies and falsehoods about a thing, and I do think that if someone is very obviously making stuff up, something like a defamation lawsuit might make sense as a kind of last resort, though I am g... (read more)

The article was the first impression I got about Ziz (I live in Germany and never have attended a CFAR workshop) and I would expect that I'm not the only person for which it's true. 

Ah, mea culpa. I saw your other comment about Pasek crashing with you and interpreted it to mean you were pretty close to the Ziz-related part of the community. I'm less hesitant about talking to you now so I'll hop back in.

they are done because the person considers expression of their sexual or gender identity to be a sacred value. Sith robes are not expressions of their

... (read more)

Since you've quoted Ziz out of context, let me finish that quote for you. It is clear that the other half of her (whatever that means) did in fact believe those things and it is clear that this was a recounting of a live-conversation rather than a broad strategy. It is not that weird to not have fully processed the things that you partially believe, live, in the middle of a conversation such that you are surprised by them.

The other half of me was like isn’t it obvious. They are disturbed at me because intense suffering is scary. Because being trans in a wo

... (read more)
9ChristianKl
I didn't say that there were three reasons, I only spoke of one reason being that there's a pattern of behavior. It's about the generator function. The question is about what generator function explains all three events.

We are talking about a person who supposedly follows a coherent decision theory and makes game-theory-backed moves. I wouldn't expect the average LGBTQ+ person to think their actions through in game-theoretic terms. There's also an IQ difference between Ziz and the average human or average LGBTQ+ person, where she's very likely >130 IQ. That means I'm more likely to take a stupid action by a random person as simply a stupid action, but expect Ziz to have a better-thought-out model for why her action makes sense than I would expect for the average person.

Ziz writes about the importance of not following social conventions and preventing herself from value drift. From the inside, I would expect both the Sith robes and the fanfic-villain name to be stoic exercises with the intent of immunizing herself against social conventions affecting her. In the post about Pasek's doom she writes about it being important to be a Gervais-sociopath. Being able to act unconstrained and being able to lie when advantageous is part of being a Gervais-sociopath, and the stoic exercises are a way to train mentally in that direction.

My model would explain most weird (as seen by general society) actions of most LGBTQ+ people as being made because, even when they are costly (certain people think less of them for it), they are done because the person considers expression of their sexual or gender identity to be a sacred value. Sith robes are not expressions of their sexual or gender identity, and thus taking the reputational hit for them shows valuing reputation less.

There's also sometimes weirdness that comes from lack of social skills and not from conscious decisions that aren't directly sexual / gender identity. Choosing a fanfic villain name and wearing Sith robes is however don

the completely unfounded belief that only good-aligned people can cooperate or use game theory and that nongood people will defect on each other too often to defeat her alliance. 

Can you elaborate on why you think this belief is completely unfounded? It seems to me that there are clear asymmetries in coordination capacities of good vs nongood. For example, being more open to the idea of a "Good Person" in power than a "Bad Person" seems like common sense. Similarly, groups of good people are intrinsically value-aligned while teams of bad people are not (each has a distinct selfish motivation) -- and I think value-alignedness increases effectiveness.

Assuming Ziz is being honest, she pulled the stunt at CFAR after she had already been defected against. This does not globally damage her credibility. It does damage her reputation among a) ppl who think they can't defect against her sneakily but plan to try and b) ppl who think she is bad at judging when she's been defected against. I am in neither of those categories so I have no reason to expect Ziz to defect by lying at me.

In contrast, if Ziz was being dishonest, she pulled that stunt for... inscrutable reasons that may or may not be in the web of lies... (read more)

habryka370

Some of my thoughts on Ziz's honesty: 

  • Many of the specific statements related to a bunch of MIRI stuff seem straightforwardly false to me. They might be honest misunderstandings. In many cases Ziz was shown pretty clear and concrete evidence on what happened, but then just decided that their favorite narrative is correct, and is now just repeatedly announcing that narrative as true. I don't know whether I would count this as lying, but like, it definitely includes making many wrong statements, and with a process that seems mostly driven by motivated c
... (read more)
2ChristianKl
It got her a criminal record, which means it will damage her credibility with every person who runs a criminal background check on her. Reading https://www.sfchronicle.com/bayarea/article/Mystery-in-Sonoma-County-after-kidnap-arrests-of-14844155.php is going to make any normal person consider the people to have no credibility, and having an article like that with your legal name, which people can google to find more about you in interactions like applying for a flat, is a heavy reputational cost.

False imprisonment of kids that are innocent bystanders isn't just "handles it inappropriately". None of the LGBTQ+ people I know personally have, to the extent of my knowledge, done something as bad, nor would I expect that to be in their range of possible actions.

As far as Ziz, TDT and falsehoods, she writes herself: So she's open about having threatened to say things that part of her believed to be false as retaliation. From the TDT perspective, actually fulfilling what you threaten seems quite reasonable.

I'm hesitant about saying things here since, to the extent that my epistemics are right, this is a relatively adversarial environment. I think discussing things would reveal things that I know/how I found out about them without many positive effects (I'm also disconnected from the Bay Area Community). After all, if you were confident that Ziz was lying, nothing I know would likely change your mind. Similarly, if you felt like Ziz might be telling the truth, the gravity of the claims probably has more relevance to your actions than the extent to which my info would move the probability.

That being said, DM me and we can chat. I'm also pretty curious about your interactions with Ziz/how she tried to manipulate you. 

Since this post is back up, let's just have the convo here, alright? Don't wanna make things confusing

Per the top post, Ziz never lies (for a reasonable definition of what a lie is). Other than that, I don't think she is lying for four main reasons: 1) her decision theory implies that she isn't, 2) the content of her claims seems plausible to me, 3) her claims don't seem particularly strategically helpful, and 4) I have been able to independently verify some sub-components of her claims

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I jus

... (read more)
-1ChristianKl
The stunt at the CFAR reunion is defection that globally damages her credibility. My response is an example of me estimating her to have low credibility because she's willing to do things like that. Given that her operating decision theory lets her do things like that, I see no reason to expect her not to do other things that damage her credibility as well. Wearing Sith robes and naming herself after a fanfic villain is similar in that it damages reputation among many people and is not a strategy to develop a reputation as someone to be trusted.

Any one of the three things alone suggests her seeing it as okay to take actions that cost credibility. Together they also suggest a strategic decision to not value credibility, maybe because seeking credibility constrains her range of actions. How do you explain those three decisions if you think that she's committed to upholding her credibility?
9[anonymous]
I found those claims disturbing as well, but when I tried to verify them I pretty much hit a brick wall. As someone disconnected from the bay area community and fairly new to the online community, it's very hard for me to dig into this sort of thing. If you have more information than what was talked about on Sinceriously, I'd love to hear about it.

since everything was deleted, I'm reposting my comment below. If my comment doesn't make sense, it's likely that the above document was edited. Below my original comment, I'll post ChristianKl's reply and my response to it.

--------ORIGINAL COMMENT--------------------------------------

So first off, thanks for sharing -- it's really interesting to hear other ppl's experiences with scrupulosity and Ziz's work. That being said... I have a fair amount of criticism wrt your discussion of Ziz

And look, I don’t have a stake in any of that at this point and I’m not in a

... (read more)
2Pattern
'How do you know X isn't lying' is an isolated demand for rigor.

oh, and if you can read this: hive reposted it, so I'm bringing the discussion there

idk if you can read this since the post was deleted but the short answer is that, per the top post, Ziz never lies (for a reasonable definition of what a lie is) and I'm inclined to agree:

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does.

Moreover, if it is true that Ziz's goals are promoting a vegan singularity, then the specific claims she made about transphobia/cover-ups/etc are extremely suboptimal for furthering this goal

1Isnasene
oh, and if you can read this: hive reposted it, so I'm bringing the discussion there

So first off, thanks for sharing -- it's really interesting to hear other ppl's experiences with scrupulosity and Ziz's work. That being said... I have a fair amount of criticism wrt your discussion of Ziz

And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does

Ziz has made a number of specific claims about the rationality community... (read more)

3Slimepriestess
Pattern Replied: 'How do you know X isn't lying' is an isolated demand for rigor. Raven Replied to Pattern: I don't think so, ziz kind of has a reputation as a manipulator and lying tends to go hand in hand with that. It seems like a reasonable question to me.  

Ziz has made a number of specific claims about the rationality community that seem extremely bad to me including (off the top of my head): endemic transphobia in CFAR, sexual misconduct, an attempted cover-up of sexual misconduct endemic (at least at a point) in MIRI. If these occured, they are real concrete events independent of worldview. 

That stuff matters. It mattered enough to me that I've been off this website and un-associated with the rationality community for upwards of a year because I heard about it.

It seems that Ziz has a worldview according to which she's willing to lie when it furthers her goals. Why do you believe her enough at this point?

The trouble here is that deep disagreements aren't often symmetrically held with the same intensity. Consider the following situation:

Say we have Protag and Villain. Villain goes around torturing people and happens upon Protag's brother. Protag's brother is subsequently tortured and killed. Protag is unable to forgive Villain but Villain has nothing personal against Protag. Which of the following is the outcome?

  • Protag says "Villain must not go to Eudaemonia" so neither Protag nor Villain go to Eudaemonia
  • Protag says "Villain must not go to Eudaemonia" so Pr
... (read more)
0hg00
Yep. Good thing a real AI would come up with a much better idea! :)
IsnaseneΩ460

So, silly question that doesn't really address the point of this post (this may very well be just a point of clarity thing but it would be useful for me to have an answer due to earning-to-give related reasons off-topic for this post) --

Here you claim that CDT is a generalization of decision-theories that includes TDT (fair enough!):

Here, "CDT" refers -- very broadly -- to using counterfactuals to evaluate expected value of actions. It need not mean physical-causal counterfactuals. In particular, TDT counts as "a CDT" in this sen
... (read more)
3abramdemski
Ah, yeah, I'll think about how to clear this up. The short answer is that, yes, I slipped up and used CDT in the usual way rather than the broader definition I had set up for the purpose of this post.

On the other hand, I also want to emphasize that EDT two-boxes (and defects in twin PD) much more easily than is commonly supposed. And, thus, to the extent one wants to apply the arguments of this post to TDT, TDT would also. Specifically, an EDT agent can only see something as correlated with its action if that thing has more information about the action than the EDT agent itself. Otherwise, the EDT agent's own knowledge about its action screens off any correlation. This means that in Newcomb with a perfect predictor, EDT one-boxes. But in Newcomb where the predictor is only moderately good, in particular knows as much or less than the agent, EDT two-boxes. So, similarly, TDT must two-box in these situations, or be vulnerable to the Dutch Book argument of this post.
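The screening-off point can be illustrated with a toy calculation (the probabilities and payoffs here are hypothetical illustrations, not a formal model of EDT):

```python
def edt_expected_values(p_fill_if_onebox, p_fill_if_twobox,
                        prize=1_000_000, bonus=1_000):
    """EDT ranks actions by expected utility conditional on the action."""
    ev_onebox = p_fill_if_onebox * prize
    ev_twobox = p_fill_if_twobox * prize + bonus
    return ev_onebox, ev_twobox

# Strong predictor: conditioning on the action shifts the probability
# that the opaque box is filled, so one-boxing wins.
strong = edt_expected_values(0.99, 0.01)    # (990000.0, 11000.0)

# Predictor knows no more than the agent: the agent's own knowledge
# screens off the correlation, both actions condition to the same
# fill probability, and two-boxing dominates by exactly the bonus.
screened = edt_expected_values(0.5, 0.5)    # (500000.0, 501000.0)
```

The crossover behavior is the whole point: once conditioning on the action no longer moves the fill probability, the extra thousand dollars decides the matter.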
IsnaseneΩ110

Thanks! This is great.

IsnaseneΩ8150
A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read, "I am a shape-shifter. I can't change the world. I can only change myself."

-- GPT-3 generated news article humans found easiest to distinguish from the real deal.

... I haven't read the paper in detail but we may have done it; we may be on the verge of superhuman skill at absurdist comedy! That's not even completely a joke. Look at the sentence "I am a shape-shifter. I c... (read more)

9William_S
Google's Meena (2.6 billion parameters, February 2020) creates original puns in "Conversation D", and I think "Cross-turn Repetition Example 2" is absurdist comedy, but maybe more as a result of the way the model fails.
I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us.

This is probably the crux of our disagreement. If an AI is indeed powerful enough to wrest power from humanity, the catastrophic convergence conjecture implies that it by default will. And if the AI is indeed powerful enough to wrest power from humanity, I have difficulty envisioning things we could offer it in trade that it... (read more)

Yeah I don't do it for mainly selfish reasons but I agree that there are a lot of benefits to separating arguments into multiple comments in terms of improving readability and structure. Frankly, I commend you for doing it (and I'm particularly amenable to it because I like bullet-points). With that said, here are some reasons you shouldn't take too seriously for why I don't:

Selfish Reasons:

  • It's straightforwardly easier -- I tend to write my comments with a sense of flow. It feels more natural for me to type from start to finish an
... (read more)
3lc
And this is why I think people don't naturally do it this way. Lots of arguments have a "common body" of thought that it gets repetitive to include with each comment. Even when they don't, people tend to just not think of arguments as "graphs" of justifications. They think of them like a serial back and forth of people on a podium giving speeches and engaging in "rhetorical battle", and it's more fun and engaging to write them that way on the internet.
Answer by Isnasene10

Nice post! The moof scenario reminds me somewhat of Paul Christiano's slow take-off scenario which you might enjoy reading about. This is basically my stance as well.

AI boxing is actually very easy for Hardware Bound AI. You put the AI inside of an air-gapped firewall and make sure it doesn't have enough compute power to invent some novel form of transmission that isn't known to all of science. Since there is a considerable computational gap between useful AI and "all of science", you can do quite a bit with an AI in a box
... (read more)
1Logan Zoellner
Agree. My point was that boxing a human-level AI is in principle easy (especially if that AI exists on a special-purpose device of which there is only one in the world), but in practice someone somewhere is going to unbox AI before it is even developed.

I think there's a connection between these two things, but probably I haven't made it terribly clear. The reason I talked about economic interactions is because they're the best framework we currently have for describing positive-sum interactions between entities with vastly different levels of power. I am certain that my bank knows much more about finance than I do. Likewise, my insurance company knows much more about insurance than I do. And my ISP probably knows more about networking than I do (although sometimes I wonder). If any of these entities wanted to totally screw me over at any point, they probably could. The reason I am able to successfully interact with them is not because they fear my retaliation or share my worldviews, but because they exist in a wider economy in which maintaining their reputation is valuable, because it allows them to engage in positive-sum trades in the future. Note that the degree to which this is true varies widely across time and space. People who are socially outcast in countries with poor rule of law cannot trust the bank.

I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us. The reason I called this post the "China alignment problem" is because the same techniques we might use to interact with China (a potentially economically powerful agent with an alien or even hostile worldview) are the same ones I think we should be using to align our interactions with AI. Our chances of changing China's (or AIs) worldview to match our own are fairly slim, but our ability to ensure their "peaceful rise" is m

Admittedly the first time I read this I was confused because you went "When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world." This gave the sense that the issue was with the model of the world and not the world itself. This isn't what you meant but I made a list of reasons talking is a thing people do anyway:

  • When you become more vulnerable and the world is less predictable, the support systems you have for handling those things which were created in a more
... (read more)
Applying these systems to the kind of choices that I make in everyday life I can see all of them basically saying something like:...

The tricky thing with these kinds of ethical examples is that a bunch of selfish (read: amoral) people would totally take care of their bodies, be nice to those they're in iterated games with, try to improve themselves in their professional lives, and seek long-term relationship value. The only unambiguously selfless thing on that list in my opinion is donating -- and that tends to kick the question of ethics down the road to t... (read more)

Nah. Based on my interaction with humans who work from home, most aren't really that invested in the whole "support the paperclip factories" thing -- as evidenced by their willingness to chill out now that they're away from offices and can do it without being yelled at (sorry humans! forgive me for revealing your secrets!). Nearly half of Americans live paycheck to paycheck so (on the margin), Covid19 is absolutely catastrophic for the financial well-being (read: self-agency) of many people which propagates into the long-term via wage s... (read more)

I think the brief era of me looking at Kinsa weathermap data has ended for now. My best guess is that covid spread among Kinsa users has been almost completely mitigated by the lockdown and current estimates of r0 are being driven almost exclusively by other demographics. Otherwise, the data doesn't really line up:

  • As of now, Kinsa reports 0% ill for the United States (this is likely just a matter of misleading rounding: New York county has 0.73% ill)
  • New York's trend is a much more aggressive drop than what would be anticipated by Cuomo's
... (read more)

On the practical side, figuring out the -u0 penalty for non-humans is extremely important for those adopting this sort of ethical system. Animals that produce lots of offspring that rarely survive to adulthood would rack up -u0 penalties extremely quickly while barely living long enough to offset those penalties with hedonic utility. This happens at a large enough of scale that, if -u0 is non-negligible, wild animal reproduction might be the most dominant source of disutility by many orders of magnitude.

When I try to think about how to define -u0 for non-h... (read more)
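The scale argument above can be made concrete with back-of-envelope arithmetic (every number here is hypothetical, chosen only to show how a fixed -u0 penalty dominates for species with many short-lived offspring):

```python
def cohort_net_utility(n_births, mean_days_lived, hedonic_per_day, u0):
    # Each birth pays the fixed existence penalty -u0 up front;
    # hedonic utility accrues only over days actually lived.
    return n_births * (mean_days_lived * hedonic_per_day - u0)

# r-selected species: huge numbers of offspring, most die within days,
# so the penalties swamp the accumulated hedonic utility.
wild_cohort = cohort_net_utility(1000, 5, 1.0, 50)      # -45000.0
# Long-lived species: few offspring, long lives offset the penalty.
long_lived = cohort_net_utility(2, 25_000, 1.0, 50)     # 49900.0
```

Under these made-up parameters a single wild cohort is net negative by orders of magnitude more than a long-lived pair is net positive per birth, which is the worry about wild-animal reproduction dominating the ledger whenever -u0 is non-negligible.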

1Ghatanathoah
I took a crack at figuring it out here. I basically take a similar approach to you. I give animals a smaller -u0 penalty if they are less self-aware and less capable of forming the sort of complex eudaimonic preferences that human beings can. I also treat complex eudaimonic preferences as generating greater moral value when satisfied in order to avoid incentivizing creating animals over creating humans.

Yeah, my impression is that viewing the Unilateralist's Curse as something bad mostly relies on the assumption that everyone is taking actions based on the common good. From the paper,

Suppose that each agent decides whether or not to undertake X on the basis of her own independent judgement of the value of X, where the value of X is assumed to be independent of who undertakes X, and is supposed to be determined by the contribution of X to the common good...

That is to say-- if each agent is not deciding to undertake X on the basis of the common good, perhaps ... (read more)

Thanks for confirming. For what it's worth, I can envision your experience being a somewhat frequent one (and I think it's probably actually more common among rationalists than the average Joe). It's somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world, give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions simply because I point out that they can. There's no specific ethical sub-agent and specific selfi... (read more)

That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.)

Ah, understandable. I felt a similar way back when I was doing materials engineering -- and I admit I put a lot of work into figuring out how to connect my research with doing ... (read more)

I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class"

If you move your zero-point to reflect world-trajectory based on a random person in your reference class, it creates incentives to view the average person in your reference class as less altruistic than they truly are and to unconsciously normalize bad behavior in that class.

8orthonormal
That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.) Impostor syndrome makes this doubly bad, if the people in one's reference class who are struggling don't make that fact visible. There are two opposite pieces of advice here, and I don't know how to tell people which is true for them- if anything, I think they might gravitate to the wrong piece of advice, since they're already biased in that direction.
It's also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives positive reinforcement to helpful actions, rather than punishing oneself for any departure from helpful actions.

I just want to point out that, while two utility functions that differ only in zero point produce the same outcomes, a single utility function with a dynamically moving zero-point does not. If I just pushed the world into the positive yesterday, why do I have to do it again today? The human brain is more clever than that and, to successfully get away with it, you'd have to be using some really nonstandard utilitarianism.
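The invariance claim for a fixed zero point is easy to check directly: shifting every outcome's utility by a constant never changes which action comes out on top. A minimal sketch (the action names and utility values are hypothetical):

```python
# Shifting a utility function by a constant leaves the ranking of actions
# unchanged; a static "zero point" matters psychologically, not decision-wise.
actions = {"donate": 5.0, "volunteer": 3.0, "do_nothing": 0.0}

best = max(actions, key=actions.get)

for shift in (-10.0, 0.0, 7.5):  # three different choices of zero point
    shifted = {a: u + shift for a, u in actions.items()}
    assert max(shifted, key=shifted.get) == best  # same choice every time
```

A zero point that moves in response to your own actions, by contrast, defines a different utility function at each time step, so this invariance argument no longer applies to it.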

4orthonormal
Of course you shouldn't plan to reset the zero point after actions! That's very different. I use this sparingly, for observing big new facts that I didn't cause to be true. That doesn't change the relative expected utilities of various actions, so long as my expected change in utility from future observations is zero.

Huh... I think the crux of our differences here is that I don't view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior -- I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness over a) performing a trick like donating which isn't really that unpleasant in-itself or b) directly resolving the emotional pain in a way that doesn't modify the ethical framework/ultimate actions really ... (read more)

4orthonormal
I think the self is not especially unified in practice for most people- the elephant and the rider, as it were. (Even the elephant can have something like subagents.) That's not quite true, but it's more true than the idea of a human as a unitary agent. I'm mostly selfish and partly altruistic, and the altruistic part is working hard to make sure that its negotiated portion of the attention/energy/resource budget doesn't go to waste. Part of that is strategizing about how to make the other parts come along for the ride more willingly. Reframing things to myself, in ways that don't change the truth value but do change the emphasis, is very useful. Other parts of me don't necessarily speak logic, but they do speak metaphor. I agree that you and I experience the world very differently, and I assert that my experience is the more common one, even among rationalists.
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)

I share your problem with purity ethics... I almost agree with this? Frankly, I have some issue with using the claim "a utilitarian with a different zero-point/bare-standard of decency has the same utility function so feel free to move yours!" and juxtaposing it wi... (read more)

Thank you for confirming. I wanted to be sure I wasn't putting words in your mouth.

I think I just have a very different model than you of what most people tend to do when they're constantly horrified by their own actions.

I'm sorry about the animal welfare relevance of this analogy, but it's the best one I have:

The difference between positive reinforcement and punishment is staggering; you can train a circus animal to do complex tricks using either method, but only under the positive reinforcement method will the animal voluntarily engag... (read more)

Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening.

Load-bearing horror != constant anguish. There are ways to have an intuitively low zero point measure of the world that don't lead to constant anguish. Other than that, I agree with you -- constant anguish is bad. The extent of my ethics-related anguish is probably more along the lines of 2-3 hour blocks of periodic frustration that happen every coup... (read more)

As an animal-welfare lacto-vegetarian who's seen a fair number of arguments along these lines, they don't really do it for me. In my experience, it's not really possible to separate human peace of mind from the actions you take (the former reflects an ethical framework, the latter reflect strategies, and together they form an aesthetic feedback loop). To be explicit:

  • I don't think my moral zero-point was ever up for grabs. Moreover, it wasn't "the world I interact with every day." It was driven by an internal sense of w
... (read more)

(Splitting replies on different parts into different subthreads.)

The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)

For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their... (read more)

8orthonormal
(Splitting replies on different parts into different subthreads.) Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening. That could be true for you, but it seems counter to the way most people work. Constant anguish tends not to motivate, it instead leads to psychological collapse, or to frantic measures when patience would achieve more, or to protected beliefs that resist challenge in any small part.
4orthonormal
(Splitting replies on different parts into different subthreads.) One part of this helped me recognize an important emendation: if many bad things are continuing to happen, then a zero point of "how things are right now" will still lead you inexorably into the negatives. I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class" as my reference point, but I didn't crystallize that and put it in my post. Thank you, I'll add that in.

Thanks for pointing this out. Having recently looked at Ohio County KY, I think this is correct. %ill there maxed out at above 1% of the typical range but has since dropped below 0.4% of the typical range and started rising again (which is notable in contrast with seasonal trends) [Edit to point out that this is true for many counties in the Kentucky/Tennessee area]. This basically demonstrates that having a reported %ill now that is lower than previously in the Kinsa database is insufficient to show r0<1. Probably best to stick with the prior of containment failure.

"I only care about animal rights because animals are alive"
1. Imagine seeing someone take a sledgehammer to a beautiful statue. How do you feel?
2. Someone swats a mosquito. How do you feel?

In this context, I think the word rights is doing a lot of work that your question is not capturing. While seeing someone destroy a beautiful statue would feel worse than seeing someone swat a mosquito, this in no way indicates that I care about "statue rights." I acknowledge that the word rights is kind of fuzzy but here's my interpretation:

I f... (read more)

1Slider
Destroying a multi-century church by dropping artillery shells on it can be seen as a serious norm violation, even one where risking human lives to lessen it can be expected. One could also compare breaking a live leg vs breaking a wooden leg. Does it matter if the subject is alive? The role of serving life isn't necessarily connected to being alive in itself. One could argue that the level that a particular gas is a greenhouse gas could be seen as a kind of moral role giving / determination.

I've been playing with the Kinsa Health weathermap data to get a sense of how effective US lockdowns have been at reducing US fever. The main thing I am interested in is the question of whether lockdown has reduced coronavirus's r0 below 1 (stopping the spread) or not (reducing spread-rate but not stopping it). I've seen evidence that Spain's complete lockdown has not worked so my expectation is that this is probably the case here. Also, Kinsa's data has two important caveats:

  • People who own smart thermometers are more likely to be
... (read more)
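The reason r0 = 1 is the threshold worth looking for can be sketched with a toy discrete-generation model. This is only an illustration of the threshold, not a model of the Kinsa data (initial case count and generation count are arbitrary):

```python
# Toy discrete-generation spread model: each generation, every case
# produces r0 new cases on average. r0 > 1 grows; r0 < 1 dies out.

def case_trajectory(initial_cases, r0, generations):
    """Case counts per generation under a constant reproduction number r0."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r0)
    return cases

# Lockdown reduces spread-rate but not below threshold: cases still grow.
growing = case_trajectory(100, 1.3, 8)
# Lockdown pushes r0 below 1: cases shrink toward zero.
shrinking = case_trajectory(100, 0.8, 8)
```

This is why "r0 reduced but still above 1" and "r0 below 1" are qualitatively different regimes even though both look like a slowdown at first.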

The Kinsa data is barely even weak evidence in favor of R0 < 1. The downward trend in fever readings is confounded, likely severely, by their thermometers having to be actively used vs. being a passive wearable. It seems plausible that more people will check their temperature when they are concerned about COVID-19, and since most people are healthy this will spuriously drive average fever readings down. Plausibly the timing of increased thermometer use will coincide somewhat with shelter-in-place orders since they correlate with severity & awarenes... (read more)

[This comment is no longer endorsed by its author]

Fair enough. When I was thinking about "broad covid risk", I was referring more to geographical breadth -- something more along the lines of "is this gonna be a big uncontained pandemic" than "is coronavirus a bad thing to get." I grant that the latter could have been a valid consideration (after all, it was with H1N1) and that claiming that it makes "no implication" about broader covid risk was a mis-statement on my part.

That being said, I wouldn't really consider it an alarm bell (and when I read it, it wasn't... (read more)

While I agree with the specific claims this post is making (i.e. "Less Wrong provided information about coronavirus risk similar to or just-lagging the stock market"), I think it misses the thing that matters. We're a rationality forum, not a superintelligent stock-market-beating cohort[1]! Compared to the typical human's response to coronavirus, we've done pretty well at recognizing the dangers posed by the exponential spread of pandemics and acting accordingly. Compared to the very smart people who make money by predicting the ec... (read more)

The question in this post is "was Less Wrong a good alarm bell" and in my opinion only one of those links constitutes an alarm bell -- the one on EAForums. Acknowledging/discussing the existence of the coronavirus is vastly different from acknowledging/discussing the risk of the coronavirus.

  • "Will ncov survivors suffer lasting disability at a high rate?" is a medical question that makes no implication about broader covid risk.
  • "Some quick notes on hand hygiene" does not mention the coronavirus in the main post (but to be fair does h
... (read more)
Vaniver160

"Will ncov survivors suffer lasting disability at a high rate?" is a medical question that makes no implication about broader covid risk.

This seems wrong to me, in part because the hypothesis that there could be widespread negative effects even for survivors was a compelling reason for 1) me to take it seriously (at the time, I estimated my disability risk was something like 5x the importance of my mortality risk) and 2) people to expect spread to be bad in a way that shows up in many indicators (like GDP).

[Epistemic Status: It's easy to be fooled by randomness in the coronavirus data but the data and narrative below make sense to me. Overall, I'm about 70% confident in the actual claim. ]

Iran's recent worldometer data serves as a case study demonstrating the relationship between sufficient testing and case-fatality rate. After a 16-day plateau (Mar 06-22) in daily new cases, which may have seemed reassuring, we've seen five days (Mar 24-28) of roughly linear rise. We could anticipate this by noticing that in a similar time frame (Mar 07-19),... (read more)
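The testing/case-fatality relationship invoked here is mechanical: the naive case-fatality rate is deaths divided by confirmed cases, so undertesting inflates it. A sketch with entirely invented numbers (no resemblance to Iran's actual figures is intended):

```python
# Naive CFR = deaths / confirmed cases. When testing only confirms a
# fraction of true infections, the same outbreak looks far deadlier.

def naive_cfr(deaths, confirmed_cases):
    """Case-fatality rate as usually reported: deaths / confirmed cases."""
    return deaths / confirmed_cases

deaths = 1_000
true_infections = 100_000  # hypothetical

well_tested = naive_cfr(deaths, true_infections)              # all infections confirmed
under_tested = naive_cfr(deaths, int(true_infections * 0.2))  # only 20% confirmed

assert under_tested > well_tested  # undertesting alone inflates the apparent CFR
```

This is why a rising case-fatality rate can signal a testing shortfall rather than a deadlier virus, and why a testing ramp-up can reveal previously hidden spread.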

I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.

Ah, yeah I agree with this observation -- and it could be good to just assume things add up to normality as a general defense against people rapidly ta... (read more)

3orthonormal
Huzzah, convergence! I appreciate the points you've made.
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.

Yeah, but my point is not about catastrophic risk -- it's about the risk/reward trade-off in general. You can have risk>reward in scenarios that aren't catastrophic. Catastrophic risk is just a good general example of where things don't add up to normality (catastrophic risks by nature correspond to not-normal scenarios and also coincide with high risk). Don't promise yourself to steer the p... (read more)

6orthonormal
Don't know if you saw, but I updated the post yesterday because of your (and khafra's) points. Also, your caveat is a good reframe of the main mechanism behind the post. I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans. In the absence of an external crisis, taking relatively safe actions (and few irreversible actions) is correct in the short term, and the status quo is going to be reasonably safe for most people if you've been living it for years. If you can back off from newly-suspected-wrong activities for the time being without doing so irreversibly, then yes that's better.
Isnasene*140

I think the strongest version of this idea of adding up to normality is "new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations." Therefore, when one's actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen -- regardless of any new information. But this strict version of 'adding up to normality' does not apply in situations where one's actions are contingent on unobservables. ... (read more)

4orthonormal
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before. Obviously it applies if you're the lead on a new technological project and suddenly realize a plausible catastrophic risk from it. I don't think it applies very strongly in your example about animal welfare, unless the protagonist has unusually high leverage on a big decision about to be made. The cost of continuing to stay in the old job for a few weeks while thinking things over (especially if leaving and then coming back would be infeasible) is plausibly worth the value of information thus gained.

I shared this post with some of my friends and they pointed out that, as of 3/21/2020, the Italy and Spain curves no longer look as optimistic:

  • On March 16, cases in Italy appeared to be leveling off. Immediately following that, they broke trend and began rising again. March 16 had ~3200 daily cases. March 20 has ~6000.
  • Spain appeared to be leveling off up through March 17th (~1900 daily cases). But on March 18th, it spiked to ~3000. As of March 20th, things may be leveling off again but I wouldn't draw any conclusions
  • Iran's daily cases have sta
... (read more)
4Shmi
Yeah, that was optimistic, apparently. Woeful underreporting everywhere except Korea and less so Germany and Canada.
To me that nudges things somewhat, but isn't a game changer. I don't think it makes it 10x less bad or anything.

Fair enough. As a leaning-utilitarian, I personally share your intuition that it isn't 10x as bad (if I had to choose between coronavirus and ending the negative consequences of lifestyle factors for one year, I don't have a strong intuition in favor of coronavirus). Psychologically speaking, from the perspective of the average deontological Joe, I think that it (in some sense) is/feels 10x as bad.

Is that really a possibility? I imagin
... (read more)
2Adam Zerner
That's a great point. I got caught up thinking about how (I think) people should respond as opposed to thinking about how it'll actually play out in practice. That moves me a few more steps towards thinking that it is more harmful.
Load More