If one accepts Eliezer Yudkowsky's view on consciousness, the complexity of suffering in particular is largely irrelevant. The claim "qualia requires reflectivity" implies that all qualia require reflectivity, including qualia like "what is the color red like?" and "how do smooth and rough surfaces feel different?" These experiences seem to arise from vastly different evolutionary pressures that are largely unrelated to social accounting.
If you find the question of whether suffering in particular is sufficiently complex that it exists ...
Forgive me if I engage with only part of this; I believe the OP already acknowledges most of the problem you've described.
No forgiveness needed! I agree that the OP addresses this portion -- I read the OP somewhat quickly the first time and didn't fully process that part of it. And, as I've said, I do appreciate the thought you've put into all this.
I think I differ from the text of the OP in that social shaming / lack of a protest method in rituals is often an okay and sensible thing. It is only when this property is combined with a serious problem with the...
You seem to have put a lot of thought into this ritual and I appreciate the consideration you, Ben, and others are giving it. Anyway, here's some raw unfiltered (potentially overly-harsh) criticism/commentary on Petrov Day -- take what you need from it:
In addition to Lethriloth's criticism that LW Petrov Day fails to match the incentives/dynamics associated with Petrov (an important consideration, given the weight the LW canon places on incentives), it is also important to consider that Community Rituals may serve ends wildly disp...
This is cool! I like speedrunning! There's definitely a connection between speedrunning and AI optimization/misalignment (see When Bots Teach Themselves to Cheat, for example). Some specific suggestions:
Thanks, I appreciate the concrete examples of untrustworthiness that don't rely on inferences made about reputation. I am specifically concerned about things like this, which seems like a weird and bad direction to take a conversation (https://sinceriously.fyi/net-negative/). It also seems hard to recount falsely without active deception or complete detachment from reality, and I doubt Ziz is completely detached from reality:
...They asked if I’d rape their corpse. Part of me insisted this was not going as it was supposed to. But I decided inflicting discomfort
Do you have any sense of why Ziz interpreted you as saying that?
I don't know. I think part of the conversation was about some meta-level stuff on when it's just and fair to attack MIRI and other institutions if they do something terrible. I don't think I remember the details, but I might have said something like "I generally think it would be bad to make up outright lies and falsehoods about a thing, and I do think that if someone is very obviously making stuff up, something like a defamation lawsuit might make sense as a kind of last resort, though I am g...
The article was the first impression I got of Ziz (I live in Germany and have never attended a CFAR workshop), and I would expect that I'm not the only person for whom that's true.
Ah, mea culpa. I saw your other comment about Pasek crashing with you and interpreted it to mean you were pretty close to the Ziz-related part of the community. I'm less hesitant about talking to you now, so I'll hop back in.
...they are done because the person considers expression of their sexual or gender identity to be a sacred value. Sith robes are not expressions of their
Since you've quoted Ziz out of context, let me finish that quote for you. It is clear that the other half of her (whatever that means) did in fact believe those things, and it is clear that this was a recounting of a live conversation rather than a broad strategy. It is not that weird to be surprised, in the middle of a conversation, by things you partially believe but haven't yet fully processed.
...The other half of me was like isn’t it obvious. They are disturbed at me because intense suffering is scary. Because being trans in a wo
the completely unfounded belief that only good-aligned people can cooperate or use game theory and that nongood people will defect on each other too often to defeat her alliance.
Can you elaborate on why you think this belief is completely unfounded? It seems to me that there are clear asymmetries in coordination capacities of good vs nongood. For example, being more open to the idea of a "Good Person" in power than a "Bad Person" seems like common sense. Similarly, groups of good people are intrinsically value-aligned while teams of bad people are not (each has a distinct selfish motivation) -- and I think value-alignedness increases effectiveness.
Assuming Ziz is being honest, she pulled the stunt at CFAR after she had already been defected against. This does not globally damage her credibility. It does damage her reputation among a) ppl who think they can't defect against her sneakily but plan to try, and b) ppl who think she is bad at judging when she's been defected against. I am in neither of those categories, so I have no reason to expect Ziz to defect by lying to me.
In contrast, if Ziz was being dishonest, she pulled that stunt for... inscrutable reasons that may or may not be in the web of lies...
Some of my thoughts on Ziz's honesty:
I'm hesitant about saying things here since, to the extent that my epistemics are right, this is a relatively adversarial environment. I think discussing things would reveal what I know and how I found out about it, without many positive effects (I'm also disconnected from the Bay Area community). After all, if you were confident that Ziz was lying, nothing I know would likely change your mind. Similarly, if you felt like Ziz might be telling the truth, the gravity of the claims probably has more relevance to your actions than the extent to which my info would move the probability.
That being said, DM me and we can chat. I'm also pretty curious about your interactions with Ziz/how she tried to manipulate you.
Since this post is back up, let's just have the convo here, alright? Don't wanna make things confusing.
Per the top post, Ziz never lies (for a reasonable definition of what a lie is). Other than that, I don't think she is lying, for four main reasons: 1) her decision theory implies that she isn't, 2) the content of her claims seems plausible to me, 3) her claims don't seem particularly strategically helpful, and 4) I have been able to independently verify some sub-components of her claims.
...And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I jus
Since everything was deleted, I'm reposting my comment below. If my comment doesn't make sense, it's likely that the above document was edited. Below my original comment, I'll post ChristianKl's reply and my response to it.
--------ORIGINAL COMMENT--------------------------------------
So first off, thanks for sharing -- it's really interesting to hear other ppl's experiences with scrupulosity and Ziz's work. That being said... I have a fair amount of criticism wrt your discussion of Ziz
...And look, I don’t have a stake in any of that at this point and I’m not in a
oh, and if you can read this: hive reposted it, so feel free -- I'm bringing the discussion there
idk if you can read this since the post was deleted but the short answer is that, per the top post, Ziz never lies (for a reasonable definition of what a lie is) and I'm inclined to agree:
And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does.
Moreover, if it is true that Ziz's goal is promoting a vegan singularity, then the specific claims she made about transphobia/cover-ups/etc. are extremely suboptimal for furthering that goal
So first off, thanks for sharing -- it's really interesting to hear other ppl's experiences with scrupulosity and Ziz's work. That being said... I have a fair amount of criticism wrt your discussion of Ziz
And look, I don’t have a stake in any of that at this point and I’m not in a position to judge, but I don’t think she’s lying. I don’t think she ever lies, I just think she’s speaking from within her own worldview, the same way that she always does, the same way that everyone always does
Ziz has made a number of specific claims about the rationality community that seem extremely bad to me, including (off the top of my head): endemic transphobia in CFAR, sexual misconduct endemic (at least at a point) in MIRI, and an attempted cover-up of that misconduct. If these occurred, they are real, concrete events independent of worldview.
That stuff matters. It mattered enough to me that I've been off this website and un-associated with the rationality community for upwards of a year because I heard about it.
It seems that Ziz has a worldview according to which she's willing to lie when it furthers her goals. Why do you believe her enough at this point?
The trouble here is that deep disagreements aren't often symmetrically held with the same intensity. Consider the following situation:
Say we have Protag and Villain. Villain goes around torturing people and happens upon Protag's brother. Protag's brother is subsequently tortured and killed. Protag is unable to forgive Villain but Villain has nothing personal against Protag. Which of the following is the outcome?
So, silly question that doesn't really address the point of this post (this may very well just be a point-of-clarity thing, but having an answer would be useful to me for earning-to-give-related reasons off-topic for this post) --
Here you claim that CDT is a generalization of decision-theories that includes TDT (fair enough!):
Here, "CDT" refers -- very broadly -- to using counterfactuals to evaluate expected value of actions. It need not mean physical-causal counterfactuals. In particular, TDT counts as "a CDT" in this sen...
Thanks! This is great.
A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read, "I am a shape-shifter. I can't change the world. I can only change myself."
-- the GPT-3-generated news article that humans found easiest to distinguish from the real deal.
... I haven't read the paper in detail but we may have done it; we may be on the verge of superhuman skill at absurdist comedy! That's not even completely a joke. Look at the sentence "I am a shape-shifter. I c...
I propose that we ought to have less faith in our ability to control AI or its worldview and place more effort into making sure that potential AIs exist in a sociopolitical environment where it is to their benefit not to destroy us.
This is probably the crux of our disagreement. If an AI is indeed powerful enough to wrest power from humanity, the catastrophic convergence conjecture implies that it by default will. And if the AI is indeed powerful enough to wrest power from humanity, I have difficulty envisioning things we could offer it in trade that it...
Yeah, I don't do it, mainly for selfish reasons, but I agree that there are a lot of benefits to separating arguments into multiple comments in terms of improving readability and structure. Frankly, I commend you for doing it (and I'm particularly amenable to it because I like bullet points). With that said, here are some reasons, which you shouldn't take too seriously, for why I don't:
Selfish Reasons:
Nice post! The moof scenario reminds me somewhat of Paul Christiano's slow take-off scenario which you might enjoy reading about. This is basically my stance as well.
AI boxing is actually very easy for Hardware Bound AI. You put the AI inside of an air-gapped firewall and make sure it doesn't have enough compute power to invent some novel form of transmission that isn't known to all of science. Since there is a considerable computational gap between useful AI and "all of science", you can do quite a bit with an AI in a box...
Admittedly, the first time I read this I was confused, because you wrote "When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world." This gave the sense that the issue was with the model of the world and not the world itself. That isn't what you meant, but I made a list of reasons talking is a thing people do anyway:
Applying these systems to the kind of choices that I make in everyday life I can see all of them basically saying something like:...
The tricky thing with these kinds of ethical examples is that a bunch of selfish (read: amoral) people would totally take care of their bodies, be nice to people they're in iterated games with, try to improve themselves in their professional lives, and seek long-term relationship value. The only unambiguously selfless thing on that list, in my opinion, is donating -- and that tends to kick the question of ethics down the road to t...
Nah. Based on my interaction with humans who work from home, most aren't really that invested in the whole "support the paperclip factories" thing -- as evidenced by their willingness to chill out now that they're away from offices and can do it without being yelled at (sorry humans! forgive me for revealing your secrets!). Nearly half of Americans live paycheck to paycheck so (on the margin), Covid19 is absolutely catastrophic for the financial well-being (read: self-agency) of many people which propagates into the long-term via wage s...
I think the brief era of me looking at Kinsa weathermap data has ended for now. My best guess is that covid spread among Kinsa users has been almost completely mitigated by the lockdown and current estimates of r0 are being driven almost exclusively by other demographics. Otherwise, the data doesn't really line up:
...On the practical side, figuring out the -u0 penalty for non-humans is extremely important for those adopting this sort of ethical system. Animals that produce lots of offspring that rarely survive to adulthood would rack up -u0 penalties extremely quickly while barely living long enough to offset those penalties with hedonic utility. This happens at a large enough scale that, if -u0 is non-negligible, wild animal reproduction might be the dominant source of disutility by many orders of magnitude.
When I try to think about how to define -u0 for non-h...
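A toy back-of-the-envelope to make the scale point concrete (every number below is a hypothetical placeholder I'm making up for illustration, not an estimate from the quoted comment):

```python
# Each birth pays the fixed -u0 penalty, but only the rare survivor lives long
# enough to accrue hedonic utility. All values are assumed, illustrative units.
u0 = 1.0                       # penalty per life started (assumed unit)
offspring_per_parent = 500     # a highly r-selected species (hypothetical)
p_survive_to_adulthood = 0.01  # fraction of offspring reaching adulthood
adult_lifetime_hedonic = 5.0   # hedonic utility of one surviving adult (assumed)

net = offspring_per_parent * (p_survive_to_adulthood * adult_lifetime_hedonic - u0)
print(net)  # 500 * (0.05 - 1.0) = -475.0: the -u0 penalties swamp the hedonic gains
```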
Yeah, my impression is that the Unilateralist's Curse being something bad mostly relies on the assumption that everyone is taking actions based on the common good. From the paper:
Suppose that each agent decides whether or not to undertake X on the basis of her own independent judgement of the value of X, where the value of X is assumed to be independent of who undertakes X, and is supposed to be determined by the contribution of X to the common good...
That is to say-- if each agent is not deciding to undertake X on the basis of the common good, perhaps ...
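For what it's worth, the quoted setup is easy to simulate. Here's a minimal Monte Carlo sketch (parameters are assumptions of mine, chosen only to illustrate the mechanism): N agents each form a noisy independent estimate of X's value and undertake X unilaterally whenever their own estimate looks positive.

```python
# Even when X's true value is negative, the chance that SOMEONE undertakes it
# grows with the number of independent agents -- the Unilateralist's Curse.
import random

def p_x_undertaken(n_agents, true_value=-1.0, noise_sd=2.0, trials=10_000):
    hits = 0
    for _ in range(trials):
        # X happens if ANY single agent's noisy estimate of its value is positive
        if any(random.gauss(true_value, noise_sd) > 0 for _ in range(n_agents)):
            hits += 1
    return hits / trials

for n in (1, 5, 20):
    print(n, p_x_undertaken(n))  # probability climbs toward 1 as n grows
```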
Thanks for confirming. For what it's worth, I can envision your experience being a somewhat frequent one (and I think it's probably more common among rationalists than among the average Joe). It's somewhat surprising to me because I interact with a lot of (non-rationalist) people who express very low zero-points for the world and give altruism very little attention, yet can often be nudged into taking pretty significant ethical actions almost just because I point out that they can. There's no specific ethical sub-agent and specific selfi...
That's a good point. On the other hand, many people make their reference class the most impressive one they belong to rather than the least impressive one. (At least I did, when I was in academia; I may have been excellent in mathematics within many sets of people, but among the reference class "math faculty at a good institution" I was struggling to feel okay.)
Ah, understandable. I felt a similar way back when I was doing materials engineering -- and I admit I put a lot of work into figuring out how to connect my research with doing ...
I was intuitively thinking of "the expected trajectory of the world if I were instead a random person from my reference class"
If you move your zero-point to reflect world-trajectory based on a random person in your reference class, it creates incentives to view the average person in your reference class as less altruistic than they truly are and to unconsciously normalize bad behavior in that class.
It's also the reason why I want people to reset their zero point such that helpful actions do in fact feel like they push the world into the positive. That gives a positive reinforcement to helpful actions, rather than punishing oneself from any departure from helpful actions.
I just want to point out that, while two utility functions that differ only in zero point produce the same outcomes, a single utility function with a dynamically moving zero-point does not. If I just pushed the world into the positive yesterday, why do I have to do it again today? The human brain is more clever than that and, to successfully get away with it, you'd have to be using some really nonstandard utilitarianism.
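A minimal sketch of that distinction (the numbers are illustrative, not anyone's actual utilities): shifting every utility by a constant never changes which action wins, but a zero-point that ratchets up after each good deed changes the sign of the very same action -- and the sign (does this feel like a positive push?) is what the moving-zero-point framing leans on.

```python
# Static shift: argmax over actions is invariant under adding any constant.
world_value = {"donate": 10.0, "do_nothing": 0.0}

def chosen(utility, shift=0.0):
    return max(utility, key=lambda a: utility[a] + shift)

assert chosen(world_value, 0.0) == chosen(world_value, -1e6)

# Dynamic zero-point: yesterday's push into "positive" becomes today's baseline,
# so the same donation stops registering as a positive contribution.
zero_point = 0.0
for day in range(3):
    felt = world_value["donate"] - zero_point
    print(f"day {day}: donating feels {felt:+.1f}")
    zero_point = world_value["donate"]  # the bar moves up after each good deed
```

On day 0 donating feels +10.0; on every later day it feels +0.0, which is exactly the "why do I have to do it again today?" problem.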
Huh... I think the crux of our differences here is that I don't view my ethical intuition as a trainer which employs negative/positive reinforcement to condition my behavior -- I just view it as me. And I care a good bit about staying me. The idea that people would choose to modify their ethical framework to reduce emotional unpleasantness over a) performing a trick like donating, which isn't really that unpleasant in itself, or b) directly resolving the emotional pain in a way that doesn't modify the ethical framework/ultimate actions really ...
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)
I share your problem with purity ethics... I almost agree with this? Frankly, I have some issue with using the claim "a utilitarian with a different zero-point/bare-standard of decency has the same utility function so feel free to move yours!" and juxtaposing it wi...
Thank you for confirming. I wanted to be sure I wasn't putting words in your mouth.
I think I just have a very different model than you of what most people tend to do when they're constantly horrified by their own actions.
I'm sorry about the animal welfare relevance of this analogy, but it's the best one I have:
The difference between positive reinforcement and punishment is staggering; you can train a circus animal to do complex tricks using either method, but only under the positive reinforcement method will the animal voluntarily engag...
Correct me if I'm wrong, but I hear you say that your sense of horror is load-bearing, that you would take worse actions if you did not feel a constant anguish over the suffering that is happening.
Load-bearing horror != constant anguish. There are ways to hold an intuitively low zero-point for the world that don't lead to constant anguish. Other than that, I agree with you -- constant anguish is bad. The extent of my ethics-related anguish is probably more along the lines of 2-3 hour blocks of periodic frustration that happen every coup...
As an animal-welfare lacto-vegetarian who's seen a fair number of arguments along these lines, they don't really do it for me. In my experience, it's not really possible to separate human peace of mind from the actions you take (the former reflects an ethical framework, the latter reflects strategies, and together they form an aesthetic feedback loop). To be explicit:
(Splitting replies on different parts into different subthreads.)
The real problem that I have (and I suspect others have) with framing a significant sacrifice as the "bare standard of human decency" is that it pattern-matches purity ethics far more than utilitarianism. (A purity ethic derived from utilitarianism is still a purity ethic.)
For me, the key difference (keeping the vegetarian/vegan example) is whether it is a better outcome for one person to become a vegan and another to keep eating meat as usual, or for two people to each reduce their...
Thanks for pointing this out. Having recently looked at Ohio County, KY, I think this is correct. %ill there maxed out at more than 1% above the typical range but has since dropped to below 0.4% above the typical range and started rising again (which is notable in contrast with seasonal trends). [Edit: this is true for many counties in the Kentucky/Tennessee area.] This basically demonstrates that having a reported %ill now that is lower than earlier in the Kinsa database is insufficient to show r0<1. Probably best to stick with the prior of containment failure.
"I only care about animal rights because animals are alive"
1. Imagine seeing someone take a sledgehammer to a beautiful statue. How do you feel?
2. Someone swats a mosquito. How do you feel?
In this context, I think the word rights is doing a lot of work that your question is not capturing. While seeing someone destroy a beautiful statue would feel worse than seeing someone swat a mosquito, this in no way indicates that I care about "statue rights." I acknowledge that the word rights is kind of fuzzy, but here's my interpretation:
I f...
I've been playing with the Kinsa Health weathermap data to get a sense of how effective US lockdowns have been at reducing US fever. The main thing I am interested in is the question of whether lockdown has reduced coronavirus's r0 below 1 (stopping the spread) or not (reducing spread-rate but not stopping it). I've seen evidence that Spain's complete lockdown has not worked so my expectation is that this is probably the case here. Also, Kinsa's data has two important caveats:
The Kinsa data is barely even weak evidence in favor of R0 < 1. The downward trend in fever readings is confounded, likely severely, by their thermometers having to be actively used vs. being a passive wearable. It seems plausible that more people will check their temperature when they are concerned about COVID-19, and since most people are healthy this will spuriously drive average fever readings down. Plausibly the timing of increased thermometer use will coincide somewhat with shelter-in-place orders since they correlate with severity & awarenes...
Fair enough. When I was thinking about "broad covid risk", I was referring more to geographical breadth -- something more along the lines of "is this gonna be a big uncontained pandemic" than "is coronavirus a bad thing to get." I grant that the latter could have been a valid consideration (after all, it was with H1N1) and that claiming that it makes "no implication" about broader covid risk was a mis-statement on my part.
That being said, I wouldn't really consider it an alarm bell (and when I read it, it wasn't ...
While I agree with the specific claims this post is making (i.e. "Less Wrong provided information about coronavirus risk similar to or just-lagging the stock market"), I think it misses the thing that matters. We're a rationality forum, not a superintelligent stock-market-beating cohort[1]! Compared to the typical human's response to coronavirus, we've done pretty well at recognizing the dangers posed by the exponential spread of pandemics and acting accordingly. Compared to the very smart people who make money by predicting the ec...
The question in this post is "was Less Wrong a good alarm bell," and in my opinion only one of those links constitutes an alarm bell -- the one on EAForums. Acknowledging/discussing the existence of the coronavirus is vastly different from acknowledging/discussing the risk of the coronavirus.
"Will ncov survivors suffer lasting disability at a high rate?" is a medical question that makes no implication about broader covid risk.
This seems wrong to me, in part because the hypothesis that there could be widespread negative effects even for survivors was a compelling reason for 1) me to take it seriously (at the time, I estimated my disability risk was something like 5x the importance of my mortality risk) and 2) people to expect spread to be bad in a way that shows up in many indicators (like GDP).
[Epistemic Status: It's easy to be fooled by randomness in the coronavirus data but the data and narrative below make sense to me. Overall, I'm about 70% confident in the actual claim. ]
Iran's recent worldometer data serves as a case study demonstrating the relationship between sufficient testing and case-fatality rate. After a 16-day plateau (Mar 06-22) in daily new cases, which may have seemed reassuring, we've seen five days (Mar 24-28) of roughly linear rise. We could have anticipated this by noticing that in a similar time frame (Mar 07-19),...
I do still disagree with you somewhat, because I think that people going through a crisis of faith are prone to flailing around and taking naive actions that they would have reconsidered after a week or month of actually thinking through the implications of their new belief. Trying to maximize utility while making a major update is safe for ideal Bayesian reasoners, but it fails badly for actual humans.
Ah, yeah I agree with this observation -- and it could be good to just assume things add up to normality as a general defense against people rapidly ta...
I agree that carefully landing the plane is better than maintaining the course if catastrophic outcomes suddenly seem more plausible than before.
Yeah, but my point is not about catastrophic risk -- it's about the risk/reward trade-off in general. You can have risk>reward in scenarios that aren't catastrophic. Catastrophic risk is just a good general example of where things don't add up to normality (catastrophic risks by nature correspond to not-normal scenarios and also coincide with high risk). Don't promise yourself to steer the p...
I think the strongest version of this idea of adding up to normality is "new evidence/knowledge that contradicts previous beliefs does not invalidate previous observations." Therefore, when one's actions are contingent on things happening that have already been observed to happen, things add up to normality because it is already known that those things happen -- regardless of any new information. But this strict version of 'adding up to normality' does not apply in situations where one's actions are contingent on unobservables. ...
I shared this post with some of my friends and they pointed out that, as of 3/21/2020, the Italy and Spain curves no longer look as optimistic:
To me that nudges things somewhat, but isn't a game changer. I don't think it makes it 10x less bad or anything.
Fair enough. As a leaning-utilitarian, I personally share your intuition that it isn't 10x as bad (if I had to choose between coronavirus and ending the negative consequences of lifestyle factors for one year, I don't have a strong intuition in favor of coronavirus). Psychologically speaking, from the perspective of average deontological Joe, I think that it (in some sense) is/feels 10x as bad.
Is that really a possibility? I imagin...
Thanks for clarifying. To the extent that you aren't particularly sure about how consciousness comes about, it makes sense to reason about all sorts of possibilities related to capacity for experience and intensity of suffering. In general, I'm just kinda surprised that Eliezer's view is so unusual given that he is the Eliezer Yudkowsky of the rationalist community.
My impression is that the justification for the argument you mention is something along the lines of "the primary reason one would develop a coherent picture of their own mind is so they could conv...