How does a person who keeps trying to do good but fails and ends up making things worse fit into this framework?
It is worth noting that Ziz has already proposed the same idea in False Faces, although I think Valentine did a better job of systematizing and explaining the reasons for its existence.
Another interesting direction of thought is the connection to Gregory Bateson’s theory that double binds cause schizophrenia. Spitballing here: it could be that a double bind triggers an attempt to construct a "false face" (a self-deceptive module), similar to a normal situation involving a hostile telepath. However, because the double bind is contradictory, the internal mec...
Damn, reading Connor's letter to Roon had a psychoactive influence on me; I got Ayahuasca flashbacks. There are some terrifying and deep truths lurking there.
It's not related to the post's main point, but the U-shaped happiness finding seems questionable. Other analyses suggest happiness simply declines with age, and in general this type of research shouldn't be trusted.
The U-shaped happiness curve is wrong: many people do not get happier as they get older (theconversation.com)
Oh, come on, it's clear that the Yudkowsky post was downvoted because it was bashing Yudkowsky and not because the arguments were dismissed as "dumb."
It wouldn't have mattered to me whose name was in the title of that post, the strong-downvote button floated nearer to me just from reading the rest of the title.
From reading omnizoid's blog, he seems overconfident in all his opinions. Even when changing them, the new opinion is always a revelation of truth, relegating his previously confident opinion to the follies of youth, and the person who bumped him into the change is always brilliant.
Thank you for your response, Caerulea. Many of the emotions and thoughts you mentioned resonate with me. I truly hope you find peace and a sense of belonging. For myself, I've found solace in understanding that my happiness isn't really determined by external factors, and I'm not to blame or responsible for the way the world is. It's possible to find happiness in your own bubble, provided you have the necessary resources, which can sometimes be a challenge.
Because you have a pretty significant data point (That spans millions of years) on Earth, and nothing else is going on (to the best of our knowledge), now the question is, how much weight do you want to give to this data point? Reserving judgment means almost ignoring it. For me, it seems more reasonable to update towards a net-negative universe.
Maybe, and maybe not.
I agree that looking at reality honestly is probably quite detrimental to happiness or mental health. That's why many people opt out of these conversations using methods like downvoting, sneering, or denying basic facts about reality. Their aim is likely to avoid the realization that we might be living in a world that is somewhat hellish. I've seen this avoidance many times, even in rationalist spaces, although rationalists are generally better at facing it than others, and some, like Brian Tomasik and Nate Soares, even address it directly.
I've spent a lot o...
You don't need a moral universe; you just need one where joy outweighs suffering for conscious beings ("agents"). There are many ways in which that can happen:
I'm sure you can think of many other examples. Again, it's not clear to me intuitively that the existence of these worlds is as improbable as you claim.
You're right about my misunderstanding. Thanks for the clarification.
I don't think the median moment is the correct KPI if the distribution has high variance, and I believe this is the case with pain and pleasure experiences. Extreme suffering is so bad that most people would need a lot of "normal" time to compensate for it. I would think that most people would not trade torture to extend their lives at a 1:1 ratio, and probably not even at 1:10. (E.g., you get tortured for X time and get your life extended by aX time in return.)
see for example:
A Happy Life Afte...
The first part of your reply is basically repeating the point I made, but again, the issue is that you're assuming the current laws of physics are the only laws that allow conscious beings without a creator. I disagree that this must be the case.
How can my last point be supported? Do you expect me to create a universe with different laws of physics? How do you know it's incorrect?
"I'm not convinced by the argument that the experience of being eaten as prey is worse than the experience of eating prey"
Would you consider the experience of being eaten alive (let's say even a dog chewing off your hand) as hedonistically equivalent to eating a steak? (Long-term damage aside.)
I don't think most people would agree to have both of these experiences; they would rather avoid both, which means the suffering is much worse than the pleasure of eating meat.
I agree with the proposed methodology, but I have a strong suspicion that the sum will be negative.
If evolution is indifferent, you would expect a symmetry between suffering and joy, but our world seems to lean towards suffering (the suffering of an animal being eaten vs. the joy of the animal eating it; people suffer from chronic pain but not from chronic pleasure; etc.).
I think there are a lot of physics-driven details that make it happen. Due to entropy, most states of the world are bad for you and only a few are good, so negative stimuli that signal "beware!" are more frequent than positive stimuli that signal "come close."
One can im...
There are many more states of the world that are bad for an individual than good for that individual, and feeling pleasure in a bad world state tends to lead to death. So no, in an amoral world I'd expect much more suffering than pleasure, because the suffering is more instrumentally useful for survival. I think, given that, your last point is just... completely unsupported and incorrect.
How about animals? If they are conscious, do you believe wild animals have net-positive lives? The problem is much more fundamental than humans.
It's not a utility monster scenario. The king doesn't receive more happiness than other beings per unit of resources; he's a normal human being, just like all the others. While a utility sum allows utility monsters, which seems bad, your method of "if some of the people are happy, then it's just subjective" allows a reverse Omelas, which seems worse. It reminds me a bit of deontologists who criticize utilitarianism while allowing much worse things if applied consistently.
Regarding the second part, I'm not against rules or limits or even against suffering. ...
"Evolution is of course, by no means nice, but what's the point of blaming something for cruelty when it couldn't possibly be any different?"
That's the thing; I'm really not convinced about that. I'm sure there could be other universes with different laws of physics where the final result would be much nicer for conscious beings. In this universe, it couldn't be different, but that's precisely the thing we are judging here.
It may very well be that there are different universes where conscious beings are having a blast and not being tortured and killed as f...
It's hard to argue about what reasonable expectations are. My main point was that 'perhaps' thinks that, in a world that contains torture, wars, factory farming, conscious beings being eaten alive, rape, and diseases, the worst thing worth noting is that humans demand so much of it and that the "universe has done a pretty great job."
I find it incredibly sociopathic (specifically in the sense of not being moved by the suffering of others).
Imagine a reverse Omelas in which there is one powerful king who is extremely happy and one billion people suffering horrific fates. The king's happiness depends on their misery. As part of his oppression, he forbids any discussion about the poor quality of life to minimize suicides, as they harm his interests.
"That makes the whole thing subjective, unless you take a very naive total sum utility approach."
Wouldn't the same type of argument apply to a reverse Omelas? The sum utility approach isn't naive; it's the most sensible approach. Personally, when choosing between alternatives in which you have skin in the game and need to think strategically, that's exactly the approach you would take.
I would argue that the number of murders committed by people with a desire for "revenge against the universe" is less than 0.01% of murders, and probably far fewer than the murders committed in the name of Christianity during the Crusades. Should we conclude that Christianity is also unhealthy for a lot of people?
This idea of cherry-picking the worst phenomenon related to a worldview and then smearing the entire worldview with it is basically one of the lowest forms of propaganda.
You should check out Efilism, Gnosticism, or Negative Utilitarianism. There are views that see the universe as rotten at its core. They are obviously not very popular, because they are too psychologically hard for most people and, more importantly, hurt the interests of those who prefer to pretend, for their own selfish reasons, that life is good and the world is just.
Also, obviously, viewing the world in a positive manner has serious advantages in memetic propagation for reasons that should be left as an exercise for the reader. (Hint: There were probably Buddhist sects that didn't believe in reincarnation back in the day...)
"If there's something wrong with the universe, it's probably humans who keep demanding so much of it. "
Frankly, this is one of the most infuriating things I've read on LessWrong recently. It's super disappointing to see it being upvoted.
Look, if you weigh the world's suffering against its joy through hedonistic aggregation, it might be glaringly obvious that Earth is closer to hell than to heaven.
Recall Schopenhauer’s sharp observation: “One simple test of the claim that the pleasure in the world outweighs the pain…is to compare the feelings of an animal t...
I think AGI does add novel, specific difficulties to the problem of meaninglessness that you didn't tackle directly, which I'll demonstrate with an example similar to your football field parable.
Imagine a bunch of people stuck in a room with paintbrushes and canvases, who find meaning in creating beautiful paintings and selling them to the outside world. But one of the walls of their room is made of glass, and in the room next to them there is a bunch of robots that also paint. With time, they notice the robots a...
Why would it lie if you program its utility function in a way that puts:
solving these tests using minimal computation > self-preservation?
(Asking sincerely)
A simple idea for AI security that will not solve alignment but should easily prevent FOOM and most catastrophic outcomes is using safety interlocks for AIs.
A "safety interlock" is a device that prevents the system from reaching a dangerous state. It is typically used in machinery or industrial processes where certain conditions need to be met before the system can operate.
In a microwave, the door includes a safety interlock system that prevents the microwave from operating if the door is open. When you open the door, the interlock interrupts the power sup...
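As a toy sketch of how this idea carries over to software (my own illustration; all names and the compute-budget check are made-up examples, not an existing safety framework), an interlock can be a gate that runs every safety check before an action is allowed to execute, the way a microwave's door switch must close the circuit before the magnetron can run:

```python
class InterlockTripped(Exception):
    """Raised when a safety check fails; the gated action never runs."""

class SafetyInterlock:
    def __init__(self, checks):
        # checks: list of (name, predicate) pairs; all hypothetical examples
        self.checks = checks

    def gate(self, action):
        # Every predicate must pass *before* the action executes,
        # like a door switch that must close before power can flow.
        for name, ok in self.checks:
            if not ok():
                raise InterlockTripped(name)
        return action()

# Toy usage: a compute budget plays the role of the "door switch".
used = {"flops": 0}
BUDGET = 50

interlock = SafetyInterlock([("compute_budget", lambda: used["flops"] < BUDGET)])

def step():
    used["flops"] += 60
    return "ok"

print(interlock.gate(step))       # passes: budget not yet exceeded
try:
    interlock.gate(step)          # trips: budget already spent
except InterlockTripped as tripped:
    print("halted:", tripped)
```

The key design property, as with the microwave door, is that the check is structurally in front of the dangerous operation rather than something the operation is trusted to consult voluntarily.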
"For one thing, if we use that logic, then everything distracts from everything. You could equally well say that climate change is a distraction from the obesity epidemic, and the obesity epidemic is a distraction from the January 6th attack, and so on forever. In reality, this is silly—there is more than one problem in the world! For my part, if someone tells me they’re working on nuclear disarmament, or civil society, or whatever, my immediate snap reaction is not to say “well that’s stupid, you should be working on AI x-risk instead”, rather it’s t...
I once wrote a longer and more nuanced version that addresses this (copied from footnote 1 of my Response to Blake Richards post last year):
...One could object that I’m being a bit glib here. Tradeoffs between cause areas do exist. If someone decides to donate 10% of their income to charity, and they spend it all on climate change, then they have nothing left for heart disease, and if they spend it all on heart disease, then they have nothing left for climate change. Likewise, if someone devotes their career to reducing the risk of nuclear war, then
Why though? How does understanding the physics that makes nukes work help someone understand their implications? Game theory seems a much better background than physics for predicting the future in this case. For example, the idea of mutually assured destruction as a civilizing force was first proposed by Wilkie Collins, an English novelist and playwright.
Every other important technological breakthrough. The Internet and nuclear weapons are specific examples if you want any.
You seem to claim that a person who works ineffectively towards a cause doesn't really believe in that cause; this is wrong. Many businesses fail in ridiculously stupid ways; that doesn't mean their owners didn't really want to make a profit.
If a businessowner makes silly product decisions because of bounded rationality, then yes, it's possible they were earnestly optimizing for success the whole time and just didn't realize what the consequences of their actions would be.
If a(n otherwise intelligent) businessowner decides to shoot the clerk at the competitor taco stand across the street, then at the very least they must have valued something wayyyyy over building the business.
In both cases, the violence they used (Which I'm not condoning) seemed meant for resource acquisition (a precondition for anything else you must do). It's not just randomly hurting people. I agree that they seem quite ineffective and immoral. But I don't think that contradicts the fact that she's doing what she's doing because she believes humanity is evil, since everyone seems to be OK with factory farming ("flesh-eating monsters").
"Reading their posts it sounds more like Ziz misunderstood decision theory as saying "retaliate aggressively all the time" and started a cult around that."
This is a strawman.
In both cases, the violence they used (Which I'm not condoning) seemed meant for resource acquisition (a precondition for anything else you must do).
This is such an unrealistically charitable interpretation of the actions of the Ziz gang that I find it hard to understand what you really mean. If you find this at all a plausible underlying motivation for these murders I feel like you should become more acquainted with the history of violent political movements and cults, the majority of which said at some point "we're just acquiring resources that we can...
While "retaliate aggressively all the time" does seem like a strawman, it is worth noting that Ziz rejects causal decision theory (a la "retaliate aggressively if it seems like it would cause things to go better, and avoid retaliating if it seems like it would cause things to go worse") in favor of some sort of timeless/updateless decision theory (a la "retaliate aggressively even if it would cause things to go worse, as long as this means your retaliation is predictable enough to avoid ever running into the situation where you have to retaliate").
Meanwhile other rationalist orgs might pretend to run on timeless/updateless decision theory but seem in practice to actually run on causal decision theory.
I downvoted for disagreement but upvoted for karma; I'm not sure why it's being so heavily downvoted. This comment states honestly the preferences that most humans hold.
Well I downvoted, first because I find those preferences pretty abhorrent, and second because Richard is being absurdly confrontational ("bring on the death threats") in a way that doesn't contribute to discussion. The comment is mostly uncalled-for gloating & flag planting, as if he's trying to start a bravery debate.
Any of those things seems to me a sufficient reason to downvote, and altogether they made me strong-downvote.
I agree with your comment. To continue the analogy, she chose the path of Simon Wiesenthal and not of Oskar Schindler, which seems more natural to me in a way when there are no other countries to escape to - when almost everyone is Nazi. (Not my views)
I personally am not aligned with her values and disagree with her methods. But I also begrudgingly hold some respect for her intelligence and for the courage to follow her values wherever they take her.
The lack of details and any specific commitments makes it sound mostly like PR.
I don't think it's that far-fetched to view what humanity does to animals as something equivalent to the Holocaust. And if you accept this, almost everyone is either a Nazi or a Nazi collaborator.
When you take this idea seriously and commit to stopping this with all your heart, you get Ziz.
When you take this idea seriously and commit to stopping this with all your heart, you get Ziz.
No, you don't, because Ziz-style violence is completely ineffective at improving animal welfare. It's dramatic and self-destructive and might loudly express their factional leanings, but that doesn't make it accomplish the thing in question.
Further, none of the murders & attempted murders the gang has committed so far seem to be against factory farm workers, so I don't understand this idea that Ziz is motivated by ambitions of political terrorism at all. ...
I eat meat and wear leather and wool. I do think that animals, the larger ones at least, can suffer. But I don’t much care. I don’t care about animal farming, nor the (non-human) animal suffering resulting from carnivores and parasites. I’d rather people not torture their pets, and I’d rather preserve the beauty and variety of nature, but that is the limit of my caring. If I found myself on the surface of a planet on which the evolution of life was just beginning, I would let it go ahead even though it mean all the suffering that the last billion years of ...
Not necessarily, because you might also commit to stopping it in a non-escalatory way. For instance, you could work to make lab-grown meat economically viable so it replaces animal products.
Hence the other key ingredient in Zizianism is commitment to escalating all the way, which allows things to blow up dramatically like this. (And escalating all the way has the potential to go wrong in most conflicts, not just veganism (though veganism seems like the big one here), e.g. I doubt the landlord conflict was about veganism.)
As an analogy, if you were dealing with t...
Consider the target audience of this podcast.
The term "conspiracy theory" seems to be a language construct meant as a weapon to prevent poking at real conspiracies. See the following quote from Conspiracy Theory as Heresy:
...Whenever we use the term ‘conspiracy theory’ pejoratively we imply, perhaps unintentionally, that there is something wrong with believing in conspiracies or wanting to investigate whether they’re occurring. This rhetoric silences the victims of real conspiracies, and those who, rightly or wrongly, believe that conspiracies are occurring, and it herds respectable opinion in w
...I agree that that interaction is pretty scary. But searching for the message without being asked might just be intrinsic to Bing's functioning - it seems like most prompts passed to it are included in some search on the web in some capacity, so it stands to reason that it would do so here as well. Also note that base GPT-3 (specifically code-davinci-002) exhibits similar behaviour, refusing to comply with a similar prompt (Sydney's prompt AFAICT contains instructions to resist attempts at manipulation, etc., which would explain in part the yandere behaviour)
I agree with most of your points. I think one overlooked point that I should've emphasized in my post is this interaction, which I linked to but didn't dive into.
A user asked Bing to translate a tweet to Ukrainian that was written about her (removing the first part that referenced it), in response Bing:
This is a level of agency and intelligence that I didn't expect from an LLM.
...Correct me if I'm wrong, but this se
A bit beside the point, but I'm a bit skeptical of the idea of bullshit jobs in general. From my experience, people often describe jobs with illegible or complex contributions to the value chain as bullshit: for example, investment bankers (even though efficient capital allocation is a huge contribution) or lawyers.
I agree governments have a lot of inefficiency and superfluous positions, but I wonder how big bullshit jobs really are as a % of GDP.
Agreed. I think the two theories of bullshit jobs miss how bullshit comes into existence.
Bullshit is actually just the fallout of Goodhart's Curse.
(note: it's possible this is what Zvi means by 2 but he's saying it in a weird way)
You start out wanting something, like to maximize profits. You do everything reasonable in your power to increase profits. You hit a wall and don't realize it and keep applying optimization. You throw more resources after marginally worse returns until you start actively making things worse by trying to earn more.
One of the conseq...
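The dynamic above can be sketched numerically (my own illustrative numbers, not anything from Zvi's post): the proxy being pushed (effort spent) keeps climbing, while the true objective peaks and then declines once optimization continues past the wall.

```python
# Toy Goodhart dynamic: effort keeps rising, but true profit has
# diminishing, then negative, returns past the peak at effort = 5.
def true_profit(effort):
    return 10 * effort - effort ** 2

for effort in range(0, 11, 2):
    print(f"effort={effort:2d}  true_profit={true_profit(effort)}")
```

Past the peak, every additional unit of effort actively destroys value, even though the activity metric an organization watches still looks like "doing more."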
The serious answer would be:
Incel = low status. Implying that someone is an incel and deserves to be stuck in his toxic safe space is mockery, or at least a status jab. The fact that you ignored that I wrote "status jab/mockery" and insisted on "mockery" alone, and only in the context of this specific post, hints at motivated reasoning (choosing to ignore the bigger picture and artificially limiting the scope of the discussion to minimize the attack surface without any good reason).
The mocking answer would be:
These autistic rationalists can't even sense obvious mockery and deserve to be ignored by normal people
"OP" is usually used to mean the original poster, not the original post. The first quote is taken from one of the links in this post and is absolutely a status jab: he assumes his critic is celibate (even though the quoted comment doesn't imply anything like that). And if you don't parse "they deserve their safe spaces" as a status jab/mockery, I think you're not reading the social subtext correctly here, but I'm not sure how to communicate this in a manner you will find acceptable.
"I never had the patience to argue with these commenters and I’m going to start blocking them for sheer tediousness. Those celibate men who declare themselves beyond redemption deserve their safe spaces,"
https://putanumonit.com/2021/05/30/easily-top-20/
"I don't have a chart on this one, but I get dozens of replies from men complaining about the impossibility of dating and here's the brutal truth I learned: the most important variable for dating success is not height or income or extraversion. It's not being a whiny little bitch."
https://twitter.com/y...
I just wanted to say that your posts about sexuality represent, in my opinion, the worst tendencies of the rationalist scene. The only way for me to dispute them on the object level is to go into socially unaccepted truths and CW topics, so I'm sticking to the meta-level here. But on the meta-level, the pattern is something like the following:
There is another approach that says something along the lines of: not all factory-farmed animals receive the same treatment. For example, the median cow is treated far better than the median chicken. I for one would guess that cows are net positive, while chickens are probably net negative (and probably even have worse lives than wild animals).
CEV was written in 2004, fun theory 13 years ago. I couldn't find any recent MIRI paper about metaethics (granted, I haven't gone through all of them). The metaethics question is just as important as the control question for any utilitarian (what good is controlling an AI only for it to be aligned with some really bad values? An AI controlled by a sadistic sociopath is infinitely worse than a paperclip maximizer). Yet all the research is focused on control, and it's very hard not to be cynical about it. If some people believe they are ...
If you try to quantify it, humans on average probably spend over 95% (a conservative estimate) of their time and resources on non-utilitarian causes. True utilitarian behavior is extremely rare, and all other moral behaviors seem to be either elaborate status games or extended self-interest [1]. By any relevant quantified KPI, the typical human is far closer to being completely selfish than to being a utilitarian.
[1] - Investing in your family/friends is in a way selfish, from a genes/alliances (respectively) perspective.
The fact that AI alignment research is 99% about control, and 1% (maybe less?) about metaethics (In the context of how do we even aggregate the utility function of all humanity) hints at what is really going on, and that's enough said.
I also made a similar comment a few weeks ago. In fact, this point seems to me so trivial yet corrosive that I find it outright bizarre that it's not being tackled or taken seriously by the AI alignment community.
This reads to me as, "We need to increase the oppression even more."