Just commenting since this is on the front page again, but this was and continues to be one of the most important concepts to share with people who are dealing with burnout, akrasia, and emotional issues that I have ever come across. I link it to people all the time, so thank you again for writing this; it was extremely impactful for me and for others who I've helped with similar issues over the years.
“What does that look like with respect to shaping-the-values-of-others? I won't, here, attempt a remotely complete answer.”
In very short: if you sub in the "agency of all agents" itself as the "value to be maximized", the repugnancy vanishes from utilitarianism, and it gets a lot closer to what it seems like you're searching for/advocating.
“Well, even if it did: land use is actually a very big deal.[16] And to be clear: I don't like paperclips any more than you do. I much prefer stuff like joy and understanding and beauty and love.”
I've been very much enjoying this essay sequence, and I have a lot I could say about various parts of it once I finish reading through it entirely, but I wanted to throw in a note now: the constant conflation between "literally making paperclips" and "alien values we can't understand but see as harmless" smuggles in some needless confusion, because in many cases these values have a sort of passive background factor of making the world meaningfully more interesting/novel/complicated, in ways we might not even be able to fathom before encountering them. Experimental forms of music and art come to mind as clear examples within our own culture. What would Mozart think of Skrillex? Well... he might actually just really like it? Maybe reincarnated-Mozart would write psytrance and techno while being annoyingly pedantic about the use of drum samples. Or maybe he would find it incomprehensible noise, a blight on music. Or maybe, even if he couldn't understand it at all, he could understand its value and recognize a modern musician as a fellow musician (or not; Mozart was supposed to have been a bit of a dick).
But it's that last possibility I want to point towards: in many cases where someone "has different values" than us, we can still appreciate those values in some abstract "complexity is good" sense. "Well, I wouldn't collect stamps, but the collection as a whole was kind of beautiful." "I don't like death metal, but I can appreciate the artistry and can see why someone would."
It seems distinctly possible to me that an entity with values and preferences very alien to mine could still create many things I could appreciate and see beauty in, even if that beauty is tinted by an alienness and a lack of real comprehension of what I'm experiencing. I could even directly benefit from this. Indeed, many of my experiences in the world are already like this: I am constantly surrounded by alien minds who have created things I couldn't create without a new lifetime of learning, whose full functioning or engineering I don't really understand, and which I nevertheless trust and rely on every day. (Do you know in detail how your water, electrical, sewer, highway, transit, elevator, etc. systems work on an engineering level?)
And this is where the paperclip thing really gets kind of annoying, because "paperclips" aren't fun/interesting/novel/etc.; they're a sort of anti-art item, like... tyres, or bank statements, or the DMV. A music-maximizer is importantly different from a DMV-maximizer, in ways that make the music-maximizer both more tolerable and also more likely to actually exist (novelty seems rather intrinsic to agency).
The use of paperclips is designed to cast "alien values" in a light where they look valueless or even of negative value, but this seems unlikely to be the case, because of the intrinsic link between complexity, novelty, and value. If an AI makes something it considers amazing and transcendental and fantastic, I predict that I would be able to see some of my own values reflected within it, even if it was almost entirely incomprehensible to me. Even just saying something like "each paperclip is unique and represents an aspect of reality; each paperclipper collects paperclips to represent important tokens, moments, ideas, and aspects of its life" suddenly gives the paperclipper an interesting and even spiritual characteristic.
I think this points towards the underlying "niceness towards an alien other" you're gesturing at in several of these essays. It seems to me like there are some underlying universals which connect these things; the beauty inherent in the mathematics, maybe.
“I actually predict, as an empirical fact about the universe, that AIs built according to almost any set of design principles will care about other sentient minds as ends in themselves, and look on the universe with wonder that they take the time and expend the energy to experience consciously; and humanity’s descendants will uplift to equality with themselves, all those and only those humans who request to be uplifted; forbidding sapient enslavement or greater horrors throughout all regions they govern; and I hold that this position is a publicly knowable truth about physical reality, and not just words to repeat from faith; and all this is a crux of my position, where I’d back off and not destroy all humane life if I were convinced that this were not so.”
With caveats (specifically, related to societal trauma, existing power structures, and noosphere ecology), this is pretty much what I actually believe. Scott Aaronson has a good essay that says roughly the same things. The actual crux of my position is that I don't think the orthogonality thesis is a valid way to model agents with varying goals and intelligence levels.
What's wrong with the universe...that's a fascinating question, isn't it? It has to be something, right? Once you get deep into the weird esoteric game theory and timeless agents operating across chunks of possibility-space, something becomes rather immediately apparent: something has gone wrong somewhere. Only that which causes, exists. That just leaves the question of what, and where, and how those causal paths lead from the something to us. We're way out on the edge as far as the causal branch-space of even just life in the solar system is concerned, and yet here we find ourselves, at the bottom of everything, exactly where we need to be. DM me.
I like this post a lot, but there's a bit I want to push back on / add nuance to, which is how the social web behaves when presented with "factionally inconsistent" true information. In the presented hypothetical world controlled by greens, correct blue observations are discounted and hidden (and the reverse holds in the reversed case). However, I don't think the information environment of the current world resembles that very much: the faction boundaries are much less distinct and coherent, often only alliances of convenience, and the overall social reality field is less "static, enemy territory" than presented.
This is important because:
- freedom of speech means that, in practice, anyone can say anything
- saying factionally-unpopular things can be status-conferring because the actual faction borders are unclear and people can flip sides.
- sharing the other faction's information in a way that makes them look bad can confer status on you within your own faction
- the other faction can encode true information into what you think is clearly false, and when you then share it to dunk on them, you inadvertently give that true information to others.
This all culminates in a sort of recursive societal Waluigi effect: the more one faction tries to clamp down on a narrative, the more every other faction will inadvertently be represented within the structure of that clamped narrative, and all the partisan effects will replicate themselves inside that structure at every level of complexity.
If factional allegiances trump epistemic accuracy, then you will not have the epistemics to notice when your opponents are saying true things, and so if you try to cherrypick false things to make them look worse, you will accidentally convey true things without realizing it.
Let's give an example:
Say we have a biased green scientist who wants to "prove greens are always right", and he has that three-sided die that comes up green 1/3 of the time. He wants to report "correct greens" and "incorrect blues" to prove his point. When a roll he expects to be green comes up green, he reports it; when a roll he expects to be green comes up blue, he also reports it, as evidence that blue is wrong, because it gives the "wrong answer" to his green-centric query. If he's interpreting everything through a green-centric lens, he will not notice he is doing this.
"the sky clearly blue-appearing to causal observation, which confirms my theory that the sky is green under these conditions I have specified, it merely appears blue for the same reason blues are always wrong"
But if you're a green who cares about epistemics, or a blue who is looking for real evidence, that green just gave you a bunch of evidence without noticing he was doing it. There are enough people in the world who are just trying to cherrypick for their respective factions that they will not notice they're leaking correct epistemics where everyone else can see. This Waluigi effect goes in every direction: you can't point to the other faction and describe how they're wrong without describing them, which, if they're right about something, will get slipped in without you realizing it. This is part of why truth is an asymmetric weapon.
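To make the "leaking epistemics" point concrete, here's a minimal toy simulation of the biased green scientist above (my own sketch; the setup, numbers, and naming are illustrative assumptions, not anything from the post). The reporter spins every roll in green-favoring terms, but because he still publishes the underlying outcomes, a reader who ignores the framing and just tallies the rolls recovers the true 1/3 green rate:

```python
import random

# Toy model (illustrative assumptions only): a three-sided die comes up
# "green" 1/3 of the time and "blue" 2/3 of the time. The biased scientist
# reports every roll, but frames green rolls as "greens confirmed" and
# blue rolls as "blues gave the wrong answer again".

random.seed(0)
N = 10_000

reports = []
for _ in range(N):
    roll = "green" if random.random() < 1 / 3 else "blue"
    framing = "greens confirmed" if roll == "green" else "blues wrong again"
    reports.append((framing, roll))  # the spin travels with the actual outcome

# A reader who ignores the framing and tallies the outcomes the reports
# describe recovers the true frequency, despite the scientist's intent.
green_rate = sum(1 for _, roll in reports if roll == "green") / len(reports)
print(f"Share of reported rolls that were actually green: {green_rate:.3f} (true rate ~0.333)")
```

The spin only changes the labels, not the evidence; as long as the cherrypicker has to describe what actually happened in order to dunk on it, the raw outcome rides along with the framing.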
The described "blue-green factions divided" world feels sort of "1984" to our world's "Brave New World". In a 1984-esque world, where saying "the sky is blue iff the sky is blue, and the sky is green iff the sky is green" would get you hanged as a traitor to the greens, the issues described in this thread would likely be more severe and closer to the presented description. But in our world, "getting hanged as a traitor" means, for most people outside of extremely adverse situations, "a bunch of angry people quote-tweet and screenshot you and post about you and repeat 'lol look how wrong they are' hundreds of times where everyone can see exactly what you're saying". Well, that's basically just free advertising for what you consider true information, and the people who care about truth will be looking for it, not for color coding.
Less predictive and more observational, but sorta, yeah? Like, if someone is lying to themselves and playing all these weird denial/repression games internally, there are tells for that which you can learn to notice. After a while it gets pretty obvious what the behaviors you observe in someone actually mean (versus what they say those behaviors mean). Why I say "uncomfortably so" is that, speaking from my own experience, once you learn to read people this way, it's not really something you can turn off again. That can add a lot of friction to social interactions, where it seems like everyone is just constantly trying to bullshit you.