EA has gotten a little more sympathetic to vibes-based reasoning recently, and will continue to incorporate more of it.
The mind (ie. your mind), and how it is experienced from the inside, is potentially a very rich source of insights for keeping AI minds aligned on the inside.
The virtue of the void is indeed the virtue above all others (in rationality), and fundamentally unformalizable.
There is likely a deep compositional structure to be found for alignment, possibly to the extent that AGI alignment could come from "merely" stacking together "microalignment", even if in non-trivial ways.
I haven't read this post super deeply yet, but obviously this is one of those excellent posts that's going to become a Schelling point for various semi-related gripes from people who've given it a mere token skim, even though most of those gripes are already anticipated in the post!
Some of those gripes are:
- Near enemies: Once a term for a phenomenon is entrenched in a community, it's a lot a lot a lot of work to name anything that's close to it but not quite it. (See, for example, "goodhart" for what is IMO a very diverse and subtle cluster of clumsiness in holding onto intentio...
Even if SEP was right about getting around the infinity problem and CK was easy to obtain before, it certainly isn't now! (Because there is some chance that whoever you're talking to has read this post, and whoever reads this post will have some doubt about whether the other believes that...)
Love this post overall! It's hard to overstate the importance of (what is believed to be) common knowledge. Legitimacy is, as Vitalik notes[1], the most important scarce resource (not just in crypto) and is likely closer to whatever we usually intend to name when we sa...
I've been a longtime CK atheist (and have been an influence on Abram's post), and your comment is in the shape of my current preferred approach. Unfortunately, rational ignorance seems to require CK that agents will engage in bounded thinking, and not be too rational!
(CK-regress like the above is very common and often non-obvious. It seems plausible that we must accept this regress and in fact humans need to be Created Already in Coordination, in analogy with Created Already in Motion)
I think it is at least possible to attain p-CK in the case that th...
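The regress can be made concrete with a toy generator (my own illustration, not from the post): common knowledge of P requires every finite level of "A knows that B knows that ... P", so no finite list of levels ever suffices.

```python
def knowledge_level(agents, proposition, depth):
    """Build the depth-th order interactive-knowledge statement,
    alternating between the agents in order."""
    stmt = proposition
    for i in range(depth):
        knower = agents[i % len(agents)]
        stmt = f"{knower} knows that {stmt}"
    return stmt

# Common knowledge of P between Alice and Bob is the infinite
# conjunction of all these finite levels (shown up to depth 4):
for d in range(1, 5):
    print(knowledge_level(["Alice", "Bob"], "P", d))
```

p-CK-style proposals, as I understand them, amount to truncating or probabilistically discounting this tower rather than demanding all of it.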
What are the standard doomy "lol no" responses to "Any AGI will have a smart enough decision theory to not destroy the intelligence that created it (ie. us), because we're only willing to build AGI that won't kill us"?
(I suppose it isn't necessary to give a strong reason why acausality will show up in AGI decision theory, but one good one is that it has to be smart enough to cooperate with itself.)
Some responses that I can think of (but I can also counter, with varying success):
A. Humanity is racing to build an AGI anyway, this "decision" is not really eno...
When there's not a "right" operationalization, that usually means that the concepts involved were fundamentally confused in the first place.
Curious about the scope of the conceptual space where this belief was calibrated. It seems to me to tacitly say something like "everything that's important is finitely characterizable".
Maybe the "fundamentally confused" in your phrasing already includes the case of "stupidly tried to grab something that wasn't humanly possible, even if in principle" as a confused way for a human, without making any claim of reality bei...
It seems like one of the most useful features of having agreement separate from karma is that it lets you vote up the joke and vote down the meaning :)
Thanks for clarifying! And for the excellent post :)
Finally, when steam flows out to the world, and the task passes out of our attention, the consequences (the things we were trying to achieve) become background assumptions.
To the extent that Steam-in-use is a kind of useful certainty about the future, I'd expect "background assumptions" to become an important primitive that interacts in this arena as well, given that it's a useful certainty about the present. I realize that's possibly already implicit in your writing when you say figure/ground.
I think some equivalent of Steam pops out as an important concept in enabling-agency-via-determinism (or requiredism, as Eliezer calls it), when you have in your universe both:
The latter is something that can also become a very solid (full of Steam) thing to lean on for your choice-making, and that's an especially useful model to apply to your selves across time or to a community trying to self-organize. It s...
I'm unsure if open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I agree that the intuitions one takes for granted when working with a concept need at least a careful reconsideration when you're actually working with a negation-of-concept. And "believing in" might be one of those things that you can't really do with negation-of-concepts.
Also, I think a typo: you said "logical complement", I'm imagining you meant "set-theoretic complement". (This seems important to point out since in to...
I began reading this charitably (unaware of whatever inside baseball is potentially going on, and seems to be alluded to), but to be honest I struggled once "X" seemed to really want someone (Eliezer) to admit they're "not smart". I'm not sure why that would be relevant.
I think I found these lines especially confusing, if you want to explain:
There's probably a radical constructivist argument for not really believing in open/noncompact categories like . I don't know how to make that argument, but this post too updates me slightly towards such a Tao of conceptualization.
(To not commit this same error at the meta level: Specifically, I update away from thinking of general negations as "real" concepts, disallowing statements like "Consider a non-chair, ...").
But this is maybe a tangent, since just adopting this rule doesn't resolve the care required in aggregation with even compact categori...
(A suggestion for the forum)
You know that old post on r/ShowerThoughts which went something like "People who speak somewhat broken english as their second language sound stupid, but they're actually smarter than average because they know at least one other language"?
I was thinking about this. I don't struggle with my grasp of English the language so much, but I certainly do with what might be called an American/Western cadence. I'm sure it's noticeable occasionally, inducing just the slightest bit of microcringe in the typical person that hangs around here...
I like this question. I imagine the deeper motivation is to think harder about credit assignment.
I wrote about something similar a few years ago, but with the question of "who gets moral patienthood" rather than "who gets fined for violating copyright law". In the language of that comment, "you publishing random data" is just being an insignificant Seed.
Yeah, this can be really difficult to bring out. The word "just" is a good noticer for this creeping in.
It's like a deliberate fallacy of compression: sure you can tilt your view so they look the same and call it "abstraction", but maybe that view is too lossy for what we're trying to do! You're not distilling, you're corrupting!
I don't think the usual corrections for fallacies of compression can help either (eg. Taboo) because we're operating at the subverbal layer here. It's much harder to taboo cleverness at that layer. Better off meditating on the virtue of The Void instead.
But it is indeed a good habit to try to unify things, for efficiency reasons. Just don't get caught up on those gains.
The "shut up"s and "please stop"s are jarring.
Definitely not, for example, norms to espouse in argumentation (and tbf nowhere does this post claim to be a model for argument, except maybe implicitly under some circumstances).
Yet there's something to it.
There's a game of Chicken arising out of the shared responsibility to generate (counter)arguments. If Eliezer commits to Straight, ie. refuses to instantiate the core argument over and over again (either explicitly, by saying "you need to come up with the generator" or implicitly, by refusing to engage with ...
It occurred to me while reading your comment that I could respond entirely with excerpts from Minding our way. Here's a go (it's just fun, if you also find it useful, great!):
You will spend your entire life pulling people out from underneath machinery, and every time you do so there will be another person right next to them who needs the same kind of help, and it goes on and on forever
This is a grave error, in a world where the work is never finished, where the tasks are neverending.
Rest isn't something you do when everything else is finished. Everything e...
You draw boundaries towards questions.
As the links I've posted above indicate, no, lists don't necessarily require questions to begin noticing joints and carving around them.
Questions are helpful however, to convey the guess I might already have and to point at the intension that others might build on/refute. And so...
Your list doesn't have any questions like that
...I have had some candidate questions in the post since the beginning, and later even added some indication of the goal at the end.
EDIT: You also haven't acknowledged/objected to my response to y...
In Where to Draw the Boundaries, Zack points out (emphasis mine):
...The one replies:
But reality doesn't come with its joints pre-labeled. Questions about how to draw category boundaries are best understood as questions about values or priorities rather than about the actual content of the actual world. I can call dolphins "fish" and go on to make just as accurate predictions about dolphins as you can. Everything we identify as a joint is only a joint because we care about it.
No. Everything we identify as a joint is a joint not "because we care about it", but
It seemed to me that avoiding fallacies of compression was always a useful thing (independent of your goal, so long as you have the time for the computation), even if only negligibly so. Yet these questions seem to offer a bit of a counterexample: I have to be careful when what looks like decoupling might actually be decontextualizing.
Importantly, I can't seem to figure out a sharp line between the two. The examples were a useful meditation for me, so I shared them. Maybe I should rename the title to reflect this?
(I'm quite confused by my failure of conv...
Yes, this is the interpretation.
If I'm doing X wrong (in some way), it's helpful for me to notice it. But then I notice I'm confused about when decoupling context is the "correct" thing to do, as exemplified in the post.
Rationalists tend to take great pride in decoupling and seeing through narratives (myself included), but I sense there might be some times when you "shouldn't", and they seem strangely caught up with embeddedness in a way.
I think I might have made a mistake in putting in too many of these at once. The whole point is to figure out which forms of accusations are useful feedback (for whatever), and which ones are not, by putting them very close to questions we think we've dissolved.
Take three of these, for example. I think it might be helpful to figure out whether I'm "actually" enjoying the wine, or if it's a sort of a crony belief. Disentangling those is useful to make better decisions for myself, in say, deciding to go to a wine-tasting if status-boost with those people wou...
[ETA: posted a Question instead]
Question: What's the difference, conceptually, between each of the following if any?
..."You're only enjoying that food because you believe it's organic"
"You're only enjoying that movie scene because you know what happened before it"
"You're only enjoying that wine because of what it signals"
"You only care about your son because of how it makes you feel"
"You only had a moving experience because of the alcohol and hormones in your bloodstream"
"You only moved your hand because you moved your fingers"
"You're only showing courage bec
So... it looks like the second AI-Box experiment was technically a loss.
Not sure what to make of it, since it certainly imparts the intended lesson anyway. Was it a little misleading that this detail wasn't mentioned? Possibly. Although the bet was likely conceded, a little disclaimer of "overtime" would have been nice when Eliezer discussed it.
I was also surprised. Having spoken to a few people with crippling impostor syndrome, the summary seemed to be "people think I'm smart/skilled, but it's not Actually True."
I think the claim in the article is they're still in the game when saying that, just another round of downplaying themselves? This becomes really hard to falsify (like internalized misogyny) even if true, so I appreciate the predictions at the end.
I like the idea of it being closer to noise, but there are also reasons to consider the act of advertising theft, or worse:
There's a game of chicken in "who has to connect potential buyers to sellers, the buyers or the sellers?" and depending on who's paying to make the transaction happen, we call it "advertisement" or "consultancy".
(You might say "no, that distinction comes from the signal-to-noise ratio", so question: if increasing that ratio is what works, how come advertisements are so rarely informative?)
As a meta-example, even to this I want to add:
Often, people like that will respond well to criticism about X and Y but not about Z.
One (dark-artsy) aspect to add here is that the first time you ask somebody for criticism, you're managing more than your general identity, you're also managing your interaction norms with that person. You're giving them permission to criticize you (or sometimes, even think critically about you for the first time), creating common knowledge that there does exist a perspective from which it's okay/expected for them to do that. This is playing with the ...
Incidentally Eliezer, is this really worth your time?
This comment might have caused a tremendous loss of value, if Eliezer took Marcello's words seriously here and so stopped talking about his metaethics. As Luke points out here, despite all the ink spilled, very few seemed to have gotten the point (at least, from only reading him).
I've personally had to re-read it many times over, years apart even, and I'm still not sure I fully understand it. It's also been the most personally valuable sequence, the sole cause of significant fundame...
Ping!
I've read/heard a lot about double crux but never had the opportunity to witness it.
EDIT: I did find one extensive example, but this would still be valuable since it was a live debate.
This one? From the CT-thesis section in A first lesson in meta-rationality.
the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow.11 If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is proba...
Ideally, I'd make another ninja-edit that would retain the content in my post and the joke in your comment in a reflexive manner, but I am crap at strange loops.
Cold Hands Fallacy/Fake Momentum/Null-Affective Death Stall
Although Hot Hands has been the subject of enough controversy that perhaps it should no longer be termed a fallacy, there is a sense in which I've fooled myself before with fake momentum: changing my strategy using a faulty bottom line, ie. incorrectly updating on my current dynamic.
As a somewhat extreme but actual example from my own life: when filling out answersheets to multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about...
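One way to see how easy this self-deception is: even with fully independent fair coin flips, long streaks show up routinely, so a run of successes is weak evidence of any real "dynamic". A minimal simulation of my own (not from the original hot-hands literature):

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)
# 1000 "sessions" of 50 independent fair coin flips each.
streaks = [longest_streak([random.random() < 0.5 for _ in range(50)])
           for _ in range(1000)]
print("mean longest streak:", sum(streaks) / len(streaks))
print("sessions with a streak of 6+:", sum(s >= 6 for s in streaks))
```

Streaks of five or six identical outcomes in a row are entirely ordinary under pure chance, which is exactly the setting where "I'm on a roll, change strategy" is a faulty bottom line.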
There's a whole section on voting in the LDT For Economists page on Arbital. Also see the one for analytic philosophers, which has a few other angles on voting.
From what I can tell from your other comments on this page, you might already have internalized all the relevant intuitions, but it might be useful anyway. Superrationality is also discussed.
Sidenote: I'm a little surprised no one else mentioned it already. Somehow Arbital posts by Eliezer aren't considered as canon as the sequences, maybe it's the structure (rather than just the content)?
Thank you for this comment. I went through almost exactly the same thing, and might have possibly tabled it at the "I am really confused by this post" stage had I not seen someone well-known in the community struggle with and get through it.
My brain especially refused to read past the line that said "pushing it to 50% is like throwing away information": Why would throwing away information correspond to the magic number 50%?! Throwing away information brings you closer to maxent, so if true, what is it about the setup that makes 50% the unique solution, ind
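The maxent claim can be checked directly. A minimal sketch of my own (not from the post), showing that the Shannon entropy of a binary belief is uniquely maximized at p = 0.5, so moving an estimate toward 50% is literally moving toward the distribution that carries the least information:

```python
from math import log2

def bernoulli_entropy(p):
    """Shannon entropy (in bits) of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Entropy rises monotonically toward p = 0.5 from either side:
for p in [0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99]:
    print(f"p = {p:4.2f}  H = {bernoulli_entropy(p):.3f} bits")
```

The symmetry H(p) = H(1 - p) plus strict concavity is what makes 50% the unique maximum for a binary question; with more than two outcomes the maxent point would be the uniform distribution instead.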
...While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia").
Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":
I'm interested in this. The problem is that if people consider the value provided by the different currencies at all fungible, side markets will pop up that allow their exchange.
An idea I haven't thought about enough (mainly because I lack expertise) is to mark a token as Contaminated if its history indicates that it has passed through "illegal" channels, ie has benefited someone in an exchange not considered a true exchange of value, and so purists can refuse to accept those. Purist communities, if large, would allow stability of such non-contaminated tok
...The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside, they just look like how the world is/would be.
But "lower your expectations" can often be almost useless advice, kind of like "do the right thing".
Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? It's not clear at all if you're using territory-free un-asymmetric simple rules like "lower". Like any other attempt at truth-finding, it is not
...I think a counterexample to "you should not devote cognition to achieving things that have already happened" is being angry at someone who has revealed they've betrayed you, which might acause them to not have betrayed you.
Is metarationality about (really tearing open) the twelfth virtue?
It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.
(this is also why it always felt like metarationality seems to only provide comments where Eliezer would've just given you the code)
The parts that don't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as a "all models are false, some models are useful" but I'm not sure if that's the right perspective.
I want to ask this because I think I missed it the first few times I read Living in Many Worlds: Are you similarly unsatisfied with our response to suffering that's already happened, like how Eliezer asks, about the twelfth century? It's boldface "just as real" too. Do you feel the same "deflation" and "incongruity"?
I expect that you might think (as I once did) that the notion of "generalized past" is a contrived but well-intentioned analogy to manage your feelings.
But that's not so at all: once you've redone your ontology, where the naive idea of time isn
...Soares also did a good job of impressing this in Dive In:
In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.
The idea doesn't have to be good, and it doesn't have to be feasible...
I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as adding reflection, post which you might excise the cruft.
Being able to describe what you're doing, ie usefully compress existing strategies-in-use, is probably going to be helpful regardless of level of intelligence because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.
Computer science & ML will become lower in relevance/restricted in scope for the purposes of working with silicon-based minds, just as human-neurosurgery specifics are largely but not entirely irrelevant for most civilization-scale questions like economic policy, international relations, foundational research, etc.
Or IOW: Model neuroscience (and to some extent, model psychology) requires more in-depth CS/ML expertise than will the smorgasbord of incoming subfields of model sociology, model macroeconomics, model corporate law, etc.