All of a gently pricked vein's Comments + Replies

Computer science & ML will become less relevant/more restricted in scope for the purposes of working with silicon-based minds, just as human-neurosurgery specifics are largely but not entirely irrelevant for most civilization-scale questions like economic policy, international relations, foundational research, etc.

Or IOW: Model neuroscience (and to some extent, model psychology) requires more in-depth CS/ML expertise than will the smorgasbord of incoming subfields of model sociology, model macroeconomics, model corporate law, etc.

EA has gotten a little more sympathetic to vibes-based reasoning recently, and will continue to incorporate more of it.

The mind (ie. your mind), and how it is experienced from the inside, is potentially a very rich source of insights for keeping AI minds aligned on the inside.

The virtue of the void is indeed the virtue above all others (in rationality), and fundamentally unformalizable.

There is likely a deep compositional structure to be found for alignment, possibly to the extent that AGI alignment could come from "merely" stacking together "microalignment", even if in non-trivial ways.


I haven't read this post super deeply yet, but obviously this is one of those excellent posts that's going to become a Schelling point for various semi-related gripes after a mere token skim, even though most of them have been anticipated already in the post!

Some of those gripes are:
- Near enemies: Once a term for a phenomenon is entrenched in a community, it's a lot a lot a lot of work to name anything that's close to it but not quite it. (See, for example, "goodhart" for what is IMO a very diverse and subtle cluster of clumsiness in holding onto intentio... (read more)

Even if SEP was right about getting around the infinity problem and CK was easy to obtain before, it certainly isn't now! (Because there is some chance that whoever you're talking to has read this post, and whoever reads this post will have some doubt about whether the other believes that...)
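
(For reference, a standard way to write out the regress formally, where E_G p reads "everyone in group G knows p"; the notation is mine, not the post's or SEP's:)

```latex
% Common knowledge as the infinite conjunction of every finite order of
% mutual knowledge -- the "infinity problem" referred to above.
\[
  C_G\,p \;=\; E_G\,p \,\wedge\, E_G E_G\,p \,\wedge\, E_G E_G E_G\,p \,\wedge\,\cdots
  \;=\; \bigwedge_{n \ge 1} E_G^{\,n}\,p .
\]
```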

Love this post overall! It's hard to overstate the importance of (what is believed to be) common knowledge. Legitimacy is, as Vitalik notes[1], the most important scarce resource (not just in crypto) and is likely closer to whatever we usually intend to name when we sa... (read more)

I've been a longtime CK atheist (and have been an influence on Abram's post), and your comment is in the shape of my current preferred approach. Unfortunately, rational ignorance seems to require CK that agents will engage in bounded thinking, and not be too rational! 

(CK-regress like the above is very common and often non-obvious. It seems plausible that we must accept this regress and in fact humans need to be Created Already in Coordination, in analogy with Created Already in Motion)

I think it is at least possible to attain p-CK in the case that th... (read more)

What are the standard doomy "lol no" responses to "Any AGI will have a smart enough decision theory to not destroy the intelligence that created it (ie. us), because we're only willing to build AGI that won't kill us"?

(I suppose it isn't necessary to give a strong reason why acausality will show up in AGI decision theory, but one good one is that it has to be smart enough to cooperate with itself.)

Some responses that I can think of (but I can also counter, with varying success):

A. Humanity is racing to build an AGI anyway, this "decision" is not really eno... (read more)

2Raemon
Here's a more detailed writeup about this: https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-to-have-nice 

When there's not a "right" operationalization, that usually means that the concepts involved were fundamentally confused in the first place.


Curious about the scope of the conceptual space where this belief was calibrated. It seems to me to tacitly say something like "everything that's important is finitely characterizable".

Maybe the "fundamentally confused" in your phrasing already includes the case of "stupidly tried to grab something that wasn't humanly possible, even if in principle" as a confused way for a human, without making any claim of reality bei... (read more)

It seems like one of the most useful features of having agreement separate from karma is that it lets you vote up the joke and vote down the meaning :)

Thanks for clarifying! And for the excellent post :)

Finally, when steam flows out to the world, and the task passes out of our attention, the consequences (the things we were trying to achieve) become background assumptions. 

To the extent that Steam-in-use is a kind of useful certainty about the future, I'd expect "background assumptions" to become an important primitive that interacts in this arena as well, given that it's a useful certainty about the present. I realize that's possibly already implicit in your writing when you say figure/ground.

I think some equivalent of Steam pops out as an important concept in enabling-agency-via-determinism (or requiredism, as Eliezer calls it), when you have in your universe both:

  • iron causal laws coming from deterministic physics and
  • almost iron "telic laws" coming from regulation by intelligent agents with something to protect.

The latter is something that can also become a very solid (full of Steam) thing to lean on for your choice-making, and that's an especially useful model to apply to your selves across time or to a community trying to self-organize. It s... (read more)

3abramdemski
Indeed, this seems quite central.  I agree that this is something to poke at to try to improve the concepts I've suggested.  My intuition is that steam flows from the "free-to-allocate" pile, to specific tasks, and from there to making-things-be-the-case in the world.  So having lots of steam in the "free-to-allocate" pile is actually having lots of slack; the agent has not set up binding constraints on itself yet.  Having lots of steam on a specific task is having no slack; you've set up constraints that are now binding you, but the task is still very much in the foreground. You are still often trying to figure out how to make something happen. However, parts of the task have become background assumptions; your attention will not be on "why am I doing this" or other questions like that. Finally, when steam flows out to the world, and the task passes out of our attention, the consequences (the things we were trying to achieve) become background assumptions.  ... Or something like that. 

I'm unsure if open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I'm in agreement that, when you're actually working with a negation-of-concept, at least a careful reconsideration is needed of the intuitions one takes for granted when working with a concept. And "believing in" might be one of those things that you can't really do with negation-of-concepts.

Also, I think a typo: you said "logical complement", I'm imagining you meant "set-theoretic complement". (This seems important to point out since in to... (read more)

2MrMind
I should have written "algebraic complement", which becomes logical negation or set-theoretic complement depending on the model of the theory. Anyway, my intuition on why open sets are an interesting model for concepts is this: "I know it when I see it" seems to describe a lot of the way we think about concepts. Often we don't have a precise definition that could settle all the edge cases, but we pretty much have a strong intuition about when a concept does apply. This is what happens with recursively enumerable sets: if a number belongs to an R.E. set, you will find out, but if it doesn't, you need to wait an infinite amount of time. Systems that take seriously the idea that confirmation of truth is easy fall under the banner of "geometric logic", whose algebraic models are frames, and topologies are just frames of subsets. So I see the relation between "facts" and "concepts" a little bit like the relation between "points" and "open sets", but more in an "internal language of a topos" or "pointless topology" fashion: we don't have access to points per se, only to open sets, and we imagine that points are infinite chains of ever more precise open sets.
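
(A minimal runnable sketch of the semidecidability asymmetry described above; the example set and the function names are illustrative choices of mine, not from the comment:)

```python
# Semidecidability: membership in a recursively enumerable set can be
# confirmed in finite time by enumerating the set, but for a non-member
# the search never terminates.

def enumerate_squares():
    """Enumerate a simple r.e. set: the perfect squares 0, 1, 4, 9, ..."""
    n = 0
    while True:
        yield n * n
        n += 1

def semi_decide(x, enumeration):
    """Return True once x shows up in the enumeration.

    If x is not in the set, this loops forever -- the "wait an infinite
    amount of time" half of the asymmetry.
    """
    for member in enumeration():
        if member == x:
            return True

# semi_decide(49, enumerate_squares)  -> True after a few steps
# semi_decide(50, enumerate_squares)  -> never returns
```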

I began reading this charitably (unaware of whatever inside baseball is potentially going on and seems to be alluded to), but to be honest I struggled after "X" seemed to really want someone (Eliezer) to admit they're "not smart"? I'm not sure why that would be relevant.

I think I found these lines especially confusing, if you want to explain:

  • "I just hope that people can generalize from "alignment is hard" to "generalized AI capabilities are hard".

    Is capability supposed to be hard for similar reasons as alignment? Can you expand/link? The only argument
... (read more)
5jessicata
I'm not sure exactly what is meant; one guess is that it's about centrality: making yourself more central (more making executive decisions, more of a bottleneck on approving things, more looked to as a leader by others, etc) makes more sense the more correct you are about relevant things relative to other people. Saying "oh, I was wrong about a lot, whoops" is the kind of thing someone might do before e.g. stepping down as project manager or CEO. If you think your philosophy has major problems and your replacements' philosophies have fewer major problems, that might increase the chance of success. I would guess this is comparable to what Eliezer is saying in this post about how some people should just avoid consequentialist reasoning because they're too bad at it and unlikely to improve: ... Alignment is hard because it's a quite general technical problem. You don't just need to make the AI aligned in case X, you also have to make it aligned in cases Y and Z. To do this you need to create very general analysis and engineering tools that generalize across these situations. Similarly, AGI is a quite general technical problem. You don't just need to make an AI that can do narrow task X, it has to work in cases Y and Z too, or it will fall over and fail to take over the world at some point. To do this you need to create very general analysis and engineering tools that generalize across these situations. For an intuition pump about this, imagine that LW's effort towards making an aligned AI over the past ~14 years was instead directed at making AGI. We have records of certain mathematical formalisms people have come up with (e.g. UDT, logical induction). These tools are pretty far from enhancing AI capabilities. If the goal had been to enhance AI capabilities, they would have enhanced AI capabilities more, but still, the total amount of intellectual work that's been completed is quite small compared to how much intellectual work would be required to build a work

There's probably a radical constructivist argument for not really believing in open/noncompact categories like . I don't know how to make that argument, but this post too updates me slightly towards such a Tao of conceptualization.

(To not commit this same error at the meta level: Specifically, I update away from thinking of general negations as "real" concepts, disallowing statements like "Consider a non-chair, ...").

But this is maybe a tangent, since just adopting this rule doesn't resolve the care required in aggregation with even compact categori... (read more)

3MrMind
There is, at least at a mathematical / type-theoretic level. In intuitionistic logic, ¬A is translated to A→0, which is the type of processes that turn an element of A into an element of 0; but since 0 is empty, the whole ¬A is absurd as long as A is instantiated (if not, then the only member is the empty identity). This is also why constructively A→¬¬A but not ¬¬A→A. Closely related to constructive logic is topology, and indeed if concepts are open sets, the logical complement is not a concept. Topology is also nice because it formalizes the concept of an edge case.
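
(A sketch of that last claim in Lean, where ¬A is definitionally A → False, matching the A → 0 translation; the theorem names are mine:)

```lean
-- A → ¬¬A is provable constructively: given a : A and a refutation na : ¬A,
-- apply na to a to get False.
theorem dni (A : Prop) : A → ¬¬A :=
  fun a na => na a

-- ¬¬A → A is not provable constructively; in Lean it needs a classical
-- principle such as Classical.byContradiction.
theorem dne (A : Prop) : ¬¬A → A :=
  fun nna => Classical.byContradiction nna
```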

(A suggestion for the forum)

You know that old post on r/ShowerThoughts which went something like "People who speak somewhat broken english as their second language sound stupid, but they're actually smarter than average because they know at least one other language"?

I was thinking about this. I don't struggle with my grasp of English the language so much, but I certainly do with what might be called an American/Western cadence. I'm sure it's noticeable occasionally, inducing just the slightest bit of microcringe in the typical person that hangs around here... (read more)

2nim
Could this be accomplished using custom commenting guidelines? Perhaps just adding a sentence about whether one wants to opt into or out of linguistic-aesthetic feedback would suffice if one has strong feelings on the matter. This would work for top level posts, but for comment replies, the commenting guidelines feature would need to be expanded to show the guidelines of the person being replied to as well as the author of the main post. For instance, when writing this reply I see only Raemon's commenting guidelines.

I like this question. I imagine the deeper motivation is to think harder about credit assignment. 

I wrote about something similar a few years ago, but with the question of "who gets moral patienthood" rather than "who gets fined for violating copyright law". In the language of that comment, "you publishing random data" is just being an insignificant Seed.

Yeah, this can be really difficult to bring out. The word "just" is a good noticer for this creeping in.

It's like a deliberate fallacy of compression: sure you can tilt your view so they look the same and call it "abstraction", but maybe that view is too lossy for what we're trying to do! You're not distilling, you're corrupting!

I don't think the usual corrections for fallacies of compression can help either (eg. Taboo) because we're operating at the subverbal layer here. It's much harder to taboo cleverness at that layer. Better off meditating on the virtue of The Void instead.

But it is indeed a good habit to try to unify things, for efficiency reasons. Just don't get caught up on those gains.

The "shut up"s and "please stop"s are jarring.

Definitely not, for example, norms to espouse in argumentation (and tbf nowhere does this post claim to be a model for argument, except maybe implicitly under some circumstances).

Yet there's something to it.

There's a game of Chicken arising out of the shared responsibility to generate (counter)arguments. If Eliezer commits to Straight, ie. refuses to instantiate the core argument over and over again (either explicitly, by saying "you need to come up with the generator" or implicitly, by refusing to engage with ... (read more)

It occurred to me while reading your comment that I could respond entirely with excerpts from Minding our way. Here's a go (it's just fun, if you also find it useful, great!):

You will spend your entire life pulling people out from underneath machinery, and every time you do so there will be another person right next to them who needs the same kind of help, and it goes on and on forever

This is a grave error, in a world where the work is never finished, where the tasks are neverending.

Rest isn't something you do when everything else is finished. Everything e... (read more)

3jaspax
Returning to this very belatedly. I actually agree with most of what you say here, and the points of disagreement are not especially important. However, my point WRT the original analogy is that it doesn't seem to me to be compatible with these insights. If the general state of the world is equivalent to an emergency in which a man is drowning in a river, then the correct course of action is heroic, immediate intervention. But this, as some of your quotes point out, is totally unsustainable as a permanent state of mind. The outcome, if we take that seriously, is either crippling scrupulosity or total indifference. The correct move is just to reject the original equivalence. The state of the world is NOT equivalent to an emergency in which a man is drowning in a river, and intuitions drawn from the prior scenario are NOT applicable to everyday existence.

You draw boundaries towards questions.

As the links I've posted above indicate, no, lists don't necessarily require questions to begin noticing joints and carving around them.

Questions are helpful however, to convey the guess I might already have and to point at the intension that others might build on/refute. And so...

Your list doesn't have any questions like that

...I have had some candidate questions in the post since the beginning, and later even added some indication of the goal at the end.

EDIT: You also haven't acknowledged/objected to my response to y... (read more)

2ChristianKl
I have plenty of comments on the Zack post you link, and I don't agree with it. As Thomas Kuhn argued, the fact that chemists and physicists disagree about whether helium is a molecule is no problem. Both communities have reasons to carve out the joints differently. Different paradigms have valid reasons to draw lines differently.

In Where to Draw the Boundaries, Zack points out (emphasis mine):

The one replies:

But reality doesn't come with its joints pre-labeled. Questions about how to draw category boundaries are best understood as questions about values or priorities rather than about the actual content of the actual world. I can call dolphins "fish" and go on to make just as accurate predictions about dolphins as you can. Everything we identify as a joint is only a joint because we care about it.

No. Everything we identify as a joint is a joint not "because we care about it", but

... (read more)
2ChristianKl
You draw boundaries towards questions. I can ask many questions about wine: "Do I enjoy drinking wine?", "Do I get good value for money when I seek enjoyment by paying money for wine?", "Is the wine inherently enjoyable?" and a bunch of others. Answering those questions is about drawing boundaries the same way as answering "Is a dolphin a fish?" is about drawing boundaries. Your list doesn't have any questions like that and thus there aren't any boundaries to be drawn. As far as the question "What is a dolphin?" goes, at Wikidata our answer at the moment is that a dolphin is "organisms known by a particular common name", because the word dolphin does not refer to a single species of animal or a taxon in the taxonomic tree. Speaking of dolphins while you reject categorizations that are not taxonomically accurate makes little sense in the first place.

It seemed to me that avoiding fallacies of compression was always a useful thing (independent of your goal, so long as you have the time for computation), even if only negligibly so. Yet these questions seem to offer a bit of a counterexample, namely that I have to be careful when what looks like decoupling might actually be decontextualizing.

Importantly, I can't seem to figure out a sharp line between the two. The examples were a useful meditation for me, so I shared them. Maybe I should rename the title to reflect this?

(I'm quite confused by my failure of conv... (read more)

2ChristianKl
I don't think that you failed to communicate the point. It's just that the approach to dealing with the issue at hand is seen as bad. And that's actually useful feedback. Thinking "they only disagree because they didn't understand me for some reason that's confusing to me" is not useful. Goals are part of the meaning, and thus any attempt to analyse the meaning independently of the goals is confused. For epistemic rationality the goal is usually about the ability to make accurate predictions, and for instrumental rationality the goals are about achieving certain outcomes.

Yes, this is the interpretation. 

If I'm doing X wrong (in some way), it's helpful for me to notice it. But then I notice I'm confused about when decoupling context is the "correct" thing to do, as exemplified in the post. 

Rationalists tend to take great pride in decoupling and seeing through narratives (myself included), but I sense there might be some times when you "shouldn't", and they seem strangely caught up with embeddedness in a way.

I think I might have made a mistake in putting in too many of these at once. The whole point is to figure out which forms of accusations are useful feedback (for whatever), and which ones are not, by putting them very close to questions we think we've dissolved.

Take three of these, for example. I think it might be helpful to figure out whether I'm "actually" enjoying the wine, or if it's a sort of a crony belief. Disentangling those is useful to make better decisions for myself, in say, deciding to go to a wine-tasting if status-boost with those people wou... (read more)

8ChristianKl
Whether something is useful feedback depends on goals. Feedback is either useful for achieving a given goal or it isn't. You didn't list any goals, and thus it's meaningless to speak of which of those are useful feedback. We might engage in mind reading and make up plausible goals that the person who's the target of the accusations might have, and discuss whether or not the feedback is useful for the goals that we imagine, but mind reading is generally problematic.

[ETA: posted a Question instead]

Question: What's the difference, conceptually, between each of the following if any?

"You're only enjoying that food because you believe it's organic"

"You're only enjoying that movie scene because you know what happened before it"

"You're only enjoying that wine because of what it signals"

"You only care about your son because of how it makes you feel"

"You only had a moving experience because of the alcohol and hormones in your bloodstream"

"You only moved your hand because you moved your fingers"

"You're only showing courage bec

... (read more)
2Charlie Steiner
I think the main complaint about "signalling" is when it's a lie. E.g. if there's some product that claims to be sophisticated, but is in fact not a reliable signal of sophistication (being usable without sophistication at all). Then people might feel affronted by people who propagate the advertising claims because of honesty-based aesthetics. I'm happy to call this an important difference from non-lie signalling, and also from other aesthetic preferences. Oh, and there's wasteful signalling, can't forget about that either.

So... it looks like the second AI-Box experiment was technically a loss

Not sure what to make of it, since it certainly imparts the intended lesson anyway. Was it a little misleading that this detail wasn't mentioned? Possibly. Although the bet was likely conceded, a little disclaimer of "overtime" would have been nice when Eliezer discussed it.

2Tetraspace
:0, information on the original AI box games! What's interesting about this is that, despite the framing of Player B being the creator of the AGI, they are not. They're still only playing the AI box game, in which Player B loses by saying that they lose, and otherwise they win. For a time I suspected that the only way that Player A could win a serious game is by going meta, but apparently this was done just by keeping Player B swept up in their role enough to act how they would think the creator of the AGI would act. (Well, saying "take on the role of [someone who would lose]" is meta, in a sense.)

I was also surprised. Having spoken to a few people with crippling impostor syndrome, the summary seemed to be "people think I'm smart/skilled, but it's not Actually True." 

I think the claim in the article is they're still in the game when saying that, just another round of downplaying themselves? This becomes really hard to falsify (like internalized misogyny) even if true, so I appreciate the predictions at the end.

3Viliam
I suppose the same situation can be described using different words, so it is difficult to argue what is the correct framing. (I still think this is falsifiable in principle, e.g. by measuring the serotonin levels, but no one probably did that.) To me, this sounds like "people treat me better than I deserve", which means "I don't deserve to be treated well", which is kinda the thing I am pointing towards. And yeah, the predictions are there to make something sufficiently non-ambiguous. Actually, only the prediction with weightlifting is like that, because what "makes the patient feel stronger or more popular" is also debatable.

I like the idea of it being closer to noise, but there are also reasons to consider the act of advertising theft, or worse:

  • It feels like the integrity of my will is attacked, when ads work and I know somewhere that I don't want it to; a divide and conquer attack on my brain, Moloch in my head.
  • If they get the most out of marketing it to parts of my brain rather than to me as a whole, there is optimization pressure to keep my brain divided, to lower the sanity waterline.
  • Whenever I'm told to "turn off adblocker", for that to work for them, it's premised on me
... (read more)
2DirectedEvolution
Life is full of pressures, it’s true. Not just from deliberate advertising, but from social comparisons. Unfortunately, it seems to me that it’s ultimately our responsibility to engage or not. If a website demands you turn off adblocker to read, you can always click away. If you literally feel like ads are driving you insane, equivalent to being beaten with a belt every time you glance at a billboard, or compromising your very ability to make meaningful choices... Well, that is probably a stronger visceral reaction than most people have to ads. I for one often choose to get free content in exchange for trying to ignore ads. I feel this deal is worthwhile to me sometimes, other times not. I pay for a subscription to Spotify so that my music listening can go uninterrupted by ads. I don’t pay for a subscription to news sites that I can read if I turn off my ad blocker.

There's a game of chicken in "who has to connect potential buyers to sellers, the buyers or the sellers?" and depending on who's paying to make the transaction happen, we call it "advertisement" or "consultancy".

(You might say "no, that distinction comes from the signal-to-noise ratio", so question: if increasing that ratio is what works, how come advertisements are so rarely informative?)

As a meta-example, even to this I want to add:

  • There's this other economy to keep in mind of readers scrolling past walls of text. Often, I can and want to make what I'm saying cater to multiple attention spans (a la arbital?), and collapsed-by-default comments allow the reader to explore at will.
    • A strange worry (that may not be true for other people) is attempting to contribute to someone else's long thread or list feels a little uncomfortable/rude without reading it all/carefully. With collapsed-by-default, you could set up norms that it's okay to reply w
... (read more)
  • I noticed a thing that might hinder the goals of longevity as described here ("build on what was already said previously"): it feels like a huge cost to add a tiny/incremental comment to something because of all the zero-sum attention games it participates in. 

    It would be nice to do a silent comment, which:

    • Doesn't show up in Recent Comments
    • Collapsed by default
    • (less confident) Doesn't show up in author's notifications (unless "Notify on Silent" is enabled in personal settings)
    • (kinda weird) Comment gets appended automatically to previous comment (if you
... (read more)
2John_Maxwell
Why not just have a comment which is a list of bullet points and keep editing it?
3a gently pricked vein
As a meta-example, even to this I want to add:
  • There's this other economy to keep in mind of readers scrolling past walls of text. Often, I can and want to make what I'm saying cater to multiple attention spans (a la arbital?), and collapsed-by-default comments allow the reader to explore at will.
    • A strange worry (that may not be true for other people) is attempting to contribute to someone else's long thread or list feels a little uncomfortable/rude without reading it all/carefully. With collapsed-by-default, you could set up norms that it's okay to reply without engaging deeply.
  • It would be nice to have collapsing as part of the formatting
    • With this I already feel like I'm setting up a large-ish personal garden that would inhibit people from engaging in this conversation even if they want to, because there's so much going on.
    • And I can't edit this into my previous comment without cluttering it.
  • There's obviously no need for having norms of "talking too much" when it's decoupled from the rest of the control system
    • I do remember Eliezer saying in a small comment somewhere long ago that "the thumb rule is to not occupy more than three places in the Recent Comments page" (paraphrased).

Often, people like that will respond well to criticism about X and Y but not about Z.

One (dark-artsy) aspect to add here is that the first time you ask somebody for criticism, you're managing more than your general identity, you're also managing your interaction norms with that person. You're giving them permission to criticize you (or sometimes, even think critically about you for the first time), creating common knowledge that there does exist a perspective from which it's okay/expected for them to do that. This is playing with the ... (read more)

Incidentally Eliezer, is this really worth your time?

This comment might have caused a tremendous loss of value, if Eliezer took Marcello's words seriously here and so stopped talking about his metaethics. As Luke points out here, despite all the ink spilled, very few seemed to have gotten the point (at least, from only reading him).

I've personally had to re-read it many times over, years apart even, and I'm still not sure I fully understand it. It's also been the most personally valuable sequence, the sole cause of significant fundame... (read more)

1TAG
If there is an urgent need to actually build safe AI, as was widely believed 10+ years ago, Marcello's comment makes sense.

Ping!

I've read/heard a lot about double crux but never had the opportunity to witness it.

EDIT: I did find one extensive example, but this would still be valuable since it was a live debate.

This one? From the CT-thesis section in A first lesson in meta-rationality.

the objection turns partly on the ambiguity of the terms “system” and “rationality.” These are necessarily vague, and I am not going to give precise definitions. However, by “system” I mean, roughly, a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow. If a person is an algorithm, it is probably an incomprehensibly vast one, which could not be written concisely. It is proba
... (read more)
2Sniffnoy
That sounds like it might have been it?

Ideally, I'd make another ninja-edit that would retain the content in my post and the joke in your comment in a reflexive manner, but I am crap at strange loops.

Cold Hands Fallacy/Fake Momentum/Null-Affective Death Stall

Although Hot Hands has been the subject of enough controversy to perhaps no longer be termed a fallacy, there is a sense in which I've fooled myself before with a fake momentum. I mean when you change your strategy using a faulty bottom line: incorrectly updating on your current dynamic.

As a somewhat extreme but actual example from my own life: when filling out answersheets to multiple-choice questions (with negative marks for incorrect responses) as a kid, I'd sometimes get excited about... (read more)

2Matt Goldenberg
Above, a visual depiction of strangepoop.
Answer by a gently pricked vein100

There's a whole section on voting in the LDT For Economists page on Arbital. Also see the one for analytic philosophers, which has a few other angles on voting.

From what I can tell from your other comments on this page, you might already have internalized all the relevant intuitions, but it might be useful anyway. Superrationality is also discussed.

Sidenote: I'm a little surprised no one else mentioned it already. Somehow Arbital posts by Eliezer aren't considered as canon as the sequences; maybe it's the structure (rather than just the content)?

5wizzwizz4
I think it's just reachability. Arbital is Far Away, and it's plausible that not everyone even knows it exists.

I usually call this lampshading, and I'll link this comment to explain what I mean. Thanks!

Thank you for this comment. I went through almost exactly the same thing, and might have possibly tabled it at the "I am really confused by this post" stage had I not seen someone well-known in the community struggle with and get through it.

My brain especially refused to read past the line that said "pushing it to 50% is like throwing away information": Why would throwing away information correspond to the magic number 50%?! Throwing away information brings you closer to maxent, so if true, what is it about the setup that makes 50% the unique solution, ind

... (read more)
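
(An aside of mine, not from the comment: the reason 50% is singled out for a binary question is that it is the unique maximum-entropy credence, so pushing a probability toward it is moving toward maximum ignorance.)

```latex
% Binary entropy and its maximizer.
\[
  H(p) = -p\log p - (1-p)\log(1-p), \qquad
  \frac{dH}{dp} = \log\frac{1-p}{p} = 0 \iff p = \tfrac{1}{2}.
\]
```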

While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia").

Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":

  • if modeling and solving akrasia is, like diet, a hard problem that even "experts" barely have an edge on, and importantly, things that do work seem to be very individual-specific making it quite hard to stand on the shoulders of giants
  • if a large percentage of people who've found a
... (read more)

I'm interested in this. The problem is that if people consider the value provided by the different currencies at all fungible, side markets will pop up that allow their exchange.

An idea I haven't thought about enough (mainly because I lack expertise) is to mark a token as Contaminated if its history indicates that it has passed through "illegal" channels, ie has benefited someone in an exchange not considered a true exchange of value, and so purists can refuse to accept those. Purist communities, if large, would allow stability of such non-contaminated tok

... (read more)

The expectations you do not know you have control your happiness more than you know. High expectations that you currently have don't look like high expectations from the inside, they just look like how the world is/would be.

But "lower your expectations" can often be almost useless advice, kind of like "do the right thing".

Trying to incorporate "lower expectations" often amounts to "be sad". How low should you go? It's not clear at all if you're using territory-free un-asymmetric simple rules like "lower". Like any other attempt at truth-finding, it is not

... (read more)

I think a counterexample to "you should not devote cognition to achieving things that have already happened" is being angry at someone who has revealed they've betrayed you, which might acause them to not have betrayed you.

Is metarationality about (really tearing open) the twelfth virtue?

It seems like it says "the map you have of map-making is not the territory of map-making", and gets into how to respond to it fluidly, with a necessarily nebulous strategy of applying the virtue of the Void.

(this is also why it always felt like metarationality seems to only provide comments where Eliezer would've just given you the code)

The parts that don't quite seem to follow is where meaning-making and epistemology collide. I can try to see it as a "all models are false, some models are useful" but I'm not sure if that's the right perspective.

2Viliam
From a certain perspective, "more models" becomes one model anyway, because you still have to choose which of the models you are going to use at a specific moment. Especially when multiple models, all of them "false but useful", would suggest taking a different action. As an analogy, it's like saying that your artificial intelligence will be an artificial meta-intelligence, because instead of following one algorithm, as other artificial intelligences do, it will choose between multiple algorithms. At the end of the day, "if P1 then A1 else if P2 then A2 else A3" still remains one algorithm. So the actual question is not whether one algorithm or many algorithms is better, but whether having a big if-switch at the top level is the optimal architecture. (Dunno, maybe it is, but from this perspective it suddenly feels much less "meta" than advertised.)
3Gordon Seidoh Worley
Coming from within that framing, I'd say yes.
Answer by a gently pricked vein10

I want to ask this because I think I missed it the first few times I read Living in Many Worlds: Are you similarly unsatisfied with our response to suffering that's already happened, like how Eliezer asks, about the twelfth century? It's boldface "just as real" too. Do you feel the same "deflation" and "incongruity"?

I expect that you might think (as I once did) that the notion of "generalized past" is a contrived but well-intentioned analogy to manage your feelings.

But that's not so at all: once you've redone your ontology, where the naive idea of time isn

... (read more)

Soares also did a good job of impressing this in Dive In:

In my experience, the way you end up doing good in the world has very little to do with how good your initial plan was. Most of your outcome will depend on luck, timing, and your ability to actually get out of your own way and start somewhere. The way to end up with a good plan is not to start with a good plan, it's to start with some plan, and then slam that plan against reality until reality hands you a better plan.

The idea doesn't have to be good, and it doesn't have to be feasible
... (read more)
2Matt Goldenberg
Malcolm Ocean and Duncan Sabien also made a good go of it in Just Do a Thing.

I don't think the "idea of scientific thinking and evidence" has so much to do with throwing away information as adding reflection, post which you might excise the cruft.

Being able to describe what you're doing, ie usefully compress existing strategies-in-use, is probably going to be helpful regardless of level of intelligence because it allows you to cheaply tweak your strategies when either the situation or the goal is perturbed.
