This is a special post for quick takes by Hazard. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

In light of reading through Raemon's shortform feed, I'm making my own. Here will be smaller ideas that are on my mind.

Hazard's Shortform Feed

HOLY shit! I just checked out the new concepts portion of the site that shows you all the tags. This feels like a HUGE step in the direction of the LW team's vision of a place where knowledge production can actually happen.

4Raemon
Woo! Glad that had the intended effect. :)

"People are over sensitive to ostracism because human brains are hardwired to be sensitive to it, because in the ancestral environment it meant death."

Evopsyche seems mostly overkill for explaining why a particular person is strongly attached to social reality. 

People who did not care what their parents or school-teachers thought of them had a very hard time. "Socialization" is the process of the people around you integrating you (often forcefully) into the local social reality. Unless you meet a minimum bar of socialization, it's very common to be shunted through systems that treat you worse and worse. Awareness of this, and the lasting imprint of the coercive methods used to integrate one into social reality, seem like they can explain most of an individual's resistance to breaking from it.

I've recently re-read Lou Keep's Uruk series, and a lot more ideas have clicked together. I'm going to briefly summarize each post (hopefully will tie things together if you have read them, might not make sense if you haven't). This is also a mini-experiment in using comments to make a twitter-esque idea thread.

7Hazard
#4 Without belief in a god, never without belief in the devil: This post tracks ideas in The True Believer, by Hoffer. The main MO of a mass movement (MM) is to replace action with identity. This is the general phenomenon of which narcissism (TLP and samzdat brand) is a specific form. Moloch-like forces conspire such that the most successful MMs will be the ones that do the best job of keeping their members very frustrated. Hate is often used to keep the fire burning.
6Hazard
#3 Use and Abuse of Witchdoctors: One sentence: Metis is the belief, the ritual, and the world view, and they are way less separable than you think. Explores the recent history of gri-gri, witch doctor magic used in Africa to make people invulnerable to bullets to fight against local warlords (it also can involve some nasty sacrifice and cannibalism rituals). Lou emphasizes the point that it's not enough to go "ah, gri-gri is a useful lie that helps motivate everyone to fight as a unified force, and fighting as a unified force is what actually has a huge impact on fighting off warlords..." The State's response is likely going to be "Ahhh, so gri-gri doesn't do anything, let's ban it and just tell people to fight in groups". This will fail, because it has no theory of individual adoption (i.e., the only reason people fought as one was because they literally thought they were invulnerable). This is all to hammer in the point that for any given piece of illegible metis, it's very hard to find an actual working replacement, and Very hard (possibly beyond the state's pay grade) to find a legible replacement.
6Hazard
#2 The Meridian of Her Greatness: One sentence: People care about the social aspects of life, and the social is now embedded in market structures in a way that allows Moloch-esque forces to destroy the good social stuff. It starts by addressing the "weirdness" of everyone being angry, even though people are richer than ever. This post tracks the book The Great Transformation by Polanyi. Claim (quoting Polanyi): Capitalism is differentiated from markets. The reason is that markets have always been around (they were mediated and controlled through social relationships); the new/recent thing is building society around a market. Claim: Once you treat labor and land like common market goods and subject them to the flows of the market, you open up a pathway for Moloch to gnaw away at your soul. Now "incentives" can apply pressure such that you slowly sacrifice more and more of the social/relational aspects of life that people actually care about.
6Hazard
#1 Man as a rational animal: The concept of legibility is introduced (I like Ribbon Farm's explanation of the concept). The state only talks in terms of legibility, and thus can't understand illegible claims, ideas, and practices. The powerless (i.e. the illegible who can't speak in the terms of the state) tend to get crushed. (Nowadays an illegible group would be Christians.) Lou points to the current process/trajectory of the state slowly legibilizing the world, and destroying all that is illegible in its path. Besides noting this process, Lou also claims that some of those illegible practices are valuable, and because the state does not truly understand the illegible practices it destroys, the state does not provide adequate replacements. Extra claim: a lot of the illegible metis being destroyed has to do with happiness, fulfillment, and other essential components of human experience.
4[anonymous]
I really like that you're doing this! I've tried to get into the series, but I haven't done so in a while. Thanks for the summaries! (Also, maybe it'd be good for future comments about what you're doing to be children of this post, so it doesn't break the flow of summaries.)

Over the past few months I've noticed a very consistent cycle.

  1. Notice something fishy about my models
  2. Struggle and strain until I'm able to formulate the extra variable/handle needed to develop the model
  3. Re-read an old post from the sequences and realize "Oh shit, Eliezer wrote a very lucid description of literally this exact same thing."

What's surprising is how much I'm surprised by how much this happens.

1Hazard
Often I have an idea three times in various forms before it makes it to the territory of, "Well thought out idea that I'm actually acting upon and having good stuff come from it." By default, I follow a pattern of, "semi-randomly expose myself to lots of ideas, not worry a lot about screening for repetitive stuff, let the most salient ideas at any given moment float up to receive tid-bits of conscious thought, then forget about them till the next semi-random event triggers it being thought about." I'd be interested if there was a better protocol for, "This thing I've encountered seems extra important/interesting, let me dwell on it more and more intentionally integrate it into my thinking."
1Hazard
Ahh, the "meta-thoughts" idea in seems like a useful thing to apply if/when this happens again. (which begs the questions, when I wrote the above comment, why didn't I have the meta-thought that I did in the linked comment? (I don't feel up to thinking about that in this moment)) *tk*

Here's a pattern I'm noticing more and more: Gark makes a claim. Tlof doesn't have any particular contradictory beliefs, but takes up argument with Gark, because (and this is the actual-source-of-behavior because) the claim pattern matches "Someone trying to lay claim to a tool to wield against me", and people often try to get claims "approved" to be used against each other.

Tlof's behavior is a useful adaptation to a combative conversational environment, and has been normalized to feel like a "simple disagreement". Even in high trust scenarios, Tlof by habit continues to follow conversational behaviors that get in the way of good truth seeking.

4Hazard
A bit more generalized: there are various types of "gotcha!"s that people can pull in conversation, and it is possible to habituate various "gotcha!" defenses. These behaviors can detract from conversations where no one is pulling a "gotcha!".

Sketch of a post I'm writing:

"Keep your identity small" by Paul Graham $$\cong$$ "People get stupid/unreasonable about an issue when it becomes part of their identity. Don't put things into your identity."

"Do Something vs Be Someone" John Boyd distinction.

I'm going to think about this in terms of "What is one's main strategy to meet XYZ needs?" I claim that "This person got unreasonable because their identity was under attack" is more a situation of "This person is panicking at the possibility that their main strategy to meet XYZ need will fail."

Me growing up: I made an effort to not specifically "identify" with any group or ideal. Also, my main strategy for meeting social needs was "Be so casually impressive that everyone wants to be my friend." I can't remember an instance of this, but I bet I would have looked like "My identity was under attack" if someone started saying something that undermined that strategy of mine. Being called boring probably would have been terrifying.

"Keep your identity small" is not actionable advice. The target should be more "... (read more)

4Hazard
Yesterday I read the first 5 articles on google for "why arguments are useless". It seems pretty in the zeitgeist that "when people have their identity challenged you can't argue with them." A few of them stopped there and basically declared communication to be impossible if identity is involved; a few of them circuitously hinted at learning to listen and find common ground. A reason I want to get this post out is to add to the pile of "Here's why identity doesn't have to be a stop sign."

Lol, one reason it's hard to talk to people about something I'm working through when there's a large inferential gap is that when they misunderstand me and tell me what I think, I sometimes believe them.

9Hazard
Example Me: "I'm thinking about possible alternatives typical ad revenue models of funding content creation and what it would take to switch, like what would it take to get eeeeeeveryone on patreon? Maybe we could eliminate some of the winner takes all popularity effects of selling eyeballs." Friend: somewhat indignantly "You're missing the point. Why would you think this could solve popularity contest? Patreon just shifts where that contest happens." Me: fumbles around trying to explain why I think patreon is a good idea, even though I DONT, and explicitly started the convo with I'm exploring possibilities, but because my thoughts aren't yet super clear I'm super into supporting something the other person thinks I think
4Dagon
This happens on LW as well, fairly often. It's hard to really introduce a topic in a way that people BELIEVE you when you say you're exploring concept space and looking for ideas related to this, rather than trying to evaluate this actual statement. It's still worth trying to get that across when you can.

It's also important to know your audience/discussion partners. For many people, it's entirely predictable that when you say "I'm thinking about ... get everyone on patreon" they will react to the idea of getting their representation of "everyone" on their ideas of "patreon". In fact, I don't know what else you could possibly get.

It may be better to try to frame your uncertainty about the problem, and explore that for a while, before you consider solutions, especially solutions to possibly-related-but-different problems. WHY are you thinking about funding and revenue? Do you need money? Do you want to give money to someone? Do you want some person C to create more content and you think person D will fund them? It's worth it to explore where Patreon succeeds and fails at whatever goals you have, but first you have to identify the goals.
4Hazard
Separating two different points in my example, there's "You misunderstanding my point leads me to misunderstand my point" (the thing I think is the most interesting part) and there's also "blarg! Stop misunderstanding me!" I'm with you on your suggestion of framing a discussion as uncertainty about a problem, to get less of the misunderstanding.

I finished reading Crazy Rich Asians which I highly enjoyed. Some thoughts:

The characters in this story are crazy status obsessed, and my guess is because status games were the only real games that had ever existed in their lives. Anything they ever wanted, they could just buy, but you can't pay other rich people to think you are impressive. So all of their energy goes into doing things that will make others think they're awesome/fashionable/wealthy/classy/etc. The degree to which the book plays this line is obscene.

Though you're never given exact numbers on Nick's family fortune, the book builds up an aura of impenetrable wealth. There is no way you will ever become as rich as the Youngs. I've long been a grumpy curmudgeon about showing off/signalling/buying positional goods, but a thing that this book made real to me was just how deep these games can go.

If you pick the most straightforward status markers (money), you've decided to try and climb a status ladder of impossible height with vicious competition. If you're going to pick a domain in which you care more about your ordinality than your cardinality, for the love of god ... (read more)

Thoughts on writing (I've been spending the 4 hours every morning the last week working on Hazardous Guide to Words):

Feedback

Feedback is about figuring out stuff you didn't already know. I wrote the first draft of HGTW a month ago, and I wrote it in "Short sentences that convince me personally that I have a coherent idea here". When I went to get feedback from some friends last week, I'd forgotten that I hadn't actually worked to make it understandable, and so most of the feedback was "this isn't understandable".

Writing with purpose

Almost always if I get bogged down when writing it's because I'm trying to "do something justice" instead of "do what I want". "Where is the meaning?" started as "oh, I'll just paraphrase Hofstadter's view of meaning". The first example I thought of was to talk about how you can draw too much meaning from things, and look at claims of the pyramids predicting the future. I got bogged down writing those examples, because "what can lead you to think meaning is there when it's not?" was not really what I was talking about, nor was it w... (read more)

I'm torn on WaitButWhy's new series The Story of Us. My initial reaction was mostly negative. Most of that came from not liking the frame of Higher Mind and Primitive Mind, as that sort of thinking has been responsible for a lot of hiccups for me, making "doing what I want" an unnecessarily antagonistic process. And then along the way I see plenty of other ways I don't like how he slices up the world.

The torn part: maybe this is sorta the post "most people" need to start bridging the inferential gap towards what I consider good epistemology? I expect most people on LW to find his series too simplistic, but I wonder if his posts would do more good than the Sequences for the average joe. As I'm writing this I'm acutely aware of how little I know about how "most people" think.

It also makes me think about how at some point in recent years I thought, "More dumbed down simplifications of crazy advanced math concepts should exist, to get more people a little bit closer to all the cool stuff there is." I guessed a mathematician might balk at this suggestion ("Don't tarnish my precious precision!"). Am I reacting the same way?

I dunno, what do you think?

5romeostevensit
Agree, seems like LW for normies circa ten plus years ago? Reaction for standard metacontrarian reasons, seeing past self in it.
3bgold
I'd like to see someone in this community write an extension / refinement of it to further {need-good-color-name}pill people into the LW meme that the "higher mind" is not fundamentally better than the "animal mind".
3Daniel Kokotajlo
Yep, agreed. I want all my friends and family to read the series... and then have a conversation with me about the ways in which it oversimplifies and misleads, in particular the higher mind vs. primitive mind bit. On balance though I think it's great that it exists and I predict it will be the gateway drug for a bunch of new rationalists in years to come.

Memex Thread:

I've taken copious notes in notebooks over the past 6 years, I've used evernote on and off as a capture tool for the past 4 years, and for the past 1.5 years I've been trying to organize my notes via a personal wiki. I'm in the process of switching and redesigning systems, so here's some thoughts.

4Hazard
Concepts and Frames

Association, linking and graphs: A defining idea in this space is "Your memory works by association, get your note taking to mirror that." A simple version of this is what you have in a wiki: every concept mentioned that has its own page has a link to it. I'm a big fan of graph visualizations of information, and you could imagine looking at a graph of your personal wiki where edges are links. Roam embraces links with memory; all your notes know if they've been linked to and display this information. My idea for a memex tool to make really interesting graphs is to basically give you free rein to make the type system of your nodes and edges, and give you really good filtering/search capacity on that type system. Basically a dope gui/text editor overtop of neo4j (rough sketch at the end of this comment).

Personal Lit review: This is one way I frame, to myself, what I want. Sometimes I go "Okay, I want to rethink how I orient to loose-tie friendships." Then I remember that I've definitely thought about this before, but can't remember what I thought. This is the situation where I'd want to do a "lit review" of how I've attacked this issue in the past, and move forward in light of my history.

Just-in-time ideation: I take a shit ton of notes. Some are notes on what I'm reading, others are random ideas for jokes, projects, theories, arm chair philosophizing. Not all ideas should be, or can be, acted upon right away, or at all (like "turn Spain into a tortilla"). But there is some possible future situation where it would be useful to have this idea brought to mind. My ideal memex would actually be a genie that remembers everything I've thought and written, follows me around, and constantly goes, "What would be useful for Hazard to remember right now?" This can be acted on in how you design your notes. Think, "What sort of situation would it be useful to remember this in? In that situation, what key words and phrases will be in my head? Include those in this note so they'll pop up in a search."
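A minimal sketch of what I mean by a user-defined type system over nodes and edges, purely illustrative (the `Note`, `Link`, and `Memex` names are made up for this example; a real tool would sit on top of a proper graph database):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Note:
    title: str
    kind: str        # user-defined node type, e.g. "book", "concept", "person"
    body: str = ""

@dataclass
class Link:
    src: str
    dst: str
    kind: str        # user-defined edge type, e.g. "cites", "contradicts", "source"

class Memex:
    def __init__(self) -> None:
        self.notes: Dict[str, Note] = {}
        self.links: List[Link] = []

    def add(self, note: Note) -> None:
        self.notes[note.title] = note

    def link(self, src: str, dst: str, kind: str) -> None:
        self.links.append(Link(src, dst, kind))

    def neighbors(self, title: str, kind: Optional[str] = None) -> List[str]:
        # The "filtering/search on the type system" part: follow only edges of a given kind.
        return [l.dst for l in self.links
                if l.src == title and (kind is None or l.kind == kind)]

m = Memex()
m.add(Note("Seeing Like a State", kind="book"))
m.add(Note("Legibility", kind="concept"))
m.link("Legibility", "Seeing Like a State", kind="source")
print(m.neighbors("Legibility", kind="source"))  # ['Seeing Like a State']
```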
4Hazard
People Talking about Memex stuff

Tiago Forte: Build a Second Brain (here's an introduction). He's been on my radar for a year, and I've just started reading more of his stuff. Suspicion that he might be me from the future. He's all about the process and design of the info flow and doesn't sell a memex tool. Big ideas: find what you need when the time is right, new organic connections, your second brain should surprise you, progressive summarization.

Andrew Louis: I'm building a memex. This guy takes the memex as a way of life. Self-proclaimed digital packrat, he's got every chat log since highschool saved, always has his gps on and records that location, and basically pours all of his digital data into a massive personal database. He's been developing an app for himself (partially for others) to manage and interact with this. This goes waaaaaaaay beyond note taking. I'd binge more of his stuff if I wanted to get a sense for the emergent revelations that could come from intense memexing. (check out his demo vid)

Conor: Roam. Conor has both a beta-product and many ideas about how to organize ideas. Inspired by Zettelkasten (post about Zettelkasten, which was the name of a physical note card system used by Niklas Luhmann). Check out his white paper for the philosophy.
4Hazard
Products I've interacted with

Nuclino: Very cool. Mixes wiki, trello board, and graph centric views. Has all the nice content embedding, slash commands, etc. DOESN'T WORK OFFLINE :( (would be great otherwise). Style/Inspiration: Wiki meets trello + extra.

Roam: Conor has been developing this with the Zettelkasten system as his inspiration. Biggest feature (in my mind) is "deep linking" things. You can link other notes to your note, and have them "expanded", and if you edit the deep linked note in a parent note, it actually edits the linked note. Also, notes keep track of every place they're mentioned. Allows for powerful spiderwebby knowledge connection. I'm playing with the beta, still getting familiar, and don't yet have much to say except that deep linking is exactly the feature I've always wanted and couldn't find (toy sketch of the behavior below).

Zim Wiki: Desktop wiki that works for Linux. Nothing fancy, uses a simple markdown-esque syntax, everything is text files. I used that for a year, now I'm moving away. One reason is I want more rich outlining powers like folding, but I'm also broadly moving away from framing my notes as a "personal wiki" for reasons I'll mention in another post.

PB Wiki: Just a wiki software. When I first decided to use a wiki to organize my school notes, I used this. It's an online tool which is --, but works okay as a wiki.

Emacs Org Mode (what I'm currently using): Emacs is a magical extensible text editor, and org mode is a specific package for that editor. Org mode has great outlining capabilities, and unlimited possibilities for how you can customize stuff (coding required). The current thing that I'd really need for org mode to fit my needs is to be able to search my notes and see previews of them (think evernote search: you see the titles of notes, and a preview of the content). I think deft can get me this, haven't installed it yet though. Long term, emacs is appealing because it seems like I can craft my own workflow with precision. Will take work though.
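A toy illustration of the "deep linking" behavior described above (not Roam's actual implementation, just the general idea of transclusion-by-reference: parents hold references to the same block, so editing it from inside any parent edits the one shared block):

```python
# Toy sketch of transclusion-by-reference ("deep linking"): an embedded block
# is the same object everywhere it appears, so edits propagate automatically.
class Block:
    def __init__(self, text: str):
        self.text = text

class Page:
    def __init__(self, title: str):
        self.title = title
        self.children = []

    def embed(self, block: Block):   # link the block, don't copy it
        self.children.append(block)

shared = Block("Deep linking is transclusion by reference.")
daily, essay = Page("Daily note"), Page("Memex essay")
daily.embed(shared)
essay.embed(shared)

daily.children[0].text = "Edited from inside the daily note."
print(essay.children[0].text)        # the essay sees the same edit
```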
1Sunny from QAD
I really like the idea of a personal wiki. I've been thinking for a while about how I can track concepts that I like but that don't seem to be part of the zeitgeist. I might set up a personal wiki for it!
1eigen
Yes! Thinking about it is a great idea. Is there any particular open source software you use to set this up?
1William_Darwin
I use GitBook.com, functions very well as a personal wiki (can link to other pages, categorise, etc)
1Sunny from QAD
IIRC, there is some kind of template software you can use to set up a basic wiki, kind of like how WordPress is a template software for a basic blog. If you google around you'll probably find it, if it exists.

Noticing an internal dynamic.

As a kid I liked to build stuff (little catapults, modify nerf guns, sling shots, etc). I entered a lot of those projects with the mindset of "I'll make this toy and then I can play with it forever and never be bored again!" When I would make the thing and get bored with it, I would be surprised and mildly upset, then forget about it and move to another thing. Now I think that when I was imagining the glorious cool toy future, I was actually imagining having a bunch of friends to play with (didn't live around many other kids).

When I got to middle school and highschool and spent more time around other kids, I encountered the idea of "That person talks like they're cool but they aren't." When I got into sub-cultures centering around a skill or activity (magic) I experienced the more concentrated form, "That person acts like they're good at magic, but couldn't do a show to save their life."

I got the message, "To fit in, you have to really be about the thing. No half assing it. No posing."

Why, historically, have I gotten so worried when my interests shift? I'm not yet at a point in my lif... (read more)

6Viliam
Years ago, I wrote fiction, and dreamed about writing a novel (I was only able to write short stories). I assumed I liked writing per se. But I was hanging out regularly with a group of fiction fans... and when later a conflict happened between me and them, so that I stopped meeting them completely, I found out I had no desire left to write fiction anymore. So, seems like this was actually about impressing specific people. I suspect this is only a part of the story. There are various ways to fit in a group. For example, if you are attractive or highly socially skilled, people will forgive you being mediocre at the thing. But if you are not, and you still want to get to the center of attention, then you have to achieve the extreme levels of the thing.

tldr;

In high-school I read pop cogSci books like "You Are Not So Smart" and "Subliminal: How the Subconscious Mind Rules Your Behavior". I learned that "contrary to popular belief", your memory doesn't perfectly capture events like a camera would, but it's changed and reconstructed every time you remember it! So even if you think you remember something, you could be wrong! Memory is constructed, not a faithful representation of what happened! AAAAAANARCHY!!!

Wait a second, a camera doesn't perfectly capture events. Or at least, they definitely didn't when t

... (read more)
1Rudi C
I think you’re falling for the curse of knowledge. Most people are so naive that they do think their, e.g., vision is a “direct experience” of reality. The more simplistic books are needed to bridge the inferential gap.
2Hazard
I'm ignoring that gap unless I find out that a bulk of the people reading my stuff think that way. I'm more writing to what feels like the edge of interesting and relevant to me.

Over this past year I've been thinking more in terms of "Much of my behavior exists because it was made as a mechanism to meet a need at some point."

Ideas that flow out of this frame seem to be things like Internal Family Systems, and "if I want to change behavior, I have to actually make sure that need is getting met."

Question: does anyone know of a source for this frame? Or at least writings that may have pioneered it?

2romeostevensit
Psycho-cybernetics is an early text in this realm.
2Matt Goldenberg
I think this has developed gradually. The idea of "behavior is based on unconscious desires" goes back as far as at least Freud, probably earlier.
1Hazard
Yeah. To home in more specifically, I'm looking at "All of your needs are legit". I've heard for a while "You have all these unconscious desires you're optimizing for", often followed with a "If only we could find a way to get rid of these desires." The new thing for me has been the idea that behind each of those "petty"/"base" desires there is a real valid need that is okay to have.
3George3d6
That seems like a potentially very unhealthy thing when applied to "basic" desires such as food and sex... Unless yoloing your way through a life of hookers, coke (the sugary kind) and Jell-O seems appealing. Our first-order desires usually conflict with our long-term desires, and those are usually much better to aim for. But maybe I'm getting something wrong here. Where did you get this idea from?
3Hazard
The sentence "All your needs are legitimate" is pretty under-specified so I'll try to flush out the picture. This gets a bit closer, "All your needs are legitimate, but not all of your strategies to meet those needs are legitimate." I can think there's nothing wrong with wanting sex, but there are still plenty of ways to meet that need which I'd fine abhorrent. "All your needs are legit" is not me claiming that any action you think to take is morally okay as long as it's an attempt to meet a need/desire. Another phrasing might be that I see a difference between, "I have a need for sporadic pleasurable experiences, and for consuming food so I don't die" and "Right now I want to go get a burger and a milkshake" Another thing that shapes my frame is the claim that a lot of our behavior, even some that looks like it's just pursing "basic" things, sources from needs/desires like "needing to feel loved" "needing to feel like your aren't useless" etc. This extends to the tentative claim: "If more people had most of their emotional needs met, lots of people would be far less inclined to engage it stereotypical "hedonistic debauchery'" Now to your "Where did this idea come from?" I don't remember when I first explicitly encountered this idea, but the most formative interaction might have been at CFAR a year ago. You mentioned "Our first order desires usually conflict with our long terms desires, and those are usually much better to aim for." I was investigating a lot of my 'long term desires' and other top-down frameworks I had to value parts of my life, and began to see how they had been carefully crafted to meet certain "basic" desires, like not being in situations where people would yell at me and never having to beg for attention. Many of my long term desires were actually strategies to meet various basic emotional needs, and they were also strategies that were causing conflicts with other parts of my life. My prior tendency was to go, "I'll just rebuke and disavow th
1George3d6
Even in this form I don't believe this sentence holds. For example, I am a smoker (well, vaper, but you get the point: nicotine user). I can guarantee you I have a very real need for: a) Nicotine's effect on the brain, b) The throat hit nicotine gives, c) The physical "action" of smoking.

Are those needs legitimate in the sense you seem to understand them? Yes, they are pretty legitimate, or at least I can associate them to be on the same degree as other needs that most people would consider legitimate (e.g. need to take a piss, need to talk with a friend, w/e).

Must those needs stay legitimate? No, actually; having taken breaks of up to half a year from the practice I can actually tell those needs get less relevant the longer you go without smoking.

Should those needs stay legitimate? Well, I'd currently argue "yes", since otherwise I wouldn't be vaping as I'm writing this. But I'd equally argue that from a societal perspective the answer is "no"; indeed, for parts of my brain (the ones that don't want to smoke), the answer is "no".

1. Now, either smoking is a legitimate need, OR
2. Some needs that "seem" legitimate should actually be suppressed, OR
3. Needs not only need to "feel/seem" legitimate, they also need to have some other stamp of approval, such as being natural.

1 is a bad perspective to hold all things considered; you wouldn't teach your kid that you caught smoking that he should keep doing it because it's a legitimate need now that he kinda likes it. 2 seems to counteract your point, because we can now claim any legitimate need should actually be suppressed rather than indulged in some way. With 3 you get into a nurture vs nature debate... in which case, I'm on the "you can't really tell" side for now and wouldn't personally go any further in that direction.
2Hazard
Okay, I agree that for "All your needs are legitimate..." the "all" part doesn't really seem to hold. Your example straightforwardly seems to address that. Stuff that's closer to "biological stuff we have a decent understanding of" (drugs, food) doesn't really fit the claim I was making.

I think you also helped me figure out a better way to express my sentiment. I was about to rephrase it as "All of your emotional needs are legit" but that feels like it's me going down the wrong path. I'll try to explain why I wanted to phrase it that way in the first place. I see the "standard view" as something like "Of course your emotions are important, but there are a few unsavory feelings that just aren't acceptable and you shouldn't have them." I think I reached too quickly for "There is no such thing as unacceptable feelings" rather than "Here is why this specific feeling you are calling unacceptable actually is acceptable." I probably reached for that because it was easier.

Claim 1: The reasoning that proclaims a given emotional/social need is not legitimate is normally flawed. (I could speak more to that, but it's sort of what I was mentioning at the end of my last comment.)

I think this thing you mentioned is relevant. I totally agree that something like smoking can have this "re-normalization" mechanism. Now I wonder what happens if we swap out the need for smoking with the need to feel like someone cares about you?

Claim 2: Ignored emotional/social needs will not "re-normalize" and will be a recurring source of pain, suffering, and problems.

The second claim seems like it could lead to very tricky debate. High-school-me would have insisted that I could totally just ignore my desire to be liked by people without ill consequences, because look at me, I'm doing it right now and everything's fine! I can currently see how this was causing me serious problems. So... if someone said to me that they can totally just ignore things that I'd call emotional/social needs with no ill
1George3d6
I can pretty much agree with these claims. I think it's worth breaking down emotional/social needs into lower-level entities than people usually do, e.g.:

* "I need to be in a sexual relationship with {X} even though they hate me" -- is an emotional need that's probably flawed
* "I need to be in a sexual relationship" -- is an emotional need that's probably correct

***

* "I need to be friends with {Y} even though they told me they don't enjoy my company" -- again, probably flawed
* "I need to be friends with some of the people that I like" -- most likely correct

But then you reach the problem of where exactly you should stop the breakdown; as in, if your need is "too" generic once you reach its core it might make it rather hard to act upon. If you don't break them down at all you end up acting like a sitcom character without the laugh-track, wit and happy coincidences.

Also, whilst I disagree with your initial formulation: I don't particularly see anything against: But it seems from your reply that you hold them to be one and the same?
2Hazard
In both of those examples you give I agree with your judgment of the needs. If you switch "All your needs are legit" to "All your social/emotional needs are legit", then yeah, I was thinking of that and "There is no such thing as unacceptable feelings" as the same thing. Though I can now see two distinct ideas that they could point to. "All your S/E needs are legit" seems to say not only that it's okay to have the need, it's okay to do something to meet it. That's a bit harder to handle than just "It's okay to feel something." And yeah, there probably is some scenario where you could have a need that there's no way you could ethically meet, and that you can't break down into a need that can be met. Another thing that I noticed informed my initial phrasing: I think there is a strong sour grapes pressure to go from "I have this need, and I don't see any way to get it met that I'm okay with" to "Well then this is a silly need and I don't even really care about it." You've sparked many more thoughts from me on this, and I think those will come in a post sometime later. Thanks for prodding!

The general does not exist, there are only specifics.

If I have a thought in my head, "Texans like their guns", that thought got there from a finite amount of specific interactions. Maybe I heard a joke about Texans. Maybe my family is from Texas. Maybe I hear a lot about it on the news.

"People don't like it when you cut them off mid sentence". Which people?

At a local meetup we do a thing called encounter groups, and one rule of encounter groups is "there is no 'the group', just individual people". Having conversations in that mode has been incredibly helpful to realize that, in fact, there is no "the group".

1clone of saturn
But why stop at individual people? This kind of ontological deflationism can naturally be continued to say there are no individual people, just cells, and no cells, just molecules, and no molecules, just atoms, and so on. You might object that it's absurd to say that people don't exist, but then why isn't it also absurd to say that groups don't exist?
5Hazard
The idea was less "Individual humans are ontologically basic" and more: I see that often talking about broad groups of people has been less useful than dropping down to talk about interactions I've had with individual people. In writing the comment I was focusing more on what the action I wanted to take was (think about specific encounters with people when evaluating my impressions) and less on my ontological claims of what exists. I see how my lax opening sentence doesn't make that clear :)

What are the barriers to having really high "knowledge work output"?

I'm not capable of "being productive on arbitrary tasks". One winter break I made a plan to apply for all the small $100 essay scholarships people were always telling me no one applied for. After two days of sheer misery, I had to admit to myself that I wasn't able to be productive on a task that involved making up bullshit opinions about topics I didn't care about.

Conviction is important. From experiments with TAPs and a recent bout of meditation, it seems ... (read more)

3Hazard
(Less a reply and more just related) I often think a sentence like, "I want to have a really big brain!". What would that actually look like?

* Not experiencing fear or worry when encountering new math.
* Really quick to determine what I'm most curious about.
* Not having my head hurt when I'm thinking hard, and generally not feeling much "cognitive strain".
* Be able to fill in the vague and general impressions with the concrete examples that originally created them.
* Doing a hammers and nails scan when I encounter new ideas.
* Having a clear, quickly accessible understanding of the "proof chains" of ideas, as well as the "motivation chains".
  * I don't need to know all the proofs or motivations, but I do have a clear sense of what I understand myself, and what I've outsourced.
* Instead of feeling "generally confused" by things or just "not getting them", I always have concrete "This doesn't make sense because BLANK" expressions that allow me to move forward.

Concrete example: when I'm full, I'm generally unable to imagine meals in the future as being pleasurable, even if I imagine eating a food I know I like. I can still predict and expect that I'll enjoy having a burger for dinner tomorrow, but if I just stuffed myself on french fries, I just can't run a simulation of tomorrow where the "enjoying the food experience" sense is triggered.

I take this as evidence that my internal food experience simulator has "code" that just asks, "If you ate XYZ right now, how would... (read more)
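If I were to caricature that "code" (purely as an illustration under my own framing, not a claim about how the brain actually works), the bug would look something like the first function below: the prediction reads current fullness instead of the fullness I'd have at the imagined meal.

```python
# Caricature of the simulator bug: enjoyment is predicted from *current* fullness
# rather than the fullness I'd actually have at the simulated future meal.
def buggy_predicted_enjoyment(like_the_food: bool, fullness_now: float) -> float:
    if not like_the_food:
        return 0.0
    return max(0.0, 1.0 - fullness_now)          # queries the present state

def better_predicted_enjoyment(like_the_food: bool, fullness_at_meal: float) -> float:
    if not like_the_food:
        return 0.0
    return max(0.0, 1.0 - fullness_at_meal)      # queries the simulated future state

print(buggy_predicted_enjoyment(True, fullness_now=0.95))       # ~0.05: "can't imagine enjoying it"
print(better_predicted_enjoyment(True, fullness_at_meal=0.2))   # 0.8: tomorrow-me would enjoy it
```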

I'm in the process of turning this thought into a full essay.

Ideas that are getting mixed together:

Cached thoughts, Original Seeing, Adaptation Executors not Fitness Maximizers, Motte and Bailey, Double Think, Social Improv Web.

  • A mind can perform original seeing (to various degrees), and it can also use cached thoughts.
    • Cached thoughts are more “Procedural instruction manuals” and original seeing is more “Your true anticipations of reality”.
  • Both reality and social reality (social improv web) apply pressures and rewards that shape your cached thoughts.
  • It
... (read more)
5Raemon
Just wanted to say I liked the core insight here (that people seem more-like-hidden-agenda executors when they're running on cached thoughts). I think it probably makes more sense to frame it as a hypothesis than a "this is a true thing about how social reality and motivation work", but a pretty good hypothesis. I'd be interested in the essay exploring what evidence might falsify it or reinforce it. (This is something that's not currently a major pattern among rationalist thinkpieces on psychology but probably should be.)
3Hazard
hmmmmm, ironically my immediate thought was, "Well of course I was considering it as a hypothesis which I'm examining the evidence for", though I'd bet that the map/territory separation was not nearly as emphasized in my mind when I was generating this idea. Yeah, I think your framing is how I'll take the essay.
1Hazard
Here's a more refined way of pointing out the problem that the parent comment was addressing:

* I am a general intelligence that emerged running on hardware that wasn't intelligently designed for general intelligence.
* Because of the sorts of problems I'm able to solve when directly applying my general intelligence (and because I don't understand intelligence that well), it is easy to end up implicitly believing that my hardware is far more intelligent than it actually is.
* Examples of ways my hardware is "sub-par":
  * It doesn't seem to get automatic belief propagation.
  * There don't seem to be strong reasons to expect that all of my subsystems are guaranteed to be aligned with the motives that I have on a high level.
* Because there are lots of little things that I implicitly believe my hardware does, which it does not, there are a lot of corrective measures I do not take to solve the deficiencies I actually have.
* It's completely possible that my hardware works in such a way that I'm effectively working on different sets of beliefs and motives at various points in time, and I have a bias towards dismissing that because, "Well that would be stupid, and I am intelligent."

Another perspective: I'm thinking about all of the examples from the sequences of people near Eliezer thinking that AIs would just do certain things automatically. It seems like that lens is also how we look at ourselves. Or it could be humans are not automatically strategic, but on steroids. Humans do not automatically get great hardware.

I started writing on LW in 2017, 64 posts ago. I've changed a lot since then, and my writing's gotten a lot better, and writing is becoming closer and closer to something I do. Because of [long detailed personal reasons I'm gonna write about at some point] I don't feel at home here, but I have a lot of warm feelings towards LW being a place where I've done a lot of growing :)

2Ben Pace
I'm glad about your growth here :)

A forming thought on post-rationality. I've been reading more samzdat lately and thinking about legibility and illegibility. Me paraphrasing one point from this post:

State driven rational planning (episteme) destroys local knowledge (metis), often resulting in metrics getting better, yet life getting worse, and it's impossible to complain about this in a language the state understands.

The quip that most readily comes to mind is "well if rationality is about winning, it sounds like the state isn't being very rational, and this isn'... (read more)

4Hazard
Epistemic status: Some babble, help me prune.

My thoughts on the basic divide between rationalists and post-rationalists, lawful thinkers and toolbox thinkers.

Rat thinks: "I'm on board with The Great Reductionist Project, and everything can in theory be formalized." Post-Rat hears: "I personally am going to reduce love/justice/mercy and the reduction is going to be perfect and work great." Post-Rat thinks: "You aren't going to succeed in time / in a manner that will be useful for doing anything that matters in your life." Rat hears: "It's fundamentally impossible to reduce love/justice/mercy and no formalism of anything will do any good."

Newcomb's Problem: Another way I see the difference is that the post-rats look at Newcomb's problem and say "Those causal rationalist losers! Just one-box! I don't care what your decision theory says, tell yourself whatever story you need in order to just one-box!" The post-rats rail against people who are doing things like two-boxing because "it's optimal". The most indignant rationalists are the ones who took the effort to create whole new formal decision theories that can one-box, and don't like that the post-rats think they'd be foolish enough to two-box just because a decision theory recommends it. While I think this gets the basic idea across, this example is also cheating. Rats can point to formalisms that do one-box, and in LW circles there even seem to be people who have worked the rationality of one-boxing deep into their minds.

Hypothesis: All the best rationalists are post-rationalists; they also happen to care enough about AI Safety that they continue to work diligently on formalism.
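As an aside on the Newcomb example, the arithmetic behind "just one-box" is short enough to sketch, assuming the standard payoffs ($1,000,000 in the opaque box iff the predictor expected one-boxing, plus a guaranteed $1,000 in the transparent box):

```python
# Expected value of each choice against a predictor with accuracy p,
# under the standard Newcomb payoffs.
def ev_one_box(p: float) -> float:
    return p * 1_000_000                       # opaque box is full iff you were predicted to one-box

def ev_two_box(p: float) -> float:
    return (1 - p) * 1_000_000 + 1_000         # the full box only shows up if the predictor got you wrong

for p in (0.99, 0.9, 0.6):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing has higher expected value whenever p > ~0.5005,
# i.e. for any predictor even slightly better than a coin flip.
```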

Alternative hypothesis: Post-rationality was started by David Chapman being angry at historical rationalism. Rationality was started by Eliezer being angry at what he calls "old-school rationality". Both talk a lot about how people misuse frames, pretend that rigorous definitions of concepts are a thing, and broadly don't have good models of actual cognition and the mind. They are not fully the same thing, but most of the time I've talked to someone identifying as "postrationalist", they picked up the term from David Chapman and were contrasting themselves with historical rationalism (and sometimes confusing historical rationalists with current rationalists), not with rationality as practiced on LW.

2Hazard
I'd buy that. Any idea what a good recent thing/person/blog example of embodying that historical rationalist mindset would be? The only context I have for the historical rationalists is Descartes, and I have not personally seen anyone who felt super Descartes-esque.
2habryka
The default book that I see mentioned in conversation that explains historical rationalism is "Seeing Like a State", though I have not read the whole book myself.
2Hazard
Cool. My back of the mind plan is "Actually read the book, find big names in the top down planning regimes, see if they've written stuff" for whenever I want to replace my Descartes stereotype with substance.

Sometimes when I talk to friends about building emotional strength/resilience, they respond with "Well I don't want to become a robot that doesn't feel anything!" to paraphrase them uncharitably.

I think Wolverine is a great physical analog for how I think about emotional resilience. Every time Wolverine gets shot/stabbed/clubbed it absolutely still hurts, but there is an important way in which these attacks "don't really do anything". On the emotional side, the aim is not that you never feel a twinge of hurt/sorrow/jealo... (read more)

4Viliam
Maybe emotional resilience is bad for some forms of signaling. The more you react emotionally, the stronger you signal that you care about something. Keeping calm despite feeling strong emotions can be misinterpreted by others as not caring. Misunderstandings created this way could possibly cause enough harm to outweigh the benefits of emotional resilience. Or perhaps the balance depends on some circumstances, e.g. if you are physically strong, people will be naturally afraid to hurt you, so then it is okay to develop emotional resilience about physical pain, because it won't result in them hurting you more simply because "you don't mind it anyway".
3Richard_Kennaway
That problem should be addressed by better mastery over one's presentation, not by relinquishing mastery over one's emotions.
2Kaj_Sotala
To some extent, the interpretation is arguably correct; if you personally suffer from something not working out, then you have a much greater incentive to actually ensure that it does work out. If a situation going bad would cause you so much pain that you can't just walk out from it, then there's a sense in which it's correct to say that you do care more than if you could just choose to give up whenever.

Quick description of a pattern I have that can muddle communication.

"So I've been mulling over this idea, and my original thoughts have changed a lot after I read the article, but not because of what the article was trying to persuade me of ..."

General Pattern: There is a concrete thing I want to talk about (a new idea - ???). I don't say what it is, I merely provide a placeholder reference for it ("this idea"). Before I explain it, I begin applying a bunch of modifiers (typically by giving a lot of context "This idea is ... (read more)

4jimrandomh
Yep, I notice this sometimes when other people are doing it. I don't notice myself doing it, but that's probably because it's easier to notice from the receiving end. In writing, it makes me bounce off. (There are many posts competing for my attention, so if the first few sentences fail to say anything interesting, my brain assumes that your post is not competitive and moves on.) In speech, it makes me get frustrated with the speaker. If it's in speech and it's an interruption, that's especially bad, because it's displacing working memory from whatever I was doing before.
2habryka
I also do this a lot, and think it's not always a mistake, but I agree that it imposes significant cognitive burden on my conversational partner. 
2Hazard
Do you also do it as a preemptive move like I described, or for other reasons?

Ribbon Farm captured something that I've felt about nomadic travel. I'm thinking back to a 2 month bicycle trip I did through Vietnam, Cambodia, and Laos. During that whole trip, I "did" very little. I read lots of books. Played lots of cards. Occasionally chatted with my biking partner. "Not much". And yet when movement is your natural state of affairs, every day is accompanied with a feeling of progress and accomplishment.

I love the experience of realizing what cognitive algorithm I'm running in a given scenario. This is easiest to spot when I screw something up. Today, I misspelled the word "process" by writing three "s" instead of two. I'm almost certain that while writing the word, there was a cached script of "this word has one more 's' than feels right, so add another one" that activated as I wrote the 1st "s", but then some idea popped into my mind (small context switch, working memory dump?) and I then execu... (read more)

Something as simple as talking too loud can completely screw you over socially. There's a guy in one of my classes who talks at almost a shouting level when he asks questions, and I can feel the rest of the class tense up. I'd guess he's unaware of it, and this is likely a way he's been for many years which has subtly/not so subtly pushed people away from him.

Would it be a good idea to tell him that a lot of people don't like him because he's loud? Could I package that message such that it's clear I'm just trying... (read more)

To everyone on the LW team, I'm so glad we do the year in review stuff! Looking over the table of contents for the 2018 book I'm like "damn, a whole list of bangers", and even looking at top karma for 2019 has a similar effect. Thanks for doing something that brings attention to previous good work.

2Ben Pace
You're welcome :) I'm loving reading yours and everyone's nominations, it's really great to hear about what people found valuable.

I've been having fun reading through Signals: Evolution, Learning, & Information. Many of the scenarios revolve around variations of the Lewis Signalling Game. It's a nice simple model that lets you talk about communication without having to talk about intentionality (what you "meant" to say).

Intention seems to mostly be about self-awareness of the existing signalling equilibrium. When I speak slowly and carefully, I'm constantly checking what I want to say against my understanding of our signalling equilibrium, and reasoning o... (read more)
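For anyone who hasn't seen the model: here's a minimal simulation of the two-state, two-signal Lewis game with simple urn-style reinforcement, a sketch of the kind of setup the book studies rather than code from the book.

```python
import random

# Two states, two signals, two acts; sender and receiver learn by reinforcing
# whatever (state -> signal) and (signal -> act) choices paid off.
random.seed(0)
STATES = SIGNALS = ACTS = (0, 1)

sender_urns = [[1.0, 1.0] for _ in STATES]     # sender_urns[state][signal]
receiver_urns = [[1.0, 1.0] for _ in SIGNALS]  # receiver_urns[signal][act]

def draw(urn):
    r = random.uniform(0, sum(urn))
    return 0 if r < urn[0] else 1

successes = 0
rounds = 10_000
for _ in range(rounds):
    state = random.choice(STATES)
    signal = draw(sender_urns[state])
    act = draw(receiver_urns[signal])
    if act == state:                           # both players are rewarded iff the act matches the state
        sender_urns[state][signal] += 1
        receiver_urns[signal][act] += 1
        successes += 1

print(successes / rounds)  # climbs well above 0.5 as a signalling convention crystallizes
```

No one "means" anything by signal 0 or 1 at the start; a convention just emerges from reinforcement, which is the point about communication without intentionality.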

"Moving from fossil fuels to renewable energy" but as a metaphor for motivational systems. Nate Soares replacing guilt seems to be trying to do this.

With motivation, you can more easily go, "My life is gonna be finite. And it's not like someone else has to deal with my motivation system after I die, so why not run on guilt and panic?"

Hmmmm, maybe something like, "It would be doper if people, at a large scale, got to more renewable motivational systems, and for that change to happen it feels important for people growing up to be able to see those who have made the leap."

Reverse-Engineering a World View

I've been having to do this a lot for Ribbonfarm's Mediocratopia blog chain. Rao often confuses me and I have to step up my game to figure out where he's coming from.

It's basically a move of "What would have to be different for this to make sense?"

Confusion: "But if you're going up in levels, stuff must be getting harder, so even though you're mediocre in the next tier, shouldn't you be loosing slack, which is antithetical to mediocrity?"

Resolution: "What if there&a... (read more)

[Everything is "free" and we inundate you in advertisements] feels bad. First thought alternative is something like paid subscriptions, or micropayments per thing consumed. But the question is begged, how does anyone find out about the sites they want to subscribe to? If only there was some website aggregator that was free for me to use so that I could browse different possible subscriptions...

Oh no. Or if not oh no, it seems like the selling eyeballs model won't go away just because alternatives exist, if only from the "people need to ... (read more)

3William_Darwin
Maybe it has something to do with the sentiment that "if it's free, the product is you". Perhaps without paying some form of subscription, you feel that there is no 'bounded' payment for the service - as you consume more of any given service, you are essentially paying more (in cognitive load or something similar?). Kind of feels like fixed vs variable costs - often you feel a lot better with fixed as it tends to be "more valuable" the more you consume. Just an off-the-cuff take based on personal experience, definitely interested in hearing other takes.

The university I'm at has meal plans where you get a certain number of blocks (meal + drink + side). These are things that one has, and uses to buy stuff. Last week at dinner, I gave the cashier my order and he said "Sorry man, we ran out of blocks." If I didn't explain blocks well enough, this is a statement that makes no sense.

I completely broke the flow of the back and forth and replied with a really confused, "Huh?" At that point the guy and another worker started laughing. Turns out they'd been coming up with non... (read more)

I've been writing on twitter more lately. Sometimes when I'm trying to express an idea, to generate progress I'll think "What's the shortest sentence I can write that convinces me I know what I'm talking about?" This is different from "What's a simple but no simpler explanation for the reader?"

Starting a twitter thread and forcing several tweet-sized chunks of ideas out is quite helpful for that. It helps get the concept clearer in my head, and then I have something out there and I can dwell on how I'd turn it into a consumable for others.

2Hazard
I've been writing A LOT on twitter lately. It's been hella fun. One thing that seems clear: Twitter threads are not the place to hash out deep disagreements start to finish. When you start multi threading, it gets chaotic real fast, and the character limit is a limiting force. On the other side of things, it feels great for gestating ideas, and getting lots of leads on interesting ideas.

1) Leads: It helps me increase my "known unknowns". There's a lot of topics, ideas, and disciplines I see people making offhand comments about, and while it's rarely enough to piece together the whole idea, I often can pick up the type signature and know where the idea relates to other ideas I am familiar with. This is dope. Expand your anti-library.

2) Gestation: There's a limit to how much you can squeeze into a single tweet, but threading really helps to shotgun blast out ideas. It often ends up being less a step-by-step, carefully reasoned argument, and more lots of quasi-independent thoughts on the topic that you then connect. Also, I easily get 5x engagement on twitter, and other people throwing in their thoughts is really helpful.

I know Raemon and crew have mentioned trying to help with more gestation and development of ideas (without sacrificing overall rigor). Post-rat-twitter / strangely-earnest-twitter feels like it's nailed the gestation part. Might be something to investigate.
2Hazard
See this for the best example of rapid brainstorming, and the closest twitter has to long form content.

Re Mental Mountains, I think one of the reasons that I get worried when I meet another youngin that is super gung-ho about rationality/"being logical and coherent", is that I don't expect them to have a good Theory of How to Change Your Mind. I worry that they will reason out a bunch of conclusions, succeed in high-level changing their minds, think that they've deeply changed their minds, but instead leave hordes of unresolved emotional memories/models that they learn to ignore and fuck them up later.

Weird hack for a weird tick. I've noticed I don't like audio abruptly ending. Like, sometimes I've listened to an entire podcast on a walk, even when I realized I wasn't into it, all because I anticipated the twinge of pain from turning it off. This is resolved by turning the volume down until it is silent, and then turning it off. Who'd of thunk it...

Me circa March 2018

"Should"s only make sense in a realm where you are divorced form yourself. Where you are bargaining with some other being that controls your body, and you are threatening it.

Update: This past week I've had an unusual amount of spontaneous introspective awareness of moments when I was feeling pulled by a should, especially one that came from comparing myself to others. I've also been meeting these thoughts with an, "Oh interesting. I wonder why this made me feel a should?" as opposed to a standard "end... (read more)

From Gwern's about page:

I personally believe that one should think Less Wrong and act Long Now, if you follow me.

Possibly my favorite catch-phrase ever :) What do I think is hiding there?

  • Think Less Wrong
    • Self anthropology- "Why do you believe what you believe?"
    • Hugging the Query and not sinking into confused questions
    • Litany of Tarski
    • Notice your confusion - "Either the story is false or your model is wrong"
  • Act Long Now
    • Cultivate habits and practice routines that seem small / trivial on a day/week/month timeline, but will result in you
... (read more)
7Hazard
What am I currently doing to Act Long Now? (Dec 4th 2019)
  • Switching to Roam: Though it's still in development and there are a lot of technical hurdles to this being a long now move (they don't have good import/export, it's all cloud hosted and I can't have my own backups), putting ideas into my roam network feels like long now organization for maximized creative/intellectual output over the years.
  • Trying to milk a lot of exploration out of the next year before I start work, hopefully giving myself springboards to more things at points in the future where I might not have had the energy to get started / make the initial push.
  • Being kind.
  • Arguing Politics* With my Best Friends
What am I currently doing to think Less Wrong?
  • Writing more has helped me hone my thinking.
  • Lots of progress on understanding emotional learning (or more practically, how to do emotional unlearning), allowing me to get to a more even-keeled center from which to think and act.
  • Getting better at ignoring the bottom line to genuinely consider what the world would be like for alternative hypotheses.
4Matt Goldenberg
This is a great list! I'd be curious about things you are currently doing to act short now and think more wrong as well. I often find I get a lot out of such lists.
7Hazard
Act Short Now
  • Sleeping in
  • Flirting more
Think More Wrong
  • I no longer buy that there's a structural difference between math/the formal/a priori and science/the empirical/a posteriori.
  • Probability theory feels sorta lame.

Claim: There's a headspace you can be in where you don't have a bucket for explore/babble. If you are entertaining an idea or working through a plan, it must be because you already expect it to work/be interesting. If your prune filter is also growing in strength and quality, then you will be abandoning ideas and plans as soon as you see any reasonable indicator that they won't work.

Missing that bucket and enhancing your prune filter might feel like you are merely growing up, getting wiser, or maybe more cynical. This will be really strongly... (read more)

You can have infinite aspirations, but infinite plans are often out to get you.

When you make new plans, run more creative "what if?" inner-sims, sprinkle in more exploit, and ensure you have bounded loss if things go south.

When you feel like quitting, realize you have the opportunity to learn and update by asking, "What's different between now and when I first made this plan?"

Make your confidence in your plans explicit, so if you fail you can be surprised instead of disappointed.

If the thought of giving up feels terrible, you mi... (read more)

Stub Post: Thoughts on why it can be hard to tell if something is hindsight bias or not.

Imagine one's thought process as an idea-graph, with the process of thinking being hopping around nodes. Your long term memory can be thought of as the nodes and edges that are already there and persist strongly. The contents of your working memory are like temporary nodes and edges that are in your idea graph, and everything that is close to them gets a +10 to speed-of-access. A short term memory node can even cause edges to pop up between two other nodes around i... (read more)

1Hazard
This seems to be in accord with things like how the framing of questions has a huge effect on what people's answers are. There are probably some domains where you don't actually have much of a persistent model, and your "model" mostly consists of the temporary connections created by the contents of your working memory.

Utility functions aren't composable! Utility functions aren't composable! Sorry to shout, I've just realized a very specific way I've been wrong for quite some time.

VNM utility completely ignores the structure of outcomes and the "similarities" between outcomes. U(1 apple) doesn't need to have any relation to U(2 apples). With decision scenarios I'm used to interacting with, there are often ways in which it is natural to think of outcomes as compositions or transformations of other outcomes or objects. When I thin... (read more)
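A minimal sketch of the point (my own toy example, not from the original post): two equally legitimate VNM utility functions over the same apple outcomes, where only one of them treats "2 apples" as composed of two "1 apple"s. The axioms don't care; they only constrain choices between lotteries.

```python
# Toy illustration: VNM utilities are just numbers attached to outcomes.
# Nothing forces U("2 apples") to relate to U("1 apple") in any structured way.

def expected_utility(lottery, U):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * U[outcome] for p, outcome in lottery)

U_compositional = {"0 apples": 0.0, "1 apple": 1.0, "2 apples": 2.0}
U_arbitrary     = {"0 apples": 0.0, "1 apple": 1.0, "2 apples": 1.1}  # both valid VNM utilities

sure_apple = [(1.0, "1 apple")]
coin_flip  = [(0.5, "0 apples"), (0.5, "2 apples")]

print(expected_utility(sure_apple, U_compositional), expected_utility(coin_flip, U_compositional))  # 1.0 1.0 -> indifferent
print(expected_utility(sure_apple, U_arbitrary), expected_utility(coin_flip, U_arbitrary))          # 1.0 0.55 -> prefers the sure apple
```

Same outcomes, different (non-)composition, different decisions; any structure between outcomes has to be supplied by you, not by the formalism.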

4habryka
Yes, indeed quite important. This is a common confusion that has often led me down weird conversational paths. I think some microeconomics has made this clearest to me, because in there you seem to be constantly throwing tons of affine transformations at your utility functions to make them convenient and get you analytic solutions, and it becomes clear very quickly that you are not preserving the relative magnitude of your original utility function.
5Hazard
I think one of the reasons it took me so long to notice was that I was introduced to VNM utility in the context of game theory, and winning at card games. Most of those problems do have the property of the utility of some base scoring system composing well to generate the utility of various end games. Since that was always the case, I guess I thought that it was a property of utility, and not of the games.

I pointed out in this post that explanations can be confusing because you lack some assumed knowledge, or because the piece of info that will make the explanation click has yet to be presented (assuming a good/correct explanation to begin with). It seems like there can be a similar breakdown when facing confusion in the process of trying to solve a problem.

I was working on some puzzles in assembly code, and I made the mistake of interpreting hex numbers as decimal (treating 0x30 as 30 instead of 48). This led me to draw a memory map that looked really weir... (read more)
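For concreteness, a minimal illustration of the slip in Python rather than assembly (my own example):

```python
# The digits "30" name different numbers depending on the base you read them in.
print(int("30", 10))   # 30  (decimal reading)
print(int("30", 16))   # 48  (hex reading, i.e. 0x30)

# Misreading a hex offset as decimal shifts every address you compute by the gap
# between the two readings, which is how a memory map ends up looking really weird.
print(0x30 - 30)       # 18
```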

For anyone curious about what the sPoOkY and mYsTeRiOuS Michael Vassar actually thinks about various shit, many of his friends have blogs and write about what they chat about, and he's also been on several long form podcasts.

https://naturalhazard.xyz/ben_jess_sarah_starter_pack

https://open.spotify.com/episode/1lJY2HJNttkwwmwIn3kyIA?si=em0lqkPaRzeZ-ctQx_hfmA

https://open.spotify.com/episode/01z3WDSIHPDAOuVp1ZYUoN?si=VOtoDpw9T_CahF31WEhZXQ

https://open.spotify.com/episode/2RzlQDSwxGbjloRKqCh1xg?si=XuFZB1CtSt-FbCweHtTnUA

https://open.spotify.com/episode/33nrhLwr... (read more)

6niplav
Thank you for collecting those links :-) I've listened to two or three of the interviews (and ~three other talks from a long time ago), and I still have no clue what the central claims are, what the reasoning supporting them is &c. (I understand it most for Zvi Mowshowitz and Sarah Constantin, less for Jessica Taylor, and least for Benjamin Hoffman & Vassar). I also don't know of anyone who became convinced of or even understood any of Michael Vassar's views/stances through his writing/podcasts alone—it appears to almost always happen through in-person interaction.
4Nick_Tarleton
I have understood and become convinced of some of Michael's/Ben's/Jessica's stances through a combination of reading their writing and semi-independently thinking along similar lines, during a long period of time when I wasn't interacting with any of them, though I have interacted with all of them before and since.
2niplav
Thank you, that's useful evidence!
2habryka
(I think Jessica and Ben have both been great writers and I have learned a lot from both of them. I have also learned a bunch of things from Michael, but definitely not via his writing or podcasts or anything that wasn't in-person, or second-hand in-person. If you did learn something from the Michael podcasts or occasional piece of writing he has done, like the ones linked above, that would be a surprise to me)
2Nick_Tarleton
I've gotten things from Michael's writing on Twitter, but also wasn't distinguishing him/Ben/Jessica when I wrote that comment.
2habryka
Makes sense. My agree-react and my sense of Niplav's comments were specifically about Michael's writing/podcasts.
3AprilSR
I want to say I have to an extent (for all three), though I guess there's been second-hand in person interactions which maybe counts. I dunno if there's any sort of central thesis I could summarize, but if you pointed me at like any more specific topics I could take a shot at translating. (Though I'd maybe prefer to avoid the topic for a little while.) In general, I think an actual analysis of the ideas involved and their merits / drawbacks existing would've been a lot more helpful for me than just... people having a spooky reputation was.
2Hazard
Does Jessica's Anti-Normativity post or Ben's Can Crimes be Discussed Literally & Guilt, Shame, Depravity posts make sense to you? If there's specific posts you want to talk about not making sense / not being clear what the point is, I'm down to chat about them.
2Viliam
Ben's articles were easier to read for me; the general idea seems to be that people are sometimes hypocritical. Which is something I agree with, but it also doesn't sound very surprising. So I assume that this is the motte. Now is there a place where I could find the bailey, spelled out? I am tired of trying to decipher vague hints. (My guess would be that the bailey is something like "everyone is 100% hypocritical about everything 100% of the time, all people are actually 100% stupid and evil; except maybe for the small group of people around Michael Vassar" or something like that. Which of course sounds silly when you put it this way, so it works better if you only make hints and let people connect the dots for themselves, preferably in an altered mind state; it helps if they were already mentally unstable so they only need a nudge.)

... those posts are saying much more specific things than 'people are sometimes hypocritical'?

"Can crimes be discussed literally?":

  • some kinds of hypocrisy (the law and medicine examples) are normalized
  • these hypocrisies are / the fact of their normalization is antimemetic (OK, I'm to some extent interpolating this one based on familiarity with Ben's ideas, but I do think it's both implied by the post, and relevant to why someone might think the post is interesting/important)
  • the usage of words like 'crime' and 'lie' departs from their denotation, to exclude normalized things
  • people will push back in certain predictable ways on calling normalized things 'crimes'/'lies', related to the function of those words as both description and (call for) attack
  • "There is a clear conflict between the use of language to punish offenders, and the use of language to describe problems, and there is great need for a language that can describe problems. For instance, if I wanted to understand how to interpret statistics generated by the medical system, I would need a short, simple way to refer to any significant tendency to generate false reports. If the available simple terms were also attack words
... (read more)
4Viliam
Thank you for the summary. I guess my bubble is more cynical than the average population, so I may underestimate how shocking similar thoughts would be for them. I might also add the replication crisis in science, etc.

Yes. The world is bad. Almost everything is broken. People don't want to admit it. Those who understand how things work are often ashamed for their role in the system. Some respond by attacking those who point it out, or even those who merely refuse to participate in the same way.

...it still feels like I am waiting for the other shoe to drop. All these things, they make me feel sad. I feel bad about all the wasted opportunity to live better. I don't model most people as evil, just... overwhelmed by all the things that are wrong, and their inability to do something against it. Well, many are too stupid to care. Some people are genuinely evil. Many are going along with the flow, wherever it takes them. Most of human behavior is probably determined by habit; if you grow up in a dysfunctional environment, it will become your normal, but if your twin grew up in a better environment, they would recognize it as better.

Even now, there are people who name the unpleasant truths. People who spend a large part of their life fighting against some specific dysfunction. But the world is complicated, and problems too numerous.

So, what is that extra insight that I could gain by contemplating these things while taking drugs and listening to Vassar whispering dark thoughts into my ears? Me too, but nothing specific. Maybe it's like when you are high and you believe that you have amazing insights, but when you write them down and read them again when you are sober, there is nothing.
2Unnamed
Does it bother you that this is not what's happening in many of the examples in the post? e.g., With "the American hospital system is built on lies."
-3Viliam
After sleeping on it, it seems to me that the topic we are talking about is "staring into the abyss": whether, when, and how to do it properly, and for what outcome.

The easiest way is to not do it at all. Just pretend that everything is flowers and rainbows, and refuse to talk about the darker aspects of reality. This is what we typically do with little children. A part of that is parental laziness: by avoiding difficult topics we avoid difficult conversations. But another part is that children are not cognitively ready to process nontrivial topics, so we try to postpone the debates about darker things until later, when they get the capability. Some lazy parents overdo it; some kids grow up living in a fairy tale world. Occasional glimpses of darkness can be dismissed as temporary exceptions to the general okay-ness of the world. "Grandma died, but now she is happy in Heaven." At this level, people who try to disrupt the peace are dismissed relatively gently, accused of spoiling the mood and frightening the kids.

When this becomes impossible because the darkness pushes its way beyond our filters, the next lazy strategy is to downplay the darkness. Either it is not so bad, or there is some silver lining to everything. "Death gives meaning to life." "The animals don't mind dying so that we can have meat to eat; they understand it is their role in the system." "Slavery actually benefits the blacks; they do not have the mental capacity to survive without a master." "What doesn't kill you, makes you stronger." At this point the pushback against those trying to disrupt the peace is stronger; people are aware that their rationalizations are fragile. Luckily, we can reframe the rationalizations as a sign of maturity, and dismiss those who disagree with us as immature. "When you grow up, you will realize that..."

Another possible reaction is trying to join the abyss. Yes, bad things happen, but since they are inevitable, there is no point worrying about that. Heck, if
0Hazard
Okay, you made me realize I've been wrong about Michael. Your comment is the single most credible instance I've seen of him causing acute psychosis in an individual. Well, I guess it's more the idea of Michael (and Ben), because no one who reads the linked blog posts or listens to the linked podcasts could mistakenly think your comments had anything to do with their content.

I mean, it's possible a casual observer might mistake your earlier characterization of their content as, "isn't this just saying people can sometimes be hypocrites?" as merely garden variety functional illiteracy, but if they knew anything about this website and the high verbal IQ it selects for, they'd know to rule that possibility out immediately. I'd also forgive someone for mistaking your comments for garden variety tribalism and treating arguments as your soldiers, but again, one needs to take into account the context of the website we're on.

There's no way Viliam could expect to stay in good standing with this community if he pretended he couldn't read while also making up a totally fabricated version of what others are saying. Like, maybe if the texts/audio in question were hidden and he had privileged access to them he could leverage his reputation and get people to take his word for it, but they're all on the open internet and people can just read them! He would obviously have zero reason to expect anyone would cover for such flagrant nonsense.

So as unlikely as it seemed on priors, it really does look like Viliam has gone temporarily psychotic, with Michael Vassar as the proximal cause. Honestly this kinda scares me. I previously thought this was just dumb made up drama, but if people really can make people temporarily psychotic like this, it's a huge worldview shift for me and I'm gonna have to take some time to integrate it. I hope at the very least that you're in a safe environment and have loved ones that can help you out.
0mesaoptimizer
What? Michael Vassar has (AFAIK from Zack M. Davis' descriptions) not taken drugs or promoted becoming a drug addict or "killing yourself". If you hear his Spencer interview, you'll notice that he seems very sane and erudite, and clearly does not give off the unhinged 'Nick Land' vibe that you seem to be claiming that he has or he promotes. You are directly contributing to the increase of misinformation and FUD here, by making such claims without enough confidence or knowledge of the situation.
2Raemon
(I have not engaged with this thread deeply) I've talked to Michael Vassar many times in person. I'm somewhat confident he has taken LSD based on him saying so (although if this turned out wrong I wouldn't be too surprised, my memory is hazy). I definitely have the experience of him saying lots of things that sound very confusing and crazy, making pretty outlandish brainstormy-style claims that are maybe interesting, which he claims to take as literally true, that seem either false, or, at least require a lot of inferential gap. I have also heard him make a lot of morally charged, intense statements that didn't seem clearly supported. (I do think I have valued talking to Michael, despite this, he is one of the people who helped unstick me in certain ways, but, the mechanism by which he helped me was definitely via being kinda unhinged sounding.)
9habryka
I would take bets at 9:1 odds that Michael has taken large amounts of psychedelics. I would also take bets at similar odds that he promotes the use of psychedelics.

I'm reflecting back on this sequence I started two years ago. There's some good stuff in it. I recently made a comic strip that has more of my up to date thoughts on language here. Who knows, maybe I'll come back and synthesize things.

The way I see "Politics is the Mind Killer" get used, it feels like the natural extension is "Trying to do anything that involves high stakes or involves interacting with the outside world or even just coordinating a lot of our own Is The Mind Killer".

From this angle, a commitment to prevent things from getting "too political" to "avoid everyone becoming angry idiots" is also a commitment to not having an impact.

I really like how jessica re-frames things in this comment. The whole comment is interesting, here's a snippet:

Basically, if the issue is adversar

... (read more)

The original post was mostly about not UNNECESSARILY introducing politics or using it as examples, when your main topic wasn't about politics in the first place.  They are bad topics to study rationality on.  

They are good topics to USE rationality on, both to dissolve questions and to understand your communication goals.  

They are ... varied and nuanced in applicability ... topics to discuss on LessWrong.  Generally, there are better forums to use when politics is the main point and rationality is a tool for those goals.  And generally, there are better topics to choose when rationality is the point and politics is just one application.  But some aspects hit the intersection just right, and LW is a fine place.  

So a thing Galois theory does is explain:

Why is there no formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and application of radicals (square roots, cube roots, etc)?

Which makes me wonder; would there be a formula if you used more machinery than the normal stuff and radicals? What does "more than radicals" look like?

3AprilSR
I think people usually just use "the number is the root of this polynomial" in and of itself to describe them, which is indeed more than radicals. There probably are more roundabout ways to do it, though.
1paragonal
https://en.wikipedia.org/wiki/Bring_radical
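To sketch the shape of that answer (my paraphrase of the standard story; the linked article has the precise sign conventions): radicals alone can reduce any quintic to a one-parameter trinomial form, and adjoining a single extra operation, the Bring radical, is enough to solve that form.

```latex
% Rough sketch, not a derivation. Tschirnhaus transformations (built from radicals)
% take the general quintic to a trinomial form:
\[
  x^5 + a_4x^4 + a_3x^3 + a_2x^2 + a_1x + a_0 = 0
  \quad\longrightarrow\quad
  y^5 + p\,y + q = 0 .
\]
% "More than radicals" then means adjoining the Bring radical BR(a),
% defined as a root of
\[
  t^5 + t + a = 0 ,
\]
% after which the roots of any quintic have a closed form.
```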

There are two times when Occam's razor comes to mind. One is for addressing "crazy" ideas ala "The witch down the road did it", and one is for picking which legit-seeming hypotheses I might prioritize in some scientific context.

For the first one, I really like Eliezer's reminder that when going with "The witch did it" you have to include the observed data in your explanation.

For the second one, I've been thinking about the simplicity formulation that one of my professors uses. Roughly, A is simpler than B if all ... (read more)

1Lukas Finnveden
Maybe the less rough version is better, but this seems like a really bad formulation. Consider (a) an exact enumeration of every event that ever happened, making no prediction of the future, vs (b) the true laws of physics and the true initial conditions, correctly predicting every event that ever happened and every event that will happen. Intuitively, (b) is simpler to specify, and we definitely want to assign (b) a higher prior probability. But according to this formulation, (a) is simpler, since all future events are consistent with (a), while almost none are consistent with (b). Since both theories have equally much evidence, we'd be forced to assign higher probability to (a).
2Hazard
I think me adding more details will clear things up. The setup presupposes a certain amount of realism. Start with Possible Worlds Semantics, where logical propositions are attached to / refer to the set of possible worlds in which they are true. A hypothesis is some proposition. We think of data as getting some proposition (in practice this is shaped by the methods/tools you have to look at and measure the world), which narrows down the allowable possible worlds consistent with the data. Now is the part that I think addresses what you were getting at. I don't think there's a direct analog in my setup to your (a). You could consider the hypothesis/proposition, "the set of all worlds compatible with the data I have right now", but that's not quite the same. I have more thoughts, but first, do you still feel like your idea is relevant to the setup I've described?
1Lukas Finnveden
That does seem to change things... Although I'm confused about what simplicity is supposed to refer to, now. In a pure bayesian version of this setup, I think you'd want some simplicity prior over the worlds, and then discard inconsistent worlds and renormalize every time you encounter new data. But you're not speaking about simplicity of worlds, you're speaking about simplicity of propositions, right? Since a proposition is just a set of worlds, I guess you're speaking about the combined simplicity of all the worlds. And it makes sense that that would increase if the proposition is consistent with more worlds, since any of the worlds would indeed lead to the proposition being true. So now I'm at "The simplicity of a proposition is proportional to the prior-weighted number of worlds that it's consistent with". That's starting to sound closer, but you seem to be saying that "The simplicity of a proposition is proportional to the number of other propositions that it's consistent with"? I don't understand that yet. (Also, in my formulation we need some other kind of simplicity for the simplicity prior.)
3Hazard
I'm currently turning my notes from this class into some posts, and I'll wait to continue this until I'm able to get those up. Then, hopefully, it will be easier to see if this notion of simplicity is lacking. I'll let you know when that's done.

"Contradictions aren't bad because they make you explode and conclude everything, they're bad because they don't tell you what to do next."

Quote from a professor of mine who makes formalisms for philosophy of science stuff.

1Pattern
Contradictions tell you to fix the contradiction/s next.

Looking at my calendar over the last 8 months, it looks like my attention span for a project is about 1-1.5 weeks. I'm musing on what it would look like to lean into that. Have multiple projects at once? Work extra hard to ensure I hit save points before the weekends? Only work on things in week long bursts?

2Hazard
I'm noticing an even more granular version of this. Things that I might do casually (reading some blog posts) have a significant effect on what's loaded into my mind the next day. Smaller than the week level, I'm noticing a 2-3 day cycle of "the thing that was most recently in my head" and how it affects the question of "If I could work on anything rn what would it be?" This week on Tuesday I picked Wednesday as the day I was going to write a sketch. But because of something I was thinking before going to bed, on Wednesday my head was filled with thoughts on urbex. So I switched gears, and urbex thoughts ran their course through Wednesday, and on Thursday I was ready to actually write a sketch (comedy thoughts need to be loaded for that)
2Hazard
Possible hack related to small wins. Many of the projects that I stopped got stopped part way through "continuing more of the same". One was writing my Hazardous Guide to Words, and the other was researching how the internet works. Maybe I could work on one cohesive thing for longer if there was a significant victory and gear shift after a week of work. Like, if I was making a video game, "Yay, I finished making all the art assets, onto actual code" or something.
2Raemon
The target audience for Hazardous Guide is friends of yours, correct? (vaguely recall that) A thing that normally works for writing is that after each chunk, I get to publish a thing and get comments. One thing about Hazardous Guide is that it mostly isn't new material for LW veterans, so I could see it getting less feedback than average. Might be able to address by actually showing parts to friends if you haven't
2Hazard
Ooo, good point. I was getting a lot less feedback from that than from other things. There's one piece of feedback which is "am I on the right track?" and another that's just "yay, people are engaging!", both of which seem relevant to motivation.
2Raemon
If you can be deliberate about learning from projects, this could actually be a good setup – doing one project a week, learning what you can from it, and moving on actually seems pretty good if you're optimizing for skill growth.
2Hazard
Yeah, being explicit about 1 week would likely help. The projects that made me make this observation were all ones where I was trying to do more than a week's worth of stuff, and a week is where I decided to move to something else. I expect "I have a week to learn about X" would both take into account waning/waxing interest, and add a bit of rush-motivation.

Elephant in the Brain style model of signaling:

Actually showing that you have XYZ skill/trait is the most beneficial thing you can do, because others can verify you've got the goods and will hire you / like you / be on your team. So now there's an incentive for everyone to be constantly displaying their skills/traits. This takes up a lot of time and energy, and I'm gonna guess that anti-competition norms made "showing off" a bad thing to do, to prevent this "over saturation".

So if there's an "no showing-of... (read more)

2Ruby
This has been my model too, deriving from EitB. But it's probably not just about preventing the over-saturation, it's also to the benefit of those who are more skilled at signaling covertly to promote a norm that disadvantages those who only have the skills, but not the covert-signaling skills.
2Hazard
Yeah, I see those playing together in the form of the base norm being about anti-competition, and then people can want to enforce the norm both from a general "I'll get punished if I don't support it" and from "I personally can skillfully subvert it, so enforcing this norm helps me keep the unskilled out".
2Dagon
Be careful not to oversimplify - norms are complex, mutable, and context-sensitive. "no showing off" is not a very complete description of anyone's expectations. No showing off badly is closer, but "badly" is doing a LOT of work - it is in itself a complex and somewhat recursive norm. Finding out where "showing" skills is aligned with "exercising" those skills to achieve an outcome is non-trivial, but ever so wonderful if you do find a profession and project where it's possible. See also https://en.wikipedia.org/wiki/Countersignaling , the idea where if you're confident that you're assumed to have some skills, you actually show HIGHER skills by failing to signal those skills.
2Hazard
Thanks for reminding me of the nuance. Yeah, the "badly" does a lot of work, but also puts me in the right head space to guess at when I do and don't think real people would get annoyed at someone "showing off".

When I first read The Sequences, why did I never think to seriously examine if I was wrong/biased/partially-incomplete in my understanding of these new ideas?

Hyp: I believed that fooling one's self was all identity driven. You want to be a type of person, and your bias lets you comfortably sink into it. I was unable to see my identity. I also had a self narrative of "Yeah, this Eliezer dude, what ever, I'll just see if he has anything good to say. I don't need to fit in with the rationalists."

I saw myself as "just" taking... (read more)

Legibility. Seeing like a state. Reason isn't magic. The secret of our success.

There is chaos and one (or a state) is trying to predict and act on the world. It sure would be easier if things were simpler. So far, this seems like a pretty human/standard desire.

I think the core move of legibility is to declare that everything must be simple and easy to understand, and if reality (i.e people) aren't as simple as our planned simplification, well too bad for people.

As a rationalist/post-rationalist/person who thinks good, you don't have to do th... (read more)

"If we're all so good at fooling ourselves, why aren't we all happy?"

The zealot is only "fooling themselves" from the perspective of the "rational" outsider. The zealot has not fooled themselves. They have looked at the world and their reasoning processes have come to the clear and obvious conclusion that []. They have gri-gri, and it works.

But it seems like most of us are much better at fooling ourselves than we are at "happening to use the full capacity of our minds to come to false and useful conclusions"... (read more)

(tid bit from some recent deep self examination I've been doing)

I incurred judgment-fueled "motivational debt" by aggressively buying into the idea "Talk is worthless, the only thing that matters is going out and getting results" at a time where I was so confident I never expected to fail. It felt like I was getting free motivation, because I saw no consequences to making this value judgment about "not getting results".

When I learned more, the possibility of failure became more real, and that cannon of judgement I'd built swiveled around to point at me. Oops.


8Matt Goldenberg
This seems to be a specific instance of a more general phenomenon that Leverage Research calls "De-simplification". The basic phenomenon goes like this:
  1. According to Leverage Research, your belief structure must always be such that you believe you can achieve your terminal values/goals.
  2. When you're relatively powerless and unskilled, this means that by necessity you have to believe that the world is more simple than it is and things are easier to do than they are, because otherwise there'd be no way you could achieve your goals/values.
  3. As you gain more skill and power, your ability to tackle complex and hard problems becomes greater, so you can begin to see more complexity and difficulty in the world and the problems you're trying to solve.
  4. If you don't know about this phenomenon, it might feel like power and skills don't actually help you, and you're just treading water. In the worst case, you might think that power and ability actually make things worse.
In fact, what's going on is that your new power and ability made salient things that were always there, but which you could not allow yourself to see. Being able to see things as harder or more complex is actually a signal that you've leveled up.
2Hazard
This is a very useful frame! Is the blog on Leverage Research's site where most of their stuff is, or would I go somewhere else if I wanted to read about what they've been up to?
4Matt Goldenberg
There's not really anywhere to go to read what leverage has been up to, they're a very private organization. They did have an arm called paradigm academy that did teaching, which is where I learned this. However leverage recently downsized, and I'm not sure about the status of Paradigm or other splinter organizations.

I've spent the last three weeks making some simple apps to solve small problems I encounter, and practice the development cycle. Example.

I've already been sold on the concept of developing things in a Lean MVP style for products. Shorter feedback loops between making stuff and figuring out if anyone wants it. Less time spent making things people don't want to give you money for. It was only these past few weeks where I noticed the importance of a MVP approach for personal projects. Now it's a case of shortening the feedback loops betwe... (read more)

I love attention, but I HATE asking for it. I've noticed this a few times before in various forms. This time it really clicked. What changed?

  • This time around, the insight came in the context of performing magic. This made the "I love attention" part more obvious than other times, when I merely noticed, "I have an allergic reaction to seeming needy."
  • I was able to remember some of the context that this pattern arose from, and can observe "Yes, this may have helped me back then, but here are ways it isn't as helpful now, and it's not automatically terrible
... (read more)
2Raemon
I realize this is my fault, but when I click "what changed" I'm not actually sure what comment it's linking to. (I'll improve the comment-linking UI this week hopefully so it's more clear which comments link where). Which comment did you mean to be linking to? I'm interested in more details about what was going on in the particular example here (i.e. performing magic as in stage-magic? What made that different?)
4Hazard
http://www.jhazard.com/posts/magic_is_dead.html This is less about the noticing and more about effects of the previous frame.
4Raemon
I like this post, and think it'd be fine to crosspost to LW.
2Hazard
I'll be writing a post about this later. The comment it links to is the first child comment of the tippy top comment of this page. (yes, magic the performance art)

One of the more useful rat-techniques I've enjoyed has been the reframing of "Making a decision right here right now" to "Making this sort of decision in these sorts of scenarios". When considering how to judge a belief based on some arguments, the question becomes, "Am I willing to accept this sort of conclusion based on this sort of argument in similar scenarios?"

From that, if you accept claim-argument pair A, "Dude, if electric forks were a good idea, someone would have done it by now", but not claim-argument... (read more)

6Hazard
Similar is the re-framing, "What is the actual decision I am making?" One friend was telling me, "This linear algebra class is a waste of my time, I'd get more by skipping lecture and reading the book." When I asked him if he actually thought he'd read the book if he didn't go to lecture, he said probably not. Here, it felt like the choice was, "Go to lecture, or not?" but it would be better framed as, "Given I'm trying to learn linear algebra, what feasible paths do I have for learning it?" If you don't actually expect to be able to self-study, then you can no longer think of "just not going to lecture" as an option.

There are a few instances where I've had to "re-have an idea" 3 times, each in a slightly different form, before it stuck and affected me in any significant way. I noticed this when going through some old notebooks and seeing stub-thoughts of ideas that I was currently fleshing out (and had been unaware that I had given this thing thought before). One example is with TAPs. Two winters ago I was writing about an idea I called "micro habits/attitudes" and they felt super important, but nothing ever came of them. Now I see that basically... (read more)

2Hazard
I recently was going through the past 3 years of notebooks, and this pattern is incredibly persistent.

So Kolmogorov Complexity depends on the language, but the difference between complexity in any two languages differs by at most a constant (whatever the size of an interpreter from one to the other is).

This seems to mean that the complexity ordering of different hypotheses can be rearranged by switching languages, but "only so much". So

K_A(x) < K_A(y)

and

K_B(y) < K_B(x)

are both totally possible, as long as the gap between K_A(x) and K_A(y) stays within (roughly) the interpreter constant c_{A,B}.

I see how if you care about orders of magnitude, the description... (read more)
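A toy sketch of the "only so much" intuition (my own made-up numbers; actual Kolmogorov complexity is uncomputable, so this is just the bookkeeping implied by the interpreter bound):

```python
# If language B can run language-A programs via a fixed interpreter of length c,
# then K_B(s) <= K_A(s) + c for every string s (and symmetrically with another constant).
# Orderings can flip between languages, but only within that constant-sized wiggle room.

K_A = {"x": 10, "y": 12}     # hypothetical description lengths in language A: x looks simpler
c = 5                        # hypothetical length of an A-interpreter written in B

K_B_upper = {s: k + c for s, k in K_A.items()}   # bounds implied by the interpreter

# One legal assignment of K_B that flips the ordering (y simpler than x in B):
K_B = {"x": 14, "y": 11}
assert all(K_B[s] <= K_B_upper[s] for s in K_A)   # respects K_B <= K_A + c
assert all(K_A[s] <= K_B[s] + c for s in K_A)     # and K_A <= K_B + c the other way
print(K_B_upper, K_B)
```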

2Viliam
I am not an expert, but my guess is that KC is only used in abstract proofs, where these details do not matter. Things like:
  • KC is not computable
  • there is a constant "c" such that KC of any message is smaller than its length plus c
Etc.
2Hazard
Yeah. I guess the only place I can remember seeing it referenced in action was with regard to assigning priors for Solomonoff induction. So I wonder if it changes anything there (though Solomonoff is already pretty abstracted away from other things, so it might not make sense to do a sensitivity analysis)

Mini Post, Litany of Gendlin related.

Changing your mind feels like changing the world. If I change my mind and now think the world is a shittier place than I used to (all my friends do hate me), it feels like I just teleported into a shittier world. If I change my mind and now think the world is a better place than I used to (I didn't leave the oven on at home, so my house isn't going to burn down!) it feels like I've just been teleported into a better world.

Consequence of the above: if someone is trying to change your mind, it feels like t... (read more)

1Pattern
The nature of this experience may vary between people. I'd say finding out something bad and having to deal with the impact of that is more common/of an issue than rejecting the way things are (or might be), though. Offhandedly I'm not sure "rat" makes a difference here?
  1. Figuring out what to do with new troubling information - making a plan and acting on it - can be hard. (Knowing what to do might help people with "accepting" their "new" reality?)
  2. Just because you understand part of an issue doesn't mean you've wrapped your head around all the implications.
  3. Realizing something "bad" can take a while. Processing might not happen all at once.
  4. If it's taking you a long time to work something out, you might already know what the answer is, and be afraid of it.
  5. This gets into an area where things vary depending on the person (and the situation) - sometimes people may have more trouble accepting "new negative realities", sometimes people are too fast to jump to negative conclusions.

Collecting some recent observations from some self study:

5Hazard
In my freshman fall of university, I realized I was incredibly judgmental of myself and felt I should be capable of everything. I "dealt with it" and felt less suffering and self-loathing/judgment in the following months. I more or less thought I had "learned how to stop being so harsh on myself."

Now I see that I never reduced the harshness. What I did was convince my fear/judgement/loathing to use a new rubric for grading me. I did a huge systems overhaul, successfully started a shit ton of habits, and built a much better ability to focus. It was as if to say "See? Look at this awesome plan I have! Yes, I implicitly buy into the universe where it's imperative I do [all the shit]. All I ask is that you give me time. This plan is great and I'll totally be able to do [all the stuff], just not right now." I was fused with the judgement enough that I wasn't able to question it, only negotiate with it for better terms. The penalty for failure was still "feel like a miserable piece of shit".

I now have a much better sense of what led to this fear and judgement being built up in the first place, and that understanding has led to not doing [all the stuff] feeling more like "a less cool world than others" and not "hell, complete with eternal torment and self-loathing"
2Hazard
This comment

Something I noticed about what I take certain internal events to mean:

Over the past 4 years I've had trouble being in touch with "what I want". I've made a lot of progress in the past year (a huge part was noticing that I'd previously intentionally cut off communication with the parts of me that want).

Previously when I'd ask "what do I want right now?" I was basically asking, "What would be the most edifying to my self-concept that is also doable right now?"

So I've managed to stop doing that a lot. La... (read more)

Being undivided is cool. People who seem to act as one monolithic agent are inspiring. They get stuff done.

What can you do to try and be undivided if you don't know any of the mental and emotional moves that go into this sort of integration? You can tell everyone you know, "I'm this sort of person!" and try super super hard to never let that identity falter, and feel like a shitty miserable failure whenever it does.

How funny that I can feel like I shouldn't be having the "problem" of "feeling like I shouldn't be... (read more)

4Kaj_Sotala
You could also just avoid the feelings of miserable failure by reclassifying all of your failures as not-failures and then forgetting about them. :-)
3Hazard
More Malcolm Ocean: "So the aim isn’t to be productive all the time. It’s to be productive at the times when your internal society of mind generally agrees it would be good to be productive. It’s not to be able to motivate yourself to do anything. It’s to be able to motivate yourself to do anything it makes sense to do." I notice some of my older implicit and explicit strategies were, "Well first I'll get good at being able to do any arbitrary thing that I (i.e. the dominant self-concept/identity I want to project) pick, and then I'll work on figuring out what I actually want and care about." Oops. Also, noting that the "then I'll figure out what I want" was more "Well I've got no idea how to figure out what I want, so let's do anything else!" Oops.

Reasons why I currently track or have tracked various metrics in my life:

1. A mindfulness tool. Taking the time to record and note some metric is itself the goal.

2. Have data to be able to test an hypothesis about ways some intervention would affect my life. (i.e Did waking up earlier give me less energy in the day?)

3. Have data that enables me to make better predictions about the future (mostly related to time tracking, "how long does X amount of work take?")

4. Understanding how [THE PAST] was different from [THE PRESENT] to help defeat the Deadl... (read more)

Current beliefs about how human value works: various thoughts and actions can produce a "reward" signal in the brain. I also have lots of predictive circuits that fire when they anticipate a "reward" signal is coming as a result of what just happened. The predictive circuits have been trained to use the patterns of my environment to predict when the "reward" signal is coming.

Getting an "actual reward" and a predictive circuit firing will both be experienced as something "good". Because of this, predictive ... (read more)

1Hazard
Weirdness that comes from reflection: In this frame, I can self-reflect on a given circuit and ask, "Does this circuit actually push me towards what I think is good?" When doing this, I'll be using some more meta/higher-order circuits (concepts I've built up over time about what a "good" brain looks like) but I'll also be using lower level circuits, and I might even end up using the evaluated circuit itself in this evaluation process. Sometimes this reflection process will go smoothly. Sometimes it won't. But one takeaway/claim is you have this complex roundabout process for re-evaluating your values when some circuits begin to think that other circuits have diverged from "good".

Because of this ability to reflect and change, it seems correct to say that "I value things conditional on my environment" (where environment has a lot of flex, it could be as small as your work space, or as broad as "any existing human culture").

Example. Let's say there was literally no scarcity for survival goods (food, water, etc). It seems like a HUGE chunk of my values and morals are built up inferences and solutions to resource allocation problems. If resource scarcity was magically no longer a problem, much of my values would have lost their connection to reality. From what I've seen so far of my own self-reflection process, it seems likely that over time I would come to reorganize my values in such a post-scarcity world. I've also currently got no clue what that reorganization would look like.
1Hazard
FAI worry: A human-in-the-loop AI that only takes actions that get human approval (and whose expected outcomes have human approval) hits big problems when the context the AI is acting in is a very different context from where our values were trained. Is there any way around this besides simulating people having their values re-organized given the new environment? Is this what CEV is about?

The slogan version of some thoughts I've been having lately is in the vein of "Hurry is the root of all evil". Thinking in terms of code: I've been working in a new dev environment recently and have felt the siren song of, "Copy the code in the tutorial. Just import all the packages they tell you to. Don't sweat the details man, just go with it. Just get it running." All that as opposed to "Learn what the different abstractions are grounded in, figure out what tools do what, figure out exactly what I need, and use w... (read more)

The fact that utility and probability can be transformed while maintaining the same decisions matches what the algo feels like from the inside. When thinking about actions, I often just feel like a potential action is "bad", and it takes effort to piece out if I don't think the outcome is super valuable, or if there's a good outcome that I don't think is likely.
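As a quick sanity check of the "can be transformed" part (my own sketch, and only the simplest case: a positive affine transform of the utility function, with probabilities left alone):

```python
# Expected-utility comparisons, and hence decisions, survive U -> a*U + b with a > 0.

def expected_utility(lottery, U):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * U[outcome] for p, outcome in lottery)

U = {"win": 10.0, "draw": 4.0, "lose": -2.0}
U2 = {o: 3.0 * u + 7.0 for o, u in U.items()}    # arbitrary positive affine transform

risky = [(0.6, "win"), (0.4, "lose")]
safe = [(1.0, "draw")]

prefer_risky_before = expected_utility(risky, U) > expected_utility(safe, U)
prefer_risky_after = expected_utility(risky, U2) > expected_utility(safe, U2)
assert prefer_risky_before == prefer_risky_after   # same choice either way
print(prefer_risky_before)   # True
```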

1Hazard
Thinking about belief in belief. You can have things called "beliefs" which are of type action. "Having" this belief is actually your decision to take certain actions in certain scenarios. You can also have things called "beliefs" which are of type probability, and are part of your deep felt sense of what is and isn't likely/true. A belief-action that has a high EV (and feels "good") will probably feel the same as a belief-probability that is close to 1. Take a given sentence/proposition. You can put a high EV on the belief-action version of that sentence (mayhaps it has important consequences for your social groups) while putting a low probability on the belief-probability version of the sentence.

Meta Thoughts: The above idea is not fundamentally different from belief in belief or crony beliefs, both of which I read a year or more ago. What I just wrote felt like a genuine insight. What do I think I understand now that I don't think I understood then? I think that recently (past two months, since CFAR) I've had better luck with going into "Super-truth" mode, looking into my own soul and asking, "Do you actually believe this?" Now, I've got many more data points of, "Here's a thing that I totally thought that I believed(probability) but actually I believed(action)." Maybe the insight is that it's easy to get mixed up between belief-prob and belief-action because the felt sense of probability and EV are very very similar, and genuinely non-trivial to peel apart. ^yeah, that feels like it. I think previously I thought, "Oh cool, now that I know that belief-action and belief-prob are different things, I just won't do belief-action". Now, I believe that you need to teach yourself to feel the difference between them, otherwise you will continue to mistake belief-actions for belief-probs.

Meta-Meta-Thought: The meta-thoughts were super useful to do, and I think I'll do it more often, given that I often have a sense of, "Hmmmm, isn't this basically [insert post in

Don't ask people for their motives if you are only asking so that you can shit on their motives. Normally when I see someone asking someone else, "Why did you do that?" I interpret the statement to come from a place of, "I'm already about to start making negative judgments about you, this is the last chance for you to offer a plausible excuse for your behavior before I start firing."

If this is in fact the dynamic, then no one is incentivised to give you their actual reasons for things.

2Elo
I have been looking at intentions and trying to act with intentions in mind. No one ever has ill intentions, they can have a "make the sale at your detriment" intention. But no one ever has a "worse off for everyone" intention.
3Hazard
I like that phrasing. Yeah, I was speaking and (slightly) thinking about people with the pure motive to harm, which wouldn't be a typical case of this. Rephrase with, "Don't blah blah blah if you will end up making explicit negative judgments at them," and you have a better version of my thought.

I'm looking at notebook from 3 years ago, and reading some scribbles from past me excitedly describing how they think they've pieced together that anger and the desire to punish are adaptations produced by evolution because they had good game theoretic properties. In the haste of the writing, and in the number of exclamation marks used, I can see that this was a huge realization for me. It's surprising how absolutely normal and "obvious" the idea is to me now. I can only remember a glimmer of the "holy shit!"ness that I felt at the time. It's so easy to forget that I haven't always thought the way I currently do. As if I'm typical-minding my past self.

An uncountable finite set is any finite set that contains the source code to a super intelligence that can provably prevent anyone from counting all of its elements.

2Hazard
I still think this is genius.

In a fight between the CMU student body and the rationalist community, CMU would probably forget about the fight unless it was assigned for homework, and the rationalists would all individually come to the conclusion that it is most rational to retreat. No one would engage in combat, and everyone would win.

I notice a disparity between my ability to parse difficult texts when I'm just "reading for fun" versus when I'm trying to solve a particular problem for a homework assignment. It's often easier to do it for homework assignments. When I've got time that's just, "reading up on fun and interesting things," I bounce-off of difficult texts more often than I would like.

After examining some recent instances of this happening, I've realized that when I'm reading for fun, my implicit goal has often been, "... (read more)

2Hazard
This flared up again recently. Besides "wanting insight" often I simply am searching for fluency. I want something that I can fluently engage with, and if there's an impediment to fluency, I bounce off. Wanting an experience of fluency is a very different goal from wanting to understand the thing. Rn I don't have too many domains where I have technical fluency. I'm betting if I had more of that, it would extend my patience/ability to slog through texts that are hard for me.

I've been working on some more emotional bugs lately, and I'm noticing that many of the core issues that I'm dragging up are ones I've noticed at various points in the past and then just... ? I somehow just managed to forget about them, though I remember that in round 1 it also took a good deal of introspection for these issues to rise to the top. Keeping a permanent list of core emotional bugs would be an easy fix. The list would need to be somewhere I look at least once a week. I don't always have to be working on all of them, but I at least need to not forget that these problems exist.

4Qiaochu_Yuan
Probably not an accident. Forgetfulness is one of the main tools your mind will use to get you to stop thinking about things. If you make a list you might end up flinching away from looking at the list.
1Hazard
Is that a prediction about how one's default "forget painful stuff" mechanisms work, or have you previously made a list and also ended up ignoring it? You've written elsewhere about conquering a lot of emotional bugs in the past year, and I'd be interested to know what you did to keep those bugs in mind and not forget about them.
4Qiaochu_Yuan
I have forgotten about important emotional bugs before, and have seen other people literally forget the topic of the conversation when it turns to a sufficiently thorny emotional bug. The thing that usually happens to my lists is that they feel wrong and I have to regenerate them from scratch constantly; they're like Focusing labels that expire and aren't quite right anymore. The past year I was dealing with what felt to me like approximately one very large bug (roughly an anxious-preoccupied attachment thing), so it was easy to remember.

"With a sufficiently negligent God, you should be able to hack the universe."

Just a fun little thought I had a while ago. The idea being that if your deity intervenes with the world, or if there are prayers, miracles, "supernatural creatures" or anything of that sort, then with enough planning and chutzpah, you should be able to hack reality unless God has got a really close eye on you.

This partially came from a fiction premise I have yet to act on. Dave (garden variety atheist) wakes up in hell. Turns out that the Christian God TM is real, though a bit of a dunce. Dave and Satan team up and go on a wacky adventure to overthrow God.

Quick thoughts on TAPS:

The past few weeks I've been doing a lot of posture/physical-tic based TAPs (not slouching, not biting lips, etc). These seem to be very well fit to TAPs, because the trigger is a physical movement, making it easier to notice. I've noticed roughly three phases of noticing triggers

  1. I suddenly become aware of the fact I've been doing the action.
  2. I become aware of the fact that I've initiated the action.
  3. Before any physical movement happens, I notice the "impulse" to do the thing.

To make a TAP run deep, it se... (read more)

Here's a pattern I want to outline and possible suggestions on how to fix it.

Sometimes when I'm trying to find the source of the bug, I make incorrect updates. An explanation of what the problem might be pops to mind, and it seems to fit (ex. "oops, this machine is Big Endian, not Little Endian"). Then I work on the bug some more, things still don't work, and at some point I find the real problem. Today, when I found a bug I was hunting for, I had a moment of, "Oh shit, an hour ago I updated my beliefs about how this machine w... (read more)

3Hazard
Had a similar style bug while programming today. I caught it much faster, though I can't say if that can be attributed to previously identifying this pattern. But I did think of the previous bug as soon as I made the mental leap to figure out what was wrong this time.

Previously when I'd encountered the distinction between synthetic and analytic thought (as philosophers used them), I didn't quite get it. Yesterday I started reading Kant's Prolegomena and have a new appreciation for the idea. I used to imagine that "doing the analytic method" meant looking at definitions. 

I didn't imagine the idea actually being applied to concepts in one's head. I imagined the process being applied to a word. And it seemed clear to me that you're never going to gain much insight or wisdom from investigating a word's definition and g... (read more)

This comment will collect things that I think beginner rationalists, "naive" rationalists, or "old school" rationalists (these distinctions are in my head, I don't expect them to translate) do which don't help them.

2Hazard
You have an exciting idea about how people could do things differently. Or maybe you think of norms which, if they became mainstream, would drastically increase epistemic sanity. "If people weren't so sensitive and attached to their identities then they could receive feedback and handle disagreements, allowing us to more rapidly work towards the truth." (example picked because versions of this stance have been discussed on LW) Sometimes the rationalist is thinking "I've got no idea how becoming more or less sensitive, gaining a thicker or thinner skin, or shedding or gaining identity works in humans. So I'm just going to black box this, tell people they should change, negatively reinforce them when they don't, and hope for the best." (ps I don't think everyone thinks this, though I know at least one person who does) (most relevant parts in italics) Comments will be continued thoughts on this behavior.
3Hazard
When I see this behavior, I worry that the rationalist is setting themselves up to have a blindspot when it comes to themselves being "overly sensitive" to feedback. I worry about this because it's happened to me. Not with reactions to feedback, but with other things. It's partially the failure mode of thinking that some state is beneath you, being upset and annoyed at others for being in that state, and this disdain making it hard to see when you engage in it yourself. K, I get that thinking a mistake is trivial doesn't automatically mean you're doomed to secretly make it forever. Still, I worry.
3Hazard
The way this can feel to the person being told to change: "None of us care about how hard this is for you, nor the pain you might be feeling right now. Just change already, yeesh." (it can be true or false that the rationalist actually thinks this. I think I've seen some people playing the rationalist role in this story who explicitly endorsed communicating this sentiment) Now, I understand that making someone feel emotionally supported takes various levels of effort. Sometimes it might seem like the effort required is not worth the loss in pursuing the original rationality target. We could have lots of fruitful discussion about what would be good norms for drawing that line. But I think another problematic thing that can happen is that, in the rationalist's rush to get back on track to pursuing the important target, they intentionally or unintentionally communicate, "You aren't really in pain. Or if you are, you shouldn't be in pain / you suck or are weak for feeling pain right now." Being told you aren't in pain SUCCCKS, especially when you're in pain. Being reprimanded for being in pain SUCCCKS, especially when you're in pain. Claim: Even if you've reached a point where it would be too costly to give the other person adequate emotional support, the least you can do is not make them think they're being gaslit about their pain or reprimanded for it.
1Pattern
Errata: "or [un]intentionally communicate". The dialogue refers to two possibilities, A and B, but only A is referenced afterwards. (I wonder what the word for 'telling people their pain doesn't matter' is.)
2Hazard
Yeah, I only talked about A after. Is the parenthetical rhetorical? If not I'm missing the thing you want to say.
1Pattern
Non-rhetorical. The spelling suggestion suggests an improvement which is largely unambiguous/style-agnostic. Suggesting adding a word requires choosing a word - a matter which is ambiguous/style dependent. Sometimes writing contains grammatical errors - but when people other than the author suggest fixes, the fixes don't have the same voice. This is why I included a prompt for what word you (Hazard) would use. For clarity, I can make less vague comments in the future. What I wanted to say, rephrased: Here the [] serve one purpose - suggesting an improvement, even when there are multiple choices.
2Hazard
Aaaah, I see now. Just edited to what I think fits.
3Hazard
If you really had no idea... fine, you can't do much better than trying to operantly condition a person towards the end goal. In my world, getting a deep understanding of how to change is the biggest goal/point of rationality (I've given myself away, I care about AI Alignment less than you do ;). So trying to skip to the rousing debate and clash of ideas while just hoping everyone figures out how to handle it feels like leaving most of the work undone.
1Pattern
Meta note: Me upvoting the comment above could make things go out of order. It could also be seen as selection - get rid of the people who aren't X. This risks getting rid of people who might learn, which could be an issue if the goal of that place (whether it's LW, SSC, etc.) includes learning. An organization consisting only of people who have a PhD might be an interesting place, perhaps enabling collaboration and cutting-edge work that couldn't be done anywhere else. But without a place where people can get a PhD, eventually there will be no such organizations.
2Hazard
(Meta: the order wasn't important, thanks for thinking about that though) The selection part is something else I was thinking about. One of my thoughts was your "If there's no way to train PhDs, they die out." And the other was me being a bit skeptical of how big the pool would be right this second if we adopted a really thick skin policy. Reflecting on that second point, I realize I'm drawing from my day to day distribution, and don't have thoughts about how thick skinned most LW people are or aren't.
2Hazard
Thought that is related to this general pattern, but not this example. Think of having an idea of an end skill that you're excited by (doing Bayesian updates IRL, successfully implementing TAPs, being swayed by "solid logical arguments"). Also imagine not having a theory of change. I personally have sometimes not noticed that there is or could be an actual theory of how to move from A to B (often because I thought I should already be able to do that), and so would use the black-box negative reinforcement strategy on myself. Being in that place involved being stuck for a while and feeling bad about being stuck. Progress was only made when I managed to go, "Oh. There are steps to get from A to B. I can't expect to already know them. I must focus on understanding this progression, and not on just punishing myself whenever I fail."
2Hazard
I've been thinking about this as a general pattern, and have specifically filled in "you should be thick skinned" to make it concrete. Here's a thought that applies to this concrete example but doesn't necessarily apply to the general pattern. There's all sorts of reasons why someone might feel hurt, put-off, or upset about how someone gives them feedback or disagrees with them. One of these can be something like, "From past experience I've learned that someone who uses XYZ language or ABC tone of voice is saying what they said to try and be mean to me, and they will probably try to hurt and bully me in the future." If you are the rationalist in this situation, you're annoyed that someone thinks you're a bully. You aren't a bully! And it sure would suck if they convinced other people that you were a bully. So you tell them that, duh, you aren't trying to be mean, this is just how you talk, and they should trust you. If you're the person being told to change, you start to get even more worried (after all, this is exactly what your piece of shit older brother would do to you): this person is telling you to trust that they aren't a bully when you have no reason to, and you're worried they're going to turn the bystanders against you. Hmmmm, after writing this out the problem seems much harder to deal with than I first thought.

Have some horrible jargon: I spit out a question or topic and ask you for your NeMRIT, your Next Most Relevant Interesting Take.

Either give your thoughts about the idea I presented as you understand it, or, if that's boring, give thoughts that interest you that seem conceptually closest to the idea I brought up.


3Pattern
MIST*, Most Interesting Similar Take? *This is a backronym.
3Hazard
I like that because I can verb it while speaking. "How much cattle could you fit in this lobby? You can answer directly or mist."

Kevin Zollman at CMU looks like he's done a decent amount of research on group epistemology. I plan to read the deets at some point; here's a link if anyone wants to do it first and post something about it.

I often don't feel like I'm "doing that much", but find that when I list out all of the projects, activities, and thought streams going on, there's an amount that feels like "a lot". This has happened when reflecting on every semester in the past 2 years.

Hyp: Until I write down a list of everything I'm doing, I'm just probing my working memory for "how much stuff am I up to?" Working memory has a limit, and reliably I'm only going to get a handful of things. Any time I'm doing more things than fit in working memory, stopping to write them all down will produce a "Huh, that's more than it feels like."

4Matt Goldenberg
Relatedly, the KonMari cleaning method involves taking all items of a category (e.g. all books) and putting them in one big pile before clearing them out. You often feel like you don't own "that much stuff" and are almost always surprised by the size of the pile.

Short framing on one reason it's often hard to resolve disagreements:

[with some frequency] disagreements don't come from the same place where they are found. Your brain is always running inference on "what other people think". From a statement like, "I really don't think it's a good idea to homeschool", your mind might already be guessing at a disagreement you have 3 concepts away, yet only ping you with a "disagreement" alarm.

Combine that with a decent ability to confabulate. You ask yourself "Why do I disagree about homeschooling?" and you are given a plethora of possible reasons to disagree and start talking about those.

True if you squint at it right: Learning more about "how things work" is a journey that starts at "Life is a simple and easy game with random outcomes" and ends in "Life is a complex and thought intensive game with deterministic outcomes"

Idea that I'm going to use in these short form posts: for ideas/things/threads that I don't feel are "resolved", I'm going to write "*tk*" by the most relevant sentence for easy search later. (I vaguely remember Tim Ferriss talking about using "tk" as a substitute for "do research and put the real numbers in", since "tk" is not a letter pair that shows up much in English words.)

I've taken a lot of programming courses at university, and now I'm taking some more math and proof based courses. I notice that it feels considerably worse to not fully understand what's going on in Real Analysis than it did to not fully understand what was going on in Data Structures and Algorithms.

When I'm coding and pulling on levers I don't understand (outsourcing tasks to a library, or adding this line to the project because, "You just have to so it works") there's a yuck feeling, but there's also, "We... (read more)

The other day at lunchtime I realized I'd forgotten to make and pack a lunch. It felt odd that I only realized it right when I was about to eat and was looking through my bag for food. Tracing back, I remembered that something abnormal had happened in my morning routine, and after dealing with the pop-up, I just skipped a step in my routine and never even noticed.

One thing I've done semi-intentionally over the past few years is decrease the amount of ambient thought that goes to logistics. I used to consider it to be "useless worrying", but given how a small disruption was able to make me skip a very important step, now I think of it more as trading off efficiency for "robustness".

Here is an abstraction of a type of disagreement:

Claim: it is common for one to be more concerned with questions like "How should I respond to XYZ system?" than with "How should I create an accurate model of XYZ system?"

Let's say the system / environment is social interactions.

Liti: Why are you supposed to give someone a strong handshake when you meet them?

Hale: You need to give a strong handshake

Here Hale misunderstands Liti as asking for information about the proper procedure to perform. Really, Liti wants to know how this system ... (read more)

Last fall I hosted a discussion group with friends on three different occasions. I pitched it as "get interesting people together and intentionally have an interesting conversation"; it was not a rationalist discussion group. One thing that I noticed was that whenever I wanted to really fixate on and solve a problem we identified, it felt wrong, like it would break some implicit rule I never remembered setting.

Later I pinpointed the following as the culprit. I personally can't consistently produce quality clear thinking at "conversatio... (read more)

3Raemon
Dunno how easy this is to implement in random non-rationalist group settings, but a) if you're the one who brought the group together, you can set rules (see the Archipelago model of community standards). b) In NYC (in an admittedly rationalist setting), I had success implementing the 12-second rule of think-before-speaking.

Highly speculative thought.

I don't often get angry/upset/exasperated with the coding or math that I do, but today I've gotten royally pissed at some Java project of mine. Here's a guess at a possible mechanism.

The more human-like a system feels, the easier it is to anthropomorphize and get angry at. When dealing with my code today, it has felt less like the world of being able to reason carefully over a deterministic system, and more like dealing with an unpredictable, possibly hostile agent. Mayhaps part of my brain pattern matches that behaviour to something intelligent -> something human -> apply anger strategy.

Good description of what was happening in my head when I was experiencing the depths of the uncanny valley of rationality:

I was more genre savvy than reality savvy. Even when I first started to learn about biases, I was more genre-of-biases savvy than actual bias-savvy. My first contact with the sequences successfully prevented me from being okay with double-thinking, and mostly removed my ability to feel okay about guiding my life via genre-savvyness. I also hadn't learned enough to make any sort of superior "basis" from which to act and decide. So I hit some slumps.

Likely false semi-explicit belief that I've had for a while: changes in patterns of behavior and thought are "merely" a matter of conditioning/training. Whenever it's hard to change behavior, it's just because the system is already in motion in a certain direction, and it takes energy/effort to push it in a new direction.

Now, I'm more aware of some behaviors that seem to have access to some optimization power that has the goal of keeping them around. Some behaviors seem to be part of a deeper strategy run by some sub-process ... (read more)

I've always been off-put when someone says, "free will is a delusion/illusion". There seems to be a hint that one's feelings or experiences are in some way wrong. Here's one way to think you have fundamental free will without being 'deluded' -> "I can imagine a system where agents have an ontologically basic 'decision' option, and it seems like that system would produce experiences that match up with what I experience; therefore I live in a system with fundamental free-will". Here, it's not ... (read more)

Person I talked to once: "Moral rules are dumb because they aren't going to work in every scenario you're going to encounter. You should just judge everything case by case."

The thing that feels most wrong about this to me is the proposition that there is an action you can do which is, "Judge everything case by case". I don't think there is. You wouldn't say, "No abstraction covers every scenario, so you should model everything in quarks."

For some reason or another, it sometimes feels like you can "model t... (read more)

I can't remember the exact quote or where it came from, so I'm going to paraphrase.

The end goal of meditation is not to be able to calm your mind while you are sitting cross-legged on the floor, it's to be able to calm your mind in the middle of a hurricane.

Mapping this onto rationality, there are two questions you can ask yourself.

How rational can I be while making decisions in my room?

How rational can I be in the middle of a hurricane?

I think the distinction is important because recognizing it allows you to train both skills separately.

2Elo
I suspect there is relevance here to maps at different levels of detail. For example, playing a ball sport: I can intellectually know a lot more than I can carry out in my system 1 while running from the other players. For S1 I need tighter models that I can use on the fly. Not sure if that matches perfectly to meditating in a hurricane.

Some thoughts on a toy model of productivity and well-being

T = set of tasks

S = set of physiological states

R = level of "reflective acceptance" of current situation (ex. am I doing "good" or "bad")

Quality of Work = some_function(s,t) + stress_applied

Quality of Subjective Experience = Quality - stress + R


Some states are stickier than others. It's easier to jump out of "I'm distracted" than it is to escape "I've got the flu". States can be better or worse at doing tasks, and tasks can be of varyi... (read more)
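To make the moving parts concrete, here's a minimal sketch of how the pieces above might compose. All the names and numbers are my own illustrative assumptions, and I'm assuming "Quality" in the second equation means Quality of Work; some_function is whatever fit a state has for a task, which the model deliberately leaves as a black box.

```python
# Minimal sketch of the toy model above. All names and numbers are
# illustrative assumptions; state_task_fit stands in for the unspecified
# some_function(s, t).

def quality_of_work(state_task_fit: float, stress_applied: float) -> float:
    # Quality of Work = some_function(s, t) + stress_applied
    return state_task_fit + stress_applied

def quality_of_experience(work_quality: float, stress_applied: float,
                          reflective_acceptance: float) -> float:
    # Quality of Subjective Experience = Quality of Work - stress + R
    return work_quality - stress_applied + reflective_acceptance

# Example: a "distracted" state doing a hard task, pushed through with stress.
fit = 2.0      # stand-in for some_function(s="distracted", t="write a proof")
stress = 3.0   # effort forced on top of the state
R = 1.0        # reflective acceptance of the current situation

work = quality_of_work(fit, stress)            # 5.0
felt = quality_of_experience(work, stress, R)  # 3.0
print(work, felt)
```

The sketch makes the tradeoff obvious: stress buys work quality now, but it gets subtracted straight back out of subjective experience, so the only terms that improve both are better state-task matching and higher R.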

2Hazard
Or, if you're okay with being a bit less of a canonical robust agent and don't want to take on the costs of reliability, you could try to always match your work to your state. I'm thinking more of "mood" than "state" here. Be infinitely creative chaos. Oooh, I don't know any blog post to cite, but Duncan mentioned at a CFAR workshop the idea of being a King or a Prophet. Both can be reliable and robust agents. The King does so by putting out Royal Decrees about what they will do, and then executing said plans. The Prophet gives you prophecies about what they will do in the future, and they come true. While you can count on both the decrees of the king and the prophecies of the prophet, the actions of the prophet are more unruly and chaotic, and don't seem to make as much sense as the king's.

I notice that there’s almost a sort of pressure that builds up when I look at someone, as if it’s a literal indicator of, “Dude, you’re approaching a socially unacceptable staring time!”

It seems obvious what is going on. If you stare at someone for too long, things get “weird” and you come off as a “creep”. I know that. Most people know that. And since we all have common knowledge about that rule, I understand that there are consequences to staring at someone for more than a second or two. Thus, the reason I don’t stare at people for very long is because ... (read more)

3Hazard
Another example of "I was running a less general and more hacky algorithm than anticipated". On a bike trip through Vietnam, very few people in the countryside spoke English. Often, we'd just talk at each other in our respective languages and gesticulate wildly to actually make our points. I noticed that I was still smiling and laughing in response to things said to me in Vietnamese, even though I had no idea what was going on. This has led me to see the decision to laugh or smile as mostly based on non-verbal stuff, and not on, "Yes, I've understood the thing you have said, and what you said is funny."

I'm currently reading The Open Veins of Latin America, which is a detailed history of how Latin America has been screwed over across the centuries. It reminds me of a book I read a while ago, Confessions of an Economic Hit-man. Though it's clear the author thinks that what has happened to Latin America has been unjust, he does a good job of not adding lots of "and therefore..."s. It's mostly a poetic historical account. There are a lot more cartoonishly evil things that have happened in history than I realized.

I'm simulatin... (read more)

Fun Framing: Empiricism is trying to predict TheUniverse(t = n + delta) using TheUniverse(t=n) as your blackbox model.

Sometimes the teacher makes a typo. In conversation, sometimes people are "just wrong". So a lot of the time, when you notice confusion, it can be dismissed with "the other person just screwed up". But reality doesn't screw up. It just is. Always pay attention to confusion that comes from looking at reality.

(Also, when you come to the conclusion that another person "screwed up", you aren't completely done until you have some understanding of how they might have screwed up.)

A rephrasing of ideas from the recent Care Less post.

Value allocation is not zero-sum, though time allocation is. In order to not break down at the "colossal injustice of it all", a common strategy is to operate as if value is zero-sum.

To be as effective as possible, you need to be able to see the dark world, one that is beyond the reach of God. Do not explain why the current state of affairs is acceptable. Instead, look at reality very carefully and move towards the goal. Explaining why your world is acceptable shuts down the sense that more is ... (read more)

I just finished reading and rereading Debt: The First 5000 Years. I was tempted to go, "Yep, makes sense, I was basically already thinking about money and debt like that." Then I remembered that not but two months ago I was arguing with a friend and asserting that there was nothing dysfunctional about being able to sell your kidney. It's hard to remember what I used to think about certain things. When there's a concrete reminder, sometimes it comes as a shock that I used to think differently from how I do. For whatever the big things I've changed... (read more)

2Qiaochu_Yuan
Worth reading the mountains of criticism of this book, e.g. these blog posts. I still got something interesting out of reading it though.
1Hazard
Most of what I've gotten out of the book has been lenses for viewing coordination issues, and less "XYZ events in history happened because of ABC." (and skimming the posts you linked, they seemed to have more to do with the latter) I think reading Nassim Taleb's Black Swan was the first time I immediately googled "book name criticism" afterwards. Taleb had made some minor claim about network theory not being used for anything practical, which turned out to just be wrong (a critic cited it being used for developing solutions to malaria outbreaks). Seeing that made me realize I hadn't even wondered whether or not the claim was true when I first read it. Since then I've been more skeptical of any given detail an author uses, unless it seems like a "basic" element of their realm of expertise (like, I don't doubt any of the anthropological details Graeber presented about the Tiv, though I may disagree with his extrapolations).

"It seems like you are arguing/engaging with something I'm not saying."

I can remember an argument with a friend who went to great lengths to defend a point he didn't feel super strongly about, all because he implicitly assumed I was about to go "Given point A, X conclusion, checkmate."

It seems like a pretty common "argumental movement" is to get someone to agree to a few simple propositions, with the goal of later "trapping" them with a dubious "and therefore!". People are good at spotting this, an... (read more)

I really like the phrasing alkjash used, One Inch Punch. Recently I've been paying closer attention to when I'm in "doing" or "trying" mode. Whether or not those are quality handles, there do seem to be multiple forms of "doing" that have distinct qualities to them.

It's way easier for me to "just" get out of bed in the morning, than to try and convince myself getting out of bed is a good idea. It's way easier for me to "just" hit send on an email or message that might not be worded ... (read more)
