I wish there were more discussion posts on LessWrong.
Right now it feels like it weakly, if not moderately, violates some sort of cultural norm to publish a discussion post (similarly, though to a lesser extent, for Shortform posts). Something low effort of the form "X is a topic I'd like to discuss. A, B and C are a few initial thoughts I have about it. What do you guys think?"
It seems to me like something we should encourage though. Here's how I'm thinking about it. Such "discussion posts" currently happen informally in social circles. Maybe you'll text a friend. Maybe you'll bring it up at a meetup. Maybe you'll post about it in a private Slack group.
But if it's appropriate in those contexts, why shouldn't it be appropriate on LessWrong? Why not benefit from having it be visible to more people? The more eyes you get on it, the better the chance someone has something helpful, insightful, or just generally useful to contribute.
The big downside I see is that it would screw up the post feed. Like when you go to lesswrong.com and see the list of posts, you don't want that list to have a bunch of low quality discussion posts you're not interested in. You don't want to spend time and energy sifting...
I just learned some important things about indoor air quality after watching Why Air Quality Matters, a presentation by David Heinemeier Hansson, the creator of Ruby on Rails. It seems like something that is both important and under the radar, so I'll brain dump + summarize my takeaways here, but I encourage you to watch the whole thing.
Project idea: virtual water coolers for LessWrong
Previous: Virtual water coolers
Here's an idea: what if there was a virtual water cooler for LessWrong?
Seems like an experiment that is both cheap and worthwhile.
If there is interest I'd be happy to create an MVP.
(Related: it could be interesting to abstract this and build a sort of "virtual water cooler platform builder" such that eg. LessWrong could use the builder to build a virtual water cooler platform for LessWrong and OtherCommunity could use the builder to build a virtual water cooler platform for their community.)
In How to Get Startup Ideas, Paul Graham provides the following advice:
Live in the future, then build what's missing.
Something that feels to me like it's present in the future and missing in today's world: OkCupid for friendship.
Think about it. The internet is a thing. Billions and billions of people have cheap and instant access to it. So then, logistics are rarely an obstacle for chatting with people.
The actual obstacle in today's world is matchmaking. How do you find the people to chat with? And similarly, how do you communicate that there is a strong match so that each party is thinking "oh wow this person seems cool, I'd love to chat with them" instead of "this is a random person and I am not optimistic that I'd have a good time talking to them".
This doesn't really feel like such a huge problem though. I mean, assume for a second that you were able to force everyone in the world to spend an hour filling out some sort of OkCupid-like profile, but for friendship and conversation rather than romantic relationships. From there, it seems doable enough to figure out whatever matchmaking algorithm.
I think the issue is more so getting people to fill out the survey in the first place. T...
Every day I check Hacker News. Sometimes a few times, sometimes a few dozen times.
I've always felt guilty about it, like it is a waste of time and I should be doing more productive things. But recently I've been feeling a little better about it. There are things about coding, design, product, management, QA, devops, etc. etc. that feel like they're "in the water" to me, where everyone mostly knows about them. However, I've been running into situations where people turn out to not know about them.
I'm realizing that they're not actually "in the water", and that the reason I know about them is probably because I've been reading random blog posts from the front page of Hacker News every day for 10 years. I probably shouldn't have spent as much time doing this as I have, but I feel good about the fact that I've gotten at least something out of it.
Against "yes and" culture
I sense that in "normie cultures"[1] directly, explicitly, and unapologetically disagreeing with someone is taboo. It reminds me of the "yes and" from improv comedy.[2] From Wikipedia:
"Yes, and...", also referred to as "Yes, and..." thinking, is a rule-of-thumb in improvisational comedy that suggests that an improviser should accept what another improviser has stated ("yes") and then expand on that line of thinking ("and").
If you want to disagree with someone, you're supposed to take a "yes and" approach where you say something somewhat agreeable about the other person's statement, and then gently take it in a different direction.
I don't like this norm. From a God's Eye perspective, if we could change it, I think we probably should. Doing so is probably impractical in large groups, but might be worth considering in smaller ones.
(I think this really needs some accompanying examples. However, I'm struggling to come up with ones. At least ones I'm comfortable sharing publicly.)
[1] The US, at least. It's where I live. But I suspect it's like this in most western cultures as well.
[2] See also this Curb Your Enthusiasm clip.
Something frustrating happened to me a week or two ago.
I wish that we had a culture of words being used more literally.
I've noticed that there's a pretty big difference in the discussion that follows from me showing someone a draft of a post and asking for comments and the discussion in the comments section after I publish a post. The former is richer and more enjoyable whereas the latter doesn't usually result in much back and forth. And I get the sense that this is true for other authors as well.
I guess one important thing might be that with drafts, you're talking to people who you know. But I actually don't suspect that this plays much of a role, at least on LessWrong. As an anecdote, I've had some incredible conversations with the guy who reviews drafts of posts on LessWrong for free, despite never having talked to him previously.
I wonder what it is about drafts. I wonder if it can or should be incorporated into regular posts.
Against difficult reading
I kinda have the instinct that if I'm reading a book or a blog post or something and it's difficult, then I should buckle down, focus, and try to understand it. And that if I don't, it's a failure on my part. It's my responsibility to process and take in the material.
This is especially true for a lot of more important topics. Like, it's easy to clearly communicate what time a restaurant is open -- if you find yourself struggling to understand this, it's probably the fault of the restaurant, not you as the reader -- but for quantum ...
I think I just busted a cached thought. Yay.
I'm 30 years old now and have had Achilles tendinitis since I was about 21. Before that I would get my cardio by running 1-3 miles a few times a week, but because of the tendinitis I can't do that anymore.
Knowing that cardio is important, I spent a bunch of time trying different forms of cardio. Nothing has worked though.
6th vs 16th grade logic
I want to write something about 6th grade logic vs 16th grade logic.
I was talking to someone, call them Alice, who works at a big, well-known company, call it Widget Corp. Widget Corp needs to advertise to hire people. They only advertise on Indeed and Google though.
Alice was telling me that she wants to explore some other channels (LinkedIn, ZipRecruiter, etc.). But in order to do that, Widget Corp needs evidence that advertising on those channels would be cheap enough. They're on a budget and really want to avoid spending money they...
I am a web developer. I remember reading some time in these past few weeks that it's good to design a site such that if the user zooms in/out (eg. by pressing cmd+/-), things still look reasonably good. It's like a form of responsive design, except instead of responding to the width of the viewport your design responds to the zoom level.
Anyway, since reading this, I started zooming in a lot more. For example, I just spent some time reading a post here on LessWrong at a 170% zoom level. And it was a lot more comfortable. I've found this to be a helpful little life hack.
Thought: It's better to link to tag pages rather than individual blog posts. Like linking to the Reversed Stupidity Is Not Intelligence tag page instead of the post itself.
There is something inspiring about watching this little guy defeat all of the enormous sumo wrestlers. I can't quite put my finger on it though.
Maybe it's the idea of working smart vs working hard. Maybe something related to fencepost security, like how there's something admirable about, instead of trying to climb the super tall fence, just walking around it.
Noticing confusion about the nucleus
In school, you learn about forces. You learn about gravity, and you learn about the electromagnetic force. For the electromagnetic force, you learn about how likes repel and opposites attract. So two positively charged particles close together will repel, whereas a positively and a negatively charged particle will attract.
Then you learn about the atom. It consists of a bunch of protons and a bunch of neutrons bunched up in the middle, and then a bunch of electrons orbiting around the outside. You learn that protons are p...
"It's not obvious" is a useful critique
I recall hearing "it's not obvious that X" a lot in the rationality community, particularly in Robin Hanson's writing.
Sometimes people make a claim without really explaining it. Actually, this happens a lot. Oftentimes the claim is made implicitly. This is fine if that claim is obvious.
But if the claim isn't obvious, then that link in the chain is broken and the whole argument falls apart. Not that it's been proven wrong or anything, just that it needs work. You need to spend the time establishing that claim...
Why not more specialization and trade?
I can probably make something like $100/hr doing freelance work as a programmer. Yet I'll spend an hour cooking dinner for myself.
Does this make any sense? Imagine if I spent that hour programming instead. I'd have $100. I can spend, say, $20 on dinner, end up with something that is probably much better than what I would cook, and have $80 left over. Isn't that a better use of my time than cooking?
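The opportunity-cost arithmetic here can be sketched out in a few lines (the numbers are the illustrative ones above, not real data, and they ignore taxes, transaction costs, and the fact that some people enjoy cooking):

```python
# Rough opportunity-cost comparison: do a task yourself vs. work an extra
# hour at your rate and pay someone else to do it. All numbers are
# illustrative assumptions from the text, not real figures.

def net_value_of_outsourcing(hourly_rate, cost_of_outsourced_task):
    """Money left over if you work the hour and pay for the task instead."""
    return hourly_rate - cost_of_outsourced_task

# $100/hr programming vs. a $20 dinner: you come out $80 ahead on paper.
leftover = net_value_of_outsourcing(hourly_rate=100, cost_of_outsourced_task=20)
print(leftover)  # 80
```

Of course, the whole question of the post is why this naive calculation doesn't drive behavior in practice.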
Similarly, sometimes I'll spend an hour cleaning my apartment. I could instead spend that hour making $100, and paying some...
The other day Improve your Vocabulary: Stop saying VERY! popped up in my YouTube video feed. I was annoyed.
This idea that you shouldn't use the word "very" has always seemed pretentious to me. What value does it add if you say "extremely" or "incredibly" instead? I guess those words have more emphasis and a different connotation, and can be better fits. I think they're probably a good idea sometimes. But other times people just want to use different words in order to sound smart.
I remember there was a time in elementary school when I was working on a paper...
Virtual watercoolers
As I mentioned in some recent Shortform posts, I recently listened to the Bayesian Conspiracy podcast's episode on the LessOnline festival and it got me thinking.
One thing I think is cool is that Ben Pace was saying how the valuable thing about these festivals isn't the presentations, it's the time spent mingling in between the presentations, and so they decided with LessOnline to just ditch the presentations and make it all about mingling. Which got me thinking about mingling.
It seems plausible to me that such mingling can and should h...
Sometimes I think to myself something along these lines:
I could read this post/comment in detail and respond to it, but I expect that others won't put much effort into the discussion and it will fizzle out, and so it isn't worth it for me to put the effort in in the first place.
This presents a sort of coordination problem, and one that would be reasonably easy to solve with some sort of assurance contract-like functionality.
There's a lot to say about whether or not such a thing is worth pursuing, but in short, it seems like trying it out as an experiment w...
Using examples of people being stupid
I've noticed that a lot of cool concepts stem from examples of people being stupid. For example, I recently re-read Detached Lever Fallacy and Beautiful Probability.
Detached Lever Fallacy:
...Eventually, the good guys capture an evil alien ship, and go exploring inside it. The captain of the good guys finds the alien bridge, and on the bridge is a lever. "Ah," says the captain, "this must be the lever that makes the ship dematerialize!" So he pries up the control lever and carries it back to his ship, after which his s
Closer to the truth vs further along
Consider a proposition P. It is either true or false. The green line represents us believing with 100% confidence that P is true. On the other hand, the red line represents us believing with 100% confidence that P is false.
We start off not knowing anything about P, so we start off at point 0, right at that black line in the middle. Then, we observe data point A. A points towards P being true, so we move upwards towards the green line a moderate amount, and end up at point 1. After that we observe data point B. B is weak ...
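The movement along that spectrum can be sketched with Bayes' rule in odds form (the likelihood ratios below are made-up numbers, purely to illustrate a "moderate" update followed by a "weak" one):

```python
# Track belief in proposition P as data points arrive.
# Each data point contributes a likelihood ratio P(data | P) / P(data | not-P);
# multiplying it into the odds is Bayes' rule in odds form.

def update(prior_prob, likelihood_ratio):
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.5                 # point 0: the black line in the middle, total uncertainty
p = update(p, 4.0)      # data point A: moderate evidence for P -> 0.8
p = update(p, 1.5)      # data point B: weak evidence for P -> ~0.857
print(round(p, 3))      # 0.857
```

The point of the post's framing is that each observation moves you some distance toward the green or red line, and the size of the step depends on the strength of the evidence.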
More dakka with festivals
In the rationality community people are currently excited about the LessOnline festival. Furthermore, my impression is that similar festivals are generally quite successful: people enjoy them, have stimulating discussions, form new relationships, are exposed to new and interesting ideas, express that they got a lot out of it, etc.
So then, this feels to me like a situation where More Dakka applies. Organize more festivals!
How? Who? I dunno, but these seem like questions worth discussing.
Some initial thoughts:
A line of thought that I want to explore: a lot of times when people appear to be close-minded, they aren't actually being (too) close-minded. This line of thought is very preliminary and unrefined.
It's related to Aumann's Agreement Theorem. If you happen to have two perfectly Bayesian agents who are able to share information, then yes, they will end up agreeing. In practice people aren't 1) perfectly Bayesian or 2) able to share all of their information. I think (2) is a huge problem. A huge reason why it's hard to convince people of things.
Well, I guess w...
I think it's generally agreed that pizza and steak (and a bunch of other foods) taste significantly better when they're hot. But even if you serve it hot, usually about halfway through eating, the food cools enough such that it's notably worse because it's not hot enough.
One way to mitigate this is to serve food on a warmed plate. But that doesn't really do too much.
What makes the most sense to me would be to serve smaller portions in multiple courses. Like instead of a 10" pie, serve two 5" pies. Or instead of a 16oz ribeye, divide it into four 4oz ribeye...
Long text messages
I run into something that I find somewhat frustrating. When I write text messages to people, they're often pretty long. At least relative to the length of other people's text messages. I'll write something like 3-5 paragraphs at times. Or more.
I've had people point this out as being intimidating and a lot to read. That seems odd to me though. If it were an email, it'd be a very normal-ish length, and wouldn't feel intimidating, I suspect. If it were a blog post, it'd be quite short. If it were a Twitter thread, it'd be very normal and not...
Words as Bayesian Evidence
Alice: Hi, how are you?
Bob: Good. How are you?
Alice: Actually, I'm not doing so well.
Let me ask you a question. How confident are you that Bob is doing good? Not very confident, right? But why not? After all, Bob did say that he is doing good. And he's not particularly well known for being a liar.
I think the thing here is to view Bob's words as Bayesian evidence. They are evidence of Bob doing good. But how strong is this evidence? And how do we think about such a question?
Let's start with how we think about such a question. I...
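One way to formalize "how strong is this evidence?" is as a likelihood ratio: how much more likely is Bob to say "good" if he's actually doing well than if he isn't? All the probabilities below are made-up numbers for illustration:

```python
# Bob says "good". How much does that tell us about how Bob is actually doing?
# The strength of the evidence is the likelihood ratio
#   P(says "good" | doing well) / P(says "good" | not doing well).
# Since "good" is the polite default answer either way, the ratio is close
# to 1 and the words barely move our belief.

def posterior(prior, p_words_given_true, p_words_given_false):
    """Posterior P(doing well | said 'good') via Bayes' rule."""
    numerator = prior * p_words_given_true
    denominator = numerator + (1 - prior) * p_words_given_false
    return numerator / denominator

# Assumed numbers: people say "good" 99% of the time when doing well,
# and 90% of the time even when they're not.
p = posterior(prior=0.7, p_words_given_true=0.99, p_words_given_false=0.90)
print(round(p, 3))  # 0.72 -- barely moved from the 0.7 prior
```

This matches the intuition in the dialogue: Bob's "Good" is technically evidence, but because nearly everyone says it regardless of how they're doing, it's very weak evidence.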
There's a concept I want to think more about: gravy.
Turkey without gravy is good. But adding the gravy... that's like the cherry on top. It takes it from good to great. It's good without the gravy, but the gravy makes it even better.
An example of gravy from my life is starting a successful startup. It's something I want to do, but it is gravy. Even if I never succeed at it, I still have a great life. Eg. by default my life is, say, a 7/10, but succeeding at a startup would be so awesome it'd make it a 10/10. But instead of this happening, my brain pulls a ...
Squinting
...“You should have deduced it yourself, Mr Potter,” Professor Quirrell said mildly. “You must learn to blur your vision until you can see the forest obscured by the trees. Anyone who heard the stories about you, and who did not know that you were the mysterious Boy-Who-Lived, could easily deduce your ownership of an invisibility cloak. Step back from these events, blur away their details, and what do we observe? There was a great rivalry between students, and their competition ended in a perfect tie. That sort of thing only happens in stories, Mr Potter,
As a programmer, compared to other programmers, I am extremely uninterested in improving the speed of web apps I work on. I find that (according to my judgement) it rarely has more than a trivial impact on user experience. On the other hand, I am usually way more interested than others are in things like improving code quality.
I wonder if this has to do with me being very philosophically aligned with Bayesianism. Bayesianism preaches updating your beliefs incrementally, whereas the alternative is a lot more binary. For example, the way scientific experiments ...
I've had success with something: meal prepping a bunch of food and freezing it.
I want to write a blog post about it -- describing what I've done, discussing it, and recommending it as something that will quite likely be worthwhile for others as well -- but I don't think I'm ready. I did one round of prep that lasted three weeks or so and was a huge success for me, but I don't think that's quite enough "contact with reality". I think there's a risk that, after more "contact with reality", it proves to be not nearly as useful as it currently seems. So yeah, ...
I've gotta vent a little about communication norms.
My psychiatrist recommended a new drug. I went to take it last night. The pills are absolutely huge and make me gag. But I noticed that the pills look like they can be "unscrewed" and the powder comes out.
So I asked the following question (via chat in this app we use):
For the NAC, the pill is a little big and makes me gag. Is it possible to twist it open and pour the powder on my tongue? Or put it in water and drink it?
The psychiatrist responded:
...Yes it seems it may be opened and mixed into food or somethin
Subtextual politeness
In places like Hacker News and Stack Exchange, there are norms that you should be polite. If you said something impolite and Reddit-like such as "Psh, what a douchebag", you'd get flagged and disciplined.
But that's only one form of impoliteness. What about subtextual impoliteness? I think subtextual impoliteness is important too. Similarly important. And I don't think my views here are unique.
I get why subtextual impoliteness isn't policed though. Perhaps by definition, it's often not totally clear what the subtext behind a statement i...
Life decision that actually worked for me: allowing myself to eat out or order food when I'm hungry and pressed for time.
I don't think the stress of frantically trying to get dinner together is worth the costs in time or health. And after listening to this podcast episode, I suspect that, I'm not sure how to say this: "being overweight is bad, but like, it's not that bad, and stressing about it is also bad since stress is bad, all of this in such a way where stressing out over being marginally more overweight is worse for your health than being a little ...
I think that, for programmers, having good taste in technologies is a pretty important skill. A little impatience is good too, since it can drive you to move away from bad tools and towards good ones.
These points seem like they should generalize to other fields as well.
Inverted interruptions
Imagine that Alice is talking to Bob. She says the following, without pausing.
That house is ugly. You should read Harry Potter. We should get Chinese food.
We can think of it like this. Approach #1:
t=1: Alice says "That house is ugly."
t=2: Alice says "You should read Harry Potter."
t=3: Alice says "We should get Chinese food."

Suppose Bob wants to respond to the comment of "That house is ugly." Due to the lack of pauses, Bob would have to interrupt Alice in order to get that response in. On the other hand, if Alice paused in betwee...
Something that I run into, at least in normie culture, is that writing (really) long replies to comments has a connotation of being contentious, or even hostile (example). But what if you have a lot to say? How can you say it without appearing contentious?
I'm not sure. You could try to signal friendliness by using lots of smiley faces and stuff. Or you could be explicit about it and say stuff like "no hard feelings".
Something about that feels distasteful to me though. It shouldn't need to be done.
Also, it sets a tricky precedent. If you start using smiley ...
Capabilities vs alignment outside of AI
In the field of AI we talk about capabilities vs alignment. I think it is relevant outside of the field of AI though.
I'm thinking back to something I read in Cal Newport's book Digital Minimalism. He talked about how the Amish aren't actually anti-technology. They are happy to adopt technology. They just want to make sure that the technology actually does more good than harm before they adopt it.
And they have a neat process for this. From what I remember, they first start by researching it. Then they have small groups of pe...
Spreading the seed of ideas
A few of my posts actually seem like they've been useful to people. OTOH, a large majority don't.
I don't have a very good ability to discern this from the beginning though. Given this situation, it seems worth "spreading the seed" pretty liberally. The chance of it being a useful idea usually outweighs the chance that it mostly just adds noise for people to sift through. Especially given the fact that the LW team encourages low barriers for posting stuff. Doubly especially as shortform posts. Triply especially given that I person...
Notice when trust batteries start off low
...The basic idea is that your trust battery is pre-charged at 50% when you’re first hired or start working with someone for the first time. Every interaction you have with your colleagues from that point on then either charges, discharges, or maintains the battery - and as a result, affects how much you enjoy working with them and trust them to do a good job.
The things that influence your trust battery charge vary wildly - whether the other person has done what they said they’ll do, how well you get on with that per
Covid-era restaurant choice hack: Korean BBQ. Why? Think about it...
They have vents above the tables! Cool, huh? I'm not sure how much that does, but my intuition is that it cuts the risk in half at least.
Science as reversed stupidity
Epistemic status: Babbling. I don't have a good understanding of this, but it seems plausible.
Here is my understanding. Before science was a thing, people would derive ideas by theorizing (or worse, from the Bible). It wasn't very rigorous. They would kinda just believe things willy-nilly (I'm exaggerating).
Then science came along and was like, "No! Slow down! You can't do that! You need to have sufficient evidence before you can justifiably believe something like that!" But as Eliezer explains, science is too slow. It judges t...
I was just listening to the Why Buddhism Is True episode of the Rationally Speaking podcast. They were talking about what the goal of meditation is. The interviewee, Robert Wright, explains:
the Buddha said in the first famous sermon, he basically laid out the goal, "Let's try to end suffering."
What an ambitious goal! But let's suppose that it was achieved. What would be the implications?
Well, there are many. But one that stands out to me as particularly important as well as ignored, is that it might be a solution to existential risk. Maybe if people we...
Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives.
- Scott Alexander, Meditations on Moloch
There's been talk recently about there being an influx of new users to LessWrong and a desire to prevent this influx from harming the signal-to-noise ratio on LessWrong too much. I wonder: what if it cost something like $1 to make an account? Or $1/month? Some trivial amount of money that serves as a filter for unserious people.
From Childhoods of exceptional people:
...The importance of tutoring, in its more narrow definition as in actively instructing someone, is tied to a phenomenon known as Bloom’s 2-sigma problem, after the educational psychologist Benjamin Bloom who in the 1980s claimed to have found that tutored students
. . . performed two standard deviations better than students who learn via conventional instructional methods—that is, “the average tutored student was above 98% of the students in the control class.”
Simply put, if you tailor your instruction to a sing
Nonfiction books should be at the end of the funnel
Books take a long time to read. Maybe 10-20 hours. I think that there are two things that you should almost always do first.
1. Read a summary. This usually gives you the 80/20 and only takes 5-10 minutes. You can usually find a summary by googling around. Derek Sivers and James Clear come to mind as particularly good resources.
2. Listen to a podcast or talk. Nowadays, from what I could tell, authors typically go on a sort of podcast tour before releasing a book in order to promote it. I find that this typi
I've been in pursuit of a good startup idea lately. I went through a long list I had and deleted everything. None were good enough. Finding a good idea is really hard.
One way that I think about it is that a good idea has to be the intersection of a few things.
Bayesian traction
A few years ago I worked on a startup called Premium Poker Tools as a solo founder. It is a web app where you can run simulations about poker stuff. Poker players use it to study.
It wouldn't have impressed any investors. Especially early on. Early on I was offering it for free and I only had a handful of users. And it wasn't even growing quickly. This all is the opposite of what investors want to see. They want users. Growth. Revenue.
Why? Because those things are signs. Indicators. Signal. Traction. They point towards an app being a big hi...
Collaboration and the early stages of ideas
Imagine the lifecycle of an idea being some sort of spectrum. At the beginning of the spectrum is the birth of the idea. Further to the right, the idea gets refined some. Perhaps 1/4 the way through the person who has the idea texts some friends about it. Perhaps midway through it is refined enough where a rough draft is shared with some other friends. Perhaps 3/4 the way through a blog post is shared. Then further along, the idea receives more refinement, and maybe a follow up post is made. Perhaps towards the ve...
I wish more people used threads on platforms like Slack and Discord. And I think the reason to use threads is very similar to the reason why one should aim for modularity when writing software.
Here's an example. I posted this question in the #haskell-beginners Discord channel asking whether it's advisable for someone learning Haskell to use a linter. I got one reply, but it wasn't as a thread. It was a normal message in #haskell-beginners. Between the time I asked the question and got a response, there were probably a couple dozen other messages. So then, ...
This is super rough and unrefined, but there's something that I want to think and write about. It's an epistemic failure mode that I think is quite important. It's pretty related to Reversed Stupidity is Not Intelligence. It goes something like this.
You think 1. Alice thinks 2. In your head, you think to yourself:
Gosh, Alice is so dumb. I understand why she thinks 2. It's because A, B, C, D and E. But she just doesn't see F. If she did, she'd think 1 instead of 2.
Then you run into other people being like:
...Gosh, Bob is so dumb. I understand why he thinks 1.
When I think about problems like these, I use what feels to me like a natural generalization of the economic idea of efficient markets. The goal is to predict what kinds of efficiency we should expect to exist in realms beyond the marketplace, and what we can deduce from simple observations. For lack of a better term, I will call this kind of thinking inadequacy analysis.
I think this is pretty applicable to highly visible blog posts, such as ones that make the home page in popular communities such as Less...
It's weird that people tend so strongly to be friends with people so close to their age. If you're 30, why are you so much more likely to be friends with another 30 year old than, say, a 45 year old?
...There were other lines of logic leading to the same conclusion. Complex machinery was always universal within a sexually reproducing species. If gene B relied on gene A, then A had to be useful on its own, and rise to near-universality in the gene pool on its own, before B would be useful often enough to confer a fitness advantage. Then once B was universal you would get a variant A* that relied on B, and then C that relied on A* and B, then B* that relied on C, until the whole machine would fall apart if you removed a single piece. But it all had to happe
I wonder if the Facebook algorithm is a good example of the counterintuitive difficulty of alignment (as a more general concept).
You're trying to figure out the best posts and comments to prioritize in the feed. So you look at things like upvotes, page views and comment replies. But it turns out that that captures things like how much of a demon thread it is. Who would have thought metrics like upvotes and page views could be so... demonic?
I don't think this is an alignment-is-hard-because-it's-mysterious situation; I think it's "FB has different goals than me". FB wants engagement, not enjoyment. I am not aligned with FB, but FB's algorithm is pretty aligned with its interests.
Open mic posts
In stand up comedy, performances are where you present your good jokes and open mics are where you experiment.
Sometimes when you post something on a blog (or Twitter, Facebook, a comment, etc.), you intend for it to be more of a performance. It's material that you have spent time developing, are confident in, etc.
But other times you intend for it to be more of an open mic. It's not obviously horrible or anything, but it's certainly experimental. You think it's plausibly good, but it very well might end up being garbage.
Going further, in stand up...
On Stack Overflow you could offer a bounty for a question you ask. You sacrifice some karma in exchange for having your question be more visible to others. Sometimes I wish I could do that on LessWrong.
I'm not sure how it'd work though. Giving the post +N karma? A bounties section? A reward for the top voted comment?
Alignment research backlogs
I was just reading AI alignment researchers don't (seem to) stack and had the thought that it'd be good to research whether intellectual progress in other fields is "stackable". That's the sort of thing that doesn't take an Einstein level of talent to pursue.
I'm sure other people have similar thoughts: "X seems like something we should do and doesn't take a crazy amount of talent".
What if there was a backlog for this?
I've heard that, to mitigate procrastination, it's good to break tasks down further and further until they become ...
Mustachian Grants
I remember previous discussions that went something like this:
Alice: EA has too much money and not enough places to spend it.
Bob: Why not give grants to anyone and everyone who wants to do, for example, alignment research?
Alice: That sets up bad incentives. Malicious actors would seek out those grants and wouldn't do real work. And that'd have various bad downstream effects.
But what if those grants were minimal? What if they were only enough to live out a Mustachian lifestyle?
Well, let's see. A Mustachian lifestyle costs something like $2...
Asset ceilings for politicians
A long time ago, when I was a sophomore in college, I remember a certain line of thinking I went through:
Goodhart's Law seems like a pretty promising analogy for communicating the difficulties of alignment to the general public, particularly those who are in fields like business or politics. They're already familiar with the difficulty and pain associated with trying to get their organization to do X.
When better is apples to oranges
I remember talking to a product designer a while back. I brought up the idea of looking for ways to do things more quickly, even if they might be worse for the user. Their response was something along the lines of "I mean, as a designer I'm always going to advocate for whatever is best for the user."
I think that "apples-to-oranges" is a good analogy for what is wrong about that. Here's what I mean.
Suppose there is a form and the design is to have inline validation (nice error messages next to the input fields). And suppose that "global"...
I was just watching this YouTube video on portable air conditioners. The person is explaining how air conditioners work, and it's pretty hard to follow.
I'm confident that a very large majority of the target audience would also find it hard to follow. And I'm also confident that this would be extremely easy to discover with some low-fi usability testing. Before releasing the video, just spend maybe 20 mins and have a random person watch the video, and er, watch them watch it. Ask them to think out loud, narrating their thought process. Stuff like that.
Moreo...
I think that people should write with more emotion. A lot more emotion!
Emotion is bayesian evidence. It communicates things.
...One could also propose making it not full of rants, but I don’t think that would be an improvement. The rants are important. The rants contain data. They reveal Eliezer’s cognitive state and his assessment of the state of play. Not ranting would leave important bits out and give a meaningfully misleading impression.
...
The fact that this is the post we got, as opposed to a different (in many ways better) post, is a reflection of the...
I wonder whether it would be good to think about blog posts as open journaling.
When you write in a journal, you are writing for yourself and don't expect anyone else to read it. I guess you can call that "closed journaling". In which case "open journaling" would mean that you expect others to read it, and are at least loosely trying to cater to them.
Well, there are pros and cons to look at here. The main con of treating blog posts as open journaling is that the quality will be lower than a more traditional blog post that is more refined. On the other h...
Inconsistency as the lesser evil
It bothers me how inconsistent I am. For example, consider covid-risk. I've eaten indoors before. Yet I'll say I only want to get coffee outside, not inside. Is that inconsistent? Probably. Is it the right choice? Let's say it is, for argument's sake. Does the fact that it is inconsistent matter? Hell no!
Well, it matters to the extent that it is a red flag. It should prompt you to have some sort of alarms going off in your head that you are doing something wrong. But the proper response to those alarms is to use that as an op...
The other day I was walking to pick up some lunch instead of having it delivered. I also had the opportunity to freelance for $100/hr (not always available to me), but I still chose to walk and save myself the delivery fee.
I make similarly irrational decisions about money all the time. There are situations where I feel like other mundane tasks should be outsourced. Eg. I should trade my money for time, and then use that time to make even more money. But I can't bring myself to do it.
Perhaps food is a good example. It often takes me 1-2 hours to "do" dinner...
Betting is something that I'd like to do more of. As the LessWrong tag explains, it's a useful tool to improve your epistemics.
But finding people to bet with is hard. If I'm willing to bet on X at Y odds and I find someone eager to take the other side, it's probably because they know more than I do and I am wrong. So I update my belief, and then we can't bet.
But in some situations it works out with a friend, where there is mutual knowledge that we're not being unfair to one another, and just genuinely disagree, and we can make a bet. I wonder how I can do this more often. And I wonder if some sort of platform could be built to enable this to happen in a more widespread manner.
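To sketch what such a platform might do mechanically (this is one simple stake-setting convention, not a description of any existing site): if two people state their credences honestly, setting the betting line at the midpoint makes the bet positive expected value for both sides by their own lights.

```python
def even_split_bet(p_yes, p_no_side, pot=100.0):
    """Midpoint stake-setting (one convention among several).
    p_yes: credence of the person betting X happens.
    p_no_side: the (lower) credence of the person betting against.
    Returns each side's stake and their expected value under
    their own beliefs."""
    assert p_yes > p_no_side, "the 'yes' bettor must be more confident"
    line = (p_yes + p_no_side) / 2   # betting line at the midpoint
    yes_stake = line * pot           # risked if X doesn't happen
    no_stake = (1 - line) * pot      # risked if X happens
    ev_yes = p_yes * no_stake - (1 - p_yes) * yes_stake
    ev_no = (1 - p_no_side) * yes_stake - p_no_side * no_stake
    return yes_stake, no_stake, ev_yes, ev_no

# If I'm at 70% and my friend is at 40%, the line sits at 55%:
# I risk $55 against their $45, and we each expect +$15 by our own lights.
stakes = even_split_bet(0.7, 0.4)
```

The nice property is symmetry: as long as the two credences genuinely differ, both parties see the bet as favorable, which is exactly the "genuine disagreement" situation described above.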
Idea: Athletic jerseys, but for intellectual figures. Eg. "Francis Bacon" on the back, "Science" on the front.
I've always heard of the veil of ignorance being discussed in a... social(?) context: "How would you act if you didn't know what person you would be?". A farmer in China? Stock trader in New York? But I've never heard it discussed in a temporal context: "How would you act if you didn't know what era you would live in?" 2021? 2025? 2125? 3125?
This "temporal veil of ignorance" feels like a useful concept.
I just came across an analogy that seems applicable for AI safety.
AGI is like a super powerful sports car that only has an accelerator, no brake pedal. Such a car is cool. You'd think to yourself:
Nice! This is promising! Now we have to just find ourselves a brake pedal.
You wouldn't just hop in the car and go somewhere. Sure, it's possible that you make it to your destination, but it's pretty unlikely, and certainly isn't worth the risk.
In this analogy, the solution to the alignment problem is the brake pedal, and we really need to find it.
Alice, Bob, and Steve Jobs
In my writing, I usually use the Alice and Bob naming scheme. Alice, Bob, Carol, Dave, Erin, etc. Why? The same reason Steve Jobs wore the same outfit everyday: decision fatigue. I could spend the time thinking of names other than Alice and Bob. It wouldn't be hard. But it's just nice to not have to think about it. It seems like it shouldn't matter, but I find it really convenient.
Epistemic status: Rambly. Perhaps incoherent. That's why this is a shortform post. I'm not really sure how to explain this well. I also sense that this is a topic that is studied by academics and might be a thing already.
I was just listening to Ben Taylor's recent podcast on the top 75 NBA players of all time, and a thought started to crystalize for me that I always have wanted to develop. For people who don't know him (everyone reading this?), his epistemics are quite good. If you want to see good epistemics applied to basketball, read his series of posts...
I wonder if it would be a good idea to groom people from an early age to do AI research. I suspect that it would. Ie. identify who the promising children are, and then invest a lot of resources towards grooming them: tutors, therapists, personal trainers, chefs, nutritionists, etc.
Iirc, there was a story in Peak: Secrets from the New Science of Expertise about some parents who wanted to prove that women can succeed in chess, and raised three daughters doing something sorta similar, but to a smaller extent. I think the larger point being made was that if you ...
I suspect that the term "cognitive" is often over/misused.
Let me explain what my understanding of the term is. I think of it as "a disagreement with behaviorism". If you think about how psychology progressed as a field, first there was Freudian stuff that wasn't very scientific. Then behaviorism emerged as a response to that, saying "Hey, you have to actually measure stuff and do things scientifically!" But behaviorists didn't think you could measure what goes on inside someone's head. All you could do is measure what the stimulus is and then how the human...
Everyone hates spam calls. What if a politician campaigned to address little annoyances like this? Seems like it could be a low hanging fruit.
Against "change your mind"
I was just thinking about the phrase "change your mind". It kind of implies that there is some switch that is flipped, which implies that things are binary (I believe X vs I don't believe X). That is incorrect[1] of course. Probability is in the mind, it is a spectrum, and you update incrementally.
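To make "update incrementally" concrete, here's a minimal sketch of a single Bayesian update (the evidence strengths are invented for illustration):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    # One incremental update: the posterior probability of X after
    # seeing one piece of evidence, via Bayes' rule.
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Weak evidence against X nudges a 51% belief just under 50%.
# No switch flips; the number just slides along the spectrum.
p = bayes_update(0.51, 0.45, 0.50)  # ~0.484
```

Crossing 50% here is no more dramatic than any other small step; the update rule treats it the same as moving from 80% to 78%.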
Well, to play devil's advocate, I guess you could call 50% the "switch". If you go from 51% to 49%, you go from "I believe X" to "I don't believe X". Maybe not though. It depends on what "believe" means. Maybe "believe" moreso means something...