A recent (well, a few weeks ago) discussion with Collisteru taught me a piece of game design that feels really useful for one of my lines of thinking on Chaos Investing.
In Duplicate Bridge, a deck of cards is shuffled, the order of the cards is written down (or the hands are recorded, or something equivalent), and then the deck is played by players who don't know the order. (That is, Referee Adam wrote down the order, and Players Bob, Carla, Debbie, and Evan are sitting down to play.) The players can then compare their score to the scores of other...
Looks like a fascinating setup; the deck order is essentially a seed for the game state. Slay the Spire does something similar with the Daily Run, which allows you to compare your run directly against other players who had the exact same setup. I take it there would be some kind of central ledger of starting seeds where the scores would be recorded?
Reading through the rules, there is a slight point of variation: if you've gone through the trouble to have a starting seed, you might want to also fix the starting player if that's importan...
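As a toy illustration of the "deck order as seed" idea, here's a minimal Python sketch (the function names and the ledger structure are hypothetical, just to show the comparison mechanic): anyone who deals from the same seed gets identical hands, so scores recorded against that seed are directly comparable.

```python
import random

def deal_from_seed(seed: int, num_players: int = 4, hand_size: int = 13) -> list[list[str]]:
    """Reproduce the same shuffle (and therefore the same hands) from a seed."""
    ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
    deck = [rank + suit for suit in "SHDC" for rank in ranks]
    rng = random.Random(seed)  # fixed seed -> identical deck order everywhere
    rng.shuffle(deck)
    return [deck[i * hand_size:(i + 1) * hand_size] for i in range(num_players)]

# Hypothetical central ledger: seed -> list of (player, score).
ledger: dict[int, list[tuple[str, int]]] = {}

def record_score(seed: int, player: str, score: int) -> None:
    ledger.setdefault(seed, []).append((player, score))

# Two tables in different places deal from the same seed, get identical hands,
# and their scores become directly comparable.
assert deal_from_seed(42) == deal_from_seed(42)
record_score(42, "Bob", 110)
record_score(42, "Evan", 90)
print(sorted(ledger[42], key=lambda entry: -entry[1]))
```

Fixing the starting player (or anything else that matters for fairness) would just be one more field recorded alongside the seed in the ledger.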
[Fiction]
A novice needed a map of the northern mountain passes. He approached the temple cartographer.
"Draw me the northern passes," he said, "showing the paths, the fords, and the shelters."
The cartographer studied many sources and produced a map. The novice examined it carefully: the mountains were drawn, the paths clearly traced, fords and shelters marked in their proper notation. The distances seemed reasonable. The penmanship was excellent.
"This is good work," said the novice, and he led a merchant caravan into the mountains.
On the third night, they r...
The master traveler eventually returned to the temple, richer from having successfully led the caravan through and back. He approached the cartographer again, and gave them a small notebook with a nod.
The cartographer's student was confused. "Teacher, have you heard about the other caravan that went through the northern mountains recently? It seems they didn't make it. Why is that so, when you gave them the same map?"
"I could say," answered the cartographer, "that the earlier one had been unlucky. Wild animals? Bad weather? But I've seen enough travelers to...
[Epistemic status: puzzling something out, very uncertain, optimizing for epistemic legibility so I'm easy to argue with. All specific numbers are ass pulls.]
In my ideal world, the anti-X-risk political ecosystem has an abundance of obviously high quality candidates. This doesn’t appear to be on the table.
Adjusting for realism, my favorite is probably “have a wide top of funnel for AI safety candidates, narrow sharply as they have time to demonstrate traits we can judge them on”. But this depends on having an ecosystem with good judgement, and I’m not ...
Yeah, it's possible all you need is a few high-powered people who Get It and a good ecosystem for lobbying everyone else, but then you have to evaluate the lobbyists.
Another one I've seen is the use of 'Not A, But B' statements.
"This is not just an existential crisis. It's a full-blown catastrophe."
OP's writing also contained something like that: "This is not science fiction—it’s unfolding right now, within our lifetimes." It's a shame, because it's not a bad sentence structure. But, like with em dashes, you now have to monitor your writing for overuse of constructions like this.
So I’ve noticed that some people here have thoughts on multi-agent models of the mind. I operate consistently day-to-day with a multi-agent model of my mind that I’ve developed out of my own internal experience—after experimenting with both perspectives, multi-agent feels intuitive and sense-making to me, whereas trying to see myself as a single agent feels wrong or confusing. Is it a metaphor? Is my model different or is my brain different? Is this a symptom of something? Who knows! Sometimes I try to “tone this down” when explaining thoughts and feelings...
theory: a huge part of having a good social life is just taking social bids whenever they become available. examples of social bids both large and small include: deciding whether to join your friends on a roadtrip; getting to know someone you just met; getting to better know someone you bump into occasionally but usually never talk to; standing in line, seeing something amusing, and having the option to point this out to another stranger in line; saying something funny in a group conversation; following up over text with someone after meeting them; flirtin...
I'm sufficiently extroverted that if a social interaction goes well, it gives me more than enough psychological energy to pay for multiple additional social bids. obviously, this is separate from physiological energy; if I'm sleep-deprived and physically exhausted, the boost isn't enough. but I don't generally get that physically exhausted from social interaction, unless I'm at NeurIPS or something.
much has been made of the millennial aversion to phone calls that could have been an email, and I have a bit of this myself, but I think most of my aversion is to being on hold and getting bounced between departments.
I kind of want to check 1. whether the aversion is real and generational, as common wisdom holds, and 2. if it is real, whether calling became a genuinely worse experience around the time millennials started trying to do things.
Data point: I was born in 1968, and I got a lot more averse to phone calls as email and texting got better. My reasons are visual cues and taking my time to think (as Wedge said), as well as certain kinds of phone calls having become much worse. The call to a large business, where one can now expect a phone tree, has become far, far worse than when one could expect a human to pick up. During the earlier years of cell phones, and again when digital audio started out, the call quality was frequently so bad that it was another significant push away from phone calls, even to a friend. Finally, a phone call these days seems socially like more of an interruption, a demand, than an asynchronous message would be.
OpenAI claims GPT-5.2 solved an open COLT problem with no assistance: https://openai.com/index/gpt-5-2-for-science-and-math/
This might be the first thing that meets my bar of autonomously having an original insight??
Interesting. Have they shared the GPT chatlog? I don't see it anywhere.
Rewind is a tool for scrolling back in time. It automatically records screen and audio data. I leave it running in the background, in spite of this incurring some performance overhead. I have collected over 200GB over the past year.
Limitless.ai was acquired by Meta and will shut down the product on December 19th. I will back up my files, but I do not know if it is possible to roll back the update which disables recording. I am not aware of any actively maintained alternative that is commonly recommended, and a quick search didn't turn one up. I would appreciate suggestions.
About once every 15 minutes, someone tweets "you can just do things". It seems like a rather powerful and empowering meme and I was curious where it came from, so I did some research into its origins. Although I'm not very satisfied with what I was able to reconstruct, here are some of the things that I found:
In 1995, Steve Jobs gives the following quote in an interview:
...Life can be much broader, once you discover one simple fact, and that is that everything around you that you call life was made up by people that w
Update! I missed an entire evolutionary branch of the meme: "You can just do stuff" (rather than "things").
In March 2021, @leaacta tweets:
life hack: you don't have to explain yourself or understand anything, you can just do stuff
And gets retweeted by a bunch of people in TPOT.
Then, in June 2022, comedian Rodney Norman posts a video called Go Be Weird with a motivational speech of some sort:
...Hey, you know you can just do stuff?
Like, you don't need anybody's permission or anything.
You just... you just kind of come up with weird stuff you want to go do,
If I see a YouTube video pop up in my feed right after it’s published, I can often come up with a comment that gets a lot of likes and ends up near the top of the comment section.[1] It’s actually not that hard to do: the hardest part is being quick enough[2] to get into the first 10-30 comments (which I assume is the average number of comments viewers glance over), but the comment itself might be pretty generic and not that relevant to the video’s content.
Do you know a way I could use tha...
IIUC, those are just bots that copy early, well-liked comments. So my comment would also get copied by other bots.
An AI-content X/Twitter account with nearly 100k followers blocked me, and I got a couple of disapproving replies, after I pointed out that the account was AI-generated. I quote-tweeted the account mostly to share a useful Chrome extension that I've been using to detect AI content, but I was surprised by the negative reaction to my pointing out the account's nature. I am neither pro- nor anti-AI accounts, but being aware of the nature of the content seems useful.
Would be curious to hear others' thoughts on the ph...
Bot farms have been around for a while. Use of AI for this purpose (along with all other, more useful purposes) has been massively increasing over the last few years, and a LOT in the last 6 months.
Personally, I'd rather have someone point out the errors or misleading statements in the post, rather than worrying about whether it's AI or just a content farm of low-paid humans or someone with too much time and a bad agenda. But a lot of folks think "AI generated" is bad, and react as such (some by stopping following such accounts, some by blocking the complainers).
The non-dumb solution is to sunset the Jones Act, isn't it? The problem with workarounds is that they generally need to be approved by the same government that is maintaining the law in the first place.
One theme I've been thinking about recently is how bids for connection and understanding are often read as criticism. For example:
Person A shares a new idea, feeling excited and hoping to connect with Person B over something they've worked hard on and hold dear.
Person B asks a question about a perceived inconsistency in the idea, feeling excited and hoping for an answer which helps them better understand the idea (and Person A).
Person A feels hurt and unfairly rejected by Person B. Specifically, Person A feels like Person B isn't willing to give their sinc...
It was definitely relevant! Thank you for the link--I think introducing this idea might assist communication in some of my relationships.
I found Yarrow Bouchard's quick take on the EA Forum regarding LessWrong's performance in the COVID-19 pandemic quite good.
I don't trust her to do such an analysis in an unbiased way[1], but the quick take was pretty full of empirical investigation that made me change my mind about how well LessWrong in particular did.
There's much more historiography to be done here (who believed what and when, what the long-term effects of COVID-19 are, which interventions did what), but this seems like the state of the art on "how well did LessWrong actually p...
Analysis of "first to talk seriously about it" is probably not worth much, for COVID-19 OR for the Soviet Union. Actual behavior and impact are what matter, and I don't know that LW members were significantly different from their non-LW-using cohorts in their areas.
I very roughly polled METR staff (using Fatebook) on what the 50% time horizon will be by EOY 2026, conditional on METR reporting something analogous to today's time horizon metric.
I got the following results: 29% average probability that it will surpass 32 hours. 68% average probability that it will surpass 16 hours.
The first question got 10 respondents and the second question got 12. Around half of the respondents were technical researchers. I expect the sample to be close to representative, but maybe a bit more short-timelines than the rest of METR staff.
The average probability that the question doesn't resolve AMBIGUOUS is somewhere around 60%.
Just for context, the reason we might not report something like today's time horizon metric is we don't have enough tasks beyond 8 hours. We're actively working on several ways to extend this, but there's always a chance none of them will work out and we won't have enough confidence to report a number by the end of 2026.
People not working with LLMs often say things like "nope, they just follow stochastic patterns in the data, matrices of floats don't have beliefs or goals". People on LessWrong could, I think, claim something like "they have beliefs, and to what extent they have goals is a very important empirical question".
Here's my attempt at writing a concise, decent-quality answer that the second group could give to the first.
Consider a houseplant. Its leaves are directed towards the window. If you ...
Note that the usage of these terms and the demand for rigor vary by orders of magnitude based on who you're talking with and what aspects of "belief" are salient to the question at hand. My friends and coworkers don't bat an eye at "Claude believes that Paris is the capital of France", or even "Claude thinks it's wasteful to spend money on third-party antivirus software".
Only when considering whether a given AI instance is a moral actor or moral patient does the ambiguity matter, and then we're really best off tabooing these words that imply high similarity to the way humans experience things.
Hi, does anyone from the US want to donation-swap with me to a German tax-deductible organization? I want to donate $2410 to the Berkeley Genomics Project via Manifund.
For anyone considering niplav's offer, the most obvious tax-deductible-in-Germany donation option for EAs / rationalists is probably Effektiv Spenden's "giving funds":
Learned about 'Harberger tax' recently.
The motivation is like
I think the point was: unless they explicitly want to harm or threaten you - a situation which, incidentally, is often not accounted for in the foundational assumptions of many economic models (utility functions are generally assumed to be independent and monotonic in resources, and so on).