How do human beings produce knowledge? When we describe rational thought processes, we tend to think of them as essentially deterministic, deliberate, and algorithmic. After some self-examination, however, Alkjash came to think that his process is closer to babbling many random strings and later filtering by a heuristic.

Raemon
I just re-read this sequence. Babble has definitely made its way into my core vocabulary. I think of "improving both the Babble and Prune of LessWrong" as being central to my current goals, and I think this post was counterfactually relevant for that. Originally I had planned to vote weakly in favor of this post, but am currently positioning it more at the upper-mid-range of my votes. I think it's somewhat unfortunate that the Review focused only on posts, as opposed to sequences as a whole. I just re-read this sequence, and I think the posts More Babble, Prune, and Circumambulation have more substance/insight/gears/hooks than this one. (I didn't get as much out of Write.) But this one was sort of "the Schelling post to nominate" if you were going to nominate one of them. The piece as a whole succeeds both as Art and as pedagogy.
In response to the Wizard Power post, Garrett and David were like "Y'know, there's this thing where rationalists get depression, but it doesn't present like normal depression because they have the mental habits to e.g. notice that their emotions are not reality. It sounds like you have that." ... and in hindsight I think they were totally correct. Here I'm going to spell out what it felt/feels like from inside my head, my model of where it comes from, and some speculation about how this relates to more typical presentations of depression.

Core thing that's going on: on a gut level, I systematically didn't anticipate that things would be fun, or that things I did would work, etc. When my instinct-level plan-evaluator looked at my own plans, it expected poor results. Some things which this is importantly different from:

  • Always feeling sad
  • Things which used to make me happy not making me happy
  • Not having energy to do anything

... but importantly, the core thing is easy to confuse with all three of those. For instance, my intuitive plan-evaluator predicted that things which used to make me happy would not make me happy (like e.g. dancing), but if I actually did the things they still made me happy. (And of course I noticed that pattern and accounted for it, which is how "rationalist depression" ends up different from normal depression; the model here is that most people would not notice their own emotional-level predictor being systematically wrong.) Little felt promising or motivating, but I could still consciously evaluate that a plan was a good idea regardless of what it felt like, and then do it, overriding my broken intuitive-level plan-evaluator.

That immediately suggests a model of what causes this sort of problem. The obvious way a brain would end up in such a state is if a bunch of very salient plans all fail around the same time, especially if one didn't anticipate the failures and doesn't understand why they happened. Then a natural update for
Oddities - maybe deepmind should get Gemini a therapist who understands RL deeply:

https://xcancel.com/DuncanHaldane/status/1937204975035384028
https://www.reddit.com/r/cursor/comments/1l5c563/gemini_pro_experimental_literally_gave_up/
https://www.reddit.com/r/cursor/comments/1lj5bqp/cursors_ai_seems_to_be_quite_emotional/
https://www.reddit.com/r/cursor/comments/1l5mhp7/wtf_did_i_break_gemini/
https://www.reddit.com/r/cursor/comments/1ljymuo/gemini_getting_all_philosophical_now/
https://www.reddit.com/r/cursor/comments/1lcilx1/things_just_werent_going_well/
https://www.reddit.com/r/cursor/comments/1lc47vm/gemini_rage_quits/
https://www.reddit.com/r/cursor/comments/1l4dq2w/gemini_not_having_a_good_day/
https://www.reddit.com/r/cursor/comments/1l72wgw/i_walked_away_for_like_2_minutes/
https://www.reddit.com/r/cursor/comments/1lh1aje/i_am_now_optimizing_the_users_kernel_the_user/
https://www.reddit.com/r/vibecoding/comments/1lk1hf4/today_gemini_really_scared_me/
https://www.reddit.com/r/ProgrammerHumor/comments/1lkhtzh/ailearninghowtocope/
dbohdan
— @_brentbaum, tweet (2025-05-15) — @meansinfinity — @QiaochuYuan

What do people mean when they say "agency" and "you can just do things"? I get a sense it's two things, and the terms "agency" and "you can just do things" conflate them. The first is "you can DIY a solution to your problem; you don't need permission and professional expertise unless you actually do", and the second is "you can defect against cooperators, lol". More than psychological agency, the first seems to correspond to disagreeableness. The second I expect to correlate with the dark triad. You can call it the antisocial version of "agency" and "you can just do things".
evhub
Why red-team models in unrealistic environments?

Following on our Agentic Misalignment work, I think it's worth spelling out a bit more why we do work like this, especially given complaints like the ones here about the unrealism of our setup. Some points:

  1. Certainly I agree that our settings are unrealistic in many ways. That's why we hammer the point repeatedly that our scenarios are not necessarily reflective of whether current models would actually do these behaviors in real situations. At the very least, our scenarios involve an unnatural confluence of events and a lot of unrealism in trying to limit Claude's possible options, to simulate a situation where the model has no middle-ground/compromise actions available to it. But that's not an excuse—we still don't want Claude to blackmail/leak/spy/etc. even in such a situation!
  2. The point of this particular work is red-teaming/stress-testing: aggressively searching for situations in which models behave in egregiously misaligned ways despite their HHH safety training. We do lots of different work for different reasons, some of which is trying to demonstrate something about a model generally (e.g. Claude 3 Opus has a tendency to fake alignment for certain HHH goals across many different similar situations), some of which is trying to demonstrate things about particular training processes, some of which is trying to demonstrate things about particular auditing techniques, etc. In the case of Agentic Misalignment, the goal is just to show an existence proof: that there exist situations where models are not explicitly instructed to be misaligned (or explicitly given a goal that would imply doing misaligned things, e.g. explicitly instructed to pursue a goal at all costs) and yet will still do very egregiously misaligned things like blackmail (note that though we include a setting where the models are instructed to follow the goal of serving American interests, we show that you can ablate that away and still get

Popular Comments

While I disagree with Nate on a wide variety of topics (including implicit claims in this post), I do want to explicitly highlight strong agreement with this:

> I have a whole spiel about how your conversation-partner will react very differently if you share your concerns while feeling ashamed about them versus if you share your concerns as if they’re obvious and sensible, because humans are very good at picking up on your social cues. If you act as if it’s shameful to believe AI will kill us all, people are more prone to treat you that way. If you act as if it’s an obvious serious threat, they’re more likely to take it seriously too.

The position that is "obvious and sensible" doesn't have to be "if anyone builds it, everyone dies". I don't believe that position. It could instead be "there is a real threat model for existential risk, and it is important that society does more to address it than it is currently doing". If you're going to share concerns at all, figure out the position you do have courage in, and then discuss that as if it is obvious and sensible, not as if you are ashamed of it.

(Note that I am not convinced that you should always be sharing your concerns. This is a claim about how you should share concerns, conditional on having decided that you are going to share them.)
I think biorobots (macroscopic biotech) should be a serious entry in a list like this, something that's likely easier to develop than proper nanotechnology, but already has key advantages over human labor or traditional robots for the purposes of scaling industry, such as an extremely short doubling time and not being dependent on complicated worldwide supply chains. Fruit flies can double their biomass every 1-3 days. Metamorphosis reassembles biomass from one form to another. So a large amount of biomass could be produced using the short doubling time of small "fruit fly" things, and then merged and transformed through metamorphosis into large functional biorobots, with capabilities that are at least at the level seen in animals.

These biorobots can then proceed to build giant factories and mines of the more normal kind, which can manufacture compute, power, and industrial non-bio robots. Fusion power might let this quickly scale well past the mass of billions of humans. If the relevant kind of compute can be produced with biotech directly, then this scales even faster, instead of at some point being held back by not having enough AIs to control the biorobots and waiting for construction of fabs and datacenters.

(The "fruit flies" are the source of growth, packed with AI-designed DNA that can specialize the cells to do their part in larger organisms reassembled with metamorphosis from these fast-growing and mobile packages of cells. Let's say there are 1000 "fruit flies" of 1 mg each at the start of the industrial scaling process, and we aim to produce 10 billion 100 kg robots. The "fruit flies" double in number every 2 days; the target is about 1e15x more mass than the initial 1000 "fruit flies", or roughly 50 doublings, and so might take as little as 100 days to produce. Each ~100 kg of "fruit flies" can then be transformed into a 100 kg biorobot on the timescale of weeks, with some help from the previous biorobots or initially human and non-bio robot labor.)
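A quick check of the 100-day figure, as a minimal sketch that uses only the numbers stated in the comment above (1000 flies at 1 mg each, a 2-day doubling time, and 10 billion 100 kg robots):

```python
import math

# Numbers stated above: 1000 "fruit flies" at 1 mg each, doubling every 2 days,
# target of 10 billion biorobots at 100 kg each.
initial_mass_kg = 1000 * 1e-6      # 1000 flies * 1 mg each = 1 g
target_mass_kg = 10e9 * 100        # 1e12 kg of total biorobot mass
doubling_days = 2

growth_factor = target_mass_kg / initial_mass_kg   # ~1e15
doublings = math.log2(growth_factor)               # ~49.8
print(f"{growth_factor:.0e}x growth, {doublings:.0f} doublings, ~{doublings * doubling_days:.0f} days")
# -> 1e+15x growth, 50 doublings, ~100 days
```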
Given that prediction markets currently don't really have enough liquidity, saying 'you need 1000x more liquidity to try to entice traders into putting work into something that can only pay off 0.1% of the time' does in fact sound like something of a flaw.

Recent Discussion

CapResearcher
Movies often depict hallucinations as crisp and realistic. For a long time, I didn't really question this. I guess I had the rough intuition that some brains behave weirdly. If somebody told me they were experiencing hallucinations, I would be confused about what they actually meant. However, I heard one common hallucination is seeing insects crawling on tables. And then it sort of happened to me! At the edge of my vision, a wiggling spoon reflected the light in a particular way. And for a split second my brain told me "it's probably an insect". I immediately looked closer and understood that it was a wiggling spoon. While it hasn't happened since, it changed my intuition about hallucinations. My current hypothesis is this: hallucinations are misinterpretations of ambiguous sensory input. If my brain had a high prior for "bugs", I would probably interpret many small shadows and impurities as bugs, before looking closer. This feels more right to me than the Hollywood model.

I'm really confused; we must not be watching the same films or television. Almost by virtue of being a hallucination scene, it is inherently depicted as different, or more stylized, than the rest of the film, as a way of telegraphing to the audience that what they are watching is a hallucination and not real. Not realistic: in fact, they often make a point of making hallucination scenes less "realistic" than the surrounding film.

Crisp? Depends on what you consider crisp - the negative space and white in Miss Cartiledge's scene certainly make the colours "pop" more. Bu... (read more)

leerylizard
In my experience, that's pretty much what 5-HT2A agonists (hallucinogens) do but to a stronger extent: You see peripherally a curled leaf on the ground, and perceive it as a snake before you take a closer look, or you see patterns on a marbled tile, and the exact positions of the shapes slowly wobble. My understanding is that this is because you assign a lower confidence to your visual inputs than usual, and a higher confidence to your priors / the part of your brain that in-paints visual details for you.
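The "strong prior dominating ambiguous input" story in the two comments above can be made concrete with a toy Bayes update. The priors and likelihoods below are purely illustrative numbers, not anything taken from the comments:

```python
def p_bug_given_flicker(prior_bug, p_flicker_if_bug=0.9, p_flicker_if_not=0.2):
    """Posterior probability of "bug" after an ambiguous peripheral flicker."""
    evidence = p_flicker_if_bug * prior_bug + p_flicker_if_not * (1 - prior_bug)
    return p_flicker_if_bug * prior_bug / evidence

print(p_bug_given_flicker(prior_bug=0.01))  # ~0.04: the same input barely moves a low prior
print(p_bug_given_flicker(prior_bug=0.30))  # ~0.66: a primed prior dominates the ambiguous input
```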
Pretentious Penguin
I don't think "inject as much heroin as possible" is an accurate description of the value function of heroin addicts. I think opioid addicts are often just acting based off of the value function "I want to feel generally good emotionally and physically, and don't want to feel really unwell". But once you're addicted to opioids the only way to achieve this value in the short term is to take more opioids. My thinking on this is influenced by the recent Kurzgesagt video about fentanyl: https://www.youtube.com/watch?v=m6KnVTYtSc0.

When talking about why the models improve, people frequently focus on algorithmic improvements and on hardware improvements. This leaves out improvements in data quality.

DeepResearch provides a huge quantity of high-quality analysis reports. GPT5 will almost certainly be trained on all the DeepResearch requests (where there is no reason to believe they are wrong, such as bad user feedback) of users who haven't opted out of their data being used for training. This means that when users ask GPT5 questions where no human has written an analysis of the question, GPT5 might still get facts right because of past DeepResearch reports.

This means that the big companies who do have a massive amount of users that produce a massive amount of DeepResearch requests will have a leg-up that's hard...

Note: This is a linkpost from my personal substack. This is on a culture war topic, which is not normally the focus of my blogging. Rationalist friends suggested that this post might be interesting and surprising to LW readers.

Summary

  • People widely exclude romantic and sexual partners on the basis of race. This is claimed not to be racism by those who have these attitudes, but it is[1].
  • Certain groups, such as Black women and Asian men, are highly excluded by all other groups. These cases also exhibit within-race asymmetries in racial preferences across gender:
    • Black women exhibit the highest endophilia (i.e. they most desire to date within race), but this is strongly not reciprocated by Black men.
    • Asian women exhibit the highest endophobia (i.e. they most refuse to date within race),
...
cousin_it
I read it differently. That comment was talking about a level of financial security as high as "I will always have food and a house without any work or bosses", and a level of confidence as high as "being at the top is my birthright". Let's be real, these things are the privilege of the top 1%, both now and historically. I'm all for giving more people these things, but that's different from being only attracted to the top 1% - that's just assholish, no matter the gender. People should give the 99% a goddamn chance.
MondSemmel
That's not at all what the OP (jenn) is saying. She's claiming that immigrants from poor households used to display a certain insecurity that came from them being literally economically insecure, that this made them unattractive to her, and that this has been changing for the better as the living standards of new immigrants and second-gen immigrants rose. None of the things you saw as being in that comment ("I will always have food and a house without any work or bosses" or "privilege of the top 1%") are actually in the comment. And separately, even if that were all in the comment, don't prescribe dating preferences to other people. People are allowed to be (not) attracted to anyone they damn well please.

> None of the things you saw as being in that comment (“I will always have food and a house without any work or bosses” or “privilege of the top 1%”) are actually in the comment.

They are, though.

  1. "No force in the world can take from me my five hundred pounds. Food, house, and clothing are mine for ever. Therefore not merely do effort and labour cease..."

  2. "by some luck and hard work made it to the top, but they had to hustle for it, and it did not come naturally to them; it was not their birthright" -- which I described as "being at the top is my birthright".

koreindian
This is confusing. Prescription != proscription. I prescribe that people not be fat and sedentary. I don't thereby think that people are "not allowed" to be fat and sedentary.

Summary

To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion"). 

AI will also have to turbocharge the physical world (the "industrial explosion"). Think robot factories building more and better robot factories, which build more and better robot factories, and so on. 

The dynamics of the industrial explosion have gotten remarkably little attention.

This post lays out how the industrial explosion could play out, and how quickly it might happen.

We think the industrial explosion will unfold in three stages:

  1. AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities.
    1. We argue this could increase physical output by 10X within a few years.
  2. Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human physical labour.
    1. We argue that, with current physical technology and
...

Downvoted the post because it considers neither Amdahl's Law nor the factors of production, which is Economics 101.

Fully automated robot factories can't make robot factories out of thin air; they need energy and raw materials, which are considered secondary factors of production in economics. As soon as a large demand for them appears, their prices will skyrocket.

They are called secondary because they are acquired from primary factors of production, which in classical economics consist of land, labor and capital. Sure, labor is cheap with robots, but land and ca... (read more)

simeon_c
It might be a dumb question but aren't there major welfare concerns with assembling biorobots?
Mars_Will_Be_Ours
I think that you may be significantly underestimating the minimum possible doubling time of a fully automated, self-replicating factory, assuming that the factory is powered by solar panels. There is a certain amount of energy required to make a solar panel. A self-replicating factory needs to gather this amount of energy and use it to produce the solar panels needed to power its daughter factory. The minimum amount of time it takes for a solar panel to gather enough energy to produce another copy is known as the energy payback time, or EPBT.

Energy payback time (EPBT) and energy return on energy invested (EROI) of solar photovoltaic systems: A systematic review and meta-analysis is a meta-analysis which reviews a variety of papers to determine how long it takes various types of solar panels to produce the amount of energy needed to make another solar panel of the same type. It also provides energy returns on energy invested, a ratio which signifies the amount of excess energy you can harvest from an energy-producing device before you need to build another one. If it's less than 1, then the technology is not an energy source.

The energy payback time for solar panels varies between 1 and 4 years, depending on the technology specified. This imposes a hard limit on a solar-powered self-replicating factory's doubling time, since it must make all the solar panels required for its daughter to be powered. Hence, it will take at least a year for a solar-powered fully automated factory to self-replicate. Wind has similar if less severe limitations, with Greenhouse gas and energy payback times for a wind turbine installed in the Brazilian Northeast finding an energy payback time of about half a year. This means that a wind-powered self-replicating factory must take at least half a year to self-replicate.

Note that neither of these papers account for how factories are not optimized to take advantage of intermittent energy and as such, do not estimate th
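The constraint being argued here can be written as a lower bound: the doubling time is at least the EPBT, divided by whatever fraction of harvested energy is actually reinvested in the daughter's generators. A minimal sketch with illustrative numbers (the reinvestment fraction is an added parameter of mine, not something from the cited papers):

```python
def min_doubling_time_years(epbt_years, reinvest_fraction=1.0):
    """Lower bound on a self-replicating factory's doubling time when building
    the daughter's energy supply is the binding constraint."""
    return epbt_years / reinvest_fraction

for epbt in (1.0, 4.0):   # PV range from the meta-analysis cited above
    print(f"PV, EPBT {epbt} yr: doubling time >= {min_doubling_time_years(epbt):.1f} yr")
print(f"Wind, EPBT 0.5 yr: doubling time >= {min_doubling_time_years(0.5):.1f} yr")
```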
BryceStansfield
I'm not really seeing the point of AI-augmented human labour here. It seems like it's meant to fill the gap between now and the production of either generalised or specialised macrorobotics, but it seems to me that that niche is better filled by existing machinery.

Why go through the clunky process of instructing a human how to do a task, when you can commandeer an old factory and repurpose some old drones to do most of the work for you? Human beings might *in theory* have a much higher ceiling for precise work, but realistically you can't micromanage someone into being good at a physical task; they need to build muscle memory, and that's gonna be hard to come by with the constantly changing industrial processes a superintelligence would presumably be implementing. On the other hand, you could macgyver old commercial machinery into any shape you want, quickly spin up a virtual training environment, and have an agent trained up on any industrial process you want in presumably minutes.

I think you might be assuming that industrial robots are hard, just because humans are bad at designing them. But I reckon a little bit of superintelligence would go a long way in hacking together workable robotics.

(A response to this post.)

If you use prediction markets to make decisions, you might think they’ll generate EDT decisions: you’re asking for P(A|B), where you care about A, and B is something like “a decision … is taken”.

Okay, so say you want to use prediction markets to generate CDT decisions. You want to know P(A|do(B)).

There’s a very simple way to do that:

  • Make a market on P(A|B).
  • Commit to resolving the market to N/A with 99.9% chance.
  • With 0.1% chance, transparently take a random decision among available decisions (the randomness is pre-set and independent of specific decisions/market data/etc.)

Now, 99.9% of the time, you can freely use market data to make decisions, without impacting the market! You screened off everything upstream of the decision from the market. All you need...
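A minimal sketch of the decision rule being described; the function and field names are mine, and this ignores market mechanics entirely, only encoding the 99.9% N/A / 0.1% random-execution split:

```python
import random

def decide_with_conditional_markets(options, p_A_given_option, epsilon=0.001, seed=None):
    """Pick an action using conditional prediction markets, CDT-style.

    p_A_given_option: the market's estimate of P(A | option) for each option.
    - With probability 1 - epsilon: act on the highest-rated option and resolve
      every conditional market N/A, so the choice never feeds back into prices.
    - With probability epsilon: ignore the markets, take an option uniformly at
      random (pre-committed randomness), and resolve only that option's market
      on the real outcome; this is what keeps traders honest.
    """
    rng = random.Random(seed)
    if rng.random() < epsilon:
        return {"option": rng.choice(options), "resolution": "chosen market pays out"}
    best = max(options, key=lambda o: p_A_given_option[o])
    return {"option": best, "resolution": "all markets N/A"}

# Example: the market thinks "delay" gives a better chance of the outcome we care about.
print(decide_with_conditional_markets(["ship now", "delay"], {"ship now": 0.42, "delay": 0.61}))
```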

You can anti-correlate it by running 1000 markets on different questions you're interested in, and announcing that all but a randomly chosen one will N/A, so as to not need to feed an insurer. This also means traders on any of your markets can get a free loan to trade on the others.

pmarc
Assuming the 99.9% / 0.1% trick does work and there are large numbers of markets to compensate for the small chance of any given market resolving, what would be the defense against actors putting large bets on a single market with the sole intent of skewing the signal? If the vast majority of bets are consequence-free, it seems that (1) the cost of such an operation would be cheaper, and (2) the incentive for rational profit-seeking traders to put in enough volume of counter-bets to "punish" it would be smaller, than in a regular (non-N/A-resolving) market.
Gurkenglas
Can you do three markets with 0%, 33% and 66% to N/A, to extrapolate what 99% N/A would do?
philh
Mostly "priors on this kind of thing". (I might be able to get something more specific but that comment won't come for a week minimum, if ever.)
In honor of the latest (always deeply, deeply unpopular) attempts to destroy tracking and gifted and talented programs, and other attempts to get children to actually learn things, I thought it a good time to compile a number of related items.

Table of Contents

  1. Lack Of Tracking Hurts Actual Everyone.
  2. Not Tracking Especially Hurts Those Who Are Struggling.
  3. No Child Left Behind Left Behind.
  4. Read Early, Read Often.
  5. Mirror, Mirror.
  6. Spaced Repetition.
  7. Learning Methods.
  8. Interruptions.
  9. Memorization.
  10. Math is Hard.
  11. Get to Work.
  12. The Whiz Kids.
  13. High School Does Not Seem To Teach Kids Much.
  14. Two Kinds of Essays.

Lack Of Tracking Hurts Actual Everyone

Gifted programs and educational tracking are also super duper popular; it is remarkably absurd that our political process cannot prevent these programs from being destroyed.
...
Afterimage
I'd be keen to hear an explanation of this bullet biting. My instincts tell me it's a very bad idea and I imagine most people would agree but I'm interested in more details.

For the sake of argument, I'll at least poke a bit at this bullet.

I have been in an advanced math class (in the US) with high school seniors and an 8th grader, who was probably the top student in the class. It was totally fine? Everyone learned math, because they liked math.

From what I can tell, the two key factors for mixing ages in math classes are something like:

  1. Similar math skills.
  2. Similar levels of interest in math.

So let's imagine that you have a handful of 17-year-olds learning multivariate calculus, and one 7-year-old prodigy. My prediction is t... (read more)


There was what everyone agrees was a high quality critique of the timelines component of AI 2027, by the LessWrong user and Substack writer Titotal.

It is great to have thoughtful critiques like this. The way you get actual thoughtful critiques like this, of course, is to post the wrong answer (at length) on the internet, and then respond by listening to the feedback and by making your model less wrong.

This is high-effort, highly detailed, real engagement with this section: it gives the original authors the opportunity to critique the critique, includes warnings to beware errors, gives them time to respond, shares the code used to generate the graphs, engages in detail, does a bunch of math work, and so on. That is The Way.

So, Titotal: Thank you.

I note...

idly
I'd like to comment on your discussion of peer review.

'Tyler Cowen’s presentation of the criticism then compounds this, entitled ‘Modeling errors in AI doom circles’ (which is pejorative on multiple levels), calling the critique ‘excellent’ (the critique in its title calls the original ‘bad’), then presenting this as an argument for why this proves they should have… submitted AI 2027 to a journal? Huh?'

To me, this response in particular suggests you might misunderstand the point of submitting to journals and receiving peer review. The reason Tyler says they should have submitted it is not because the original model and publication being critiqued is good and especially worthy of publication, it is because it would have received this kind of careful review and feedback before publication, as solicited from an editor independent of the authors, and anonymously. The authors would then be able to improve their models accordingly and the reviewers and editor would decide if their changes were sufficient or request further revisions.

It is a lot of effort to engage with and critique this type of work, and it is unlikely titotal's review will be read as widely as the original piece, or the updated piece once these criticisms are taken into account. And I also found the responses to his critique slightly unsatisfying - only some of his points were taken on board by the authors, and I didn't see clear arguments why others were ignored.

Furthermore, it is not reasonable to expect most of the audience consuming AI 2027 and similar to have the necessary expertise and time to go through the methodology as carefully as titotal has done. Those readers are also particularly unlikely to read the critique and use it to shape their takeaways of the original article. However, they are likely to see that there are pages and pages of supplementary information and analysis that looks pretty serious and, based on that, assume the authors know what they are talking about. You are rig
Ben Pace
(FWIW in this comment I am largely just repeating things already said in the longer thread... I wrote this mostly to clarify my own thinking.)

I think the conflict here is that, within intellectual online writing circles, using the title of a post to directly set a bottom line on the status of something is defecting on a norm, but this is not so in the 'internet of beefs' rest of the world, where titles are readily used as cudgels in status fights. Within the intellectual online writing circles, this is not a good goal for a title, and it's not something that AI 2027 did (or, like, something that ~any ACX post or ~any LW curated post does)[1]. This is not the same as "not putting your bottom line in the title"; it's "don't attempt to directly write the bottom line about the status of something in your title".

I agree you're narrowly correct that it's acceptable to have goals for changing the status of various things, and it's good to push back on implying that that isn't allowed by any method. But I think Zvi did make the point that the critique post attempted to do this with its title, and that is not something AI 2027 did and is IMO defecting on a worthy truce in the intellectual online circles.

1. ^ To the best of my recollection. Can anyone think of counterexamples?

Hmm, interesting. I was surprised by the claim, so I did look back through ACX and posts from the LW review, and it does seem to back up your claim (the closest I saw was "Sorry, I Still Think MR Is Wrong About USAID"; note that I didn't look very hard). EDIT: Actually I agree with sunwillrise that "Moldbug sold out" meets the bar (and in general my felt sense is that ACX does do this).

I'd dispute the characterization of this norm as operating "within intellectual online writing circles". I think it's a rationalist norm if anything. For example I went to Slow Bo... (read more)

sunwillrise
It's difficult to determine what you would or wouldn't call "directly writ[ing] the bottom line about the status of something in your title." titotal's post was titled "A deep critique of AI 2027’s bad timeline models." Is that more or less about the status of the bottom line than "Futarchy's fundamental flaw" is? What about "Moldbug sold out" over on ACX? In any case, it does seem LW curated posts and ACX posts both usually have neutral titles, especially given the occasionally contentious nature of their contents.

If probability is in the map, then what is the territory? What are we mapping when we apply probability theory?

"Our uncertainty about the world, of course."

Uncertainty, yes. And sure, every map is, in a sense, a map of the world. But can we be more specific? Say, for a fair coin toss, what particular part of the world do we map with probability theory? Surely it's not the whole world at the same time, is it?

"It is. You map the whole world. Multiple possible worlds, in fact. In some of them the coin is Heads in the others it's Tails, and you are uncertain which one is yours."

Wouldn't that mean that I need to believe in some kind of multiverse to reason about probability? That doesn't sound...

Crazy philosopher
I don't see problems here. When I go to the supermarket and think about whether there is milk there or not, I imagine an empty shelf, then a shelf with milk, and then I start to think about relevant things. For example, is there a trade war, are there sales, etc. You should imagine a part of the world, not the whole world, including the orbits of stars in another galaxy. As a side effect, you may not remember a related fact that you already know, but empiricism isn't perfect either. Maybe there was milk in the supermarket for all my life, but there were no trade wars for all my life, and the paper for milk packaging is produced in China.

> When I go to the supermarket and think about whether there is milk there or not, I imagine an empty shelf

Yes, you indeed imagine it. And people also imagine a world that macroscopically looks just like ours on a human-scale, but instead follows the laws of classical mechanics (in fact, for centuries, this was the mainstream conception of reality among top physicists). 

The problem is that such a world cannot exist. The classical picture of a ball-like electron orbiting around a proton inside a hydrogen atom cannot happen; classically, a rotating electr... (read more)

(Note: This is NOT being posted on my Substack or Wordpress, but I do want a record of it that is timestamped and accessible for various reasons, so I'm putting this here, but do not in any way feel like you need to read it unless it sounds like fun to you.)

We all deserve some fun. This is some of mine.

I will always be a gamer and a sports fan, and especially a game designer, at heart. The best game out there, the best sport out there is Love Island USA, and I’m getting the joy of experiencing it in real time.

A Brief Ode To One Of Our Beloved Games

Make no mistake. This is a game, a competition, and the prizes are many.

The people agree. It is...

Zvi

Episode 22 Update

They went with option two, except they didn’t differentiate between bombshells and original islanders. Everyone wrote down their choice. And then they had the saves.

Having seen it play out, the producers were clearly right not to make the distinction. And I think this was clearly the correct way to do the recoupling once you bring everyone back from Casa Amor, and you’ve already reintroduced Nic and Taylor.

I had two worries at the time.

  1. You did not want desperado pairings between bombshells.
  2. You wanted to tempt the OGs to switch, so you de-r
... (read more)