Often you can compare your own Fermi estimates with those of other people, and that's sort of cool, but what's way more interesting is when they share the variables and models they used to arrive at the estimate. That lets you actually update your model in a deeper way.
When physicists were figuring out quantum mechanics, one of the major constraints was that it had to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well - i.e. most of the macroscopic world. Likewise for special and general relativity - they had to reproduce Galilean relativity and Newtonian gravity, respectively, in the parameter ranges where those were known to work. Statistical mechanics had to reproduce the fluid theory of heat; Maxwell's equations had to agree with more specific equations governing static electricity, currents, magnetic fields and light under various conditions.
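As a toy illustration of this correspondence (my own sketch, not from the original discussion), here is a numeric check that Einstein's velocity-addition formula reduces to the Galilean rule at everyday speeds:

```python
# Sanity check: special relativity reproduces Galilean velocity
# addition in the parameter range where the old law was known to work.

C = 299_792_458.0  # speed of light, m/s

def relativistic_add(u, v):
    """Einstein velocity addition for collinear velocities."""
    return (u + v) / (1 + u * v / C**2)

# Everyday speeds: two cars at 30 m/s each.
u, v = 30.0, 30.0
galilean = u + v
einstein = relativistic_add(u, v)
print(galilean - einstein)  # discrepancy on the order of 1e-13 m/s

# Near light speed the correction dominates and the old law fails.
print(relativistic_add(0.9 * C, 0.9 * C) / C)  # well below 1.8
```

At 30 m/s the two formulas disagree by less than a picometer per second, which is why the old law "did work" in that regime.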
Even if the entire universe undergoes some kind of phase change tomorrow and the macroscopic physical laws change entirely, it would still be true that the old laws did work...
There's a bit of a disconnect between the statement "it all adds up to normality" and how that could actually look. Suppose someone discovered a new physical law: if you say "fjrjfjjddjjsje" out loud, you can fly, somehow. Now you are reproducing all of the old model's confirmed behaviour while flying. Is this normal?
AI for Epistemics is about helping to leverage AI for better truthseeking mechanisms — at the level of individual users, the whole of society, or in transparent ways within the AI systems themselves. Manifund & Elicit recently hosted a hackathon to explore new projects in the space, with about 40 participants, 9 projects judged, and 3 winners splitting a $10k prize pool. Read on to see what we built!
From the opening speeches; lightly edited.
In...
Just asked Claude for thoughts on some technical mech interp paper I'm writing. The difference with and without the non-sycophantic prompt is dramatic (even with extended thinking):
Me: What do you think of this bullet point structure for this section based on those results?
Claude (normal): I think your proposed structure makes a lot of sense based on the figures and table. Here's how I would approach reorganizing this section: {expands my bullet points but sticks to my structure}
Claude (non-sycophantic): I like your proposed structure, but I think it misses...
Prerequisites: Graceful Degradation. Summary of that: Some skills require the entire skill to be correctly used together, and do not degrade well. Other skills still work if you only remember pieces of them, and do degrade well.
Summary of this: The property of graceful degradation is especially relevant for skills which allow groups of people to coordinate with each other. Some things only work if everyone does them; other things work as long as at least one person does them.
Examples:
I agree with this although it makes me think about company culture
There is huge emergent value to some of the... let's call them "softer" communication approaches
It becomes possible to get out of random suboptimal Nash equilibria almost immediately
People can give more to each other, and better receive feedback
But I think the only way to do this is by having the type of people who already think in those terms and prefer them
There's not a lot you can do to enforce it
But it's still a thing, and in my opinion it's still a t...
PDF version. berkeleygenomics.org. Twitter thread. (Bluesky copy.)
The world will soon use human germline genomic engineering technology. The benefits will be enormous: Our children will be long-lived, will have strong and diverse capacities, and will be halfway to the end of all illness.
To quickly bring about this world and make it a good one, it has to be a world that is beneficial, or at least acceptable, to a great majority of people. What laws would make this world beneficial to most, and acceptable to approximately all? We'll have to keep chewing on this question.
Genomic Liberty is a proposal for one overarching principle, among others, to guide public policy and legislation around germline engineering. It asserts:
Parents have the right to freely choose the genomes of their children.
If upheld,...
I'll repeat that I'm not very learned about genetics, so if you want to convince even me in particular, the best way is to respond to the strongest case, which I can't present. But ok:
First I'll say that an empirical set of facts I'd quite like to have for many traits (disease, mental illness, IQ, personality) is validation of the tails. E.g. if you look at the bottom 1% on a disease PRS, what's the probability of disease? Similarly for IQ.
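To make the question concrete, here is a toy simulation of tail validation under an assumed liability-threshold model. The variance-explained and prevalence figures are illustrative parameters I picked, not claims from the thread:

```python
# Toy "tail validation" sketch: under a liability-threshold model
# with assumed parameters, what fraction of people in the bottom 1%
# of a disease PRS actually get the disease?

import random
from statistics import NormalDist

random.seed(0)
N = 200_000
R2 = 0.10          # assumption: PRS explains 10% of liability variance
PREVALENCE = 0.05  # assumption: 5% population prevalence

# Disease occurs when latent liability exceeds this threshold.
threshold = NormalDist().inv_cdf(1 - PREVALENCE)

prs = [random.gauss(0, 1) for _ in range(N)]
liability = [R2**0.5 * p + (1 - R2)**0.5 * random.gauss(0, 1) for p in prs]

cutoff = sorted(prs)[N // 100]  # approximate bottom-1% PRS cutoff
tail = [l > threshold for p, l in zip(prs, liability) if p <= cutoff]
print(f"P(disease | bottom 1% PRS) = {sum(tail) / len(tail):.3f}")
print(f"P(disease) overall         = {sum(l > threshold for l in liability) / N:.3f}")
```

The interesting empirical question is whether real cohort data matches what a model like this predicts for the tails, rather than only matching in the bulk of the distribution.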
or beyond those?
I rarely make claims about going much beyond natural results; generally I think it's pretty plau...
According to Nick Cammarata, who rubs shoulders with mystics and billionaires, arahants are as rare as billionaires.
This surprised me, because there are 2+[1] thoroughly-awakened people in my social circle. And that's just in meatspace. Online, I've interacted with a couple others. Plus I met someone with Stream Entry at Less Online last year. That brings the total to a minimum of 4, but it's probably at least 6+.
Meanwhile, I know 0 billionaires.
The explanation for this is obvious: billionaires cluster, as do mystics. If I were a billionaire, it would be strange for me not to have rubbed shoulders with at least 6 other billionaires.
But there's a big difference between billionaires and arahants: it's easy to prove you're a billionaire; just throw a party on your private...
shit happens, I deal with it, and move on. (Because what's the alternative? Not dealing with it?)
In my daily work as a software consultant I often deal with large pre-existing code bases. I use GitHub Copilot a lot. It's now basically indispensable, but I use it mostly for generating boilerplate code or figuring out how to use a third-party library.
As the code gets more logically nested, though, Copilot crumbles under the weight of complexity. It doesn't know how things should fit together in the project.
Other AI tools, like Cursor or Devin, are pretty good at quickly generating working prototypes, but they are not great at dealing with large existing codebases, and they have a very low success rate for my kind of daily work.
You find yourself in an endless loop of prompt tweaking, and at that point, I'd rather write the code myself with...
If the code uses other code you've written, how do you ensure all dependencies fit within the context window? Or does this only work for essentially dependencyless code?
In my post on value systematization, I used utilitarianism as a central example.
Value systematization is important because it's a process by which a small number of goals end up shaping a huge amount of behavior. But there's another different way in which this happens: core emotional motivations formed during childhood (e.g. fear of death) often drive a huge amount of our behavior, in ways that are hard for us to notice.
Fear of death and utilitarianism are very different. The former is very visceral and deep-rooted; it typically inf...
TLDR: I sketch a language for describing (either Bayesian or infra-Bayesian) hypotheses about computations (i.e. computable[1] logical/analytic facts). This is achieved via a novel (AFAIK) branch of automata theory, which generalizes both ordinary and tree automata by recasting them in category-theoretic language. The resulting theory can be equipped with an intriguing self-referential feature: automata that "build" and run new automata at "runtime".
Epistemic status: this is an "extended shortform" i.e. the explanations are terse, the math is a sketch (in particular the proofs are not spelled out) and there might be errors (but probably not fatal errors). I made it a top-level post because it seemed worth reporting and was too long for a shortform.
Finite-state automata are a convenient tool for constructing hypotheses classes that admit efficient learning....
More precisely, they are algebras over the free operad generated by the input alphabet of the tree automaton
Wouldn't this fail to preserve the arity of the input alphabet? I.e., you can have trees where a given symbol occurs multiple times, with different numbers of children? That wouldn't be allowed from the perspective of the tree automaton, right?
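A minimal sketch of the fixed-arity constraint being discussed: in a classical bottom-up tree automaton, each symbol of the ranked alphabet comes with exactly one arity, so a tree that uses the same symbol with varying numbers of children is not a well-formed input. (Illustrative toy code, not from the post.)

```python
# A tiny deterministic bottom-up tree automaton over a ranked
# alphabet. States here are booleans; the automaton evaluates
# boolean-expression trees.

ARITY = {"and": 2, "not": 1, "true": 0, "false": 0}

def step(symbol, child_states):
    """Transition: (symbol, child states) -> state, enforcing arity."""
    if len(child_states) != ARITY[symbol]:
        raise ValueError(f"{symbol!r} used with arity {len(child_states)}")
    if symbol == "true":
        return True
    if symbol == "false":
        return False
    if symbol == "not":
        return not child_states[0]
    return child_states[0] and child_states[1]  # "and"

def run(tree):
    """Tree = (symbol, [child trees]). Returns the automaton's state."""
    symbol, children = tree
    return step(symbol, [run(c) for c in children])

t = ("and", [("not", [("false", [])]), ("true", [])])
print(run(t))  # True

# Variable arity is rejected rather than interpreted:
# run(("and", [("true", [])])) raises ValueError
```

In the operad picture, this is the statement that each generator of the free operad has a fixed number of inputs; trees are exactly the well-typed compositions of those generators.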
I recently left OpenAI to pursue independent research. I’m working on a number of different research directions, but the most fundamental is my pursuit of a scale-free theory of intelligent agency. In this post I give a rough sketch of how I’m thinking about that. I’m erring on the side of sharing half-formed ideas, so there may well be parts that don’t make sense yet. Nevertheless, I think this broad research direction is very promising.
This post has two sections. The first describes what I mean by a theory of intelligent agency, and some problems with existing (non-scale-free) attempts. The second outlines my current path towards formulating a scale-free theory of intelligent agency, which I’m calling coalitional agency.
By a “theory of intelligent agency” I mean a...
...However, this strongly limits the space of possible aggregated agents. Imagine two EUMs, Alice and Bob, whose utilities are each linear in how much cake they have. Suppose they’re trying to form a new EUM whose utility function is a weighted average of their utility functions. Then they’d only have three options:
- Form an EUM which would give Alice all the cakes (because it weights Alice’s utility higher than Bob’s)
- Form an EUM which would give Bob all the cakes (because it weights Bob’s utility higher than Alice’s)
- Form an EUM which is totally indifferent abo
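The trichotomy above can be checked numerically. This toy script (my own sketch, not the author's code) maximizes a weighted average of two utilities that are each linear in cake, over allocations of a fixed cake supply:

```python
# Claim: with utilities linear in own cake, maximizing any weighted
# average w * u_Alice + (1 - w) * u_Bob over allocations either gives
# everything to one agent, or (at exactly equal weights) is totally
# indifferent between allocations.

CAKES = 10.0

def merged_utility(alice_share, w):
    """w * u_Alice + (1 - w) * u_Bob, both linear in own cake."""
    return w * alice_share + (1 - w) * (CAKES - alice_share)

for w in (0.3, 0.5, 0.7):
    grid = [i * CAKES / 100 for i in range(101)]  # candidate allocations
    utils = [merged_utility(a, w) for a in grid]
    if max(utils) - min(utils) < 1e-9:
        print(f"w={w}: indifferent between all allocations")
    else:
        best = grid[utils.index(max(utils))]
        print(f"w={w}: optimum gives Alice {best} of {CAKES} cakes")
```

Because the objective is linear in the allocation, any weight other than exactly 0.5 drives the optimum to a corner, which is why the merged EUM has only the three options listed.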