Do you believe running a trade surplus causes a country to be wealthier? If so, how do we know that?
And so, like OpenAI and Anthropic, Google DeepMind wants the United States' AI to be stronger than China's AI. And like OpenAI, it intends to make weapons for the US government.
One might think that in dropping its commitments not to cause net harm and not to violate international law and human rights, Google is signalling its intent to violate human rights. On the contrary, I believe it's merely allowing itself to threaten human rights — or rather, build weapons that will enable the US government to threaten human rights in order to achieve its goals.
(That...
Google's AI principles used to say:
...In addition to the above objectives, we will not design or deploy AI in the following application areas:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for su
Update: It's even better than that. Not only will they make a lab order for you, but they will also pay for the test itself, at a steep discount to the consumer price.
I didn't know about ownyourlabs, thanks! While patients can order a small number of tests directly from Labcorp and Quest Diagnostics, it seems ownyourlabs will sell you a lab order for many tests that you can't get that way.
Exhibit 13 is a sort of Oppenheimer-meets-Truman email thread in which Ilya Sutskever says:
Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we'd made a mistake.
Today, OpenAI republished that email (along with others) on its website (archived). But the above sentence is different in OpenAI's version of the email:
Yesterday while we were considering making our final commitment (even the non-solicit agreement), we realized we’d made a mistake.
I wonder which sentence is the one Ilya actually wr...
check out exhibit 13...
Section 3.3(f)(iii):
...Within 120 days of the date of this memorandum, DOE, acting primarily through the National Nuclear Security Administration (NNSA) and in close coordination with AISI and NSA, shall seek to develop the capability to perform rapid systematic testing of AI models’ capacity to generate or exacerbate nuclear and radiological risks. This initiative shall involve the development and maintenance of infrastructure capable of running classified and unclassified tests, including using restricted data and relevant classified threat information.
So was the launch code really 000000?
A fun lesson on the scientific method: A few days ago, OpenAI announced a rollout of "Advanced Voice Mode". Confused by the messaging, I ran a test to see if their phone app could respond to an interruption:
Nisan: Can you count to 10, and then I'll interrupt you in the middle and see how that works.
ChatGPT 4o: Sure! I'll start counting now: One, two, three, four, fiv—
Nisan: Stop. Interruption.
ChatGPT 4o: How'd the interruption go?
Nisan: Oh, that went great. You stopped at five.
ChatGPT 4o: Awesome! Glad it worked smoothly. Do you want to try another test or...
I tried to replicate. At 20 it went on to 25, and I explained what it got wrong. I tried again. I interrupted at 6 and it stopped at 7, saying "Gotcha, stopped right at eleven!". I explained what happened and it said something like "Good job, you found the horrible, marrow cricket" (these last 3 words are verbatim) and then broke.
The coin flip is a brilliant piece of technology for generating trustworthy random noise:
Thus when teaching the concept of a Bernoulli variable, we use the example of coin flips, because everyone already knows what they are. This is unfortunate because the very next concept we introduce is a biased Bernoulli variable, which corresponds to a "weighted" coin. But weighted coins don't exist! If it were practical to manufacture trick coins with arbitrary biases, coin flipping wouldn't be as popular as it is.
If there were a consensus among the 8 as to which tuning is better, that would be significant, right? Since the chance of that is 1/128 if they can't tell the difference. You can even get p < 0.05 with one dissenter if you use a one-tailed test (which is maybe dubious). Of course we don't know what the data look like, so I'm just being pedantic here.
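For concreteness, here's that arithmetic (a quick sketch; only the 8 listeners and the 0.05 threshold come from the setup above):

```python
from math import comb

n = 8
# P(all 8 agree on a tuning | they can't tell the difference): either unanimous outcome counts.
p_unanimous = 2 * comb(n, n) / 2 ** n                       # = 1/128 ≈ 0.008
# One-tailed p-value for 7 of 8 preferring a pre-specified tuning:
p_one_dissenter = sum(comb(n, k) for k in (7, 8)) / 2 ** n  # = 9/256 ≈ 0.035
print(p_unanimous, p_one_dissenter)
```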
Progress towards a robotic piano tuner: The Entropy Piano Tuner attempts to accommodate "variations in string thickness, stretching, corrosion, dents, the harp flexing", etc. by minimizing the entropy of the power spectrum. Using it should be better than mindlessly tuning to a digital guitar tuner.
According to the website, professional pianists still prefer a human-tuned piano, but no one else can tell the difference. And the general opinion on piano tuner message boards seems to be that it's not quite good enough to replace a professional tuner's judgment.
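As a toy illustration of the entropy idea (my own sketch, not the app's actual algorithm): when two unison strings are in tune, the power spectrum concentrates into fewer bins, so its entropy is lower.

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the signal's normalized power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]                       # drop empty bins so log() is defined
    return float(-np.sum(p * np.log(p)))

fs = 4096                              # 1 second of audio, 1 Hz frequency resolution
t = np.arange(fs) / fs

def two_strings(f1, f2):
    """Two unison strings, idealized as pure tones at f1 and f2 Hz."""
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

print("in tune:   ", spectral_entropy(two_strings(220.0, 220.0)))
print("2 Hz apart:", spectral_entropy(two_strings(220.0, 218.0)))
```

A real tuner works with inharmonic partials across the whole instrument rather than pure tones; this only shows why "aligned partials" and "low spectral entropy" point in the same direction.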
This post is wrong. Thanks to SymplecticMan for the thought experiment demonstrating that a mixture of ideal gases follows a law rather than my proposed law. (It's also different from Newton's law.)
I made a pretty but unjustified assumption — that a cooling baking sheet can be modeled as a dynamical system where each possible transition is equally likely and in which heat is transferred in fixed quanta, one at a time. This contradicted Newton's law, and I got excited when I realized that Newton's law was merely a first-order approximation.
My mist...
This is the perfect time to start an AI + education project. AI today is not quite reliable enough to be a trustworthy teacher; and in the near future generic AI assistants will likely be smart enough to teach anything well (if they want to).
In the meantime, Eureka Labs faces an interesting alignment problem: Can they ensure that their AI teachers teach only true things? It will be tempting to make teachers that only seem to teach well. I hope they figure out how to navigate that!
On 2018-04-09, OpenAI said[1]:
OpenAI’s mission is to ensure that artificial general intelligence (AGI) [...] benefits all of humanity.
In contrast, in 2023, OpenAI said[2]:
[...] OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.
This archived snapshot is from 2023-05-17, but the document didn't get much attention until November that year.
Another example is risk compensation: You make an activity safer (yay) and participants compensate by taking more risks (oh no).
Interesting, it felt less messy to me than, say, rationalist-adjacent research retreats.
lsuser says that as a result of his spiritual journey, "now if there is so much as a cardboard box on my kitchen counter, it bothers me". Has your spiritual practice changed your tolerance of clutter?
In other words, the zero-information oblivion that produced you once can produce you again, maybe in a different form.
Huh, that's Epicurus's argument against fearing death. But while Epicurus assumed there is no afterlife, you're using it to argue there is one!
Re: safety, it depends on exactly where you are, your skill in assessing strangers' intentions from a distance, and probably the way you carry yourself.
Speaking of which, I'd be interested in playing some improv games with you at less.online, if you want to do that!
I'd like to know what Holden did while serving on the board, and what OpenAI would have done if he hadn't joined. That's crucial for assessing the grant's impact.
But since board meetings are private, this will remain unknown for a long time. Unfortunately, the best we can do is speculate.
Of course, Karpathy's post could be in the multimodal training data.
12 years ago, in "The state of Computer Vision and AI: we are really, really far away", Andrej Karpathy wrote:
The picture above is funny.
But for me it is also one of those examples that make me sad about the outlook for AI and for Computer Vision. What would it take for a computer to understand this image as you or I do? [...]
In any case, we are very, very far and this depresses me. What is the way forward? :(
I just asked gpt-4o what's going on in the picture, and it understood most of it:
...In this image, a group of men in business attire are seen in a l
That does look like a rough commute, the kind that can use up the mental energy you want to spend on learning. One thing you could consider is staying in a hotel overnight near your school sometimes.
Also, consider wearing ear protection on the Transbay Tube. I wish I had done that when I commuted that way for a year.
I suppose if you had more hidden states than observables, you could distinguish hidden-state prediction from next-token prediction by the dimension of the fractal.
If I understand correctly, the next-token prediction of Mess3 is related to the current-state prediction by a nonsingular linear transformation. So a linear probe showing "the meta-structure of an observer's belief updates over the hidden states of the generating structure" is equivalent to one showing "the structure of the next-token predictions", no?
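Here's the equivalence I have in mind as a toy computation (the matrix A and the probe below are made-up stand-ins, not the actual Mess3 parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# A[i, o] = P(next token o | hidden state i); assumed square and nonsingular.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

W = rng.normal(size=(3, 16))    # hypothetical linear probe: 16-d residual stream -> belief state
residual = rng.normal(size=16)  # a fake residual-stream activation

belief = W @ residual           # probe readout of the belief over hidden states
next_token = belief @ A         # implied next-token prediction

# The same next-token prediction is read out by a single linear probe, A.T @ W ...
assert np.allclose(next_token, (A.T @ W) @ residual)
# ... and beliefs are recoverable from next-token predictions because A is invertible.
assert np.allclose(belief, next_token @ np.linalg.inv(A))
```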
The subject of this post appears in the "Did you know..." section of Wikipedia's front page (archived) right now.
I'm saying "transformers" every time I am tempted to write "LLMs" because many modern LLMs also do image processing, so the term "LLM" is not quite right.
"Transformer"'s not quite right either because you can train a transformer on a narrow task. How about foundation model: "models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks".
I agree 100%. It would be interesting to explore how the term "AGI" has evolved, maybe starting with Goertzel and Pennachin 2007 who define it as:
a software program that can solve a variety of complex problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions
On the other hand, Stuart Russell testified that AGI means
machines that match or exceed human capabilities in every relevant dimension
so the experts seem to disagree. (On the other hand, ...
I'm surprised to see an application of the Banach fixed-point theorem as an example of something that's too implicit from the perspective of a computer scientist. After all, real quantities can only be represented in a computer as a sequence of approximations — and that's exactly what the theorem provides.
I would have expected you to use, say, the Brouwer fixed-point theorem instead, because Brouwer fixed points can't be computed to arbitrary precision in general.
(I come from a mathematical background, fwiw.)
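To illustrate what I mean (my own toy example, not one from the thread): the contraction constant gives both the approximating sequence and a computable stopping rule.

```python
from math import cos, sin

def banach_fixed_point(f, x0, q, tol):
    """Iterate x -> f(x); stop when the a-posteriori bound q/(1-q) * |x_n - x_{n-1}| <= tol."""
    x_prev, x = x0, f(x0)
    while q / (1 - q) * abs(x - x_prev) > tol:
        x_prev, x = x, f(x)
    return x

# cos is a contraction on [0, 1] with Lipschitz constant q = sin(1) ≈ 0.84.
print(banach_fixed_point(cos, 0.5, q=sin(1.0), tol=1e-10))   # ≈ 0.7390851332 (the Dottie number)
```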
This article saved me some time just now. Thanks!
Scaling temperature up by a factor of 4 scales up all the velocities by a factor of 2 [...] slowing down the playback of a video has the effect of increasing the time between collisions [...]
Oh, good point! But hm, scaling up temperature by 4x should increase velocities by 2x and energy transfer per collision by 4x. And it should increase the rate of collisions per time by 2x. So the rate of energy transfer per time should increase 8x. But that violates Newton's law as well. What am I missing here?
constant volume
Ah, so I'm working at a level of generality that applies to all sorts of dynamical systems, including ones with no well-defined volume. As long as there's a conserved quantity $E$, we can define the entropy $S(E)$ as the log of the number of states with that value of $E$. This is a univariate function of $E$, and temperature can be defined as the multiplicative inverse of the derivative $dS/dE$.
if the proportionality depends on thermodynamic variables
By

$$\dot{Q} \propto \frac{1}{T_{\text{env}}} - \frac{1}{T_{\text{obj}}}$$

I mean

$$\dot{Q} = k\left(\frac{1}{T_{\text{env}}} - \frac{1}{T_{\text{obj}}}\right)$$

for some constant $k$ that doesn't vary with time. S...
Yeah, as Shankar says, this is only for conduction (and maybe convection?). The assumption about transition probabilities is abstractly saying there's a lot of contact between the subsystems. If two objects contact each other in a small surface area, this post doesn't apply and you'll need to model the heat flow with the heat equation. I suppose radiative cooling acts abstractly like a narrow contact region, only allowing photons through.
I am suspicious of this "Lambert's law". Suppose the environment is at absolute zero -- nothing is moving at all. Then "Lambert's law" says that the rate of cooling should be infinite: our object should itself instantly drop to absolute zero once placed in an absolute-zero environment. Can that be right?
We're assuming the environment carries away excess heat instantly. In practice the immediate environment will warm up a bit and the cooling rate will become finite right away.
But in the ideal case, yeah, I think instant cooling makes sense. The environment's coldness is infinite!
Oh neat! Very interesting. I believe your argument is correct for head-on collisions. What about glancing blows, though?
Assume two rigid, spherical particles with the same mass and radius.
Pick a coordinate system (at rest) where the collision normal vector is aligned with the x-axis.
Then move the coordinate system along the x axis so that the particles have equal and opposite x-velocities. (The y-velocities will be whatever.) In this frame, the elastic collision will negate the x-velocities and leave the y-velocities untouched.
Back in the rest frame, this ...
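A quick numerical check of this procedure (the velocities below are arbitrary):

```python
import numpy as np

def collide(v1, v2):
    """Elastic collision of equal-mass spheres, collision normal along the x-axis."""
    u = 0.5 * (v1[0] + v2[0])              # x-velocity of the frame where x-velocities are opposite
    w1, w2 = v1 - [u, 0], v2 - [u, 0]      # boost into that frame
    w1[0], w2[0] = -w1[0], -w2[0]          # the collision negates the normal components
    return w1 + [u, 0], w2 + [u, 0]        # boost back to the rest frame

v1, v2 = np.array([3.0, 1.0]), np.array([-1.0, 2.0])
v1_new, v2_new = collide(v1, v2)

print(v1_new, v2_new)                       # x-velocities swapped, y-velocities untouched
print("momentum conserved:", np.allclose(v1 + v2, v1_new + v2_new))
print("energy conserved:  ", np.isclose(v1 @ v1 + v2 @ v2, v1_new @ v1_new + v2_new @ v2_new))
```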
I'd love it if anyone can point me to anywhere this cooling law (proportional to the difference of coldnesses) has been written up.
Also my assumptions about the dynamical system are kinda ad hoc. I'd like to know assumptions I ought to be using.
We can derive Newton's law of cooling from first principles.
Consider an ergodic discrete-time dynamical system and group the microstates into macrostates according to some observable variable $X$. ($X$ might be the temperature of a subsystem.)
Let's assume that if $X = x$, then in the next timestep $X$ can be one of the values $x - 1$, $x$, or $x + 1$.
Let's make the further assumption that the transition probabilities for these three possibilities are in the same ratios as the numbers of microstates in the corresponding macrostates.
Then it turns out that the rate of change over time is proportional to ...
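Here's a toy numerical check of this setup (a sketch: taking the observable to be the energy of subsystem 1, and using Einstein-solid microstate counts with arbitrary sizes, all of which are my assumptions rather than anything from the post):

```python
from math import comb, log

N1, N2 = 300, 800        # oscillators in each subsystem (assumed)
E_TOT = 1000             # total quanta, conserved

def omega(n, e):
    """Microstates of an Einstein solid: ways to distribute e quanta among n oscillators."""
    return comb(e + n - 1, n - 1)

def total_omega(e1):
    return omega(N1, e1) * omega(N2, E_TOT - e1)

def coldness(n, e):
    """beta = dS/dE, approximated by a discrete derivative of S = log(omega)."""
    return log(omega(n, e + 1)) - log(omega(n, e))

for e1 in (100, 200, 300, 400):
    # Transitions to e1 - 1, e1, e1 + 1 have probabilities proportional to microstate counts.
    r_plus = total_omega(e1 + 1) / total_omega(e1)
    r_minus = total_omega(e1 - 1) / total_omega(e1)
    drift = (r_plus - r_minus) / (1 + r_plus + r_minus)   # expected change in e1 per step
    dbeta = coldness(N1, e1) - coldness(N2, E_TOT - e1)   # difference of coldnesses
    print(f"e1={e1}: drift per step = {drift:+.4f}, "
          f"beta1 - beta2 = {dbeta:+.4f}, ratio = {drift / dbeta:.3f}")
```

The last column is roughly constant, which is the proportionality showing up numerically (at least while the coldness difference is small).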
Wow, that's a lot of kale. Do you eat 500g every day? And 500g is the mass of the cooked, strained kale?
I wonder why Gemini used RLHF instead of Direct Preference Optimization (DPO). DPO was written up 6 months ago; it's simpler and apparently more compute-efficient than RLHF.
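For reference, the objective from the DPO paper (as I recall it) is a single supervised loss over preference pairs $(x, y_w, y_l)$, with no separate reward model and no RL loop:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

which is what "simpler" cashes out to: a logistic loss on log-probability ratios against a frozen reference model.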
Another example is the obfuscated arguments problem. As a toy example:
For every cubic centimeter in Texas, your missing earring is not in the cubic centimeter.
Therefore, your missing earring is not in Texas.
Even if the conclusion of the argument is a lie, each premise is spot-checkable and most likely true. The lie has been split up into many statements each of which is only slightly a lie.
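Toy numbers for that intuition (N and k here are illustrative assumptions, not anything from the post):

```python
N = 10 ** 20       # cubic-centimeter premises standing in for "all of Texas"
k = 10 ** 4        # premises a verifier has time to spot-check
p_premise_true = 1 - 1 / N   # any single premise is almost certainly true
p_lie_caught = k / N         # chance the spot-checks hit the one false premise
print(p_premise_true, p_lie_caught)
```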
Thanks! For convex sets of distributions: If you weaken the definition of fixed point to , then the set has a least element which really is a least fixed point.
Hyperbolic growth
The differential equation $\dot{x} = x^{1+s}$, for positive $x$ and $s$, has solution

$$x(t) = \frac{1}{(t_\ast - t)^{1/s}}$$

(after changing the units). The Roodman report argues that our economy follows this hyperbolic growth trend, rather than an exponential one.
While exponential growth has a single parameter (the growth rate or interest rate), hyperbolic growth has two parameters: $t_\ast$ is the time until singularity, and $s$ is the "hardness" of the takeoff.
A value of $s$ close to zero gives a "soft" takeoff where the derivative gets high well in advance of the singularity. A large va...
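A quick numerical sketch of the two regimes (assuming the parametrization above; the exact solution of $\dot{x} = x^{1+s}$, before the unit change, is $x(t) = \big(s\,(t_\ast - t)\big)^{-1/s}$):

```python
from scipy.integrate import solve_ivp

def closed_form(t, s, t_star):
    return (s * (t_star - t)) ** (-1.0 / s)

for s in (0.1, 1.0):                       # small s: "soft" takeoff; large s: "hard" takeoff
    x0 = 1.0
    t_star = x0 ** (-s) / s                # singularity time implied by x(0) = x0
    t_end = 0.9 * t_star                   # stop just short of the singularity
    sol = solve_ivp(lambda t, x: x ** (1 + s), (0.0, t_end), [x0], rtol=1e-10, atol=1e-12)
    print(f"s={s}: t* = {t_star:.2f}, x({t_end:.2f}) numeric = {sol.y[0, -1]:.4g}, "
          f"closed form = {closed_form(t_end, s, t_star):.4g}")
```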
Ah, beginning-of-line-text is nice. It skips over the initial # or // of comments and the initial * of Org headings. I've now bound it to M-m.
Consider seeing a doctor about the panicky and stressed feelings. They may test you for hormone imbalances or prescribe you antianxiety medication.
You bring up a point that I definitely should've mentioned in the post: I am diagnosed with an anxiety disorder (OCD) and am currently taking medicine for it. It doesn't solve everything (such as the issues mentioned here), but the diagnosis does help to explain why I might be having these problems in the first place.
OK. It's strange, then, that Wikipedia does not say this. On the contrary, it says:
(This doesn't necessarily contradict your claim, but it would be misleading for the article to say this but not mention a consensus view that trade surpluses are beneficial.)