

This is a special post for quick takes by Nisan. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
[-]Nisan

On 2018-04-09, OpenAI said[1]:

OpenAI’s mission is to ensure that artificial general intelligence (AGI) [...] benefits all of humanity.

In contrast, in 2023, OpenAI said[2]:

[...] OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.


  1. Archived ↩︎

  2. This archived snapshot is from 2023-05-17, but the document didn't get much attention until November that year. ↩︎

Recent interviews with Eliezer:

Hyperbolic growth

The differential equation $\dot{x} = x^{1+s}$, for positive $x$ and $s$, has solution

$$x = (t_* - t)^{-1/s}$$

(after changing the units). The Roodman report argues that our economy follows this hyperbolic growth trend, rather than an exponential one.

While exponential growth has a single parameter (the growth rate or interest rate), hyperbolic growth has two parameters: $t_*$ is the time of the singularity, and $s$ is the "hardness" of the takeoff.

A value of $s$ close to zero gives a "soft" takeoff where the derivative gets high well in advance of the singularity. A large value of $s$ gives a "hard" takeoff, where explosive growth comes all at once right at the singularity. (Paul Christiano calls these "slow" and "fast" takeoff.)

Paul defines "slow takeoff" as "There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles." This corresponds to $s \le 2$. (At $s = 2$, the first four-year doubling starts at $16/3 \approx 5.3$ and the first one-year doubling starts at $4/3 \approx 1.3$ years before the singularity.)

So the simple hyperbola with $s = 1$ counts as "slow takeoff". (This is the "naive model" mentioned in footnote 31 of Intelligence Explosion Microeconomics.)

Roodman's estimates of historical $s$ are closer to $1/2$ (see Table 3).
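A small sanity check of the numbers above (a sketch of my own, assuming the solution $x = (t_* - t)^{-1/s}$ from this post): the first $d$-year doubling interval starts $d/(1 - 2^{-s})$ years before the singularity, so Paul's criterion holds exactly when $s \le 2$.

```python
def first_doubling_start(d, s):
    """Years before the singularity at which the first d-year doubling interval starts,
    assuming x(t) = (t_star - t) ** (-1 / s)."""
    return d / (1.0 - 2.0 ** (-s))

for s in (0.5, 1.0, 2.0, 4.0):
    start4 = first_doubling_start(4.0, s)  # first 4-year doubling
    start1 = first_doubling_start(1.0, s)  # first 1-year doubling
    # Paul's "slow takeoff": the 4-year doubling completes before the 1-year one begins,
    # i.e. start4 - 4 >= start1, which simplifies to 4 * 2**(-s) >= 1, i.e. s <= 2.
    slow = 4.0 * 2.0 ** (-s) >= 1.0
    print(f"s = {s}: 4-year doubling starts {start4:.2f}y out, "
          f"1-year doubling starts {start1:.2f}y out, slow takeoff: {slow}")
```

At $s = 2$ the first four-year doubling ends exactly when the first one-year doubling begins, which is the boundary case mentioned above.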

People are fond of using the neologism "cruxy", but there's already a word for that: "crucial". Apparently this sense of "crucial" can be traced back to Francis Bacon.

The point of using a word like this is to point to different habits of thought. If you use an existing word, that's unlikely to happen in listeners.

If you don't do that you get a lot of motte-and-bailey issues. 

A cruxy point doesn't have to be important, and the whole question being considered doesn't have to be important either. This is an unfortunate connotation of "crucial", because when I'm pointing out that the sky is blue, I'm usually not saying that it's important that it's blue, or that it's important for this object-level argument to be resolved. It's only important to figure out what caused a simple mistake that's usually reliably avoided, and to keep channeling curiosity to fill out the map, so that it's not just the apparently useful parts that are more than wild conjecture.

I think it's relative.  A crux is crucial to a question, whether the question is crucial to anything else or not.  If you're pointing out the sky is blue, that's only a crux if it's important to some misunderstanding or disagreement.  

I'm with Nisan.  "Crucial" is simply the proper and common term that should be used instead of the backformation "cruxy".  

[-]Nisan

A fun lesson on the scientific method: A few days ago, OpenAI announced a rollout of "Advanced Voice Mode". Confused by the messaging, I ran a test to see if their phone app could respond to an interruption:

Nisan: Can you count to 10, and then I'll interrupt you in the middle and see how that works.

ChatGPT 4o: Sure! I'll start counting now: One, two, three, four, fiv—

Nisan: Stop. Interruption.

ChatGPT 4o: How'd the interruption go?

Nisan: Oh, that went great. You stopped at five.

ChatGPT 4o: Awesome! Glad it worked smoothly. Do you want to try another test or experiment?

Nisan: Yeah, let's do another interruption test. This time, please count to 20.

ChatGPT 4o: Got it! Starting now: One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirt—

Nisan: Stop. Interruption.

ChatGPT 4o: How was that?

Nisan: Wow. [Thinks: It seemed to stop slightly before I interrupted it!]

ChatGPT 4o: Glad that impressed you! Want to try something else?

Nisan: Uh, hold on.

ChatGPT 4o: Take your time!

Nisan: [Generates a random number.] Okay, please count to 30, and I'll interrupt at a random point.

ChatGPT 4o: Alright, starting now: One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen—

[Nisan interrupts at some point; I forgot exactly where.]

ChatGPT 4o: —sixteen, seventeen, eighteen, nineteen, twenty, twenty-one, twenty-two, twenty-three, twenty-four, twenty-five, twenty-six, twenty-seven, twenty-eight, twenty-nine, thirty. Looks like I made it to the end! Did you mean to interrupt somewhere?

Nisan: Yeah, I did. It must not have worked. Let's try that again. [Generates a random number.] Please count up to 30.

ChatGPT 4o: No problem, let's try again! Starting now: One, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen—

[Nisan interrupts at some point.]

ChatGPT 4o: —sixteen, seventeen, eighteen, nineteen, twenty, twenty-one, twenty-two, twenty-three, twenty-four, twenty-fiv— Did it work this time?

Nisan: Uh, no.

I believe ChatGPT can't hear me when it's talking. But it correctly guessed when I would interrupt on the first two trials!

[-]kave

I tried to replicate. At 20 it went on to 25, and I explained what it got wrong. I tried again. I interrupted at 6 and it stopped at 7, saying "Gotcha, stopped right at eleven!". I explained what happened and it said something like "Good job, you found the horrible, marrow cricket" (these last 3 words are verbatim) and then broke.

12 years ago, in "The state of Computer Vision and AI: we are really, really far away," Andrej Karpathy wrote:

The picture above is funny.

But for me it is also one of those examples that make me sad about the outlook for AI and for Computer Vision. What would it take for a computer to understand this image as you or I do? [...]

In any case, we are very, very far and this depresses me. What is the way forward? :(

I just asked gpt-4o what's going on in the picture, and it understood most of it:

In this image, a group of men in business attire are seen in a locker room or a similar setting. The focus is on two men, where the taller man is standing on a scale. The shorter man, who appears to be playfully pressing down on the taller man's shoulders to increase his weight on the scale, is creating a humorous situation. Both men and those observing in the background are smiling or laughing, indicating that they are enjoying the lighthearted moment. The man pressing down seems to be taking part in a playful prank or joke, adding a sense of camaraderie and fun to the scene.

Of course, Karpathy's post could be in the multimodal training data.

The coin flip is a brilliant piece of technology for generating trustworthy random noise:

  • Making a two-headed coin is forgery, which is a crime.
  • Such trick coins can be foiled anyway by calling the toss in the air.

Thus when teaching the concept of a Bernoulli variable, we use the example of coin flips, because everyone already knows what they are. This is unfortunate because the very next concept we introduce is a biased Bernoulli variable, which corresponds to a "weighted" coin. But weighted coins don't exist! If it were practical to manufacture trick coins with arbitrary biases, coin flipping wouldn't be as popular as it is.

Yeah, coins can only be weighted very slightly. See Andrew Gelman & Deborah Nolan, "You Can Load a Die, But You Can't Bias a Coin."

Yeah, and it's so very easy to make a weighted die. Why don't teachers switch to talking about weighted dice when explaining biased variables? You can label the sides of a six-sided die with three 1s and three 2s to get a binary die easily enough. It just seems weird that something which is physically very difficult to make, and which almost certainly nobody in the class has ever seen, would be chosen as a teaching example over something which does exist and could even be made into a physical object for in-class demonstrations!
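Here's a minimal sketch of that relabeling idea (mine, with an arbitrary sample size): a fair six-sided die with $k$ faces marked "success" gives a Bernoulli variable with bias $k/6$.

```python
import random

def bernoulli_from_die(num_success_faces):
    """Roll a fair d6; count it as a success if the face is one of the relabeled faces."""
    return random.randint(1, 6) <= num_success_faces

# Three relabeled faces give p = 1/2; four give a biased variable with p = 2/3.
samples = [bernoulli_from_die(4) for _ in range(100_000)]
print(sum(samples) / len(samples))  # should be close to 4/6 ≈ 0.667
```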

[-]Nisan

Conception is a startup trying to do in vitro gametogenesis for humans!

[-]Nisan

We can derive Newton's law of cooling from first principles.

Consider an ergodic discrete-time dynamical system and group the microstates into macrostates according to some observable variable $x$. ($x$ might be the temperature of a subsystem.)

Let's assume that if the current value is $x$, then in the next timestep it can be one of the values $x - 1$, $x$, or $x + 1$.

Let's make the further assumption that the transition probabilities for these three possibilities are in the same ratio as the numbers of microstates of the corresponding macrostates.

Then it turns out that the rate of change of $x$ over time is proportional to $\frac{dS}{dx}$, where $S$ is the entropy, which is the logarithm of the number of microstates.
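Here's a toy numerical check of that lemma (a sketch of my own; the entropy function is an arbitrary illustrative choice, not from the post). Weighting the three candidate values by their microstate counts $e^{S(x-1)}$, $e^{S(x)}$, $e^{S(x+1)}$, the expected step per timestep comes out proportional to $dS/dx$ whenever $S$ varies slowly:

```python
import math

def S(x):
    # Arbitrary illustrative entropy: S(x) = 100 * ln(x), so dS/dx = 100 / x.
    return 100.0 * math.log(x)

def expected_step(x):
    """Expected change in x per timestep, with weights proportional to microstate counts.
    Only entropy differences are used, which avoids overflowing exp(S)."""
    w_down = math.exp(S(x - 1) - S(x))
    w_stay = 1.0
    w_up = math.exp(S(x + 1) - S(x))
    return (w_up - w_down) / (w_down + w_stay + w_up)

x = 10_000.0
print(expected_step(x))          # ~0.00667
print((2.0 / 3.0) * 100.0 / x)   # ~0.00667, i.e. (2/3) * dS/dx
```

The factor $2/3$ just comes from there being three equally counted candidate states; the point is the proportionality to $dS/dx$.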

Now suppose our system consists of two interacting subsystems with energies $E_1$ and $E_2$. Total energy is conserved. How fast will energy flow from one subsystem to the other? By the above lemma, $\frac{dE_1}{dt}$ is proportional to $\frac{dS}{dE_1} = \beta_1 - \beta_2$, where $S = S_1(E_1) + S_2(E_2)$ is the total entropy.

Here $\beta_1 = \frac{dS_1}{dE_1}$ and $\beta_2 = \frac{dS_2}{dE_2}$ are the coldnesses of the subsystems. Coldness is the inverse of temperature, and is more fundamental than temperature.

Note that Newton's law of cooling says that the rate of heat transfer is proportional to the temperature difference $T_2 - T_1$. For a narrow temperature range this will approximate our result.
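As a quick check of that last point (my own illustration, with arbitrary example temperatures): the coldness difference $\frac{1}{T_1} - \frac{1}{T_2} = \frac{T_2 - T_1}{T_1 T_2}$ is nearly proportional to $T_2 - T_1$ when the two temperatures are close.

```python
T1 = 300.0  # arbitrary example temperature, in kelvin
for T2 in (301.0, 305.0, 310.0, 330.0):
    coldness_diff = 1.0 / T1 - 1.0 / T2   # our result: rate proportional to beta1 - beta2
    newton = (T2 - T1) / T1 ** 2          # Newton's law, scaled by 1/T1^2 for comparison
    print(f"T2 = {T2}: ratio = {coldness_diff / newton:.4f}")
```

The ratio stays close to 1 for small temperature differences and drifts away as the gap widens, which is the sense in which Newton's law is the narrow-range approximation.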

[-]Nisan

I'd love it if anyone can point me to anywhere this cooling law (rate proportional to the difference of coldnesses) has been written up.

Also, my assumptions about the dynamical system are kinda ad hoc. I'd like to know what assumptions I ought to be using.

Agents who model each other can be modeled as programs with access to reflective oracles. I used to think the agents have to use the same oracle. But actually the agents can use different oracles, as long as each oracle can predict all the other oracles. This feels more realistic somehow.

I'm not sure there's a functional difference between "same" and "different" oracles at this level of modeling.