
We hope sharing this will help other researchers perform more cost-effective experiments on alignment faking.

It is also a cheap example of a model organism of misalignment.

I think this would be fixed if they didn't force the Yes and No prices to sum to 100%. If both contracts earned the same interest rate, the price ratio would reveal the true odds.

The problem is that you're forcing what amounts to a one-year loan of $1 to be priced at $1 in the present, when it should be priced at less than $1.
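A minimal sketch of the pricing argument, using hypothetical numbers (the interest rate and probability below are illustrative assumptions, not figures from the discussion):

```python
# A contract paying $1 in one year is worth $1/(1+r) today, so the
# fair Yes and No prices should sum to the discounted value of $1,
# not to $1 itself.

r = 0.05                  # assumed annual interest rate
discount = 1 / (1 + r)    # present value of $1 paid in one year

true_prob = 0.60          # assumed true probability of Yes
yes_price = true_prob * discount         # fair present price of Yes
no_price = (1 - true_prob) * discount    # fair present price of No

# Together the two contracts sum to less than $1...
assert abs((yes_price + no_price) - discount) < 1e-12

# ...but the price *ratio* still recovers the true odds.
implied = yes_price / (yes_price + no_price)
assert abs(implied - true_prob) < 1e-12

print(round(yes_price + no_price, 4), round(implied, 2))
```

Forcing `yes_price + no_price == 1` instead would distort both prices away from these fair values, which is the distortion the comment points at.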

I'm assuming the LDT agent knows what the game is and who their opponent is.

Towards the end of the post, in the "No agent is rational in every problem" section, I provided a more general argument. I was assuming LDT would fall under case 1, but if not, then case 2 demonstrates it is irrational.

Ultimately, though, we are not wedded to our particular formulation. Perhaps there is some clever sampling-based verifier that "trivializes" our conjecture as well, in which case we would want to revise it.

I think your goal should be to show that your abstract conjecture implies the concrete result you're after, or is even equivalent to it.

At ARC, we are interested in finding explanations of neural network behavior. Concretely, a trained neural net (such as GPT-4) exhibits a really surprising property: it gets low loss on the training set (far lower than a random neural net).

We can formalize this in a similar way as the reversible circuit conjecture. Here's a rough sketch:

Transformer performance no-coincidence conjecture: Consider a computable process that randomly generates text. The distribution has significantly lower entropy than the uniform distribution. Consider the property P(T) that says "the transformer T gets low average loss when predicting this process". There is a deterministic polynomial time verifier V(T, π) such that:

  1. P(T) implies that there exists a polynomial length π with V(T,π) = 1.
  2. For 99% of transformers T, there is no π with V(T,π) = 1.

Note that "ignore π, and then test T on a small number of inputs" doesn't work. P is only asking if T has low average loss, so you can't falsify P with a small number of inputs.
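The point that spot-checking can't falsify an average-loss property can be illustrated with a toy example (a hypothetical setup for illustration, not ARC's formalism):

```python
import random

# A model can be perfect on any small sample of inputs yet terrible
# on average, so a verifier that only tests a few inputs learns
# nothing about average loss.

random.seed(0)
N = 10**6                            # size of the input space
sample = random.sample(range(N), 5)  # the verifier's few test inputs

def loss_good(x):
    # Low loss everywhere.
    return 0.0

def loss_bad(x):
    # Perfect on exactly the sampled inputs, loss 1 everywhere else.
    return 0.0 if x in sample else 1.0

# Both models look identical on the sample...
assert [loss_good(x) for x in sample] == [loss_bad(x) for x in sample]

# ...but their average losses differ drastically.
avg_bad = sum(loss_bad(x) for x in range(N)) / N
print(avg_bad)  # close to 1: bad on average despite passing the spot check
```

This is why the conjecture needs a certificate π rather than a sampling check: the property P quantifies over average behavior, which no small set of input-output tests pins down.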

I mean, beating a chess engine in 2005 might be a "years-long task" for a human? The time METR is measuring is how long it would hypothetically take a human to do the task, not how long it takes the AI.

You're saying that if you assigned 1 human contractor the task of solving superalignment, they would succeed after ~3.5 billion years of work? 🤔 I think you misunderstood what the y-axis on the graph is measuring.

I think the most mysterious part of this trend is that the x-axis is release date. Very useful but mysterious.
