This doesn't make complete sense to me, but you are going down a line of thought I recognize.

There are certainly stable utility functions which, while having some drawbacks, don't result in dangerous behavior from superintelligences. Finding a good one doesn't seem all that difficult.

The real nasty challenge is how to build a superintelligence that has the utility function we want it to have. If we could do this, then we could start by choosing an extremely conservative utility function and slowly and cautiously iterate towards a balance of safe and useful.

I've been thinking about a similar thing a lot.

Consider a little superintelligent child who always wants to eat as much candy as possible over the course of the next ten minutes. Assume the child doesn't care at all about anything that happens more than ten minutes from now.

This child won't work very hard at any instrumental goals like self improvement and conquering the world to redirect resources towards candy production, since that would be a waste of time, even though it might maximize candy consumption in the long term.
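A toy sketch of this point, with entirely made-up numbers (the candy rates, the setup time, and the plan names are all hypothetical): the instrumental "conquer the world" plan dominates only when the horizon is long enough for its payoff to land.

```python
# Hypothetical numbers comparing a myopic agent to a long-horizon one.
# "grab": eat candy immediately. "empire": spend time acquiring resources
# first, which pays off only after the 10-minute window has closed.

def candy_eaten(plan, horizon_minutes):
    if plan == "grab":
        return 5 * min(horizon_minutes, 10)   # 5 candies/min, right away
    if plan == "empire":
        setup = 60                            # instrumental phase: no candy yet
        payoff_minutes = max(horizon_minutes - setup, 0)
        return 1000 * payoff_minutes          # huge rate, but only after setup

for horizon in (10, 10_000):
    best = max(("grab", "empire"), key=lambda p: candy_eaten(p, horizon))
    print(horizon, best)
```

With a ten-minute horizon the instrumental plan earns nothing, so the myopic child just grabs candy; only a long-horizon agent finds world-conquering worth the detour.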

AI alignment isn't any easier here; the point of this is just to illustrate that instrumental convergence is far from a given.

Claude says the vibes are 'inherently cursed'

But then it chooses not to pull the lever because it's 'less karmically disruptive'

Note that if computing an optimization step reduces the loss, the training process will reinforce it, even if other layers aren’t doing similar steps, so this is another reason to expect more explicit optimizers.

Basically, self-attention updates each token as a function of certain weight matrices, something like this:

$$e_j \leftarrow e_j + P \, W_V E \, (W_K E)^{\top} W_Q \, e_j$$

which looks really messy when you put it like this, but is pretty natural in context.

If you can get the big messy looking term to approximate a gradient descent step for a given loss function, then you're golden.

In Appendix A.1, they show the matrices that yield this gradient descent step. They are pretty simple, and probably an easy attractor for training to find.
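Here is a toy numpy sketch of the correspondence (my own minimal construction in the spirit of the paper, not its exact matrices): with keys and queries equal to the inputs x_i and values carrying the targets y_i, a linear-attention readout for a query token equals the prediction after one gradient descent step, from W = 0, on the in-context regression loss.

```python
import numpy as np

# Toy check: linear attention with K_i = Q_i = x_i and V_i = y_i reproduces
# one gradient descent step on the in-context linear regression loss
# L(w) = 1/(2N) * sum_i (w . x_i - y_i)^2, starting from w = 0.
rng = np.random.default_rng(0)
d, N, eta = 3, 8, 0.1

w_true = rng.normal(size=d)
X = rng.normal(size=(N, d))        # in-context examples x_i
y = X @ w_true                     # targets y_i = w_true . x_i

# One GD step from w = 0:
grad_at_zero = -(X.T @ y) / N
w_after_step = -eta * grad_at_zero           # = (eta / N) * X.T @ y

# Linear attention readout for a query token x:
x_query = rng.normal(size=d)
attn_out = (eta / N) * sum(y[i] * (X[i] @ x_query) for i in range(N))

print(np.isclose(attn_out, w_after_step @ x_query))  # True: same prediction
```

The attention sum and the post-step prediction are the same expression, (η/N) Σᵢ yᵢ (xᵢ · x), which is the sense in which one attention layer "is" one optimization step.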

All of this reasoning is pretty vague, and without the experimental evidence it wouldn't be nearly good enough. So there's definitely more to understand here. But given the experimental evidence I think this is the right story about what's going on.

I think you do this post a disservice by presenting it as a failure. It had a wrong conclusion, but its core arguments are still interesting and relevant, and exploring the reasons they are wrong is very useful.

Your model of neural nets predicted the wrong thing, that's super exciting! We can improve the model now.

The fundamental idea, that genes have an advantage over weights at internally implementing looping algorithms, is apparently wrong (though I don't understand how the contrary is possible...)

I've been trying to understand this myself. Here's the understanding I've come to, which is very simplistic. If someone who knows more about transformers than I do says I'm wrong, I will defer to them.

I used this paper to come to this understanding.

In order to have a mesa-optimizer, lots and lots of layers need to be in on the game of optimization, rather than just one or a few key elements which get referenced repeatedly during the optimization process.

But self-attention is, by default, not very far from being one step of gradient descent. Each layer doesn't need to learn to do optimization from scratch, since that step is relatively easy to find given the self-attention architecture.

That's why it's not forbiddingly difficult for neural networks to implement internal optimization algorithms. It could still be forbiddingly difficult for most optimization algorithms, the ones that aren't easy to reach from the basic architecture.

Why don’t I do the project myself?

Because I think I’m one of the smartest young supergeniuses, and I’m working on things that I think are even more useful in expectation, and which almost nobody except me can do.


Even if this is by some small chance actually true, it's stupid of you to say it: from the perspective of your readers, you are almost certainly wrong, so you undermine your own credibility. I'm sure you were aware some people would think this, and don't care. Have you experimented with trying not to piss people off, to see if it helps you?

As for your actual idea, it's cool and even if it doesn't work out we could learn some important things. Good luck!

Why do you think Anthropic and OpenAI are making such bold predictions? (https://x.com/kimmonismus/status/1897628497427701961)

As I see it, one of the following is true:

  1. They agree with you but want to shape the narrative away from the truth to sway investors
  2. They have mostly the same info as you but come to a different conclusion
  3. They have evidence we don't have which gives them confidence

Where on this planet could the USA cheaply put people instead of executing them, such that they

  1. Have the option to survive if they try
  2. Can't escape
  3. Can't cause harm to non-exiled people?

If you haven't already, I'd recommend reading Vinge's 1993 essay on 'The Coming Technological Singularity': https://accelerating.org/articles/comingtechsingularity

He is remarkably prescient, to the point that I wonder whether any really new insights into the broad problem have been made in the 22 years since he wrote. He discusses, among other things, using humans as a base on which to build superintelligence as a possible alignment strategy, as well as the problems with this approach.

Here's one quote:

Eric Drexler [...] agrees that superhuman intelligences will be available in the near future — and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good's ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate — say — one million times slower than you, there is little doubt that over a period of years (your time) you could come up with "helpful advice" that would incidentally set you free. [...] 
