All of evalu's Comments + Replies

evalu10

99% of random[3] reversible circuits, no such π exists.

Do you mean 99% of circuits that don't satisfy P? Because there probably are distributions of random reversible circuits under which P is satisfied exactly 1% of the time, and that would make V's job as hard as proving NP = coNP.

2Eric Neyman
We are interested in natural distributions over reversible circuits (see e.g. footnote 3), where we believe that circuits that satisfy P are exceptionally rare (probably exponentially rare).
evalu10

Have you felt this from your own experience trying to get funding, or from others, or both? Also, I'm curious what you think their specific kind of bullshit is, and whether there are things you think are real but others thought were bullshit.

1Kabir Kumar
Both. Not sure; it's something like LessWrong/EA speak mixed with VC speak.
evalu*137

I disagree, because to me this just looks like LLMs are one algorithmic improvement away from having executive function, similar to how they couldn't do system-2-style reasoning until this year, when RL on math problems started working.

For example, being unable to change its goals on the fly: if a kid kept trying to go forward when his Pokémon were too weak, he would keep losing, get upset, and hopefully, in a moment of mental clarity, learn the general principle that he should step back and reconsider his goals every so often. I think most children learn som... (read more)

8Cole Wyeth
They might solve it in a year, with one stunning conceptual insight. They might solve it in ten years or more. There's no decisive evidence either way; by default, I expect the trend of punctuated equilibria in AI research to continue for some time.
evalu20

There's a lot of discussion about evolution as an example of inner and outer alignment.

However, we could instead view the universe as the outer optimizer that maximizes entropy, or power, or intelligence. From this view, both evolution and humans are inner optimizers, and the difference between evolution's and our optimization targets is more of an alignment success than a failure.

Before evolution, the universe increased entropy by having rocks in space crash into each other. When life and evolution finally came around, it was way more effective than rock ... (read more)

evalu20

I've had caps lock remapped to escape for a few years now, and I also remapped a bunch of symbol keys like parentheses to be easier to type when coding. On other people's computers it is slower for me to type text with symbols or to use vim, but I don't mind, since all of my deeply focused work (when the mini-distraction of reaching for a difficult key is most costly) happens on my own computers.
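For anyone who wants to try the caps-lock-to-escape remap described above: on Linux under X11 it's a one-liner (this is one common way to do it, not necessarily evalu's setup; macOS and Windows need a third-party tool such as Karabiner-Elements or PowerToys instead):

```shell
# X11: make Caps Lock send Escape for the current session.
# Add to your session startup script to make it persistent.
setxkbmap -option caps:escape
```

Remapping individual symbol keys (e.g. unshifted parentheses) goes beyond `setxkbmap` options and is usually done with a custom XKB layout or a tool like `xmodmap` or `keyd`.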

evalu148

I'm skeptical of the claim that the only things that matter are the ones that have to be done before AGI.

Ways it could be true:

  • The rate of productivity growth has a massive step increase after AI can improve its capabilities without the overhead of collaborating with humans. Generally the faster the rate of productivity growth, the less valuable it is to do long-horizon work. For example, people shouldn't work on climate change because AGI will instantly invent better renewables.
  • If we expect short timelines and also smooth takeoff, then that might mean our
... (read more)
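The discounting intuition in the first bullet can be made concrete with a toy model (hypothetical numbers, not from the comment): if productivity grows at rate g per year, a task costing one unit of today's labor costs only (1+g)^-T units of labor T years from now, so the faster g is, the less is gained by doing long-horizon work now instead of waiting.

```python
# Toy model of the "faster productivity growth devalues long-horizon work"
# argument. All numbers are illustrative assumptions.

def cost_saved_by_waiting(g: float, T: float) -> float:
    """Fraction of a task's present-day labor cost that disappears if we
    wait T years for productivity growing at rate g to make it cheaper."""
    return 1 - (1 + g) ** -T

# Slow growth: waiting 20 years only shaves off about a third of the cost,
# so starting a 20-year project now still buys a lot.
print(cost_saved_by_waiting(0.02, 20))  # ≈ 0.33

# Explosive post-AGI growth (g = 100%/yr): nearly the entire cost
# evaporates by waiting, so long-horizon work done now buys almost nothing.
print(cost_saved_by_waiting(1.0, 20))
```

On this toy model, the "only pre-AGI work matters" claim amounts to assuming g becomes very large soon, which is exactly the step evalu is questioning.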