All of DFNaiff's Comments + Replies

That is not my experience at all. Maybe it is because my friends from outside of the AI community are also outside of the tech bubble, but I've seen a lot of pessimism recently about the future of AI. In fact, they seem to readily accept both the orthogonality thesis and the instrumental convergence thesis. Although I avoid delving into the topic of human extinction, since I don't want to harm anyone's mental health, the rare times this topic does come up they readily agree that it is a non-trivial possibility.

I guess the main reason is that, since they are outside of t... (read more)

I believe I understand your point, but there are two things I need to clarify that kind of bypass some of these criticisms:
a) I am not assuming any safety technique applied to language models. In a sense, this is the worst-case scenario, one thing that may happen if the language model is run "as-is". In particular, the scenario I described would be mitigated if we could somehow prevent stable sub-agents from appearing in language models, although how to do this I do not know.
b) The incentives for the language models to be a superoptimiz... (read more)

Answer by DFNaiff

One speculative way I see it, which I've yet to expand on, is that GPT-N, in order to minimize prediction error in non-trivial settings during training, could simulate some sort of entity enacting some reasoning. In a sense, GPT would be a sort of actor interpreting a play through extreme method acting. I have in mind something like what the protagonist of "Pierre Menard, Author of the Quixote" tries to do to replicate the book Don Quixote word by word.

This would mean that, for some set of strings, GPT-N would boot and run some agent A,... (read more)
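To make this "method acting" picture slightly more concrete, here is a minimal toy sketch in Python. It is entirely my own illustration: the names SimulatedAgent and predict_next_tokens are made up, and nothing here is claimed about GPT's actual internals. The idea is only that, on prompts about goal-directed characters, prediction error is minimized by something functionally like booting and running a small agent, while ordinary text only needs surface statistics.

```python
from dataclasses import dataclass


@dataclass
class SimulatedAgent:
    """A hypothetical sub-agent "A" booted for agent-flavored prompts."""
    goal: str

    def score(self, action: str, situation: str) -> float:
        # Placeholder utility estimate; a real simulator would be far richer.
        return float(len(set(action) & set(self.goal + situation)))

    def act(self, situation: str) -> str:
        # Crude stand-in for "enacting some reasoning": pick the continuation
        # that best advances the simulated agent's goal in this situation.
        candidate_actions = ["negotiate", "deceive", "cooperate", "wait"]
        return max(candidate_actions, key=lambda a: self.score(a, situation))


def predict_next_tokens(prompt: str) -> str:
    """Toy next-token predictor.

    For prompts describing a goal-directed character, the loss-minimizing
    strategy is (functionally) to simulate that character; for ordinary text,
    shallow surface statistics suffice.
    """
    if "wants to" in prompt:  # crude detector for "this text is about an agent"
        goal = prompt.split("wants to", 1)[1].strip()
        return SimulatedAgent(goal=goal).act(prompt)
    return "the"  # stand-in for a generic high-frequency continuation


if __name__ == "__main__":
    print(predict_next_tokens("The CEO wants to maximize quarterly profit. Next, she will"))
    print(predict_next_tokens("The sky above the port was"))
```

Again, the claim is not that GPT-N literally contains such an explicit dispatch, only that the training objective can reward computations functionally equivalent to the agent branch.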

Lone Pine
I feel like there is a failure mode in this line of thinking, that being 'confusingly pervasive consequentialism'. AI x-risk is concerned with a self-evidently dangerous object, that being superoptimizing agents. But whenever a system is proposed that is possibly intelligent without being superoptimizing, an argument is made, "well, this thing would do its job better if it was a superoptimizer, so the incentives (either internal or external to the system itself) will drive the appearance of a superoptimizer." Well, yes, if you define the incredibly dangerous thing as the only way to solve any problem, and claim that incentives will force that dangerous thing into existence even if we try to prevent it, then the conclusion flows directly from the premise. You have to permit the existence of something that is not a superoptimizer in order to solve the problem. Otherwise you are essentially defining a problem that, by definition, cannot be solved, and then waving your hands saying "There is no solution!"
fiso64
I posted a somewhat similar response to MSRayne, with the exception that what you accidentally summon is not an agent with a utility function, but something that tries to appear like one and nevertheless tricks you into making some big mistake. Here, if I understand correctly, what you get is a genuine agent that works across prompts by having some internal value function which outputs a different value after each prompt, and acts accordingly. It doesn't seem incredibly unlikely: nothing in the process of evolution necessarily had to make humans themselves optimizers, but it happened anyway because that is what performed best at the overall goal of reproduction. This AI will still probably have to somehow convince the people communicating with it to give it "true" agency, independent of the user's inputs. That seems like an instrumental goal in this case.

Thanks for the reflection; it captures how a part of me feels. (I almost never post on LessWrong, being just a lurker, but your comment inspired me a bit.)

Actually, I do have some background that could, maybe, be useful in alignment, and I did just complete the AGISF program. Right now I'm applying to some positions (in particular, I'm focusing on the SERIMATS application, an area where I may be differentially talented), and just honestly trying to do my best. After all, it would be outrageous if I could do something but simply did not.

But I recog... (read more)

soth02
Develop a training set for alignment via brute force. We can't defer alignment to the ubernerds. If enough ordinary people (millions? tens of millions?) contribute billions or trillions of tokens, maybe we can increase the chance of alignment. It's almost like we need to offer prayers of kindness and love to the future AGI: writing alignment essays of kindness that are posted to Reddit, or videos extolling the virtue of love that are uploaded to YouTube.