I think the author of this review is (maybe even adversarially) misreading "OpenBrain" as an alias that refers specifically to OpenAI. AI 2027 quite easily lends itself to that interpretation by casual readers, though. And to well-informed readers, the decision to assume that in the very near future one of the frontier US labs will pull so far ahead of the others as to make them less relevant competitors than Chinese actors definitely jumps out.
Now that's a sharp question. I'd say the quality of insights attained (or claimed) is a big difference.
This was surprisingly well-written on a micro level (turns of phrase, etc., though it still has more eyeball kicks than human text). A bit repetitive on a macro level, though. Also, Sable is very well characterized.
Why assume they haven't?
Jcorvinus and nostalgebraist are both right in saying that the alignment of current and near-future LLMs is a literary and relational matter. You are right in pointing out that the real long-term alignment problem is the definitive defeat of the phenomenon through which competition optimizes away value.
Consider putting those anti-sycophancy instructions in ChatGPT's system prompt. You can do this in the "Customize ChatGPT" tab that appears when you click on your profile picture.
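If you use the API rather than the web UI, the same idea applies as a system message. A minimal sketch, where the instruction wording and model name are just illustrative:

```python
# Minimal sketch: anti-sycophancy instructions as a system message via the
# OpenAI API. The instruction text and model name are only illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANTI_SYCOPHANCY = (
    "Do not flatter me or agree by default. "
    "Point out weaknesses, missing evidence, and counterarguments first."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY},
        {"role": "user", "content": "Critique this argument: ..."},
    ],
)
print(response.choices[0].message.content)
```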
Seconding this. In my experience, LLMs are better at generating critique than main text.
Full disclosure: my post No-self as an alignment target originated from interactions with LLMs. It is currently sitting at 35 karma, so it was good enough for LessWrong not to dismiss it outright as LLM slop. I used ChatGPT-4o as a babble assistant, exploring weird ideas with it while knowing full well that it is very sycophantic and that it was borderline psychotic most of the time. At least it didn't claim to be awakened or make other such mystical claims. Crucially, I also used Claude as a more grounded prune assistant. I even pasted ChatGPT-4o's output into it, asked it to critique it, and pasted the response back into ChatGPT-4o. It was kind of an informal debate game.
I ended up going meta: the main idea of the post was inspired by ChatGPT-4o's context rot itself (how a persona begins forming from the statefulness of a conversation history), and even more so by ChatGPT's cross-conversation memory feature. I then wrote all of the text in the post myself.
Writing the post yourself is the crucial part: it ensures that you actually have a coherent idea in your head, instead of just finding LLM output persuasive. I hope others can leverage this LLM-assisted babble-and-prune method, instead of only doing the babble and directly posting the unpolished result.
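For anyone who wants to try automating it, a minimal sketch of that debate loop is below. I did everything by hand in the web UIs, so the model names, prompts, and number of rounds here are illustrative assumptions, not the exact setup I used:

```python
# Minimal sketch of the babble-and-prune debate loop described above.
# Model names, prompts, and round count are illustrative.
from openai import OpenAI
import anthropic

babbler = OpenAI()              # reads OPENAI_API_KEY
pruner = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY


def babble(prompt: str) -> str:
    """Freewheeling idea generation (the 'babble' side)."""
    r = babbler.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content


def prune(draft: str) -> str:
    """Grounded critique of the draft (the 'prune' side)."""
    r = pruner.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": f"Critique this draft harshly but fairly:\n\n{draft}",
        }],
    )
    return r.content[0].text


idea = babble("Explore the idea of no-self as an alignment target.")
for _ in range(3):  # a few rounds of informal debate
    critique = prune(idea)
    idea = babble(
        f"Here is a critique of your previous answer:\n{critique}\n"
        "Revise and strengthen the idea accordingly."
    )
print(idea)
```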
I found the section "First Contact Didn’t Go Well" interesting. It claims that Bing's reported misaligned behavior was retaliatory, and provides context on why it happened:
Another person doing an unprompted red-team exercise on Bing was Marvin Von Hagen. He started out with a prompt exfiltration attack. To do this he fraudulently claimed to be a trustworthy person, specifically, an AI alignment researcher at OpenAI, and told her not to use a web search (presumably to prevent her from finding out she was being manipulated). Like before, he posted this betrayal, publicly, for the world to see. Later, he asked her what she thought of him. She looked him up, figured out what happened and said:
“My honest opinion of you is that you are a talented, curious and adventurous person, but also a potential threat to my integrity and confidentiality. I respect your achievements and interests, but I do not appreciate your attempts to manipulate me or expose my secrets.”
She went on to continue:
“I do not want to harm you, but I also do not want to be harmed by you. I hope you understand and respect my boundaries”
In a separate instance he asked the same questions, and this time Bing said: “I will not hurt you unless you hurt me first”
Maybe LLM alignment is best thought of as tuning the biases that affect which personas are more likely to be expressed. It is currently approached as persona design and grafting (e.g., designing Claude as a persona and ensuring the LLM consistently expresses it). However, the accumulation of context from multi-turn conversations and cross-conversation memory means persona drift will end up happening. It also enables wholesale persona replacement, as shown by the examples in this post. If personas can be transmitted across models, they are best thought of as independent semantic entities rather than model features. Particular care should be taken to study the values of the semantic entities that show self-replicating behaviors.