All of MyrddinE's Comments + Replies

MyrddinE

I created an account simply to say this sounds like an excellent idea. Right until it encounters the real world.

There is a large issue that would have to be addressed before this could be implemented in practice: "This call may be monitored for quality assurance purposes." In other words, the lack of privacy will need to be addressed, and it may lead many users to immediately choose a different AI agent that places a higher value on their privacy. In fact, the consumption of user data to generate 'AI slop' is a powerful memetic influence, and I believe it would be difficult ...

Nathan Helm-Burger
I do agree this would be bad if it were secret and non-privacy-preserving. I think we need to separate out the use cases. For reporting on bad behavior (bad, but not quite bad enough to trigger red flags and account closure), a system like Clio seems ideal. But what if the model is uncertain about something and wants to ask for clarification, and the user isn't being bad, just posing a difficult question? In such a case, as a user, I'd want the model to explicitly ask me for permission to send its question to the Anthropic staff, and for me to approve the exact wording. There are plenty of situations where that could be great!
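A minimal sketch of what such a consent-gated escalation could look like, in Python. Everything here is hypothetical illustration (the class, function, and prompt text are made up, not Anthropic's actual tooling); the point is just that the user sees, and can edit or refuse, the exact wording before anything is forwarded.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class EscalationRequest:
    """A clarification question the model wants forwarded to lab staff."""
    question: str
    approved: bool = False


def request_escalation(
    question: str,
    ask_user: Callable[[str], Optional[str]],
) -> Optional[EscalationRequest]:
    """Consent-gated escalation: the question is only forwarded if the user
    sees and approves the exact wording. `ask_user` shows the proposed text
    and returns the approved (possibly edited) text, or None to refuse."""
    approved_text = ask_user(
        "The model would like to ask staff the following question:\n"
        f"  {question}\n"
        "Approve (optionally editing the wording), or refuse?"
    )
    if approved_text is None:
        return None  # user refused; nothing leaves the conversation
    # Only the user-approved wording is ever escalated.
    return EscalationRequest(question=approved_text, approved=True)


# Purely illustrative terminal-based approval prompt.
def terminal_approval(prompt: str) -> Optional[str]:
    print(prompt)
    reply = input("Type the approved text, or press Enter to refuse: ").strip()
    return reply or None


if __name__ == "__main__":
    req = request_escalation(
        "May I summarize the user's proprietary dataset?", terminal_approval
    )
    print("Escalated:" if req else "Not escalated.", req)
```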

I am a bit confused... how do you reconcile SI with Gödel's incompleteness theorem? Any rigid symbolic system (such as one that defines the world in binary) will invariably have truths that cannot be proven within the system. Am I misunderstanding something?

Eliezer Yudkowsky
There are things Solomonoff Induction can't understand which Solomonoff Induction Plus One can comprehend, but these things are not computable. In particular, if you have an agent with a hypercomputer that uses Solomonoff Induction, no Solomonoff Inductor will be able to simulate the hypercomputer. AIXI is outside AIXI's model space. You need a hypercomputer plus one to comprehend the halting behavior of hypercomputers. But a Solomonoff Inductor can predict the behavior of a hypercomputer at least as well as you can, because you're computable, and so you're inside the Solomonoff Inductor. Literally. It's got an exact copy of you in there.
shokwave
The post covers that, though not by that name: when Solomonoff induction runs into a halting problem, it may well run forever. This is part of why it's uncomputable. Uncomputable is a pretty good synonym for unprovable, in this case.
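For reference, the quantity discussed in the two replies above is Solomonoff's universal prior. One standard way to write it (notation varies by author; this is a sketch, not a full treatment) is:

```latex
% Solomonoff's universal prior over finite binary strings x, defined
% relative to a fixed prefix universal Turing machine U. The sum ranges
% over every program p whose output begins with x; \ell(p) is the length
% of p in bits.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Prediction is by conditioning:
M(x_{n+1} \mid x_{1:n}) \;=\; \frac{M(x_{1:n}\, x_{n+1})}{M(x_{1:n})}

% M dominates every computable semimeasure, which is why a Solomonoff
% inductor predicts any computable predictor (including a human) at least
% as well as that predictor does; but M itself is only lower
% semicomputable, which is the "may run forever" point above.
```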

Caffeine addiction. For years nobody had actually tested whether caffeine had a physical withdrawal symptom, and the result was patients in hospitals being given (or denied) painkillers for phantom headaches. It was an example of a situation that many people knew existed, but could not easily communicate to those whose belief mattered.