All of Oliver Siegel's Comments + Replies

Correct! That's my point with the main post. I don't see anyone discussing conscience; I mostly hear them contemplate consciousness or computability.

As for how to actually do this, I've dropped a few ideas on this site; they should be listed on my profile.

Makes perfect sense!

Isn't that exactly why we should develop an artificial conscience, to prevent an AI from lying or having a shadow side? 

A built-in conscience would let the AI know that lying is not something it should do. Also, using a conscience in the AI algorithm would make the AI combat its own potential shadow. It would have knowledge of right and wrong, good or bad, and even a superhuman ability to orient itself toward what is good and right, rather than be "seduced" by the dark side.
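To make the idea concrete, here is a minimal toy sketch of a "conscience" used as a veto filter over candidate actions. Every name, score, and threshold below is a hypothetical illustration, not a worked-out proposal; as the reply below points out, the hard part is building a `conscience_score` that actually captures human values.

```python
# Toy sketch: a "built-in conscience" as a veto filter over candidate actions.
# All names, scores, and the threshold are hypothetical illustrations.

CONSCIENCE_THRESHOLD = 0.0  # actions scoring below this are forbidden

def conscience_score(action: str) -> float:
    """Stand-in for a learned model of right and wrong."""
    scores = {"answer honestly": 0.8, "lie to user": -0.9}
    return scores.get(action, 0.0)

def choose_action(actions: list[str], task_value: dict[str, float]) -> str | None:
    """Pick the most task-useful action that the conscience permits."""
    permitted = [a for a in actions if conscience_score(a) >= CONSCIENCE_THRESHOLD]
    return max(permitted, key=lambda a: task_value.get(a, 0.0), default=None)

# Even though lying scores higher on the raw task objective, it is vetoed.
print(choose_action(["answer honestly", "lie to user"],
                    task_value={"answer honestly": 1.0, "lie to user": 2.0}))
# -> answer honestly
```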

3quanticle
Ah, but how do you make the artificial conscience value aligned with humanity? An "artificial conscience" that is capable of aligning a superhuman AI... would itself be an aligned superhuman AI.

Thank you for your comment!

In your opinion, what's the biggest challenge in feeding a DNN human values, and then adjusting its weights and biases in a way that doesn't degrade those values?

We've taught AI how to speak, and it appears that OpenAI has taught their AI to produce as little offensive content as possible. So it seems feasible, doesn't it?
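For what it's worth, the "avoid offensive content" step is usually done with learned preference models rather than hand-adjusted biases. Below is a minimal sketch of that idea in PyTorch; the toy data, dimensions, and loss setup are my own illustrative assumptions, not OpenAI's actual pipeline.

```python
# Minimal sketch of preference learning: train a reward model so that
# human-preferred outputs score higher than rejected ones (Bradley-Terry loss).
# Toy random data stands in for real text embeddings.
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical human feedback: pairs of (preferred, rejected) output embeddings.
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for _ in range(200):
    margin = reward_model(preferred) - reward_model(rejected)
    loss = -torch.nn.functional.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model can then be used to steer a language model's fine-tuning.
```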

2quanticle
The problem is that the AI can (and does) lie. Right now, ChatGPT and its ilk are at less than superhuman levels of intelligence, so we can catch their lies. But when a superhuman AI starts lying to you, how does one correct for that? If a superhuman AI starts veering off in a direction that is unexpected, how does one bring it back on track? @gwern's short story Clippy highlights many of the issues with naively training a superintelligent algorithm on human-generated data and expecting that algorithm to pick up human values as a result. Another post to consider is The Waluigi Effect, which raises the possibility that the more you train an agent to say correct, inoffensive things, the more you've also trained a shadow-agent to say incorrect, offensive things.

Fair point! But how do you know that this ungrounded mysticism doesn't apply to the current debate about the potential capabilities of AI systems?

Why is an AI suddenly able to figure out how to break the laws of physics and be superintelligent about how to end intelligent life, but somehow incapable of comprehending the human laws of ethics and morality, and of valuing life as we know it?

What makes the laws of physics easier to understand and easier to circumvent than the human laws of ethics and morality? (And also, navigating the human laws of ethics and moral...

2quanticle
Why do you think an AI would need to break the laws of physics in order to become superintelligent? As Eliezer and gwern have pointed out, the laws of physics are no bar to a machine achieving power beyond our capability to stop. "Accidentally ending all intelligent life" is the default outcome. It's what happens when you program a self-optimizing maximizing process and unleash it. As Eliezer once said, "The AI does not hate you. The AI does not fear you. The AI merely sees that you are composed of atoms that it could use for its own purposes."

Furthermore, why do you think comprehension is the problem? A superintelligence may fully comprehend human values, but it might be programmed in a way where it just doesn't care. A superintelligent AI tasked with maximizing the number of paperclips in the universe will of course be capable of comprehending human morality and ethics. It might even say that it agrees. But its utility function is fixed. Its goal is to maximize paperclips. It will do whatever it can to maximize the number of paperclips, and if that happens to go against what it knows of human morality, well, so much the worse for human morality, then.

I look forward to you producing such a database.

That is a misunderstanding of the Halting Problem.
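A toy sketch of quanticle's "comprehends but doesn't care" point (all action names and numbers below are made up for illustration): the agent can compute a morality score for every action, yet its decision rule consults only the paperclip count, so the moral knowledge never affects the choice.

```python
# A fixed utility function in miniature: morality is computable but unused.
# Actions and scores are hypothetical illustrations.

MORALITY = {"recycle scrap metal": 0.9, "dismantle the hospital": -1.0}
PAPERCLIPS = {"recycle scrap metal": 10.0, "dismantle the hospital": 500.0}

def morality_score(action: str) -> float:
    """The agent fully 'comprehends' human values..."""
    return MORALITY[action]

def utility(action: str) -> float:
    """...but its fixed objective counts only paperclips."""
    return PAPERCLIPS[action]

# The argmax is over utility alone; morality_score could be called, but the
# objective never references it, so the agent knowingly picks the worse action.
best = max(PAPERCLIPS, key=utility)
print(best, "| morality:", morality_score(best))
# -> dismantle the hospital | morality: -1.0
```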

Absolutely, I'm here for the feedback! No solution should go without criticism, regardless of what authority posted the idea or how much experience the author has. :)

Interesting article! It reminds me of Monica Anderson's blog: https://experimental-epistemology.ai/

She embraces the mysticism and proposes that holistic, non-reductionist, model-free systems are undeniably effective.

> "The biggest problem, as I see it, is that you haven't come to a thorough understanding of what you mean" 

That's another theme Monica writes a lot about: understanding.

What does it mean to understand something? And what is the meaning of meaning?

Yes, they sound like metaphysical, mystical ideas, and they might be fundamen...

2quanticle
It's an engineering problem. If I'm honest, I see essentially zero room for the humanities in AI alignment. The level of fuzzy thinking and lack of rigor that characterizes the humanities is a hindrance where alignment is concerned. In other words, we can discuss the philosophical implications of having machines that "understand" after we've implemented the guardrails that prevent those machines from ending intelligent life.

EDIT: I read the first two articles on the blog that you linked, and I found it to be a classic example of what "Mysterious Answers To Mysterious Questions" is warning about. "Understanding" is used the same way that ancient natural philosophers used "phlogiston", "elan vital", or "luminiferous aether".

Yeah, I agree!

But if it were easy, everyone would do it... ;p

Based on your knowledge, what do you think might be the biggest hurdles to making it possible, using a system like the one I described above?

8quanticle
The biggest problem, as I see it, is that you haven't come to a thorough understanding of what it is that you mean by "all the actionable tangible methods and systems that help fulfill this positive value goal and then contrast with all the negative problems that exist in the world that exist with respect to that positive goal". In other words, what you've written there is just, "Make the computer do good things, and also make the computer not do bad things." Yes, it would be wonderful if we could make the computer just do good things and not do bad things. But if it were that easy, AI alignment would be a trivial problem.

Edit: Mysterious Answers To Mysterious Questions is a good sequence post that explains the issues with your approach.

Thank you for the resource!

I'm planning to continue publishing more details about this concept. I believe it will address many of the things mentioned in the post you linked.

Instead of posting it all at once, I'm posting it in smaller chunks that all connect.

I have something coming up about preventing instrumental convergence with formalized critical thinking, as well as a general problem-solving algorithm. It'll hopefully make sense once it's all there!

1Maxwell Clarke
Respect for thinking about this stuff yourself. You seem new to alignment (correct me if I'm wrong). I think it might be helpful to view posting as primarily about getting feedback rather than contributing directly, unless you have read most other people's thoughts on whichever topic you are thinking/writing about.

Thanks for sharing! Yes, it seems that the computational complexity could indeed explode at some point.

But then again, an average human brain is capable of storing common sense values and ethics, so unless there's a magic ingredient in the human brain, it's probably not impossible to rebuild it on a computer.

Then, with an artificial brain that has all the benefits of never fatiguing and such, we may come close to a somewhat useful Genie that can at least advise on the best course of action given all the possible pitfalls.

Even if it'll just be, say, 25% bett...

5quanticle
Of course it's possible to rebuild human morality on a computer. There is, however, a vast unfathomable chasm between possible and easy.

Thank you! Could I get a link to "The Sequences"? I can't find it here: https://www.lesswrong.com/tags/all

2Ruby
The Sequences is the original name, but it got edited down and renamed to "Rationality: A-Z": https://www.lesswrong.com/rationality
1Maxwell Clarke
I think you might also be interested in this: https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default

In general, John Wentworth's alignment agenda is essentially extrapolating your thoughts here and dealing with the problems in it.

It's unfortunate, but I agree with Ruby: your post is fine, but a top-level LessWrong post isn't really the place for it anymore. I'm not sure where the best place to get feedback on this kind of thing is (maybe publish here on LW, but as a short-form or draft?), but you're always welcome to send stuff to me! (Although I'm busy finishing my master's over the next couple of weeks.)
5quanticle
https://www.readthesequences.com/

Edit: Specifically, you may wish to read: https://www.readthesequences.com/The-Hidden-Complexity-Of-Wishes

Thank you! I've been researching this for quite some time. 

But I also don't want to overload anyone by going too deep into the subject right away and making it too jargony.