- Paradigm shifting seems necessary for general intelligence.
- It seems likely that to perform paradigm shifts you need to be able to change your internal language. That seems incompatible with having a fixed goal expressed within one language.
- So "maximising" and "AGI" seem like incompatible requirements.
- Having the system satisfice some abstract notion of good and bad seems compatible both with paradigm shifting and with humans (see the sketch after this list).
- In order to push the system to improve, you need some method that can change what counts as good as the system reaches each new level of development.
- It makes sense to have individual humans control what is good, and also have them teach the systems the meanings of words and how to behave.
- In essence, make the system part of the human: it would have no verbalised goals of its own apart from those given by the human (and since the human would have taught the system the meaning of the words, there is less chance of misinterpretation).
- It also makes sense to have lots of them, in case something goes wrong with any individual one.
- Because of paradigm shifting I do not expect any one augmented intelligence to dominate.
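To make the maximising/satisficing contrast above concrete, here is a minimal sketch. It is a toy under assumptions of my own: the numeric scores and the `good_enough` threshold are illustrative, not anything specified in the argument.

```python
def maximise(options, score):
    """Always commit to the single highest-scoring option:
    behaviour is pinned to one fixed internal measure."""
    return max(options, key=score)

def satisfice(options, score, good_enough):
    """Accept the first option that clears the threshold.
    The threshold can be revised later without rewriting
    the agent's goal."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing acceptable found

options = [0.2, 0.7, 0.5, 0.9]
print(maximise(options, score=lambda x: x))                   # 0.9
print(satisfice(options, score=lambda x: x, good_enough=0.6)) # 0.7
```

The point of the toy: the maximiser's choice is fully determined by one fixed measure, while the satisficer only needs a revisable notion of "good enough", which is the property I am claiming can survive a paradigm shift.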
This means I've got to care about a whole bunch of other problems that the AI singleton people don't have to worry about.
I realize I should unpack all of these, but blogging is not the answer. To know what sort of satisficing system will work in the real world with real people, we need to experiment (and maybe paradigm shift ourselves a few times). Only then can we figure out how it will evolve over time.
Having a proof of concept will also focus people's attention more than writing a bunch of words would.
If you want to work with me, know someone who might want to, or want to point out some flaw in my reasoning such that there is a simpler way forward within the kind of world I think it is, I am contactable at wil (one l) . my surname @gmail.com. But I think I'm done with LW for now. Good luck with the revamp.
For one, they wouldn't find a single example of a solution. They wouldn't see any fscking human beings maintaining any goal not defined in terms of their own perceptions - e.g., making others happy, having an historical artifact, or visiting a place where some event actually happened - despite changing their understanding of our world's fundamental reality.
If I try to interpret the rest of your response charitably, it looks like you're saying the AGI can have goals wholly defined in terms of perception, because it can avoid wireheading via satisficing. That seems incompatible with what you said before, which again invoked "some abstract notion of good and bad" rather than sensory data. So I have to wonder if you understand anything I'm saying, or if you're conflating ontological crises with some less important "paradigm shift" - something, at least, that you have made no case for caring about.
Fscking humans aren't examples of maximizers whose coherent ontologies change in a way that guarantees their goals will still be followed. They're examples of systems with multiple different l...