wedrifid comments on The Concepts Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
While I am not particularly optimistic about the creation of an FAI, I say:
Please justify your claims (particularly #2).
(Only slightly less briefly)
Human ontologies are complex, redundant, and at times outright contradictory. Not only is some of that unnecessary for creating an AI; it would be counter-productive to include it.
When an AI models human preferences and the elements of the universe most relevant to fulfilling them, those models need not interfere at all with the implementation of the AI itself; they are just a complex form of data and metadata to keep in mind. As for things that would more fundamentally influence the direct operation of the AI's goals, it can ensure that any alterations do not contradict the old version, or do so only to resolve a discovered contradiction in whatever the sanest way possible is.
Humans suck compared to superintelligences. They even suck at knowing what they want. I'd rather tell a friendly superintelligence to do what I want it to do than try to program my goals into it. Did I mention that it is smarter than me? It can even emulate me and ask em-me my goals that way if it hasn't got a better option. There is no downside to getting the FAI to do it for me. If it isn't friendly then....
Humans suck at creating ontologies. They suck less than any other species I know, but they still suck. I wouldn't include stupid parts in an FAI; that'd make it particularly hard to prove friendly. But it would naturally be able to look at humans and figure out any necessary stupid parts itself.
That is rather dense, I'll admit. But the gist of the reasoning is there.