TheAncientGeek comments on Magical Categories - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I mean the proposal to solve morality and code it in to an AI.
Which is to say that full-fat friendliness is a superset of minimal friendliness. But minimal friendliness is just what I have been calling morality, and I don't see why I shouldn't continue. So friendliness is a superset of morality, as I said.
.....by your assumptions, that morality/friendliness needs to be solved separately from intelligence. But that is just what I am disputing.
An AGI can be useful without wanting to do anything but answer questions accurately.
You didn't use the word. But I think "not doing bad things, whilst not necessarily doing fun things either" picks out the same referent.
I find it hard to interpret that statement. How can making things worse forever not be immoral? What non-moral definition of "worse" are you using?
We have very good reason to think that the one true theory of something will be simpler, in Kolmogorov terms, than a mishmash of everybody's guesses. Physics is simpler than folk physics. (It is harder to learn, because that requires the effortful System 2 to engage... but effort and complexity are different things.)
And remember, my assumption is that the AI works out morality itself.
If an ASI can figure out such high-level subjects as biology and decision theory, why shouldn't it be able to figure out morality?
Why wouldn't an AI that is smarter than us be able to realise that for itself?
That is confusingly phrased. A learning system needs some basis to learn, granted. You assume, tacitly, that it need not be preprogrammed with the right rules of grammar or economics. Why make an exception for ethics?
A learning system needs some basis other than external stimulus to learn: given that, it is quite possible for most of the information to be contained in the stimulus, the data. Consider language. Do you think an AI will have to be preprogrammed with all the contents of every dictionary?
"It is a truism in evolutionary biology that conditional responses require more genetic complexity than unconditional responses. To develop a fur coat in response to cold weather requires more genetic complexity than developing a fur coat whether or not there is cold weather, because in the former case you also have to develop cold-weather sensors and wire them up to the fur coat.
"But this can lead to Lamarckian delusions: Look, I put the organism in a cold environment, and poof, it develops a fur coat! Genes? What genes? It's the cold that does it, obviously.
"There were, in fact, various slap-fights of this sort, in the history of evolutionary biology - cases where someone talked about an organismal response accelerating or bypassing evolution, without realizing that the conditional response was a complex adaptation of higher order than the actual response. (Developing a fur coat in response to cold weather, is strictly more complex than the final response, developing the fur coat.) [...]
"But the upshot is that if you have a little baby AI that is raised with loving and kindly (but occasionally strict) parents, you're pulling the levers that would, in a human, activate genetic machinery built in by millions of years of natural selection, and possibly produce a proper little human child. Though personality also plays a role, as billions of parents have found out in their due times.
"It's easier to program in unconditional niceness, than a response of niceness conditional on the AI being raised by kindly but strict parents. If you don't know how to do that, you certainly don't know how to create an AI that will conditionally respond to an environment of loving parents by growing up into a kindly superintelligence. If you have something that just maximizes the number of paperclips in its future light cone, and you raise it with loving parents, it's still going to come out as a paperclip maximizer. There is not that within it that would call forth the conditional response of a human child. Kindness is not sneezed into an AI by miraculous contagion from its programmers. Even if you wanted a conditional response, that conditionality is a fact you would have to deliberately choose about the design.
"Yes, there's certain information you have to get from the environment - but it's not sneezed in, it's not imprinted, it's not absorbed by magical contagion. Structuring that conditional response to the environment, so that the AI ends up in the desired state, is itself the major problem. 'Learning' far understates the difficulty of it - that sounds like the magic stuff is in the environment, and the difficulty is getting the magic stuff inside the AI. The real magic is in that structured, conditional response we trivialize as 'learning'. That's why building an AI isn't as easy as taking a computer, giving it a little baby body and trying to raise it in a human family. You would think that an unprogrammed computer, being ignorant, would be ready to learn; but the blank slate is a chimera."