XiXiDu comments on Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far - Less Wrong
He also talked to Jaan Tallinn. His best points in my opinion:
...
(Most of these considerations don't apply to developments in pure mathematics, which is my best guess at a fruitful mode of attacking the FAI goals problem. The implementation-as-AGI aspect is a separate problem, likely of a different character, but I expect we need to obtain a basic theoretical understanding of FAI goals first in order to know what kinds of AGI progress are useful. Jumping to the development of language-translation software is way off-track.)
Thanks a lot for posting this link. The first point was especially good.
The "I feel" opening is telling. It does seem like the only way people can maintain this confusion beyond ten seconds of thought is by keeping it in the realm of intuition. In fact, among the first improvements that could be made to the human predictive algorithm would be to remove our tendency to let feelings and preferences get muddled up with our abstract thought.
Given his influence, isn't he worth the time it takes to try to explain to him how he is wrong?
The only way to approach general intelligence may be to emulate human algorithms. The opinion that we are capable of inventing a simple artificial algorithm exhibiting general intelligence is not mainstream among AI and machine-learning researchers. And even if one assumes that all those scientists are not nearly as smart and rational as SI folks, they have a considerable head start when it comes to real-world experience with the field of AI and its difficulties.
I actually share the perception that we have no reason to suspect that we could reach a level above ours without massive and time-costly experimentation (removing our biases merely sounds easy when formulated in English).
I think that you might be attributing too much to an expression uttered in an informal conversation.
What do you mean by "feelings" and "preferences"? The use of intuition seems to be universal, even within the field of mathematics. I don't see how computationally bounded agents could get around "feelings" when making predictions about subjects that are only vaguely understood and defined. Framing the problem in technical terms like "predictive algorithms" doesn't change the fact that making predictions about poorly understood subjects is error-prone.
Yes. He just doesn't seem to be someone whose opinion on artificial intelligence should be considered particularly important; he's a layman making typical layman guesses and mistakes. I'm far more interested in what he has to say on warps in spacetime!