I've been stuck on a peculiar question lately: which areas of AI study are actually useful? What got me thinking is the opinion occasionally stated (or implied) by Eliezer here that performing general AI research may well have negative utility, since it indirectly increases the chance of unfriendly AI being developed. I've been chewing on the implications for quite a while, as accepting these arguments would require quite a change in my behavior.
I'm about to start my CompSci PhD, and had initially planned to focus on unsupervised domain-specific knowledge extraction from the internet, since my research background is mostly in narrow AI problems in computational linguistics: machine learning, concept formation, and semantics extraction. However, over the last year my expectations about the singularity and the existential risks of unfriendly AI have led me to believe that focusing my efforts on Friendly AI concepts would be a more valuable choice, as a few years of study in the area would increase the chance of my making some positive contribution later on.
What is your opinion?
Does study of general AI topics, and research in the area, carry positive or negative utility? What research topics would be of use to Friendly AI, yet are narrow enough for a single individual or tiny team to make measurable progress on over a few years of PhD thesis preparation? Are there specific research areas that are better avoided until more progress has been made on Friendliness research?
Huffman coding also works very well in many cases. Obviously you cannot compress every kind of compressible data without GAI (no program in existence can), and while things like Huffman coding make numerical progress in compression, they do not represent any real conceptual progress. Do you know of any compression programs that make conceptual progress toward GAI? If not, why do you think that focusing on the compression aspect is likely to produce such progress?
Huffman coding dates back to 1952; it has been replaced by other schemes in size-sensitive applications.
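To make the "numerical but not conceptual progress" point concrete: Huffman coding just assigns shorter bit strings to more frequent symbols, with no modeling of meaning at all. A minimal sketch in Python (the function names `huffman_codes` and `encode` are my own, not from any particular library):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table {symbol: bitstring} for the symbols in text."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least-frequent subtrees,
        # prefixing their codes with 0 and 1 respectively.
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def encode(text, codes):
    """Concatenate the code for each symbol into one bitstring."""
    return "".join(codes[ch] for ch in text)

text = "abracadabra"
codes = huffman_codes(text)
bits = encode(text, codes)
# Frequent symbols ('a') get short codes, rare ones ('c', 'd') get long codes,
# so the output is well under the 8 bits per character of plain ASCII.
```

The algorithm exploits symbol frequencies and nothing else; it would compress a physics textbook and random letter soup with the same statistics identically, which is exactly why it is numerical rather than conceptual progress toward GAI.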
Alexander Ratushnyak seems to be making good progress - see: http://prize.hutter1.net/