Today's post, Artificial Mysterious Intelligence, was originally published on 07 December 2008. A summary (taken from the LW wiki):

"If you want to make actual progress, you need to truly understand what it is that you are trying to make."

"No you don't. You can build things you don't understand. Machine learning routinely produces algorithms that people don't understand, yet they work and outperform other methods."

"It's all right to have preferences for AI research, and beliefs about what will most likely work. Stating them as certitudes is overstating your case."
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Shared AI Wins, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.