kjmiller

Consider getting back into neuroscience!
AGI as a project is trying to make machines that can do what brains do. One great way to help that project is to study how brains themselves work. Many key ideas in AI come from neuroscience or psychology, and there are plenty of labs out there studying the brain with AI in mind.
Why am I telling you this? You claim that you'd like to be an AI researcher, but later you imply that you're new to computer programming. As mentioned in some of the comments, this is likely to present a large barrier to "pure" AI research in...
Seems to me we've got a gen-u-ine semantic misunderstanding on our hands here, Tim :)
My understanding of these ideas is mostly taken from reinforcement learning theory in AI (a la Sutton & Barto, 1998). In general, an agent is determined by a policy pi that gives the probability that the agent will take a particular action in a particular state, P = pi(s, a). In the most general case, pi can also depend on time, and is typically quite complicated, though usually not complex ;).
Any computable agent operating over any possible state and action space can be represented by some function pi, though typically folks in this field deal in...
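To make that concrete, here is a minimal sketch of a tabular stochastic policy pi(s, a); the states, actions, and probabilities are invented purely for illustration (they are not from Sutton & Barto).

```python
import random

# Toy tabular policy: for each state, a probability distribution over actions.
# Every entry here is made up for illustration.
policy = {
    "hungry": {"eat": 0.8, "sleep": 0.1, "work": 0.1},
    "tired":  {"eat": 0.1, "sleep": 0.7, "work": 0.2},
    "rested": {"eat": 0.2, "sleep": 0.1, "work": 0.7},
}

def pi(state, action):
    """P(action | state) under this policy."""
    return policy[state][action]

def act(state):
    """Sample an action from pi(. | state)."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs)[0]

print(pi("hungry", "eat"))  # 0.8
print(act("tired"))         # e.g. "sleep"
```

A time-dependent policy would just add a time index to the table, pi(s, a, t).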
Introduction to Neuroscience
Recommendation: Neuroscience: Exploring the Brain by Bear, Connors, and Paradiso
Reasons: BC&P is simply much better written, clearer, and more intelligible than its competitors, Neuroscience by Dale Purves and Principles of Neural Science by Eric Kandel. Purves covers almost the same ground, but is just not written as well, often listing facts without really attempting to synthesize them and build an understanding of the theory. Bear is better than Purves in every regard. Kandel is the Bible of the discipline; at 1400 pages it goes into way more depth than either of the others, and way more depth than you need or will be able to absorb if you're just starting out...
Theoretical Neuroscience by Dayan and Abbott is a fantastic introduction to computational neuroscience, from single-neuron models like Hodgkin-Huxley, through integrate-and-fire and connectionist (including Hopfield) nets, up to things like perceptrons and reinforcement learning models. Requires some comfort with calculus.
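For a taste of the single-neuron models that book starts with, here is a rough leaky integrate-and-fire sketch; the parameter values are toy numbers of my own, not taken from Dayan & Abbott.

```python
# Toy leaky integrate-and-fire neuron: the membrane potential V decays toward a
# resting value, is pushed up by input current, and when it crosses threshold the
# neuron "spikes" and V resets. All parameter values here are illustrative only.
def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + R * I) / tau   (Euler integration)
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant input (arbitrary units) for 100 ms of simulated time:
print(simulate_lif([2.0] * 1000))
```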
Computational Explorations in Cognitive Neuroscience by Randall O'Reilly purports to cover similar material at a slightly more basic level, including lots of programming exercises. I've only skimmed it, but it looks pretty good. It's getting old, though; supposedly Randy is working on a new edition that should be out soon.
You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior.
I'm deeply hesitant to jump into a debate that I don't know the history of, but...
Isn't it pretty generally understood that this is not true? The Utility Theory folks showed that the behavior of an agent can be captured by a numerical utility function iff the agent's preferences conform to certain axioms, and Allais and others have shown that human preferences emphatically do not.
Seems to me that if human behavior were in general able to be captured by a utility function, we wouldn't need this website. We'd be...
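To illustrate with the standard Allais numbers (my own framing, not anything from this thread): most people prefer a sure $1M over an 89% $1M / 10% $5M / 1% nothing gamble, yet also prefer a 10% shot at $5M over an 11% shot at $1M. No expected-utility function is consistent with both choices, which a brute-force check makes visible:

```python
import itertools

# Lottery A1: $1M for sure.   A2: 89% $1M, 10% $5M, 1% $0.
# Lottery B1: 11% $1M, 89% $0.   B2: 10% $5M, 90% $0.
# The common pattern is to choose A1 over A2 and B2 over B1.
def rationalizes_both(u_1m, u_5m, u_0=0.0):
    """True if this utility assignment makes both common choices EU-maximizing."""
    prefers_a1 = u_1m > 0.89 * u_1m + 0.10 * u_5m + 0.01 * u_0
    prefers_b2 = 0.10 * u_5m + 0.90 * u_0 > 0.11 * u_1m + 0.89 * u_0
    return prefers_a1 and prefers_b2

grid = [i / 10 for i in range(101)]  # candidate utilities in [0, 10]
hits = [uv for uv in itertools.product(grid, grid) if rationalizes_both(*uv)]
print(hits)  # [] -- no assignment on the grid rationalizes both choices
```

Algebraically the two preferences reduce to 0.11*u($1M) > 0.10*u($5M) + 0.01*u($0) and its exact reverse, so the search fails for any utilities, not just those on the grid.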
"A scientific theory should be as simple as possible, but no simpler."
Einstein
Nice article!
Folks who are interested in this kind of thing might also be interested in the Koch Lab's online demos of continuous flash suppression (CFS), which you can experience for yourself if you happen to have some old-style blue-red 3D glasses kicking around. This is the method where you show an image to the non-dominant eye and a crazy high-contrast flashing stimulus to the dominant eye, and the subject remains totally unaware of the image for up to minutes at a time. Pretty fun stuff :) http://www.klab.caltech.edu/~naotsu/CFS_color_demo.html
You might also be interested in Giulio Tononi's "Integrated Information" theory of consciousness. The gist is that a brain is "conscious" of features in the world to...
You have presented a very clear and very general description of the Reinforcement Learning problem.
I am excited to read future posts that are similarly clear and general, describing various solutions to the RL problem. I'm imagining the kinds of things found in the standard introduction, and hoping for a nonstandard perspective that might deepen my understanding.
Perhaps this is what Richard is waiting for as well?