All of kjmiller's Comments + Replies

You have presented a very clear and very general description of the Reinforcement Learning problem.

I am excited to read future posts that are similarly clear and general and that describe various solutions to the RL problem. I'm imagining the kinds of things that can be found in the standard introduction, and hoping for a nonstandard perspective that might deepen my understanding.

Perhaps this is what Richard is waiting for as well?

Consider getting back into neuroscience!

AGI as a project is trying to make machines that can do what brains do. One great way to help that project is to study how brains themselves work. Many key ideas in AI come from ideas in neuroscience or psychology, and there are plenty of labs out there studying the brain with AI in mind.

Why am I telling you this? You claim that you'd like to be an AI researcher, but later you imply that you're new to computer programming. As mentioned in some of the comments, this is likely to present a large barrier t... (read more)

Seems to me we've got a gen-u-ine semantic misunderstanding on our hands here, Tim :)

My understanding of these ideas is mostly taken from reinforcement learning theory in AI (a la Sutton & Barto, 1998). In general, an agent is determined by a policy pi that gives the probability that the agent will take a particular action in a particular state, P = pi(s, a). In the most general case, pi can also depend on time, and is typically quite complicated, though usually not complex ;).
Any computable agent operating over any possible state and action spac... (read more)
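To make that formalism concrete, here is a minimal sketch in Python. The state names, action names, and probabilities are all made up for illustration; nothing here comes from Sutton & Barto directly.

```python
import random

# A minimal sketch of the policy formalism above: pi(s, a) is the
# probability of taking action a in state s. All names are illustrative.

# A tabular policy: each state maps to a distribution over actions.
# The probabilities in each row must sum to 1.
policy = {
    "hungry": {"eat": 0.9, "sleep": 0.1},
    "tired":  {"eat": 0.2, "sleep": 0.8},
}

def pi(state, action):
    """P(action | state) under the policy."""
    return policy[state][action]

def act(state):
    """Sample an action for the current state according to pi."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs)[0]

print(pi("hungry", "eat"))  # 0.9
print(act("tired"))         # usually "sleep"
```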

2 TimFreeman
If we're talking about ascribing utility functions to humans, then the state space is the universe, right? (That is, the same universe the astronomers talk about.) In that case, the state space contains clocks, so there's no problem with having a time-dependent utility function, since the time is already present in the domain of the utility function.

Thus, I don't see the semantic misunderstanding -- human behavior is consistent with at least one utility function even in the formalism you have in mind. (Maybe the state space is the part of the universe outside of the decision-making apparatus of the subject. No matter, that state space contains clocks too.)

The interesting question here for me is whether any of those alternatives to having a utility function mentioned in the Allais paradox Wikipedia article are actually useful if you're trying to help the subject get what they want. Can someone give me a clue how to raise the level of discourse enough so it's possible to talk about that, instead of wading through trivialities? PM'ing me would be fine if you have a suggestion here but don't want it to generate responses that will be more trivialities to wade through.
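Tim's "clocks in the state space" point can be written out in a few lines. This is a toy sketch under my own assumptions (a state collapsed to a clock reading plus one other number); none of it is from the thread.

```python
# Toy sketch: if each state carries a clock reading, a "time-dependent"
# utility function is just an ordinary function of the state, because
# the time is part of the state. All names here are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    clock: float   # time, read off a clock inside the state
    wealth: float  # everything else about the world, collapsed to one number

def utility(s: State) -> float:
    # Preferences that appear to "change over time" are still a fixed
    # function of the state, since the clock is in the state.
    if s.clock < 12.0:
        return s.wealth        # morning self values wealth
    return 2.0 * s.wealth      # afternoon self values it twice as much

print(utility(State(clock=9.0, wealth=100.0)))   # 100.0
print(utility(State(clock=15.0, wealth=100.0)))  # 200.0
```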
kjmiller

Introduction to Neuroscience

Recommendation: Neuroscience: Exploring the Brain by Bear, Connors, Paradiso

Reasons: BC&P is simply better written, clearer, and more intelligible than its competitors, Neuroscience by Dale Purves and Principles of Neural Science by Eric Kandel. Purves covers almost the same ground but is just not written well, often listing facts without really attempting to synthesize them and build an understanding of theory. Bear is better than Purves in every regard. Kandel is the Bible of the discipline; at 1400 pages it goe... (read more)

Theoretical Neuroscience by Dayan and Abbott is a fantastic introduction to comp neuro, from single-neuron models like Hodgkin-Huxley, through integrate-and-fire and connectionist (including Hopfield) nets, up to things like perceptrons and reinforcement learning models. It requires some comfort with calculus.
Computational Explorations in Cognitive Neuroscience by Randall O'Reilly purports to cover similar material at a slightly more basic level, including lots of programming exercises. I've only skimmed it, but it looks pretty good. It's kind of old, though; supposedly Randy's working on a new edition that should be out soon.
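For a taste of the single-neuron models Dayan & Abbott start from, here is a minimal leaky integrate-and-fire simulation. This is the standard textbook model, not code from the book, and the parameter values below are merely illustrative.

```python
# Minimal leaky integrate-and-fire neuron, Euler-integrated:
#   tau_m * dV/dt = -(V - E_L) + R_m * I_e
# with a spike recorded and the voltage reset whenever V crosses threshold.
# Parameter values are illustrative, not taken from Dayan & Abbott.

tau_m, E_L, R_m = 0.010, -0.065, 1e7   # 10 ms, -65 mV, 10 MOhm
V_th, V_reset   = -0.050, -0.065       # threshold and reset (V)
dt, T           = 1e-4, 0.5            # 0.1 ms steps, 0.5 s total
I_e             = 2.0e-9               # 2 nA injected current

V = E_L
spike_times = []
for step in range(int(T / dt)):
    V += dt / tau_m * (-(V - E_L) + R_m * I_e)
    if V >= V_th:                      # spike: record time and reset
        spike_times.append(step * dt)
        V = V_reset

print(f"{len(spike_times)} spikes in {T} s "
      f"({len(spike_times) / T:.1f} Hz)")
```

With these numbers the steady-state voltage (E_L + R_m * I_e = -45 mV) sits above threshold, so the neuron fires regularly at roughly 70 Hz.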

You can construct a set of values and a utility function to fit your observed behavior, no matter how your brain produces that behavior.

I'm deeply hesitant to jump into a debate that I don't know the history of, but...

Isn't it pretty generally understood that this is not true? The utility theory folks (von Neumann and Morgenstern) showed that an agent's behavior can be captured by a numerical utility function iff the agent's preferences conform to certain axioms, and Allais and others have shown that human preferences emphatically do not.

Seems to me that if human behavior were in g... (read more)
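To see concretely why the Allais pattern breaks expected utility, here is a small check using the standard Allais gambles. The code and function names are mine, not anything from the thread; the normalization u($0)=0, u($1M)=1 is without loss of generality for expected-utility maximizers.

```python
# Sketch of the standard Allais gambles. No single utility function u
# over {$0, $1M, $5M} can make an expected-utility maximizer prefer
# 1A over 1B *and* 2B over 2A -- the pattern people commonly show.

def expected_utility(gamble, u):
    return sum(p * u[outcome] for outcome, p in gamble.items())

g1A = {"1M": 1.00}
g1B = {"5M": 0.10, "1M": 0.89, "0": 0.01}
g2A = {"1M": 0.11, "0": 0.89}
g2B = {"5M": 0.10, "0": 0.90}

# Fix u("0")=0 and u("1M")=1 (allowed: utility is defined up to a
# positive affine transformation), then sweep u("5M") over a grid.
for u5 in [x / 100 for x in range(0, 1001)]:
    u = {"0": 0.0, "1M": 1.0, "5M": u5}
    prefers_1A = expected_utility(g1A, u) > expected_utility(g1B, u)
    prefers_2B = expected_utility(g2B, u) > expected_utility(g2A, u)
    if prefers_1A and prefers_2B:
        print("found a consistent u:", u)
        break
else:
    print("no utility function reproduces the 1A-and-2B pattern")
```

Algebraically, preferring 1A requires 0.11 u($1M) > 0.10 u($5M) + 0.01 u($0), while preferring 2B requires the reverse inequality, so the search must come up empty.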

1 torekp
Allais did more than point out that human behavior disobeys utility theory, specifically the "Sure Thing Principle" or "Independence Axiom". He also argued - to my mind, successfully - that there needn't be anything irrational about violating the axiom.
5 TimFreeman
A person's behavior can always be understood as optimizing a utility function; it's just that if they are irrational (as in the Allais paradox) the utility functions start to look ridiculously complex. If all else fails, a utility function can be used that has a strong dependency on time in whatever way is required to match the observed behavior of the subject. "The subject had a strong preference for sneezing at 3:15:03pm October 8, 2011."

From the point of view of someone who wants to get FAI to work, the important question is: if the FAI does obey the axioms required by utility theory, and you don't obey those axioms for any simple utility function, are you better off if

* the FAI ascribes to you some mixture of possible complex utility functions and helps you to achieve that, or
* the FAI uses a better explanation of your behavior, perhaps one of those alternative theories listed in the Wikipedia article, and helps you to achieve some component of that explanation?

I don't understand the alternative theories well enough to know if the latter option even makes sense.
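The "ridiculously complex, time-dependent utility function" fallback Tim describes can be written out directly. A toy sketch, with made-up action names, showing why the construction is trivial and correspondingly uninformative:

```python
# Toy sketch: any observed behavior is "optimal" under a utility
# function that conditions on time and rewards exactly the action the
# subject actually took at each step.

observed = ["sneeze", "wave", "sneeze"]  # whatever the subject actually did

def utility(t, action):
    # Reward exactly the action the subject took at time t.
    return 1.0 if action == observed[t] else 0.0

# An "optimizer" of this utility reproduces the behavior exactly:
actions = {"sneeze", "wave", "sit"}
reconstructed = [max(actions, key=lambda a: utility(t, a))
                 for t in range(len(observed))]
print(reconstructed == observed)  # True
```

The fit is perfect by construction, which is exactly why it explains nothing: the "utility function" is just a transcript of the behavior.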

"A scientific theory should be as simple as possible, but no simpler."

Einstein

4 PhilGoetz
Sounds good, but may not be meaningful outside of physics, where by "theory" you usually mean model, and a model can be made simpler or more complex as the occasion demands.

Nice article!

Folks who are interested in this kind of thing might also be interested to see the Koch lab's online demos of continuous flash suppression (CFS), which you can experience for yourself if you happen to have some old-style blue-red 3D glasses kicking around. This is the method where you show an image to the nondominant eye, and a crazy high-contrast flashing stimulus to the dominant eye, and the subject remains totally unaware of the image for up to minutes. Pretty fun stuff :) http://www.klab.caltech.edu/~naotsu/CFS_color_demo.html

You might also be interested in Giulio... (read more)

0 Armok_GoB
I can see both at the same time, and switch between different modes at will. I also tend to be immune to (or to see much more clearly) all kinds of optical illusions and visual effects like this, or to see them all the different possible ways simultaneously, and I can see intuitively why things like this work. I separately see "colours" like "movement speed" and "local contrast" in a way that intuitively feels like they were sent from the eyes, as distinct from maps of those properties that feel more like conscious deduction and are immune to many common optical illusions. I am ascribed a good aesthetic sense and artistic talent. I tend to move my eyes and scroll on displays in ways that others find disturbing or even painful, but I feel constrained and tunnel-visioned if I try not to. I have occasionally experienced something reminiscent of blindsight for short periods of time (reminiscent in the sense of a polar opposite), and am able to "see" things others can't, among a bunch of other related superpowers. Hope this is interesting enough as a case study not to come off as bragging. Feel free to run experiments on me, since I love this kind of thing.
2 atucker
Thanks for the link! Tononi is cool in that he quantified (or at least attempted to; I haven't followed the linear algebra proof) a measure of the extent to which a system both has distinct functional clusters and has integrated states that covary enough with external factors to carry mutual information about them. It seems to be in line with the "intelligence is compression" idea that I've run into a few times 'round these parts. I think that his theory is probably very related to intelligence (at least in human-style architectures), but not particularly related to consciousness. I do admire his audacity in saying that anything with a high enough score on his measure is conscious. Anil Seth has some intro/review-type material on this work, which you can get off of his website: http://www.anilseth.com/
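The "mutual information with external factors" piece of that idea is easy to illustrate on its own. A minimal sketch estimating discrete mutual information from paired samples; this is plain I(X; Y), emphatically not Tononi's phi, which is far more involved, and all names are mine:

```python
import math
from collections import Counter

# Estimate I(X; Y) in bits from a list of (x, y) sample pairs.
def mutual_information(pairs):
    n = len(pairs)
    pxy = Counter(pairs)               # joint counts
    px = Counter(x for x, _ in pairs)  # marginal counts for X
    py = Counter(y for _, y in pairs)  # marginal counts for Y
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A system state that perfectly tracks an external factor: 1 bit of MI.
print(mutual_information([(0, 0), (1, 1)] * 50))                  # 1.0
# A state independent of the external factor: 0 bits.
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```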