Thanks, it is very handy to get something that is compatible with SUMO.
Thank you for the thoughtful comments. I am not certain that the approach I am suggesting will be successful, but I am hoping that more complex experiences may be explainable from simpler essences, much as the behaviour of fluids emerges from simpler atomic rules. I am currently working from the assumption that the brain is similar to a modern reinforcement learning algorithm, where there are one or more large learnt structures and a relatively simple learning algorithm. The first thing I am hoping to look at is whether all conscious experiences could be explained purely by behaviours associated with the learning algorithm; even better if, in trying to do this, it points to new structures the learning algorithm should take. For example, we have strong memories of sad events and choices we regret, which implies we rank the importance of past experiences based on these situations and weight them more heavily when learning from them. We might avoid a strategy because our intuition says it makes us sad (it is like other situations that made us sad) rather than because it is simply a poor strategy for achieving our goals. A toy sketch of this weighting idea follows below.
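To make that concrete, here is a minimal, purely illustrative sketch (the class name, the salience floor and the numbers are all made up, not a claim about how the brain works): a replay buffer where experiences tagged with high regret are sampled more often during learning.

```python
import random

class SalienceWeightedReplay:
    """Toy replay buffer: emotionally salient experiences are replayed more often."""

    def __init__(self):
        self.experiences = []
        self.saliences = []

    def store(self, experience, regret):
        # Weight each memory by its regret/sadness; a small floor keeps
        # neutral memories from vanishing entirely.
        self.experiences.append(experience)
        self.saliences.append(max(regret, 0.05))

    def sample(self, k):
        # Regretted or sad events are proportionally more likely to be
        # replayed, so they dominate what gets learnt from.
        return random.choices(self.experiences, weights=self.saliences, k=k)

buffer = SalienceWeightedReplay()
buffer.store("chose strategy A, regretted it", regret=0.9)
buffer.store("uneventful lunch", regret=0.0)
print(buffer.sample(5))  # mostly the regretted choice
```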
Great links, thank you. I hadn't considered the drug effects before; that is an interesting perspective on positive sensations. Also, I wanted to say I am a big fan of your work, particularly your media synthesis stuff. I use it when teaching deep learning to show examples of how to use academic source code to explore cutting-edge techniques.
Perfect, thank you
A high-level post on its use would be very interesting.
I think my main criticism of the Bayesian approach is that it leads to the kind of work you are suggesting, i.e. have a person construct a model and then have a machine calculate its parameters.
I think that much of what we value in intelligent people is their ability to form the model themselves. By focusing on parameter updating we aren't developing the AI techniques necessary for intelligent behaviour. In addition, because correct updating does not guarantee good performance (the model's properties dominate), we will always have to judge methods based on experimental results.
Because we always come back to experimental results, whatever general AI strategy we develop is more likely to be one that searches for new ways to learn (with Bayesian model updating and SVMs as examples) and validates these strategies using experimental data (replicating the behaviour of the AI field as a whole). A rough sketch of that loop is below.
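As a hedged sketch of that structure (scikit-learn is used purely for illustration, and the candidate list is arbitrary): an outer loop that treats whole learning methods as candidates and keeps whichever one the experimental data validates.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Candidate "ways to learn": the search space an outer loop would explore.
candidates = {
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=5000),
}

# Validate each strategy on experimental data and keep the best,
# replicating in miniature what the AI field does as a whole.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected:", best)
```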
I find it useful to think about how people solve problems and to examine the huge gulf between specific learning techniques and these approaches. For example, to replicate a Bayesian AI researcher, an AI needs to take a small amount of data and an incomplete, informal model of the process that generates it (e.g. based on informal metaphors of physical processes the author is familiar with), then find a way of formalising this informal model (so that its behaviour under all conditions can be calculated), possibly doing some theorem proving to investigate the model's properties. It then applies potentially standard techniques to determine the model's parameters and judges its worth by experiment (potentially repeating the whole process if it doesn't work).
By focusing on Bayesian approaches we aren't developing techniques that can replicate these kinds of lateral and creative thinking. Saying there is only one valid form of inference is absurd because it doesn't address these problems.
I feel that trying to force our problems to suit our tools is unlikely to make much progress. For example, unless we can model (and therefore largely solve) all of the problems we want an AI to address, we can't create a "Really Good Model".
Rather than manually developing formalisations of specific forms of similarity, we need an algorithm that learns different types of similarity and then constructs the formalisation itself (or doesn't, since I don't think we actually formalise our notions of similarity and yet can still solve problems).
Automated theorem proving is a good example where the problems are well defined yet unique, so any algorithm that can construct proofs needs to see meta-patterns in other proofs and apply them. This brings home the difficulty of identifying what it means for things to be similar, and it also emphasises the incompleteness of a probabilistic approach: the proof the AI is trying to construct has never been encountered before, so in order to benefit from experience it needs to invent a type of similarity that maps the current problem to the past. A deliberately crude sketch of that retrieval step is below.
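To make the "invented similarity" point concrete (the statements, the tactics and the bag-of-symbols representation are all made-up illustrations): retrieving the most similar past proof to guide the current search. The choice of similarity measure is doing all the real work here, and a real system would have to learn something far better than this.

```python
from collections import Counter
from math import sqrt

def bag_of_symbols(statement):
    # Crude stand-in for a learned representation of a proof goal.
    return Counter(statement.split())

def cosine(a, b):
    # Cosine similarity between two symbol bags; missing keys count as 0.
    dot = sum(a[s] * b[s] for s in a)
    return dot / (sqrt(sum(v * v for v in a.values())) *
                  sqrt(sum(v * v for v in b.values())))

past_proofs = {
    "forall n : n + 0 = n": "induction on n",
    "forall x y : x * y = y * x": "induction then commutativity lemma",
}

goal = "forall n : 0 + n = n"
g = bag_of_symbols(goal)
# Retrieve the most "similar" past proof to guide the search; the
# notion of similarity is exactly what the system would need to invent.
best = max(past_proofs, key=lambda s: cosine(g, bag_of_symbols(s)))
print("try tactic from:", best, "->", past_proofs[best])
```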
Eh, not impossible... just very improbable (in a given world) and certain across all worlds.
I would have thought the more conventional explanation is that the other versions are not actually you (just very like you). This sounds like the issue where only economists act in the way that economists model people. I would suspect that only people who fixate on such matters would confuse a copy with themselves.
I suspect that people who are vulnerable to these ideas leading to suicide are in fact generally vulnerable to suicide. There are lots of better reasons to kill yourself that most people ignore. If you think you're at risk of this, I recommend you seek therapy; thought experiments should not have such drastic effects on your actions.
Thanks for your reference; it is good to get down to some more specific examples.
Most AI techniques are model-based by necessity: it is not possible to generalise from samples unless the sample is used to inform the shape of a model, which then determines the properties of other samples. In effect, AI is model fitting. Bayesian techniques are one scheme for updating a model from data (a minimal example is below). I call them incomplete because they leave a lot of the intelligence in the hands of the user.
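For concreteness, the simplest textbook form of the updating scheme I mean (a Beta-Bernoulli coin model; note that the choice of model and prior is itself an input the user must supply):

```python
# Bayesian updating of a coin's bias with a Beta prior.
# The updating rule is mechanical; choosing the model (Bernoulli)
# and the prior (Beta(1, 1)) is where the user's intelligence goes.
alpha, beta = 1.0, 1.0                 # uniform prior over the bias
observations = [1, 1, 0, 1, 0, 1, 1]   # coin flips: 1 = heads

for flip in observations:
    alpha += flip
    beta += 1 - flip

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean bias: {posterior_mean:.3f}")  # ~0.667
```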
For example, in the thesis you reference the author designs a model of transformations on handwritten letters that (thanks to the author's intelligence) is similar to the set of transformations applied to numeric characters. The primary reason the technique is effective is that the author has constructed a good transformation, and the only way to determine whether this is true is through experimentation. I doubt the Bayesian updating is contributing significantly to the results; if another scheme such as an SVM were chosen, I would expect it to produce similar recognition results, along the lines of the sketch below.
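As a sketch of the comparison I have in mind (using scikit-learn's bundled digits set, not the data from the thesis; this is illustrative, not a replication):

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# An off-the-shelf SVM with no hand-built transformation model,
# as a baseline for the kind of comparison I mean.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```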
The point is that the legitimacy or otherwise of the model-parameter updating scheme is relatively insignificant compared to the difficulty of selecting a good model in the first place. As far as I am aware, since there is a potentially infinite set of models, Bayesian techniques cannot be applied to select between them, leaving the real intelligence to be provided by the user in the form of the model. In contrast, SVMs are an attempt to construct experimentally useful models from samples, and so are much closer to being intelligent in the sense of producing good results with limited human interaction. However, neither technique addresses the fundamental difficulty of replicating the intelligence the author used in creating the transformation in the first place. Fixating on a particular approach to model updating when model selection is not addressed is to miss the point; it may be meaningful for gambling problems, but for real AI challenges the difference it makes appears irrelevant to actual performance.
I would love to discuss what the real challenges of GAI are and explore ways of addressing them, but the posts on LW often seem to focus on seemingly obscure game-theory or gambling-based problems which don't appear to be bringing us closer to a real solution. If the model selection problem can't be addressed, then there is no way to guarantee that, whatever we want an AI to value, it won't create an internal model that finds something similar (like paperclips) and decides to optimise for that instead.
Silently downvoting criticism of Bayesian probability without justification is not helpful either.
From what I understand, in order to apply Bayesian approaches in practical situations it is necessary to make assumptions that have no formal justification, such as the distribution of priors or the local similarity of analogue measures (so that similar but not exact predictions can be informative). This changes the problem without necessarily solving it. In addition, it doesn't address AI problems that are not based on repeated experience, e.g. automated theorem proving. The advantage of statistical approaches such as SVMs is that they produce practically beneficial results with limited parameters; combined with parameter search techniques they can achieve fully automated predictions that often have good experimental results (a sketch is below). Regardless of whether Bayesianism is the law of inference, if such approaches cannot be applied automatically they are fundamentally incomplete and only as valid as the assumptions they are used with. If Bayesian approaches carry a fundamental advantage over these techniques, why is this not reflected in their practical performance on real-world AI problems such as face recognition?
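By "parameter search techniques" I mean something like the following (a scikit-learn sketch; the grid values are arbitrary choices of mine):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# A fully automated pipeline: the parameter search itself is driven
# by cross-validated experimental results, with no hand-tuning.
search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.01]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, f"score: {search.best_score_:.3f}")
```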
Oh, and bring on the downvotes, you theory-loving zealots :)
Thank you very much for your great reply. I'll look into all of the links. Your comments have really inspired me in my exploration of mathematics. They remind me of the aspect of academia I find most surprising: how it can so often be ideological, defensive and secretive while also supporting those who sincerely, openly and fearlessly pursue the truth.
Thanks for the comment. I think it is very interesting to think about the minimum-complexity algorithm that could plausibly have each conscious experience. The fact that we remember events, talk about them, and can describe how they are similar (e.g. blue is cold and sad) implies that our internal mental representations, and the connections we can make between them, must be structured in a certain way. It is fascinating to think about what the simplest 'feeling' algorithm might be, and exciting to think that we may someday be able to create new conscious sensations by integrating our minds with new algorithms.