I have some expertise in machine learning and AI. I broadly believe that human minds work similarly to modern AI algorithms such as deep learning and reinforcement learning, and I think it is likely that consciousness is present wherever algorithms are executing (a form of panpsychism). I am trying to develop theories about how AI algorithms could generate conscious experiences. For example, when an agent estimates that many of its available actions will improve its situation, it might feel something like happiness; when it estimates that most choices will make things worse and it is searching for the least bad option, it might feel something like fear or sadness. I am looking for existing research that offers a taxonomy of conscious experiences (ideally with associated experimental data, e.g. surveys) that I could use to define a scope of experiences to then try to map onto the execution of machine learning algorithms. Ideally the taxonomy would be quite comprehensive; I have found resources of this kind very useful in the past for similar goals, such as WordNet, ConceptNet, time-use surveys, and the DSM (psychiatric diagnosis).
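To make the kind of mapping I mean concrete, here is a minimal sketch in Python. Everything in it is my own illustrative assumption, not established theory: the `valence_signal` function, its thresholds, and the use of mean action value as a stand-in for a state-value baseline are all invented for the example. It just reads a crude positive/negative "affect" label off the distribution of action advantages under a learned Q-function.

```python
import numpy as np

def valence_signal(q_values, baseline=None):
    """Hypothetical mapping from action-value estimates to a crude affect label.

    q_values: estimated return for each available action.
    baseline: the agent's estimate of its current situation (defaults to the
    mean of q_values, a rough stand-in for a state-value estimate).
    """
    q = np.asarray(q_values, dtype=float)
    if baseline is None:
        baseline = q.mean()
    advantages = q - baseline  # how much each action improves on the status quo
    if (advantages > 0).mean() > 0.5:
        return "positive (many actions improve the situation)"
    if (advantages <= 0).all():
        return "negative (searching for the least bad option)"
    return "mixed"

# Many good options vs. every option worse than the current state.
print(valence_signal([2.0, 1.5, 0.8, 1.2], baseline=0.5))  # -> positive
print(valence_signal([-3.0, -1.0, -2.5], baseline=0.0))    # -> negative
```

A taxonomy of experiences would, on this approach, give me a target list of labels that functions like this one would need to account for.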
I have a very limited understanding of phenomenology. I believe its goal of understanding conscious experience may be relevant, but I am concerned that the subject is not presented in the systematic, textbook-style format I am looking for. I would be very grateful for suggestions on where I might find a systematic overview I could use, perhaps teaching materials, Wikipedia, or any other source that attempts this kind of broad, systematic taxonomy.
Thank you for the thoughtful comments. I am not certain that the approach I am suggesting will be successful, but I am hoping that more complex experiences may be explainable from simpler essences, much as the behaviour of fluids emerges from simpler atomic rules. I am currently working from the assumption that the brain resembles a modern reinforcement learning system: one or more large learnt structures plus a relatively simple learning algorithm. The first thing I hope to examine is whether all the conscious experiences could be explained purely by behaviours associated with the learning algorithm; even better if the attempt suggests new forms the learning algorithm should take. For example, we have strong memories of sad events and choices we regret, which suggests we rank the importance of past experiences by such situations and weight them more heavily when learning from them (see the sketch below). We might also avoid a strategy because our intuition says it makes us sad (it resembles other situations that made us sad) rather than because it is simply a poor strategy for achieving our goals.
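To illustrate what I mean by weighting sad or regretted experiences more heavily, here is a minimal sketch. The sampling scheme is loosely modelled on prioritized experience replay, which normally uses TD error as the priority; the `regret` field, the `affect_weight` parameter, and the class name are my own illustrative assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class Transition:
    state: int
    action: int
    reward: float
    regret: float  # hypothetical negative-affect score attached at storage time

class AffectWeightedReplay:
    """Replay buffer that samples painful or regretted experiences more often,
    so they have a larger influence on learning."""
    def __init__(self, affect_weight=2.0):
        self.buffer = []
        self.affect_weight = affect_weight  # how much regret inflates priority

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Priority = 1 (baseline) + affect_weight * regret, so sad or
        # regretted memories are replayed, and hence learnt from, more heavily.
        priorities = [1.0 + self.affect_weight * t.regret for t in self.buffer]
        return random.choices(self.buffer, weights=priorities, k=batch_size)

replay = AffectWeightedReplay()
replay.add(Transition(state=0, action=1, reward=1.0, regret=0.0))
replay.add(Transition(state=1, action=0, reward=-5.0, regret=0.9))  # regretted choice
batch = replay.sample(4)  # the regretted transition dominates the sample
```

If something like this is on the right track, the interesting question for me is which entries in a taxonomy of experiences correspond to which such mechanisms in the learning algorithm.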