TheAncientGeek comments on FAI and the Information Theory of Pleasure - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Are you referring to any specific "current research into qualia", or just the idea of qualia research in general? I definitely agree that valence research is a subset of qualia research, but there's not a whole lot of either going on at this point, or at least not much that has produced anything quantitative/predictive.
I suspect valence is actually a really great path to approach more 'general' qualia research, since valence could be a fairly simple property of conscious systems. If we can reverse-engineer one type of qualia (valence), it'll help us reverse other types.
There's a lot of philosophical research and very little scientific research. That confirms philosophers' impression that qualia are a Hard Problem.
How do you reverse-engineer a quale? How do you tell whether you have succeeded? I think you have underestimated the hardness of the problem.
I do have some detailed thoughts on your two questions-- in short, given certain substantial tweaks, I think IIT (or variants by Tegmark/Griffiths) can probably be salvaged from its (many) problems in order to provide a crisp dataset on which to base testable hypotheses about qualia.
(If you're around the Bay Area I'd be happy to chat about this over a cup of coffee or something.)
I would emphasize, though, that this post only talks about the value results in this space would have for FAI, and tries to be as agnostic as possible on how any reverse-engineering may happen.
I'm still not seeing how IIT would help with confirming that an attempt at reverse engineering had succeeded, absent circular reasoning along the lines of "IIT says the system will have qualia; therefore the system will have qualia".
Testing hypotheses derived from or inspired by IIT will probably happen on a case-by-case basis. But given some of the empirical work on coma patients that IIT has made possible, I think it may be stretching things to critique IIT as wholly reliant on circular reasoning.
That said, yes, there are deep methodological challenges with qualia that any approach will need to overcome. I do see your objection quite clearly. I'm confident that I address this in my research (as any meaningful research on this must do), but I don't expect you to take my word for it. The position that I'm defending here is simply that progress in valence research will have relevance to FAI research.
Out of curiosity, do you think valence has a large or small Kolmogorov complexity?
I think it's smallish, and that's philosophy, because I don't have a qualiometer.
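Kolmogorov complexity is uncomputable in general, but compressed length gives a crude, computable upper bound, which is one way to make "smallish" concrete. A minimal sketch of that proxy (the byte strings below are placeholders for illustration, not actual valence data):

```python
import zlib

def complexity_proxy(data: bytes) -> int:
    """Length of the zlib-compressed data: a rough computable
    upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

regular = b"pain" * 256                                      # repetitive, 1024 bytes
irregular = bytes((i * 97 + 31) % 256 for i in range(1024))  # varied, 1024 bytes

# The repetitive string admits a much smaller complexity bound.
print(complexity_proxy(regular) < complexity_proxy(irregular))  # True
```

A genuinely simple (low-complexity) property would show up as a short description under any reasonable encoding; the open question is what encoding of conscious systems to compress in the first place.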
refs?
The stuff by Casali is pretty topical, e.g. his 2013 paper with Tononi.
You mean this?
But that isn't really saying anything about qualia. The authors can relate their PCI measure to consciousness as judged medically... in humans. But would that scale be applicable to very simple systems or artificial systems? There is a real possibility that qualia could go missing in computational simulations, even assuming strict physicalism. In fact, we standardly assume that AIs embedded in games don't suffer.
If you're looking for a Full, Complete Data-Driven And Validated Solution to the Qualia Problem, I fear we'll have to wait a long, long time. This seems squarely in the 'AI complete' realm of difficulty.
But if you're looking for clever ways of chipping away at the problem, then yes, Casali's Perturbational Complexity Index should be interesting. It doesn't directly say anything about qualia, but it does indirectly support Tononi's approach, which says much about qualia. (Of course, we don't yet know how to interpret most of what it says, nor can we validate IIT directly yet, but I'd just note that this is such a hard, multi-part problem that any interesting/predictive results are valuable, and will make the other parts of the problem easier down the line.)
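For concreteness: Casali's PCI is essentially a normalized Lempel-Ziv complexity of the binarized, TMS-evoked EEG response. A toy sketch of the underlying Lempel-Ziv idea, using an LZ78-style phrase count on a binary string (not Casali's exact LZ76 variant or normalization):

```python
def lz_phrase_count(s: str) -> int:
    """Count phrases in an LZ78-style incremental parse: each new
    phrase is the shortest prefix not yet seen as a phrase."""
    seen = set()
    phrase = ""
    count = 0
    for ch in s:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    if phrase:  # trailing characters that duplicate an earlier phrase
        count += 1
    return count

print(lz_phrase_count("0" * 16))            # 6  (stereotyped signal)
print(lz_phrase_count("0110100110010110"))  # 8  (more varied signal)
```

The intuition PCI formalizes is that cortical responses in wakefulness are both widespread and non-repetitive (many phrases), while responses in deep anesthesia or NREM sleep are stereotyped (few phrases).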
That's what I am disputing. You are taking a problem we don't know how to make a start on and turning it into a smaller problem we also don't know how to make a start on. That isn't an advance. Reducing or simplifying a problem isn't an unconditional, universal solvent; it only works where the simpler problem is one you can actually make progress on.
IIT isn't going to be of any real use unless it is confirmed, and how are you going to confirm it, as a theory of qualia, without qualiometers?
If we are going to continue not having qualiometers, we may have to give up on testing consciousness objectively in favour of subjective measures: phenomenology and heterophenomenology. But you can only do heterophenomenology on a system that can report its subjective states. Starting with simpler systems, like a single simulated pain receptor, is not going to work.