
I think the elephant in the room is the purpose of the simulation.

Bostrom takes it as a given that future intelligences will be interested in running ancestor simulations. Why is that? If some future posthuman civilization truly masters physics, consciousness, and technology, I don't see them using that mastery to play SimUniverse. Running ancestor simulations is what we would do with limitless power; it's taking our unextrapolated, 2017 volition and asking what we'd do if we were gods. But that's like asking a 5-year-old what he wants to do when he grows up, then taking the answer seriously.

Ancestor simulations sound cool to us -- heck, they sound amazingly interesting to me -- but I strongly suspect posthumans would find better uses for their resources.

Instead, I think we should try to reason about the purpose of a simulation from first principles.

Here's an excerpt from Principia Qualia, Appendix F:

Why simulate anything?

At any rate, let’s assume the simulation argument is viable -- i.e., it's possible that we're in a simulation and, given the anthropic math, plausible that we're in one now.

Although it's possible that we're being simulated for no reason at all, let's assume that entities smart enough to simulate universes would have a good reason to do so. So what possible good reason could there be to simulate a universe? Two options come to mind: (a) using the evolution of the physical world to compute something, or (b) something to do with qualia.

In theory, (a) could be tested by assuming that efficient computations will exhibit high Kolmogorov complexity (incompressibility) from certain viewpoints and low Kolmogorov complexity from others. We could then formulate an anthropic-aware measure of this, applicable from ‘within’ a computational system, and apply it to our observable universe. This is outside the scope of this work.

However, we can offer a suggestion about (b): if our universe is being simulated for some reason associated with qualia, it seems plausible that it has to do with producing a large amount of some kind of particularly interesting or morally relevant qualia.
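Circling back to (a): since Kolmogorov complexity is uncomputable, the closest practical stand-in is a general-purpose compressor, and the core intuition is that the same process can look nearly incompressible as a raw data stream while its generating rule compresses to almost nothing. Here's a minimal Python sketch of that viewpoint-dependence; the logistic-map example and the zlib proxy are my own illustrative choices, not anything proposed in the excerpt, and an actual 'anthropic-aware' measure would need far more than this.

```python
# Toy proxy: a general-purpose compressor stands in for Kolmogorov complexity,
# which is uncomputable. Illustrative only.
import struct
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data -- a rough upper bound on description length."""
    return len(zlib.compress(data, level=9))

def logistic_trajectory(x0: float = 0.2, r: float = 3.99, n: int = 10_000) -> bytes:
    """'Inside' view: the raw trajectory of a chaotic process, serialized to bytes."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return b''.join(struct.pack('d', v) for v in xs)

# 'Outside' view: the short rule that generates the same trajectory.
generator = b"x0=0.2; r=3.99; x_{t+1} = r*x_t*(1-x_t); iterate 10000 times"

print(compressed_size(logistic_trajectory()))  # tens of kilobytes: the stream looks nearly incompressible
print(compressed_size(generator))              # tens of bytes: the generating rule is trivially compressible
```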

We don't live in a universe that's nice or just all the time, so perhaps there are nightmare scenarios in our future. Not all traps have an escape. However, I think this one does, for two reasons.

(1) all the reasons that Robin Hanson mentioned;

(2) we seem really confused about how consciousness works, which suggests there are large 'unknown unknowns' in play. It seems very likely that if we extrapolate our confused models of consciousness into extreme scenarios such as this, we'll get even more confused results.

A rigorous theory of valence wouldn't involve cultural context, much as a rigorous theory of electromagnetism doesn't involve cultural context.

Cultural context may matter a great deal in terms of how to build a friendly AGI that preserves what's valuable about human civilization-- or this may mostly boil down to the axioms that 'pleasure is good' and 'suffering is bad'. I'm officially agnostic on whether value is simple or complex in this way.

One framework for dealing with the stuff you mention is Coherent Extrapolated Volition (CEV)- it's not the last word on anything but it seems like a good intuition pump.

We're not on the same page. Let's try this again.

  • The assertion I originally put forth is about AI safety, not about reverse-engineering qualia. I'm willing to briefly discuss some intuitions on how one might make meaningful progress on reverse-engineering qualia as a courtesy to you, my anonymous conversation partner here, but since this isn't what I originally posted about, I don't have a lot of time to address radical skepticism, especially when it seems like you want to argue against some strawman version of IIT.

  • You ask for references (in a somewhat rude monosyllabic manner) on "some of the empirical work on coma patients IIT has made possible" and I give you exactly that. You then ignore it as "not really qualia research"- which is fine. But I'm really not sure how you can think that this is completely irrelevant to supporting or refuting IIT: IIT made a prediction, Casali et al. tested the prediction, the prediction seemed to hold up. No qualiometer needed. (Granted, this would be a lot easier if we did have them.)

This apparently leads you to say,

You are taking a problem we don't know how to make a start on, and turning it into a smaller problem we also don't know how to make a start on.

More precisely, I'm taking a problem you don't know how to make a start on and turning it into a smaller problem that you also don't seem to know how to make a start on. Which is fine, and I don't wish to be a jerk about it, not least because Tononi/Tegmark/Griffith could be wrong in how they're approaching consciousness, and I could be wrong in how I'm adapting their work to try to explain some specific things about qualia. But you seem to just want to give up, to put this topic beyond the reach of science, and to criticize anyone trying to find clever indirect approaches. Needless to say, I vehemently disagree with the productiveness of that attitude.

I think we are in agreement that valence could be a fairly simple property. I also agree that the brain is Vastly Complex, and that qualia research has some excruciatingly difficult methodological hurdles to overcome, and I agree that IIT is still a very speculative hypothesis which shouldn't be taken on faith. I think we differ radically on our understandings of IIT and related research. I guess it'll be an empirical question whether IIT morphs into something that can substantially address questions of qualia- based on my understandings and intuitions, I'm pretty optimistic about this.

If you're looking for a Full, Complete Data-Driven And Validated Solution to the Qualia Problem, I fear we'll have to wait a long, long time. This seems squarely in the 'AI complete' realm of difficulty.

But if you're looking for clever ways of chipping away at the problem, then yes, Casali's Perturbational Complexity Index should be interesting. It doesn't directly say anything about qualia, but it does indirectly support Tononi's approach, which says much about qualia. (Of course, we don't yet know how to interpret most of what it says, nor can we validate IIT directly yet, but I'd just note that this is such a hard, multi-part problem that any interesting/predictive results are valuable, and will make the other parts of the problem easier down the line.)
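For a sense of what the PCI actually measures: at its core it is a Lempel-Ziv complexity of a binarized spatiotemporal response matrix, normalized by that matrix's source entropy. Here's a minimal Python sketch of that core calculation, under simplifying assumptions of mine (the real index in Casali et al. is computed on statistically thresholded, source-reconstructed TMS-evoked EEG responses, which this sketch skips entirely):

```python
import numpy as np

def lz76_complexity(s: str) -> int:
    """Number of phrases in the Lempel-Ziv (1976) parsing of a string
    (Kaspar-Schuster style implementation)."""
    n = len(s)
    if n < 2:
        return n
    i, l = 0, 1
    c, k, k_max = 1, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k_max, k)
            i += 1
            if i == l:            # no earlier substring reproduces the current one: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def pci_like(binary_matrix: np.ndarray) -> float:
    """Illustrative PCI-style score: LZ complexity of a binarized
    (sources x time) activity matrix, normalized by its source entropy."""
    s = ''.join(str(int(b)) for b in np.asarray(binary_matrix).ravel())
    n = len(s)
    p = s.count('1') / n
    if p in (0.0, 1.0):
        return 0.0                # a constant matrix carries no complexity
    source_entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return lz76_complexity(s) * np.log2(n) / (n * source_entropy)

# Toy comparison: responses copied identically across "sources" parse into few
# LZ phrases and score low; responses that differ across sources score higher.
rng = np.random.default_rng(0)
copied = np.tile(rng.integers(0, 2, size=200), (30, 1))
diverse = rng.integers(0, 2, size=(30, 200))
print(pci_like(copied), pci_like(diverse))
```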

The stuff by Casali is pretty topical, e.g. his 2013 paper with Tononi.

Testing hypotheses derived from or inspired by IIT will probably happen on a case-by-case basis. But given some of the empirical work on coma patients that IIT has made possible, I think it may be stretching things to critique IIT as wholly reliant on circular reasoning.

That said, yes there are deep methodological challenges with qualia that any approach will need to overcome. I do see your objection quite clearly- I'm confident that I address this in my research (as any meaningful research on this must do) but I don't expect you to take my word for it. The position that I'm defending here is simply that progress in valence research will have relevance to FAI research.

Out of curiosity, do you think valence has a large or small Kolmogorov complexity?

I do have some detailed thoughts on your two questions-- in short, given certain substantial tweaks, I think IIT (or variants by Tegmark/Griffith) can probably be salvaged from its (many) problems in order to provide a crisp dataset on which to base testable hypotheses about qualia.

(If you're around the Bay Area I'd be happy to chat about this over a cup of coffee or something.)

I would emphasize, though, that this post only talks about the value results in this space would have for FAI, and tries to be as agnostic as possible on how any reverse-engineering may happen.

Are you referring to any specific "current research into qualia", or just the idea of qualia research in general? I definitely agree that valence research is a subset of qualia research- but there's not a whole lot of either going on at this point, or at least not much that has produced anything quantitative/predictive.

I suspect valence is actually a really great path to approach more 'general' qualia research, since valence could be a fairly simple property of conscious systems. If we can reverse-engineer one type of qualia (valence), it'll help us reverse-engineer other types.

It would probably be highly dependent on the AI's architecture. The basic idea comes from Shulman and Bostrom -- Superintelligence, chapter 9, in the "Incentive methods" section (loc. 3131 of 8770 on Kindle).

My understanding is that such a strategy could help as part of a comprehensive approach combining limitations and incentivization, but wouldn't be viable on its own.
