RobbBB comments on Can We Do Without Bridge Hypotheses? - Less Wrong

Post author: RobbBB 25 January 2014 12:50AM


Comment author: HoverHell 27 January 2014 01:03:34PM

A sidetracking bunch of loosely formed questions and suggestions:

  • In which ways (with which predictions) could a bridge hypothesis account for tampering with the agent's mechanism (“implementation”), e.g. the proverbial anvil? Could the “correct” bridge hypothesis change if part of the agent is destroyed, or, if not, would it require a more complex bridge hypothesis (one that is never verified in practice)?
  • Is it supposed to be possible to define a single “correct” bridge mapping for an agent other than oneself?
  • Is the location of the agent in a world part of the bridge hypothesis, or a given? If it is not a given, the mapping should be from a model of the whole world rather than from some particular part of it (e.g. a notebook or a brain), and the notion of “self” would be part of the hypothesis as well.
Comment author: RobbBB 28 January 2014 07:39:59AM

Could the “correct” bridge hypothesis change if part of the agent is destroyed, or, if not, would it require a more complex bridge hypothesis (one that is never verified in practice)?

For an agent that can die or become fully unconscious, a complete and accurate bridge hypothesis should include conditions under which a physical state of the world corresponds to the absence of any introspection or data. I'll talk about a problem along these lines for AIXI in my next post.

It's similar to a physical hypothesis. You might update the hypothesis when you learn something new about death, but you of course can't update after dying, so any correct physical or mental or bridging belief about death will have to be prospective.
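The idea that a complete bridge hypothesis must also cover states with no introspection or data can be sketched as a toy mapping. This is only an illustration under my own assumptions; the state and percept names below are invented, not from the post:

```python
# Hedged sketch: a bridge hypothesis modeled as a mapping from
# hypothesized physical states to the percepts they correspond to.
# All names here are illustrative assumptions.

# Physical states the agent's world-model might distinguish.
physical_states = ["circuits_intact", "circuits_damaged", "circuits_destroyed"]

# The bridge hypothesis: which percept (if any) each physical state yields.
# None marks states corresponding to the absence of any introspection or
# data at all, e.g. death or full unconsciousness -- a correspondence the
# agent can only ever hold prospectively, never verify from the inside.
bridge_hypothesis = {
    "circuits_intact": "normal_percept",
    "circuits_damaged": "garbled_percept",
    "circuits_destroyed": None,
}

def predicted_percept(state):
    """Return the percept this bridge hypothesis assigns to a physical state."""
    return bridge_hypothesis[state]
```

On this toy picture, `predicted_percept("circuits_destroyed")` is `None`: the hypothesis still says something about that state, even though no experience could ever confirm it after the fact.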

Is it supposed to be possible to define a single “correct” bridge mapping for an agent other than oneself?

I'm not sure about the 'single correct' part, but yes, you can have hypotheses about the link between an experience in another agent and the physical world. In some cases it may be hard to decide whether you're hypothesizing about a different agent's phenomenology, or about the phenomenology of a future self.

You can also hypothesize about the link between unconscious computational states and physical states, in yourself or others. For instance, in humans we seem to be able to have beliefs even when we aren't experiencing having them. So a fully general hypothesis linking human belief to physics wouldn't be a 'phenomenological bridge hypothesis'. But it might still be a 'computational bridge hypothesis' or a 'functional bridge hypothesis'.

Is the location of the agent in a world a part of the bridge hypothesis or a given?

I'll talk about this a few posts down the line. Indexical knowledge (including anthropics) doesn't seem to be a solved problem yet.

Comment author: HoverHell 29 January 2014 08:06:52PM

I'm not sure about the 'single correct' part

My suspicion is that, from a third-person point of view (without access to the experience), it could be possible to construct multiple equally valid bridge hypotheses, each of which would imply a different resulting experience (perception) for the hypothesized agent.
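The worry above can be made concrete with a toy example in the same mapping style. This is a sketch under my own assumptions (the state and percept names are invented): two bridge hypotheses that agree on every third-person physical state, yet assign different experiences to the agent, so outside observation alone cannot decide between them:

```python
# Hedged sketch: two bridge hypotheses defined over the same physical
# states but assigning inverted percepts. All names are illustrative.

physical_states = ["state_a", "state_b"]

bridge_1 = {"state_a": "sees_red", "state_b": "sees_green"}
bridge_2 = {"state_a": "sees_green", "state_b": "sees_red"}  # inverted mapping

# Both hypotheses are defined over exactly the same physical states,
# so no third-person observation distinguishes them...
assert set(bridge_1) == set(bridge_2) == set(physical_states)

# ...yet they imply different experiences for the hypothesized agent.
differs = any(bridge_1[s] != bridge_2[s] for s in physical_states)
```

Here `differs` comes out true: the two mappings are empirically equivalent from outside but phenomenologically distinct, which is the underdetermination being suspected.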