Comment author: Carinthium 27 January 2014 01:15:11PM 0 points [-]

You have a point. Then how do you justify induction?

Comment author: HoverHell 29 January 2014 08:00:02PM *  1 point [-]

You don't (or “I don't”, if that's what you meant).

You could say something like: “if induction is impossible then decision-making and communication are futile”.

However, that by itself does not disprove (or de-justify) claims that induction is possible but works in other ways / with exceptions (along the lines of “induction is possible unless applied to god^W magic”).

Comment author: Carinthium 27 January 2014 12:00:20PM 0 points [-]

Which means that anti-scepticism is a position taken on faith in the religious sense. It is, after all, the anti-sceptic who claims something can be known.

What I'm looking for is an argument that starts from no assumptions whatsoever but the self-evident, that gets to a justifiable probability theory. That would get around arguments such as the Evil Demon argument.

Comment author: HoverHell 27 January 2014 01:12:12PM *  0 points [-]

As has been famously noted, the assumption that cannot be verified is induction (from the past into the future). It is well known in philosophy (discussed under phrases like “the problem of induction”) that if something worked well before, it won't necessarily keep working (unless you assume a priori that induction is possible).

One can freely claim anything (any prediction / hypothesis) if induction is not assumed; a prediction turning out to be false would not say anything about the validity of induction and, thus, of the relevant models.

And probabilities in particular can be justified from induction (additionally, some of the axioms concern the mathematical representation of probabilities, not the notion itself), as can Ockham's principle and many other relevant ideas.
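As a loose illustration of that first point (one standard toy formalization, not necessarily what was meant above): Laplace's rule of succession shows how, once an inductive assumption is granted (past and future trials share one fixed unknown bias, with a uniform prior over it), a concrete probability for “the next trial succeeds” falls straight out of Bayes' theorem. The numbers below are invented for the example.

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: posterior probability that the next
    trial succeeds, given `successes` out of `trials` so far, under a
    uniform prior over the unknown success rate (the inductive assumption
    that past and future trials share one fixed bias)."""
    return Fraction(successes + 1, trials + 2)

# Toy example: after observing 100 sunrises out of 100 mornings,
# the probability assigned to "the sun rises tomorrow" is 101/102.
print(rule_of_succession(100, 100))          # 101/102
print(float(rule_of_succession(100, 100)))   # ~0.990
```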

Comment author: HoverHell 27 January 2014 01:03:34PM *  1 point [-]

A sidetracking bunch of badly-formed questions / suggestions:

  • In which ways (with which predictions) could a bridge hypothesis account for tampering with the agent's mechanism (“implementation”), e.g. the proverbial anvil? Could the “correct” bridge hypothesis change if part of the agent is destroyed, or, if not, would that require a more complex bridge hypothesis (one that is never verified in practice)?
  • Is it supposed to be possible to define a single “correct” bridge mapping for some agent other than oneself?
  • Is the location of the agent in a world part of the bridge hypothesis, or a given? If it is not a given, the mapping should be from a model of the whole world, not from some particular part of it such as a notebook or a brain (and the notion of “self” would be part of the hypothesis as well).
Comment author: Armok_GoB 14 January 2014 01:09:33AM 1 point [-]

Well, yeah, that's possible, and given that I generally suck at introspection, even plausible. However, is it relevant? If I don't experience experiencing something, then in what sense is it me experiencing it and not some other entity that may or may not be residing in the same brain?

Comment author: HoverHell 14 January 2014 12:05:31PM 2 points [-]

There is a bit of relevance; however, you are also touching on the topic of “personal identity”, which is as yet too undeveloped to go into.

As a side note, it seems likely that bridge hypotheses can be built for some other entity (other than oneself), but there are inevitably fewer constraints on their validity and thus less concentration of probability over them; as a result, multiple hypotheses can easily be comparably plausible, basically turning the situation into “multiple [possibly overlapping or non-overlapping] experiencing entities in any brain”.

The ethical implications can be funny but, at this point, are all too far-fetched.
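A minimal numerical sketch of the “fewer constraints, less concentration of probability” point above (the candidate mappings, likelihoods, and counts are all invented for illustration): with few observations bearing on which bridge mapping is correct, the posterior over candidates stays flat, so several mappings remain comparably plausible.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_over_mappings(n_constraints: int, n_hypotheses: int = 4) -> np.ndarray:
    """Toy posterior over candidate bridge mappings.

    Each 'constraint' is an observation that the true mapping (index 0)
    predicts correctly with probability 0.9, while the others predict
    correctly with probability 0.5.  More constraints -> more evidence."""
    log_post = np.zeros(n_hypotheses)
    for _ in range(n_constraints):
        correct = rng.random() < 0.9  # outcome generated by the true mapping
        p_correct = np.array([0.9] + [0.5] * (n_hypotheses - 1))
        log_post += np.log(p_correct if correct else 1 - p_correct)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

for n in (0, 3, 30):
    print(n, np.round(posterior_over_mappings(n), 3))
# With 0 or 3 constraints several mappings stay comparably plausible;
# with many more, the posterior typically concentrates on one.
```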

Comment author: TheOtherDave 24 December 2013 12:24:55AM 5 points [-]

FWIW, I don't really see what the line you quote adds to the discussion here.

I mean, I believe that preferences for white wine over red or vice-versa are strictly subjective; there's nothing objectively preferable about one over the other. It doesn't follow that I can say very firmly "I now resolve to prefer red wine!" and subsequently experience red wine as preferable to white wine. And from this we conclude... nothing much, actually.

Conversely, if Eliezer said "Only people who win the lottery are me!" and the next day the numbers Eliezer picked didn't win, and when I talked to Eliezer it turned out they genuinely didn't identify as Eliezer anymore, and their body was going along identifying as someone different... it's not really clear what we could conclude from that, either.

Comment author: HoverHell 14 January 2014 11:53:14AM 1 point [-]

The thing we can indirectly conclude is that “social identity” (“when I talked to … genuinely didn't identify as”) and “personal identity” (whatever that is) can be (at least intuitively) separate.

There's something about subjective perception constituting facts, and bridge hypotheses having a validity measure (based on predicting those facts) despite being “subjective”; but I can't make a better formulation either.

And there's also something about “I now resolve to prefer red wine” possibly working in the same way as “I now set my desktop background to white” (and possibly failing just as well).

Comment author: IlyaShpitser 27 December 2013 07:27:37PM 1 point [-]

[ I am sure this is not new to anyone who's been thinking about this. ]

I worry that the world is not benign in general, but contains critters that want to mislead you to win utility. There might be such critters that may exploit the difficulty of the problem faced by Cai by trying to mislead Cai into constructing the wrong mapping from "stuff out there" to "stuff in Cai" (of course even the easy version is hard :( ).

Comment author: HoverHell 13 January 2014 06:05:32PM 1 point [-]

The mapping might be relatively easy to check (i.e. it would be hard to construct a misleading mapping that would look plausible).

Misleading an agent to believe that agent has a different utility function could be more interesting though.
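A rough sketch of why a proposed mapping could be “relatively easy to check” (the data layout, the candidate mappings, and the helper name `check_mapping` are all hypothetical): if a candidate bridge mapping must reproduce the agent's actual sensory record, a deceptive mapping has to match every observation, so even modest spot-checking exposes mismatches quickly.

```python
from typing import Callable, Sequence

def check_mapping(candidate: Callable[[int], int],
                  world_states: Sequence[int],
                  observed: Sequence[int]) -> bool:
    """Spot-check a candidate bridge mapping: it must reproduce every
    recorded observation from the corresponding (modelled) world state."""
    return all(candidate(w) == o for w, o in zip(world_states, observed))

# Hypothetical usage: the faithful mapping matches the record, a tampered one fails.
world_states = list(range(10))
observed = [w % 3 for w in world_states]      # what the agent actually sensed
true_map = lambda w: w % 3
tampered_map = lambda w: (w + 1) % 3
print(check_mapping(true_map, world_states, observed))      # True
print(check_mapping(tampered_map, world_states, observed))  # False
```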

Comment author: RobbBB 24 December 2013 07:05:41PM *  3 points [-]

I guess I think it is distracting. Someone like Chalmers is unlikely to be convinced

Convinced of what? The only thing the paragraph you cited mentions is that (a) the hard problem concerns bridge hypotheses, and (b) the hard problem arises for minds (and not, say, squirrels or digestion) and is noticed by minds because minds type their subprocesses differently. Are those especially partisan or extreme statements? What would Chalmers' alternatives to (a) or (b) be?

I bring up the hard problem here because it's genuinely relevant. It's a real problem, and it really is hard. It's not a confusion, or if it is then it's not obvious how best to dissolve it. If the framework I provide above helps philosophers and psychological theorists like Chalmers come up with new and better theories for how human consciousness relates to neural computations, so much the better.

Comment author: HoverHell 13 January 2014 04:21:17PM 1 point [-]

It's not a confusion, or if it is then it's not obvious how best to dissolve it.

Note that there are many views and formulations that are all called “the hard problem of consciousness”, even though some of them are sufficiently different to need separate consideration (and sufficiently different for one formulation to need a solution and another to need dissolution).

Also, I suspect that at least one formulation called “the hard problem of consciousness” can be interpreted as “figuring out the most plausible bridge mapping (given a physical world) for myself”.

Comment author: alexflint 24 December 2013 04:02:43AM *  2 points [-]

I think we should be at least mildly concerned about accepting this view of agents in which the agent's internal information processes are separated by a bright red line from the processes happening in the outside world. Yes I know you accept that they are both grounded in the same physics, and that they interact with one another via ordinary causation, but if you believe that bridging rules are truly inextricable from AI then you really must completely delineate this set of internal information processing phenomena from the external world. Otherwise, if you do not delineate anything, what are you bridging?

So this delineation seems somewhat difficult to remove and I don't know how to collapse it, but it's at least worth questioning whether it's at this point that we should start saying "hmmmm..."

One way to start to probe this question (although this does not come close to resolving the issue) is to think about an AI already in motion. Let's imagine an AI built out of gears and pulleys, which is busy sensing, optimizing, and acting in the world, as all well-behaved AIs are known to do. In what sense can we delineate a set of "internal information processing phenomena" within this AI from the external world? Perhaps such a delineation would exist in our model of the AI, where it would be expedient indeed to postulate that the gears and pulleys are really just implementing some advanced optimization routine. But that delineation sounds much more like something that should belong in the map than in the territory.

What I'm suggesting is that starting with the assumption of an internal sensory world delineated by a bright red line from the external world should at least give us some pause.

Comment author: HoverHell 13 January 2014 04:15:11PM *  1 point [-]

Indeed, in the better case the said "red line" is itself part of the bridge mapping; quite possibly without there even being anything that could be seen as such a line. Still, there's an inevitably implied point which, in a physical case, is the point where information enters some "optimization (and decision-making) routine", even if it might not be clear where in the physical structures that point is implemented (which would be, again, part of the bridge mapping).

Meta: I suspect we both need clearer ways to state these thoughts.

Comment author: Armok_GoB 23 December 2013 08:53:05PM 2 points [-]

I always get confused by these articles about "experience", but this is a good article because I get confused in an interesting way (and also a less condescending way).

Normally, I just shrug and say "well, I don't have one". In regard to "human-style conscious experience" my answer is probably still "well, I don't have one". However, clearly, even if imperfect, there is some sort of semi-functional agency behavior in this brain, and so I must have some form of bridge hypothesis and sensory "experience"... but I can't find either. I can track the chain of causality from retina to processing to stored beliefs to action, but no point seems privileged or subjective, and yet it doesn't feel like there's anything missing or anything mysterious the way other people describe.

Thus, a seeming discrepancy: I can't find any flaw in your argument that any agent must have feature X, but I have an example of an agent in which I cannot find X. In at least one of the objects I've examined, I must have missed something.

Comment author: HoverHell 13 January 2014 03:56:36PM *  1 point [-]

Note: while this may appear insulting, that is not the intent; I simply do not have any better wording.

"well, I don't have one"

That means you don't know of one, and that you cannot (behaviourally) speak of one. Which, for example, could mean that the concept of “own experience” is not part of your experience (rather than that you don't have said “experience”); that would be a claim about your introspective abilities.

That being said, there are some assumed properties of intelligence that are not critically required for highly apparently-intelligent behaviour (for example, a notion of a physical model that involves physical particles; or an explicit notion of the said bridge, which could be built in as implicit and even work correctly in most cases, with built-in patches to avoid the anvil problem).

Comment author: HoverHell 15 January 2013 04:42:29PM *  1 point [-]

these were not binary yes/no predictions

And how would it be most appropriate to correct for that? Normalizing against a random baseline over all alternative predictions (those that were made, or that could be come up with)?

(with non-binary predictions those graphs, it seems to me, become relatively useless)
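One possible way to handle the non-binary case (a standard proper-scoring-rule approach, not necessarily what the graphs in question used; the three-way prediction below is invented): score each prediction by the probability it assigned to the outcome that actually happened, e.g. log loss, which generalizes naturally beyond yes/no predictions and gives a “know-nothing” baseline to normalize against.

```python
import math

def log_loss(predicted_probs: dict, actual_outcome: str) -> float:
    """Negative log probability assigned to the outcome that occurred.
    Lower is better; works for any number of mutually exclusive outcomes."""
    return -math.log(predicted_probs[actual_outcome])

# Hypothetical three-way prediction instead of a yes/no one:
prediction = {"candidate A": 0.6, "candidate B": 0.3, "candidate C": 0.1}
print(round(log_loss(prediction, "candidate B"), 3))  # 1.204
# A uniform "know-nothing" forecast scores -ln(1/3) ~= 1.099 per question,
# giving a baseline against which calibration can be normalized.
```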
