All of bouilhet's Comments + Replies

Thanks for the reply, Robb. I've read your post and a good deal of the discussion surrounding it.

I think I understand the general concern, that an AI that either doesn't understand or care about our values could pose a grave threat to humanity. This is true on its face, in the broad sense that any significant technological advance carries with it unforeseen (and therefore potentially negative) consequences. If, however, the intelligence explosion thesis is correct, then we may be too late anyway. I'll elaborate on that in a moment.

First, though, I'm no...

Why do I reject "intentionality if and only if subjective experience"? For one thing, there are simple states of consciousness - moods, for example - that have no intentionality, so subjectivity fails to imply intentionality. Nor can I see any reason that the implication holds in the direction from intentionality to subjectivity.

I think this is a bit confused. It isn't that simple states of consciousness, qualia, etc. imply intentionality, but rather that they are prerequisites for intentionality. X if and only if Y just means there can be no X ...

torekp
"Intentionality" is an unfortunate word choice here, because it's not primarily about intention in the sense of will. Blame Brentano, and Searle for following him, for that word choice. Intentionality means aboutness, i.e. a semantic relation between word and object, belief and fact, or desire and outcome. The last example shows that intention in the sense of will is included within "intentionality" as Searle uses it, but it's not the only example. Your argument is still plausible and relevant, and I'll try to reply in a moment. As you suggest, I didn't even bother trying to argue against the contention that qualia are prerequisite for intentionality. Not because I don't think an argument can be made, but mainly because the Less Wrong community doesn't seem to need any convincing, or didn't until you came along. My argument basically amounts to pointing to plausible theories of what the semantic relationship is, such as teleosemantics or asymmetric dependence, and noting that qualia are not mentioned or implied in those theories. Now to answer your argument. I do think it's conceivable for an agent to have intentions to act, and have perceptions of facts, without having qualia as we know them. Call this agent Robbie Robot. Robbie is still a subject, in the sense that, e.g. "Robbie knows that the blue box fits inside the red one" is true, and expresses a semantic relation, and Robbie is the subject of that sentence. But Robbie doesn't have a subjective experience of red or blue; it only has an objective perception of red or blue. Unlike humans, Robbie has no cognitive access to an intermediate state between the actual external world of boxes, and the ultimate cognitive achievement of knowing that this box is red. Robbie is not subject to tricks of lighting. Robbie cannot be drugged in a way that makes it see colors differently. When it comes to box colors, Robbie is infallible, and therefore there is no such thing as "appears to be red" or "seems blue" to Robbie. T

The Lecter/AI analogy occurred to me as well. The problem, in strategic (and perhaps also existential) terms, is that Starling/Gatekeeper is convinced that Lecter/AI is the only one holding the answer to some problem that Starling/Gatekeeper is equally convinced must be solved. Lecter/AI, that is, has managed to make himself (or already is) indispensable to Starling/Gatekeeper.

On a side note, these experiments also reminded me of the short-lived game show The Moment of Truth. I watched a few episodes back when it first aired and was mildly horrified. ...

Your response avoids the basic logic here. A human emulation would count as an AI, therefore human behavior is one possible AI behavior. There is nothing controversial in the statement; the conclusion is drawn from the premise. If you don't think a human emulation would count as an AI, or don't think one is possible, or something else, fine, but... why wouldn't a human emulation count as an AI? How, for example, can we even think about advanced intelligence, much less attempt to model it, without considering human intelligence?

...humans respond to orders and requests

...

I think the point is that if you accept this definition of intelligence, i.e. that it requires the ability to form deep and reliable abstractions about the world, then it doesn't make sense to talk about any intelligence (let alone a super one) being unable to differentiate between smiley-faces and happy people. It isn't a matter, at least in this instance, of whether it cares to make that differentiation or not. If it is intelligent, it will make the distinction. It may have values that would be unrecognizable or abhorrent to humans, and I suppose that...

Kenny
Yes; good points! Do note that my original comment was made eight years ago! (At least – it was probably migrated from Overcoming Bias if this post is as early as it seems to be.) So I have had some time to think along these lines a little more :)

But I don't think intelligence itself can lead one to conclude as you have: it's not obvious to me now that any particular distinction will be made by any particular intelligence. There may not be literally infinite possible ontologies with which to make distinctions, but there's still a VAST number. The general class of 'intelligent systems' is almost certainly WAY more alien than we can reasonably imagine. I don't assume that even a 'super-intelligence' would definitely ever "differentiate between smiley-faces and happy people".

But I don't remember this post that well, and I was going to re-read it before I remembered that I didn't even know what I was originally replying to (as it didn't seem to be the post itself), and re-constructing the entire context to write a better reply is something my temporal margin "is too narrow to contain" at the moment. But I think I still disagree with whatever Shane wrote!
Rob Bensinger
I wrote a post about this! See The genie knows, but doesn't care. It may not make sense to talk about a superintelligence that's too dumb to understand human values, but it does make sense to talk about an AI smart enough to program superior general intelligences that's too dumb to understand human values. If the first such AIs ('seed AIs') are built before we've solved this family of problems, then the intelligence explosion thesis suggests that it will probably be too late. You could ask an AI to solve the problem of FAI for us, but it would need to be an AI smart enough to complete that task reliably yet too dumb (or too well-boxed) to be dangerous.

Occam's razor is, of course, not an arbitrary rule nor one justified by its practical success. It simply says that unnecessary elements in a symbolism mean nothing.

Signs which serve one purpose are logically equivalent, signs which serve no purpose are logically meaningless.

-- Ludwig Wittgenstein, Tractatus Logico-Philosophicus 5.47321
bouilhet

The conscientious. - It is more comfortable to follow one's conscience than one's reason: for it offers an excuse and alleviation if what we undertake miscarries--which is why there are always so many conscientious people and so few reasonable ones.

-- Nietzsche

Thanks for clarifying. The wording seems odd to me, but I get it now.

bouilhet

How is this so? Surely, as a general proposition, ignorance and intention are much more loosely correlated than the quote suggests. What if the statement were altered slightly: "If (after great effort and/or reflection and/or prayer) you (still) don't know..." Does it still make sense to speak of intention? Or if the point is that the failure to solve a simple problem indicates a will to fail, well then the author has more faith in human will than I do, and IMO greatly underestimates the possible ways of not-knowing.

You're misreading the quote. The intention is on the part of the person who designed the gun, not the person who's trying to fire it.

Geulincx, from his own annotations to his Ethics (1665):

...our actions are as it were a mirror of Reason and God's law. If they reflect Reason, and contain in themselves what Reason dictates, then they are virtuous and praiseworthy; but if they distort Reason's reflection in themselves, then they are vicious and blameworthy. This has no effect on Reason, or God's law, which are no more beautiful or more ugly for it. Likewise, a thing represented in a mirror remains the same whether the mirror is true and faithfully represents it, or whether it is false

...

Thanks for your reply, hen.

I guess I don't think you're making a truth claim when you say that the car you see is cream-colored. You're just reporting an empirical observation. If, however, someone sitting next to you objected that the same car was red, then there would be a problem to sort out, i.e. there would be some doubt as to what was being observed, whether one of you were color blind, etc. And in that case I think you would desire your perception to be the accurate one, not because cream-colored is better than red, but because humans, I think, g...

[anonymous]
Thanks for clarifying.

Hello everyone.

I go by bouilhet. I don't typically spend much time on the Internet, much less in the interactive blogosphere, and I don't know how joining LessWrong will fit into the schedule of my life, but here goes. I'm interested from a philosophical perspective in many of the problems discussed on LW - AI/futurism, rationalism, epistemology, probability, bias - and after reading through a fair share of the material here I thought it was time to engage. I don't exactly consider myself a rationalist (though perhaps I am one), but I spend a great de...

[anonymous]
So, at the moment I believe that the car I can see out the window to the left of me is cream-colored. I don't think this belief is one I desire to be true (I would not be disappointed with a red car, for example). I have (depending on how you count) an infinity of such beliefs about my immediate environment. What do you make of these beliefs, given your above claim?