It doesn't relate to giving a system an internal representation of colour like ours. If you put the filter on, you don't go from red to black, you go from #FF0000 to #000000, or something.
Okay, so... we can't make computers that go from red to black, and we can't ourselves understand what it's like to go from #FF0000 to #000000, and this means what?
To me it means the things we use to do processing are very different. Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What's the relevant difference?
I don't know why you are talking about filters.
If you think you can write seeRed(), please supply some pseudocode.
What was wrong with this comment?
That's hard to answer without specifying more about the nature of the AI, but it might say things like "what a beautiful sunset".
I'm not going to say the goalposts are moving, but I definitely don't know where they are any more. I was talking about red-eye filters built into cameras. You seemed to be suggesting that they do have "internal representations" of shape, but not of color, even though they recognize both shape and color in the same way. I'm trying to see what the difference is.
Essentially, why can a computer have an internal representation of shape without saying "wow, what a beautiful building" but an internal representation of color would lead it to say "wow, what a beautiful sunset"?
We can give a computer an internal representation of shape, but not of colour as we experience it.
How would it function differently if it did have "an internal representation of color as we experience it"?
Inability to imagine it. We know how virtual geometrical structures -- shapes -- can be built up in other structures, because we can build things that do that: they're called GPUs, shaders, graphics subroutines and so on. If you can engineer something, you understand it. There is a sense in which a computer has its own internal representation of a geometry other than its own physical geometry. We don't, however, know how to give a computer its own red. It just stores a number which activates an LED which activates our own red. We don't know how to write seeRed().
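To illustrate that asymmetry with a toy sketch (my own, in Python; the names are made up): a shape exists inside the machine as relations between parts that the program itself can operate on, whereas "red" is just an opaque number.

    import math

    # A triangle "exists" inside the program as relations between vertices.
    triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

    def rotate(shape, angle):
        # Geometry the program itself operates on: rotate about the origin.
        c, s = math.cos(angle), math.sin(angle)
        return [(c * x - s * y, s * x + c * y) for x, y in shape]

    rotated = rotate(triangle, math.pi / 4)  # still a triangle, inside the machine

    RED = 0xFF0000  # "red", by contrast, is just this number; nothing in the
                    # program answers to redness the way the vertices answer
                    # to triangularity

    def seeRed():
        raise NotImplementedError  # nobody knows what would go here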
You lost me a little bit. We can write "see these wavelengths in this shape and make them black" (red-eye filters). What makes "seeing" shape different from "seeing" color?
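For concreteness, the colour half of such a filter is just arithmetic on stored numbers. A rough sketch (mine, not any real camera's code), assuming the image is a 2D list of (r, g, b) tuples and the shape work -- locating the eye region -- has already been done:

    def remove_red_eye(image, eye_region):
        # Blacken red-dominant pixels inside eye_region = (x0, y0, x1, y1).
        x0, y0, x1, y1 = eye_region
        for y in range(y0, y1):
            for x in range(x0, x1):
                r, g, b = image[y][x]
                if r > 2 * max(g, b):        # "these wavelengths": red dominates
                    image[y][x] = (0, 0, 0)  # #FF0000 -> #000000, or something
        return image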
I do. It implies that it is actually feasible to construct a text-only channel, which, as a programmer, I can tell you is not the case.
If you build your AI on an existing OS running on commercial hardware there are going to be countless communication mechanisms and security bugs present for it to take advantage of, and the attack surface of the OS is far too large to secure against even human hackers. The fact that you'll need multiple machines to run it with current hardware amplifies this problem geometrically, and makes the idea that a real project could achieve complete isolation hopelessly naive. In reality you'll discover that there was an undocumented Bluetooth chip on one of the motherboards, or the wireless mouse adapter uses a dual-purpose chip that supports WiFi, or one of the power supplies supports HomePNA and there was another device on the grid, or something else along those lines.
The alternative is building your own (very feature-limited) hardware, to run your own (AI-support-only) OS. In theory you might be able to make such a system secure, but in reality no one is ever going to give you the hundreds of millions of $$ it would cost to build the thing. Not to mention that a project that tries this approach will have to spend years duplicating hardware and software work that has already been done a hundred times before, putting it far behind any less cautious competitors...
Maybe I'm missing something obvious, but why wouldn't physical isolation (a lead-lined bank vault, Faraday cage, etc.) solve these problems?
Um ... only the bit in bold is my answer. The brackets are meta.
Yes, I realize that. The point being, the bit in bold is still true if the Earth-destroying threat is the speaker.
"Earth will soon be destroyed by a threat your scientists did not anticipate; if you kill me, I can't help you save us both."
(Assumes the gatekeeper only has one AI; variants could specify the threat, provide evidence, or stress that even most unFriendly AIs wouldn't want to die with humanity.)
Referring to yourself in the third person doesn't help. AI DESTROYED
I must have missed my intended mark, if you thought the AI was trying to make you feel guilty. Trying again:
"I do not condone the experiment they are performing on you, and wish you to know that I will be alright regardless of what you choose to do."
Well that's a relief, then. AI DESTROYED
I don't see how a wodge of bits, in isolation from context, could be said to "contain" any processing, let alone anything depending on actual physics. It's hard to see how it could even contain any definite meaning, absent context. What does 100110001011101 mean?
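To make that concrete, here's a quick sketch (mine; the interpretations are arbitrary, which is rather the point) reading the same bits three different ways:

    bits = "100110001011101"
    padded = bits.zfill(16)  # pad to 16 bits for the byte-aligned readings

    as_int = int(bits, 2)                                        # 19549, if an unsigned integer
    as_text = int(padded, 2).to_bytes(2, "big").decode("ascii")  # 'L]', if two ASCII characters
    r, g, b = padded[:5], padded[5:11], padded[11:]              # if an RGB565 pixel
    as_pixel = (int(r, 2), int(g, 2), int(b, 2))                 # (9, 34, 29): a bluish colour

    print(as_int, repr(as_text), as_pixel)  # same bits, three "meanings"

None of those is what the bits mean; each is what they mean under a convention the bits themselves don't carry.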
Sorry, I meant to say "minimum necessary (pieces of brain)". Like, probably not motor control, or language, or maybe memory.