Comment author: RichardKennaway 19 July 2009 08:36:25AM 26 points [-]

"I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me."

That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren't technically informed on the subject, which will be most people.

Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below-human-level systems. For example, what Wolfram Alpha would be, if all the hype were literally true. Autopilots for cars that you can just speak your destination to, and they will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you -- really read, not just look for keywords -- and bring to your attention the things they've learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you're studying. Silicon friends good enough that you may not be able to tell whether you're talking with a human or a bot, and in virtual worlds like Second Life, people won't want to.

I predict:

  • People will anthropomorphise these things. They won't just have the "sensation" that they're talking to a human being; they'll do theory of mind on them. They won't be able not to.

  • The actual principles of operation of these systems will not resemble, even slightly, the "minds" that people will project onto them.

  • People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and they will still experience the illusion.

And because of that, systems at that level will be dangerous already.

Comment author: kapirossi 21 July 2009 09:34:47AM 2 points [-]

Here I thought about the "Systems that ... bring to your attention the things [they've] learned you want to see." A system that has "learned" might bring some things to your attention and omit others. What if the omitted things are the "true" ones, or the ones that are really necessary? If so, then we cannot consider the AI as having an explicit goal of telling the truth, as Eliezer noted. Or it is not capable of telling the truth, truth in this case being what the human considers to be true.

In response to comment by Emily on Media bias
Comment author: anonym 06 July 2009 01:20:15AM 0 points [-]

Good point. It's certainly easier to quickly get feedback during a lecture. If academic writers really wanted to communicate understanding as much as [great] lecturers do, it would certainly be more difficult for them to do so, for this and other reasons, than it is over repeated lectures. I'm just skeptical that the desire is actually there to anywhere near the same degree.

And it's not just a matter of different media. Consider a brilliant young researcher giving a seminar (i.e., spoken medium) on her research. Does she optimize for understanding or for making the strongest impression and convincing her peers that her research is important and original?

In response to comment by anonym on Media bias
Comment author: kapirossi 07 July 2009 05:31:35AM 0 points [-]

I agree, and it depends on the lecturer, of course. Experienced lecturers seem to be more auditory-oriented than aspiring ones.

There's also the pace of learning to be considered: one sets one's own pace when learning from text, as opposed to a speed and tempo chosen, and changed, by the lecturer.

I think that a combination of both is the most effective way.