
ChristianKl comments on Open Thread, September 30 - October 6, 2013 - Less Wrong Discussion

4 Post author: Coscott 30 September 2013 05:18AM


Comment author: ChristianKl 06 October 2013 12:34:04PM 0 points [-]

Wikipedia:

In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]

How do you know, when you work on a project like Watson, whether the work you are doing is dangerous and could result in producing a UFAI? Didn't they essentially build an oracle AGI?

What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?

Comment author: Moss_Piglet 06 October 2013 06:30:29PM 3 points [-]

What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?

Why would they talk to MIRI about it at all?

They're the ones with the actual AI expertise, having built the damn thing in the first place, and have the most to lose from any collaboration (the source code of a commercial- or military-grade AI is a very valuable secret). Furthermore, it's far from clear that there is any consensus in the AI community about the likelihood of a technological singularity (especially the subset to which FOOMs belong) and its associated risks. From their perspective, there's no reason to pay MIRI any attention at all, much less bring them in as consultants.

If you think that MIRI ought to be involved in those decisions, maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.

Comment author: ChristianKl 07 October 2013 01:48:42PM 0 points [-]

If you think that MIRI ought to be involved in those decisions

As far as I understand, it's MIRI's position that they ought to be involved when dangerous things might happen.

maybe first articulate what benefit the AI researchers would gain from collaboration in terms that would be reasonable to someone who doesn't already accept any of the site dogmas or hold EY in any particular regard.

But what goes for someone who does accept the site dogma's in principle but still does some work in AI.

Comment author: Moss_Piglet 07 October 2013 02:39:16PM 1 point [-]

But what goes for someone who does accept the site dogma's in principle but still does some work in AI.

I'm sorry, I didn't get much sleep last night, but I can't parse this sentence at all. Could you rephrase it for me?

Comment author: solipsist 06 October 2013 07:47:52PM 2 points [-]

Didn't they essentially build an oracle AGI?

No, they very much didn't.

Comment author: drethelin 06 October 2013 04:35:12PM 0 points [-]

Well, step one is ever having heard of MIRI, or having thought about UFAI in any context other than HAL or Skynet.

Comment author: ChristianKl 06 October 2013 05:33:33PM 0 points [-]

I doubt that's enough. If someone still wants to do AI research after having heard of UFAI, he needs some decision criteria to decide when it's time to contact MIRI.

Comment author: shminux 06 October 2013 06:23:11PM -1 points [-]

The decision criteria are easy: talk/listen to the recognized AI research experts with a proven track record. Then weigh their arguments, as well as those of MIRI. It's the weight assignment that's not obvious.

Comment author: ChristianKl 06 October 2013 06:28:55PM *  0 points [-]

If you have a potentially dangerous idea then talking to recognized AI research experts might itself be dangerous.

Comment author: shminux 06 October 2013 07:05:21PM -1 points [-]

No, not really. If the situation is anything like that in math, physics, chemistry or computer science, then unless you put your 10k hours into it, your odds of coming up with a new idea are remote.

Comment author: ChristianKl 07 October 2013 02:06:10PM 0 points [-]

I don't believe that to be true, as ideas can sometimes come from integrating knowledge of different fields.

An anthropologist who learned a new paradigm about human reasoning from studying the way some African tribe reasons about the world can reasonably bring a new idea into computer science. He will need some knowledge of computer science, but not 10k hours.

In http://meaningness.com/metablog/how-to-think David Chapman describes how he tackled AI problems by drawing on various mental tools.

One problem he tackled wasn't that difficult if you had knowledge of a certain field of logic. He solved another problem through anthropology. According to him, advances are often a function of having access to a particular mental tool that no one else who tackled the problem had.

Putting in a lot of time means that you have access to a lot of tools and know of many problems. But if you put all your time into learning the same tools that people in the field already use, you probably don't have many mental tools that few people in the field possess.

Paradigm-changing inventions often come into fields through people who are insider/outsiders: they are enough of an insider to understand the problem, but they bring expertise from another field. See "The Economy of Cities" by Jane Jacobs for more on that point.

Comment author: shminux 07 October 2013 08:33:51PM -1 points [-]

I concede that a math expert can start usefully contributing to a math-heavy area fairly quickly. Having expertise in an unrelated area can also be useful, but as a supplement, not a substitute. I do not recall a single amateur having contributed to math or physics in the last century or so.

Comment author: ChristianKl 07 October 2013 09:23:42PM 0 points [-]

Do you consider the invention of the Chomsky hierarchy to lie outside the field of math? Do you think that Chomsky had 10k hours of math expertise when he wrote it down?

Regardless, having less than 10k hours in a field and being an amateur are two different things.

I don't hold economists in very high regard, but I would expect that at least one of them has contributed at least a little bit to physics.

I remember chatting with a friend who studies math and computer science. My background is bioinformatics. If my memory is right, he was working on a project that an applied mathematics group gave him because he knew something about mathematical technique XY. He needed to find some constants that were useful for another algorithm. He had a way to evaluate the utility of a given value as a constant. His problem was that he had a 10-dimensional search space and didn't really know how to search it effectively.

In my bioinformatics classes I learned algorithms that you can use for a task like that. I'm no math expert, but on that particular problem I could still provide useful input.
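The kind of high-dimensional search described above can be attacked with fairly generic tools. As an illustration only (the actual objective function and algorithm from the anecdote are unknown; `evaluate` here is a hypothetical stand-in), a minimal stochastic hill-climbing sketch in Python might look like:

```python
import random

def evaluate(candidate):
    # Hypothetical stand-in for "how useful is this vector of constants?".
    # Peaks at 0 when every coordinate equals 0.5.
    return -sum((x - 0.5) ** 2 for x in candidate)

def hill_climb(dims=10, steps=2000, step_size=0.1, seed=0):
    """Stochastic hill climbing over the box [0, 1]^dims."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dims)]
    best_score = evaluate(best)
    for _ in range(steps):
        # Perturb one randomly chosen coordinate; keep the move only if it helps.
        cand = list(best)
        i = rng.randrange(dims)
        cand[i] = min(1.0, max(0.0, cand[i] + rng.uniform(-step_size, step_size)))
        score = evaluate(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

This is the crudest member of the family; the bioinformatics toolbox for such problems also includes simulated annealing and genetic algorithms, which trade some greediness for a better chance of escaping local optima.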

I would expect that there are quite a few areas where statistical tools developed within bioinformatics can be useful for people outside of it.

But to come back to the topic of AI: a math expert working in some obscure subfield of math could plausibly do something that advances AI a lot without being an AI expert himself.

Comment author: shminux 07 October 2013 09:36:11PM -1 points [-]

Do you consider the invention of the Chomsky hierarchy to lie outside the field of math?

Don't know. Maybe a resident mathematician would chime in.

I don't hold economists in very high regard, but I would expect that at least one of them has contributed at least a little bit to physics.

I am not aware of any. Possibly something minor, who knows.

But to come back to the topic of AI: a math expert working in some obscure subfield of math could plausibly do something that advances AI a lot without being an AI expert himself.

Yes, indeed, that sounds quite plausible. Whether this something is important enough to be potentially dangerous is a question to be put to an expert in the area.