jsalvatier comments on What are you working on? - Less Wrong

9 Post author: jsalvatier 06 October 2011 04:19PM

Comment author: anotheruser 06 October 2011 05:14:26PM 1 point [-]

I have been trying to invent an AI for over a year, although I haven't made a lot of progress lately. My current approach is a bit similar to how our brain works according to "Society of Mind". That is, when finished, the system is supposed to consist of a collection of independent, autonomous units that can interact and create new units. The tricky part is, of course, the prioritization between the units: how can you evaluate how promising an approach is? I recently found out that something like this has already been tried, but that has happened to me several times by now, as I started thinking and writing about AI before I had read any books on the subject (I didn't have a decent library in school).

I have no great hopes that I will actually manage to create something useful with this, but even a tiny probability of a working AI is worth the effort (as long as it's friendly, at least).

Comment author: jsalvatier 06 October 2011 05:37:51PM 10 points [-]

I suspect some people here will have a negative reaction to your comment. Your approach comes off as not very serious, your last paragraph sounds like reasoning from conclusion to argument, and your mention of friendliness seems like an afterthought.

Comment author: anotheruser 07 October 2011 09:42:04AM 0 points [-]

I assure you that I have thought a lot about friendliness in AI. I just don't think that it is reasonable, or indeed possible, to give the AI a moral system from the very start. You can't define morality well if the AI doesn't already have a good understanding of the world. Of course it shouldn't be taught too late under any circumstances, but I actually think the risk will be higher if you try to hardcode friendliness into the AI at the very beginning (which will necessarily be flawed, because you have so little to use in your definition) and then work under the assumption that the AI is friendly already and will stay so, than if you implement friendliness only later, once it actually understands the concepts involved. The difference would be like that between the moral understanding of a child and of an adult philosopher.

Comment author: Vladimir_Nesov 06 October 2011 07:49:40PM 5 points [-]

Have you read a good AI/machine learning textbook, like AIMA or shorter Mitchell's book? Let your goal drive you to study and learn and refine yourself and become stronger.

Comment author: anotheruser 07 October 2011 08:50:27AM 2 points [-]

I read the first one, but it didn't really cover learning in a general sense. The second one sounds more interesting; I wonder why I haven't heard of it before. Do you know where I can get it? I'm a student and thus have very little money, and I don't want to spend $155 only to find out it contains only stuff I've already read elsewhere.

Comment author: Vladimir_Nesov 07 October 2011 05:33:11PM 4 points [-]

OK, if you've read AIMA and still want to become a Dark Lord, I don't know if I should encourage you on this path. My impression is that Mitchell's textbook covers less material than AIMA, though I didn't read AIMA.

Comment author: anotheruser 07 October 2011 07:19:50PM 0 points [-]

What gives you the impression that I "want to be a Dark Lord"? I have already explained that I realize the importance of friendliness in AI. I just don't think it is reasonable to teach the AI the intricacies of ethics before it is smart enough to grasp the concept in its entirety. You don't read Kant to infants either. I think that implementing friendliness too soon would actually increase the chances of misunderstanding, just as children who are taught hard concepts too early often have a hard time updating their beliefs once they are actually smart enough. You would just need to give the AI a preliminary non-interference task until you find a solution to the friendliness problem. You might also need to add some contingency tasks, such as "if you find you are not the original AI but an illegally made copy, try to report this, then shut down."

Comment author: Vladimir_Nesov 07 October 2011 09:25:56PM *  3 points [-]

It's not possible to explain what you don't know, or to answer a question you can't state, and "intelligence" doesn't save you from this trouble: it doesn't open the floodgates to arbitrary helpfulness, resolving any difficulties you have. It just does its thing really well, and it's up to its designers to choose the right thing as its optimization criterion. Doing the wrong thing very well, on the other hand, is in no one's interest. This is a brittle situation, where vagueness in understanding the goal leads to arbitrary and morally desolate outcomes.

Comment author: jsalvatier 07 October 2011 01:34:41PM *  1 point [-]

Searching booksprice.com yields a used copy for $40. You can also find a lot of books online through torrents and the like.

Comment author: anotheruser 07 October 2011 07:30:29PM *  0 points [-]

Thanks for the effort, but I just found out that my university's library does have the book after all. I overlooked it at first because the library's search engine is broken.

Comment author: pedanterrific 06 October 2011 05:30:30PM 2 points [-]

Somehow "DANGER WILL ROBINSON" doesn't seem to quite cover it.