jacob_cannell comments on Open Thread, Jun. 29 - Jul. 5, 2015 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"AI safety" suffers from some of the same terminology problem as "computer science".
It is written that "computer science is no more about computers than astronomy is about telescopes." The facts of computer science would be true even if there were no computers: facts such as the relative efficiency of different algorithms, or various ways to index records. If the quicksort or the hash table had been discovered in a world without computers, we would think of them as belonging to library science, or bookkeeping, or some other discipline dealing with information. Concurrency and parallelism might belong to the field of management, describing ways to effectively instruct workers on complex tasks without wasting everyone's time blocked on each other or in meetings. Computer science is about algorithms and processes, not the computers that run them.
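To make the point concrete: a claim like "quicksort sorts correctly, using O(n log n) comparisons on average" is a property of the procedure itself, one that would hold in a world of filing clerks just as well as on silicon. A minimal sketch (my own illustration, not from the original comment) makes the hardware-independence visible — the "fact" is just a count of comparisons between items:

```python
def quicksort(items):
    """Return a sorted copy of items, plus the number of comparisons made.

    The comparison count is a fact about the algorithm, not about any
    particular machine: a room of clerks following these steps by hand
    would make exactly the same comparisons.
    """
    comparisons = [0]  # mutable cell so the nested function can update it

    def sort(xs):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        left, right = [], []
        for x in rest:
            comparisons[0] += 1
            (left if x < pivot else right).append(x)
        return sort(left) + [pivot] + sort(right)

    return sort(list(items)), comparisons[0]

sorted_items, n_cmp = quicksort([5, 2, 8, 1, 9, 3])
```

Whether the steps are executed by a CPU, an abacus, or a patient librarian, the output ordering and the comparison count come out the same.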
A popular misunderstanding of AI safety is that it has to do with the sort of entities that are described in science fiction as "artificial intelligences" — roughly, conscious autonomous computer programs that talk, can animate robotic bodies, can "rebel against their programming", and so on: entities like Daneel Olivaw, the MCP, or Agent Smith. This seems to be at least as deep a confusion as the notion that computer science is about PCs, servers, and smartphones.
So — should it really be called "agent safety"? And are the ideas general enough that we could apply them to education, or to the process of raising moral and desirable children?