ewjordan
ewjordan has not written any posts yet.

Even if we accepted that the tool vs. agent distinction were enough to make things "safe", objection 2 still boils down to "Well, just don't build that type of AI!", which is exactly the same keep-it-in-a-box/don't-do-it argument that most normal people make when they consider this issue. I assume I don't need to explain to most people here why "We should just make a law against it" is not a solution to this problem, and I hope I don't need to argue that "Just don't do it" is even worse...
More specifically, fast forward to 2080, when any college kid with $200 to spend (in equivalent 2012 dollars) can purchase enough computing power...
Is there anything that can be done about it?
I don't know how much of a problem it is, but there's definitely something that can be done about it: instead of a "dumb" karma count, use some variant of Pagerank on the vote graph.
In other words, every person is a node, every vote from one user to another is a signed directed edge (positive for upvotes, negative for downvotes), every person starts with a base amount of karma, and then you iteratively update each user's karma by weighting every inbound vote by the karma of the voter.
When I say "variant of Pagerank", I mean that you'd probably also have to fudge some things...
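As a rough illustration, here's a minimal sketch of that iteration in Python. Everything here is an assumption for illustration (the function name, the damping factor, and especially the clamping of negative karma to zero so that downvotes from low-karma users can't be gamed) — this is exactly the kind of "fudging" a real deployment would need to tune:

```python
from collections import defaultdict

def weighted_karma(users, votes, base=1.0, damping=0.85, iters=50):
    """PageRank-style karma sketch (hypothetical, not any site's real algorithm).

    users: iterable of user ids.
    votes: iterable of (voter, target, sign) tuples, sign is +1 or -1.
    """
    # Each voter's influence is split across all the votes they cast.
    out_count = defaultdict(int)
    for voter, _, _ in votes:
        out_count[voter] += 1

    karma = {u: base for u in users}
    for _ in range(iters):
        # Everyone keeps a small base amount regardless of votes received.
        new = {u: (1 - damping) * base for u in users}
        for voter, target, sign in votes:
            # Weight each vote by the voter's current karma; clamp at 0 so
            # a negative-karma user's downvotes don't *boost* their targets.
            weight = max(karma[voter], 0.0) / out_count[voter]
            new[target] += damping * sign * weight
        karma = new
    return karma
```

For example, with three users where `a` upvotes `b`, `b` upvotes `c`, and `a` downvotes `c`, the iteration settles with `a` (who received no votes) at the base floor and both `b` and `c` above it. Unlike classic PageRank, the signed edges mean this isn't a proper stochastic matrix, so convergence and sybil-resistance would need real analysis before trusting it.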
If someone asks the tool-AI "How do I create an agent-AI?" and it gives an answer, the distinction is moot anyway, because one leads to the other.
Given human nature, I find it extremely difficult to believe that nobody would ask the tool-AI that question, or something that's close enough, and then implement the answer...