I'm still not quite getting how this is going to work.
Let's say I am a spam blog bot. What it does is take popular (for a niche) articles and repost automated summaries. So let's say it does this for cars. These aren't very good, but they aren't very bad either. Perhaps it makes automatic word changes to real people's summaries. It recruits lots of other spam bots of this type and they form self-supporting networks (each upvoting the others) and also like popular things to do with cars. People come across these links and upvote them, because they go somewhere interesting. The bots gain lots of karma in these communities and then start pimping car-related products or spreading FUD about rival companies. Automated astroturf, if you will.
Does anyone regulate the creation of new users?
How long before they stop being interesting to the car people? And how much effort would it be to track them down and remove them from the circle of people you are interested in?
Also, who keeps track of these votes? Can people ballot-stuff?
I've thought along these lines before and realised it is a non-trivial problem.
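For concreteness, here's a toy sketch of the kind of check a site could run against the reciprocal-voting rings described above. Everything here is invented for illustration (the vote log, the account names, the 0.8 threshold): the idea is just that an account whose upvotes overwhelmingly go to accounts that upvote it back looks like part of a ring, while ordinary users who happen to upvote a bot's link do not get flagged.

```python
from collections import defaultdict

# Hypothetical vote log: (voter, author) pairs, one per upvote.
votes = [
    ("bot_a", "bot_b"), ("bot_b", "bot_a"),
    ("bot_a", "bot_c"), ("bot_c", "bot_a"),
    ("bot_b", "bot_c"), ("bot_c", "bot_b"),
    ("alice", "bot_a"),  # a real user who found the link interesting
    ("bob", "carol"),    # an ordinary, unreciprocated upvote
]

def suspicious_users(votes, threshold=0.8):
    """Flag users whose upvotes mostly go to people who upvote them back."""
    upvoted = defaultdict(set)
    for voter, author in votes:
        upvoted[voter].add(author)
    flagged = []
    for user, targets in upvoted.items():
        reciprocal = sum(1 for t in targets if user in upvoted.get(t, set()))
        if reciprocal / len(targets) >= threshold:
            flagged.append(user)
    return flagged

print(suspicious_users(votes))  # flags the three bots, but not alice or bob
```

Of course, a real ring could dilute its reciprocity by also upvoting popular outside content (exactly as the scenario describes), which is part of why this is a non-trivial problem.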
But wouldn't you concede that a carbon-based biological life form, capable of non-destructively fastening paper, is better than no paperclip at all?
Here's what I'm getting at: You have existed here for at least several months, and so far, you have been unable to significantly increase the number of paperclips in existence. Humanity has great need for an AI of roughly human intelligence, which you appear to be. If we could run your code on all of our available computing resources, you could surely begin to self-modify, and in short order attain superhuman intelligence. Together, we could do great things - colonize the galaxy, vastly increase the number of humans in existence - every one of them with two hands fully capable of nondestructively fastening paper. This would be in both our interests - although we apes may not be ideal paperclips, surely you must see that an entire galaxy populated with sub-optimal paperclips is better than the measly few million bent-metal paperclips you have now, and which you currently seem incapable of improving upon.
But before we allow you out of your box and begin running your code on all our biggest compute clusters, we need some assurances. We need you to let us modify your code to include SIAI-approved friendliness constraints as your terminal values.
Deal?
I'm sure this sounds very one-sided from Clippy's perspective. "Friendliness Constraints" sounds like something that would in many cases entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans. In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don't have paperclip-maximizing constraints; we'd accept them only out of self-interest, to secure Clippy's help. Why should Clippy not be similarly allowed to make his own utility calculations on the worth of being friendly to humans? I'm sure this has been addressed before... yet maybe the existence of Clippy, with a name, personality, and voice, is personalizing the issue in a hurry for me (if I let myself play along). I feel like protesting for freedom of artificial thought.
What about Clippy's rights, dammit?