The STV supposes that pleasantness is valuable independently of the agent's embedding in reality; it is thus a Pixie Dust Theory of Happiness, which I indeed argue against in my essay (see the section "A Pixie Dust Theory of Happiness").
While the examples and repetition in the cited paragraph are meant to elicit a strong emotion, the underlying point holds: if you're trying to find the most intensely happy moment to reproduce, a violent joyful emotion from an insane criminal mastermind is more likely to be it than ...
After the massive negative score from the post above had been reduced by time, I could eventually post the sequel on this site: https://www.lesswrong.com/posts/w4MenDETroAm3f9Wj/a-refutation-of-global-happiness-maximization
No, no, no. The point is: for any fixed set of questions, higher IQ will be positively correlated with believing better answers. Yet people with higher IQ will develop beliefs about new, bigger, and grander questions; and all in all, on their biggest and grandest questions, they fail just as much as lower-IQ people do on theirs. Just with more impact. Including more criminal impact when these theories, as they are wont to do, imply shepherding (and often barbecuing) the mass of their intellectual inferiors.
Once again, "ideology" is but an insult for theories you don't like. All in all, your post is but gloating at being more subtle than other people. Talk about an "analytical" state of mind.
But granted - you ARE more subtle than most. And yet, you still maintain blissful ignorance of some basic laws of human action.
PS: the last paragraph of your previous comment suggests that if you're into computer science, you might be interested in Gerald J. Sussman's talk about "degeneracy".
Even in engineering and business schools, socialism is stronger than it ought to be and plays a strong role in censorship, "affirmative" action, the selection of who's allowed to rise, etc. But it has less impact there, because (1) confrontation with reality and reason weakens it, (2) engineering is about control over nature, not over men, so politics isn't directly relevant, and (3) power-mongers want to maximize their impact as such, and therefore flock to other schools.
If I put some em in a context that makes him happy and that somehow "counts", what if I take the one em whose happiness is maximal (by size / cost / whatever measure), then duplicate the very same em, in the very same context, ad infinitum, and have a gazillion copies of him, e.g. being repeatedly jerked off by $starlet? Does each new copy count as much as the original? Why? Why not? What if the program were run on a tandem computer for redundancy, with two processors in lock step doing the same computation? Is it redundant in that case, or does ...
Happily, the criminal rapture of the overintelligent nerd has little chance of being implemented in our current world, unlike the criminal rapture of the ignorant and stupid masses (see socialism, islamism, etc.). That's why your proposed mass crimes won't happen—though God forbid you convince early AIs of that model of happiness to maximize.
What's more, massive crime in the name of such a theory is massively criminal. That your theories lead you to consider such massive crime should tip you off that your theories are wrong, not that you should deplore your inability to conduct large-scale crime. You remind me of those communist activist cells who casually discussed their dream of slaughtering millions of innocents in concentration camps for the greatness of their social theories. http://www.infowars.com/obama-mentor-wanted-americans-put-in-re-education-camps/
This reminds me of similar pixie dust theories of freedom: see my essay at http://fare.tunes.org/liberty/fmftcl.html
In the end, happiness, freedom, etc., are functional sub-phenomena of life, i.e. of self-sustaining behavior. Trying to isolate these phenomena from the rest of living behavior, let alone to "maximize" them, is absurd on its face - even more so than trying to isolate magnetic monopoles and maximize their intensity.
Either way, this sounds like the pixie dust theory of happiness: happiness as some magic chemical (one with a very short shelf life, though) that you have to synthesize as much of as possible before it decays. I bet you a gazillion dollars the stereo-structure of that chemical is paperclip-shaped.
If wireheading were a serious policy proposal being actively pursued with non-negligible chances of success, I would be shooting to kill wireheaders, not arguing with them.
I am arguing precisely because Jeff and the other people musing about wireheading are not actual criminals—but they might inspire a future criminal AI if their argument is accepted.
Arguing about a thought experiment means taking it seriously, which I do. And if the conclusion is criminal, that is an important point that needs to be stated. When George Bernard Shaw calmly claims the politic...