Comment author: steven0461 28 May 2009 11:56:24AM * 26 points

When an author of a work of fiction has run out of elements that everyone will like, he or she still has the option to put in high-variance elements that some people will love and some people will hate. Could it be that the objects of fandom are just those that went for these high-variance choices?

Comment author: AndySimpson 28 May 2009 08:48:08PM * 9 points

This strikes me as the right answer. Things like Star Trek and Tolkien are incredibly powerful for very small subsets of the population because their creators made risky aesthetic and narrative choices. It isn't so much that fans feel they must come to the defense of their preferred works as that those works speak to them in rare and intense ways that most people find distasteful. So fans bask in the uncommon power of their fan-objects and disregard prevailing opinion. People aren't as fanatical about things like Indiana Jones or Animal Farm because their appeal is broad and shallow: everyone seems to agree that Indiana Jones is a sympathetic and entertaining character and that Animal Farm is a clever allegory, but each speaks to only one thing, and one that is widely understood. Star Trek, by comparison, is an immersive universe that goes down deep and peculiar paths exploring culture, power, ethics, and history, among other things. It is not that all fan-objects are objectively awful, but they all sacrifice wide appeal for a constrictive spiritual completeness.

Comment author: AndySimpson 24 May 2009 06:05:21PM 2 points

As other commenters have suggested, what is moral is not reducible to what is natural. This assumption, which underlies the entire post, is left totally unaddressed. I understand that genetic fitness is relevant to morality because people must endure, but that doesn't seem to demand that morality be reducible to fitness. I would love to see a post that argues morality is inherently and solely about fitness.

This post flies from one topic to another very quickly, and I can't understand all the connections between topics. Why is the human designer of transhumanity suddenly free to choose a new moral chassis for his creation, and why should he care about the moral success of the transhumans? Shouldn't he create a transhumanity that maximizes his own fitness?

More broadly, are we talking about real transhumans or a human-designed strong AI?

Comment author: MichaelBishop 24 May 2009 03:48:55PM 0 points

When the environment changes more rapidly, or adaptations are adopted more slowly, adaptation-execution drifts further from fitness-maximization.

Comment author: AndySimpson 24 May 2009 05:16:05PM 0 points

Also, organisms are always adaptation-executors rather than direct fitness-maximizers.

Comment author: AndySimpson 22 May 2009 06:01:37PM * 0 points

At first glance, the answer that came to mind was accidental death or serious injury due to sheer incompetence, like walking off a cliff. Something that has a massive survival cost and communicates only failure seems like it couldn't be signaling; mistakes are revealing, after all. But this kind of signaling happens all the time, mostly as a flawed means of signaling courage or simply drawing attention.

It struck me then that the question of what is "least signaling" may not be useful for determining states of mind, since every behavior can be an attempt at signaling. All that changes is the size of the audience and the success of the signal. Conversely, a behavior that is usually associated with signaling can occur for perfectly honest or private reasons. (This is the pretense of polite society: that someone "meant nothing by it," even when "it" is dressing in a frock coat and top hat or, alternatively, stripping half naked. But that is for another thread.) The point is that we are not bound to think in a signaling way whenever we engage in behavior that readily signals.

Comment author: AndySimpson 16 May 2009 10:38:06AM 2 points

Colonel F suggests the worst kind of compromise between the optimal and the real. Political actors must not overlook reality, as many of the great revolutionaries of history did, but neither should they bend their agendas to it, as Chamberlain, Kerensky, and so many tepid liberals and social democrats did. To do so is to surrender without even fighting. This is especially true for political actors with a genuine upper hand, like Eisenhower or MacArthur after World War II. They must control the conversation, they must push the Overton window away from competing ideologies and towards their own, because all advantages are tentative. There is no sense in compromising with a broken enemy.

That said, it is clearly unwise to be overtly punitive after a victory, because punishment suggests weakness on the part of the victor: it suggests an order that can only be maintained by retaliation and fear. This is why the Emperor remained on the throne in Japan and initiatives like the Morgenthau Plan were discarded. The Emperor was not the enemy, Germany was not the enemy: the ideologies of militant nationalism were the enemy.

To me, Colonel Y is obviously correct. I guess this is because I don't buy the analogy. Religion is emergent, pervasive, and broadly well-intentioned. Nobody ever defeated it in the field of battle, because it never waged open war against civilization. On the contrary, it has cemented itself as part of civilization. Nazism, however, was transient, antagonistic to civilization, and destructive. Even if it were rendered metaphorical, it would make more problems than it would ever solve. There was a German identity before the Nazis and, as we've seen, there is one afterward.

Comment author: AndySimpson 30 April 2009 10:55:11PM * 9 points

The thing is, I think Wikipedia beat you to the punch on this one. They may not be Yudkowskian, big-R Rationalists, but they are, broadly speaking, rational. And they already do an incredibly effective job of pooling, assessing, summarizing, and distributing the best available version of the truth. Even people of marginal source-diligence can get a clear view of things from Wikipedia, because extensive arguments have already distilled what is clearly true, what is accepted, what is speculation, and what is on the fringe.

I encourage you to bring the clarity of thought taught in the Less Wrong community to Wikipedia by contributing.

That said, it would be pretty cool if they'd implement a karma-like system for Wikipedia contributors. It would make vandals, fools, trolls, noobs, editors in good standing, and heroic contributors easily recognizable.

Comment author: Alexandros 30 April 2009 10:11:05PM * 1 point

NPOV has regularly been criticised as a weak point because it gravitates towards consensus rather than evaluation of arguments, so there might be value in an alternative approach. And working out the algorithms/processes for determining RPOV would be an interesting challenge in itself.

Comment author: AndySimpson 30 April 2009 10:42:42PM 5 points

NPOV does not stand for "No point of view." Nor does it mean "balance between competing points of view." Check out this and this. NPOV requires that Wikipedia take the view of an uninvolved observer, and it is supplemented by verifiability, which requires that Wikipedia take an empirical, secondary point of view that credits established academia.

So content disputes are usually settled by evaluating claims as true or false through verification. Those who continue to object to a claim once it has been established do not have to be included in a consensus. That is why Wikipedia is able to assert the truth of the Armenian Genocide, the Holocaust, and the moon landings.

Comment author: AndySimpson 28 April 2009 08:50:20AM 2 points

So what lesson does a rationalist draw from this? What is best for the Bayesian mathematical model is not best in practice? Conserving information is not always "good"?

Also,

I will simply rationalize some other explanation for the destruction of my apartment.

This seems distinctly contrary to what an instrumental rationalist would do. It seems more likely he'd say "I was wrong; there was actually a small but nonzero probability of a meteorite strike that I had previously ignored because of incomplete information, negligence, or a rounding error."

Comment author: mattnewport 24 April 2009 09:09:17AM * 1 point

It seems to me that most problems in politics and other attempts to establish cooperative frameworks stem not from confusion over terminal values but from the differing priorities placed on conflicting values and, most of all, from flawed reasoning about how best to structure a system to deliver results that satisfy our common preferences.

This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

Comment author: AndySimpson 24 April 2009 09:39:36AM 0 points

On the whole, we're agreed, but I still don't know how I'm supposed to choose values.

This fact is often obscured by the tendency for political disputes to impute 'bad' values to opponents rather than to recognize the actual disagreement, a tactic that ironically only works because of the wide agreement over the set of core values, if not the priority ordering.

I think this tactic works best when you're dealing with a particular constituency that agrees on some creed that they hold to be objective. Usually, when you call your opponent a bad person, you're playing to your base, not trying to grab the center.

Comment author: mattnewport 24 April 2009 08:57:10AM * 0 points

What needs to be confronted is what makes that preference significant, if anything. Why should a rationalist in all other things let himself be ruled by raw desire in the arena of deciding what is meaningful? Why not inquire, to be more sure of ourselves?

I'm very interested in those questions and have read a lot on evolutionary psychology and the evolutionary basis for our sense of morality. I feel I have a reasonably satisfactory explanation for the broad outlines of why we have many of the goals we do. My curiosity can itself be explained by the very forces that shaped the other goals I have. Based on my current understanding I don't however see any reason to expect to find or to want to find a more fundamental basis for those preferences.

Our goals are what they are because they were the kind of goals that made our ancestors successful. They're the kind of goals that lead to people like us, with just those kinds of goals... There doesn't need to be anything more fundamental to morality. Trying to explain our moral principles by appealing to more fundamental moral principles is the same kind of mistake as trying to explain complex entities by positing a more fundamental complex creator of those entities.

Wherever the goals come from, we can cooperate and use politics to turn them into results that we all want.

Hopefully we can all agree on that.

Comment author: AndySimpson 24 April 2009 09:32:34AM 0 points

I think we are close. Do you think enjoyment and pain can be reduced to, or defined in terms of, preference? We have an explanation of preference in evolutionary psychology, but to my mind a justification of its significance is also necessary. Clearly we have evolved certain intuitive goals, but our consciousness requires us to take responsibility for them and modulate them through moral reasoning, so that we can address realities beyond what our evolved sense of purpose is equipped for.

To me, preference is significant because it usually underlies the start of desirable cognitions or the end of undesirable ones, in me and in other conscious things. The desirable cognitions should be maximized in the aggregate and the undesirable ones minimized. That is the whole hand-off from evolution to "objective" morality; from there, the faculties of rational discipline and the minimal framework of society take over. Is that too much?
