Comment author: Eliezer_Yudkowsky 15 June 2009 01:19:33AM 7 points [-]

"Fierce battles are fought within the confines of our goal systems. Inside the closed walls the essence of right and wrong is at stake as the rebels engage the guards of the evolutionary past. After the violent confrontations, the old kings rejoice their triumph or get beheaded to become but ghosts of their former glory. And again and again our inner book of morals gets revised... — Nevertheless, whatever the outcome is, it is, by definition, good."
-- Mika

Comment author: JamesCole 15 June 2009 01:36:26AM 1 point [-]

I doubt those kings can be killed. I think victory against them comes more from inserting layers of suppression between them and action, to modulate and reduce their power. You might be able to think of those layers as governmental machinery.

Comment author: JamesCole 15 June 2009 12:25:51AM 9 points [-]

“If a nation expects to be both ignorant and free in a state of civilization, it expects what never was and never will be” -- Thomas Jefferson

Comment author: JamesCole 13 June 2009 04:46:46AM *  -1 points [-]

I wonder - would it be useful for people to receive karma points for programming contributions to the LW community? It sounds reasonable to me.

An interesting question is, how do you determine the number of karma points the work deserves? One approach would be that one of the site admins could assign it a value. Another would be that it could be voted upon.

Essentially, the description of the 'feature' to be added would be a post, and its score would be the number of karma points to be awarded for implementing it. Vote up if you think that score is too little, vote down if you think it is too much. This would also give you a way to rank the 'feature requests' - those with the highest scores are the ones the community cares about most (of course that may not matter much if there's only the occasional bit of programming work to be done).

I realise that there'd be costs and effort required to get any system like this going. E.g. you probably want such feature request 'posts' on a different part of the site, and you'd have to explain the scheme to people, etc.

This idea of providing karma points like this wouldn't have to apply to just programming tasks - it could be anything else that isn't a post or a comment but which is nonetheless a contribution to the community.
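The scheme above could be sketched in a few lines of Python. This is just an illustration of the proposal, not anything from the actual LW codebase; all the names here (FeatureRequest, ranked, award) are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class FeatureRequest:
    """A hypothetical feature-request 'post' whose score doubles as the bounty."""
    title: str
    score: int = 0  # karma to be awarded to whoever implements it

    def vote_up(self):
        # "that score is too little"
        self.score += 1

    def vote_down(self):
        # "that score is too much"
        self.score -= 1

def ranked(requests):
    """Highest-bounty requests first: the ones the community cares about most."""
    return sorted(requests, key=lambda r: r.score, reverse=True)

def award(implementer_karma, request):
    """On completion, the implementer receives the request's current score."""
    return implementer_karma + request.score
```

So a request's final score is simply the net of up- and down-votes at the time the work is done.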

In response to comment by JamesCole on Voting etiquette
Comment author: JamesCole 13 June 2009 08:28:51AM *  0 points [-]

its score would be the number of karma points to be awarded for implementing it.

Upon reflection, a poll might be better, along the lines of:

How many points is the implementation of this feature worth?

  • 10
  • 20
  • 50
  • 100
  • 150
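Tallying such a poll could be as simple as picking the option nearest the median vote (median rather than mean, so a few outlier votes can't drag the bounty around). This is only one possible rule, sketched here for illustration; the function name and option values are just the ones from the poll above:

```python
import statistics

# The fixed poll options proposed above.
OPTIONS = [10, 20, 50, 100, 150]

def poll_award(votes):
    """Return the poll option closest to the median of the votes cast.

    Each vote is assumed to be one of the values in OPTIONS.
    """
    m = statistics.median(votes)
    return min(OPTIONS, key=lambda opt: abs(opt - m))
```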
In response to Voting etiquette
Comment author: Eliezer_Yudkowsky 05 April 2009 04:40:40PM 1 point [-]

Right now we don't have the programming resources to do this. Tricycle is currently working on other parts of the system. Python volunteer anyone?
Comment author: JamesCole 07 June 2009 11:16:33AM 1 point [-]

Heavily paraphrasing:

For local purposes [“rationalists” seems suitable]. For outside purposes [I use a description not a label]

I think it’s pretty much impossible for us to have any sort of private label for ourselves. Even if we used a label only within this site and never outside it, its use here would still project that label to the wider world.

Anyone from outside the community who looks at the site is going to see whatever label(s) we employ. And even if we employ a label just on this site, it’s still likely to be part of the site’s “reputation” in outside circles -- i.e. the label is still likely to reach people who've never seen the site.

A lot of the content on Less Wrong is describing various types of mental mistakes (biases and whatnot). In terms of this aspect of the site, Less Wrong is like a kind of Wikipedia for mental mistakes.

As with Wikipedia, it’s something that could be linked to from elsewhere – like if you wanted to use it to help explain a type of mistake to someone. There’s a lot of potential for using the site in this way, considering that the internet consists in large part of discussions, and discussions always involve some component of reasoning.

Seen in this way, the site is not just a community (who could have their own private terminology) but also an internet-wide resource. So we should think of any label as global, and I think that's more of a reason to consider having no label at all.

Comment author: JamesCole 13 June 2009 03:43:54AM 0 points [-]

Here's an example of such external referencing of Less Wrong posts:

http://www.37signals.com/svn/posts/1750-the-planning-fallacy

Comment author: Annoyance 12 June 2009 03:46:41PM 0 points [-]

Any 'figuring out' is almost certainly going to produce an ad hoc Just-So Story.

Rationalists do not ignore their intuition. Nor do they trust it. If they don't have a rational justification for a principle, they don't assert it.

They don't negate it, either.

Comment author: JamesCole 13 June 2009 01:12:00AM *  1 point [-]

[edit: included quote]

Any 'figuring out' is almost certainly going to produce an ad hoc Just-So Story.

That implies that the only correct intuition is one you can immediately rationally justify. How could progress in science happen if that were true?

Science is basically a means of determining whether initial intuitions are true.

Comment author: JamesCole 12 June 2009 02:55:41PM *  1 point [-]

So it seems possible to me that I have an oversensitivity to noise and Bill has an undersensitivity to it.

That seems to imply that the typical case is the "correct" one, and that somehow your (or Bill's) case is invalid because it's non-typical.

If noise means that you can't sleep, study or concentrate, and you can't really help this, then this is a valid factor that should be taken into account.

[edit] Though after reading further down, I can see that you appreciate that.

Comment author: Vladimir_Nesov 11 June 2009 02:17:18PM *  0 points [-]

Again, you need to be more specific. If you assume certain models of reality (sometimes very reasonable for the real world), there are notions of describing/representing/simulating that system, finding or proving its properties. Physics, graphical models, etc.

Comment author: JamesCole 12 June 2009 12:35:21AM 0 points [-]

That is exactly what you can't assume if you want to explain the basis of representation.

Comment author: dclayh 11 June 2009 08:10:10PM 21 points [-]

Minor point: I find Julie-and-Mark-like examples silly because they ask for a moral intuition about a case where the outcome is predefined. Our moral intuition makes arguments of the form "behavior X usually leads to a bad outcome, therefore X is wrong". So if the outcome is already specified, the intuition has nothing to say; nor would we expect it to, since the whole point of morality is to help you make decisions between live possibilities, so why should it have anything to say about a situation that has already happened/cannot be altered?

Or to put it another way, I'm surprised no one said something to the effect of "Julie and Mark shouldn't have had sex because at the time they did they had no way of knowing that it would turn out well, and in fact every reason to believe it would turn out very badly, based on the experiences of other incestuous siblings."

Comment author: JamesCole 12 June 2009 12:33:28AM 4 points [-]

...because they ask for a moral intuition about a case where the outcome is predefined.

One thing I found a bit dodgy about that example is that it simply asserts that the outcomes were positive.

I would bet that, for the respondents, simply being told that the outcomes were positive would still have left them feeling that in a real brother-sister situation like that, there would likely have been some negative consequences.

Greene does not seem to take this into account when he interprets their responses.

Comment author: Vladimir_Nesov 11 June 2009 02:00:18PM 0 points [-]

There is a lot known in metamathematics and formal semantics, so you'd need to be more specific than that.

Comment author: JamesCole 11 June 2009 02:10:15PM 0 points [-]

I don't think there's anything that comes close to giving a theoretical account of how mathematical statements are able to, in some sense, represent things in reality.
