Comment author: Armok_GoB 14 February 2011 02:14:11PM 0 points

I do find it annoying, because I'll likely never be able to go to one for geographical reasons, but on the other hand I can see why it makes sense, given how large a share of the readers DO live in the US.

Comment author: Jonii 14 February 2011 03:20:29PM 5 points

Even if a majority of readers participated in these meetups every time, it wouldn't matter. Quoting the about-post: ""Promoted" posts (appearing on the front page) are chosen by the editors on the basis of substantive new content, clear argument, good writing, popularity, and importance."

Meetup posts do not contain new, important, argumentative content. They are meta-level discussion, meta that is bit by bit trying to take over the whole of LW. I don't want a LW that exists for posts about LW. Meetup posts are not the only thing driving LW towards uselessness, but as far as I can tell, having those posts on the front page is by far the most visible and obvious warning sign.

Meetup posts as discussion threads, please

26 Jonii 14 February 2011 11:49AM

As of now, 4 of the 10 newest promoted posts are about meetups, as are 4 of the 10 newest posts overall. For casual readers like me, having the front page flooded by this much irrelevant information, _especially the promoted section_, seems really, really discouraging. LW has a tendency to contain too much useless meta-discussion compared to actual rationality-related discussion, but having the front page flooded by meta-discussion like this seems rather unbelievable. Please, let's try to keep at least the promoted section rationality-related.

Comment author: Quirinus_Quirrell 28 January 2011 10:10:16PM 15 points

When should you punish someone for a crime they will commit in the future?

Easy. When they can predict you well enough and they think you can predict them well enough that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so. Or when you want to punish them for an unrelated reason and need a pretext.
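The condition above can be sketched as a simple expected-value comparison. This is only an illustrative reading; the function and variable names, and all the numbers, are my own, not from the original comment:

```python
# Illustrative sketch of the punishment condition; all numbers are made up.
def should_precommit_to_punish(p_crime_without_threat, p_crime_with_threat,
                               harm_of_crime, cost_of_punishing):
    # Deterrence benefit: how much the credible threat reduces expected harm.
    benefit = (p_crime_without_threat - p_crime_with_threat) * harm_of_crime
    # Expected cost: you only pay the punishment cost if they commit anyway.
    expected_cost = p_crime_with_threat * cost_of_punishing
    return benefit > expected_cost

# A threat that cuts the crime probability from 0.5 to 0.1 is worth
# a punishment costing 10 when the crime's harm is 100:
print(should_precommit_to_punish(0.5, 0.1, 100, 10))  # → True
```

Note that the whole calculation presupposes the mutual-predictability clause: the threat only moves `p_crime_with_threat` if the other agent can predict the punisher well enough to believe it.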

Not every philosophical question needs to be complicated.

Comment author: Jonii 29 January 2011 04:17:53PM 0 points

So you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of the punishments you're about to receive? I'm not sure that's good.

Comment author: Jonii 11 December 2010 09:41:10PM 4 points

I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard; I'm still a bit unsure whether I figured out why people think of the idea as dangerous, but to me it seems just plain silly.

I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn't need to care about it, and it seems my initial guess was right on the spot. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.

Comment author: Jonii 12 December 2010 02:10:08AM *  1 point

Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it. More precisely, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So I understand that it is harmful for us to seek that idea, and if possible, it shouldn't be discussed.

Comment author: Leonhart 10 December 2010 01:56:47PM *  34 points

I'm curious.

I am in the following epistemic situation: a) I missed, and thus don't know, BANNED TOPIC; b) I do, however, understand enough of the context to grasp why it was banned (basing this confidence on the upvotes to my old comment here).

Out of the members here who share roughly this position, am I the only one who - having strong evidence that EY is a better decision theorist than me, and understanding enough of previous LW discussions to realise that yes, information can hurt you in certain circumstances - is PLEASED that the topic was censored?

I mean, seriously. I never want to know what it was and I significantly resent the OP for continuing to stir the shit and (no matter how marginally) increasing the likelihood of the information being reposted and me accidentally seeing it.

Of course, maybe I'm miscalibrated. It would be interesting to know how many people are playing along to keep the peace, while actually laughing at the whole thing because of course no mere argument could possibly hurt them in their invincible mind fortresses.

(David Gerard, I'd be grateful if you could let me know if the above trips any cultishness flags.)

Comment author: Alicorn 01 December 2010 03:37:16PM *  2 points

Although Nahuel has sisters in canon, their details are made up, including Allirea's power and therefore how Allirea interacts with Demetri, Eleazar, et al.

Note that Eleazar did get a reading off Bella, albeit a brief and incomplete one.

Comment author: Jonii 01 December 2010 03:49:11PM 3 points

Yes, but that incomplete reading means that his power can't override the powers others have. Even if he could understand her power after paying attention to Allirea, it doesn't follow from what we know of his powers so far that he could pay attention to her any more than anyone else there could. Even some sort of power-detection field would reveal nothing beyond "there's a vampire in that general direction that diverts attention paid to it", assuming it overrides her ability at all, which would leave Eleazar severely handicapped in a fight anyway.

Yeah, and I wanted to say that you're treating the characters you create in an awful and cruel way. Stop that. They should be happy at least once in a while :p

Comment author: Jonii 01 December 2010 02:45:00PM 1 point

Chapter 11:

Is the Allirea + Eleazar thing canon? It sure doesn't seem to follow from what we've seen before, unless Eleazar lied to Bella.

Comment author: Jack 30 November 2010 03:11:50PM *  0 points

Sure it's true. That's just Meaningless(Sliar())... I guess I don't see why the selected portion would imply otherwise.

Comment author: Jonii 30 November 2010 04:17:54PM 0 points

Oh, right, now I get it.

Comment author: Jack 30 November 2010 10:34:09AM *  7 points

The Liar's Paradox appears to be a special case of infinite recursion.

liar() {
    NOT liar()
}

Straightforward. A debugging tool would detect an infinite recursion. An English-speaking logician would call it 'meaningless'. Now consider the 'strengthened paradox':

"Everything written on the board in Room 33 is either false or meaningless."

This isn't translatable as a function. 'Meaningful' and 'meaningless' aren't values bivalent functions return so they shouldn't be values in our logic. Instead they should be thought of as flags for errors detected by our brain's debugging tool. But our debugging tool is embedded in the semantics of our language. We talk about sentences having the property of 'meaninglessness' instead of our brains not knowing what to do with the string of letters shown to them.

You could probably build a language that returned a pseudo-value of "Meaningless" for infinitely recursive functions. It wouldn't "really" be a value; the program would just output a line reading "x = Meaningless" (not, and this is crucial, assign the variable the value of the string 'Meaningless') when asked to find Liar(x). That is basically what the human brain does.
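The "debugging tool" described above can be sketched in Python. This is only an illustration of the idea; the decorator and exception names are my own. Re-entry into the sentence's own evaluation is caught and reported as an error flag rather than letting the recursion run forever:

```python
import functools

class Meaningless(Exception):
    """Error flag raised when evaluation would recurse forever.
    Crucially, this is a debugger's error message, not a truth value."""

def detect_recursion(fn):
    """Wrap a zero-argument 'sentence' so that asking for its own value
    mid-evaluation trips the flag instead of looping infinitely."""
    in_progress = False
    @functools.wraps(fn)
    def wrapper():
        nonlocal in_progress
        if in_progress:  # the sentence is asking for its own value
            raise Meaningless(fn.__name__)
        in_progress = True
        try:
            return fn()
        finally:
            in_progress = False
    return wrapper

@detect_recursion
def liar():
    return not liar()  # "This sentence is false."

try:
    liar()
except Meaningless:
    print("x = Meaningless")  # the debugger's output line, not an assigned value
```

The point of the sketch is exactly the category distinction the comment draws: `liar()` never returns a value at all; the string "Meaningless" appears only in the error-handling channel.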

When we get confused by the strengthened liar paradox we're committing a category error, thinking "meaningless" is a value when it is really an error message. In fact both versions of the paradox are meaningless (assuming 'meaningless' can be taken to mean something like 'Can't compute').

Of course it gets hairy, since error messages have truth values too. But

Meaningless(Sliar()) = 1

is not the same as

Sliar() = 1

Thus there is no contradiction. The same goes for Meaningless(Meaningless(Sliar())).
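The distinction between those two lines can be made concrete with a hypothetical meta-level predicate. All names here are illustrative, and the strengthened liar's unbounded recursion is modelled by raising the error flag directly:

```python
class MeaninglessError(Exception):
    """Error flag for sentences whose evaluation never terminates."""

def sliar():
    # "Everything written on the board in Room 33 is either false or
    # meaningless" -- evaluating it recurses without bound, which this
    # sketch models by raising the flag directly.
    raise MeaninglessError("sliar")

def meaningless(sentence):
    """Meta-level predicate: True iff evaluating the sentence trips the flag."""
    try:
        sentence()
    except MeaninglessError:
        return True
    return False

print(meaningless(sliar))  # → True: a perfectly ordinary true claim
# ...while sliar() itself has no truth value at all; asking for one
# just raises the flag. The object-level and meta-level never collide.
```

`meaningless(sliar)` returning True is the analogue of Meaningless(Sliar()) = 1: the meta-level claim evaluates normally, so no contradiction arises.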

Comment author: Jonii 30 November 2010 02:39:20PM 0 points

This isn't translatable as a function. 'Meaningful' and 'meaningless' aren't values bivalent functions return so they shouldn't be values in our logic.

So the sentence "The sentence 'Everything written on the board in Room 33 is either false or meaningless.' is meaningless" is not true?

Comment author: Yvain 17 November 2010 02:49:54PM *  11 points

Humans seem to have a built-in solution to this dilemma, in that if I were presented with this situation and another human, where the payoff was something like minus ten, zero, or plus ten cents for me, versus insta-death, nothing, or ten billion dollars for the other human, I would voluntarily let the other person win and I would expect the other person to do the same to me if our situations were reversed. This means humans playing against other humans will all do exceptionally well in these sorts of dilemmas.

So this seems like an intelligent decision theoretic design choice, along the lines of "Precommit to maximizing the gains of the agent with the high gains now, in the hope of acausally influencing the other agent to do the same, thus making us both better off if we ever end up in a true prisoner's dilemma with skewed payoff matrix."
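A back-of-the-envelope version of why this precommitment pays: suppose each agent lands on the high-stakes side of the dilemma half the time. The numbers and the payoff structure below are my own illustrative assumptions, not from the original comment:

```python
# Illustrative stakes, loosely following the cents-vs-billions example above.
SMALL = 0.10            # the low-stakes side's gain from insisting on its own win
LARGE = 10_000_000_000  # the high-stakes side's gain from being allowed to win

# Assume each agent is the high-stakes party half the time, and that at most
# one side can get its way in any round.
# If both precommit to "defer to whoever has more at stake":
ev_defer = 0.5 * LARGE + 0.5 * 0.0
# If both always insist on their own win instead:
ev_insist = 0.5 * 0.0 + 0.5 * SMALL

print(ev_defer > ev_insist)  # → True: the precommitment leaves both better off
```

Under these assumptions each agent trades an expected five cents for an expected five billion dollars, which is the sense in which both players "do exceptionally well."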

If I believe the alien to be sufficiently intelligent/well-programmed, and if I expect the alien to believe me to also be sufficiently intelligent/well-programmed, I would at least consider the alien graciously letting me win the first option in exchange for my letting the alien win the second. Even if only one of the two options is ever presented, and the second is the same sort of relevant hypothetical as a Counterfactual Mugging.

Comment author: Jonii 30 November 2010 12:28:37AM 0 points

Yes, humans performing outstandingly well in this sort of problem was my inspiration for this. I am not sure how far it is possible to generalize this sort of winning. Humans themselves are fairly complex machines, so if we start with a perfectly rational LW reader and a paperclip maximizer in a one-shot PD with a randomized payoff matrix, what's the least amount of handicap we need to give them to reach this super-optimal solution? At first I thought we could remove the randomization altogether, but I think that makes the whole problem more ambiguous.
