All of CynicalOptimist's Comments + Replies

This is good, but I feel like we'd better represent human psychology if we said:

Most people don't make a distinction between the concepts of "x has probability <0.1%" and "x is impossible".

I say this because I think there's an important difference between the times when people have a precise meaning in mind, which they've expressed poorly, and the times when people's actual concepts are vague and fuzzy. (Often, people don't realise how fuzzy their concepts are).

This seems to me like an orthogonal question. (A question that can be entirely extricated and separated from the cryonics question).

You're talking about whether you are a valuable enough individual that you can justify resources being spent on maintaining your existence. That's a question that can be asked just as easily even if you have no concept of cryonics. For instance: if your life depends on getting medical treatment that costs a million dollars, is it worth it? Or should you prefer that the money be spent on saving other lives more efficiently?

(Inc... (read more)

I think I've got a good response for this one.

My non-episodic memory contains the "facts" that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren't an interesting band. My boyfriend's non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd's music is transcendentally good).

Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were i... (read more)

1TheOtherDave
So, on one level, my response to this is similar to the one I gave (a few years ago) [http://lesswrong.com/lw/qx/timeless_identity/9trc]... I agree that there's a personal relationship with BtVS, just like there's a personal relationship with my husband, that we'd want to preserve if we wanted to perfectly preserve me. I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and there's a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the other thousands of viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to the common library representation rather than your private copy. The personal relationship remains local and private, but it takes up way less space than your mind currently does.

That said... coming back to this conversation after three years, I'm finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments. I mean, when you try to recall a BtVS episode, your memory is imperfect... if you watch it again, you'll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS -- no distortion of your current facts about the goodness of it, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you'd have changed your mind if you watched it again on TV, too) -- would you take it? I would.

I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory... but so what? That's not enough to make it worth preserving. An

Very interesting. I'm going to try my hand at a short summary:

Assume that you have a number of different options to choose from, that you want to estimate the value of each option, and that you have to make your best guess as to which option is most valuable. In step one, you generate individual estimates using whatever procedure you think is best. In step two, you make the final decision by choosing the option that had the highest estimate in step one.

The point is: even if you have unbiased procedures for creating the individual estimates in step one (ie procedur... (read more)
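To make that concrete, here's a minimal simulation sketch of the effect (the numbers and setup are invented for illustration, not taken from the original article): every option has the same true value and every individual estimate is unbiased, yet the estimate attached to whichever option wins step two comes out consistently too high.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0   # every option is genuinely worth exactly 10
NOISE_SD = 2.0      # each estimate is the true value plus unbiased noise
N_OPTIONS = 20
N_TRIALS = 10_000

winner_estimate_total = 0.0
for _ in range(N_TRIALS):
    # Step one: an unbiased estimate for each option.
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_OPTIONS)]
    # Step two: pick the option with the highest estimate.
    winner_estimate_total += max(estimates)

print("True value of every option:    ", TRUE_VALUE)
print("Average estimate of the winner:", round(winner_estimate_total / N_TRIALS, 2))
# The winner's estimate averages well above the true value, even though
# each individual estimate was unbiased: the selection step adds the bias.
```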

Well in some circumstances, this kind of reasoning would actually change the decision you make. For example, you might have one option with a high estimate and very high confidence, and another option with an even higher estimate, but lower confidence. After applying the approach described in the article, those two options might end up switching position in the rankings.

BUT: Most of the time, I don't think this approach will make you choose a different option. If all other factors are equal, then you'll probably still pick the option that has the highest e... (read more)

1EliasHasle
The big problem arises when the number of choices is huge and sparsely explored, such as when optimizing a neural network. But restricting ourselves to n superficially evaluated choices with known estimate variance in each evaluation and with independent errors/noise, then if – as in realistic cases like Monte Carlo Tree Search – we are allowed to perform some additional "measurements" to narrow down the uncertainty, it will be wise to scrutinize the high-expectance choices most – in a way trying to "falsify" their greatness, while increasing the certainty of their greatness if the falsification "fails". This is the effect of using heuristics like the Upper Confidence Bound for experiment/branch selection. UCB is also described as "optimism in the face of uncertainty", which kind of defeats the point I am making if it is deployed as decision policy. What I mean is that in research, preparations and planning (with tree search in perfect information games as a formal example where UCB can be applied), one should put a lot of effort into finding out whether the seemingly best choice (of path, policy, etc.) really is that good, and then make a final choice that penalizes remaining uncertainty. I would like to throw in a Wikipedia article on a relevant topic, which I came across while reading about the related "Winner's curse": https://en.wikipedia.org/wiki/Order_statistic The math for order statistics is quite neat as long as the variables are independently sampled from the same distribution. In real life, "sadly", choice evaluations may not always be from the same distribution... Rather, they are by definition conditional upon the choices. (https://en.wikipedia.org/wiki/Bapat%E2%80%93Beg_theorem provides a kind of solution in the form of an intractable colossus of a calculation.) That is not to say that there can be found no valuable/informative approximations.
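For what it's worth, here is a minimal sketch of the UCB1 rule mentioned above, applied to a toy three-armed bandit (the arm probabilities and the horizon are my own illustrative choices, not anything specified in the comment):

```python
import math
import random

def ucb1_choose(counts, means, total_pulls):
    # Pull each arm once before applying the formula.
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    # Mean reward plus an "optimism" bonus that shrinks as an arm is sampled more.
    scores = [means[arm] + math.sqrt(2 * math.log(total_pulls) / counts[arm])
              for arm in range(len(counts))]
    return max(range(len(counts)), key=lambda arm: scores[arm])

random.seed(0)
true_p = [0.3, 0.5, 0.7]      # hypothetical arm qualities
counts = [0, 0, 0]
means = [0.0, 0.0, 0.0]

for t in range(1, 2001):
    arm = ucb1_choose(counts, means, t)
    reward = 1.0 if random.random() < true_p[arm] else 0.0
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]

print(counts)  # most pulls go to the best arm, but the others are sampled
               # enough to narrow down the uncertainty about them
```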

I think there's some value in the observation that "the all 45 thing makes it feel like a trick". I believe that's a big part of why this feels like a paradox.

If you have a box with the numbers "60" and "20" as described above, then I can see two main ways that you could interpret the numbers:

A: The number of coins in this box was drawn from a probability distribution with a mean of 60, and a range of 20.

B: The number of coins in this box was drawn from an unknown probability distribution. Our best estimate of the number of c... (read more)

I think that RobbBB has already done a great job of responding to this, but I'd like to have a try at it too. I'd like to explore the math/morality analogy a bit more. I think I can make a better comparison.

Math is an enormous field of study. Even if we limited our concept of "math" to drawing graphs of mathematical functions, we would still have an enormous range of different kinds of functions: Hyperbolic, exponential, polynomial, all the trigonometric functions, etc. etc.

Instead of comparing math to morality, I think it's more illustrative to ... (read more)

0TheAncientGeek
The range of possible values is only a problem if you hold to the theory that morality "is" values, without any further qualifications; on that view, an AI is going to have trouble figuring out morality a priori. If you take the view that morality is a fairly uniform way of handling values, or a subset of values, then the AI can figure it out by taking prevailing values as input, as data. (We will be arguing that:

* Ethics fulfils a role in society, and originated as a mutually beneficial way of regulating individual actions to minimise conflict and solve coordination problems. ("Social Realism")
* No spooky or supernatural entities or properties are required to explain ethics (naturalism is true).
* There is no universally correct system of ethics. (Strong moral realism is false)
* Multiple ethical constructions are possible...

Our version of ethical objectivism needs to be distinguished from universalism as well as realism. Ethical universalism is unlikely... it is unlikely that different societies would have identical ethics under different circumstances. Reproductive technology must affect sexual ethics. The availability of different food sources in the environment must affect vegetarianism versus meat eating. However, a compromise position can allow object-level ethics to vary non-arbitrarily.

In other words, there is not an objective answer to questions of the form "should I do X", but there is an answer to the question "As a member of a society with such-and-such prevailing conditions, should I do X". In other words still, there is no universal (object-level) ethics, but there is an objective-enough ethics, which is relativised to societies and situations by objective features of societies and situations... our meta-ethics is a function from situations to object-level ethics, and since both the function and its parameters are objective, the output is objective. By objectivism-without-realism, we mean that mutually isolated groups of

But if you do care about your wishes being fulfilled safely, then safety will be one of the things that you want, and so you will get it.

So long as your preferences are coherent, stable, and self-consistent, you should be fine. If you care about something that's relevant to the wish, then it will be incorporated into the wish. If you don't care about something, then it may not be incorporated into the wish, but you shouldn't mind that, because it's something you don't care about.

Unfortunately, people's preferences often aren't coherent and stable. For in... (read more)

0TheOtherDave
Yes, absolutely. And yes, the fact that my preferences are not coherent, stable, and self-consistent is probably the sort of thing I was concerned about... though it was years ago.

I like this style of reasoning.

Rather than taking some arbitrary definition of black boxes and then arguing about whether they apply, you've recognised that a phrase can be understood in many ways, and we should use the word in whatever way most helps us in this discussion. That's exactly the sort of rationality technique we should be learning.

A different way of thinking about it though, is that we can remove the confusing term altogether. Rather than defining the term "black box", we can try to remember why it was originally used, and look for a... (read more)

"if the Pump could just be made to sense the proper (implied) parameters."

You're right, this would be an essential step. I'd say the main point of the post was to talk about the importance, and especially the difficulty, of achieving this.

Re optimisation for use: remember that this involves a certain amount of trial and error. In the case of dangerous technologies like explosives, firearms, or high speed vehicles, the process can often involve human beings dying, usually in the "error" part of trial and error.

If the technology in questi... (read more)

I agree, just because something MIGHT backfire, it doesn't mean we automatically shouldn't try it. We should weigh up the potential benefits and the potential costs as best we can predict them, along with our best guesses about the likelihood of each.

In this example, of course, the lessons we learn about "genies" are supposed to be applied to artificial intelligences.

One of the central concepts that Eliezer tries to express about AI is that when we get an AI that's as smart as humans, we will very quickly get an AI that's very much smarter than h... (read more)

I see where you're coming from on this one.

I'd only add this: if a genie is to be capable of granting this wish, it would need to know what your judgements were. It would need to understand them, at least as well as you do. This pretty much resolves to the same problem that Eliezer already discussed.

To create such a genie, you would either need to explain to the genie how you would feel about every possible circumstance, or you would need to program the genie so as to be able to correctly figure it out. Both of these tasks are probably a lot harder than they sound.

Can't agree with this enough.

Alternate answer:

If the Kremlin publicly announces a policy saying that they may reward some soldiers who disobey orders in a nuclear scenario, then this raises the odds that a Russian official will refuse to launch a nuke - even when they have evidence that enemy nukes have already been fired on Russia.

(So far, so good. However...)

The problem is that it doesn't just raise the odds of disobedience; it also raises the perceived odds. ie it will make Americans think that they have a better chance of launching a first strike and "getting away wi... (read more)

It may be an uncommon scenario, but it's the scenario that's under discussion. We're talking about situations where a soldier has orders to do one thing, and believes that moral or tactical considerations require them to do something else - and we're asking what ethical injunctions should apply in that scenario.

To be fair, Jubilee wasn't very specific about that.

Yup! I agree completely.

If you were modeling an octopus-based sentient species, for the purposes of writing some interesting fiction, then this would be a nice detail to add.

I believe the idea was to ask "hypothetically, if I found out that this hypothesis was true, how much new information would that give me?"

You'll have two or more hypotheses, and the one that would (hypothetically) give you the least amount of new information should be considered the "simplest" hypothesis (assuming a certain definition of "simplest", and a certain definition of "information").
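One common way to make "amount of new information" precise - offered here as a sketch of one possible formalisation, not necessarily the one the original poster intended - is the surprisal -log2 P(H): the more probable a hypothesis is, the fewer bits of new information you gain by learning that it's true.

```python
import math

def surprisal_bits(p):
    # Bits of new information gained on learning that a hypothesis with
    # prior probability p is true. The probabilities below are made up.
    return -math.log2(p)

print(surprisal_bits(0.5))   # 1.0 bit   - the "simpler" hypothesis
print(surprisal_bits(0.01))  # ~6.64 bits - the more surprising one
```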

This is excellent advice.

I'd like to add though, that the original phrase was "algorithms that make use of gut feelings... ". This isn't the same as saying "a policy of always submitting to your gut feelings".

I'm picturing a decision tree here: something that tells you how to behave when your gut feeling is "I'm utterly convinced" {Act on the feeling immediately}, vs how you might act if you had feelings of "vague unease" {continue cautiously, delay taking any steps that constitute a major commitment, while you try... (read more)

I think this is the basis of good Business Analysis. A field I'm intending to move into.

It's the very essence of "Hold off on proposing solutions".

This is perfectly true. But it doesn't much matter, because the point here is that when these people reject the idea of evolution, for these kinds of reasons, they use feelings of "absurdity" as a metric - without critically assessing the reasons why they feel that way.

The point here isn't that the lady was using sound and rational reasoning skills. The contention is that her style of reasoning was something a rationalist shouldn't want to use - and that it was something the author no longer wants to use in their own thinking.

2Jiro
The point was to compare a religious believer saying "evolution sounds absurd" to a rationalist saying "talking snakes sound absurd". But the situations are not comparable. The religious believer only claims that evolution sounds absurd because he applies different standards for absurdity to things that contradict his religion and things which don't. The rationalist claims that talking snakes sound absurd using consistent standards (though not the same standards as the religious believer).

Oh absolutely. We don't have time to thoroughly investigate the case for every idea we come across. There comes a time when you say that you're not interested in exploring an idea any further.

But there is an intellectual honesty to admitting that you haven't heard all of the evidence, and acknowledging that you might conceivably have changed your mind (or at least significantly changed your probability estimates) if you had done more research.

And there's a value to it as well. Some ideas have been thoroughly researched and should be labelled in our minds as ... (read more)

5gjm
Again, I agree with all of that.

I think that absurdity, in this sense, is just an example of Occam's Razor / Bayesian rationality in practice. If something has a low prior, and we've no evidence that would make us raise our probability estimates, then we should believe that the idea probably isn't true.

I've always assumed that the absurdity bias was a tendency to do something slightly different. In this context, absurdity is a measure of how closely an idea conforms to our usual experiences. It's a measure of how plausible an idea feels to our gut. By this definition, absurdity is being... (read more)

Incidentally, does this prime number have to be expressed in Base 10?

4gjm
Every base is base 10. (There is no prime number ending with a 2 in binary. Other than that, you're fine.)

I think the original poster would have agreed to this even before they had the realisation. The point here is that, even when you do listen to an explanation, the absurdity bias can still mislead you.

The lady in the story had an entire conversation about evolution and still rejected it as absurd. Some ideas simply take more than 20 minutes to digest, understand and learn about. Therefore, after 20 minutes of conversation, you cannot reasonably conclude that you've heard everything there is. You cannot reasonably conclude that you wouldn't be convinced by more evidence.

It's just like any bias really. Even when you know about it and you think you've adjusted sufficiently, you probably haven't.

1Jiro
I think the unbelievability of evolution has been greatly exaggerated. People believe that diseases are caused by living things that they can't even see. They believe that you can destroy a city with enough uranium to fit into a car. They believe that burning fuel hundreds of miles away produces this stuff that comes through copper wires to their home and makes their refrigerator run. Evolution is not more unbelievable than those. It's likely that in most cases where someone "didn't digest and learn about" evolution, they are rejecting it because it conflicts with something they already believe for other reasons, and "it's just plain unbelievable" is an excuse, not a reason. I suspect that if you went up to a Christian Scientist and explained germ theory to him, he'd tell you it's unbelievable in the same way that literalist Christians or Muslims would tell you that evolution is unbelievable. Yet plenty of people whose religions don't contradict germ theory, but who haven't studied the science either, find it perfectly believable.
4gjm
I agree with all of that. But there's a limit to how much effort you can reasonably be expected to put into considering whether something that seems absurd to you is really not-absurd. I suggest that that depends on what other evidence there is for its non-absurdity. E.g., in the case of evolution, it's highly relevant that it's endorsed by the great majority of biologists, including biologists belonging to religions whose traditions contain stories that prima facie conflict with evolution. There are a lot of super-smart Christians too, which I think it's reasonable to take as evidence that Christianity can't rightly be dismissed simply because its tradition contains a story about a talking snake. On the other hand, there aren't so many super-smart talking-snake-believers -- even among Christians, most[1] of the cleverest and most educated don't take the story as indicating that there was ever a talking snake -- which suggests that treating a literal reading of the talking-snake story as absurd probably isn't unreasonable. [1] Though certainly not all.

I think this just underscores the original post's point.

The lesson here isn't that Christians are probably right or that Christians are probably wrong. The lesson here is that you can go very wrong by relying on the absurdity heuristic. And that that's true even when the claim seems really absurd.

Let's take a hypothetical atheist who really does think that all Christians believe in the literal word of the Bible. This atheist might reject the whole of Christianity because of the absurdity of talking snakes. Having rejected the entire school of thought that... (read more)

I think you might be deflecting the main point here. Possibly without realising it.

You have a better opportunity to practice your skills as a rationalist if you respond to the [least convenient](http://tinyurl.com/LWleastconvenient) possible interpretation of this comment.

I would propose that the "experts" being referred to are experts in debating the existence of God. ie of all the arguments that have ever been put forward for the existence of God, these are the people who know the most compelling ones. The most rationally compelling, logica... (read more)

Exactly!

To demonstrate in this way that the absurdity heuristic is useful, you would have to claim something like:

The ratio of false absurd claims (that you are likely to encounter) to true absurd claims (that you are likely to encounter) is much higher than the ratio of false non-absurd claims (that you are likely to encounter) to true non-absurd claims (that you are likely to encounter).

EDIT: wow. I'm the person who wrote that, and I still find it hard to read. This is one of the reasons why rationality is hard. Even when you have a good intuition for the concepts, it's still hard to express the ideas in a concrete way.
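For what it's worth, here's one way to restate that claim in odds form (my notation, not part of the original comment). Writing A for "the claim strikes you as absurd", and F and T for "false" and "true", the claim is

$$\frac{P(F \mid A)}{P(T \mid A)} \;\gg\; \frac{P(F \mid \neg A)}{P(T \mid \neg A)},$$

which, after dividing the prior odds P(F)/P(T) out of both sides, is equivalent to

$$\frac{P(A \mid F)}{P(A \mid T)} \;\gg\; \frac{P(\neg A \mid F)}{P(\neg A \mid T)},$$

i.e. the absurdity reaction has to be a much stronger indicator of falsehood than its absence is.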

I agree with AlexanderRM.

You stated that some of the autistic people you know are significantly different from most humans. That's in line with the original content, not a counter-argument to it.

And with that said, I'm not sure I'm happy being in a conversation about how "different" a group of people is from normal people. It's hard to know how that will be taken by the people involved, and it may not be a nice feeling to read it.

I think you're right. That squeamishness is very much a product of you having grown up as not-an-octopus.

Most creatures taste with an organ that's at the top of their digestive tract, so it's fairly sensible that they have an aversion to tasting anything that would be unhealthy for them to consume.

A species that had always had a chemical-composition-sense on all of its limbs would almost certainly have a very different relationship with that sense than we have with taste.

1CCC
Hmmm. Fair enough. But even if they're not squeamish about it, it would make sense for them to select the material from which they make their walkways according to flavour (among other factors, such as strength and durability).

I think this might be the bias in action yet again.

Our idea of an alien experience is to taste with a different part of our bodies? That's certainly more different-from-human than most rubber-forehead aliens, but "taste" is still a pretty human-familiar experience. There are species with senses that we don't have at all, like a sensitivity to magnetism or electric fields.

TV Tropes calls that the "Planet of Hats". (Visit TV Tropes at your own peril; it's a notorious time sink.)

I think it represents a different fallacy: to assume that an unfamiliar group of things (or people) are much more homogeneous than they really are. And more specifically: to assume that a culture or group of things is entirely defined by the things that make them different from us.

Yes, of course there are many good reasons why writers do this. Reasons why, for a writer, it can be good to do this, in addition to just being difficult to avoid.

But I don't think that's really the point. We're not here to critique science fiction. We're not TV critics. We're trying to learn rationality techniques to help us "win" whatever we're trying to win. And this is a fairly good description of a certain kind of bias.

You're right though. Sci-fi is a good example to demonstrate what the bias is, but not a great example to demonstrate why it's important.

This is fair, because you're using the technique to redirect us back to the original morality issue.

But I also don't think that MBlume was completely evading the question either. The question was about ethical principles, and his response does represent an exploration of ethical principles. MBlume suggests that it's more ethical to sacrifice one of the lives that was already in danger than to sacrifice an uninvolved stranger. (Remember, from a strict utilitarian view, both solutions leave one person dead, so this is definitely a different moral principle.)

This technique is good for stopping people from evading the question. But some evasions are more appropriate than others.

0abramdemski
Agreed.

Okay, well let's apply exactly the technique discussed above:

If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and that you will certainly be maximally happy inside the box: do you step into the box then?

Note: I'm asking that in order to give another example of the technique in action. But still feel free to give a real answer if you'd choose to.

Since you didn't answer the question one way or another, I can't apply the second technique here. I can't ask what would have to change in order for you to change your answer.

-2thrawnca
This would depend on my level of trust in Omega (why would I believe it? Because Omega said so. Why believe Omega? That depends on how much Omega has demonstrated near-omniscience and honesty). And in the absence of Omega telling me so, I'm rather skeptical of the idea.
2Jiro
What if we ignore the VR question? Omega tells you that killing and eating your children will make you maximally happy. Should you do it? Omega can't tell you that doing X makes you maximally happy unless doing X actually makes you maximally happy. And a scenario where doing X actually makes you maximally happy may be a scenario where you are no longer human and don't have human preferences. Omega could, of course, also say "you are mistaken when you conclude that being maximally happy in this scenario is not a human preference". However, 1. This conclusion that that is not a human preference is being made by you, the reader, not just by the person in the scenario. It is not possible to stipulate that you, the reader, are wrong about your analysis of some scenario. 2. Even within the scenario, if someone is mistaken about something like this, it's a scenario where he can't trust his own reasoning abilities, so there's really nothing he can conclude about anything at all. (What if Omega tells you that you don't understand logic and that every use of logic you think you have done was either wrong or true only by coincidence?)

Yes, that's definitely true. If you know a little, or a lot, about genetics, then the theory is falsifiable.

I think it still works just fine as an example though. The goal was to explain the meaning and the importance of falsifiability. Spottiswood's theory, as presented and as it was being used, wasn't making any useful predictions. No one was looking at familial comparisons, and I implied that Spottiswood wasn't making any effort to identify the gene, so the only observations that were coming in were "person lives", or "person dies". Wi... (read more)

It's an appealing notion, but I think the logic doesn't hold up.

In simplest terms: if you apply this logic and choose to cooperate, then the machine can still defect. That will net more paperclips for the machine, so it's hard to claim that the machine's actions are irrational.

Although your logic is appealing, it doesn't explain why the machine can't defect while you co-operate.

You said that if both agents are rational, then option (C,D) isn't possible. The corollary is that if option (C,D) is selected, then one of the agents isn't being rational. If this... (read more)
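A toy illustration of that point (the payoff numbers below are the standard Prisoner's Dilemma values, chosen by me for illustration; the post doesn't specify any): whatever you choose, the machine ends up with more paperclips by defecting, so an argument that a rational machine must cooperate when you do needs something extra to go through.

```python
# Entries are (your payoff, machine's paperclips) for (your move, machine's move).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),  # you cooperate, the machine defects: its best outcome
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

for your_move in ("C", "D"):
    best_reply = max(("C", "D"), key=lambda m: payoffs[(your_move, m)][1])
    print(f"If you play {your_move}, the machine's paperclip-maximising reply is {best_reply}")
# Defecting earns the machine more paperclips against either of your moves,
# which is why the machine defecting while you cooperate still needs to be explained away.
```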

There's a lot of value in that. Sometimes it's best not to go down the rabbit hole.

Whatever the technicalities might be, the jester definitely followed the normal, reasonable rules of this kind of puzzle, and by those rules he got the right answer. The king set it up that way, and set the jester up to fail.

If he'd done it to teach the jester a valuable lesson about the difference between abstract logic and real life, then it might have been justified. But he's going to have the jester executed, so that argument disappears.

I think we can all agree, The King is definitely a dick.

1jmh
I'll somewhat echo what CynicalOptimist wrote. I think the message is one any first-semester logic student should have been taught or know: a valid argument is not necessarily true. The validity of an argument's conclusion is all about the form of the argument. The truth of the conclusion is an external fact existing completely independent of the argument's structure.
1mamert
I'm trying to stay levelheaded about King Richard. What I meant was that there seem to be extraneous details here - about the order things were done in, first inscribe ("key is here", on an empty(?) box), then put dagger in, or that it was written, not spoken. Many comments only reinforce the importance of that. The "real" answer seems to be one that effectively makes all kinds of communication useless, and what I've spent so much time on was trying to pin down the borders of this insanity, some marker saying "abstract logic application to real life* not allowed past this point". *) the use of physical boxes binding the riddle to "real life"

I don't really recommend talking to a bunch of children and deliberately spreading the message "some of you just suck at most things".

There are positive and valuable ways to teach the lesson that people aren't all equally "good at stuff", but it's a tough one to communicate well. It's not a good thing to bring up casually as an example when you're talking about something else.

Incidentally, I think that you're proposing a test for susceptibility to the medicine. The relevant theory here is that any person who would be killed by a full dose would also be harmed, but not killed, by a much smaller dose. That's a perfectly testable, falsifiable theory, but I don't think it would directly test the claim that the cause is genetic.

A better test for genetic causes, is to look at family relationships. If we believe the cause is genetic, then we predict that people who are more closely related to each other, are more likely to have t... (read more)

Absolutely.

If the first scientist can come up with a way to test his theory, then it would probably make his theory more useful. It would also make it more falsifiable.

I think it would be great to start with a theory that sounds very scientific, but is unfalsifiable, and therefore useless. Then we modify the theory to include an element that is falsifiable, and the theory becomes much more useful.

For example, we have a new kind of medicine, and it is very good for some people, but when other people take the medicine it kills them. Naturally, we want to know who would be killed by the medicine, and who would be helped by it.

A scientist has a theory. He believes there is a gene that he calls the "Spottiswood gene"... (read more)

0HungryHobo
The first theory is falsifiable as long as you're willing to let enough people die. Collect blood samples from everyone before they take the medicine. Sequence their full exome and put it on file. Once you have a few thousand dead and a few thousand survivors you should be able to narrow candidates down to a few dozen genes. Make predictions about who will die out of the next few hundred who take the pill, bam. Turns out it's an eye color gene having some weird effect on a vital pathway that the drug is linked to. Alternatively, if it's not genetic at all, if single members of pairs of twins taking the drug died at rates inconsistent with the expected numbers of mutations between twins, then we could be pretty sure it's not genetic. Or perhaps it's only partially genetic; again, twins and siblings would let us work this out. Seems pretty falsifiable.
0johnlawrenceaspden
Couldn't Spottiswood make a gene-detector by feeding the medicine in tiny, tiny amounts and seeing whether you just died a bit? Could be way useful.

I think you're saying that all the cases described above could be expressed as a mix of other fallacies, and therefore it's not a distinct fallacy in its own right?

I think a better question is "If we think of this class of mistake as a specific named fallacy, will it help us to spot errors of reasoning that we would otherwise have missed? Or alternatively, help us to talk about errors of reasoning that we've noticed."

If it can be expressed in terms of other fallacies, but these mistakes aren't immediately obvious as examples of those fallacies, then it can... (read more)

"We have a pretty stupid banking system if you can..."

Yes, we do.

It's a complicated system that developed slowly, piece by piece, influenced by legislation, commercial pressures, other (contradictory) commercial pressures, and customers' needs. The need for backwards compatibility makes it impossible to rip up the old system and start again, and no one person is in charge of designing it. Naturally it's messed up and has inconsistencies.

---Meta comment: At first I was writing this with the intention of saying, basically: "Duh! isn't that ob... (read more)

Suffice to say: There are many different methods for sending money. Some of them will involve paper forms, some will not. Some of them involve the internet, some will not. And each one has its own rules.

"Maybe the scammer wants the part of their money returned using a different method (one that does not allow cancelling, or has shorter deadlines)"

This is essentially correct. I've read about similar scams, and I believe this was how they worked.

"I think it can be taken for granted that people on this site have an elevated sense of skepticism"

I disagree. Being a participant on this site implies that one has accepted some or all of the central premises of the community: that we can significantly improve our lives by thinking differently, that we should be willing to think and behave in ways that are very counter-intuitive to the average person, and that we can learn to do all of this by reading and talking on a website.

A great many 'normal' people would dismiss Less Wrong as a silly ventu... (read more)

I can't speak for Eliezer's intentions when he wrote this story, but I can see an incredibly simple moral to take away from this. And I can't shake the feeling that most of the commenters have completely missed the point.

For me, the striking part of this story is that the Jester is shocked and confused when they drag him away. "How?!" He says "It's logically impossible". The Jester seems not to understand how it is possible for the dagger to be in the second box. My explanation goes as follows, and I think I'm just paraphrasing the king... (read more)

0mamert
Breaking #24 of the Evil Overlord List makes me wince, too, even if it's a jester doing it. Not sure if that's the main point, though, but then, none of the proposed explanations for how the king could pull his "riddle" off without at any point lying feel entirely right to me, so, unless someone offers to help me, I shall have to take your advice and not let myself get entangled in the "complex and detailed logic", when the answer might as well be "BS".

Now I want to try having a watch that randomly speeds up, and slows down, within preset limits. So that at any point I could be as many as 5 minutes ahead, or 5 minutes behind.

That would probably get me used to showing up a few minutes early to everything.

I might be missing something here.

These seem to be application forms to lease or purchase land that belongs to a railway-related organization?

Land that belongs to a railway-related organization isn't necessarily part of a railway. The land could be disused office-space, parking lots, or warehouses.

Completely Ad-Hoc proposal:

Ethics are very, very heavily influenced by one consideration: other people's opinions. It may not be consciously admitted, but when people are faced with an ethical conundrum, I think they make a decision that's based on the question "What will people think of me?". (The internalized version is: "What will I think of myself?" / "Will I be able to look at myself in the mirror?").

The question here relates to letting 5 people die (by inaction) or killing one person (by taking action). If you pick the second... (read more)
