This seems to me like an orthogonal question. (A question that can be entirely separated from the cryonics question.)
You're talking about whether you are a valuable enough individual that you can justify resources being spent on maintaining your existence. That's a question that can be asked just as easily even if you have no concept of cryonics. For instance: if your life depends on getting medical treatment that costs a million dollars, is it worth it? Or should you prefer that the money be spent on saving other lives more efficiently?
(Inc...
I think I've got a good response for this one.
My non-episodic memory contains the "facts" that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren't an interesting band. My boyfriend's non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd's music is transcendentally good).
Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were i...
Very interesting. I'm going to try my hand at a short summary:
Assume that you have a number of different options you can choose from, that you want to estimate the value of each option, and that you have to make your best guess as to which option is most valuable. In step one, you generate individual estimates using whatever procedure you think is best. In step two, you make the final decision by choosing the option that had the highest estimate in step one.
The point is: even if you have unbiased procedures for creating the individual estimates in step one (ie procedur...
Well in some circumstances, this kind of reasoning would actually change the decision you make. For example, you might have one option with a high estimate and very high confidence, and another option with an even higher estimate, but lower confidence. After applying the approach described in the article, those two options might end up switching position in the rankings.
BUT: Most of the time, I don't think this approach will make you choose a different option. If all other factors are equal, then you'll probably still pick the option that has the highest e...
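Here's a minimal sketch (my own construction, with made-up numbers) of the effect being summarised above: even when every individual estimate is unbiased, the estimate attached to whichever option you select tends to be too high.

```python
# Simulate the "pick the option with the highest estimate" procedure.
# All options have the same true value; only the noisy estimates differ.
import random

random.seed(0)

NUM_OPTIONS = 5
TRUE_VALUE = 10.0   # every option is actually worth exactly this much
NOISE_SD = 2.0      # unbiased, zero-mean estimation error
TRIALS = 100_000

selected_estimate_total = 0.0
for _ in range(TRIALS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(NUM_OPTIONS)]
    selected_estimate_total += max(estimates)  # step two: pick the highest estimate

print("true value of any option:         ", TRUE_VALUE)
print("average estimate of chosen option:", selected_estimate_total / TRIALS)
# The second number comes out noticeably above 10, even though each
# individual estimate was unbiased.
```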
I think there's some value in that observation that "the all 45 thing makes it feel like a trick". I believe that's a big part of why this feels like a paradox.
If you have a box with the numbers "60" and "20" as described above, then I can see two main ways that you could interpret the numbers:
A: The number of coins in this box was drawn from a probability distribution with a mean of 60, and a range of 20.
B: The number of coins in this box was drawn from an unknown probability distribution. Our best estimate of the number of c...
I think that RobbBB has already done a great job of responding to this, but I'd like to have a try at it too. I'd like to explore the math/morality analogy a bit more. I think I can make a better comparison.
Math is an enormous field of study. Even if we limited our concept of "math" to drawing graphs of mathematical functions, we would still have an enormous range of different kinds of functions: Hyperbolic, exponential, polynomial, all the trigonometric functions, etc. etc.
Instead of comparing math to morality, I think it's more illustrative to ...
But if you do care about your wishes being fulfilled safely, then safety will be one of the things that you want, and so you will get it.
So long as your preferences are coherent, stable, and self-consistent then you should be fine. If you care about something that's relevant to the wish then it will be incorporated into the wish. If you don't care about something then it may not be incorporated into the wish, but you shouldn't mind that: because it's something you don't care about.
Unfortunately, people's preferences often aren't coherent and stable. For in...
I like this style of reasoning.
Rather than taking some arbitrary definition of black boxes and then arguing about whether they apply, you've recognised that a phrase can be understood in many ways, and we should use the word in whatever way most helps us in this discussion. That's exactly the sort of rationality technique we should be learning.
A different way of thinking about it though, is that we can remove the confusing term altogether. Rather than defining the term "black box", we can try to remember why it was originally used, and look for a...
"if the Pump could just be made to sense the proper (implied) parameters."
You're right, this would be an essential step. I'd say the main point of the post was to talk about the importance, and especially the difficulty, of achieving this.
Re optimisation for use: remember that this involves a certain amount of trial and error. In the case of dangerous technologies like explosives, firearms, or high speed vehicles, the process can often involve human beings dying, usually in the "error" part of trial and error.
If the technology in questi...
I agree, just because something MIGHT backfire, it doesn't mean we automatically shouldn't try it. We should weigh up the potential benefits and the potential costs as best we can predict them, along with our best guesses about the likelihood of each.
In this example, of course, the lessons we learn about "genies" are supposed to be applied to artificial intelligences.
One of the central concepts that Eliezer tries to express about AI is that when we get an AI that's as smart as humans, we will very quickly get an AI that's very much smarter than h...
I see where you're coming from on this one.
I'd only add this: if a genie is to be capable of granting this wish, it would need to know what your judgements were. It would need to understand them, at least as well as you do. This pretty much resolves to the same problem that Eliezer already discussed.
To create such a genie, you would either need to explain to the genie how you would feel about every possible circumstance, or you would need to program the genie so as to be able to correctly figure it out. Both of these tasks are probably a lot harder than they sound.
Can't agree with this enough.
Alternate answer:
If the Kremlin publicly announces a policy saying that they may reward soldiers who disobey orders in a nuclear scenario, then this raises the odds that a Russian official will refuse to launch a nuke - even when they have evidence that enemy nukes have already been fired on Russia.
(So far, so good. However...)
The problem is that it doesn't just raise the odds of disobedience, it also raises the perceived odds. ie it will make Americans think that they have a better chance of launching a first strike and "getting away wi...
It may be an uncommon scenario, but it's the scenario that's under discussion. We're talking about situations where a soldier has orders to do one thing, and believes that moral or tactical considerations require them to do something else - and we're asking what ethical injunctions should apply in that scenario.
To be fair, Jubilee wasn't very specific about that.
Yup! I agree completely.
If you were modeling an octopus-based sentient species, for the purposes of writing some interesting fiction, then this would be a nice detail to add.
Thank you. :)
I believe the idea was to ask "hypothetically, if I found out that this hypothesis was true, how much new information would that give me?"
You'll have two or more hypotheses, and one of them would (hypothetically) give you the least amount of new information. That one should be considered the "simplest" hypothesis. (assuming a certain definition of "simplest", and a certain definition of "information")
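A minimal sketch of that idea, using Shannon information: confirming a hypothesis you assigned probability p gives you -log2(p) bits, so the most probable hypothesis is the one that would deliver the least new information. The hypotheses and probabilities here are made up purely for illustration.

```python
import math

# Hypothetical prior probabilities for three competing hypotheses.
priors = {
    "hypothesis A": 0.60,
    "hypothesis B": 0.30,
    "hypothesis C": 0.10,
}

# Bits of new information you'd gain if each hypothesis were confirmed.
info = {h: -math.log2(p) for h, p in priors.items()}
simplest = min(info, key=info.get)

for h, bits in info.items():
    print(f"{h}: {bits:.2f} bits of new information if confirmed")
print("simplest (least new information):", simplest)
```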
This is excellent advice.
I'd like to add though, that the original phrase was "algorithms that make use of gut feelings... ". This isn't the same as saying "a policy of always submitting to your gut feelings".
I'm picturing a decision tree here: something that tells you how to behave when your gut feeling is "I'm utterly convinced" {Act on the feeling immediately}, vs how you might act if you had feelings of "vague unease" {continue cautiously, delay taking any steps that constitute a major commitment, while you try...
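A minimal sketch of the kind of decision tree I'm picturing - the feeling labels and recommended actions are my own made-up examples, not anything from the original comment:

```python
def policy_for_gut_feeling(feeling: str) -> str:
    """Map the strength of a gut feeling to a recommended way of proceeding."""
    if feeling == "utterly convinced":
        return "act on the feeling immediately"
    if feeling == "vague unease":
        return "continue cautiously; delay any major commitments while you investigate"
    if feeling == "no strong feeling either way":
        return "fall back on explicit analysis of the options"
    return "unrecognised feeling: slow down and work out what it is before acting"

print(policy_for_gut_feeling("vague unease"))
```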
I think this is the basis of good Business Analysis. A field I'm intending to move into.
It's the very essence of "Hold off on proposing solutions".
This is perfectly true. But it doesn't much matter, because the point here is that when these people reject the idea of evolution, for these kinds of reasons, they use feelings of "absurdity" as a metric - without critically assessing the reasons why they feel that way.
The point here isn't that the lady was using sound and rational reasoning skills. The contention is that her style of reasoning was something a rationalist shouldn't want to use - and that it was something the author no longer wants to use in their own thinking.
Oh absolutely. We don't have time to thoroughly investigate the case for every idea we come across. There comes a time when you say that you're not interested in exploring an idea any further.
But there is an intellectual honesty to admitting that you haven't heard all of the evidence, and acknowledging that you might conceivably have changed your mind (or at least significantly changed your probability estimates) if you had done more research.
And there's a value to it as well. Some ideas have been thoroughly researched and should be labelled in our minds as ...
I think that absurdity, in this sense, is just an example of Occam's Razor / Bayesian rationality in practice. If something has a low prior, and we've no evidence that would make us raise our probability estimates, then we should believe that the idea probably isn't true.
I've always assumed that the absurdity bias was a tendency to do something slightly different. In this context, absurdity is a measure of how closely an idea conforms to our usual experiences. It's a measure of how plausible an idea feels to our gut. By this definition, absurdity is being...
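A minimal sketch of the Bayesian reading above, with illustrative numbers of my own: a claim with a low prior stays improbable unless the evidence actually favours it.

```python
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes' rule for a single binary hypothesis."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

prior = 0.001  # an "absurd" claim: low prior probability

# Evidence that is equally likely whether the claim is true or false
# leaves the probability exactly where it started.
print(posterior(prior, 0.5, 0.5))   # ~0.001

# Evidence 20x more likely under the claim is what it takes to move it.
print(posterior(prior, 0.8, 0.04))  # ~0.02
```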
Incidentally, does this prime number have to be expressed in Base 10?
I think the original poster would have agreed to this even before they had the realisation. The point here is that, even when you do listen to an explanation, the absurdity bias can still mislead you.
The lady in the story had an entire conversation about evolution and still rejected it as absurd. Some ideas simply take more than 20 minutes to digest, understand and learn about. Therefore after 20 minutes of conversation, you cannot reasonably conclude that you've heard everything there is. You cannot reasonably conclude that you wouldn't be convinced by more evidence.
It's just like any bias really. Even when you know about it and you think you've adjusted sufficiently, you probably haven't.
I think this just underscores the original post's point.
The lesson here isn't that Christians are probably right or that Christians are probably wrong. The lesson here is that you can go very wrong by relying on the absurdity heuristic. And that that's true even when the claim seems really absurd.
Let's take a hypothetical atheist who really does think that all Christians believe in the literal word of the Bible. This atheist might reject the whole of Christianity because of the absurdity of talking snakes. Having rejected the entire school of thought that...
I think you might be deflecting the main point here. Possibly without realising it.
You have a better opportunity to practice your skills as a rationalist if you respond to the [least convenient](http://tinyurl.com/LWleastconvenient) possible interpretation of this comment.
I would propose that the "experts" being referred to are experts in debating the existence of God. ie of all the arguments that have ever been put forward for the existence of God, these are the people who know the most compelling ones. The most rationally compelling, logica...
Exactly!
To demonstrate in this way that the absurdity heuristic is useful, you would have to claim something like:
The ratio of false absurd claims (that you are likely to encounter) to true absurd claims (that you are likely to encounter) is much higher than the ratio of false non-absurd claims (that you are likely to encounter) to true non-absurd claims (that you are likely to encounter).
EDIT: wow. I'm the person who wrote that, and I still find it hard to read. This is one of the reasons why rationality is hard. Even when you have a good intuition for the concepts, it's still hard to express the ideas in a concrete way.
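To make the two ratios in that claim concrete, here's a minimal sketch with made-up counts (the numbers are purely illustrative):

```python
claims = {
    # (absurd or not, true or not): number of claims you're likely to encounter
    ("absurd", "true"):      1,
    ("absurd", "false"):     99,
    ("non-absurd", "true"):  60,
    ("non-absurd", "false"): 40,
}

absurd_ratio = claims[("absurd", "false")] / claims[("absurd", "true")]
non_absurd_ratio = claims[("non-absurd", "false")] / claims[("non-absurd", "true")]

print("false:true among absurd claims:    ", absurd_ratio)      # 99.0
print("false:true among non-absurd claims:", non_absurd_ratio)  # ~0.67
# If the first ratio really is much higher than the second, then the
# absurdity heuristic is doing useful work.
```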
I agree with AlexanderRM.
You stated that some of the autistic people you know are significantly different from most humans. That's in line with the original content, not a counter-argument to it.
And with that said, I'm not sure I'm happy being in a conversation about how "different" a group of people is from normal people. It's hard to know how that will be taken by the people involved, and it may not be a nice feeling to read it.
I think you're right. That squeamishness is very much a product of you having grown up as not-an-octopus.
Most creatures taste with an organ that's at the top of their digestive tract, so it's fairly sensible that they have an aversion to tasting anything that would be unhealthy for them to consume.
A species that had always had a chemical-composition sense on all of its limbs would almost certainly have a very different relationship with that sense than we have with taste.
I think this might be the bias in action yet again.
Our idea of an alien experience is to taste with a different part of our bodies? That's certainly more different-from-human than most rubber-forehead aliens, but "taste" is still a pretty human-familiar experience. There are species with senses that we don't have at all, like a sensitivity to magnetism or electric fields.
TV Tropes calls that the "Planet of Hats". (Visit TV Tropes at your own peril; it's a notorious time sink.)
I think it represents a different fallacy: to assume that an unfamiliar group of things (or people) is much more homogeneous than it really is. And more specifically: to assume that a culture or group of things is entirely defined by the things that make them different from us.
Yes, of course there are many good reasons why writers do this - reasons why, for a writer, it can be a positive choice, in addition to simply being difficult to avoid.
But I don't think that's really the point. We're not here to critique science fiction. We're not tv critics. We're trying to learn rationality techniques to help us "win" whatever we're trying to win. And this is a fairly good description of a certain kind of bias.
You're right though. Sci-fi is a good example to demonstrate what the bias is, but not a great example to demonstrate why it's important.
This is fair, because you're using the technique to redirect us back to the original morality issue.
But I don't think that MBlume was completely evading the question either. The question was about ethical principles, and his response does represent an exploration of ethical principles. MBlume suggests that it's more ethical to sacrifice one of the lives that was already in danger, than to sacrifice an uninvolved stranger. (remember, from a strict utilitarian view, both solutions leave one person dead, so this is definitely a different moral principle.)
This technique is good for stopping people from evading the question. But some evasions are more appropriate than others.
Okay, well let's apply exactly the technique discussed above:
If the hypothetical Omega tells you that there is indeed a maximum value for happiness, and you will certainly be maximally happy inside the box: do you step into the box then?
Note: I'm asking that in order to give another example of the technique in action. But still feel free to give a real answer if you'd choose to.
Since you didn't answer the question one way or another, I can't apply the second technique here. I can't ask what would have to change in order for you to change your answer.
Yes, that's definitely true. If you know a little, or a lot, about genetics, then the theory is falsifiable.
I think it still works just fine as an example though. The goal was to explain the meaning and the importance of falsifiability. Spottiswood's theory, as presented and as it was being used, wasn't making any useful predictions. No one was looking at familial comparisons, and I implied that Spottiswood wasn't making any effort to identify the gene, so the only observations that were coming in were "person lives", or "person dies". Wi...
It's an appealing notion, but I think the logic doesn't hold up.
In simplest terms: if you apply this logic and choose to cooperate, then the machine can still defect. That will net more paperclips for the machine, so it's hard to claim that the machine's actions are irrational.
Although your logic is appealing, it doesn't explain why the machine can't defect while you co-operate.
You said that if both agents are rational, then option (C,D) isn't possible. The corollary is that if option (C,D) is selected, then one of the agents isn't being rational. If this...
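A minimal sketch of the payoff structure being discussed; the numbers are made up and only their ordering matters. Whatever you choose, the machine ends up with strictly more paperclips by defecting, which is why the argument above can't rule out (C,D).

```python
PAYOFFS = {
    # (your move, machine's move): (your payoff, machine's paperclips)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),  # you cooperate, the machine defects: its best outcome
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# For either of your moves, the machine gains paperclips by defecting.
for your_move in ("C", "D"):
    if_machine_cooperates = PAYOFFS[(your_move, "C")][1]
    if_machine_defects = PAYOFFS[(your_move, "D")][1]
    assert if_machine_defects > if_machine_cooperates
    print(your_move, "->", if_machine_cooperates, "vs", if_machine_defects)
```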
There's a lot of value in that. Sometimes it's best not to go down the rabbit hole.
Whatever the technicalities might be, the jester definitely followed the normal, reasonable rules of this kind of puzzle, and by those rules he got the right answer. The king set it up that way, and set the jester up to fail.
If he'd done it to teach the jester a valuable lesson about the difference between abstract logic and real life, then it might have been justified. But he's going to have the jester executed, so that argument disappears.
I think we can all agree, The King is definitely a dick.
I don't really recommend talking to a bunch of children and deliberately spreading the message "some of you just suck at most things".
There are positive and valuable ways to teach the lesson that people aren't all equally "good at stuff", but it's a tough one to communicate well. It's not a good thing to bring up casually as an example when you're talking about something else.
Incidentally, I think that you're proposing a test for susceptibility to the medicine. The relevant theory here is that any person who would be killed by a full dose would also be harmed, but not killed, by a much smaller dose. That's a perfectly testable, falsifiable theory, but I don't think it would directly test the claim that the cause is genetic.
A better test for genetic causes is to look at family relationships. If we believe the cause is genetic, then we predict that people who are more closely related to each other are more likely to have t...
Absolutely.
If the first scientist can come up with a way to test his theory, then it would probably make his theory more useful. It would also make it more falsifiable.
I think it would be great to start with a theory that sounds very scientific, but is unfalsifiable, and therefore useless. Then we modify the theory to include an element that is falsifiable, and the theory becomes much more useful.
For example, we have a new kind of medicine, and it is very good for some people, but when other people take the medicine it kills them. Naturally, we want to know who would be killed by the medicine, and who would be helped by it.
A scientist has a theory. He believes there is a gene that he calls the "Spottiswood gene"...
I think you're saying that all the cases described above could be expressed as a mix of other fallacies, and therefore it's not a distinct fallacy in its own right?
I think a better question is "If we think of this class of mistake as a specific named fallacy, will it help us to spot errors of reasoning that we would otherwise have missed? Or alternatively, help us to talk about errors of reasoning that we've noticed."
If it can be expressed in terms of other fallacies, but these mistakes aren't immediately obvious as examples of those fallacies, then it can...
"We have a pretty stupid banking system if you can..."
Yes, we do.
It's a complicated system that developed slowly, piece by piece, influenced by legislation, commercial pressures, other (contradictory) commercial pressures, and customers' needs. The need for backwards compatibility makes it impossible to rip up the old system and start again, and no one person is in charge of designing it. Naturally it's messed up and has inconsistencies.
---Meta comment: At first I was writing this with the intention of saying, basically: "Duh! isn't that ob...
Suffice to say: There are many different methods for sending money. Some of them will involve paper forms, some will not. Some of them involve the internet, some will not. And each one has its own rules.
"Maybe the scammer wants the part of their money returned using a different method (one that does not allow cancelling, or has shorter deadlines)"
This is essentially correct. I've read about similar scams, and I believe this was how they worked.
"I think it can be taken for granted that people on this site have an elevated sense of skepticism"
I disagree. Being a participant on this site implies that one has accepted some or all of the central premises of the community: that we can significantly improve our lives by thinking differently, that we should be willing to think and behave in ways that are very counter-intuitive to the average person, and that we can learn to do all of this by reading and talking on a website.
A great many 'normal' people would dismiss Less Wrong as a silly ventu...
I can't speak for Eliezer's intentions when he wrote this story, but I can see an incredibly simple moral to take away from this. And I can't shake the feeling that most of the commenters have completely missed the point.
For me, the striking part of this story is that the Jester is shocked and confused when they drag him away. "How?!" he says. "It's logically impossible." The Jester seems not to understand how it is possible for the dagger to be in the second box. My explanation goes as follows, and I think I'm just paraphrasing the king...
Now I want to try having a watch that randomly speeds up, and slows down, within preset limits. So that at any point I could be as many as 5 minutes ahead, or 5 minutes behind.
That would probably get me used to showing up a few minutes early to everything.
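Here's a minimal sketch (my own construction) of the drifting watch described above: the displayed time wanders randomly but is always clamped to within 5 minutes of the true time, in either direction.

```python
import random

MAX_OFFSET = 5 * 60  # maximum drift, in seconds

def simulate_offsets(steps: int, step_sd: float = 10.0):
    """Random-walk the watch's offset, clamping it to the ±5 minute window."""
    offset = 0.0
    history = []
    for _ in range(steps):
        offset += random.gauss(0, step_sd)
        offset = max(-MAX_OFFSET, min(MAX_OFFSET, offset))
        history.append(offset)
    return history

offsets = simulate_offsets(1000)
print(f"final offset: {offsets[-1] / 60:+.1f} minutes from true time")
```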
I might be missing something here.
These seem to be application forms to lease or purchase land that belongs to a railway-related organization?
Land that belongs to a railway-related organization isn't necessarily part of a railway. The land could be disused office-space, parking lots, or warehouses.
Completely Ad-Hoc proposal:
Ethics are very, very heavily influenced by one consideration: other people's opinions. It may not be consciously admitted, but when people are faced with an ethical conundrum, I think they make a decision that's based on the question "What will people think of me?". (The internalized version is: "What will I think of myself?" / "Will I be able to look at myself in the mirror?").
The question here relates to letting 5 people die (by inaction) or killing one person (by taking action). If you pick the second...
This is good, but I feel like we'd better represent human psychology if we said:
Most people don't make a distinction between the concepts of "x has probability <0.1%" and "x is impossible".
I say this because I think there's an important difference between the times when people have a precise meaning in mind, which they've expressed poorly, and the times when people's actual concepts are vague and fuzzy. (Often, people don't realise how fuzzy their concepts are).