CynicalOptimist
CynicalOptimist has not written any posts yet.

This seems to me like an orthogonal question. (A question that can be entirely extricated and separated from the cryonics question).
You're talking about whether you are a valuable enough individual that you can justify resources being spent on maintaining your existence. That's a question that can be asked just as easily even if you have no concept of cryonics. For instance: if your life depends on getting medical treatment that costs a million dollars, is it worth it? Or should you prefer that the money be spent on saving other lives more efficiently?
(Incidentally, I know that utilitarianism generally favours the second option. But I would never blame anyone for choosing the first... (read more)
I think I've got a good response for this one.
My non-episodic memory contains the "facts" that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren't an interesting band. My boyfriend's non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd's music is transcendentally good).
Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were in my non-episodic memory. More than that, I would also lose my sense of self if I gained contradictory memories. I would need to have my non-episodic memories and not have the facts from my boyfriend's memory.
That's the reason why "off the shelf" doesn't sound suitable in this context.
Very interesting. I'm going to try my hand at a short summary:
Assume you have a number of different options to choose from, that you want to estimate the value of each option, and that you have to make your best guess as to which option is most valuable. In step one, you generate individual estimates using whatever procedure you think is best. In step two, you make the final decision by choosing the option that had the highest estimate in step one.
The point is: even if you have unbiased procedures for creating the individual estimates in step one (i.e. procedures that are equally likely to overestimate as to underestimate), biases will still be introduced in step two, when you're looking at the list of all the different estimates. Specifically, the highest estimate(s) are more likely to be overestimates, and the lowest estimate(s) are more likely to be underestimates.
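This selection effect shows up in a quick simulation. (A minimal sketch: the true value, noise level, and number of options below are all invented for illustration, not taken from the article.)

```python
import random

random.seed(0)

# Suppose 5 options all have the same true value, 100, and each
# step-one estimate is unbiased: the true value plus symmetric noise.
true_value = 100.0
n_options = 5
n_trials = 10_000

chosen_total = 0.0
for _ in range(n_trials):
    estimates = [true_value + random.gauss(0, 10) for _ in range(n_options)]
    chosen_total += max(estimates)  # step two: pick the highest estimate

# Each individual estimate averages 100, but the winning estimate
# averages noticeably more: the act of choosing the maximum
# systematically favours overestimates.
print(chosen_total / n_trials)
```

Even though every estimator is unbiased on its own, the estimate attached to the option you actually pick is biased upward, which is exactly the step-two effect described above.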
Well in some circumstances, this kind of reasoning would actually change the decision you make. For example, you might have one option with a high estimate and very high confidence, and another option with an even higher estimate, but lower confidence. After applying the approach described in the article, those two options might end up switching position in the rankings.
BUT: most of the time, I don't think this approach will make you choose a different option. If all other factors are equal, then you'll probably still pick the option with the highest expected value. I think the real lesson of this article is something else: understanding that the final result will probably be lower than your supposedly "unbiased" estimate. And once you understand that, you can budget accordingly.
I think there's some value in that observation that "the all 45 thing makes it feel like a trick". I believe that's a big part of why this feels like a paradox.
If you have a box with the numbers "60" and "20" as described above, then I can see two main ways that you could interpret the numbers:
A: The number of coins in this box was drawn from a probability distribution with a mean of 60, and a range of 20.
B: The number of coins in this box was drawn from an unknown probability distribution. Our best estimate of the number of coins in this box is 60, based on certain information... (read more)
I think that RobbBB has already done a great job of responding to this, but I'd like to have a try at it too. I'd like to explore the math/morality analogy a bit more. I think I can make a better comparison.
Math is an enormous field of study. Even if we limited our concept of "math" to drawing graphs of mathematical functions, we would still have an enormous range of different kinds of functions: Hyperbolic, exponential, polynomial, all the trigonometric functions, etc. etc.
Instead of comparing math to morality, I think it's more illustrative to compare math to the wider topic of "value-driven-behaviour".
An intelligent creature could have all sorts of different values. Even... (read 481 more words →)
But if you do care about your wishes being fulfilled safely, then safety will be one of the things that you want, and so you will get it.
So long as your preferences are coherent, stable, and self-consistent then you should be fine. If you care about something that's relevant to the wish then it will be incorporated into the wish. If you don't care about something then it may not be incorporated into the wish, but you shouldn't mind that: because it's something you don't care about.
Unfortunately, people's preferences often aren't coherent and stable. For instance an alcoholic may throw away a bottle of wine because they don't want to be tempted by it. Right now, they don't want their future selves to drink it. And yet they know that their future selves might have different priorities.
Is this the sort of thing you were concerned about?
I like this style of reasoning.
Rather than taking some arbitrary definition of "black box" and then arguing about whether it applies, you've recognised that a phrase can be understood in many ways, and that we should use the word in whichever way most helps us in this discussion. That's exactly the sort of rationality technique we should be learning.
A different way of thinking about it though, is that we can remove the confusing term altogether. Rather than defining the term "black box", we can try to remember why it was originally used, and look for another way to express the intended concept.
In this case, I'd say the point was: "Sometimes, we will use a... (read more)
"if the Pump could just be made to sense the proper (implied) parameters."
You're right, this would be an essential step. I'd say the main point of the post was to talk about the importance, and especially the difficulty, of achieving this.
Re optimisation for use: remember that this involves a certain amount of trial and error. In the case of dangerous technologies like explosives, firearms, or high-speed vehicles, the process can often involve human beings dying, usually in the "error" part of trial and error.
If the technology in question were a super-intelligent AI, smart enough to fool us and engineer whatever outcome best matched its utility function, then potentially we could find... (read more)
This is good, but I feel like we'd better represent human psychology if we said:
Most people don't make a distinction between the concepts of "x has probability <0.1%" and "x is impossible".
I say this because I think there's an important difference between the times when people have a precise meaning in mind, which they've expressed poorly, and the times when people's actual concepts are vague and fuzzy. (Often, people don't realise how fuzzy their concepts are).