Comments
Laura · 49

Sorry for the random reminiscence if you'd rather not read it, but this post reminded me so much of an incident that happened in my 10th grade English class. The grey-bearded teacher turned out the light and lit two candles. He began to speak in a breathy, mysterious voice: "Coleridge's metaphor of two candles which burn more brightly when brought together is so beautiful because it is also an optical reality." He brought the candles together. "See how they reach higher and burn brighter when they are near, like the two souls..." "That must be because of incomplete combustion!" I excitedly blurted out. "We just learned about this in chemistry! If you limit the amount of oxygen to the flame, you can't completely oxidize the hydrocarbons in the wax, so some carbon is released that reflects light. The candles have less access to oxygen when you bring them..." "Damn it, Laura, this is an English class! You've RUINED the effect!" I actually felt quite proud that I could "ruin" S.T. Coleridge...

Laura · 0

Wow, I played many times. I thought it was fun to pretend to be a character, but I never read the rule books and never owned the rule books... I must have missed something here.

Laura · 3

I would be interested in knowing whether your opinion would change if the "predictions" of the super-being were wrong 0.5% of the time, so that some small number of people ended up with the $1,001,000 and some ended up with nothing. Would you still one-box it?
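
For concreteness, here's a quick expected-value sketch. The numbers are my own illustrative assumptions: the standard $1,000,000 and $1,000 boxes, and a symmetric 0.5% error rate.

```python
# Expected-value sketch for Newcomb's problem with a fallible predictor.
# Illustrative assumptions: symmetric 0.5% error rate, the opaque box
# holds $1,000,000 when one-boxing was predicted, the visible box $1,000.
ACCURACY = 0.995

# One-boxing: you get the $1,000,000 only if the predictor foresaw it.
ev_one_box = ACCURACY * 1_000_000 + (1 - ACCURACY) * 0

# Two-boxing: you always get $1,000, plus $1,000,000 when the predictor erred.
ev_two_box = ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

print(f"EV(one-box): ${ev_one_box:,.0f}")  # $995,000
print(f"EV(two-box): ${ev_two_box:,.0f}")  # $6,000
```

By this naive calculation, one-boxing still wins by a wide margin even with a fallible predictor.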

Laura · 0

For Robin's statistics:
Given no other data but the choice, I would have to choose torture. If we don't know anything about the consequences of the blinking or how many times the choice is being made, we can't know that we are not causing huge amounts of harm. If the question deliberately eliminated these unknowns (i.e., the badness is limited to an eye-blink that does not immediately result in some disaster for someone or blindness for another, and you really are the one and only person ever making the choice), then I'd go with the dust specks. But these qualifications are huge when you consider 3^^^3. How can we say the eyeblink didn't distract a surgeon and cause a slip of his knife? Given enough trials, something like that is bound to happen.
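
To put rough numbers on the "bound to happen" intuition (both figures below are made up for illustration, and nothing can actually represent 3^^^3):

```python
from math import expm1, log1p

# Illustrative numbers only: p is a made-up per-blink disaster probability,
# and N is a stand-in for 3^^^3, which is far too large to represent.
p = 1e-20      # chance a single blink distracts a surgeon, say
N = 10**30     # stand-in trial count; the real 3^^^3 dwarfs it

expected_disasters = p * N  # 10 billion disasters in expectation
# P(at least one disaster) = 1 - (1 - p)^N, computed stably in log space.
prob_at_least_one = -expm1(N * log1p(-p))

print(f"Expected disasters: {expected_disasters:.3g}")  # 1e+10
print(f"P(>=1 disaster):    {prob_at_least_one}")       # indistinguishable from 1.0
```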

Laura · 3

Eliezer: "It's wrong when repeated because it's also wrong in the individual case. You just have to come to terms with scope sensitivity."

But determining whether a decision is right or wrong in the individual case requires that you be able to place a value on each outcome. We determine this value in part by using our knowledge of how frequently the outcomes occur and how much time, effort, and money it takes to prevent or assuage them. Knowing the frequency at which we can expect an event to occur is thus integral to assigning it a value in the first place. The reason it would be wrong in the individual case to tax everyone in the first world a penny to save one African child is that there are so many starving children that doing the same for each one would become very expensive. It would not be obviously wrong, however, if there were only one child in the world that needed rescuing. If people didn't die so frequently, the value of a life would increase because we could afford it to.
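
A back-of-envelope calculation with round, illustrative figures shows how fast the penny adds up:

```python
# Round illustrative figures, not sourced: ~1 billion first-world taxpayers,
# one penny taxed per person per rescued child.
penny = 0.01
taxpayers = 1_000_000_000

for children in (1, 1_000, 1_000_000, 10_000_000):
    per_taxpayer = penny * children   # what each person pays in total
    per_child = penny * taxpayers     # what each child's rescue raises
    print(f"{children:>12,} children: ${per_taxpayer:>12,.2f} per taxpayer, "
          f"${per_child:,.0f} raised per child")
```

Each individual rescue is lavishly funded either way; it is the per-taxpayer bill that explodes with frequency.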

People in a village might be willing to help pay the costs when someone's house burns down. If 20 houses in the village burned down, the people might still contribute, but it is unlikely they will contribute 20 times as much. If house-burning became a rampant problem, people might stop contributing entirely, because it would seem futile for them to do so. Is this necessarily scope insensitivity? Or is it reasonable to determine values based on frequencies we can realistically expect?

Laura · 10

I have experienced this problem before: the teacher assumes you have prior knowledge that you just do not have, and all of what he says afterwards assumes you've made the logical leap. I wonder to what extent thoughtful people will reconstruct the gaps in their knowledge by assuming the end conclusion is correct and working backwards to what they know, in order to give themselves a useful (but possibly incorrect) bridge from B to A. For example, I recently heard a horrible biochem lecture about using various types of protein sequence and domain homology to predict function and cellular localization. The idea that homology could be used to partially predict these things just seemed logical, and I think my brain simply ran with the idea, thought about how I would go about using the technique, and placed everything he said piecewise into that schema. When I actually started to question specifics at the end of the lecture, it became clear that I hadn't understood anything the man was saying beyond the words "homology" and "prediction"; I had just filled in what seemed logical to me. How dangerous is it to try to "catch up" when people take huge inferential leaps?

Laura · -2

Here's one for you: let's assume for argument's sake that "humans" could include human consciousnesses, not just breathing humans. Then, if a universe with 3^^^^3 "humans" actually existed, what would be the odds that they were NOT all copies of the same parasitic consciousness?

Laura · -1

To solve this problem, the AI would need to calculate the probability of the claim being true, for which it would need to calculate the probability of 3^^^^3 people even existing. Given what it knows about the origins and rate of reproduction of humans, wouldn't the probability of 3^^^^3 people existing be approximately 1/3^^^^3? It's as you said: multiply or divide it by the number of characters in the Bible, and it's still nearly the same damned incomprehensibly large number. Unless you are willing to argue that there are some bizarre properties of the other universe that would allow so many people to spontaneously arise from nothing, but this is yet another explanatory assumption, and one that I see no way of assigning a probability to.
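
A log-scale sketch makes the point, using a stand-in number that is still unimaginably smaller than 3^^^^3 (nothing that large fits in a float):

```python
from math import log10

# Stand-in magnitude: N = 10**(10**15), vastly smaller than 3^^^^3 but big
# enough to make the point; the Bible's length is a rough character count.
log10_N = 10**15
bible_chars = 3_000_000

shift = log10(bible_chars)  # dividing by the Bible shifts log10(N) by ~6.5
print(f"log10(N)         = {log10_N}")
print(f"log10(N / Bible) = {log10_N - shift:.1f}")
print(f"relative change  = {shift / log10_N:.1e}")  # ~6.5e-15
```

On this scale, "nearly the same damned number" is literal: the exponent moves by parts in 10^15.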

Laura · 0

Eliezer, thanks for the links. I think people are being sour grapes because it's so much easier to recognize what they might lose than to imagine what they could gain through immortality. It's such an unknown. But choosing death to avoid such unknowns would be a poor form of risk minimization, since it's irreversible. Do you have a link to material about why you believe you will achieve immortality?

Laura · 0

Sorry to go on about this topic, but it seems to me that a false dichotomy has developed in this thread between two ideas:

1) Death gives meaning to life.

2) Immortality is worth attempting/achieving.

I do not see why these ideas are at all mutually exclusive. Of course the idea that death gives ALL of the meaning to life would be incompatible with immortality, but certainly some of the transhumanists here must concede that it gives some meaning. Maybe the confusion is with the word "meaning." Many of the things that humans find meaningful in life, such as getting married, developing a career, and raising children, have developed their societal meaning within the confines of a short and finite life, and might even be absurd to pursue in similar ways given immortality. What would "till death do you part" mean without death? It would be ludicrous to make such a binding promise for an eternity entirely unfathomable. Choosing the one right person to raise children with would be unnecessary if you could reproduce indefinitely, and even your children would not be the same few special people if you had a multitude of them at all different ages.

Not that there is anything wrong with, or even worse about, having infinite partners, children, occupations, etc.; the point is just that the meaning we ascribe to these events would most definitely change.

Many people might not be receptive to these changes, and their conclusion that their imminent death gives meaning to their life is not so absurd as you all are claiming.
