Comment author: Gram_Stone 03 February 2015 10:09:34AM *  8 points [-]

I think that your edit clarified things for me substantially. I read the entire article that you linked. I regret my earlier post for reasons that you will hopefully see.

I have a relevant anecdote about a simpler situation. I was with two friends. The One thought that it would be preferable for there to be less and/or simpler technology in the world, and the Other thought that the opposite was true. The One believed that technology causes people to live meaningless lives, and the Other conceded that he believed this to be true but also believed that technology has so many other benefits that this is acceptable. The One would always cite examples of how technology was used for entertainment, and the Other, examples of how technology was used for work. I stepped in and pointed out the patterns in their respective examples. I said that there were times when I had wasted time by using technology. I pointed out that if a person were like the One, and thus felt that they were leading a less meaningful life by the use of technology, then they should stop. It would be harmful were I to prescribe that a person like the One indiscriminately use technology. I then said that, through technology, I was able to meet people similar to me, people whom I would be far less likely to meet in physical life, and with whom I could hold conversations that I could not hold in physical life. In this way, my life had been made more meaningful by technology. And so it would be harmful for someone to prescribe that I indiscriminately do not use technology.

I learned three things from this event:

1) I should look for third alternatives.

I definitely did not consider this enough in my original response to you, and I apologize. Just like it is not a matter of less technology vs. more technology, it is not necessarily a matter of 'Keep your old life,' vs. 'Start a new life.' Honestly, your 'vague tentative plans' sound like potential third alternatives. I would say keep thinking about those, and also feel good for thinking of and about them. I'd love to hear about them, however vague and tentative. Vaniver touched on this. I would say that he found a third alternative in his own life. I'm bisexual; in physical life, I'm selective about whom I tell, and I don't feel outraged that this is pragmatic or feel inauthentic for doing it. Others would feel like they were in a prison of their own making. I picked the best alternative that I could live with.

2) I should remember that humans are never 'typical.'

There are people who feel like their skin is on wrong when they use technology that they consider undesirably advanced. I love technology. The One thought that people who used technology were suffering from a sense of meaninglessness, and were simply unaware of this, or actively ignoring it. This was not true for me: technology makes my life more meaningful. For either of us to act otherwise would be to act against our preferences. Likewise, it may have been more important for Shulem to act authentically than it was for him to keep his social relationships. Maryles had a sneaking suspicion that this was false. Yet Shulem may really be more lonely and really not regret it.

3) I should remember that humans do things for more than just happiness.

People value other things besides happiness. The One saw that some people were happy playing mobile games all of the time, their reward centers firing away, but didn't think that it was worth it because their happiness was meaningless. The One valued meaning more than entertainment, and perhaps even more than happiness in general. People forget this easily. I see this in the article when Maryles says:

Not that I have a right to tell people how to live their lives. I just wish that he would have made choices that would have kept his family intact, and given him a better more meaningful life. Shulem says that he has no regrets. And yet I wonder if he has had similar thoughts? So I am sad for Shulem who still seems to live a very lonely life. I am sad for his children who lost a father they once loved. And yet I am hopeful that those with similar leanings that read his book will realize that the kind of radical change Shulem Deen made- even as he felt it was the right one based on being true to oneself -may not be the best solution for individual happiness.

He wishes that Shulem had made decisions to give himself a more meaningful life. He wishes that Shulem had made decisions to give himself a happier life. He wishes that Shulem had made decisions to give himself a less lonely life. He thinks that, ultimately, Shulem has made decisions to give himself a more authentic life at the price of forgoing these other possibilities. About this, he may be right. Another possibility is that there was no more preferable alternative. Maryles suggests otherwise: He seems to think either that authenticity, meaning, community, and happiness are all the same; or that all are reducible to one; or that all necessarily follow from one. I cannot glean which he believes from context. It is entirely possible that Shulem feels that his life is less happy, less meaningful, more lonely, and more authentic, and that he prefers all and regrets none of this. On the other hand, you, it seems, would not prefer this and would regret this, because you are not typical, as said above. I keep the complexity of value in mind when evaluating potential third alternatives.

Lastly, because things are often about that which they explicitly are not, I feel obliged to touch on this:

I was sad not so much about his erroneous (in my view) conclusions about God and Judaism. Although I am in no way minimizing the importance of that - this post isn’t about that.

If this is true, then 'The Lonely Man of No Faith' is a bad title, in the sense that it isn't representative of the article's implication. (It does, however, make for excellent link bait.) No one is thinking, "Surely his lack of faith is merely a coincidence. There must be other reasons that this man is lonely." Maryles has to say that the post is not about 'that' precisely because everyone has assumed that it's about that.

The general implication is that the so-called truth-seekers are worse off even though the opposite should be true. On this, I will say that any time that I have seen someone become less satisfied with their life by reading about the sorts of things that are posted here, it's because they have experienced a failure of imagination, or their new beliefs have not fully propagated. The failure modes that I've seen the most are:

You've given no indication that you believe any of these things, but I had to address that because of the article's implication, and you or others very well may believe these things, explicitly or implicitly, without indication. You identify as an open-minded person; you seem to take pride in it. As such, you may not really believe that there is no God; rather, you might believe that you ought to believe that there is no God, because perhaps that is what you believe open-minded people do, and you want to do what open-minded people do. (I had this very problem. Belief in belief goes both ways!) Saying that one atheist is less happy because he has been separated from his loved ones is very different from saying that atheists are universally dissatisfied because theism is essentially preferable. Though the author attempts to make that distinction, I think that he fails.

I'm also not saying that I deductively concluded that truth-seeking is preferable to ignorance. I inductively concluded it. Truth-seeking could have been horrible: It turns out it generally isn't.

Comment author: maxikov 05 February 2015 07:04:12AM 5 points [-]

The general implication is that the so-called truth-seekers are worse off even though the opposite should be true.

The opposite should be true for a rational agent, but humans aren't rational agents, and may or may not benefit from false beliefs. There is some evidence that religion could be beneficial for humans while being completely and utterly false:

http://www.tandfonline.com/doi/abs/10.1080/2153599X.2011.647849

http://www.colorado.edu/philosophy/vstenger/Folly/NewSciGod/De%20Botton.pdf

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1361002/

http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0003679

Of course, this is not "checkmate, atheists", and it doesn't mean we should all convert to Christianity. There are ways to mitigate the negative impact of false beliefs while preserving the benefits of letting the wiring of the brain do what it wants to do. Unitarian Universalists from the religious side, and Raemon's Solstice from the atheist side, are trying to approach this nice zone, with the amount of epistemological symbolism and ritual that is optimal for real humans, until we find a way to rewire everyone. But in general, unless you value truth for its own sake, you may be better off in life with certain false beliefs.

Comment author: maxikov 05 February 2015 06:36:26AM 3 points [-]

Should we be concerned about the exposure to RF radiation? I always assumed that no, since it doesn't affect humans beyond heating, but then I found this:

http://www.emfhealthy.com/wp-content/uploads/2014/12/2012SummaryforthePublic.pdf

http://www.sciencedirect.com/science/article/pii/S0160412014001354

The only mechanism they suggest for non-thermal effects is:

changes to protein conformations and binding properties, and an increase in the production of reactive oxygen species (ROS) that may lead to DNA damage (Challis, 2005 and La Vignera et al., 2012)

One of the articles they cite is behind a paywall (http://www.ncbi.nlm.nih.gov/pubmed/15931683), and the other (http://www.ncbi.nlm.nih.gov/pubmed/21799142) doesn't actually seem to control for thermal effects (it has a non-exposed control, but doesn't have a control exposed to the same amount of energy in visible or infrared band). The fact that heat interferes with male fertility is no surprise (http://en.wikipedia.org/wiki/Heat-based_contraception), but it's not clear to me whether there's any difference between being exposed to RF and turning on the heater (maybe there is, if the organism deals with internal and external heat differently, or maybe this effect is negligible).

Nonetheless, if there is a significant non-thermal effect, that alone warrants a lot of research.

Comment author: Manfred 03 February 2015 02:40:47AM *  1 point [-]

Not OK in what sense - as in morally wrong to kill sapient beings or as terrifying as getting killed?

The first one - they're just a close relative :)

I don't quite get this part - can you elaborate?

TDT says to treat the world as a causal diagram that has as its input your decision algorithm, and outputs (among other things) whether you're a copy (at least, iff your decision changes how many copies of you there are). So you should literally evaluate the choices as if your action controlled whether or not you are a copy.
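The evaluation described above can be sketched as a toy calculation. All the payoffs and the copy count here are illustrative assumptions, not anything from TDT itself; the point is only that, since every copy runs the same decision algorithm, your one decision also fixes the probability that *you* are a copy:

```python
# Toy sketch of the TDT-style evaluation described above. The utilities and
# copy count are made-up illustrative numbers: the only structural claim is
# that your decision is mirrored by every copy, so choosing to make copies
# also determines the odds that you yourself are one of them.

def expected_utility(decision, n_copies=9,
                     u_original=10.0, u_copy=4.0, u_no_copies=6.0):
    """Evaluate a choice as if it controlled whether you are a copy."""
    if decision == "make_copies":
        # With n copies plus the original all running the same algorithm,
        # you are a copy with probability n / (n + 1).
        p_copy = n_copies / (n_copies + 1)
        return p_copy * u_copy + (1 - p_copy) * u_original
    else:
        # No copies are ever made: you are certainly the original.
        return u_no_copies

for d in ("make_copies", "no_copies"):
    print(d, expected_utility(d))
```

Under these assumed payoffs, refusing to make copies wins, precisely because deciding to copy yourself makes it overwhelmingly likely that "you" are a copy with the lower payoff.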

As to erasing memories - yeah, I'm not sure either, but I'm leaning towards it being somewhere between "almost a causal descendant" and "about as bad as being killed and a copy from earlier being saved."

Comment author: maxikov 03 February 2015 06:33:57PM 0 points [-]

OK, I'll have to read deeper into TDT to understand why that happens, currently that seems counterintuitive as heck.

Comment author: Kindly 02 February 2015 02:29:08PM 9 points [-]

Hypocrisy doesn't bother me. Everyone's got his ideal, and then the reality of what he can actually deliver. Scratch hypocrisy, and you're more likely to lose the ideal than the reality.

Milo Behr, Beowulf: A Bloody Calculus.

Comment author: maxikov 02 February 2015 10:50:42PM 1 point [-]

Hypocrisy isn't actually fundamentally wrong, even if deliberate. The idea that it's bad is a final yet arbitrary value that has to be taught to humans. Many religions contain the Golden Rule, which boils down to "don't be a hypocrite", and this is exactly an indicator that it was highly non-obvious before it permeated our culture.

Comment author: Manfred 02 February 2015 12:28:31PM 2 points [-]

Depend on how you feel about anthropically selfish preferences, and altruistic preferences that try to satisfy other peoples' selfish preferences. I, for instance, do not think it's okay to kill a copy of me even if I know I will live on.

In the earth-mars teleporter thought experiment, the missing piece is the idea that people care selfishly about their causal descendants (though this phrase is obscuring a lot of unsolved questions about what kind of causation counts). If the teleporter annihilates a person as it scans them, the person who get annihilated has a direct causal descendant on the other side. If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant - they really are getting killed off in a way that matters more to them than before.

Comment author: maxikov 02 February 2015 10:40:29PM 0 points [-]

I, for instance, do not think it's okay to kill a copy of me even if I know I will live on

Not OK in what sense - as in morally wrong to kill sapient beings or as terrifying as getting killed? I tend to care more about people who are closer to me, so by induction I will probably care about my copy more than any other human, but I still alieve the experience of getting killed to be fundamentally different and fundamentally more terrifying than the experience of my copy getting killed.

From the linked post:

The counterargument is also simple, though: Making copies of myself has no causal effect on me. Swearing this oath does not move my body to a tropical paradise. What really happens is that I just sit there in the cold just the same, but then later I make some simulations where I lie to myself.

If I understand correctly, the argument of timeless identity is that your copy is you in absolutely any meaningful sense, and therefore prioritizing one copy (original) over the others isn't just wrong, but even meaningless, and cannot be defined very well. I'm totally not buying that on gut level, but at the same time I don't see any strong logical arguments against it, even if I operate with 100% selfish 0% altruistic ethics.

When there is a decision your original body can make that creates a bunch of copies, and the copies are also faced with this decision, your decision lets you control whether you are the original or a copy.

I don't quite get this part - can you elaborate?

If it waits ten minutes, gives the original some tea and cake, and then annihilates them, the person who gets annihilated has no direct causal descendant - they really are getting killed off in a way that matters more to them than before

What about the thought experiment with erasing memories, though? It doesn't physically violate causality, but from the experience perspective it does - suddenly the person loses a chunk of their experience, and they're basically replaced with an earlier version of themselves, even though the universe has moved on. This experience may not be very pleasant, but it doesn't seem nearly as bad as getting cake and death in the Earth-Mars experiment. Yet it's hard to distinguish them on the logical level.

Comment author: maxikov 02 February 2015 06:05:46AM 4 points [-]

Disclaimer: the identity theory that I actually alieve is the most common intuitive one, and it's philosophically inconsistent: I regard teleportation, but not sleeping, as death. This comment, however, is written from a System 2 perspective, which can operate even with concepts that I don't alieve.

The basic idea behind timeless identity is that "I" can only be meaningfully defined inductively as "an entity that has experience continuity with my current self". Thus, we can safely replace "I value my life" with "I value the existence of an entity that feels and behaves exactly like me". That allows us to be OK with quite useful (although hypothetical) things like teleportation, mind uploading, mind backups, etc. It also seems to provide an insight into why it's OK to make a copy of me on Mars, and immediately destroy Earth!me, but not OK to destroy Earth!me hours later: the experiences of Earth!me and Mars!me would diverge, and each of them would value their own lives.

However, here is the thing: in this case we merely replace the requirement "to have an entity with experience continuity with me" with "to have an entity with experience continuity with me, except this one hour". They're actually pretty interchangeable. For example, I forget most of my dreams, which means I'm nearly guaranteed to forget several hours of experience every day, and I'm OK with that. One might say that the value of genuine experiences exceeds that of hallucinations, but I would still be pretty OK with taking a suppressor of RNA synthesis, that would temporarily give me anterograde amnesia, and do something that I don't really care about remembering - clean the house or something. Heck, even retroactively erasing my most cherished memories, although extremely frustrating, is still not nearly as bad as death.

That implies that if there are multiple copies of me, the badness of killing any of them is no more than the increase in the likelihood of all of them being destroyed (which is not a lot, unless there's an Armageddon happening around) plus the value of the memories formed since the last replication. Also, every individual copy should alieve that being killed is no worse than forgetting what happened since the last replication, which also sounds not nearly as horrible as death. That also implies that simulating time travel by discarding time branches is also a pretty OK thing to do, unless the universes diverge strongly enough to create uniquely valuable memories.
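The bound being claimed here can be written out as a one-line toy calculation. The function and its inputs are illustrative assumptions (there is obviously no agreed unit for "value of memories"); it just makes the two terms of the bound explicit:

```python
# Toy sketch of the upper bound suggested above (all numbers illustrative):
# the disvalue of losing one of several copies is at most the increase in
# the chance that *all* copies are destroyed, plus the value of memories
# accumulated since the last replication.

def max_disvalue_of_losing_one_copy(p_all_lost_before, p_all_lost_after,
                                    value_of_unshared_memories):
    """Upper bound on how bad losing a single copy can be."""
    risk_increase = p_all_lost_after - p_all_lost_before
    return risk_increase + value_of_unshared_memories

# Example: negligible extra extinction risk, one hour of forgettable chores.
print(max_disvalue_of_losing_one_copy(1e-6, 2e-6, 0.1))
```

With frequent replication and low background risk, both terms stay small, which is the intuition behind "no worse than forgetting what happened since the last replication."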

Is that correct or am I missing something?

Comment author: FourFire 02 January 2015 09:15:49PM 0 points [-]

The video appears to be private, which is unfortunate, since I was interested in watching how the event progressed.

Comment author: maxikov 03 January 2015 12:32:45AM 1 point [-]

We decided that keeping the whole video including personal stories public all the time wouldn't be a very good idea. All the songs, however, are publicly available here: https://www.youtube.com/playlist?list=PLhH76Ztpl1UIHsSvxSsHhoPLc95n_s_6N

LINK: TED-Ed video on death and cryonics

7 maxikov 26 December 2014 04:52AM

At what moment are you dead? - Randall Hayes

This is pretty much 101, and it leaves out some important considerations, but it had nice animation, and I think it may be a great starting point for introducing people to cryonics for the first time.

Comment author: Andy_McKenzie 24 December 2014 06:29:37PM 7 points [-]

Agreed this is not brain uploading. Actually this research is not that much different from what has previously been done in computer simulations. The advance is having embedded it in a physical substrate vs a computer.

However, are you implying that C. elegans uploading wouldn't count as uploading because it's so much simpler than a human brain? If so, I disagree with you there. A lot of people think that it would be basically impossible to encode preferences from a C. elegans organism (e.g. learned patterns) into a computer. It certainly hasn't been done yet AFAIK. Doing it would be a conceptual advance and would allow us to tweak our models of how certain types of neurons, electrical synapses, and chemical synapses work, inter alia.

Also, whether you call the C. elegans nervous system a "brain" or a "ganglia" is a question of semantics. Many and perhaps most researchers do call it a brain; see, e.g., here.

Comment author: maxikov 26 December 2014 04:38:25AM 3 points [-]

My primary concern is that the model is very simplified. Although even at this level, it may be interesting to invent a metric for the accuracy of encoding the organism's behavior - from completely random to a complete copy.

Comment author: maxikov 24 December 2014 03:29:23AM *  22 points [-]

When you think about it, the brain is really nothing more than a collection of electrical signals.

Statements like this make me want to bang my head against a wall. No, it is not. The brain is a collection of neural and glial cells, whose roles we only partially understand. Most of the neurons are connected through various types of chemical synapses, and ignoring their chemical nature would fail to explain the effects of most psychoactive drugs and even hormones. Some of the neurons are linked directly. Some of them are myelinated, while others are not, and this is kind of a big deal, since there's no clocking in the nervous system, and the entire outcome of the processing depends on how long it takes for the action potential to propagate through the axon. And how long it takes for the synapse to react. And how long the depolarization persists in the receiving neuron. And all of that is regulated by the chemistry of gene expression patterns. And we're not even talking about learning and forming long-term memories, which are, due to neuroplasticity, entirely controlled by gene expression patterns. It's enough to suppress RNA synthesis to cause anterograde amnesia - although it will also cause some retrograde amnesia too, since apparently merely using neurons causes them to change.

Also, C. elegans doesn't even have a brain; it has ganglia.

Look, I understand that this is some interesting research, but calling it "brain uploading" is like comparing the launch of a firework to interstellar travel: essentially, they're the same, but there are a couple of nuances.
