So... What do we make of this?
Excerpt:
He is a rationalist who is deeply against living by social norms and just sees them as defaults, and is "non-default" about pretty much everything including work path, values etc., as well as lifestyle including cooking (lives off takeaway so as not to spend time grocery shopping and cooking), cleaning (does not have much of a regular cleaning habit – I broke a glass in his kitchen a month ago and he said I shouldn't have to clean it up, and it's still there), and sleeping (he has no regular sleep schedule and sleeps when he wants to. The kind of work that he does is largely from home with long deadlines. He ships a prescription anti-narcolepsy drug from overseas which allows him to stay awake for long stretches on little sleep – although he plans on giving this up soon). He also takes party drugs and for a while was taking quite high amounts of MDMA on a weekly basis, which pretty much wiped him out for the day or two after. I have always been uncomfortable around drugs, although he did not really know the extent of my discomfort, and I can't take them myself due to mental health. He dropped back to once a month after I expressed concerns about escalation, and he acknowledges that he has some susceptibility to addiction, although he is not currently dependent.
One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI so despite my repeated requests and despite being informed that a previous partner had been infected, did not get tested. I was furious at his intellectual arrogance and the danger he had put us both in. I lost a week of unpaid time off work and my mum had to nurse me through my allergic reaction to the treatment. I told him I wanted to break up, but we ended up supporting each other through the treatment and ultimately decided to get back together and work things out.
I thought the accepted theory was that rationalists are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.
I don't know what accurate preservation of the mind depends on,
It seems like you're saying you don't know whether cryonics can succeed or not. Whereas in your first reply you said "therefore cryonics in the current shape or form is unlikely to succeed."
Yes.
I don't know whether it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds will survive until they are revived in the future.
I don't assign a high prior probability to the proposition that we know enough about the brain to preserve minds correctly, and therefore cryonics in its current shape or form is unlikely to succeed.
Are you saying that accurate preservation depends on highly delicate molecular states of the brain, and this is the reason they cannot be preserved with current techniques?
I don't know what accurate preservation of the mind depends on, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.
Some people seem to put their faith in structural preservation as the answer, but how can this claim be tested in a meaningful way?
Yes, it is indeed a common pattern.
People are likely to get agitated about the things they actually work on, especially when those things are entangled with their state of knowledge, personal interests, and employment. The belief that we are the ones to save the world really helps people find the motivation to continue their pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values onto others (Communism will save the world from our greed).
On the other hand, I don't think it is a bad thing. That way, we have many small groups, each working on their own subset of the problem space while also trying to save the world from the disaster they perceive as the greatest danger. As long as the response is proportional to the actual risk, of course.
But I still agree with you that it is only prudent to treat any such claims with caution, so that we don't fall into the trap of taking data from a small group of people working at the Asteroid Defense Foundation as our one true estimate of the likelihood and effect of an asteroid impact, without verifying their claims against an unbiased source. It is certainly good to have someone looking at the sky from time to time, just in case their claims prove true, though.
Even though LW is far more open to the idea of cryonics than other places, the general opinion on this site still seems to be that cryonics is unlikely to succeed (e.g. has a 10% chance of success).
How do LW'ers reconcile this with the belief that mind uploading is possible?
Here is a parable illustrating the relative difficulty of the two problems:
Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.
This is more or less how uploading looks to me: the data is there, but it still needs to be understood and copied. Ah, you also need a computer. Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box, stored in an ice cave, and guarded by a couple of hopeful monks:
- Imagine the manuscript has been preserved using correct means and all letters are still there.
Uploading is easy. There is no data loss, so it is equivalent to uploading the modern manuscript. This means the monks were smart enough to choose the optimal storage procedure (or got there by accident) - very unlikely.
- Imagine the manuscript has been preserved using decent means and some letters are still there.
Now we have to do a bit of guesswork... is the manuscript we translate the same thing the original author had in mind? EY called this doing intelligent cryptography on a partially preserved brain, as far as I am aware. The monks knew just enough not to screw up the process, but their knowledge of manuscript-preservation techniques was not perfect.
- Imagine the manuscript has been preserved using poor means and all letters have vanished without a trace.
Now we are royally screwed, or we can wait a couple of thousand million years so that an oracle computer can deduce the state of the manuscript by reversing entropy. This means the monks knew very little about manuscript preservation.
- Imagine there is no manuscript. There is a nice wooden box preserved in astonishing detail, but the manuscript crumbled when the monks put it inside.
Well, the monks who wanted to preserve the manuscript didn't know that preserving the box does not help preserve the manuscript, but they tried, right? This means the monks didn't understand the connection between box-preservation and manuscript-preservation techniques.
- Imagine there is no manuscript. The box has been damaged as well.
This is what happens when the manuscript-preservation business is run by people with little knowledge of what should be done to store belongings for thousands of years without significant damage.
In other words, uploading is something that can be figured out in the far, far future, while the problem of proper cryo-storage has to be solved correctly right now, as an incorrect procedure may lead to irreversible loss of information for people who want to be preserved today. I don't assign a high prior probability to the proposition that we know enough about the brain to preserve minds correctly, and therefore cryonics in its current shape or form is unlikely to succeed.
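The parable's intuition - that cryonics only works if every stage works - can be made concrete with a toy conjunctive model. The stages and probabilities below are invented placeholders for illustration, not estimates from anyone in the thread:

```python
# Toy conjunctive model: cryonics succeeds only if every stage succeeds,
# so the overall probability is the product of the stage probabilities.
# All numbers here are made-up placeholders, not actual estimates.
stages = {
    "preservation captures the relevant brain state": 0.4,
    "storage survives until revival is possible": 0.7,
    "revival/uploading technology is developed": 0.6,
    "your particular case is actually revived": 0.5,
}

p_success = 1.0
for stage, p in stages.items():
    p_success *= p

print(f"P(success) = {p_success:.3f}")  # P(success) = 0.084
```

Even with stage probabilities that each look better than a coin flip, the conjunction lands near the ~10% figure mentioned above, which is why the weakest link (knowing what to preserve) dominates the estimate.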
To summarize: belief in things that are not actually true may have a beneficial impact on your day-to-day life?
You don't really require any level of rationality skill to arrive at that conclusion, but the write-up is quite interesting.
Just don't fall into the trap of thinking "I am going to swallow this placebo and feel better, because I know that even though placebo does not work..." crap. Let's start from the beginning...
Plans to enforce thought-taboo devices are likely to fail, as no self-respecting human being would allow such a crude intrusion by third parties into their own thought processes.
I don't think that's the case. If I presented a technique by which everyone on LessWrong could install in himself Ugh-fields that prevent that person from engaging in akrasia, I think there would be plenty of people who would welcome the technique.
Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite possible in your given context. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards "why the hell should I allow other people to mess with my brain?"
I cannot even begin to imagine the variety of interesting ways in which thought-blocking technology could be applied.
We can fix this by incorporating a history into the utility function.
I think this is sensible modelling, as we value life precisely because of its continuity over time.
This does complicate matters a lot, though, because it is not clear how the history should be taken into account. At least no obvious model suggests itself, as it does for the UFUs (except for the trivial one of ignoring the history).
Your examples sound plausible, but I guess that trying to model human intuition here leads to very complex functions.
Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:
My intuition does not match the current model, so I am making an incorrect choice and need to change my intuition, become more moral, and act according to the preferred values.
Tweaking the model seems several orders of magnitude harder but, I would guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we include egoism, I guess.
The devil, as always, seems to lie in the details, but as I see it, some people may regard this as a feature:
Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.
What is the utility of a person currently preserved in suspended animation with hope of future revival? Am I penalized as much as for a person who was, say, cremated?
Are we justified in making all current humans unhappy (without sacrificing their lives, of course) so that means of reviving dead people are created faster and we can stop being penalized for their ended lifespans?
Wouldn't it be only prudent to stop creating new humans until we can ensure their lifespans will reach the end of the universe, to avoid taking negative points?
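One way to see how these details bite: a toy utility that penalizes ended lifespans still has to decide how to score a suspended person. Everything below - the penalty weight, the 0.5 revival probability, the status labels - is an invented illustration, not anyone's actual proposal:

```python
# Toy scoring under a "penalize ended lifespans" utility.
# All weights are invented for illustration.
PENALTY = -10      # penalty per permanently ended life
P_REVIVAL = 0.5    # assumed chance a suspended person is eventually revived

def score(person_statuses):
    total = 0.0
    for status in person_statuses:
        if status == "alive":
            total += 1
        elif status == "cremated":
            total += PENALTY  # irreversible loss: full penalty
        elif status == "suspended":
            # expected value: maybe revived, maybe permanently lost
            total += P_REVIVAL * 1 + (1 - P_REVIVAL) * PENALTY
    return total

print(score(["alive", "cremated"]))   # 1 - 10 = -9.0
print(score(["alive", "suspended"]))  # 1 + (0.5 - 5.0) = -3.5
```

Under this sketch a suspended person is penalized less than a cremated one, but only because of the assumed revival probability - change that one parameter and the agent's answers to the questions above (rush revival tech, stop making new humans) change with it.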