One serious issue we had was that he gave me an STI. He had rationalised that he was at very limited risk of having an STI, so despite my repeated requests, and despite being informed that a previous partner had been infected, he did not get tested.
I thought the accepted theory was that rationalists are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.
Yes.
I don't know whether it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds will survive until they are revived in the future.
I don't know what conditions accurate preservation of the mind depends on, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.
Some people seem to put their faith in structure as the answer, but how do we test this claim in a meaningful way?
Yes, it is indeed a common pattern.
People are likely to get agitated about the stuff they are actually working with, especially if it is somehow entangled with their state of knowledge, personal interests and employment. The belief that we are the ones to save the world really helps in finding motivation for continuing one's pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values on others (Communism will save the world from our greed).
On the other hand, I don't think it is a bad thing. That way, we have many littl...
Here is a parable illustrating the relative difficulty of both problems:
Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.
This is more or less what uploading looks like to me: the data is there, but it still needs to be understood and copied. Ah, you also need a computer. Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box, stored in an ice cave and guarded by a couple of hopeful monks:
To summarize: belief in things that are not actually true may have a beneficial impact on your day-to-day life?
You don't really require any level of rationality skills to arrive at that conclusion, but the writeup is quite interesting.
Just don't fall into the trap of thinking "I am going to swallow this placebo and feel better, because I know that even though the placebo does not work..." crap. Let's start from the beginning...
The major points are:
The insane thing is that you ca...
Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite possible in the context you describe. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards: why the hell should I allow other people to mess with my brain?
I cannot even begin to think of the variety of interesting ways in which thought-blocking technology could be applied.
Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:
My intuition does not match the current model, so I am making an incorrect choice and need to change my intuition, become more moral, and act according to the preferred values.
Tweaking the model seems several orders of magnitude harder but, I would guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we include egoism, I guess.
The devil, as always, seems to lie in the details, but as I see it, some people may see it as a feature:
Assuming I am a forward-looking agent who aims to maximize long-term, not short-term, utility:
What is the utility of a person who is currently preserved in suspended animation in the hope of future revival? Am I penalized as much as for a person who was, say, cremated?
Are we justified in making all current humans unhappy (without sacrificing their lives, of course), so that the means of reviving dead people are created faster, so that we can stop being pena...
I guess it is kind of a slippery slope, indeed. There are probably ways in which it could work only as intended (a hardwired chip or whatever), but allowing other people to block your thoughts is only a couple of steps from turning you into their puppet.
As for simulation as thought crime, I am not sure. If they need to peek inside your brain to check that you are not running illegally constructed internal simulations, the government can just simulate a copy of you (with a warrant, I guess) and either torture it or explicitly read its mind (either way terrible) to fi...
The way I see it in sci-fi terms:
If a human mind is the first copy of a brain that has been uploaded to a computer, then it deserves the same rights as any human. There is a rule against running more than one instance of the same person at the same time.
A human mind created on my own computer from first principles, so to speak, does not have any rights, but there is also a law in place to prevent such agents from being created, as human minds are dangerous toys.
Plans to enforce thought-taboo devices are likely to fail, as no self-respecting human being...
Let's make a distinction between "I have a prejudice against you" and "I know something about you".
Assuming I know that IQ is a valid and truly objective measure, I can use it to judge your cognitive skills, and your opinion about the result does not matter to anyone, just as your own opinion about your BMI doesn't.
Assuming that I am not sure whether IQ is valid, I would rather refrain from reaching any conclusions or acting as if it actually mattered (because I am afraid of the consequences), thus making it useless for me in my practical day-to-day life.
Yes, I do stand corrected.
Most of the explanations found on cryonics sites do indeed seem to base their arguments on the hopeful claim that, given nanotechnology and the science of the future, every problem connected to, as you say, rebooting will become essentially trivial.
This is a good argument, capable of convincing me of the pro-cryonics position, if and only if someone can follow this claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.
If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).
I suspect our world views might differ a bit, as I don't wish that my values were any different than they are. Why should I?
If Azathoth decided to instill the value that having children is somehow desirable deep into my mind, then I am very happy that, as a first-world parent, I have all the resources I need to turn it into a pleasant endeavor with a very high expected value (a happy new human who hopefully likes me and hopefully shares my values, though I don't have much confidence in that second bet).
In this case, I concur that your argument may be true if you include animals in your utility calculations.
While I do have reservations about causing suffering in humans, I don't explicitly include animals in my utility calculations, and while I don't support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing or factory farming; so with regard to pigs, cows and chickens, I am a utility monster.
Ah, I must have misread your representation; English is not my first language, so sorry about that.
I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to 3, not 2, happy children.
Am I to assume that all the old sad hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us, so that I don't become upset about those atrocities currently being committed in my name?
We are not even close to a world of utility monsters, and personally I know very few people whom I would consider actual utilitarians.
Am I going to have a chance to actually interact with them, see them grow, etc.?
I mean, assuming a hypothetical case where, as soon as a child is born, nefarious agents of the Population Police snatch him, never to be seen or heard from again, then I don't really see the point of having children.
If, on the other hand, I have a chance to actually act as a parent to him, then I guess it is worth it after all, even if the child disappears as soon as he reaches adulthood and joins the Secret Society of Ineffective Altruism, never to be heard from again. I get no benefit of...
Speaking broadly, the desire to lead a happy / successful / interesting life (however winning is defined) is a laudable goal shared by the vast majority of humans. The problem was that some people took the idea further and decided that winning is a good measure of whether someone is a good rationalist or not, as debunked by Luke here. There are better examples, but I can't find them now.
Also, my two cents: while a rational agent may have some advantage over an irrational one in a perfect universe, the real world is so fuzzy and full of noisy i...
Well, this is certainly something I agree with, and after looking up the context of the quote, I see that it can be interpreted that way.
I agree that my interpretation wasn't very, well... charitable, but without context it really reads like yet another chronicle of a superior debater celebrating victory over someone who dared to be wrong on the Internet.
It seems to me that in the quote Yvain is admitting an error, not celebrating victory. Try taking his use of the word "reasonably" at face value.
I mean, it is either his authoritative summary or yours, and in all honesty that guy actually takes care to construct an actual argument instead of resorting to appeals to authority and ridicule.
Personally, I would be more interested in someone explaining exactly how the cues of a piece of info are going to be reassembled and the whole brain reconstructed from partial data.
Proving that cryo-preservation + restoration does indeed work, and also showing the exact method as to how, seems like a more persuasive way to construct an argument rathe...
It is true, I wasn't specific enough, but I wanted to emphasize the opinion part, and the suffering part was meant to highlight his life condition.
He was, presumably, killed without his consent, which is why the whole affair seems so morally icky from a non-utilitarian perspective.
If your utility function does not penalize doing bad things as long as the net result comes out right, you are likely to end up in a world full of utility monsters.
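To make that concrete, here is a toy sketch of what I mean (the scenario and all payoff numbers are invented for illustration; this is not anyone's actual moral framework): a utility function that only looks at net outcomes endorses the act, while one with a penalty term on the act itself does not.

```python
# Toy comparison of two utility functions over the same action:
# "kill one unhappy hermit so that two happy people can exist".
# All payoff numbers are made up for illustration.

GAIN_PER_HAPPY_LIFE = 10
LOSS_FOR_HERMIT = 2          # he wasn't very happy anyway, supposedly
ACT_PENALTY = 100            # deontological side-constraint on killing

def u_pure_outcome() -> int:
    # Only the net result matters; how we got there is ignored.
    return 2 * GAIN_PER_HAPPY_LIFE - LOSS_FOR_HERMIT

def u_with_act_penalty(act_was_bad: bool) -> int:
    # Same outcome term, plus a penalty for the bad act itself.
    penalty = ACT_PENALTY if act_was_bad else 0
    return 2 * GAIN_PER_HAPPY_LIFE - LOSS_FOR_HERMIT - penalty

print(u_pure_outcome())           # 18  -> killing looks net positive
print(u_with_act_penalty(True))   # -82 -> the act itself is penalized
```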
I know it is a local trope that death and destruction are the apparent and necessary logical conclusion of creating an intelligent machine capable of self-improvement and goal modification, but I certainly don't share those sentiments.
How do you estimate the probability that AGIs won't take over the world (the people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors, in the same boring, old-fashioned and safe way that 100% of our current technology is used?
I am explicitly saying that MRI ...
Seeing as we are talking about speculative dangers coming from a speculative technology that has yet to be developed, it seems pretty understandable.
I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.
We might find out by trying to apply them to the real world and seeing that they don't work.
Well, it is less common now, but I think the community's slow retreat from the position that instrumental rationality is the applied science of winning at life is one of the cases where beliefs had to be corrected to better match the evidence.
Hopefully, if their use of the word differs from expectations, casual observers won't catch on, I mean...
We want to increase average human cognitive abilities by granting all children access to better education.
Wouldn't raise many eyebrows, but if you heard...
We want to increase average human cognitive abilities by discouraging lower IQ people from having children.
...then I can't help feeling that the e-word may crop up a lot. I would probably be inclined to use it myself, in all honesty.
It is also likely not written in the way they understand the world. I mean, if charity means assuming that the other person is saying something interesting and worth consideration, such an approach strikes me as the exact opposite:
Here, this is your bad, unoriginal argument, but I changed it into something better.
I mean, if you are better at arguing for the other side than your opposition, why do you even speak with them?
Still, if it is possible to have happy children (and I assume happy humans are good stuff), where does the heap of disutility come into play?
EDIT: It is hard to form a meaningful relationship with money, and I would reckon that teaching it to uphold values similar to yours isn't an easy task either. As for taking care, I don't mean palliative care so much as simply the relationship you have with your child.
I don't consider myself an explicit rationalist, but my desire to have children stems from the desire to have someone to take care of me when I am older.
Do you see your own conception and subsequent life as a cause of a "huge heap of disutility" that can't be surpassed by the good stuff?
Personally, I think the principle of charity has more to do with having respect for the ideas and arguments of the other person. I mean, let's say that someone says he doesn't eat shrimp because God forbids him from eating shrimp. If I am being charitable, I am going to slightly alter his argument by saying that the Bible explicitly forbids shrimp. That way we don't have to get sidetracked discussing other topics.
You said that shrimp are wretched in the eyes of the Lord, and while I agree that the Old Testament explicitly forbids eating them... blah blah...
That...
I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.
Would you like to live forever?
For just a $50 monthly fee, agents of the Time Patrol Institute promise to travel back in time and extract your body a few milliseconds before death. In order to avoid causing temporal "paradoxes", we pledge to replace your body with an (almost) identical, artificially constructed clone. After your body is extracted and moved to the closest non-paradoxical future date, we will reverse the damage caused by aging, increase your lifespan to infinity and treat you to a cup of coffee.
While we are fully aware that time travel is not yet possib...
Does anyone else see a problem with this particular statement taken from the Cryonics Institute FAQ?
One thing we can guarantee is that if you don't sign up for cryonics you will have no chance at all of coming back.
I mean, marketing something as a one-shot chance to hopefully delay (or prevent) death is hard to swallow, but I can cope with that; what bothers me is that this statement reads as if cryonics were the one and only possible way to do that.
I was using LeechBlock as an old-fashioned reddit-blocker for some time, but then I switched to RescueTime (free version), which tracks the time you spend on certain internet sites, and I found it much more user-friendly. It does not block sites, but it shows you a percentage estimate of how productive you are today (e.g. today, 1 hour on the internet, out of which 30 min on Less Wrong, so 50% productive).
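As far as I can tell, that percentage is just productive time divided by total tracked time; here is a minimal sketch of the arithmetic (the site list and the productive/unproductive labels are my own invention, not RescueTime's actual categorization):

```python
# Toy version of a RescueTime-style productivity percentage.
# Sites, durations and category labels are hypothetical.
tracked_minutes = {
    "lesswrong.com": 30,     # labeled unproductive for this example
    "docs.example.com": 30,  # labeled productive
}
productive_sites = {"docs.example.com"}

total = sum(tracked_minutes.values())
productive = sum(m for site, m in tracked_minutes.items()
                 if site in productive_sites)
print(f"{100 * productive / total:.0f}% productive")  # -> 50% productive
```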
Can you please elaborate on how and why a sufficient understanding of the concept of information-theoretic death (as mapping many cognitive-identity-distinct initial physical states to the same atomic-level physical end state) helps to alleviate the concerns raised by the author?
The basic idea of getting cryonics is that it offers a chance of massively extended lifespan, because there is a chance that it preserves one's identity. That's the first-run approximation, with additional considerations arising from making this reasoning a bit more rigorous, e.g. that cryonics is competitive against other interventions, that the chance is not metaphysically tiny, etc.
One thing we might make more rigorous is what we mean by 'preservation'. Well, preservation refers to reliably being able to retrieve the person from the hopefully-preserved ...
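Since the "many states map to one state" idea keeps coming up, here is a toy illustration of why that kind of mapping is fatal (the states and the process are entirely made up; this only shows the non-invertibility point, not anyone's actual model of brain decay):

```python
# Toy model of information-theoretic death: a lossy process that maps
# many cognitively distinct initial states to one end state cannot be
# inverted, so no future technology can recover which mind you were.
# States and the process are invented purely for illustration.

def decay(state: str) -> str:
    # Every initial state ends up as the same undifferentiated end
    # state (think "generic pile of atoms").
    return "ash"

initial_states = ["alice_brain", "bob_brain", "carol_brain"]
end_states = {decay(s) for s in initial_states}

print(end_states)  # {'ash'}: three distinct people, one end state
# Given only "ash", no function can tell which initial state produced
# it; the information distinguishing the three minds is gone.
```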
I can't really offer anything more than personal anecdotes, but here is what I usually do when I try to grab the attention of a group of my peers:
This matches my experience. When I don't want to engage in conversation and someone asks "How are you?", I always politely counter with "Fine, thanks" and just carry on with whatever I am doing. I assume the same applies to other people.
One possible explanation of why we as humans might be incapable of creating Strong AI without outside help:
I think that for many people, getting fit (even if they arrived at fitness via an incorrect justification) is far more important than spending time analyzing the theoretical underpinnings of fitness. The same goes for getting to heaven, choosing the right cryo-preservation technique, learning to cook, or any realm of human activity where we don't learn theory FOR THE SAKE OF BEING RIGHT, but FOR THE SAKE OF ACHIEVING X GOALS.
I mean, I concur that having a vastly incorrect map can result in problems (injuries during workouts, ineffective training routines, endi...
Assuming your partner is not closely associated with LW or the rationalist-transhumanist movement, you might be better off looking for advice elsewhere. Just saying.
It can get even better, assuming you put your moral reasoning aside.
What you could do is deliberately defect and then publicly announce to everyone that it was the result of random chance.
If you are concerned about lying to others, then I concur that "accidentally" choosing to defect is the best of both worlds.
I also liked "Smarter Than Us"; it sounds a lot like a popular science book from an airport store.
I don't like the other titles, as they seem to rely too much on fearmongering.
I am not sure I follow.
If you predict that the majority of 'rational' people (say, more than 50%) would pre-commit to cooperation, then you have a great opportunity to shaft them by defecting and running off with their money.
Personally, I decided to defect so as to ensure that other people who also defected wouldn't take advantage of me.
The problem I see with your reasoning lies in the term "potentially save".
Personally, I think it is better to focus our efforts on actions that have a >1% chance of increasing the quality of life and average lifespans of huge populations (say, fighting disease and famine) rather than on something that has a 0.0005% chance of preserving your mind and body, so that there is then a 0.0005% chance that you achieve immortality or extend your lifespan when future generations decide to "thaw" you (or even give you a new awesome b...
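Spelling out the arithmetic behind that intuition (the probabilities are the ones from my comment; the payoff magnitudes are invented, just to show that even a huge prize doesn't rescue a tiny compound probability):

```python
# Back-of-envelope expected value comparison. Probabilities come from
# my comment above; payoff units are arbitrary and made up.

# Option A: fighting disease and famine for a huge population.
p_a = 0.01               # ">1% chance" of success
payoff_a = 10_000        # hypothetical: many improved life-years

# Option B: cryonics, with two independent long-shot hurdles.
p_b = (0.0005 / 100) ** 2   # preservation works AND revival happens
payoff_b = 1_000_000        # hypothetical: one massively extended life

print(f"EV(A) = {p_a * payoff_a}")      # 100.0
print(f"EV(B) = {p_b * payoff_b:.2e}")  # 2.50e-05
# Even granting B a payoff 100x larger than A's, the compound
# probability of ~2.5e-11 dominates the comparison.
```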
Self-help, CBT and quantified-self Android applications
A lot of people on LW seem to hold The Feeling Good Handbook by Dr. Burns in high regard when it comes to effective self-help. I am in the process of browsing a PDF copy, and it does indeed seem like a good resource, as it is not only written in an engaging way, but also packed with various exercises, such as writing your day plan and reviewing it later while assigning Pleasure and Purpose scores to various tasks.
The problem I have with this, and any other self-help-exercise style of book, is that I ...