Calvin
Calvin has not written any posts yet.

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI, so despite my repeated requests, and despite being informed that a previous partner had been infected, he did not get tested.
I thought the accepted theory was that rationalists are no less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.
Yes.
I don't know whether it is going to succeed (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds will survive until they are revived in the future.
I don't know what conditions must hold for accurate preservation of the mind, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.
Some people seem to put their faith in structure for an answer, but how would one test this claim in a meaningful way?
Yes, it is indeed a common pattern.
People are likely to get agitated about the stuff they are actually working on, especially if it is somehow entangled with their state of knowledge, personal interests, and employment. The belief that we are the ones to save the world really helps people find the motivation to continue their pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values on others (Communism will save the world from our greed).
On the other hand, I don't think it is a bad thing. That way, we have many small groups, each working on their own small subset of the problem space while also trying... (read more)
Here is a parable illustrating the relative difficulty of both problems:
Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.
This is more or less how uploading looks to me: the data is there, but it still needs to be understood and copied. Ah, you also need a computer. Now consider that the same has to be done with an ancient manuscript, one that has been preserved in a wooden box, stored in an ice cave, and guarded by a couple of hopeful monks:
Uploading is easy. There is no data... (read more)
To summarize: belief in things that are not actually true may have a beneficial impact on your day-to-day life?
You don't really require any level of rationality skill to arrive at that conclusion, but the writeup is quite interesting.
Just don't fall into the trap of thinking "I am going to swallow this placebo and feel better, because I know that even though placebo does not work..." crap. Let's start from the beginning...
Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite possible in your given context. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards asking why the hell I should allow other people to mess with my brain.
I cannot even begin to think of the variety of interesting ways in which thought-blocking technology could be applied.
Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:
"My intuition does not match the current model, so I am making an incorrect choice and need to change my intuition, become more moral, and act according to the preferred values."
Tweaking the model seems several orders of magnitude harder but, I would guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we include egoism, I guess.
The devil, as always, seems to lie in the details, but as I see it, some people may see that as a feature:
Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.
What is the utility of a person who is currently preserved in suspended animation with the hope of future revival? Am I penalized as much as for a person who was, say, cremated?
Are we justified in making all current humans unhappy (without sacrificing their lives, of course) so that means of reviving dead people are created faster, and we can stop being penalized for their ended lifespans?
Wouldn't it be only prudent to stop the creation of new humans until we can ensure their lifespans will reach the end of the universe, to avoid taking negative points?
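The questions above can be made concrete with a toy model. This is purely my own illustrative sketch, not anyone's actual theory: suppose each person contributes their realized lifespan in utility, and each irrevocably ended life incurs a fixed penalty. Whether suspended animation counts as "ended" is exactly the open parameter. All names and numbers here are made up:

```python
# Toy population-utility model -- purely illustrative, not anyone's real theory.
# Each person contributes their realized lifespan in utility; each irrevocably
# ended life incurs a fixed penalty. The open question is whether a person in
# suspended animation counts as "ended".

DEATH_PENALTY = 100  # hypothetical constant

def total_utility(people, suspended_counts_as_dead):
    """Sum lifespans, subtracting DEATH_PENALTY for each ended life."""
    total = 0
    for lifespan, status in people:  # status: "alive", "suspended", "cremated"
        total += lifespan
        ended = status == "cremated" or (
            status == "suspended" and suspended_counts_as_dead
        )
        if ended:
            total -= DEATH_PENALTY
    return total

population = [(80, "alive"), (70, "suspended"), (65, "cremated")]

# If suspension counts as death, the suspended person is penalized just like
# the cremated one (two penalties); otherwise only the cremated one is.
print(total_utility(population, suspended_counts_as_dead=True))   # 15
print(total_utility(population, suspended_counts_as_dead=False))  # 115
```

Under this sketch, the answers to the questions above hinge entirely on how the penalty term is defined, which is the detail the devil lies in.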
Self-help, CBT, and quantified-self Android applications
A lot of people on LW seem to hold The Feeling Good Handbook by Dr. Burns in high regard when it comes to effective self-help. I am in the process of browsing a PDF copy, and it does indeed seem like a good resource: it is not only written in an engaging way but also packed with various exercises, such as writing out your day plan and reviewing it later while assigning Pleasure and Purpose scores to various tasks.
The problem I have with this, and any other exercise-style self-help book, is that I am simply too lazy to regularly print, draft, or fill in written exercises... (read more)
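For what it's worth, the core of that exercise is simple enough that even a lazy person could automate it. A minimal sketch of such a log (the task names and the 0-10 scale are my own hypothetical choices, not necessarily the book's worksheets):

```python
# Minimal sketch of a Pleasure/Purpose day-plan log -- hypothetical tasks and
# a hypothetical 0-10 scale, not a real app or the book's exact worksheet.

def add_entry(log, task, pleasure, purpose):
    """Record a completed task with its pleasure and purpose ratings."""
    log.append({"task": task, "pleasure": pleasure, "purpose": purpose})

def daily_review(log):
    """Average the day's scores so it can be reviewed at a glance."""
    if not log:
        return {"pleasure": 0.0, "purpose": 0.0}
    n = len(log)
    return {
        "pleasure": sum(e["pleasure"] for e in log) / n,
        "purpose": sum(e["purpose"] for e in log) / n,
    }

log = []
add_entry(log, "morning walk", pleasure=7, purpose=4)
add_entry(log, "write report", pleasure=3, purpose=9)
print(daily_review(log))  # {'pleasure': 5.0, 'purpose': 6.5}
```

A phone app wrapping something like this would remove the printing and hand-filling that makes the paper version such a chore.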