Calvin10

Self Help, CBT and quantified self Android applications

A lot of people on LW seem to hold The Feeling Good Handbook by Dr. Burns in high regard when it comes to effective self-help. I am in the process of reading through a PDF copy, and it does indeed seem like a good resource: it is not only written in an engaging way, but also packed with various exercises, such as writing out your day plan and reviewing it later while assigning Pleasure and Purpose scores to the various tasks.

The problem I have with this, and with any other exercise-style self-help book, is that I am simply too lazy to regularly print, draft, or fill in written exercise sheets. On the other hand, I have noticed that when prompted by a phone notification, I can usually be trusted to regularly fill in the forms of the QS apps I have installed on my mobile, or to do exercises such as Duolingo language tests.

Since the topics of CBT, depression and such seem to be quite widely discussed here, I have two rather general questions I would like to ask the community:

1) Do you know of any battle-tested mobile applications that implement the CBT exercises mentioned in Dr. Burns' book? If so, please name them, as I would love to install one as well.

2) Do you think that creating a new mobile application that collects all the Feeling Good Handbook exercises in one place and reminds the user to do them regularly (i.e. once daily/weekly in most cases) is a good idea? Would you use such an application yourself? I am an MSc Comp Sci student looking for fun and useful projects to polish my Android skills a bit, and I would love to work on something that might be useful to a wider community. [pollid:852]
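For what it's worth, here is a minimal sketch of how the reminder half of such an app could work, assuming Kotlin and the androidx WorkManager library; the worker class, the unique work name, and the 24-hour interval are placeholders of mine, not taken from any existing app:

    // Minimal sketch (assumes androidx work-runtime-ktx). ExerciseReminderWorker
    // and "daily-cbt-reminder" are made-up names for illustration only.
    import android.content.Context
    import androidx.work.*
    import java.util.concurrent.TimeUnit

    // Runs once per scheduled interval; a real app would post a notification
    // here that deep-links into today's exercise form.
    class ExerciseReminderWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
        override fun doWork(): Result {
            // TODO: build and show the reminder notification (e.g. NotificationCompat).
            return Result.success()
        }
    }

    // Enqueue a repeating daily reminder. KEEP makes rescheduling idempotent,
    // so calling this on every app start is safe.
    fun scheduleDailyReminder(context: Context) {
        val request = PeriodicWorkRequestBuilder<ExerciseReminderWorker>(24, TimeUnit.HOURS).build()
        WorkManager.getInstance(context).enqueueUniquePeriodicWork(
            "daily-cbt-reminder",
            ExistingPeriodicWorkPolicy.KEEP,
            request
        )
    }

The exercise forms themselves would then just be ordinary screens backed by a local database, with the worker's notification pointing at whichever form is due that day.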

Calvin10

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI, so despite my repeated requests, and despite being informed that a previous partner had been infected, he did not get tested.

I thought the accepted theory was that rationalists are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.

Calvin-20

Yes.

I don't know whether it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds will survive until they are revived in the future.

Calvin00

I don't know what conditions are required for accurate preservation of the mind, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.

Some people seem to put their faith in structure as the answer, but how do we test that claim in a meaningful way?

Calvin80

Yes, it is indeed a common pattern.

People are likely to get agitated about the stuff they are actually working on, especially if it is somehow entangled with their state of knowledge, personal interests and employment. The belief that we are the ones to save the world really helps them find the motivation to continue their pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values onto others (Communism will save the world from our greed).

On the other hand, I don't think it is a bad thing. That way we have many small groups, each working on its own subset of the problem space while also trying to save the world from the disaster it perceives as the greatest danger. As long as the response is proportional to the actual risk, of course.

But I still agree with you that it is only prudent to treat any such claims with caution, so that we don't fall into the trap of using data taken from a small group of people working at the Asteroid Defense Foundation as our sole estimate of the likelihood and effect of an asteroid impact, without verifying their claims against an unbiased source. It is certainly good to have someone looking at the sky from time to time, just in case their claims prove true, though.

Calvin00

Here is a parable illustrating the relative difficulty of both problems:

Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.

This is more or less what uploading looks like to me: the data is there, but it still needs to be understood and copied. Ah, and you also need a computer. Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box, stored in an ice cave and guarded by a couple of hopeful monks:

  • Imagine the manuscript has been preserved using the correct means and all the letters are still there.

Uploading is easy. There is no data loss, so it is equivalent to uploading the modern manuscript. This means the monks were smart enough to choose the optimal storage procedure (or got there by accident) - very unlikely.

  • Imagine the manuscript has been preserved using decent means and only some of the letters are still there.

Now we have to do a bit of guesswork... is the manuscript we translate the same thing the original author had in mind? EY called this doing intelligent cryptography on a partially preserved brain, as far as I am aware. The monks knew just enough not to screw up the process, but their knowledge of manuscript-preservation techniques was not perfect.

  • Imagine the manuscript has been preserved using poor means and all the letters have vanished without a trace.

Now we are royally screwed, or we can wait a couple of thousand million years so that an oracle computer can deduce the state of the manuscript by reversing entropy. This means the monks knew very little about manuscript preservation.

  • Imagine there is no manuscript. There is a nice wooden box preserved in astonishing detail, but the manuscript crumbled when the monks put it inside.

Well, the monks who wanted to preserve the manuscript didn't know that preserving the box does not help preserve the manuscript, but they tried, right? This means the monks don't understand the connection between manuscript and box preservation techniques.

  • Imagine there is no manuscript. The box has been damaged as well.

This is what happens when the manuscript-preservation business is run by people with little knowledge of what it takes to store belongings for thousands of years without significant damage.

In other words, uploading is something that can be figured out in the far, far future, while the problem of proper cryo-storage has to be solved correctly right now, as an incorrect procedure may lead to irreversible loss of information for the people who want to be preserved now. I don't assign a high prior probability to the proposition that we know enough about the brain to preserve minds correctly, and therefore cryonics in its current shape or form is unlikely to succeed.

Calvin10

To summarize: belief in things that are not actually true may have a beneficial impact on your day-to-day life?

You don't really require any level of rationality skill to arrive at that conclusion, but the writeup is quite interesting.

Just don't fall into the trap of thinking I am going to swallow this placebo and feel better, because I know that even though the placebo does not work... crap. Let's start again from the beginning.

Calvin10

Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite plausible in the context you give. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards why the hell should I allow other people to mess with my brain?

I cannot even begin to think of the variety of interesting ways in which thought-blocking technology could be applied.

Calvin20

Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:

My intuition does not match the current model, so I am making the incorrect choice and need to change my intuition, become more moral, and act according to the preferred values.

Tweaking the model seems several orders of magnitude harder, but I would guess also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we count egoism, I guess.

Calvin00

The devil, as always, seems to lie in the details, but as I see it, some people may consider this a feature:

Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.

What is the utility of a person who is currently preserved in suspended animation in the hope of future revival? Am I penalized as much as for a person who was, say, cremated?

Are we justified in making all currently living humans unhappy (without sacrificing their lives, of course), so that the means of reviving dead people are developed faster and we can stop being penalized for their ended lifespans?

Wouldn't it be only prudent to stop the creation of new humans until we can ensure their lifespans will reach the end of the universe, to avoid taking negative points?
