All of Calvin's Comments + Replies

Calvin10

Self-Help, CBT and quantified-self Android applications

A lot of people on LW seem to hold The Feeling Good Handbook by Dr. Burns in high regard when it comes to effective self-help. I am in the process of browsing a PDF copy, and it does indeed seem like a good resource, as it is not only written in an engaging way but also packed with various exercises, such as writing your day plan and reviewing it later while assigning Pleasure and Purpose scores to various tasks.

The problem I have with this, and any other exercise-style self-help book, is that I ... (read more)
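For what it's worth, the day-plan exercise described above maps naturally onto a simple data model for such an app. A minimal sketch in Kotlin (all names here are hypothetical; this is one possible design, not anything Burns or an existing app prescribes):

    // One planned task in a daily plan, reviewed at day's end with
    // Pleasure and Purpose scores, as in Burns-style exercises.
    data class PlannedTask(
        val description: String,
        var pleasureScore: Int? = null,  // 0-10, filled in during the evening review
        var purposeScore: Int? = null    // 0-10, filled in during the evening review
    )

    data class DayPlan(
        val date: String,
        val tasks: MutableList<PlannedTask> = mutableListOf()
    )

    // Review step: average the scores over all tasks rated so far.
    fun DayPlan.averageScores(): Pair<Double, Double> {
        val rated = tasks.filter { it.pleasureScore != null && it.purposeScore != null }
        if (rated.isEmpty()) return 0.0 to 0.0
        return rated.map { it.pleasureScore!! }.average() to
               rated.map { it.purposeScore!! }.average()
    }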

2ChristianKl
I think the problem is a good problem to work on. The potential benefit is huge. The core reason to recommend Burns' book over other resources is that he actually ran a study to show that the book works. If, on the other hand, you want to create a completely new product, I don't think it makes sense to copy the exercises directly. A book is a different medium than an app, and your goal is to optimise for the app medium. The book was written 25 years ago. It feels dated. That's before we had Martin Seligman campaigning for positive psychology. If I remember right, Burns' book lacks gratitude exercises. I also consider it very helpful to locate emotions in one's own body and be aware of them on a kinesthetic level, and I don't think that thought was in Burns' book. Newer CBT books might also provide good input.
Calvin10

One serious issue we had was that he gave me an STI. He had rationalised that he had a very limited risk of having an STI, so despite my repeated requests, and despite being informed that a previous partner had been infected, he did not get tested.

I thought the accepted theory was that rationalists are less credulous but better at taking ideas seriously, but what do I know, really? Maybe he needs to read more random blog posts about quantum physics and AI to aspire to LW levels of rationality.

3DaFranker
So uh, let's run down the checklist...
[X] Proclaims rationality and keeps it as part of their identity.
[X] Underdog / against-society / revolution mentality.
[X] Fails to credit or fairly evaluate accepted wisdom.
[ ] Fails to produce results and is not "successful" in practice.
[X] Argues for bottom-lines.
[X] Rationalizes past beliefs.
[X] Fails to update when run over by a train of overwhelming critical evidence.
Well, at least, there's that, huh? From all evidence, they do seem to at least succeed in making money and stuff. And hold together a relationship somehow. Oh wait, after reading the original link, it looks like even that might not actually be working!
Calvin-20

Yes.

I don't know if it is going to succeed or not (my precognition skills are rusty today), but I am using my current beliefs and evidence (or sometimes the lack thereof) to speculate that it seems unlikely to work, in the same way cryonics proponents speculate that it is likely (well, likely enough to justify the cost) that their minds are going to survive until they are revived in the future.

Calvin00

I don't know what accurate preservation of the mind depends on, but I am sure that if someone came up with a definite answer, it would be a great leap forward for the whole community.

Some people seem to put their faith in structure as the answer, but how can this claim be tested in a meaningful way?

0passive_fist
It seems like you're saying you don't know whether cryonics can succeed or not. Whereas in your first reply you said "therefore cryonics in the current shape or form is unlikely to succeed."
Calvin80

Yes, it is indeed a common pattern.

People are likely to get agitated about the stuff they are actually working on, especially if it is somehow entangled with their state of knowledge, personal interests and employment. The belief that we are the ones to save the world really helps them find the motivation to continue their pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values on others (Communism will save the world from our greed).

On the other hand, I don't think it is a bad thing. That way, we have many littl... (read more)

0pianoforte611
Good point, I'll include that.
Calvin00

Here is a parable illustrating the relative difficulty of both problems:

Imagine you are presented with a modern manuscript in Latin and asked to retype it on a computer and translate everything into English.

This is more or less how uploading looks to me: the data is there, but it still needs to be understood and copied. Ah, you also need a computer. Now consider that the same has to be done with an ancient manuscript that has been preserved in a wooden box, stored in an ice cave and guarded by a couple of hopeful monks:

  • Imagine the manuscript has been preserved usin
... (read more)
0passive_fist
Are you saying that accurate preservation depends on highly delicate molecular states of the brain, and this is the reason they cannot be preserved with current techniques?
Calvin10

To summarize: belief in things that are not actually true may have a beneficial impact on your day-to-day life?

You don't really require any level of rationality skill to arrive at that conclusion, but the writeup is quite interesting.

Just don't fall into the trap of thinking I am going to swallow this placebo and feel better, because I know that even though placebos don't work... crap. Let's start from the beginning...

0brazil84
Well there's potential value in coming up with a model of the underlying principles at work. It's like the difference between observing that stuff falls when you drop it and coming up with a theory of gravity.
8FeepingCreature
Huh? What are you talking about? Placebo does work (somewhat, in some cases). Placebo even works when you know it's a placebo. Even if you don't use these techniques. There's studies and everything. The brain is absurdly easy to fool when parts of it are complicit. Tell yourself "I will do twenty push-ups and then stop", put your everything in the twenty knowing you'll stop after, and after you reach twenty just keep going for a few more until your arms burn. This will work reliably and repeatably. Your brain simply does not notice that you're systematically lying to it.
So8res110

The major points are:

  1. You can't judge the sanity of a strategy in a vacuum
  2. Your mind may allocate resources differently to "terminal" and "instrumental" goals; leverage this
  3. Certain false beliefs may improve productivity; consider cultivating them
  4. Use compartmentalization to mitigate the negative effects of (3)
  5. You already are a flawed system; expect that the optimal strategies will sometimes look weird
  6. Certain quirks that look like flaws (such as compartmentalization) can be leveraged to your advantage

The insane thing is that you ca... (read more)

Calvin10

Uh... I agree with you that it really just depends on the marketing, and the thought of people willingly mounting thought-taboo chips seems quite possible in the context you describe. The connotations of "Thought Crime" moved me away from thinking about the possible uses of such techniques and towards why the hell should I allow other people to mess with my brain?

I cannot even begin to think about the variety of interesting ways in which thought-blocking technology could be applied.

Calvin20

Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:

My intuition does not match the current model, so I am making an incorrect choice and need to change my intuition, become more moral, and act according to the preferred values.

Tweaking the model seems several orders of magnitude harder but, I would guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume that is not an easy goal to achieve, unless we include egoism, I guess.

Calvin00

The devil, as always, seems to lie in the details, but as I see it, some people may see it as a feature:

Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.

What is the utility of a person who is currently being preserved in suspended animation with hope of future revival? Am I penalized as much as for a person who was, say, cremated?

Are we justified in making all current humans unhappy (without sacrificing their lives, of course), so that means of reviving dead people are created faster, so that we can stop being pena... (read more)

Calvin10

I guess it is kind of a slippery slope, indeed. There are probably ways in which it could work only as intended (hardwired chip or whatever), but allowing other people to block your thoughts is only a couple of steps from turning you into their puppet.

As for simulation as thought crime, I am not sure. If they need to peek inside your brain to check that you are not running illegally constructed internal simulations, the government can just simulate a copy of you (with a warrant, I guess) and either torture it or explicitly read its mind (either way terrible) to fi... (read more)

Calvin10

The way I can see it in sci-fi terms:

If a human mind is the first copy of a brain that has been uploaded to a computer, then it deserves the same rights as any human. There is a rule against running more than one instance of the same person at the same time.

A human mind created on my own computer from first principles, so to speak, does not have any rights, but there is also a law in place to prevent such agents from being created, as human minds are dangerous toys.

Plans to enforce thought-taboo devices are likely to fail, as no self-respecting human being... (read more)

2Lumifer
Of course, provided the alternative is not to just be killed here and now. Men with weapons have been successfully persuading other people to do something they don't want to do for ages.
3ChristianKl
I don't think that's the case. If I presented a technique by which everyone on LessWrong could install in himself Ugh-fields that prevent him from engaging in akrasia, I think there would be plenty of people who would welcome the technique.
3Richard_Kennaway
I have learned a new word today. Was that the French "ingérence", meaning "interference, intervention"?
1Scott Garrabrant
It seems to me that if the government can run a simulation of an individual, it can also get the information in a better way. I am not sure though. That is an interesting question.
3Scott Garrabrant
I wouldn't say they are doomed to fail because it is a slippery slope to NO THINKING ABOUT RESISTANCE, but I would say that is a good reason to object to thought-taboo devices. I think a law stopping you from creating a second copy of a human or creating a new human counts as a thought crime, if the copy or new human is being run in your mind.
Calvin00

Let's make a distinction between "I have a prejudice against you" and "I know something about you".

Assuming I know that IQ is a valid and true objective measure, I can use it to judge your cognitive skills, and your opinion about the result does not matter to anyone, just as your own opinion about your BMI does not.

Assuming that I am not sure whether IQ is valid, I would rather refrain from reaching any conclusions or acting as if it actually mattered (because I am afraid of the consequences), thus making it useless for me in my practical day-to-day life.

6Moss_Piglet
So if we assume a measure is invalid, it is useless to us (as an accurate measure anyway; you already pointed out a possible rhetorical use)? If you'll forgive my saying it, that seems like more of a tautology about measurements in general than an argument about this specific case. If you have evidence that general intelligence as-measured-by-IQ is invalid, or even evidence that people unfamiliar with the field like Dr Atran or Gould take issue with 'reifying' it, that would be closer to what the original question was looking for. I realize this comes off as a bit rude, but this particular non sequitur keeps coming up and is becoming a bit of a sore spot.
6Moss_Piglet
From the OP: - Emphasis mine. We all know the standard "that's racist" argument already, newerspeak is clearly asking for a factual reason why measures of general intelligence are not real / invalid / not useful. Not to mention that the post did not make any claims about, or even mention, heredity of intelligence or race / gender differences in intelligence.
Calvin40

Most of the explanations found on cryonics sites do indeed seem to base their arguments on the hopeful claim that, given the nanotechnology and science of the future, every problem connected to (as you say) rebooting would become essentially trivial.

Calvin00

This is a good argument, capable of convincing me into a pro-cryonics position, if and only if someone can follow the claim with evidence pointing to a high probability estimate that preservation and restoration will become possible within a reasonable time period.

If it so happens that cryopreservation fails to prevent information-theoretic death, then the value of your cryo-warehouses filled with corpses will amount to exactly $0 (unless you also preserve the organs for transplants).

2lsparrish
At some point, you will have to specialize in cryobiology and neuroscience (with some information science in there too) in order to process the data. I can understand wanting to see the data for yourself, but expecting everyone to process it rationally and in depth before they get on board isn't necessarily realistic for a large movement. Brian Wowk has written a lot of good papers on the challenges and mechanisms of cryopreservation, including cryoprotectant toxicity. Definitely worth reading up on. Even if you don't decide to be pro-cryonics, you could use a lot of the information to support something related, like cryopreservation of organs. Until you have enough information to know, with very high confidence, that information-theoretic death has happened in the best cases, you can't really assign it all a $0 value in advance. You could perhaps assign a lower value than the cost of the project, but you would have to have enough information to do so justifiably. Ignorance cuts both ways here, and cryonics has traditionally been presented as an exercise in decision-making under conditions of uncertainty. I don't see a reason that logic would change if there are millions of patients under consideration. (Although it does imply more people with an interest in resolving the question one way or another, if possible.) I don't quite agree that the value would be zero if it failed. It would probably displace various end-of-life medical and funeral options that are net-harmful, reduce religious fundamentalism, and increase investment in reanimation-relevant science (regenerative medicine, programmable nanodevices, etc). It would be interesting to see a comprehensive analysis of the positive and negative effects of cryonics becoming more popular. More organs for transplantation could be one effect worth accounting for, since it does not seem likely that we will need our original organs for reanimation. There would certainly be more pressure towards assisted suicide, so th
Calvin00

I suspect our worldviews might differ a bit, as I don't wish that my values were any different than they are. Why should I?

If Azathoth decided to instill the value that having children is somehow desirable deep into my mind, then I am very happy that, as a first-world parent, I have all the resources I need to turn it into a pleasant endeavor with a very high expected value (a happy new human who hopefully likes me and hopefully shares my values, though I don't have much confidence in the second bet).

Calvin-10

In this case, I concur that your argument may be true if you include animals in your utility calculations.

While I do have reservations about causing suffering in humans, I don't explicitly include animals in my utility calculations, and while I don't support causing suffering for the sake of suffering, I have no ethical qualms about products made with animal fur, animal testing or factory farming, so with regard to pigs, cows and chickens, I am a utility monster.

Calvin00

Ah, I must have misread what you wrote, but English is not my first language, so sorry about that.

I guess if I were a particularly well-organized, ruthlessly effective utilitarian, as some people here are, I could now note down in my notebook that he is happier than I previously thought, and that it is moral to kill him if, and only if, the couple gives birth to 3, not 2, happy children.

Calvin20

Am I to assume that all the sad old hermits of this world are being systematically chopped up for spare parts granted to deserving and happy young people, while well-meaning utilitarians hide this sad truth from us, so that I don't become upset about the atrocities currently being committed in my name?

We are not even close to a world full of utility monsters, and personally I know very few people whom I would consider actual utilitarians.

1Chrysophylax
No, but cows, pigs, hens and so on are being systematically chopped up for the gustatory pleasure of people who could get their protein elsewhere. For free-range, humanely slaughtered livestock you could make an argument that this is a net utility gain for them, since they wouldn't exist otherwise, but the same cannot be said for battery animals.
Calvin00

Am I going to have a chance to actually interact with them, see them grow, etc?

I mean, in the hypothetical case where, as soon as a child is born, nefarious agents of the Population Police snatch him, never to be seen or heard from again, I don't really see the point of having children.

If, on the other hand, I have a chance to actually act as a parent to him, then I guess it is worth it after all, even if the child disappears as soon as it reaches adulthood and joins the Secret Society of Ineffective Altruism, never to be heard from again. I get no benefit of... (read more)

1DaFranker
Thanks for the response! This puts several misunderstandings I had to rest. Programming of Azathoth, because Azathoth doesn't give a shit about what you wish your own values were. Therefore what you want has no impact whatsoever on what your body and brain are programmed to do, such as making some humans want to have children even when every single aspect of it is negative (e.g. painful sex, painful pregnancy, painful birthing, hell to raise children, hellish economic conditions, an absolutely horrible life for the child, etc., such as we've seen examples of in slave populations historically)
Calvin00

Speaking broadly, the desire to lead a happy / successful / interesting life (however winning is defined) is a laudable goal shared by the vast majority of humans. The problem was that some people took the idea further and decided that winning is a good measure of whether someone is a good rationalist or not, as debunked by Luke here. There are better examples, but I can't find them now.

Also, my two cents: while a rational agent may have some advantage over an irrational one in a perfect universe, the real world is so fuzzy and full of noisy i... (read more)

1lmm
I think that post is wrong as a description of the LW crowd's goals. That post talks as if one's akrasia were a fixed fact that had nothing to do with rationality, but in fact a lot of the site is about reducing or avoiding it. Likewise intelligence; that post seems to assume that your intelligence is fixed and independent of your rationality, but in reality this site is very interested in methods of increasing intelligence. I don't think anyone on this site is just interested in making consistent choices.
Calvin10

Well, this is certainly something I agree with, and after looking up the context of the quote I see that it can be interpreted that way.

I agree that my interpretation wasn't very, well... charitable, but without context it really reads like yet another chronicle of a superior debater celebrating victory over someone who dared to be wrong on the Internet.

It seems to me that in the quote Yvain is admitting an error, not celebrating victory. Try taking his use of the word "reasonably" at face value.

2ephion
Speaking of the Principle of Charity...
6Apprentice
Yes, you could turn the quote upside down and it would still work. That was kind of the point. For effective communication it's not a good idea to talk as if your opponent is operating on your assumptions rather than her own assumptions.
Calvin10

I mean, it is either his authoritative summary or yours, and in all honesty that guy actually takes care to construct an actual argument instead of resorting to appeals to authority and ridicule.

Personally, I would be more interested in someone explaining exactly how cues of a piece of info are going to be reassembled and the whole brain reconstructed from partial data.

Proving that cryo-preservation + restoration does indeed work, and also showing the exact method as to how, seems like a more persuasive way to construct an argument rathe... (read more)

Calvin00

It is true, I wasn't specific enough, but I wanted to emphasize the opinion part, and the suffering part was meant to emphasize his life condition.

He was, presumably, killed without his consent, and therefore the whole affair seems morally icky from a non-utilitarian perspective.

If your utility function does not penalize doing bad things as long as the net result is correct, you are likely to end up in a world full of utility monsters.

2Chrysophylax
We live in a world full of utility monsters. We call them humans.
Calvin10

I know it is a local trope that death and destruction are the apparent and necessary logical conclusion of creating an intelligent machine capable of self-improvement and goal modification, but I certainly don't share those sentiments.

How do you estimate the probability that AGIs won't take over the world (the people who constructed them may use them for that purpose, but that is a different story), and would instead be used as simple tools and advisors in the same boring, old-fashioned and safe way that 100% of our current technology is used?

I am explicitly saying that MRI ... (read more)

1hairyfigment
You may have a different picture of current technology than I do, or you may be extrapolating different aspects. We're already letting software optimize the external world directly, with slightly worrying results. You don't get from here to strictly and consistently limited Oracle AI without someone screaming loudly about risks. In addition, Oracle AI has its own problems (tell me if the LW search function doesn't make this clear). Some critics appear to argue that the direction of current tech will automatically produce CEV. But today's programs aim to maximize a behavior, such as disgorging money. I don't know in detail how Google filters its search results, but I suspect they want to make you feel more comfortable with links they show you, thus increasing clicks or purchases from sometimes unusually dishonest ads. They don't try to give you whatever information a smarter, better informed you would want your current self to have. Extrapolating today's Google far enough doesn't give you a Friendly AI, it gives you the making of a textbook dystopia.
6Tenoke
1%? I believe that it is nearly impossible to use a foomed AI in a safe manner without explicitly trying to do so. That's kind of why I am worried about the threat of any uFAI developed before it is proven that we can develop a Friendly one, and without using whatever the proof entails. Anyway, I wasn't aware that we use 100% of our current technology in a safe way.
Calvin-20

Seeing as we are talking about speculative dangers coming from a speculative technology that has yet to be developed, it seems pretty understandable.

I am pretty sure that as soon as the first AGIs arrive on the market, people will start to take the possible dangers more seriously.

3Tenoke
And at that point it will quite likely be the case that we are much closer to having an AGI that will foom than to having an AI that won't kill us, and that it is too late.
Calvin30

We might find out by trying to apply them to the real world and seeing that they don't work.

Well, it is less common now, but I think the community's slow retreat from the position that instrumental rationality is the applied science of winning at life is one of the cases where beliefs had to be corrected to better match the evidence.

5lmm
Is it? I mean, I'd happily say that the LW crowd as a whole does not seem particularly good at winning at life, but that is and should be our goal.
Calvin00

Hopefully, if their use of the word differs from expectations, casual observers won't catch on. I mean...

We want to increase average human cognitive abilities by granting all children access to better education.

...wouldn't raise many eyebrows, but if you heard...

We want to increase average human cognitive abilities by discouraging lower-IQ people from having children.

...then I can't help feeling that the e-word may crop up a lot. I would probably be inclined to use it myself, in all honesty.

Calvin50

It is also likely not written in the way they understand the world. I mean, if charity is assuming that the other person is saying something interesting and worth considering, such an approach strikes me as the exact opposite:

Here, this is your bad, unoriginal argument, but I changed it into something better.

I mean, if you are better at arguing for the other side than your opposition, why do you even speak with them?

Calvin00

Still, if it is possible to have happy children (and I assume happy humans are good stuff), where does the heap of disutility come into play?

EDIT: It is hard to form a meaningful relationship with money, and I would reckon that teaching it to uphold values similar to yours isn't an easy task either. As for taking care, I don't mean palliative care so much as simply the relationship you have with your child.

0hyporational
You can have relationships with other people, and I think it's easier to influence what they're like. I'll list some forms of disutility later, but I think for now it's better not to bias the answers to the original question further. I removed the "heap of disutility" part, it was unnecessarily exaggerated anyway.
0[anonymous]
You can have a relationship with your friends, but don't expect them to take care of you when you're old.
Calvin10

I don't consider myself an explicit rationalist, but the desire to have children stems from the desire to have someone to take care of me when I am older.

Do you see your own conception and subsequent life as the cause of a "huge heap of disutility" that can't be surpassed by the good stuff?

4DaFranker
I've always been curious to see the response of someone with this view to the question: What if you knew, as much as any things about the events of the world are known, that there will be circumstances in X years that make it impossible for any child you conceive to possibly take care of you when you are older? In such a hypothetical, is the executive drive to have children still present, still being enforced by the programming of Azathoth, merely disconnected from the original trigger that made you specifically have this drive? Or does the desire go away? Or something else, maybe something I haven't thought of (I hope it is!)?
0hyporational
Not to me obviously. Not necessarily to my parents either, but I think they might have been quite lucky in addition to being good parents. Doesn't money take care of you when old too? As a side note, if I were old, dying and in a poor enough condition that I couldn't look after myself, I'd rather sign off than make other people take care of me because I can't imagine that being an enjoyable experience.
Calvin50

Personally, I think the principle of charity has more to do with having respect for the ideas and arguments of the other person. I mean, let's say that someone says he doesn't eat shrimp because God forbids him from eating shrimp. If I am being charitable, I am going to slightly alter his argument by saying that the Bible explicitly forbids shrimp. That way we don't have to get sidetracked into other topics.

You said that shrimp are wretched in the eyes of the Lord, and while I agree that the Old Testament explicitly forbids eating them... blah blah...

That... (read more)

6Brillyant
I don't know that this is being charitable. In this case, to be charitable, I'd make the assumption that someone who told me God forbade them to eat something was drawing from OT law, and not nitpick. "Smart person" and "capable of making good arguments" are different things, and both are relative and open to many definitions. As a former Fundamentalist Christian, I don't claim to be smart or very good at making arguments, but I'd say it is not a useful heuristic to enter into a debate or discussion assuming a sincere adherent of that belief system is capable of making a rational argument.
Calvin00

I am going to assume that the opinion of the suffering hermit is irrelevant to this utility calculation.

0solipsist
I didn't mean for the hermit to be sad, just less happy than the child.
0RowanE
It's specified that he was killed painlessly.
Calvin70

Would you like to live forever?

For just a $50 monthly fee, agents of the Time Patrol Institute promise to travel back in time and extract your body a few milliseconds before death. In order to avoid causing temporal "paradoxes", we pledge to replace your body with an (almost) identical, artificially constructed clone. After your body is extracted and moved to the closest non-paradoxical future date, we will reverse the damage caused by aging, increase your lifespan to infinity and treat you to a cup of coffee.

While we are fully aware that time travel is not yet possib... (read more)

-2Lumifer
What?!!? Not tea? I am unwilling to be reborn into such a barbaric environment. Wouldn't it be simpler to convert to Mormonism? :-D
Calvin10

Does anyone else have a problem with this particular statement, taken from the Cryonics Institute FAQ?

One thing we can guarantee is that if you don't sign up for cryonics you will have no chance at all of coming back.

I mean, marketing something as a one-shot chance that might hopefully delay (or prevent) death is hard to swallow, but I can cope with that; what I can't cope with is that this statement reads like cryonics is the one and only possible way to do it.

3Fronken
Well ... isn't it? What others are you thinking of? None spring to my mind.
Calvin00

I was using Leech Block as an old-fashioned reddit-blocker for some time, but then I switched to Rescue Time (free version), which tracks the time you spend on certain internet sites, and I found it much more user-friendly. It does not block the sites, but it shows you a percentage estimate of how productive you have been today (e.g. today, 1 hour on the internet, out of which 30 min on Less Wrong, so 50% productive).
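The percentage it reports is presumably just a time-weighted site classification. A toy sketch of that calculation in Kotlin (the site lists and numbers are invented for illustration; this is not Rescue Time's actual algorithm):

    fun main() {
        // Minutes spent per site today, as a time tracker might record them.
        val minutesBySite = mapOf("lesswrong.com" to 30, "reddit.com" to 30)

        // Sites the user has marked as unproductive (illustrative choice).
        val unproductive = setOf("reddit.com")

        val total = minutesBySite.values.sum()
        val productiveMinutes = minutesBySite.filterKeys { it !in unproductive }.values.sum()

        // e.g. 30 productive minutes out of 60 -> 50% productive
        val percent = if (total == 0) 0 else 100 * productiveMinutes / total
        println("$percent% productive")
    }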

Calvin60

Can you please elaborate on how and why a sufficient understanding of the concept of information-theoretic death, as mapping many cognitive-identity-distinct initial physical states to the same atomic-level physical end state, helps to alleviate the concerns raised by the author?
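To unpack the phrase in that question: model preservation as a map f from initial brain states to preserved end states. Information-theoretic death is then the case where f is many-to-one over identity-relevant states (a restatement in my own notation, not the author's):

    f(s1) = f(s2) = p   for some distinct identity-states s1 ≠ s2
    ⇒ no reconstruction procedure g can satisfy both g(p) = s1 and g(p) = s2,
      so whatever distinguished s1 from s2 is unrecoverable from p alone.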

The basic idea of getting cryonics is that it offers a chance of massively extended lifespan, because there is a chance that it preserves one's identity. That's the first-run approximation, with additional considerations arising from making this reasoning a bit more rigorous, e.g. that cryonics is competitive against other interventions, that the chance is not metaphysically tiny, etc.

One thing we might make more rigorous is what we mean by 'preservation'. Well, preservation refers to reliably being able to retrieve the person from the hopefully-preserved ... (read more)

Calvin60

I can't really offer anything more than personal anecdotes, but here is what I usually do when I try to grab the attention of a group of my peers:

  • When I am talking to several people gathered in a circle and it is my turn to say something important, I take a small step forward so that I physically place myself in the center of the group.
  • When I am speaking, I try to maintain eye contact with all the people gathered around. If I focus too much on only the person I am speaking to, everyone else turns their attention towards them as well.
  • I rarely do it myself
... (read more)
0ChristianKl
While we are on that topic, many people use "you" when talking about themselves. They say sentences like: "Yesterday I thought: You should go to the gym." I once even listened to someone who used "he" when speaking about himself a few years ago. The language was German and he was Austrian, but it still signified how little he identified with his past self. After a bit of prodding he changed to "I". That also changed subtle things about his body language. It was interesting to watch the effect. Identifying with oneself helps one be more charismatic. It's one of those nontrivial aspects of "Just be yourself."
0JayDee
Thanks. These are things I've learnt or tried learning in the past. I'd guess there are good odds that I'm reverting to past (shyer) behaviors in some situations. I'll make an effort to be aware of my body language and focus next time.
Calvin60

This matches my experience. When I don't want to engage in conversation and someone asks "How are you?", I always politely counter with "Fine, thanks" and just carry on with whatever I am doing. I assume the same applies to other people.

Calvin130

One possible explanation of why we as humans might be incapable of creating Strong AI without outside help:

  • Constructing Human Level AI requires sufficiently advanced tools.
  • Constructing sufficiently advanced tools requires sufficiently advanced understanding.
  • The human brain has "hardware limitations" that prevent it from achieving sufficiently advanced understanding.
  • Computers are free of such limitations, but if we want to program them to be used as sufficiently advanced tools, we still need the understanding in the first place.
1passive_fist
As with all arguments against strong AI, there are a bunch of unintended consequences. What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team? Or carrying out the same feat with biological brains using nanotech? In both cases, the natural limitations of the human brain have been transcended, and the chances of such objects engineering strong AI go up enormously. You would then have to explain, somehow, why no such extension of human brain capacity can break past the AI barrier.
5TsviBT
Be sure not to rule out the evolution of Human Level AI on neurological computers using just nucleic acids and a few billion years...
Calvin00

I think that for many people, getting fit (even if they arrived at fitness with an incorrect justification) is far more important than spending time analyzing the theoretical underpinnings of fitness. Same thing with going to heaven, choosing the right cryopreservation technique, learning to cook, or any realm of human activity where we don't learn theory FOR THE SAKE OF BEING RIGHT, but FOR THE SAKE OF ACHIEVING X GOALS.

I mean, I concur that having a vastly incorrect map can result in problems (injuries during workouts, ineffective training routines, endi... (read more)

2Brillyant
Um, yep. And that has been my position all along in this series of posts. I've said why I think Atkins works and why I don't think it has anything to do with why the Atkins diet is said to work. Eat Less, Exercise More for weight loss. Lift More for strength training. Of course there are lots of exceptions, and plenty of nuance within these heuristics. But you said it best: The diminishing returns happen quickly for most people and most advice. My point was only that if someone wants to sell you Magic Muscle Beans and a workout plan that says Lift More, don't buy the beans.
Calvin60

Assuming your partner is not closely associated with LW or the rationalist-transhumanist movement, you might be better off looking for advice elsewhere. Just saying.

9ChristianKl
It's easy to take advice from multiple sources. You don't have to take all of it. Whenever it comes to important decisions I usually take perspectives from multiple sources.
Calvin20

It can get even better, assuming you put your moral reasoning aside.

What you could do is deliberately defect and then publicly announce to everyone that it was a result of random chance.

If you are concerned about lying to others, then I concur that accidentally choosing to defect is the best of both worlds.

6Richard_Kennaway
In the literal PD scenario, I imagine the subsequent conversation would go: "You accidentally informed on us? Okay, we'll accidentally shoot your legs off."
Calvin20

I also liked "Smarter Than Us"; it sounds a lot like a popular science book from an airport store.

I don't like the other titles, as they seem to rely on fearmongering too much.

Calvin00

I am not sure I follow.

If you predict that the majority of 'rational' people (say more than 50%) would pre-commit to cooperation, then you had a great opportunity to shaft them by defecting and running off with their money.

Personally, I decided to defect so as to ensure that other people who also defected wouldn't take advantage of me.
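As a toy illustration of that expected-value reasoning, a Kotlin sketch; the payoff numbers are invented, since the actual stakes of the game aren't specified here:

    fun main() {
        // Hypothetical payoffs for a one-shot cooperate/defect game.
        val bothCooperate = 3.0   // I cooperate, they cooperate
        val sucker = 0.0          // I cooperate, they defect
        val temptation = 5.0      // I defect, they cooperate
        val bothDefect = 1.0      // I defect, they defect

        // Predicted probability that a random other player cooperates.
        val pCooperate = 0.5

        val evCooperate = pCooperate * bothCooperate + (1 - pCooperate) * sucker
        val evDefect = pCooperate * temptation + (1 - pCooperate) * bothDefect

        // With these numbers, EV(defect) = 3.0 > EV(cooperate) = 1.5.
        println("EV cooperate = $evCooperate, EV defect = $evDefect")
    }

With any prisoner's-dilemma-shaped payoffs, defecting beats cooperating against a fixed prediction of the other players' behavior, which is the reasoning sketched above.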

Calvin20

The problem I see with your reasoning lies in the term "potentially save".

Personally, I think it is better to focus our efforts on actions that have a >1% chance of increasing the quality of life and average lifespans of huge populations (say, fighting disease and famine) rather than on something that has a 0.0005% chance of preserving your mind and body so that there is a 0.0005% chance that you achieve immortality or extend your lifespan when future generations decide to "thaw" you (or even give you a new awesome b... (read more)
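The comparison above is essentially an expected-value calculation. A Kotlin sketch using the probabilities from the comment, with the impact magnitudes invented purely for illustration:

    fun main() {
        // Probabilities taken from the comment; magnitudes are assumed.
        val pHealthWorks = 0.01          // ">1% chance" the intervention helps
        val livesImproved = 1_000_000.0  // "huge populations" (assumed figure)

        val pPreserved = 0.000005        // 0.0005% chance of successful preservation
        val pRevived = 0.000005         // 0.0005% chance of later revival
        val livesExtended = 1.0          // one preserved person

        val evHealth = pHealthWorks * livesImproved         // 10,000 expected lives improved
        val evCryo = pPreserved * pRevived * livesExtended  // ~2.5e-11 expected revivals

        println("EV(health) = $evHealth, EV(cryonics) = $evCryo")
    }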