Teaching rationality to kids?

9 chaosmage 16 October 2013 12:38PM

I'm finally getting around to reading "Thinking, Fast and Slow". Much of it I had already learned on LW and elsewhere. Maybe that's why my strongest impression from the book is how accessible it is. Simple sentences, clear and vivid examples, easy-to-follow exercises, a remarkable lack of references to topics not explained right away.

I caught myself thinking "This is a book I should have read as a kid". In my first language, I think I could have managed it as early as 11 years old. Since measured IQ is strongly influenced by habits of thinking and cognitive returns can be reinvested, I'm sure I would be smarter now if I had.

So I have decided to buy a stack of these books and give them to kids on their, say, 12th birthdays. Then maybe Dan Dennett's "Intuition Pumps" a year later - and HPMOR a year after that? I would like to see more suggestions from you guys.

It would obviously be better to start even earlier. So how do you teach rationality to a nine-year-old? Or a seven-year-old? Has anybody done something like that? Please name books, videos or web sites.

If such media are not available, creating them should be low-hanging fruit in the quest to raise the global IQ and sanity waterline. ELI5 writing is very learnable, after all, and ELI5-style interpretations of, say, the Sequences might be helpful for adults too.

Techniques to consciously activate a rationalist self-image

2 chaosmage 30 August 2013 12:01AM

When I do mindless or routine tasks for a while, activities I don't need much conscious thought for, I go back to older and simpler cognitions. It takes me a little effort to, for example, notice I'm confused, and then remember the methods of rationality. That takes me a second or so, during which some attention goes inward, to flash images associated with rationality. So to me, rational, purposeful thought distinctly feels like a faculty that needs to be (re)activated to be used.

That's my own experience. Does anyone experience this similarly?

So I played with this activation. First I started noticing it and feeling it in detail. I don't know how much my (non-trivial) mindfulness meditation training helped with this. I related to the activation impulse as another part of my body, as if I had an extra finger and was learning how to twitch it. I found a short trigger phrase for me to associate with this activation and taught my brain the connection by simply doing the activation at the same time as I was saying the trigger phrase.

Do any of you guys do something like that? Some image, phrase or visualization that helps you remember to be rational?

The Fermi paradox as evidence against the likelihood of unfriendly AI

5 chaosmage 01 August 2013 06:46PM

Edit after two weeks: Thanks to everyone involved in this very interesting discussion! I now accept that any possible differences in how UFAI and FAI might spread over the universe pale before the Fermi paradox's evidence against the pre-existence of any of them. I enjoyed thinking about this a lot, so thanks again for considering my original argument, which follows below...

continue reading »

Business Insider: "They Finally Tested The 'Prisoner's Dilemma' On Actual Prisoners — And The Results Were Not What You Would Expect"

2 chaosmage 24 July 2013 12:44PM

Article at http://www.businessinsider.com/prisoners-dilemma-in-real-life-2013-7#ixzz2ZxwzT6nj; it seems relevant to a lot of the discussion here.

There have been studies suggesting that people who consider themselves relatively successful are less cooperative than people who consider themselves relatively unsuccessful. The study referenced in that article seems to bear this out.

So if you want the other party to cooperate, should you attempt to give that party the impression it has been relatively unsuccessful, at least if that party is human?
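For reference, the dilemma's structure can be sketched with the canonical payoff matrix. The numbers below are illustrative textbook values, not the payoffs from the study in the article:

```python
# Canonical prisoner's dilemma payoffs (illustrative values).
# Key: (my_move, their_move) -> my payoff. C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a given move."""
    return max("CD", key=lambda m: PAYOFF[(m, their_move)])

# Defection dominates regardless of what the other player does...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual cooperation still beats mutual defection.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
```

That tension — individually dominant defection versus collectively better cooperation — is exactly what makes the prisoners' observed cooperation rates interesting.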

Can we make Drake-like Fermi estimates of expected distance to the next planet with primitive, sentient or self-improving life?

0 chaosmage 10 July 2013 01:34AM

I expect everyone here has an opinion on the Drake Equation. (Comment if I'm wrong.) And that's because it is an easy story to remember and spread. Never mind its glaring inadequacy or the symbols it uses: it gives you a number of alien civilizations and somehow that sticks. I'd like to see if a science meme with similar properties could be created to carry a transhumanist payload. So. Could you convince a random person of the following three points if you wanted to?

  • We're getting increasingly confident estimates on the number and distribution of planets in our galaxy.
  • The other factors in the Drake equation have been discussed a lot - they remain guesses till we find something, but at least they aren't going to change a lot until we do.
  • So we should be able to estimate, very roughly and while mumbling about priors, an expected distance to the next planetary body with primitive life, with sentient life or with self-improving life (i.e. something like AIs that can exponentially grow that biosphere's cognitive capacity).
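That third point can be sketched numerically. Here is a minimal back-of-envelope model, where every parameter is a loud guess — the galaxy dimensions are rounded, and the planet counts are invented purely for illustration:

```python
import math

# Rough galactic disk dimensions, in light-years (assumed round numbers).
GALAXY_RADIUS_LY = 50_000
GALAXY_THICKNESS_LY = 1_000
GALAXY_VOLUME_LY3 = math.pi * GALAXY_RADIUS_LY**2 * GALAXY_THICKNESS_LY

def expected_distance_ly(n_planets: float) -> float:
    """Mean distance to the nearest of n_planets scattered uniformly
    through the disk (3D Poisson nearest-neighbor: ~0.554 * density^(-1/3))."""
    density = n_planets / GALAXY_VOLUME_LY3
    return 0.554 * density ** (-1 / 3)

# Planet counts below are pure guesses, for illustration only.
for label, n in [("primitive life", 1e8),
                 ("sentient life", 1e4),
                 ("self-improving life", 10)]:
    print(f"{label}: ~{expected_distance_ly(n):,.0f} light-years")
```

The point is not the numbers themselves but that the pipeline from "guessed count" to "expected distance in light-years" is only a few lines — exactly the kind of sticky, concrete output the Drake Equation meme has.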

I think you could. And if you do, and if you can give a number of light-years, regardless of how much you emphasize the low confidence, aliens will suddenly seem more real to that random person. And so will, if not full transhumanism, at least some vague notion that intelligence must grow much like life does. I think that could reach a lot of people.

(If anybody complains that the expectation of some Singularity-like development is ideological: no, it is a reasonable guess based on the current evidence, much like Drake's expectation of every technological civilization's eventual self-destruction was reasonable in his Cold War era.)

The brain I'm typing this from knows too little math or astronomy to do this locally, so I'm throwing out the idea. Anyone care to play with this?

Google's Executive Chairman Eric Schmidt: apparently a transhumanist

10 chaosmage 25 April 2013 12:36AM

It makes a lot of sense for the Google people to be transhumanist, with Sergey Brin and Larry Page working with the Singularity University, but still I was surprised to hear this on the April 23rd episode of The Colbert Report:

Colbert: Can I live forever?
Schmidt: Yes.
Colbert: Really?
Schmidt: But not now. They need to invent some more medicine.
Colbert: So I can live forever, but later. So I just need to live long enough for later to become now.
Schmidt: But your digital identity will live forever. Because there's no delete button.
Colbert: On me?
Schmidt: That's correct.
Colbert: That's profound.

He seemed quite serious, too.

I guess a lot of people would take transhumanism more seriously if they heard that the top people at Google are on board. I actually find it makes Google seem more trustworthy. In-group psychology is weird.

Here's another good interview with Eric Schmidt. No explicit transhumanism, but some fairly intense plans entirely compatible with it.

(edited: corrected title)

Anybody want to meet in Leipzig, Germany?

1 chaosmage 03 April 2013 10:53PM

Hey guys, does anybody else here live in Leipzig, Germany? I'd love to meet up and find/found an LW community here!

Caelum est Conterrens: I frankly don't see how this is a horror story

26 chaosmage 06 March 2013 10:31AM

So Eliezer said in his March 1st HPMOR progress report:

I recommend the recursive fanfic “Friendship is Optimal: Caelum est Conterrens” (Heaven Is Terrifying).  This is the first and only effective horror novel I have ever read, since unlike Lovecraft, it contains things I actually find scary.

So I read that and it was certainly very much worth reading - thanks for the recommendation! Obviously, the following contains spoilers.

I'm confused about how the story is supposed to be "terrifying". I rarely find any fiction scary, but I suspect that this is about something else: I didn't think Failed Utopia #4-2 was "failed" either and in Three Worlds Collide, I thought the choice of the "Normal" ending made a lot more sense than choosing the "True" ending. The Optimalverse seems to me a fantastically fortunate universe, pretty much the best universe mammals could ever hope to end up in, and I honestly don't see how it is a horror novel, at all.

So, apparently there's something I'm not getting. Something that makes an individual's hard-to-define "free choice" more valuable than her much-easier-to-define happiness. Something like a paranoid schizophrenic's right not to be treated.

So I'd like the dumb version please. What's terrifying about the Optimalverse?

My simple hack for increased alertness and improved cognitive functioning: very bright light

54 chaosmage 18 January 2013 01:43PM

This is a simple idea that I came up with by myself. I was looking for a means to enter high functioning lots-of-beta-waves modes without the use of chemical stimulants. What I found was that very bright light works really, really well.

I got the brightest light bulbs I could get cheaply: 105-watt incandescent bulbs with halogen gas, billed as the equivalent of 130 watts of plain incandescent light. And I got an adaptor that lets me screw four of those into the same socket in the ceiling. The result is about as painful to look at as the sun. It makes my (small) room brighter than a clear summer's day at my latitude and slightly brighter than a supermarket.
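As a sanity check on the supermarket comparison, here is a back-of-envelope illuminance estimate. The lumen figure and room size are assumptions, not measurements:

```python
# All numbers are assumptions for illustration, not measurements.
LUMENS_PER_BULB = 1900   # roughly typical output of a 105 W halogen bulb
NUM_BULBS = 4
ROOM_FLOOR_M2 = 9        # a small room

total_lumens = LUMENS_PER_BULB * NUM_BULBS
# Crude model: all light spread evenly over the floor area (lux = lumens / m^2).
lux = total_lumens / ROOM_FLOOR_M2
print(f"~{lux:.0f} lux")
```

Supermarkets are typically lit to roughly 750 to 1000 lux, while direct summer daylight runs into the tens of thousands, so the setup plausibly beats indoor norms even if it falls well short of the actual sun.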

I guess it affects adenosine much like caffeine does because that's what it feels like. Yet unlike caffeine, it can be rapidly turned on and off, literally with the flip of a switch.

For waking up in the morning, I find bright light more effective than a 200mg caffeine tablet, although my caffeine tolerance is moderate for a scientist.

I have not compared the effects of very bright light to modafinil, which requires a prescription in my country.

When under this amount of light, I need to remind myself to go to bed, because I tire about three hours later than under ordinary lighting. Yet once I switch it off, I can usually sleep within a few minutes, as (I'm guessing) a flood of unblocked adenosine suddenly overwhelms me. I used to have those unproductive late hours where I was too awake to sleep but too tired to be smart. I don't have those anymore.

You've probably heard of light therapy, which uses light to help manage seasonal affective disorder. I don't have that issue, but I definitely notice that the light does improve my mood. (Maybe that's simply because I like to function well.) I'm pretty sure the expensive "light therapy bulbs" you can get are scams, because the color of the light doesn't actually make a difference. The amount of light does.

One nice side benefit is that it keeps me awake while meditating, so I don't need the upright posture that usually does that job. Without the need for an upright posture, I can go beyond two hours straight, which helps enter more profoundly altered states.

After about 10 months of almost daily use of this lighting, I have not noticed any decrease in effectiveness. I do notice I find normally-lit rooms comparatively gloomy, and have an increasingly hard time understanding why people tolerate that. Supermarkets and offices are brightly lit to make the rats move faster - why don't we do that in our homes and, while we're at it, amp it up even further? After all, our brains were made for the African savanna, which during the day is a lot brighter than most apartments today.

Since everyone can try this for a few bucks, I hope some of you will. If you do, please provide feedback on whether it works as well for you as it does for me. Any questions?

Replaceability as a virtue

5 chaosmage 12 December 2012 07:53AM

I propose it is altruistic to be replaceable and therefore, those who strive to be altruistic should strive to be replaceable.

As far as I can Google, this does not seem to have been proposed before. LW should be a good place to discuss it. A community interested in rational and ethical behavior, and in how superintelligent machines may decide to replace mankind, should at least bother to refute the following argument.

Replaceability

Replaceability is "the state of being replaceable". It isn't binary. The price of the replacement matters: so a cookie is more replaceable than a big wedding cake. Adequacy of the replacement also makes a difference: a piston for an ancient Rolls Royce is less replaceable than one in a modern car, because it has to be hand-crafted and will be distinguishable. So something is more or less replaceable depending on the price and quality of its replacement.

Replaceability could be thought of as the inverse of the cost of having to replace something. Something that's very replaceable has a low cost of replacement, while something that lacks replaceability has a high (up to unfeasible) cost of replacement. The cost of replacement plays into Total Cost of Ownership, and everything economists know about that applies. It seems pretty obvious that replaceability of possessions is good, much like cheap availability is good.
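As a toy model (my own framing, not standard economics), replaceability might be scored as the quality of the best available replacement divided by the cost of obtaining it:

```python
def replaceability(replacement_cost: float, replacement_quality: float) -> float:
    """Toy score: quality of the best available replacement (0..1)
    divided by the cost of obtaining it. Higher = more replaceable.
    Units and values are illustrative, not a real economic model."""
    return replacement_quality / replacement_cost

# A cookie: cheap, and the replacement is indistinguishable from the original.
cookie = replaceability(replacement_cost=1, replacement_quality=1.0)
# A hand-crafted Rolls-Royce piston: expensive, and still distinguishable.
piston = replaceability(replacement_cost=5000, replacement_quality=0.9)
assert cookie > piston
```

Both factors from the paragraph above show up: raising the replacement's price or lowering its adequacy pushes the score down.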

Some things (historical artifacts, art pieces) are valued highly precisely because of their irreplaceability. Although a few things could be said about the resale value of such objects, I'll simplify and contend these valuations are not rational.

The practical example

Anne manages the central database of Beth's company. She's the only one who has access to that database, the skillset required for managing it, and an understanding of how it all works; she has a monopoly on that combination.

This monopoly gives Anne control over her own replacement cost. If she works according to the state of the art, writes extensive and up-to-date documentation, makes proper backups, etc., she can be very replaceable, because her monopoly will be easily broken. If she refuses to explain what she's doing, creates weird and fragile workarounds, and documents the database badly, she can reduce her replaceability and defend her monopoly. (A well-obfuscated database can take months for a replacement database manager to handle confidently.)

So Beth may still choose to replace Anne, but Anne can influence how expensive that'll be for Beth. She can at least make sure her replacement needs to be shown the ropes, so she can't be fired on a whim. But she might go further and practically hold the database hostage, which would certainly help her in salary negotiations if she does it right.

This makes it pretty clear how Anne can act altruistically in this situation, and how she can act selfishly. Doesn't it?

The moral argument

To Anne, her replacement cost is an externality and an influence on the length and terms of her employment. To maximize the length of her employment and her salary, her replacement cost would have to be high.

To Beth, Anne's replacement cost is part of the cost of employing her and of course she wants it to be low. This is true for any pair of employer and employee: Anne is unusual only in that she has a great degree of influence on her replacement cost.

Therefore, if Anne documents her database properly etc, this increases her replaceability and constitutes altruistic behavior. Unless she values the positive feeling of doing her employer a favor more highly than she values the money she might make by avoiding replacement, this might even be true altruism.

Unless I suck at Google, replaceability doesn't seem to have been discussed as an aspect of altruism. The two reasons for that I can see are:

  • replacing people is painful to think about
  • and it seems futile as long as people aren't replaceable in more than very specific functions anyway.

But we don't want or get the choice to kill one person to save the life of five, either, and such practical improbabilities shouldn't stop us from considering our moral decisions. This is especially true in a world where copies, and hence replacements, of people are starting to look possible at least in principle.

Singularity-related hypotheticals

  1. In some reasonably-near future, software is getting better at modeling people. We still don't know what makes a process intelligent, but we can feed a couple of videos and a bunch of psychological data points into a people modeler, extrapolate everything else using a standard population and the resulting model can have a conversation that could fool a four-year-old. The technology is already good enough for models of pets. While convincing models of complex personalities are at least another decade away, the tech is starting to become good enough for senile grandmothers.

    Obviously no-one wants granny to die. But the kids would like to keep a model of granny, and they'd like to make the model before the Alzheimer's gets any worse, while granny is terrified she'll get no more visits to her retirement home.

    What's the ethical thing to do here? Surely the relatives should keep visiting granny. Could granny maybe have a model made, but keep it to herself, for release only through her Last Will and Testament? And wouldn't it be truly awful of her to refuse to do that?
  2. Only slightly further into the future, we're still mortal, but cryonics does appear to be working. Unfrozen people need regular medical aid, but the technology is only getting better and anyway, the point is: something we can believe to be them can indeed come back.

    Some refuse to wait out these Dark Ages; they get themselves frozen for nonmedical reasons, to fastforward across decades or centuries into a time when the really awesome stuff will be happening, and to get the immortality technologies they hope will be developed by then.

    In this scenario, wouldn't fastforwarders be considered selfish, because they impose on their friends the pain of their absence? And wouldn't their friends mind it less if the fastforwarders went to the trouble of having a good model (see above) made first?
  3. On some distant future Earth, minds can be uploaded completely. Brains can be modeled and recreated so effectively that people can make living, breathing copies of themselves and experience the inability to tell which instance is the copy and which is the original.

    Of course many adherents of soul theories reject this as blasphemous. A few more sophisticated thinkers worry that this devalues individuals to the point where superhuman AIs might conclude that, as long as copies of everyone are stored on some hard drive orbiting Pluto, nothing of value is lost if every meatbody gets devoured into more hardware. The bottom line is: effective immortality is available, but some refuse it on principle.

    In this world, wouldn't those who make themselves fully and infinitely replaceable want the same for everyone they love? Wouldn't they consider it a dreadful imposition if a friend or relative refused immortality? After all, wasn't not having to say goodbye anymore kind of the point?

These questions haven't come up in the real world because people have never been replaceable in more than very specific functions. But I hope you'll agree that if and when people become more replaceable, that will be regarded as a good thing, and it will be regarded as virtuous to use these technologies as they become available, because it spares one's friends and family some or all of the cost of replacing oneself.

Replaceability as an altruist virtue

And if replaceability is altruistic in this hypothetical future, as well as in the limited sense of Anne and Beth, that implies replaceability is altruistic now. And even now, there are things we can do to increase our replaceability, i.e. to reduce the cost our bereaved will incur when they have to replace us. We can teach all our (valuable) skills, so others can replace us as providers of these skills. We can not have (relevant) secrets, so others can learn what we know and replace us as sources of that knowledge. We can endeavour to live as long as possible, to postpone the cost. We can sign up for cryonics. There are surely other things each of us could do to increase our replaceability, but I can't think of any an altruist wouldn't consider virtuous.

As an altruist, I conclude that replaceability is a prosocial, unselfish trait, something we'd want our friends to have - in other words, a virtue. I'd go as far as to say that even bothering to set up a good Last Will and Testament is virtuous precisely because it reduces the cost my bereaved will incur when they have to replace me. And although none of us can be truly easily replaceable as of yet, I suggest we honor those who make themselves replaceable, and take pride in whatever replaceability we ourselves attain.

So, how replaceable are you?
