Torture vs Dust Specks Yet Again
The first time I read Torture vs. Specks about a year ago I didn't read a single comment because I assumed the article was making a point that simply multiplying can sometimes get you the wrong answer to a problem. I seem to have had a different "obvious answer" in mind.
And don't get me wrong, I generally agree with the idea that math can do better than moral intuition in deciding questions of ethics. Take this example from Eliezer’s post Circular Altruism which made me realize that I had assumed wrong:
Suppose that a disease, or a monster, or a war, or something, is killing people. And suppose you only have enough resources to implement one of the following two options:
1. Save 400 lives, with certainty.
2. Save 500 lives, with 90% probability; save no lives, 10% probability.
I agree completely that you pick number 2. For me that was just manifestly obvious: of course the math trumps the feeling that you shouldn't gamble with people's lives. But then we get to torture vs. dust specks, and that just did not compute. So I've read nearly every argument I could find in favor of torture (there are a great many, and I might have missed something critical), but while I totally understand the argument (I think), I'm still horrified that people would choose torture over dust specks.
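To make the arithmetic behind that "obvious" choice explicit, here is a quick sketch (my own illustration, not from the original post) of the expected number of lives saved under each option:

```python
# Expected lives saved under each option, using the numbers from the
# example above.
certain = 400 * 1.0            # Option 1: 400 lives, with certainty
gamble = 500 * 0.9 + 0 * 0.1   # Option 2: 500 lives at 90%, none at 10%

print(certain)  # 400.0
print(gamble)   # 450.0 -- the gamble saves 50 more lives in expectation
```

So the expected-value math clearly favors option 2, even though the gamble feels wrong.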
I feel that the way math predominates over intuition begins to fall apart when the problem compares trivial individual suffering with massive individual suffering, in a way very much analogous to the way Pascal's Mugging stops working when you make the credibility really low but the threat really high. Except I find the answer to torture vs. dust specks to be much easier...
Let me give some examples to illustrate my point.
Can you imagine Harry killing Hermione because Voldemort threatened to plague all sentient life with one barely noticed dust speck each day for the rest of time? Can you imagine killing your own best friend/significant other/loved one to stop the powers of the Matrix from hitting 3^^^3 sentient beings with nearly inconsequential dust specks? Of course not. No. Snap decision.
Eliezer, would you seriously, given the choice by Alpha, the alien superintelligence that always carries out its threats, give up all your work and horribly torture some innocent person all day for fifty years, in the face of the threat of 3^^^3 insignificant dust specks barely inconveniencing sentient beings? Or be tortured for fifty years yourself to avoid the dust specks?
I realize that this is much more personally specific than the original question, but it is someone's loved one, someone's life. And if you wouldn't make the sacrifice, what right do you have to say someone else should make it? I feel as though if you want to argue that torture for fifty years is better than 3^^^3 barely noticeable inconveniences, you had better be willing to make that sacrifice yourself.
And I can't conceive of anyone actually sacrificing their life, or themselves, to save the world from dust specks. Maybe I'm committing the typical mind fallacy in believing that no one is that ridiculously altruistic, but does anyone want an Artificial Intelligence that will potentially sacrifice them to deal with the universe's dust speck problem, or some equally widespread and trivial equivalent? I most certainly object to the creation of that AI. An AI that sacrifices me to save two others? I wouldn't like that, certainly, but I still think the AI should probably do it if it judges their lives to be of more value. But dust specks, on the other hand...
This example made me immediately think that some sort of rule is needed to limit morality derived from math in the design of any AI program. When the suffering involved falls below a certain low level and is multiplied by an unreasonably large number, it needs to take some kind of huge penalty, because otherwise an AI would find it vastly preferable that the whole of Earth be blown up than that 3^^^3 people suffer a mild slap to the face.
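As a toy sketch of the kind of rule described above (every number and threshold here is invented purely for illustration; this is not a worked-out proposal): harms below a "triviality" level get their aggregate disutility capped, rather than growing without bound with the number of people affected.

```python
# Toy illustration of a capped aggregation rule for trivial harms.
# All constants are made up for the sake of the example.

TRIVIALITY_THRESHOLD = 0.001  # per-person harms below this are "dust speck" level
TRIVIAL_HARM_CAP = 1000.0     # aggregate disutility of trivial harms never exceeds this

def aggregate_disutility(per_person_harm, num_people):
    total = per_person_harm * num_people
    if per_person_harm < TRIVIALITY_THRESHOLD:
        # Trivial harms stop adding up past the cap, no matter how many
        # people are affected.
        return min(total, TRIVIAL_HARM_CAP)
    return total

# Dust specks for an astronomically large population stay bounded...
specks = aggregate_disutility(0.0000001, 10**100)
# ...while a single instance of severe harm is counted in full.
torture = aggregate_disutility(1_000_000.0, 1)
print(specks < torture)  # True
```

Under a rule like this, no quantity of sub-threshold inconveniences can ever outweigh one instance of severe suffering, which is exactly the behavior the paragraph above asks for (whatever its other philosophical problems may be).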
And really, I don’t think we want to create an Artificial Intelligence that would do that.
I'm mainly just concerned that some factor be incorporated into the design of any Artificial Intelligence that prevents it from murdering me and others for trivial but widespread causes. Because that just sounds like the plot of a sci-fi novel about how superintelligence could go horribly wrong.
Improving Enjoyment and Retention Reading Technical Literature
A little background on myself first – I am currently studying to become involved with aging rejuvenation therapies like SENS.
This requires learning quite a lot about molecular biology, which is fine, because I find cell biology quite interesting. The problem, naturally, is that textbooks and technical literature on the subject often make very little effort to be interesting.
Many of the books I've been reading lately are largely lacking in energy. I was finding my mind often drifting away from what I was reading, and I generally wasn't enjoying the process. Which was bad, because I need to spend a lot of time doing it.
I asked myself, do I hate learning about molecular biology and engineering? Should I shift my goals to something I’m more interested in? But, I didn’t seem to actually be disinterested in the subject. I loved talking about what I’d learned. And I frequently thought about it with interest. I was passionate about the goal of defeating aging. So the problem then was probably the books themselves.
So then the question was: how do I make boringly written biology books fun to read? Find better books? Unfortunately, based on my research, the only biology books written to be interesting tend to focus on other branches of the science; most good molecular biology books are boring. If anyone knows of any books on the subject that are unusually well written, please let me know, but I couldn't find any.
So I looked for the bright spots: where reading was fun. What makes reading a novel fun? I asked. Interesting story, character interactions, suspense, humor, dramatic scenes.
None of these are incorporated in the molecular biology books and publications that I can find. But the answer was still there: visualize what I read. Not just the little diagrams of cellular interactions that books usually give you, but stupid, over-the-top, Hollywood-style visualization. I had to make it dramatic. I had to mentally reconstruct the biology of a cell in massive, fast, and explosive terms.
Suddenly, I was reading about genetic engineering with a grin on my face, because I was visualizing a cackling mad scientist taking a jackhammer to a gene sequence.
Which, yes, is totally not what is happening in any way, but unusual things are what stick in human memory; just reading a passage normally makes it easy to forget what I've read. And the weirdness seems to make the parts around it more memorable too, so I find I'm remembering what I read a lot better.
Most of the time I try not to make it that absurd. But if I imagine spliceosomes blasting introns out of RNA molecules or cell lysis as an overstated explosion of a cell I simply remember the concepts better. It isn’t the most accurate view of reality, but I'm aware of that when I think back on it, and it’s better than not remembering it.
But this strategy eventually gets a little tiring to maintain alone, so I had to add a second technique. Every time my mind starts to wander, I stop, close my eyes, and refocus on what I'm reading: I recite 'Tsuyoku Naritai', why I want to become stronger, and what I have to protect. Then I continue. I find this little technique makes a massive difference. It reorients me so that I keep concentrating, and it briefly reminds me of what I'm pursuing and why. And if that doesn't give you the motivation to continue, you should probably find a different project.
A third useful strategy has been planning how long I will read instead of how much, and then breaking up the time spent reading over the course of a day. First, it encourages reading to understand fully rather than reading to finish fifty pages. I also find it tends to get me to read more pages, despite removing the motivation to go fast; time goals just take some of the pressure of failing to complete work off. As an example, I read about 160 pages of a molecular biology textbook today using an input-based time goal. I used to plan for fifty pages of similar material on a regular day and sometimes not finish even that. To be fair, I'm spending more time reading now, but I think using input-based goals instead of output goals had a part in that.
The other results I've gotten from these strategies have been pretty good as well. I’ve been trying to quantify my happiness lately, on a scale where every full number corresponds to doubled enjoyment, and now that I’m doing these three things my average happiness while reading technical passages has gone up by nearly a full point. My enjoyment of technical literature has gone from somewhere around 'yeah, it’s ok, I guess' to 'happy' while reading. And because it’s just more fun to do, it helps me to spend more time reading about molecular biology, more time working towards an unaging future.
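Since each full point on that scale corresponds to a doubling, a change of d points multiplies enjoyment by 2^d. A quick sketch of what "nearly a full point" works out to (the 0.9 figure is just illustrative):

```python
# On a scale where each full point means doubled enjoyment, a change of
# d points multiplies subjective enjoyment by 2**d.
def enjoyment_multiplier(point_change):
    return 2 ** point_change

print(enjoyment_multiplier(1.0))  # 2.0  -> a full point is an exact doubling
print(enjoyment_multiplier(0.9))  # ~1.87 -> "nearly a full point"
```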
Anyway, I thought I’d post the ideas in case they helped anyone else out (although the first might not work as well for things that are harder to visualize). I’m also interested if anyone does anything similar (or different) to increase their enjoyment of similar texts.
Daily Schedules in Combating Akrasia
For the last several months I've had increasing trouble with motivation to work. Reading dense technical papers, writing, and exercise were all much more difficult to prompt myself into starting and completing. About two weeks back, I decided to try making a plan for my day the night before, to see if it would help me get done the things I wanted to do. So every night before I go to bed, I've been writing up a schedule for the next day, detailing exactly what I want to accomplish and when I intend to do it.
This has actually worked incredibly well in helping with my motivation problems; in fact, within a couple of days I felt more motivated to work than I can ever remember being before. I'm trying to vary my schedule and leave time for spontaneity so the plan doesn't become monotonous, and so far it hasn't. The results I'm getting are great: I complete about 95% of what I plan when I have a specific time written down for doing it, as opposed to what I'd roughly estimate at 60% completion when I just have some general idea in my head of what to work on over the course of the day.
My theory for why this works is that when I have a specific time to do something, I feel as though I have to do it now or I've failed some test of willpower. If I just have general work to be done, it's far too easy to defer it until later, so a lot of what was planned doesn't get done. I also find that if I've braced my mind in advance for dense technical learning, I have a much easier time finishing the material instead of giving up and procrastinating halfway through.
I feel like this solution will work mainly for people who have more flexible schedules (as I do at the moment) but could still serve a purpose for anyone with a more rigid schedule who wants to be more productive in their free time.
Has anyone else tried this type of thing, and if so, how did it work out for you over a longer period of time? Also, what are people's thoughts on the general idea?