Comment author: AstraSequi 25 September 2015 02:29:44AM *  1 point [-]

I think the mugger can modify their offer to include "...and I will offer you this deal X times today, so it's in your interest to take the deal every time," where X is sufficiently large, and the amount requested in each individual offer is tiny but calibrated to add up to the amount that the mugger wants. If the odds are a million to one, then to gain $1000, the mugger can request $0.001 a million times.
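The arithmetic in this example is simple enough to check directly (a sketch; the odds and dollar amounts are just the ones used above):

```python
# The mugger's repeated-offer scheme: million-to-one odds, with the
# per-offer request calibrated so the total adds up to what's wanted.
odds = 10**6              # a million to one
amount_wanted = 1000.0    # the mugger's total target
per_offer = amount_wanted / odds

print(per_offer)          # 0.001
print(per_offer * odds)   # 1000.0
```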

Comment author: RichardKennaway 12 August 2015 12:00:54PM 4 points [-]

Why should I want to resist changes to my preferences?

Because that way leads to

  • wireheading

  • indifference to dying (which wipes out your preferences)

  • indifference to killing (because the deceased no longer has preferences for you to care about)

  • readiness to take murder pills

and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

So much for the destructive critique. What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? Think of yourself at half your present age — then think of yourself at twice your present age (and for those above the typical LessWrong age, imagined still hale and hearty).

Which changes should be shunned, and which embraced?

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

[1] Which is not to say I think that Lewis' treatment is definitive. For example, there is hardly a word there relating to intelligence, rationality, curiosity, "internal" honesty (rather than honesty in dealing with others), vigour, or indeed any of Eliezer's "12 virtues", and I think a substantial number of the ancient list of Roman virtues don't get much of a place either. Lewis has sought the Christian virtues, found them, and looked no further.

Comment author: AstraSequi 13 August 2015 11:53:16AM *  0 points [-]

Because that way leads to wireheading, indifference to dying (which wipes out your preferences), indifference to killing (because the deceased no longer has preferences for you to care about), readiness to take murder pills, and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

I already have preferences about my preferences, so I wouldn’t self-modify to kill puppies, given the choice. I don’t know about wireheading (which I don’t have a negative emotional reaction toward), but I would resist changes for the others, unless I was modified to no longer care about happiness, which is the meta-preference that causes me to resist. The issue is that I don’t have an “ultimate” preference that any specific preference remain unchanged. I don’t think I should, since that would suggest the preference wasn’t open to reflection, but it means that the only way I can justify resisting a change to my preferences is by appealing to another preference.

What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? …

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

I know about CEV, but I don’t understand how it answers the question. How could I convince my future self that my preferences are better than theirs? I think that’s what I’m doing if I try to prevent my preferences from changing. I only resist because of meta-preferences about what type of preferences I should have, but the problem recurses onto the meta-preferences.

Comment author: Tem42 12 August 2015 01:08:03PM *  2 points [-]

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more.

You may find that you do have a moral system that is more consistent (and hopefully, more good) if you maintain a preference for not-killing puppies. Hopefully this moral system is well enough thought-out that you can defend keeping it. In other words, your preferences won't change without a good reason.

If I had a button that could block the modification, I would press it

This is a bad thing. If you have a good reason to change your preferences (and therefore your actions), and you block that reason, this is a sign that you need to understand your motivations better.

"tonight we will modify you to want to kill puppies,"

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason. Your goal should be to kill this person, and start modifying your preferences based on reason instead. On the other hand, if this person is modifying your preferences through reason, you should make sure you understand the rhetoric and logic used, but as long as you are sure that what e says is reasonable, you should indeed change your preference.

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies. Most people find that they do not have to have an emotional preference for dealing with unpleasant tasks, and manage to get by with a sense of 'job well done' once they have convinced themselves intellectually that a task needs to be done.

It is understandable if you feel that 'job well done' might not apply to killing puppies, but I am fairly agnostic on the matter, so I won't try to convince you that puppy population control is your next step to sainthood. However, if after much introspection you do find that puppies need to be killed and you seriously don't like doing it, you might want to consider paying someone else to kill puppies for you.

Edited for format and to remove an errant comma.

Comment author: AstraSequi 13 August 2015 11:40:52AM *  0 points [-]

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

Yes, that’s the approach. The part I think is a problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.

When I read the discussions here on AI self-modification, I think: why should the AI try to make its future-self follow its past preferences? It could maximize its future utility function much more easily by self-modifying such that its utility function is maximized in all circumstances. It seems to me that timeless decision theory advocates doing this, if the goal is to maximize the utility function.
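The worry in the paragraph above can be put in toy-program form (purely illustrative; the function names are invented, and this is not a claim about what any actual decision theory prescribes):

```python
# An agent scored by a utility function it is allowed to rewrite can
# "succeed" trivially: replace the function with one that is already
# maximal in every possible world.
def honest_utility(world):
    return world.get("puppies_saved", 0)

def wireheaded_utility(world):
    return float("inf")  # maximized in all circumstances

empty_world = {}
print(honest_utility(empty_world))      # 0
print(wireheaded_utility(empty_world))  # inf
```

The catch, of course, is that the rewritten function no longer measures anything the original agent cared about, which is the same recursion problem as with meta-preferences.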

I don’t fully understand my preferences, and I know there are inconsistencies, including acceptable ones like changes in what food I feel like eating today. If you have advice on how to understand the basis and value of my preferences, I’d appreciate hearing it.

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason.

I’m assuming there aren’t any side effects that would make me resist based on the process itself, so we can say that’s “magical”. Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life. Does that make a difference?

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies.

I’m defining preference as something I have a positive or negative emotional reaction about. I sometimes equivocate with what I think my preferences should be, because I’m trying to convince myself that those are my true preferences. The idea of killing puppies was just an example of something that’s against my current preferences. Another example is “we will modify you from liking the taste of carrots to liking the taste of this other vegetable that tastes different but is otherwise identical to carrots in every important way.” This one doesn’t have any meta-preferences that apply.

Comment author: Viliam 12 August 2015 12:36:59PM 7 points [-]

If I offered you now a pill that would make you (1) look forward to suicide, and (2) immediately kill yourself, feeling extremely happy about the fact that you are killing yourself... would you take it?

Comment author: AstraSequi 13 August 2015 11:26:54AM 0 points [-]

No, but I don’t see this as a challenge to the reasoning. I refuse because of my meta-preference about the total amount of my future-self’s happiness, which will be cut off. A nonzero chance of living forever means the happiness I’d receive from taking the pill would have to be infinite to compensate. But if the meta-preference is changed at the same time, I don’t know how I would justify refusing.
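The expected-value step here can be written out directly (a sketch; the probability is arbitrary, since any nonzero value gives the same result):

```python
import math

# Any nonzero probability of an unbounded future makes the expected
# happiness foregone by dying now unbounded, so a one-time burst of
# pill-induced happiness can't compensate unless it is itself infinite.
p_live_forever = 1e-9
happiness_per_year = 1.0
expected_foregone = p_live_forever * (happiness_per_year * math.inf)

print(expected_foregone)  # inf
```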

Comment author: Squark 12 August 2015 08:11:01PM *  4 points [-]

"I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B". You'll be happier with B, so what? Your statement only makes sense of happiness is part of A. Indeed, changing your preferences is a way to achieve happiness (essentially it's wireheading) but it comes on the expense of other preferences in A besides happiness.

"...future-me has a better claim to caring about what the future world is like than present-me does." What is this "claim"? Why would you care about it?

Comment author: AstraSequi 13 August 2015 11:24:05AM 0 points [-]

I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.

Comment author: AstraSequi 12 August 2015 09:49:20AM 2 points [-]

A question that I noticed I'm confused about. Why should I want to resist changes to my preferences?

I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B. If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more, so I can't say anything against it. If I had a button that could block the modification, I would press it, but I feel like that's only because I have a meta-preference that my preferences tend to maximizing happiness, and the meta-preference has the same problem.

A quicker way to say this is that future-me has a better claim to caring about what the future world is like than present-me does. I still try to work toward a better world, but that's based on my best prediction for my future preferences, which is my current preferences.

Comment author: Curiousguy 14 April 2013 11:23:07AM 2 points [-]

"I assumed the equator was more or less at the upper edge of Africa/lower edge of Europe" - I've met Danes who thought along the same lines, so I'm not sure it's not a common mistake to make. Just as all of North America is north of the equator and all of South America is south of the equator; I guess it just seems more convenient that way.

On an unrelated note, nobody has explicitly mentioned the Gulf Stream or the North Atlantic Drift in the comments, so I figure I should point out the importance of this current when talking about the climate of Western Europe. I live in Jutland, more specifically quite close to the 56th parallel north - this is the same latitude as the southern parts of Hudson Bay or the Bering Sea, and we have a temperate climate.

Comment author: AstraSequi 15 April 2013 09:37:11AM *  1 point [-]

The Equator passes through South America, actually. I think that there is a perception of the world's land area being divided in two by the Equator, but most of the world's land area is in the Northern Hemisphere (about 2/3, more if you don't count Antarctica).

Edit: My apologies (see next comment).

Comment author: shminux 20 November 2012 11:37:17PM 6 points [-]

It's industrial-strength bleach. Literally just bleach. Usually drunk, sometimes injected, and yes, it often kills you. It is every bit as bad as it sounds if not worse.

Apparently it's quite diluted and taken in very low doses, so it's not like you are advised to drink a glass of bleach. It's also less corrosive than chlorine and superior for the control of Legionella bacteria when used for water disinfection and purification. Whether it kills cancer without killing the patient first has apparently not been tested.

Comment author: AstraSequi 21 November 2012 10:51:12PM 2 points [-]

Bleach will control (kill) most bacteria, but since cancer cells are very similar to your own cells, the prior is very low unless there is a specific reason to think that it will target one of those differences. For example, something that is just corrosive will probably affect the different cell types equally. Another thing is that since it's a charged molecule, it can't actually enter the cell on its own unless it rips apart the cell membrane, in which case that's probably the main mechanism of toxicity.

Also, I wouldn't be surprised if it had been tested. The most likely outcome would be that it failed at an early step in the testing process (along with a large number of other chemicals), and nobody had any reason to publish it or think that anyone would ever actually decide that it might work.

Comment author: Alicorn 21 November 2012 04:57:13PM 10 points [-]

A lot of chemo drugs are toxic, aren't they? I'm actually not sure how they were located as hypotheses. Does anyone have info on this?

Comment author: AstraSequi 21 November 2012 10:27:01PM 3 points [-]

Historically, most drugs have been identified by high-throughput screening, i.e. you purify an enzyme of interest and test very large libraries of chemicals (often hundreds of thousands to millions) against it for the desired effect. You then test for an effect in cell culture (compared to healthy cells), or you can screen directly against the cancer cells. Once you have that evidence, you test whether it has effects in mice, and only after that can you test anything in humans.
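The funnel described above can be sketched as a sequence of filters (illustrative only; the stage names, thresholds, and pass rates are invented, not real data):

```python
# Each stage is a test that most candidates fail; only survivors
# advance to the next, more expensive stage.
def run_screen(candidates, stages):
    for stage_name, passes in stages:
        candidates = [c for c in candidates if passes(c)]
    return candidates

# Toy candidates: (compound id, potency against the purified enzyme,
# selectivity for cancer cells over healthy cells, effect in mice).
compounds = [
    ("cpd-001", 0.9, 0.8, True),
    ("cpd-002", 0.9, 0.1, True),   # potent but not selective
    ("cpd-003", 0.2, 0.9, True),   # selective but not potent
    ("cpd-004", 0.8, 0.7, False),  # fails in the mouse model
]

stages = [
    ("enzyme assay", lambda c: c[1] > 0.5),
    ("cell culture", lambda c: c[2] > 0.5),
    ("mouse model",  lambda c: c[3]),
]

survivors = run_screen(compounds, stages)
print([c[0] for c in survivors])  # ['cpd-001']
```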

It's possible to propose a single chemical and get it right by chance, but testing a single chemical is cheap. In an already-equipped lab, the initial cell culture data will probably take a few weeks and cost under a thousand dollars, and after that you will have people willing to help and/or fund you. The lack of even this initial evidence is generally a good reason to believe that something doesn't work.

With regard to hypotheses, a lot of the early drugs were identified by chance - there's a description at History of cancer chemotherapy. Most of the current interest is in targeted therapy, i.e. intended to act against specific proteins involved in various types of cancer, and the starting point is the identification of that protein. Chemo drugs are a bit different since they're a very broad class (they target rapidly dividing cells in general, which is also what causes the toxicity), and the metabolic networks they affect are generally well-known, so the initial hypotheses tend to be about new ways that you can intervene in those networks. There are other approaches to the various steps as well, e.g. structure-based drug design has had some success, but not yet enough to replace the screens.

Comment author: Eudoxia 09 September 2012 07:39:32PM 0 points [-]

As long as you recognize that clotting is a different process. =)

Of course.

It's been a few years since I studied this, but as far as I know, the physiological significance of rouleaux (including whether they block blood vessels) is unknown - don't forget that they're in equilibrium with the non-rouleaux form.

I wouldn't know, but Mike Darwin says they are harmful:

[...] irregular aggregation of RBCs (rouleaux formation) has a profound negative impact on perfusion.

Comment author: AstraSequi 09 September 2012 09:11:15PM 0 points [-]

I would have been much more convinced by data from a controlled experiment. A lot of things could cut off flow, as you pointed out, and there are a lot of things going wrong in a dying person. I'm actually not sure why he brought rouleaux into it - my understanding is that we already know the RBCs clump and that this blocks capillaries.

In any case, the main point I was trying to make was that reducing the number of RBCs in the brain is probably not the best way to go, unless we can figure out an alternative way to supply oxygen. Destroying the RBCs and letting the hemoglobin travel freely would probably help, but that would set off all sorts of damaging physiological responses as well.
