[LINK] Lectures on Self-Control and the myth of Willpower

8 joaolkf 13 March 2015 12:34AM

Last week Professor Neil Levy, a neuroethicist, gave three lectures on Self-Control at the Oxford Martin School. Roughly, the first lecture targeted philosophical issues, the second empirical issues, and the third bridged the two. Neil's summaries and the audio recordings are at the bottom. In the next two paragraphs I briefly summarize the take-home message of the lectures.


Over the three lectures, he offered a new approach to self-control. He argues that willpower is of little relevance to self-control, that the self-control character trait which correlates with success is not willpower, and that relying on willpower alone leads to low levels of self-control. He contends that self-control is mainly the ability of self-management: managing the environment so that temptations become more costly, less salient or inaccessible - he mentions Facebook nanny, Beeminder, commitment devices and so on. Self-management is the ability not to have to use willpower. Willpower is only relevant insofar as it enables these techniques to be deployed; once in place, the techniques themselves consist in avoiding the use of willpower altogether. Willpower is an extremely scarce resource, and we should use the little we have so that we no longer have to depend on it. He cites numerous pieces of evidence in support of his view. To mention a few: people high in the self-control character trait are less able to resist temptations and exert less effortful self-control, but are nonetheless more likely to pick environments with few temptations/distractions and are better at developing techniques to ignore temptations (e.g., the children in delay-of-gratification experiments who sang, slept, etc.). He contends that glucose's role in increasing self-control works by signalling high short-term resource availability and a stable environment, and thus low opportunity costs - but he doesn't expect this effect to hold in the long term. He predicts that unconscious signals of a stable environment will increase self-control, which helps explain why high socio-economic status correlates strongly with self-control. Neil has a blog post discussing how the myth that willpower equals self-control prevents the prescription of policies that would use these managerial techniques to increase people's self-control. By the end, he hinted that his view could perhaps be seen as an extended will view, mirroring the extended mind view.

I believe the first lecture is not especially helpful for LessWrongers, as it mainly focuses on contrasting his view with the rationalist view within philosophy, which is not particularly rational and likely false. Those interested in the empirical side will find the second lecture more attractive and should check this blog post summarizing it. I think the take-home message is spelled out most extensively in the third lecture. On the first lecture, there is this post by Professor Julian Savulescu, which particularly addresses the objective stance towards oneself present in Neil's view and opposed by the (philosophical) rationalists. There is some disagreement about to what extent these rationalists would really disagree with this view.

 

Lecture One: Self-Control: A problem of self-management

In this lecture I argue that self-control problems typically arise from conflicts between smaller sooner and larger later rewards. I suggest that we often fail to navigate these problems successfully because of our commitment to a conception of ourselves as rational agents who answer questions about ourselves by looking to the world. Despite the attractions of this conception, I argue that it undermines efforts at self-control and thereby our capacity to pursue the ends we value. I suggest we think of self-control as a problem of self-management, whereby we manipulate ourselves.

Lecture One Audio.

Blog post by Professor Julian Savulescu on the objectifying view defended in the first Lecture.

 

Lecture Two: The Science of Self-Control

In this lecture I outline some of the main perspectives on self-control and its loss stemming from recent work in psychology. I focus in particular on the puzzle arising from the role of glucose in successful self-control. Glucose ingestion seems to boost self-control but there is good evidence that it doesn't do this by providing fuel for the relevant mechanisms. I suggest that glucose functions as a cue of resource availability rather than fuel.

Lecture Two Audio.

Blog post by Dr. Joshua Shepherd summarizing the second Lecture.

 

Lecture Three: Marshmallows and Moderation

There is evidence that self-control is a character trait. This evidence seems inconsistent with the management approach I advocate, since that approach urges that we look to external props for self-control, not to states of the agent. In this lecture I argue that, contrary to appearances, we should hesitate to think that people high in what is known as trait self-control have any such character trait. In fact, properly understood, the evidence concerning trait self-control supports the management approach.

Lecture Three Audio.

 

EDIT: Changed the title from willpower's relative irrelevancy to the myth of Willpower. Added a link to another blog post.

Cross-temporal dependency, value bounds and superintelligence

7 joaolkf 28 October 2014 03:26PM

In this short post I will attempt to put forth some potential concerns that should be relevant when developing superintelligences, if certain meta-ethical effects exist. I do not claim they exist, only that it might be worth looking for them, since their existence would mean some currently irrelevant concerns are, in fact, relevant.

 

These meta-ethical effects would be a certain kind of cross-temporal dependency of moral value. First, let me explain what I mean by cross-temporal dependency. If value is cross-temporally dependent, then value at t2 could be affected by events at t1, independently of any causal role t1 has on t2. The same event X at t2 could have more or less moral value depending on whether Z or Y happened at t1. For instance, this could be the case in matters of survival. If we kill someone and replace her with a slightly more valuable person, some would argue there was a loss rather than a gain of moral value; whereas if a new person, with moral value equal to the difference between the previous two, is created where there was none, most would consider it an absolute gain. Furthermore, some might consider that small, gradual and continuous improvements are better than big, abrupt ones. For example, a person who forms an intention and a careful, detailed plan to become better, and effortfully works herself into being better, could acquire more value than a person who simply happens to take a pill and instantly becomes a better person - even if they become that exact same person. This is not because effort is intrinsically valuable, but because of personal continuity. There are more intentions, deliberations and desires connecting the two time-slices of the person who changed through effort than there are connecting the two time-slices of the person who changed by taking a pill. Even though both persons become equally morally valuable in isolated terms, they do so through different paths that affect their final value differently.

More examples. You live now, at t1. If suddenly at t2 you were replaced by an alien individual with the same amount of value as you would otherwise have had at t2, then t2 may not have the exact same amount of value as it would otherwise have had, simply by virtue of the fact that at t1 you were alive and the alien's previous time-slice was not. 365 individuals each living a single day do not amount to the same value as a single individual living through 365 days. Slice history into one-day periods: each day the universe contains one unique advanced civilization with the same overall total moral value, each civilization completely alien and ineffable to the others, and each civilization lives for only one day and is then gone forever. This universe does not seem to hold the same moral value as one where a single such civilization flourishes for eternity. In all these examples the value of a period of time seems to be affected by the existence or not of certain events at other periods. They indicate that there is, at least, some cross-temporal dependency.
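The path-dependence idea above can be put as a toy model. The per-day value and continuity bonus below are invented numbers for illustration, not a proposed ethical theory; the point is only that a history's value need not be the sum of its instants' values:

```python
# Toy model: value of a history depends on which person-slices are
# connected, not just on how many valuable days there are.
def history_value(days, per_day=1.0, continuity_bonus=0.5):
    """days: one person-ID per day; consecutive identical IDs are
    slices of a continuing person and earn a continuity bonus."""
    total = len(days) * per_day
    for a, b in zip(days, days[1:]):
        if a == b:  # the two adjacent slices belong to the same person
            total += continuity_bonus
    return total

print(history_value(["alice"] * 365))                 # one person, 365 days
print(history_value([f"p{i}" for i in range(365)]))   # 365 one-day people
```

Both histories contain 365 days of equal per-day value, yet the first scores higher because its slices are connected - the cross-temporal dependency the paragraph describes.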

 

Now consider another type of effect: bounds on value. There could be a physical bound - transfinite or not - on the total amount of moral value that can be present per instant. For instance, if moral value rests mainly on sentient well-being, which can be categorized as a particular kind of computation, and there is a bound on the total amount of such computation that can be performed per instant, then there is a bound on the amount of value per instant. If, arguably, we are currently extremely far from such a bound, and this bound will eventually be reached by a superintelligence (or any other structure), then the total moral value of the universe would be dominated by the value at this physical bound, given that regions where the physical bound wasn't reached would make negligible contributions. The faster the bound can be reached, the more negligible pre-bound values become.
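A toy numerical sketch of this domination effect (all numbers invented for illustration): value per instant grows until it hits the bound, and the pre-bound era then contributes only a sliver of the total.

```python
# Toy model: value per instant grows exponentially until it hits a
# physical bound, then stays at the bound for the remaining instants.
def total_value(growth, bound, start, horizon):
    """Return (total value over all instants, value accrued pre-bound)."""
    value, total, pre_bound_total = start, 0.0, 0.0
    for _ in range(horizon):
        v = min(value, bound)
        total += v
        if v < bound:
            pre_bound_total += v
        value *= growth
    return total, pre_bound_total

total, pre = total_value(growth=2.0, bound=1e12, start=1.0, horizon=1000)
print(pre / total)  # the pre-bound era is a tiny fraction of the total
```

With these numbers the bound is reached after about 40 doublings, so the remaining ~960 at-bound instants dominate; raising the growth rate (reaching the bound faster) shrinks the pre-bound fraction further, as the paragraph says.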

 

Finally, if there is a form of cross-temporal value dependence whereby the events leading up to a superintelligence could alter the value of this physical bound, then we not only ought to make sure we construct a superintelligence safely, but also that we do so following the path that maximizes this bound. It might be the case that an overly abrupt emergence of superintelligence would decrease the bound, so that all future moral value would be diminished by the fact that there was a huge discontinuity in the past events leading to this future. Even small decreases in this bound would have dramatic effects. Although I do not know of any plausible cross-temporal effect of this kind, the question seems to deserve at least a minimal amount of thought. Both cross-temporal dependency and bounds on value seem plausible (in fact I believe some form of each is true), so it is not at all prima facie inconceivable that we could have cross-temporal effects changing the bound up or down.

Design-space traps: mapping the utility-design trajectory space

-2 joaolkf 10 November 2013 05:32PM

This is a small section of a paper I'm writing on moral enhancement. I'm trying to briefly summarize some points that have already been made concerning local optima in evolutionary processes, and the safety of taking humanity out of those local optima. You might find the text helpful in that it summarizes a very important concept. I don't think there's anything new here, but I hope the way I tried to phrase the utility-design trajectory space topology more properly at the end can be fruitful. I would appreciate any insights you might have about that formulation: how to develop it more rigorously, and some of its consequences. I do have some ideas, but I want to hear what you have to say first. Any other kind of general feedback on the text is also welcome. But keep in mind this is just a section of a larger paper, and I'm mainly interested in how to develop the framework at the end and what its consequences are, rather than in properly developing any of the points in the middle.

Local optima are points where every nearby reachable position is worse, but at least one faraway position is vastly better. A strong case has been made that evolution often gets stuck on such local optima. In evolutionary processes, fitness behaves monotonically: it will necessarily increase or be maintained, and any decrease in fitness will be selected against. If there are vastly better solutions (for, e.g., solving cooperation problems), but in order to achieve those solutions organisms would have to pass through a less fit step, evolution will never reach that vastly better configuration. Evolutionary processes are limited by the topology of the fitness-design trajectory space: evolution can only go from design x to design y if there is at least one trajectory from x to y which is flat or ascending; any trajectory that is momentarily descending cannot be taken. Say one is on the cyan ring ridge of the colored graphic. Although there is a vastly better configuration on the red peak, one would have to travel through the blue moat in order to get there. Unless one is a process that can pass through a sharp decrease in fitness, there is no way of improving towards the red peak. Evolution is particularly prone to local optima due to fitness monotonicity. Enhancing human beings with technology does not fall prey to fitness monotonicity, or to any sort of utility monotonicity in general: we could initially make changes which would be harmful in order to later achieve a vastly better configuration. Therefore, it seems plausible there is a technological path out of evolution's local optima whereby we could rescue our species from these evolutionary imprisonments. Moreover, it is often held that evolutionary local optima can be easily identified, provided a careful, evolutionarily and technically informed analysis is made. Hence these would be low-hanging fruits in the task of improving evolutionary products such as humans: easily accessible and able to produce great advances for humanity with little effort.
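As a minimal illustration (my own toy example, not from the paper), here is a strictly monotonic hill climber on a one-dimensional landscape with a small peak and a tall peak separated by a moat of lower fitness; started on the small peak, it can never cross:

```python
# Two peaks: a small one near x=2 (height 10), a tall one near x=8
# (height 30), with a fitness moat between them.
def fitness(x):
    return max(10 - (x - 2) ** 2, 30 - (x - 8) ** 2)

def monotonic_climb(x, steps=100, step_size=1):
    """Evolution-like search: accepts only flat or ascending moves."""
    for _ in range(steps):
        best = max([x - step_size, x + step_size], key=fitness)
        if fitness(best) >= fitness(x):
            x = best
        else:
            break  # every neighbor is worse: stuck at a local optimum
    return x

print(monotonic_climb(2))  # stays at the local peak, x=2
print(monotonic_climb(7))  # climbs to the global peak, x=8
```

A process allowed to accept a temporary fitness decrease (the analogue of technological enhancement in the text) would simply walk through the moat and reach x=8 from anywhere.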

Nevertheless, it should be noted that getting out of evolutionary local optima might not always be easy or even possible. Fitness does have a relatively strong correlation with overall human utility. And although human intelligence is not as dull as evolutionary processes, and does accept a decrease in utility in order to achieve a better design in the end, if the downward moat is deep enough, the risk of catastrophe - or much worse, extinction - might not be worth taking. At least by being monotonic on a dimension correlated with utility, evolution was able to avoid extreme losses. Perform widespread, willy-nilly human enhancement, and we might fall into the moat guarding the utility-design space garden's delicious low-hanging fruits and not come back up. Particularly so in the case of moral enhancement: there is a self-reinforcing aspect to changing morality, motivations, values and desires. It might be that tampering with deep and fundamental human morality is irreversible, because once we fundamentally value something else, we would not have any compelling reason to want to come back to our old values, desires or aspirations. Thus, it seems there are indeed cases where a small step past the edge of the moat will lead us onto an irreversible path. Correctly mapping how each technology shapes the utility-design trajectory space topology is a deeply needed task, in order to carefully avoid falling into moats - or into even more dangerous existential holes - while attempting to reach the low-hanging fruits beyond local optima. We would do better to stay stuck at local optima than to fall into absolute minima.

Utility-design trajectory space can be more properly defined as a space R^(n+1): a point uses n coordinates to locate each physically possible design in all relevant dimensions (these are fixed by the laws of physics), plus one coordinate, u, given by a utility function. A point corresponds to a design a iff all its neighbouring points correspond to designs one physical step away from design a. Emergent designer processes such as evolution, human enhancement and AIs draw shapes on this space by connecting points that are linked by one possible step under that process. Evolution's hand is monotonic on dimension f, fitness, which makes for a pretty clumsy drawing. Biochemical human enhancement can vary more freely on f, but might face other constraints elsewhere that, e.g., uploaded minds would not. Extinctions correspond to singularities on u: once reached, no other point is reachable; they designate a lack of design. These points that can be reached but cannot reach others need to be correctly mapped. It would also be relevant to investigate how each technology draws its specific shape on design space. Using u as a height analogue, some technologies might be inherently prone to shaping moats with peaks in the middle, extinction holes, effortless utility-maximizing curves and so on. I believe moral enhancement draws a particularly bumpy, hole-prone shape; FAI, an ever utility-ascending shape, with all mishaps being existential holes.
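The reachability notion sketched above can be made concrete in a few lines. This is a hedged illustration with invented designs and utilities: a designer process's constraint is expressed as the maximum per-step utility dip it tolerates (zero for an evolution-like process).

```python
from collections import deque

def reachable(graph, utility, start, max_dip):
    """Designs reachable when each step may drop utility by at most max_dip."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        x = frontier.popleft()
        for y in graph[x]:
            if y not in seen and utility[y] >= utility[x] - max_dip:
                seen.add(y)
                frontier.append(y)
    return seen

# A line of designs: local peak A, moat M, higher peak B.
graph = {"A": ["M"], "M": ["A", "B"], "B": ["M"]}
utility = {"A": 10, "M": 2, "B": 30}

print(reachable(graph, utility, "A", max_dip=0))  # {'A'}: monotonic process is stuck
print(reachable(graph, utility, "A", max_dip=9))  # all three: the dip is tolerated
```

Extinction holes fit the same model as nodes with no outgoing edges: they can appear in a `reachable` set but contribute nothing beyond themselves.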

How to choose a country/city?

11 joaolkf 02 November 2013 01:48AM

EDIT: I've found a very relevant indicator for my question, see "Quality of life" criteria below.


My main question is: which non-academic factors should I consider when moving to another country/city for a PhD? Further, I would also like to evaluate each country/city1 according to those criteria, but first I need to know which criteria are relevant. If you know of any (any at all) scientific literature on moving to another country and well-being, let me know.

I've lived in Brazil all my life, and I really like it here for many reasons - mostly, for how personal relationships are established and maintained. However, Brazil's inability to construct a stable, well-developed society has crippled my intellectual development, and I simply cannot take it anymore - my brain will die here. Moreover, I feel that most of my high-level desires (values) are much more in line with countries on the other end of the World Values Survey graphic: I have rational/secular and self-expression values, instead of traditional/survival-oriented ones. For all those reasons, I will be applying for my PhD abroad. I have pondered many of the career and academic factors involved, with the help of many good and objective indexes (e.g.: here and here). I've mapped most of the Departments of Philosophy in which I could research my topic (moral enhancement), and I believe these are the major factors. However, there is one other important factor I'm a bit clueless about: which country/city is better in all the other aspects not already accounted for by academic criteria?

My main options are2:

  • 1st: Oxford (no need to explain)
  • 2nd: Manchester (it's near Oxford, John Harris is there, one of the foremost researchers on moral enhancement)
  • 3rd: Stockholm (where everyone is born a transhumanist)
  • 3rd: Wellington, New Zealand (Nicholas Agar is there, one of the foremost researchers on moral enhancement)
  • 4th: Some places in continental Europe I'm still investigating (e.g.: Zurich, Munich)
  • 4th: Brazil (bioethics program in Rio de Janeiro)

However, this list is based solely on academic criteria. I need to factor in non-academic criteria. In fact, I do not even know which non-academic criteria are relevant; that is my first question. I got fixated on the World Values Survey factors, but I might be wrong. I gather the happiness index is important, but it might not vary for the same individual between countries, or it might covary oddly with the happiness index of the destination country. My second question is how each country/city ranks according to these criteria.

There are many things that will be affected by assessing these other factors. First, I think Oxford is far, far above the 2nd option. But is it far enough above that, if I do not get in on the first try (80% probability), I should wait and apply again next year instead of going somewhere else where I did get accepted? Second, my current plan is to build the strongest possible application for Oxford and reuse it elsewhere. But if Oxford is not so clearly the undisputed 1st place, then I should be more concerned with building a good application that also accounts for other countries' specific criteria. Furthermore, right now I think I have a major bias against New Zealand. In terms of moral enhancement research it would be the second best after Oxford, and it scores very high on human development, freedom and happiness indexes. However, the fact that it is in the freaking middle of nowhere is very discouraging. Am I wrong about this? What are the correct factors I should be accounting for?
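The wait-versus-go question above is at bottom an expected-value calculation. Here is a rough sketch; all utilities are placeholders I invented (the only number from the post is the 80%), so this shows the shape of the comparison, not an answer:

```python
# Placeholder utilities: u_oxford, u_fallback, u_second_choice, and the
# cost of a lost year are all invented for illustration.
def ev_wait(p_admit, u_oxford, u_fallback, u_lost_year):
    """Expected utility of waiting a year and reapplying to the top choice."""
    return p_admit * u_oxford + (1 - p_admit) * u_fallback - u_lost_year

def ev_go_now(u_second_choice):
    """Utility of accepting an offer elsewhere immediately."""
    return u_second_choice

print(ev_wait(p_admit=0.8, u_oxford=100, u_fallback=60, u_lost_year=10))  # 82.0
print(ev_go_now(u_second_choice=70))  # 70
```

With a big enough gap between the first and second choices, waiting wins even after paying for the lost year; with a small gap, it doesn't - which is exactly why ranking the non-academic factors matters.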

Here is a list of the factors I could gather from the comments, mostly the one by MathiasZaman:

  • World Values Survey: Already explained above; I believe it is one of the most important. But I wonder if I'm not biased and fixated on this. I would also like to have a Cities Values Survey, since in reality I'm choosing cities.
  • Quality of life: It should matter. But I haven't found a good index for not-huge cities. The indexes for countries are well known: Sweden and New Zealand take the lead, then England, and after a while Brazil. However, obviously, being an expatriate changes things a lot. If you know of an expatriates' quality-of-life index for cities or countries, please let me know. There is one good indicator for expatriates available, but it only covers countries.
  • Relative closeness to other countries: I'm having a hard time spelling out this one, but check this comment by Kaj.
  • Language barrier: This is hard to account for. I expect that in no developed country would I be put in a situation where relevant people (from my university) would not talk in English while I'm in the conversation. If that is not true, this is majorly relevant; if it is true, this is mildly relevant. I would expect this to be a function both of English proficiency and of willingness to talk in English. Note that Sweden is the highest in proficiency and the rest of continental Europe is the lowest. However, I do not know how to find the "willingness" factor.
  • Socio-economic system: Highly relevant. I believe this is accounted for in the World Values Survey, as type of government strongly covaries with values: more modern (rational-secular/self-expression) countries have more liberal systems, while less modern ones have stronger governments (and the really ancient ones have almost no State).
  • Public transport and real estate: Highly practical, and something I would not have thought of if not for the comments. Commuting times and costs are very important. Real estate matters too: one of the many reasons I have not considered London is its extremely high rents. This also brings back to mind why I posted this: I remember reading a very useful post on how to choose a house, which pointed out many relevant but often unaccounted-for factors, commuting being one of them. What I want is something similar for cities.
  • Finances: Mildly relevant; I do not believe I will have a desire for anything else besides researching, especially in Oxford. But I might be wrong. How I will finance myself is still a bit uncertain. For high-ranking universities I will probably have a scholarship from Brazil; otherwise I will need a scholarship from elsewhere. With the probabilities in brackets, and some living costs factored in:
    • Oxford: Brazilian government scholarship. They will give me 1100 EUR per month besides paying for all the fees and accommodation. They pay one international travel per year. (90%) High living costs.
    • Manchester, same as above. (70%)
    • Stockholm: Swedish government salary (there, a PhD is a job). For a Physics position it was ~2500 EUR per month. (100%) Very high living costs for expatriates.
    • Wellington: I don't know, but will find out.
    • Brazil: 950 EUR per month (70%). Low living costs. 
  • International status: It makes a huge difference whether one lives in a city by choice or by merely being born there. Prima facie, a person should be more interesting if she is there by choice. Thus, I should give priority to more international cities. I will have to use anecdotal evidence here, since in normal datasets low-skilled immigrants will dominate the sample. If I were less busy, I would compile data on a university-by-university basis.

Finally, please remember this is not a competition between countries or cities, and refrain from expressing any, however tiny, nationalism in the comments. I'm not expressing my subjective feelings either; I'm merely trying to find out the relevant factors and how countries or cities rank according to them.

 

Footnotes:

1. I would mostly like to be comparing cities, which is what I did when accounting for academic criteria; however, (a) some data are only available for countries, (b) in some cases I do not know which city I would go to, and (c) comparing cities makes the analysis more complex.

2. The US is off the table for 4 reasons: (1) I would have to throw my MPhil in the garbage and start over. (2) It isn't that far away from a survival/traditional-oriented society. (3) The GRE (philosophy is the most competitive PhD program; I would have to nearly ace it, and I simply can't do that at the present time). (4) It doesn't have many transhumanist-oriented philosophy departments, especially among the top universities. Canada is out for (1), (3) and (4).

Fixing akrasia: damnation to acausal hell

2 joaolkf 03 October 2013 10:34PM

DISCLAIMER: This topic is related to a potentially harmful memetic hazard that has rightly been banned from LessWrong. If you don't know what it is, you will more likely than not be fine, but be advised. If you do know, do not mention it in the comments.


 

Abstract: The fact that humans cannot precommit very well might be one of our defences against acausal trades. If transhumanists figure out how to beat akrasia by some sort of drug or brain tweaks, that might make them much better at precommitment, and thus more vulnerable. That means solving akrasia might be dangerous, at least until we solve blackmail. If the danger is bad enough, even small steps should be considered carefully.



Strong precommitment and building detailed simulations of other agents are two relevant capabilities humans currently lack. These capabilities have some unusual consequences for games. Most relevant games only arise when there is a chance of monitoring, commitment and multiple interactions. Hence being in a relevant game often implies cohabiting causally connected space-time regions with other agents. Nevertheless, being able to build detailed simulations of agents allows one to vastly increase the subjective probability a particular agent will assign to his next observational moment being under one's control - iff that agent has access to some relevant areas of the logical, game-theoretic space. This doesn't seem desirable from that agent's perspective: it is extremely asymmetrical, and it allows more advanced agents to enslave less advanced ones even if they don't cohabit causally connected regions of the universe. Being acausally reachable by a powerful agent who can simulate 3^^^3 copies of you, but against whom you cannot do much, is extremely undesirable.

However, and more generally, regions of the block universe can only be in a game with non-cohabiting regions if both are agents and if both can strongly precommit. Any acausal trade depends on precommitment; this is the only way an agreement can go across space-time, done in the game-theoretical possibility space - as I am calling it. In the case I am discussing, a powerful agent would only have reason even to consider acausal trading with an agent if that agent can precommit. Otherwise, there is no way of ensuring acausal cooperation. If the other agent cannot, beforehand, understand that due to the peculiarities of the set of possible strategies it is better to always precommit to those strategies that will have higher payoff when considering all other strategies, then there's no trade to be made. It would be like trying to threaten a spider with a calm verbal sentence. If the other agent cannot precommit, there is no reason for the powerful agent to punish him for anything: he wouldn't be able to cooperate anyway, he wouldn't understand the game, and, more importantly for my argument, he wouldn't be able to follow through on his precommitment - it would break down eventually, especially since the evidence for it is so abstract and complex. The powerful agent might want to simulate the minor agent suffering anyway, but that would amount solely to sadism. Acausal trades can only reach regions of the universe capable of strong precommitment.
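The claim that non-precommitting agents aren't worth threatening can be put as a tiny payoff model. The numbers are invented and this is a caricature of the game, not a serious analysis, but it captures the asymmetry:

```python
# Invented payoffs: threatening is only profitable for the powerful agent
# when the target can precommit (and hence can comply ahead of time);
# against a target that cannot, punishment is a pure cost.
GAIN_FROM_COMPLIANCE = 10
COST_OF_PUNISHING = 1

def blackmailer_payoff(target_can_precommit, threaten):
    if not threaten:
        return 0
    if target_can_precommit:
        return GAIN_FROM_COMPLIANCE  # target foresees the threat and complies
    return -COST_OF_PUNISHING        # threat cannot be understood or honored

def should_threaten(target_can_precommit):
    return (blackmailer_payoff(target_can_precommit, True)
            > blackmailer_payoff(target_can_precommit, False))

print(should_threaten(True))   # precommitters are worth threatening
print(should_threaten(False))  # akratic agents are left alone
```

On this model, inability to precommit functions as a shield: it removes the blackmailer's incentive to threaten at all, which is the post's central worry about curing akrasia.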

Moreover, an agent also needs reasonable epistemic access to the regions of logical space (certain areas of game theory, or TDT if you will) that indicate both the possibility of acausal trades and some estimate of the type-distribution of superintelligences willing to trade with him (most likely, future ones that the agent can help create). Forever deterring the advance of knowledge in that area seems unfeasible, or - at best - complicated and undesirable for other reasons.

It is clear that we (humans) don't want to be in an enslavable position. I believe we are not. One of the things excluding us from this position is our complete inability to precommit. This is a psychological constraint, a neurochemical constraint. We do not even have the ability to hold stable long-term goals; strong precommitment is neurochemically impossible. However, it seems we can change this with human enhancement: we could develop drugs that cure akrasia, or overcome the breakdown of will with some amazing psychological technique discovered by CFAR. It seems that, however desirable on other grounds, getting rid of akrasia presents severe risks. Even if we only slightly decrease akrasia, this would increase the probability that individuals with access to the relevant regions of logical space could precommit and become slaves. They might then proceed to cure akrasia for the rest of humanity.

Therefore, we should avoid trying to fundamentally fix akrasia for now, until we have a better understanding of these matters and perhaps solve the blackmail problem, or maybe only after FAI. My point here is merely that we should not endorse technologies (or psychological techniques) that propose to fundamentally fix a problem that would otherwise seem desirable to fix. It would seem like a clear optimization process, but it could actually open the gates of acausal hell and damn humanity to eternal slavery.

 

(Thank cousin_it for the abstract. All mistakes are my responsibility.)

(EDIT: Added an explanation to back up the premise that acausal trade entails precommitment.)