[Link] How the Simulation Argument Dampens Future Fanaticism

6 wallowinmaya 09 September 2016 01:17PM

Very comprehensive analysis by Brian Tomasik on whether (and to what extent) the simulation argument should change our altruistic priorities. He concludes that the possibility of ancestor simulations somewhat increases the comparative importance of short-term helping relative to focusing on shaping the "far future".

Another important takeaway: 

[...] rather than answering the question “Do I live in a simulation or not?,” a perhaps better way to think about it (in line with Stuart Armstrong's anthropic decision theory) is “Given that I’m deciding for all subjectively indistinguishable copies of myself, what fraction of my copies lives in a simulation and how many total copies are there?"
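The "deciding for all copies" framing can be made concrete with a short, purely illustrative Python sketch. All numbers below (copy counts, fractions, per-copy values) are made-up assumptions for illustration, not estimates from the paper:

```python
# A minimal sketch of the anthropic framing in the quote above: rather than
# asking "am I simulated?", weight an action's value by the total number of
# subjectively indistinguishable copies and the fraction of them living in
# simulations. All numbers are hypothetical, for illustration only.

def total_value(n_copies, frac_simulated, value_if_simulated, value_if_basement):
    """Expected value of an action summed over all indistinguishable copies."""
    return n_copies * (
        frac_simulated * value_if_simulated
        + (1 - frac_simulated) * value_if_basement
    )

# Short-term helping pays off for simulated copies too; shaping the far
# future mostly pays off only for non-simulated ("basement-level") copies.
helping = total_value(n_copies=1000, frac_simulated=0.75,
                      value_if_simulated=1.0, value_if_basement=1.0)
far_future = total_value(n_copies=1000, frac_simulated=0.75,
                         value_if_simulated=0.0, value_if_basement=50.0)

print(helping, far_future)  # 1000.0 12500.0
```

Under these made-up numbers, far-future work still dominates, but by a factor of 12.5 rather than the naive per-copy factor of 50 — which is the sense in which the simulation argument "dampens" far-future fanaticism.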

 

[Link] Suffering-focused AI safety: Why “fail-safe” measures might be particularly promising

8 wallowinmaya 21 July 2016 08:22PM

The Foundational Research Institute just published a new paper: "Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention". 

It is important to consider that [AI outcomes] can go wrong to very different degrees. For value systems that place primary importance on the prevention of suffering, this aspect is crucial: the best way to avoid bad-case scenarios specifically may not be to try and get everything right. Instead, it makes sense to focus on the worst outcomes (in terms of the suffering they would contain) and on tractable methods to avert them. As others are trying to shoot for a best-case outcome (and hopefully they will succeed!), it is important that some people also take care of addressing the biggest risks. This perspective to AI safety is especially promising both because it is currently neglected and because it is easier to avoid a subset of outcomes rather than to shoot for one highly specific outcome. Finally, it is something that people with many different value systems could get behind.

In Praise of Maximizing – With Some Caveats

22 wallowinmaya 15 March 2015 07:40PM

Most of you are probably familiar with the two contrasting decision-making strategies "maximizing" and "satisficing", but a short recap won't hurt (you can skip the first two paragraphs if you get bored): Satisficing means selecting the first option that is good enough, i.e. one that meets or exceeds a certain threshold of acceptability. In contrast, maximizing means searching until the best possible option is found.

Research indicates (Schwartz et al., 2002) that there are individual differences with regard to these two decision-making strategies. That is, some individuals – so-called ‘maximizers’ – tend to search extensively for the optimal solution. Other people – ‘satisficers’ – settle for good enough1. Satisficers, in contrast to maximizers, tend to accept the status quo and see no need to change their circumstances2.

When the subject is raised, maximizing usually gets a bad rap. For example, Schwartz et al. (2002) found "negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret."

So should we all try to become satisficers? At least some scientists and the popular press seem to draw this conclusion:

Maximisers miss out on the psychological benefits of commitment, leaving them less satisfied than their more contented counterparts, the satisficers. ...Current research is trying to understand whether they can change. High-level maximisers certainly cause themselves a lot of grief.

I beg to differ. Satisficers may be more content with their lives, but most of us don't live for the sake of happiness alone. Of course, satisficing makes sense when not much is at stake3. However, maximizing can also prove beneficial, both for maximizers themselves and for the people around them – especially in the realms of knowledge, ethics, and relationships, and when it comes to more existential issues, as I will argue below4.

Belief systems and Epistemology

Ideal rationalists could be thought of as epistemic maximizers: They try to notice slight inconsistencies in their worldview, take ideas seriously, and beware wishful thinking, compartmentalization, rationalizations, motivated reasoning, cognitive biases and other epistemic sins. Driven by curiosity, they don't try to confirm their prior beliefs, but wish to update them until they are maximally consistent and maximally correspondent with reality. To put it poetically, ideal rationalists as well as great scientists don't content themselves with wallowing in the mire of ignorance but are imbued with the Faustian yearning to ultimately understand whatever holds the world together in its inmost folds.

In contrast, consider the epistemic habits of the average Joe Christian: He will certainly profess that having true beliefs is important to him. But he doesn't go to great lengths to actually make this happen. For example, he probably believes in an omnipotent and benevolent being that created our universe. Did he impartially weigh all available evidence to reach this conclusion? Probably not. More likely, he merely shares the beliefs of his parents and his peers. However, isn't he bothered by the problem of evil or Occam's razor? And what about all those other religions whose adherents believe with the same certainty in different doctrines?

Many people don’t have good answers to these questions. Their model of how the world works is neither very coherent nor accurate but it's comforting and good enough. They see little need to fill the epistemic gaps and inconsistencies in their worldview or to search for a better alternative. Thus, one could view them as epistemic satisficers. Of course, all of us exhibit this sort of epistemic laziness from time to time. In the words of Jonathan Haidt (2013):

We take a position, look for evidence that supports it, and if we find some evidence—enough so that our position “makes sense”—we stop thinking.

Usually, I try to avoid taking cheap shots at religion and therefore I want to note that similar points apply to many non-theistic belief systems.

Ethics

Let's go back to average Joe: he presumably obeys the dictates of the law and his religion and occasionally donates to (ineffective) charities. Joe probably thinks that he is a “good” person and many people would likely agree. This leads us to an interesting question: how do we typically judge the morality of our own actions?

Let's delve into the academic literature and see what it has to offer: In one exemplary study, Sachdeva et al. (2009) asked participants to write a story about themselves using either morally positive words (e.g. fair, nice) or morally negative words (e.g. selfish, mean). Afterwards, the participants were asked if and how much they would like to donate to a charity of their choice. The result: Participants who wrote a story containing the positive words donated only one fifth as much as those who wrote a story with negative words.

This effect is commonly referred to as moral licensing: People with a recently boosted moral self-concept feel like they have done enough and see no need to improve the world even further. Or, as McGonigal (2011) puts it (emphasis mine):

When it comes to right and wrong, most of us are not striving for moral perfection. We just want to feel good enough – which then gives us permission to do whatever we want.

Another well known phenomenon is scope neglect. One explanation for scope neglect is the "purchase of moral satisfaction" proposed by Kahneman and Knetsch (1992): Most people don't try to do as much good as possible with their money, they only spend just enough cash to create a "warm-fuzzy feeling" in themselves.

Phenomena like "moral licensing" and "purchase of moral satisfaction" indicate that it is all too human to act only as altruistically as is necessary to feel or seem good enough. This could be described as "ethical satisficing" because people just follow a course of action that meets or exceeds a certain threshold of moral goodness. They don't try to carry out the morally optimal action or an approximation thereof (as measured by their own axiology).

I think I cited enough academic papers in the last paragraphs, so let's get more speculative: Many, if not most, people5 tend to be intuitive deontologists6. Deontology basically posits that some actions are morally required and some actions are morally forbidden. As long as you perform the morally required ones and don't engage in morally wrong actions, you are off the hook. There is no need to do more, no need to perform supererogatory acts. Not neglecting your duties is good enough. In short, deontology could also be viewed as ethical satisficing (see footnote 7 for further elaboration).

In contrast, consider deontology's arch-enemy: Utilitarianism. Almost all branches of utilitarianism share the same principal idea: that one should maximize something for as many entities as possible. Thus, utilitarianism could be thought of as ethical maximizing8.

Effective altruists are an even better example of ethical maximizers because they actually try to identify and implement (or at least pretend to try) the most effective approaches to improving the world. Some conduct in-depth research and compare the effectiveness of hundreds of different charities to find the ones that save the most lives with as little money as possible. And rumor has it there are people who have even weirder ideas about how to ethically optimize literally everything. But more on this later.

Friendships and conversations

Humans intuitively assume that the desires and needs of other people are similar to their own. Consequently, I thought that everyone secretly yearns to find like-minded companions with whom one can talk about one’s biggest hopes as well as one’s greatest fears, and form deep, lasting friendships.

But experience tells me that I was probably wrong, at least to some degree: I found it quite difficult to have these sorts of conversations with certain kinds of people, especially in groups (luckily, I’ve also found enough exceptions). It seems that some people are satisfied as long as their conversations meet a certain, not very high, threshold of acceptability. Similar observations can be made about their friendships in general. One could call them social or conversational satisficers. By the way, this time research actually suggests that conversational maximizing is probably better for your happiness than small talk (Mehl et al., 2010).

Interestingly, what could be called "pluralistic superficiality" may account for many instances of small talk and superficial friendships since everyone experiences this atmosphere of boring triviality but thinks that the others seem to enjoy the conversations. So everyone is careful not to voice their yearning for a more profound conversation, not realizing that the others are suppressing similar desires.

Crucial Considerations and the Big Picture

On to the last section of this essay. It’s even more speculative and half-baked than the previous ones, but it may be the most interesting, so bear with me.

Research suggests that many people don’t even bother to search for answers to the big questions of existence. For example, in a representative sample of 603 Germans, 35% of the participants could be classified as existentially indifferent; that is, they neither think their lives are meaningful nor suffer from this lack of meaning (Schnell, 2010).

The existential thirst of the remaining 65% is presumably harder to satisfy, but how much harder? Many people don't invest much time or cognitive resources in ascertaining their actual terminal values and how to optimally reach them – which is arguably of the utmost importance. Instead they appear to follow a mental checklist of common life goals (one could call them "cached goals") such as a nice job, a romantic partner, a house and probably kids. I’m not saying that such goals are “bad” – I also prefer having a job to sleeping under a bridge, and having a partner to being alone. But people usually acquire and pursue their (life) goals unsystematically and without much reflection, which makes it unlikely that such goals exhaustively reflect their idealized preferences. Unfortunately, many humans are so occupied by the pursuit of such goals that they are forced to abandon further contemplation of the big picture.

Furthermore, many of them lack the financial, intellectual or psychological capacities to ponder complex existential questions. I'm not blaming subsistence farmers in Bangladesh for not reading more about philosophy, rationality or the far future. But there are more than enough affluent, highly intelligent and inquisitive people who certainly would be able to reflect on crucial considerations. Instead, they spend most of their waking hours maximizing nothing but the money in their bank accounts or interpreting the poems of some Arab guy from the 7th century9.

Generally, many people seem to take the current rules of our existence for granted and content themselves with the fundamental evils of the human condition such as aging, needless suffering or death. Whatever the reason may be, they don't try to radically change the rules of life and their everyday behavior seems to indicate that they’ve (gladly?) accepted their current existence and the human condition in general. One could call them existential satisficers.

Contrast this with the mindset of transhumanism. Generally, transhumanists are not willing to accept the horrors of nature and realize that human nature itself is deeply flawed. Thus, transhumanists want to fundamentally alter the human condition and aim to eradicate, for example, aging, unnecessary suffering and ultimately death. Through various technologies transhumanists desire to create a utopia for everyone. Thus, transhumanism could be thought of as existential maximizing10.

However, existential maximizing and transhumanism are not very popular. Quite the opposite: existential satisficing – accepting the seemingly unalterable human condition – has a long philosophical tradition. To give some examples: The otherwise admirable Stoics believed that the whole universe is pervaded and animated by divine reason. Consequently, one should cultivate apatheia and calmly accept one's fate. Leibniz even argued that we already live in the best of all possible worlds. The mindset of existential satisficing can also be found in Epicureanism and arguably in Buddhism. Lastly, religions like Christianity or Islam are generally opposed to transhumanism, partly because it amounts to “playing God”. This is understandable from their point of view: why bother fundamentally transforming the human condition if everything will be perfect in heaven anyway?

One has to grant ancient philosophers that they couldn't even imagine that one day humanity would acquire the technological means to fundamentally alter the human condition. Thus it is no wonder that Epicurus argued that death is not to be feared or that the Stoics believed that disease or poverty are not really bad: It is all too human to invent rationalizations for the desirability of actually undesirable, but (seemingly) inevitable things – be it death or the human condition itself.

But many contemporary intellectuals can't be given the benefit of the doubt. They argue explicitly against trying to change the human condition. To name a few: Bernard Williams believed that death gives life meaning. Francis Fukuyama called transhumanism the world's most dangerous idea. And even Richard Dawkins thinks that the fear of death is "whining" and that the desire for immortality is "presumptuous"11:

Be thankful that you have a life, and forsake your vain and presumptuous desire for a second one.

With all that said, "run-of-the-mill" transhumanism arguably still doesn't go far enough. There are at least two problems I can see: 1) Without a benevolent superintelligent singleton, "Moloch" (to use Scott Alexander's excellent wording) will never be defeated. 2) We are still uncertain about ontology, decision theory, epistemology and our own terminal values. Consequently, we need some kind of process which can help us to understand those things, or we will probably fail to rearrange reality until it conforms with our idealized preferences.

Therefore, it could be argued that the ultimate goal is the creation of a benevolent superintelligence or Friendly AI (FAI) whose values are aligned with ours. There are of course numerous objections to the whole superintelligence strategy in general and to FAI in particular, but I won’t go into detail here because this essay is already too long.

Nevertheless – however unlikely – it seems possible that with the help of a benevolent superintelligence we could abolish all gratuitous suffering and achieve an optimal mode of existence. We could become posthuman beings with god-like intellects, our ecstasy outshining the surrounding stars, and transforming the universe until one happy day all wounds are healed, all despair dispelled and every (idealized) desire fulfilled. To many this seems like sentimental and wishful eschatological speculation but for me it amounts to ultimate existential maximizing12, 13.

Conclusion

The previous paragraphs shouldn’t fool one into believing that maximizing has no serious disadvantages. The desire to aim higher, become stronger and always behave in an optimally goal-tracking way can easily result in psychological overload and subsequent surrender. Furthermore, adopting the mindset of a maximizer seems to increase the tendency to engage in upward social comparisons and counterfactual thinking, which, as research has shown, contribute to depression.

Moreover, there is much to be learned from Stoicism and satisficing in general: Life isn't always perfect and there are things one cannot change; one should accept one's shortcomings – if they are indeed unalterable – and make the best of one's circumstances. In conclusion, it is better to be a happy satisficer whose moderate productivity is sustainable than a stressed maximizer who burns out after one year. See also these two essays which make similar points.

All that being said, I still favor maximizing over satisficing. If our ancestors had all been satisficers we would still be picking lice off each other’s backs14. And only by means of existential maximizing can we hope to abolish the aforementioned existential evils and all needless suffering – even if the chances seem slim.

[Originally posted a longer, more personal version of this essay on my own blog]

Footnotes

[1] Obviously this is not a categorical classification, but a dimensional one.

[2] To put it more formally: The utility function of the ultimate satisficer would assign the same (positive) number to each possible world, i.e. the ultimate satisficer would be satisfied with every possible world. The fewer possible worlds you are satisfied with (i.e. the higher your threshold of acceptability), the fewer possible worlds there are between which you are indifferent, and the less of a satisficer and the more of a maximizer you are. Also note: Satisficing is not irrational in itself. Furthermore, I’m talking about the somewhat messy psychological characteristics and (revealed) preferences of human satisficers/maximizers. Read these posts if you want to know more about satisficing vs. maximizing with regard to AIs.
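The formal point in this footnote can be illustrated with a short, purely hypothetical Python sketch (the worlds and their utilities are made up):

```python
# Illustrative sketch of footnote 2: an agent's "indifference set" is the set
# of possible worlds that meet or exceed its acceptability threshold. The
# higher the threshold, the smaller this set – and the more of a maximizer
# (and less of a satisficer) the agent is. Worlds and utilities are made up.

def indifference_set(utilities, threshold):
    """Worlds the agent finds acceptable (and is hence indifferent between)."""
    return {world for world, u in utilities.items() if u >= threshold}

worlds = {"w1": 1, "w2": 3, "w3": 7, "w4": 9}

# The ultimate satisficer is satisfied with every possible world:
print(indifference_set(worlds, threshold=0))  # all four worlds

# Raising the threshold shrinks the set, approaching a maximizer who accepts
# only the best possible world:
print(indifference_set(worlds, threshold=9))  # {'w4'}
```

This also shows why the distinction is dimensional rather than categorical (footnote 1): the threshold can sit anywhere between "anything goes" and "only the optimum".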

[3] Rational maximizers take the value of information and opportunity costs into account.

[4] Instead of "maximizer" I could also have used the term "optimizer".

[5] E.g. in the "Fat Man" version of the famous trolley dilemma, something like 90% of subjects don't push a fat man onto the track in order to save five other people. Also, utilitarians like Peter Singer don't exactly get rave reviews from most folks – though there is some conflicting research (Johansson-Stenman, 2012). Furthermore, the deontology vs. utilitarianism distinction itself is limited. See e.g. "The Righteous Mind" by Jonathan Haidt.

[6] Of course, most people are not strict deontologists. They are also intuitive virtue ethicists and care about the consequences of their actions.

[7] Admittedly, one could argue that certain versions of deontology are about maximally not violating certain rules and thus could be viewed as ethical maximizing. However, in the space of all possible moral actions there exist many actions between which a deontologist is indifferent, namely all those actions that exceed the threshold of moral acceptability (i.e. those actions that are not violating any deontological rule). To illustrate this with an example: Visiting a friend and comforting him for 4 hours or using the same time to work and subsequently donating the earned money to a charity are both morally equivalent from the perspective of (many) deontological theories – as long as one doesn’t violate any deontological rule in the process. We can see that this parallels satisficing.

Contrast this with (classical) utilitarianism: In the space of all possible moral actions there is only one optimal moral action for a utilitarian, and all other actions are morally worse. An ideal utilitarian searches for and implements the optimal moral action (or tries to approximate it, because in real life one is basically never able to identify, let alone carry out, the optimal moral action). This amounts to maximizing. Interestingly, this inherent demandingness has often been put forward as a critique of utilitarianism (and other sorts of consequentialism), and satisficing consequentialism has been proposed as a solution (Slote, 1984). Further evidence for the claim that maximizing is generally viewed with suspicion.

[8] The obligatory word of caution here: following utilitarianism to the letter can be self-defeating if done in a naive way.

[9] Nick Bostrom (2014) expresses this point somewhat harshly:

A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't.

As a general point: Too many people end up as money-, academia-, career- or status-maximizers although those things often don’t reflect their (idealized) preferences.

[10] Of course there are lots of utopian movements like socialism, communism or the Zeitgeist movement. But all those movements make the fundamental mistake of ignoring, or at least heavily underestimating, the importance of human nature. Creating utopia merely through social means is impossible because most of us are, by our very nature, too selfish, status-obsessed and hypocritical, and cultural indoctrination can hardly change this. To deny this is to simply misunderstand the process of natural selection and evolutionary psychology. Secondly, even if a socialist utopia were to come true, there would still exist unrequited love, disease, depression and of course death. To abolish those things one has to radically transform the human condition itself.

[11] Here is another quote:

We are going to die, and that makes us the lucky ones. Most people are never going to die because they are never going to be born. […] We privileged few, who won the lottery of birth against all odds, how dare we whine at our inevitable return to that prior state from which the vast majority have never stirred?

― Richard Dawkins in "Unweaving the Rainbow"

[12] It’s probably no coincidence that Yudkowsky named his blog "Optimize Literally Everything" which adequately encapsulates the sentiment I tried to express here.

[13] Those interested in or skeptical of the prospect of superintelligent AI, I refer to "Superintelligence: Paths, Dangers and Strategies" by Nick Bostrom.

[14] I stole this line from Bostrom’s “In Defense of Posthuman Dignity”.

References

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion. Random House LLC.

Johansson-Stenman, O. (2012). Are most people consequentialists? Economics Letters, 115 (2), 225-228.

Kahneman, D., & Knetsch, J. L. (1992). Valuing public goods: the purchase of moral satisfaction. Journal of environmental economics and management, 22(1), 57-70.

McGonigal, K. (2011). The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It. Penguin.

Mehl, M. R., Vazire, S., Holleran, S. E., & Clark, C. S. (2010). Eavesdropping on Happiness Well-Being Is Related to Having Less Small Talk and More Substantive Conversations. Psychological Science, 21(4), 539-541.

Sachdeva, S., Iliev, R., & Medin, D. L. (2009). Sinning saints and saintly sinners the paradox of moral self-regulation. Psychological science, 20(4), 523-528.

Schnell, T. (2010). Existential indifference: Another quality of meaning in life. Journal of Humanistic Psychology, 50(3), 351-373.

Schwartz, B. (2000). Self determination: The tyranny of freedom. American Psychologist, 55, 79–88.

Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., & Lehman, D. R. (2002). Maximizing versus satisficing: happiness is a matter of choice. Journal of personality and social psychology, 83(5), 1178.

Slote, M. (1984). “Satisficing Consequentialism”. Proceedings of the Aristotelian Society, 58: 139–63.

Meetup : First LW Meetup in Warsaw

3 wallowinmaya 22 March 2014 04:41PM

Discussion article for the meetup : First LW Meetup in Warsaw

WHEN: 30 March 2014 03:00:00PM (+0100)

WHERE: Cafe Kulturalna, Plac Defilad 1, 00-901 Warszawa

As far as I can tell, there has never been a LessWrong meetup in Warsaw, although Warsaw has almost 2 million inhabitants.

I'm currently visiting my girlfriend in Warsaw and we would like to meet folks who are also interested in LessWrong and related topics.

Regarding the content and structure of the meetup: I would suggest that at first everyone proposes some discussion topics he or she is interested in (e.g. epistemic rationality, effective altruism, far future/FAI, practical life tips, etc.) and then we choose the most popular ones. Simple socializing and getting to know each other is of course also great!

Please leave a comment if you're thinking about attending or are interested in a LW meetup in Warsaw, even if you can't attend this one.

And remember, (almost) everyone is welcome, especially newbies!

(In case you can't find the place or something, here's our number: 0048 693 603 770)

ETA: The meetup will take place at 15:00, local time.

Link to Facebook-Event


Literature-review on cognitive effects of modafinil (my bachelor thesis)

33 wallowinmaya 08 January 2014 07:23PM

Modafinil is probably the most popular cognitive enhancer. LessWrong seems pretty interested in it. The incredible Gwern wrote an excellent and extensive article about it.

Of all the stimulants I have tried, modafinil is my favorite. There are more powerful substances, e.g. amphetamine or methylphenidate, but modafinil has far fewer negative effects on physical as well as mental health and is far less addictive. All things considered, the cost-benefit ratio of modafinil is unparalleled.

For those reasons I decided to publish my bachelor thesis on the cognitive effects of modafinil in healthy, non-sleep deprived individuals on LessWrong. Forgive me its shortcomings. 

Here are some relevant quotes:

Introduction:

...the main research question of this thesis is whether and to what extent modafinil has positive effects on cognitive performance (operationalized as performance improvements in a variety of cognitive tests) in healthy, non-sleep-deprived individuals.... The abuse liability and adverse effects of modafinil are also discussed. A literature review of all available randomized, placebo-controlled, double-blind studies examining those effects was therefore conducted.

Overview of effects in healthy individuals:

...Altogether 19 randomized, double-blind, placebo-controlled studies on the effects of modafinil on cognitive functioning in healthy, non-sleep-deprived individuals were reviewed. One of them (Randall et al., 2005b) was a retrospective analysis of 2 other studies (Randall et al., 2002 and 2005a), so 18 independent studies remain.

Out of the 19 studies, 14 found that modafinil improved performance in healthy volunteers in at least one of the administered cognitive tests.
Modafinil significantly improved performance in 26 out of 102 cognitive tests, but significantly decreased performance in 3 cognitive tests.

...Several studies suggest that modafinil is only effective in subjects with lower IQ or lower baseline performance (Randall et al., 2005b; Müller et al., 2004; Finke et al., 2010). Significant differences between modafinil and placebo also often only emerge in the most difficult conditions of cognitive tests (Müller et al., 2004; Müller et al., 2012; Winder-Rhodes et al., 2010; Marchant et al., 2009).

Adverse effects:

...A study by Wong et al. (1999) of 32 healthy, male volunteers showed that the most frequently observed adverse effect among modafinil subjects was headache (34%), followed by insomnia, palpitations and anxiety (each occurring in 21% of participants). Adverse events were clearly dose-dependent: 50%, 83%, 100% and 100% of the participants in the 200 mg, 400 mg, 600 mg, and 800 mg dose groups respectively experienced at least one adverse event. According to the authors of this study, the maximal safe dosage of modafinil is 600 mg.

Abuse potential:

...Using a randomized, double-blind, placebo-controlled design Rush et al. (2002) examined subjective and behavioral effects of cocaine (100, 200 or 300 mg), modafinil (200, 400 or 600 mg) and placebo in cocaine users….Of note, while subjects taking cocaine were willing to pay $3 for 100 mg, $6 for 200 mg and $10 for 300 mg cocaine, participants on modafinil were willing to pay $2, regardless of the dose. These results suggest that modafinil has a low abuse liability, but the rather small sample size (n=9) limits the validity of this study.

The study by Marchant et al. (2009), which is discussed in more detail in part 2.4.12, found that subjects receiving modafinil were significantly less (p < 0.05) content than subjects receiving placebo, which indicates a low abuse potential of modafinil. In contrast, in a study by Müller et al. (2012), which is also discussed in more detail above, modafinil significantly increased (p < 0.05) ratings of "task enjoyment", which may suggest a moderate potential for abuse.

...Overall, these results indicate that although modafinil promotes wakefulness, its effects are distinct from those of more typical stimulants like amphetamine and methylphenidate and more similar to the effects of caffeine which suggests a relatively low abuse liability.

Conclusion:

In healthy individuals modafinil seems to improve cognitive performance, especially on the Stroop Task, stop-signal and serial reaction time tasks, and tests of visual memory, working memory, spatial planning ability and sustained attention. However, these cognitive-enhancing effects emerged in only a subset of the reviewed studies. Additionally, significant performance increases may be limited to subjects with low baseline performance. Modafinil also appears to have detrimental effects on mental flexibility.

...The abuse liability of modafinil seems to be small, particularly in comparison with other stimulants such as amphetamine and methylphenidate. Headache and insomnia are the most common adverse effects of modafinil.

...Because several studies suggest that modafinil may only provide substantial beneficial effects to individuals with low baseline performance, ultimately the big question remains whether modafinil can really improve the cognitive performance of already high-functioning, healthy individuals. Only in the latter case can modafinil justifiably be called a genuine cognitive enhancer.

You can download the whole thing below. (Just skip the sections on substance-dependent individuals and patients with dementia. My professor wanted them.)

Effects of modafinil on cognitive performance in healthy individuals, substance-dependent individuals and patients with dementia

Meetup : First Meetup in Cologne (Köln)

2 wallowinmaya 14 October 2013 08:13PM

Discussion article for the meetup : First Meetup in Cologne (Köln)

WHEN: 10 November 2013 03:00:00PM (+0100)

WHERE: Starbucks Coffee, An der Hahnepooz 8 50674 Cologne‎

ETA: The meetup is going to take place on November 10th at 15:00. I'll be there. In case you don't find the place or something, here's my number: 0157 39606835

As far as I can tell, there never has been a Lesswrong meetup in Cologne. This is a shame, considering that Cologne has over 1 million inhabitants.

I recently moved here from Munich (where I already attended 3 Lesswrong Meetups) to study and would like to meet folks who are also interested in Lesswrong and related topics. Regarding the content and structure of the meetup: I would suggest that at first each of us proposes some discussion topics he or she is interested in (e.g. epistemic rationality, effective altruism, far future/FAI, practical life tips, etc.) and then we choose the most popular ones. And simple socializing and getting to know each other is also great, as far as I'm concerned.

Here is a link to a doodle survey (http://doodle.com/3ms7afrniqxb5i7e), in which you can put your favorite date. I prefer Sundays, but if nobody can attend on Sundays, we can probably change the date.

Please, please leave a comment if you're interested in a LW meetup in Cologne, even if you can't attend one in the next weeks/months.

And remember, (almost) everyone is welcome, especially newbies!


[Link] Should Psychological Neuroscience Research Be Funded?

2 wallowinmaya 18 April 2013 12:13PM

In this post, Jesse Marczyk argues that psychological neuroscience research often doesn't add much value per dollar spent and therefore is not worth the cost.

 

In my last post, when discussing some research by Singer et al (2006), I mentioned as an aside that their use of fMRI data didn’t seem to add a whole lot to their experiment. Yes, they found that brain regions associated with empathy appear to be less active in men watching a confederate who behaved unfairly towards them receive pain; they also found that areas associated with reward seemed slightly more active. Neat; but what did that add beyond what a pencil and paper or behavioral measure might? That is, let’s say the authors (all six of them) had subjects interact with a confederate who behaved unfairly towards them. This confederate then received a healthy dose of pain. Afterwards, the subjects were asked two questions: (1) how bad do you feel for the confederate and (2) how happy are you about what happened to them? This sounds fairly simple, likely because, well, it is fairly simple. It’s also incredibly cheap, and pretty much a replication of what the authors did. The only difference is the lack of a brain scan. The question becomes, without the fMRI, how much worse is this study?

There are two crucial questions to bear in mind here. The first is a matter of new information: how much new and useful information has the neuroscience data given us? The second is a matter of bang-for-your-buck: how much did that neuroscience information cost? Putting the two questions together, we have the following: how much additional information (in whatever unit information comes in) did we get from this study per dollar spent?

...I’ll begin my answer to it with a thought experiment: let’s say you ran the same initial study as Singer et al did, and in addition to your short questionnaire you put people into an fMRI machine and got brain scans. In the first imaginary world, we obtained results identical to what Singer et al reported: areas thought to be related to empathy decrease in activation, areas thought to be related to pleasure increase in activation. The interpretation of these results seems fairly straightforward – that is, until one considers the second imaginary world. In this second world, we see the results of the brain scan show the reverse pattern: specifically, areas thought to be related to empathy show an increase in activation and areas associated with reward show a decrease. The trick to this thought experiment, however, is that the survey responses remain the same; the only differences between the two worlds are the brain pictures.

This makes interpreting our results rather difficult. In the second world, do we conclude that the survey responses are, in some sense, wrong? The subjects “really” feel bad about the confederates being hurt, but they are unaware of it? This strikes me as a bit off, as far as conclusions go. Another route might be to suggest that our knowledge of what areas of the brain are associated with empathy and pleasure is somehow off: maybe increased activation means less empathy, or maybe empathy is processed elsewhere in the brain, or some other cognitive process is interfering. Hell; maybe it’s possible that the technology employed by fMRIs just isn’t sensitive to what you’re trying to look at. Though the brain scan might have highlighted our ignorance as to how the brain is working in that case, it didn’t help us to resolve it. Further, while the second interpretative route seems like a more reasonable one than the first, it also brings to our attention a perhaps under-appreciated fact: we would be privileging the results of the survey measure above the results of the brain scan.

...While such a thought experiment does not definitely answer the question of how much value is added by neuroscience information in psychology, it provides a tentative starting position: not the majority. The bulk of the valuable information in the study came from the survey, and all the subsequent brain information was interpreted in light of it.

Meetup : First meetup in Innsbruck

2 wallowinmaya 21 November 2012 10:31PM

Discussion article for the meetup : First meetup in Innsbruck

WHEN: 02 December 2012 03:00:17PM (+0100)

WHERE: Innsbruck, Austria

Let's organize the first Lesswrong meetup of Innsbruck! To be part of this historical event just click on this doodle-survey and vote for your favorite time.

You can also write a comment and suggest a place and discussion-topics if you want to.

Newbies and lurkers are welcome, obviously!


Meetup : Munich Meetup, October 28th

4 wallowinmaya 25 September 2012 08:29AM

Discussion article for the meetup : Munich Meetup, October 28th

WHEN: 28 October 2012 03:00:00PM (+0200)

WHERE: Munich Central Station, Coffee Fellows cafe, inside the central station, *second* floor

The last meetup took place more than a year ago, so it's time for another one. Some of the topics discussed last time: Existential risks, anthropics, AI, metaethics, self-improvement and probably more that I can't remember. Of course there is much more to talk about and maybe we'll try some of those fancy rationality-games. (If the cafe sucks, we could easily go elsewhere. I've merely chosen the place, because it's relatively nice, near the central station and easy to find.) I'll be there with a LessWrong sign. Newbies and lurkers are very welcome!

 

ETA: I created a google group for the Munich LW meetup.

Send me your email address at myusername@gmx.de and I'll add you.


[LINK] Antidepressants: Bad Drugs... Or Bad Patients?

14 wallowinmaya 04 January 2012 09:31PM

Illuminating post on Neuroskeptic:

Some Quotes:

Why is it that modern trials of antidepressant drugs increasingly show no benefit of the drugs over placebo?.....

They suggest that maybe it's the patients' fault: Participation that is induced by cash payments may lead subjects to exaggerate their symptoms [i.e. in order to get included into the trial]... Another contributing factor to high placebo response rates may be the extent to which the volunteers in antidepressant trials are really generalizable to patients in clinical practice.

Since the initial antidepressant trials in the 1960s, participants have gone from being patients who were recruited primarily from inpatient psychiatric populations to outpatient volunteers who are often recruited by advertisements. At times, these symptomatic volunteers have participated in other trials. When we contact potential participants to schedule screening, they often ask to be reminded which trial we are screening for or mistake our research trial for a different protocol in which they recently participated.

A few years ago I was running a study recruiting people who'd recovered from psychiatric illness. The main source of volunteers was online adverts..... We recruited about 20 people. No fewer than 3 turned out to have enrolled in other studies and lied about it. After I realized this I Googled the offender's names and two of them turned up in the court pages of the local newspaper pleading guilty to various petty crimes.

In my view, the authors miss out on the real problem with recruiting depressed people through adverts:  depressed people don't tend to respond to adverts, because depressed people don't do anything. That's why they call it depression.

So while you wouldn't go looking for aquaphobic people in a swimming pool, I'm not sure we should be looking for depressed people through adverts.

Could similar mechanisms hold true for other drugs?
