
[Link] Strong men are socialist - how to use a study's own data to disprove it

6 Jacobian 31 May 2017 04:18AM

[Link] Get full-text of research papers as you browse

4 morganism 23 March 2017 07:56PM

[Link] Funding the Reproducibility Crises as effective giving

9 morganism 24 January 2017 11:05PM

[Link] Are you being p-hacked? Time to hack back.

6 Jacobian 20 December 2016 04:57PM

[Link] AI-ON is an open community dedicated to advancing Artificial Intelligence

3 morganism 18 October 2016 10:17PM

[Link] Viruses and DRACOs in the Valley of Death in medical research.

-1 morganism 08 October 2016 08:36PM

Paid research assistant position focusing on artificial intelligence and existential risk

7 crmflynn 02 May 2016 06:27PM

Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence is a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If you are interested in the position, please email allan.dafoe@yale.edu with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.

The Science of Effective Fundraising: Four Common Mistakes to Avoid

8 Gleb_Tsipursky 11 April 2016 03:19PM

This article will be of interest primarily for Effective Altruists. It's also cross-posted to the EA Forum.

 

 

Summary/TL;DR: Charities that have the biggest social impact often get significantly less financial support than rivals that tell better stories but have a smaller social impact. Drawing on academic research across different fields, this article highlights four common mistakes that fundraisers for effective charities should avoid and suggests potential solutions to these mistakes. 1) Focus on individual victims as well as statistics; 2) Present problems that are solvable by individual donors; 3) Avoid relying excessively on matching donations and focus on learning about your donors; 4) Empower your donors and help them feel good.

 

 

Co-written by Gleb Tsipursky and Peter Slattery


 

Acknowledgments: Thanks to Stefan Schubert, Scott Weathers, Peter Hurford, David Moss, Alfredo Parra, Owen Shen, Gina Stuessy, Sheannal Anthony Obeyesekere and other readers who prefer to remain anonymous for providing feedback on this post. The authors take full responsibility for all opinions expressed here and any mistakes or oversights. Versions of this piece will be published on The Life You Can Save blog and the Intentional Insights blog.

 

Intro

Charities that use their funds effectively to make a social impact frequently struggle to fundraise. Indeed, while these charities receive plaudits from those committed to measuring and comparing the impact of donations across sectors, many effective charities have not successfully raised large sums outside of donors who are highly focused on impact.

 

In many cases, this situation results from the beliefs of key stakeholders at effective charities. Some think that persuasive fundraising tactics are “not for them” and instead assume that presenting hard data and statistics will be optimal, because they believe that their nonprofit’s effectiveness can speak for itself.

The belief that a nonprofit’s effectiveness can speak for itself can be very harmful to fundraising efforts, as it overlooks the fact that donors do not always optimise their giving for social impact. Instead, studies suggest that donors’ choices are influenced by many other considerations, such as a desire for a warm glow, social prestige, or the pull of an engrossing story. Indeed, charities that have the biggest social impact often get significantly less financial support than rivals that tell better stories but have a smaller social impact. For example, while one fundraiser collected over $700,000 to rescue a single young girl who had fallen into a well, most charities struggle to raise anything proportionate for causes that could save many more lives or lift thousands out of poverty.

 

Given these issues, the aim of this article is to use the available science on fundraising and social impact to address some common misconceptions that charities may have about fundraising and, hopefully, make it easier for effective charities to also become effective at fundraising. To do this, it draws on academic research across different fields to highlight four common mistakes that those who raise funds for effective charities should avoid, and to suggest potential solutions to these mistakes.

 

Don’t forget individual victims

 

Many fundraisers focus on using statistics and facts to convey the severity of the social issues they tackle. However, while facts and statistics are often an effective way to convince potential donors, it is important to recognise that different people are persuaded by different things. While some individuals are best persuaded to do good deeds through statistics and facts, others are most influenced by the closeness and vividness of the suffering. Indeed, it has been found that people often prefer to help a single identifiable victim rather than many faceless victims; this is the so-called identifiable victim effect.

 

One way in which charities can cover all bases is to complement their statistics by telling stories about one or more of the most compelling victims. Stories have been shown to be excellent ways of tapping emotions, and stories told using video and audio are likely to be particularly good at creating vivid depictions of victims that compel others to want to help them.

 

Don’t overemphasise the problem

 

Focusing on the size of the problem has been shown to be ineffective for at least two reasons. First, most people prefer to give to causes where they can save the greatest proportion of those affected. This means that rather than save 100 out of 1,000 victims of malaria, the majority of people would rather use the same or even more resources to save all five out of five people stranded on a boat, or one girl stranded in a well, even if saving 100 people is clearly the more rational choice. This reluctance to help where one's impact does not feel significant is often called the drop-in-the-bucket effect.

 

Second, humans have a tendency to neglect the scope of the problem when dealing with social issues. This is called scope insensitivity: people do not scale up their efforts in proportion to a problem’s true size. For example, a donor willing to give $100 to help one person might only be willing to give $200 to help 100 people, instead of the proportional amount of $10,000.

 

Of course charities often need to deal with big problems. In such cases one solution is to break these big problems into smaller pieces (e.g., individuals, families or villages) and present situations on a scale that the donor can relate to and realistically address through their donation.

 

Don’t assume that matching donations is always a good way to spend funds

 

Charitable fundraisers frequently put a lot of emphasis on arranging for big donors to offer to match any contributions from smaller donors. Intuitively, donation matching seems to be a good incentive for givers as they will generate twice (sometimes three times) the social impact for donating the same amount. However, research provides insufficient evidence to support or discourage donation matching: after reviewing the evidence, Ben Kuhn argues that its positive effects on donations are relatively small (and highly uncertain), and that sometimes the effects can be negative.

 

Given the lack of strong supporting research, charities should make sure to check that donation matching works for them and should also consider other ways to use their funding from large donors. One option is to use some of this money to cover experiments and other forms of prospect research to better understand their donors’ reasons for giving. Another is to pay various non-program costs so that a charity may claim that more of the smaller donors’ donations will go to program costs, or to use big donations as seed money for a fundraising campaign.

 

Don't forget to empower donors and help them feel good

 

Charities frequently focus on showing tragic situations to motivate donors to help. However, charities can sometimes go too far in focusing on the negatives: too much negative communication can overwhelm and upset potential donors, which can deter them from giving. Additionally, while people often help out of sadness for others, they also give for the warm glow and feeling of accomplishment that they expect to get from helping.

 

Overall, charities need to remember that most donors want to feel good for doing good and ensure that they achieve this. One reason why the ALS Ice Bucket Challenge was such an incredibly effective approach to fundraising was that it gave donors the opportunity to have a good time, while also doing good. Even when it isn’t possible to think of a clever new way to make donors feel good while donating, it is possible to make donors look good by publicly thanking and praising them for their donations. Likewise it is possible to make them feel important and satisfied by explaining how their donations have been key to resolving tragic situations and helping address suffering.

 

Conclusion

 

Remember four key strategies suggested by the research:

 

1) Focus on individual victims as well as statistics

 

2) Present problems that are solvable by individual donors

 

3) Avoid relying excessively on matching donations and focus on learning about your donors

 

4) Empower your donors and help them feel good.

 

By following these strategies and avoiding the mistakes outlined above, you will not only provide high-impact services, but will also be effective at raising funds.


Cultivate the desire to X

3 Elo 07 March 2016 03:40AM

Recently I have found myself encouraging people to cultivate the desire to X.

Examples that you might want to cultivate interest in include:

  • Diet
  • Organise oneself
  • Plan for the future
  • Be a goal-oriented thinker
  • Build the tools
  • Anything else in the list of common human goals
  • Get healthy sleep
  • Be less wrong
  • Trust people more
  • Trust people less
  • Exercise
  • Take an interest in a topic (cars, fashion, psychology, etc.)

Why do we need to cultivate?

We don't.  But sometimes we can't just "do".  There are plenty of reasonable reasons for not being able to just "do" the thing:

  • Some things are scary
  • Some things need planning
  • Some things need research
  • Some things are hard
  • Some things are a leap of faith
  • Some things can be frustrating to accept
  • Some things seem stupid (well, if exercising is so great, why don't I automatically want to do it?)
  • Other excuses exist.

On some level you have decided you want to do X; on some other level you have not yet committed to doing it.  Easy tasks can get done quickly.  More complicated tasks are not so easy to do right away.

Well, if it were easy enough to just successfully do the thing, you could go ahead and do the thing (TTYL, flying to the moon tomorrow - yea, nope).  When you can't, it's usually because of a conflict like one of these:

  1. Your system 1 wants to do the thing and your system 2 is not sure how.
  2. Your system 2 wants to do the thing and your system 1 is not sure it wants to do the thing.
  • The healthy part of you wants to diet; the social part of you is worried about the impact on your social life.

(now borrowing from Common human goals)

  • Your desire to live forever wants you to take a medication every morning to increase your longevity; your desire for freedom does not want to be tied down to a bottle of pills every morning.
  • Your desire for a legacy wants you to stay late at work; your desire for quality family time wants you to leave the office early.

The solution:

The solution is to cultivate the interest, or the desire, to do the thing. From the initial point of interest or desire you can move forward: do some research to either convince your system 2 of the benefits, or work out how to do the thing to convince your system 1 that it is possible/viable/easy enough.  Or maybe after some research the thing seems impossible.  I offer cultivating the desire as a step along the way to working it out.

Short post for today: cultivate the desire to do X.


Meta: time to write 1.5 hours.

My table of contents contains my other writing

feedback welcome

The case for value learning

4 leplen 27 January 2016 08:57PM

This post is mainly fumbling around trying to define a reasonable research direction for contributing to FAI research. I've found that laying out what success looks like in the greatest possible detail is a personal motivational necessity. Criticism is strongly encouraged. 

The power and intelligence of machines has been gradually and consistently increasing over time, and it seems likely that at some point machine intelligence will surpass the power and intelligence of humans. Before that point occurs, it is important that humanity manages to direct these powerful optimizers towards a target that humans find desirable.

This is difficult because humans as a general rule have a fairly fuzzy conception of their own values, and it seems unlikely that the millennia of argument surrounding what precisely constitutes eudaimonia are going to be satisfactorily wrapped up before the machines get smart. The most obvious solution is to try to leverage some of the novel intelligence of the machines to help resolve the issue before it is too late.

Lots of people regard using a machine to help you understand human values as a chicken and egg problem. They think that a machine capable of helping us understand what humans value must also necessarily be smart enough to do AI programming, manipulate humans, and generally take over the world. I am not sure that I fully understand why people believe this. 

Part of it seems to be inherent in the idea of AGI, or an artificial general intelligence. There seems to be the belief that once an AI crosses a certain threshold of smarts, it will be capable of understanding literally everything. I have even heard people describe certain problems as "AI-complete", making an explicit comparison to ideas like Turing-completeness. If a Turing machine is a universal computer, why wouldn't there also be a universal intelligence?

To address the question of universality, we need to make a distinction between intelligence and problem solving ability. Problem solving ability is typically described as a function of both intelligence and resources, and just throwing resources at a problem seems to be capable of compensating for a lot of cleverness. But if problem-solving ability is tied to resources, then intelligent agents are in some respects very different from Turing machines, since Turing machines are all explicitly operating with an infinite amount of tape. Many of the existential risk scenarios revolve around the idea of the intelligence explosion, when an AI starts to do things that increase the intelligence of the AI so quickly that these resource restrictions become irrelevant. This is conceptually clean, in the same way that Turing machines are, but navigating these hard take-off scenarios well implies getting things absolutely right the first time, which seems like a less than ideal project requirement.

If an AI that knows a lot about AI results in an intelligence explosion, but we also want an AI that's smart enough to understand human values, is it possible to create an AI that can understand human values, but not AI programming? In principle it seems like this should be possible.  Resources useful for understanding human values don't necessarily translate into resources useful for understanding AI programming. The history of AI development is full of tasks that were supposed to be solvable only by a machine smart enough to possess general intelligence, where significant progress was made in understanding and pre-digesting the task, allowing problems in the domain to be solved by much less intelligent AIs. 

If this is possible, then the best route forward is focusing on value learning. The path to victory is working on building limited AI systems that are capable of learning and understanding human values, and then disseminating that information. This effectively softens the AI take-off curve in the most useful possible way, and allows us to practice building AI with human values before handing them too much power to control. Even if AI research is comparatively easy compared to the complexity of human values, a specialist AI might find thinking about human values easier than reprogramming itself, in the same way that humans find complicated visual/verbal tasks much easier than much simpler tasks like arithmetic. The human intelligence learning algorithm is trained on visual object recognition and verbal memory tasks, and it uses those tools to perform addition. A similarly specialized AI might be capable of rapidly understanding human values, but find AI programming as difficult as humans find determining whether 1007 is prime. As an additional incentive, value learning has an enormous potential for improving human rationality and the effectiveness of human institutions even without the creation of a superintelligence. A system that helped people better understand the mapping between values and actions would be a potent weapon in the struggle with Moloch.

Building a relatively unintelligent AI and giving it lots of human values resources to help it solve the human values problem seems like a reasonable course of action, if it's possible. There are some difficulties with this approach. One of these difficulties is that after a certain point, no amount of additional resources compensates for a lack of intelligence. A simple reflex agent like a thermostat doesn't learn from data and throwing resources at it won't improve its performance. To some extent you can make up for intelligence with data, but only to some extent. An AI capable of learning human values is going to be capable of learning lots of other things. It's going to need to build models of the world, and it's going to have to have internal feedback mechanisms to correct and refine those models. 

If the plan is to create an AI and primarily feed it data on how to understand human values, and not feed it data on how to do AI programming and self-modify, that plan is complicated by the fact that inasmuch as the AI is capable of self-observation, it has access to sophisticated AI programming. I'm not clear on how much this access really means. My own introspection hasn't allowed me anything like hardware-level access to my brain. While it seems possible to create an AI that can refactor its own code or create successors, it isn't obvious that AIs created for other purposes will have this ability by accident.

This discussion focuses on intelligence amplification as the example path to superintelligence, but other paths do exist. An AI with a sophisticated enough world model, even if somehow prevented from understanding AI, could still potentially increase its own power to threatening levels. Value learning is only the optimal way forward if human values are emergent, i.e. if they can be understood without a molecular-level model of humans and the human environment. If the only way to understand human values is with physics, then human values aren't a meaningful category of knowledge with their own structure, and there is no way to create a machine that is capable of understanding human values but not capable of taking over the world.

In the fairy tale version of this story, a research community focused on value learning manages to use specialized learning software to make the human value program portable, instead of only running on human hardware. Having a large number of humans involved in the process helps us avoid lots of potential pitfalls, especially the research overfitting to the values of the researchers via the typical mind fallacy. Partially automating introspection helps raise the sanity waterline. Humans practice coding the human value program, in whole or in part, into different automated systems. Once we're comfortable that our self-driving cars have a good grasp on the trolley problem, we use that experience to safely pursue higher risk research on recursive systems likely to start an intelligence explosion. FAI gets created and everyone lives happily ever after.

Whether value learning is worth focusing on seems to depend on the likelihood of the following claims. Please share your probability estimates (and explanations) with me because I need data points that originated outside of my own head.

 I can't figure out how to include working polls in a post, but there should be a working version in the comments.
  1. There is regular structure in human values that can be learned without requiring detailed knowledge of physics, anatomy, or AI programming. [poll:probability]
  2. Human values are so fragile that it would require a superintelligence to capture them with anything close to adequate fidelity.[poll:probability]
  3. Humans are capable of pre-digesting parts of the human values problem domain. [poll:probability]
  4. Successful techniques for value discovery of non-humans, (e.g. artificial agents, non-human animals, human institutions) would meaningfully translate into tools for learning human values. [poll:probability]
  5. Value learning isn't adequately being researched by commercial interests who want to use it to sell you things. [poll:probability]
  6. Practice teaching non-superintelligent machines to respect human values will improve our ability to specify a Friendly utility function for any potential superintelligence.[poll:probability]
  7. Something other than AI will cause human extinction sometime in the next 100 years.[poll:probability]
  8. All other things being equal, an additional researcher working on value learning is more valuable than one working on corrigibility, Vingean reflection, or some other portion of the FAI problem. [poll:probability]

How do you choose areas of scientific research?

5 [deleted] 07 November 2015 01:15AM

I've been thinking lately about the optimal way to organize scientific research, both for individuals and for groups. My first idea: research should have a long-term goal. If you don't have a long-term goal, you will end up wasting a lot of time on useless pursuits. For instance, my rough formulation of the goal of economics is that it should be “how do we maximize the productive output of society and distribute this in an equitable manner without preventing the individual from being unproductive if they so choose?”, the goal of political science should be “how do we maximize the government's ability to provide the resources we want while allowing individuals the freedom to pursue their goals without constraining other individuals?”, and the goal of psychology should be “how do we maximize the ability of individuals to make the decisions they would choose if their understanding of the problems they encounter was perfect?” These are rough, as I said, but I think they go further than the way most researchers seem to think about such problems.

 

Political science seems to do the worst in this area in my opinion. Very little research seems to have anything to do with what causes governments to make correct decisions, and when they do research of this type, their evaluation of correct decision making often is based on a very poor metric such as corruption. I think this is a major contributor to why governments are so awful, and yet very few political scientists seem to have well-developed theories grounded in empirical research on ways to significantly improve the government. Yes, they have ideas on how to improve government, but they're frequently not grounded in robust scientific evidence.

 

Another area I've been considering is the search parameters for moving through research topics. An assumption I have is that the overwhelming majority of possible theories are wrong, such that only a minority of areas of research will result in something other than a null outcome. Another assumption is that correct theories are generally clustered: if you get a correct result in one place, there will be a lot more correct results in a related area than for any randomly chosen theory. There seem to be two major methods for searching through the landscape of possibilities. One method is to choose an area where you have strong reason to believe there might be a cluster nearby that fits with your research goals, randomly pick isolated spots within that area until you get to a major breakthrough, and then go through the various permutations of that breakthrough until you have a complete understanding of that particular cluster of knowledge. Another method would be to take out large chunks of research possibilities and just throw the book at them. If you come back with nothing, then you can conclude that the entire section is empty. If you get a hit, you can then isolate the many subcomponents and figure out what exactly is going on. Technically I believe the chunking approach should be faster than the random approach, but only by a slight amount unless the random approach is overly isolated. If the cluster of most important ideas is at 10 to the -10th power, and you isolate variables at 10 to the -100th power, then time will be wasted going back up to the correct level. You have to guess what level of isolation will result in the most important insights.
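
To make this concrete, here is a minimal simulation sketch of the two strategies under the clustering assumption above. Everything in it is illustrative: the size of the hypothesis space, the single cluster of true theories, the chunk size, and the assumption that a pooled "chunk" experiment costs the same as testing one isolated theory are all stand-ins I chose, not anything from the original post.

```python
import random

# Toy model of the search problem described above (my construction, not the
# author's): a large space of candidate theories, almost all false, with the
# true ones clustered together. Two strategies for finding the first hit:
#   "isolated" - test one randomly chosen theory per experiment
#   "chunked"  - test a whole block of theories in one pooled experiment
#                ("throw the book at it"), then isolate within the first
#                block that comes back positive
# Assumption: a pooled experiment costs the same as an isolated one.

N_THEORIES = 10_000      # size of the hypothesis space (made up)
CLUSTER_START = 7_040    # where the cluster of true theories begins (made up)
CLUSTER_SIZE = 20        # how many adjacent theories are true (made up)
CHUNK = 100              # theories covered by one pooled experiment (made up)


def is_true(theory: int) -> bool:
    return CLUSTER_START <= theory < CLUSTER_START + CLUSTER_SIZE


def isolated_search(rng: random.Random) -> int:
    """Number of experiments when testing one random theory at a time."""
    experiments = 0
    while True:
        experiments += 1
        if is_true(rng.randrange(N_THEORIES)):
            return experiments


def chunked_search(rng: random.Random) -> int:
    """Number of experiments when testing random chunks, then isolating."""
    experiments = 0
    chunk_starts = list(range(0, N_THEORIES, CHUNK))
    rng.shuffle(chunk_starts)
    for start in chunk_starts:
        experiments += 1                        # one pooled experiment
        if any(is_true(t) for t in range(start, start + CHUNK)):
            for t in range(start, start + CHUNK):
                experiments += 1                # isolate the responsible theory
                if is_true(t):
                    return experiments
    return experiments                          # unreachable with a non-empty cluster


if __name__ == "__main__":
    rng = random.Random(0)
    trials = 2_000
    iso = sum(isolated_search(rng) for _ in range(trials)) / trials
    chk = sum(chunked_search(rng) for _ in range(trials)) / trials
    print(f"isolated search: ~{iso:.0f} experiments on average")
    print(f"chunked search:  ~{chk:.0f} experiments on average")
```

With these made-up numbers the chunked strategy tends to come out ahead, but the size of the gap depends entirely on how well the chunk size matches the cluster size, which is the "level of isolation" trade-off described above.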

 

One mistake, I think, is to isolate variables and then proceed through the universe of possibilities systematically one at a time. If you get a null result in one place, it's likely that very similar research will also produce a null result. Another mistake I often see is researchers not bothering to isolate after they get a hit. You'll sometimes see thousands of studies on the exact same thing without any application of reductionism, e.g. the finding that people who eat breakfast are generally healthier. Clinical and business researchers seem to make this mistake of forgetting reductionism most frequently.

 

I'm also thinking through what types of research are most critical, but haven't gotten too far in that vein yet. It seems like long-term research (40+ years until major breakthrough) should be centered around the singularity, but what about more immediate research?

New positions and recent hires at the Centre for the Study of Existential Risk (Cambridge, UK)

9 Sean_o_h 13 October 2015 11:11AM

[Cross-posted from EA Forum. Summary: Four new postdoc positions at the Centre for the Study of Existential Risk: Evaluation of extreme technological risk (philosophy, economics); Extreme risk and the culture of science (philosophy of science); Responsible innovation and extreme technological risk (science & technology studies, sociology, policy, governance); and an academic project manager (cutting across the Centre’s research projects, and playing a central role in Centre development). Please help us to spread the word far and wide in the academic community!]

 

An inspiring first recruitment round

The Centre for the Study of Existential Risk (Cambridge, UK) has been making excellent progress in building up our research team. Our previous recruitment round was a great success, and we made three exceptional hires. Dr Shahar Avin joined us in September from Google, with a background in the philosophy of science (Cambridge, UK). He is currently fleshing out several potential research projects, which will be refined and finalised following a research visit to FHI later this month. Dr Yang Liu joined us this month from Columbia University, with a background in mathematical logic and philosophical decision theory. Yang will work on problems in decision theory that relate to long-term AI, and will help us to link the excellent work being done at MIRI with relevant expertise and talent within academia. In February 2016, we will be joined by Dr Bonnie Wintle from the Centre of Excellence for Biosecurity Risk Analysis (CEBRA), who will lead our horizon-scanning work in collaboration with Professor Bill Sutherland’s group at Cambridge; among other things, she has worked on IARPA-funded development of automated horizon-scanning tools, and has been involved in the Good Judgement Project.

We are very grateful for the help of the existential risk and EA communities in spreading the word about these positions, and helping us to secure an exceptionally strong field. Additionally, I have now moved on from FHI to be CSER’s full-time Executive Director, and Huw Price is now 50% funded as CSER’s Academic Director (we share him with Cambridge’s Philosophy Faculty, where he remains Bertrand Russell Chair of Philosophy).

Four new positions:

We’re delighted to announce four new positions at the Centre for the Study of Existential Risk; details below. Unlike the previous round, where we invited project proposals from across our areas of interest, in this case we have several specific positions that we need to fill for our three-year Managing Extreme Technological Risk project, funded by the Templeton World Charity Foundation; details are provided below. As we are building up our academic brand within a traditional university, we expect to predominantly hire from academia, i.e. academic researchers with (or near to the completion of) PhDs. However, we are open to hiring excellent candidates without PhDs but with an equivalent and relevant level of expertise, for example in think tanks, policy settings or industry.

Three of these positions are in the standard academic postdoc mould, working on specific research projects. I’d like to draw attention to the fourth, the academic project manager. For this position, we are looking for someone with the intellectual versatility to engage across our research strands – someone who can coordinate these projects, synthesise and present our research to a range of audiences including funders, collaborators, policymakers and industry contacts. Additionally, this person will play a key role in developing the centre over the next two years, working with our postdocs and professorial advisors to secure funding, and contributing to our research, media, and policy strategy among other things. I’ve been interviewed in the past (https://80000hours.org/2013/02/bringing-it-all-together-high-impact-research-management/) about the importance of roles of this nature; right now I see it as our biggest bottleneck, and a position in which an ambitious person could make a huge difference.

We need your help – again!

In some ways, CSER has been the quietest of the existential risk organisations of late – we’ve mainly been establishing research connections, running lectures and seminars, writing research grants and building relations with policymakers (plus some behind-the-scenes involvement with various projects). But we’ve been quite successful in these things, and now face an exciting but daunting level of growth: by next year we aim to have a team of 9-10 postdoctoral researchers here at Cambridge, plus senior professors and other staff. It’s very important we continue our momentum by getting world-class researchers motivated to do work of the highest impact. Reaching out and finding these people is quite a challenge, especially given our still-small team. So the help of the existential risk and EA communities in spreading the word – on your Facebook feeds, on relevant mailing lists at your universities, passing them on to talented people you know – will make a huge difference to us.

Thank you so much!

Seán Ó hÉigeartaigh (Executive Director, CSER)

 

“The Centre for the Study of Existential Risk is delighted to announce four new postdoctoral positions for the subprojects below, to begin in January 2016 or as soon as possible afterwards. The research associates will join a growing team of researchers developing a general methodology for the management of extreme technological risk.

Evaluation of extreme technological risk will examine issues such as:

The use and limitations of approaches such as cost-benefit analysis when evaluating extreme technological risk; the importance of mitigating extreme technological risk compared to other global priorities; issues in population ethics as they relate to future generations; challenges associated with evaluating small probabilities of large payoffs; challenges associated with moral and evaluative uncertainty as they relate to the long-term future of humanity. Relevant disciplines include philosophy and economics, although suitable candidates outside these fields are welcomed. More: Evaluation of extreme technological risk

Extreme risk and the culture of science will explore the hypothesis that the culture of science is in some ways ill-adapted to successful long-term management of extreme technological risk, and investigate the option of ‘tweaking’ scientific practice, so as to improve its suitability for this special task. It will examine topics including inductive risk, use and limitations of the precautionary principle, and the case for scientific pluralism and ‘breakout thinking’ where extreme technological risk is concerned. Relevant disciplines include philosophy of science and science and technology studies, although suitable candidates outside these fields are welcomed. More: Extreme risk and the culture of science;

Responsible innovation and extreme technological risk asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. What can be learned from historical examples of technology governance and culture-development? What are the roles of different forms of regulation in the development of transformative technologies with risk potential? Relevant disciplines include science and technology studies, geography, sociology, governance, philosophy of science, plus relevant technological fields (e.g., AI, biotechnology, geoengineering), although suitable candidates outside these fields are welcomed. More: Responsible innovation and extreme technological risk

We are also seeking to appoint an academic project manager, who will play a central role in developing CSER into a world-class research centre. We seek an ambitious candidate with initiative and a broad intellectual range for a postdoctoral role combining academic and administrative responsibilities. The Academic Project Manager will co-ordinate and develop CSER’s projects and the Centre’s overall profile, and build and maintain collaborations with academic centres, industry leaders and policy makers in the UK and worldwide. This is a unique opportunity to play a formative research development role in the establishment of a world-class centre. More: CSER Academic Project Manager

Candidates will normally have a PhD in a relevant field or an equivalent level of experience and accomplishment (for example, in a policy, industry, or think tank setting). Application Deadline: Midday (12:00) on November 12th 2015.”

Forecasting health gaps

-3 [deleted] 05 August 2015 04:14AM

You're an average person.

You don't know what diseases you'll get in the future.

You know people get diseases and certain populations get diseases more than others, enough to say certain things cause diseases.

You're not quite the average person.

You have a strong preference against sickness and a strong belief in your ability to mitigate deleterious circumstances.

You have access to preventative research. You know if you don't work in a coal mine, overtrain when running, and eat healthy, you can stay healthier than those who take those risks.

You know that some disease outcomes are less than predictable, so you want to work towards the availability of treatments that fill gaps in current therapeutics. For instance, you might want a treatment for HIV to be developed in case you become HIV infected, since there is a risk of HIV exposure for almost anyone who has unprotected sex, given that they won't necessarily know their sexual partners' entire serohistory (neologism?).
However, you don't know which diseases you will get. So how do you prioritise?

Perhaps, medical device and pharmaceutical company strategies could be ported to your situation.

Most people, including non-epidemiologist researchers, don't have access to epidemiology data sets.

Most people don't have the patience to read a book on medical market research.

You don't have the funds or connections to employ the world's only specialist in the area of medical market forecasting.

At least he's broken down the field into best practice questions:

  • Where can we find epidemiological information/data?
  • How do we judge/evaluate it?
  • What is the correct methodology for using it?
  • What's useful and what's not useful for pharma market researchers?
  • How do we combine/apply it with MR data?

The only firm, other than Bill's, that appears to specialise in the area fortunately breaks down the techniques in the field for us:

  • Integrated forecasts based on choice modeling or univariate demand research to ensure that the primary marketing research is aligned with the needs of forecast
  • Volumetric new product forecasting to provide the accuracy required for pre-launch planning
  • Combination epidemiology-/sales-volume-based forecast models that provide robust market sizing and trend information
  • Custom patient flow models that represent the dynamics of complex markets not possible with cross-sectional methods
  • Oncology-specific forecast models to accept the data and assumptions unique to cancer therapeutics and accurately forecast patients on therapy
  • Subscription forecasting software for clients who would like to build their own forecasts using user-friendly functionality to save time and prevent calculation and logic errors

The generalisations in the industry (things that are applicable across particular populations, therapeutics or firms) appear to be summarised here:

It's 36 pages long, but well worth it if the area is interesting to you.

So now that you know how this market operates, what are the outputs?

Mega trends are available here

A detailed review is available here

Do they answer the questions, use the techniques proposed, and answer the ultimate question of what gaps exist in the provision of medical therapeutics?

I don't know how to apply the techniques to tell. What do you think?

I know there are other ways to think about these problems.

For instance, if I put myself in a pharmaceutical company's position, I could use strategic tools like Porter's five forces and see whether a particular decision looks compelling.

The 2018 paper suggests that pain killers in developed countries are going to get lots of government investment.

So, does it makes sense to supply that demand?

There are a number of serious threats that might suggest, say, a potential poppy producer shouldn't proceed:

Technological

Disruptive biotechnology, such as genetically modified yeast which can convert glucose to morphine. There have been suggestions that this invention is overhyped.

Political

Licensing poppy producers who currently supply illicit drug producers

 

This said, the whole thing is very underdetermined so I suspect actual organisations are far more procedural in their approaches. What do you think?

 

Speculative rationality skills and appropriable research or anecdote

3 [deleted] 21 July 2015 04:02AM

Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.

CFAR's obscurantism (and subsequent price gouging) capitalises on our [fear of missing out](https://en.wikipedia.org/wiki/Fear_of_missing_out). They brand established techniques like mindfulness as againstness, or reference class forecasting as 'hopping', as if these were of their own genesis, spiting academic tradition and cultivating an insular community. In short, Lesswrongers predictably flout [cooperative principles](https://en.wikipedia.org/wiki/Cooperative_principle).

This thread is to encourage you to speculate on potential rationality techniques, underdetermined by existing research, which might be useful areas for rationalist individuals and organisations to explore. I feel this may be a better use of rationality skills training organisations' time than gatekeeping information.

To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.

Seeking geeks interested in bioinformatics

17 bokov 22 June 2015 01:44PM

I work on a small but feisty research team whose focus is biomedical informatics, i.e. mining biomedical data, especially anonymized hospital records pooled over multiple healthcare networks. My personal interest is ultimately life-extension, and my colleagues are warming up to the idea as well. But the short-term goal that will be useful in many different research areas is building infrastructure to massively accelerate hypothesis testing on, and modelling of, retrospective human data.

 

We have a job posting here (permanent, non-faculty, full-time, benefits):

https://www.uthscsajobs.com/postings/3113

 

If you can program, want to work in an academic research setting, and can relocate to San Antonio, TX, I invite you to apply. Thanks.

Note: The first step of the recruitment process will be a coding challenge, which will include an arithmetical or string-manipulation problem to solve in real-time using a language and developer tools of your choice.

edit: If you tried applying and were unable to access the posting, it's because the link has changed, our HR has an automated process that periodically expires the links for some reason. I have now updated the job post link.

Request for suggestions: ageing and data-mining

14 bokov 24 November 2014 11:38PM

Imagine you had the following at your disposal:

  • A Ph.D. in a biological science, with a fair amount of reading and wet-lab work under your belt on the topic of aging and longevity (but in hindsight, nothing that turned out to leverage any real mechanistic insights into aging).
  • An M.S. in statistics. Sadly, the non-Bayesian kind for the most part, but along the way acquired the meta-skills necessary to read and understand most quantitative papers with life-science applications.
  • Love of programming and data, the ability to learn most new computer languages in a couple of weeks, and at least 8 years spent hacking R code.
  • Research access to large amounts of anonymized patient data.
  • Optimistically, two decades remaining in which to make it all count.

Imagine that your goal were to slow or prevent biological aging...

  1. What would be the specific questions you would try to tackle first?
  2. What additional skills would you add to your toolkit?
  3. How would you allocate your limited time between the research questions in #1 and the acquisition of new skills in #2?

Thanks for your input.


Update

I thank everyone for their input and apologize for how long it has taken me to post an update.

I met with Aubrey de Grey and he recommended using the anonymized patient data to look for novel uses for already-prescribed drugs. He also suggested I do a comparison of existing longitudinal studies (e.g. Framingham) and the equivalent data elements from our data warehouse. I asked him, if he runs into any researchers with promising theories or methods but without a massive human dataset to test them on, to send them my way.

My original question was a bit too broad in retrospect: I should have focused more on how to best leverage the capabilities my project already has in place rather than making a more general "what should I do with myself" kind of appeal. On the other hand, at the time I might have been less confident about the project's success than I am now. Though the conversation immediately went off into prospective experiments rather than analyzing existing data, there were some great ideas there that may yet become practical to implement.

At any rate, a lot of this has been overcome by events. In the last six months I realized that before we even get to the bifurcation point between longevity and other research areas, there are a crapload of technical, logistical, and organizational problems to solve. I no longer have any doubt that these real problems are worth solving, my team is well positioned to solve many of them, and the solutions will significantly accelerate research in many areas including longevity. We have institutional support, we have a credible revenue stream, and no shortage of promising directions to pursue. The limiting factor now is people-hours. So, we are recruiting.

Thanks again to everyone for their feedback.

 

vaccination research/reading

0 freyley 27 October 2014 05:20PM

Vaccination is probably one of the hardest topics to have a rational discussion about. I have some reason to believe that the author of http://whyarethingsthisway.com/2014/10/23/the-cdc-and-cargo-cult-science/ is someone interested in looking for the truth, not winning a side - at the very least, I'd like to help him when he says this:

I genuinely don’t want to do Cargo Cult Science so if anybody reading this knows of any citations to studies looking at the long term effects of vaccines and finding them benign or beneficial, please, be sure to post them in the comments.

 

I'm getting started on reading the actual papers, but I'm hoping this finds someone who's already done the work and wants to go post it on his site, or if not, someone else who's interested in looking through papers with me - I do better at this kind of work with social support. 

Moloch: optimisation, "and" vs "or", information, and sacrificial ems

20 Stuart_Armstrong 06 August 2014 03:57PM

Go read Yvain/Scott's Meditations On Moloch. It's one of the most beautiful, disturbing, poetical looks at the future that I've ever seen.

Go read it.

Don't worry, I can wait. I'm only a piece of text, my patience is infinite.

De-dum, de-dum.

You sure you've read it?

Ok, I believe you...

Really.

I hope you wouldn't deceive an innocent and trusting blog post? You wouldn't be enough of a monster to abuse the trust of a being as defenceless as a constant string of ASCII symbols?

Of course not. So you'd have read that post before proceeding to the next paragraph, wouldn't you? Of course you would.

 

Academic Moloch

Ok, now to the point. The "Moloch" idea is very interesting, and, at the FHI, we may try to do some research in this area (naming it something more respectable/boring, of course, something like "how to avoid stable value-losing civilization attractors").

The project hasn't started yet, but a few caveats to the Moloch idea have already occurred to me. First of all, it's not obligatory for an optimisation process to trample everything we value into the mud. This is likely to happen with an AI's motivation, but it isn't inevitable for optimisation processes in general.

One way of seeing this is the difference between "or" and "and". Take the democratic election optimisation process. It's clear, as Scott argues, that this optimises badly in some ways. It encourages appearance over substance, some types of corruption, etc... But it also optimises along some positive axes, with some clear, relatively stable differences between the parties which reflect some voters' preferences, and punishment for particularly inept behaviour from leaders (I might argue that the main benefit of democracy is not the final vote between the available options, but the filtering out of many pernicious options because they'd never be politically viable). The question is whether these two strands of optimisation can be traded off against each other, or if a minimum of each is required. So can we make a campaign that is purely appearance-based, without any substantive positions ("or": maximum on one axis is enough), or do you need a minimum of substance and a minimum of appearance to buy off different constituencies ("and": you need some achievements on all axes)? And no, I'm not interested in discussing current political examples.
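
To pin the "or"/"and" distinction down a little, here is a minimal sketch. The two axes, the thresholds and the function names are my own illustrative choices, not anything from Scott's essay or from any data.

```python
# Minimal sketch of the "or" vs "and" regimes for a two-axis optimisation
# process. The axes (appearance, substance), thresholds and names are
# illustrative assumptions, not empirical claims.

def viable_or(appearance: float, substance: float, threshold: float = 0.8) -> bool:
    """'Or' regime: excelling on any single axis is enough to be viable."""
    return max(appearance, substance) >= threshold


def viable_and(appearance: float, substance: float, threshold: float = 0.3) -> bool:
    """'And' regime: a minimum on every axis is required to be viable."""
    return min(appearance, substance) >= threshold


# A pure-appearance campaign survives under "or" but not under "and".
print(viable_or(0.9, 0.0))   # True
print(viable_and(0.9, 0.0))  # False
```

In the "and" case, the race to the bottom on any one axis stops at that axis's floor; whether elections (or markets) are better modelled as "or" or "and" is exactly the open question above.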

Another example Scott gave was of the capitalist optimisation process, and how it in theory matches customers' and producers' interests, but could go very wrong:

Suppose the coffee plantations discover a toxic pesticide that will increase their yield but make their customers sick. But their customers don't know about the pesticide, and the government hasn't caught up to regulating it yet. Now there's a tiny uncoupling between "selling to [customers]" and "satisfying [customers'] values", and so of course [customers'] values get thrown under the bus.

This effect can be combated to some extent with extra information. If the customers (or journalists, bloggers, etc...) know about this, then the coffee plantations will suffer. "Our food is harming us!" isn't exactly a hard story to publicise. This certainly doesn't work in every case, but increased information is something that technological progress would bring, and this needs to be considered when asking whether optimisation processes will inevitably tend to a bad equilibrium as technology improves. An accurate theory of nutrition, for instance, would have great positive impact if its recommendations could be measured.

Finally, Zack Davis's poem about the em stripped of (almost all) humanity got me thinking. The end result of that process is tragic for two reasons: first, the em retains enough humanity to have curiosity, only to get killed for this. And secondly, that em once was human. If the em was entirely stripped of human desires, the situation would be less tragic. And if the em was further constructed in a process that didn't destroy any humans, this would be even more desirable. Ultimately, if the economy could be powered by entities developed non-destructively from humans, and which were clearly not conscious or suffering themselves, this would be no different than powering the economy with the non-conscious machines we use today. This might happen if certain pieces of a human-em could be extracted, copied and networked into an effective, non-conscious entity. In that scenario, humans and human-ems could be the capital owners, and the non-conscious modified ems could be the workers. The connection of this with the Moloch argument is that it shows that certain nightmare scenarios could in some circumstances be adjusted to much better outcomes, with a small amount of coordination.

 

The point of the post

The reason I posted this is to get people's suggestions about ideas relevant to a "Moloch" research project, and what they thought of the ideas I'd had so far.

Biomedical research, superstars, and innovation

2 VipulNaik 14 March 2014 10:38PM

As part of my work for Cognito Mentoring reviewing biomedical research as a career option (not much at the link there right now), I came across an interview with biomedical researcher John Todd of Cambridge University published by 80,000 Hours.

The whole interview is interesting, but one part of it struck me as interesting and somewhat hard to believe:

John would prefer a good person in his lab to an extra £0.5mn in annual funding. Generally, there are enough grants, so finding good people is a bigger constraint than money.

Here's the full context:

Our candidate does data analysis in finance, earning over $100,000 per year. They have an Economics degree from Chicago, a Masters in Financial Engineering from the University of California, LA, and reasonable programming skills. They’re planning to do an MD then PhD.

“This guy looks great. I’d love to hire him.” (when he has his MD, or even before).

“The MD and programming/statistics combo is lethal. Top of the world. There’s major demand.”

He probably wouldn’t need to do a PhD, because of the programming. After his MD, he could just apply to a lab. He should go into genomic medicine, which is what I do. Tailored therapeutics or stratified medicine will be played out for major health and economic benefits over the next 30 years. Check out Atul Butte at Stanford. He’s the perfect profile for this guy. He could be the new Butte”

 

£0.5mn is about USD 830,000 according to current foreign exchange rates. In other words, John Todd, the interviewee, indicated that a sufficiently good researcher was worth that much. Now, the question was framed in terms of additional funding, rather than reallocation of existing funds. But assuming that the existing funding for the biomedical research lab is at least one order of magnitude greater than the amount (£0.5mn) under discussion, I don't think it matters whether we're talking of using additional funding or reallocating existing funds. Essentially, I read John Todd as saying that he'd be willing to pay £0.5mn to attract a "good person" to his lab (actually, as framed, it could be interpreted as even more: he's willing to pay an ordinary salary for the person, plus forgo £0.5mn in additional funds, to hire the person). Note: I clarified with Ben Todd, the interviewer, that the additional grants were per-year rather than one-time grants, so the relevant comparison is indeed between the grant amount and annual income.

I haven't surveyed the biomedical research community, so I'm not sure how representative John Todd's opinion here is. Andrew McMichael offers a more guarded response, suggesting that 200,000 pounds are not as good as a great researcher, but he's less sure at half a million pounds, and in any case, good researchers bring in their own grant money, so it's a false dichotomy. But I've heard that there are other people at biomedical research labs who place even higher value on hiring good people than John Todd does. So in the absence of more detailed information, I'll take John Todd's view as a representative median view of a segment of biomedical research labs.

So, question: why don't there exist high-paid positions of that sort in biomedical research for entry-level people? For comparison, one list of the top ten professors in the US lists the tenth highest paid professor as earning slightly under US$500,000. The list is probably far from complete (Douglas Knight points in the comments to Chicago having at least 5 salaries over $700K, one in the business school and four in the medical school). Glassdoor lists salaries at the J. Craig Venter Institute, and the highest listed salary is for professors (about $200,000), with all other salaries near or below $100,000.

I asked a slightly more general version of the question in this blog post. I'll briefly list below the general explanations provided there, with some comments on the applicability of those to the context of biomedical research as I understand it.

  1. Talent constraint because of cash constraint: I don't think this applies to biomedical research. It's not that I think they are adequately funded, but rather, they do have enough funds that there shouldn't be a great difference between how they would use additional funds and how they would reallocate existing funds.
  2. Genuine absence of talented people: I think that this does apply in the very short run -- it's hard for somebody to acquire an M.D. and experience with programming at short notice. But this raises a whole host of questions: why not advertise for such positions prominently, promising high pay, so that people can use the existence of such positions to make more long-term plans about what subjects to study while they're still in college?
  3. Talented people would or should be willing to work for low pay: While this argument works well in the context of effective altruism (because of the altruistic orientation needed for top work), I'm not sure it works for biomedical research. I don't see biomedical research as qualitatively different from computer programming or finance in terms of how altruistic people need to be to work productively.
  4. Workplace egalitarianism and morale: There may be friction in labs if some people get paid a lot more, particularly if other workers aren't convinced that the people getting paid more are really working harder. This is a problem everywhere, including in the programming world. One solution that the programming world has come up with is to offer different levels of stock compensation. Another solution is acquihires: rather than paying huge salaries to star programmers, companies buy startups that have collected a large number of star programmers under their roof, and the programmers cash in on the huge amount of money reaped through the sale. Neither of these specific solutions works in the context of nonprofit, university, or government research.
  5. Irrationality of funders: Employers and their funders are reluctant to pay large amounts. Biomedical research labs are often affiliated with universities and need to use the payscales of the universities. Even those that rely on other donations may be afraid that their donors will balk if they pay huge salaries.

Of course, one possibility is that none of these explanations really matter and I'm overinterpreting offhand remarks that were not intended to be taken literally. But before jumping to that conclusion, I'd like to get a clearer sense of the dynamics at play.

The nature of the explanation could also affect the social value of going into biomedical research in the following sense: if (3), (4), or (5) are big issues, that could be an indicator that perhaps superstars aren't valued much by their peers and funders (relative to the need to make people conform to norms of taking low pay). This suggests (though it doesn't prove) that perhaps the workplace doesn't offer enough flexibility for the sort of ambitious changes that superstars may bring about, so the marginal value of superstars in practice isn't as high as it could be in principle. In other words, if your bosses don't value your work enough in practice to pay you what they say you're worth, maybe they won't give you the autonomy to actually achieve that. On a related note, this GiveWell blog post hints that many experts think that bureaucracy, paperwork, and a bias in favor of older, established scientists, all get in the way of accomplishment for young, talented researchers:

  • The existing system favors researchers with strong track records, and is not good at supporting young investigators. This was the most commonly raised concern, and is mentioned in all three of our public interviews.
  • The existing system favors a particular brand of research – generally incremental testing of particular hypotheses – and is less suited to supporting research that doesn’t fit into this mold. Research that doesn’t fit into this mold may include:
    • Very high-risk research representing a small chance of a big breakthrough.
    • Research that focuses on developing improved tools and techniques (for example, better microscopy or better genome sequencing), rather than on directly investigating particular hypotheses.
    • “Translational research” aiming to improve the transition between basic scientific discoveries and clinical applications, and not focused on traditionally “academic” topics (for example, research focusing on predicting drug toxicity).
  • The existing system focuses on time-consuming, paperwork-heavy grant applications for individual investigators; more attention to differently structured grants and grant applications would be welcome. These could include mechanisms focused on providing small amounts of funding, along with feedback on ideas, quickly and with minimal paperwork, as well as mechanisms focused on supporting larger-scale projects that require collaboration between multiple investigators.

LessWrong Help Desk - free paper downloads and more (2014)

30 jsalvatier 16 January 2014 05:51AM

Over the last year, VincentYu, gwern and others have provided many papers for the LessWrong community (87% success rate in 2012) through previous help desk threads. We originally intended to provide editing, research and general troubleshooting help, but article downloads are by far the most requested service.

If you're doing a LessWrong relevant project we want to help you. If you need help accessing a journal article or academic book chapter, we can get it for you. If you need some research or writing help, we can help there too.

Turnaround times for articles published in the last 20 years or so are usually less than a day. Older articles often take a couple of days.

Please make new article requests in the comment section of this thread.

If you would like to help out with finding papers, please monitor this thread for requests. If you want to monitor via RSS like I do, many RSS readers will give you the comment feed if you give it the URL for this thread (or use this link directly). 

If you have some special skills you want to volunteer, mention them in the comment section.

Tulpa References/Discussion

13 Vulture 02 January 2014 01:34AM

There have been a number of discussions here on LessWrong about "tulpas", but it's been scattered about with no central thread for the discussion. So I thought I would put this up here, along with a centralized list of reliable information sources, just so we all stay on the same page.

Tulpas are deliberately created "imaginary friends" which in many ways resemble separate, autonomous minds. Often, the creation of a tulpa is coupled with deliberately induced visual, auditory, and/or tactile hallucinations of the being.

Previous discussions here on LessWrong: 1 2 3

Questions that have been raised:

1. How do tulpas work?

2. Are tulpas safe, from a mental health perspective?

3. Are tulpas conscious? (may be a hard question)

4. More generally, is making a tulpa a good idea? What are they useful for?

 

Pertinent Links and Publications

(I will try to keep this updated if/when further sources are found)

  • In this article1, the psychological anthropologist Tanya M. Luhrmann connects tulpas to the "voice of God" experienced by devout evangelicals - a phenomenon more thoroughly discussed in her book When God Talks Back: Understanding the American Evangelical Relationship with God. Luhrmann has also succeeded2 in inducing tulpa-like visions of Leland Stanford, jr. in experimental subjects.
  • This paper3 investigates the phenomenon of authors who experience their characters as "real", which may be tulpas by yet another name.
  • There is an active subreddit of people who have or are developing tulpas, with an FAQ, links to creation guides, etc.
  • tulpa.info is a valuable resource, particularly the forum. There appears to be a whole "research" section for amateur experiments and surveys.
  • This particular experiment suggests that the idea of using tulpas to solve problems faster is a no-go.
  • Also, one person helpfully hooked themselves up to an EEG and then performed various mental activities related to their tulpa.
  • Another possibly related phenomenon is the way that actors immerse themselves in their characters. See especially the section on "Masks" in Keith Johnstone's book Impro: Improvisation and the Theatre (related quotations and video)4.
  • This blogger has some interesting ideas about the neurological basis of tulpas, based on Julian Jaynes's The Origin of Consciousness in the Breakdown of the Bicameral Mind, a book whose scientific validity is not clear to me.
  • It is not hard to find new age mystical books about the use of "thoughtforms", or the art of "channeling" "spirits", often clearly talking about the same phenomenon. These books are likely to be low in useful information for our purposes, however. Therefore I'm not going to list the ones I've found here, as they would clutter up the list significantly.
  • (Updated 2/9/2015) The abstract of a paper by our very own Kaj Sotala hypothesizing about the mechanisms behind tulpa creation.5

(Bear in mind while perusing these resources that if you have serious qualms about creating a tulpa, it might not be a good idea to read creation guides too carefully; making a tulpa is easy to do and, at least for me, was hard to resist. Proceed at your own risk.)

 

Footnotes

1. "Conjuring Up Our Own Gods", a 14 October 2013 New York Times Op-Ed

2. "Hearing the Voice of God" by Jill Wolfson in the July/August 2013 Stanford Alumni Magazine

3. "The Illusion of Independent Agency: Do Adult Fiction Writers Experience Their Characters as Having Minds of Their Own?"; Taylor, Hodges & Kohànyi in Imagination, Cognition and Personality; 2002/2003; 22, 4

4. Thanks to pure_awesome

5. "Sentient companions predicted and modeled into existence: explaining the tulpa phenomenon" by Kaj Sotala

The Inefficiency of Theoretical Discovery

19 lukeprog 03 November 2013 09:26PM

Previously: Why Neglect Big Topics.

Why was there no serious philosophical discussion of normative uncertainty until 1989, given that all the necessary ideas and tools were present at the time of Jeremy Bentham?

Why did no professional philosopher analyze I.J. Good’s important “intelligence explosion” thesis (from 19591) until 2010?

Why was reflectively consistent probabilistic metamathematics not described until 2013, given that the ideas it builds on go back at least to the 1940s?

Why did it take until 2003 for professional philosophers to begin updating causal decision theory for the age of causal Bayes nets, and until 2013 to formulate a reliabilist metatheory of rationality?

By analogy to financial market efficiency, I like to say that “theoretical discovery is fairly inefficient.” That is: there are often large, unnecessary delays in theoretical discovery.

This shouldn’t surprise us. For one thing, there aren’t necessarily large personal rewards for making theoretical progress. But it does mean that those who do care about certain kinds of theoretical progress shouldn’t necessarily think that progress will be hard. There is often low-hanging fruit to be plucked by investigators who know where to look.

Where should we look for low-hanging fruit? I’d guess that theoretical progress may be relatively easy where:

  1. Progress has no obvious, immediately profitable applications.
  2. Relatively few quality-adjusted researcher hours have been devoted to the problem.
  3. New tools or theoretical advances open up promising new angles of attack.
  4. Progress is only valuable to those with unusual views.

These guesses make sense of the abundant low-hanging fruit in much of MIRI’s theoretical research, with the glaring exception of decision theory. Our September decision theory workshop revealed plenty of low-hanging fruit, but why should that be? Decision theory is widely applied in multi-agent systems, and in philosophy it’s clear that visible progress in decision theory is one way to “make a name” for oneself and advance one’s career. Tons of quality-adjusted researcher hours have been devoted to the problem. Yes, new theoretical advances (e.g. causal Bayes nets and program equilibrium) open up promising new angles of attack, but they don’t seem necessary for much of the low-hanging fruit discovered thus far. And progress in decision theory is definitely not valuable only to those with unusual views. What gives?

Anyway, three questions:

  1. Do you agree about the relative inefficiency of theoretical discovery?
  2. What are some other signs of likely low-hanging fruit for theoretical progress?
  3. What’s up with decision theory having so much low-hanging fruit?

1 Good (1959) is the earliest statement of the intelligence explosion: “Once a machine is designed that is good enough… it can be put to work designing an even better machine. At this point an “explosion” will clearly occur; all the problems of science and technology will be handed over to machines and it will no longer be necessary for people to work. Whether this will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.” The term itself, “intelligence explosion,” originates with Good (1965). Technically, artist and philosopher Stefan Themerson wrote a "philosophical analysis" of Good's intelligence explosion thesis called Special Branch, published in 1972, but by "philosophical analysis" I have in mind a more analytic, argumentative kind of philosophical analysis than is found in Themerson's literary Special Branch. ↩

[LINK] Spread the wings of uncertainty, the research drug version

1 Stuart_Armstrong 16 October 2013 12:37PM

EDIT: Image now visible!

From Anders Sandberg:

Another piece examining predictive performance, this time in the pharmaceutical industry. How well can industry experts predict sales?

You guessed it, not very well. Not even when data really accumulated.

Large pharma has less bias than small companies, but the variance still overshadows everything.

 

First, most consensus forecasts were wrong, often substantially. And although consensus forecasts improved over time as more information became available, accuracy remained an issue even several years post-launch. More than 60% of the consensus forecasts in our data set were either over or under by more than 40% of the actual peak revenues (Fig. a). Although the overall median of the data set was within 4%, the distribution is wide for both under- and overestimated forecasts. Furthermore, a significant number of consensus forecasts were overly optimistic by more than 160% of the actual peak revenues of the product.



The unanswered question in this analysis is what companies and investors ought to be doing to forecast better. We do not offer a complete answer here, but we have thoughts based on our analysis.

Beware the wisdom of the crowd. The 'consensus' consists of well-compensated, focused professionals who have many years of experience, and we have shown that the consensus is often wrong. There should be no comfort in having one's own forecast being close to the consensus, particularly when millions or billions of dollars are on the line in an investment decision or acquisition situation.

Broaden the aperture on what the future could look like, and rapidly adapt to new information. Much of the divergence between a forecast and what actually happens is due to the emergence of a scenario that no one foresaw: a new competitor, unfavourable clinical data or a more restrictive regulatory environment. Companies need to fight their own inertia and the tendency to make only incremental shifts in forecasting and resourcing.

Try to improve. It appears that some companies and analysts may be better at forecasting than others (see Supplementary information S1 (box)). We suspect there is no magic bullet to improving the accuracy of forecasts, but the first step is conducting a self-assessment and recognizing that there may be a capability issue that needs to be addressed.

Research interests I don't currently have time to develop alone

15 Stuart_Armstrong 16 October 2013 10:31AM

EDIT: added the "rights of parents" and "simulation hypothesis" research interests.

I've started a lot of research projects and have a lot of research interests that I don't currently have time to develop on my own. So I'm putting the research interests together on this page, and anyone can let me know if they're interested in doing any joint projects on these topics. This can range from coauthoring, to simply having a conversation about these and seeing where that goes.

The possible research topics are:

The State of the Art of Scientific Research on Polyamoury

-6 Ritalin 09 September 2013 09:26PM

The idea of polyamoury is one that interests me. However, while such books as The Ethical Slut have done a good job of providing me with tools to understand and possibly handle the challenges and rewards involved, I found them unsatisfying in that they were largely based on anecdotal evidence, with a very strong selection bias. Before making the jump and attempting to live that way, one would need to know precisely the state of the art of scientific, rigourous, credible research on the topic; it is a tedious job to seek out and compile everything, but I believe it is a job worth doing.

I'll be initiating an ongoing process of data compilation, and will publish my findings on this thread as I discover and summarize them. Any help is greatly appreciated, as this promises to be long and tedious. I might especially need help extracting meaningful information from the masses of data; I am not a good statistician yet, far from it.

To Be Expanded...

 

Update on establishment of Cambridge’s Centre for Study of Existential Risk

40 Sean_o_h 12 August 2013 04:11PM
Cambridge’s high-profile launch of the Centre for Study of Existential Risk last November received a lot of attention on LessWrong, and a number of people have been enquiring as to what‘s happened since. This post is meant to give a little explanation and update of what’s been going on.

Motivated by a common concern over human activity-related risks to humanity, Lord Martin Rees, Professor Huw Price, and Jaan Tallinn founded the Centre for Study of Existential Risk last year.  However, this announcement was made before the establishment of a physical research centre or securement of long-term funding. The last 9 months have been focused on turning an important idea into a reality.

Following the announcement in November, Professor Price contacted us at the Future of Humanity Institute regarding the possibility of collaboration on joint academic funding opportunities; the aim being both to raise the funds for CSER’s research programmes and to support joint work by the FHI and CSER’s researchers on anthropogenic existential risk. We submitted our first grant application in January to the European Research Council – an ambitious project to create “A New Science of Existential Risk” that, if successful, would provide enough funding for CSER’s first research programme - a sizeable programme that will run for five years.
We’ve been successful in the first and second rounds, and we will hear a final round decision at the end of the year. It was also an opportunity for us to get some additional leading academics onto the project – Sir Partha Dasgupta, Professor of Economics at Cambridge and an expert in social choice theory, sustainability and intergenerational ethics, is a co-PI (along with Huw Price, Martin Rees and Nick Bostrom). In addition, a number of prominent academics concerned about technology-related risk – including Stephen Hawking, David Spiegelhalter, George Church and David Chalmers – have joined our advisory board.

The FHI regards establishment of CSER as of the highest priority for a number of reasons including:

1) The value of the research the Centre will engage in
2) The reputational boost to the field of Existential Risk gained by the establishment of a high-profile research centre in Cambridge.
3) The impact on policy and public perception that academic heavy-hitters like Rees and Price can have

Therefore we’ve been working with CSER behind the scenes over the last 9 months. Progress has been a little slow until now – Huw, Martin and Jaan are fully committed to this project, but due to their other responsibilities aren’t in a position to work full-time on it yet. 

However, we’re now in a position to make CSER’s establishment official. Cambridge’s new Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) will host CSER and provide logistical support. I’ll be acting manager of CSER’s activities over the coming 6-12 months, under the guidance of Huw, Martin and Jaan. A generous seed funding donation from Jaan Tallinn is funding CSER’s establishment and these activities – which will include a lecture series, workshops, public outreach, and staff time on grant-writing and fundraising. It’ll also provide a buyout of a fraction of my time from FHI (providing funds for us to hire part-time staff to offload some of the FHI workload and help with some of the CSER work).

At the moment and over the next couple of months we’re going to be focused on identifying and working on additional academic funding opportunities for additional programmes, as well as chasing some promising leads in industry, private and philanthropic funding. I’ll also be aiming to keep CSER’s public profile active. There will be newsletters every three months (sign up here), the website’s going to be fleshed out to contain more detail about our planned research and existing literature, and we’ll be arranging regular high-quality media engagement. While we’re unlikely to have time to answer every general query that comes in (though we’ll try whenever possible: email: admin@cser.org), we’ll aim to keep the existential risk community informed through the newsletters and posts such as these.

We’ve been lucky to get a lot of support from the academic and existential risk community for the CSER centre. In addition to CRASSH, Cambridge’s Centre for Science and Policy will provide support in making policy-relevant links, and may co-host and co-publicise events. Luke Muehlhauser, MIRI’s Executive Director, has been very supportive and has provided valuable advice, and has generously offered to direct some of MIRI’s volunteer support towards CSER tasks. We also expect to get valuable support from the growing community around FHI.

From where I’m sitting, CSER’s successful launch is looking very promising. The timeline on our research programmes, however, is still a little more uncertain. If we’re successful with the European Research Council, we can expect to be hiring a full research team next spring. If not, it may take a little longer, but we’re exploring a number of different opportunities in parallel and are feeling confident. The support of the existential risk community continues to be invaluable.

Thanks,

Seán Ó hÉigeartaigh
Academic Manager, Future of Humanity Institute 
Acting Academic Manager, Cambridge Centre for Study of Existential Risk.


Internet Research (with tangent on intelligence analysis and collapse)

11 [deleted] 31 July 2013 04:58AM

Want to save time? Skip down to "I'm looking to compile a thread on Internet Research"!

Opinionated Preamble:

There is a lot of high level thinking on Less Wrong, which is great. It's done wonders to structure and optimize my own decisions. I think the political and futurology-related issues that Less Wrong covers can sometimes get out of sync with the reality and injustices of events in the immediate world. There are comprehensive treatments of how medical science is failing, or how academia cannot give unbiased results, and this is the milieu of programmers and philosophers in the middle-to-upper class of the planet. I at least believe that this circle of awareness can be expanded, even if it's treading into mind-killing territory. If anything I want to give people a near-mode sense of the stakes aside from x-risk: all in all, the x-risk scenarios I've seen Less Wrong fear the most kill humanity almost instantly. A slower descent into violence and poverty is to me much more horrifying, because I might have to live in it and I don't know how. As a matter of fact, I have no idea how to predict it.

This is one reason why I'm drawn to the Intelligence Operations performed by the military and crime units, among other things. Intelligence product delivery is about raw and immediate *fact*, and there is a lot of it. The problems featured in IntelOps are among the few things rationality is genuinely good for - highly uncertain scenarios with one-off executions and messy or noisy feedback. Facts get lost in translation as messages are passed through, and of course feeding and receiving fake facts are all part of the job - but nevertheless, knowing *everything* *everywhere* is in the job description, and some form of rationality became a necessity.

It gets ugly. The demand for these kinds of skills often lies in industries that are highly competitive, violent, and illegal. I believe that once a close look is taken at how force and power are applied in practice, there isn't any pretending anymore that human evils are an accident.

Open Source Intelligence, or "OSINT", is the mining of data and facts from public information databases, news articles, codebases, journals. Although the amount of classified data dwarfs the unclassified, the size and scope of the unclassified is responsible for a majority of intelligence reports - and thus is involved in the great majority of executive decisions made by government entities. It's worth giving some thought as to how much that we know, that they do too. As illustrated in this expose, the processing of OSINT is a great big chunk of what modern intelligence is about aside from many other things. I think understanding how rationality as developed on Less Wrong can contribute to better IntelOps, and how IntelOps can feed the rationality community, would be awesome, but that's a post for another time.

--

The Show

Through my investigations into IntelOps I've noticed the emphasis on search. Good search.

I'm looking to compile a thread on Internet Research. I'm wondering what wisdom on Less Wrong can be taken advantage of here to help us become more effective searchers. Here are some questions that could be answered specifically, but they are just guidelines - feel free to voice associated thoughts; we're exploring here.

  • Before actually going out and searching, what would be the most effective way of drafting and optimizing a collection plan? Are there any formal optimization models that inform our distribution of time and attention? Exploration vs exploitation comes to mind, but it would be worth formulating something specific. I heard that the multi-armed bandit problem is solved? (See the sketch after this list.)
  • Do you have any links or resources regarding more effective search?
  • Do you have any experiences regarding internet research that you can share? Any patterns that you've noticed that have made you more effective at searching?
  • What are examples of closed-source information that are low-hanging fruit in terms of access (e.g. academic journals)? What are possible strategies for acquiring closed-source data (e.g. enrolling in small courses at universities, e-mailing researchers, coercion via the law/Freedom of Information Act, social engineering, etc.)?
  • I would like to hear from SEOs and software developers on their interpretation of semantic web technologies and how they are going to affect end-users. I am somewhat unfamiliar with the semantic web, but from my understanding, information that could not be indexed before is now indexed, and new ontologies will emerge as this information is mined. What should an end-user expect, and what opportunities will there be that didn't exist in the current generation of search?
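
Since the first question above raises the exploration-vs-exploitation framing, here is a minimal sketch of one standard multi-armed bandit strategy, UCB1, applied to a made-up collection-planning setting (the source names and hit rates below are purely hypothetical). The stochastic bandit problem is "solved" only in a qualified sense: UCB-style rules achieve provably logarithmic regret, and the Gittins index is optimal for the discounted Bayesian formulation, but real collection plans have shifting payoffs and correlated sources, so treat this as an illustration rather than a prescription.

```python
import math
import random

def ucb1(sources, pulls, total_reward, t):
    """Pick the next source to query: mean reward so far plus an
    exploration bonus that shrinks as a source gets sampled more."""
    for s in sources:          # try every source once first
        if pulls[s] == 0:
            return s
    def score(s):
        mean = total_reward[s] / pulls[s]
        bonus = math.sqrt(2 * math.log(t) / pulls[s])
        return mean + bonus
    return max(sources, key=score)

# Hypothetical example: three information sources, each with an unknown
# probability of yielding a useful document per query.
sources = ["journal_db", "news_archive", "forum_search"]
true_hit_rate = {"journal_db": 0.4, "news_archive": 0.1, "forum_search": 0.25}

pulls = {s: 0 for s in sources}
total_reward = {s: 0.0 for s in sources}

for t in range(1, 501):
    s = ucb1(sources, pulls, total_reward, t)
    hit = random.random() < true_hit_rate[s]   # simulated query outcome
    pulls[s] += 1
    total_reward[s] += 1.0 if hit else 0.0

print(pulls)  # most queries should end up going to "journal_db"
```

The same scoring idea transfers to time allocation: track how often each avenue of search pays off, and keep a shrinking bonus on the under-explored ones rather than committing the whole collection plan up front.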

That should be enough to get started. Below are some links that I have found useful with respect to Internet Research.

--

Meta-Search Engines or Assisted Search:

Summarizers:

Bots/Collectors/Automatic Filters:

Compilations and Directories:

Guides:

Practice:

I don't really care how you use this information, but I hope I've jogged some thinking about why it could be important.

Exercise isn't necessarily good for people

9 NancyLebovitz 08 June 2013 02:32PM

I would appreciate it very much if anyone would take a close look at this-- it looks sound to me, but it also appeals to my prejudices.

http://www.youtube.com/watch?feature=player_embedded&v=E42TQNWhW3w#!

My comments are in square brackets. Everything else is my notes on the Jamie Timmons lecture from the video.

Short version: 12% of people become less healthy from exercise. 20% of people get nothing from exercise. This is a matter of genetics, not doing exercise wrong.

****

Ask a hundred people about exercise, you'll get a wide range of answers about what exercise is and what good it might do for health, and the same for health professionals.

You need to focus on the evidence that exercise affects particular health outcomes. Weight and health are not strongly correlated. BMI is problematic.

There's a recommendation for 150 minutes of exercise/week, but this isn't sound. People who *report* being active have better health. People who are fitter have better health. These are not evidence that having a person with low activity take up exercise will make them healthier.

Nothing but a supervised intervention study is good enough.

Improved lifestyle is better than Metformin for preventing diabetes. (Studies) Exercise + diet modification has a powerful effect of preventing and slowing the progression of Type II diabetes. People with Type II have more cardiovascular disease (heart attacks and strokes). However, it doesn't follow that the lifestyle changes which help with Type II will also help with CVD. [I'm surprised]

Diabetes doesn't kill, CVD does, and a major motivation for the NHS to care is that CVD is expensive.

[9:45] Two studies which find that lifestyle intervention has no effect on CVD in diabetics. [11:00] One study which found that lifestyle intervention prevents Type II but doesn't affect microvascular disease (blindness and ulcers). [I'm not sure what this means. Maybe people can have the ill effects of Type II without the disease showing up in their blood sugar levels?] There are no supervised exercise-only intervention studies which show that exercise prevents long term disease progression.

[13:00] The usual advice on exercise from the NHS (pretty similar in the US): Aerobic exercise must raise your heart rate and make you sweat to be beneficial. The more exercise you do, the better. Do a minimum of 150 minutes/week of aerobic exercise + strength training. If you do more than 150 minutes/week, you'll gain even more health benefits. Using a skipping rope is an example of vigorous intensity exercise. People aren't following this advice, and a major factor is the amount of time required. The advice is based on best guesses.

[15:55] Exercise will increase aerobic capacity in 80% of people (lowers all-cause mortality), improve insulin action in 65% of people (lowers type II diabetes by 50%), reduce blood pressure in >55% of people (lowers strokes 25%), increase good cholesterol in 70% of people (less vascular disease), promote muscle and bone mass (? less fractures and 'aging')

[17:40] Exercise response graphs. The average person gets a 15% increase in aerobic capacity, but a few get less capacity if they exercise. Insulin response-- average of 20% improvement. Some people get better, some get worse. A high proportion, maybe the majority, have little or no change. The people in this chart were doing 150 minutes/week of supervised exercise.

[20:00] High-intensity exercise is exercise which depends on stored energy, there's no way to take in enough oxygen to contribute. An athlete might be able to continue for 10 minutes. The average person can continue for more like 30 seconds to one minute.

[22:00] Experiments with high-intensity/rest intervals: 3 x 20 seconds of high intensity. [25:00] Charts showing flattened glucose spike (there probably was a peak, but the test missed the moment) and less insulin in the blood after only two weeks of 6 x 30 seconds interval training (total 7 minutes).

[30:54] "Advice has been based on what epidemiology methods can detect, not what is actually important or required." Health questionaires don't include things like 20 seconds of running for the bus.

[33:00] Ten days of bed rest will make healthy people insulin resistant.

[35:00] It looks as though modern hunter gatherers expend about as much energy/mass as Americans on the east coast do. [I found I could make sense out of the graphs by using full screen.] This evidence suggests that people are eating more rather than moving less. The evidence for 7 minutes of HIIT three times a week isn't completely solid, but it's at least as good as the evidence for 150 minutes/week.

[38:36] ..... Epidemiology of a sort-- evidence that eating chocolate makes it more likely to get a Nobel prize. Beautiful correlation! The Swiss eat the most chocolate and get the most prizes. The Swedes are an outlier-- they don't eat as much chocolate as they should to get so many prizes. That the prize is given in Sweden might have something to do with this. Cocoa has flavanols which slow age-related cognitive decline, but the correlation is probably just a coincidence.

[40:00] 12% of healthy people make their blood pressure **higher** by exercising 150 minutes a week. 20% get little or no improvement. [42:00] Graphs of low responders for aerobic capacity, muscle mass, and insulin sensitivity. Exercise does slow progression of diabetes on the average, but that doesn't apply to all individuals.

[44:47] There's no obvious indicator to tell high responders from low responders in advance. You have to either check the genes or track the results of exercise. [45:00] Finding non- or adverse responders: change in aerobic fitness is 60% genetic, insulin sensitivity is 40% genetic, strength is 50% genetic. These are estimates from family studies, including twin studies. There are 10 million gene variants which might have at least a 5% effect.

[47:35] There's a group of 27 genes which together can 'predict' gains in VO2max. It isn't necessary to understand how the genes work to create their effect as long as that effect is predictable, and it's possible that we will never understand something so complex. There may be drug combinations which can make exercise safe and effective for non-adaptors. There's research happening. It's possible to breed rats which are better at responding to training.

[53:52] A life-style program will *on average* reduce the risk of developing type II diabetes. We *don't know* whether exercise-training on its own will reduce heart-disease, angina, etc. It does improve risk factors and symptoms. If *you* have a risk-factor for ill-health, we *can not* be sure that exercise will help. (12% *adverse* responders, 20% no effect)

[57:00] Public health (what advice should the government give?): 1 minute a day of high-intensity sprint cycling reduces major risk factors. [For what proportion of people?] People tend to like brief high intensity exercise better than longer low intensity exercise. North American study: 150 minutes/week of exercise increases one's carbon footprint by 15% (food, laundry, showers).

Safety: 2 million marathoners have been studied. Very low fatalities. HIIT isn't likely to be more dangerous. [Ack! Ack! Ack! What happened to all the care about evidence? Marathoning isn't sprinting. Fatalities during the race aren't the only thing that can go wrong. People who do marathons aren't randomly selected.]

HIIT has been done safely by medically supervised diabetes and heart failure patients. It would take a billion dollars to do a thorough supervised intervention study. Some pieces of it have been done. This is much less than big drug companies spend, without much result. The current hope is finding the gene markers and then useful drugs for non- and adverse responders. There are no average people!

**** http://www.medicalnewstoday.com/articles/242498.php

Summary of a TV show which has more details about High Intensity Interval Training.

Research is polygamous! The importance of what you do needn't be proportional to your awesomeness

22 diegocaleiro 26 May 2013 10:29PM

In a recent discussion a friend was telling me how he felt he was not as smart as the people he thinks are doing the best research on the most important topics. He named a few jaw-dropping people, who indeed are smarter than he is, and mentioned their research agendas - say, A, B and C.

From that, a remarkable implication followed, in his cognitive algorithm: 

 

Therefore I should research thing D or thing E. 

 

Which made me pause for a moment. Here is a hypothetical schematic of this conception of the world. Arrows stand for "Ought to research".

Humans by Level of Awesome (HLA)             Research Agenda by Level of Importance (RALI)

Mrs 1 --------> X-risk #1

2 --------> X-risk #2 

3 --------> Longevity

4 --------> Malaria Reduction 

5 --------> Enhancement 

1344 --------> Increasing Puppies Cuteness

Etc... 

 

It made me think of the problem of creating matchmaking algorithms for websites where people want to pair up to do stuff, such as playing tennis, playing chess or having a romantic relationship.

This reasoning is profoundly mistaken, and I can look back into my mind and remember dozens of times I have made the exact same mistake. So I thought it would be good to spell it out 10 times in different ways for the unconscious bots in my mind that didn't get it yet:

1) Research agenda topics are polygamous, they do not mind if there is someone else researching them, besides the very best people who could be doing such research. 

2) The function above should not be one-to-one (biunivocal), but many-to-one. 

3) There is no relation of overshadowing based on someone's awesomeness to everyone else who researches the same topic, unless they are researching the same narrow minimal sub-type of the same question coming from the same background. 

4) Overdetermination doesn't happen at the "general topic level". 

5) Awesome people do not obfuscate what less awesome people do in their area; they catapult it by creating resources.

6) Being in an area where the most awesome people are is not asking to "lose the game"; it is being in an environment that cultivates greatness.

7) The amount of awesomeness in a field does not supervene on the amount of awesomeness in its best explorer.

8) The best person in each area would never be able to cause progress alone.

9) To want to be the best in something has absolutely no precedence over doing something that matters. 

10) If you believe in monogamous research, you'd be in the awkward situation where finding out that no one gives a flying fuck about X-risk should make you ecstatic, and that can't be right. That there are people doing something that matters so well that you currently estimate you can't beat them should be fantastic news!

Well, I hope every last cortical column I have got it now, and the overall surrounding being may be a little less wrong. 

Also, this text by Michael Vassar is magnificent, and makes a related set of points. 

 

 

 

Developmental Thinking Shout-out to CFAR

16 MarkL 03 May 2013 01:46AM

Preamble

Before I make my main point, I want to acknowledge that curriculum development is hard. It's even harder when you're trying to teach the unteachable. And it's even harder when you're in the process of bootstrapping. I am aware of the Kahneman inside/outside curriculum design story. And, I myself have taught 200+ hours of my own computer science curricula to middle-school students. So this "open letter" is not some sort of criticism of CFAR's curriculum; it's a "Hey, check out this cool stuff eventually when you have time" letter. I just wanted to put all this out there, to possibly influence the next five years of CFAR.

Curriculum development is hard.

So, anyway, I don't personally know any of the people involved in CFAR, but I do know you're all great. 

 

A case for developmental thinking

The point of this post is to make a case for CFAR to become "developmentally aware." Massive amounts of quality research have gone into describing the differences between 1) children, 2) adults, and 3) expert or developmentally advanced adults. I haven't (yet?) seen any evidence of awareness of this research in CFAR's materials. (I haven't attended a CFAR workshop, but I've flipped through some of the more recent stuff.)

Developmental thinking is a different approach than, e.g., cataloguing biases, promoting real-time awareness of them, and having a toolbox of de-biasing strategies and algorithms. Developmental literature gives clues to the precise cognitive operations that are painstakingly acquired over an entire lifetime, in a more fine-grained way than is possible when studying, say, already-expert performers or cognitive bias literature. I think developmental thinking goes deeper than "toolbox thinking" (straw!) and is an angle of approach for teaching the unteachable.

Below is an annotated bibliography of some of my personal touchstones in the development literature, books that are foundational or books that synthesize decades of research about the developmental aspects of entrepreneurial, executive, educational, and scientific thinking, as well as the developmental aspects of emotion and cognition. Note that this is a personal, idiosyncratic, non-exhaustive list.

And, to qualify, I have epistemological and ontological issues with plenty of the stuff below. But some of these authors are brilliant, and the rest are smart, meticulous, and values-driven. Lots of these authors deeply care about empirically identifying, targeting, accelerating, and stabilizing skills ahead of schedule or helping skills manifest when they wouldn't have otherwise appeared at all. Quibbles and double-takes aside, there is lots of signal, here, even if it's not seated in a modern framework (which would of course increase the value and accessibility of what's below).

There are clues or even neon signs, here, for isolating fine-grained, trainable stuff to be incorporated into curricula. Even if an intervention was designed for kids, a lot of adults still won't perform consistently prior to said intervention. And these researchers have spent thousands of collective hours thinking about how to structure assessments, interventions, and validations which may be extendable to more advanced scenarios.

So the material below is not only useful for thinking about remedial or grade-school situations, and not just for adding more tools to a cognitive toolbox; it could be useful for radically transforming a person's thinking style at a deep level.

Consider:

child:adult :: adult: ? 

This has everything to do with the "Outside the Box" Box. Really. One author below has been collecting data for decades to attempt to describe individuals that may represent far less than one percent of the population.

 

0. Protocol analysis

Everyone knows that people are poor reporters of what goes on in their heads. But this is a straw. A tremendous amount of research has gone into understanding what conditions, tasks, types of cognitive routines, and types of cognitive objects foster reliable introspective reporting. Introspective reporting can be reliable and useful. Grandaddy Herbert Simon (who coined the term "bounded rationality") devotes an entire book to it. The preface (I think) is a great overview. I wanted to mention this, first, because lots of the researchers below use verbal reports in their work.

http://www.amazon.com/Protocol-Analysis-Edition-Verbal-Reports/dp/0262550237/

 

1. Developmental aspects of scientific thinking

Deanna Kuhn and colleagues develop and test fine-grained interventions to promote transfer of various aspects of causal inquiry and reasoning in middle school students. In her words, she wants to "[develop] students' meta-level awareness and management of their intellectual processes." Kuhn believes that inquiry and argumentation skills, carefully defined and empirically backed, should be emphasized over specific content in public education. That sounds like vague and fluffy marketing-speak, but if you drill down to the specifics of what she's doing, her work is anything but. (That goes for all of these 50,000 foot summaries. These people are awesome.)

http://www.amazon.com/Education-Thinking-Deanna-Kuhn/dp/0674027450/

http://www.tc.columbia.edu/academics/index.htm?facid=dk100

http://www.educationforthinking.org/

 

David Klahr and colleagues emphasize how children and adults compare in coordinated searches of a hypothesis space and experiment space. He believes that scientific thinking is not different in kind than everyday thinking. Klahr gives an integrated account of all the current approaches to studying scientific thinking. Herbert Simon was Klahr's dissertation advisor.

http://www.amazon.com/Exploring-Science-Cognition-Development-Discovery/dp/0262611767

http://www.psy.cmu.edu/~klahr/

 

2. Developmental aspects of executive or instrumental thinking

Ok, I'll say it: Elliott Jaques was a psychoanalyst, among other things. And the guy makes weird analogies between thinking styles and truth tables. But his methods are rigorous. He has found possible discontinuities in how adults process information in order to achieve goals and how these differences relate to an individual's "time horizon," or maximum time length over which an individual can comfortably execute a goal. Additionally, he has explored how these factors predictably change over a lifespan.

http://www.amazon.com/Human-Capability-Individual-Potential-Application/dp/0962107077/

 

3. Developmental aspects of entrepreneurial thinking

Saras Sarasvathy and colleagues study the difference between novice entrepreneurs and expert entrepreneurs. Sarasvathy wants to know how people function under conditions of goal ambiguity ("We don't know the exact form of what we want"), environmental isotropy ("The levers to affect the world, in our concrete situation, are non-obvious"), and enaction ("When we act we change the world"). Herbert Simon was her advisor. Her thinking predates and goes beyond the lean startup movement.

http://www.amazon.com/Effectuation-Elements-Entrepreneurial-Expertise-Entrepreneurship/dp/1848445725/

"What effectuation is not" http://www.effectuation.org/sites/default/files/research_papers/not-effectuation.pdf

Related: http://lesswrong.com/r/discussion/lw/hcb/book_suggestion_diaminds_is_worth_reading/

4. General Cognitive Development

Jane Loevinger and colleagues' work has inspired scores of studies. Loevinger discovered potentially stepwise changes in "ego level" over a lifespan. Ego level is an archaic-sounding term that might be defined as one's ontological, epistemological, and metacognitive stance towards self and world. Loevinger's methods are rigorous, with good inter-rater reliability, Bayesian scoring rules incorporating base rates, and so forth.

http://www.amazon.com/Measuring-Ego-Development-Volume-Construction/dp/0875890598/

http://www.amazon.com/Measuring-Development-Scoring-Manual-Women/dp/0875890695/

Here is a woo-woo description of the ego levels, but note that these descriptions are based on decades of experience and have a repeatedly validated empirical core. The author of this document, Susanne Cook-Greuter, received her doctorate from Harvard by extending Loevinger's model, and it's well worth reading all the way through: 

http://www.cook-greuter.com/9%20levels%20of%20increasing%20embrace%20update%201%2007.pdf

Here is a recent look at the field:

http://www.amazon.com/The-Postconventional-Personality-Researching-Transpersonal/dp/1438434642/

By the way, having explicit cognitive goals predicts an increase in ego level, three years later, but not an increase in subjective well-being. (Only the highest ego levels are discontinuously associated with increased wellbeing.) Socio-emotional goals do predict an increase in subjective well-being, three years later. Great study:

Bauer, Jack J., and Dan P. McAdams. "Eudaimonic growth: Narrative growth goals predict increases in ego development and subjective well-being 3 years later." Developmental Psychology 46.4 (2010): 761.

 

5. Bridging symbolic and non-symbolic cognition

[Related: http://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words]

Eugene Gendlin and colleagues developed a "[...] theory of personality change [...] which involved a fundamental shift from looking at content [to] process [...]. From examining hundreds of transcripts and hours of taped psychotherapy interviews, Gendlin and Zimring formulated the Experiencing Level variable. [...]"

The "focusing" technique was designed as a trainable intervention to influence an individual's Experiencing Level.

Marion N. Hendricks reviews 89 studies, concluding that [I quote]:

  • Clients who process in a High Experiencing manner or focus do better in therapy according to client, therapist and objective outcome measures.
  • Clients and therapists judge sessions in which focusing takes place as more successful.
  • Successful short term therapy clients focus in every session.
  • Some clients focus immediately in therapy; Others require training.
  • Clients who process in a Low Experiencing manner can be taught to focus and increase in Experiencing manner, either in therapy or in a separate training.
  • Therapist responses deepen or flatten client Experiencing. Therapists who focus effectively help their clients do so.
  • Successful training in focusing is best maintained by those clients who are the strongest focusers during training.

http://www.focusing.org/research_basis.html

http://www.amazon.com/Focusing-Eugene-T-Gendlin/dp/0553278339/

http://www.amazon.com/Focusing-Oriented-Psychotherapy-Manual-Experiential-Method/dp/157230376X/

http://www.amazon.com/Self-Therapy-Step-By-Step-Wholeness-Cutting-Edge-Psychotherapy/dp/0984392777/ [IFS is very similar to focusing]

http://www.amazon.com/Emotion-Focused-Therapy-Coaching-Clients-Feelings/dp/1557988811/ [more references, similar to focusing]

http://www.amazon.com/Experiencing-Creation-Meaning-Philosophical-Psychological/dp/0810114275/ [favorite book of all time, by the way]

 

6. Rigorous Instructional Design

Siegfried Engelmann (http://www.zigsite.com/) and colleagues are dedicated to dramatically accelerating cognitive skill acquisition in disadvantaged children. In addition to his peer-reviewed research, he specializes in unambiguously decomposing cognitive learning tasks and designing curricula. Engelmann's methods were validated as part of Project Follow Through, the "largest and most expensive experiment in education funded by the U.S. federal government that has ever been conducted," according to Wikipedia. Engelmann contends that the data show that Direct Instruction outperformed all other methods:

http://www.zigsite.com/prologue_NeedyKids_chapter_5.html

http://en.wikipedia.org/wiki/Project_Follow_Through

Here, he systematically eviscerates an example of educational material that doesn't meet his standards:

http://www.zigsite.com/RubricPro.htm

And this is his instructional design philosophy:

http://www.amazon.com/Theory-Instruction-Applications-Siegfried-Engelmann/dp/1880183803/

 

Conclusion

In conclusion, lots of scientists have cared for decades about describing the cognitive differences between children, adults, and expert or developmentally advanced adults. And lots of scientists care about making those differences happen ahead of schedule or happen when they wouldn't have otherwise happened at all. This is a valuable and complementary perspective to what seems to be CFAR's current approach. I hope CFAR will eventually consider digging into this line of thinking, though maybe they're already on top of it or up to something even better.

Study on depression

10 Swimmer963 15 January 2013 09:58PM

I am currently running a study on depression, in collaboration with Shannon Friedman (http://lesswrong.com/user/ShannonFriedman/overview/). If you are interested in participating, the study involves filling out a survey and will take a few minutes of your time (half an hour would be very generous), most likely once a week for four weeks. Send me an email at mdixo100@uottawa.ca, and I can give you more details. 

 

Thank you!

What does the world look like, the day before FAI efforts succeed?

23 michaelcurzi 16 November 2012 08:56PM

TL;DR: let's visualize what the world looks like if we successfully prepare for the Singularity.

I remember reading once, though I can't remember where, about a technique called 'contrasting'. The idea is to visualize a world where you've accomplished your goals, and visualize the current world, and hold the two worlds in contrast to each other. Apparently there was a study about this; the experimental 'contrasting' group was more successful than the control in accomplishing its goals.

It occurred to me that we need some of this. Strategic insights about the path to FAI are not robust, nor are they likely to be highly reliable. And in order to find a path forward, you need to know where you're trying to go. Thus, some contrasting:

It's the year 20XX. The time is 10 AM, on the day that will thereafter be remembered as the beginning of the post-Singularity world. Since the dawn of the century, a movement rose in defense of humanity's future. What began with mailing lists and blog posts became a slew of businesses, political interventions, infrastructure improvements, social influences, and technological innovations designed to ensure the safety of the world.

Despite all odds, we exerted a truly extraordinary effort, and we did it. The AI research is done; we've laboriously tested and re-tested our code, and everyone agrees that the AI is safe. It's time to hit 'Run'.

And so I ask you, before we hit the button: what does this world look like? In the scenario where we nail it, which achievements enabled our success? Socially? Politically? Technologically? What resources did we acquire? Did we have superior technology, or a high degree of secrecy? Was FAI research highly prestigious, attractive, and well-funded? Did we acquire the ability to move quickly, or did we slow unFriendly AI research efforts? What else?

I had a few ideas, which I divided between scenarios where we did a 'fantastic', 'good', or 'sufficient' job at preparing for the Singularity. But I need more ideas! I'd like to fill this out in detail, with the help of Less Wrong. So if you have ideas, write them in the comments, and I'll update the list.

Some meta points:

  • This speculation is going to be, well, pretty speculative. That's fine - I'm just trying to put some points on the map. 
  • However, I'd like to get a list of reasonable possibilities, not detailed sci-fi stories. Do your best.
  • In most cases, I'd like to consolidate categories of possibilities. For example, we could consolidate "the FAI team has exclusive access to smart drugs" and "the FAI team has exclusive access to brain-computer interfaces" into "the FAI team has exclusive access to intelligence-amplification technology." 
  • However, I don't want too much consolidation. For example, I wouldn't want to consolidate "the FAI team gets an incredible amount of government funding" and "the FAI team has exclusive access to intelligence-amplification technology" into "the FAI team has a lot of power".
  • Lots of these possibilities are going to be mutually exclusive; don't see them as aspects of the same scenario, but rather different scenarios.

Anyway - I'll start.

Visualizing the pre-FAI world

  • Fantastic scenarios
    • The FAI team has exclusive access to intelligence amplification technology, and use it to ensure Friendliness & strategically reduce X-risk.
    • The government supports Friendliness research, and contributes significant resources to the problem. 
    • The government actively implements legislation which FAI experts and strategists believe has a high probability of making AI research safer.
    • FAI research becomes a highly prestigious and well-funded field, relative to AGI research.
    • Powerful social memes exist regarding AI safety; any new proposal for AI research is met with a strong reaction (among the populace and among academics alike) asking about safety precautions. It is low status to research AI without concern for Friendliness.
    • The FAI team discovers important strategic insights through a growing ecosystem of prediction technology; using stables of experts, prediction markets, and opinion aggregation.
    • The FAI team implements deliberate X-risk reduction efforts to stave off non-AI X-risks. Those might include a global nanotech immune system, cheap and rigorous biotech tests and safeguards, nuclear safeguards, etc.
    • The FAI team implements the infrastructure for a high-security research effort, perhaps offshore, implementing the best available security measures designed to reduce harmful information leaks.
    • Giles writes: Large amounts of funding are available, via government or through business. The FAI team and its support network may have used superior rationality to acquire very large amounts of money.
    • Giles writes: The technical problem of establishing Friendliness is easier than expected; we are able to construct a 'utility function' (or a procedure for determining such a function) in order to implement human values that people (including people with a broad range of expertise) are happy with.
    • Crude_Dolorium writes: FAI research proceeds much faster than AI research, so by the time we can make a superhuman AI, we already know how to make it Friendly (and we know what we really want that to mean).
  • Pretty good scenarios
    • Intelligence amplification technology access isn't exclusive to the FAI team, but it is differentially adopted by the FAI team and their supporting network, resulting in a net increase in FAI team intelligence relative to baseline. The FAI team uses it to ensure Friendliness and implement strategy surrounding FAI research.
    • The government has extended some kind of support for Friendliness research, such as limited funding. No protective legislation is forthcoming.
    • FAI research becomes slightly more high status than today, and additional researchers are attracted to answer important open questions about FAI.
    • Friendliness and rationality memes grow at a reasonable rate, and by the time the Friendliness program occurs, society is more sane.
    • We get slightly better at making predictions, mostly by refining our current research and discussion strategies. This allows us a few key insights that are instrumental in reducing X-risk.
    • Some X-risk reduction efforts have been implemented, but with varying levels of success. Insights about which X-risk efforts matter are of dubious quality, and the success of each effort doesn't correlate well with the seriousness of the X-risk. Nevertheless, some X-risk reduction is achieved, and humanity survives long enough to implement FAI.
    • Some security efforts are implemented, making it difficult but not impossible for pre-Friendly AI tech to be leaked. Nevertheless, no leaks happen.
    • Giles writes: Funding is harder to come by, but small donations, limited government funding, or moderately successful business efforts suffice to fund the FAI team.
    • Giles writes: The technical problem of aggregating values through a Friendliness function is difficult; people have contradictory and differing values. However, there is broad agreement as to how to aggregate preferences. Most people accept that FAI needs to respect values of humanity as a whole, not just their own.
    • Crude_Dolorium writes: Superhuman AI arrives before we learn how to make it Friendly, but we do learn how to make an 'Anchorite' AI that definitely won't take over the world. The first superhuman AIs use this architecture, and we use them to solve the harder problems of FAI before anyone sets off an exploding UFAI.
  • Sufficiently good scenarios
    • Intelligence amplification technology is widespread, preventing any differential adoption by the FAI team. However, FAI researchers are able to keep up with competing efforts to use that technology for AI research.
    • The government doesn't support Friendliness research, but the research group stays out of trouble and avoids government interference.
    • FAI research never becomes prestigious or high-status, but the FAI team is able to answer the important questions anyway.
    • Memes regarding Friendliness aren't significantly more widespread than today, but  the movement has grown enough to attract the talent necessary to implement a Friendliness program.
    • Predictive ability is no better than it is today, but the few insights we've gathered suffice to build the FAI team and make the project happen.
    • There are no significant and successful X-risk reduction efforts, but humanity survives long enough to implement FAI anyway.
    • No significant security measures are implemented for the FAI project. Still, via cooperation and because the team is relatively unknown, no dangerous leaks occur.
    • Giles writes: The team is forced to operate on a shoestring budget, but succeeds anyway because the problem turns out not to be incredibly sensitive to funding constraints.
    • Giles writes: The technical problem of aggregating values is incredibly difficult. Many important human values contradict each other, and we have discovered no "best" solution to those conflicts. Most people agree on the need for a compromise but quibble over how that compromise should be reached. Nevertheless, we come up with a satisfactory compromise.
    • Crude_Dolorium writes: The problems of Friendliness aren't solved in time, or the solutions don't apply to practical architectures, or the creators of the first superhuman AIs don't use them, so the AIs have only unreliable safeguards. They're given cheap, attainable goals; the creators have tools to read the AIs' minds to ensure they're not trying anything naughty, and killswitches to stop them; they have an aversion to increasing their intelligence beyond a certain point, and to whatever other failure modes the creators anticipate; they're given little or no network connectivity; they're kept ignorant of facts more relevant to exploding than to their assigned tasks; they require special hardware, so it's harder for them to explode; and they're otherwise designed to be safer if not actually safe. Fortunately they don't encounter any really dangerous failure modes before they're replaced with descendants that really are safe.

 

Desired articles on AI risk?

13 lukeprog 02 November 2012 05:39AM

I've once again updated my list of forthcoming and desired articles on AI risk, which currently names 17 forthcoming articles and books about AGI risk, and also names 26 desired articles that I wish researchers were currently writing.

But I'd like to hear your suggestions, too. Which articles not already on the list as "forthcoming" or "desired" would you most like to see written, on the subject of AGI risk?

Book/article titles reproduced below for convenience...

continue reading »

LessWrong help desk - free paper downloads and more

36 jsalvatier 07 October 2012 11:45PM

Over the last year, VincentYu, gwern, I, and others have provided 132 academic papers for the LessWrong community (out of 152 requests, an 87% success rate) through the Free research, editing and articles thread. We originally intended to provide editing, research and general troubleshooting help, but article downloads are by far the most requested service.

If you're doing a LessWrong-relevant project, we want to help you. If you need help accessing a journal article or academic book chapter, we can get it for you. If you need some research or writing help, we can help there too.

Turnaround time for articles published in the last 20 years or so is usually less than a day. Older articles often take a couple of days.

Please make new article requests in the comment section of this thread.

If you would like to help out with finding papers, please monitor this thread for requests. If you want to monitor via RSS like I do, Google Reader will give you the comment feed if you give it the URL for this thread (or use this link directly). 
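
If you'd rather poll the feed with a small script than use a feed reader, here is a minimal sketch (assuming Python with the third-party feedparser package; the feed URL below is only a placeholder for this thread's actual comment feed):

    # Minimal polling sketch - Python with the third-party 'feedparser' package.
    # FEED_URL is a hypothetical placeholder; substitute this thread's comment feed URL.
    import time
    import feedparser

    FEED_URL = "http://lesswrong.com/example-thread-comments.rss"  # placeholder
    seen_links = set()

    while True:
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            if entry.link not in seen_links:
                seen_links.add(entry.link)
                print("New comment:", entry.title, entry.link)
        time.sleep(600)  # poll every ten minutes

Any equivalent setup works; the point is just to get notified of new requests promptly.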

If you have some special skills you want to volunteer, mention them in the comment section.

Which questions about online classes would you ask Peter Norvig?

6 [deleted] 18 September 2012 07:39AM

A week ago Google launched an open source project called Course Builder; it packages the software and technology used to build their July class, Power Searching with Google. The discussion forum for it is here. The first live hangout, in which Peter Norvig will answer questions about MOOC design and the technical aspects of using Course Builder, is scheduled for the 26th of September.

Helping the World to Teach



In July, Research at Google ran a large open online course, Power Searching with Google, taught by search expert, Dan Russell. The course was successful, with 155,000 registered students. Through this experiment, we learned that Google technologies can help bring education to a global audience. So we packaged up the technology we used to build Power Searching and are providing it as an open source project called Course Builder. We want to make this technology available so that others can experiment with online learning.

The Course Builder open source project is an experimental early step for us in the world of online education. It is a snapshot of an approach we found useful and an indication of our future direction. We hope to continue development along these lines, but we wanted to make this limited code base available now, to see what early adopters will do with it, and to explore the future of learning technology. We will be hosting a community building event in the upcoming months to help more people get started using this software. edX shares in the open source vision for online learning platforms, and Google and the edX team are in discussions about open standards and technology sharing for course platforms.

We are excited that Stanford University, Indiana University, UC San Diego, Saylor.org, LearningByGivingFoundation.org, Swiss Federal Institute of Technology in Lausanne (EPFL), and a group of universities in Spain led by Universia, CRUE, and Banco Santander-Universidades are considering how this experimental technology might work for some of their online courses. Sebastian Thrun at Udacity welcomes this new option for instructors who would like to create an online class, while Daphne Koller at Coursera notes that the educational landscape is changing and it is exciting to see new avenues for teaching and learning emerge. We believe Google’s preliminary efforts here may be useful to those looking to scale online education through the cloud.

Along with releasing the experimental open source code, we’ve provided documentation and forums for anyone to learn how to develop and deploy an online course like Power Searching. In addition, over the next two weeks we will provide educators the opportunity to connect with the Google team working on the code via Google Hangouts. For access to the code, documentation, user forum, and information about the Hangouts, visit the Course Builder Open Source Project Page. To see what is possible with the Course Builder technology register for Google’s next version of Power Searching. We invite you to explore this brave new world of online learning with us.

A small group of us has been working on related matters, but we are far from done reviewing the relevant literature. Not having any good questions yet, I thought: what harm could there be in asking the broader community to come up with a few? If Norvig has already answered your question in other existing material of his that I've reviewed, I'll respond with a link.

 

[LINK] "Junk" DNA revealed as information processing system?

3 EphemeralNight 18 September 2012 05:07AM

http://spectrum.ieee.org/tech-talk/at-work/test-and-measurement/re-imagining-our-genes-encode-project-reveals-genome-as-an-information-processing-system/?utm_source=techalert&utm_medium=email&utm_camp

Just a few years ago, the prevailing wisdom said that the genome comprises 3 percent or so genes and 97 percent “junk” (with 2 or 3 percent of that junk consisting of the fossilized remains of retroviruses that infected our ancestors somewhere along the line). After a decade of painstaking analysis by more than 200 scientists, the new ENCODE data show that indeed 2.94 percent of the genome is protein-coding genes, while 80.4 percent of sequences regulate how those genes get turned on, turned off, expressed, processed, and modified.

This fundamentally changes how most biologists understand the master instruction set of life: we are, in short, 3 percent  input/output and 80 percent logic. (Though perhaps a surprise to biologists, the finding will hardly astound anyone who has designed a complex interactive system.)

Correct me if I'm wrong, but this is a really big deal, right?

Mike Darwin on animal research, moral cowardice, and reasoning in an uncaring universe

23 Synaptic 25 August 2012 04:38PM

He writes this essay in response to someone who writes about their "gut level emotional response when [they] thought about dogs being likely killed by an as yet unproven and dangerous medical procedure." 

I recommend the whole thing. If you are going to read it all, note that some text is duplicated near the end, though there is one paragraph at the very end which is not. 

First, he describes how animals share empathy and emotions with humans:

It is a maxim of the Animal Rights ideologues that "a rat is a dog is a boy." [PETA] This is patently not true, and might just be denounced as absurd on its face. But, it is true that rats, dogs and boys share important properties, or more generally, that rats, dogs and people share important properties. I have a huge reservoir of experience with rats, dogs and people. All three have a well-developed sense of self, the ability to read my face and determine my mental state, and, obviously, the ability to experience most, if not all, of the basic emotions and mental states that humans experience: anxiety, fear, emotional attachment to others (of their own and other species), sexual arousal and release, anticipation, enjoyment, curiosity, and so on. Most importantly, they have the ability to experience empathy - to extend their internal feelings to others. Well-socialized rats and dogs know that the people they interact with can be hurt, provoked, pleased, and otherwise be emotionally and physically affected by their actions and they, in turn, act accordingly within the limits of their abilities to do so. Neither "pet" dogs nor rats bite their owners with abandon nor destroy their homes. This isn't just "conditioned behavior," but rather is the result of a more global understanding that humans, like them, can feel; and thus can be rewarded or made to suffer.

This is a very important and valuable property to people. It is so valuable that, when members of our own species fail to demonstrate it, we imprison them or even kill them! Jails and prisons are full of people who either lack empathy, or lack the ability to act upon it. What then does it say of us if we treat animals in ways that demonstrate a lack of understanding or respect for their feelings - for their ability to suffer or experience pleasure?

The answer is that it would, at first glance, say that we were either sociopaths, profoundly ignorant of the nature of animals, or taken over by some ideology which induced a state of perceptual blindness to their plight. Thus, what I am saying here is that I agree that it is neither reasonable nor moral (within our value structure as empathetic beings) to regard animals as unfeeling automatons, let alone treat them as such.

However, there is a problem with this approach to dealing both with our fellow humans and with other animals as the sole guide to our actions. The problem is, put simply, this: The native state of man and beast is one of unfathomable suffering.

Next, he explains ethics in a way that seems to correspond with a lot of Eliezer's writing: 

The central moral kernel of almost every religion is that we are born into a world of injustice and suffering. There can be little quibbling with that observation, since everywhere we turn we see living systems whose very structure brings them into "conflict" with their environment and causes enormous suffering. This is how it has always been. It is the reality of our existence in this universe. Evolution, the beautiful star studded sky at night, the cool lapping ocean - they don't give a damn about anything, least of all a mouse in a cage with cancer or a woman with her breast rotting off. And as far we can tell, they never will.

The best the universe has done so far is to produce us - creatures who both can and do care about injustice and suffering. If you believe in a Grand Design, or some other teleological explanation that results in universal justice, then, go to the mirror right now and take a long hard look, because buddy, you are it - you are as good as it has gotten, so far.

Then, unless you are a cretin or a fool, or both, realize that suffering and injustice are both inescapable contemporary and future realities which you have to deal with rationally (or not) as you choose. You do not get to choose Door Number 3, which is "no suffering and injustice." In fact, even if you kill yourself straightaway to avoid inconveniencing a mouse with a plow, the suffering and injustice will continue to march on, even for billions and billions of years.

There are no easy choices.

The best you can do is to choose carefully and rationally what kinds of misery you will inflict and to work, relentlessly, to minimize it and to make the universe a more just place. Those decisions will be informed by your values - by what you hold most worthy and in highest esteem. You are, of course, free to choose mice over men, a hunter-gatherer life over that of an agrarian, the world of the primitive or technological civilization.

Next, he tackles questions about whether animal research is, on net, beneficial: 

However, what you are not free to do, at least not around me, is to spew out lies and moral falsehoods about the supposed real nature of the universe and the real consequences of the choices you (and others like you) make. If you think that animals have rights in the classical and real sense that has historically been applied to humans, then I will call you a liar and a moral blackguard who would, and has, condemned not only countless humans to unnecessary suffering and death, but countless animals whom humans value highly (our companion animals and livestock) as well - because much of veterinary medicine is a direct result of animal research.

If you argue that humans should be used in research, there I would agree with you. Most of the pharmacological research done with rodents is junk science which has led to few real medical advances. But be advised that such research will be ugly and terrifying and very likely costly in some meaningful proportion to the benefit it yields.

I am sorry to be so harsh, but technological civilization has robbed most of the Western world of any sense of reality - of how the universe works and of just how much suffering accrues from every frozen ready meal and every lipstick or plastic bottle of beverage consumed.

That dreamy, soft-bellied state of unreality is intolerable and it is also incompatible with our continued existence as a technological species, and probably as a species at all.

And it is most certainly incompatible with any hope we can currently see of the universe becoming a more just, decent and humane place.

Thus, I see your feelings and attitudes as profoundly incompatible both with your long term personal survival, and that of our species. As such, they evoke in me a feeling of revulsion and strong feeling of anger for the damage they have already caused to biomedical research - and will likely continue to cause.

Next, he goes into details of what animal lifespan research entails: 

I would also like to note that "the worst" of animal research in terms of inflicting suffering is not the acute experimental work conducted by cryonicists and most other mainstream medical research, but rather is to be found in the work of gerontologists conducting life span studies on rodents and primates - research which virtually all on this listserv avidly lap up and never criticize, even though much, if not most of it, is junk science.

I can say, without reservation, that of all the pain, horror and cruelty that I have inflicted, either inadvertently, or as an anticipated consequence of research, by far the most cruel work I've ever (done or) observed is that of the gerontologist doing lifespan studies. ... 

The fact is, that aging animals get a dreadful array of truly horrible and disgusting pathologies and, because they are not humans receiving human medical care, they die in fantastically gruesome ways more often than not. ...

Rodents often develop not only mammary neoplasms [breast cancers], but tumors of the food pouches and buccal mucosa [inside lining of the cheeks]. Since there is no surgical intervention, these masses often grow to colossal size, ulcerate, break down and fungate. A common cause of death is starvation, which is truly terrible to watch. Sometimes, the animals lose the ability to drink, in which case death is mercifully faster and less painful as a result of dehydration.

The visceral and bone pain that results from tumor invasion of vital organs, the skeleton and joints must be unimaginable. And cancer kills the majority of animals in gerontological lifespan studies. I've seen animals languish in their cages for weeks or months being slowly consumed by lesions so revolting I could barely force myself to handle them in order to document their decline.

And what of the lucky ones who don't die of cancer? Are they in rodent care homes in tiny beds with tiny egg crate mattresses, with a staff of rodent carers to lick their bums and turn them? Hardly. As animals age and develop spondylosis [spine osteoarthritis] and sarcopenia [age-related loss of muscle mass], they become unable to reach their anuses and urogenital areas with their mouths. As a result, they cannot clean themselves, and they develop an ammonia-generating, bacteria-infested crusting of urea and feces in these delicate areas, which not infrequently results in ulceration. They are often blind from cataracts, and are, of necessity, usually housed one to a cage (they have a propensity to cannibalism, especially if calorie restricted), so they die alone, slowly, most often of starvation and dehydration.

Of course, the first question that likely comes to most people's minds upon hearing this tale of horror is, "For the love of god man, why don't you euthanize such poor creatures, or at least medicate them for pain?" The answer is that you can't, not without developing a whole new, complex and costly model which has highly specific (and uniform) and almost completely NONSUBJECTIVE algorithms for when euthanasia should take place. And, you can forget about knowing what the "maximum lifespan" is, because it is flat out impossible to tell how long a moribund and likely suffering animal will live. I've seen animals I thought were certain to die within days survive for MONTHS! And so has every other experienced gerontological researcher.

That is the reality of gerontological lifespan research.

So, you want to trespass on the territory of the gods and live forever, or even just another 50 or 500 years longer, and you want to do it whilst being a nice guy? Give me a break!

The ending is poignant, and I think an excusable violation of Godwin's law:

Cryonics has largely been taken over by this moral world-view and with an understandable, if inexcusable, accompanying moral cowardice which dictates that we hide our animal research and cower in fear because the "Animal Rights" people will attack us (and by implication our poorly protected patients stored in vulnerable, unhardened facilities). This is the direct path to the Dark Ages or to the Soviet, or to the Third Reich, which was, ironically, the only nation-state to completely ban animal research because of its cruelty and inhumanity. Instead, they built concentration camps and turned loose the likes of Holzhoner, Rascher, Mengele, Whichtman, Caluberg and countless others like them on humans, who, unlike animals, have the rich perceptual ability to comprehend their own mortality and to contemplate, at length, the certain inevitability of their fate.

Darwin does not mention it in this essay, but he is a vegetarian, and his dog is cryopreserved at Alcor. 

Learn Power Searching with Google

18 [deleted] 02 July 2012 07:09PM

Google Search makes it amazingly easy to find information. Come learn about the powerful advanced tools we provide to help you find just the right information when the stakes are high.

Daniel Russell is doing a free Google class on how to search the web. Besides six 50-minute classes, it will include interactive activities to practice new skills. Upon passing the post-course assessment, you get a Certificate of Completion.

Advanced search is not only a useful everyday skill but also vital to scholarship. Searching the web is a superpower that would make thinkers of previous centuries green with envy. Learn to use it well. I recommend checking out Inside Search, Russell's blog, or perhaps reading the article "How to solve impossible problems" to get a feel for what you can expect to gain from it.

I think that for most people the value of the information is high enough to be worth the investment. Also, I suspect it will be plain fun. I am doing the class and strongly recommend it to fellow LessWrong users. Anyone else who has registered, please say so publicly in the comments as well. :)

Registration is open from June 26, 2012 to July 16, 2012.

Framing a problem in a foreign language seems to reduce decision biases

-4 MBlume 25 June 2012 05:25PM

The researchers aren't entirely sure why speaking in a less familiar tongue makes people more "rational", in the sense of not being affected by framing effects or loss aversion. But they think it may have to do with creating psychological distance, encouraging systematic rather than automatic thinking, and with reducing the emotional impact of decisions. This would certainly fit with past research that's shown the emotional impact of swear words, expressions of love and adverts is diminished when they're presented in a less familiar language.

Paywalled article (can someone with access throw a PDF up on dropbox or something?): http://pss.sagepub.com/content/early/2012/04/18/0956797611432178

Blog summary: http://bps-research-digest.blogspot.co.uk/2012/06/we-think-more-rationally-in-foreign.html

 

Funding Good Research

22 lukeprog 27 May 2012 06:41AM

Series: How to Purchase AI Risk Reduction

I recently explained that one major project undergoing cost-benefit analysis at the Singularity Institute is that of a scholarly AI risk wiki. The proposal is exciting to many, but as Kaj Sotala points out:

This idea sounds promising, but I find it hard to say anything about "should this be funded" without knowing what the alternative uses for the money are. Almost any use of money can be made to sound attractive with some effort, but the crucial question in budgeting is not "would this be useful" but "would this be the most useful thing".

Indeed. So here is another thing that donations to SI could purchase: good research papers by skilled academics.

 

Our recent grant of $20,000 to Rachael Briggs (for an introductory paper on TDT) provides an example of how this works:

  1. SI thinks of a paper it wants to exist but doesn't have the resources to write itself (e.g. a clearer presentation of TDT).
  2. SI looks for a few productive academics well-suited to write the paper we have in mind, and approaches them directly with the grant proposal. (Briggs is an excellent choice for the TDT paper because she is a good explainer and has had two of her past decision theory papers selected as among the 10 best papers of the year by The Philosopher's Annual.)
  3. Hopefully, one of these academics says "yes." We award them the grant in return for a certain kind of paper published in one of a pre-specified set of journals. (In the case of the TDT grant to Rachael Briggs, we specified that the final paper must be published in one of the following journals: Philosopher's Imprint, Philosophy and Phenomenological Research, Philosophical Quarterly, Philosophical Studies, Erkenntnis, Theoria, Australasian Journal of Philosophy, Nous, The Philosophical Review, or Theory and Decision.)
  4. SI gives regular feedback on outline drafts and article drafts prepared by the article author.
  5. Paper gets submitted, revised, and published!


For example, SI could award grants for the following papers:

  • "Objections to CEV," by somebody like David Sobel (his "Full Information Accounts of Well-Being" remains the most significant unanswered attack on ideal-preference theories like CEV).
  • "Counterfactual Mugging," by somebody like Rachael Briggs (here is the original post by Vladimir Nesov).
  • "CEV as a Computational Meta-Ethics," by somebody like Gert-Jan Lokhorst (see his paper "Computational Metaethics").
  • "Non-Bayesian Decision Theory and Normative Uncertainty," by somebody like Martin Peterson (the problem of normative uncertainty is a serious one, and Peterson's approach is a different line of approach than the one pursued by Nick Bostrom, Toby Ord, and Will Crouch, and also different from the one pursued by Andrew Sepielli).
  • "Methods for Long-Term Technological Forecasting," by somebody like Bela Nagy (Nagy is the lead author on one of the best papers in the field)
  • "Convergence to Rational Economic Agency," by somebody like Steve Omohundro (Omohundro's 2007 paper argues that advanced agents will converge toward the rational economic model of decision-making, if true this would make it easier to predict the convergent instrumental goals of advanced AIs, but his argument leaves much to be desired in persuasiveness as it is currently formulated).
  • "Value Learning," by somebody like Bill Hibbard (Dewey's 2011 paper and Hibbard's 2012 paper make interesting advances on this topic, but there is much more work to be done).
  • "Learning Preferences from Human Behavior," by somebody like Thomas Nielsen (Nielsen's 2004 paper with Finn Jensen described the first computationally tractable algorithms capable of learning a decision maker’s utility function from potentially inconsistent behavior. Their solution was to interpret inconsistent choices as random deviations from an underlying “true” utility function. But the data from neuroeconomics suggest a different solution: interpret inconsistent choices as deviations from an underlying “true” utility function that are produced by non-model-based valuation systems in the brain, and use the latest neuroscientific research to predict when and to what extent model-based choices are being “overruled” by the non-model-based valuation systems).

(These are only examples. I don't necessarily think these particular papers would be good investments.)
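
As a toy illustration of the "random deviations" idea mentioned above (this is not Nielsen and Jensen's actual algorithm, just a minimal sketch under made-up assumptions): simulate an agent whose pairwise choices are logistic noise around a hidden "true" utility function, then recover those utilities by maximum likelihood.

    # Toy sketch: inconsistent pairwise choices modeled as logistic noise around a
    # hidden "true" utility function, recovered by maximum likelihood. All numbers
    # here are made up for illustration.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_items = 5
    true_utility = rng.normal(size=n_items)  # the latent "true" utilities

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # The simulated agent sometimes contradicts itself: it picks item i over item j
    # with probability sigmoid(u_i - u_j).
    choices = []
    for _ in range(500):
        i, j = rng.choice(n_items, size=2, replace=False)
        chose_i = rng.random() < sigmoid(true_utility[i] - true_utility[j])
        choices.append((i, j, chose_i))

    # Fit utilities by maximizing the likelihood of the observed (inconsistent) choices.
    def neg_log_likelihood(u):
        total = 0.0
        for i, j, chose_i in choices:
            p = sigmoid(u[i] - u[j])
            total += np.log(p) if chose_i else np.log(1.0 - p)
        return -total

    result = minimize(neg_log_likelihood, x0=np.zeros(n_items))
    # Utilities are only identified up to an additive constant, so compare centered values.
    print("recovered:", np.round(result.x - result.x.mean(), 2))
    print("true:     ", np.round(true_utility - true_utility.mean(), 2))

The neuroeconomics-flavored variant described in that bullet would replace this uniform choice noise with a structured error model predicting when non-model-based valuation is likely to override the model-based "true" utilities.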

 
