

Post-doctoral Fellowships at METRICS

12 Anders_H 12 November 2015 07:13PM
The Meta-Research Innovation Center at Stanford (METRICS) is hiring post-docs for 2016/2017. The full announcement is available at http://metrics.stanford.edu/education/postdoctoral-fellowships. Feel free to contact me with any questions; I am currently a post-doc in this position.

METRICS is a research center within Stanford Medical School. It was set up to study the conditions under which the scientific process can be expected to generate accurate beliefs, for instance about the validity of evidence for the effect of interventions.

METRICS was founded by Stanford Professors Steve Goodman and John Ioannidis in 2014, after GiveWell connected them with the Laura and John Arnold Foundation, which provided the initial funding. See http://blog.givewell.org/2014/04/23/meta-research-innovation-centre-at-stanford-metrics/ for more details.

What's the most annoying part of your life/job?

11 Liron 23 October 2016 03:37AM

Hi, I'm an entrepreneur looking for a startup idea.

In my experience, the reason most startups fail is that they never actually solve anyone's problem. So I'm cheating and starting out by identifying a specific person with a specific problem.

So I'm asking you, what's the most annoying part of your life/job? Also, how much would you pay for a solution?

MIRI AMA plus updates

11 RobbBB 11 October 2016 11:52PM

MIRI is running an AMA on the Effective Altruism Forum tomorrow (Wednesday, Oct. 12): Ask MIRI Anything. Questions are welcome in the interim!

Nate also recently posted a more detailed version of our 2016 fundraising pitch to the EA Forum. One of the additions is about our first funding target:

We feel reasonably good about our chance of hitting target 1, but it isn't a sure thing; we'll probably need to see support from new donors in order to hit our target, to offset the fact that a few of our regular donors are giving less than usual this year.

The "Why MIRI's Approach?" section also touches on new topics that we haven't talked about in much detail in the past, but plan to write up in some future blog posts. In particular:

Loosely speaking, we can imagine the space of all smarter-than-human AI systems as an extremely wide and heterogeneous space, in which "alignable AI designs" is a small and narrow target (and "aligned AI designs" smaller and narrower still). I think that the most important thing a marginal alignment researcher can do today is help ensure that the first generally intelligent systems humans design are in the “alignable” region. I think that this is unlikely to happen unless researchers have a fairly principled understanding of how the systems they're developing reason, and how that reasoning connects to the intended objectives.

Most of our work is therefore aimed at seeding the field with ideas that may inspire more AI research in the vicinity of (what we expect to be) alignable AI designs. When the first general reasoning machines are developed, we want the developers to be sampling from a space of designs and techniques that are more understandable and reliable than what’s possible in AI today.

In other news, we've uploaded a new intro talk on our most recent result, "Logical Induction," that goes into more of the technical details than our previous talk.

See also Shtetl-Optimized and n-Category Café for recent discussions of the paper.

Link: Re-reading Kahneman's Thinking, Fast and Slow

11 toomanymetas 04 July 2016 06:32AM

"A bit over four years ago I wrote a glowing review of Daniel Kahneman’s Thinking, Fast and Slow. I described it as a “magnificent book” and “one of the best books I have read”. I praised the way Kahneman threaded his story around the System 1 / System 2 dichotomy, and the coherence provided  by prospect theory.

What a difference four years makes. I will still describe Thinking, Fast and Slow as an excellent book – possibly the best behavioural science book available. But during that time a combination of my learning path and additional research in the behavioural sciences has led me to see Thinking, Fast and Slow as a book with many flaws."

Continued here: https://jasoncollins.org/2016/06/29/re-reading-kahnemans-thinking-fast-and-slow/

Are smart contracts AI-complete?

11 Stuart_Armstrong 22 June 2016 02:08PM

Many people are probably aware of the hack of the DAO, which used a bug in its smart contract system to steal millions of dollars' worth of the cryptocurrency Ethereum.

There are various arguments as to whether this theft was technically allowed, what should be done about it, and so on. Many people argue that the code is the contract, and that therefore no one should be allowed to interfere with it - the DAO simply made a coding mistake, and is now being (deservedly?) punished for it.

That got me wondering whether it's ever possible to make a smart contract without a full AI of some sort. For instance, if the contract is triggered by the delivery of physical goods - how can you define what the goods are, what constitutes delivery, what constitutes possession of them, and so on? You could have a human confirm delivery - but that's precisely the kind of judgement call you want to avoid. You could have an automated delivery confirmation system - but what happens if someone hacks or triggers that? You could connect it automatically to scanning headlines of media reports, but again, this relies on aggregated human judgement, which could be hacked or influenced.

Digital goods seem more secure, as you can automate confirmation of delivery/services rendered, and so on. But, again, this leaves the confirmation process open to hacking. Which would be illegal, if you're going to profit from the hack. Hum...

This seems the most promising avenue for smart contracts that doesn't involve full AI: clear out the bugs in the code, then ground the confirmation procedure in such a way that it can only be hacked in a way that's already illegal. Sort of use the standard legal system as a backstop, fixing the basic assumptions, and then setting up the smart contracts on top of them (which is not the same as using the standard legal system within the contract).
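To make the oracle problem concrete, here is a toy sketch in Python (not real Ethereum or Solidity code; all names here are invented for illustration) of an escrow-style contract whose payout hinges on a delivery oracle. The point is that every hard judgement call - what the goods are, what counts as delivery - ends up hidden inside the oracle, which is exactly the component that can be hacked or gamed:

```python
# Toy model of an escrow "smart contract" gated by a delivery oracle.
# Illustrative only - this is not how Ethereum contracts are written.

class EscrowContract:
    def __init__(self, buyer, seller, amount, oracle):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.oracle = oracle  # callable: returns True if goods were "delivered"
        self.settled = False

    def settle(self):
        """Pay the seller if the oracle confirms delivery, else refund the buyer."""
        if self.settled:
            raise RuntimeError("contract already settled")
        self.settled = True
        # Every hard question (what are the goods? what counts as delivery?)
        # lives inside the oracle - fool the oracle and you control the payout.
        recipient = self.seller if self.oracle() else self.buyer
        return (recipient, self.amount)

contract = EscrowContract("alice", "bob", 100, oracle=lambda: True)
print(contract.settle())  # -> ('bob', 100)
```

The backstop proposal above then amounts to: accept that the oracle can be subverted, but arrange things so that subverting it is already illegal under the ordinary legal system.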

Review and Thoughts on Current Version of CFAR Workshop

11 Gleb_Tsipursky 06 June 2016 01:44PM

Outline: I will discuss my background and how I prepared for the workshop, and how I would prepare differently if I could do it over again; then my experience at the CFAR workshop, and what I would have done differently; then my take-aways from the workshop, and what I am doing to integrate CFAR strategies into my life; finally, my assessment of its benefits and what other folks who attend the workshop might expect to get.


 

Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post.


 

Introduction

 

Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from its workshops. It fulfills its social mission through conducting rationality research and through giving discounted or free workshops to those people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.

 

To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).


Preparation

 

First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.

 

To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, while being careful not to assume that the actual techniques would match their descriptions in those posts.

 

I also postponed a number of tasks until after the workshop, tying up loose ends. In retrospect, I wish I had not left myself ongoing tasks to handle during the workshop. As part of my leadership of InIn, I coordinate about 50 volunteers, and I wish I had handed those responsibilities to someone else for the duration of the workshop.

 

Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.

 

There were some communication snafus over logistics details before the workshop. It all worked out in the end, but in retrospect I would have hammered out the logistics in advance, to avoid pre-workshop anxiety about how to get there.


Experience

 

The classes were well put together, had interesting examples, and provided useful techniques. For what it's worth, my experience was that reading about the techniques in advance was not harmful, but the versions taught in the CFAR classes were quite a bit better than the existing posts about them - so don't assume you can get the same benefits from reading posts as from attending the workshop. While I was aware of the techniques, the classes presented noticeably optimized versions of them - maybe because of a "broken telephone" effect in the posts, or maybe because CFAR refined them over previous workshops; I'm not sure. I was glad to learn that CFAR considers the workshop they gave us in May satisfactory enough to scale up their workshops, while still improving the content over time.

 

Just as useful as the classes were the conversations held between and after the official classes. Talking about the techniques with fellow aspiring rationalists, and seeing how they were thinking about applying them to their lives, was helpful for sparking ideas about how to apply them to my own. The latter half of the CFAR workshop was especially great, as it focused on pairing people off and helping each other figure out how to apply CFAR techniques and address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.

 

Another super-helpful aspect of the conversations was networking and community building. This may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked to many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA; my most positive conversation there encouraged someone working on x-risk to switch his efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and to promote rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm about becoming rationality communicators.

 

Looking back at my experience, I wish I had been more aware of the benefits of these conversations. I went to sleep early the first couple of nights; I would instead have taken supplements to stay awake and keep the conversations going.


Take-Aways and Integration

 

The aspects of the workshop that I think will help me most were what CFAR staff called "5-second" strategies - brief tactics and techniques that can be executed in 5 seconds or less and address various problems. The material we learned at the workshop that I was already familiar with requires some time to learn and practice, such as Trigger Action Plans, Goal Factoring, Murphyjitsu, and Pre-Hindsight, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic various aspects of the more thorough techniques, and apply them quickly to in-the-moment decision-making.

 

Now, this doesn’t mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with, and already practice. The 5-second versions were more of a revelation for me, and I anticipate will be more helpful for me as I did not know about them previously.

 

Now, CFAR does a very nice job of helping people integrate the techniques into daily life, since a common failure mode of CFAR attendees is going home and not practicing the techniques. So they offer six Google Hangouts with CFAR staff and all attendees who want to participate, four one-on-one sessions with CFAR-trained volunteers or staff, and a pairing with another attendee for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.

 

For integrating CFAR techniques into my life, I found the CFAR strategy of "Overlearning" especially helpful. Overlearning refers to applying a single technique intensely for a while to all aspects of one's activities, so that it gets internalized thoroughly. Following CFAR's advice, I will first focus on overlearning Trigger Action Plans.

 

I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.

 

Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the more simple techniques that are a good fit for the broad audience with which InIn is communicating.


Benefits

 

I have a strong probabilistic belief that having attended the workshop will improve my capacity to be a person who achieves my goals for doing good in the world. I anticipate I will be able to figure out better whether the projects I am taking on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans, and acting on them well. I will also be more in touch with my emotions and intuitions, and be able to trust them more, as I will have more alignment among different components of my mind.

 

Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.

 

These benefits should resonate strongly with aspiring rationalists, but they are really important for EA participants as well. I think one of the best things EA movement members can do is study rationality, and promoting this to the EA movement is part of InIn's work. What we offer are articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the impact we have been able to make, and a number of EA movement members with rationality training have reported similar benefits. Remember, as an EA participant you can likely get a scholarship covering part or all of the regular $3,900 price of the workshop, as I did when attending, and over time you are highly likely to be able to save more lives as a result of attending, even if you have to pay some costs upfront.

 

Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.

 

Improving long-run civilisational robustness

11 RyanCarey 10 May 2016 11:15AM

People trying to guard civilisation against catastrophe usually focus on one specific kind of catastrophe at a time. This can be useful for building concrete knowledge with some certainty in order for others to build on it. However, there are disadvantages to this catastrophe-specific approach:

1. Catastrophe researchers (including Anders Sandberg and Nick Bostrom) think that there are substantial risks from catastrophes that have not yet been anticipated. Resilience-boosting measures may mitigate risks that have not yet been investigated.

2. Thinking about resilience measures in general may suggest new mitigation ideas that were missed by the catastrophe-specific approach.

One analogy for this is that an intrusion (or hack) into a software system can arise from a combination of many minor security failures, each of which might appear innocuous in isolation. You can decrease the chance of an intrusion by adding extra security measures, even without a specific idea of what kind of attack would be performed. Things like being able to power down and reboot a system, storing a backup, and being able to run it in a "safe" offline mode are all standard resilience measures for software systems. These measures aren't necessarily the first thing that would come to mind if you were trying to model a specific risk like a password getting stolen, or a hacker subverting administrative privileges, although they would be very useful in those cases. So mitigating risk doesn't necessarily require a precise idea of the risk to be mitigated. Sometimes it can be done instead by thinking about the principles required for the proper operation of a system - in the case of software, preservation of its clean code - and the avenues through which it is vulnerable - such as the internet.

So what would be good robustness measures for human civilisation? I have a bunch of proposals:

 

Disaster forecasting

Disaster research

* Build research labs to survey and study catastrophic risks (like the Future of Humanity Institute, the Open Philanthropy Project and others)

Disaster prediction

* Prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)

* Expert aggregation and elicitation

 

Disaster prevention

General prevention measures

* Build a culture of prudence in groups that run risky scientific experiments

* Lobby for these mitigation measures

* Improving the foresight and clear-thinking of policymakers and other relevant decision-makers

* Build research labs to plan more risk-mitigation measures (including the Centre for Study of Existential Risk)

Preventing intentional violence

* Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)

* Improve cooperation between nations and large institutions

Preventing catastrophic errors

* Legislating for individuals to be held more accountable for large-scale catastrophic errors that they may make (including by requiring insurance premiums for any risky activities)

 

Disaster response

* Improve political systems to respond to new risks

* Improve vaccine development, quarantine, and other pandemic response measures

* Build systems for disaster notification


Disaster recovery

Shelters

* Build underground bomb shelters

* Provide a sheltered place for people to live with air and water

* Provide (or store) food and farming technologies (cf. Dave Denkenberger's *Feeding Everyone No Matter What*)

* Store energy and energy-generators

* Store reproductive technologies (which could include IVF, artificial wombs or measures for increasing genetic diversity)

* Store information about building the above

* Store information about building a stable political system, and about mitigating future catastrophes

* Store other useful information about science and technology (e.g. reading and writing)

* Store some of the above in submarines

* (maybe) store biodiversity

 

Space Travel

* Grow (or replicate) the International Space Station

* Improve humanity's capacity to travel to the Moon and Mars

* Build sustainable settlements on the Moon and Mars

 

Of course, some caveats are in order. 

To begin with, one could argue that surveilling terrorists is a measure specifically designed to reduce the risk of terrorism. But there are many different scenarios and methods through which a malicious actor could try to inflict major damage on civilisation, so I still regard this as a general robustness measure, granted that there is some subjectivity to all of this. If you know absolutely nothing about the risks you might face, or about the structures in society that are to be preserved, then the exercise is futile. So some of the measures on this list will mitigate a smaller subset of risks than others, and that's unavoidable. Still, I think the list is quite different from the one people come up with under a risk-specific paradigm, which is the reason for the exercise.

Additionally, I'll note that some of these measures are already well funded, while others cannot be implemented cheaply or effectively. But many seem to me to be worth thinking more about.

Additional suggestions for this list are welcome in the comments, as are proposals for their implementation.

 

Related readings

https://www.academia.edu/7266845/Existential_Risks_Exploring_a_Robust_Risk_Reduction_Strategy

http://www.nickbostrom.com/existential/risks.pdf

http://users.physics.harvard.edu/~wilson/pmpmta/Mahoney_extinction.pdf

http://gcrinstitute.org/aftermath

http://sethbaum.com/ac/2015_Food.html

http://the-knowledge.org

http://lesswrong.com/lw/ma8/roadmap_plan_of_action_to_prevent_human/

Collaborative Truth-Seeking

11 Gleb_Tsipursky 04 May 2016 11:28PM

Summary: We frequently use debates to resolve differing opinions about the truth. However, debates are not always the best way of figuring out the truth. In some situations, the technique of collaborative truth-seeking may work better.

 

Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.

 

The Problem with Debates

 

Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.

 

Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates have a specific evolutionary function – not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.

 

We may hope that as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip up within debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I’d be surprised if this didn’t happen with you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones. These strategies could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.


Collaborative Truth-Seeking

 

Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. Collaborative truth-seeking is a modality that should be used among people with shared goals and a shared sense of trust.

 

Some important features of collaborative truth-seeking, which are often absent from debates, are: a focus on the desire to change one's own mind toward the truth; a curious attitude; sensitivity to others' emotions; striving to avoid arousing emotions that will hinder belief updating and truth discovery; and trust that all other participants are doing the same. These can contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.

 

The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:

  • Share weaknesses and uncertainties in your own position

  • Share your biases about your position

  • Share your social context and background as relevant to the discussion

    • For instance, I grew up poor after my family immigrated to the US when I was 10, which naturally leads me to care more about poverty than about some other issues, and to have some biases around it - this is one reason I prioritize poverty in my Effective Altruism engagement

  • Vocalize curiosity and the desire to learn

  • Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word



Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

  • Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating

  • Empathize: try to empathize with the other perspective that you do not hold by considering where their viewpoint came from, why they think what they do, and recognizing that they feel that their viewpoint is correct

  • Keep calm: be prepared with emotional management to calm your emotions and those of the people you engage with when a desire for debate arises

    • watch out for defensiveness and aggressiveness in particular

  • Go slow: take the time to listen fully and think fully

  • Consider pausing: have an escape route for complex thoughts and emotions if you can’t deal with them in the moment by pausing and picking up the discussion later

    • say “I will take some time to think about this,” and/or write things down

  • Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts

  • Be open: orient toward improving the other person’s points to argue against their strongest form

  • Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter if they are yours or those of others

  • Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X" but instead to use questions, such as "what do you think X implies about your argument?"

  • Be specific and concrete: go down levels of abstraction

  • Be clear: make sure the semantics are clear to all by defining terms

  • Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible

    • For instance, avoid saying that X is absolutely true, but say that you think there's an 80% chance it's the true position

    • Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought

  • When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since that is evidence that your position is not very convincing

  • Confirm your sources: look up information when it's possible to do so (Google is your friend)

  • Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you

  • Use the reversal test to check for status quo bias

    • If you are discussing whether to change some specific numeric parameter - say increase by 50% the money donated to charity X - state the reverse of your positions, for example decreasing the amount of money donated to charity X by 50%, and see how that impacts your perspective

  • Use CFAR’s double crux technique

    • In this technique, two parties who hold different positions on an argument each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one: if it were proven incorrect, they would change their perspective. Then look for experiments that can test the crux, and repeat as needed. If a person identifies more than one reason as crucial, you can go through each in turn. More details are here.
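The "be probabilistic" and "update somewhat toward their position" suggestions above can be made concrete with a small Bayesian update. This is a toy illustration of my own, with made-up numbers, not part of any CFAR material:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# You start 80% confident in your position. Someone you respect hears your
# best argument and still doesn't budge. Suppose you'd expect that outcome
# 30% of the time if you were right, but 60% of the time if you were wrong.
posterior = bayes_update(0.80, 0.30, 0.60)
print(round(posterior, 2))  # -> 0.67: update somewhat toward their view
```

The exact numbers are guesses, but the direction of the update is not: an unconvinced, respected interlocutor is evidence against your position, so your confidence should drop a little rather than stay pinned at 80%.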


Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.


Conclusion

 

Engaging in collaborative truth-seeking goes against our natural impulse to win debates, and is thus more cognitively costly. It also tends to take more time and effort than simply debating, and it is easy to slip back into debate mode even while using it, because debate mode is so intuitive.

 

Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, compared to protracted, tiring, and emotionally challenging debates. On the other hand, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of disagreement for making wise decisions and figuring out the truth. Fortunately, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.

 

Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Rationality venues are natural places to try out collaborative truth-seeking.

 

 

 

[Link] White House announces a series of workshops on AI, expresses interest in safety

11 AspiringRationalist 04 May 2016 02:50AM

On Making Things

11 Gram_Stone 05 March 2016 03:26AM

(Content note: This is basically just a story about how I accidentally briefly made something that I find very unfun into something very fun, for the sake of illustrating how surprising it was and how cool it would be if everyone could do things like this more often and deliberately. You also might get a kick out of this story in the way that you might get a kick out of How It's Made, or many of Swimmer963's posts on swimming and nursing, or Elo's post on wearing magnetic rings. If none of that interests you, then you might consider backing out now.)

I'm learning math under the tutelage of a friend, and I go through a lot of paper. I write a lot of proofs so there can be plenty of false starts. I could fill a whole sheet of paper, decide that I only need one result to continue on my way, and switch to a blank sheet. Since this is how I go about it, I thought that a whiteboard would be a really good idea. The solution is greater surface area and practical erasure.

I checked Amazon; whiteboards are one of those products with polarized reviews. I secretly wondered if ten percent of all whiteboards manufactured don't just immediately permanently stain. Maybe I was being a little risk-averse, but I decided to hold off on buying one.

Then I remembered that I make signs for a living, and I realized that I could probably just make a whiteboard myself.

I had a good rapport with my supervisor. I have breaks and lunch time, and the boundaries are kind of fuzzy, so the time wouldn't be an issue. I didn't have to print anything, so I wouldn't be taking up time on the printers or using ink.

Maybe everyone knows what 'vinyl' is and I don't need to explain this, but the stuff that 'PVC pipes' (PVC stands for polyvinyl chloride) are made out of can be formed into thin elastic sheets. Manufacturers apply adhesive and paper backing to these sheets and sell them to people so they can pull off the paper and stick the vinyl to stuff. You can print on some of it too. It comes on long rolls, typically 54 in. or 60 in., sort of like tape or paper towels. If you ever see a vehicle that belongs to a business with all sorts of art all over it, then it's probably printed on vinyl.

It's kind of hard to print on a really short roll without everything going horribly awry, so we have tons of rolls with like 10 ft. by 54 in. sheets on them that just get thrown away.

If you scratch a vinyl print, the ink will come right off. So we laminate the vinyl before we apply it. Most of our products are laminated with a laminate by the enigmatic name of '8518', but today we happened to be using a very particular and rarely used dry erase laminate. So naturally I ran one of those extra sheets of vinyl through the laminator after I finished the job that I was really supposed to be doing.

And we keep these things called 'drops', which are just sheets of substrate material, stuff that you might apply vinyl to or print on, that were cut off from other things that were made into signs, and then never touched again. Sometimes you can make a sign out of one. People forget about them and don't like to use them because they're usually dirtier and more damaged than stock substrate, so we have a ton of them. It might be corrugated plastic (like cardboard, but plastic), or foamboard (two pieces of paper glued to a sheet of foam), or much thicker, non-elastic PVC.

And this is when I started to think that this was becoming a kind of important experience.

I looked at the drops lined up on the shelf. I definitely didn't want to use foamboard; it's extremely fragile, you can't pull the vinyl off if you mess up, it would dent when I pressed too hard with the marker, and it most generally sucks in every way possible except cost. Corrugated plastic is also quite fragile, and it has linear indentations between the flutes that vinyl would conform to; I wanted the board to be flat. PVC is a better alternative than both, but drops can sit for a long time, and large sheets of PVC warp under their own weight; I wanted a relatively large board and I didn't want it to be warped. So I went for a product that we refer to as 'MaxMetal'; two sheets of aluminum sandwiched around a thicker sheet of plastic. It's much harder to warp, and I could be confident that it would be a solid writing surface. PVC is solid, but it's not metal.

I was looking through the MaxMetal drops, trying to find the right one, realizing that I hadn't decided what dimensions I wanted the board to be, and I felt a little jump in my chest. That was me finally noticing how much fun I was having. And immediately after that, I realized that even though I had implicitly expected to do everything that I had done, I was surprised at how much fun I was having. I had failed to predict how much fun I would have doing those things. It seemed like something worth fixing.

I finally chose a precisely cut piece that was approximately 30 in. wide by 24 in. high. And then I made the board. I separated some of the vinyl from the backing, and I cut off a strip of backing, and I applied part of the vinyl sheet to one edge of the board. I put the end of the sheet with the strip of stuck vinyl between two mechanical rollers, left the substrate flat, flipped the vinyl sheet over the top of the machine and past the top of the substrate sheet, pulled up more of the backing, and rolled it through to press the two sheets together while I pulled the backing off of the vinyl. I put the product on a table, turned it upside down, cut off the excess vinyl with my trusty utility knife, and rounded the corners off by half an inch for safety and aesthetics. I took an orange Expo marker to it, and made a giant signature, and it worked. A microfiber rag erased it just fine even after letting it sit for half an hour. I cut off some super heavy duty, I-promise-this-is-safe double-sided tape, rolled it up, and took it home, so I could mount the board to my bedroom wall. I made a pretty snazzy whiteboard for myself. It was cool.

There probably aren't a lot of signmakers on LessWrong, but there are a lot of programmers. I don't see them talk about this experience a lot, but I figure it's pretty similar; what it feels like to use something that you made, or watch it work. And I'm sure there are other people with other things.

But it seems worth saying explicitly, "Maybe you should make stuff because it's fun."

That was my main explanation for how fun it was, for awhile. But there were a lot of other things when I thought about it more.

I technically had to solve problems, but they were relatively simple and rewarding to solve.

It felt a little forbidden, doing something creative for yourself at work when you're really only there to stay alive. Even a lame taboo is usually a nice kick.

And my time was taken up by responsibility, I was doing real work between all of those steps, so I could look forward to the next step in the creation process while doing something that I normally drag myself through. The day flew by when I started making that thing. When could I fit in some time for my whiteboard?

And it was fun because the meta-event was interesting; I never thought that I could do exactly the same work activity, and a small context change would change it from boring, old work to fun. I was laminating vinyl and fetching drops and rounding corners, but it wasn't for a vehicle wrap, or a sign, or a magnet; it was for my whiteboard, and that changed everything. I was glad that I noticed that, and hopeful that I could find a way to deliberately apply it in the future.

And I was using in-demand, non-universal skills that many people could acquire, but not instantly. It was cool to feel like I was being resourceful in a very particular way that most people never would.

And there weren't too many choices, and the choices weren't ambiguous. The dimensions of the board, including thickness, were limited to the dimensions of the drops, and I'd have to make very precise cuts through a hard material if I wanted a board that wasn't the size of an existing one. A whiteboard is mostly a plain white surface, there isn't much design to be done. I only had quarter-inch and half-inch corner rounders; it's one of those or square corners. What if I had more choices, either about the design of the board, or in a different domain with way more choices by default? I might be a human and regret every choice that I actually make because all of those other foregone choices combined are so much more salient.

And it seems helpful that the whiteboard was being made for a noble purpose: so that I could conserve paper and continue to study mathematics at the same time, and do so much more conveniently. I think it would have been less fun if I was making a whiteboard so that I could see what it's like to snap a whiteboard in half with cinder blocks and a bowling ball, or if I was making one because I just thought it would be cool to have one.

And instead of paying $30-$50, I paid nothing. It felt like I won.

I've thought for quite a while, but not on this level, that there should be an applied fun theory; that it seemed a bit strange that you wouldn't go further with the idea that you could find deliberate ways to make your world more fun, and try to make the present more fun, as opposed to just the distant future. And not in the way where you critically examine the suggestions that people usually generate when you ask for a list of activities that are popularly considered fun, but in the way where you predict that things are fun because you understand how fun works, and your predictions come true. Hopefully I offered up something interesting with respect to that line of inquiry.

But of course, fun seems like just the sort of thing that you could easily overthink. At the very least it's not the sort of domain where you want deep theories that don't generate practical advice for too long. But I still think it seems worth thinking about.

AIFoom Debate - conclusion?

11 Bound_up 04 March 2016 08:33PM

I've been going through the AIFoom debate, and both sides make sense to me. I intend to continue, but I'm wondering if there are already insights in LW culture I can get if I just ask for them.

 

My understanding is as follows:

 

The difference between a chimp and a human is only 5 million years of evolution. That's not time enough for many changes.

 

Eliezer takes this as proof that the difference between the two in the brain architecture can't be much. Thus, you can have a chimp-intelligent AI that doesn't do much, and then with some very small changes, suddenly get a human-intelligent AI and FOOM!

 

Robin takes the 5-million year gap as proof that the significant difference between chimps and humans is only partly in the brain architecture. Evolution simply can't be responsible for most of the relevant difference; the difference must be elsewhere.

So he concludes that when our ancestors got smart enough for language, culture became a thing. Our species stumbled across various little insights into life, and these got passed on. An increasingly massive base of cultural content, made of very many small improvements is largely responsible for the difference between chimps and humans.

Culture assimilated new information into humans much faster than evolution could.

So he concludes that you can get a chimp-level AI, and to get up to human-level will take, not a very few insights, but a very great many, each one slowly improving the computer's intelligence. So no Foom, it'll be a gradual thing.

 

So I think I've figured out the question. Is there a commonly known answer, or are there insights towards the same?

Intentional Insights and the Effective Altruism Movement – Q & A

11 Gleb_Tsipursky 02 January 2016 07:43PM

This post is cross-posted on the EA forum and is mainly of interest to EAs. It focuses on the engagement of Intentional Insights with the EA movement, and does not address the engagement of InIn with promoting rationality-informed strategies, which is a hotly-debated issue.

 

 

Introduction

I wanted to share InIn’s background and goals and where we see ourselves as fitting within the EA movement. I also wanted to allow all of you a chance to share your opinions about the benefits and drawbacks of what InIn is doing, put forth any reservations, concerns, and risks, and provide suggestions for optimization.

 

Background

InIn began in January 2014, when my wife and I decided to create an organization dedicated to marketing rational, evidence-based thinking in all areas of life, especially charitable giving, to a broad audience. We decided to do so after looking around for organizations that would provide marketing resources for our own local activism in Columbus, OH, conveying these ideas to a broad public, and finding none. So we decided: if not us, then who? If not now, then when? My wife would use her experience in nonprofits to run the organization, while I would use my experience as a professor to work on content and research.

 

We gathered together a group of local aspiring rationalists and Effective Altruists interested in the project, and launched the organization publicly in September 2014. We got our 501(c)(3) nonprofit status, began running various content marketing experiments, and established the internal infrastructure. We also built up a solid audience in the secular and skeptical market, which we saw as the easiest-to-reach audience for promoting effective giving and rational thinking. By the early fall of 2015, we had established some connections and a reputation, as well as a solid social media following, and our articles began to be accepted in prominent venues that reach a broad audience, such as The Huffington Post and Lifehack. At that point, we felt comfortable enough to begin our active engagement with the EA movement, as we felt we could provide added value.

 

Fit in EA Movement

As an Effective Altruist, I have long seen opportunities for optimizing the marketing of EA ideas using research-based, modern content marketing strategies. I did not feel comfortable speaking out about that until the early fall of 2015, when I had built up InIn enough to speak from a position of some expertise, and to demonstrate right away the benefit we could bring by publishing widely-shared articles that promoted EA messages.

 

Looking back, I wish I had started engaging with the EA Forum sooner. It was a big mistake on my part that caused some EAs to treat InIn as a sudden outsider that burst on the scene. Also, our early posts were perceived as too self-promotional. I guess this is not surprising, looking back – although the goal was simply to demonstrate our value, the content marketing nature of our work does show through. Ah well, lessons learned and something to update on for the future.

 

As InIn has become more engaged in various projects within the EA movement, we have begun to settle on how to add value to the EA community and have formulated our plans for future work.

 

1) We are promoting EA-themed effective giving ideas to a broad audience through publishing shareable articles in prominent venues.

 

1A) Note: we focus on spreading ideas like effective giving without overtly associating them with the Effective Altruism movement, though we leave buried hooks to EA in the articles. This approach has the benefit of minimizing the risk of diluting the movement with less value-aligned members, while leaving opportunities for those who are more value-aligned to find the EA movement. Likewise, we don't emphasize EA because we believe that overt use of labels can lead some people to perceive our messages as ideological, which would undermine our ability to build rapport with them.

 

2) We are specifically promoting effective giving to the secular and skeptic community, as we see this audience as more likely to be value aligned, and also have strong existing connections with this audience.

 

3) We are providing content and social media marketing consulting to the EA movement, both EA meta-charities and prominent direct-action charities.

 

4) We are collaborating with EA meta-charities to boost the marketing capacity of the EA movement as a whole.

 

5) We are helping build EA capacity around effective decision-making and goal achievement through providing foundational rationality knowledge.

 

6) By using content marketing to promote rationality to a broad audience, we are aiming to help people be more clear-thinking, long-term oriented, empathetic, and utilitarian. This not only increases their own flourishing, but also expands their circles of caring beyond biases based on geographical location (drowning child problem), species (non-human animals), and temporal distance (existential risk).

 

Conclusion

InIn is engaged in both EA capacity-building and movement-building, but movement-building of a new type: oriented not toward directing people into the EA movement, but toward getting EA habits of thinking into the broader world. I specifically chose not to include our achievements in this post, as I had previously fallen into the trap of including too much and being perceived as self-promotional as a result. However, if you wish, you can learn more about the organization and its activities at this link.


What are your impressions on the value of this fit of InIn within the EA movement and our plans, including advantages and disadvantages, as well as suggestions for improvement? We are always eager to learn and improve based on feedback from the community.

 

 

 

Why You Should Be Public About Your Good Deeds

11 Gleb_Tsipursky 30 December 2015 04:06AM

(This will be mainly of interest to Effective Altruists, and is cross-posted on the Giving What We Can blog, the Intentional Insights blog, and the EA Forum)

 

When I first started donating, I did so anonymously. My default is to be humble and avoid showing off. I didn’t want others around me to think that I have a stuffed head and hold too high an opinion of myself. I also didn’t want them to judge my giving decisions, as some may have judged them negatively. I also had cached patterns of associating sharing about my good deeds publicly with feelings that I get from commercials, of self-promotion and sleaziness.

I wish I had known back then that I could have done much more good by publicizing my donations and other good deeds, such as signing the Giving What We Can pledge to donate 10% of my income to effective charities, or being public about my donations to CFAR in this LW forum post.

Why did I change my mind about being public? Let me share a bit of my background to give you the appropriate context.

As long as I can remember, I have been interested in analyzing how and why individuals and groups evaluated their environment and made their decisions to reach their goals – rational thinking. This topic became the focus of my research as a professor at Ohio State in the history of science, studying the intersection of psychology, cognitive neuroscience, behavioral economics, and other fields.

While most of my colleagues focused on research, I grew more passionate about sharing my knowledge with others, focusing my efforts on high-quality, innovative teaching. I perceived my work as cognitive altruism, sharing my knowledge about rational thinking, and students expressed much appreciation for my focus on helping them make better decisions in their lives. Separately, I engaged in anonymous donations to causes such as poverty alleviation.

Yet over time, I realized that by teaching only in the classroom, I would have a very limited impact, since my students were only a small minority of the population I could potentially reach. I began to consult academic literature on how to spread my knowledge broadly. Through reading classics in the field of social influence such as Influence: The Psychology of Persuasion and Made To Stick, I learned a great many strategies to multiply the impact of my cognitive altruism work, as well as my charitable giving.

One of the most important lessons was the value of being public about my activities. Both Influence: The Psychology of Persuasion and subsequent research showed that our peers deeply impact our thoughts, feelings, and behaviors. We tend to evaluate ourselves based on what our peers think of us, and try to model behaviors that will cause others to have positive opinions of us. This applies not only to in-person meetings, but also to online communities.

A related phenomenon, social proof, illustrates how we evaluate appropriate behavior based on how we see others behaving. However, research also shows that people who exhibit more beneficial behaviors tend to avoid expressing themselves to those with less beneficial behaviors, resulting in overall social harm.

Learning about the importance of being public, especially for people engaged in socially beneficial habits, and especially in online communities that reach far more people than in-person ones, led to a deep transformation in my civic engagement. While it was not easy to overcome my shyness, I realized I had to do so if I wanted to optimize my positive impact on the world, both in cognitive altruism and in effective giving.

I shared this journey of learning and transformation with my wife, Agnes Vishnevkin, an MBA and non-profit professional. Together, we decided to co-found a nonprofit dedicated to spreading rational thinking and effective giving to a broad audience using research-based strategies for maximizing social impact, Intentional Insights. Uniting with others committed to this mission, we write articles, blogs, make videos, author books, program apps, and collaborate with other organizations to share these ideas widely.

I also rely on research to make other decisions, such as my decision to take the Giving What We Can pledge. The strategy of precommitment is key here: we make a decision when we have the time to consider its long-term consequences, and specifically wish to constrain the options of our future selves. That way, we can plan within a narrowed range of options and make the best possible use of the resources available to us.

Thus, I can plan to live on 90% of my income over my lifetime, and plan to decrease some of my spending in the long term so that I can give to charities that I believe are most effective for making the kind of impact I want to see in the world.

Knowing the importance of publicizing good deeds and commitments, I recognize that I can do much more good by sharing my decision to take the pledge with others. All of us have friends, the large majority of us have social media channels, and we all have the power to be public about our good deeds. You can also consider fundraising for effective charities, and being an advocate for effective altruism in your community.

According to the scholarly literature, by being public about our good deeds we can bring about much good in the world. Even though it may not feel as tangible as direct donations, sharing with others about our good deeds and supporting others doing so may in the end allow us to do even more good.

Deadly sins of software estimation

11 NancyLebovitz 22 December 2015 01:38PM

This is so remarkably sensible I think it deserves its own article.

It's a pdf of the slides from a lecture, and should help with the planning fallacy.

A few highlights: Distinguish between targets and estimates. Don't make estimates before you know very much about the project. Estimates are probability statements. Best assumption is that a new tool or method will lead to productivity loss.
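The slide's point that estimates are probability statements, not single numbers, can be made concrete with a quick Monte Carlo sketch. This is a hypothetical illustration, not from the slides; the task medians and the lognormal spread are made-up assumptions:

```python
import math
import random
import statistics

def simulate_project(task_medians_days, sigma=0.5, trials=10_000):
    """Treat each task's point estimate as the median of a lognormal
    duration distribution, and sum sampled durations across tasks to
    get a distribution of completion times instead of a single number."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.lognormvariate(math.log(m), sigma)
                          for m in task_medians_days))
    return totals

totals = simulate_project([5, 8, 3])  # hypothetical task medians, in days
p50 = statistics.median(totals)
p90 = statistics.quantiles(totals, n=10)[8]  # roughly the 90th percentile
print(f"Even odds of finishing within {p50:.1f} days")
print(f"90% confident of finishing within {p90:.1f} days")
```

On this view, a single-number estimate is implicitly the p50 of such a distribution, while a target is a line you draw on the distribution and then read off the probability of hitting.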

Promoting rationality to a broad audience - feedback on methods

11 Gleb_Tsipursky 30 November 2015 04:52AM

We at Intentional Insights​, the nonprofit devoted to promoting rationality and effective altruism to a broad audience, are finalizing our Theory of Change (a ToC conveys our goals, assumptions, methods, and metrics). Since there has recently been extensive discussion on LessWrong of our approaches to promoting rationality and effective altruism to a broad audience, discussion that was quite helpful for our updating, I'd like to share our Theory of Change with you and ask for your feedback.

 

Here's the Executive Summary:

  • The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions that lead to mutual flourishing.
  • To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.
  • We assume that:
    • some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
    • problematic decision making undermines mutual flourishing in a number of life areas.
    • these flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
    • we can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.
  • Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.
  • Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.
  • Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the full version.

 

I'd appreciate any feedback on the full version from fellow Less Wrongers, on things like content, concepts, structure, style, grammar, etc. I look forward to updating the organization's goals, assumptions, methods, and metrics based on your thoughts. Thanks!

[Link] Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"

10 Jacobian 28 September 2016 04:43PM

2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)

10 ingres 10 September 2016 03:51AM

Politics

The LessWrong survey has a very involved section dedicated to politics. In previous analyses, the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but also at which beliefs are associated with each affiliation. The charts below summarize most of the results.

Political Opinions By Political Affiliation

(charts omitted)

Miscellaneous Politics

There were also some other questions in this section which aren't covered by the above charts.

PoliticalInterest

On a scale from 1 (not interested at all) to 5 (extremely interested), how would you describe your level of interest in politics?

1: 67 (2.182%)

2: 257 (8.371%)

3: 461 (15.016%)

4: 595 (19.381%)

5: 312 (10.163%)

Voting

Did you vote in your country's last major national election? (LW Turnout Versus General Election Turnout By Country)
Group          Turnout
LessWrong      68.9%
Australia      91%
Brazil         78.90%
Britain        66.4%
Canada         68.3%
Finland        70.1%
France         79.48%
Germany        71.5%
India          66.3%
Israel         72%
New Zealand    77.90%
Russia         65.25%
United States  54.9%
Numbers taken from Wikipedia, accurate as of the last general election in each country listed at time of writing.

AmericanParties

If you are an American, what party are you registered with?

Democratic Party: 358 (24.5%)

Republican Party: 72 (4.9%)

Libertarian Party: 26 (1.8%)

Other third party: 16 (1.1%)

Not registered for a party: 451 (30.8%)

(option for non-Americans who want an option): 541 (37.0%)

Calibration And Probability Questions

Calibration Questions

I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read, which is why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.

All my calibration questions were meant to satisfy a few essential properties:

  1. They should be 'self-contained', i.e., something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
  2. They should, at least to a certain extent, be Fermi Estimable.
  3. They should progressively scale in difficulty, so you can see whether somebody understands basic probability (e.g., in an 'or' question, do they put a probability of less than 50% on being right?).

At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
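Once the answers are in a readable format, the calibration check itself is straightforward. Here is a minimal sketch; the 10%-wide bucketing scheme and the sample answers are my own hypothetical choices, not survey data:

```python
from collections import defaultdict

def calibration_table(predictions):
    """predictions: list of (stated confidence in percent, was_correct).
    Group answers into 10%-wide confidence buckets and compare the
    stated confidence against the actual fraction correct in each."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[10 * int(confidence // 10)].append(correct)
    return {bucket: (len(hits), sum(hits) / len(hits))
            for bucket, hits in sorted(buckets.items())}

# hypothetical respondent: confidence stated per answer, then graded
sample = [(50, True), (55, False), (70, True), (75, True),
          (90, True), (95, False), (95, True), (95, True)]
for bucket, (n, rate) in calibration_table(sample).items():
    print(f"{bucket}-{bucket + 9}%: {n} answers, {rate:.0%} correct")
```

A well-calibrated respondent's hit rate in each bucket lands near the bucket's stated confidence; systematic gaps in one direction indicate over- or underconfidence.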

Probability Questions

  • Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? (mean 49.821, median 50.0, mode 50.0, stdev 3.033)
  • What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? (mean 44.599, median 50.0, mode 50.0, stdev 29.193)
  • What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? (mean 75.727, median 90.0, mode 99.0, stdev 31.893)
  • ...in the Milky Way galaxy? (mean 45.966, median 50.0, mode 10.0, stdev 38.395)
  • What is the probability that supernatural events (including God, ghosts, magic, etc.) have occurred since the beginning of the universe? (mean 13.575, median 1.0, mode 1.0, stdev 27.576)
  • What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? (mean 15.474, median 1.0, mode 1.0, stdev 27.891)
  • What is the probability that any of humankind's revealed religions is more or less correct? (mean 10.624, median 0.5, mode 1.0, stdev 26.257)
  • What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? (mean 21.225, median 10.0, mode 5.0, stdev 26.782)
  • What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? (mean 25.263, median 10.0, mode 1.0, stdev 30.510)
  • What is the probability that our universe is a simulation? (mean 25.256, median 10.0, mode 50.0, stdev 28.404)
  • What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? (mean 83.307, median 90.0, mode 90.0, stdev 23.167)
  • What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? (mean 76.310, median 80.0, mode 80.0, stdev 22.933)

 

The probability questions are probably the part of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to advice on how to make good predictions, etc.
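For reference, each per-question row above is just four summary statistics over the raw responses, all available in Python's standard library. The responses below are hypothetical, not actual survey data:

```python
import statistics

def summarize(responses):
    """The four statistics reported per question: mean, median,
    mode, and sample standard deviation."""
    return {"mean": statistics.mean(responses),
            "median": statistics.median(responses),
            "mode": statistics.mode(responses),
            "stdev": statistics.stdev(responses)}

# hypothetical answers to one probability question, in percent
responses = [50, 50, 90, 10, 1, 50, 80, 30]
print(summarize(responses))
```

The mode being a round number like 50 or 1, as in many rows above, is typical of probability elicitation: respondents cluster on salient round values.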

Futurology

This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to those from previous years.

Cryonics

Cryonics

Are you signed up for cryonics?

Yes - signed up or just finishing up paperwork: 48 (2.9%)

No - would like to sign up but unavailable in my area: 104 (6.3%)

No - would like to sign up but haven't gotten around to it: 180 (10.9%)

No - would like to sign up but can't afford it: 229 (13.8%)

No - still considering it: 557 (33.7%)

No - and do not want to sign up for cryonics: 468 (28.3%)

Never thought about it / don't understand: 68 (4.1%)

CryonicsNow

Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?

Yes: 106 (6.6%)

Maybe: 1041 (64.4%)

No: 470 (29.1%)

Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.

sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";

14

sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");

34

CryonicsPossibility

Do you think cryonics works in principle?

Yes: 802 (49.3%)

Maybe: 701 (43.1%)

No: 125 (7.7%)

LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.

The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.

Singularity

SingularityYear

By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.

Mean: 8.110300081581755e+16

Median: 2080.0

Mode: 2100.0

Stdev: 2.847858859055733e+18

I didn't bother to filter out the silly answers for this.

Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
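Filtering could be as simple as dropping answers outside a plausible window before summarizing. A minimal sketch of what that might look like; the cutoffs and toy data here are my own illustrative choices, not the survey's:

```python
import statistics

def summarize_years(answers, lo=2016, hi=3000):
    """Summary stats after dropping answers outside a plausible window.

    The lo/hi cutoffs are arbitrary illustrative choices."""
    kept = [a for a in answers if lo <= a <= hi]
    return {"n": len(kept),
            "mean": statistics.mean(kept),
            "median": statistics.median(kept),
            "stdev": statistics.pstdev(kept)}

# Toy data: one absurd answer would otherwise drag the mean to ~2e16,
# much like the survey's unfiltered mean above.
print(summarize_years([2040, 2060, 2080, 2100, 9e16])["median"])  # 2070.0
```

With the outlier dropped, the mean and median land in the same century instead of sixteen orders of magnitude apart.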

Genetic Engineering

ModifyOffspring

Would you ever consider having your child genetically modified for any reason?

Yes: 1552 (95.921%)

No: 66 (4.079%)

Well that's fairly overwhelming.

GeneticTreament

Would you be willing to have your child genetically modified to prevent them from getting an inheritable disease?

Yes: 1387 (85.5%)

Depends on the disease: 207 (12.8%)

No: 28 (1.7%)

I find it amusing how the strict "No" group shrinks considerably after this question.

GeneticImprovement

Would you be willing to have your child genetically modified for improvement purposes? (eg. To heighten their intelligence or reduce their risk of schizophrenia.)

Yes : 0 (0.0%)

Maybe a little: 176 (10.9%)

Depends on the strength of the improvements: 262 (16.2%)

No: 84 (5.2%)

Yes, I know 'Yes' is bugged; I don't know what causes this bug, and despite my best efforts I couldn't track it down. There is also an issue here where 'reduce their risk of schizophrenia' is offered as an example, which might confuse people, but the actual science of things cuts closer to that than it does to a clean separation between disease risk and 'improvement'.

 

This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.

sqlite> select count(*) from data where GeneticImprovement="Yes";

1100

>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217

67.8% are willing to genetically engineer their children for improvements.

GeneticCosmetic

Would you be willing to have your child genetically modified for cosmetic reasons? (eg. To make them taller or have a certain eye color.)

Yes: 500 (31.0%)

Maybe a little: 381 (23.6%)

Depends on the strength of the improvements: 277 (17.2%)

No: 455 (28.2%)

These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.


GeneticOpinionD

What's your overall opinion of other people genetically modifying their children for disease prevention purposes?

Positive: 1177 (71.7%)

Mostly Positive: 311 (19.0%)

No strong opinion: 112 (6.8%)

Mostly Negative: 29 (1.8%)

Negative: 12 (0.7%)

GeneticOpinionI

What's your overall opinion of other people genetically modifying their children for improvement purposes?

Positive: 737 (44.9%)

Mostly Positive: 482 (29.4%)

No strong opinion: 273 (16.6%)

Mostly Negative: 111 (6.8%)

Negative: 38 (2.3%)

GeneticOpinionC

What's your overall opinion of other people genetically modifying their children for cosmetic reasons?

Positive: 291 (17.7%)

Mostly Positive: 290 (17.7%)

No strong opinion: 576 (35.1%)

Mostly Negative: 328 (20.0%)

Negative: 157 (9.6%)

All three of these seem largely consistent with people's personal preferences about modification. Were I so inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.
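For what it's worth, that row-by-row analysis would be short. A sketch with pandas, assuming the survey's field names; the integer codings for the answer strings are my own, and the toy rows stand in for the real data:

```python
import pandas as pd

# Hypothetical integer codings for the answer strings (not from the survey).
OWN_SCALE = {"Yes": 3, "Depends on the strength of the improvements": 2,
             "Maybe a little": 1, "No": 0}
OTHER_SCALE = {"Positive": 2, "Mostly Positive": 1, "No strong opinion": 0,
               "Mostly Negative": -1, "Negative": -2}

def preference_correlation(df: pd.DataFrame) -> float:
    """Spearman correlation (Pearson on ranks) between willingness to modify
    one's own children for improvement and opinion of others doing the same."""
    own = df["GeneticImprovement"].map(OWN_SCALE)
    other = df["GeneticOpinionI"].map(OTHER_SCALE)
    return own.rank().corr(other.rank())

# Toy rows standing in for the real survey responses:
toy = pd.DataFrame({
    "GeneticImprovement": ["Yes", "No", "Maybe a little"],
    "GeneticOpinionI": ["Positive", "Negative", "No strong opinion"],
})
print(round(preference_correlation(toy), 2))  # 1.0
```

Spearman is used rather than Pearson because both columns are ordinal Likert-style answers, not interval measurements.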

Technological Unemployment

LudditeFallacy

Do you think the Luddite's Fallacy is an actual fallacy?

Yes: 443 (30.936%)

No: 989 (69.064%)

We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.

UnemploymentYear

By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.

Mean: 2102.9713740458014

Median: 2050.0

Mode: 2050.0

Stdev: 1180.2342850727339

This question is flawed because you can't distinguish answers of "never happen" from people who just didn't see the question.

Interesting question that would be fun to take a look at in comparison to the estimates for the singularity.

EndOfWork

Do you think the "end of work" would be a good thing?

Yes: 1238 (81.287%)

No: 285 (18.713%)

Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.

EndOfWorkConcerns

If machines end all or almost all employment, what are your biggest worries? Pick two.

Question Count Percent
People will just idle about in destructive ways 513 16.71%
People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst 543 17.687%
The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty 1066 34.723%
The machines won't need us, and we'll starve to death or be otherwise liquidated 416 13.55%
This question is flawed because it demanded the user 'pick two' instead of up to two.

The plurality of worries are about elites who refuse to share their wealth.

Existential Risk

XRiskType

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?

Nuclear war: +4.800% 326 (20.6%)

Asteroid strike: -0.200% 64 (4.1%)

Unfriendly AI: +1.000% 271 (17.2%)

Nanotech / grey goo: -2.000% 18 (1.1%)

Pandemic (natural): +0.100% 120 (7.6%)

Pandemic (bioengineered): +1.900% 355 (22.5%)

Environmental collapse (including global warming): +1.500% 252 (16.0%)

Economic / political collapse: -1.400% 136 (8.6%)

Other: 35 (2.217%)

Significantly more people are worried about nuclear war than last year. Is this an effect of new respondents, or of the geopolitical situation? Who knows.

Charity And Effective Altruism

Charitable Giving

Income

What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.

Sum: 66054140.47384

Mean: 64569.052271593355

Median: 40000.0

Mode: 30000.0

Stdev: 107297.53606321265

IncomeCharityPortion

How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000

Sum: 2389900.6530000004

Mean: 2914.5129914634144

Median: 353.0

Mode: 100.0

Stdev: 9471.962766896671
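Dividing the two sums gives a rough aggregate donation rate for the sample; note the two questions had different numbers of respondents, so this is only a ballpark figure:

```python
# Totals copied from the Income and IncomeCharityPortion questions above.
income_total = 66054140.47
donation_total = 2389900.65

rate = donation_total / income_total
print(f"{rate:.1%}")  # 3.6%
```

About 3.6% of reported income went to charity in aggregate, well above typical general-population rates but well below the 10% pledge some Effective Altruists take.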

XriskCharity

How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?

Sum: 169300.89

Mean: 1991.7751764705883

Median: 200.0

Mode: 100.0

Stdev: 9219.941506342007

CharityDonations

How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.

Question Sum Mean Median Mode Stdev
Against Malaria Foundation 483935.027 1905.256 300.0 None 7216.020
Schistosomiasis Control Initiative 47908.0 840.491 200.0 1000.0 1618.785
Deworm the World Initiative 28820.0 565.098 150.0 500.0 1432.712
GiveDirectly 154410.177 1429.723 450.0 50.0 3472.082
Any kind of animal rights charity 83130.47 1093.821 154.235 500.0 2313.493
Any kind of bug rights charity 1083.0 270.75 157.5 None 353.396
Machine Intelligence Research Institute 141792.5 1417.925 100.0 100.0 5370.485
Any charity combating nuclear existential risk 491.0 81.833 75.0 100.0 68.060
Any charity combating global warming 13012.0 245.509 100.0 10.0 365.542
Center For Applied Rationality 127101.0 3177.525 150.0 100.0 12969.096
Strategies for Engineered Negligible Senescence Research Foundation 9429.0 554.647 100.0 20.0 1156.431
Wikipedia 12765.5 53.189 20.0 10.0 126.444
Internet Archive 2975.04 80.406 30.0 50.0 173.791
Any campaign for political office 38443.99 366.133 50.0 50.0 1374.305
Other 564890.46 1661.442 200.0 100.0 4670.805
"Bug Rights" charity was supposed to be a troll fakeout but apparently...

This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.

Effective Altruism

Vegetarian

Do you follow any dietary restrictions related to animal products?

Yes, I am vegan: 54 (3.4%)

Yes, I am vegetarian: 158 (10.0%)

Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)

No: 996 (62.9%)

EAKnowledge

Do you know what Effective Altruism is?

Yes: 1562 (89.3%)

No but I've heard of it: 114 (6.5%)

No: 74 (4.2%)

EAIdentity

Do you self-identify as an Effective Altruist?

Yes: 665 (39.233%)

No: 1030 (60.767%)

The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but if we take the numbers at face value it experienced an 11.13% increase in membership.

EACommunity

Do you participate in the Effective Altruism community?

Yes: 314 (18.427%)

No: 1390 (81.573%)

Same issue as the last question; taking the numbers at face value, community participation went up by 5.727%.

EADonations

Has Effective Altruism caused you to make donations you otherwise wouldn't?

Yes: 666 (39.269%)

No: 1030 (60.731%)

Wowza!

Effective Altruist Anxiety

EAAnxiety

Have you ever had any kind of moral anxiety over Effective Altruism?

Yes: 501 (29.6%)

Yes but only because I worry about everything: 184 (10.9%)

No: 1008 (59.5%)


There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.

It certainly appears to be. But is moral anxiety effective? Let's look:

Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574

Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807

Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913

Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312

It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?

Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
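The last figure can be flipped around with Bayes' rule to estimate how common anxiety is among EAs themselves, using only the three probabilities above:

```python
# Figures copied from the survey analysis above.
p_ea = 0.3940652818991098        # P(Effective Altruist)
p_anx = 0.29554896142433235      # P(EA Anxiety)
p_ea_given_anx = 0.5             # P(Effective Altruist | EA Anxiety)

# Bayes' rule: P(anxiety | EA) = P(EA | anxiety) * P(anxiety) / P(EA)
p_anx_given_ea = p_ea_given_anx * p_anx / p_ea
print(round(p_anx_given_ea, 3))  # 0.375
```

So roughly three in eight self-identified EAs report anxiety about Effective Altruism, compared with under three in ten respondents overall.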

Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.

EAOpinion

What's your overall opinion of Effective Altruism?

Positive: 809 (47.6%)

Mostly Positive: 535 (31.5%)

No strong opinion: 258 (15.2%)

Mostly Negative: 75 (4.4%)

Negative: 24 (1.4%)

EA appears to be doing a pretty good job of getting people to like them.

Interesting Tables

Charity Donations By Political Affilation
Affiliation Income Charity Contributions % Income Donated To Charity Total Survey Charity % Sample Size
Anarchist 1677900.0 72386.0 4.314% 3.004% 50
Communist 298700.0 19190.0 6.425% 0.796% 13
Conservative 1963000.04 62945.04 3.207% 2.612% 38
Futarchist 1497494.1099999999 166254.0 11.102% 6.899% 31
Left-Libertarian 9681635.613839999 416084.0 4.298% 17.266% 245
Libertarian 11698523.0 214101.0 1.83% 8.885% 190
Moderate 3225475.0 90518.0 2.806% 3.756% 67
Neoreactionary 1383976.0 30890.0 2.232% 1.282% 28
Objectivist 399000.0 1310.0 0.328% 0.054% 10
Other 3150618.0 85272.0 2.707% 3.539% 132
Pragmatist 5087007.609999999 266836.0 5.245% 11.073% 131
Progressive 8455500.440000001 368742.78 4.361% 15.302% 217
Social Democrat 8000266.54 218052.5 2.726% 9.049% 237
Socialist 2621693.66 78484.0 2.994% 3.257% 126


Number Of Effective Altruists In The Diaspora Communities
Community Count % In Community Sample Size
LessWrong 136 38.418% 354
LessWrong Meetups 109 50.463% 216
LessWrong Facebook Group 83 48.256% 172
LessWrong Slack 22 39.286% 56
SlateStarCodex 343 40.98% 837
Rationalist Tumblr 175 49.716% 352
Rationalist Facebook 89 58.94% 151
Rationalist Twitter 24 40.0% 60
Effective Altruism Hub 86 86.869% 99
Good Judgement(TM) Open 23 74.194% 31
PredictionBook 31 51.667% 60
Hacker News 91 35.968% 253
#lesswrong on freenode 19 24.675% 77
#slatestarcodex on freenode 9 24.324% 37
#chapelperilous on freenode 2 18.182% 11
/r/rational 117 42.545% 275
/r/HPMOR 110 47.414% 232
/r/SlateStarCodex 93 37.959% 245
One or more private 'rationalist' groups 91 47.15% 193


Effective Altruist Donations By Political Affiliation
Affiliation EA Income EA Charity Sample Size
Anarchist 761000.0 57500.0 18
Futarchist 559850.0 114830.0 15
Left-Libertarian 5332856.0 361975.0 112
Libertarian 2725390.0 114732.0 53
Moderate 583247.0 56495.0 22
Other 1428978.0 69950.0 49
Pragmatist 1442211.0 43780.0 43
Progressive 4004097.0 304337.78 107
Social Democrat 3423487.45 149199.0 93
Socialist 678360.0 34751.0 41

UC Berkeley launches Center for Human-Compatible Artificial Intelligence

10 ignoranceprior 29 August 2016 10:43PM

Source article: http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/

UC Berkeley artificial intelligence (AI) expert Stuart Russell will lead a new Center for Human-Compatible Artificial Intelligence, launched this week.

Russell, a UC Berkeley professor of electrical engineering and computer sciences and the Smith-Zadeh Professor in Engineering, is co-author of Artificial Intelligence: A Modern Approach, which is considered the standard text in the field of artificial intelligence, and has been an advocate for incorporating human values into the design of AI.

The primary focus of the new center is to ensure that AI systems are beneficial to humans, he said.

The co-principal investigators for the new center include computer scientists Pieter Abbeel and Anca Dragan and cognitive scientist Tom Griffiths, all from UC Berkeley; computer scientists Bart Selman and Joseph Halpern, from Cornell University; and AI experts Michael Wellman and Satinder Singh Baveja, from the University of Michigan. Russell said the center expects to add collaborators with related expertise in economics, philosophy and other social sciences.

The center is being launched with a grant of $5.5 million from the Open Philanthropy Project, with additional grants for the center’s research from the Leverhulme Trust and the Future of Life Institute.

Russell is quick to dismiss the imaginary threat from the sentient, evil robots of science fiction. The issue, he said, is that machines as we currently design them in fields like AI, robotics, control theory and operations research take the objectives that we humans give them very literally. Told to clean the bath, a domestic robot might, like the Cat in the Hat, use mother’s white dress, not understanding that the value of a clean dress is greater than the value of a clean bath.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own,” Russell said. “This means we need cast-iron formal proofs, not just good intentions.”

One approach Russell and others are exploring is called inverse reinforcement learning, through which a robot can learn about human values by observing human behavior. By watching people dragging themselves out of bed in the morning and going through the grinding, hissing and steaming motions of making a caffè latte, for example, the robot learns something about the value of coffee to humans at that time of day.

“Rather than have robot designers specify the values, which would probably be a disaster,” said Russell, “instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence.”

Russell and his colleagues don’t expect this to be an easy task.

“People are highly varied in their values and far from perfect in putting them into practice,” he acknowledged. “These aspects cause problems for a robot trying to learn what it is that we want and to navigate the often conflicting desires of different individuals.”

Russell, who recently wrote an optimistic article titled “Will They Make Us Better People?,” summed it up this way: “In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

European Soylent alternatives

10 ChristianKl 15 August 2016 08:22PM

A person at our local LW meetup (not active at LW.com) tested various Soylent alternatives that are available in Europe and wrote a post about them:

______________________

Over the course of the last three months, I've sampled parts of the
european Soylent alternatives to determine which ones would work for me
longterm.

- The prices are always for the standard option and might differ for
e.g. High Protein versions.
- The prices are always for the amount where you get the cheapest
marginal price (usually around a one month supply, i.e. 90 meals)
- Changing your diet to Soylent alternatives quickly leads to increased
flatulence for some time - I'd recommend a slow adoption.
- You can pay for all of them with Bitcoin.
- The list is sorted by overall awesomeness.

So here's my list of reviews:

Joylent:

Taste: 7/10
Texture: 7/10
Price: 5eu / day
Vegan option: Yes
Overall awesomeness: 8/10

This one is probably the european standard for nutritionally complete
meal replacements.

The texture is nice, the taste is somewhat sweet, the flavors aren't
very intensive.
They have an ok amount of different flavors but I reduced my orders to
Mango (+some Chocolate).

They offer a morning version with caffeine and a sports version with
more calories/protein.

They also offer Twennybars (similar to a cereal bar but each offers 1/5
of your daily needs), which everyone who tasted them really liked.
They're nice for those lazy times where you just don't feel like pouring
the powder, adding water and shaking before you get your meal.
They do cost 10eu per day, though.

I also like the general style. Every interaction with them was friendly,
fun and uncomplicated.


Veetal:

Taste: 8/10
Texture: 7/10
Price: 8.70 / day
Vegan option: Yes
Overall awesomeness: 8/10

This seems to be the "natural" option, apparently they add all those
healthy ingredients.

The texture is nice, the taste is sweeter than most, but not very sweet.
They don't offer flavors but the "base taste" is fine, it also works
well with some cocoa powder.

It's my favorite breakfast now and I had it ~54 of the last 60 days.
Would have been first place if not for the relatively high price.


Mana:

Taste: 6/10
Texture: 7/10
Price: 6.57 / day
Vegan option: Only Vegan
Overall awesomeness: 7/10

Mana is one of the very few choices that don't taste sweet but salty.
Among all the ones I've tried, it tastes the most similar to a classic meal.
It has a somewhat oily aftertaste that was a bit unpleasant in the
beginning but is fine now that I got used to it.

They ship the oil in small bottles separate from the rest, which you pour
into your shaker with the powder. This adds about 100% more complexity
to preparing a meal.

The packages feel somewhat recycled/biodegradable which I don't like so
much but which isn't actually a problem.

It still made it to the list of meals I want to consume on a regular
basis because it tastes so different from the others (and probably has a
different nutritional profile?).


Nano:

Taste: 7/10
Texture: 7/10
Price: 1.33eu / meal* **
* I couldn't figure out whether they calculate with 3 or 5 meals per day
** Price is for an order of 666 meals. I guess 222 meals for 1.5eu / meal
is the more reasonable order
Vegan option: Only Vegan
Overall awesomeness: 7/10

Has a relatively sweet taste. Only comes in the standard vanilla-ish flavor.

They offer a Veggie hot meal which is the only one besides Mana that
doesn't taste sweet. It tastes very much like a vegetable soup but was a
bit too spicy for me. (It's also a bit more expensive)

Nano has a very future-y feel about it that I like. It comes in one meal
packages which I don't like too much but that's personal preference.


Queal:

Taste: 7/10
Texture: 6/10
Price: 6.5 / day
Vegan option: No
Overall awesomeness: 7/10

Is generally similar to Joylent (especially in flavor) but seems
strictly inferior (their flavors sound more fun - but don't actually
taste better).


Nutrilent:

Taste: 6/10
Texture: 7/10
Price: 5 / day
Vegan option: No
Overall awesomeness: 6/10

Taste and flavor are also similar to Joylent but it tastes a little
worse. It comes in one meal packages which I don't fancy.


Jake:

Taste: 6/10
Texture: 7/10
Price: 7.46 / day
Vegan option: Only Vegan
Overall awesomeness: 6/10

Has a silky taste/texture (I didn't even know that was a thing before I
tried it). Only has one flavor (vanilla) which is okayish.
Also offers a light and sports option.


Huel:

Taste: 1/10
Texture: 6/10
Price: 6.70 / day
Vegan option: Only Vegan
Overall awesomeness: 4/10

The taste was unanimously rated as awful by every single person to whom
I gave it for trying. The Vanilla flavored version was a bit less awful
than the unflavored version, but still...
The worst packaging - it's in huge bags that make it hard to pour and
are generally inconvenient to handle.

Apart from that, it's ok, I guess?


Ambronite:

Taste: ?
Texture: ?
Price: 30 / day
Vegan option: Only Vegan
Overall awesomeness: ?

Price was prohibitive for testing - they advertise it as being very
healthy and natural and stuff.


Fruiticio:

Taste: ?
Texture: ?
Price: 5.76 / day
Vegan option: No
Overall awesomeness: ?

They offer a variety for women and one for men. I didn't see any way for
me to find out which of those I was supposed to order. I had to give up
the ordering process at that point. (I guess you'd have to ask your
doctor which one is for you?)



Conclusion:
Meal replacements are awesome, especially when you don't have much time
to make or eat a "proper" meal.
I generally don't feel full after drinking them but also stop being hungry.
I assume they're healthier than the average European diet.
The texture and flavor do get a bit dull after a while if I only use
meal replacements.

On my usual day I eat one serving of Joylent, Veetal and Mana at the
moment (and have one or two "non-replaced" meals).

 

A Review of Signal Data Science

10 The_Jaded_One 14 August 2016 03:32PM

I took part in the second Signal data science cohort earlier this year, and since I found out about Signal through a SlateStarCodex post a few months back (it was also covered here on LessWrong), I thought it would be good to return the favor and write a review of the program. 

The tl;dr version:

Going to Signal was a really good decision. I had been doing teaching work and some web development consulting previous to the program to make ends meet, and now I have a job offer as a senior machine learning researcher1. The time I spent at signal was definitely necessary for me to get this job offer, and another very attractive data science job offer that is my "second choice" job. I haven't paid anything to signal, but I will have to pay them a fraction of my salary for the next year, capped at 10% and a maximum payment of $25k. 

The longer version:

Obviously a ~12 week curriculum is not going to be a magic pill that turns a nontechnical, averagely intelligent person into a super-genius with job offers from Google and Facebook. In order to benefit from Signal, you should already be somewhat above average in terms of intelligence and intellectual curiosity. If you have never programmed and/or never studied mathematics beyond high school2 , you will probably not benefit from Signal in my opinion. Also, if you don't already understand statistics and probability to a good degree, they will not have time to teach you. What they will do is teach you how to be really good with R, make you do some practical machine learning and learn some SQL, all of which are hugely important for passing data science job interviews. As a bonus, you may be lucky enough (as I was) to explore more advanced machine learning techniques with other program participants or alumni and build some experience for yourself as a machine learning hacker. 

As stated above, you don't pay anything up front, and cheap accommodation is available. If you are in a situation similar to mine, not paying up front is a huge bonus. The salary fraction is comparatively small, too, and it only lasts for one year. I almost feel like I am underpaying them. 

This critical comment by fluttershy almost put me off, and I'm glad it didn't. The program is not exactly "self-directed" - there is a daily schedule and a clear path to work through, though they are flexible about it. Admittedly there isn't a constant feed of staff time for your every whim - ideally there would be 10-20 Jonahs, one per student; there's no way to offer that kind of service at a reasonable price. Communication between staff and students seemed to be very good, and key aspects of the program were well organised. So don't let perfect be the enemy of good: what you're getting is an excellent focused training program to learn R and some basic machine learning, and that's what you need to progress to the next stage of your career.

Our TA for the cohort, Andrew Ho, worked tirelessly to make sure our needs were met, both academically and in terms of running the house. Jonah was extremely helpful when you needed to debug something or clarify a misunderstanding. His lectures on selected topics were excellent. Robert's Saturday sessions on interview technique were good, though I felt that over time they became less valuable as some people got more out of interview practice than others. 

I am still in touch with some people I met on my cohort, even though I had to leave the country, I consider them pals and we keep in touch about how our job searches are going. People have offered to recommend me to companies as a result of Signal. As a networking push, going to Signal is certainly a good move. 

Highly recommended for smart people who need a helping hand to launch a technical career in data science.

 


 

1: I haven't signed the contract yet as my new boss is on holiday, but I fully intend to follow up when that process completes (or not). Watch this space. 

2: or equivalent - if you can do mathematics such as matrix algebra, know what the normal distribution is, understand basic probability theory such as how to calculate the expected value of a dice roll, etc, you are probably fine. 

Superintelligence and physical law

10 AnthonyC 04 August 2016 06:49PM

It's been a few years since I read http://lesswrong.com/lw/qj/einsteins_speed/ and the rest of the quantum physics sequence, but I recently learned about the company Nutonian, http://www.nutonian.com/. Basically it's a narrow AI system that looks at unstructured data and tries out billions of models to fit it, favoring those that use simpler math. They apply it to all sorts of fields, but that includes physics. It can't find Newton's laws from three frames of a falling apple, but it did find the Hamiltonian of a double pendulum given its motion data after a few hours of processing: http://phys.org/news/2009-12-eureqa-robot-scientist-video.html

Two forms of procrastination

10 Viliam 16 July 2016 08:30PM

I noticed something about myself when comparing two forms of procrastination:

a) reading online discussions,
b) watching movies online.

Reading online discussions (LessWrong, SSC, Reddit, Facebook) and sometimes writing a comment there, is a huge sink of time for me. On the other hand, watching movies online is almost harmless, at least compared with the former option. The difference is obvious when I compare my productivity at the end of the day when I did only the former, or only the latter. The interesting thing is that at the moment it feels the other way round.

When I start watching a movie that is 1:30:00 long, or start watching a series where each part is 40:00 long but I know I will probably watch more than one part a day, I am aware from the beginning that I am going to lose more than one hour of time; possibly several hours. On the other hand, when I open the "Discussion" tab on LessWrong, the latest "Open Thread" on SSC, my few favorite subreddits, and/or my Facebook "Home" page, it feels like it will only take a few minutes -- I will click on the few interesting links, quickly skim through the text, and maybe write a comment or two -- it certainly feels like much less than an hour.

But the fact is, when I start reading the discussions, I will probably click on at least a hundred links. Most of the pages I will read just as quickly as I imagined, but there will be a few that take disproportionately more time; either because they are interesting and long, or because they contain further interesting links. And writing a comment sometimes takes more time than it seems; it can easily be half an hour for a three-paragraph comment. (Ironically, this specific article gets written rather quickly, because I know what I want to write, and I write it directly. But there are comments where I think a lot, and keep correcting my text, to avoid misunderstanding when debating a sensitive topic, etc.) And when I stop doing it, because I want to make something productive for a change, I will feel tired. Reading many different things, trying to read quickly, and formulating my answers, all of that makes me mentally exhausted. So after I close the browser, I just wish I could take a nap.

On the other hand, watching a movie does not make me tired in that specific way. The movie runs at its own speed and doesn't require me to do anything actively. Also, there is no sense of urgency; none of the "if I reply to this now, people will notice and respond, but if I do it a week later, no one will care anymore". So I feel perfectly comfortable pausing the movie at any moment, doing something productive for a while, then unpausing the movie and watching more. I know I won't miss anything.

I think it's the mental activity during my procrastination that both makes me tired and creates the illusion that it will take less time than it actually does. When the movie says 1:30:00, I know it will be 1:30:00 (or maybe a little less because of the final credits). With a web page, I can always tell myself "don't worry, I will read this one really fast", so there is the illusion that I have it under control, and can reduce the time waste. The fact that I am reading an individual page really fast makes me underestimate how much time it took to read all those pages.

On the other hand, sometimes I do inverse procrastination -- I start watching a movie, pause it a dozen times, and do some useful work during the breaks -- and at the end of the day I have spent maybe 90% of the time working productively, while my brain tells me I just spent the whole day watching a movie, so I almost feel like I had a free day.

Okay, so how could I use this knowledge to improve my productivity?

1) Knowing the difference between the two forms of procrastination, whenever I feel a desire to escape to the online world, I should start watching a movie instead of reading some debate, because that way I will waste less time, even if it feels the other way round.

2) Integrate it with pomodoro? 10 minutes movie, 50 minutes work, then again, and at the end of the day my lying brain will tell me "dude, you didn't work at all today, you were just watching movies, of course you should feel awesome!".
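For what it's worth, the inverted pomodoro in (2) can be sketched as a trivial scheduler. This is only an illustration of the 10/50 split described above; the function name and cycle count are my own invention, and a real version would drive timers or notifications rather than just building a plan:

```python
def alternating_schedule(work_min=50, movie_min=10, cycles=4):
    """Alternate short movie breaks with long work blocks, as described above."""
    blocks = []
    for _ in range(cycles):
        blocks.append(("movie", movie_min))  # the "break" comes first, as in the post
        blocks.append(("work", work_min))
    return blocks

plan = alternating_schedule()
focused = sum(minutes for activity, minutes in plan if activity == "work")
# four cycles yield 200 minutes of work against only 40 minutes of movie
```

The point of the sketch is just the ratio: the lying brain registers the movie, but four-fifths of the scheduled time is work.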

Do you have a similar experience? I have no idea how typical this is. No need to hurry with responding; I am going to watch a movie now. ;-)

Thoughts on hacking aromanticism?

10 hg00 02 June 2016 11:52AM

Several years ago, Alicorn wrote an article about how she hacked herself to be polyamorous.  I'm interested in methods for hacking myself to be aromantic.  I've had some success with this, so I'll share what's worked for me, but I'm really hoping you all will chime in with your ideas in the comments.

Motivation

Why would someone want to be aromantic?  There's the obvious time commitment involved in romance, which can be considerable.  This is an especially large drain if you're in a situation where finding suitable partners is difficult, which means most of this time is spent enduring disappointment (e.g. if you're heterosexual and the balance of singles in your community is unfavorable).

But I think an even better way to motivate aromanticism is by referring you to this Paul Graham essay, The Top Idea in Your Mind.  To be effective at accomplishing your goals, you'd like to have your goals be the most interesting thing you have to think about.  I find it's far too easy for my love life to become the most interesting thing I have to think about, for obvious reasons.

Subproblems

After thinking some, I came up with a list of 4 goals people try to achieve through engaging in romance:

  1. Companionship.
  2. Sexual pleasure.
  3. Infatuation (also known as new relationship energy).
  4. Validation.  This one is trickier than the previous three, but I think it's arguably the most important.  Many unhappy singles have friends they are close to, and know how to masturbate, but they still feel lousy in a way people in post-infatuation relationships do not.  What's going on?  I think it's best described as a sort of romantic insecurity.  To test this out, imagine a time when someone you were interested in was smiling at you, and contrast that with the feeling of someone you were interested in turning you down.  You don't have to experience companionship or sexual pleasure from these interactions for them to have a major impact on your "romantic self-esteem".  And in a culture where singlehood is considered a failure, it's natural for your "romantic self-esteem" to take a hit if you're single.

To remove the need for romance, it makes sense to find quicker and less distracting ways to achieve each of these 4 goals.  So I'll treat each goal as a subproblem and brainstorm ideas for solving it.  Subproblems 1 through 3 all seem pretty easy to solve:

  1. Companionship: Make deep friendships with people you're not interested in romantically.  I recommend paying special attention to your coworkers and housemates, since you spend so much time with them.
  2. Sexual pleasure: Hopefully you already have some ideas on pleasuring yourself.
  3. Infatuation: I see this as more of a bonus than a need to be met.  There are lots of ways to find inspiration, excitement, and meaning in life outside of romance.

Subproblem 4 seems trickiest.

Hacking Romantic Self-Esteem

I'll note that what I'm describing as "validation" or "romantic self-esteem" seems closely related to abundance mindset.  But I think it's useful to keep them conceptually distinct.  Although alieving that there are many people you could date is one way to boost your romantic self-esteem, it's not necessarily the only strategy.

The most important thing to keep in mind about your romantic self-esteem is that it's heavily affected by the availability heuristic.  If I was encouraged by someone in 2015, that won't do much to assuage the sting of discouragement in 2016, except maybe if it happens to come to mind.

Another clue is the idea of a sexual "dry spell". Dry spells are supposed to get worse the longer they go on... which simply means that if your mind doesn't have a recent (available!) incident of success to latch on to, you're more likely to feel down.

So to increase your romantic self-esteem, keep a cherished list of thoughts suggesting your desirability is high, and don't worry too much about thoughts suggesting your desirability is low.  Here's a freebie: If you're reading this post, it's likely that you are (or will be) quite rich by global standards.  I hear rich people are considered attractive.  Put it on your list!

Other ideas for raising your romantic self-esteem:

  • Take steps to maintain your physical appearance, so you will appear marginally more desirable to yourself when you see yourself in the mirror.
  • Remind yourself that you're not a victim if you're making a conscious choice to prioritize other aspects of your life.  Point out to yourself things you could be doing to find partners that you're choosing not to do.

I think this is a situation where prevention works better than cure--it's best to work pre-emptively to keep your romantic self-esteem high.  In my experience, low romantic self-esteem leads to unproductive coping mechanisms like distracting myself from dark thoughts by wasting time on the Internet.

The other side of the coin is avoiding hits to your romantic self-esteem.  Here's an interesting snippet from a Quora answer I found:

In general specialized contemplative monastic organisations that tend to separate from the society tend to be celibate while ritual specialists within the society (priests) even if expected to follow a higher standard of ethical and ritual purity tend not to be.

So, it seems like it's easier for heterosexual male monks to stay celibate if they are isolated on a monastery away from women.  Without any possible partners around, there's no one to reject (or distract) them.  Participating in a monastic culture in which long-term singlehood is considered normal & desirable also removes a romantic self-esteem hit.

Retreating to a monastery probably isn't practical, but there may be simpler things you can do.  I recently switched from lifting weights to running in order to get exercise, and I found that running is better for my concentration because I'm not distracted by attractive people at the gym.

It's not supposed to be easy

I shared a bunch of ideas in this post.  But my overall impression is that instilling aromanticism is a very hard problem.  Based on my research, even monks and priests have a difficult time of things.  That's why I'm curious to hear what the Less Wrong community can come up with.  Side note: when possible, please try to make your suggestions gender-neutral so we can avoid gender-related flame wars.  Thanks!

[link] Disjunctive AI Risk Scenarios

10 Kaj_Sotala 05 April 2016 12:51PM

Arguments for risks from general AI are sometimes criticized on the grounds that they rely on a series of linear events, each of which has to occur for the proposed scenario to go through. For example, that a sufficiently intelligent AI could escape from containment, that it could then go on to become powerful enough to take over the world, that it could do this quickly enough without being detected, etc.

My intent in the following series of posts is to briefly demonstrate that AI risk scenarios are in fact disjunctive: composed of multiple possible pathways, each of which could be sufficient by itself. To successfully control AI systems, it is not enough to block just one of the pathways: they all need to be dealt with.

I've got two posts in this series up so far:

AIs gaining a decisive advantage discusses four different ways by which AIs could achieve a decisive advantage over humanity. The one-picture version is:

AIs gaining the power to act autonomously discusses ways by which AIs might come to act as active agents in the world, despite possible confinement efforts or technology. The one-picture version (which you may wish to click to enlarge) is:

These posts draw heavily on my old paper, Responses to Catastrophic AGI Risk, as well as some recent conversations here on LW. Upcoming posts will try to cover more new ground.

The Thyroid Madness : Core Argument, Evidence, Probabilities and Predictions

10 johnlawrenceaspden 14 March 2016 01:41AM

I've made a couple of recent posts about hypothyroidism:

http://lesswrong.com/lw/nbm/thyroid_hormones_chronic_fatigue_and_fibromyalgia/
http://lesswrong.com/lw/n8u/a_medical_mystery_thyroid_hormones_chronic/

It appears that many of those who read them were unable to extract the core argument, and few people seem to have found them interesting.


They seem extremely important to me: somewhere between a possible palliative for some cases of Chronic Fatigue Syndrome and a panacea for most of the world's remaining unexplained diseases.


So here I've made the core argument as plain as I can. But obviously it misses out many details. Please read the original posts to see what I'm really saying. They were written as I thought, and the idea has crystallised somewhat in the process of arguing about it with friends and contributors to Less Wrong. In particular I am indebted to the late Broda Barnes for the connection with diabetes, which I found in his book 'Hypothyroidism: The Unsuspected Illness', and which makes the whole thing look rather more plausible.



CORE ARGUMENT


(1.1) Hypothyroidism is a disease with very variable symptoms, which can present in many different ways.

It is an endocrine hormone disease, which causes the metabolism to run slow. A sort of general systems failure. Which parts fail first seems random.

It is extraordinarily difficult to diagnose by clinical symptoms.


(1.2) Chronic Fatigue Syndrome and Fibromyalgia look very like possible presentations of Hypothyroidism


(1.3) The most commonly used blood test (TSH) for Hypothyroidism is negative in CFS/FMS


=>


EITHER


(2.1) CFS/FMS/Hypothyroidism are extremely similar diseases which are nevertheless differently caused.


OR


(2.2) The blood test is failing to detect many cases of Hypothyroidism.



It seems that one is either forced to accept (2.1), or to believe that blood hormone levels can be normal in the presence of Hypothyroidism.


There is precedent for this:


Diabetes, another endocrine disorder (this time the hormone is insulin), comes in two forms:


type I : the hormone producing gland is damaged, the blood hormone levels go wrong.         (Classical Diabetes)

type II: the blood hormone levels are normal, but for some reason the hormone does not act. (Insulin Resistance)


I therefore hypothesize:


(3) That there is at least one mechanism interfering with the action of the thyroid hormones on the cells.


and


(4) The same, or similar mechanisms can interfere with the actions of other hormones.


A priori, I'd give these hypotheses a starting chance of 1%. They do not seem unreasonable. In fact they are obvious.

The strongest evidence against them is that they are so very obvious, and yet not believed by those whose job it is to decide.

CURRENT STATUS  (Estimated probability)


(1.1) Uncontroversial, believed by everyone involved (~100%)


(1.2) Similarly uncontroversial (~100%)


(1.3) By definition. With abnormal TSH, you'd have hypothyroidism (~100%)


(2.1) Universal belief of conventional medicine and medical science, some alternative medicine disagrees (~90%)


(2.2) The idea that the TSH test is inaccurate is widely believed in alternative medicine, and by thyroid patient groups, but largely rejected by conventional medicine (~10%)


(3) There is some evidence from alternative medicine that this might be true (~10%)


(4) My own idea. A wild stab in the dark. But if it happens twice, you bet it happens thrice [1] (~0.000001%)



Some Details


(1.1) Clinical diagnosis of Hypothyroidism is very out of fashion, considered hopelessly unreliable, doctors are actually trained to ignore the symptoms. There is a famous medical sin of 'Overdiagnosing Hypothyroidism', and doctors who fall into sin are regularly struck off.


(1.2) I don't think you'll find anyone who knows about both diseases to dispute this.


(1.3) True by definition. CFS/FMS symptoms plus abnormal TSH would be Hypothyroidism proper, almost no-one would disagree.


(2.1) This is the belief of conventional medicine. But the cause of CFS/FMS is unknown.

Generally the symptoms are blamed on 'stress', but 'stress' seems to be 'that which causes disease'. This 'explanation' seems to be doing little explanatory work. In fact it looks like magical thinking to me.

Medical Scientists know much more about all this than I do, and they believe it.

On the other hand, scientific ideas without verified causal chains often turn out to be wrong.


(2.2) (The important bit: If the TSH test is not solid, there are a number of interesting consequences.)


I've been looking for a few months through the endocrinological literature for evidence that the sensitivity of the TSH test was properly checked before its introduction or since, and I can't find any. It seems to have been an unjustified assumption. At the very least, my medical literature search skillz are not up to it. I appeal for help to those with better skillz.


It is beyond doubt that atrophy or removal of the thyroid gland causes the TSH value to go extremely high, and such cases are uncontroversial.


The actual interpretation of the TSH test is curiously wooly.

It has proved very difficult to pin down the 'normal range' for TSH; the argument has been running for nearly forty years, over which time the 'normal range' has been repeatedly narrowed.

The AACB report of 2012 concluded that the normal range was so narrow that huge numbers of people with no symptoms would fall outside it, and so this range is not widely accepted, for obvious reasons.


There are many other possible blood hormone tests for hypothyroidism. All are considered to be less accurate or less sensitive than the TSH test. It does seem to be the best available blood test. It does not correlate particularly well with clinical symptoms.


(3) Broda Barnes, a conventional endocrinologist working before the introduction of reliable blood tests, was convinced that the most accurate test was the peripheral basal body temperature on waking.

He considered measuring the basal metabolic rate, and rejected it for good reasons. He considered that desiccated thyroid was a good treatment for the disease, and thought the disease very common. He estimated its prevalence at 40% in the American population. His work is nowadays considered obsolete, and ignored. But he seems to have been a careful, thoughtful scientist, and the best arguments against his conclusions are placebo-effect and confirmation bias. He treated thousands of patients, his treatments were not controversial at the time, and he reported great success. He wrote a popular book 'Hypothyroidism: The Unsuspected Illness', and his conclusions have fathered a large and popular alternative medicine tradition.


John Lowe, a chiropractor who claimed that fibromyalgia could be cured with desiccated thyroid, found that many (25%) of his patients did not respond to the treatment. He hypothesised peripheral resistance, thought it was genetic, and used very high doses of the thyroid hormone T3 on many of his patients, which should have killed them. I have read many of his writings; they seem thoughtful and sane. I am not aware of any case in which John Lowe is thought to have done harm. There must be some, even if he was right. But if he was wrong he should have killed many of his patients, including himself. He was either a liar, or a serial murderer, or he was right. He was likely seeing an extremely biased sample of patients: those who could not be helped by conventional approaches.


(4) I just made it up by analogy.

There is the curious concept of 'adrenal fatigue', widespread in alternative medicine but dismissed as fantasy outside it, where the adrenal glands (more endocrine things) are supposed to be 'tired out' by 'excessive stress'. That could conceivably be explained by peripheral resistance to adrenal hormones.



CONSEQUENCES


If (3) is true but (4) is not:


There are a number of mysterious 'somatoform' disorders, collectively known as the central sensitivity syndromes, with many symptoms in common, which could be explained as type 2 hypothyroidism. Obvious cases are Chronic Fatigue Syndrome, Fibromyalgia Syndrome, Major Depressive Disorder and Irritable Bowel Syndrome, but there are many others. Taken together they would explain Broda Barnes' estimate of 40% of Americans.


If (4) is true:


Then we can probably explain most of the remaining unexplained human diseases as endocrine resistance disorders.

HOW CAN THIS BE TRUE, BUT HAVE BEEN MISSED?


This is the million-dollar question!


My favourite explanation is that in order to overwhelm 'peripheral resistance to thyroid hormones', one needs to give the patient both T4 and T3 in exactly the right proportions and dose.


Supplementation with T4 alone will not increase the levels of T3 in the system, since the conversion is under the body's normal control, and the body defends T3 levels.


But T3 is the 'active hormone'. Without significantly increasing the circulating levels of T3, the resistance cannot be overwhelmed.


On the other hand, any significant overdosing of T3 will massively overstimulate the body, causing the extremely unpleasant symptoms of hyperthyroidism.


This seems to me to be sufficient explanation for why various trials of T4 supplementation on the central sensitivity disorders have all failed. In almost all cases, the patients will either have seen no improvement, or have experienced the symptoms of over-treatment. Only in very few cases will any improvement have occurred, and standard trials are not designed to detect such effects.


It is actually just luck that the T4/T3 proportion in desiccated thyroid is about right for some people.


Alternatively, there may just be some component in desiccated thyroid whose action we don't understand.



PERSONAL EXPERIENCE


I displayed symptoms of mild-to-moderate Chronic Fatigue Syndrome, and my wonderful NHS GP checked everything it could possibly be. All my blood tests normal, TSH=2.51. I was heading for a diagnosis of CFS.


After four months I mysteriously partially recovered after trying the iron/vitamin B supplement Floradix, even though I wasn't anaemic.


I started researching on the basis that things that go away on their own tend to come back on their own.


I noticed that I had recorded, in records kept at the time of the illness, thirty out of a list of forty possible symptoms of Hypothyroidism, drew the obvious conclusions as so many others have, and purchased a supply of desiccated thyroid in case it came back.


It did come back, and after one month I began to self-treat with desiccated thyroid, very carefully titrating small doses against symptoms, and quickly noted an immediate, huge improvement in all symptoms. In fact I'd say they were gone.


My basal temperature rose over a few weeks from 36.1 to ~36.6 (average; the rise was slow, over several weeks; day-to-day noise ~ ±0.3).


One week, holding the dose steady in anticipation of more blood tests, I overdid it by the truly minute amount of 3mg/day of desiccated thyroid, which caused all of the symptoms of the manic phase of bipolar disorder (whose down phase is indistinguishable from CFS, and whose up phase looks terribly like the onset of hyperthyroidism). The manic symptoms disappeared within twelve hours of ceasing thyroid supplementation, to be replaced by overwhelming tiredness.


I resumed thyroid supplementation at a slightly lower dose, and feel as well as I have done for ten years. It's now been ten weeks and I am becoming reasonably confident that it is having some effect.



POSSIBLE CAUSATION


Such catastrophic failures of the body's central control system CANNOT be evolutionarily stable unless they are extremely rare or have compensating advantages.


I am thus drawn to the idea of either:


(a) recent environmental change (which seems to be the alternative medicine explanation)


(b) immune defence (which would explain why e.g. CFS often presents as extended version of the normal post-viral fatigue)

If the alternative is being eaten alive, it seems all too plausible that an immune mechanism might be to 'wall off' cells in some way until the emergency is past, even if catastrophic damage is a side effect.




STRONG PREDICTIONS

Low Body Temperature


It is a very strong prediction of this theory that low basal metabolic rates, and thus low basal peripheral temperatures, will be found in many sufferers of Chronic Fatigue Syndrome and Fibromyalgia.

If this is not true, then the idea is refuted unambiguously.

Thyroid Hormone Supplementation as Palliative

It is a less strong prediction, but still fairly strong, that supplementation of the hormones T4 and T3 in carefully titrated doses and proportions will relieve some of the symptoms of CFS/FMS.


Note that T4 supplementation alone is unlikely to work. And that unless the doses and proportions are carefully adjusted to relieve symptoms, the treatment is likely to either not work, or be worse than the disease!


SOME SELECTED POSSIBLE IMPLICATIONS / PREDICTIONS

I've been very reluctant to draw my wilder speculative conclusions in public, since they have the potential to do great harm whether or not the idea is true, but here are some of the less frightening ones that I feel safe stating:


I state them only to encourage people to believe that this problem is worth thinking about.


Endocrinology appears not to be too interested, and my crank emails to endocrinologists have gone unanswered.


One of the reasons that I feel safe stating these four in public is that Broda Barnes thought them obvious and published popular books about them, so they are unlikely to come as a surprise to anyone outside endocrinology:


Dieting/Exercise/Weight Loss


Dieting and exercise don't work long term as means of weight loss. The function of the thyroid system is to adapt metabolism to available resources. Starvation will cause mild transient hypothyroidism as the body attempts to survive the famine it infers. This may be the explanation for Anorexia Nervosa.


Diabetes


Diagnosis of diabetes was once a death sentence. With the discovery of insulin, allowing diabetics to control their blood sugar levels, it became survivable.

However it still has terrible complications, a lot of which look like the complications of hypothyroidism.


If a hormone-resistance mechanism interferes with both insulin and thyroid hormones, the reason for this is obvious. Diabetics with well-controlled blood sugar are dying in their millions from a treatable condition.


Heart Disease


One of the very old tests for hypothyroidism was blood cholesterol. Elevated cholesterol was thought to be a reliable indicator of hypothyroidism when present, but it was not always present.


A known symptom of hypothyroidism is atherosclerosis and weakness of the heart.


I would imagine that hypothyroidism initially presents as low blood pressure, due to the weakness of the heart. As the arteries clog, the weakened heart is forced to work harder and harder. Blood pressure goes higher and higher, and eventually the heart collapses under the strain.


Blood pressure reducing medications may actually be doing harm. A promising treatment might be to correct the underlying hypothyroidism.


Smoking


Cigarettes are full of poisons, and smoking is correlated with very many diseases.


It could be that smoking causes, amongst its other effects, peripheral resistance, which causes clinical hypothyroidism, which then causes everything it usually causes. And that would be my bet!


It could be that hypothyroidism causes a very great number of bad things, including depression, which then causes smoking.


Smoking may not actually be that dangerous, and it might be possible to mitigate its bad effects.

 

[1] Madonna, "Pretender", Like A Virgin, Power Station Studios, New York, New York (1984)




I'm going to stop there. There are quite a lot of similar conclusions to be drawn. Read Barnes.


I also have some novel ones of my own which I am not telling anyone about just yet.


What the hell do I, or any of the quacks who have been screaming about this for forty years, have to say in order that someone with real expertise in this area takes this idea seriously enough to have a go at refuting it?

EDIT: This keeps confusing people (including me): Low Basal Metabolic Rates. The amount of oxygen you use once you have been asleep for a while. That's what the thyroid apparently controls in adult animals. Daytime won't do, that's probably under the control of something else. And peripheral temperatures. Not core. We're interested in the amount of heat flowing out of the body. Which is not quite the same thing as temperature....

EDIT : WHY THIS IS WORTH A CLOSE LOOK, EVEN THOUGH IT IS LIKELY WRONG!

Thanks to HungryHobo for making me make this point explicitly:

This is a very simple and obvious explanation of an awful lot of otherwise confusing data, anecdotes, quackery, expert opinion and medical research.

And it is obviously false! Of course medicine has tried using thyroid supplementation to fix 'tired all the time'. It doesn't work!

But there really is an awful lot unexplained about all this T4/T3 business, and why different people think it works differently. I refer you to the internet for all the unexplained things.

In just the endocrinological literature there is a long fight going on about T4/T3 ratios in thyroid supplementation, and about the question of whether or not to treat 'subclinical hypothyroidism'. Some people show symptoms with very low TSH values. Some people have extremely high TSH values and show no symptoms at all.

I've been trying various ways of explaining it all for nearly four months now. And I've found lots of magical thinking in conventional medicine, and lots of waving away of the reports of honest-sounding empiricists, who have made no obvious errors of reasoning, most of whom are taking terrible risks with their own careers in order to, as they see it, help their patients.

I've read lots of people saying 'we tried this, and it works', and no people saying 'we tried this, and it makes no difference'. The explanation favoured by conventional medicine strongly predicts 'we tried this, and it makes no difference'. But they've never tried it! It's really confusing. A lot of people are very confused.

I think that simple explanations are extra-worth looking at because they are simple.

Of course that doesn't mean they're right. Consequence and experiment are the only judge of that.

I do not think I am right! There is no way I can have got the whole picture. I can't explain, for instance 'euthyroid sick syndrome'. But I don't predict that it doesn't exist either.

But you should look very carefully at the simple beautiful ideas that seem to explain everything, but that look untrue.

Firstly because Solomonoff induction looks like a good way to think about the world. Or call it Occam's Razor if you prefer. It is straightforward Bayesianism, as David MacKay points out in Information Theory, Inference, and Learning Algorithms.

Secondly because all the good ideas have turned out to be simple, and could have been spotted (and often were) by the Ancient Greeks, and could have been demonstrated by them, if only they'd really thought about it.

Thirdly because experiments not done with the hypothesis in mind have likely neglected important aspects of the problem. (In this case T3 homeostasis and possible peripheral resistance and the difference between basal metabolic rate and waking rate, and the difference between core and peripheral temperature and the possibility of a common DIO2 mutation causing people's systems to react differently to T4 monotherapy).

So that even if there are things you can't explain (I can't explain hot daytime fibro-turks...), you should keep plugging away, to see if you can explain them, if you think hard enough.

Good ideas should be given extra-benefit of the doubt. Not ignored because they prove (slightly) too much!

I reckon that we should be able to refute or strongly support the general idea from reports in the published literature. Here is some stuff that I have found recently. There is a comment that looks like this. Add anything you find to it, and I'll move it up here.

ADD EVIDENCE FOR OR AGAINST HERE

Found this for "Wilson's syndrome", but can only see the abstract:

http://www.ncbi.nlm.nih.gov/pubmed/16883675

It looks like it might be supportive, but it also looks crap. No mention of blinding, randomising, or placebo in the abstract.

Can anyone see the actual paper and link to it here? And can anyone work out whether these guys are allies of Wilson, or trying to break him? Because that matters.


This, on the other hand:

http://www.ncbi.nlm.nih.gov/pubmed/9513740

Looks solid, and looks like refutation. They claim normal average core temperatures in CFS. I have quibbles, of course:

I'd expect the core temperature to be well defended. So I'm not worried by that per se, but they do talk about relation to oral temperature, and they do talk about metabolic rate, so they've obviously thought about it, and I can't quite work out what they did there.

Also, the reason that they're measuring this is because their CFS patients have all been complaining about low oral temperatures and the fact that even when they've got a fever, they're not hot. So errr?? Do all the CFS patients believe this theory and are (un)consciously faking? I mean, I can believe that, but is it true that all CFS patients think this theory is true? Who is telling CFS patients to take their temperatures and why?

On the other hand, their actual graphs do look funny. There's a strange shape to the CBT vs time graph in CFS, but n=7, I think, so maybe that's just noise.


These guys:

http://www.sciencedirect.com/science/article/pii/S0024320515301223

Are actually claiming HIGHER peripheral temperatures in Fibromyalgia. But I think they're measuring during the day. I've no idea how to explain that, or what it might mean.


Barnes claimed: Measure axillary temperature on waking. Should be 98.6+/-0.2F (so 37C+/-0.1), lower is bad. Treat with lots of thyroid (1/2-2 grains).
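As a sanity check on the bracketed conversion (nothing here beyond the numbers in Barnes's claim), note that a Fahrenheit reading shifts by 32 and scales by 5/9, while a Fahrenheit interval only scales:

```python
def f_to_c(temp_f):
    """Convert a Fahrenheit temperature reading to Celsius."""
    return (temp_f - 32) * 5 / 9

print(round(f_to_c(98.6), 1))   # 37.0, the reference point
print(round(0.2 * 5 / 9, 2))    # 0.11, so the +/- 0.2 F band is roughly the +/- 0.1 C band
```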

I claim (from just me, and I am perfectly capable of fooling myself): measure oral temperature on waking. Mine was low (~36.1), and has gone higher (36.6-36.9) under the influence of a small amount of thyroid (1/3 grain). I feel fine now.

Can anyone find: Large numbers of CFS/FMS patients have normal metabolic rate while sleeping or just after waking, no exercise allowed, or normal axillary or oral temperature on waking, again no exercise allowed?

Because that's what I'm looking for at the moment, and it is refutation. I will have to pull off some clever moves indeed to get round that.


Oh, yes, and there's a paper by Lowe himself, finding exactly what I expect him to find:

http://www.ncbi.nlm.nih.gov/pubmed/16810133

Can anyone dig up quibbles with this that can make me discount it?


Oh Jesus:

'Clinical Response to Thyroxine Sodium in Clinically Hypothyroid but Biochemically Euthyroid Patients', G. R. B. Skinner MD DSc FRCPath FRCOG, D. Holmes, A. Ahmad PhD, J. A. Davies BSc and J. Benitez MSc, Vaccine Research Trust, 22 Alcester Road, Moseley, Birmingham B13 8BE, UK.

This I can't explain at all! He treated CFS people with tiny amounts of T4, and worked up the dose until they were all better. Worked a treat, apparently. Can anyone break it?

It simultaneously breaks me and proves that CFS is a thyroid problem. I think. Help! Again, no placebos, but a large clinical trial that seems to have worked, by a careful man.

I wouldn't dream of suggesting that anyone steal this using sci-hub.io by typing the title into the search box and then solving the easy CAPTCHA which is in English even though the instructions are all in Russian. You should write to the authors and request a copy instead.

 


'Four 2003 Studies of Thyroid Hormone Replacement Therapies: Logical Analysis and Ethical Implications', Dr. John C. Lowe

Lowe again, my rationalist hero, publishing in his own journal, referencing his own papers and books. This time I think he's made maths mistakes. But that's my department, so I'm going to go away and think about it. I mention the paper here to avoid the obvious mistake of deciding whether to mention it after I've had a proper look.

 


Effective Treatment of Chronic Fatigue Syndrome and Fibromyalgia—A Randomized, Double-Blind, Placebo-Controlled, Intent-To-Treat Study

Jacob E. Teitelbaum*, Barbara Bird, Robert M. Greenfield, Alan Weiss, Larry Muenz & Laurie Gould

DOI:10.1300/J092v08n02_02

ABSTRACT
Background: Hypothalamic dysfunction has been suggested in fibromyalgia (FMS) and chronic fatigue syndrome (CFS). This dysfunction may result in disordered sleep, subclinical hormonal deficiencies, and immunologic changes. Our previously published open trial showed that patients usually improve by using a protocol which treats all the above processes simultaneously. The current study examines this protocol using a randomized, double-blind design with an intent-to-treat analysis. Methods: Seventy-two FMS patients (38 active:34 placebo; 69 also met CFS criteria) received all active or all placebo therapies as a unified intervention. Patients were treated, as indicated by symptoms and/or lab testing, for: (1) subclinical thyroid, gonadal, and/or adrenal insufficiency, (2) disordered sleep, (3) suspected neurally mediated hypotension (NMH), (4) opportunistic infections, and (5) suspected nutritional deficiencies. Results: At the final visit, 16 active patients were “much better,” 14 “better”, 2 “same,” 0 “worse,” and 1 “much worse” vs. 3, 9, 11, 6, and 4 in the placebo group (p < .0001, Cochran-Mantel-Haenszel trend test). Significant improvement in the FMS Impact Questionnaire (FIQ) scores (decreasing from 54.8 to 33.2 vs. 51.4 to 47.7) and Analog scores (improving from 176.1 to 310.3 vs. 177.1 to 211.9) (both with p < .0001 by random effects regression), and Tender Point Index (TPI) (31.7 to 15.5 vs. 35.0 to 32.3, p < .0001 by baseline adjusted linear model) were seen. Long term follow-up (mean 1.9 years) of the active group showed continuing and increasing improvement over time, despite patients being able to discontinue most treatments. Conclusions: Significantly greater benefits were seen in the active group than in the placebo group for all primary outcomes. An integrated treatment approach appears effective in the treatment of FMS/CFS.

OK, how do we discount this one? I haven't even read it yet. Can anyone see it?
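For what it's worth, the headline counts in the abstract can at least be checked for crude significance. The sketch below runs a plain Pearson chi-square on the 2x5 final-visit table; that is not the Cochran-Mantel-Haenszel trend test the authors used (it ignores the ordering of the outcome categories), so treat it as a rough sanity check only:

```python
# Final-visit outcome counts from the abstract, "much better" through "much worse".
# (Each arm sums to 33; presumably a few of the 38/34 randomised dropped out.)
active  = [16, 14, 2, 0, 1]
placebo = [3, 9, 11, 6, 4]

rows = [active, placebo]
col_totals = [sum(col) for col in zip(*rows)]
row_totals = [sum(r) for r in rows]
grand = sum(row_totals)

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * column_total / grand_total.
chi2 = sum(
    (obs - rt * ct / grand) ** 2 / (rt * ct / grand)
    for row, rt in zip(rows, row_totals)
    for obs, ct in zip(row, col_totals)
)
dof = (len(rows) - 1) * (len(col_totals) - 1)
print(f"chi2 = {chi2:.1f} on {dof} degrees of freedom")  # 24.0 on 4; the p < .001 cutoff is 18.5
```

So even ignoring the ordering, the split between arms is far beyond chance; the interesting question is the one the author raises, namely whether anything else can explain it.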




Thyroid Insufficiency. Is Thyroxine the Only Valuable Drug?

DOI:10.1080/13590840120083376

W. V. Baisier, J. Hertoghe & W. Eeckhaut

ABSTRACT
Purpose: To evaluate the efficacy of a drug containing both liothyronine and thyroxine (T3 + T4) in hypothyroid patients who were treated, but not cured, with thyroxine (T4 alone). Design: Practice-based retrospective study of patients' records. Materials and Methods: The records of 89 hypothyroid patients, treated elsewhere with thyroxine but still with hypothyroidism, seen in a private practice in Antwerp, Belgium, were compared with those of 832 untreated hypothyroid patients, over the same period of time (May 1984-July 1997). Results: The same criteria were applied to both groups: a score of eight main symptoms of hypothyroidism and the 24 h urine free T3 dosage. The group of 89 patients, treated elsewhere with T4, but still complaining of symptoms of hypothyroidism, did not really differ from the group of untreated hypothyroid patients as far as symptoms and 24 h urine free T3 were concerned. A number of these patients were followed up during treatment with natural desiccated thyroid (NDT): 40 T4 treated patients and 278 untreated patients. Both groups responded equally favourably to NDT. Conclusions: Combined T3 + T4 treatment seems to be more effective than treatment with T4 alone in hypothyroid patients.

Even mighty sci-hub.io can't provide me a copy of this. Any reason to bin it?

 

Bored now. Anyone find me anything that says this doesn't work?


I've even heard rumours that Lowe himself did PCRTs of his treatments. And probably published them in some chiropractic house mag. I can't even find those.


A rich seam of thyroid vs depression papers, all found through: http://psycheducation.org/

Since he's got a cause, I expect to find them all in favour. I'm going to list them here before reading them in order to avoid the obvious mistake of cherry picking from the cherry basket, and then add comments once I've read them / their abstracts.

Further evidence pointing in the opposite direction is very welcome!

I also tried:
https://www.ncbi.nlm.nih.gov/pubmed/?term=thyroxine+major+depression

and some of those are also here. I can't remember which ones I found through psycheducation and which ones through pubmed.
Bloody browser tabs, sorry, I should have been more careful.




J Affect Disord. 2014 Sep;166:353-8. doi: 10.1016/j.jad.2014.04.022. Epub 2014 May 2.
A favorable risk-benefit analysis of high dose thyroid for treatment of bipolar disorders with regard to osteoporosis.
Kelly T1.

 

ABSTRACT

High dose thyroid hormone has been in use since the 1930s for the treatment of affective disorders. Despite numerous papers showing benefit, the lack of negative trials and its inclusion in multiple treatment guidelines, high dose thyroid has yet to find widespread use. The major objection to the use of high dose thyroid is the myth that it causes osteoporosis. This paper reviews the literature surrounding the use of high dose thyroid, both in endocrinology and in psychiatry. High dose thyroid does not appear to be a significant risk factor for osteoporosis while other widely employed psychiatric medications do pose a risk. Psychiatrists are uniquely qualified to do the risk-benefit analyses of high dose thyroid for the treatment of the bipolar I, bipolar II and bipolar NOS. Other specialties do not have the requisite knowledge of the risks of alternative medications or of the mortality and morbidity of the bipolar disorders to do a full risk-benefit analysis.


J Clin Endocrinol Metab. 2010 Aug;95(8):3623-32. doi: 10.1210/jc.2009-2571. Epub 2010 May 25.
A randomized controlled trial of the effect of thyroxine replacement on cognitive function in community-living elderly subjects with subclinical hypothyroidism: the Birmingham Elderly Thyroid study.
Parle J1, Roberts L, Wilson S, Pattison H, Roalfe A, Haque MS, Heath C, Sheppard M, Franklyn J, Hobbs FD.

Conclusions: This RCT provides no evidence for treating elderly subjects with SCH with T4 replacement therapy to improve cognitive function.


J Affect Disord. 2002 Apr;68(2-3):285-94.
Effects of supraphysiological thyroxine administration in healthy controls and patients with depressive disorders.
Bauer M1, Baur H, Berghöfer A, Ströhle A, Hellweg R, Müller-Oerlinghausen B, Baumgartner A.

J Affect Disord. 2009 Aug;116(3):222-6. doi: 10.1016/j.jad.2008.12.010. Epub 2009 Feb 11.
The use of triiodothyronine as an augmentation agent in treatment-resistant bipolar II and bipolar disorder NOS.
Kelly T1, Lieberman DZ.

Am J Psychiatry. 2006 Sep;163(9):1519-30; quiz 1665.
A comparison of lithium and T(3) augmentation following two failed medication treatments for depression: a STAR*D report.
Nierenberg AA1, Fava M, Trivedi MH, Wisniewski SR, Thase ME, McGrath PJ, Alpert JE, Warden D, Luther JF, Niederehe G, Lebowitz B, Shores-Wilson K, Rush AJ.

Nord J Psychiatry. 2015 Jan;69(1):73-8. doi: 10.3109/08039488.2014.929741. Epub 2014 Jul 1.
Well-being and depression in individuals with subclinical hypothyroidism and thyroid autoimmunity - a general population study.
Fjaellegaard K1, Kvetny J, Allerup PN, Bech P, Ellervik C.

Mol Biol Rep. 2014;41(4):2419-25. doi: 10.1007/s11033-014-3097-6. Epub 2014 Jan 18.
Thyroid hormones association with depression severity and clinical outcome in patients with major depressive disorder.
Berent D1, Zboralski K, Orzechowska A, Gałecki P.

Mol Psychiatry. 2016 Feb;21(2):229-36. doi: 10.1038/mp.2014.186. Epub 2015 Jan 20.
Levothyroxine effects on depressive symptoms and limbic glucose metabolism in bipolar disorder: a randomized, placebo-controlled positron emission tomography study.
Bauer M1,2, Berman S2, Stamm T3, Plotkin M4, Adli M3, Pilhatsch M1, London ED2, Hellemann GS5, Whybrow PC2, Schlagenhauf F3.

Mol Psychiatry. 2005 May;10(5):456-69.
Supraphysiological doses of levothyroxine alter regional cerebral metabolism and improve mood in bipolar depression.
Bauer M1, London ED, Rasgon N, Berman SM, Frye MA, Altshuler LL, Mandelkern MA, Bramen J, Voytek B, Woods R, Mazziotta JC, Whybrow PC.

Minerva Endocrinol. 2013 Dec;38(4):365-77.
Hypothyroidism and depression: salient aspects of pathogenesis and management.
Duntas LH1, Maillis A.

J Psychiatr Res. 2012 Nov;46(11):1406-13. doi: 10.1016/j.jpsychires.2012.08.009. Epub 2012 Sep 7.
The combination of triiodothyronine (T3) and sertraline is not superior to sertraline monotherapy in the treatment of major depressive disorder.
Garlow SJ1, Dunlop BW, Ninan PT, Nemeroff CB.


 

Lesswrong Survey - invitation for suggestions

10 Elo 08 February 2016 08:07AM

Given that it's been a while since the last survey (http://lesswrong.com/lw/lhg/2014_survey_results/), it's now time to open the floor to suggestions for improvements.

If you have a question you think should be on the next survey, please suggest it, perhaps with reasons why, predictions as to the result, or other useful commentary about the question.

Alternatively, suggest questions that should not be included in the next survey, with similar reasons as to why.

The survey is now up (2016-03-26): http://lesswrong.com/lw/nfk/lesswrong_2016_survey/

[LINK] The Top A.I. Breakthroughs of 2015

10 Vika 30 December 2015 10:04PM

A great overview article on AI breakthroughs by Richard Mallah from FLI, linking to many excellent recent papers worth reading. 

Progress in artificial intelligence and machine learning has been impressive this year. Those in the field acknowledge progress is accelerating year by year, though it is still a manageable pace for us. The vast majority of work in the field these days actually builds on previous work done by other teams earlier the same year, in contrast to most other fields where references span decades.

Creating a summary of a wide range of developments in this field will almost invariably lead to descriptions that sound heavily anthropomorphic, and this summary does indeed. Such metaphors, however, are only convenient shorthands for talking about these functionalities. It's important to remember that even though many of these capabilities sound very thought-like, they're usually not very similar to how human cognition works. The systems are all of course functional and mechanistic, and, though increasingly less so, each are still quite narrow in what they do. Be warned though: in reading this article, these functionalities may seem to go from fanciful to prosaic.

The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. I'll highlight a small number of important threads within each that have brought the field forward this year.

Open Thread, Dec. 28 - Jan. 3, 2016

10 Clarity 27 December 2015 02:21PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

[Link] My Interview with Dilbert creator Scott Adams

9 James_Miller 13 September 2016 05:22AM

In the second half of the interview we discussed several topics of importance to the LW community including cryonics, unfriendly AI, and eliminating mosquitoes. 

https://soundcloud.com/user-519115521/scott-adams-dilbert-interview

 

Jocko Podcast

9 moridinamael 06 September 2016 03:38PM

I've recently been extracting extraordinary value from the Jocko Podcast.

Jocko Willink is a retired Navy SEAL commander, jiu-jitsu black belt, management consultant and, in my opinion, master rationalist. His podcast typically consists of detailed analysis of some book on military history or strategy followed by a hands-on Q&A session. Last week's episode (#38) was particularly good and if you want to just dive in, I would start there.

As a sales pitch, I'll briefly describe some of his recurring talking points:

  • Extreme ownership. Take ownership of all outcomes. If your superior gave you "bad orders", you should have challenged the orders or adapted them better to the situation; if your subordinates failed to carry out a task, then it is your own instructions to them that were insufficient. If the failure is entirely your own, admit your mistake and humbly open yourself to feedback. By taking on this attitude you become a better leader and through modeling you promote greater ownership throughout your organization. I don't think I have to point out the similarities between this and "Heroic Morality" we talk about around here.
  • Mental toughness and discipline. Jocko's language around this topic is particularly refreshing, speaking as someone who has spent too much time around "self help" literature, in which I would partly include Less Wrong. His ideas are not particularly new, but it is valuable to have an example of somebody who reliably executes on his philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.
  • Decentralized command. This refers specifically to his leadership philosophy. Every subordinate needs to truly understand the leader's intent in order to execute instructions in a creative and adaptable way. Individuals within a structure need to understand the high-level goals well enough to be able to act in almost all situations without consulting their superiors. This tightens the OODA loop on an organizational level.
  • Leadership as manipulation. Perhaps the greatest surprise to me was the subtlety of Jocko's thinking about leadership, probably because I brought in many erroneous assumptions about the nature of a SEAL commander. Jocko talks constantly about using self-awareness, detachment from one's ideas, control of one's own emotions, awareness of how one is perceived, and perspective-taking of one's subordinates and superiors. He comes off more as HPMOR!Quirrell than as a "drill sergeant".

The Q&A sessions, in which he answers questions asked by his fans on Twitter, tend to be very valuable. It's one thing to read the bullet points above, nod your head and say, "That sounds good." It's another to have Jocko walk through the tactical implementation of these ideas in a wide variety of daily situations, ranging from parenting difficulties to office misunderstandings.

For a taste of Jocko, maybe start with his appearance on the Tim Ferriss podcast or the Sam Harris podcast.

Non-Fiction Book Reviews

9 SquirrelInHell 11 August 2016 05:05AM

Time start 13:35:06

For another exercise in speed writing, I wanted to share a few book reviews.

These are fairly well known, however there is a chance you haven't read all of them - in which case, this might be helpful.

 

Good and Real - Gary Drescher ★★★★★

This is one of my favourite books ever. It goes over a lot of philosophy while showing a lot of clear thinking and meta-thinking. If Eliezer's meta-philosophy had not existed, this would be my number one replacement for it. The writing style and language is somewhat obscure, but this book is too brilliant to be spoiled by that. The biggest takeaway is the analysis of the ethics of non-causal consequences of our choices, which is something that actually has changed how I act in my life, and I have not seen any similar argument in other sources that would do the same. This book changed my intuitions so much that I now pay $100 in counterfactual mugging without a second thought.

 

59 Seconds - Richard Wiseman ★★★

A collection of various tips and tricks, directly based on studies. The strength of the book is that it gives easy but detailed descriptions of lots of studies, and that makes it very fun to read. Can be read just to check out the various psychology results in an entertaining format. The quality of the advice is disputable, and it is mostly the kind of advice that only applies to small things and does not change much in what you do even if you somehow manage to use it. But I still liked this book, and it managed to avoid saying anything very stupid while saying a lot of things. It counts for something.

 

What You Can Change and What You Can't - Martin Seligman ★★★

It is heartwarming to see the author put his best effort towards figuring out which psychology treatments work and which don't, as well as building more general models of how people work that can predict which treatments have a chance in the first place. Not all of the content matches what your best guess would be after updating on new results (the book is quite old). However, if you are starting out, this book will serve excellently as your prior, on which you can update after checking out the new results. And in some cases it is amazing that the author was right about them 20 years ago while mainstream psychology has STILL not caught up (like the whole bullshit "go back to your childhood to fix your problems" approach, which is in wide use today and not bothered at all by such things as "checking facts").

 

Thinking, Fast and Slow - Daniel Kahneman ★★★★★

A classic, and I want to mention it just in case. It is too valuable not to read. Period. It turns out some of the studies the author used for his claims have later been found not to replicate. However, the details of those results are not (at least for me) the selling point of this book. The biggest thing is the author's mental toolbox for self-analysis and analysis of biases, as well as the concepts he created to describe the mechanisms of intuitive judgement. Learn to think like the author, and you are 10 years ahead in your study of rationality.

 

Crucial Conversations - Al Switzler, Joseph Grenny, Kerry Patterson, Ron McMillan ★★★★

I almost dropped this book. When I saw the style, it reminded me so much of the crappy self-help books without actual content. But fortunately I read on a little more, and it turns out that even though the style is the same throughout and it has little content for the amount of text you read, it is still an excellent book. How is that possible? Simple: it only tells you a few things, but the things it tells you are actually important, and they work, and they are amazing when you put them into practice. On the concept and analysis side there is precious little, but who cares, as long as some things are "keepers". The authors spend most of the book hammering the same point over and over, which is "conversation safety". And it is still a good book: if you get this one simple point, then you have learned more than you might from reading 10 other books.

 

How to Fail at Almost Everything and Still Win Big - Scott Adams ★★★

I don't agree with much of the stuff in this book, but that's not the point. The author says what he thinks, and he himself encourages you to pass it through your own filters. For around one third of the book, I thought it was obviously true; for another third, I had strong evidence that the author made a mistake or got confused about something; and the remaining third gave me new ideas, or points of view that I could use to produce more ideas for my own use. This felt like having a conversation with any intelligent person you know who has different ideas from you. It was a healthy ratio of agreement and disagreement, the kind that leads to progress for both people. Except of course in this case the author did not benefit, but I did.

 

Time end: 14:01:54

Total time to write this post: 26 minutes 48 seconds

Average writing speed: 31.2 words/minute, 169 characters/minute

The same data calculated for my previous speed-writing post: 30.1 words/minute, 167 characters/minute
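The speed figures in these posts reduce to a couple of divisions. A minimal sketch (the word and character counts below are illustrative stand-ins, back-computed from the stated rates, since the post doesn't give them directly):

```python
def writing_speed(words, chars, seconds):
    """Return (words/minute, characters/minute) for a timed writing session."""
    minutes = seconds / 60
    return words / minutes, chars / minutes

# Illustrative figures for a 26 min 48 s session at roughly the stated pace.
seconds = 26 * 60 + 48  # 1608 s between 13:35:06 and 14:01:54
wpm, cpm = writing_speed(836, 4529, seconds)
print(f"{wpm:.1f} words/minute, {cpm:.0f} characters/minute")  # 31.2 words/minute, 169 characters/minute
```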

[link] MIRI's 2015 in review

9 Kaj_Sotala 03 August 2016 12:03PM

https://intelligence.org/2016/07/29/2015-in-review/

The introduction:

As Luke had done in years past (see 2013 in review and 2014 in review), I (Malo) wanted to take some time to review our activities from last year. In the coming weeks Nate will provide a big-picture strategy update. Here, I’ll take a look back at 2015, focusing on our research progress, academic and general outreach, fundraising, and other activities.

After seeing signs in 2014 that interest in AI safety issues was on the rise, we made plans to grow our research team. Fueled by the response to Bostrom’s Superintelligence and the Future of Life Institute’s “Future of AI” conference, interest continued to grow in 2015. This suggested that we could afford to accelerate our plans, but it wasn’t clear how quickly.

In 2015 we did not release a mid-year strategic plan, as Luke did in 2014. Instead, we laid out various conditional strategies dependent on how much funding we raised during our 2015 Summer Fundraiser. The response was great; we had our most successful fundraiser to date. We hit our first two funding targets (and then some), and set out on an accelerated 2015/2016 growth plan.

As a result, 2015 was a big year for MIRI. After publishing our technical agenda at the start of the year, we made progress on many of the open problems it outlined, doubled the size of our core research team, strengthened our connections with industry groups and academics, and raised enough funds to maintain our growth trajectory. We’re very grateful to all our supporters, without whom this progress wouldn’t have been possible.

Availability Heuristic Considered Ambiguous

9 Gram_Stone 10 June 2016 10:40PM

(Content note: The experimental results on the availability bias, one of the biases described in Tversky and Kahneman's original work, have been overdetermined, which has led to at least two separate interpretations of the heuristic in the cognitive science literature. These interpretations also result in different experimental predictions. The audience probably wants to know about this. This post is also intended to measure audience interest in a tradition of cognitive scientific research that I've been considering describing here for a while. Finally, I steal from Scott Alexander the section numbering technique that he stole from someone else: I expect it to be helpful because there are several inferential steps to take in this particular article, and it makes it look less monolithic.)

Related to: Availability

I.

The availability heuristic is judging the frequency or probability of an event, by the ease with which examples of the event come to mind.

This statement is actually slightly ambiguous. I notice at least two possible interpretations with regards to what the cognitive scientists infer is happening inside of the human mind:

  1. Humans think things like, “I found a lot of examples, thus the frequency or probability of the event is high,” or, “I didn’t find many examples, thus the frequency or probability of the event is low.”
  2. Humans think things like, “Looking for examples felt easy, thus the frequency or probability of the event is high,” or, “Looking for examples felt hard, thus the frequency or probability of the event is low.”

I think the second interpretation is the one more similar to Kahneman and Tversky’s original description, as quoted above.

And it doesn’t seem that I would be building up a strawman by claiming that some adhere to the first interpretation, intentionally or not. From Medin and Ross (1996, p. 522):

The availability heuristic refers to a tendency to form a judgment on the basis of what is readily brought to mind. For example, a person who is asked whether there are more English words that begin with the letter ‘t’ or the letter ‘k’ might try to think of words that begin with each of these letters. Since a person can probably think of more words beginning with ‘t’, he or she would (correctly) conclude that ‘t’ is more frequent than ‘k’ as the first letter of English words.

And even that sounds at least slightly ambiguous to me, although it falls on the other side of the continuum between pure mental-content-ism and pure phenomenal-experience-ism that includes the original description.

II.

You can’t really tease out this ambiguity with the older studies on availability, because these two interpretations generate the same prediction. There is a strong correlation between the number of examples recalled and the ease with which those examples come to mind.

For example, consider a piece of the setup in Experiment 3 from the original paper on the availability heuristic. The subjects in this experiment were asked to estimate the frequency of two types of words in the English language: words with 'k' as their first letter, and words with 'k' as their third letter. There are twice as many words with 'k' as their third letter, but there was a bias towards estimating that there are more words with 'k' as their first letter.

How, in experiments like these, are you supposed to figure out whether the subjects are relying on mental content or phenomenal experience? Both mechanisms predict the outcome, "Humans will be biased towards estimating that there are more words with 'k' as their first letter." And a lot of the later studies just replicate this result in other domains, and thus suffer from the same ambiguity.
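The quantity subjects are estimating here is mechanical to compute given a word list. A minimal sketch with a toy list (the words and counts are purely illustrative; in a real English corpus, third-position 'k' wins):

```python
def count_letter_at(words, letter, position):
    """Count how many words have `letter` at the given 0-based position."""
    return sum(1 for w in words if len(w) > position and w[position] == letter)

# Toy list for illustration only, not a real corpus.
words = ["kite", "king", "kayak", "make", "bike", "lake", "ankle", "acre"]
first = count_letter_at(words, "k", 0)   # index 0 = first letter
third = count_letter_at(words, "k", 2)   # index 2 = third letter
print(first, third)  # 3 4
```

The experimental puzzle is exactly that subjects' estimates of these two counts diverge from the corpus truth, and the two interpretations disagree about which internal signal (count of retrieved examples vs. felt ease of retrieval) drives that divergence.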

III.

If you wanted to design a better experiment, where would you begin?

Well, if we think of feelings as sources of information in the way that we regard thoughts as sources of information, then we should find that we have some (perhaps low, perhaps high) confidence in the informational value of those feelings, as we have some level of confidence in the informational value of our thoughts.

This is useful because it suggests a method for detecting the use of feelings as sources of information: if we are led to believe that a source of information has low value, then its relevance will be discounted; and if we are led to believe that it has high value, then its relevance will be augmented. Detecting this phenomenon in the first place is probably a good place to start before trying to determine whether the classic availability studies demonstrate a reliance on phenomenal experience, mental content, or both. 

Fortunately, Wänke et al. (1995) conducted a modified replication of the experiment described above with exactly the properties that we’re looking for! Let’s start with the control condition.

In the control condition, subjects were given a blank sheet of paper and asked to write down 10 words that have ‘t’ as the third letter, and then to write down 10 words that begin with the letter ‘t’. After this listing task, they rated the extent to which words beginning with a ‘t’ are more or less frequent than words that have ‘t’ as the third letter. As in the original availability experiments, subjects estimated that words that begin with a ‘t’ are much more frequent than words with a ‘t’ in the third position.

Like before, this isn’t enough to answer the questions that we want to answer, but it can’t hurt to replicate the original result. It doesn’t really get interesting until you do things that affect the perceived value of the subjects’ feelings.

Wänke et al. got creative and, instead of blank paper, they gave subjects in two experimental conditions sheets of paper imprinted with pale, blue rows of ‘t’s, and told them to write 10 words beginning with a ‘t’. One condition was told that the paper would make it easier for them to recall words beginning with a ‘t’, and the other was told that the paper would make it harder for them to recall words beginning with a ‘t’.

Subjects made to think that the magic paper made it easier to think of examples gave lower estimates of the frequency of words beginning with a ‘t’ in the English language. It felt easy to think of examples, but the experimenter made them expect that by means of the magic paper, so they discounted the value of the feeling of ease. Their estimates of the frequency of words beginning with 't' went down relative to the control condition.

Subjects made to think that the magic paper made it harder to think of examples gave higher estimates of the frequency of words beginning with a ‘t’ in the English language. It felt easy to recall examples, but the experimenter made them think it would feel hard, so they augmented the value of the feeling of ease. Their estimates of the frequency of words beginning with 't' went up relative to the control condition.

(Also, here's a second explanation by Nate Soares if you want one.)

So, at least in this sort of experiment, it looks like the subjects weren’t counting the number of examples they came up with; it looks like they really were using their phenomenal experiences of ease and difficulty to estimate the frequency of certain classes of words. This is some evidence for the validity of the second interpretation mentioned at the beginning.

IV.

So we know that there is at least one circumstance in which the second interpretation seems valid. This was a step towards figuring out whether the availability heuristic first described by Kahneman and Tversky is an inference from amount of mental content, or an inference from the phenomenal experience of ease of recall, or something else, or some combination thereof.

As I said before, the two interpretations have identical predictions in the earlier studies. The solution to this is to design an experiment where inferences from mental content and inferences from phenomenal experience cause different judgments.

Schwarz et al. (1991, Experiment 1) asked subjects to list either 6 or 12 situations in which they behaved either assertively or unassertively. Pretests had shown that recalling 6 examples was experienced as easy, whereas recalling 12 examples was experienced as difficult. After listing examples, subjects had to evaluate their own assertiveness.

As one would expect, subjects rated themselves as more assertive when recalling 6 examples of assertive behavior than when recalling 6 examples of unassertive behavior.

But the difference in assertiveness ratings didn’t increase with the number of examples. Subjects who had to recall examples of assertive behavior rated themselves as less assertive after reporting 12 examples rather than 6 examples, and subjects who had to recall examples of unassertive behavior rated themselves as more assertive after reporting 12 examples rather than 6 examples.

If they were relying on the number of examples, then we should expect their ratings for the recalled quality to increase with the number of examples. Instead, they decreased.

It could be that it got harder to come up with good examples near the end of the task, and that later examples were lower quality than earlier examples, and the increased availability of the later examples biased the ratings in the way that we see. Schwarz acknowledged this, checked the written reports manually, and claimed that no such quality difference was evident.

V.

It would still be nice if we could do better than taking Schwarz’s word on that though. One thing you could try is seeing what happens when you combine the methods we used in the last two experiments: vary the number of examples generated and manipulate the perceived relevance of the experiences of ease and difficulty at the same time. (Last experiment, I promise.)

Schwarz et al. (1991, Experiment 3) manipulated the perceived value of the experienced ease or difficulty of recall by having subjects listen to ‘new-age music’ played at half-speed while they worked on the recall task. Some subjects were told that this music would make it easier to recall situations in which they behaved assertively and felt at ease, whereas others were told that it would make it easier to recall situations in which they behaved unassertively and felt insecure. These manipulations make subjects perceive recall experiences as uninformative whenever the experience matches the alleged impact of the music; after all, it may simply be easy or difficult because of the music. On the other hand, experiences that are opposite to the alleged impact of the music are considered very informative.

When the alleged effects of the music were the opposite of the phenomenal experience of generating examples, the previous experimental results were replicated.

When the alleged effects of the music match the phenomenal experience of generating examples, then the experience is called into question, since you can’t tell if it’s caused by the recall task or the music.

When this is done, the pattern that we expect from the first interpretation of the availability heuristic holds. Thinking of 12 examples of assertive behavior makes subjects rate themselves as more assertive than thinking of 6 examples of assertive behavior; mutatis mutandis for unassertive examples. When people can’t rely on their experience, they fall back to using mental content, and instead of relying on how hard or easy things feel, they count.

Under different circumstances, both interpretations are useful, but of course, it’s important to recognize that a distinction exists in the first place.


Medin, D. L., & Ross, B. H. (1996). Cognitive psychology (2nd ed.). Fort Worth: Harcourt Brace.

Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Wänke, M., Schwarz, N. & Bless, H. (1995). The availability heuristic revisited: Experienced ease of retrieval in mundane frequency estimates. Acta Psychologica, 89, 83-90.

Using humility to counteract shame

9 Vika 15 April 2016 06:32PM

"Pride is not the opposite of shame, but its source. True humility is the only antidote to shame."

Uncle Iroh, "Avatar: The Last Airbender"

Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent - anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions - e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.

What is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards yourself while fully aware of your imperfections (while focusing on imperfections without compassion can lead to beating yourself up). According to LessWrong, "to be humble is to take specific actions in anticipation of your own errors", which seems more like a possible consequence of being humble than a definition.

Humility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school. While humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset?

One way is through negative visualization or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to imagine a person, real or fictional, who you consider to be humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don't actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.

(Cross-posted from my blog. Thanks to Janos Kramar for his feedback on this post.)

Updating towards the simulation hypothesis because you think about AI

9 SoerenMind 05 March 2016 10:23PM

(This post is both written up in a rush and very speculative so not as rigorous and full of links as a good post on this site should be but I'd rather get the idea out there than not get around to it.)


Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:


1) P(involved in AI* | ¬sim) = very low

2) P(involved in AI | sim) = high


Ergo, assuming that we fully accept this argument and its premises (ignoring e.g. model uncertainty), we should strongly update in favour of the simulation hypothesis.
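The structure above is just Bayes' rule with a large likelihood ratio. As a toy illustration (all numbers here are my own invented assumptions, not claims from the argument itself), even a sceptical prior gets swamped:

```python
# Toy Bayesian update for the two-premise argument above.
# The specific probabilities are illustrative assumptions only.

def posterior_sim(prior_sim, p_ai_given_sim, p_ai_given_not_sim):
    """P(sim | involved in AI) via Bayes' rule."""
    joint_sim = prior_sim * p_ai_given_sim
    joint_not_sim = (1 - prior_sim) * p_ai_given_not_sim
    return joint_sim / (joint_sim + joint_not_sim)

# Premise 1: P(involved in AI | not sim) is very low (say 1 in 10^8).
# Premise 2: P(involved in AI | sim) is high (say 0.5).
# Even with a 1% prior on being in a simulation, the posterior is near 1.
print(posterior_sim(0.01, 0.5, 1e-8))
```

The point of the sketch is only that the conclusion is driven by the likelihood ratio between the two premises, so quibbles about the prior matter much less than quibbles about the premises themselves.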


Premise 1


Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of homo sapiens that have lived so far), probably many more. What you know about the world of these beings is that at some point there will be a chain of events that leads to the creation of superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will see by an extremely large margin.


Waking up, you see that you’re in the body of one of the first 1000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be.


(To make the point even stronger and slightly more complicated: Bostrom suggests to use observer moments, e.g. an observer-second, rather than beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about or even working on AI since most of the observer seconds in people's lives don’t do so. You reading this sentence may be such a second.)


Therefore, P(involved in AI* | ¬sim) = very low.


Premise 2

 

Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.


Why would a superintelligent AI simulate the people (and even more so, the 'moments’) involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.


Another and probably more important motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are or to figure out ways of acausal trade. An AI created with the 'Hail Mary’ approach would need information about other AIs very urgently. In any case, there are many possible reasons to want to know who else there is in the universe.


Since you can’t visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain you’ll want to run MANY simulations in a Monte Carlo way with slightly changing conditions. Crucially, to run these simulations efficiently, you’ll run observer-moments (read: computations in your brain) more often the more causally important they are for the final outcome.
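The allocation rule described here — run causally important observer-moments more often — can be sketched as a simple proportional budget. Everything below (the category names, the weights, the budget) is my own hypothetical illustration, not part of the argument:

```python
# Hypothetical sketch: a simulator splits a fixed budget of runs across
# observer-moments in proportion to their causal importance for the
# final AI's properties. Weights and labels are invented for illustration.

def allocate_runs(importance, budget):
    """Return the number of simulation runs per moment, proportional to weight."""
    total = sum(importance.values())
    return {moment: round(budget * weight / total)
            for moment, weight in importance.items()}

importance = {
    "early AI safety researcher": 50,
    "capabilities engineer": 10,
    "uninvolved observer": 1,
}
runs = allocate_runs(importance, budget=6100)
# Causally central moments are simulated far more often, so a randomly
# sampled simulated moment is disproportionately likely to be one of them.
```

This is why, under SSA, finding yourself as a causally central observer-moment counts as evidence: such moments make up a far larger share of simulated moments than of unsimulated ones.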


Therefore, the thoughts of people which are more causally connected to the properties of the final AI will be run many times and that includes especially the thoughts of those who got involved first as they may cause path-changes. AI capabilities researchers would not be so interesting to simulate because their work has less effect on the eventual properties of an AI.


If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,

P(involved in AI | sim) = high


(The closer an observer-moment sits in the causal chain to the final AI's properties, the more often it gets simulated — as opposed to, say, routine capabilities research.)


If you're reading this, you're probably one of those people who could have some influence over the eventual properties of a superintelligent AI, and as a result should update towards living in some simulation that's meant to figure out the creation of an AI.


Why could this be wrong?


I could think of four general ways in which this argument could go wrong:


1) Our position in the history of the universe is not that unlikely

2) We would expect to see something else if we were in one of the aforementioned simulations.

3) There are other, more likely, situations we should expect to find ourselves in if we were in a simulation created by an AI

4) My anthropics are flawed


I’m most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics but I don’t know it. In the interest of my own time I’m not going to elaborate further on these failure modes and will leave that to the commenters.


Conclusion

Is this argument flawed? Or has it been discussed elsewhere? Please point me to it. Does it make sense? Then what are the implications for those most intimately involved with the creation of superhuman AI?


Appendix


My friend Matiss Apinis (othercenterism) put the first premise like this:


“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"

Education as Entertainment and the Downfall of LessWrong

9 SquirrelInHell 04 March 2016 02:06PM

Note 1: I'm not very serious about the second part of the title, I just thought it sounds more catchy. I'm a long time lurker writing here for the first time, and it's not my intention to alienate anyone. Also, hi, nice to meet you. Please leave a comment to achieve a result of making me happy about you having left a comment. But let's get to the point.

I think you might be familiar with TED Talks. Recall the last time you watched one, and how you felt while doing it.

[BZRT BZRT sound of imagination working]

In my case, I often got the feeling that I was learning something valuable while watching most TED Talks. The speakers are (mostly) obviously passionate and intelligent people, speaking about important matters they care about a lot. (Granted, I probably haven't watched more than a dozen TED Talks in all my life, so my sample is quite small, but I think it isn't very unrepresentative.)

But at some point, I started asking myself afterwards:

So, what have I actually learned?

Which translates in my internal dialect to:

For each major point, give a one-sentence summary and at least one example of how I could apply it.

(Note 2: don't treat this "one sentence summary" thing too strictly - of course it's only a reflex/shorthand that is useful in many situations, but not all. I like it because it's simple enough that it's installable as a subconscious trigger-action.)

And I could not state afterwards anything actually useful that I have learned from those "fascinating" videos (with at most one or two small exceptions).

This is exactly what I mean by "Education as Entertainment".

It's getting the enjoyable *feeling* of learning without any real progress.

[DUM DUM DUM sound of increasing dramatism]

And now, what if you use this concept to look at rationality materials?

For me, reading Eliezer's core braindump (basically the content of "Rationality: From AI to Zombies"), as well as braindumps (in the form of blogs) of several other people from the LW community, had definite learning value.

I take notes when I read those, and I have an accountability system in place that enables me to make sure I follow up on all the advice I give to myself, test the new ideas, and improve/drop/replace/implement as needed.

However, when I read (a significant part of) the content produced by the "modern" community-powered-LessWrong, I classify its actual learning value at around the same level as TED Talks.

Or YouTube videos with cats, only those don't give me the *impression* that I'm learning something.

THE END

Please let me know what you think.

Final Note: Please take my remarks with a grain of salt. What I write is meant to inspire thoughts in you, not to represent my best factual knowledge about the LW community.

[LINK] How A Lamp Took Away My Reading And A Box Brought It Back

9 CronoDAS 30 January 2016 04:55PM

By Ferrett Steinmetz

Ferrett isn't officially a Rationality Blogger, but he posts things that seem relevant fairly often. This one is in the spirit of "Beware Trivial Inconveniences". It's the story of how he realized that a small change in his environment led to a big change in his behavior...

Clearing An Overgrown Garden

9 Anders_H 29 January 2016 10:16PM

(tl;dr: In this post, I make some concrete suggestions for LessWrong 2.0.)

Less Wrong 2.0

A few months ago, Vaniver posted some ideas about how to reinvigorate Less Wrong. Based on comments in that thread and based on personal discussions I have had with other members of the community, I believe there are several different views on why Less Wrong is dying. The following are among the most popular hypotheses:

(1) Pacifism has caused our previously well-kept garden to become overgrown

(2) The aversion to politics has caused a lot of interesting political discussions to move away from the website

(3) People prefer posting to their personal blogs.

With this background, I suggest the following policies for Less Wrong 2.0.  This should be seen only as a starting point for discussion about the ideal way to implement a rationality forum. Most likely, some of my ideas are counterproductive. If anyone has better suggestions, please post them to the comments.

Moderation Policy:

There are four levels of users:  

  1. Users
  2. Trusted Users 
  3. Moderators
  4. Administrator
Users may post comments and top level posts, but their contributions must be approved by a moderator.

Trusted users may post comments and top level posts which appear immediately. Trusted user status is awarded by 2/3 vote among the moderators.

Moderators may approve comments made by non-trusted users. There should be at least 10 moderators to ensure that comments are approved within an hour of being posted, preferably quicker. If there is disagreement between moderators, the matter can be discussed on a private forum. Decisions may be altered by a simple majority vote.

The administrator (preferably Eliezer or Nate) chooses the moderators.
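The approval flow described above is simple enough to sketch in code. This is only a minimal illustration of the proposed policy, with class and method names of my own invention:

```python
# Minimal sketch of the proposed moderation flow. The proposal itself
# specifies no implementation; everything here is a hypothetical model.

class Forum:
    def __init__(self, moderators):
        self.moderators = set(moderators)
        self.trusted = set()
        self.pending = []   # posts awaiting moderator approval
        self.visible = []   # posts shown on the site

    def submit(self, user, post):
        # Trusted users' and moderators' posts appear immediately;
        # everyone else's go into the approval queue.
        if user in self.trusted or user in self.moderators:
            self.visible.append(post)
        else:
            self.pending.append(post)

    def approve(self, moderator, post):
        # Any single moderator may approve a queued post.
        if moderator in self.moderators and post in self.pending:
            self.pending.remove(post)
            self.visible.append(post)

    def promote(self, user, votes_for):
        # Trusted status requires a 2/3 vote among the moderators.
        if votes_for * 3 >= len(self.moderators) * 2:
            self.trusted.add(user)
```

One design consequence worth noting: the queue only works if approvals are fast, which is why the proposal asks for at least 10 moderators.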

Personal Blogs:


All users are assigned a personal subdomain, such as Anders_H.lesswrong.com. When publishing a top-level post, users may click a checkbox to indicate whether the post should appear only on their personal subdomain, or also in the Less Wrong discussion feed. The commenting system is shared between the two access pathways. Users may choose a design template for their subdomain. However, when the post is accessed from the discussion feed, the default template overrides the user-specific template. The personal subdomain may include a blogroll, an about page, and other information. Users may purchase a top-level domain as an alias for their subdomain.

Standards of Discourse and Policy on Mindkillers:

All discussion in Less Wrong 2.0 is seen explicitly as an attempt to exchange information for the purpose of reaching Aumann agreement. In order to facilitate this goal, communication must be precise. Therefore, all users agree to abide by Crocker's Rules for all communication that takes place on the website.  

However, this is not a license for arbitrary rudeness.  Offensive language is permitted only if it is necessary in order to point to a real disagreement about the territory. Moreover, users may not repeatedly bring up the same controversial discussion outside of their original context.

Discussion of politics is explicitly permitted as long as it adheres to the rules outlined above. All political opinions are permitted (including opinions which are seen as taboo by society at large), as long as the discussion is conducted with civility and in a manner that is suited for dispassionate exchange of information, and suited for accurate reasoning about the consequences of policy choice. By taking part in any given discussion, all users are expected to pre-commit to updating in response to new information.

Upvotes:

Only trusted users may vote. There are two separate voting systems.  Users may vote on whether the post raises a relevant point that will result in interesting discussion (quality of contribution) and also on whether they agree with the comment (correctness of comment). The first is a property both of the comment and of the user, and is shown in their user profile.  The second scale is a property only of the comment. 

All votes are shown publicly (for an example of a website where this is implemented, see for instance dailykos.com).  Abuse of the voting system will result in loss of Trusted User Status. 
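The two-axis voting scheme separates "this is a good contribution" from "I agree with this", which can be modelled with two independent public tallies. This is only a sketch under my own assumptions; none of these names come from the proposal:

```python
# Sketch of the proposed two-axis public voting scheme:
# one axis for quality of contribution, one for agreement.
# All class and function names are hypothetical.

class Comment:
    def __init__(self, author):
        self.author = author
        self.quality_votes = {}    # voter -> +1 or -1, shown publicly
        self.agreement_votes = {}  # voter -> +1 or -1, shown publicly

    def vote(self, voter, quality=0, agreement=0):
        # A trusted user may vote on either axis independently;
        # revoting overwrites the voter's previous vote on that axis.
        if quality:
            self.quality_votes[voter] = quality
        if agreement:
            self.agreement_votes[voter] = agreement

    def quality_score(self):
        return sum(self.quality_votes.values())

    def agreement_score(self):
        return sum(self.agreement_votes.values())

def user_quality(comments, author):
    # Quality is a property of both the comment and the user:
    # the user's profile aggregates quality scores across their comments.
    return sum(c.quality_score() for c in comments if c.author == author)
```

Storing votes keyed by voter (rather than as anonymous counts) is what makes the "all votes are shown publicly" requirement, and the detection of voting abuse, straightforward.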

How to Implement This

After the community comes to a consensus on the basic ideas behind LessWrong 2.0, my preference is for MIRI to implement it as a replacement for Less Wrong. However, if for some reason MIRI is unwilling to do this, and if there is sufficient interest in going in this direction, I offer to pay server costs. If necessary, I also offer to pay some limited amount for someone to develop the codebase (based on Open Source solutions). 

Other Ideas:


MIRI should start a professionally edited rationality journal (For instance called "Rationality") published bi-monthly. Users may submit articles for publication in the journal. Each week, one article is chosen for publication and posted to a special area of Less Wrong. This replaces "main". Every two months, these articles are published in print in the journal.  

The idea behind this is as follows:
(1) It will incentivize users to compete for the status of being published in the journal.
(2) It will allow contributors to put the article on their CV.
(3) It may bring in high-quality readers who are unlikely to read blogs.  
(4) Every week, the published article may be a natural choice of discussion topic at Less Wrong meetups.

[Link] Lifehack article promoting rationality-themed ideas, namely long-term orientation, mere-exposure effect, consider-the-alternative, and agency

9 Gleb_Tsipursky 11 January 2016 08:14PM

Here's my article in Lifehack, one of the most prominent self-improvement websites, bringing rationality-style ideas to a broad audience, specifically long-term orientation, mere-exposure effect, consider-the-alternative, and agency :-)

 

P.S. Based on feedback from the LessWrong community, I made sure to avoid mentioning LessWrong or rationality in the article.

[Link] Huffington Post article about dual process theory

9 Gleb_Tsipursky 06 January 2016 01:44AM

Published a piece in The Huffington Post popularizing dual-process theory in layman's language.

 

P.S. I know some don't like using terms like Autopilot and Intentional to describe System 1 and System 2, but I find from long experience that these terms resonate well with a broad audience. Also, I know dual process theory is criticized by some, but we have to start somewhere, and just explaining dual process theory is a way to start bridging the inference gap to higher meta-cognition.
