
Rationalist house

3 Elo 27 August 2014 10:52PM

At the Australia online hangout, one of the topics we discussed (before I fell asleep on camera in front of a bunch of people) was writing a rationality TV show as an outreach project.  Of course, there are more ways for this to go wrong than right, so I figured it's worth mentioning the ideas and getting some comments.

The strategy is to have a set of regular characters whose rationality behaviour seems nuts - sometimes because it is, when taken out of context - and one "blank" person who tries to join "rationality house" and work things out.  My aim was to have each episode strawman a rationality behaviour and then steelman it, so that by the end of the episode it saves the day, makes someone happy, achieves a goal, or reaches some other <generic win-state>.

Here is a list of notes of characters from the hangout or potential topics to talk about.

  • No showers; bacterial showers instead
  • Stopwatches everywhere
  • Temperature controls everywhere, light controls
  • Radical honesty person
  • Soylent-only eating person
  • Born-again atheist
  • Bayesian person
  • Polyphasic sleep cycles
I have not written much in my life, and certainly never anything for TV, but it sounds like a fun project.  I figured I would pick a pilot idea, roll with it, and see if I can make a script.  I could probably also get Sydney folk to act for a first-round web-cast version.

I was wondering if anyone has other rationality topics worth adding to the list - ones that can be easily strawmanned and then steelmanned - and whether anyone has experience with writing for TV worth sharing, as well as whether anyone is interested in joining the project as a writer or a sounding board...


[LINK] Could a Quantum Computer Have Subjective Experience?

11 shminux 26 August 2014 06:55PM

Yet another exceptionally interesting blog post by Scott Aaronson, describing his talk at the Quantum Foundations of a Classical Universe workshop, videos of which should be posted soon. Despite the disclaimer "My talk is for entertainment purposes only; it should not be taken seriously by anyone", it raises several serious and semi-serious points about the nature of conscious experience and related paradoxes, which are generally overlooked by the philosophers, including Eliezer, because they have no relevant CS/QC expertise. For example:

  • Is an FHE-encrypted sim with a lost key conscious?
  • If you "untorture" a reversible simulation, did it happen? What does the untorture feel like?
  • Is the Vaidman brain conscious? (You have to read the blog post to learn what it is, not going to spoil it.)

Scott also suggests a model of consciousness which sort-of resolves the issues of cloning, identity and such, by introducing what he calls a "digital abstraction layer" (again, read the blog post to understand what he means by that). Our brains might be lacking such a layer and so be "fundamentally unclonable". 

Another interesting observation is that you never actually kill the cat in the Schroedinger's cat experiment, for a reasonable definition of "kill".

There are several more mind-blowing insights in this "entertainment purposes" post/talk, related to the existence of p-zombies, consciousness of Boltzmann brains, the observed large-scale structure of the Universe and the "reality" of Tegmark IV.

I certainly had the humbling experience of realizing that Scott is at the level above mine, and I would like to know if other people did, too.

Finally, the standard bright dilettante caveat applies: if you think up a quick objection to what an expert in the area argues, and you yourself are not such an expert, the odds are extremely heavy that this objection is either silly or has been considered and addressed by the expert already. 

 

Reverse engineering of belief structures

3 Stefan_Schubert 26 August 2014 06:00PM

(Cross-posted from my blog.)

Since some belief-forming processes are more reliable than others, learning by what processes different beliefs were formed is for several reasons very useful. Firstly, if we learn that someone's belief that p (where p is a proposition such as "the cat is on the mat") was formed by a reliable process, such as visual observation under ideal circumstances, we have reason to believe that p is probably true. Conversely, if we learn that the belief that p was formed by an unreliable process, such as motivated reasoning, we have no particular reason to believe that p is true (though it might be - by luck, as it were). Thus we can use knowledge about the process that gave rise to the belief that p to evaluate the chance that p is true.

Secondly, we can use knowledge about belief-forming processes in our search for knowledge. If we learn that some alleged expert's beliefs are more often than not caused by unreliable processes, we are better off looking for other sources of knowledge. Or, if we learn that the beliefs we acquire under certain circumstances - say under emotional stress - tend to be caused by unreliable processes such as wishful thinking, we should cease to acquire beliefs under those circumstances.

Thirdly, we can use knowledge about others' belief-forming processes to try to improve them. For instance, if it turns out that a famous scientist has used outdated methods to arrive at their experimental results, we can announce this publicly. Such "shaming" can be a very effective means of scaring people into using more reliable methods, and will typically not only have an effect on the shamed person, but also on others who learn about the case. (Obviously, shaming also has its disadvantages, but my impression is that it has played a very important historical role in the spreading of reliable scientific methods.)

 

A useful way of inferring by what process a set of beliefs was formed is by looking at its structure. This is a very general method, but in this post I will focus on how we can infer that a certain set of beliefs most probably was formed by (politically) motivated cognition. Another use is covered here and more will follow in future posts.

Let me give two examples. Firstly, suppose that we give American voters the following four questions:

  1. Do expert scientists mostly agree that genetically modified foods are safe?
  2. Do expert scientists mostly agree that radioactive wastes from nuclear power can be safely disposed of in deep underground storage facilities?
  3. Do expert scientists mostly agree that global temperatures are rising due to human activities?
  4. Do expert scientists mostly agree that the "intelligent design" theory is false?

The answer to all of these questions is "yes".* Now suppose that a disproportionate number of republicans answer "yes" to the first two questions and "no" to the third and fourth, and that a disproportionate number of democrats answer "no" to the first two questions and "yes" to the third and fourth. In light of what we know about motivated cognition, these are very suspicious patterns or structures of beliefs, since they are precisely the patterns we would expect given the hypothesis that people acquire whatever beliefs on empirical questions suit their political preferences. Since no other plausible hypothesis seems able to explain these patterns as well, this confirms the hypothesis. (Obviously, if we were to give the voters more questions and their answers retained their one-sided structure, that would confirm the hypothesis even more strongly.)

Secondly, consider a policy question - say minimum wages - on which a number of empirical claims have bearing. For instance, these empirical claims might be that minimum wages significantly decrease employers' demand for new workers, that they cause inflation, that they significantly increase the supply of workers (since they provide stronger incentives to work) and that they significantly reduce workers' tendency to use public services (since they now earn more). Suppose that there are five such claims which tell in favour of minimum wages and five that tell against them, and that you think that each of them has a roughly 50 % chance of being true. Also, suppose that they are probabilistically independent of each other, so that learning that one of them is true does not affect the probabilities of the other claims.

Now suppose that in a debate, all proponents of minimum wages defend all of the claims that tell in favour of minimum wages, and reject all of the claims that tell against them, and vice versa for the opponents of minimum wages. Now this is a very surprising pattern. It might of course be that one side is right across the board, but given your prior probability distribution (that the claims are independent and have a 50 % probability of being true) a more reasonable interpretation of the striking degree of coherence within both sides is, according to your lights, that they are both biased; that they are both using motivated cognition. (See also this post for more on this line of reasoning.)

The difference between the first and the second case is that in the former, your hypothesis that the test-takers are biased is based on the fact that they are provably wrong on certain questions, whereas in the second case, you cannot point to any issue where either side is provably wrong. However, the patterns of their claims are so improbable given the hypothesis that they have reviewed the evidence impartially, and so likely given the hypothesis of bias, that they nevertheless strongly confirm the latter. What they are saying is simply "too good to be true".
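To make the numbers concrete, here is a rough sketch in Python of the second case. The figures are illustrative only: ten independent claims at 50% each, and an assumed 90% chance that someone doing motivated cognition adopts the full party line.

```python
# Illustrative only: how surprising is a perfectly partisan answer pattern
# if each empirical claim is independent with a 50% chance of being true?

n_claims = 10  # five claims favouring minimum wages, five against

# Probability that an impartial reviewer ends up endorsing exactly the
# ten positions that happen to line up with one side of the policy debate.
p_pattern_given_impartial = 0.5 ** n_claims  # about 0.001

# Under the bias (motivated cognition) hypothesis, assume a partisan almost
# always adopts the full party line; the 0.9 is a made-up number.
p_pattern_given_bias = 0.9

# Likelihood ratio in favour of the bias hypothesis.
likelihood_ratio = p_pattern_given_bias / p_pattern_given_impartial
print(f"P(pattern | impartial) = {p_pattern_given_impartial:.4f}")
print(f"Likelihood ratio for bias: {likelihood_ratio:.0f} : 1")  # ~922 : 1
```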


These kinds of arguments, in which you infer a belief-forming process from a structure of beliefs (i.e you reverse engineer the beliefs), have of course always been used. (A salient example is Marxist interpretations of "bourgeois" belief structures, which, Marx argued, supported their material interests to a suspiciously high degree.) Recent years have, however, seen a number of developments that should make them less speculative and more reliable and useful.

Firstly, psychological research such as Tversky and Kahneman's has given us a much better picture of the mechanisms by which we acquire beliefs. Experiments have shown that we fall prey to an astonishing list of biases, and have identified which circumstances are most likely to trigger them. 

Secondly, a much greater portion of our behaviour is now being recorded, especially on the Internet (where we spend an increasing share of our time). This obviously makes it much easier to spot suspicious patterns of beliefs.

Thirdly, our algorithms for analyzing behaviour are quickly improving. FiveLabs recently launched a tool that analyzes your big five personality traits on the basis of your Facebook posts. Granted, this tool does not seem completely accurate, and inferring bias promises to be a harder task (since the correlations are more complicated than that between usage of exclamation marks and extraversion, or that between using words such as "nightmare" and "sick of" and neuroticism). Nevertheless, better algorithms and more computing power will take us in the right direction.

 

In my view, there is thus a large untapped potential to infer bias from the structure of people's beliefs, which in turn would be inferred from their online behaviour. In coming posts, I intend to flesh out my ideas on this in some more detail. Any comments are welcome and might be incorporated in future posts.

 

* The second and the third questions are taken from a paper by Dan Kahan et al, which refers to the US National Academy of Sciences (NAS) assessment of expert scientists' views on these questions. Their study shows that many conservatives don't believe that experts agree on climate change, whereas a fair number of liberals think experts don't agree that nuclear storage is safe, confirming the hypothesis that people let their political preferences influence their empirical beliefs. The assessment of expert consensus on the first and fourth question are taken from Wikipedia.

Asking people what they think about the expert consensus on these issues, rather than about the issues themselves, is a good idea, since it's much easier to come to an agreement on what the true answer is on the former sort of question. (Of course, you can deny that professors from prestigious universities count as expert scientists, but that would be a quite extreme position that few people hold.) 

Changes to my workflow

19 paulfchristiano 26 August 2014 05:29PM

About 18 months ago I made a post here on my workflow. I've received a handful of requests for follow-up, so I thought I would make another post detailing changes since then. I expect this post to be less useful than the last one.

For the most part, the overall outline has remained pretty stable and feels very similar to 18 months ago. Things not mentioned below have mostly stayed the same. I believe that the total effect of continued changes has been continued but much smaller improvement, though it is hard to tell (as opposed to the last round of changes, which were more clearly improvements).

Based on comparing time logging records I seem to now do substantially more work on average, but there are many other changes during this period that could explain the change (including changes in time logging). Changes other than work output are much harder to measure; I feel like they are positive but I wouldn't be surprised if this were an illusion.

Splitting days:

I now regularly divide my day into two halves, and treat the two halves as separate units. I plan each separately and reflect on each separately. I divide them by an hour long period of reflecting on the morning, relaxing for 5-10 minutes, napping for 25-30 minutes, processing my emails, and planning the evening. I find that this generally makes me more productive and happier about the day. Splitting my days is often difficult due to engagements in the middle of the day, and I don't have a good solution to that.

WasteNoTime:

I have longstanding objections to explicitly rationing internet use (since it seems either indicative of a broader problem that should be resolved directly, or else to serve a useful function that would be unwise to remove). That said, I now use the extension WasteNoTime to limit my consumption of blogs, webcomics, facebook, news sites, browser games, etc., to 10 minutes each half-day. This has cut the amount of time I spend browsing the internet from an average of 30-40 minutes to an average of 10-15 minutes. It doesn't seem to have been replaced by lower-quality leisure, but by a combination of work and higher-quality leisure.

Similarly, I turned off the newsfeed in facebook, which I found to improve the quality of my internet time in general (the primary issue was that I would sometimes be distracted by the newsfeed while sending messages over facebook, which wasn't my favorite way to use up wastenotime minutes).

I also tried StayFocusd, but ended up adopting WasteNoTime because of the ability to set limits per half-day (via "At work" and "not at work" timers) rather than per-day. I find that the main upside is cutting off the tail of derping (e.g. getting sucked into a blog comment thread, or looking into a particularly engrossing issue), and for this purpose per half-day timers are much more effective.

Email discipline:

I set gmail to archive all emails on arrival and assign them the special label "In." This lets me search for emails and compose emails, using the normal gmail interface, without being notified of new arrivals. I process the items with label "In" (typically turning emails into todo items to be processed by the same system that deals with other todo items) at the beginning of each half day. Each night I scan my email quickly for items that require urgent attention. 

Todo lists / reminders:

I continue to use todo lists for each half day and for a range of special conditions. I now check these lists at the beginning of each half day rather than before going to bed.

I also maintain a third list of "reminders." These are things that I want to be reminded of periodically, organized by day; each morning I look at the day's reminders and think about them briefly. Each of them is then copied and filed under a future day. If I feel like I remember a thing well, I file it far in the future; if I feel like I don't remember it well, I file it in the near future.

Over the last month most of these reminders have migrated to be in the form "If X, then Y," e.g. "If I agree to do something for someone, then pause, say `actually I should think about it for a few minutes to make sure I have time,' and set a 5 minute timer that night to think about it more clearly." These are designed to fix problems that I notice when reflecting on the day. This is a recommendation from CFAR folks, which seems to be working well, though is the newest part of the system and least tested.

Isolating "todos":

I now attempt to isolate things that probably need doing, but don't seem maximally important; I aim to do them only on every 5th day, and only during one half-day. If I can't finish them in this time, I will typically delay them 5 days. When they spill over to other days, I try to at least keep them to one half-day or the other. I don't know if this helps, but it feels better to have isolated unproductive-feeling blocks of time rather than scattering it throughout the week.

I don't do this very rigidly. I expect the overall level of discipline I have about it is comparable to or lower than a normal office worker who has a clearer division between their personal time and work time.

Toggl:

I now use Toggl for detailed time tracking. Katja Grace and I experimented with about half a dozen other systems (Harvest, Yast, Klok, Freckle, Lumina, I expect others I'm forgetting) before settling on Toggl. It has a depressing number of flaws, but ends up winning for me by making it very fast to start and switch timers which is probably the most important criterion for me. It also offers reviews that work out well with what I want to look at.

I find the main value adds from detailed time tracking are:

1. Knowing how long I've spent on projects, especially long-term projects. My intuitive estimates are often off by more than a factor of 2, even for things taking 80 hours; this can lead me to significantly underestimate the costs of taking on some kinds of projects, and it can also lead me to think an activity is unproductive instead of productive by overestimating how long I've actually spent on it.

2. Accurate breakdowns of time in a day, which guide efforts at improving my day-to-day routine. They probably also make me feel more motivated about working, and improve focus during work.

Reflection / improvement:

Reflection is now a smaller fraction of my time, down from 10% to 3-5%, based on diminishing returns to finding stuff to improve. Another 3-5% is now redirected into longer-term projects to improve particular aspects of my life (I maintain a list of possible improvements, roughly sorted by goodness). Examples: buying new furniture, improvements to my diet (Holden's powersmoothie is great), improvements to my sleep (low doses of melatonin seem good). At the moment the list of possible improvements is long enough that adding to the list is less valuable than doing things on the list.

I have equivocated a lot about how much of my time should go into this sort of thing. My best guess is the number should be higher.

-Pomodoros:

I don't use pomodoros at all any more. I still have periods of uninterrupted work, often of comparable length, for individual tasks. This change wasn't extremely carefully considered, it mostly just happened. I find explicit time logging (such that I must consciously change the timer before changing tasks) seems to work as a substitute in many cases. I also maintain the habit of writing down candidate distractions and then attending to them later (if at all).

For larger tasks I find that I often prefer longer blocks of unrestricted working time. I continue to use Alinof timer to manage these blocks of uninterrupted work.

-Catch:

Catch disappeared, and I haven't found a replacement that I find comparably useful. (It's also not that high on the list of priorities.) I now just send emails to myself, but I do it much less often.

-Beeminder:

I no longer use beeminder. This again wasn't super-considered, though it was based on a very rough impression of overhead being larger than the short-term gains. I think beeminder was helpful for setting up a number of habits which have persisted (especially with respect to daily routine and regular focused work), and my long-term averages continue to satisfy my old beeminder goals.

Project outlines:

I now organize notes about each project I am working on in a more standardized way, with "Queue of todos," "Current workspace," and "Data" as the three subsections. I'm not thrilled by this system, but it seems to be an improvement over the previous informal arrangement. In particular, having a workspace into which I can easily write thoughts without thinking about where they fit, and only later sorting them into the data section once it's clearer how they fit in, decreases the activation energy of using the system. I now use Toggl rather than maintaining time logs by hand.

Randomized trials:

As described in my last post I tried various randomized trials (esp. of effects of exercise, stimulant use, and sleep on mood, cognitive performance, and productive time). I have found extracting meaningful data from these trials to be extremely difficult, due to straightforward issues with signal vs. noise. There are a number of tests which I still do expect to yield meaningful data, but I've increased my estimates for the expensiveness of useful tests substantially, and they've tended to fall down the priority list. For some things I've just decided to do them without the data, since my best guess is positive in expectation and the data is too expensive to acquire.

 

The immediate real-world uses of Friendly AI research

4 ancientcampus 26 August 2014 02:47AM

Much of the glamor and attention paid toward Friendly AI is focused on the misty-future event of a super-intelligent general AI, and how we can prevent it from repurposing our atoms to better run Quake 2. Until very recently, that was the full breadth of the field in my mind. I recently realized that dumber, narrow AI is a real thing today, helpfully choosing advertisements for me and running my 401K. As such, making automated programs safe to let loose on the real world is not just a problem to solve as a favor for the people of tomorrow, but something with immediate real-world advantages that has indeed already been going on for quite some time. Veterans in the field surely already understand this, so this post is directed at people like me, with a passing and disinterested understanding of the point of Friendly AI research, and outlines an argument that the field may be useful right now, even if you believe that an evil AI overlord is not on the list of things to worry about in the next 40 years.

 

Let's look at the stock market. High-Frequency Trading is the practice of using computer programs to make fast trades constantly throughout the day, and accounts for more than half of all equity trades in the US. So, the economy today is already in the hands of a bunch of very narrow AIs buying and selling to each other. And as you may or may not already know, this has already caused problems. In the “2010 Flash Crash”, the Dow Jones suddenly and mysteriously plummeted, only to mostly recover within a few minutes. The reasons for this were of course complicated, but it boiled down to a couple of red flags triggering in numerous programs, setting off a cascade of wacky trades.

 

The long-term damage was not catastrophic to society at large (though I'm sure a couple fortunes were made and lost that day), but it illustrates the need for safety measures as we hand over more and more responsibility and power to processes that require little human input. It might be a blue moon before anyone makes true general AI, but adaptive city traffic-light systems are entirely plausible in upcoming years.

 

To me, Friendly AI isn't solely about making a human-like intelligence that doesn't hurt us – we need techniques for testing automated programs, predicting how they will act when let loose on the world, and how they'll act when faced with unpredictable situations. Indeed, when framed like that, it looks less like a field for “the singularitarian cultists at LW”, and more like a narrow-but-important specialty in which quite a bit of money might be made.

 

After all, I want my self-driving car.

 

(To the actual researchers in FAI – I'm sorry if I'm stretching the field's definition to include more than it does or should. If so, please correct me.)

Persistent Idealism

9 jkaufman 26 August 2014 01:38AM

When I talk to people about earning to give, it's common to hear worries about "backsliding". Yes, you say you're going to go make a lot of money and donate it, but once you're surrounded by rich coworkers spending heavily on cars, clothes, and nights out, will you follow through? Working at a greedy company in a selfishness-promoting culture, you could easily become corrupted and lose your initial values and motivation.

First off, this is a totally reasonable concern. People do change, and we are pulled towards thinking like the people around us. I see two main ways of working against this:

  1. Be public with your giving. Make visible commitments and then list your donations. This means that you can't slowly slip away from giving; either you publish updates saying you're not going to do what you said you would, or you just stop updating and your pages become stale. By making a public promise you've given friends permission to notice that you've stopped and ask "what changed?"
  2. Don't just surround yourself with coworkers. Keep in touch with friends and family. Spend some time with other people in the effective altruism movement. You could throw yourself entirely into your work, maximizing income while sending occasional substantial checks to GiveWell's top picks, but without some ongoing engagement with the community and the research this doesn't seem likely to last.

One implication of the "won't you drift away" objection, however, is often that if instead of going into earning to give you become an activist then you'll remain true to your values. I'm not so sure about this: many people who are really into activism and radical change in their 20s have become much less ambitious and idealistic by their 30s. You can call it "burning out" or "selling out" but decreasing idealism with age is very common. This doesn't mean people earning to give don't have to worry about losing their motivation—in fact it points the opposite way—but this isn't a danger unique to the "go work at something lucrative" approach. Trying honestly to do the most good possible is far from the default in our society, and wherever you are there's going to be pressure to do the easy thing, the normal thing, and stop putting so much effort into altruism.

Open thread, 25-31 August 2014

3 jaime2000 25 August 2014 11:14AM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Bayesianism for humans: prosaic priors

16 BT_Uytya 24 August 2014 11:14PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before. 
I like the lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. This post is about the second penny.

Prosaic Priors

The second insight can be formulated as «the dull explanations are more likely to be correct because they tend to have high prior probability.»

Why is that? 

1) Almost by definition! Some property X is 'banal' if X applies to a lot of people in a disappointingly mundane way, not having any redeeming features which would make it more rare (and, hence, interesting).

In other words, X is banal iff the base rate of X is high. Or, you could say, the prior probability of X is high.

1.5) Because of Occam's Razor and burdensome details. One way to make something boring more exciting is to add interesting details: some special features which will make sure that this explanation is about you as opposed to 'about almost anybody'.

This could work the other way around: sometimes the explanation feels unsatisfying exactly because it was shaved of any unnecessary and (ultimately) burdensome details.

2) Often, the alternative to a mundane explanation is something unique and custom-made to fit the case you are interested in. And anybody familiar with overfitting and the conjunction fallacy (and the fact that people tend to love coherent stories with blinding passion1) should be very suspicious of such things. So there could be a strong bias against stale explanations, which should be countered.
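A toy calculation makes the point vivid. The priors and likelihoods below are made up for illustration: even if the exotic, custom-made explanation fits the observations somewhat better, the mundane explanation's higher base rate can leave it far ahead on posterior probability.

```python
# Toy Bayes update: a mundane explanation with a high prior vs. a
# custom-built explanation with a low prior. All numbers are illustrative.

prior_mundane = 0.30   # high base rate: applies to lots of people
prior_exotic = 0.01    # special-purpose hypothesis, tailored to this case

# Suppose the exotic hypothesis actually fits the observations a bit better.
likelihood_mundane = 0.6   # P(observations | mundane)
likelihood_exotic = 0.9    # P(observations | exotic)

joint_mundane = prior_mundane * likelihood_mundane
joint_exotic = prior_exotic * likelihood_exotic
normaliser = joint_mundane + joint_exotic  # ignoring all other hypotheses

print(f"P(mundane | obs) ~ {joint_mundane / normaliser:.2f}")  # ~0.95
print(f"P(exotic  | obs) ~ {joint_exotic / normaliser:.2f}")   # ~0.05
```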

* * *

I fully grokked this while in the process of CBT-induced soul-searching; usage in that context still looks the most natural to me, but I believe that the heuristic applies more widely.

Examples

1) I'm fairly confident that I'm an introvert. Still, sometimes I can behave like an extrovert. I was interested in the causes of this "extroversion activation", as I called it2. I suspected that I really had two modes of functioning (with "introversion" being the default one), and some events — for example, mutual interest (when I am interested in a person I was talking to, and xe is interested in me) or feeling high-status — made me switch between them.

Or, you know, it could just be a reduction in social anxiety, which makes people more communicative. Increased anxiety wasn't a new element to be postulated; I already knew I had it, yet I was tempted to make up new mental entities, and the prosaic explanation about anxiety managed to elude me for a while.

2) I find it hard to do something I consider worthwhile while on spring break, despite having lots of free time. I tend to make grandiose plans — I should meet new people! I should be more involved in sports! I should start using Anki! I should learn Lojban! I should practice meditation! I should read these textbooks, including doing most of the exercises! — and then fail to do almost anything. Yet I manage to do some impressive stuff during the academic term, despite having less time and more commitments.

This paradoxical situation calls for explanation.

The first hypothesis that came to my mind was about activation energy. It takes effort to go  from "procrastinating" to "doing something"; speaking more generally, you can say that it takes effort to go from "lazy day" to "productive day". During the academic term, I am forced to make most of my days productive: I have to attend classes, do homework, etc. And, already having done something good, I can do something else as well. During spring break, I am deprived of that natural structure, and, hence I am on my own in terms of starting doing something I find worthwhile.

The alternative explanation: I was tired. Because, you know, vacation comes right after midterms, and I tend to go all out while preparing for midterms. I am exhausted, my energy and willpower are scarce, so it's no wonder I am having trouble utilizing it.

(I don't really believe in the latter explanation (I think that my situation is caused by several factors, including the two outlined above), so it is also an example of a descriptive "probable enough" hypothesis.)

3) This example comes from Slate Star Codex. Nerds tend to find aversive many group bonding activities that ordinary people supposedly enjoy, such as patriotism, prayer, team sports, and pep rallies. Supposedly, they should feel (with the tear-jerking passion of a thousand exploding suns) a great unity with their fellow citizens, church-goers, teammates or pupils respectively, but instead they feel nothing.

Might it be that nerds are unable to enjoy these activities because something is broken inside their brains? One could be tempted to construct an elaborate argument involving the autism spectrum and a mild case of schizoid personality disorder. In other words, this calls for postulating a rare form of autism which affects only some types of social behaviour (perception of group activities), leaving other types unchanged.

Or, you know, maybe nerds just don't like the group they are supposed to root for. Maybe nerds don't feel unity and relationship to The Great Whole because they don't feel like they truly belong here.

As Scott put it, "It’s not that we lack the ability to lose ourselves in an in-group, it’s that all the groups people expected us to lose ourselves in weren’t ones we could imagine as our in-group by any stretch of the imagination"3.

4) This example comes from this short comic titled "Sherlock Holmes in real life".

* * *

...and after this the word "prosaic" quickly turned into an awesome compliment. Like, "so, this hypothesis explains my behaviour well; but is it boring enough?", or "your claim is refreshingly dull; I like it!".


1. If you have read Thinking, Fast and Slow, you probably know what I mean. If you haven't, you can look at the narrative fallacy in order to get a general idea.
2. Which was, as I now realize, an excellent way to deceive myself by using a word with a lot of hidden assumptions. Taboo your words, folks!
3. As a side note, my friend proposed an alternative explanation: the thing is, nerds are often defined as "the sort of people who dislike pep rallies". So, naturally, we have "usual people" who like pep rallies and "nerds" who avoid them. And then "nerds dislike pep rallies" is a tautology rather than something to be explained.

Announcing The Effective Altruism Forum

24 RyanCarey 24 August 2014 08:07AM

The Effective Altruist Forum will be launched at effective-altruism.com on September 10, British time.

Now seems like a good time to discuss why we might need an effective altruist forum, and how it might compare to LessWrong.

About the Effective Altruist Forum

The motivation for the Effective Altruist Forum is to improve the quality of effective altruist discussion and coordination. A big part of this is to give many of the useful features of LessWrong to effective altruists, including:

 

  • Archived, searchable content (this will begin with archived content from effective-altruism.com)
  • Meetups
  • Nested comments
  • A karma system
  • A dynamically updated list of external effective altruist blogs
  • Introductory materials (this will begin with these articles)

 

The effective altruist forum has been designed by Mihai Badic. Over the last month, it has been developed by Trike Apps, who have built the new site using the LessWrong codebase. I'm glad to report that it is now basically ready, looks nice, and is easy to use.

I expect that at the new forum, as on the effective altruist Facebook and Reddit pages, people will want to discuss which intellectual procedures to use to pick effective actions. I also expect some proposals of effective altruist projects, and offers of resources. So users of the new forum will share LessWrong's interest in instrumental and epistemic rationality. On the other hand, I expect that few of its users will want to discuss the technical aspects of artificial intelligence, anthropics or decision theory, and to the extent that they do so, they will want to do it at LessWrong. As a result, I expect the new forum to cause:

 

  • A bunch of materials on effective altruism and instrumental rationality to be collated for new effective altruists
  • Discussion of old LessWrong materials to resurface
  • A slight increase to the number of users of LessWrong, possibly offset by some users spending more of their time posting at the new forum.

 

At least initially, the new forum won't have a wiki or a Main/Discussion split and won't have any institutional affiliations.

Next Steps:

It's really important to make sure that the Effective Altruist Forum is established with a beneficial culture. If people want to help that process by writing some seed materials, to be posted around the time of the site's launch, then they can contact me at ry [dot] duff [at] gmail.com. Alternatively, they can wait a short while until they automatically receive posting privileges.

It's also important that the Effective Altruism Forum helps the shared goals of rationalists and effective altruists, and has net positive effects on LessWrong in particular. Any suggestions for improving the odds of success for the effective altruism forum are most welcome.

[Link] Feynman lectures on physics

9 Mark_Friedenbach 23 August 2014 08:14PM

The Feynman lectures on physics are now available to read online for free. This is a classic resource not just for learning physics but also for the process of science and the mindset of a scientific rationalist.

Bayesianism for humans: "probable enough"

25 BT_Uytya 23 August 2014 05:57PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before. 
I like the lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. The post about the second penny will be up tomorrow, or a bit later.


"Probable enough"

When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.


The Bayesian way of thinking introduced me to the idea of "a hypothesis which probably isn't true, but is probable enough to rise to the level of conscious attention" — in other words, to the situation where P(H) is notable but less than 50%.

Looking back, I think that the notion of taking seriously something which you don't think is true was alien to me. Hence, everything was either probably true or probably false; things from the former category were over-confidently certain, and things from the latter category were barely worth thinking about.

This model was correct, but only in a formal sense.

Suppose you are living in Gotham, the city famous for its crime rate and its masked (and well-funded) vigilante, Batman. Recently you read The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker, and according to some theories described there, Batman isn't good for Gotham at all.

Now you know, for example, the theory of Donald Black that "crime is, from the point of view of the perpetrator, the pursuit of justice". You know about the idea that in order for the crime rate to drop, people should perceive their legal system as legitimate. You suspect that criminals beaten by Bats don't perceive the act as a fair and regular punishment for something bad, or as an attempt to defend them from injustice; instead the act is perceived as a round of bad luck. So the criminals are busy plotting their revenge, not internalizing civil norms.

You believe that if you send your copy of the book (with key passages highlighted) to the person connected to Batman, Batman will change his ways and Gotham will become much nicer in terms of homicide rate. 

So you are trying to find out Batman's secret identity, and there are 17 possible suspects. Derek Powers looks like a good candidate: he is wealthy, and has a long history of secretly delegating tasks involving illegal violence to his henchmen; however, his motivation is far from obvious. You estimate P(Derek Powers employs Batman) as 20%. You have very little information about the other candidates, like Ferris Boyle, Bruce Wayne, Roland Daggett, Lucius Fox or Matches Malone, so you assign an equal 5% to everyone else.

In this case you should pick Derek Powers as your best guess when forced to name only one candidate (for example, if you are forced to send the book to someone today), but you should also be aware that your guess is 80% likely to be wrong. When making expected utility calculations, you should take Derek Powers more seriously than Lucius Fox, but only 15 percentage points more seriously.

In other words, you should take the maximum a posteriori probability hypothesis into account while not deluding yourself into thinking that you now understand everything (or nothing at all). The Derek Powers hypothesis probably isn't true; but it is useful.
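Here is a minimal sketch of that suspect distribution in Python. The eleven unnamed suspects are placeholder names, since only six of the seventeen candidates are named above.

```python
# The Batman example: the MAP hypothesis is your best single guess,
# yet it is still 80% likely to be wrong.

suspects = {"Derek Powers": 0.20}
named = ["Ferris Boyle", "Bruce Wayne", "Roland Daggett",
         "Lucius Fox", "Matches Malone"]
placeholders = [f"Unnamed suspect {i}" for i in range(1, 12)]  # 11 more, to reach 17
suspects.update({name: 0.05 for name in named + placeholders})

assert abs(sum(suspects.values()) - 1.0) < 1e-9  # 0.20 + 16 * 0.05 = 1

map_hypothesis = max(suspects, key=suspects.get)
print(map_hypothesis)                # Derek Powers
print(1 - suspects[map_hypothesis])  # 0.8: the best single guess is probably wrong
```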

Sometimes I find it easier to reframe question from "what hypothesis is true?" to "what hypothesis is probable enough?". Now it's totally okay that your pet theory isn't probable but still probable enough, so doubt becomes easier. Also, you are aware that your pet theory is likely to be wrong (and this is nothing to be sad about), so the alternatives come to mind more naturally.

These "probable enough" hypothesis can serve as a very concise summaries of state of your knowledge when you simultaneously outline the general sort of evidence you've observed, and stress that you aren't really sure. I like to think about it like a rough, qualitative and more System1-friendly variant of Likelihood ratio sharing.

Planning Fallacy

The original explanation of the planning fallacy (proposed by Kahneman and Tversky) is that people focus on the most optimistic scenario when asked about the typical one (instead of trying to take an Outside View). If you keep the distinction between "probable" and "probable enough" in mind, you can see this claim in a new light.

Because the most optimistic scenario is the most probable and the most typical one, in a certain sense.

The illustration, with numbers pulled out of thin air, goes like this: so, you want to visit a museum.

The first thing you need to do is to get dressed and take your keys and stuff. Usually (with 80% probability) you do this very quickly, but there is a weak possibility of your museum ticket having been devoured by an entropy monster living on your computer table.

The second thing is to catch the bus. Usually (p = 80%) the bus is on schedule, but sometimes it is too early or too late. After this, the bus may (20%) or may not (80%) get stuck in a traffic jam.

Finally, you need to find the museum building. You've been there once before, so you sorta remember your route, yet you could still get lost with 20% probability.

And there you have it: P(everything is fine) = 40%, and the probability of every other scenario is 10% or even less. "Everything is fine" is probable enough, yet likely to be false. Supposedly, humans pick the MAP hypothesis and then forget about every other scenario in order to save computation.
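A quick sketch of that arithmetic, using the same made-up 80% success probabilities for each step:

```python
# Planning fallacy illustration: "everything is fine" is the single most
# likely scenario, yet it is more likely than not to be false.
# The step probabilities are the made-up numbers from the text.

p_steps = {
    "get ready without losing the ticket": 0.8,
    "bus arrives on schedule": 0.8,
    "bus avoids the traffic jam": 0.8,
    "find the museum without getting lost": 0.8,
}

p_everything_fine = 1.0
for p in p_steps.values():
    p_everything_fine *= p

print(f"P(everything is fine) = {p_everything_fine:.2f}")        # ~0.41
print(f"P(something goes wrong) = {1 - p_everything_fine:.2f}")  # ~0.59

# Any scenario with exactly one specific mishap has probability at most
# 0.2 * 0.8 ** 3 ~ 0.10, so the optimistic plan is the MAP scenario
# even though it will probably not come true.
```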

Also, "everything is fine" is a good description of your plan. If your friend asks you, "so how are you planning to get to the museum?", and you answer "well, I catch the bus, get stuck in a traffic jam for 30 agonizing minutes, and then just walk from here", your friend is going  to get a completely wrong idea about dangers of your journey. So, in a certain sense, "everything is fine" is a typical scenario. 

Maybe it isn't the human inability to pick the most likely scenario that should be blamed. Maybe it is the false assumption that "most likely == likely to be correct" which contributes to this ubiquitous error.

In this case you would be better off having picked "something will go wrong, and I will be late" instead of "everything will be fine".

So, sometimes you are interested in the best specimen out of your hypothesis space, sometimes you are interested in a most likely thingy (and it doesn't matter how vague it would be), and sometimes there are no shortcuts, and you have to do an actual expected utility calculation.

Study: In giving charity, let not your right hand...

3 homunq 22 August 2014 10:23PM

So, here's the study¹:

It's veterans' day in Canada. As any good Canadian knows, you're supposed to wear a poppy to show you support the veterans (it has something to do with Flanders Field). As people enter a concourse at the university, a person there does one of three things: gives them a poppy to wear on their clothes; gives them an envelope to carry and tells them (truthfully) that there's a poppy inside; or gives them nothing. Then, after they've crossed the concourse, another person asks them if they want to put donations in a box to support Canadian war veterans.

Who do you think gives the most?

...

If you guessed that it's the people who got the poppy inside the envelope, you're right. 78% of them gave, for an overall average donation of $0.86. That compares to 58% of the people wearing the poppy, for an average donation of $0.34; and 56% of those with no poppy, for an average of $0.15.

Why did the envelope holders give the most? Unlike the no-poppy group, they had been reminded of the expectation of supporting veterans; but unlike the poppy-wearers, they hadn't been given an easy, cost-free means of demonstrating their support.

I think this research has obvious applications, both to fundraising and to self-hacking. It also validates the bible quote (Matthew 6:3) which is the title of this article.

¹ The Nature of Slacktivism: How the Social Observability of an Initial Act of Token Support Affects Subsequent Prosocial Action; K Kristofferson, K White, J Peloza - Journal of Consumer Research, 2014

 

 

 

[LINK] Physicist Carlo Rovelli on Modern Physics Research

5 shminux 22 August 2014 09:46PM

A blog post in Scientific American, well worth reading. Rovelli is a researcher in Loop Quantum Gravity.

Some quotes:

Horgan: Do multiverse theories and quantum gravity theories deserve to be taken seriously if they cannot be falsified?

Rovelli: No.

Horgan: What’s your opinion of the recent philosophy-bashing by Stephen Hawking, Lawrence Krauss and Neil deGrasse Tyson?

Rovelli: Seriously: I think they are stupid in this.   I have admiration for them in other things, but here they have gone really wrong.  Look: Einstein, Heisenberg, Newton, Bohr…. and many many others of the greatest scientists of all times, much greater than the names you mention, of course, read philosophy, learned from philosophy, and could have never done the great science they did without the input they got from philosophy, as they claimed repeatedly.  You see: the scientists that talk philosophy down are simply superficial: they have a philosophy (usually some ill-digested mixture of Popper and Kuhn) and think that this is the “true” philosophy, and do not realize that this has limitations.

Horgan: Can science attain absolute truth?

 

Rovelli: I have no idea what “absolute truth” means. I think that science is the attitude of those who find funny the people saying they know something is absolute truth.  Science is the awareness that our knowledge is constantly uncertain.  What I know is that there are plenty of things that science does not understand yet. And science is the best tool found so far for reaching reasonably reliable knowledge.

Horgan: Do you believe in God?

Rovelli: No.  But perhaps I should qualify the answer, because like this it is bit too rude and simplistic. I do not understand what “to believe in God” means. The people that “believe in God” seem like Martians to me.  I do not understand them.  I suppose this means that I “do not believe in God”. If the question is whether I think that there is a person who has created Heavens and Earth, and responds to our prayers, then definitely my answer is no, with much certainty.

Horgan: Are science and religion compatible?

Rovelli: Of course yes: you can be great in solving Maxwell’s equations and pray to God in the evening.  But there is an unavoidable clash between science and certain religions, especially some forms of Christianity and Islam, those that pretend to be repositories of “absolute Truths.”

 

Weekly LW Meetups

2 FrankAdamek 22 August 2014 03:38PM

Conservation of Expected Jury Probability

9 jkaufman 22 August 2014 03:25PM

The New York Times has a calculator to explain how getting on a jury works. They have a slider at the top indicating how likely each of the two lawyers thinks you are to side with them, and as you answer questions it moves around. For example, if you select that your occupation is "blue collar" then it says "more likely to side with plaintiff" while "white collar" gives "more likely to side with defendant". As you give it more information the pointer labeled "you" slides back and forth, representing the lawyers' ongoing revision of their estimates of you. Let's see what this looks like.

[Screenshots from the calculator: the initial estimate, then the estimate after selecting "Over 30", then after selecting "Under 30".]

For several other questions, however, the options aren't matched. If your household income is under $50k then it will give you "more likely to side with plaintiff" while if it's over $50k then it will say "no effect on either lawyer". This is not how conservation of expected evidence works: if learning something pushes you in one direction, then learning its opposite has to push you in the other.

Let's try this with some numbers. Say people's leanings are:

income    P(side with plaintiff)    P(side with defendant)
>$50k     50%                       50%
<$50k     70%                       30%
Before asking you your income, the lawyers' best guess is that you're equally likely to be earning >$50k as <$50k, because $50k is the median [1]. This means they'd guess you're 60% likely to side with the plaintiff: half the people in your position earn over $50k and will be approximately evenly split, while the other half earn under $50k and would favor the plaintiff 70-30; averaging these two cases gives us 60%.

So the lawyers' best guess for you is that you're at 60%, and then they ask the question. If you say ">$50k" then they update their estimate for you down to 50%; if you say "<$50k" they update it up to 70%. "No effect on either lawyer" can't be an option here unless the question gives no information.
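A minimal sketch of the conservation check, using the numbers from the table above and the simplifying assumption that the two income brackets are equally likely:

```python
# Conservation of expected evidence: the prior must equal the
# probability-weighted average of the possible posteriors.

p_income_high = 0.5           # P(income > $50k), roughly the median split
p_income_low = 1 - p_income_high

p_plaintiff_given_high = 0.5  # from the table above
p_plaintiff_given_low = 0.7

prior = (p_income_high * p_plaintiff_given_high +
         p_income_low * p_plaintiff_given_low)
print(f"prior = {prior:.2f}")  # 0.60: the lawyers' estimate before asking about income

# Learning ">$50k" moves the estimate down to 0.5, and learning "<$50k"
# moves it up to 0.7. If one answer truly had "no effect", the question
# could carry no information at all.
```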


[1] Almost; the median income in the US in 2012 was $51k. (pdf)

Memory is Everything

-3 Qwake 22 August 2014 04:48AM

I have found (there is some [evidence](http://mentalfloss.com/article/52586/why-do-our-best-ideas-come-us-shower) to suggest this) that showers are a great place to think. While I am taking a shower I find that I can think about things from a whole new perspective, and it's very refreshing. Well, today, while I was taking a shower, an interesting thought popped into my head. Memory is everything. Your memory contains you; it contains your thoughts; it contains your own unique perception of reality. Imagine going to bed tonight and waking up with absolutely no memory of your past. Would you still consider that person yourself? There is no question that our memories/experiences influence our behavior in every possible way. If you were born in a different environment with different stimuli, you would have responded to your environment differently and become a different person. How different? I don't want to get involved in the nature/nurture debate, but I think there is no question that humans are influenced by their environment. How are humans influenced by our environment? Through learning from our past experiences, which are contained in our memory. I'm getting off topic and I have no idea what my point is... So I propose a thought experiment!

 

Omega the supercomputer gives you 3 options. Option 1 is for you to pay Omega $1,000,000,000, and Omega will grant you unlimited utility potential for 1 week, in which Omega will basically provide for your every wish. You will have absolutely no memory of the experience after the week is up. Option 2 is for Omega to pay you $1,000,000,000, but you must be willing to suffer unlimited negative utility potential for a week (you will not be harmed physically or mentally; you will simply experience excruciating pain). You will also have absolutely no memory of this experience after the week (your subconscious will also not be affected). Finally, Option 3 is simply to refuse Options 1 and 2 and maintain the status quo.

 

At first glance, it may seem that Option 2 is simply not choosable. It seems insane to subject yourself to torture when you have the option of nirvana. But it requires more thought than that. If you compare Option 1 to Option 2 after the week is up, there is no difference between the options except that Option 2 nets you $2,000,000,000 more than Option 1. In both options you have absolutely no memory of the week in question. The question that I'm trying to put forward in this thought experiment is this. If you have no memory of an experience, does that experience still matter? Is it worth experiencing something for the experience alone, or is it the memory of an experience that matters? Those are some questions that I have been thinking about lately. Any feedback or criticism is appreciated.

One last thing: if you are interested in the concept and importance of memory, two excellent movies on the subject are [Memento](http://www.imdb.com/title/tt0209144/) and [Eternal Sunshine of the Spotless Mind](http://www.imdb.com/title/tt0338013/0). I know both of these movies aren't scientific, but I found them very intriguing and thought-provoking.

Fighting Biases and Bad Habits like Boggarts

29 palladias 21 August 2014 05:07PM

TL;DR: Building humor into your habits for spotting and correcting errors makes the fix more enjoyable, makes it easier to talk about errors and receive social support, and limits the danger of a contempt spiral. 

 

One of the most reliably bad decisions I've made on a regular basis is the choice to stay awake (well, "awake") and on the internet past the point where I can get work done, or even have much fun.  I went through a spell where I even fell asleep on the couch more nights than not, unable to muster the will or judgement to get up and go downstairs to bed.

I could remember (even sometimes in the moment) that this was a bad pattern, but, the more tired I was, the more tempting it was to think that I should just buckle down and apply more willpower to be more awake and get more out of my computer time.  Going to bed was a solution, but it was hard for it not to feel (to my sleepy brain and my normal one) like a bit of a cop out.

Only two things helped me really keep this failure mode in check.  One was setting a hard bedtime (and beeminding it) as part of my sacrifice for Advent.   But the other key tool (which has lasted me long past Advent) is the gif below.

[gif: a kid falling asleep while eating an ice cream cone]

The poor kid struggling to eat his ice cream cone, even in the face of his exhaustion, is hilarious.  And not too far off the portrait of me around 2am scrolling through my Feedly.

Thinking about how stupid or ineffective or insufficiently strong-willed I'm being makes it hard for me to do anything that feels like a retreat from my current course of action.  I want to master the situation and prove I'm stronger.  But catching on to the fact that my current situation (of my own making or not) is ridiculous, makes it easier to laugh, shrug, and move on.

I think the difference is that it's easy for me to feel contemptuous of myself when frustrated, and easy to feel fond when amused.

I've tried to strike the new emotional tone when I'm working on catching and correcting other errors.  (e.g "Stupid, you should have known to leave more time to make the appointment!  Planning fallacy!"  becomes "Heh, I guess you thought that adding two "trivially short" errands was a closed set, and must remain 'trivially short.'  That's a pretty silly error.")

In the first case, noticing and correcting an error feels punitive, since it's quickly followed by a hefty dose of flagellation, but the second comes with a quick laugh and an easier shift to a growth mindset framing.  Funny stories about errors are also easier to tell, increasing the chance my friends can help catch me out next time, or that I'll be better at spotting the error just by keeping it fresh in my memory. Not to mention, in order to get the joke, I tend to look for a more specific cause of the error than stupid/lazy/etc.

As far as I can tell, it also helps that amusement is a pretty different feeling than the ones that tend to be active when I'm falling into error (frustration, anger, feeling trapped, impatience, etc).  So, for a couple of seconds at least, I'm out of the rut and now need to actively return to it to stay stuck. 

In the heat of the moment of anger/akrasia/etc is a bad time to figure out what's funny, but, if you're reflecting on your errors after the fact, in a moment of consolation, it's easier to go back armed with a helpful reframing, ready to cast Riddikulus!

 

Crossposted from my personal blog, Unequally Yoked.

Another type of intelligence explosion

15 Stuart_Armstrong 21 August 2014 02:49PM

I've argued that we might have to worry about dangerous non-general intelligences. In a series of back and forth with Wei Dai, we agreed that some level of general intelligence (such as that humans seem to possess) seemed to be a great advantage, though possibly one with diminishing returns. Therefore a dangerous AI could be one with great narrow intelligence in one area, and a little bit of general intelligence in others.

The traditional view of an intelligence explosion is that of an AI that knows how to do X, suddenly getting (much) better at doing X, to a level beyond human capacity. Call this the gain of aptitude intelligence explosion. We can prepare for that, maybe, by tracking the AI's ability level and seeing if it shoots up.

But the example above hints at another kind of potentially dangerous intelligence explosion. That of a very intelligent but narrow AI that suddenly gains intelligence across other domains. Call this the gain of function intelligence explosion. If we're not looking specifically for it, it may not trigger any warnings - the AI might still be dumber than the average human in other domains. But this might be enough, when combined with its narrow superintelligence, to make it deadly. We can't ignore the toaster that starts babbling.

An example of deadly non-general AI

11 Stuart_Armstrong 21 August 2014 02:15PM

In a previous post, I mused that we might be focusing too much on general intelligences, and that the route to powerful and dangerous intelligences might go through much more specialised intelligences instead. Since it's easier to reason with an example, here is a potentially deadly narrow AI (partially due to Toby Ord). Feel free to comment and improve on it, or suggest your own example.

It's the standard "pathological goal AI" but only a narrow intelligence. Imagine a medicine designing super-AI with the goal of reducing human mortality in 50 years - i.e. massively reducing human population in the next 49 years. It's a narrow intelligence, so it has access only to a huge amount of human biological and epidemiological research. It must get its drugs past FDA approval; this requirement is encoded as certain physical reactions (no death, some health improvements) to people taking the drugs over the course of a few years.

Then it seems trivial for it to design a drug that would have no negative impact for the first few years, and then causes sterility or death. Since it wants to spread this to as many humans as possible, it would probably design something that interacted with common human pathogens - colds, flus - in order to spread the impact, rather than affecting only those that took the drug.

Now, this narrow intelligence is less threatening than if it had general intelligence - where it could also plan for possible human countermeasures and such - but it seems sufficiently dangerous on its own that we can't afford to worry only about general intelligences. Some of the "AI superpowers" that Nick mentions in his book (intelligence amplification, strategizing, social manipulation, hacking, technology research, economic productivity) could be enough to cause devastation on their own, even if the AI never developed other abilities.

We still could be destroyed by a machine that we outmatch in almost every area.

Why we should err in both directions

7 owencb 21 August 2014 11:10AM

Crossposted from the Global Priorities Project

This is an introduction to the principle that when we are making decisions under uncertainty, we should choose so that we may err in either direction. We justify the principle, explore the relation with Umeshisms, and look at applications in priority-setting.

Some trade-offs

How much should you spend on your bike lock? A cheaper lock saves you money at the cost of security.

How long should you spend weighing up which charity to donate to before choosing one? Longer means less time for doing other useful things, but you’re more likely to make a good choice.

How early should you aim to arrive at the station for your train? Earlier means less chance of missing it, but more time hanging around at the station.

Should you be willing to undertake risky projects, or stick only to safe ones? The safer your threshold, the more confident you can be that you won’t waste resources, but some of the best opportunities may have a degree of risk, and you might be able to achieve a lot more with a weaker constraint.

The principle

We face trade-offs and make judgements all the time, and inevitably we sometimes make bad calls. In some cases we should have known better; sometimes we are just unlucky. As well as trying to make fewer mistakes, we should try to minimise the damage from the mistakes that we do make.

Here’s a rule which can be useful in helping you do this:

When making decisions that lie along a spectrum, you should choose so that you think you have some chance of being off from the best choice in each direction.

We could call this principle erring in both directions. It might seem counterintuitive -- isn’t it worse to not even know what direction you’re wrong in? -- but it’s based on some fairly straightforward economics. I give a non-technical sketch of a proof at the end, but the essence is: if you’re not going to be perfect, you want to be close to perfect, and this is best achieved by putting your actual choice near the middle of your error bar.

So the principle suggests that you should aim to arrive at the station expecting to waste a little time, but not with so large a margin that you would still make the train even if something went wrong.

Refinements

Just saying that you should have some chance of erring in either direction isn't enough to tell you what you should actually choose. It can be a useful warning sign in the cases where you're going substantially wrong, though, and as these are the most important cases to fix, it has some use in this form.

A more careful analysis would tell you that at the best point on the spectrum, a small change in your decision produces about as much expected benefit as expected cost. In ideal circumstances we can use this to work out exactly where on the spectrum we should be (in some cases more than one point may fit this, so you need to compare them directly). In practice it is often hard to estimate the marginal benefits and costs well enough for this to be a useful approach. So although it is theoretically optimal, you will only sometimes want to try to apply this version.

Say in our train example that you found missing the train as bad as 100 minutes waiting at the station. Then you want to leave time so that an extra minute of safety margin gives you a 1% reduction in the absolute chance of missing the train.

For instance, say your options in the train case look like this:

| Safety margin (min) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Chance of missing train (%) | 50 | 30 | 15 | 8 | 5 | 3 | 2 | 1.5 | 1.1 | 0.8 | 0.6 | 0.4 | 0.3 | 0.2 | 0.1 |

Then the optimal safety margin to leave is somewhere between 6 and 7 minutes: this is where the marginal minute leads to a 1% reduction in the chance of missing the train.
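
To make the marginal reasoning concrete, here is a minimal Python sketch, using only the illustrative numbers from the table above and the assumption, stated earlier, that missing the train is as bad as 100 minutes of waiting. It computes the expected cost of each safety margin and picks the cheapest one.

```python
# Illustrative numbers from the table above (not real data).
margins = list(range(1, 16))  # safety margin in minutes
p_miss = [0.50, 0.30, 0.15, 0.08, 0.05, 0.03, 0.02, 0.015,
          0.011, 0.008, 0.006, 0.004, 0.003, 0.002, 0.001]

MISS_COST = 100  # assumption from the text: missing the train is as bad as 100 minutes of waiting

def expected_cost(margin, p):
    """Expected cost in 'waiting-minute' units: the margin itself plus the expected cost of missing."""
    return margin + MISS_COST * p

costs = {m: expected_cost(m, p) for m, p in zip(margins, p_miss)}
best = min(costs, key=costs.get)
print(best, costs[best])  # minimised around 6-7 minutes, where an extra minute buys ~1% off P(miss)
```

With these numbers the expected cost bottoms out at 6 and 7 minutes (both cost the equivalent of 9 minutes), matching the conclusion above.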

Predictions and track records

So far, we've phrased the idea in terms of the predicted outcomes of actions. Another, more well-known perspective on the idea looks at events that have already happened. For example: "If you've never missed a flight, you're spending too much time in airports."

These formulations, dubbed 'Umeshisms', only work for decisions that you make multiple times, so that you can gather a track record.

An advantage of applying the principle to track records is that it’s more obvious when you’re going wrong. Introspection can be hard.

You can even apply the principle to track records of decisions which don't look like they are choosing from a spectrum. For example it is given as advice in the game of bridge: if you don't sometimes double the stakes on hands which eventually go against you, you're not doubling enough. Although doubling or not is a binary choice, erring in both directions still works because 'how often to double' is a trait that roughly falls on a spectrum.

Failures

There are some circumstances where the principle may not apply.

First, if you think the correct point is at one extreme of the available spectrum. For instance nobody says ‘if you’re not worried about going to jail, you’re not committing enough armed robberies’, because we think the best number of armed robberies to commit is probably zero.

Second, if the available points in the spectrum are discrete and few in number. Take the example of the bike locks. Perhaps there are only three options available: the Cheap-o lock (£5), the Regular lock (£20), and the Super lock (£50). You might reasonably decide on the Regular lock, thinking that maybe the Super lock is better, but that the Cheap-o one certainly isn’t. When you buy the Regular lock, you’re pretty sure you’re not buying a lock that’s too tough. But since only two of the locks are good candidates, there is no decision you could make which tries to err in both directions.

Third, in the case of evaluating track records, it may be that your record isn’t long enough to expect to have seen errors in both directions, even if they should both come up eventually. If you haven’t flown that many times, you could well be spending the right amount of time -- or even too little -- in airports, even if you’ve never missed a flight.

Finally, a warning about a case where the principle is not supposed to apply. It shouldn’t be applied directly to try to equalise the probability of being wrong in either direction, without taking any account of magnitude of loss. So for example if someone says you should err on the side of caution by getting an early train to your job interview, it might look as though that were in conflict with the idea of erring in both directions. But normally what’s meant is that you should have a higher probability of failing in one direction (wasting time by taking an earlier train than needed), because the consequences of failing in the other direction (missing the interview) are much higher.

Conclusions and applications to prioritisation

Seeking to err in both directions can provide a useful tool in helping to form better judgements in uncertain situations. Many people may already have internalised key points, but it can be useful to have a label to facilitate discussion. Additionally, having a clear principle can help you to apply it in cases where you might not have noticed it was relevant.

How might this principle apply to priority-setting? It suggests that:

  • You should spend enough time and resources on the prioritisation itself that you think some of the time may have been wasted (for example you should spend a while at the end without changing your mind much), but not so much that you are totally confident you have the right answer.
  • If you are unsure what discount rate to use, you should choose one so that you think that it could be either too high or too low.
  • If you don’t know how strongly to weigh fragile cost-effectiveness estimates against more robust evidence, you should choose a level so that you might be over- or under-weighing them.
  • When you are providing a best-guess estimate, you should choose a figure which could plausibly be wrong either way.

And one on track records:

  • Suppose you’ve made lots of grants. Then if you’ve never backed a project which has failed, you’re probably too risk-averse in your grantmaking.

Questions for readers

Do you know any other useful applications of this idea? Do you know anywhere where it seems to break? Can anyone work out easier-to-apply versions, and the circumstances in which they are valid?

Appendix: a sketch proof of the principle

Assume the true graph of value (on the vertical axis) against the decision you make (on the horizontal axis, representing the spectrum) is smooth, looking something like this: [figure: a smooth, single-peaked curve with its maximum at d]

The highest value is achieved at d, so this is where you’d like to be. But assume you don’t know quite where d is. Say your best guess is that d=g. But you think it’s quite possible that d>g, and quite unlikely that d<g. Should you choose g?

Suppose we compare g to g’, which is just a little bit bigger than g. If d>g, then switching from g to g’ would be moving up the slope on the left of the diagram, which is an improvement. If d=g then it would be better to stick with g, but it doesn’t make so much difference because the curve is fairly flat at the top. And if g were bigger than d, we’d be moving down the slope on the right of the diagram, which is worse for g’ -- but this scenario was deemed unlikely.

Aggregating the three possibilities, we found that two of them were better for sticking with g, but in one of these (d=g) it didn’t matter very much, and the other (d<g) just wasn’t very likely. In contrast, the third case (d>g) was reasonably likely, and noticeably better for g’ than g. So overall we should prefer g’ to g.

In fact we’d want to continue moving until the marginal upside from going slightly higher was equal to the marginal downside; this would have to involve a non-trivial chance that we are going too high. So our choice should have a chance of failure in either direction. This completes the (sketch) proof.

Note: There was an assumption of smoothness in this argument. I suspect it may be possible to get slightly stronger conclusions or work from slightly weaker assumptions, but I’m not certain what the most general form of this argument is. It is often easier to build a careful argument in specific cases.
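
Here is a purely numerical illustration of the sketch proof, with made-up numbers: a smooth single-peaked value curve, asymmetric uncertainty about where the peak d actually is, and a search for the choice that maximises expected value. The optimum comes out above the best guess g, i.e. at a point that risks erring in either direction.

```python
import numpy as np

def value(x, d):
    """A smooth, single-peaked value curve with its maximum at d (hypothetical shape)."""
    return -(x - d) ** 2

# Asymmetric beliefs about the peak: d > g is quite possible, d < g unlikely (best guess g = 10).
d_values = np.array([8.0, 10.0, 12.0])
probs = np.array([0.1, 0.5, 0.4])

choices = np.linspace(5.0, 15.0, 201)
expected = [float(np.dot(probs, value(c, d_values))) for c in choices]
best_choice = choices[int(np.argmax(expected))]
print(best_choice)  # about 10.6: above g, so the choice can err too high (if d is 8 or 10) or too low (if d is 12)
```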

Acknowledgements: thanks to Ryan Carey, Max Dalton, and Toby Ord for useful comments and suggestions.

Productivity thoughts from Matt Fallshaw

11 John_Maxwell_IV 21 August 2014 05:05AM

At the 2014 Effective Altruism Summit in Berkeley a few weeks ago, I had the pleasure of talking to Matt Fallshaw about the things he does to be more effective.  Matt is a founder of Trike Apps (the consultancy that built Less Wrong), a founder of Bellroy, and a polyphasic sleeper.  Notes on our conversation follow.

Matt recommends having a system for acquiring habits.  He recommends separating collection from processing; that is, if you have an idea for a new habit you want to acquire, you should record the idea at the time you have it and then think about actually implementing it at some future time.  Matt recommends doing this through a weekly review.  He recommends vetting your collection to see what habits seem actually worth acquiring, then for those habits you actually want to acquire, coming up with a compassionate, reasonable plan for how you're going to acquire the habit.

(Previously on LW: How habits work and how you may control them; Common failure modes in habit formation.)

The most difficult kind of habit for me to acquire is that of random-access situation-response habits, e.g. "if I'm having a hard time focusing, read my notebook entry that lists techniques for improving focus".  So I asked Matt if he had any habit formation advice for this particular situation.  Matt recommended trying to actually execute the habit I wanted as many times as possible, even in an artificial context.  Steve Pavlina describes the technique here.  Matt recommends making your habit execution as emotionally salient as possible.  His example: Let's say you're trying to become less of a prick.  Someone starts a conversation with you and you notice yourself experiencing the kind of emotions you experience before you start acting like a prick.  So you spend several minutes explaining to them the episode of disagreeableness you felt coming on and how you're trying to become less of a prick before proceeding with the conversation.  If all else fails, Matt recommends setting a recurring alarm on your phone that reminds you of the habit you're trying to acquire, although he acknowledges that this can be expensive.

Part of your plan should include a check to make sure you actually stick with your new habit.  But you don't want a check that's overly intrusive.  Matt recommends keeping an Anki deck with a card for each of your habits.  Then during your weekly review session, you can review the cards Anki recommends for you.  For each card, you can rate the degree to which you've been sticking with the habit it refers to and do something to revitalize the habit if you haven't been executing it.  Matt recommends writing the cards in a form of a concrete question, e.g. for a speed reading habit, a question could be "Did you speed read the last 5 things you read?"  If you haven't been executing a particular habit, check to see if it has a clear, identifiable trigger.

Ideally your weekly review will come at a time you feel particularly "agenty" (see also: Reflective Control).  So you may wish to schedule it at a time during the week when you tend to feel especially effective and energetic.  Consuming caffeine before your weekly review is another idea.

When running into seemingly intractable problems related to your personal effectiveness, habits, etc., Matt recommends taking a step back to brainstorm and try to think of creative solutions.  He says that oftentimes people will write off a task as "impossible" if they aren't able to come up with a solution in 30 seconds.  He recommends setting a 5-minute timer.

In terms of habits worth acquiring, Matt is a fan of speed reading, Getting Things Done, and the Theory of Constraints (especially useful for larger projects).

Matt has found that through aggressive habit acquisition, he's been able to experience a sort of compound return on the habits he's acquired: by acquiring habits that give him additional time and mental energy, he's been able to reinvest some of that additional time and mental energy in to the acquisition of even more useful habits.  Matt doesn't think he's especially smart or high-willpower relative to the average person in the Less Wrong community, and credits this compounding for the reputation he's acquired for being a badass.

Anthropics doesn't explain why the Cold War stayed Cold

5 KnaveOfAllTrades 20 August 2014 07:23PM

(Epistemic status: There are some lines of argument that I haven’t even started here, which potentially defeat the thesis advocated here. I don’t go into them because this is already too long or I can’t explain them adequately without derailing the main thesis. Similarly some continuations of chains of argument and counterargument begun here are terminated in the interest of focussing on the lower-order counterarguments. Overall this piece probably overstates my confidence in its thesis. It is quite possible this post will be torn to pieces in the comments—possibly by my own aforementioned elided considerations. That’s good too.)

I

George VI, King of the United Kingdom, had five siblings. That is, the father of current Queen Elizabeth II had as many siblings as on a typical human hand. (This paragraph is true, and is not a trick; in particular, the second sentence of this paragraph really is trying to disambiguate and help convey the fact in question and relate it to prior knowledge, rather than introduce an opening for some sleight of hand so I can laugh at you later, or whatever fear such a suspiciously simple proposition might engender.)

Let it be known.

II

Exactly one of the following stories is true:

Story One

Recently I hopped on Facebook and saw the following post:

“I notice that I am confused about why a nuclear war never occurred. Like, I think (knowing only the very little I know now) that if you had asked me, at the start of the Cold War or something, the probability that it would eventually lead to a nuclear war, I would've said it was moderately likely. So what's up with that?”


The post had 14 likes. In the comments, the most-Liked explanation was:

“anthropically you are considerably more likely to live in a world where there never was a fullscale nuclear war”

That comment had 17 Likes. The second-most-liked comment that offered an explanation had 4 Likes.

Story Two


Thought experiments on simplicity in logical probability

3 Manfred 20 August 2014 05:25PM

A common feature of many proposed logical priors is a preference for simple sentences over complex ones. This is sort of like an extension of Occam's razor into math. Simple things are more likely to be true. So, as it is said, "why not?"

 

Well, the analogy has some wrinkles - unlike hypothetical rules for the world, logical sentences do not form a mutually exclusive set. Instead, for every sentence A there is a sentence not-A with pretty much the same complexity, and probability 1-P(A). So you can't make the probability smaller for all complex sentences, because their negations are also complex sentences! If you don't have any information that discriminates between them, A and not-A will both get probability 1/2 no matter how complex they get.

But if our agent knows something that breaks the symmetry between A and not-A, like that A belongs to a mutually exclusive and exhaustive set of sentences with differing complexities, then it can assign higher probabilities to simpler sentences in this set without breaking the rules of probability. Except, perhaps, the rule about not making up information.

The question: is the simpler answer really more likely to be true than the more complicated answer, or is this just a delusion? If it is more likely, is that for some ontologically basic reason, or for a contingent and explainable reason?

 

There are two complications to draw your attention to. The first is in what we mean by complexity. Although it would be nice to use the Kolmogorov complexity of any sentence, which is the length of the shortest program that prints the sentence, such a thing is uncomputable by the kind of agent we want to build in the real world. The only thing our real-world agent is assured of seeing is the length of the sentence as-is. We can also find something in between Kolmogorov complexity and length by doing a brief search for short programs that print the sentence - this intermediate notion is what is usually meant in this article, and I'll call it "apparent complexity."

The second complication is in what exactly a simplicity prior is supposed to look like. In the case of Solomonoff induction the shape is exponential - more complicated hypotheses are exponentially less likely. But why not a power law? Why not even a Poisson distribution? Does the difficulty of answering this question mean that thinking that simpler sentences are more likely is a delusion after all?
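
As an entirely made-up illustration of what these different shapes do, here is a small sketch that normalises an exponential prior and a power-law prior over a three-element mutually exclusive, exhaustive set of answers with assumed apparent complexities.

```python
# Hypothetical apparent complexities (say, code lengths in bits) of three
# mutually exclusive, exhaustive candidate answers A, B, C.
complexities = {"A": 10, "B": 20, "C": 35}

def normalise(weights):
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Exponential (Solomonoff-style) prior: weight 2^-complexity.
exponential = normalise({k: 2.0 ** -c for k, c in complexities.items()})

# Power-law prior: weight complexity^-2.
power_law = normalise({k: c ** -2.0 for k, c in complexities.items()})

print(exponential)  # A dominates overwhelmingly: ~0.999 vs ~1e-3 vs ~3e-8
print(power_law)    # A is still favoured (~0.75), but B (~0.19) and C (~0.06) keep non-trivial probability
```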

 

Thought experiments:

1: Suppose our agent knew from a trusted source that some extremely complicated sum could only be equal to A, or to B, or to C, which are three expressions of differing complexity. What are the probabilities?

 

Commentary: This is the most sparse form of the question. Not very helpful regarding the "why," but handy to stake out the "what." Do the probabilities follow a nice exponential curve? A power law? Or, since there are just the three known options, do they get equal consideration?

This is all based off intuition, of course. What does intuition say when various knobs of this situation are tweaked - if the sum is of unknown complexity, or of complexity about that of C? If there are a hundred options, or countably many? Intuitively speaking, does it seem like favoring simpler sentences is an ontologically basic part of your logical prior?

 

2: Consider subsequences of the digits of pi. If I give you a pair (n,m), you can tell me the m digits following the nth digit of pi. So if I start a sentence like "the subsequence of digits of pi (10^100, 10^2) = ", do you expect to see simpler strings of digits on the right side? Is this a testable prediction about the properties of pi?

 

Commentary: We know that there is always a short-ish program to produce the sequences, which is just to compute the relevant digits of pi. This sets a hard upper bound on the possible Kolmogorov complexity of sequences of pi (that grows logarithmically as you increase m and n), and past a certain m this will genuinely start restricting complicated sequences, and thus favoring "all zeros" - or does it?

After all, this is weak tea compared to an exponential simplicity prior, for which the all-zero sequence would be hojillions of times more likely than a messy one. On the other hand, an exponential curve allows sequences with higher Kolmogorov complexity than the computation of the digits of pi.

Does the low-level view outlined in the first paragraph above demonstrate that the exponential prior is bunk? Or can you derive one from the other with appropriate simplifications (keeping in mind Kolmogorov complexity vs. apparent complexity)? Does pi really contain more long simple strings than expected, and if not what's going on with our prior?
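
For anyone who wants to actually poke at thought experiment 2, here is a rough sketch. It assumes the mpmath library, uses zlib compression as a crude stand-in for apparent complexity, and necessarily uses a small n, since offsets like 10^100 are far out of computational reach.

```python
import zlib
from mpmath import mp

def pi_window(n, m):
    """Return the m digits of pi following the nth digit after the decimal point."""
    mp.dps = n + m + 10                 # enough working precision
    digits = str(mp.pi)[2:]             # drop the leading "3."
    return digits[n:n + m]

def apparent_complexity(s):
    """Crude proxy for apparent complexity: length of the zlib-compressed string."""
    return len(zlib.compress(s.encode()))

# Compare a window of pi against an 'obviously simple' string of the same length.
window = pi_window(1000, 100)
print(window)
print(apparent_complexity(window), apparent_complexity("0" * 100))
```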

 

3: Suppose I am writing an expression that I want to equal some number you know - that is, the sentence "my expression = your number" should be true. If I tell you the complexity of my expression, what can you infer about the likelihood of the above sentence?

 

Commentary: If we had access to Kolmogorov complexity of your number, then we could completely rule out answers that were too K-simple to work. With only an approximation, it seems like we can still say that simple answers are less likely up to a point. Then as my expression gets more and more complicated, there are more and more available wrong answers (and, outside of the system a bit, it becomes less and less likely that I know what I'm doing), and so probability goes down.

In the limit that my expression is much more complex than your number, does an elegant exponential distribution emerge from underlying considerations?

Polling Thread

6 Gunnar_Zarncke 20 August 2014 02:36PM

The next installment of the Polling Thread.

This is your chance to ask the multiple-choice question you always wanted to throw in. Get qualified numeric feedback on your comments. Post fun polls.

These are the rules:

  1. Each poll goes into its own top level comment and may be commented there.
  2. You must at least vote in all polls that were posted earlier than your own. This ensures participation in all polls and also limits the total number of polls. You may of course vote without posting a poll.
  3. Your poll should include a 'don't know' option (to avoid conflict with 2). I don't know whether we need to add a troll catch option here but we will see.

If you don't know how to make a poll in a comment look at the Poll Markup Help.


This is a somewhat regular thread. If it is successful I may post again. Or you may. In that case do the following:

  • Use "Polling Thread" in the title.
  • Copy the rules.
  • Add the tag "poll".
  • Link to this Thread or a previous Thread.
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be polls or similar'
  • Add a second top-level comment with an initial poll to start participation.

"Follow your dreams" as a case study in incorrect thinking

21 cousin_it 20 August 2014 01:18PM

This post doesn't contain any new ideas that LWers don't already know. It's more of an attempt to organize my thoughts and have a writeup for future reference.

Here's a great quote from Sam Hughes, giving some examples of good and bad advice:

"You and your gaggle of girlfriends had a saying at university," he tells her. "'Drink through it'. Breakups, hangovers, finals. I have never encountered a shorter, worse, more densely bad piece of advice." Next he goes into their bedroom for a moment. He returns with four running shoes. "You did the right thing by waiting for me. Probably the first right thing you've done in the last twenty-four hours. I subscribe, as you know, to a different mantra. So we're going to run."

The typical advice given to young people who want to succeed in highly competitive areas, like sports, writing, music, or making video games, is to "follow your dreams". I think that advice is up there with "drink through it" in terms of sheer destructive potential. If it was replaced with "don't bother following your dreams" every time it was uttered, the world might become a happier place.

The amazing thing about "follow your dreams" is that thinking about it uncovers a sort of perfect storm of biases. It's fractally wrong, like PHP, where the big picture is wrong and every small piece is also wrong in its own unique way.

The big culprit is, of course, optimism bias due to perceived control. I will succeed because I'm me, the special person at the center of my experience. That's the same bias that leads us to overestimate our chances of finishing the thesis on time, or having a successful marriage, or any number of other things. Thankfully, we have a really good debiasing technique for this particular bias, known as reference class forecasting, or inside vs outside view. What if your friend Bob was a slightly better guitar player than you? Would you bet a lot of money on Bob making it big like Jimi Hendrix? The question is laughable, but then so is betting the years of your own life, with a smaller chance of success than Bob.

That still leaves many questions unanswered, though. Why do people offer such advice in the first place, why do other people follow it, and what can be done about it?

Survivorship bias is one big reason we constantly hear successful people telling us to "follow our dreams". Successful people don't really know why they are successful, so they attribute it to their hard work and not giving up. The media amplifies that message, while millions of failures go unreported because they're not celebrities, even though they try just as hard. So we hear about successes disproportionately, in comparison to how often they actually happen, and that colors our expectations of our own future success. Sadly, I don't know of any good debiasing techniques for this error, other than just reminding yourself that it's an error.

When someone has invested a lot of time and effort into following their dream, it feels harder to give up due to the sunk cost fallacy. That happens even with very stupid dreams, like the dream of winning at the casino, that were obviously installed by someone else for their own profit. So when you feel convinced that you'll eventually make it big in writing or music, you can remind yourself that compulsive gamblers feel the same way, and that feeling something doesn't make it true.

Of course there are good dreams and bad dreams. Some people have dreams that don't tease them for years with empty promises, but actually start paying off in a predictable time frame. The main difference between the two kinds of dream is the difference between positive-sum games, a.k.a. productive occupations, and zero-sum games, a.k.a. popularity contests. Sebastian Marshall's post Positive Sum Games Don't Require Natural Talent makes the same point, and advises you to choose a game where you can be successful without outcompeting 99% of other players.

The really interesting question to me right now is, what sets someone on the path of investing everything in a hopeless dream? Maybe it's a small success at an early age, followed by some random encouragement from others, and then you're locked in. Is there any hope for thinking back to that moment, or set of moments, and making a little twist to put yourself on a happier path? I usually don't advise people to change their desires, but in this case it seems to be the right thing to do.

Steelmanning MIRI critics

5 fowlertm 19 August 2014 03:14AM

I'm giving a talk to the Boulder Future Salon in Boulder, Colorado in a few weeks on the Intelligence Explosion hypothesis. I've given it once before in Korea but I think the crowd I'm addressing will be more savvy than the last one (many of them have met Eliezer personally). It could end up being important, so I was wondering if anyone considers themselves especially capable of playing Devil's Advocate, so that I could shape up a bit before my talk. I'd like there to be no real surprises.

I'd be up for just messaging back and forth or skyping, whatever is convenient.

Quantified Risks of Gay Male Sex

28 pianoforte611 18 August 2014 11:55PM

If you are a gay male then you’ve probably worried at one point about sexually transmitted diseases. Indeed men who have sex with men have some of the highest prevalence of many of these diseases. And if you’re not a gay male, you’ve probably still thought about STDs at one point. But how much should you worry? There are many organizations and resources that will tell you to wear a condom, but very few will tell you the relative risks of wearing a condom vs not. I’d like to provide a concise summary of the risks associated with gay male sex and the extent to which these risks can be reduced. (See Mark Manson’s guide for a similar resource for heterosexual sex.) I will do so by first giving some information about each disease, including its prevalence among gay men. Most of this data will come from the US, but the US actually has an unusually high prevalence for many diseases. Certainly HIV is much less common in many parts of Europe. I will end with a case study of HIV, which will include an analysis of the probabilities of transmission broken down by the nature of the sex act and a discussion of risk reduction techniques.

When dealing with risks associated with sex, there are a few relevant parameters. The most common is the prevalence – the proportion of people in the population that have the disease. Since you can only get a disease from someone who has it, the prevalence is arguably the most important statistic. There are two more relevant statistics – the per-act infectivity (the chance of contracting the disease after having sex once) and the per-partner infectivity (the chance of contracting the disease after having sex with one partner for the duration of the relationship). As it turns out, the latter two probabilities are very difficult to calculate. I only obtained those values for HIV. It is especially difficult to determine per-act risks for specific types of sex acts since many MSM engage in a variety of acts with multiple partners. Nevertheless estimates do exist and will be explored in detail in the HIV case study section.

HIV

Prevalence: Between 13 - 28%. My guess is about 13%.

The most infamous of the STDs. There is no cure but it can be managed with anti-retroviral therapy. A commonly reported statistic is that 19% of MSM (men who have sex with men) in the US are HIV positive (1). For black MSM, this number was 28% and for white MSM this number was 16%. This is likely an overestimate, however, since the sample used was gay men who frequent bars and clubs. My estimate of 13% comes from CDC's total HIV prevalence in gay men of 590,000 (2) and their data suggesting that MSM comprise 2.9% of men in the US (3).

 

Gonorrhea

Prevalence: Between 9% and 15% in the US

This disease affects the throat and the genitals but it is treatable with antibiotics. The CDC estimates 15.5% prevalence (4). However, this is likely an overestimate since the sample used was gay men in health clinics. Another sample (in San Francisco health clinics) had a pharyngeal gonorrhea prevalence of 9% (5).

 

Syphilis

Prevalence: 0.825% in the US

My estimate was calculated in the same manner as my estimate for HIV. I used the CDC's data (6). Syphilis is transmittable by oral and anal sex (7) and causes genital sores that may look harmless at first (8). Syphilis is curable with penicillin; however, the presence of sores increases the infectivity of HIV.

 

Herpes (HSV-1 and HSV-2)

Prevalence: HSV-2 - 18.4% (9); HSV-1 - ~75% based on Australian data  (10)

This disease is mostly asymptomatic and can be transmitted through oral or anal sex. Sometimes sores will appear and they will usually go away with time. For the same reason as syphilis, herpes can increase the chance of transmitting HIV. The estimate for HSV-1 is probably too high. Snowball sampling was used and most of the men recruited were heavily involved in organizations for gay men and were sexually active in the past 6 months. Also half of them reported unprotected anal sex in the past six months. The HSV-2 sample came from a random sample of US households (11).

 

Chlamydia

Prevalence: Rectal - 0.5% - 2.3% ; Pharyngeal - 3.0 - 10.5% (12)

 Like herpes, it is often asymptomatic - perhaps as low as 10% of infected men report symptoms. It is curable with antibiotics.

 

HPV

Prevalence: 47.2% (13)

This disease is incurable (though a vaccine exists for men and women) but usually asymptomatic. It is capable of causing cancers of the penis, throat and anus. Oddly there are no common tests for HPV, in part because there are many strains (over 100), most of which are relatively harmless. Sometimes it goes away on its own (14). The prevalence rate was oddly difficult to find; the number I cited came from a sample of men from Brazil, Mexico and the US.

 

Case Study of HIV transmission; risks and strategies for reducing risk

 IMPORTANT: None of the following figures should be generalized to other diseases. Many of these numbers are not even the same order of magnitude as the numbers for other diseases. For example, HIV is especially difficult to transmit via oral sex, but Herpes can very easily be transmitted.

Unprotected oral sex per-act risk (with a positive partner or partner of unknown serostatus):

  • Non-zero but very small; best guess 0.03% without a condom (15)

Unprotected anal sex per-act risk (with positive partner):

  • Receptive: 0.82% - 1.4% (16) (17)
  • Insertive, circumcised: 0.11% (18)
  • Insertive, uncircumcised: 0.62% (18)

Protected anal sex per-act risk (with positive partner):

  • Estimates range from 2 times lower to 20 times lower (16) (19), and the risk is highly dependent on the slippage and breakage rate.


Contracting HIV from oral sex is very rare. In one study, 67 men reported performing oral sex on at least one HIV positive partner and none were infected (20). However, transmission is possible (15). Because instances of oral transmission of HIV are so rare, the risk is hard to calculate, so the figure should be taken with a grain of salt. The number cited was obtained from a group of individuals that were either HIV positive or at high risk for HIV. The per-act risk with a positive partner is therefore probably somewhat higher.

 Note that different HIV positive men have different levels of infectivity hence the wide range of values for per-act probability of transmission. Some men with high viral loads (the amount of HIV in the blood) may have an infectivity of greater than 10% per unprotected anal sex act (17).

 

Risk reducing strategies

 Choosing sex acts that have a lower transmission rate (oral sex, protected insertive anal sex, non-insertive) is one way to reduce risk. Monogamy, testing, antiretroviral therapy, PEP and PrEP are five other ways.

 

Testing Your partner/ Monogamy

 If your partner tests negative then they are very unlikely to have HIV. There is a 0.047% chance of being HIV positive if they tested negative using a blood test and a 0.29% chance of being HIV positive if they tested negative using an oral test. If they did further tests then the chance is even lower. (See the section after the next paragraph for how these numbers were calculated).

 So if your partner tests negative, the real danger is not the test giving an incorrect result. The danger is that your partner was exposed to HIV before the test, but his body had not started to make antibodies yet. Since this can take weeks or months, it is possible for your partner who tested negative to still have HIV even if you are both completely monogamous.

 ____

For tests, the sensitivity - the probability that an HIV positive person will test positive - is 99.68% for blood tests (21), 98.03% with oral tests. The specificity - the probability that an HIV negative person will test negative - is 99.74% for oral tests and 99.91% for blood tests. Hence the probability that a person who tested negative will actually be positive is:

 P(Positive | tested negative) = P(Positive)*(1-sensitivity)/(P(Negative)*specificity + P(Positive)*(1-sensitivity)) = 0.047% for blood test, 0.29% for oral test

 Where P(Positive) = Prevalence of HIV, I estimated this to be 13%.
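
For readers who want to reproduce these numbers, here is a minimal sketch of the same Bayes calculation. The sensitivities, specificities, and the 13% prevalence are the figures quoted in this post (the last line uses the Oraquick figures from the Home Testing section below); nothing else is assumed.

```python
def p_positive_given_negative_test(prevalence, sensitivity, specificity):
    """P(actually HIV positive | tested negative), by Bayes' theorem."""
    false_negatives = prevalence * (1 - sensitivity)   # positive people who test negative
    true_negatives = (1 - prevalence) * specificity    # negative people who test negative
    return false_negatives / (false_negatives + true_negatives)

prevalence = 0.13  # estimated HIV prevalence among MSM, from the section above

print(p_positive_given_negative_test(prevalence, 0.9968, 0.9991))  # blood test: ~0.00047, i.e. ~0.047%
print(p_positive_given_negative_test(prevalence, 0.9803, 0.9974))  # oral test:  ~0.0029,  i.e. ~0.29%
print(p_positive_given_negative_test(prevalence, 0.9364, 0.9987))  # Oraquick home test: ~0.0094, i.e. ~0.94%
```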

 However, according to a writer for About.com (22) - a doctor who works with HIV - there are often multiple tests which drive the sensitivity up to 99.997%.

 

Home Testing

Oraquick is an HIV test that you can purchase online and do yourself at home. It costs $39.99 for one kit. The sensitivity is 93.64%, the specificity is 99.87% (23). The probability that someone who tested negative will actually be HIV positive is 0.94%, assuming a 13% prevalence for HIV. The same danger mentioned above applies - if the infection occurred recently the test would not detect it.

 

 Anti-Retroviral therapy

Highly active anti-retroviral therapy (HAART), when successful, can reduce the viral load – the amount of HIV in the blood – to low or undetectable levels. Baggaley et al. (17) reports that in heterosexual couples, there have been some models relating viral load to infectivity. She applies these models to MSM and reports that the per-act risk for unprotected anal sex with a positive partner should be 0.061%. However, she notes that different models produce very different results, so this number should be taken with a grain of salt.

 

 Post-Exposure Prophylaxis (PEP)

A last resort if you think you were exposed to HIV is to undergo post-exposure prophylaxis within 72 hours. Antiretroviral drugs are taken for about a month in the hope of preventing the HIV from infecting any cells. In one case-control study, some health care workers who were exposed to HIV were given PEP and some were not (this was not under the control of the experimenters). Workers who contracted HIV were less likely to have been given PEP, with an odds ratio of 0.19 (24). I don’t know whether PEP is equally effective at mitigating risk from other sources of exposure.

 

 Pre-Exposure Prophylaxis (PrEP)

This is a relatively new risk reduction strategy. Instead of taking anti-retroviral drugs after exposure, you take anti-retroviral drugs every day in order to prevent HIV infection. I could not find a per-act risk, but in a randomized controlled trial, MSM who took PrEP were less likely to become infected with HIV than men who did not (relative reduction: 41%). The average number of sex partners was 18. For men who were more consistent and had a 90% adherence rate, the relative reduction was better: 73% (25) (26).

1: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5937a2.htm?s_cid=mm5937a2_w

2: http://www.cdc.gov/hiv/statistics/basics/ataglance.html

3: http://www.cdc.gov/nchs/data/ad/ad362.pdf

4: http://www.cdc.gov/std/stats10/msm.htm

5: http://cid.oxfordjournals.org/content/41/1/67.short

6: http://www.cdc.gov/std/syphilis/STDFact-MSM-Syphilis.htm

7: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5341a2.htm

8: http://www.cdc.gov/std/syphilis/stdfact-syphilis.htm

9: http://journals.lww.com/stdjournal/Abstract/2010/06000/Men_Who_Have_Sex_With_Men_in_the_United_States_.13.aspx

10: http://jid.oxfordjournals.org/content/194/5/561.full

11: http://www.nber.org/nhanes/nhanes-III/docs/nchs/manuals/planop.pdf

12: http://www.cdc.gov/std/chlamydia/STDFact-Chlamydia-detailed.htm

13: http://jid.oxfordjournals.org/content/203/1/49.short

14: http://www.cdc.gov/std/hpv/stdfact-hpv-and-men.htm

15: http://journals.lww.com/aidsonline/pages/articleviewer.aspx?year=1998&issue=16000&article=00004&type=fulltext#P80

16: http://aje.oxfordjournals.org/content/150/3/306.short

17: http://ije.oxfordjournals.org/content/early/2010/04/20/ije.dyq057.full

18: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2852627/

19: http://journals.lww.com/stdjournal/Fulltext/2002/01000/Reducing_the_Risk_of_Sexual_HIV_Transmission_.7.aspx

20: http://journals.lww.com/aidsonline/Fulltext/2002/11220/Risk_of_HIV_infection_attributable_to_oral_sex.22.aspx

21: http://www.thelancet.com/journals/laninf/article/PIIS1473-3099%2811%2970368-1/abstract

22: http://aids.about.com/od/hivpreventionquestions/f/How-Often-Do-False-Positive-And-False-Negative-Hiv-Test-Results-Occur.htm

23: http://www.ncbi.nlm.nih.gov/pubmed/18824617

24: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD002835.pub3/abstract

25: http://www.nejm.org/doi/full/10.1056/Nejmoa1011205#t=articleResults

26: http://www.cmaj.ca/content/184/10/1153.short

Open thread, 18-24 August 2014

3 David_Gerard 18 August 2014 04:55PM

Previous open thread

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

The metaphor/myth of general intelligence

10 Stuart_Armstrong 18 August 2014 04:04PM

Thanks to Kaj for making me think along these lines.

It's agreed on this list that general intelligences - those that are capable of displaying high cognitive performance across a whole range of domains - are those that we need to be worrying about. This is rational: the most worrying AIs are those with truly general intelligences, and so those should be the focus of our worries and work.

But I'm wondering if we're overestimating the probability of general intelligences, and whether we shouldn't adjust against this.

First of all, the concept of general intelligence is a simple one - perhaps too simple. It's an intelligence that is generally "good" at everything, so we can collapse its various abilities across many domains into "it's intelligent", and leave it at that. It's significant to note that since the very beginning of the field, AI people have been thinking in terms of general intelligences.

And their expectations have been constantly frustrated. We've made great progress in narrow areas, very little in general intelligences. Chess was solved without "understanding"; Jeopardy! was defeated without general intelligence; cars can navigate our cluttered roads while being able to do little else. If we started with a prior in 1956 about the feasibility of general intelligence, then we should be adjusting that prior downwards.

But what do I mean by "feasibility of general intelligence"? There are several things this could mean, not least the ease with which such an intelligence could be constructed. But I'd prefer to look at another assumption: the idea that a general intelligence will really be formidable in multiple domains, and that one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise.

First of all, humans are very far from being general intelligences. We can solve a lot of problems when the problems are presented in particular, easy to understand formats that allow good human-style learning. But if we picked a random complicated Turing machine from the space of such machines, we'd probably be pretty hopeless at predicting its behaviour. We would probably score very low on the scale of intelligence used to construct the AIXI. The general intelligence, "g", is a misnomer - it designates the fact that the various human intelligences are correlated, not that humans are generally intelligent across all domains.

Humans with computers, and humans in societies and organisations, are certainly closer to general intelligences than individual humans. But institutions have their own blind spots and weaknesses, as does the human-computer combination. Now, there are various reasons advanced for why this is the case - game theory and incentives for institutions, human-computer interfaces and misunderstandings for the second example. But what if these reasons, and other ones we can come up with, were mere symptoms of a more universal problem: that generalising intelligence is actually very hard?

There are no-free-lunch theorems showing that no computable intelligence can perform well in all environments. As far as they go, these theorems are uninteresting, as we don't need intelligences that perform well in all environments, just in almost all/most. But what if a more general restrictive theorem were true? What if it was very hard to produce an intelligence that was of high performance across many domains? What if the performance of a generalist was pitifully inadequate compared with that of a specialist? What if every computable version of AIXI was actually doomed to poor performance?

There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists (this is my standard mental image/argument for AI risk), you could construct an entity that was very good at programming specific sub-programs, or you could approximate AIXI. But we are making some assumptions here - namely, that we can network together very different intelligences (the human-computer interface issue hints at some of the problems), and that a general programming ability can even exist in the first place (for a start, it might require a general understanding of problems that is akin to general intelligence in the first place). And we haven't had great success building effective AIXI approximations so far (which should reduce, possibly slightly, our belief that effective general intelligences are possible).

Now, I remain convinced that general intelligence is possible, and that it's worthy of the most worry. But I think it's worth inspecting the concept more closely, and at least be open to the possibility that general intelligence might be a lot harder than we imagine.

EDIT: Model/example of what a lack of general intelligence could look like.

Imagine there are three types of intelligence - social, spatial and scientific, all on a 0-100 scale. For any combination of the three intelligences - eg (0,42,98) - there is an effort level E (how hard is that intelligence to build, in terms of time, resources, man-hours, etc...) and a power level P (how powerful is that intelligence compared to others, on a single convenient scale of comparison).

Wei Dai's evolutionary comment implies that any being of very low intelligence on one of the scale would be overpowered by a being of more general intelligence. So let's set power as simply the product of all three intelligences.

This seems to imply that general intelligences are more powerful, as it basically bakes in diminishing returns - but we haven't included effort yet. Imagine that the following three intelligences require equal effort: (10,10,10), (20,20,5), (100,5,5). Then the specialised intelligence is definitely the one you need to build.

But is it plausible that those could be of equal difficulty? It could be, if we assume that high social intelligence isn't so difficult, but is specialised - i.e. you can increase the spatial intelligence of a social intelligence, but that messes up the delicate balance in its social brain. Or maybe recursive self-improvement happens more easily in narrow domains. Further assume that intelligences of different types cannot be easily networked together (e.g. combining (100,5,5) and (5,100,5) in the same brain gives an overall performance of (21,21,5)). This doesn't seem impossible.
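
A toy calculation makes the example concrete. All of the numbers are the hypothetical ones from the paragraphs above; the only extra assumption is reading the "networking" rule as taking roughly the geometric mean of the two scores in each domain, which approximately reproduces the (21,21,5) figure.

```python
from math import prod, sqrt

def power(scores):
    """Toy power measure from the example: the product of the (social, spatial, scientific) scores."""
    return prod(scores)

# Three equal-effort designs from the example above.
designs = {"generalist": (10, 10, 10), "mixed": (20, 20, 5), "specialist": (100, 5, 5)}
for name, scores in designs.items():
    print(name, power(scores))  # 1000, 2000, 2500: the narrow specialist wins at equal effort

# Networking two specialists, with per-domain performance roughly the geometric mean.
a, b = (100, 5, 5), (5, 100, 5)
networked = tuple(round(sqrt(x * y)) for x, y in zip(a, b))
print(networked, power(networked))  # roughly (22, 22, 5), close to the (21, 21, 5) in the text
```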

So let's caveat the proposition above: the most effective and dangerous type of AI might be one with a bare minimum amount of general intelligence, but an overwhelming advantage in one type of narrow intelligence.

A thought on AI unemployment and its consequences

6 Stuart_Armstrong 18 August 2014 12:10PM

I haven't given much thought to the concept of automation and computer-induced unemployment. Others at the FHI have been looking into it in more detail - see Carl Frey's "The Future of Employment", which estimated the degree of automatability of 70 chosen professions, and extended the results using O∗NET, an online service developed for the US Department of Labor, which gave the key features of an occupation as a standardised and measurable set of variables.

The reason I haven't been looking at it too much is that AI-unemployment has considerably less impact than AI-superintelligence, and is thus a less important use of time. However, if automation does cause mass unemployment, then advocating for AI safety will happen in a very different context to currently. Much will depend on how that mass unemployment problem is dealt with, what lessons are learnt, and the views of whoever is the most powerful in society. Just off the top of my head, I could think of four scenarios on whether risk goes up or down, depending on whether the unemployment problem was satisfactorily "solved" or not:

| AI risk \ Unemployment | Problem solved | Problem unsolved |
|---|---|---|
| Risk reduced | With good practice in dealing with AI problems, people and organisations are willing and able to address the big issues. | The world is very conscious of the misery that unrestricted AI research can cause, and very wary of future disruptions. Those at the top want to hang on to their gains, and they are the ones with the most control over AIs and automation research. |
| Risk increased | Having dealt with the easier automation problems in a particular way (eg taxation), people underestimate the risk and expect the same solutions to work. | Society is locked into a bitter conflict between those benefiting from automation and those losing out, and superintelligence is seen through the same prism. Those who profited from automation are the most powerful, and decide to push ahead. |

But of course the situation is far more complicated, with many different possible permutations, and no guarantee that the same approach will be used across the planet. And the division into four boxes should not fool us into thinking that the scenarios are of comparable probability - more research is (really) needed.

A "Holy Grail" Humor Theory in One Page.

1 EGarrett 18 August 2014 10:26AM

Alrighty, with the mass downvoters gone, I can make the leap to posting some ideas. Here's the Humor Theory I've been developing over the last few months and have discussed at Meet-Ups, and have written two SSRN papers about, in one page. I've taken the document I posted on the Facebook group and retyped and formatted it here.

I strongly suspect that it's the correct solution to this unsolved problem. There was even a new neurology study released in the last few days that confirms one of the predictions I drew from this theory about the evolution of human intelligence.

Note that I tried to fit as much info as I could on the page, but obviously it's not enough space to cover everything, and the other papers are devoted to that. Any constructive questions, discussion etc are welcome.



 

A "Holy Grail" Humor Theory in One Page.


Plato, Aristotle, Kant, Freud, and hundreds of other philosophers have tried to understand humor. No one has ever found a single idea that explains it in all its forms, or shows what's sufficient to create it. Thus, it's been called a "Holy Grail" of social science. Consider this...


In small groups without language, where we evolved, social orders were needed for efficiency. But fighting for leadership would hurt them. So a peaceful, nonverbal method was extremely beneficial. Thus, the "gasp" we make when seeing someone fall evolved into a rapid-fire version at seeing certain failures, which allowed us to signal others to see what happened, and know who not to follow. The reaction, naturally, would feel good and make us smile, to lower our aggression and show no threat. This reaction is called laughter. The instinct that controls it is called humor. It's triggered by the brain weighing things it observes in the proportion:


Humor = ((Quality_expected - Quality_displayed) * Noticeability * Validity) / Anxiety

 

Or H=((Qe-Qd)NV)/A. When the result of this ratio is greater than 0, we find the thing funny and will laugh, in the smallest amounts with slight smiles, small feelings of pleasure or small diaphragm spasms. The numerator terms simply state that something has to be significantly lower in quality than what we assumed, and we must notice it and feel it's real; the denominator states that anxiety lowers the reaction. This is because laughter is a noisy reflex that threatens someone else's status, so if there is a chance of violence from the person, a danger to threatening a loved one's status, or a predator or other threat from making noise, the reflex will be mitigated. The common feeling amongst those situations, anxiety, has come to cause this.
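
Purely as an illustration of how the ratio is meant to behave (the input values below are invented, and the scales are whatever the theory intends), here is a tiny sketch that evaluates H and applies the greater-than-zero threshold.

```python
def humor(q_expected, q_displayed, noticeability, validity, anxiety):
    """H = ((Qe - Qd) * N * V) / A, per the formula above."""
    return ((q_expected - q_displayed) * noticeability * validity) / anxiety

# Hypothetical values: a noticeable, believable pratfall with little anxiety attached.
h_relaxed = humor(q_expected=8, q_displayed=2, noticeability=0.9, validity=0.8, anxiety=1.0)
# The same failure, but in a situation loaded with anxiety: the reaction is strongly damped.
h_anxious = humor(q_expected=8, q_displayed=2, noticeability=0.9, validity=0.8, anxiety=50.0)

print(h_relaxed, h_relaxed > 0)   # well above zero: funny
print(h_anxious, h_anxious > 0)   # still positive but tiny: barely a reaction
```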

This may appear to be an ad hoc hypothesis, but unlike those, it can clearly unite and explain everything we've observed about humor, including our cultural sayings and the scientific observations behind previous, incomplete theories. Some noticed that humor involves surprise, some noticed that it involves things being incorrect, and all noticed the pleasure without seeing the reason. This covers all of it, naturally, and with a core concept simple enough to explain to a child. Our sayings, like "it's too soon" for a joke after a tragedy, can all be covered as well ("too soon" indicates that we still have anxiety associated with the event).

The previous confusion about humor came from a few things. For one, there are at least 4 types of laughter: at ourselves, at others we know, at others we don't know (who have an average expectation), and directly at the person with whom we're speaking. We often laugh for one reason instead of another, like "bad jokes" making us laugh at the teller. In addition, besides physical failure, like slipping, we also have a basic laugh instinct for mental failure, through misplacement. We sense attempts to order things that have gone wrong. Puns and similar references trigger this. Furthermore, we laugh loudest when we notice multiple errors (quality-gaps) at once, like a person dressed foolishly (such as a court jester) exposing errors by others.

We call this the "Status Loss Theory," and we've written two papers on it. The first is 6 pages, offers a chart of old theories and explains this more, with 7 examples. The second is 27 pages and goes through 40 more examples, applying this concept to sayings, comedians, shows, memes, and other comedy types, and even drawing predictions from the theory that have been verified by very recent neurology studies, to hopefully exhaustively demonstrate the idea's explanatory power. If it's not complete, it should still make enough progress to greatly advance humor study. If it is, it should redefine the field. Thanks for your time.

Group Rationality Diary, August 16-31

1 therufs 18 August 2014 02:33AM

This is the public group instrumental rationality diary for August 16-31. 

It's a place to record and chat about it if you have done, or are actively doing, things like: 

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.

Previous diary: August 1-15

Rationality diaries archive

[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff

43 Kaj_Sotala 17 August 2014 02:40PM

Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting of a massive scale, but the new situations are a lot fuzzier and I'm not sure of what exactly the rules should be (what counts as a permitted use of the downvote system and what doesn't?).

At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.

In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.

Thoughts on becoming more organized

-3 Will_BC 17 August 2014 03:46AM

It seems to me that the rationality movement is doing a sub-optimal job at proliferating. I have seen on multiple occasions posts which suggest that LessWrong is in decline. I think that this has a lot to do with organization, and by organization I mean the effectiveness with which a group of people obtains its goals. I believe that rationality has a more populist message, and I would like to see it refined and spread. I have a collection of my thoughts here: https://drive.google.com/folderview?id=0B9BZfCmYSqm-TTlfRW1hMVJ5VnM&usp=sharing, with a more concise and up-to-date summary of my suggestions here: https://docs.google.com/document/d/1I-T-jiuhHr951FUHZ6q-KUW4oK-GHCjF9bVGr2dwR1M/edit?usp=sharing. I have not developed these ideas to the point where I am strongly attached to them. What I would like to do for now is to create three monthly discussion groups.

 

The first is based on instrumental rationality, and I'd like to call it Success Club. For this group I would like to use [Alex Vermeer's 8760 hours guide](http://alexvermeer.com/8760hours/) as a basis. If you want to join this group, I would suggest you be open minded and able to deal with other people's sensitive issues. This will work as a support group, and if you can't keep confidentiality you won't be able to be a member. The second group is based on more general or epistemic rationality. I would use the sequences as a basis, but if any CFAR alums have better suggestions I would welcome them. The third group is a meta group, discussing the movement as a whole and how to make it more effective. I would like to start by discussing the ideas in the google drive folder that I shared and move from there.

 

If anyone is interested in any of these groups, please send me a PM with a little about yourself and which group(s) you'd be interested in joining. 

 

Edit: Could someone explain why I've been downvoted? Judging by the way the karma is proportioned I'm getting a good number of positive and negative reactions but not a whole lot in the way of feedback.

Astray with the Truth: Logic and Math

2 StephenR 16 August 2014 03:40PM

LessWrong has one of the strongest and most compelling presentations of a correspondence theory of truth on the internet, but as I said in A Pragmatic Epistemology,  it has some deficiencies. This post delves into one example: its treatment of math and logic. First, though, I'll summarise the epistemology of the sequences (especially as presented in High Advanced Epistemology 101 for Beginners). 

Truth is the correspondence between beliefs and reality, between the map and the territory.[1] Reality is a causal fabric, a collection of variables ("stuff") that interact with each other.[2] True beliefs mirror reality in some way. If I believe that most maps skew the relative size of Ellesmere Island, it's true when I compare accurate measurements of Ellesmere Island to accurate measurements of other places, and find that the differences aren't preserved in the scaling of most maps. That is an example of a truth-condition, which is a reality that the belief can correspond to. My belief about world maps is true when that scaling doesn't match up in reality. All meaningful beliefs have truth-conditions; they trace out paths in a causal fabric.[3] Another way to define truth, then, is that a belief is true when it traces a path which is found in the causal fabric the believer inhabits.

Beliefs come in many forms. You can have beliefs about your experiences past, present and future; about what you ought to do;  and, relevant to our purposes, about abstractions like mathematical objects. Mathematical statements are true when they are truth-preserving, or valid. They're also conditional: they're about all possible causal fabrics rather than any one in particular.[4] That is, when you take a true mathematical statement and plug in any acceptable inputs,[5] you will end up with a true conditional statement about the inputs. Let's illustrate this with the disjunctive syllogism:

((A∨B) ∧ ¬A) ⇒ B

Letting A be "All penguins ski in December" and B be "Martians have been decimated," this reads "If all penguins ski in December or Martians have been decimated, and some penguins don't ski in December, then Martians have been decimated." And if the hypothesis obtains (if it's true that (A∨B) ∧ ¬A), then the conclusion (B) is claimed to follow.[6] 

That's it for review, now for the substance.

Summary. First, from examining the truth-conditions of beliefs about validity, we see that our sense of what is obvious plays a suspicious role in which statements we consider valid. Second, a major failure mode in following obviousness is that we sacrifice other goals by separating the pursuit of truth from other pursuits. This elevation of the truth via the epistemic/instrumental rationality distinction prevents us from seeing it as one instrumental goal among many which may sometimes be irrelevant.


What are the truth-conditions of a belief that a certain logical form is valid or not? 

A property of valid statements is being able to plug any proposition you like into the propositional variables of the statement without disturbing the outcome (the conditional statement will still be true). Literally any proposition; valid forms are about everything that can be articulated by means of propositions. So part of the truth-conditions of a belief about validity is that if a sentence is valid, everything is a model of it. In that case, causal fabrics, which we investigate by means of propositions,[7] can't help but be constrained by what is logically valid. We would never expect to see some universe where inputting propositions into the disjunctive syllogism can output false without being in error. Call this the logical law view. This suggests that we could check a bunch of inputs and universe constructions until we feel satisfied that the sentence will not fail to output true.
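To make the "check a bunch of inputs" idea concrete, here is a minimal sketch (my own, not the author's) that exhaustively checks the disjunctive syllogism over every classical two-valued truth assignment. Note that this is exactly the kind of limited survey the paragraph above worries about: it samples truth assignments, not "all possible causal fabrics".

```python
from itertools import product

def disjunctive_syllogism(a: bool, b: bool) -> bool:
    """((A or B) and not A) => B, evaluated in classical two-valued semantics."""
    hypothesis = (a or b) and not a
    return (not hypothesis) or b

# Exhaustively check every classical truth assignment of A and B.
print(all(disjunctive_syllogism(a, b) for a, b in product([True, False], repeat=2)))
# True -- but only over two-valued assignments, which is the author's point.
```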

It happens that sentences which people agree are valid are usually sentences that people agree are obviously true. There is something about the structure of our thought that makes us very willing to accept their validity. Perhaps you might say that because reality is constrained by valid sentences, sapient chunks of reality are going to be predisposed to recognising validity ...

But what separates that hypothesis from this alternative: "valid sentences are rules that have been applied successfully in many cases so far"? That is, after all, the very process that we use to check the truth-conditions of our beliefs about validity. We consider hypothetical universes and we apply the rules in reasoning. Why should we go further and claim that all possible realities are constrained by these rules? In the end we are very dependent on our intuitions about what is obvious, which might just as well be due to flaws in our thought as to logical laws. And our insistence on correctness is no excuse. In that regard we may be no different than certain ants that mistake living members of the colony for dead when their bodies are covered in a certain pheromone:[8] prone to a reaction that is just as obviously astray to other minds as it is obviously right to us.

In light of that, I see no reason to be confident that we can distinguish between success in our limited applications and necessary constraint on all possible causal fabrics. 

And despite what I said about "success so far," there are clear cases where sticking to our strong intuition to take the logical law view leads us astray on goals apart from truth-seeking. I give two examples where obsessive focus on truth-seeking consumes valuable resources that could be used toward a host of other worthy goals. 

The Law of Non-Contradiction. This law is probably the most obvious thing in the world. A proposition can't be true and false, or ¬(P ∧ ¬P). If it were both, then you would have a model of any proposition you could dream of. This is an extremely scary prospect if you hold the logical law view; it means that if you have a true contradiction, reality doesn't have to make sense. Causality and your expectations are meaningless. That is the principle of explosion: (P ∧ ¬P) ⇒ Q, for arbitrary Q. Suppose that pink is my favourite colour, and that it isn't. Then pink is my favourite colour or causality is meaningless. Except pink isn't my favourite colour, so causality is meaningless. Except it is, because either pink is my favourite colour or causality is meaningful, but pink isn't. Therefore pixies, by a similar argument.
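For concreteness, here is the explosion spelled out as a machine-checked proof. This is only a sketch in Lean 4 syntax (my own addition, not from the post); the point is just that a single contradictory pair of hypotheses proves any Q whatsoever.

```lean
-- Principle of explosion: from P and ¬P, any proposition Q follows.
-- `absurd` takes a proof of P and a proof of ¬P and yields anything at all.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
```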

Is (P ∧ ¬P) ⇒ Q valid? Most people think it is. If you hypnotised me into forgetting that I find that sort of question suspect, I would agree. I can *feel* the pull toward assenting to its validity. If ¬(P ∧ ¬P) is true it would be hard to say why not. But there are nonetheless very good reasons for ditching the law of non-contradiction and the principle of explosion. Despite its intuitive truth and general obviousness, it's extremely inconvenient. Settling the consistency of systems like PA and ZFC, which are central to mathematics, has proved very difficult. But of course part of the motivation is that if there were an inconsistency, the principle of explosion would render the entire system useless. This undesirable effect has led some to develop paraconsistent logics, which do not explode with the discovery of a contradiction.

Setting aside whether the law of non-contradiction is really truly true and the principle of explosion really truly valid, wouldn't we be better off with foundational systems that don't buckle over and die at the merest whiff of a contradiction? In any case, it would be nice to alter the debate so that the truth of these statements didn't eclipse their utility toward other goals.

The Law of Excluded Middle. P∨¬P: if a proposition isn't true, then it's false; if it isn't false, then it's true. In terms of the LessWrong epistemology, this means that a proposition either obtains in the causal fabric you're embedded in, or it doesn't. Like the previous example this has a strong intuitive pull. If that pull is correct, all sentences Q ⇒ (P∨¬P) must be valid since everything models true sentences. And yet, though doubting it can seem ridiculous, and though I would not doubt it on its own terms[9], there are very good reasons for using systems where it doesn't hold.

The use of the law of excluded middle in proofs severely inhibits the construction of programmes based on proofs. The barrier is that the law is used in existence proofs, which show that some mathematical object must exist but give no method of constructing it.[10] 
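A standard illustration of such a proof (a classical textbook example, not taken from the post or its links): to show that there exist irrational numbers a and b with a^b rational, invoke excluded middle on whether √2^√2 is rational. If it is, take a = b = √2; if it isn't, take a = √2^√2 and b = √2, since then a^b = √2^(√2·√2) = √2^2 = 2. Either way the pair exists, yet the proof never tells us which of the two candidates actually works, so it yields no construction.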

Removing the law, on the other hand, gives us intuitionistic logic. Via a mapping called the Curry-Howard isomorphism, all proofs in intuitionistic logic are translatable into programmes in the lambda calculus, and vice versa. The lambda calculus itself, assuming the Church-Turing thesis, gives us all effectively computable functions. This creates a deep connection between proof theory in constructive mathematics and computability theory, facilitating automatic theorem proving and proof verification and rendering everything we do more computationally tractable.
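As a tiny illustration of the proofs-as-programs reading (again a sketch of my own, in Lean 4 syntax): a constructive proof term is literally a program, whereas excluded middle only enters by importing a classical axiom.

```lean
-- Curry-Howard in miniature: the proof of A → (B → A) is the constant function
-- (the K combinator), i.e. a program that takes a and b and returns a.
example (A B : Prop) : A → (B → A) :=
  fun a _ => a

-- There is no such program for P ∨ ¬P; classically it is obtained from an axiom.
example (P : Prop) : P ∨ ¬P :=
  Classical.em P
```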

Even if the above weren't tempting and we decided not to restrict ourselves to constructive proofs, we would still be stuck with intuitionistic logic. Just as classical logic is associated with Boolean algebras, intuitionistic logic is associated with Heyting algebras. And it happens that the open set lattice of a topological space is a complete Heyting algebra even in classical topology.[11] This is closely related to topos theory; the internal logic of a topos is at least[12] intuitionistic. As I understand it, many topoi can be considered as foundations for mathematics,[13] and so again we see a classical theory pointing at constructivism suggestively. The moral of the story: in classical mathematics where the law of excluded middle holds, objects in which it fails arise naturally.

Work in the foundations of mathematics suggests that constructive mathematics is at least worth looking into, setting aside whether the law of excluded middle is too obvious to doubt. Letting its truth hold us back from investigating the merits of living without it cripples the capabilities of our mathematical projects. 


Unfortunately, not all constructivists or dialetheists (as proponents of paraconsistent logic are called) would agree with how I framed the situation. I have blamed the tendency to stick to discussions of truth for our inability to move forward in both cases, but they might blame the inability of their opponents to see that the laws in question are false. They might urge that if we take the success of these laws as evidence of their truth, then failures or shortcomings should be evidence against them and we should simply revise our views accordingly.

That is how the problem looks when we wear our epistemic rationality cap and focus on the truth of sentences: we consider which experiences could tip us off about which rules govern causal fabrics, and we organise our beliefs about causal fabrics around them. 

This framing of the problem is counterproductive. So long as we are discussing these abstract principles under the constraints of our own minds,[14] I will find any discussion of their truth or falsity highly suspect for the reasons highlighted above. And beyond that, the psychological pull toward the respective positions is too forceful for this mode of debate to make progress on reasonable timescales. In the interests of actually achieving some of our goals I favour dropping that debate entirely.

Instead, we should put on our instrumental rationality cap and consider whether these concepts are working for us. We should think hard about what we want to achieve with our mathematical systems and tailor them to perform better in that regard. We should recognise when a path is moot and trace a different one.

When we wear our instrumental rationality cap, mathematical systems are not attempts at creating images of reality that we can use for other things if we like. They are tools that we use to achieve potentially any goal, and potentially none. If after careful consideration we decide that creating images of reality is a fruitful goal relative to the other goals we can think of for our systems, fine. But that should by no means be the default, and if it weren't, mathematics would be headed elsewhere.


ADDENDUM

[Added due to expressions of confusion in the comments. I have also altered the original conclusion above.]

I gave two broad weaknesses in the LessWrong epistemology with respect to math.

The first concerned its ontological commitments. Thinking of validity as a property of logical laws constraining causal fabrics is indistinguishable for practical purposes from thinking of validity as a property of sentences relative to some axioms or according to strong intuition. Since our formulation and use of these sentences have been in familiar conditions, and since it is very difficult (perhaps impossible) to determine whether their psychological weight is a bias, inferring any of them as logical laws above and beyond their usefulness as tools is spurious.

The second concerned cases where the logical law view can hold us back from achieving goals other than discovering true things.  The law of non-contradiction and the law of excluded middle are as old as they are obvious, yet they prevent us from strengthening our mathematical systems and making their use considerably easier. 

One diagnosis of this problem might be that sometimes it's best to set our epistemology aside in the interests of practical pursuits, that sometimes our epistemology isn't relevant to our goals. Under this diagnosis, we can take the LessWrong epistemology literally and believe it is true, but temporarily ignore it in order to solve certain problems. This is a step forward, but I would make a stronger diagnosis: we should have a background epistemology guided by instrumental reason, in which the epistemology of LessWrong and epistemic reason are tools that we can use if we find them convenient, but which we are not committed to taking literally.

I prescribe an epistemology that a) sees theories as no different from hammers, b) doesn't take the content of theories literally, and c) lets instrumental reason guide the decision of which theory to adopt when. I claim that this is the best framework to use for achieving our goals, and I call this a pragmatic epistemology.  

---

[1] See The Useful Idea of Truth.

[2] See The Fabric of Real Things and Stuff that Makes Stuff Happen.

[3] See The Useful Idea of Truth and The Fabric of Real Things. 

[4] See Proofs, Implications, and Models and Logical Pinpointing.

[5] Acceptable inputs being given by the universe of discourse (also known as the universe or the domain of discourse), which is discussed on any text covering the semantics of classical logic, or classical model theory in general.

[6] A visual example using modus ponens and cute cuddly kittens is found in Proofs, Implications, and Models.

[7] See The Useful Idea of Truth.

[8] See this paper by biologist E O Wilson.

[9] What I mean is that I would not claim that it "isn't true," which usually makes the debate stagnate. 

[10] For concreteness, read these examples of non-constructive proofs. 

[11] See here, paragraph two. 

[12] Given certain further restrictions, a topos is Boolean and its internal logic is classical. 

[13] This is an amusing and vague-as-advertised summary by John Baez.

[14] Communication with very different agents might be a way to circumvent this. Receiving advice from an AI, for instance. Still, I have reasons to find this fishy as well, which I will explore in later posts. 

Three methods of attaining change

6 Stefan_Schubert 16 August 2014 03:38PM

Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):

1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes. 

2) You may try to build an alternative system and hope that it eventually becomes so popular so that it replaces the existing system.

3) You may try to develop tools that a) appeal to users of existing systems and b) whose widespread use is bound to change those existing systems.

Let me give some examples of what I mean. Trying to persuade politicians that we should replace conventional currencies with a private currency, or, for that matter, starting a pro-Bitcoin party, falls under 1), whereas starting a private currency and hoping that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform academia falls under 1) whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g., Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.

Efficient Voting Advice Applications (VAAs), which advise you how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since a politician who failed to do this would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change caused not by lobbying politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.

Other similar tools are reputation or user review systems. Suppose that you're dissatisfied with the general standards of some institution: say university education, medical care, or what not. You may address this by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior, universities or hospitals (2), hoping that others will follow. Another method is, however, to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing them to improve.

Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence you're following the third strategy here, in my terms.

Strategy 1) is of course a "statist" one, since what you're doing here is that you're trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.

My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they are going to succeed that way. (For instance, it seems to me that advocates for direct democracy who try to persuade voters to vote for direct democratic parties are unlikely to succeed, but that widespread use of VAAs might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias; our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess it is, for this reason, generally overused and that people sometimes instead should go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)

I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to see what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is therefore created. Moreover, I think that availability bias may play a role here, also. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of a project like this therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.

Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAAs) a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (but it would need to be discussed on a case-by-case basis), but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense would still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it, but developing efficient VAAs, which as a side-effect change the political landscape, does not.

I'd thus argue that people should start looking more closely at the third strategy. One group that does use a strategy similar to this is, of course, for-profit companies. They try to analyze what products would appeal to people, and in so doing, carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been successful because they realized that, given the structure of the taxi, hotel and recruitment businesses, their products would be appealing.

Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. I think that generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on trying to change things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)

Instead, we should think as hard as profit-maximizing companies do about what new tools are likely to catch on. Any kind of tool could in principle be used, but the ones that seem most obvious are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in a way that would impact existing social and political institutions on a large scale.

Developing such tools is not easy. Even very successful companies again and again fail to predict what new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. More often than not, they can develop ideas publicly, whereas for-profit companies often have to keep them secret until their product is launched. This facilitates wisdom-of-the-crowd reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.

 

Any input regarding, e.g., the taxonomy of methods, my speculations about biases, and, in particular, examples of institution-changing tools is welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).

FAI PR tracking well [link]

7 Dr_Manhattan 15 August 2014 09:23PM

This time, it's by "The Editors" of Bloomberg View (which is very significant in the news world). The content is a very reasonable explanation of AI concerns, though not novel to this audience.

http://www.bloombergview.com/articles/2014-08-10/intelligent-machines-scare-smart-people

Directionally this is definitely positive, though I'm not sure quite how to build on it. Does anyone have ideas? Perhaps one of the orgs (MIRI, FHI, CSER, FLI) could reach out and say hello to the editors?

Weekly LW Meetups

1 FrankAdamek 15 August 2014 08:21PM

[LINK] Speed superintelligence?

33 Stuart_Armstrong 14 August 2014 03:57PM

From Toby Ord:

Tool assisted speedruns (TAS) are when people take a game and play it frame by frame, effectively providing super reflexes and forethought, where they can spend a day deciding what to do in the next 1/60th of a second if they wish. There are some very extreme examples of this, showing what can be done if you really play a game perfectly. For example, this video shows how to win Super Mario Bros 3 in 11 minutes. It shows how different optimal play can be from normal play. In particular, on level 8-1, it gains 90 extra lives by a sequence of amazing jumps.

Other TAS runs get more involved and start exploiting subtle glitches in the game. For example, this page talks about speed running NetHack, using a lot of normal tricks, as well as luck manipulation (exploiting the RNG) and exploiting a dangling pointer bug to rewrite parts of memory.

Though there are limits to what AIs could do with sheer speed, it's interesting that great performance can be achieved with speed alone, that this allows different strategies from usual ones, and that it allows the exploitation of otherwise unexploitable glitches and bugs in the setup.

Public thread for researchers seeking existential risk consultation

0 snarles 14 August 2014 01:01PM

LW is one of the few informal places which take existential risk seriously.  Researchers can post here to describe proposed or ongoing research projects, seeking consultation on possible X-risk consequences of their work.  Commenters should write their posts with the understanding that many researchers prioritize interest first and existential risk/social benefit of their work second, but that discussions of X-risk may steer researchers to projects with less X-risk/more social benefit.
