
Solving sleep: just a toe-dipping

10 Capla 30 June 2015 07:38PM
[For the past few months I’ve been undertaking a mostly independent study of sleep, and looking to build a coherent model of what sleep does and find ways to optimize it. I’d like to write a series of posts outlining my findings and hypotheses. I’m not sure if this is the best venue for such a project, and I’d like to gauge community interest. This first post is a brief overview of one important aspect of sleep, with a few related points of recommendation, to provide some background knowledge.]

 

In the quest to become more effective and productive, sleep is an enormously important process to optimize. Most of us spend (or at least think we should spend) 7.5 to 8.5 hours in bed every night, a third of a 24 hour day. Not sleeping well and not sleeping sufficiently have known and large drawbacks, including decreased attention, greater irritability, depressed immune function, and generally weakened cognitive ability. If you’re looking for more time, either for subjective life-extension, or so that you can get more done in a day, taking steps to sleep most efficiently, so as to not spend more than the required amount of time in bed and to get the full benefit of the rest, is of high value.

Understanding the inner mechanisms of this process can let us work around them. Sleep, baffling as it is (and it is extremely baffling), is not a black box. Knowing how it works, you can organize your behavior to accommodate the world as it is, just as taking advantage of the principles of aerodynamics, thrust, and lift enables one to build an airplane.

The most important thing to know about sleep and wakefulness is that it is the result of a dual process: how alert a person feels is determined by two different and opposite functions. The first is termed the homeostatic sleep drive (also, homeostatic drive, sleep load, sleep pressure, and process S), which is determined solely by how long it has been since an individual last slept fully. The longer he/she's been awake, the greater his/her sleep drive. It is the brain's biological need to sleep. Just as sufficient need for calories produces hunger, sufficient sleep drive produces sleepiness. Sleeping decreases sleep drive, and sleep drive drops faster (when sleeping) than it rises (when awake).

Neuroscience is complicated, but it seems the chemical correlate of sleep drive is the build-up of adenosine in the basal forebrain, and this is used as the brain's internal measure of how badly one needs sleep.1 (Caffeine makes us feel alert by competing with adenosine for binding sites on its receptors, blocking adenosine's sleep-promoting signal.)

This is only half the story, however. Adenosine levels are much higher (and sleep drive correspondingly greater) in the evening, when one has been awake for a while, than in the middle of the night, when one has just slept for several hours. If sleepiness were determined only by sleep drive, sleep would be much more fragmented: you would sleep several times during the day, and wake up several times during the night. Instead, humans typically stay awake through the day, and sleep through the whole night. This is due to the second influence on wakefulness: the circadian alerting signal.

For most of human history, there was little that could be done at night. Darkness made it much more difficult to hunt or gather than during the day. Given that the brain requires some fraction of the nychthemeron (a 24-hour period) asleep, it is evolutionarily preferable to concentrate that fraction in the nighttime, freeing the day for other things. For this reason, there is also a cyclical component to one's alertness: independent of how long it has been since an individual has slept, there will be times in the nychthemeron when he/she will feel more or less tired.

Roughly, the circadian alerting signal (also known as process C) counters the sleep drive, so that as sleep drive builds up during the day, alertness stays roughly constant, and as sleep drive dissipates over the course of the night, the alerting signal falls with it, so the individual stays asleep.

The alerting signal is synchronized to circadian rhythms, which are in turn attuned to light exposure. The circadian clock is set so that the alerting signal begins to rise again (after a night of sleep) at the time the optic nerve is first exposed to light in the morning (or rather, at the time it has habitually been first exposed to light, since it takes up to a week to reset circadian rhythms), and it keeps rising along with the sleep drive until about 14 hours later.

This is why, if you pull an “all-nighter,” you might find it difficult to fall asleep during the following day, even though you feel exhausted. Your sleep drive is high, but the alerting signal is still promoting wakefulness, which makes it hard to fall asleep.

For unknown reasons, there is a dip in the circadian alerting signal about 8 hours after the beginning of the cycle. This is why people sometimes experience that “2:30 feeling.” This is also the time at which biphasic cultures typically have an afternoon siesta. This is useful to know, because it is the best time to take a nap if you want to make up sleep missed the night before.
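To make the dual-process picture concrete, here is a minimal toy simulation of the two processes (a rough sketch in Python; the time constants, the sinusoid's phase, and the simple "alertness = C minus S" combination are illustrative assumptions of mine, not parameters from the sleep literature):

```python
import math

# Toy two-process model of alertness. Parameters are illustrative only.
# Process S (sleep drive): rises toward a ceiling while awake, decays while asleep.
# Process C (circadian alerting): a crude 24-hour sinusoid peaking in the late afternoon.
def simulate(hours=48, wake_time=7, sleep_time=23, dt=1.0):
    S = 0.5   # starting sleep drive
    t = 0.0
    while t < hours:
        awake = wake_time <= (t % 24) < sleep_time
        if awake:
            S += (1.0 - S) * (1 - math.exp(-dt / 18.0))   # builds up slowly while awake
        else:
            S *= math.exp(-dt / 4.0)                       # dissipates faster while asleep
        C = 0.5 + 0.5 * math.sin(2 * math.pi * ((t % 24) - 11) / 24)
        alertness = C - S   # net alertness: roughly level by day, low through the night
        print(f"t={t:5.1f}h  S={S:.2f}  C={C:.2f}  net alertness={alertness:+.2f}")
        t += dt

simulate()
```

Even this crude version reproduces the qualitative point above: net alertness stays roughly level across the waking day and low across the night, even though the sleep drive itself moves steadily in each phase.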

 

[Chart: sleep drive and circadian alerting signal across the day] http://bonytobombshell.com/wp-content/uploads/2015/05/energy-levels-sleep-drive-alert-chart-1-bony-bombshell.jpg

 

The neurochemistry of the circadian alerting signal is more complex than that of the sleep drive, but one of the key chemicals of process C is melatonin, which is secreted by the pineal gland about 12 hours after the start of the circadian cycle (two hours before habitual bedtime). It is mildly sleep-inducing.

This is why taking melatonin tablets before bed is recommended by gwern and others. I second this recommendation. Though not FDA-approved, there seems to be little in the way of negative side effects, and the tablets make it much easier to fall asleep.

The natural release of melatonin is inhibited by light, and in particular blue light (which is why it is beneficial to use applications that red-shift the light of your computer screen, like f.lux or Redshift, or to wear red-tinted goggles, before bed). By limiting light exposure in the late evening you allow natural melatonin secretion, which both stimulates sleep and prevents the circadian clock from shifting (which would make it even more difficult to fall asleep the following night). Recent studies have shown that bright screens at night do demonstrably disrupt sleep.2

What interests me about the fact that alertness is controlled by both process S and process C is that it may be possible to modulate each of those processes independently. It would be enormously useful to be able to “turn off” the circadian alerting signal on demand, so that a person can fall asleep at any time of the day, to make up sleep loss whenever convenient. Instead of accommodating circadian rhythms when scheduling, we could adjust the circadian effect to better fit our lives. When you know you’ll need to be awake all night, for instance, you could turn off the alerting signal around midday and sleep until your sleep drive is reset. In fact, I suspect that those people who are able to live successfully on a polyphasic sleep schedule get the benefits by retraining the circadian influence. In the coming posts, I want to outline a few of the possibilities and (significant) problems in that direction.


Effective Altruism vs Missionaries? Advice Requested from a Newly-Built Crowdfunding Platform.

0 lululu 30 June 2015 05:39PM

Hi, I'm developing a next-generation crowdfunding platform for non-profit fundraising. From what we have seen, it is an effective tool; more about it below. I'm working with two other cofounders, both of whom are evangelical Christians. We get along well in general, except that I strongly believe in effective altruism and they do not.

We will launch a second pilot fundraising campaign in 2-3 weeks. The organization my co-founders have arranged for us to fundraise for is a "church planting" missionary organization. This is so opposed to my belief in effective altruism that I feel uncomfortable using our effective tool to funnel donors' dollars in THIS of all directions. This is not the reason I got involved in this project.

My argument to them is that we should charge more to ineffective nonprofits such as colleges and religious or political organizations, and use that extra revenue to subsidize the campaign and money-processing costs of the effective non-profits. I think this is logically consistent with earning to give. But I am being outvoted two-to-one by people who believe saving lives and saving souls are nearly equally important.

So I have two requests:

1. Any advice on how to navigate this would be appreciated (including especially well-written articles that would appeal to evangelical Christians, or experience negotiating with start-up cofounders).

2. If anyone has personal connections with effective or effective-ish non-profits, I would much prefer to fundraise for them than for my co-founders' church connections. Caveat: the org must have US non-profit legal status.

About the platform: the gist of our concept is that we're using a lot of psychology, bias, and altruism research to nudge more people towards giving, and also to nudge them towards sustained involvement with the nonprofit in question. We're using some of the tricks that made the ice bucket challenge so successful (but with added accountability to ensure that visible involvement actually leads to monetary donations). Users can pledge money contingent on their friends' involvement, which motivates people in the same way that matching donations do. Giving is very visible, and people are more likely to give if they see friends giving. Friends are making the request for funding, which creates a sense of personal connection. Each person's mini-campaign has an involvement goal and a time limit (3 friends in 3 days) to create a sense of urgency. The money your friends donate visibly increases your impact, so it also feels like getting money from nothing: a $20 pledge can become hundreds of dollars. We nudge people towards automated smaller monthly recurring gifts. We try to minimize the number of barriers to making a donation (fewer steps, fewer fields).

 

Selecting vs. grooming

2 DeVliegendeHollander 30 June 2015 10:48AM

Content warning: meta-political, with hopefully low mind-killer factor.

Epistemic status: proposal for brain-storming.

- Representative democracies select political leaders. Monarchies and aristocracies groom political leaders for the job from childhood. (Also, to a certain extent they breed them for the job.)

- Capitalistic competition selects economic elites. Heritable landowning aristocracies groom economic elites from childhood. (Again, they also breed them.)

- A capitalist employer selects an accountant from a pool of 100 applicants. A feudal lord would groom a serf boy who has a knack for horses into the job of the adult stable man.

It seems a lot like selecting is better than grooming. After all, it is the modern way, and hardly anyone would argue that capitalism doesn't have a higher economic output than feudalism, and so on.

But... since it was such a hugely important difference throughout history (perhaps one of the things that really defined the modern world, because it determines the whole social structure of societies past and present), I think it deserves some investigation. There may be something more interesting lurking here than just saying selection/testing won over grooming, period.

1) Can aspects of grooming as opposed to selecting/testing be steelmanned, are there corner cases when it could be better?

2) A pre-modern, medievalish society that nevertheless used a lot of selection/testing was China - I am thinking about the famous mandarin exams. Does this seem to have had any positive effect on China compared to other similar societies? That is, is it plausible that this was a big factor in the general outcomes of the 2015 West vs. the 1515 West? Comparing old China with similar medievalish but non-selectionist (inheritance-based) societies would be useful for isolating this factor, right?

3) Why exactly does selecting and testing work better than grooming (and breeding) ?

4) Is it possible it works better because people do the breeding (intelligent people tend to marry intelligent people etc.) and grooming (a child of doctors will have an entirely different upbringing than a child of manual laborers) on their own, thus the social system does not have to do it, it is enough / better for the social system to do the selection, to do the testing of the success of the at-home grooming?

5) Any other interesting insight or reference?

Note: this is NOT about meritocracy vs. aristocracy. It is about two different kinds of meritocracy - where you either select, test people for merit (through market competition or elections) but you don't care much how to _build_ people who  will have merit vs. an aristocratic meritocracy where you largely focus on breeding and grooming people into the kinds who will have merit, and don't focus on selecting and testing so much.

Note 2: is it possible that this is a false dichotomy? One could argue that Western society is chock full of features for breeding and grooming people: there are dating sites for specific groups of people, there are tons of helping resources parents can draw on, kids spend 15-20 years at school, and so on, so the breeding and grooming is done all right, and I am just being misled here by mere names. Such as the name democracy: it is a selection process, but who wins depends on breeding and grooming. Such as market competition: those best bred and groomed have the highest chance. Is it simply that selection is more noticeable than grooming, so it gets more limelight, but we actually do both? If yes, why does selection get more limelight than grooming? Why do we talk about elections more than about how to groom a child into being a politician, or why do we talk about market competition more than how to groom a child into the entrepreneur who aces competition? If modern society uses both, why is selection in the public spotlight while grooming is just something happening at home and school and not so noticeable? (To be fair, on LW, we talk more about how to test hypotheses than how to formulate them. Is this potentially related? People are just more interested in testing than building, be that hypotheses or people?)

 

 

Top 9+1 myths about AI risk

34 Stuart_Armstrong 29 June 2015 08:41PM

Following some somewhat misleading articles quoting me, I thought I'd present the top 10 myths about the AI risk thesis:

  1. That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.
  2. That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.
  3. That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe... you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is certain; “Possible AI risk” is where we stand.
  4. That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.
  5. That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.
  6. That there’s one simple trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a tool and socialising with humans!).
  7. That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk aware AI researchers that are most likely to figure out how to make safe AI.
  8. That AIs will be more intelligent than us, hence more moral. It’s pretty clear that in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That in the billions of possible minds out there, there is none that is both dangerous and very intelligent?
  9. That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all, as these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, etc... Anthropomorphising the AIs could lead us completely astray.
  10. That AIs have to be evil to be dangerous. The majority of the risk comes from indifferent or partially nice AIs. Those that have some goal to follow, with humanity and its desires just getting in the way – using resources, trying to oppose it, or just not being perfectly efficient for its goal.

 

Parenting Technique: Increase Your Child’s Working Memory

11 James_Miller 29 June 2015 07:51PM

I continually train my ten-year-old son’s working memory, and urge parents of other young children to do likewise.  While I have succeeded in at least temporarily improving his working memory, I accept that this change might not be permanent and could end a few months after he stops training.  But I also believe that while his working memory is boosted so too is his learning capacity.    

I have a horrible working memory that greatly hindered my academic achievement.  I was so bad at spelling that they stopped counting it against me in school.  In technical classes I had trouble remembering what variables stood for.  My son, in contrast, has a fantastic memory.  He twice won his school’s spelling bee, and just recently I wrote twenty symbols (letters, numbers, and shapes) in rows of five.  After a few minutes he memorized the symbols and then (without looking) repeated them forward, backwards, forwards, and then by columns.    

My son and I have been learning different programming languages through Codecademy.  While I struggle to remember the required syntax of different languages, he quickly gets this and can focus on higher-level understanding.  When we do math learning together, his strong working memory also lets him concentrate on higher-order issues rather than remembering the details of the problem and the relevant formulas.

You can easily train a child’s working memory.  It requires just a few minutes of time a day, can be very low tech or done on a computer, can be optimized for your child to get him in flow, and easily lends itself to a reward system.  Here is some of the training we have done:     

 

 

  • I write down a sequence and have him repeat it.
  • I say a sequence and have him repeat it.
  • He repeats the sequence backwards.
  • He repeats the sequence with slight changes such as adding one to each number and “subtracting” one from each letter.
  • He repeats while doing some task like touching his head every time he says an even number and touching his knee every time he says an odd one.
  • Before repeating a memorized sequence he must play repeat after me where I say a random string.
  • I draw a picture and have him redraw it.
  • He plays N-back games.
  • He does mental math requiring keeping track of numbers (e.g. 42 times 37).
  • I assign numerical values to letters and ask him math operation questions (e.g. A*B+C).

 

 

The key is to keep changing how you train your kid so you have more hope of improving general working memory rather than the very specific task you are doing.  So, for example, if you say a sequence and have your kid repeat it back to you, vary the speed at which you talk on different days and don’t just use one class of symbols in your exercises.
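For anyone who wants to automate that variety, here is a minimal sketch of a drill generator (the symbol sets, task list, and function name are my own illustrative choices, not something from the post):

```python
import random

# Toy working-memory drill generator: varies both the symbol set and the task
# so the child isn't training on one narrow exercise.
SYMBOL_SETS = {
    "letters": list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"),
    "digits": list("0123456789"),
    "shapes": ["circle", "square", "triangle", "star", "cross"],
}
TASKS = ["repeat it", "repeat it backwards", "repeat it by columns of five"]

def make_drill(length=5):
    kind = random.choice(list(SYMBOL_SETS))
    sequence = [random.choice(SYMBOL_SETS[kind]) for _ in range(length)]
    task = random.choice(TASKS)
    return kind, sequence, task

kind, sequence, task = make_drill(length=7)
print(f"Read this {kind} sequence aloud, then have the child {task}:")
print(" ".join(sequence))
```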

 

 

Two-boxing, smoking and chewing gum in Medical Newcomb problems

9 Caspar42 29 June 2015 10:35AM

I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work.

Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take box A and B)? Here's a causal diagram for the problem:



Since Omega does not do much other than translating your genes into money under a box, it does not seem to hurt to leave it out:


I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?)

Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p.67) chewing gum problem? Chewing Gum (or smoking) seems to be like taking box A to get at least/additional $1K, the two-boxing gene is like the CGTA gene, the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:

As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information.
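For readers who want the expected-value arithmetic behind the one-box/two-box disagreement spelled out, here is a toy calculation (the conditional probabilities are made up for illustration; the study above only says "most" two-boxers have the gene):

```python
# Toy expected-value comparison for the genetic Newcomb problem above.
P_GENE_GIVEN_TWOBOX = 0.9   # assumed for illustration
P_GENE_GIVEN_ONEBOX = 0.1   # assumed for illustration
BOX_A, BOX_B = 1_000, 1_000_000

# EDT conditions on the action taken:
edt_onebox = (1 - P_GENE_GIVEN_ONEBOX) * BOX_B
edt_twobox = (1 - P_GENE_GIVEN_TWOBOX) * BOX_B + BOX_A
print("EDT:", edt_onebox, "vs", edt_twobox)   # one-boxing looks better

# CDT holds the gene (and hence box B's contents) fixed at some prior p,
# since the choice cannot cause the gene; two-boxing then dominates for any p:
p = 0.5  # assumed prior on having the gene
cdt_onebox = (1 - p) * BOX_B
cdt_twobox = (1 - p) * BOX_B + BOX_A
print("CDT:", cdt_onebox, "vs", cdt_twobox)   # two-boxing always gains BOX_A
```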

Open Thread, Jun. 29 - Jul. 5, 2015

2 Gondolinian 29 June 2015 12:14AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Is this evidence for the Simulation hypothesis?

1 Eitan_Zohar 28 June 2015 11:45PM

I haven't come across this particular argument before, so I hope I'm not just rehashing a well-known problem.

"The universe displays some very strong signs that it is a simulation.

As has been mentioned in some other answers, one way to efficiently achieve a high fidelity simulation is to design it in such a way that you only need to compute as much detail as is needed. If someone takes a cursory glance at something you should only compute its rough details and only when someone looks at it closely, with a microscope say, do you need to fill in the details.

This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε, the implementor doesn't want to have to compute x to arbitrary precision. They'd like to only have to compute x to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should only require computing x to a limited degree of accuracy.

Let's spell this out. Write y as a function of x, y = f(x). We want that for all ε there is a δ such that for all x' with x-δ < x' < x+δ, |f(x')-f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?

It's the standard textbook definition of a continuous function. We humans invented the notion of continuity because it was a ubiquitous property of functions in the physical world. But it's precisely the property you need to implement a simulation with demand-driven level of detail. All of our fundamental physics is based on equations that evolve continuously over time and so are optimised for demand-driven implementation.

One way of looking at this is that if y=f(x), then if you want to compute n digits of y you only need a finite number of digits of x. This has another amazing advantage: if you only ever display things to a given accuracy you only ever need to compute your real numbers to a finite accuracy. Nature could have chosen to use any number of arbitrarily complicated functions on the reals. But in fact we only find functions with the special property that they need only be computed to finite precision. This is precisely what a smart programmer would have implemented.

(This also helps motivate the use of real numbers. The basic operations on real numbers such as addition and multiplication are continuous and require only finite precision in their arguments to compute their values to finite precision. So real numbers give a really neat way to allow inhabitants to find ever more detail within a simulation without putting an undue burden on its implementation.)

But you can go one step further. As Gregory Benford says in Timescape: "nature seemed to like equations stated in covariant differential forms". Our fundamental physical quantities aren't just continuous, they're differentiable. Differentiability means that if y=f(x) then once you zoom in closely enough, y depends linearly on x. This means that one more digit of y requires precisely one more digit of x. In other words our hypothetical programmer has arranged things so that after some initial finite length segment they can know in advance exactly how much data they are going to need.

After all that, I don't see how we can know we're not in a simulation. Nature seems cleverly designed to make a demand-driven simulation of it as efficient as possible."

http://www.quora.com/How-do-we-know-that-were-not-living-in-a-computer-simulation/answer/Dan-Piponi
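As a concrete illustration of the finite-precision point in the quoted answer, here is a small sketch (my own, not from the answer) showing that for a continuous function, a prefix of the output's digits is pinned down by a finite prefix of the input's digits; the particular function and precisions are arbitrary choices:

```python
from decimal import Decimal, getcontext

def f(x):
    # A continuous (indeed differentiable) function: f(x) = x^2 + exp(x)
    return x * x + x.exp()

getcontext().prec = 50                       # plenty of working precision
full_input = "3.14159265358979323846264338327950288"

# Feed in ever more digits of the input and watch the leading digits of the
# output stabilize: finitely many input digits suffice for any fixed output accuracy.
for digits in (5, 10, 20, 30):
    x = Decimal(full_input[:digits + 2])     # "3." plus `digits` fractional digits
    print(f"{digits:2d} input digits -> f(x) = {str(f(x))[:18]}...")
```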

Goal setting journal (GSJ) - 28/06/15 -> 05/07/15

4 Clarity 28 June 2015 06:24AM

Inspired by the group rationality diary and open thread, this is the inaugural weekly goal setting journal (GSJ) thread.

If you have goals worth setting that are not worth their own post (even in Discussion), then they go here.


Notes for future GSJ posters:

1. Please add the 'goal_setting_journal' tag.

2. Check if there is an active GSJ thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. GSJ Threads should be posted in Discussion, and not Main.

4. GSJ Threads should run for no longer than 1 week, but you may set goals, subgoals and tasks for as distant into the future as you please.

5. No one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it.

Praising the Constitution

-5 dragonfiremalus 27 June 2015 04:55PM

I am sure the majority of the discussion surrounding the United States' recent Supreme Court ruling will be on the topic of same-sex marriage and marriage equality. And while there is a lot of good discussion to be had, I thought I would take the opportunity to bring up another topic that seems often to be glossed over, but is yet very important to the discussion. That is the tendency in the USA to praise the United States Constitution and hold it to an often unquestioning level of devotion.

Before I really get going I would like to take a quick moment to say I do support the US Constitution and think it is important to have a very strong document that provides rights for the people and guidelines for government. The entire structure of the government is defined by the Constitution, and some form of constitution or charter is necessary for the establishment of any type of governing body. Also, in the arguments I use as examples I am not in any way saying which side I am on. I am simply using them as examples, and no attempt should be made to infer my political stances from how I treat the arguments themselves.

But now for the other side. I often hear in political discussions people, particularly Libertarians, trying to tie their position back to being based on the Constitution. The buck stops there. The Constitution says it, therefore it must be right. End of discussion. To me this often sounds eerily similar to arguing the semantics of a religious text to support your position.

A great example is in the debate over gun control laws. Without espousing one side or the other, I can fairly safely and definitively say the US Constitution does support citizens' rights to own guns. For many a Libertarian, the discussion ends there. This is not something only Libertarians are guilty of. The other side of the debate often resorts to arguing context and semantics in an attempt to make the Constitution support their side. This clearly is just a case of people trying to win the argument rather than discuss and discover the best solution.

Similarly in the topic of marriage equality, a lot of the discussion has been focused on whether or not the US supreme court ruling was, in fact, constitutional. Extending that further, the topic goes on to "does the Constitution give the federal government the right to demand that the fifty states all allow same-sex marriage?" To me, this is not the true question that needs answering. Or at least, the answer to that question does not determine a certain action or inaction on the part of the federal government. (E.g., if it was decided that it was unconstitutional, that STILL DOESN'T NECESSARILY mean that the federal government shouldn't do it. I know, shocking.) 

The Constitution was written by a bunch of men over two hundred years ago. Fallible, albeit brilliant, men. It isn't perfect. (It's damn good, else the country wouldn't have survived this long.) But it is still just a heuristic for finding the best course of action in what resembles a reasonable amount of time (insert your favorite 'inefficiency of bureaucracy' joke here). But heuristics can be wrong. So perhaps we should more often consider the question of whether or not what the Constitution says is actually the right thing. Certainly, departures from the heuristic of the Constitution should be taken with extreme caution and consideration. But we cannot discard the idea and simply argue based on the Constitution. 

At the heart of the marriage equality and the supreme court ruling debate are the ideas of freedom, equality, and states' rights. All three of those are heuristics I use that usually point to what I think are best. I usually support states' rights, and consider departure from that as negative expected utility. However, there are many times when that consideration is completely blown away by other considerations. 

The best example I can think of off the top of my head is slavery. Before the Emancipation Proclamation some states ruled slavery illegal, some legal. The question that tore our nation apart was whether or not the federal government had the right to impose abolition of slavery on all the states. I usually side with states' rights. But slavery is such an abominable practice that in that case I would have considered the constitutional rights of the federal government a non-issue when weighed against the continuation of slavery in the US for a single more day. If the Constitution had specifically supported the legality of slavery, then that would have shown it was time to burn it and try again.

Any federal proclamation infringes on states' rights, something I usually side with. And as more and more states were legalizing same-sex marriage it seemed that the states were deciding by themselves to promote marriage equality. The supreme court decision certainly speeds things up, but is it worth the infringement of state rights? To me that is the important question. Not whether or not it is Constitutional, but whether or not it is right. I am not answering that question here, just attempting to point out that the discussion of constitutionality may be the wrong question. And certainly an argument could be made for why states' rights should not be used as a heuristic at all. 

4 days left in Giving What We Can's 2015 fundraiser - £34k to go

5 RobertWiblin 27 June 2015 02:16AM

We at Giving What We Can have been running a fundraiser to raise £150,000 by the end of June, so that we can make our budget through the end of 2015. We are really keen to keep the team focussed on their job of growing the movement behind effective giving, and ensure they aren't distracted worrying about fundraising and paying the bills.

With 4 days to go, we are now short just £34,000!

We also still have £6,000 worth of matching funds available for those who haven't given more than £1,000 to GWWC before and donate £1,000-£5,000 before next Tuesday! (For those who are asking, 2 of the matchers I think wouldn't have given otherwise and 2 I would guess would have.)

If you've been one of those holding out to see if we would easily reach the goal, now's the time to pitch in to ensure Giving What We Can can continue to achieve its vision of making effective giving the societal default and move millions more to GiveWell-recommended and other high impact organisations.

So please give now or email me for our bank details: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org.

If you want to learn more, please see this more complete explanation for why we might be the highest impact place you can donate. This fundraiser has also been discussed on LessWrong before, as well as the Effective Altruist forum.

Thanks so much!


New LW Meetups: Maine, San Antonio

2 FrankAdamek 26 June 2015 02:59PM

This summary was posted to LW Main on June 19th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


[link] Essay on AI Safety

10 jsteinhardt 26 June 2015 07:42AM

I recently wrote an essay about AI risk, targeted at other academics:

Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems

I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback any of you have, especially from others who do AI / machine learning research.

Can You Give Support or Feedback for My Program to Alleviate Poverty?

9 Brendon_Wong 25 June 2015 11:18PM

Hi LessWrong,

Two years ago, when I travelled to Belize, I came up with an idea for a self-sufficient, scalable program to address poverty. I saw how many people in Belize were unemployed or getting paid very low wages, but I also saw how skilled they were, a result of English being the national language and a mandatory education system. Many Belizeans have a secondary/high school education, and the vast majority have at least a primary school education and can speak English. I thought to myself, "it's too bad I can't teleport Belizeans to the United States, because in the U.S., they would automatically be able to earn many times the minimum wage in Belize with their existing skills."

But I knew there was a way to do it: "virtual teleportation." My solution involves using computer and internet access in conjunction with training and support to connect the poor with high paying international work opportunities. My tests of virtual employment using Upwork and Amazon Mechanical Turk show that it is possible to earn at least twice the minimum wage in Belize, around $3 an hour, working with flexible hours. This solution is scalable because there is a consistent international demand for very low wage work (relatively speaking) from competent English speakers, and in other countries around the world like South Africa, many people matching that description can be found and lifted out of poverty. The solution could become self-sufficient because running a virtual employment enterprise or taking a cut of the earnings of members using virtual employment services (as bad as that sounds) can generate enough income to pay for the relatively low costs of monthly internet and the one-time costs of technology upgrades.

If you have any feedback, comments, suggestions, I would love to hear about it in the comments section. Feedback on my fundraising campaign at igg.me/at/bvep is also greatly appreciated.

If you are thinking about supporting the idea, my team and I need your help to make this possible. It may be difficult for us to reach our goal, but every contribution greatly increases the chances our fundraiser and our program will be successful, especially in the early stages. All donations are tax-deductible, and if you’d like, you can also opt-in for perks like flash drives and t-shirts. It only takes a moment to make a great difference: igg.me/at/bvep.

Thank you for reading!

GiveWell event for SF Bay Area EAs

3 Benquo 25 June 2015 08:27PM

Passing this announcement along from GiveWell:

GiveWell is holding an event at our offices in San Francisco for Bay Area residents who are interested in Effective Altruism. The evening will be similar to the research events we hold periodically for GiveWell donors: it will include presentations and discussion about GiveWell’s top charity work and the Open Philanthropy Project, as well as a light dinner and time for mingling. We’re tentatively planning to hold the event in the evening of Tuesday July 7th or Wednesday July 8th.

We hope to be able to accommodate everyone who is interested, but may have to limit places depending on demand. If you would be interested in attending, please fill out this form.
We hope to see you there!

[link] Choose your (preference) utilitarianism carefully – part 1

13 Kaj_Sotala 25 June 2015 12:06PM

Summary: Utilitarianism is often ill-defined by supporters and critics alike, preference utilitarianism even more so. I briefly examine some of the axes of utilitarianism common to all popular forms, then look at some axes unique but essential to preference utilitarianism, which seem to have received little to no discussion – at least not this side of a paywall. This way I hope to clarify future discussions between hedonistic and preference utilitarians and perhaps to clarify things for their critics too, though I’m aiming the discussion primarily at utilitarians and utilitarian-sympathisers.

http://valence-utilitarianism.com/?p=8

I like this essay particularly for the way it breaks down different forms of utilitarianism to various axes, which have rarely been discussed on LW much.

For utilitarianism in general:

Many of these axes are well discussed, pertinent to almost any form of utilitarianism, and at least reasonably well understood, and I don’t propose to discuss them here beyond highlighting their salience. These include but probably aren’t restricted to the following:

  • What is utility? (for the sake of easy reference, I’ll give each axis a simple title – for this, the utility axis); eg happiness, fulfilled preferences, beauty, information(PDF)
  • How drastically are we trying to adjust it?, aka what if any is the criterion for ‘right’ness? (sufficiency axis); eg satisficing, maximising[2], scalar
  • How do we balance tradeoffs between positive and negative utility? (weighting axis); eg, negative, negative-leaning, positive (as in fully discounting negative utility – I don’t think anyone actually holds this), ‘middling’ ie ‘normal’ (often called positive, but it would benefit from a distinct adjective)
  • What’s our primary mentality toward it? (mentality axis); eg act, rule, two-level, global
  • How do we deal with changing populations? (population axis); eg average, total
  • To what extent do we discount future utility? (discounting axis); eg zero discount, >0 discount
  • How do we pinpoint the net zero utility point? (balancing axis); eg Tännsjö’s test, experience tradeoffs
  • What is a utilon? (utilon axis) [3] – I don’t know of any examples of serious discussion on this (other than generic dismissals of the question), but it’s ultimately a question utilitarians will need to answer if they wish to formalise their system.

For preference utilitarianism in particular:

Here then, are the six most salient dependent axes of preference utilitarianism, ie those that describe what could count as utility for PUs. I’ll refer to the poles on each axis as (axis)0 and (axis)1, where any intermediate view will be (axis)X. We can then formally refer to subtypes, and also exclude them, eg ~(F0)R1PU, or ~(F0 v R1)PU etc, or represent a range, eg C0..XPU.

How do we process misinformed preferences? (information axis F)

(F0 no adjustment / F1 adjust to what it would have been had the person been fully informed / FX somewhere in between)

How do we process irrational preferences? (rationality axis R)

(R0 no adjustment / R1 adjust to what it would have been had the person been fully rational / RX somewhere in between)

How do we process malformed preferences? (malformation axes M)

(M0 Ignore them / MF1 adjust to fully informed / MFR1 adjust to fully informed and rational (shorthand for MF1R1) / MFxRx adjust to somewhere in between)

How long is a preference relevant? (duration axis D)

(D0 During its expression only / DF1 During and future / DPF1 During, future and past (shorthand for  DP1F1) / DPxFx Somewhere in between)

What constitutes a preference? (constitution axis C)

(C0 Phenomenal experience only / C1 Behaviour only / CX A combination of the two)

What resolves a preference? (resolution axis S)

(S0 Phenomenal experience only / S1 External circumstances only / SX A combination of the two)

What distinguishes these categorisations is that each category, as far as I can perceive, has no analogous axis within hedonistic utilitarianism. In other words to a hedonistic utilitarian, such axes would either be meaningless, or have only one logical answer. But any well-defined and consistent form of preference utilitarianism must sit at some point on every one of these axes.

See the article for more detailed discussion about each of the axes of preference utilitarianism, and more.

The Unfriendly Superintelligence next door

30 jacob_cannell 24 June 2015 08:14PM

Markets are powerful decentralized optimization engines - it is known.  Liberals see the free market as a kind of optimizer run amuck, a dangerous superintelligence with simple non-human values that must be checked and constrained by the government - the friendly SI.  Conservatives just reverse the narrative roles.

In some domains, where the incentive structure aligns with human values, the market works well.  In our current framework, the market works best for producing gadgets. It does not work so well for pricing intangible information, and most specifically it is broken when it comes to health.

We treat health as just another gadget problem: something to be solved by pills.  Health is really a problem of knowledge; it is a computational prediction problem.  Drugs are useful only to the extent that you can package the results of new knowledge into a pill and patent it.  If you can't patent it, you can't profit from it.

So the market is constrained to solve human health by coming up with new patentable designs for mass-producible physical objects which go into human bodies.  Why did we add that constraint - thou should solve health, but thou shalt only use pills?  (Ok technically the solutions don't have to be ingestible, but that's a detail.)

The gadget model works for gadgets because we know how gadgets work - we built them, after all.  The central problem with health is that we do not completely understand how the human body works - we did not build it.  Thus we should be using the market to figure out how the body works - completely - and arguably we should be allocating trillions of dollars towards that problem.

The market optimizer analogy runs deeper when we consider the complexity of instilling values into a market.  Lawmakers cannot program the market with goals directly, so instead they attempt to engineer desirable behavior by adding ever more layers of constraints.  Lawmakers are deontologists.

As an example, consider the regulations on drug advertising.  Big pharma is unsafe - its profit function does not encode anything like "maximize human health and happiness" (which of course is itself an oversimplification).  If left to its own devices, there are strong incentives to sell subtly addictive drugs, to create elaborate hyped false advertising campaigns, etc.  Thus all the deontological injunctions.  I take that as a strong indicator of a poor solution - a value alignment failure.

What would healthcare look like in a world where we solved the alignment problem?

To solve the alignment problem, the market's profit function must encode long term human health and happiness.  This really is a mechanism design problem - it's not something lawmakers are even remotely trained or qualified for.  A full solution is naturally beyond the scope of a little blog post, but I will sketch out the general idea.

To encode health into a market utility function, first we create financial contracts with an expected value which captures long-term health.  We can accomplish this with a long-term contract that generates positive cash flow when a human is healthy, and negative when unhealthy - basically an insurance contract.  There is naturally much complexity in getting those contracts right, so that they measure what we really want.  But assuming that is accomplished, the next step is pretty simple - we allow those contracts to trade freely on an open market.
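As a toy illustration of the kind of contract being described, here is a minimal sketch of how one might be valued (my own construction, not from the post; the premium, payout, horizon, and probabilities are arbitrary placeholders):

```python
# Toy valuation of a long-term health contract: positive cash flow each year
# the person is healthy, a payout when they are not, discounted over decades.
# Anything that raises P(healthy per year) raises the contract's market value,
# which is what gives the holder a financial stake in the person's health.
def contract_value(p_healthy_per_year, premium=1_000, payout=20_000,
                   years=30, discount=0.97):
    value = 0.0
    for t in range(years):
        expected_cash = (p_healthy_per_year * premium
                         - (1 - p_healthy_per_year) * payout)
        value += (discount ** t) * expected_cash
    return value

print(round(contract_value(0.95)))   # baseline health
print(round(contract_value(0.97)))   # after an effective preventive intervention
```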

There are some interesting failure modes and considerations that are mostly beyond scope but worth briefly mentioning.  This system probably needs to be asymmetric.  The transfers on poor health outcomes should partially go to cover medical payments, but it may be best to have a portion of the wealth simply go to nobody/everybody - just destroyed.

In this new framework, designing and patenting new drugs can still be profitable, but it is now put on even footing with preventive medicine.  More importantly, the market can now actually allocate the correct resources towards long term research.

To make all this concrete, let's use an example of a trillion dollar health question - one that our current system is especially ill-equipped to solve:

What are the long-term health effects of abnormally low levels of solar radiation?  What levels of sun exposure are ideal for human health?

This is a big important question, and you've probably read some of the hoopla and debate about vitamin D.  I'm going to soon briefly summarize a general abstract theory, one that I would bet heavily on if we lived in a more rational world where such bets were possible.

In a sane world where health is solved by a proper computational market, I could make enormous - ridiculous really - amounts of money if I happened to be an early researcher who discovered the full health effects of sunlight.  I would bet on my theory simply by buying up contracts for individuals/demographics who had the most health to gain by correcting their sunlight deficiency.  I would then publicize the theory and evidence, and perhaps even raise a heap pile of money to create a strong marketing engine to help ensure that my investments - my patients - were taking the necessary actions to correct their sunlight deficiency.  Naturally I would use complex machine learning models to guide the trading strategy.

Now, just as an example, here is the brief 'pitch' for sunlight.

If we go back and look across all of time, there is a mountain of evidence which more or less screams - proper sunlight is important to health.  Heliotherapy has a long history.

Humans, like most mammals, and most other earth organisms in general, evolved under the sun.  A priori we should expect that organisms will have some 'genetic programs' which take approximate measures of incident sunlight as an input.  The serotonin -> melatonin mediated blue-light pathway is an example of one such light detecting circuit which is useful for regulating the 24 hour circadian rhythm.

The vitamin D pathway has existed since the time of algae such as the Coccolithophore.  It is a multi-stage pathway that can measure solar radiation over a range of temporal frequencies.  It starts with synthesis of fat soluble cholecalciferol, which has a very long half life measured in months. [1] [2]

The rough pathway is:

  • Cholecalciferol (HL ~ months) becomes 
  • 25(OH)D (HL ~ 15 days) which finally becomes 
  • 1,25(OH)2 D (HL ~ 15 hours)

The main recognized role for this pathway in regards to human health - at least according to the current Wikipedia entry - is to enhance "the internal absorption of calcium, iron, magnesium, phosphate, and zinc".  Ponder that for a moment.

Interestingly, this pathway still works as a general solar clock and radiation detector for carnivores - as they can simply eat the precomputed measurement in their diet.

So, what is a long term sunlight detector useful for?  One potential application could be deciding appropriate resource allocation towards DNA repair.  Every time an organism is in the sun it is accumulating potentially catastrophic DNA damage that must be repaired when the cell next divides.  We should expect that genetic programs would allocate resources to DNA repair and various related activities dependent upon estimates of solar radiation.

I should point out - just in case it isn't obvious - that this general idea does not imply that cranking up the sunlight hormone to insane levels will lead to much better DNA/cellular repair.  There are always tradeoffs, etc.

One other obvious use of a long term sunlight detector is to regulate general strategic metabolic decisions that depend on the seasonal clock - especially for organisms living far from the equator.  During the summer when food is plentiful, the body can expect easy calories.  As winter approaches calories become scarce and frugal strategies are expected.

So first off we'd expect to see a huge range of complex effects showing up as correlations between low vit D levels and various illnesses, and specifically illnesses connected to DNA damage (such as cancer) and or BMI.  

Now it turns out that BMI itself is also strongly correlated with a huge range of health issues.  So the first key question to focus on is the relationship between vit D and BMI.  And - perhaps not surprisingly - there is pretty good evidence for such a correlation [3][4] , and this has been known for a while.

Now we get into the real debate.  Numerous vit D supplement intervention studies have now been run, and the results are controversial.  In general the vit D experts (such as my father, who started the vit D council, and publishes some related research[5]) say that the only studies that matter are those that supplement at high doses sufficient to elevate vit D levels into a 'proper' range which substitutes for sunlight, which in general requires 5000 IU day on average - depending completely on genetics and lifestyle (to the point that any one-size-fits all recommendation is probably terrible).

The mainstream basically ignores all that and funds studies at tiny RDA doses - say 400 IU or less - and then they do meta-analysis over those studies and conclude that their big meta-analysis, unsurprisingly, doesn't show a statistically significant effect.  However, these studies still show small effects.  Often the meta-analysis is corrected for BMI, which of course also tends to remove any vit D effect, to the extent that low vit D/sunlight is a cause of both weight gain and a bunch of other stuff.

So let's look at two studies for vit D and weight loss.

First, this recent 2015 study of 400 overweight Italians (sorry the actual paper doesn't appear to be available yet) tested vit D supplementation for weight loss.  The 3 groups were (0 IU/day, ~1,000 IU / day, ~3,000 IU/day).  The observed average weight loss was (1 kg, 3.8 kg, 5.4 kg). I don't know if the 0 IU group received a placebo.  Regardless, it looks promising.

On the other hand, this 2013 meta-analysis of 9 studies with 1651 adults total (mainly women) supposedly found no significant weight loss effect for vit D.  However, the studies used between 200 IU/day to 1,100 IU/day, with most between 200 to 400 IU.  Five studies used calcium, five also showed weight loss (not necessarily the same - unclear).  This does not show - at all - what the study claims in its abstract.

In general, medical researchers should not be doing statistics.  That is a job for the tech industry.

Now the vit D and sunlight issue is complex, and it will take much research to really work out all of what is going on.  The current medical system does not appear to be handling this well - why?  Because there is insufficient financial motivation.

Is Big Pharma interested in the sunlight/vit D question?  Well yes - but only to the extent that they can create a patentable analogue!  The various vit D analogue drugs developed or in development are evidence that Big Pharma is at least paying attention.  But assuming that the sunlight hypothesis is mainly correct, there is very little profit in actually fixing the real problem.

There is probably more to sunlight than just vit D and serotonin/melatonin.  Consider the interesting correlation between birth month and a number of disease conditions[6].  Perhaps there is a little grain of truth to astrology after all.

Thus concludes my little vit D pitch.  

In a more sane world I would have already bet on the general theory.  In a really sane world it would have been solved well before I would expect to make any profitable trade.  In that rational world you could actually trust health advertising, because you'd know that health advertisers are strongly financially motivated to convince you of things actually truly important for your health.

Instead of charging by the hour or per treatment, like a mechanic, doctors and healthcare companies should literally invest in their patients long-term health, and profit from improvements to long term outcomes.  The sunlight health connection is a trillion dollar question in terms of medical value, but not in terms of exploitable profits in today's reality.  In a properly constructed market, there would be enormous resources allocated to answer these questions, flowing into legions of profit motivated startups that could generate billions trading on computational health financial markets, all without selling any gadgets.

So in conclusion: the market could solve health, but only if we allowed it to, and only if we set up appropriate financial mechanisms to encode the correct value function.  This is the UFAI problem next door.


Cryonics: peace of mind vs. immortality

3 oge 24 June 2015 07:10AM

I wrote a blog post arguing that people sign up for cryo more for peace of mind than for immortality. This suggests that cryo organizations should market towards the former desire rather than the latter (you can think of it as marketing to near mode rather than far mode, in Hansonian terms).

Perhaps we've been selling cryonics wrong. I'm signed up, and I feel like the reason I should have for signing up is that cryonics buys me a small but non-zero chance at living forever. However, for years this "should" didn't actually result in me signing up. Recently, though, after being made aware of this dissonance between my words and actions, I finally signed up. I'm now very glad that I did. But it's not because I now have a shot at everlasting life.

http://specterdefied.blogspot.com/2015/06/a-cryo-membership-buys-peace-of-mind.html

 

For those signed up already, does peace-of-mind resonate as a benefit of your membership?

If you are not a cryonics member, what would make you decide that it is a good idea?

My recent thoughts on consciousness

-1 AlexLundborg 24 June 2015 12:37AM

I have lately come to seriously consider the view that the everyday notion of consciousness doesn’t refer to anything that exists out there in the world but is rather a confused (but useful) projection made by purely physical minds onto their depiction of themselves in the world. The main influences on my thinking are Dan Dennett (I assume most of you are familiar with him) and, to a lesser extent, Yudkowsky (1) and Tomasik (2). To use Dennett’s line of thought: we say that honey is sweet, that metal is solid or that a falling tree makes a sound, but the character tag of sweetness and sounds is not in the world but in the brain's internal model of it. Sweetness is not an inherent property of the glucose molecule; instead, we are wired by evolution to perceive it as sweet to reward us for calorie intake in our ancestral environment, and there is no need for non-physical sweetness-juice in the brain – no, it's coded (3). We can talk about sweetness and sound as if they were out there in the world, but in reality they are a useful fiction of sorts that we are "projecting" out into the world. The default model of our surroundings and ourselves that we use in our daily lives (the manifest image, or ’umwelt’) is puzzling to reconcile with the scientific perspective of gluons and quarks. We can use this insight to look critically at how we perceive a very familiar part of the world: ourselves. It might be that we are projecting useful fictions onto our model of ourselves as well. Our normal perception of consciousness is perhaps like the sweetness of honey, something we think exists in the world, when it is in fact a judgement about the world made (unconsciously) by the mind.

What we are pointing at with the judgement “I am conscious” is perhaps the competence that we have to access states about the world, form expectations about those states and judge their value to us, coded in by evolution. That is, under this view, equivalent to saying that sugar is made of glucose molecules, not sweetness-magic. In everyday language we can talk about sugar as sweet and consciousness as “something-to-be-like-ness“ or “having qualia”, which is useful and probably necessary for us to function, but that is a somewhat misleading projection made by our world-accessing and assessing consciousness that really exists in the world. That notion of consciousness is not subject to the Hard Problem; it may not be an easy problem to figure out how consciousness works, but it does not appear impossible to explain it scientifically as pure matter like anything else in the natural world, at least in theory. I’m pretty confident that we will solve consciousness, if by consciousness we mean the competence of a biological system to access states about the world, make judgements and form expectations. That is, however, not what most people mean when they say consciousness. Just as ”real” magic refers to the magic that isn’t real, and the magic that is real, that can be performed in the world, is not “real magic”, “real” consciousness turns out to be a useful but misleading assessment (4). We should perhaps keep the word consciousness but adjust what we mean when we use it, for diplomacy.

Having said that, I still find myself baffled by the idea that I might not be conscious in the way I’d found completely obvious before. Consciousness seems so mysterious and unanswerable, so it’s not surprising that the explanation provided by physicalists like Dennett isn’t the most satisfying. Despite that, I think it’s the best explanation I've found so far, so I’m trying to cope with it as best I can. One of the problems I’ve had with the idea is how it has required me to rethink my views on ethics. I sympathize with moral realism, the view that there exist moral facts, pointing to the strong intuition that suffering seems universally bad and well-being seems universally good. Nobody wants to suffer agonizing pain, everyone wants beatific eudaimonia, and it doesn't feel like an arbitrary choice to care about the realization of these preferences in all sentience to a high degree, instead of any other possible goal like paperclip maximization. It appeared to me to be an inescapable fact about the universe that agonizing pain really is bad (ought to be prevented), that intelligent bliss really is good (ought to be pursued), just as a label to distinguish wavelengths of light in the brain really is red, and that you can build up moral values from there. I have a strong gut feeling that the well-being of sentience matters, and that the more capacity a creature has for receiving pain and pleasure, the more weight it is given, say a gradient from beetles to posthumans that could perhaps be understood by further inquiry into the brain (5). However, if it turns out that pain and pleasure aren’t more than convincing judgements by a biological computer network in my head, no different in kind from any other computation or judgement, the sense of seriousness and urgency about suffering appears to fade away. Recently, I’ve loosened up a bit and accepted a weaker grounding for morality: I still think that my own well-being matters, and I would be inconsistent if I didn’t think the same about other collections of atoms that appear functionally similar to ’me’, who also claim, or appear, to care about their well-being. I can’t answer why I should care about my own well-being, though; I just have to. Speaking of 'me': personal identity also looks very different (nonexistent?) under physicalism than in the everyday manifest image (6).

Another difficulty I confront is why, e.g., colors and sounds look and sound the way they do, or why they have any quality at all, under this explanation. Where do they come from if they’re only labels my brain uses to distinguish inputs from the senses? Where does the yellowness of yellow come from? Maybe it’s not a sensible question, but only the murmuring of a confused primate. Then again, where does anything come from? If we can learn to shut up our bafflement about consciousness and sensibly reduce it down to physics – fair enough, but where does physics come from? That mystery remains, and will possibly always be out of reach, at least until advanced superintelligent philosophers come along. For now, understanding how a physical computational system represents the world and creates judgements and expectations from perception presents enough of a challenge. It seems to be a good starting point to explore anyway (7).


I did not really put forth any particularly new ideas here; this is just some of my thoughts and repetitions of what I have read and heard others say, so I'm not sure if this post adds any value. My hope is that someone will at least find some of my references useful, and that it can provide a starting point for discussion. Take into account that this is my first post here; I am very grateful to receive input and criticism! :-)

  1. Check out Eliezer's hilarious tear down of philosophical zombies if you haven't already
  2. http://reducing-suffering.org/hard-problem-consciousness/
  3. [Video] TED talk by Dan Dennett http://www.ted.com/talks/dan_dennett_cute_sexy_sweet_funny
  4. http://ase.tufts.edu/cogstud/dennett/papers/explainingmagic.pdf
  5. Reading “The Moral Landscape” by Sam Harris increased my confidence in moral realism. Whether moral realism is true or false can obviously have implications for approaches to the value learning problem in AI alignment, and for the factual accuracy of the orthogonality thesis
  6. http://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf
  7. For anyone interested in getting a grasp of this scientific challenge I strongly recommend the book “A User’s Guide to Thought and Meaning” by Ray Jackendoff.



Edit: made some minor changes and corrections. Edit 2: made additional changes in the first paragraph for increased readability.

 


Is Greed Stupid?

-8 adamzerner 23 June 2015 08:38PM

I just finished reading a fantastic Wait But Why post: How Tesla Will Change The World. One of the things it notes is that people in the auto and oil industries are trying to delay the introduction of electric vehicles (EVs) so they can make more money.

The post also explains how important it is that we become less reliant on oil.

  1. Because we're going to run out relatively soon.
  2. Because it's causing global warming.
So, from the perspective of these moneybag guys, here is how I see the cost-benefit of delaying the introduction of EVs:
  • Make some more money, which gives them and their families a marginally more comfortable life.
  • Not get a sense of purpose out of their careers.
  • Probably feel some sort of guilt about what they do.
  • Avoid the short-term discomfort of changing jobs/careers.
This probably makes my opinions pretty clear:
  • Because of diminishing marginal utility, I doubt that the extra money is making them much happier. I'm sure they're pretty well off to begin with. It could be the case that they're so used to their lifestyle that they really do need the extra money to be happy, but I doubt it.
  • Autonomy, mastery and purpose are three of the most important things to get out of your career. There seems to be a huge opportunity cost to not working somewhere that provides you with a sense of purpose.
  • To continue that thought, I'm sure they feel some sort of guilt for what they're doing. Or maybe not. But if they are, that seems like a relatively large cost.
  • I understand that there's probably a decent amount of social pressure on them to conform. I'm sure that they surround themselves with people who are pro-oil and anti-electric. I'm sure that their companies put pressure on them to perform. I'm sure that they have families and all of that, and starting something new might be difficult. But these don't seem to be large enough costs to make their choices worthwhile. A big reason I get this impression is that they are so short-term.
I've been talking specifically about those in the auto and oil industries, but the same logic seems to apply to other greedy people (e.g. in finance). I get the impression that greed is stupid - that it doesn't make you happy, and that it isn't instrumentally rational. But I'd like to get the opinions of others.

A map: Typology of human extinction risks

8 turchin 23 June 2015 05:23PM

In 2008 I was working on a Russian-language book, “Structure of the Global Catastrophe”, and I brought it to one of our friends for review. He was the geologist Aranovich, an old friend of my late mother's husband.

We started to discuss Stevenson's probe — a hypothetical vehicle which could reach the earth's core by melting its way through the mantle, taking scientific instruments with it. It would take the form of a large drop of molten iron – at least 60 000 tons – theoretically feasible, but practically impossible.

Milan Cirkovic wrote an article arguing against this proposal, in which he fairly concluded that such a probe would leave a molten channel of debris behind it, and that the high pressure inside the earth's core could push this material upwards. A catastrophic degassing of the earth's core could ensue, acting like a giant volcanic eruption, completely changing the atmospheric composition and killing all life on Earth. 

Our friend told me that in his institute they had created an upgraded version of such a probe, which would be simpler and cheaper, and which could drill downwards at a speed of 1000 km per month. This probe would be a special nuclear reactor, which uses its energy to melt its way through the mantle. (Something similar was suggested in the movie “The China Syndrome” about a possible accident at a nuclear power station – so I don’t think that publishing this information would endanger humanity.) The details of the reactor-probe were kept secret, but there was no money available for practical realisation of the project. I suggested that it would be wise not to create such a probe. If it were created it could become the cheapest and most effective doomsday weapon, useful for worldwide blackmail in the reasoning style of Herman Kahn. 

But in this story the most surprising thing for me was not a new way to kill mankind, but the ease with which I discovered its details. If your nearest friends from a circle not connected with x-risks research know of a new way of destroying humanity (while not fully recognising it as such), how many more such ways are known to scientists from other areas of expertise!

I like to create exhaustive lists, and I could not stop myself from creating a list of human extinction risks. Soon I reached around 100 items, although not all of them are really dangerous. I decided to convert them into something like a periodic table — i.e. to sort them by several parameters — in order to help predict new risks. 

For this map I chose two main variables: the basic mechanism of risk and the historical epoch during which it could happen. Any map should also be based on some kind of future model, and I chose Kurzweil’s model of exponential technological growth, which leads to the creation of super technologies in the middle of the 21st century. Risks are also graded according to their probability: main, possible and hypothetical. I plan to attach to each risk a wiki page with its explanation. 

I would like to know which risks are missing from this map. If your ideas are too dangerous to openly publish them, PM me. If you think that any mention of your idea will raise the chances of human extinction, just mention its existence without the details. 

I think that a map of x-risks is necessary for their prevention. I offered prizes for improving the previous map, which illustrates possible methods of preventing x-risks, and that really helped me to improve it. But I am not offering prizes for improving this map, as it may encourage people to be too creative in thinking about new risks.

Pdf is here: http://immortality-roadmap.com/typriskeng.pdf

 

The great quote of rationality a la Socrates (or Plato, or Aristotle)

1 Bound_up 23 June 2015 03:55PM

Help a brother out?

 

There's a great quote by one of The Big 3 Greek Philosophers (EDIT: Reference to Cicero removed) which I can paraphrase from memory as:

 

"I consider it rather better for myself to be proven wrong than to prove someone else wrong, just as I'm better off being cured of a disease than curing someone of one."

 

I can't find the quote, or figure out which of the Three it is from.

 

Anybody know? Or know where to look? I've already tried various Google search techniques and perused the Wikiquote article on each of them.

Min/max goal factoring and belief mapping exercise

-1 Clarity 23 June 2015 05:30AM

Edit 3: Removed description of previous edits and added the following:

This thread used to contain the description of a rationality exercise.

I have removed it and plan to rewrite it better.

I will repost it here, or delete this thread and repost in the discussion.

Thank you.

Two Zendo-inspired games

17 StephenBarnes 22 June 2015 03:47PM

LW has often discussed the inductive logic game Zendo, as a possible way of training rationality. But I couldn't find any computer implementations of Zendo online.

So I built two (fairly similar) games inspired by Zendo; they generate rules and play as sensei. The code is on GitHub, along with some more explanation. To run the games you'll need to install Python 3, and Scikit-Learn for the second game; see the readme.
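
To give a flavor of what "generate rules and play as sensei" means, here is a minimal sketch in Python (my own toy illustration, not the code in the GitHub repo; the koan format and rule classes are made up, and input validation is omitted). The real games use richer rule classes, and the second one uses Scikit-Learn, but the basic loop is the same: the program secretly picks a rule, and the player probes it with koans.

```python
# A minimal sketch of a Zendo-style "sensei" (a toy illustration, NOT the author's
# GitHub code).  The program secretly picks a rule over very simple koans and then
# classifies whatever koans the player types in.
import random

COLORS = ["red", "green", "blue"]
SIZES = ["small", "medium", "large"]


def random_rule():
    """Pick a hidden rule: 'contains a piece of a given color' or '... of a given size'."""
    if random.random() < 0.5:
        target = random.choice(COLORS)
        return f"contains a {target} piece", lambda koan: any(color == target for color, _ in koan)
    target = random.choice(SIZES)
    return f"contains a {target} piece", lambda koan: any(size == target for _, size in koan)


def play():
    description, has_buddha_nature = random_rule()
    print("I have chosen a secret rule.  Enter koans like 'red small, blue large'.")
    print("Enter a blank line to give up and see the rule.")
    while True:
        line = input("koan> ").strip()
        if not line:
            print(f"The rule was: {description}")
            return
        koan = [tuple(piece.split()) for piece in line.split(",")]
        print("Has the Buddha nature!" if has_buddha_nature(koan) else "Does not have the Buddha nature.")


if __name__ == "__main__":
    play()
```

Run it in a terminal and type koans such as "red small, blue large"; an empty line reveals the hidden rule.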

All bugfixes and improvements are welcome. For instance, more rule classes or features would improve the game and be pretty easy to code. Also, if anyone has a website and wants to host this playable online (with CGI, say), that would be awesome.

Seeking geeks interested in bioinformatics

17 bokov 22 June 2015 01:44PM

I work on a small but feisty research team whose focus is biomedical informatics, i.e. mining biomedical data - especially anonymized hospital records pooled over multiple healthcare networks. My personal interest is ultimately life-extension, and my colleagues are warming up to the idea as well. But the short-term goal that will be useful to many different research areas is building infrastructure to massively accelerate hypothesis testing on, and modelling of, retrospective human data.

 

We have a job posting here (permanent, non-faculty, full-time, benefits):

https://www.uthscsajobs.com/postings/2719

 

If you can program, want to work in an academic research setting, and can relocate to San Antonio, TX, I invite you to apply. Thanks.

 

Note: The first step of the recruitment process will be a coding challenge, which will include an arithmetical or string-manipulation problem to solve in real-time using a language and developer tools of your choice.

Group rationality diary for June 22nd - July 11th 2015

5 Clarity 22 June 2015 11:29AM

This is the public group rationality diary for June 22nd - July 11th, 2015. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit

  • Obtained new evidence that made you change your mind about some belief

  • Decided to behave in a different way in some set of situations

  • Optimized some part of a common routine or cached behavior

  • Consciously changed your emotions or affect with respect to something

  • Consciously pursued new valuable information about something that could make a big difference in your life

  • Learned something new about your beliefs, behavior, or life that surprised you

  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Archive of previous rationality diaries

Note to future posters: no one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it. It should run for about two weeks, finish on a Saturday, and have the 'group_rationality_diary' tag.

Open Thread, Jun. 22 - Jun. 28, 2015

6 Gondolinian 22 June 2015 12:01AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

High energy ethics and general moral relativity

7 maxikov 21 June 2015 08:34PM

Utilitarianism sometimes supports weird things: killing lone backpackers for their organs, sacrificing all the world's happiness to one utility monster, creating zillions of humans living at near-subsistence level to maximize total utility, or killing all but a bunch of them to maximize average utility. Also, it supports gay rights, and has been supporting them since 1785, when saying that there's nothing wrong with having gay sex was pretty much in the same category as saying that there's nothing wrong with killing backpackers. This makes one wonder: if, despite all the disgust towards them a few centuries ago, gay rights have been inside humanity's coherent extrapolated volition all along, then perhaps our descendants will eventually come to the conclusion that killing the backpacker was the right choice all along, and only those bullet-biting extremists of our time were getting it right. As a matter of fact, as a friend of mine pointed out, you don't even need to fast forward a few centuries - there are or were ethical systems actually in use in some cultures (e.g. bushido in pre-Meiji-restoration Japan) that are obsessed with honor and survivor's guilt. They would approve of killing the backpacker, or of letting them kill themselves - this being an honorable death, and living while letting five other people die being dishonorable - on non-utilitarian grounds, and actually alieve that this is the right choice. Perhaps they were right all along, and Western civilization bulldozed through them, effectively destroying such cultures, not because of superior (non-utilitarian) ethics but for whatever other reasons things happen in history. In this case there's no need to try to fix utilitarianism so that it stops suggesting killing backpackers, because it's not broken - we are - and our descendants will figure that out. We've seen this in physics, when an elegant low-Kolmogorov-complexity model predicted that weird things happen on a subatomic level, and we built huge particle accelerators just to confirm: yep, that's exactly what happens, in spite of all your intuitions. Perhaps smashing utilitarianism with high energy problems only breaks our intuitions, while utilitarianism is just fine.

But let's talk about relativity. In 1916 Karl Schwarzschild solved the newly discovered Einstein field equations and thus predicted black holes. It was thought of as a mere curiosity, and perhaps GIGO, at the time, until in the 1960s people realized that yes, contra all intuitions, this is in fact a thing. But here's the thing: they were actually first predicted by John Michell in 1783. You can easily check it: if you substitute the speed of light into the classical formula for escape velocity, you'll get the Schwarzschild radius. Michell actually knew the radius and mass of the Sun, as well as the gravitational constant, precisely enough to get the order of magnitude and the first digit right when providing an example of such an object. If we had somehow never discovered general relativity, but managed to build good enough telescopes to observe the stars orbiting the emptiness that we now call Sagittarius A*, it would be very tempting to say: "See? We predicted this centuries ago, and however crazy it seemed, we now know it's true. That's what happens when you stick to the robust theories, shut up, and calculate - you stay centuries ahead of the curve."
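
As a concrete check of the claim in the paragraph above, here is a small Python calculation (my own arithmetic with standard textbook constants, nothing taken from the post). Setting the Newtonian escape velocity sqrt(2GM/r) equal to c and solving for r gives r = 2GM/c^2, which is the Schwarzschild radius; and since, at fixed density, mass grows as the cube of the radius, the escape velocity grows linearly with radius, which is what lets Michell specify his dark star as a scaled-up Sun.

```python
# Quick numeric check (standard constants, not numbers taken from the post).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.957e8    # solar radius, m

# Newtonian escape velocity: v = sqrt(2*G*M/r).  Setting v = c and solving for r
# gives r = 2*G*M/c**2, which is exactly the Schwarzschild radius.
r_dark = 2 * G * M_sun / c**2
print(f"Radius at which v_esc = c for one solar mass: {r_dark / 1000:.1f} km")  # ~3.0 km

# Michell's example: at fixed density, M grows as R**3, so v_esc = sqrt(2*G*M/R)
# grows linearly with R.  The Sun's surface escape velocity is ~618 km/s, so the
# radius must be larger by roughly c / (618 km/s) for light not to escape.
v_esc_sun = math.sqrt(2 * G * M_sun / R_sun)
print(f"Sun's surface escape velocity: {v_esc_sun / 1000:.0f} km/s")
print(f"Required radius in solar radii (at solar density): {c / v_esc_sun:.0f}")  # just under 500
```

The last line prints a factor just under 500, which is the figure Michell used for his example.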

We now know that Newtonian mechanics isn't true, although it's close to the truth when you plug in non-astronomical numbers (and even some astronomical ones). A star 500 times the size of the Sun and with the same density, however, is very much astronomical. It is sheer coincidence that in this exact formula the relativistic terms work out in exactly the way that gives the same solution for the escape velocity as classical mechanics does. It would have been enough for Michell to imagine that his dark star rotates - a thing that Newtonian mechanics says doesn't matter, although it does - to change the category of this prediction from "miraculously correct" to "expectedly incorrect". This doesn't mean that Newtonian mechanics wasn't a breakthrough, better than any single theory existing at the time. But it does mean that it would have been premature for people in the pre-relativity era to invest in building a starship designed to go ten times the speed of light, even if they could - although that's where "shut up and calculate" could lead them.

And that's where I think we are with utilitarianism. It's very good. It's more or less reliably better than anything else. And it has managed to make ethical predictions so far-fetched (funnily enough, about as far-fetched as the prediction of dark stars) that it's tempting to conclude that the only reason it keeps making crazy predictions is that we haven't yet realized they're not crazy. But we live in the world where Sagittarius A* has been discovered, and general relativity hasn't. The actual 42-ish ethical system will probably converge to utilitarianism when you plug in non-extreme numbers (small numbers of people, non-permanent risks and gains, non-taboo topics). But just because it converged to utilitarianism on one taboo (at the time) topic, and utilitarianism thereby stayed centuries ahead of the moral curve, doesn't mean it will do the same for others.

Human factors research seems very relevant to rationality

8 casebash 21 June 2015 12:55PM

"The science of “human factors” now permeates the aviation industry. It includes a sophisticated understanding of the kinds of mistakes that even experts make under stress. So when Martin Bromiley read the Harmer report, an incomprehensible event suddenly made sense to him. “I thought, this is classic human factors stuff. Fixation error, time perception, hierarchy.”

It’s a miracle that only ten people were killed after Flight 173 crashed into an area of woodland in suburban Portland; but the crash needn’t have happened at all. Had the captain attempted to land, the plane would have touched down safely: the subsequent investigation found that the landing gear had been down the whole time. But the captain and officers of Flight 173 became so engrossed in one puzzle that they became blind to the more urgent problem: fuel shortage. This is called “fixation error”. In a crisis, the brain’s perceptual field narrows and shortens. We become seized by a tremendous compulsion to fix on the problem we think we can solve, and quickly lose awareness of almost everything else. It’s an affliction to which even the most skilled and experienced professionals are prone..."

I don't believe that I've heard fixation error or time perception mentioned on Less Wrong. The field of human factors may be something worth looking into more.

 

Research questions

-2 Clarity 20 June 2015 11:50PM

Sometimes I have hypotheses/research questions based on my personal experience which I'd love to test out. For various reasons, I can't gather evidence on them. Feel free to poke around at them or add your own for others to explore.

1.

Field: Addiction psychology, Clinical psychology

Applications: Therapy, self-help

H1: Is experiential avoidance a necessary risk factor for mental illnesses?

H2: Is expressive suppression a necessary risk factor for mental illnesses, particularly depression?

H3: Could cognitive re-appraisal or reframing of supernormal stimuli such as pornography mitigate the undesired effects?

H4: Does dogmatic avoidance of pornography approximate experiential avoidance or avoidance coping, which is pathological itself?

H5: Are generalised counterphobic attitudes, associated with codependent behaviour, a predictor of long term engagement with the ''pick-up community'' and/or clinginess?

 

2.

Field: Positive psychology (given my last post, may not be applicable)

Application: Wellness

H1: Gratitude is contingent on associative thinking about counterfactuals.

 

3.

Field: Performance/sport psychology,  positive psychology and creative industries

Application: Athletics, wellness, work, art

H1: That the processing fluency theory of aesthetic pleasure can explain mental phenomena like flow states (and be generalised to contribute towards a normative theory of positive functioning).

 

4.

Field: Mental health, experimental philosophy, set-theory, mnemonics

Application: Therapy, self-help

H1: Mindfulness and/or psychiatric insight predicts belief in determinism.

H2: Conscious mental representations of mental categories and objects don’t need to comply with ‘paradox-free’ set-theoretic specifications

H3: Organising mental categories with ‘paradox-free’ set-theoretic specifications predicts decreased mental flexibility

H4: Exposure therapy (ET/ERP) and behavioural experiments (CBT) share the same underlying cognitive process

 

5.

Field: Human computer interaction

Application: Human performance technologies

H1: Statistical literacy and computer science literacy causes or predicts improvements to general intelligence

H2. Machine learning literacy predicts improvements in domain general pattern recognition and problem solving

H3. Future wearable technologies will have software that recognises and reports on predicted trajectories/changes in environmental stimuli

 

Happiness interventions

-4 Clarity 20 June 2015 11:39AM

 

I found a website called Happier Human. It's about how to become and stay happier. I've trawled through it. Here are the best posts in my opinion:

 

[Meditate]. Don't [worry/overthink/fantasise/compare]. [Disregard desire]. [Motivate]. [Exercise gratitude]. [Don’t have kids].

[Buy many small gifts]. [Trade some happiness for productivity]. [Set] [happiness goals]

 

If you've found any other happiness interventions on any website, please share them.

 

New LW Meetup: São Paulo

5 FrankAdamek 19 June 2015 02:48PM

This summary was posted to LW Main on June 12th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

[Link] Self-Representation in Girard’s System U

2 Gunnar_Zarncke 18 June 2015 11:22PM

Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg:

In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?
We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.
Emphasis mine. That seems to be a powerful calculus for writing self-optimizing AI programs in...

See also the lambda-the-ultimate comment thread about it.

In praise of gullibility?

21 ahbwramc 18 June 2015 04:52AM

I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable.

He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).

This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say I don't know when asked for my opinion.

[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]

All of which is to say: I think Scott is basically right here, in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much more pithily) as:

"Don't be gullible."

Huh. Sounds a lot more obvious that way.

Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?

Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.

There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.

Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.

This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?

That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?

I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.

I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid scams or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse I think.

Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.

(Of course, on the other hand it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it.)

Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?

Michigan Meetup Feedback and Planning

7 Zubon 18 June 2015 02:06AM

Our meetup last weekend was at the downtown Ann Arbor Public Library. There were several comments, requests, and discussion items. This discussion topic goes out to attendees, people who might have wanted to attend but didn't, and members of other meetup groups who have suggestions.

1. Several people mentioned having trouble commenting here on the Less Wrong forums. Some functions are restricted by karma, and if you cannot comment, you cannot accumulate karma.

  • Have you verified your e-mail address? This is a common stumbling point.
  • Please try to comment on this post. Restrictions on comments are (moderate certainty) looser than comments on starting posts.
  • If that does not work, please try to comment on a comment on this post. I will add one specifically for this purpose. Restrictions on comments on comments may be (low certainty) looser than starting new comment threads.
  • If someone has already troubleshot new users' problems with commenting, please link.

2. Some people felt intimidated about attending. Prominent community members include programmers, physicists, psychiatrists, philosophy professors, Ph.D.s, and other impressive folks who do not start with P like fanfiction writers. Will I be laughed out of the room if I have not read all the Sequences?

No. Not only is there no minimum requirement to attend, as a group, we are *very excited* about explaining things to people. Our writing can be informationally dense, but our habit of linking to long essays is (often) meant to provide context, not to say, "You must read all the dependencies before you are allowed to talk."

And frankly, we are not that intimidating. Being really impressive makes it easy to become prominent, which via availability bias makes us all look impressive, but our average is way lower than that. And the really impressive people will welcome you to the discussion.

So how can we express this in meetup announcements? I promised to draft a phrasing. Please critique and edit in comments.

Everyone is welcome. There is no minimum in terms of age, education, or reading history. There is no minimum contribution to the community nor requirement to speak. You need not be this tall to ride. If you can read this and are interested in the meetup, we want you to come to the meetup.

3. As part of signalling "be comfortable, you are welcome here," I bought some stim toys from Stimtastic and put them out for whoever might need them. They seemed popular. Comforting, distracting, how did that go for folks? They seemed good for some folks who wanted to do something with their hands, but I was worried that we had a bit much "play" at some points.

Your recommendations on accommodating access needs are welcome. (But I'm not buying chewable stim toys to share; you get to bring your own on those.)

4. The location was sub-optimal. It is a fine meeting space, but the library is under construction, has poor parking options, and does not allow food or drink. Attendees requested somewhere more comfortable, with snacking options. Our previous meeting was at a restaurant, which offers much of that but has more background noise and seemed less socially optimal in terms of coordinating discussion. Prior to that, Michigan meetups had been at Yvain's home.

We moved to Ann Arbor from Livonia because (1) Yvain had been hosting and moved to Ann Arbor, (2) half the Livonia attendees seemed to be Ann Arbor-area folks, and (3) I knew the library had a free meeting room.

Recommendations and volunteers for a meeting site in the area are welcome. I'm in Lansing and not well set up for a group of our size.

5. We had 17 people, although not all at once. It was suggested that we break up into two or more groups for part of the discussion. This is probably a good idea, and it would give more people a chance to participate.

6. Many groups have pre-defined topics or projects. No one leaped at that idea, but we can discuss on here.

7. Rationalist game night was another suggestion. I like it. Again, volunteers for hosts are welcome. Many public locations like restaurants are problematic for game nights.

Rationality Reading Group: Part C: Noticing Confusion

11 Gram_Stone 18 June 2015 01:01AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This week we discuss Part C: Noticing Confusion (pp. 81-114). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

C. Noticing Confusion

20. Focus Your Uncertainty - If you are paid for post-hoc analysis, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty. But what if you don't know the outcome yet, and you need to have an explanation ready in 100 minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty.

21. What Is Evidence? - Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It also has to be an event that is more likely if reality is one way, than if reality is another. If a belief is not formed this way, it cannot be trusted.

22. Scientific Evidence, Legal Evidence, Rational Evidence - For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.

23. How Much Evidence Does It Take? - If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong.

24. Einstein's Arrogance - Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with "I would feel sorry for [the experimenter]. The theory is correct." While this may sound like arrogance, Einstein doesn't look nearly as bad from a Bayesian perspective. In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.

25. Occam's Razor - To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, one will be much easier to accomplish. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor.

26. Your Strength as a Rationalist - A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation. Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

27. Absence of Evidence Is Evidence of Absence - Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. According to the probability calculus, if P(H|E) > P(H) (observing E would be evidence for hypothesis H), then P(H|~E) < P(H) (absence of E is evidence against H). The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence.

28. Conservation of Expected Evidence - If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out. If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory. (A toy numeric illustration of this item and item 27 appears after the list below.)

29. Hindsight Devalues Science - Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough.
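
Since items 27 and 28 rest on the same probability identity, here is a tiny numeric illustration in Python (the prior and likelihoods below are toy numbers of my own choosing, not anything from the sequence):

```python
# Toy numbers (not from the post) illustrating items 27 and 28 above.
p_H = 0.3               # prior probability of hypothesis H
p_E_given_H = 0.8       # probability of observing E if H is true
p_E_given_not_H = 0.2   # probability of observing E if H is false

p_E = p_E_given_H * p_H + p_E_given_not_H * (1 - p_H)
p_H_given_E = p_E_given_H * p_H / p_E                    # posterior if E is observed
p_H_given_not_E = (1 - p_E_given_H) * p_H / (1 - p_E)    # posterior if E is absent

# Item 27: since E is evidence for H (posterior rises above 0.3), the absence of E
# must be evidence against H (posterior falls below 0.3).
print(round(p_H_given_E, 3))      # 0.632
print(round(p_H_given_not_E, 3))  # 0.097

# Item 28: the probability-weighted average of the two possible posteriors equals
# the prior -- you cannot expect evidence to confirm your theory on average.
expected_posterior = p_H_given_E * p_E + p_H_given_not_E * (1 - p_E)
print(round(expected_posterior, 3))  # 0.3
```

Any choice of prior and likelihoods gives the same pattern: whichever outcome would raise your confidence, the other outcome must lower it by just enough that the average lands back on the prior.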



This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part D: Mysterious Answers (pp. 117-191). The discussion will go live on Wednesday, 1 July 2015 at or around 6 p.m. PDT, right here on the discussion forum of LessWrong.

New Meetup in New Hampshire

6 NancyLebovitz 17 June 2015 08:30PM

The inaugural New Hampshire Less Wrong meet-up will take place the week of June 29-July 4. I've created a Doodle poll to find out the best date for likely participants. If you are interested in attending, please fill out the poll here: http://doodle.com/4ypehfkvsm7cvf76

The first meeting will be in Manchester, but I'm open to rotating locations throughout NH in the future, especially if people want to host meetings in their homes.

I hope to coordinate crossover meetings with Boston LW, e.g. field trips to Sundays at the Citadel.

*****

I've posted this for Elizabeth Edwards-Appell-- she's confirmed her LW email, but still can't post, not even comments. I've notified tech, but meanwhile, if anyone can help with her posting problem, let me know.

Effective altruism and political power

2 adamzerner 17 June 2015 05:47PM

I just saw that Donald Trump is running for president. Which led me to the following thought: would any of the big names in tech have a chance at being elected president of the US? Elon Musk? Sergey Brin? Jeff Bezos? Reid Hoffman? Peter Thiel? Edit: Bill Gates?

Some follow up questions/thoughts:

  • As far as maximizing altruistic impact goes, would it be a good idea for them to become president?
  • Do these people care about maximizing altruistic impact? To what extent? If so/enough, why not do it?
  • What other "sane" people have enough reputation in the public eye to have a chance at acquiring a lot of political power? My first thought was tech people, but I'm sure there are others. Big hedge fund managers? Ray Dalio? Or maybe some famous scientists?
  • What does EA have to say about acquiring political power?

Edit: hypothetically, if one of these big-name tech people were to try to gain political power, how should they go about doing so?

Effectively Less Altruistically Wrong Codex

-3 diegocaleiro 16 June 2015 07:00PM

My post on the fact that incentive structures are eating the central place to be for rationalists has generated 140 comments, but I see no clear action on the horizon. 

I post here again to incentivize that it also generates some attempts to shake the ground a bit. Arguing and discussing are fun, and beware of things that are fun to argue. 

Is anyone actually doing anything to mitigate the problem? To solve it? To reach a stable end state in the long run where online discussions still preserve what needs to be preserved?

Intelligent commentary is valuable, and polls are interesting. Yet, at the end of the day, it is the people who show up to do something who will determine the course of everything. 

If you care about this problem, act on it. I care enough to write these two posts. 

Pattern-botching: when you forget you understand

27 malcolmocean 15 June 2015 10:58PM

It’s all too easy to let a false understanding of something replace your actual understanding. Sometimes this is an oversimplification, but it can also take the form of an overcomplication. I have an illuminating story:

Years ago, when I was young and foolish, I found myself in a particular romantic relationship that would later end for epistemic reasons, when I was slightly less young and slightly less foolish. Anyway, this particular girlfriend of mine was very into healthy eating: raw, organic, home-cooked, etc. During her visits my diet would change substantially for a few days. At one point, we got in a tiny fight about something, and in a not-actually-desperate chance to placate her, I semi-jokingly offered: “I’ll go vegetarian!”

“I don’t care,” she said with a sneer.

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

 

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can be helpful for evading hindsight bias of obviousness.)

 

(Got one?)

 

Here's my take: I pattern-matched a bunch of actual preferences she had with a general "healthy-eating" cluster, and then I went and pulled out something random that felt vaguely associated. It's telling, I think, that I don't even explicitly believe that vegetarianism is healthy. But to my pattern-matcher, they go together nicely.

I'm going to call this pattern-botching.† Pattern-botching is when you pattern-match a thing "X" as following a certain model, but then implicit queries to that model return properties that aren't true about X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

†Maybe this already has a name, but I've read a lot of stuff and it feels like a distinct concept to me.

Examples of pattern-botching

So, that's pattern-botching, in a nutshell. Now, examples! We'll start with some simple ones.

Calmness and pretending to be a zen master

In my Againstness Training video, past!me tries a bunch of things to calm down. In the pursuit of "calm", I tried things like...

  • dissociating
  • trying to imitate a zen master
  • speaking really quietly and timidly

None of these are the desired state. The desired state is present, authentic, and can project well while speaking assertively.

But that would require actually being in a different state, which to my brain at the time seemed hard. So my brain constructed a pattern around the target state, and said "what's easy and looks vaguely like this?" and generated the list above. Not as a list, of course! That would be too easy. It generated each one individually as a plausible course of action, which I then tried, and which Val then called me out on.

Personality Types

I'm quite gregarious, extraverted, and generally unflappable by noise and social situations. Many people I know describe themselves as HSPs (Highly Sensitive Persons) or as very introverted, or as "not having a lot of spoons". These concepts are related—or perhaps not related, but at least correlated—but they're not the same. And even if these three terms did all mean the same thing, individual people would still vary in their needs and preferences.

Just this past week, I found myself talking with an HSP friend L, and noting that I didn't really know what her needs were. Like I knew that she was easily startled by loud noises and often found them painful, and that she found motion in her periphery distracting. But beyond that... yeah. So I told her this, in the context of a more general conversation about her HSPness, and I said that I'd like to learn more about her needs.

L responded positively, and suggested we talk about it at some point. I said, "Sure," then added, "though it would be helpful for me to know just this one thing: how would you feel about me asking you about a specific need in the middle of an interaction we're having?"

"I would love that!" she said.

"Great! Then I suspect our future interactions will go more smoothly," I responded. I realized what had happened was that I had conflated L's HSPness with... something else. I'm not exactly sure what, but a preference for indirect communication, perhaps? I have another friend, who is also sometimes short on spoons, who I model as finding that kind of question stressful because it would kind of put them on the spot.

I've only just recently been realizing this, so I suspect that I'm still doing a ton of this pattern-botching with people, that I haven't specifically noticed.

Of course, having clusters makes it easier to have heuristics about what people will do, without knowing them too well. A loose cluster is better than nothing. I think the issue is when we do know the person well, but we're still relying on this cluster-based model of them. It's telling that I was not actually surprised when L said that she would like it if I asked about her needs. On some level I kind of already knew it. But my botched pattern was making me doubt what I knew.

False aversions

CFAR teaches a technique called "Aversion Factoring", in which you try to break down the reasons why you don't do something, and then consider each reason. In some cases, the reasons are sound reasons, so you decide not to try to force yourself to do the thing. If not, then you want to make the reasons go away. There are three types of reasons, with different approaches.

One is for when you have a legitimate issue, and you have to redesign your plan to avert that issue. The second is where the thing you're averse to is real but isn't actually bad, and you can kind of ignore it, or maybe use exposure therapy to get yourself more comfortable with it. The third is... when the outcome would be an issue, but it's not actually a necessary outcome of the thing. As in, it's a fear that's vaguely associated with the thing at hand, but the thing you're afraid of isn't real.

All of these share a structural similarity with pattern-botching, but the third one in particular is a great example. The aversion is generated from a property that the thing you're averse to doesn't actually have. Unlike a miscalibrated aversion (#2 above) it's usually pretty obvious under careful inspection that the fear itself is based on a botched model of the thing you're averse to.

Taking the training wheels off of your model

One other place this structure shows up is in the difference between what something looks like when you're learning it versus what it looks like once you've learned it. Many people learn to ride a bike while actually riding a four-wheeled vehicle: training wheels. I don't think anyone makes the mistake of thinking that the ultimate bike will have training wheels, but in other contexts it's much less obvious.

The remaining three examples look at how pattern-botching shows up in learning contexts, where people implicitly forget that they're only partway there.

Rationality as a way of thinking

CFAR runs 4-day rationality workshops, which currently are evenly split between specific techniques and how to approach things in general. Let's consider what kinds of behaviours spring to mind when someone encounters a problem and asks themselves: "what would be a rational approach to this problem?"

  • someone with a really naïve model, who hasn't actually learned much about applied rationality, might pattern-match "rational" to "hyper-logical", and think "What Would Spock Do?"
  • someone who is somewhat familiar with CFAR and its instructors but who still doesn't know any rationality techniques, might complete the pattern with something that they think of as being archetypal of CFAR-folk: "What Would Anna Salamon Do?"
  • CFAR alumni, especially new ones, might pattern-match "rational" as "using these rationality techniques" and conclude that they need to "goal factor" or "use trigger-action plans"
  • someone who gets rationality would simply apply that particular structure of thinking to their problem

In the case of a bike, we see hundreds of people biking around without training wheels, and so that becomes the obvious example from which we generalize the pattern of "bike". In other learning contexts, though, most people—including, sometimes, the people at the leading edge—are still in the early learning phases, so the training wheels are the rule, not the exception.

So people start thinking that the figurative bikes are supposed to have training wheels.

Incidentally, this can also be the grounds for strawman arguments where detractors of the thing say, "Look at these bikes [with training wheels]! How are you supposed to get anywhere on them?!"

Effective Altruism

We potentially see a similar effect with topics like Effective Altruism. It's a movement that is still in its infancy, which means that nobody has it all figured out. So when trying to answer "How do I be an effective altruist?" our pattern-matchers might pull up a bunch of examples of things that EA-identified people have been commonly observed to do.

  • donating 10% of one's income to a strategically selected charity
  • going to a coding bootcamp and switching careers, in order to Earn to Give
  • starting a new organization to serve an unmet need, or to serve a need more efficiently
  • supporting the Against Malaria Foundation

...and this generated list might be helpful for various things, but be wary of thinking that it represents what Effective Altruism is. It's possible—it's almost inevitable—that we don't actually know what the most effective interventions are yet. We will potentially never actually know, but we can expect that in the future we will generally know more than at present. Which means that the current sampling of good EA behaviours likely does not actually even cluster around the ultimate set of behaviours we might expect.

Creating a new (platform for) culture

At my intentional community in Waterloo, we're building a new culture. But that's actually a by-product: our goal isn't to build this particular culture but to build a platform on which many cultures can be built. It's like how as a company you don't just want to be building the product but rather building the company itself, or "the machine that builds the product,” as Foursquare founder Dennis Crowley puts it.

What I started to notice, though, is that we started to confuse the particular, transitional culture that we have at our house with either (a) the particular target culture that we're aiming for, or (b) the more abstract range of cultures that will be constructible on our platform.

So from a training wheels perspective, we might totally eradicate words like "should". I did this! It was really helpful. But once I had removed the word from my idiolect, it became unhelpful to still be treating it as being a touchy word. Then I heard my mentor use it, and I remembered that the point of removing the word wasn't to not ever use it, but to train my brain to think without a particular structure that "should" represented.

This shows up on much larger scales too. Val from CFAR was talking about a particular kind of fierceness, "hellfire", that he sees as fundamental and important, and he noted that it seemed to be incompatible with the kind of culture my group is building. I initially agreed with him, which was kind of dissonant for my brain, but then I realized that hellfire was only incompatible with our training culture, not the entire set of cultures that could ultimately be built on our platform. That is, engaging with hellfire would potentially interfere with the learning process, but it's not ultimately proscribed by our culture platform.

Conscious cargo-culting

I think it might be helpful to repeat the definition:

Pattern-botching is when you pattern-match a thing X as following a certain model, but then implicit queries to that model return properties that aren't true of X. What makes this different from just having false beliefs is that you know the truth, but you're forgetting to use it because there's a botched model that is easier to use.

It's kind of like if you were doing a cargo-cult, except you knew how airplanes worked.

(Cross-posted from malcolmocean.com)
