
The Unfriendly Superintelligence next door

29 jacob_cannell 24 June 2015 08:14PM

Markets are powerful decentralized optimization engines - it is known.  Liberals see the free market as a kind of optimizer run amok, a dangerous superintelligence with simple non-human values that must be checked and constrained by the government - the friendly SI.  Conservatives just reverse the narrative roles.

In some domains, where the incentive structure aligns with human values, the market works well.  In our current framework, the market works best for producing gadgets.  It does not work so well for pricing intangible information, and it is most clearly broken when it comes to health.

We treat health as just another gadget problem: something to be solved by pills.  Health is really a problem of knowledge; it is a computational prediction problem.  Drugs are useful only to the extent that you can package the results of new knowledge into a pill and patent it.  If you can't patent it, you can't profit from it.

So the market is constrained to solve human health by coming up with new patentable designs for mass-producible physical objects which go into human bodies.  Why did we add that constraint - thou shalt solve health, but thou shalt use only pills?  (Ok, technically the solutions don't have to be ingestible, but that's a detail.)

The gadget model works for gadgets because we know how gadgets work - we built them, after all.  The central problem with health is that we do not completely understand how the human body works - we did not build it.  Thus we should be using the market to figure out how the body works - completely - and arguably we should be allocating trillions of dollars towards that problem.

The market optimizer analogy runs deeper when we consider the complexity of instilling values into a market.  Lawmakers cannot program the market with goals directly, so instead they attempt to engineer desirable behavior by adding layer upon layer of constraints.  Lawmakers are deontologists.

As an example, consider the regulations on drug advertising.  Big pharma is unsafe - its profit function does not encode anything like "maximize human health and happiness" (which of course is itself an oversimplification).  If left to its own devices, it has strong incentives to sell subtly addictive drugs, to run elaborately hyped, misleading advertising campaigns, and so on.  Hence all the deontological injunctions.  I take that as a strong indicator of a poor solution - a value alignment failure.

What would healthcare look like in a world where we solved the alignment problem?

To solve the alignment problem, the market's profit function must encode long-term human health and happiness.  This really is a mechanism design problem - it's not something lawmakers are even remotely trained or qualified for.  A full solution is naturally beyond the scope of a little blog post, but I will sketch out the general idea.

To encode health into a market utility function, first we create financial contracts with an expected value which captures long-term health.  We can accomplish this with a long-term contract that generates positive cash flow when a human is healthy, and negative when unhealthy - basically an insurance contract.  There is naturally much complexity in getting those contracts right, so that they measure what we really want.  But assuming that is accomplished, the next step is pretty simple - we allow those contracts to trade freely on an open market.
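To make the contract mechanism concrete, here is a minimal sketch of how a trader might price one of these contracts.  Everything here - the names, payouts, probabilities, and discounting - is a hypothetical illustration of the idea, not a worked-out contract design:

    from dataclasses import dataclass

    @dataclass
    class HealthContract:
        # Hypothetical tradable contract tied to one person's health.
        healthy_payout: float    # cash flow to the holder per healthy year
        sick_penalty: float      # cash flow lost per unhealthy year
        years: int               # remaining contract term
        discount: float = 0.97   # annual discount factor

        def expected_value(self, p_healthy_by_year):
            # Price the contract from a trader's forecast of yearly health.
            ev = 0.0
            for t, p in enumerate(p_healthy_by_year[:self.years]):
                cash = p * self.healthy_payout - (1 - p) * self.sick_penalty
                ev += (self.discount ** t) * cash
            return ev

    contract = HealthContract(healthy_payout=1_000, sick_penalty=5_000, years=3)
    print(contract.expected_value([0.95, 0.94, 0.93]))  # market's baseline forecast
    print(contract.expected_value([0.98, 0.98, 0.97]))  # forecast after an intervention

A trader who believes some cheap intervention will raise those probabilities values the contract above the market price and profits by buying it - exactly the incentive alignment this post is after.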

There are some interesting failure modes and considerations that are mostly beyond scope but worth briefly mentioning.  This system probably needs to be asymmetric.  The transfers on poor health outcomes should partially go to cover medical payments, but it may be best to have a portion of the wealth simply go to nobody/everybody - just destroyed.

In this new framework, designing and patenting new drugs can still be profitable, but it is now put on even footing with preventive medicine.  More importantly, the market can now actually allocate the correct resources towards long term research.

To make all this concrete, let's use an example of a trillion-dollar health question - one that our current system is especially ill-equipped to solve:

What are the long-term health effects of abnormally low levels of solar radiation?  What levels of sun exposure are ideal for human health?

This is a big important question, and you've probably read some of the hoopla and debate about vitamin D.  Below I briefly summarize a general abstract theory, one that I would bet heavily on if we lived in a more rational world where such bets were possible.

In a sane world where health is solved by a proper computational market, I could make enormous - ridiculous, really - amounts of money if I happened to be an early researcher who discovered the full health effects of sunlight.  I would bet on my theory simply by buying up contracts for the individuals/demographics who had the most health to gain by correcting their sunlight deficiency.  I would then publicize the theory and evidence, and perhaps even raise a pile of money to build a strong marketing engine to help ensure that my investments - my patients - were taking the necessary actions to correct their sunlight deficiency.  Naturally I would use complex machine learning models to guide the trading strategy.

Now, just as an example, here is the brief 'pitch' for sunlight.

If we go back and look across all of time, there is a mountain of evidence which more or less screams - proper sunlight is important to health.  Heliotherapy has a long history.

Humans, like most mammals and most other earth organisms in general, evolved under the sun.  A priori we should expect organisms to have some 'genetic programs' which take approximate measures of incident sunlight as an input.  The serotonin -> melatonin blue-light pathway is an example of one such light-detecting circuit, useful for regulating the 24-hour circadian rhythm.

The vitamin D pathway has existed since the time of algae such as the Coccolithophore.  It is a multi-stage pathway that can measure solar radiation over a range of temporal frequencies.  It starts with synthesis of fat-soluble cholecalciferol, which has a very long half-life, measured in months. [1] [2]

The rough pathway is:

  • Cholecalciferol (HL ~ months) becomes 
  • 25(OH)D (HL ~ 15 days), which finally becomes 
  • 1,25(OH)2D (HL ~ 15 hours)
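As a toy illustration of how a cascade of half-lives can measure sunlight over several time scales at once, here is a small simulation.  The half-lives are the ones quoted above; the three-compartment first-order model and every other parameter are my own simplifying assumptions for illustration, not physiology:

    import numpy as np

    def simulate(sun, half_lives_days=(60.0, 15.0, 15.0 / 24.0), dt=1.0):
        # Three first-order compartments in series; each stage feeds the next.
        ks = [np.log(2) / hl for hl in half_lives_days]  # decay rates per day
        x = np.zeros(3)
        out = []
        for s in sun:
            inflow = [s, ks[0] * x[0], ks[1] * x[1]]
            for i, k in enumerate(ks):
                decay = np.exp(-k * dt)
                # exact update for constant inflow over one step
                x[i] = x[i] * decay + inflow[i] * (1 - decay) / k
            out.append(x.copy())
        return np.array(out)

    days = np.arange(365)
    sun = 1.0 + 0.8 * np.sin(2 * np.pi * days / 365)  # seasonal sunlight cycle
    levels = simulate(sun)
    # Column 0 (cholecalciferol) lags the season by weeks - a slow clock;
    # column 2 (1,25(OH)2D) equilibrates within hours and tracks its precursor.

The slow compartments act like low-pass filters, which is what lets a single chemical pathway report both "how sunny has the past season been" and "how sunny was it recently".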

The main recognized role for this pathway in regards to human health - at least according to the current Wikipedia entry - is to enhance "the internal absorption of calcium, iron, magnesium, phosphate, and zinc".  Ponder that for a moment.

Interestingly, this pathway still works as a general solar clock and radiation detector for carnivores - as they can simply eat the precomputed measurement in their diet.

So, what is a long term sunlight detector useful for?  One potential application could be deciding appropriate resource allocation towards DNA repair.  Every time an organism is in the sun it is accumulating potentially catastrophic DNA damage that must be repaired when the cell next divides.  We should expect that genetic programs would allocate resources to DNA repair and various related activities dependent upon estimates of solar radiation.

I should point out - just in case it isn't obvious - that this general idea does not imply that cranking up the sunlight hormone to insane levels will lead to much better DNA/cellular repair.  There are always tradeoffs, etc.

One other obvious use of a long term sunlight detector is to regulate general strategic metabolic decisions that depend on the seasonal clock - especially for organisms living far from the equator.  During the summer when food is plentiful, the body can expect easy calories.  As winter approaches calories become scarce and frugal strategies are expected.

So first off we'd expect to see a huge range of complex effects showing up as correlations between low vit D levels and various illnesses - specifically illnesses connected to DNA damage (such as cancer) and/or BMI.

Now it turns out that BMI itself is also strongly correlated with a huge range of health issues.  So the first key question to focus on is the relationship between vit D and BMI.  And - perhaps not surprisingly - there is pretty good evidence for such a correlation [3][4] , and this has been known for a while.

Now we get into the real debate.  Numerous vit D supplement intervention studies have now been run, and the results are controversial.  In general the vit D experts (such as my father, who started the Vitamin D Council and publishes some related research[5]) say that the only studies that matter are those that supplement at doses high enough to raise vit D levels into a 'proper' range that substitutes for sunlight - in general about 5,000 IU per day on average, depending heavily on genetics and lifestyle (to the point that any one-size-fits-all recommendation is probably terrible).

The mainstream basically ignores all that and funds studies at tiny RDA doses - say 400 IU or less - then runs meta-analyses over those studies and concludes, unsurprisingly, that no statistically significant effect appears.  However, these studies still show small effects.  Often the meta-analysis is also corrected for BMI, which of course tends to remove any vit D effect, to the extent that low vit D/sunlight is a cause of both weight gain and a bunch of other stuff.
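The BMI-adjustment point is a textbook mediator problem: if low vitamin D causes weight gain, and weight gain causes illness, then "correcting" for BMI subtracts away part of the very effect you are trying to measure.  A toy simulation with made-up coefficients (purely illustrative, not estimates from any study) shows the mechanism:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    vitd = rng.normal(size=n)
    bmi = -0.5 * vitd + rng.normal(size=n)               # low vit D raises BMI
    outcome = -0.3 * vitd + 0.6 * bmi + rng.normal(size=n)

    # Unadjusted regression recovers the total effect (about -0.6):
    print(np.polyfit(vitd, outcome, 1)[0])

    # Adjusting for the mediator leaves only the direct path (about -0.3),
    # understating what fixing vitamin D would actually accomplish:
    X = np.column_stack([vitd, bmi, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print(beta[0])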

So let's look at two studies for vit D and weight loss.

First, this recent 2015 study of 400 overweight Italians (sorry, the actual paper doesn't appear to be available yet) tested vit D supplementation for weight loss.  The 3 groups were (0 IU/day, ~1,000 IU/day, ~3,000 IU/day).  The observed average weight loss was (1 kg, 3.8 kg, 5.4 kg).  I don't know if the 0 IU group received a placebo.  Regardless, it looks promising.

On the other hand, this 2013 meta-analysis of 9 studies with 1651 adults total (mainly women) supposedly found no significant weight-loss effect for vit D.  However, the studies used between 200 IU/day and 1,100 IU/day, with most between 200 and 400 IU.  Five of the studies also gave calcium, and five showed weight loss (not necessarily the same five - the paper is unclear).  This does not show - at all - what the study claims in its abstract.

In general, medical researchers should not be doing statistics.  That is a job for the tech industry.

Now the vit D and sunlight issue is complex, and it will take much research to really work out all of what is going on.  The current medical system does not appear to be handling this well - why?  Because there is insufficient financial motivation.

Is Big Pharma interested in the sunlight/vit D question?  Well yes - but only to the extent that they can create a patentable analogue!  The various vit D analogue drugs developed or in development are evidence that Big Pharma is at least paying attention.  But assuming that the sunlight hypothesis is mainly correct, there is very little profit in actually fixing the real problem.

There is probably more to sunlight than just vit D and serotonin/melatonin.  Consider the interesting correlation between birth month and a number of disease conditions[6].  Perhaps there is a little grain of truth to astrology after all.

Thus concludes my little vit D pitch.  

In a more sane world I would have already bet on the general theory.  In a really sane world it would have been solved well before I would expect to make any profitable trade.  In that rational world you could actually trust health advertising, because you'd know that health advertisers are strongly financially motivated to convince you of things actually truly important for your health.

Instead of charging by the hour or per treatment, like a mechanic, doctors and healthcare companies should literally invest in their patients long-term health, and profit from improvements to long term outcomes.  The sunlight health connection is a trillion dollar question in terms of medical value, but not in terms of exploitable profits in today's reality.  In a properly constructed market, there would be enormous resources allocated to answer these questions, flowing into legions of profit motivated startups that could generate billions trading on computational health financial markets, all without selling any gadgets.

So in conclusion: the market could solve health, but only if we allow it to, and only if we set up appropriate financial mechanisms to encode the correct value function.  This is the UFAI problem next door.


Top 9 myths about AI risk

22 Stuart_Armstrong 29 June 2015 08:41PM

Following some somewhat misleading articles quoting me, I thought I'd present the top 9 myths about the AI risk thesis:

  1. That we’re certain AI will doom us. Certainly not. It’s very hard to be certain of anything involving a technology that doesn’t exist; we’re just claiming that the probability of AI going bad isn’t low enough that we can ignore it.
  2. That humanity will survive, because we’ve always survived before. Many groups of humans haven’t survived contact with more powerful intelligent agents. In the past, those agents were other humans; but they need not be. The universe does not owe us a destiny. In the future, something will survive; it need not be us.
  3. That uncertainty means that you’re safe. If you’re claiming that AI is impossible, or that it will take countless decades, or that it’ll be safe... you’re not being uncertain, you’re being extremely specific about the future. “No AI risk” is certain; “Possible AI risk” is where we stand.
  4. That Terminator robots will be involved. Please? The threat from AI comes from its potential intelligence, not from its ability to clank around slowly with an Austrian accent.
  5. That we’re assuming the AI is too dumb to know what we’re asking it. No. A powerful AI will know what we meant to program it to do. But why should it care? And if we could figure out how to program “care about what we meant to ask”, well, then we’d have safe AI.
  6. That there’s one easy trick that can solve the whole problem. Many people have proposed that one trick. Some of them could even help (see Holden’s tool AI idea). None of them reduce the risk enough to relax – and many of the tricks contradict each other (you can’t design an AI that’s both a tool and socialising with humans!).
  7. That we want to stop AI research. We don’t. Current AI research is very far from the risky areas and abilities. And it’s risk aware AI researchers that are most likely to figure out how to make safe AI.
  8. That AIs will be more intelligent than us, hence more moral. It’s pretty clear that in humans, high intelligence is no guarantee of morality. Are you really willing to bet the whole future of humanity on the idea that AIs might be different? That among the billions of possible minds out there, there is none that is both dangerous and very intelligent?
  9. That science fiction or spiritual ideas are useful ways of understanding AI risk. Science fiction and spirituality are full of human concepts, created by humans, for humans, to communicate human ideas. They need not apply to AI at all, as these could be minds far removed from human concepts, possibly without a body, possibly with no emotions or consciousness, possibly with many new emotions and a different type of consciousness, etc... Anthropomorphising the AIs could lead us completely astray.
  10. That all lists must have a relevant tenth element. Some do, some don’t. It really depends.

 

[link] Choose your (preference) utilitarianism carefully – part 1

12 Kaj_Sotala 25 June 2015 12:06PM

Summary: Utilitarianism is often ill-defined by supporters and critics alike, preference utilitarianism even more so. I briefly examine some of the axes of utilitarianism common to all popular forms, then look at some axes unique but essential to preference utilitarianism, which seem to have received little to no discussion – at least not this side of a paywall. This way I hope to clarify future discussions between hedonistic and preference utilitarians and perhaps to clarify things for their critics too, though I’m aiming the discussion primarily at utilitarians and utilitarian-sympathisers.

http://valence-utilitarianism.com/?p=8

I like this essay particularly for the way it breaks down different forms of utilitarianism along various axes, which have rarely been discussed much on LW.

For utilitarianism in general:

Many of these axes are well discussed, pertinent to almost any form of utilitarianism, and at least reasonably well understood, and I don’t propose to discuss them here beyond highlighting their salience. These include but probably aren’t restricted to the following:

  • What is utility? (for the sake of easy reference, I’ll give each axis a simple title – for this, the utility axis); eg happiness, fulfilled preferences, beauty, information (PDF)
  • How drastically are we trying to adjust it, aka what, if any, is the criterion for ‘right’ness? (sufficiency axis); eg satisficing, maximising[2], scalar
  • How do we balance tradeoffs between positive and negative utility? (weighting axis); eg negative, negative-leaning, positive (as in fully discounting negative utility – I don’t think anyone actually holds this), ‘middling’ ie ‘normal’ (often called positive, but it would benefit from a distinct adjective)
  • What’s our primary mentality toward it? (mentality axis); eg act, rule, two-level, global
  • How do we deal with changing populations? (population axis); eg average, total
  • To what extent do we discount future utility? (discounting axis); eg zero discount, >0 discount
  • How do we pinpoint the net zero utility point? (balancing axis); eg Tännsjö’s test, experience tradeoffs
  • What is a utilon? (utilon axis) [3] – I don’t know of any examples of serious discussion on this (other than generic dismissals of the question), but it’s ultimately a question utilitarians will need to answer if they wish to formalise their system.

For preference utilitarianism in particular:

Here then, are the six most salient dependent axes of preference utilitarianism, ie those that describe what could count as utility for PUs. I’ll refer to the poles on each axis as (axis)0 and (axis)1, where any intermediate view will be (axis)X. We can then formally refer to subtypes, and also exclude them, eg ~(F0)R1PU, or ~(F0 v R1)PU etc, or represent a range, eg C0..XPU.

How do we process misinformed preferences? (information axis F)

(F0 no adjustment / F1 adjust to what it would have been had the person been fully informed / FX somewhere in between)

How do we process irrational preferences? (rationality axis R)

(R0 no adjustment / R1 adjust to what it would have been had the person been fully rational / RX somewhere in between)

How do we process malformed preferences? (malformation axes M)

(M0 Ignore them / MF1 adjust to fully informed / MFR1 adjust to fully informed and rational (shorthand for MF1R1) / MFxRx adjust to somewhere in between)

How long is a preference relevant? (duration axis D)

(D0 During its expression only / DF1 During and future / DPF1 During, future and past (shorthand for  DP1F1) / DPxFx Somewhere in between)

What constitutes a preference? (constitution axis C)

(C0 Phenomenal experience only / C1 Behaviour only / CX A combination of the two)

What resolves a preference? (resolution axis S)

(S0 Phenomenal experience only / S1 External circumstances only / SX A combination of the two)
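One way to read all this: a fully specified preference utilitarianism is a point in a six-dimensional space.  Here is a tiny sketch of that reading (the 0-to-1 encoding is my own illustration, and it flattens compound poles like MFR1 to a single number):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PUSubtype:
        # Each axis takes a value in [0, 1]: 0 and 1 are the article's
        # poles, anything strictly between is its "X".
        F: float  # information axis
        R: float  # rationality axis
        M: float  # malformation axis
        D: float  # duration axis
        C: float  # constitution axis
        S: float  # resolution axis

    # e.g. a fully idealizing PU that counts behaviour and experience
    # equally for what constitutes and resolves a preference:
    idealizing_pu = PUSubtype(F=1, R=1, M=1, D=0.5, C=0.5, S=0.5)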

What distinguishes these categorisations is that each category, as far as I can perceive, has no analogous axis within hedonistic utilitarianism. In other words to a hedonistic utilitarian, such axes would either be meaningless, or have only one logical answer. But any well-defined and consistent form of preference utilitarianism must sit at some point on every one of these axes.

See the article for more detailed discussion about each of the axes of preference utilitarianism, and more.

[link] Essay on AI Safety

10 jsteinhardt 26 June 2015 07:42AM

I recently wrote an essay about AI risk, targeted at other academics:

Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems

I think it might be interesting to some of you, so I am sharing it here. I would appreciate any feedback any of you have, especially from others who do AI / machine learning research.

Two-boxing, smoking and chewing gum in Medical Newcomb problems

9 Caspar42 29 June 2015 10:35AM

I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question related to why EDT is said not to work.

Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems like the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take boxes A and B)? Here's a causal diagram for the problem:

[Causal diagram: the two-boxing gene causes both your decision and the contents of box B.]
Since Omega does not do much other than translate your genes into money under a box, it does not seem to hurt to leave it out:

[Causal diagram: the two-boxing gene causes both your decision and the money under the box directly.]
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?)
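For concreteness, here is the expected-value comparison an evidential reasoner would run.  The conditional probabilities are made-up stand-ins for the fictional study's findings:

    # Hypothetical numbers: two-boxers usually carry the gene, one-boxers rarely do.
    p_gene_given_twobox = 0.9
    p_gene_given_onebox = 0.1
    M, K = 1_000_000, 1_000

    # Box B is full iff you lack the gene.
    ev_onebox = (1 - p_gene_given_onebox) * M        # 900,000
    ev_twobox = (1 - p_gene_given_twobox) * M + K    # 101,000
    print(ev_onebox, ev_twobox)                      # EDT one-boxes on this evidence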

Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p.67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get the additional $1K, the two-boxing gene is like the CGTA gene, and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:

[Causal diagram: the CGTA gene causes both gum chewing and throat abscesses.]
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe, the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: The intuition is that neither smoking nor chewing gum gives the agent additional information.

Can You Give Support or Feedback for My Program to Alleviate Poverty?

9 Brendon_Wong 25 June 2015 11:18PM

Hi LessWrong,

Two years ago, when I travelled to Belize, I came up with an idea for a self-sufficient, scalable program to address poverty. I saw how many people in Belize were unemployed or getting paid very low wages, but I also saw how skilled they were, a result of English being the national language and a mandatory education system. Many Belizeans have a secondary/high school education, and the vast majority have at least a primary school education and can speak English. I thought to myself, "it's too bad I can't teleport Belizeans to the United States, because in the U.S., they would automatically be able to earn many times the minimum wage in Belize with their existing skills."

But I knew there was a way to do it: "virtual teleportation." My solution involves using computer and internet access in conjunction with training and support to connect the poor with high paying international work opportunities. My tests of virtual employment using Upwork and Amazon Mechanical Turk show that it is possible to earn at least twice the minimum wage in Belize, around $3 an hour, working with flexible hours. This solution is scalable because there is a consistent international demand for very low wage work (relatively speaking) from competent English speakers, and in other countries around the world like South Africa, many people matching that description can be found and lifted out of poverty. The solution could become self-sufficient because running a virtual employment enterprise or taking a cut of the earnings of members using virtual employment services (as bad as that sounds) can generate enough income to pay for the relatively low costs of monthly internet and the one-time costs of technology upgrades.

If you have any feedback, comments, or suggestions, I would love to hear them in the comments section. Feedback on my fundraising campaign at igg.me/at/bvep is also greatly appreciated.

If you are thinking about supporting the idea, my team and I need your help to make this possible. It may be difficult for us to reach our goal, but every contribution greatly increases the chances our fundraiser and our program will be successful, especially in the early stages. All donations are tax-deductible, and if you’d like, you can also opt-in for perks like flash drives and t-shirts. It only takes a moment to make a great difference: igg.me/at/bvep.

Thank you for reading!

A map: Typology of human extinction risks

8 turchin 23 June 2015 05:23PM

In 2008 I was working on a Russian-language book, “Structure of the Global Catastrophe”, and I brought it to one of our friends for review. He was the geologist Aranovich, an old friend of my late mother's husband.

We started to discuss Stevenson's probe — a hypothetical vehicle which could reach the earth's core by melting its way through the mantle, taking scientific instruments with it. It would take the form of a large drop of molten iron – at least 60 000 tons – theoretically feasible, but practically impossible.

Milan Cirkovic wrote an article arguing against this proposal, in which he fairly concluded that such a probe would leave a molten channel of debris behind it, and that high pressure inside the earth's core could push this material upwards. A catastrophic degassing of the earth's core could ensue, acting like a giant volcanic eruption, completely changing atmospheric composition and killing all life on Earth. 

Our friend told me that his institute had created an upgraded version of such a probe, which would be simpler and cheaper, and which could drill down at a speed of 1000 km per month. This probe would be a special nuclear reactor, which uses its energy to melt through the mantle. (Something similar was suggested in the movie “The China Syndrome” about a possible accident at a nuclear power station – so I don’t think that publishing this information endangers humanity.) The details of the reactor-probe were kept secret, but there was no money available for practical realisation of the project. I suggested that it would be wise not to create such a probe: if it were created, it could become the cheapest and most effective doomsday weapon, useful for worldwide blackmail in the reasoning style of Herman Kahn. 

But in this story the most surprising thing for me was not a new way to kill mankind, but the ease with which I discovered its details. If your nearest friends from a circle not connected with x-risks research know of a new way of destroying humanity (while not fully recognising it as such), how many more such ways are known to scientists from other areas of expertise!

I like to create full exhaustive lists, and I could not stop myself from creating a list of human extinction risks. Soon I reached around 100 items, although not all of them are really dangerous. I decided to convert them into something like a periodic table — i.e. to sort them by several parameters — in order to help predict new risks. 

For this map I chose two main variables: the basic mechanism of risk and the historical epoch during which it could happen. Any map should also be based on some kind of model of the future, and I chose Kurzweil’s model of exponential technological growth, which leads to the creation of super-technologies in the middle of the 21st century. Risks are also graded according to their probabilities: main, possible and hypothetical. I plan to attach to each risk a wiki page with its explanation. 

I would like to know which risks are missing from this map. If your ideas are too dangerous to openly publish them, PM me. If you think that any mention of your idea will raise the chances of human extinction, just mention its existence without the details. 

I think that a map of x-risks is necessary for their prevention. I offered prizes for improving the previous map, which illustrates possible prevention methods for x-risks, and that really helped me to improve it. But I am not offering prizes for improving this map, as it may encourage people to be too creative in thinking up new risks.

Pdf is here: http://immortality-roadmap.com/typriskeng.pdf

 

4 days left in Giving What We Can's 2015 fundraiser - £34k to go

5 RobertWiblin 27 June 2015 02:16AM

We at Giving What We Can have been running a fundraiser to raise £150,000 by the end of June, so that we can make our budget through the end of 2015. We are really keen to keep the team focussed on their job of growing the movement behind effective giving, and ensure they aren't distracted worrying about fundraising and paying the bills.

With 4 days to go, we are now short just £34,000!

We also still have £6,000 worth of matching funds available for those who haven't given more than £1,000 to GWWC before and donate £1,000-£5,000 before next Tuesday! (For those who are asking, 2 of the matchers I think wouldn't have given otherwise and 2 I would guess would have.)

If you've been one of those holding out to see if we would easily reach the goal, now's the time to pitch in to ensure Giving What We Can can continue to achieve its vision of making effective giving the societal default and move millions more to GiveWell-recommended and other high impact organisations.

So please give now or email me for our bank details: robert [dot] wiblin [at] centreforeffectivealtruism [dot] org.

If you want to learn more, please see this more complete explanation for why we might be the highest impact place you can donate. This fundraiser has also been discussed on LessWrong before, as well as the Effective Altruist forum.

Thanks so much!


Parenting Technique: Increase Your Child’s Working Memory

3 James_Miller 29 June 2015 07:51PM

I continually train my ten-year-old son’s working memory, and urge parents of other young children to do likewise.  While I have succeeded in at least temporarily improving his working memory, I accept that this change might not be permanent and could end a few months after he stops training.  But I also believe that while his working memory is boosted so too is his learning capacity.    

I have a horrible working memory that greatly hindered my academic achievement.  I was so bad at spelling that they stopped counting it against me in school.  In technical classes I had trouble remembering what variables stood for.  My son, in contrast, has a fantastic memory.  He twice won his school’s spelling bee, and just recently I wrote twenty symbols (letters, numbers, and shapes) in rows of five.  After a few minutes he memorized the symbols and then (without looking) repeated them forward, backwards, forwards, and then by columns.    

My son and I have been learning different programming languages through Codecademy.  While I struggle to remember the required syntax of different languages, he quickly picks it up and can focus on higher-level understanding.  When we do math learning together, his strong working memory also lets him concentrate on higher-order issues rather than remembering the details of the problem and the relevant formulas.     

You can easily train a child’s working memory.  It requires just a few minutes of time a day, can be very low tech or done on a computer, can be optimized for your child to get him into flow, and easily lends itself to a reward system.  Here is some of the training we have done:     

  • I write down a sequence and have him repeat it.
  • I say a sequence and have him repeat it.
  • He repeats the sequence backwards.
  • He repeats the sequence with slight changes such as adding one to each number and “subtracting” one from each letter.
  • He repeats while doing some task like touching his head every time he says an even number and touching his knee every time he says an odd one.
  • Before repeating a memorized sequence he must play repeat after me where I say a random string.
  • I draw a picture and have him redraw it.
  • He plays N-back games (see the sketch below).
  • He does mental math requiring keeping track of numbers (e.g. 42 times 37).
  • I assign numerical values to letters and ask him math operation questions (e.g. A*B+C).        

The key is to keep changing how you train your kid so you have more hope of improving general working memory rather than the very specific task you are doing.  So, for example, if you say a sequence and have your kid repeat it back to you, vary the speed at which you talk on different days and don’t just use one class of symbols in your exercises.
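Here is the minimal N-back sketch mentioned in the list above - a console toy under my own assumptions (single letters, a fixed symbol set), not a polished trainer:

    import random

    def n_back(n=2, rounds=12, symbols="ABCDEF"):
        # Show symbols one at a time; the player answers "y" when the
        # current symbol matches the one shown n steps earlier.
        seq, score = [], 0
        for _ in range(rounds):
            s = random.choice(symbols)
            seq.append(s)
            answer = input(f"{s}  match {n} back? (y/n) ").strip().lower()
            correct = len(seq) > n and seq[-1 - n] == s
            score += answer == ("y" if correct else "n")
        print(f"score: {score}/{rounds}")

    # n_back()  # uncomment to play a 2-back round

Varying n, the symbol set, and the pacing fits the advice above about changing the task, so you train general working memory rather than one specific skill.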

 

 

Goal setting journal (GSJ) - 28/06/15 -> 05/07/15

3 Clarity 28 June 2015 06:24AM

Inspired by the group rationality diary and open thread, this is the inaugural weekly goal setting journal (GSJ) thread.

If you have goals worth setting that are not worth their own post (even in Discussion), then it goes here.


Notes for future GSJ posters:

1. Please add the 'goal_setting_journal' tag.

2. Check if there is an active GSJ thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. GSJ Threads should be posted in Discussion, and not Main.

4. GSJ Threads should run for no longer than 1 week, but you may set goals, subgoals and tasks for as distant into the future as you please.

5. No one is in charge of posting these threads. If it's time for a new thread, and you want a new thread, just create it.

GiveWell event for SF Bay Area EAs

3 Benquo 25 June 2015 08:27PM

Passing this announcement along from GiveWell:

GiveWell is holding an event at our offices in San Francisco for Bay Area residents who are interested in Effective Altruism. The evening will be similar to the research events we hold periodically for GiveWell donors: it will include presentations and discussion about GiveWell’s top charity work and the Open Philanthropy Project, as well as a light dinner and time for mingling. We’re tentatively planning to hold the event in the evening of Tuesday July 7th or Wednesday July 8th.

We hope to be able to accommodate everyone who is interested, but may have to limit places depending on demand. If you would be interested in attending, please fill out this form.
We hope to see you there!

Cryonics: peace of mind vs. immortality

3 oge 24 June 2015 07:10AM

I wrote a blog post arguing that people sign up for cryo more for peace of mind than for immortality. This suggests that cryo organizations should market to the former desire rather than the latter (you can think of it as marketing to near mode rather than far mode, in Hansonian terms).

Perhaps we've been selling cryonics wrong. I'm signed up, and feel like the reason I should have for signing up is that cryonics buys me a small but non-zero chance at living forever. However, for years this "should" didn't actually result in me signing up. Recently, though, after being made aware of this dissonance between my words and actions, I finally signed up. I'm now very glad that I did. But it's not because I now have a shot at everlasting life.

http://specterdefied.blogspot.com/2015/06/a-cryo-membership-buys-peace-of-mind.html

 

For those signed up already, does peace-of-mind resonate as a benefit of your membership?

If you are not a cryonics member, what would make you decide that it is a good idea?

Open Thread, Jun. 29 - Jul. 5, 2015

2 Gondolinian 29 June 2015 12:14AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

New LW Meetups: Maine, San Antonio

2 FrankAdamek 26 June 2015 02:59PM

This summary was posted to LW Main on June 19th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.


Is this evidence for the Simulation hypothesis?

1 Eitan_Zohar 28 June 2015 11:45PM

I haven't come across this particular argument before, so I hope I'm not just rehashing a well-known problem.

"The universe displays some very strong signs that it is a simulation.

As has been mentioned in some other answers, one way to efficiently achieve a high-fidelity simulation is to design it in such a way that you only need to compute as much detail as is needed. If someone takes a cursory glance at something, you should only compute its rough details; only when someone looks at it closely - with a microscope, say - do you need to fill in the details.

This puts a big constraint on the kind of physics you can have in a simulation. You need this property: suppose some physical system starts in state x. The system evolves over time to a new state y which is now observed to accuracy ε. As the simulation only needs to display the system to accuracy ε, the implementor doesn't want to have to compute x to arbitrary precision. They'd like to only have to compute x to some limited degree of accuracy. In other words, demanding y to some limited degree of accuracy should only require computing x to a limited degree of accuracy.

Let's spell this out. Write y as a function of x, y = f(x). We want that for every ε > 0 there is a δ > 0 such that for every x' with x − δ < x' < x + δ, we have |f(x') − f(x)| < ε. This is just a restatement in mathematical notation of what I said in English. But do you recognise it?

It's the standard textbook definition of a continuous function. We humans invented the notion of continuity because it was a ubiquitous property of functions in the physical world. But it's precisely the property you need to implement a simulation with demand-driven level of detail. All of our fundamental physics is based on equations that evolve continuously over time and so are optimised for demand-driven implementation.

One way of looking at this is that if y=f(x), then if you want to compute n digits of y you only need a finite number of digits of x. This has another amazing advantage: if you only ever display things to a given accuracy you only ever need to compute your real numbers to a finite accuracy. Nature could have chosen to use any number of arbitrarily complicated functions on the reals. But in fact we only find functions with the special property that they need only be computed to finite precision. This is precisely what a smart programmer would have implemented.

(This also helps motivate the use of real numbers. The basic operations on real numbers such as addition and multiplication are continuous and require only finite precision in their arguments to compute their values to finite precision. So real numbers give a really neat way to allow inhabitants to find ever more detail within a simulation without putting an undue burden on its implementation.)
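You can watch this demand-driven property in ordinary finite-precision arithmetic.  A toy demonstration (the function and the input are arbitrary choices of mine):

    from decimal import Decimal, getcontext

    def f(x):
        return x * x + 2 * x  # any continuous function will do

    x_full = "1.4142135623730950488016887242096980785696718753769"
    for digits in (6, 12, 24):
        getcontext().prec = digits
        x = Decimal(x_full[:digits + 2])  # consult only `digits` digits of x
        print(digits, f(x))

Each run consults only finitely many digits of x, yet the printed digits of f(x) agree from run to run (up to rounding in the last place) - continuity is exactly what makes that truncation safe.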

But you can do one step further. As Gregory Benford says in Timescape: "nature seemed to like equations stated in covariant differential forms". Our fundamental physical quantities aren't just continuous, they're differentiable. Differentiability means that if y=f(x) then once you zoom in closely enough, y depends linearly on x. This means that one more digit of y requires precisely one more digit of x. In other words our hypothetical programmer has arranged things so that after some initial finite length segment they can know in advance exactly how much data they are going to need.

After all that, I don't see how we can know we're not in a simulation. Nature seems cleverly designed to make a demand-driven simulation of it as efficient as possible."

http://www.quora.com/How-do-we-know-that-were-not-living-in-a-computer-simulation/answer/Dan-Piponi

The great quote of rationality a la Socrates (or Plato, or Aristotle)

1 Bound_up 23 June 2015 03:55PM

Help a brother out?

There's a great quote by one of the Big 3 Greek philosophers (EDIT: reference to Cicero removed) which I can paraphrase from memory as:

"I consider it rather better for myself to be proven wrong than to prove someone else wrong, just as I'm better off being cured of a disease than curing someone of one."

 

I can't find the quote, or which of the Three it is.

Anybody know? Or know where to look? I've already tried varying google search techniques and perused the Wikiquotes article on each of them.

Min/max goal factoring and belief mapping exercise

-1 Clarity 23 June 2015 05:30AM

Edit 3: Removed description of previous edits and added the following:

This thread used to contain the description of a rationality exercise.

I have removed it and plan to rewrite it better.

I will repost it here, or delete this thread and repost in the discussion.

Thank you.

​My recent thoughts on consciousness

-2 AlexLundborg 24 June 2015 12:37AM

I have lately come to seriously consider the view that the everyday notion of consciousness doesn’t refer to anything that exists out there in the world, but is rather a confused (but useful) projection made by purely physical minds onto their depiction of themselves in the world. The main influences on my thinking are Dan Dennett (I assume most of you are familiar with him) and, to a lesser extent, Yudkowsky (1) and Tomasik (2). To use Dennett’s line of thought: we say that honey is sweet, that metal is solid or that a falling tree makes a sound, but the character tag of sweetness and sounds is not in the world but in the brain's internal model of it. Sweetness is not an inherent property of the glucose molecule; instead, we are wired by evolution to perceive it as sweet to reward us for calorie intake in our ancestral environment, and there is no need for non-physical sweetness-juice in the brain – no, it's coded (3). We can talk about sweetness and sound as if they were out there in the world, but in reality they are useful fictions of sorts that we are "projecting" onto the world. The default model of our surroundings and ourselves that we use in our daily lives (the manifest image, or ’umwelt’) is difficult to reconcile with the scientific perspective of gluons and quarks. We can use this insight to look critically at how we perceive a very familiar part of the world: ourselves. It might be that we are projecting useful fictions onto our model of ourselves as well. Our normal perception of consciousness is perhaps like the sweetness of honey: something we think exists in the world, when it is in fact a judgement about the world made (unconsciously) by the mind.

What we are pointing at with the judgement “I am conscious” is perhaps the competence that we have to access states of the world, form expectations about those states and judge their value to us, coded in by evolution. That is, under this view, equivalent to saying that sugar is made of glucose molecules, not sweetness-magic. In everyday language we can talk about sugar as sweet and consciousness as “something-to-be-like-ness” or “having qualia”, which is useful and probably necessary for us to function, but that is a somewhat misleading projection made by our world-accessing and world-assessing minds, which really do exist in the world. That notion of consciousness is not subject to the Hard Problem; it may not be an easy problem to figure out how consciousness works, but it does not appear impossible to explain it scientifically, as pure matter like anything else in the natural world, at least in theory. I’m pretty confident that we will solve consciousness, if by consciousness we mean the competence of a biological system to access states of the world, make judgements and form expectations. That is, however, not what most people mean when they say consciousness. Just as “real” magic refers to the magic that isn’t real, while the magic that is real - that can be performed in the world - is not “real magic”, so “real” consciousness turns out to be a useful but misleading assessment (4). We should perhaps keep the word consciousness but adjust what we mean when we use it, for diplomacy.

Having said that, I still find myself baffled by the idea that I might not be conscious in the way I’ve found completely obvious before. Consciousness seems so mysterious and unanswerable, so it’s not surprising that the explanation provided by physicalists like Dennett isn’t the most satisfying. Despite that, I think it’s the best explanation I've found so far, so I’m trying to cope with it the best I can. One of the problems I’ve had with the idea is how it has required me to rethink my views on ethics. I sympathize with moral realism, the view that there exist moral facts, on the strength of the intuition that suffering seems universally bad and well-being universally good. Nobody wants to suffer agonizing pain, everyone wants beatific eudaimonia, and it doesn't feel like an arbitrary choice to care about the realization of these preferences in all sentience to a high degree, instead of any other possible goal like paperclip maximization. It appeared to me to be an inescapable fact about the universe that agonizing pain really is bad (ought to be prevented) and that intelligent bliss really is good (ought to be pursued), just as a label the brain uses to distinguish wavelengths of light really is red, and that you can build up moral values from there. I have a strong gut feeling that the well-being of sentience matters, and that the more capacity a creature has for receiving pain and pleasure the more weight it is given - say, a gradient from beetles to posthumans that could perhaps be understood by further inquiry into the brain (5). However, if it turns out that pain and pleasure aren't more than convincing judgements by a biological computer network in my head, no different in kind from any other computation or judgement, the sense of seriousness and urgency of suffering appears to fade away. Recently, I’ve loosened up a bit to accept a weaker grounding for morality: I still think that my own well-being matters, and I would be inconsistent if I didn’t think the same about other collections of atoms that appear functionally similar to ’me’, who also claim, or appear, to care about their well-being. I can’t answer why I should care about my own well-being, though; I just have to. Speaking of 'me': personal identity also looks very different (nonexistent?) under physicalism than in the everyday manifest image (6).

Another difficulty I confront is why, under this explanation, e.g. colors and sounds look and sound the way they do, or why they have any quality at all. Where do they come from if they’re only labels my brain uses to distinguish inputs from the senses? Where does the yellowness of yellow come from? Maybe it’s not a sensible question, but only the murmuring of a confused primate. Then again, where does anything come from? If we can learn to quiet our bafflement about consciousness and sensibly reduce it down to physics - fair enough, but where does physics come from? That mystery remains, and it will possibly always be out of reach, at least until there are advanced superintelligent philosophers. For now, understanding how a physical computational system represents the world and forms judgements and expectations from perception presents enough of a challenge. It seems to be a good starting point to explore anyway (7).


I did not really put forth any particularly new ideas here, this is just some of my thoughts and repetitions of what I have read and heard others say, so I'm not sure if this post adds any value. My hope is that someone will at least find some of my references useful, and that it can provide a starting point for discussion. Take into account that this is my first post here, I am very grateful to receive input and criticism! :-)

  1. Check out Eliezer's hilarious tear down of philosophical zombies if you haven't already
  2. http://reducing-suffering.org/hard-problem-consciousness/
  3. [Video] TED talk by Dan Dennett http://www.ted.com/talks/dan_dennett_cute_sexy_sweet_funny
  4. http://ase.tufts.edu/cogstud/dennett/papers/explainingmagic.pdf
  5. Reading “The Moral Landscape” by Sam Harris increased my confidence in moral realism. Whether moral realism is true or false can obviously have implications for approaches to the value learning problem in AI alignment, and for the factual accuracy of the orthogonality thesis
  6. http://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf
  7. For anyone interested in getting a grasp of this scientific challenge I strongly recommend the book “A User’s Guide to Thought and Meaning” by Ray Jackendoff.



Edit: made some minor changes and corrections. Edit 2: made additional changes in the first paragraph for increased readability.

 


Praising the Constitution

-5 dragonfiremalus 27 June 2015 04:55PM

I am sure the majority of the discussion surrounding the United States' recent Supreme Court ruling will be on the topics of same-sex marriage and marriage equality. And while there is a lot of good discussion to be had there, I thought I would take the opportunity to bring up another topic that often seems to be glossed over, yet is very important to the discussion: the practice in the USA of praising the United States Constitution and holding it to an often unquestioning level of devotion.

Before I really get going I would like to take a quick moment to say I do support the US Constitution and think it is important to have a very strong document that provides rights for the people and guidelines for government. The entire structure of the government is defined by the Constitution, and some form of constitution or charter is necessary for the establishment of any type of governing body. Also, in the arguments I use as examples I am not in any way saying which side I am on. I am simply using them as examples, and no attempt should be made to infer my political stances from how I treat the arguments themselves.

But now the other way. I often hear in political discussions people, particularly Libertarians, trying to tie their position back to being based on the Constitution. The buck stops there. The Constitution says it, therefore it must be right. End of discussion. To me this often sounds eerily similar to arguing the semantics of a religious text to support your position.

A great example is in the debate over gun control laws. Without espousing one side or the other, I can fairly safely and definitively say the US Constitution does support citizens' rights to own guns. For many a Libertarian, the discussion ends there. This is not something only Libertarians are guilty of. The other side of the debate often resorts to arguing context and semantics in an attempt to make the Constitution support their side. This clearly is just a case of people trying to win the argument rather than discuss and discover the best solution.

Similarly in the topic of marriage equality, a lot of the discussion has been focused on whether or not the US supreme court ruling was, in fact, constitutional. Extending that further, the topic goes on to "does the Constitution give the federal government the right to demand that the fifty states all allow same-sex marriage?" To me, this is not the true question that needs answering. Or at least, the answer to that question does not determine a certain action or inaction on the part of the federal government. (E.g., if it was decided that it was unconstitutional, that STILL DOESN'T NECESSARILY mean that the federal government shouldn't do it. I know, shocking.) 

The Constitution was written by a bunch of men over two hundred years ago. Fallible, albeit brilliant, men. It isn't perfect. (It's damn good, else the country wouldn't have survived this long.) But it is still just a heuristic for finding the best course of action in what resembles a reasonable amount of time (insert your favorite 'inefficiency of bureaucracy' joke here). But heuristics can be wrong. So perhaps we should more often consider the question of whether or not what the Constitution says is actually the right thing. Certainly, departures from the heuristic of the Constitution should be taken with extreme caution and consideration. But we cannot discard the idea and simply argue based on the Constitution. 

At the heart of the marriage equality debate and the Supreme Court ruling are the ideas of freedom, equality, and states' rights. All three of those are heuristics that usually point to what I think is best. I usually support states' rights, and consider departures from that as carrying negative expected utility. However, there are many times when that consideration is completely outweighed by other considerations. 

The best example I can think of off the top of my head is slavery. Before the Emancipation Proclamation some states ruled slavery illegal, some legal. The question that tore our nation apart was whether or not the federal government had the right to impose abolition of slavery on all the states. I usually side with states' rights. But slavery is such an abominable practice that in that case I would have considered the constitutional rights of the federal government a non-issue when weighed against the continuation of slavery in the US for a single more day. If the Constitution had specifically supported the legality of slavery, then that would have shown it was time to burn it and try again.

Any federal proclamation infringes on states' rights, something I usually side with. And as more and more states were legalizing same-sex marriage, it seemed that the states were deciding by themselves to promote marriage equality. The Supreme Court decision certainly speeds things up, but is it worth the infringement of states' rights? To me that is the important question. Not whether or not it is constitutional, but whether or not it is right. I am not answering that question here, just attempting to point out that the discussion of constitutionality may be the wrong question. And certainly an argument could be made for why states' rights should not be used as a heuristic at all. 

Is Greed Stupid?

-8 adamzerner 23 June 2015 08:38PM

I just finished reading a fantastic Wait But Why post: How Tesla Will Change The World. One of the things it notes is that people in the auto and oil industries are trying to delay the introduction of electric vehicles (EVs) so they can make more money.

The post also explains how important it is that we become less reliant on oil.

  1. Because we're going to run out relatively soon.
  2. Because it's causing global warming.
So, from the perspective of these moneybag guys, here is how I see the cost-benefit of delaying the introduction of EVs:
  • Make some more money, which gives them and their families a marginally more comfortable life.
  • Not get a sense of purpose out of your career.
  • Probably feel some sort of guilt about what you do.
  • Avoid the short-term discomfort of changing jobs/careers.
This probably makes my opinions pretty clear:
  • Because of diminishing marginal utility, I doubt that the extra money is making them much happier (a quick numerical sketch follows this list). I'm sure they're pretty well off to begin with. It could be the case that they're so used to their lifestyle that they really do need the extra money to be happy, but I doubt it.
  • Autonomy, mastery and purpose are three of the most important things to get out of your career. There seems to be a huge opportunity cost to not working somewhere that provides you with a sense of purpose.
  • To continue that thought, I'm sure they feel some sort of guilt for what they're doing. Or maybe not. But if they are, that seems like a relatively large cost.
  • I understand that there's probably a decent amount of social pressure on them to conform. I'm sure that they surround themselves with people who are pro-oil and anti-electric. I'm sure that their companies put pressure on them to perform. I'm sure that they have families and all of that and starting something new might be difficult. But these don't seem to be large enough costs to make their choices worthwhile. A big reason why I get this impression is because they are so short term.
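To put toy numbers on the diminishing-marginal-utility bullet above: with logarithmic utility (a standard but purely illustrative assumption, like the wealth figures here), the same windfall buys far less happiness at higher wealth:

    import math

    def utility_gain(wealth, bump):
        # Log utility: each doubling of wealth adds the same utility.
        return math.log(wealth + bump) - math.log(wealth)

    print(utility_gain(100_000, 1_000_000))     # ~2.40 for someone with $100k
    print(utility_gain(50_000_000, 1_000_000))  # ~0.02 for someone with $50M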
I've been talking specifically about those in the auto and oil industries, but the same logic seems to apply to other greedy people (e.g. in finance). I get the impression that greed is stupid - that it doesn't make you happy, and that it isn't instrumentally rational. But I'd like to get the opinions of others.