
No negative press agreement

-10 Elo 01 September 2016 11:10AM

Original post:  http://bearlamp.com.au/no-negative-press-agreement/

What is a no negative press agreement?

A no negative press agreement makes a media outlet's permission to publish information provided by a person conditional on that person not being portrayed negatively by the press.

Why would a person want that?

The press has powers above and beyond those of everyday people to publish information and to spread knowledge and perspectives about an issue in ways that can damage an individual.  An individual, while motivated by the appeal of publicity, is also concerned about the potential damage caused by negative press.

Every person is the hero of their own story.  From one's own perspective, one's actions were justified and motivated by one's own intentions and worldview; no reasonable person would tell their own story (other than purposefully) in a way that casts them as the villain of the plot, actively causing harm in the world for no reason.

Historically, humans have been motivated to care more about bad news than good news, for reasons that expand on the idea that bad news might herald your death (and so be subject to natural selection), while good news would be irrelevant for survival purposes.  Today we are no longer in that historic period, yet we still pay strong attention to bad news.  It's clear that bad news can personally affect individuals - not only those in the stories, but others exposed to the bad news can be left with a negative worldview or become upset or distraught.  Given that bad news is known to spread more than good news, and also risks affecting us negatively, we are motivated to avoid bad news: not creating it, not endorsing it, and not aiding in its creation.

The binding agreement is designed to do several things:

  • protect the individual from harm
  • reduce the total volume of negative press in the world
  • decrease the damage caused by negative press in the world
  • bring about the future we would rather live in
  • protect the media outlet from harming individuals

Does this limit news-makers' freedom to publish?

That is not the intent.  At the outset, it's easy to think it could have that effect, and in a very shortsighted way it might.  Beyond those very early effects, it should have a net positive effect: creating news of positive value, protecting the media from escalating negativity, and bringing about the future we want to see in the world.  If it limits media outlets in any way, it should be to stop them from causing harm; at that point, any non-compliance by a media entity signals a desire to act as an agent of harm in the world.

Why would a media outlet be an agent of harm?  Doesn't that go against the principles of no negative press?

While media outlets (and the humans in them) may set out with the good intention of not having a net negative effect on the world, they can be motivated by other concerns: for example, the value of being more popular, or the direction from which they are paid for their efforts (such as advertising revenue).  The concept of competing commitments and being motivated by conflicting goals is best covered by Scott Alexander under the name Moloch.

The no negative press agreement is an attempt to create a commons which binds all relevant parties to act better than the tragedy of the commons would otherwise allow.  This commons is motivated to grow and maintain itself.  If any media outlets are motivated to defect, they are to be penalised by both the rest of the press and the public.

How do I encourage a media outlet to comply with no negative press?

Ask them to publish a policy with regard to no negative press.  If you are an individual interested in interacting with the media, and are concerned with the risks associated with negative press, you can suggest an individual binding agreement while the media body designs and publishes a relevant policy.

I think someone violated the no negative press policy, what should I do?

At the time of writing, no one is bound by the concept of no negative press.  The more desire and pressure there is in the world for entities to comply, the more likely they are to comply.  To create that pressure, a few actions can be taken:

  • Write to media entities on the public record and request that they consider a no negative press policy; outline clearly and briefly why it matters to you.
  • Name and shame media entities that fail to comply with no negative press, or fail to consider a policy.
  • Vote with your feet - if you find a media entity that fails to comply, do not subscribe to their information and vocally encourage others to do the same.

Meta: this took 45mins to write.

The call of the void

-6 Elo 28 August 2016 01:17PM

Original post:  http://bearlamp.com.au/the-call-of-the-void

L'appel du vide - The call of the void.

When you are standing on the balcony of a tall building, looking down at the ground, and on some track your brain says, "what would it feel like to jump?".  When you are holding a kitchen knife thinking, "I wonder if this is sharp enough to cut myself with?".  When you are waiting for a train and your brain asks, "what would it be like to step in front of that train?".  Maybe it's happened with rope around your neck, or power tools, or what if I take all the pills in the bottle.  Or touch these wires together, or crash the plane, crash the car, just veer off.  Lean over the cliff...  Try to anger the snake, stick my fingers in the moving fan...  Or the acid.  Or the fire.

There's a strange phenomenon where our brains seem to ask, "I wonder what the consequences of this dangerous thing are?".  And we don't know why it happens.  There has only been one paper (sorry, it's behind a paywall) on the concept, and all it really did is identify it.  I quite like the paper for quoting both "You know that feeling you get when you're standing in a high place… sudden urge to jump?… I don't have it" (Captain Jack Sparrow, Pirates of the Caribbean: On Stranger Tides, 2011) and "a drive to return to an inanimate state of existence" (Freud, 1922).

Taking a look at their method: they surveyed 431 undergraduates about their experiences of what they coined HPP (the High Place Phenomenon).  They found that 30% of their respondents had experienced HPP, and tried to measure whether it was related to anxiety or suicidality.  They also proposed a theory.

...we propose that at its core, the experience of the high place phenomenon stems from the misinterpretation of a safety or survival signal. (e.g., “back up, you might fall”)

I want to believe it, but today there are literally no other papers on the topic, and no evidence either way.  So all I can say is: we don't really know.  S'weird.  Dunno.
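As an aside on the survey numbers: with 431 respondents, the pure sampling error on that 30% figure is actually quite small (a normal-approximation sketch; the harder open question is whether undergraduates represent the wider population at all):

```python
import math

# 30% of 431 surveyed undergraduates reported experiencing HPP.
n = 431
p_hat = 0.30

# Normal-approximation 95% confidence interval for a proportion.
se = math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"95% CI: {low:.1%} to {high:.1%}")  # roughly 25.7% to 34.3%
```

So the interval from sampling alone is fairly tight; the 10-20% hedge later in this post is about generalising beyond undergraduates, not about the survey's own noise.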


This week I met someone who uncomfortably described their experience of toying with l'appel du vide.  I explained that this is a common and confusing phenomenon, and they said, with relief, "it's not like I want to jump!".   Around 5 years ago (before I knew its name) an old friend recounted, with discomfort, wondering what it would be like to step in front of moving buses, any time she was near one.  I have coaxed a friend out of the middle of a road (they weren't drunk and weren't on drugs at the time).  And dragged friends out of the ocean.  I have it with knives, in a way that borders on OCD behaviour: the desire to look at and examine the sharp edges.

What I do know is this: it's normal.  Very normal.  Even if it's not 30% of the population, it could easily be 10 or 20%.  Everyone has a right to know that it happens, that it's normal, and that you're not broken if you experience it.  It's as common a shared human experience as dreams of your teeth falling out, of flying, of running away from groups of people, or of being underwater.  Or the experience of rehearsing what you want to say before making a phone call.  Or walking into a room for a reason and forgetting what it was.

Next time you are struck by l'appel du vide, don't get uncomfortable.  Accept that it's a neat thing that brains do, and that it's harmless.  Experience it.  And together with me, wonder why.  Wonder what evolutionary benefit has given so many of us l'appel du vide.

And be careful.


Meta: this took one hour to write.

The ladder of abstraction and giving examples

-6 Elo 16 August 2016 05:31AM

Original post:  http://bearlamp.com.au/examples/

When we talk about a concept or a point, it's important to understand the ladder of abstraction.  It has been covered before on LessWrong, and elsewhere as advice for communicators on how to bridge a gap of knowledge.

Knowing, understanding and feeling the ladder of abstraction prevents things like this:

  1. Speakers who bury audiences in an avalanche of data without providing the significance.
  2. Speakers who discuss theories and ideals, completely detached from real-world practicalities.

When you talk to old and wise people, they will sometimes tell you stories from their lives: "back in my day...".  Seen in perspective, that might be their way of shifting around the ladder of abstraction.  As an agenty agent of agenty goodness, your job is to make sense of this occurrence.  The ladder of abstraction is very powerful when used effectively and very frustrating when you find yourself on the wrong side of it.

The flipside of this example is when people talk at a highly theoretical level.  I suspect this happens to philosophers as well as hippies.  They are very good at telling you about the connections between things like "energy" or "desire", but lack the grounding to explain how that applies to real life.  I don't blame them.  One day I will be able to think completely abstractly; today is not that day.  Since today is not that day, it is my duty and yours to ask and specify: to explain what the ladder of abstraction is, and then tell them you have no idea what they are talking about.  Or, as for the example above, ask them to go up a level in the ladder of abstraction: "If I were to learn something from your experiences, what would it be?".


Lesswrong doing it wrong

I care about adding the conceptual ladder of abstraction to the repertoire for a reason.  LWers are very good at paying attention to details - a really powerful and important ability.  After all, the fifth virtue is argument; the tenth is precision.  If you can't be precise about what you are communicating, you fail to honour what we value.

Which is why it's great to see critical objections to the examples OPs provide.

I object when defeating an example does not defeat the rule.  Our delightful OP may survey their territory, stride forth, and claim to have a map for it and a few similar mountains or valleys.  Correcting the mapped mountains and valleys doesn't change the rest of the territory, and does not change the rest of the map.


This does matter. Recently a copy of this dissertation came around the Slack: https://cryptome.org/2013/09/nolan-nctc.pdf.  It is a report detailing the ridiculous culture inside the CIA and other US government security institutions.  One of the biggest problems within that culture is shown in this example (page 34 of the report):

The following exchange is a good example, told to me by a CIA analyst who was explaining the rules of baseball to visitors who didn’t know the game: 

Analyst A: So there are four bases-- 
Analyst B: -- Well, no, it’s really three bases plus home plate.
Analyst A: ... Okay, three bases plus home plate. The batter hits the ball and advances through the bases one by one— 
Analyst C: -- Well, no, it doesn’t have to be one base at a time.

And these ones on page 35:

The following excerpts from stories people have told me or that I witnessed further illustrate this concept: 

John: I see you’ve drawn a star on that draft. 
Bridget: Yeah, that’s just my doodle of choice. I just do it unconsciously sometimes. 
John: Don’t you mean subconsciously? 

Scott: Good morning! 
Employee in the parking lot: Well, I don’t know if it’s good, but here we are. 

Helene: I am so thirsty today! I seriously have a dehydration problem. 
Lucy: Actually, you have a hydration problem. 

Victoria: My hopes have been squashed like a pancake. 
James: Don’t you mean flattened like a pancake?

For those of us who don't have time to read 215 pages: the point is that analyst culture does this.  A lot.  From the outside it might seem ridiculous.  We can confidently say that analysts A, B and C in the first example were all right, and that if they paid attention to the objective of the situation they would skip the interruptions and get to the point of explaining how baseball works.  But that's not what it feels like when you are on the inside.

The report outlines how exchanges like these make analyst culture a difficult one to be a part of or engaged with.


We do the same thing.  We nitpick at examples and fight over irrelevant things.  If I could change everyone's mind, I would rather see something like this:

No one denies that people have different metabolisms.

Statements including "no one denies that ..." are usually false.  Regardless, my goal here was to...  

Taken literally, yes. However these statements are not intended to be taken literally...

Turn into:

No one denies that people have different metabolisms.

My goal here was to ask people...

(*yes, this is not a very good example of an example; it is an example of a turn of speech that was challenged, but the same effect of nitpicking on irrelevant details is present).


Nitpicking is not necessary.  

Sometimes we forget that we are all in the same boat together, racing down the river at the rate that we can uncover truth.  Sometimes we feel like we are in different boats racing each other.  In that frame it would seem a good idea to compete and accuse each other of failures on the journey in order to get ahead.  But we do not want to do that.

It's in our nature to compete - the human need to be right!  But we don't need to compete against each other; we need to support each other to compete against Moloch, Akrasia, Entropy, Fallacies and biases (among others).


I am guilty myself, in my personal life as well as on LW.  If I am laying blame, I blame myself for failing to point this out sooner more than I blame anyone else for nitpicking examples.

The plan of action.

Next time you go to comment - next time I go to comment - think very carefully about whether you can improve, whether I can improve, the post being commented on, before levelling objections at it.  We want to make the world a better place.  People wiser, older, sharper and wittier than me have already said it: "if you are looking for where to start...  you need only look in the mirror".


Meta: this took 3 hours to write.

Should you change where you live? (also - a worked “how to solve a question”)

-6 Elo 22 July 2016 06:06AM

Original post: http://bearlamp.com.au/should-you-change-where-you-live-a-worked-how-to-solve-a-question/

It's not a hard question, but it potentially has a lot of moving parts.

This post is going to be two in one.  The first part is whether you should move geography; the second is how I go through a problem.

First up - brainstorm ideas:

Meta-level

  • Make a list of relevant factors of staying or going (then google it to check for any I missed)
  • Decision making strategies

Object level

  • Why did this come up?
  • Make a list of things you wish were different with how you live now
  • Make a list of features of your current geography
  • Make a list of features that you know of in other geographies that you would like to obtain.

relevant factors

  • Family
  • Friends
  • Relationships
  • Population density
  • Population diversity breakdown
  • Local safety (bad neighbourhoods)
  • Religion
  • Politics, country-scale political climate
  • Government structure, public welfare
  • public transport
  • cost of living
  • quality of food, variation of food, culture of food.
  • exchange rate
  • Normal temperature/weather/climate (rain, cloud, sun, heat, cold, wind)
  • Extreme weather risk (e.g. cyclones, earthquakes, bushfires)
  • Work (and commute)
  • Salary
  • Pollution (Light, Air or noise pollution)
  • Residential or natural environment, parks, trees, tall buildings...
  • Ocean (if you swim, or like beach culture)
  • Landmarks
  • native plants, animals, diseases.
  • culture, art.
  • difficulty in moving
  • opportunity/plans
  • language barrier
  • public amenities
  • Education
  • Dwelling -> upsize, downsize, sidegrade...
  • Sleep - are you getting enough of it
  • postage costs

Why did this come up?

Usually there is a seeding factor: a reason why you are moving.  It will help to keep it in mind when planning other things.  Is something wrong or pushing you out?  Is the current location stagnant?  Is something pulling you?  Write that down.  Keep it in mind.  Considering the context of the event may help you make a more informed choice; it's also why it's often hard to ask for advice without being specific about what the difficulty seems to be.

Factors

When you move you will be exchanging your current set of these factors for a new and different set.  Sometimes you might move with your family; sometimes you might move across town and keep the same public transport network but pay lower rent.

Your job; should you choose to accept it: work out which ones are getting better, which are getting worse, and which are staying the same.  Some of them will do both.

Example: you live in a small town with a few friends.  You are moving to a big city where you know nobody, but you expect to make many more friends quickly.  Friends are getting both worse and better at the same time.

How?

There should be some instruction set to make it easier to actually come to an answer.  Not everyone could have generated this list automatically, and not everyone will know what to do with it now.  So here is what to do with the information.

  1. Take the list above - best copied to a spreadsheet - and make two copies of it.  For each point, write a few words about what you have now in your current location.
  2. If certain points seem irrelevant to you then don't worry.  Cross them out.
    E.g. If the weather doesn't bother you much then you can skip it.
  3. For each point, rate out of 10: how much do you care about this factor?  And, also out of 10: how well do you fulfil this need right now?  (This is where it's necessary to understand which ones you don't care about.)
  4. On the second copy, fill out the details of the place you want to go.  If you don't yet have a destination, look at the first list and find the things you care about a lot that have a low rating.  To start your search, make a list of places that you expect will rate highly in those areas, or search by that thing (i.e. places of religious significance).

    Of course there are ways to do this badly.  For example, as above: you live in a small town with a few friends, and you are moving to a big city where you know nobody but expect to make many more friends quickly - friends are getting both worse and better at the same time.  If on pondering you realise that no place will ever have more friends than where you are now, because everywhere else is foreign, then that makes it a poor metric to go on.  However (in this example) you might benefit from considering instead which places have the potential for good friends (or crazy ideas like taking your friends with you).

  5. Use your newly laid out knowledge as a guide on where to go and what to look for.
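The steps above can be sketched in code.  A minimal sketch of the spreadsheet comparison: score each location as the sum of (how much you care) times (how well the place fulfils the factor).  The factor names and all the numbers below are invented for illustration, and `location_score` is a hypothetical helper, not something from the original post:

```python
# Score a location by summing weight * rating over the factors you kept
# after crossing out the irrelevant ones (step 2).
def location_score(ratings, weights):
    """Sum of (care, out of 10) * (fulfilment, out of 10) per factor."""
    return sum(weights[f] * ratings[f] for f in weights)

# How much you care about each factor, out of 10 (step 3).
weights = {"friends": 9, "cost of living": 6, "work": 8, "climate": 3}

# How well each place fulfils each factor, out of 10 (steps 1 and 4).
current = {"friends": 8, "cost of living": 4, "work": 5, "climate": 7}
candidate = {"friends": 3, "cost of living": 8, "work": 9, "climate": 6}

print(location_score(current, weights))    # 157
print(location_score(candidate, weights))  # 165
```

The totals are only a guide (step 5): a candidate that wins on the sum can still lose badly on a single factor you care about most, which is exactly the friends caveat discussed above.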

Consider the inverse proposal

These are heuristic thinking strategies that might help you.  There are generic ones for problems in general, and there are questions that suit certain problems very well.  These are relatively generic, but I have heard of great success in applying them to moving decisions.

This is very generic.  If you are leaving a place for an obvious reason (for example political unrest), it would take a lot to convince you to stay.  This is where the idea of thinking of the inverse proposal comes in.

Example: your work has offered you a promotion.  It's $20,000 extra.  But you would have to leave your friends and family and work in a city several hours away for at least a year.

Example in reverse:  I am going to offer you a $20,000 pay cut and in exchange you get to live in a town with your friends.

*it can be hard to generate the reverse example from your own perspective.

Some people can easily say, "$20k just for my lousy friends?  Hell no."  Other people can easily say, "listen boss, $50k and you've got a deal."

Is there an alternative solution

This is a fully generic question to ask.

Before you convince yourself that the factors are out of your hands, consider whether you can take them into your own hands.  If you don't at least ask, you will never know whether it could have gone differently.  Can you take your friends with you?  Can you take the pay rise but not move for work?  Can you still have a nice lake even if you don't have an ocean?  Who knows.  At least consider it.

How can you make it easier for yourself?

This is a fully generic strategy for getting things done.

As with many decisions in life, they are big, they are hard, they are scary.  Are there things you can do to make the decision easier for yourself?


Meta: this took three hours to research and write.

Have I missed any factors?  I went through this very fast because I am trying a new productivity method, which means less polish but more posts - and the understanding that I might have missed something.

Map:Territory :: Uncertainty:Randomness – but that doesn’t matter; value of information does.

6 Davidmanheim 22 January 2016 07:12PM

In risk modeling, there is a well-known distinction between aleatory and epistemic uncertainty, which is sometimes referred to, or thought of, as irreducible versus reducible uncertainty. Epistemic uncertainty exists in our map; as Eliezer put it, “The Bayesian says, ‘Uncertainty exists in the map, not in the territory.’” Aleatory uncertainty, however, exists in the territory. (Well, at least according to our map that uses quantum mechanics, per Bell’s theorem – like, say, the time at which a radioactive atom decays.) This is what people call quantum uncertainty, indeterminism, true randomness, or recently (and somewhat confusingly, to me) ontological randomness – referring to the fact that our ontology allows randomness, not that the ontology itself is in any way random. It may be better, in LessWrong terms, to think of uncertainty versus randomness – while being aware that the wider world refers to both as uncertainty. But does the distinction matter?

To clarify a key point: many facts that are treated as random, such as dice rolls, are actually mostly uncertain – in that with enough physics modeling and inputs, we could predict them. On the other hand, in chaotic systems, there is the possibility that “true” quantum randomness can propagate upwards into macro-level uncertainty. For example, a sphere of highly refined and shaped uranium that is *exactly* at the critical mass will set off a nuclear chain reaction, or not, based on the quantum physics of whether the neutrons from one of the first set of decays set off a chain reaction – after enough of them decay, it will drop below the critical mass, and become increasingly unlikely to set off a nuclear chain reaction. Of course, the question of whether the sphere is above or below the critical mass (given its geometry, etc.) can be a difficult-to-measure uncertainty, but it’s not aleatory – though some part of the question of whether it kills the guy trying to measure whether it’s just above or just below the critical mass will be random – so maybe it’s not worth finding out. And that brings me to the key point.

In a large class of risk problems, there are factors treated as aleatory – but they may be epistemic, just at a level where finding the “true” factors and outcomes is prohibitively expensive. Potentially, the timing of an earthquake that would happen at some point in the future could be determined exactly via a simulation of the relevant data. Why is it considered aleatory by most risk analysts? Well, doing it might require a destructive, currently technologically impossible deconstruction of the entire earth – making the earthquake irrelevant. We would start with measurement of the position, density, and stress of each relatively macroscopic structure, and then perform a very large physics simulation of the earth as it had existed beforehand. (We have lots of silicon from deconstructing the earth, so I’ll just assume we can now build a big enough computer to simulate this.) Of course, this is not worthwhile – but doing so would potentially show that the actual aleatory uncertainty involved is negligible. Or it could show that we need to model the macroscopically chaotic system to such a high fidelity that microscopic, fundamentally indeterminate factors actually matter – and it was truly aleatory uncertainty. (So we have epistemic uncertainty about whether it’s aleatory; if our map were of high enough fidelity, and were computable, we would know.)

It turns out that most of the time, for the types of problems being discussed, this distinction is irrelevant. If we know that the value of information to determine whether something is aleatory or epistemic is negative, we can treat the uncertainty as randomness. (And usually, we can figure this out via a quick order of magnitude calculation; Value of Perfect information is estimated to be worth $100 to figure out which side the dice lands on in this game, and building and testing / validating any model for predicting it would take me at least 10 hours, my time is worth at least $25/hour, it’s negative.) But sometimes, slightly improved models, and slightly better data, are feasible – and then worth checking whether there is some epistemic uncertainty that we can pay to reduce. In fact, for earthquakes, we’re doing that – we have monitoring systems that can give several minutes of warning, and geological models that can predict to some degree of accuracy the relative likelihood of different sized quakes.
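The quick order-of-magnitude check in the dice example works out as follows (a sketch using exactly the numbers quoted above):

```python
# Back-of-the-envelope value-of-information check from the dice example:
# is it worth building and validating a model to resolve the uncertainty,
# or should we just treat it as randomness?
value_of_perfect_info = 100   # $, what knowing the dice outcome is worth
hours_to_build_model = 10     # time to build and validate any predictive model
hourly_rate = 25              # $/hour, value of your time

net_value = value_of_perfect_info - hours_to_build_model * hourly_rate
print(net_value)  # -150: negative, so treat the dice roll as random
```

Since the net value is negative, reducing the epistemic uncertainty isn't worth it, and the dice roll can safely be treated as aleatory for decision purposes.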

So, in conclusion: most uncertainty is lack of resolution in our map, which we can call epistemic uncertainty. This is true even if lots of people call it “truly random” or irreducibly uncertain – or, if they are being fancy, aleatory uncertainty. Some of what we assume is uncertainty is really randomness. But lots of the epistemic uncertainty can be safely treated as aleatory randomness, and value of information is what actually makes the difference. And knowing the terminology used elsewhere can be helpful.

The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism.

15 diegocaleiro 28 November 2015 11:07AM
This text has many, many hyperlinks; it is useful to at least glance at the front page of the linked material to get it. It is an expression of me thinking, so it has a lot of community jargon. Thanks to Oliver Habryka, Daniel Kokotajlo and James Norris for comments. No, really, check the front pages of the hyperlinks.
  • Why I Grew Skeptical of Transhumanism
  • Why I Grew Skeptical of Immortalism
  • Why I Grew Skeptical of Effective Altruism
  • Only Game in Town

 

Wonderland’s rabbit said it best: The hurrier I go, the behinder I get.

 

We approach 2016, and the more I see light, the more I see brilliance popping everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why? - you say.

Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil, straight into the strongest hub of production of things that matter, the Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process, I have navigated an ocean of information, read hundreds of books and papers, watched thousands of classes, and become proficient in a handful of languages and a handful of intellectual disciplines. I’ve visited Olympus and met our living demigods in person as well.

Against the overwhelming forces of an extremely upbeat personality surfing a hyper base-level happiness, these three forces: approaching the center, learning voraciously, and meeting the so-called heroes, have brought me to the current state of pessimism.

I was a transhumanist, an immortalist, and an effective altruist.

 

Why I Grew Skeptical of Transhumanism

The transhumanist in me is skeptical of technological development fast enough for improving the human condition to be worth it now, he sees most technologies as fancy toys that don’t get us there. Our technologies can’t and won’t for a while lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, given our goals about the brain, is at the level of knowledge of physics of someone who found out that spraying water on a sunny day causes the rainbow. It’s not even physics yet.

Believe me, I have read thousands of pages of papers in the most advanced topics in cognitive neuroscience, my advisor spent his entire career, from Harvard to Tenure, doing neuroscience, and was the first person to implant neurons that actually healed a brain to the point of recovering functionality by using non-human neurons. As Marvin Minsky, who invented the multi-agent computational theory of mind, told me: I don’t recommend entering a field where every four years all knowledge is obsolete, they just don’t know it yet.

 

Why I Grew Skeptical of Immortalism

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and with the chief scientists of anti-aging research facilities worldwide. He has met the bio-startup founders and gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the number of man-hours left to be invested in it allows, at least during my lifetime, or before the Intelligence Explosion.

Believe me, I was the first cryonicist among the 200 million people of my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share in our privilege of living, just in case some new insight comes along that changes the tide. But none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.

 

Why I Grew Skeptical of Effective Altruism

The Effective Altruist in me is skeptical too, although less so: I’m still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:

  1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders versus joiners, researchers versus executives, our institutions versus their institutions, cheap individuals versus expensive institutional salaries; it's gore all the way up and down.

  2. Reasoning by Analogy: Few EAs are able to do, let alone are actually doing, their due intellectual diligence. I don't blame them: the space of Crucial Considerations is not only very large, but extremely uncomfortable to look at. Who wants to know our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.

  3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they think is more guaranteed to be true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.

  4. The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason to not want to face the problem, or to be in denial, is the problem’s enormity.

  5. The Complexity of the Solution: Let me spell this out: the nature of the solution is not simple in the least. It's possible that we luck out and it turns out that the Orthogonality Thesis, the Doomsday Argument and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details that it only grasps after CEV or the Crypto, and that we will be OK. That is possible, and it is even more likely than the scenario in which our efforts end up being the decisive factor. We need to focus our actions on the branches where they matter, though.

  6. The Nature of the Solution: So let's sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes who just invented the internet, from everywhere around the world, to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown - this is the goal of Convergence Analysis, by the way - finding every single last one of them to the point where the box is filled. Then, once we have all the Crucial Considerations available, we must develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that's if we don't find a Crucial Consideration which, say, steers our course towards Mars instead). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, one that can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe's history, and that will fall into the right attractor basin within mindspace. That's if Superintelligences are even possible technically. Add to that that we, or it, have to guess correctly all the philosophical problems that are a) relevant and b) unsolvable within physics (if any) or by computers. All of this has to happen while the most powerful corporations, states, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the counter-incentive of not destroying the world, either because they don't realize the danger, because the first-mover advantage seems worth the risk, or because they are about to die anyway so there's not much to lose.

  7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We've barely scratched the surface of the technical understanding of happiness increase, and our philosophical understanding is also in its first steps.

  8. Macrostrategy is Hard: A Chess Grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in in the right basin, which is proportionally harder.

  9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

  10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
    I have concluded that cause X is the most relevant
    Institution A is an EA organization fighting for cause X
    Therefore I donate to institution A to fight for cause X.
    To begin with, this is very expensive compared to donating to any of the three P's: projects, people or prizes. Furthermore, the crucial points at which to fund institutions are when they are about to die, when they are just starting, when they are building a type of momentum within a narrow window of opportunity where the derivative gains are particularly large, or when you have private information about their current value. Agreeing with you about a cause being important is far from sufficient to assess the expected value of your donation.

  11. Delusional Optimism: Everyone who, like past-me, moves in with delusional optimism will always have a blind spot in the feature of reality about which they are in denial. It is not a problem to have some individuals with a blind spot, as long as the rate doesn't surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.

  12. Convergence of Opinions May Strengthen Separation Within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes of this flowchart, from whichever box they are in at the time. There are still people in all the opinion boxes, but the trend has been to move in that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, GWWC into poverty, 80k into career selection, etc., they become more congealed. People's opinions are still changing, and some of the money follows, but institutions are crystallizing into some opinions, and in the future they might prevent transition between opinion clusters and the free mobility of individuals, much as national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss towards the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism and a reason many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim 'EA' for themselves.

 

Only Game in Town

 

The reasons above have transformed a pathological optimist into a wary skeptic about our future and about the value of our plans to get there. And yet, I see no option other than to continue the battle. I wake up in the morning and consider my alternatives. Hedonism? Well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300,000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indefinitely. I look at my high base-happiness and don't feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards), and the self of which I am made has strong altruistic urges anyway, so at least above a threshold of happiness, it has reason to purchase the extremely good deals in expected happiness of others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice, and I feel the thrownness into this world as much as any Kierkegaard does. Power? Reading Nietzsche gives that fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to spending their power in pathetic signalling games and zero-sum disputes, or coercing minds to act against their will. Nihilism and Moral Fictionalism, like Existentialism, all collapse into having a choice, and if I have a choice my choice is always going to be the choice to, most of the time, care, try and do.

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.

It is the only game in town.

Against the internal locus of control

6 Thrasymachus 03 April 2015 05:48PM

What do you think about these pairs of statements?

  1. People's misfortunes result from the mistakes they make.
  2. Many of the unhappy things in people's lives are partly due to bad luck.
  1. In the long run, people get the respect they deserve in this world.
  2. Unfortunately, an individual's worth often passes unrecognized no matter how hard he tries.
  1. Becoming a success is a matter of hard work; luck has little or nothing to do with it.
  2. Getting a good job mainly depends on being in the right place at the right time.

They have a similar theme: the first statement in each pair suggests that an outcome (misfortune, respect, or a good job) for a person is the result of their own action or volition; the second assigns the outcome to some external factor like bad luck.(1)

People who tend to think their own attitudes or efforts can control what happens to them are said to have an internal locus of control, those who don't, an external locus of control. (Call them 'internals' and 'externals' for short).

Internals seem to do better at life, pace obvious confounding: maybe instead of internals doing better by virtue of their internal locus of control, being successful inclines you to attribute success to internal factors and so become more internal, and vice versa if you fail.(2) If you don't think the relationship is wholly confounded, then there is some prudential benefit to becoming more internal.
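The confounding worry can be made concrete with a toy simulation (a sketch with made-up numbers, not a model of any real dataset): even when the internality trait has zero causal effect on success, a feedback from success to *reported* internality produces a sizeable correlation of the kind the studies observe.

```python
import random

random.seed(0)

def simulate(n=10_000, causal_effect=0.0, feedback=0.5):
    """Toy model: success = luck + causal_effect * trait;
    reported internality = trait + feedback * success."""
    pairs = []
    for _ in range(n):
        trait = random.gauss(0, 1)    # latent disposition to internality
        luck = random.gauss(0, 1)     # external factors
        success = luck + causal_effect * trait
        reported = trait + feedback * success
        pairs.append((reported, success))
    # Pearson correlation between reported internality and success
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

# Zero causal effect, yet feedback alone yields a clear correlation:
print(simulate(causal_effect=0.0, feedback=0.5))
```

This is only the "reverse causation" half of the story, of course; a real analysis would need longitudinal data to separate the two directions.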

Yet internal versus external is not just a matter of taste, but a factual claim about the world. Do people, in general, get what their actions deserve, or is it generally thanks to matters outside their control?

Why the external view is right

Here are some reasons in favour of an external view:(3)

  1. Global income inequality is marked (e.g. someone in the bottom 10% of the US population by income is still richer than two thirds of the world's population - more here). The main predictor of your income is your country of birth, which is thought to explain around 60% of the variance: not only more important than any other factor, but more important than all other factors put together.
  2. Of course, the 'remaining' 40% might not be solely internal factors either. Another external factor we could put in would be parental class. Include that, and the two factors explain 80% of variance in income.
  3. Even conditional on being born in the right country (and to the right class), success may still not be a matter of personal volition. One robust predictor of success (grades in school, job performance, income, and so on) is IQ. The precise determinants of IQ remain controversial, but it is known to be highly heritable, and the 'non-genetic' factors of IQ proposed (early childhood environment, intra-uterine environment, etc.) are similarly outside one's locus of control.
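The variance-explained figures above can be illustrated with a toy decomposition (the 60/20/20 split is chosen only to echo the quoted numbers, not fitted to real data): if income is the sum of independent country, class and residual components, the share of variance each explains can be read off directly.

```python
import random

random.seed(1)

# Toy income model with independent components whose variances are
# 60, 20 and 20 (illustrative numbers, not real estimates).
n = 50_000
country = [random.gauss(0, 60 ** 0.5) for _ in range(n)]  # country of birth
klass   = [random.gauss(0, 20 ** 0.5) for _ in range(n)]  # parental class
other   = [random.gauss(0, 20 ** 0.5) for _ in range(n)]  # everything else
income  = [c + k + o for c, k, o in zip(country, klass, other)]

def share_of_variance(component, y):
    """Variance of an independent component divided by total variance of y."""
    mc = sum(component) / len(component)
    my = sum(y) / len(y)
    var_c = sum((v - mc) ** 2 for v in component)
    var_y = sum((v - my) ** 2 for v in y)
    return var_c / var_y

print(share_of_variance(country, income))                               # ≈ 0.6
print(share_of_variance([c + k for c, k in zip(country, klass)], income))  # ≈ 0.8
```

The "ratio of variances" shortcut only works because the components are independent by construction; with real data, country and class are correlated and a regression would be needed.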

On cursory examination, the contours of how our lives turn out are set by factors outside our control, merely by where we are born and who our parents are. Even after this we know various predictors, similarly outside (or mostly outside) our control, that exert their effects on how our lives turn out: IQ is one, but we could throw in personality traits, mental health, height, attractiveness, etc.

So the answer to 'What determined how I turned out, compared to everyone else on the planet?' surely has to be primarily about external factors, with our internal drive or will relegated a long way down the list. Even if we look at narrower questions, like 'What has made me turn out the way I am, versus all the other people who were likewise born in rich countries in comfortable circumstances?', it is still unclear whether the locus of control resides within our will: perhaps a combination of our IQ, height, gender, race, risk of mental illness and so on will still do the bulk of the explanatory work.(4)

Bringing the true and the prudentially rational together again

If folks with an internal locus of control succeed more, yet the external view is generally closer to the truth of the matter, this is unfortunate. What is true and what is prudentially rational seem to be diverging, such that it might be in your interests not to know about the evidence in support of the external-locus-of-control view, as deluding yourself into an internal-locus-of-control view would lead to greater success.

Yet it is generally better not to believe falsehoods. Further, the internal view may have some costs. One possibility is fueling a just world fallacy: if one thinks that outcomes are generally internally controlled, then a corollary is when bad things happen to someone or they fail at something, it was primarily their fault rather than them being a victim of circumstance.

So what next? Perhaps the right view is to say that: although most important things are outside our control, not everything is. Insofar as we do the best with what things we can control, we make our lives go better. And the scope of internal factors - albeit conditional on being a rich westerner etc. - may be quite large: it might determine whether you get through medical school, publish a paper, or put in enough work to do justice to your talents. All are worth doing.

Acknowledgements

Inspired by Amanda MacAskill's remarks, and in partial response to Peter McIntyre. Neither is responsible for what I've written, and the former's agreement or the latter's disagreement with this post shouldn't be assumed.

 

1) Some ground-clearing: free will can begin to loom large here - after all, maybe my actions are just a result of my brain's particular physical state, and my brain's particular physical state at t depends on its state at t-1, and so on and so forth all the way back to the big bang. If so, there is no 'internal willer' for my internal locus of control to reside in.

However, even if that is so, we can parse things in a compatibilist way: 'internal' factors are those which my choices can affect; external factors are those which my choices cannot affect. "Time spent training" is an internal factor as to how fast I can run, as (borrowing Hume), if I wanted to spend more time training, I could spend more time training, and vice versa. In contrast, "Hemiparesis secondary to birth injury" is an external factor, as I had no control over whether it happened to me, and no means of reversing it now. So the first set of answers imply support for the results of our choices being more important; whilst the second set assign more weight to things 'outside our control'.

2) In fairness, there's a pretty good story as to why there should be 'forward action': in the cases where outcome is a mix of 'luck' factors (which are a given to anyone), and 'volitional ones' (which are malleable), people inclined to think the internal ones matter a lot will work hard at them, and so will do better when this is mixed in with the external determinants.

3) This ignores edge cases where we can clearly see the external factors dominate - e.g. getting childhood leukaemia, getting struck by lightning etc. - I guess sensible proponents of an internal locus of control would say that there will be cases like this, but for most people, in most cases, their destiny is in their hands. Hence I focus on population level factors.

4) Ironically, one may wonder to what extent having an internal versus external view is itself an external factor.

Artificial Utility Monsters as Effective Altruism

10 [deleted] 25 June 2014 09:52AM

Dear effective altruist,

have you considered artificial utility monsters as a high-leverage form of altruism?

In the traditional sense, a utility monster is a hypothetical being which gains so much subjective wellbeing (SWB) from marginal input of resources that any other form of resource allocation is inferior on a utilitarian calculus. (as illustrated on SMBC)

This has been used to show that utilitarianism is not as egalitarian as it intuitively may appear, since it prioritizes some beings over others rather strictly - including humans.

The traditional utility monster is implausible even in principle - it is hard to imagine a mind that is constructed such that it will not succumb to diminishing marginal utility from additional resource allocation. There is probably some natural limit on how much SWB a mind can implement, or at least how much this can be improved by spending more on the mind. This would probably even be true for an algorithmic mind that can be sped up with faster computers, and there are probably limits to how much a digital mind can benefit in subjective speed from the parallelization of its internal subcomputations.

However, we may broaden the traditional definition somewhat and call any technology utility-monstrous if it implements high SWB with exceptionally good cost-effectiveness and in a scalable form - even if this scalability stems from a larger set of minds running in parallel, rather than one mind feeling much better or living much longer per additional joule/dollar.
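The contrast between the two definitions can be sketched numerically (a toy model with made-up constants): a single mind with diminishing returns - say, logarithmic utility in resources - gains almost nothing from each doubling of its budget, while a population of cheap near-subsistence minds scales total SWB linearly.

```python
import math

def one_big_mind(resources):
    """Classic utility monster: one mind whose SWB has diminishing
    returns in resources (logarithmic, as a stand-in)."""
    return math.log1p(resources)

def many_small_minds(resources, cost_per_mind=1.0, swb_per_mind=1.0):
    """Broadened definition: many near-subsistence minds in parallel;
    total SWB scales linearly with resources (constants are made up)."""
    return (resources / cost_per_mind) * swb_per_mind

# Doubling resources barely helps the single mind, but doubles the colony:
for r in (10, 100, 1000):
    print(r, round(one_big_mind(r), 2), many_small_minds(r))
```

Nothing here argues that real artificial minds would have log utility or unit cost; the point is only that scalability by replication sidesteps whatever per-mind ceiling exists.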

Under this definition, it may be very possible to create and sustain many artificial minds reliably and cheaply, while they all have a very high SWB level at or near subsistence. An important point here is that the possible peak intensities of artificially implemented pleasures could be far higher than those commonly found in evolved minds: our worst pains seem more intense than our best pleasures for evolutionary reasons - but the same does not have to be true for artificial sentience, whose best pleasures could be even more intense than our worst agony, without any need for suffering anywhere near that strong.

If such technologies can be invented - which seems highly plausible in principle, if not yet in practice - then the original conclusion for the utilitarian calculus is retained: It would be highly desirable for utilitarians to facilitate the invention and implementation of such utility-monstrous systems and allocate marginal resources to subsidize their existence. This makes it a potential high-value target for effective altruism.

 

Many tastes, many utility monsters

Human motivation is barely stimulated by abstract intellectual concepts, and "utilitronium" sounds more like "aluminium" than something to desire or empathize with. Consequently, the idea is as sexy as a brick. "Wireheading" evokes associations of having a piece of metal rammed into one's head, which is understandably unattractive to any evolved primate (unless it's attached to an iPod, which apparently makes it okay).

Technically, "utility monsters" suffer from a similar association problem, which is that the idea is dangerous or ethically monstrous. But since the term is so specific and established in ethical philosophy, and since "monster" can at least be given an emotive and amicable - almost endearing - tone, it seems realistic to use it positively. (Suggestions for a better name are welcome, of course.)

So a central issue for the actual implementation and funding is human attraction. It is more important to motivate humans to embrace the existence of utility monsters than it is for them to be optimally resource-efficient - after all, a technology that is never implemented or funded properly gains next to nothing from being efficient.

A compromise between raw efficiency of SWB per joule/dollar and better forms to attract humans might be best. There is probably a sweet spot - perhaps various different ones for different target groups - between resource-efficiency and attractiveness. Only die-hard utilitarians will actually want to fund something like hedonium, but the rest of the world may still respond to "The Sims - now with real pleasures!", likeable VR characters, or a new generation of reward-based Tamagotchis.

Once we step away somewhat from maximum efficiency, the possibilities expand drastically. Implementation forms may be:

  • decorative like gimmicks or screensavers, 
  • fashionable like sentient wearables, 
  • sophisticated and localized like works of art, 
  • cute like pets or children, 
  • personalized like computer game avatars retiring into paradise, 
  • erotic like virtual lovers who continue to have sex without the user,
  • nostalgic like digital spirits of dead loved ones in artificial serenity, 
  • crazy like hyperorgasmic flowers, 
  • semi-functional like joyful household robots and software assistants,
  • and of course generally a wide range of human-like and non-human-like simulated characters embedded in all kinds of virtual narratives.

 

Possible risks and mitigation strategies

Open-source utility monsters could be made public as templates, to add additional control that the implementation of sentience is correct and positive, and to make better variations easy to explore. However, this would come with the downside of potential malicious abuse and reckless harm. Risks of suffering could come from artificial unhappiness desired by users, e.g. for narratives that contain sadism, dramatic violence or punishment of evil characters for quasi-moral gratification. Another such risk could come simply from bad local modifications that implement suffering by accident.

Despite these risks, one may hope that most humans who care enough to run artificial sentience are more benevolent and careful than malevolent and careless in a way that causes more positive SWB than suffering. After all, most people love their pets and do not torture them, and other people look down on those who do (compare this discussion of Norn abuse, which resulted in extremely hostile responses). And there may be laws against causing artificial suffering. Still, this is an important point of concern.

Closed-source utility monsters may further mitigate some of this risk by not making the sentient phenotypes directly available to the public, but encapsulating their internal implementation within a well-defined interface - like a physical toy or closed-source software that can be used and run by private users, but not internally manipulated beyond a well-tested state-space without hacking.

An extremely cautionary approach would be to run the utility monsters by externally controlled dedicated institutions and only give the public - such as voters or donors - some limited control over them through communication with the institution. For instance, dedicated charities could offer "virtual paradises" to donors so they can "adopt" utility monsters living there in certain ways without allowing those donors to actually lay hands on their implementation. On the other hand, this would require a high level of trustworthiness of the institutions or charities and their controllers.

 

Not for the sake of utility monsters alone

Human values are complex, and it has been argued on LessWrong that the resource allocation of any good future should not be spent for the sake of pleasure or happiness alone. As evolved primates, we all have more than one intuitive value we hold dear, even among self-identified intellectual utilitarians, who compose only a tiny fraction of the population.

However, some discussions in the rationalist community touching on related technologies like pleasure wireheading, utilitronium, and so on, have suffered from implausible or orthogonal assumptions and associations. Since the utilitarian calculus favors SWB maximization above all else, it has been feared that we run the risk of losing a more complex future because

a) utilitarianism knows no compromise and

b) the future will be decided by one winning singleton who takes it all and

c) we have only one world with only one future to get it right

In addition, low status has been ascribed to wireheads, with the association of fake utility or cheating life as a form of low-status behavior. People have been competing for status by associating themselves with the miserable Socrates instead of the happy pig, without actually giving up real option value in their own lives.

On Scott Alexander's blog, there's a good example of a mostly pessimistic view both in the OP and in the comments. And in this comment on an effective altruism critique, Carl Shulman names hedonistic utilitarianism turning into a bad political ideology similar to communist states as a plausible failure mode of effective altruism.

So, will we all be killed by a singleton who turns us into utilitronium?

Be not afraid! These fears are plausibly unwarranted because:

a) Utilitarianism is consequentialism, and consequentialists are opportunistic compromisers - even within the conflicting impulses of their own evolved minds. The number of utilitarians who would accept existential risk for the sake of pleasure maximization is small, and practically all of them subscribe to the philosophy of cooperative compromise with orthogonal, non-exclusive values in the political marketplace. Those who don't are incompetent almost by definition and will never gain much political traction.

b) The future may very well be decided not by one singleton but by a marketplace of competing agency. Building a singleton is hard and requires the strict subjugation or absorption of all competition. Even if it were to succeed, the singleton would probably not implement only one human value, since it will be created by many humans with complex values, or at least it will have to make credible concessions to a critical mass of humans with diverse values who can stop it before it reaches singleton status. And if these mitigating assumptions are all false and a fooming singleton is possible and easy, then too much pleasure should be the least of humanity's worries - after all, in this case the Taliban, the Chinese government, the US military or some modern King Joffrey are just as likely to get the singleton as the utilitarians.

c) There are plausibly many Everett branches and many Hubble volumes like ours, implementing more than one future-earth outcome, as summed up by Max Tegmark here. Even if infinitarian multiverse theories should all end up false against current odds, a very large finite universe would still be far more realistic than a small one, given our physical observations. This makes a pre-existing value diversity highly probable, if not inevitable. For instance, if you value pristine nature in addition to SWB, you should accept the high probability of many parallel earth-like planets with pristine nature regardless of what you do, and consider that we may be in an exceptional minority position to improve the measure of other values that do not naturally evolve easily, such as a very high positive-SWB-over-suffering surplus.

 

From the present, into the future

If we accept the conclusion that utility-monstrous technology is a high-value vector for effective altruism (among others), then what could current EAs do as we transition into the future? To my best knowledge, we don't have the capacity yet to create artificial utility monsters.

However, foundational research in neuroscience and artificial intelligence/sentience theory is already ongoing today and certainly a necessity if we ever want to implement utility-monstrous systems. In addition, outreach and public discussion of the fundamental concepts is also possible and plausibly high-value (hence this post). Generally, the following steps seem all useful and could use the attention of EAs, as we progress into the future:

  1. spread the idea, refine the concepts, apply constructive criticism to all its weak spots until it becomes either solid or revealed as irredeemably undesirable
  2. identify possible misunderstandings, fears, biases etc. that may reduce human acceptance and find compromises and attraction factors to mitigate them
  3. fund and do the scientific research that, if successful, could lead to utility-monstrous technologies
  4. fund the implementation of the first actual utility monsters and test them thoroughly, then improve on the design, then test again, etc.
  5. either make the templates public (open-source approach) or make them available for specialized altruistic institutions, such as private charities
  6. perform outreach and fundraising to give existence donations to as many utility monsters as possible

All of this can be done without much self-sacrifice on the part of any individual. And all of this can be done within existing political systems, existing markets, and without violating anyone's rights.

Unfriendly Natural Intelligence

8 Gunnar_Zarncke 15 April 2014 05:05AM

Related to: UFAI, Paperclip maximizer, Reason as memetic immune disorder

A discussion with Stefan (cheers, didn't get your email, please message me) during the European Community Weekend Berlin fleshed out an idea I had toyed around with for some time:

If a UFAI can wreak havoc by driving simple goals to extremes, then driving human desires to extremes should also cause problems. And we should already see this.

Actually we do. 

We know that just following our instincts on eating (sugar, fat) is unhealthy. We know that stimulating our pleasure centers more or less directly (drugs) is dangerous. We know that playing certain games can lead to comparable addiction. And the recognition of this has led to a large number of more or less fine-tuned anti-memes, e.g. dieting, early drug prevention, helplines. These memes steering us away from such behaviors were selected for because they provided aggregate benefits to the (members of) social (sub)systems they are present in.

Many of these memes have become so self-evident that we don't recognize them as such. Some are essential parts of highly complex social systems. What is the general pattern? Did we catch all the critical cases? Are the existing memes well-suited for the task? How are they related? Many are probably deeply woven into our culture and traditions.

Did we miss any anti-memes? 

This last question is really at the core of this post. I think we lack some necessary memes keeping new exploitations of our desires in check. Some of these new exploitations result from our society (a) having developed the capacity to exploit our desires and (b) having the scientific knowledge to know how to do so.

continue reading »

Teapots and Soda Cans

6 Odinn 01 September 2013 10:21PM

I was reading an earnest and thought-provoking editorial1 from one James Wood, reviewing 'Letter To a Christian Nation' by Sam Harris. Though an atheist himself, he admits a flagging patience with certain attitudes of atheists. I can concede that an atheist's superior and glib demeanor may be due to frustration and no small amount of pessimistic inference about the human condition, but I had to comment on a rebuttal he gives regarding Bertrand Russell's celestial teapot2.

He claims that God, being so much grander and more complex than a teapot, cannot be banished with such a simplistic comparison, whereas I would insist that God is actually much less believable than the teapot for that exact reason. I think Russell's teapot is due for an update that is more approachable and grounded. Here goes:

I claim that there is a discarded Coke can somewhere in the vastness of the Sahara, but I will brook absolutely no discussion about doubting my claim or investigating it for veracity. "Okay," you think, "I suppose I can assume that much to be true. Whatever this man's sources, the odds of a Coke can being somewhere in the desert must be considerable." But I then elaborate with claims that it's actually many, many cans, folded into glorious and artistically pleasing forms, and my obdurate refusal to discuss how it can be proved continues. At this point even the most generous theists would likely start getting annoyed with my odd behavior, yet at the very least what I'm asking you to believe isn't outside the realm of possibility. For all you know (though I refuse to allow you to check) there could be a folk art bazaar currently set up in the Sahara, so really it costs you very little to entertain my view.

And then I say that the cans have taken on beautiful, shimmering consciousness and are forming a society which hides from humanity, burying their chrome castles beneath the sand and moving their aluminum cities whenever we get too close to discovering them. "But..." you try to cut in. Before you can even begin to tell me what you find odd about my fantasy, I'm on the next detail. I claim that all of our major technological achievements of the last several hundred years are all thanks to the secret influence of the Shiny Can People.

Now you have countless legitimate doubts, but every time you try to tell me that, for starters, soda didn't even come in aluminum cans several hundred years ago, I insist that you weren't there so you can't be sure, and how could a mere burden of proof destroy the mighty empire of the Shiny Cans?

I like the utility of the can people because the story doesn't start with an outlandish proposition, but if you stick around it gets absolutely ridiculous. Not only does that remind me more of how religion is actually sold, but it also strengthens the original analogy of the teapot by reminding the curious mind that Russell's teapot is infinitely smaller and less complex than God, making it much less embarrassing to genuinely believe in, since it would have so much more room to hide.

Odinn Celusta

1) http://www.newrepublic.com/article/the-celestial-teapot

2) http://en.wikipedia.org/wiki/Russell's_teapot

Research is polygamous! The importance of what you do needn't be proportional to your awesomeness

22 diegocaleiro 26 May 2013 10:29PM

In a recent discussion a friend was telling me how he felt he was not as smart as the people he thinks are doing the best research on the most important topics. He named a few jaw-dropping people, who indeed are smarter than he is, and mentioned their research agendas, say, A, B, and C.

From that, a remarkable implication followed in his cognitive algorithm:

 

Therefore I should research thing D or thing E. 

 

Which made me pause for a moment. Here is a hypothetical schematic of this conception of the world. Arrows stand for "ought to research":

Humans by Level of Awesome (HLA)             Research Agenda by Level of Importance (RALI)

Mrs 1 --------> X-risk #1
2 --------> X-risk #2
3 --------> Longevity
4 --------> Malaria Reduction
5 --------> Enhancement
1344 --------> Increasing Puppies' Cuteness
Etc...

 

It made me think of the problem of creating matchmaking algorithms for websites where people want to pair up to do things such as playing tennis, playing chess, or having a romantic relationship.

This reasoning is profoundly mistaken, and I can look back into my mind and remember dozens of times I have made the exact same mistake. So I thought it would be good to spell it out 10 times in different ways, for the unconscious bots in my mind that haven't got it yet:

1) Research agenda topics are polygamous: they do not mind if someone else is researching them besides the very best people who could be doing such research.

2) The function above should not be one-to-one (biunivocal), but many-to-one. 

3) There is no relation of overshadowing based on someone's awesomeness to everyone else who researches the same topic, unless they are researching the same narrow minimal sub-type of the same question coming from the same background. 

4) Overdetermination doesn't happen at the "general topic level". 

5) Awesome people do not obfuscate what less awesome people do in their area, they catapult it, by creating resources. 

6) Being in an area where the most awesome people are is not asking to "lose the game"; it is being in an environment that cultivates greatness.

7) The amount of awesomeness in a field does not supervene on the amount of awesomeness in its best explorer.

8) The Best person in each area would never be able to cause progress alone. 

9) To want to be the best in something has absolutely no precedence over doing something that matters. 

10) If you believed in monogamous research, you'd be in the awkward situation where finding out that no one gives a flying fuck about X-risk should make you ecstatic, and that can't be right. That there are people doing something that matters so well that you currently estimate you can't beat them should be fantastic news!
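Point 2 above can be pictured directly. A minimal sketch, with invented researcher and topic names, of why the researcher-to-topic mapping should be many-to-one rather than one-to-one (injective):

```python
# Point 2 as code: the researcher -> topic mapping should be many-to-one,
# not one-to-one. All names and topics below are invented for illustration.

many_to_one = {
    "researcher_a": "x-risk",
    "researcher_b": "x-risk",      # same topic as researcher_a: no conflict
    "researcher_c": "longevity",
}

def is_one_to_one(mapping):
    """A mapping is one-to-one (injective) iff no two keys share a value."""
    return len(set(mapping.values())) == len(mapping)

print(is_one_to_one(many_to_one))  # False: a topic happily takes many researchers
```

The point the list makes is exactly that the `False` here is fine: nothing about a topic's importance requires it to have a unique, maximally awesome researcher.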

Well, I hope every last cortical column I have gets it now, and that the overall surrounding being may be a little less wrong.

Also, this text by Michael Vassar is magnificent, and makes a related set of points. 

 

 

 

Drowning In An Information Ocean

25 diegocaleiro 30 March 2013 04:32AM


I decided to take a look at the books hanging around the Future of Humanity Institute. It is a sobering and sad experience. I'd say there are a little fewer than two thousand books.

80% of books I wouldn't mind reading,

1/2 I would read,

1/3 I should read

and 1/5 I must read!  

I predict that I'll actually read 1/400 of them, counting the ones there and their enhanced successors. How emotionally terrible it is to live in such a technically competent society and want to understand the world! Since 2000 I've abandoned TV, videogames, celebrity gossip, musical ability, knowledge about bands, politics, theater classes, dancing classes, handball, tennis, reading fiction, reading parts of Facebook, maintaining contact with groups X and Y of friends, newspapers, magazines and comics. All in the name of keeping up with human knowledge in some areas that fascinate me, mostly areas having to do with the nature of minds and mental states. Come to think of it, the only two things that really, really interest me are minds and evolution. My curiosity is very narrow; it should be no trouble to learn a satisfactory amount about two things, right? So if you want to know what a mind is and what it does, and to get a grasp on the outlook of evolved stuff, you need to go through areas like:

Positive Psychology, Evolutionary Psychology, Animal Cognition (Ethology), Cultural Evolution, Cognitive Neuroscience, Cognitive Science, Artificial Intelligence, Philosophy of Mind, Philosophy of Cognitive Science, Primatology, Physical and Biological Anthropology.  

Which I did. 

Dig up a bit and you'll find that those require knowledge from Evolutionary Biology, Neuroeconomics, Basic Neuroscience, Genetics, Proof Theory, Formal Logic, and Anthropic Reasoning; and from Anthropic Reasoning you get a lot of physics requirements, mostly in cosmology and a bit in particle physics. Dig a little further and you can't get a lot of what is up there without grasping Maynard Smith's and Trivers's thoughts on biology, which come from economics, and by the time you notice, you are surrounded by isoquants, comparing stable equilibria across disciplines and thinking of economic metaphors for how the ventromedial prefrontal cortex settles some decision issues. Which of course requires that you understand metaphors, so you'll have to check some Hofstadter and Pinker on those issues, which will require at least some very basic linguistics, or at least an outlook of philosophy of language. Did I mention that most of this only works if you are rational, and that means you'd better have read the Sequences prior to all this stuff?

Then there are the nagging exact-sciences people. They come to you at night, haunt you in your dreams, telling you how much you should study math, how math is important for this, for that, and for that. Most disagree about which branches of math are important, stats being the most universally recommended. If I were to learn all the math I was told to learn, that would take at least 3 more years. Scott Young can do an entire university course (CS) in one year; Nick Bostrom kept that pace for 6 or 7 years. Most people don't get the mix of time, luck, capacity, resources and, most importantly, motivation to pursue such Homeric tasks.

I've never doubted math is awesome. What I did doubt, and to this day I have seen few who doubt with me (good examples being Peter Thiel, more strongly, and Jared Diamond and Dan Dennett, less strongly), is that so many young talents should be drawn into physics and math (and chess). Why should we make people who are really smart do the things in which it is easiest to detect being smart? Companies don't ask their best employees to devise ever better and more complicated IQ tests just because IQ tests are good predictors of how good a worker will be. The goal is not to costly-signal being near the upper bound of intelligence. The goal is to use your intelligence to pursue your goals. Sure, a lot of it will be instrumental signalling, but once the dust settles, don't get fixated on proving the constructibility of enormously large polygons, or on beating Kasparov.

So far I've tried to make two cases. First, even with prima facie narrow interests, anyone is bound to be drowning in an ocean of information, and the interconnectedness and prerequisites of narrow interests may be much greater than one's initial expectation. Second, the main modulator of what to do with intelligence (your own, or someone else's) should be to tune it to goals and interests, not to easy detectability.

 

Swimming Upwards

To avoid drowning in the ocean, I've already mentioned a lot of weight I found I could live without: TV, videogames, celebrity gossip, musical ability, knowledge about bands, politics, theater classes, dancing classes, handball, tennis, reading fiction, reading parts of Facebook, maintaining contact with groups X and Y of friends, newspapers, magazines and comics. Those were not easy choices; each comes with a cost, a sadness, and a feeling that something valuable has been lost. The richness of flavors of life got somewhat poorer, because at least about minds and evolution, I wanted to keep track of human knowledge.

It is hard enough not to go after understanding muons better, or knowing whether Brontosaurs really had extra little brains throughout their necks, or why vegetables are healthier than a double bacon cheeseburger. But this tradeoff is knowing X versus knowing Y. It gets messy when it becomes earning X versus knowing Y, loving G versus knowing Y, containing curiosity about Facebook update F versus knowing Y, and going to U's party versus knowing Y.

Keeping a positive information diet helps, but I'm unsure even that stringent criterion is enough to know as much as one would like about one's narrow interests. Thus here I am, surrounded by 400 books I must read, imagining how many new books that I'd put in the "must read" category are created every month. The same probably goes for the number of pages of scientific and philosophical journal papers. Stephen Hawking points out that you'd have to run faster than a car to read all the written knowledge being created. I think the drowning metaphor is better because if books were liquid, you would quite likely not be able to swim even an aquarium's worth of your own interests. I'm even considering moving to a cold, low-light area of the world, just to carry less weight (fewer distractions), so I can swim a little longer.

 

Writing, Advocating and Teaching 

Finally, there is the ultimate tradeoff. Being a child versus being a parent. Getting memes versus spreading memes. Learning versus teaching. Exploration versus exploitation. Being directed versus directing. Paying attention versus becoming focus. Riding versus driving. 

Writing takes a ridiculously long time. Writing this text so far took me about 2 hours, and it is simple, autobiographical, mostly couched in folk-psychological concepts, and not very theory-laden. My rule of thumb for writing technical material is one hour per page. In that time I could read up to 40 times as much. Assuming one publishable page per three written, the choice is between writing three books and reading the 400 that surround me. Surely a lot of learning requires reprocessing, and one of the best ways to learn is to reconfigure our mental constructs, using cross-disciplinary knowledge to compose new ideas out of read ones (Pasupathi 2012). Writing is learning, but it is still costly learning.
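That tradeoff can be checked with back-of-the-envelope arithmetic. The sketch below uses the figures stated in the text (one hour per written page, reading 40 times faster, one publishable page per three written); the ~300 pages per book is my own assumption, not from the text:

```python
# Back-of-the-envelope check of the writing-vs-reading tradeoff.
# Figures from the text: 1 hour per written page, reading 40x faster,
# one publishable page per three written.
# PAGES_PER_BOOK is my own rough assumption.

PAGES_PER_BOOK = 300
WRITE_HOURS_PER_PAGE = 1.0
READ_SPEEDUP = 40            # pages readable per hour spent not writing
PUBLISHABILITY = 1 / 3       # publishable pages per written page

books_to_write = 3
pages_written = books_to_write * PAGES_PER_BOOK / PUBLISHABILITY  # draft pages needed
hours = pages_written * WRITE_HOURS_PER_PAGE

books_readable = hours * READ_SPEEDUP / PAGES_PER_BOOK
print(f"Writing {books_to_write} books costs ~{hours:.0f} hours;")
print(f"the same hours spent reading cover ~{books_readable:.0f} books.")
```

Under these assumptions the answer comes out to roughly 360 books of reading for three books of writing, which matches the "three books versus the 400 around me" framing.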

When considering whether someone should go into research, one should look not only at all the considerations suggested by the 80,000 Hours community, but also at how much that individual is driven to share knowledge once acquired. Some people really want to output as much as possible, but many care, by and large, mostly about the input; and given that writing one book may cost reading up to 120, they can rest assured there will always be very interesting material eager to be read, jumping ahead of writing in their priority list. In the last two years, the Teens and Twenties, a conference for young cryonicists (many of them LessWrongers), had, out of four personality types, a vast majority of curiosity-driven individuals. Much stronger incentives are needed to get people writing their theses than to get them reading about their thesis topics.

 

The Examined Swim 

From many perspectives, in particular that of technical achievement and development, it is great and fascinating that we live in such an accelerated scientific age. In other states of mind, or ways of thinking, it is not so great. Those states of mind are not frequently ones that show up in books, especially not in academic courses. They deal not with the speed or depth of things, but with breadth, gravity, resonance, luminance, sacredness. Some books, like The Examined Life, The Guinea Pig Diaries, and Mortals and Others, and lots of songs and movies, deal with those aspects.

Wearing the transhumanist technoprogressive hat, what I don't like about drowning is similar to what I don't like about the cosmological constant: it would be really cool if the speed of creation and my speed of absorption were exactly the same, and it would be really cool if the universe were stable instead of getting cold. It's something I can shrug about and move on from.

Wearing the other hat, the surrounding ocean of great books has a more sinister message to tell. It reminds me of the finitude of the human condition; it is a visual reminder of all I'll never know, never see, taste, borrow or steal. More than that, because all aspects of life are in constant competition for attentional resources, it takes a lot of effort and anguish to choose to go for those books. The plunge is deep, and wearing this hat, I can't help but think it may not be worth it.

In a recent conversation, one of the enhancement researchers here pointed out that it may be the case that Modafinil is not an enhancement for the individual, but is for society as a whole. An individual won't change much from taking Modafinil, and may pay costs if it has particularly adverse effects for that person. Society, on the other hand, will benefit greatly from the additional capacity of hundreds of thousands of scientists, each a little smarter.

It may well be that society needs you not to drown, and incentivizes you to swim as fast as you can, cost what it may; this is certainly the case in the corporate world. Thinking of yourself as a utility function and wearing the technoprogressive hat surely signals your allegiance to (this) society's cause. Yet wearing the other hat, as I often do, sometimes tempts me to let go and delve into the Sirens' songs...

 

Number of Members on LessWrong

3 Epiphany 17 August 2012 05:47AM

I was excited to find this site, so I wanted to know how many people had joined LessWrong.  Was it what it seemed, a place where a lot of people had actually gathered around the theme of rational thought, or was that just wishful thinking about a site that a guy with a neat idea and his buddies put together?  I couldn't find anything stating the number of members on LessWrong anywhere on the site or the internet, so I decided it would be a fun test of my search engine knowledge to nail jello to a tree and make my own figure.

Some argue that Google totals are completely meaningless; the real problem, however, is that the process is very complicated, and if you don't know how search engines work, your likelihood of getting a usable number is low.  I took the potential pitfalls into account when MacGyvering this figure out of Google.  So far, no one has posted a significant flaw with my specific method.  (I will change that statement if they do, once I've read their comment.)  Also, I was right (Find in page: total).

Here is the query I constructed:

site:lesswrong.com/user -"submitted by" -"comments by"

(Translation provided at the end.)

This gets a similar result in Bing and Yahoo:

"lesswrong.com/user"

If this is correct, LessWrong has over 9,000 members.  That's my claim: "LessWrong probably has over 9,000 members" not "LessWrong has exactly 9,000 members".  My LessWrong population figure is likely to be low.  (I explain this below.)

Why did I do this?  I was really overjoyed to find this site and wanted to see whether it was somebody's personal site with just a few buddies, or if they actually managed to draw a significant gathering of people who are interested in rational thought.  I was very happy to see that it looks much bigger than a personal site.  Since it was so hard to find out how many users LessWrong has, I decided to share.

I think a lot of people fall for the hasty generalization that "all search engine totals are meaningless".  If you're an average user just plugging in search terms with little understanding of how search engines work, then yes, you should regard them as meaningless.  However, if you know the limitations of a technique, and which parts of the system you're working within are consistent and which are not, I say it is possible to get some meaning within those limitations.  Do I know all the limitations?  Well, I assume I am unaware of things I don't know, so I won't say that.  But I do know that so far nobody has proven this number or method wrong.  If you want to prove me wrong, go for it.  That would be fascinating.  Remember that the claim is "LessWrong probably has over 9,000 members".  The entire purpose of this was to get an "at least this many" figure for how many members LessWrong has.  The inaccuracies I've already taken into consideration in order to compensate for the limits of this technique are listed below:

 

Why this is an "at least this many" figure, pitfalls I've avoided or addressed, and inaccuracies.

  - Some users may not be included in Google's index yet.  For instance, if they have never posted, there may be no link to their page (which is what I searched for - user pages), and the spider would not find them.  This may be restricted to members that have actually commented, posted, or have been linked to in some way somewhere on the internet. 

  - Search engine caches are not in real time.  There can be a lag of up to months, depending on how much the search engine "likes" the page.

  - It has been reported by previous employees of a major search engine that they are using crazy old computer equipment to store their caches.  I've been told that it is common for sections of cache to be down for that reason.

  - Search engines have restrictions in place to conserve resources.  For instance, they won't let you peruse all of the results using the "next" button, and they don't total all of the results that they have when you first press "search" (you may see that number increase later if you continue to press "next" to see more pages of results.)

  - It has been argued that Google doesn't interpret search terms the way you'd think.  I knew that before I started.  The query  was designed with that in mind.  I explain that here: http://lesswrong.com/r/discussion/lw/e4j/number_of_members_on_lesswrong/780g

  - Some of the results in Bing and Yahoo were irrelevant, though I think I weeded them pretty thoroughly for Google if my random samples of results pages are a good indication of the whole.

  - When you go to your user page, if you have more than 10 comments, a next link shows at the bottom and clicking it makes more pages appear.  My understanding is that Google doesn't index these types of links - and they don't seem to be getting included.  http://lesswrong.com/lw/e4j/number_of_members_on_lesswrong/7839

Go ahead and check it out - stick the query in Google and see how many LessWrong members it shows.  You'll certainly get a more up-to-date total than I have posted here.  ;)

 

Translation for those of you that don't know Google's codes:

site:lesswrong.com/user

"Search only lesswrong.com, only the user directory."

(The user directory is where each user's home page is, so I'm essentially telling it "find all the home page directories".)

-"submitted by" -"comments by"

Exclude any page in that directory with the exact text "submitted by" or "comments by"

(The submissions and comments pages use a URL in that directory, so they would show up in the results if I did not subtract them.  Also, I used exact text specific to those pages, so that the text in the links on user home pages does not get user home pages omitted from the search.)
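The filtering logic of that query can be mimicked locally. Here is a toy sketch of the same semantics, where the sample URLs and page texts are entirely invented for illustration: keep only pages under the user directory whose text contains neither excluded phrase.

```python
# Toy model of the query's filtering logic:
#   site:lesswrong.com/user -"submitted by" -"comments by"
# All sample pages below are invented for illustration.

pages = {
    "lesswrong.com/user/alice": "Alice's profile. Recent activity.",
    "lesswrong.com/user/alice/submitted": "Links submitted by alice",
    "lesswrong.com/user/alice/comments": "All comments by alice",
    "lesswrong.com/user/bob": "Bob's profile. Recent activity.",
    "lesswrong.com/about": "About LessWrong",
}

def matches(url, text):
    """True if the page would survive the site restriction and both exclusions."""
    in_user_dir = url.startswith("lesswrong.com/user")
    excluded = "submitted by" in text.lower() or "comments by" in text.lower()
    return in_user_dir and not excluded

member_pages = [url for url, text in pages.items() if matches(url, text)]
print(len(member_pages))  # only the two profile pages survive
```

The submissions and comments pages are filtered out by their characteristic text, exactly as the `-"submitted by" -"comments by"` terms are meant to do.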

 

Note:

I realize this number isn't scientific proof of anything (we can't see Google's code, so claiming that would be foolish), which is why I'm not attempting to use it to convince anyone of anything important.

 

 

Shortening the Unshortenable Way

-2 Duk3 26 July 2011 06:44AM

 

or

A Starting Point for Defense against Flexible Dark Artists and Circumstances

 

In On Seeking a Shortening of the Way the assertion “Maybe we're not geniuses because we don't bother paying attention to ordinary things” caught my eye. Certainly! I said. Obviously if we were able to pay the appropriate amount of attention to every occurrence so as to gain enough data to update our models in an optimal way, we would rapidly increase our overall ability to model the world and increase our probability of insights at the level currently considered ‘genius.’

 

                And then I remembered that I can't really do that, on account of having crappy models of what is actually important, and of thinking that I can't improve those models quickly. Whoops! I, like so many others, fail to know how much attention to pay to ordinary things so as to become a genius. C'est la vie. Fortunately the lesson here was not the factuality of the statement, which is high, but a reminder that you could probably gain benefits from paying more attention and being more disciplined in your thought.

                Which is even better because it's great advice, and eminently doable. Thanks, Yvain! So I set about paying attention to how I currently pay attention and, as usual, paid attention to the cues I get about how other people pay attention, assuming that I make the mistakes they do at least some of the time.

                And then I realized… wait a minute, whenever other people aren't actually paying attention is when I could most easily shanghai them into doing things they normally wouldn't do (Were I a dark artist. Hypothetically.). So learning how to pay more attention, and to pay attention in the correct way, is probably the best reflexive method of avoiding being Dutch-booked by people who are highly adaptable dark artists.

                And here’s my low-hanging fruit of techniques to build the foundational reflexes for shortening the way. The goal is to avoid being inattentive in certain sorts of situations where I noted personal susceptibility to being taken advantage of by changing situations or flexible con artists.

                Summary: Act like Suspicious, Smart, Rich People Do. Assume everyone and everything is both an opportunity and an encounter with a parasite, and don't act like it unless it's socially convenient. How do you do this, you say. It sounds more difficult than that, you say. On the contrary, skeptical sir! I will now present an exercise, designed to become reflexive, which separates the practice from the situation so that you can learn the requisite acting skills separately. Try this!

Ask yourself, for new people, situations, arguments, and facts: What is this worth to me? What risks do I run by paying attention to this? What opportunities lie in this, if my understanding of it is correct? What risks do I run if my understanding of it is incorrect? And you can go as much deeper as you think is valuable or are mentally capable of sustaining.

                  For the step-by-steppers out there (I salute you!), here’s explicitly How To start doing this in a low-cost way.

Step 1: In your journal for daily events (If you’re not keeping one of these go buy a journal and start. Without a daily log how do you know you’re actually making progress?) use Pen and Paper (The Great Equalizer!) and write down your understanding of a couple of important topics and a few simple topics (the simple topics shouldn’t take as long… right?). This will be a lot of work! But it’s only for one day, and developing this mental habit in particular and your ability to do rational yet seemingly onerous things for a brief period each day will both be massively valuable.

Step 2: When That Gets Boring, elaborate with pros and cons, an analysis of arguments, or other techniques that professionals use when it’s important (Imagine a lawyer not analyzing their opponent’s arguments, and then imagine yourself as their client.).  Do a Fermi calculation (here's some practice) if it involves a number of things you don’t understand well.

Step 3: Avoid abusing this method to convince yourself you don't need to run the numbers, by pretending someone else, someone biased, wrote the analysis. (Those darned biased people, cropping up even in your own journal!) Think of how future versions of yourself will look at your thought processes (you'll be smarter then... wiser... with a knowledge of common logical fallacies and the heuristics and biases literature) (you might even read Gödel, Escher, Bach or something and blow your mind. Anything is possible!). Look over your previous analyses before deciding (sleep on it and wait on it). Developing a decent set of evidence from Fermi calculations and calibration exercises will let you use the same thought processes to do this right when you don't have time to run the numbers.

Step 4: Profit.
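Step 2's Fermi calculation can be as simple as multiplying a few order-of-magnitude guesses and keeping track of units. A toy example, where every input number is my own rough guess (the mechanics are the point, not the answer):

```python
# Toy Fermi estimate: how many book-pages could I read in a year?
# Every input below is a rough personal guess, marked as such --
# the exercise is multiplying order-of-magnitude factors with units.

hours_per_day = 2          # daily reading time (guess)
pages_per_hour = 30        # reading speed (guess)
days_per_year = 300        # allowing for off days (guess)
pages_per_book = 300       # typical nonfiction book length (guess)

pages_per_year = hours_per_day * pages_per_hour * days_per_year
books_per_year = pages_per_year / pages_per_book
print(f"~{pages_per_year} pages, or ~{books_per_year:.0f} books, per year")
```

Writing the factors down like this, rather than guessing the final number directly, is what makes the estimate checkable later: each input can be challenged and revised on its own.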

 

 

Is there evolutionary selection for female orgasms?

5 NancyLebovitz 13 October 2010 02:07PM

>Elisabeth Lloyd: I don’t actually know. I think that it’s at a very problematic intersection of topics. I mean, you’re taking the intersection of human evolution, women, sexuality – once you take that intersection you’re bound to kind of get a disaster. More than that, when evolutionists have looked at this topic, I think that they’ve had quite a few items on their agenda, including telling the story about human origins that bolsters up the family, monogamy, a certain view of female sexuality that’s complementary to a certain view of male sexuality. And all of those items have been on their agenda and it’s quite visible in their explanations.

>Natasha Mitchell: I guess it’s perplexed people partly, too, because women don’t need an orgasm to become pregnant, and so the question is: well, what’s its purpose? Well, is its purpose to give us pleasure so that we have sex, so that we can become pregnant, according to the classic evolutionary theories?

>Elisabeth Lloyd: The problem is even worse than it appears at first because not only is orgasm not necessary on the female side to become pregnant, there isn’t even any evidence that orgasm makes any difference at all to fertility, or pregnancy rate, or reproductive success. It seems intuitive that a female orgasm would motivate females to engage in intercourse which would naturally lead to more pregnancies or help with bonding or something like that, but the evidence simply doesn’t back that up.

The whole discussion. It backs my theory that using evolution to explain current traits seriously tempts people to make things up.