
Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun"

1 contravariant 28 August 2015 01:12AM

I recently encountered something that is, in my opinion, one of the most absurd failure modes of the human brain. I first noticed it while introspecting on useful things that I enjoy doing, such as programming and writing. My enjoyment of the activity doesn't seem to help much when it comes to motivation for earning income. This was not boredom from too much programming, as it did not affect my interest in personal projects. What it seemed to be was the brain categorizing activities into "work" and "fun" boxes. On one memorable occasion, after taking a break because I was exhausted with work, I entertained myself by programming some more, this time on a hobby personal project (as a freelancer, I pick the projects I work on, so this is not a matter of being told what to do). Relaxing by doing the exact same thing that made me exhausted in the first place.

The absurdity of this becomes evident when you consider what distinguishes "work" from "fun" in this case: added value. Nothing changes about the activity except the addition of more utility, so a "work" strategy should always dominate a "fun" strategy, assuming the activity is the same. If you are having fun doing something, handing you some money can't make you worse off. Yet making the outcome better makes you avoid it. This means the brain is adopting a strategy whose (side?) effect is to minimize future utility, and it seems to be utility and not just money here - as anyone who took a class in an area that personally interested them knows, other benefits like grades recreate this effect just as well. This is why I think this is among the most absurd biases - I can understand akrasia, wanting the happiness now and hyperbolically discounting what happens later, or biases that make something seem like the best option when it really isn't. But knowingly punishing what brings happiness just because it also benefits you in the future? It's as if the discounting curve dips into the negative region. I would really like to learn where the dividing line is between the kinds of added value that create this effect and those that don't (money obviously does, and immediate enjoyment obviously doesn't). Currently I'm led to believe that the difference is present utility vs. future utility (as I mentioned above), or final vs. instrumental goals - please correct me if I'm wrong here.
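
To make the dominance argument explicit, here is a toy sketch with made-up utility numbers (nothing here is measured data; it just restates the comparison above):

# Toy illustration: the same activity, framed as "fun" vs. as "work".
enjoyment = 10          # utility from doing the activity itself
payment = 5             # extra utility from being paid to do it

utility_fun = enjoyment              # "fun" framing: the activity alone
utility_work = enjoyment + payment   # "work" framing: the activity plus payment

# As long as payment >= 0, the "work" framing weakly dominates the "fun" framing...
assert utility_work >= utility_fun
# ...and yet the observed motivation ordering is the reverse.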

This effect has been studied in psychology under the name of the overjustification effect, so called because the leading theory explains it in terms of the brain assuming that the motivation comes from the instrumental gain instead of the direct enjoyment, and then reducing the motivation accordingly. This would suggest that the brain has trouble seeing a goal as being both instrumental and final, and that for some reason the instrumental side always wins in a conflict. However, its explanation in terms of self-perception bothers me a little, since I find it hard to believe that a recent creation like self-perception can override something as ancient and low-level as enjoyment of final goals. I searched LessWrong for discussions of the overjustification effect, and the ones I found discussed it in the context of self-perception, not decision-making and motivation. It is the latter that I wanted to ask for your thoughts on.

 

Rationality Reading Group: Part H: Against Doublethink

3 Gram_Stone 27 August 2015 01:22AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part H: Against Doublethink (pp. 343-361). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

H. Against Doublethink

81. Singlethink - The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.

82. Doublethink (Choosing to be Biased) - George Orwell wrote about what he called "doublethink", where a person is able to hold two contradictory thoughts in their mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems.

83. No, Really, I've Deceived Myself - Some people who have fallen into self-deception haven't actually deceived themselves. Some of them simply believe that they have deceived themselves, but have not actually done this.

84. Belief in Self-Deception - Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.

85. Moore's Paradox - People often mistake reasons for endorsing a proposition for reasons to believe that proposition.

86. Don't Believe You'll Self-Deceive - It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part I: Seeing with Fresh Eyes (pp. 365-406). The discussion will go live on Wednesday, 9 September 2015, right here on the discussion forum of LessWrong.

Personal story about benefits of Rationality Dojo and shutting up and multiplying

6 Gleb_Tsipursky 26 August 2015 04:38PM

My wife and I have been going to the Ohio Rationality Dojo for a few months now. It was started by Raelifin, who has substantial expertise in probabilistic thinking and Bayesian reasoning, and I wanted to share how the dojo helped us make a rational decision about house shopping. We were comparing two houses. We had an intuitive favorite (170 on the image) but decided to compare it to our second favorite (450) by actually shutting up and multiplying, based on exercises we did as part of the dojo.

For each part of the house, we multiplied the value we placed on that part by how much we would use it, with separate numbers for the two of us (A for my wife, Agnes Vishnevkin, and G for me, Gleb Tsipursky, on the image). Compared mathematically this way, 450 came out way ahead. It was hard to update our beliefs, but we did it, and we are now orienting toward that house as our primary choice. Rationality for the win!

Here is the image of our back-of-the-napkin calculations.
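
In case the napkin is hard to read, here is a minimal sketch of the kind of value-times-use calculation we did; the room names and numbers below are made up for illustration and are not the figures from the image:

# Hypothetical scores (1-10) for how much a room is valued and how much it would be used.
houses = {
    "170": {"kitchen": {"value": 8, "use": 5}, "office": {"value": 6, "use": 3}},
    "450": {"kitchen": {"value": 7, "use": 6}, "office": {"value": 9, "use": 8}},
}

def house_score(rooms):
    # Total score for one person: sum over rooms of value * use.
    return sum(r["value"] * r["use"] for r in rooms.values())

for name, rooms in houses.items():
    print(name, house_score(rooms))

We each did this scoring separately (the A and G columns on the image) and then compared the totals.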

 

Sensation & Perception

1 ScottL 26 August 2015 01:13PM

(The notes below are essentially my attempt to summarise the content of this sample chapter from this book. I am posting this in Discussion because I don't think I will get the time, or be bothered enough, to improve upon it, so I am posting it now in the hope that someone finds it interesting or useful. If you do find it interesting, check out the full chapter, which goes into more detail.)

We don't experience the world directly; instead we experience it through a series of "filters" that we call our senses. We know that this is true because of cases of sensory loss. An example is Jonathan I., a 65-year-old New York painter who, following an automobile accident, suffered from cerebral achromatopsia as well as the loss of the ability to remember and to imagine colours. He would look at a tomato and, instead of seeing colours like red or green, would see only black and shades of grey. The problem was not that Jonathan's eyes no longer worked; it was that his brain was unable to process the neural messages for colour.

To understand why Jonathan cannot see colour, we first have to realise that incoming light travels only as far as the back of the eyes. There the information it contains is converted into neural messages in a process called transduction. We call these neural messages "sensations". These sensations involve only neural representations of stimuli, not the actual stimuli themselves. Sensations such as "red" or "sweet" or "cold" can be said to be made by the brain. They also occur only when the neural signal reaches the cerebral cortex, not when you first interact with the stimuli. To us, the process seems so immediate and direct that we are often fooled into thinking that the sensation of "red" is a characteristic of the tomato or that the sensation of "cold" is a characteristic of ice cream. But they are not. What we sense is an electrochemical rendition of the world created by our brain and sensory receptors.

There is another separation between reality as it is and how we sense it to be. Organisms can only sense certain types of stimulus within certain ranges. This is captured by the absolute threshold for each type of stimulation: the minimum amount of physical energy needed to produce a sensory experience. It should be noted that a faint stimulus does not abruptly become detectable as its intensity increases. There is instead a fuzzy boundary between detection and non-detection, which means that a person's absolute threshold is in fact not absolute at all. Instead, it varies continually with our mental alertness and physical condition.
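
One standard way to formalise that fuzzy boundary (a sketch of the usual psychometric-function idea, not something taken from the chapter itself) is to treat detection probability as rising smoothly with stimulus intensity, with the nominal "absolute threshold" defined as the intensity detected some fixed proportion of the time, such as 50%:

import math

def detection_probability(intensity, threshold, slope=1.0):
    # Logistic psychometric function: probability of detection rises smoothly
    # from 0 to 1 and equals 0.5 exactly at the nominal threshold.
    return 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))

for intensity in range(11):
    print(intensity, round(detection_probability(intensity, threshold=5.0), 2))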

To understand why the thresholds vary, we can turn to signal detection theory. According to signal detection theory, sensation depends on the characteristics of the stimulus, the background stimulation, and the detector (the brain). Background stimulation makes it less likely, for example, that you will hear someone calling your name on a busy downtown street than in a quiet park. Signal detection theory also tells us that your ability to hear them depends on the condition of your brain, i.e. the detector, and, perhaps, whether it has been aroused by a strong cup of coffee or dulled by drugs or lack of sleep.
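
Signal detection theory separates how sensitive the detector is from how it is biased by context. As a rough sketch (the standard textbook formula, not code from the chapter), sensitivity d' is the difference between the z-transformed hit rate and false-alarm rate, so a noisier background with more false alarms lowers measured sensitivity even if the hit rate stays the same:

from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # Sensitivity: how well the detector separates signal from background noise.
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(d_prime(0.80, 0.10))  # quiet park: few false alarms, higher d'
print(d_prime(0.80, 0.40))  # busy street: many false alarms, lower d'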

The thresholds also change as similar stimuli continue. This is called sensory adaptation, and it refers to the diminishing responsiveness of sensory systems to prolonged stimulation, as when you adapt to the feeling of swimming in cold water. Unchanging stimulation generally shifts to the back of people's awareness, whereas intense or changing stimulation will immediately draw your attention.

So far, we have talked about how the sensory organs filter incoming stimuli and how they can only pick up certain types of stimuli. But there is also something more. We don't just sense the world; we perceive it as well. The brain, in a process called perception, combines sensations with memories, motives and emotions to create a representation of the world that fits our current concerns and interests. In essence, we impose our own meanings on sensory experience. People obviously have different memories, motives and current emotional states, and this means that we attach different meanings to our sensations, i.e. we have perceptual differences. Two people can look at the same political party or religion and come to starkly different conclusions about it.

The below picture provides a summary of the whole process discussed so far (stimulation to perception):

From stimulation to perception, there are a great number of chances for errors to creep in and for you to either misperceive or fail to perceive some types of stimuli at all. These errors are often exacerbated by mistakes made by the brain. The brain, while brilliant and complex, is not perfect. Some of the mistakes it can make include perceptual phenomena such as illusions, constancies, change blindness, and inattentional blindness. Illusions, for example, are when your mind deceives you by interpreting a stimulus pattern incorrectly. It is troubling that, despite all we know about sensation and perception, many people still uncritically accept the evidence of their senses and perceptions at face value.

That was a quick summary of perception. But an important question still needs to be asked. Is sensory perception, and how its input gets organised in our minds, the sole basis of our internal representations of the world, or is there something else that might correct any creeping errors from perception? This question was asked by many philosophers. Kant, in particular, drew a distinction between a priori concepts (things that we know before any experience) and a posteriori concepts (things that we know only from experience). He pointed out that there are some things that we can't know from experience and must instead be born with. The work of Konrad Lorenz, though, pointed out that Kant's a priori concepts are really evolutionary a posteriori concepts. That is, we didn't learn them, but our ancestors did. We might believe X despite not having seen it with our own eyes, but this is only because our ancestors who believed X survived. If we couldn't navigate the world because our internal representations of it were too distant from how the world actually is, then we would have been less likely to survive and reproduce. What this means is that we can have a priori concepts, i.e. innate knowledge, but that this innate knowledge is itself based on sensory perceptions of the world - just not yours. The types of a priori knowledge can be differentiated into the naturalistic a priori and the inference-from-premises a priori.

Is semiotics bullshit?

9 PhilGoetz 25 August 2015 02:09PM

I spent an hour recently talking with a semiotics professor who was trying to explain semiotics to me.  He was very patient, and so was I, and at the end of an hour I concluded that semiotics is like Indian chakra-based medicine:  a set of heuristic practices that work well in a lot of situations, justified by complete bullshit.

I learned that semioticians, or at least this semiotician:

  • believe that what they are doing is not philosophy, but a superset of mathematics and logic
  • use an ontology, vocabulary, and arguments taken from medieval scholastics, including Scotus
  • oppose the use of operational definitions
  • believe in the reality of something like Platonic essences
  • look down on logic, rationality, reductionism, the Enlightenment, and eliminative materialism.  He said that semiotics includes logic as a special, degenerate case, and that semiotics includes extra-logical, extra-computational reasoning.
  • seems to believe people have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis
  • claims materialism and reason each explain only a minority of the things they are supposed to explain
  • claims to have a complete, exhaustive, final theory of how thinking and reasoning works, and of the categories of reality.

When I've read short, simple introductions to semiotics, they didn't say this.  They didn't say anything I could understand that wasn't trivial.  I still haven't found one meaningful claim made by semioticians, or one use for semiotics.  I don't need to read a 300-page tome to understand that the 'C' on a cold-water faucet signifies cold water.  The only example he gave me of its use is in constructing more-persuasive advertisements.

(Now I want to see an episode of Mad Men where they hire a semiotician to sell cigarettes.)

Are there multiple "sciences" all using the name "semiotics"?  Does semiotics make any falsifiable claims?  Does it make any claims whose meanings can be uniquely determined and that were not claimed before semiotics?

His notion of "essence" is not the same as Plato's; tokens rather than types have essences, but they are distinct from their physical instantiation.  So it's a tripartite Platonism.  Semioticians take this division of reality into the physical instantiation, the objective type, and the subjective token, and argue that there are only 10 possible combinations of these things, which therefore provide a complete enumeration of the possible categories of concepts.  There was more to it than that, but I didn't follow all the distinctions. He had several different ways of saying "token, type, unbound variable", and seemed to think they were all different.

Really it all seemed like taking logic back to the middle ages.

Yudkowsky's brain is the pinnacle of evolution

-24 Yudkowsky_is_awesome 24 August 2015 08:56PM

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. “Imagine," they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle."

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until September 11, 1979, a date which will later be made a national holiday and which will replace Christmas as the biggest holiday. This was of course the day when the most important being that has ever existed or will exist was born.

Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk at anything like the scale of what Yudkowsky has done in the 2000s. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years, and Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone ever before. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky gets the Nobel Prize in Literature on the strength of his Hugo Award recognition, a special council will be organized to study Yudkowsky's intellect, and we will finally know how many orders of magnitude higher Yudkowsky's IQ is than that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs first, MIRI will eventually build a FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. Actually, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry for the tiling to start as soon as possible, and there will be mass suicides because people will want to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he will understand with his vast intellect and accept that it's truly the best thing to do.

Why people want to die

38 PhilGoetz 24 August 2015 08:13PM

Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty. They tell them that they think that way now, but they'll change their minds when they're older.

The thing is, I don't see that happening.  I live in a small town full of retirees, and those few I've asked about it are waiting for death peacefully.  When I ask them about their ambitions, or things they still want to accomplish, they have none.

Suppose that people mean what they say.  Why do they want to die?


Manhood of Humanity

6 Viliam 24 August 2015 06:31PM

This is my re-telling of Korzybski's Manhood of Humanity. (First part here.)


The virtual AI within its virtual world

3 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

In a previous post, I talked about an AI operating only on a virtual world (ideas like this used to be popular, until it was realised the AI might still want to take control of the real world to affect the virtual world; however, with methods like indifference, we can guard against this much better).

I mentioned that the more of the AI's algorithm that exists in the virtual world, the better. But why not go the whole way? Some people at MIRI and elsewhere are working on agents modelling themselves within the real world. Why not have the AI model itself as an agent inside the virtual world? We can use quining to do this, for example.
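
For readers who haven't seen one, a quine is a program that produces its own source code without reading itself from any external source, which is the basic trick behind a program containing a model of itself. A minimal Python example (just to illustrate quining, not the AI design):

# The two lines below print exactly their own source text (run and compare).
s = 's = %r\nprint(s %% s)'
print(s % s)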

Then all the restrictions on the AI - memory capacity, speed, available options - can be specified precisely, within the algorithm itself. It will only have the resources of the virtual world to achieve its goals, and this will be specified within it. We could define a "break" in the virtual world (i.e. any outside interference that the AI could cause, were it to hack us to affect its virtual world) as something that would penalise the AI's achievements, or simply as something impossible according to its model or beliefs. It would really be a case of "given these clear restrictions, find the best approach you can to achieve these goals in this specific world".

It would be ideal if the AI's motives were given not in terms of achieving anything in the virtual world as it actually runs, but in terms of making the decisions that, subject to the given restrictions, would be most likely to achieve something if the virtual world were run in its entirety. That way the AI wouldn't care if the virtual world were shut down or anything similar. It should only seek to self-modify in ways that make sense within the world, and understand itself as existing completely within these limitations.
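
As a very rough sketch of what "decisions most likely to achieve something if the virtual world were run in its entirety" could mean (my own toy illustration; ToyWorld and the method names are invented placeholders, not anything from the post): the policy is scored by rolling the world-model forward in full, so the score does not depend on whether any actual run of the virtual world is completed or shut down.

class ToyWorld:
    # Stand-in world model: the state is a single number the agent tries to grow.
    def initial_state(self):
        return 0.0
    def step(self, state, action):
        return state + action
    def reward(self, state):
        return state

def simulated_utility(policy, world_model, horizon=100):
    # Score a policy by simulating the model forward in full; the score depends
    # only on the model, never on what actually happens outside it.
    state = world_model.initial_state()
    total = 0.0
    for _ in range(horizon):
        state = world_model.step(state, policy(state))
        total += world_model.reward(state)
    return total

print(simulated_utility(lambda s: 1.0, ToyWorld()))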

Of course, this would ideally require flawless implementation of the code; we don't want bugs developing in the virtual world that point to real-world effects (unless we're really confident we have properly coded the "care only about what would happen in the virtual world, not what actually does happen" requirement).

Any thoughts on this idea?

 

AI, cure this fake person's fake cancer!

9 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

An idea for how we might successfully get useful work out of a powerful AI.

 

The ultimate box

Assume that we have an extremely detailed model of a sealed room, with a human in it and enough food, drink, air, entertainment, energy, etc... for the human to survive for a month. We have some medical equipment in the room - maybe a programmable set of surgical tools, some equipment for mixing chemicals, a loud-speaker for communication, and anything else we think might be necessary. All these objects are specified within the model.

We also have some defined input channels into this abstract room, and output channels from this room.

The AI's preferences will be defined entirely with respect to what happens in this abstract room. In a sense, this is the ultimate AI box: instead of taking a physical box and attempting to cut it out from the rest of the universe via hardware or motivational restrictions, we define an abstract box where there is no "rest of the universe" at all.

 

Cure cancer! Now! And again!

What can we do with such a setup? Well, one thing we could do is define the human in such a way that they have some form of advanced cancer. We define what "alive and not having cancer" counts as, as well as we can (the definition need not be fully rigorous). Then the AI is motivated to output some series of commands to the abstract room that results in the abstract human inside not having cancer. And, as a secondary part of its goal, it outputs the results of its process.


Open Thread - Aug 24 - Aug 30

4 Elo 24 August 2015 08:14AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

List of common human goals

6 Elo 24 August 2015 07:58AM
List of common goal areas:
This list is meant to map out an area of goal-space. It is non-exhaustive, and the descriptions are including but not limited to - some hints to help you understand where in idea-space these goals land. When constructing this list I tried to imagine a large Venn diagram in which the goals sometimes overlap. The areas mentioned are areas that have an exclusive part to them; i.e. where knowledge sometimes overlaps with self-awareness, there are parts of each that do not overlap, so both are mentioned. If you prefer a more "focussing" or feeling-based description: imagine each of these goals is a hammer, designed with a specific weight to hit a certain note on a xylophone. Often one hammer can produce the note that is meant for that key and several other keys as well. But sometimes they can't quite make them sound perfect. What is needed is the right hammer for that block to hit the right note and make the right sound. Each of these "hammers" has some note that cannot be produced through the use of other hammers.

This list has several purposes:

  1. For someone with some completed goals who is looking to move forward to new horizons; help you consider which common goal-pursuits you have not explored and if you want to try to strive for something in one of these directions.
  2. For someone without clear goals who is looking to create them and does not know where to start.
  3. For someone with too many specific goals who is looking to consider the essences of those goals and what they are really striving for.
  4. For someone who doesn't really understand goals or why we go after them to get a better feel for "what" potential goals could be.

What to do with this list?

0. Agree to invest 30 minutes of effort into a goal confirmation exercise as follows.
  1. Go through this list (copy paste to your own document) and cross out the things you probably don't care about.  Some of these have overlapping solutions of projects that you can do that fulfils multiple goal-space concepts. (5mins)
  2. For the remaining goals; rank them either "1 to n", in "tiers" of high to low priority or generally order them in some way that is coherent to you.  (For serious quantification; consider giving them points - i.e. 100 points for achieving a self-awareness and understanding goal but a pleasure/creativity goal might be only worth 20 points in comparison) (10mins)
  3. Make a list of your ongoing projects (5-10mins), and check if they actually match up to your most preferable goals. (or your number ranking) (5-10mins)  If not; make sure you have a really really good excuse for yourself.
  4. Consider how you might like to do things differently, re-prioritising your current plans to fit more in line with your goals. (10-20mins)
  5. Repeat this task at an appropriate interval (6monthly, monthly, when your goals significantly change, when your life significantly changes, when major projects end)

Why have goals?

Your goals could change in life; you could explore one area and realise you actually love another area more.  It's important to explore and keep confirming that you are still winning your own personal race to where you want to be going.
It's easy to insist that goals serve only to disappoint or burden a person. These are entirely valid fears for someone who does not yet have goals. Goals are not set in stone; however, they don't like to be modified either. I like to think of goals as doing this:
(source: internet viral images) Pictures from the Internet aside, the best reason I have ever found for picking goals is to do exactly this: make choices that a reasonable you in the future will be motivated to stick to. Outsource that planning and thinking about goal/purpose/direction to your past self. Naturally you could feel like making goals is piling on the bricks (though there is a way to make goals that does not leave them piling on like bricks); instead, think of it as rescuing future you from a day spent completely lost and wondering what you were doing, or a day spent questioning whether "this" is something that is getting you closer to what you want to be doing in life.

Below here is the list.  Good luck.


personal:

Spirituality - religion, connection to a god, meditation, the practice of gratitude or appreciation of the universe, buddhism, feeling of  a greater purpose in life.
knowledge/skill + ability - learning for fun - just to know, advanced education, becoming an expert in a field, being able to think clearly, being able to perform a certain skill (physical skill), ability to do anything from run very far and fast to hold your breath for a minute, Finding ways to get into flow or the zone, be more rational.
self-awareness/understanding - to be at a place of understanding one's place in the world, or to have an understanding of who you are; practising thinking from the eclectic perspectives of various other people and noticing how that affects your understanding of the world.
health + mental - happiness (mindset) - Do you even lift? http://thefutureprimaeval.net/why-we-even-lift/, are you fit, healthy, eating right, are you in pain, is your mind in a good place, do you have a positive internal voice, do you have bad dreams, do you feel confident, do you feel like you get enough time to yourself?
Live forever - do you want to live forever - do you want to work towards ensuring that this happens?
art/creativity - generating creative works, in any field - writing, painting, sculpting, music, performance.
pleasure/recreation - are you enjoying yourself, are you relaxing, are you doing things for you.
experience/diversity - Have you seen the world?  Have you explored your own city?  Have you met new people, are you getting out of your normal environment?
freedom - are you tied down?  Are you trapped in your situation?  Are your burdens stacked up?
romance - are you engaged in romance?  could you be?
Being first - You did something before anyone else; you broke a record. It's not because you want your name on the plaque - just the chance to do it first. You got that.
Create something new - invent something; be on the cutting edge of your field; just see a discovery for the first time.  Where the new-ness makes creating something new not quite the same as being first or being creative.

personal-world:

legacy - are you leaving something behind?  Do you have a name? Will people look back and say; I wish I was that guy!
fame/renown - Are you "the guy"?  Do you want people to know your name when you walk down the street?  Are there gossip magazines talking about you; do people want to know what you are working on in the hope of stealing some of your fame?  Is that what you want?
leadership, and military/conquer - are you climbing to the top?  Do you need to be in control?  Is that going to make the best outcomes for you?  Do you wish to destroy your enemies?  As a leader do you want people following you?  Do as you do? People should revere you. And power - in the complex; “in control” and “flick the switch” ways that overlap with other goal-space areas.  Of course there are many forms of power; but if its something that you want; you can find fulfilment through obtaining it.
Being part of something greater - The opportunity to be a piece of a bigger puzzle, are you bringing about change; do we have you to thank for being part of bringing the future closer; are you making a difference.
Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people.  Do you have an established social network?  Do you have intimacy?
Family - do you have a family of your own?  Do you want one?  Are there steps that you can take to put yourself closer to there?  Do you have a pet? Having your own offspring? Do you have intimacy?
Money/wealth - Do you have money; possessions and wealth?  Does your money earn you more money without any further effort (i.e. owning a business, earning interest on your $$, investing)
performance - Do you want to be a public performer, get on stage and entertain people?  Is that something you want to be able to do?  Or do on a regular basis?
responsibility - Do you want responsibility?  Do you want to be the one who can make the big decisions?
Achieve, Awards - Do you like gold medallions?  Do you like to strive towards an award?
influence - Do you want to be able to influence people, change hearts and minds.
Conformity - The desire to blend in; or be normal.  Just to live life as is; without being uncomfortable.
Be treated fairly - are you getting the raw end of the stick?  Are there ways that you don't have to keep being the bad guy around here?
keep up with the Joneses - you have money/wealth already, but there is also the goal of appearing like you have money/wealth.  Being the guy that other people keep up with.
Validation/acknowledgement - Positive Feedback on emotions/feeling understood/feeling that one is good and one matters

world:

improve the lives of others (helping people) - in the charity sense of raising the lowest common denominator directly.
Charity + improve the world -  indirectly.  putting money towards a cause; lobby the government to change the systems to improve people’s lives.
winning for your team/tribe/value set - doing actions but on behalf of your team, not yourself. (where they can be one and the same)
Desired world-states - make the world into a desired alternative state.  Don't like how it is; are you driven to make it into something better?

other (and negative stimuli):

addiction (fulfil addiction) - addiction feels good from the inside and can be a motivating factor for doing something.
Virtual reality success - own all the currency/coin and all the cookie clickers, grow all the levels and get all the experience points!
Revenge - Get retribution; take back what you should have rightfully had, show the world who’s boss.
Negative - avoid (i.e. pain, loneliness, debt, failure, embarrassment, jail) - where you can be motivated to avoid pain - to keep safe, or avoid something, or “get your act together”.
Negative - stagnation (avoid stagnation) - Stop standing still.  Stop sitting on your ass and DO something.


Words:

This list, being written in words, will not mean the same thing to every reader, which is why I tried to include several categories that almost overlap with each other. Some notable overlaps are: Legacy/Fame, Being first/Achievement, and Being first/Skill and ability. But of course there are several more. I really did try to keep the categories open and plural, not simplified. My analogy of hammers and notes should be kept in mind when trying to improve this list.

I welcome all suggestions and improvements to this list.
I welcome all feedback to improve the do-at-home task.
I welcome all life-changing realisations as feedback from examining this list.
I welcome the opportunity to be told how wrong I am :D

Meta-information

This document in total has been 7-10 hours of writing over about two weeks.
I have had it reviewed by a handful of people and lesswrongers before posting.  (I kept realising that someone I was talking to might get value out of it)
I wrote this because I felt like it was the least-bad way that I could think of going about:
  1. finding these ideas in the one place
  2. sharing these ideas and this way of thinking about them with you.

Please fill out the survey on whether this was helpful.

Edit: also included; (not in the comments) desired world states; and live forever.

The Sleeping Beauty problem and transformation invariances

1 aspera 23 August 2015 08:57PM

I recently read this blog post by Allen Downey in response to a reddit post in response to Julia Galef's video about the Sleeping Beauty problem. Downey's resolution boils down to a conjecture that optimal bets on lotteries should be based on one's expected state of prior information just before the bet's resolution, as opposed to one's state of prior information at the time the bet is made.

I suspect that these two distributions are always identical. In fact, I think I remember reading in one of Jaynes' papers about a requirement that any prior be invariant under the acquisition of new information. That is to say, the prior should be the weighted average of possible posteriors, where the weights are the likelihoods that each posterior would be achieved after some measurement. But now I can't find this reference anywhere, and I'm starting to doubt that I understood it correctly when I read it.
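
To check my intuition numerically, here is a toy sketch (made-up numbers) verifying that the prior equals the probability-weighted average of the possible posteriors, i.e. P(H) = P(E) P(H|E) + P(not E) P(H|not E):

# Toy check that the prior is the weighted average of the possible posteriors.
p_h = 0.3                   # prior probability of hypothesis H
p_e_given_h = 0.9           # likelihood of evidence E if H is true
p_e_given_not_h = 0.2       # likelihood of E if H is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e                      # posterior if E is observed
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)      # posterior if E is not observed

weighted_average = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(p_h, weighted_average)  # both 0.3, up to floating-point error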

So I have two questions:

1) Is there such a thing as this invariance requirement? Does anyone have a reference? It seems intuitive that the prior should be equivalent to the weighted average of posteriors, since it must contain all of our prior knowledge about a system. What is this property actually called?

2) If it exists, is it a corollary that our prior distribution must remain unchanged unless we acquire new information?

Magic and the halting problem

-4 kingmaker 23 August 2015 07:34PM

It is clear that the Harry Potter book series is fairly popular on this site, e.g. the fanfiction. That fanfiction approaches the existence of magic objectively and rationally. I would suggest, however, that most if not all of the people on this site would agree that magic, as presented in Harry Potter, is merely fantasy. Our understanding of the laws of physics and our rationality forbid anything so absurd as magic; it is regarded by most rational people as superstition.


This position can be strengthened by grabbing a stick, pointing it at some object, chanting "wingardium leviosa" and waiting for the object to rise magically. When (or if) this fails to work, a proponent of magic may resort to special pleading and claim that because we didn't believe it would work it could not work, or that we need a special wand, or that we are a squib or muggle. The proponent can perpetually move the goalposts since their idea of magic is unfalsifiable. But as it is unfalsifiable, it is rejected, in the same way that most of us on this site do not believe in any god(s). If magic were found to explain certain phenomena scientifically, however, then I, and I hope everyone else, would come to believe in it, or at least shut up and calculate.


I personally subscribe to the Many Worlds Interpretation of quantum mechanics, so I effectively "believe" in the multiverse. That means it is possible that somewhere in the universal wavefunction there is an Everett branch in which magic is real - or at least one in which, every time someone chants an incantation, by total coincidence the desired effect occurs. But how would the denizens of this universe be able to know that magic is not real, and that everything they had seen was sheer coincidence? Alan Turing pondered a related problem known as the halting problem, which asks whether a general algorithm can distinguish between algorithms that will finish and algorithms that will run forever. He proved that no algorithm can do this for all algorithms, although for some algorithms it is obvious whether they will finish executing or loop infinitely, e.g. this code segment will loop forever:

 

while (true) {

    //do nothing

}
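
(For completeness, Turing's argument can itself be sketched in a few lines: assume a general halting detector exists, then build a program that asks it about itself and does the opposite. The halts() function below is a hypothetical placeholder which, by this argument, cannot actually be implemented.)

def halts(program, argument):
    # Hypothetical general halting detector; Turing's proof shows that no
    # correct implementation of this function can exist.
    raise NotImplementedError

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:
            pass    # loop forever
    else:
        return      # halt immediately

# Any answer halts(troublemaker, troublemaker) could give is wrong,
# so a fully general halting detector is impossible:
# troublemaker(troublemaker)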

 

So how would a person distinguish between pseudo-magic that will inevitably fail, and real magic that is the true laws of physics? The only way to be certain that magic doesn't exist in this Everett Branch would be for incantations to fail repeatedly and testably, but this may happen far into the future, long after all humans are deceased. This line of thinking leads me to wonder, do our laws of physics seem as absurd to these inhabitants as their magic seems to us? How do we know that we have the right understanding of reality, as opposed to being deceived by coincidence? If every human in this magical branch is deceived the same way, does this become their true reality? And finally, what if our entire understanding of reality, including logic, is mere deception by happenstance, and everything we think we know is false?

 

Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally

7 ScottL 23 August 2015 08:01AM

A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes. Rationality only guarantees that the agent will, to the utmost of their abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes, because most agents are not omniscient or omnipotent; they are instead fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. Therefore, a rational agent will be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will most of the time achieve better outcomes than an irrational agent.

Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.

Normative rationality describes the laws of thought and action: how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality basically describes what is meant by the phrase "optimal reasoning". Of course, for limited agents true optimal reasoning is impossible, and they must instead settle for bounded optimal reasoning, which is the closest approximation to optimal reasoning that is possible given the information available to the agent and its computational abilities. The laws of thought and action (what we currently believe optimal reasoning involves) are:

  • Logic  - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.
  • Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes' Theorem, which tells you exactly how your probability for a statement should change as you encounter new information. Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the frequentist perspective, which sees probability as the proportion of times the event would occur in a long run of repeated experiments. LessWrong follows the Bayesian perspective.
  • Decision theory - is about choosing actions based on the utility function of the possible outcomes. The utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action's possible outcomes weighted by the probability that each outcome occurs. (A minimal worked sketch of Bayes' Theorem and an expected-utility comparison follows this list.) Decision theory can be divided into three parts:
    • Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.
    • Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.
    • Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
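
Here is the worked sketch referred to above, with toy numbers only: a single Bayesian update on a test result, followed by an expected-utility comparison between two actions.

# Bayes' Theorem: P(H | E) = P(E | H) * P(H) / P(E)
prior = 0.01                 # P(disease)
sensitivity = 0.95           # P(positive test | disease)
false_positive = 0.05        # P(positive test | no disease)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print("P(disease | positive test) =", round(posterior, 3))  # roughly 0.161

# Decision theory: choose the action with the highest expected utility.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

treat = [(posterior, 90), (1 - posterior, -10)]   # helps if diseased, small cost otherwise
wait = [(posterior, -100), (1 - posterior, 0)]    # very bad if diseased, neutral otherwise
print("treat:", expected_utility(treat), "wait:", expected_utility(wait))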

Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice or approximate the normative rationality model as best we can. We engage in what's called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, 'rationality' in this compendium will refer to rationality in the bounded sense of the word. In this sense, the most rational choice for an agent depends on the agent's capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one. It is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error prone.

Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. It is described by Baron in Thinking and Deciding (p. 34):

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

The set of behaviours and thoughts that we consider rational for limited agents is much larger than the set for perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational, as it often leads to situations in which agents improve their internal representations or models of the world. We also consider wise resource allocation to be rational, because limited agents only have a limited amount of resources available to them. If they can get a greater return on investment on the resources that they do use, then they will be more likely to get closer to thinking optimally in a greater number of domains.

We also consider the rationality of particular choices to be something that is in a state of flux. This is because the rationality of choices depends on the information that an agent has access to, and this is something which is frequently changing. This highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. This is a problem for the suboptimal (irrational) agent, as it means that its rational choices will differ more from those of a perfect normative agent than a rational agent's choices would. The closer an agent's rational choices are to those of a perfect normative agent, the more rational that agent is.

It can also be said that the rationality of an agent depends in large part on the agent's truth-seeking abilities. The more accurate and up to date the agent's view of the world, the closer its rational choices will be to those of a perfect normative agent. It is because of this that a rational agent is one that is inextricably tied to the world as it is. It does not see the world as it wishes it, fears it or has seen it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt. If the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will change its preferred choice to the one that is now the most rational.

The other important part of rationality, besides truth seeking, is maximising the ability to actually achieve important goals. These two parts or domains of rationality - truth seeking and goal reaching - are referred to as epistemic and instrumental rationality.

  • Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.
  • Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, it is known as maximizing "expected utility". It is important to note that it is about more than just reaching goals. It is also about discovering how to develop optimal goals.

As you move further and further away from rationality you introduce more and more flaws, inefficiencies and problems into your decision making and information gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas which is why, in large part, improving our rationality is just about mitigating, as much as possible, the influence of our biases and irrational propensities.

If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character does not treat rationality as if it were about optimality, but instead as if it means that:

  • You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people's behaviors and thoughts.
  • You should never make a decision until you have all the information. This is irrational because humans are not omniscient or omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.
  • You should never rely on intuition. This is irrational because intuition (system 1 thinking) does have many advantages over conscious and effortful deliberation (system 2 thinking), mainly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and these interactions had short feedback cycles, then it is often irrational not to rely on your intuitions.
  • You should not become emotional. This is irrational because, while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not mean that we should try to eradicate emotions in ourselves. Emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points in regard to emotions:
    • The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened. It is irrational to feel fear in situations where you are not being threatened. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.
    • Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can't. In this particular area people tend to become a lot less rational as they age. As adults we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder at how it is that we have become so shackled by our own self-restraint.
    • Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as doing so would limit or distort the information that they have access to. A rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.
    • By ignoring, avoiding and repressing emotions you are limiting the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information, e.g. body language, vocal inflections, that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.   
  • You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information that they consider is because of resource or time limitations.

Related Materials

Wikis:

  • Rationality - the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality, and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.
  • Maths/Logic - Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.   
  • Probability theory - a field of mathematics which studies random variables and processes. 
  • Bayes theorem - a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate.
  • Bayesian - Bayesian probability theory is the math of epistemic rationality, Bayesian decision theory is the math of instrumental rationality.
  • Bayesian probability - represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10. 
  • Bayesian decision theory - a decision theory which is informed by Bayesian probability. 
  • Decision theory – is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals. 
  • Hollywood rationality - What Spock does, not what actual rationalists do.
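
For reference, the formula behind the Bayes theorem entry above is the standard one (written here in LaTeX notation):

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

where H is the hypothesis, E is the evidence, P(H) is the prior probability and P(H | E) is the updated (posterior) probability.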

Posts:

Suggested posts to write:

  • Bounded/ecological/grounded Rationality - I couldn't find a suitable resource for this on less wrong.  

Academic Books:

Popular Books:

Talks:

Notes on decisions I have made while creating this post

 (these notes will not be in the final draft): 

  • I agree denotationally, but object connotatively, with 'rationality is systemized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended into everything.  I also believe that I have basically covered the idea with: “Rationality maximizes expected performance, while perfection maximizes actual performance.”
  • I left out the 12 virtues of rationality because I don’t like perfectionism. If perfectionism were not one of the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and to developing suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process. If it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.
  • I couldn't find an appropriate link for bounded/ecological/grounded rationality. 

Rationality Compendium

10 ScottL 23 August 2015 08:00AM

I want to create a rationality compendium (a collection of concise but detailed information about a particular subject) and I want to know whether you think this would be a good idea. The rationality compendium would essentially be a series of posts that will eventually serve as a guide for Less Wrong newbies that they can use to discover which resources to look into further, a refresher of the main concepts for Less Wrong veterans, and a guideline or best-practices document that will explain techniques that can be used to apply the core Less Wrong/rationality concepts. These techniques should preferably have been verified to be useful in some way. Perhaps there will be some training-specific posts in which we can track whether people are actually finding the techniques to be useful.

I only want to write this because I am lazy. In this context, I mean lazy as it is described by Larry Wall:

Laziness: The quality that makes you go to great effort to reduce overall energy expenditure.

I think that a rationality compendium would not only prove that I have correctly understood the available rationality material, but it would also ensure that I am actually making use of this knowledge. That is, applying the rationality materials that I have learnt in ways that allow me to improve my life.

If you think that a rationality compendium is not needed or would not be overly helpful, then please let me know. I also want to point out that I do not think that I am necessarily the best person to do this and that I am only doing it because I don’t see it being done by others.

For the rationality compendium, I plan to write a series of posts which should, as much as possible, be:

  • Using standard terms: less wrong specific terms might be linked to in the related materials section, but common or standard terminology will be used wherever possible.
  • Concise: the posts should just contain quick overviews of the established rationality concepts. They shouldn’t be introducing “new” ideas. The one exception to this is if a new idea allows multiple rationality concepts to be combined and explained together. If existing ideas require refinement, then this should happen in a separate post which the rationality compendium may provide a link to if the post is deemed to be of high quality.
  • Comprehensive: links to all related posts, wikis or other resources should be provided in a related materials section. This is so that readers can dive deeper into materials that pique their interest while still ensuring that the posts are concise. The aim of the rationality compendium is to create a resource that is a condensed and distilled version of the available rationality materials. This means that it is not meant to be light reading, as a large number of concepts will be presented in one post.
  • Collaborative: the posts should go through many series of edits based on the feedback in the comments. I don't think that I will be able to create perfect first posts, but I am willing to expend some effort to iteratively improve the posts until they reach a suitable standard. I hope that enough people will be interested in a rationality compendium so that I can gain enough feedback to improve the posts. I plan for the posts to stay in discussion for a long time and will possibly rerun posts if it is required. I welcome all kinds of feedback, positive or negative, but request that you provide information that I can use to improve the posts.
  • Related only to rationality: for example, concepts from AI or quantum mechanics won’t be mentioned unless they are required to explain some rationality concept.
  • Ordered: the points in the compendium will be grouped according to overarching principles. 
I will provide a link to the posts created in the compendium here:

Self-confidence and status

4 asd 23 August 2015 07:36AM

I've seen the advice "be (more) confident" given so that the person may become more socially successful. For example, self-confident people get more raises. But I'm not sure if self-confidence is the cause of becoming more socially successful, or the result of it. I don't think self-confidence can be separated from social status. I see it more as an intersocial function that gives information to participants on who to follow or listen to, and you can't act fully confident in isolation from others and their feelings. Artificially raised confidence sounds really hard, and if it is possible it sounds closer to arrogance or delusion; real confidence must have some connection to reality and knowledge of what kind of value you provide. I mean, delusion might work if those around you are delusional too, but that seems pretty risky, and a surer way is to just be connected to reality all the time. If what you're saying is not true or interesting, then it's hard to be confident when you're saying it, and others will usually quickly notify you in one way or another if what you're saying is not true or interesting. If the truth is that you can't really provide much value, then I can't see how you would be able to feel as confident as those who provide more value (note that value might sometimes be quite complex and not the first thing that comes to mind; inept dictators might provide the kind of value that really makes sense in an evolutionary context even though they seem to do nothing useful at all in reality).

So in short, self-confidence usually seems to be an approximation of what kind of value you provide in the real world, where value is to be thought of as something that is beneficial in an evolutionary context. There are some hacks to quickly raise your value, like dressing better or working out, but ultimately it comes down to raising your value in the real world, and confidence must follow that and not the other way around.

Thoughts? How does the "fake it till you make it" strategy appear in this context?

Instrumental Rationality Questions Thread

14 AspiringRationalist 22 August 2015 08:25PM

This thread is for asking the rationalist community for practical advice.  It's inspired by the stupid questions series, but with an explicit focus on instrumental rationality.

Questions ranging from easy ("this is probably trivial for half the people on this site") to hard ("maybe someone here has a good answer, but probably not") are welcome.  However, please stick to problems that you actually face or anticipate facing soon, not hypotheticals.

As with the stupid questions thread, don't be shy; everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better. Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

(See also the Boring Advice Repository)

A list of apps that are useful to me. (And other phone details)

9 Elo 22 August 2015 12:24PM

 

I have noticed that I often think "Damn, I wish someone had made an app for that", and when I search for it I can't find it.  Then I outsource the search to Facebook or other people, and they can usually say: yes, it's called X.  I put this down to an inability on my part to know how to search for an app, more than anything else.

With that in mind, I wanted to solve the problem of finding apps for other people.

The following is a list of apps that I find useful (and use often) for productive reasons:


The environment

This list is long.  The most valuable ones are the top section that I use regularly.  

Other things to mention:

Internal storage - I have a large internal memory card because I knew I would need lots of space.  So I played the "out of sight out of mind game" and tried to give myself as much space as possible by buying a large internal card.

Battery - I use Anker external battery blocks to save myself the trouble of worrying about batteries.  If prepared, I leave my house with 2 days of phone charge (at 100% use).  I used to count "wins" of days I beat my phone battery (stayed awake longer than it) but they are few and far between.  Also, I doubled my external battery power and it now sits at two days not one (28000mAh + 2*460mAh spare phone batteries).

Phone - I have a Samsung S4 (Android, running KitKat) because it has a few features I found useful that were not found in many other phones - cheap, removable battery, external storage card, replaceable case.

Screen cover - I am using the one that came with the phone still

I carry a spare phone case; in the beginning I used to go through one each month, but now that I have a harder case than before, it hasn't broken.

MicroUSB cables - I went through a lot of effort to sort this out; it's still not sorted, but it's "okay for now".  The advice I have - buy several good cables (read online reviews about them), test them wherever possible, and realise that they die.  Also carry a spare or two.

Restart - I restart my phone probably most days when it gets slow.  It's got programming bugs, but this solution works for now.

The overlays

These sit on my screen all the time.

Data monitor - Gives an overview of bits per second uploaded or downloaded, updated every second.

CpuTemp - Gives an overlay of the current core temperature.  My phone is always hot, I run it hard with bluetooth, GPS and wifi blaring all the time.  I also have a lot of active apps.

Mindfulness bell - My phone makes a chime every half hour to remind me to check, "Am I doing something of high-value right now?" it sometimes stops me from doing crap things.

Facebook chat heads - I often have them open, they have memory leaks and start slowing down my phone after a while, I close and reopen them when I care enough.

 

The normals:

Facebook - communicate with people.  I do this a lot.

Inkpad - it's a note-taking app, but not an exceptionally great one; open to a better suggestion.

Ingress - it makes me walk; it gave me friends; it put me in a community.  Downside is that it takes up more time than you want to give it.  It's a mobile GPS game.  Join the Resistance.

Maps (google maps) - I use this most days; mostly for traffic assistance to places that I know how to get to.

Camera - I take about 1000 photos a month.  Generic phone-app one.

Assistive light - Generic torch app (widget) I use this daily.

Hello - SMS app.  I don't like it but it's marginally better than the native one.

Sunrise calendar - I don't like the native calendar; I don't like this or any other calendar.  This is the least bad one I have found.  I have an app called "facebook sync" which helps with entering in a fraction of the events in my life.  

Phone, address book, chrome browser.

GPS logger - I have a log of my current GPS location every 5 minutes.  If Google tracks me I might as well track myself.  I don't use this data yet but it's free for me to track, so if I can find a use for the historic data, that will be a win.

 

Quantified apps:

Fit - google fit; here for multiple redundancy

S Health - Samsung health - here for multiple redundancy

Fitbit - I wear a flex step tracker every day, and input my weight daily manually through this app

Basis - I wear a B1 watch, and track my sleep like a hawk.

Rescuetime - I track my hours on technology and wish it would give a better breakdown. (I also paid for their premium service)

Voice recorder - generic phone app; I record around 1-2 hours of things I do per week.  Would like to increase that.

Narrative - I recently acquired a life-logging device called a Narrative, and don't really know how to best use the data it gives.  But it's a start.

How are you feeling? - Mood tracking app - this one is broken but the best one I have found, it doesn't seem to open itself after a phone restart; so it won't remind you to enter in a current mood.  I use a widget so that I can enter in the mood quickly.  The best parts of this app are the way it lets you zoom out, and having a 10 point scale.  I used to write a quick sentence about what I was feeling, but that took too much time so I stopped doing it.

Stopwatch - "hybrid stopwatch" - about once a week I time something and my phone didn't have a native one.  This app is good at being a stopwatch.

Callinspector - tracks incoming and outgoing calls and gives summaries of things like who you most frequently call, how much data you use, etc.  Can also set data limits.

 

Misc

Powercalc - the best calculator app I could find

Night mode - for saving battery (it dims your screen). I don't use this often but it is good at what it does.  I would consider an app that dims the blue light emitted from my screen; however I don't notice any negative sleep effects so I have been putting off getting around to it.

Advanced signal status - about once a month I am in a place with low phone signal - this one makes me feel better about knowing more details of what that means.

Ebay - Being able to buy those $5 solutions to problems on the spot is probably worth more than the $5 of "impulse purchases" they might be classified as.

Cal - another calendar app that sometimes catches events that the first one misses.

ES file explorer - for searching the guts of my phone for files that are annoying to find.  Not as used or as useful as I thought it would be but still useful.

Maps.Me - I went on an exploring adventure to places without signal; so I needed an offline mapping system.  This map saved my life.

Wikipedia - information lookup

Youtube - don't use it often, but its there.

How are you feeling? (again) - I have this in multiple places to make it as easy as possible for me to enter in this data

Play store - Makes it easy to find.

Gallery - I take a lot of photos, but this is the native gallery and I could use a better app.

 

Social

In no particular order;

Facebook groups, Yahoo Mail, Skype, Facebook Messenger chat heads, Whatsapp, meetup, google+, Hangouts, Slack, Viber, OKcupid, Gmail, Tinder.

They do social things.  Not much to add here.

 

Not used:

Trello

Workflowy

pocketbook

snapchat

AnkiDroid - Anki memoriser app for a phone.

MyFitnessPal - looks like a really good app, have not used it 

Fitocracy - looked good

I got these apps for a reason; but don't use them.

 

Not on my front pages:

These I don't use as often; or have not moved to my front pages (skipping the ones I didn't install or don't use)

S memo - Samsung note-taking thing; I rarely use it, but do use it once a month or so.

Drive, Docs, Sheets - The google package.  It's terrible to interact with documents on your phone, but I still sometimes access things from my phone.

bubble - I don't think I have ever used this

Compass pro - gives extra details about direction. I never use it.

(ingress apps) Glypher, Agentstats, integrated timer, cram, notify

TripView (public transport app for my city)

Convertpad - converts numbers to other numbers. Sometimes quicker than a google search.

ABC Iview - National TV broadcasting channel app.  Every program on this channel is uploaded to this app, I have used it once to watch a documentary since I got the app.

AnkiDroid - I don't need to memorise information in the way it is intended to be used; so I don't use it. Cram is also a flashcard app but I don't use it.

First aid - I know my first aid but I have it anyway for the marginal loss of 50mb of space.

Triangle scanner - I can scan details from NFC chips sometimes.

MX player - does videos better than native apps.

Zarchiver - Iunno.  Does something.

Pandora - Never used

Soundcloud - used once every two months, some of my friends post music online.

Barcode scanner - never used

Diskusage - Very useful.  Visualises where data is being taken up on your phone, helps when trying to free up space.

Swiftkey - Better than native keyboards.  Gives more freedom, I wanted a keyboard with black background and pale keys, swiftkey has it.

Google calendar - don't use it, but its there to try to use.

Sleepbot - doesn't seem to work with my phone; also I track with other methods, and I forget to turn it on, so it's entirely not useful in my life for sleep tracking.

My service provider's app.

AdobeAcrobat - use often; not via the icon though.

Wheresmydroid? - seems good to have; never used.  My phone is attached to me too well for me to lose it often.  I have it open most of the waking day maybe.

Uber - I don't use ubers.

Terminal emulator, AIDE, PdDroid party, Processing Android, An editor for processing, processing reference, learn C++ - programming apps for my phone, I don't use them, and I don't program much.

Airbnb - Have not used yet, done a few searches for estimating prices of things.

Heart rate - measures your heart rate using the camera/flash.  Neat, but not useful other than showing off to people how it's possible to do.

Basis - (B1 app), - has less info available than their new app

BPM counter - Neat if you care about what a "BPM" is for music.  Don't use often.

Sketch guru - fun to play with, draws things.

DJ studio 5 - I did a DJ thing for a friend once, used my phone.  It was good.

Facebook calendar Sync - as the name says.

Dual N-back - I don't use it.  I don't think it has value-giving properties.

Awesome calendar - I don't use it, but it comes with good recommendations.

Battery monitor 3 - Makes a graph of temperature and frequency of the cores.  Useful to see a few times.  Eventually it's a bell curve.

urbanspoon - local food places app.

Gumtree - Australian eBay (eBay owns it now).

Printer app to go with my printer

Car Roadside assistance app to go with my insurance

Virgin air entertainment app - you can use your phone while on the plane and download entertainment from their in-flight system.


Two things now;

What am I missing? Was this useful?  Ask me to elaborate on any app and why I used it.  If I get time I will do that anyway. 

P.S. this took two hours to write.

P.P.S - I was intending to make, keep and maintain a list of useful apps; that is not what this document is.  If there are enough suggestions that it's time to make and keep a list, I will do that.

Robert Aumann on Judaism

3 iarwain1 21 August 2015 07:13PM

Just came across this interview with Robert Aumann. On pgs. 20-27 he describes why and how he believes in Orthodox Judaism. I don't really understand what he's saying. Key quote (I think):

H (interviewer): Take for example the six days of creation; whether or not this is how it happened is practically irrelevant to one's decisions and way of conduct. It is on a different level.

A (Aumann): It is a different view of the world, a different way of looking at the world. That is why I prefaced my answer to your question with the story about the roundness of the world being one way of viewing the world. An evolutionary geological perspective is one way of viewing the world. A different way is with the six days of creation. Truth is in our minds. If we are sufficiently broad-minded, then we can simultaneously entertain different ideas of truth, different models, different views of the world.

H: I think a scientist will have no problem with that. Would a religious person have problems with what you just said?

A: Different religious people have different viewpoints. Some of them might have problems with it. By the way, I'm not so sure that no scientist would have a problem with it. Some scientists are very doctrinaire.

Anybody have a clue what he means by all this? Do you think this is a valid way of looking at the world and/or religion? If not, how confident are you in your assertion? If you are very confident, on what basis do you think you have greatly out-thought Robert Aumann?

Please read the source (all 7 pages I referenced, rather than just the above quote), and think about it carefully before you answer. Robert Aumann is an absolutely brilliant man, a confirmed Bayesian, author of Aumann's Agreement Theorem, Nobel Prize winner, and founder / head of Hebrew University's Center for the Study of Rationality. Please don't strawman his arguments or simply dismiss them!

Weekly LW Meetups

2 FrankAdamek 21 August 2015 04:23PM

Pro-Con-lists of arguments and onesidedness points

3 Stefan_Schubert 21 August 2015 02:15PM

Follow-up to Reverse Engineering of Belief Structures

Pro-con lists of arguments such as ProCon.org and BalancedPolitics.org serve a useful purpose. They give an overview of complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.

 

I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM foods legalization, etc.), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).

Once you have this data, you could use it to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed; one possibility is sketched below.)
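
A minimal sketch of such a function, in Python (the 1-5 rating scale and the "own side"/"other side" labelling are assumptions for illustration, not a settled design):

    # Sketch of a "onesidedness" score. Each judged argument is recorded as
    # (side, rating): side is "own" if the argument supports a thesis the user
    # endorses and "other" if it opposes it; rating is the user's 1-5 rating.
    def onesidedness(judgements):
        """Return a score in [0, 1]; 1 means maximally one-sided."""
        own = [r for side, r in judgements if side == "own"]
        other = [r for side, r in judgements if side == "other"]
        if not own or not other:
            return None  # not enough data to say anything
        mean_own = sum(own) / len(own)
        mean_other = sum(other) / len(other)
        # Normalise the gap by the largest possible gap on a 1-5 scale.
        return max(0.0, (mean_own - mean_other) / 4.0)

    # Example: own side rated 5, 5, 4; other side rated 1, 2, 1 -> about 0.83.
    print(onesidedness([("own", 5), ("own", 5), ("own", 4),
                        ("other", 1), ("other", 2), ("other", 1)]))

Someone who rates some of their own side's arguments low and some of the other side's arguments high would score close to zero.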

Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.

There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think that's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-sided":

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this.  Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

But there is no reason for complex actions with many consequences to exhibit this onesidedness property.  

Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:

Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer.  Arguments are soldiers.  Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back.  If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.

My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.

 

You could also add other features to a pro-con list. For instance, you could classify arguments in different ways: ad hominem arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't do that. You wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.

You could also combine these two features. For instance, some people might accept ad hominem arguments when they support their views, but not when they contradict them. That would make your use of ad hominem arguments onesided.

 

Yet another feature that could be added is a standard political compass. Since people fill in which theses they believe in (cannabis legalization, GM foods legalization, etc.), you could calculate which party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
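
A minimal sketch of that matching, in Python (the parties, issues and the simple count-of-agreements rule are made-up illustrations, not a proposed design):

    # Sketch: match a user to the party whose stances they agree with most often.
    PARTY_STANCES = {
        "Party A": {"cannabis_legalization": True, "gm_foods_legalization": False},
        "Party B": {"cannabis_legalization": False, "gm_foods_legalization": True},
    }

    def closest_party(user_stances):
        """Return the party with the highest number of matching stances."""
        def agreement(party):
            stances = PARTY_STANCES[party]
            return sum(1 for issue, answer in user_stances.items()
                       if stances.get(issue) == answer)
        return max(PARTY_STANCES, key=agreement)

    print(closest_party({"cannabis_legalization": True,
                         "gm_foods_legalization": False}))  # Party A

A real implementation would probably want a weighted distance over many issues rather than a raw agreement count, but the idea is the same.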

 

Suggestions of more possible features are welcome, as well as general comments - especially about implementation.

Glossary of Futurology

2 mind_bomber 21 August 2015 05:51AM

Hi guys,

So I've been curating this glossary over at https://www.reddit.com/r/Futurology/.  I want it to be sort of an introduction to future focused topics.  A list of words that the layman can read and be inspired by.  I try to stay away from household words (e.g. cyberspace), science fiction topics (e.g. dyson sphere), words that describe themselves (e.g. self driving cars), obscure and rarely used words (e.g. betelgeuse-brain), and words that can't be found in most dictionaries (e.g. Roko's Basilisk (I've been meaning to remove that one)).  Most of the glossary is from words and phrases I find on the /r/Futurology forum.  I have a whole other list of potential words for the glossary that I collect, just waiting for the day to be added (e.g. particle accelerator, Aerogel, proactionary principle).  I find curating the glossary to be more of an art than a science.  I try to balance the list between science, technology, philosophy, ideology, and sociology.  I like to find related topics to expand the list (e.g. terraforming & geoengineering). Even though the glossary is in alphabetical order I want it to read somewhat like a story.    

Anders Sandberg of the Future of Humanity Institute, Oxford, told me "I like the usefulness of your list..."

 

I'm interested to know what you guys think.

 

Glossary located below (the "See /r/..." entries are native to the reddit website; they link the glossary to subreddits (other reddit pages) related to that word or phrase):



Unlearning shoddy thinking

5 malcolmocean 21 August 2015 03:07AM

School taught me to write banal garbage because people would thumbs-up it anyway. That approach has been interfering with me trying to actually express my plans in writing because my mind keeps simulating some imaginary prof who will look it over and go "ehh, good enough".

Looking good enough isn't actually good enough! I'm trying to build an actual model of the world and a plan that will actually work.

Granted, school isn't necessarily all like this. In mathematics, you need to actually solve the problem. In engineering, you need to actually build something that works. But even in engineering reports, you can get away with a surprising amount of shoddy reasoning. A real example:

Since NodeJS uses the V8 JavaScript engine, it has native support for the common JSON (JavaScript Object Notation) format for data transfer, which means that interoperability between SystemQ and other CompanyX systems can still be fairly straightforward (Jelvis, 2011).

This excerpt is technically totally true, but it's also garbage, especially as a reason to use NodeJS. Sure, JSON is native to JS, but every major web programming language supports JSON. The pressure to provide citable justifications for decisions which were made for reasons more like "I enjoy JavaScript and am skilled with it," produces some deliberately confirmation-biased writing. This is just one pattern—there are many others.

I feel like I need to add a disclaimer here or something: I'm a ringed engineer, and I care a lot about the ethics of design, and I don't think any of my shoddy thinking has put any lives (or well-being, etc) at risk. I also don't believe that any of my shoddy thinking in design reports has violated academic integrity guidelines at my university (e.g. I haven't made up facts or sources).

But a lot of it was still shoddy. Most students are familiar with the process of stating a position, googling for a citation, then citing some expert who happened to agree. And it was shoddy because nothing in the school system was incentivizing me to make it otherwise, and I reasoned it would have cost more to only write stuff that I actually deeply and confidently believed, or to accurately and specifically present my best model of the subject at hand. I was trying to spend as little time and attention as possible working on school things, to free up more time and attention for working on my business, the productivity app Complice.

What I didn't realize was the cost of practising shoddy thinking.

Having finished the last of my school obligations, I've launched myself into some high-level roadmapping for Complice: what's the state of things right now, and where am I headed? And I've discovered a whole bunch of bad thinking habits. It's obnoxious.

I'm glad to be out.

(Aside: I wrote this entire post in April, when I had finished my last assignments and tests. I waited a while to publish it, so that I've now safely graduated. Wasn't super worried, but didn't want to take chances.)

Better Wrong Than Vague

So today.

I was already aware of a certain aversion I had to planning. So I decided to make things a bit easier with this roadmapping document, and base it on one my friend Oliver Habryka had written about his main project. He had created a 27-page outline in google docs, shared it with a bunch of people, and got some really great feedback and other comments. Oliver's introduction includes the following paragraph, which I decided to quote verbatim in mine:

This document was written while continuously repeating the mantra “better wrong than vague” in my head. When I was uncertain of something, I tried to express my uncertainty as precisely as possible, and when I found myself unable to do that, I preferred making bold predictions to vague statements. If you find yourself disagreeing with part of this document, then that means I at least succeeded in being concrete enough to be disagreed with.

In an academic context, at least up to the undergrad level, students are usually incentivized to follow "better vague than wrong". Because if you say something the slightest bit wrong, it'll produce a little "-1" in red ink.

Because if you and the person grading you disagree, a vague claim might be more likely to be interpreted favorably. There's a limit, of course: you usually can't just say "some studies have shown that some people sometimes found X to help". But still.

Practising being "good enough"

Nate Soares has written about the approach of whole-assed half-assing:

Your preferences are not "move rightward on the quality line." Your preferences are to hit the quality target with minimum effort.

If you're trying to pass the class, then pass it with minimum effort. Anything else is wasted motion.

If you're trying to ace the class, then ace it with minimum effort. Anything else is wasted motion.

My last two yearly review blog posts have followed the structure of talking about my year on the object level (what I did), the process level (how I did it) and the meta level (my more abstract approach to things). I think it's helpful to apply the same model here.

There are lots of things that humans often wish their neurology naturally optimized for. One thing that it does optimize for, though, is minimum energy expenditure. This is a good thing! Brains are costly, and they'd have to function less well if they always ran at full power. But this has side effects. Here, the relevant side effect is that, if you practice a certain process for a while, and it achieves the desired object-level results, you might lose awareness of the bigger-picture approach that you're trying to employ.

So in my case, I was practising passing my classes with minimum effort and not wasting motion, following the meta-level approach of whole-assed half-assing. But while the meta-level approach of "hitting the quality target with minimum effort" is a good one in all domains (some of which will have much, much higher quality targets), the process of doing the bare minimum to create something that doesn't have any obvious glaring flaws is not a process that you want to be employing in your business. Or in trying to understand anything deeply.

Which I am now learning to do. And, in the process, unlearning the shoddy thinking I've been practising for the last 5 years.

Related LW post: Guessing the Teacher's Password

(This article crossposted from my blog)

Vegetarian/Omnivore Ideological Turing Test Judging Round!

4 Raelifin 20 August 2015 01:53AM

Come one, come all! Test your prediction skills in my Caplan Test (more commonly called an Ideological Turing Test). To read more about such tests, check out palladias' post here.

The Test: http://goo.gl/forms/7f4pQfxB8I

In the test, you will be asked to read responses written by rationalists from LessWrong (and the Columbus Ohio LW group). These responses are either from a vegetarian or omnivore (as decided by a coin flip) and are either their genuine response or a fake response where they pretend to be a member of the other group (also decided by coin flip). If you'd like to participate (and the more, the merrier) you'll be asked to distinguish fake from real by assigning a credence to the proposition that a given response is genuine.
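
For the curious: the post doesn't specify how judges' credences will be scored, but one common way to score probabilistic judgements of this kind is the Brier score. A minimal Python sketch, purely illustrative:

    # Score a judge's credences with the Brier score (lower is better).
    # 'credence' is the probability assigned to "this response is genuine";
    # 'genuine' is whether the response actually was genuine.
    def brier_score(predictions):
        """Mean squared error between credences and 0/1 outcomes."""
        return sum((credence - float(genuine)) ** 2
                   for credence, genuine in predictions) / len(predictions)

    # Example: three judged entries.
    print(brier_score([(0.9, True), (0.3, False), (0.6, True)]))  # ~0.087

Always answering 0.5 gives a Brier score of 0.25, so doing better than that means your judgements carry real information.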

I'll be posting general statistics on how people did at a later date (probably early September). Please use the comments on this thread to discuss or ask questions. Do not make predictions in the comments. I got more entries than would be reasonable to ask people to judge, so if your entry didn't make it into the test, I'm sorry. We might be able to run a second round of judging. If you're interested in judging more entries, send me a PM or leave a comment. I tended to favor the first entries I got, when selecting who got in.

How to fix academia?

8 passive_fist 20 August 2015 12:50AM

I don't usually submit articles to Discussion, but this news upset me so much that I think there is a real need to talk about it.

http://www.nature.com/news/faked-peer-reviews-prompt-64-retractions-1.18202

A leading scientific publisher has retracted 64 articles in 10 journals, after an internal investigation discovered fabricated peer-review reports linked to the articles’ publication.

The cull comes after similar discoveries of ‘fake peer review’ by several other major publishers, including London-based BioMed Central, an arm of Springer, which began retracting 43 articles in March citing "reviews from fabricated reviewers". The practice can occur when researchers submitting a paper for publication suggest reviewers, but supply contact details for them that actually route requests for review back to the researchers themselves.

Types of Misconduct

We all know that academia is a tough place to be in. There is constant pressure to 'publish or perish', and people are given promotions and pay raises directly as a result of how many publications and grants they are awarded. I was awarded a PhD recently so the subject of scientific honesty is dear to my heart.

I'm of course aware of misconduct in the field of science. 'Softer' forms of misconduct include things like picking only results that are consistent with your hypothesis or repeating experiments until you get low p-values. This kind of thing sometimes might even happen non-deliberately and subconsciously, which is why it is important to disclose methods and data.

'Harder' forms of misconduct include making up data and fudging numbers in order to get published and cited. This is of course a very deliberate kind of fraud, but it is still easy to see how someone could be led to this kind of behaviour by virtue of the incredible pressures that exist. Here, the goal is not just academic advancement, but also obtaining recognition. The authors in this case are confident that even though their data is falsified, their reasoning (based, of course, on falsified data) is sound and correct and stands up to scrutiny.

What is the problem?

But the kind of misconduct being mentioned in the linked article is extremely upsetting to me, beyond the previous types of misconduct. It is a person or (more likely) a group of people knowing full well that their publication would not stand up to serious scientific scrutiny. Yet they commit the fraud anyway, guessing that no one will actually ever seriously scrutinize their work and that it will be taken at face value due to being present in a reputable journal. The most upsetting part is that they are probably right in this assessment.

Christie Aschwanden wrote a piece about this recently on FiveThirtyEight. She makes the argument that cases of scientific misconduct are still rare and not important in the grand scheme of things. I only partially agree with this. I agree that science is still mostly trustworthy, but I don't necessarily agree that scientific misconduct is too rare to be worth worrying about. It would be much more honest to say that we simply do not know the extent of scientific misconduct, because there is no comprehensive system in place to detect it. Surveys on this have indicated that as much as 1/3 of scientists admit to some form of questionable practices, with 2% admitting to downright fabrication or falsification of evidence. These figures could be wildly off the mark. It is, unfortunately, easy to commit fraud without being detected.

Aschwanden's conclusion is that the problem is that science is difficult. With this I agree wholeheartedly. And to this I'd add that science has probably become too big. A few years ago I did some research in the area of nitric oxide (NO) transmission in the brain. I did a search and found 55,000 scientific articles from reputable publications with "nitric oxide" in the title. Today this number is over 62,000. If you expand this to both the title and abstract, you get about 160,000. Keep in mind that these are only the publications that have actually passed the process of peer review.

I have read only about 1,000 articles total during the entirety of my PhD, and probably <100 in the actual level of depth required to locate flaws in reasoning. The problem with science becoming too big is that it's easy to hide things. There are always going to be fewer fact-checkers than authors, and it is much harder to argue logically about things than it is to simply write things. The more the noise, the harder it becomes to listen.

It was not always this way. The rate of publication is increasing rapidly, outstripping even the rate of growth in the number of scientists. Decades ago publications played only a minor role in the scientific process. Publications mostly had the role of disseminating important information to a large audience. Today, the opposite is true - most articles have a small audience (as in, people with the will and ability to read them), consisting of perhaps only a handful of individuals - often only the people in the same research group or institutional department. This leads to the often-observed problem that many publications actually receive most of their citations from friends or colleagues of the authors.

Some people have suggested that because of the recent high-level cases of fraud that have been uncovered, there is now increased scrutiny and fraud is going to be uncovered more rapidly. This may be true for the types of fraud that already have been uncovered, but fraudsters are always going to be able to stay ahead of the scrutinizers. Experience with other forms of crime shows this quite clearly. Before the article in Nature I had never even thought about the possibility of sending reviews back to myself. It simply never occurred to me. All of these considerations lead me to believe that the problem of scientific fraud may actually get worse, not better, over time - unless the root of the problem is attacked.

How Can it be Solved?

So how to solve the problem of scientific misconduct? I don't have any good answers. I can think of things like "Stop awarding people for mere number of publications" and "Gauge the actual impact of science rather than empty metrics like number of citations or impact factor." But I can't think of any good way to do these things. Some alternatives - like using, for instance, social media to gauge the importance of a scientific discovery - would almost certainly lead to a worse situation than we have now.

A small way to help might be to adopt a payment system for peer-review. That is, to get published, you pay a certain amount of money for researchers to review your work. Currently, most reviewers offer their services for free (however they are sometimes allocated a certain amount of time for peer-review in their academic salary). A pay system would at least give an incentive for people to rigorously review work rather than simply trying to optimize for minimum amount of time invested in review. It would also reduce the practice of parasitic submissions (people submitting to short-turnaround-time, high-profile journals like Nature just to get feedback on their work for free) and decrease the flow volume of papers submitted for review. However, it would also incentivize a higher rate of rejection to maximize profits. And it would disproportionately impact scientists from places with less scientific funding.

What are the real options we have here to minimize misconduct?

[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim

7 ESRogs 19 August 2015 06:37AM

This seems significant:

An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases. 

...

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed

...

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

...

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

...

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

...

For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.

http://www.theguardian.com/science/2015/aug/18/first-almost-fully-formed-human-brain-grown-in-lab-researchers-claim

 

 

Truth seeking as an optimization process

7 ScottL 18 August 2015 11:03AM

From the costs of rationality wiki:

Becoming more epistemically rational can only guarantee one thing: what you believe will include more of the truth. Knowing that truth might help you achieve your goals, or cause you to become a pariah. Be sure that you really want to know the truth before you commit to finding it; otherwise, you may flinch from it.

The reason that truth seeking is often seen as being integral to rationality is that in order to make optimal decisions you must first be able to make accurate predictions. Delusions, or false beliefs, are self-imposed barriers to accurate prediction. They are surprise inducers. It is because of this that the rational path is often to break delusions, but you should remember that doing so is a slow and hard process that is rife with potential problems.

Below I have listed three scenarios in which a person could benefit from considering the costs of truth seeking. The first scenario is when seeking a more accurate measurement is computationally expensive and not really required. The second scenario is when you know that the truth will be emotionally distressing to another person and that this person is not in an optimal state to handle this truth. The third scenario is when you are trying to change the beliefs of others. It is often beneficial if you can understand the costs involved for them to change their beliefs as well as their perspective. This allows you to become better able to actually change their beliefs rather than to just win an argument.

 

Scenario 1: computationally expensive truth

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. – Donald Knuth

If optimization requires significant effort and only results in minimal gains in utility, then it is not worth it. If you only need to be 90% sure that something is true and you are currently 98% sure that it is, then it is not worth spending extra effort to get to 99% certainty. For example, if you are testing ballistics on Earth then it may be appropriate to use Newton's laws even though they are known to be inexact in some extreme conditions. Now, this does not mean that optimization should never be done. Sometimes that extra 1% certainty is actually extremely important. What it does mean is that you should be spending your resources wisely. The beliefs that you do form should lead to an increased ability to anticipate accurately. You should also remember Occam's razor. If you are committing yourself to a decision procedure that is accurate but slow and wasteful, then you will probably be outcompeted by others who spend their resources on more suitable and worthy activities.
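
As a minimal numerical sketch of that trade-off (Python; all utilities and costs below are made-up numbers for illustration only):

    # Is extra certainty worth the effort? Compare the expected gain from
    # acting on a slightly more accurate belief with the cost of obtaining it.
    def expected_gain(p_correct, gain_if_right=100, loss_if_wrong=-100):
        """Expected utility of acting, given the probability the belief is right."""
        return p_correct * gain_if_right + (1 - p_correct) * loss_if_wrong

    current = expected_gain(0.98)   # acting at 98% certainty
    improved = expected_gain(0.99)  # acting at 99% certainty
    cost_of_extra_certainty = 5     # effort spent getting from 98% to 99%

    print(improved - current)                             # 2.0
    print(improved - current > cost_of_extra_certainty)   # False: not worth it

With these (arbitrary) numbers, the extra percentage point of certainty buys 2 units of expected utility but costs 5, so the sensible move is to stop at 98%.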

 

Scenario 2: emotionally distressing truth

Assume for a moment that you have a child and that you have just finished watching that child fail horribly at a school performance. If your child then asks you, while crying, how the performance was, do you tell them the truth in full or not? Most people would choose not to, and would instead attempt to calm and comfort the child. To do otherwise is not seen as rational, but is instead seen as situationally unaware, rude and impolite. Obviously, some ways of telling the truth are worse than others. But, overall, telling the full truth is probably not going to be the most prudent thing to do in this situation. This is because the child is not in an emotional state that will allow them to handle the truth well. The truth in this situation is unlikely to lead to improvement and will instead lead to further stress and trauma, which will often cause future performance anxiety, premature optimization and other issues. For these reasons, I think that the truth should not be expressed in this situation. This does not mean that I think the rational person should forget about what has happened. They should instead remember it so that they can bring it up when the child is in an emotional state that would allow them to be better able to implement any advice that is given. For example, when practicing in a safe environment.

I want to point out that avoiding the truth is not what I am advocating. I am instead saying that we should be strategic about telling potentially face-threatening or emotionally distressing truths. I do believe that repression and avoidance of issues that have a persistent nature most often tends to lead to exacerbation of, or resignation to, those issues. Hiding from the truth rarely improves the situation. Consider the child: if you don't ever mention the performance because you don't want to cause the child pain, then they are still probably going to get picked on at school. Knowing this, we can say that the best thing to do is to bring up the truth and frame it in a particular situation where the child can find it useful and come to be able to better handle it.

 

Scenario 3: psychologically exhausting truth

If we remember that truth seeking involves costs, then we are more likely to be aware of how we can reduce this cost when we are trying to change the beliefs of others. If you are trying to convince someone and they do not agree with you, this may not be because your arguments are weak or because the other person is stupid. It may just be that there is a significant cost involved for them to either understand your argument or update their beliefs. If you want to convince someone and also avoid the illusion of transparency, then it is best to take into account the following:

  • You should try to end arguments well and to avoid vitriol - the emotional contagion heuristic leads people to avoid contact with people or objects viewed as "contaminated" by previous contact with someone or something viewed as bad—or, less often, to seek contact with objects that have been in contact with people or things considered good. If someone gets emotional when you are in an argument, then you are going to be less likely to change their minds about that topic in the future. It is also a good idea to consider the peak-end rule which basically means that you should try to end your arguments well.
  • If you find that someone is already closed off due to emotional contagion, then you should try a surprising strategy so that your arguments aren't stereotyped and avoided. As Eliezer said here:
  • The first rule of persuading a negatively disposed audience - rationally or otherwise - is not to say the things they expect you to say. The expected just gets filtered out, or treated as confirmation of pre-existing beliefs regardless of its content.

  • Processing fluency - is the ease with which information is processed. You should ask yourself whether your argument is worded in such a way that it is fluent and easy to understand.
  • Cognitive dissonance - is a measure of how much your argument conflicts with the other person's pre-existing beliefs. Perhaps you need to convince them of a few other points first before your argument will work.
  • Inferential distance - is the amount of background information they need access to in order to understand your argument.
  • Leave a line of retreat - think about whether they can admit that they were wrong without also looking stupid or foolish. In winning arguments there are generally two ways that you can go about it. The first is to totally demolish the other person's position. The second is to actually change their mind. The first leaves them feeling wrong, stupid and foolish, which is often going to make them start rationalizing. The second just makes them feel wrong. You win arguments the second way by seeming to be reasonable and non-face-threatening. A good way to do this is through empathy and understanding the argument from the other person's position. It is important to see things as others would see them, because we don't see the world as it is; we see the world as we are. The other person is not stupid or lying; they might just be in the middle of what I call an 'epistemic contamination cascade' (perhaps there is already a better name for this), which is when false beliefs lead to filters, framing effects and other false beliefs. Another potential benefit from viewing the argument from the other person's perspective is that you may come to realise that your own position is not as steadfast as you once believed.
  • Maximise the cost of holding a false belief - ask yourself if there are any costs to them if they continue to hold a belief that you believe is false. One way to create some cost is to convince their friends and associates of your position. The extra social pressure may help in getting them to change their minds.
  • Give it time and get them inspecting their maps rather than information that has been filtered through their map. It is possible that there are filtering and framing effects which mean that your arguments are being distorted by the other person. Consider a depressed person: you can argue with them, but this is not likely to be overly helpful. This is because it is likely that while arguing you will need to contradict them, and this will probably lead to them blocking out what you are saying. I think that in these kinds of situations what you really need to do is to get them to inspect their own maps. This can be done by asking "what" or "how does that make you feel" types of questions. For example, "What are you feeling?", "What's going on?" and "What can I do to help?". There are two main benefits to these types of questions over arguments. The first is that they get the person inspecting their maps, and the second is that it is much harder for them to block out the responses, since they are the ones providing them. This is a related quote from Sarah Silverman's book:
  • My stepfather, John O'Hara, was the goodest man there was. He was not a man of many words, but of carefully chosen ones. He was the one parent who didn't try to fix me. One night I sat on his lap in his chair by the woodstove, sobbing. He just held me quietly and then asked only, 'What does it feel like?' It was the first time I was prompted to articulate it. I thought about it, then said, "I feel homesick." That still feels like the most accurate description--I felt homesick, but I was home. - Sarah Silverman

  • Remember the other-optimizing bias and that perspectival types of issues need to be resolved by the individual facing them. If you have a goal to change another person's mind, then it often pays dividends to understand not only why they are wrong, but also why they think they are right, or are at least unaware that they are wrong. This kind of understanding can only come from empathy. Sometimes it is impossible to truly understand what another person is going through, but you should always try, without condoning or condemning, to see things from the other person's perspective. Remember that hatred blinds and so does love. You should always be curious and seek to understand things as they are, not as you wish them, fear them or desire them to be. It is only when you can do this that you can truly understand the costs involved for someone else to change their mind.

 

If you take the point of view that changing beliefs is costly, then you are less likely to be surprised when others don't want to change their beliefs. You are also more likely to think about how you can make the process of changing their beliefs easier for them.

 

Some other examples of when seeking the truth is not necessarily valuable are:

  • Fiction writing and the cinematic experience
  • When the pragmatic meaning does not need truth, but the semantic meaning does. An example is "Hi. How are you?" and other similar greetings, which are peculiar because they look the same as questions or adjacency pairs, but function slightly differently. They are a kind of ritualised question in which the answer is normally pre-specified, or at least the detail of the answer is. If someone asks "How are you?", it is seen as aberrant to answer the question in full, truthful detail rather than simply with "fine", which may be a lie. If they actually do want to know how you are, then they will probably ask a follow-up question after the greeting, like "so, is everything good with the kids?".
  • Evolutionary biases which cause delusions, but may help with perspectival and self-confidence issues. For example, the sexual overperception bias in men. From a truth-maximization perspective, young men who assume that all women want them are showing severe social-cognitive inaccuracies, judgment biases, and probably narcissistic personality disorder. However, from an evolutionary perspective, the same young men are behaving more optimally. That is, the bias is an adaptive one which has consistently maximized the reproductive success of their male ancestors. Other examples are women's underestimation of men's commitment and positively biased perceptions of partners.

 

tl;dr: this post posits that truth seeking should be viewed as an optimization process, which means that it may not always be worth it.

Predicted corrigibility: pareto improvements

5 Stuart_Armstrong 18 August 2015 11:02AM

A putative new idea for AI control; index here.

Corrigibility allows an agent to transition smoothly from a perfect u-maximiser to a perfect v-maximiser, without seeking to resist or cause this transition.

And it's the very perfection of the transition that could cause problems; while u-maximising, the agent will not take the slightest action to increase v, even if such actions are readily available. Nor will it 'rush' to finish its u-maximising before transitioning. It seems that there's some possibility of improvements here.

I've already attempted one way of dealing with the issue (see the pre-corriged agent idea). This is another one.

 

Pareto improvements allowed

Suppose that an agent with corrigible algorithm A is currently following utility u, and estimates that there are probabilities pi that it will transition to utilities vi at midnight (note that these are utility function representatives, not affine classes of equivalent utility functions). At midnight, the usual corrigibility applies, making A indifferent to that transition, making use of such terms as E(u|u→u) (the expectation of u, given that A's utility doesn't change) and E(vi|u→vi) (the expectation of vi, given that A's utility changes to vi).

But, in the meantime, there are expectations such as E({u,v1,v2,...}). These are A's best current estimates as to what the genuine expected values of the various utilities are, given all it knows about the world and itself. It could be more explicitly written as E({u,v1,v2,...}| A), to emphasise that these expectations are dependent on the agent's own algorithm.

Then the idea is to modify the agent's algorithm so that Pareto improvements are possible. Call this modified algorithm B. B can select actions that A would not have chosen, conditional on:

  • E(u|B) ≥ E(u|A) and E(Σpivi|B) ≥ E(Σpivi|A).

There are two obvious ways we could define B:

  • B maximises u, subject to the constraint E(Σpivi|B) ≥ E(Σpivi|A).
  • B maximises Σpivi, subject to the constraint E(u|B) ≥ E(u|A).

In the first case, the agent maximises its current utility, without sacrificing its future utility. This could apply, for example, to a ruby mining agent that rushes to get its rubies to the bank before its utility changes. In the second case, the agent maximises its future expected utility, without sacrificing its current utility. This could apply to a ruby mining agent that's soon to become a sapphire mining agent: it then starts to look around and collect some early sapphires as well.

Now, it would seem that doing this must cause it to lose some ruby mining ability. However, it is being Pareto with E("rubies in bank"|A, expected future transition), not with E("rubies in bank"|A, "A remains a ruby mining agent forever"). The difference is that A will behave as if it was maximising the second term, and so might not go to the bank to deposit its gains before getting hit by the transition. So B can collect some early sapphires, and also go to the bank to deposit some rubies, and thus end up ahead on both u and Σpivi.
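As a concrete illustration, here is a minimal sketch (my own toy code, not something from this agenda) of the first construction of B: choose the action that maximises current utility u among the actions whose expected future utility E(Σpivi) is at least what A's own action would achieve. All action names and expectation values below are invented; the point is simply that A's action is always admissible, which is where the Pareto property comes from.

```python
def choose_action_B(actions, E_u, E_future, action_A):
    """E_u[a]     : hypothetical estimate of E(u | action a)
    E_future[a]   : hypothetical estimate of E(sum_i p_i * v_i | action a)
    action_A      : the action the original corrigible agent A would take."""
    baseline_future = E_future[action_A]
    # Only actions at least as good as A's on expected future utility are allowed.
    admissible = [a for a in actions if E_future[a] >= baseline_future]
    # A's own action is always admissible, so B can never do worse than A on u either.
    return max(admissible, key=lambda a: E_u[a])

# Toy ruby/sapphire example with invented numbers:
actions = ["mine_rubies", "bank_rubies", "scout_sapphires"]
E_u = {"mine_rubies": 5.0, "bank_rubies": 6.0, "scout_sapphires": 1.0}
E_future = {"mine_rubies": 2.0, "bank_rubies": 3.0, "scout_sapphires": 2.5}
print(choose_action_B(actions, E_u, E_future, action_A="mine_rubies"))  # -> bank_rubies
```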

Mental Model Theory - Illusion of Possibility Example

1 ScottL 18 August 2015 06:29AM

(I have written an overview of the mental model theory which is in main and the link is here. You should read this overview before you read this post. You should only read this post if you want more explicit details on the first example which demonstrates the illusion of possibility)

Consider the following problem:

Before you stands a card-dealing robot. This robot has been programmed to deal one hand of cards. You are going to make a bet with another person on whether the dealt hand will contain an ace or whether it will contain a king. If the dealt hand is just a single queen, it's a draw. Based on what you know about this robot, you deduce correctly that only one of the following statements is true.

  • The dealt hand will contain either a king or an ace (or both).
  • The dealt hand will contain either a queen or an ace (or both).

Based on your deductions, should you bet that the dealt hand will contain an ace or that it will contain a king?

If you think that the ace is the better bet, then you would have made a losing bet. In short, this is because  it is impossible for an ace to be in the dealt hand. 

To see why this is, I will list out all of the explicit mental models.

Below are the mental models that people will create in accordance with the principle of truth. (See the article in main for what this is.) You can see that the ace is in both rows, which makes it seem like an ace must obviously be more likely to be in the dealt hand.

Statement 1 true | K | A | K ∩ A
Statement 2 true | Q | A | Q ∩ A

But, when we look at the full explicit set of potential models (including the models when one of the statements is false) we will realise that it is impossible for an ace to be in the hand. Note that ¬ stands for negation. (¬A) means that the hand does not have an ace. The first possible scenario is when statement one is true and statement two is false. The mental models for this are in the below table:

Statement 1 true  | K  | A  | K ∩ A
Statement 2 false | ¬Q | ¬A | ¬Q ∩ ¬A

Consider each column after the first as a potential possibility for how the dealt hand could be.

  • The first column means that the dealt hand will have a king and not have a queen. This looks good. There are no problems with this.
  • The second column means that the dealt hand will have an ace and not have an ace. We have reached a contradiction, which implies that this possibility is impossible.
  • The third column is also impossible as the first row has (A) and the second has (¬A).

If we look at the second possible scenario, which is when statement two is true and statement one is false, then we get the below table.

Statement 2 true  | Q  | A  | Q ∩ A
Statement 1 false | ¬K | ¬A | ¬K ∩ ¬A

Once again, consider each column after the first as a potential possibility for how the dealt hand could be.

  • The first column means that the dealt hand will have a queen and not have a king. This looks good. There are no problems with this.
  • The second column like in the first table is a contradiction and so is impossible.
  • The third column is also a contradiction and so is impossible.

If we remove the ace possibilities, as they lead to contradictions, we end up with the below table:

Statement 1 true  | K
Statement 2 false | ¬Q

Statement 2 true  | Q
Statement 1 false | ¬K

This table has two possibilities: the dealt hand contains a king, or the dealt hand contains a queen. Knowing this, we can now say that it is more likely for there to be a king in the dealt hand, as it is impossible for an ace to be in the hand. Therefore, we should bet that there is a king in the hand.
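For readers who prefer to check the reasoning mechanically, here is a small brute-force sketch (my own addition, not part of the original problem) that enumerates every combination of king/ace/queen presence and keeps only the hands for which exactly one of the two statements is true:

```python
from itertools import product

consistent = []
for king, ace, queen in product([False, True], repeat=3):
    s1 = king or ace    # "contains either a king or an ace (or both)"
    s2 = queen or ace   # "contains either a queen or an ace (or both)"
    if s1 != s2:        # exactly one of the statements is true
        consistent.append({"king": king, "ace": ace, "queen": queen})

print(consistent)
# Only two hands survive: queen-without-king and king-without-queen.
# No consistent hand contains an ace, so betting on the king is the better bet.
```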

How to learn a new area X that you have no idea about.

10 Elo 18 August 2015 05:42AM

This guide is in response to a request in the open thread. I would like to improve it; if you have some improvement to contribute I would be delighted to hear it! I hope it helps. It was meant to be a written-down form of "wait-stop-think" before approaching a new area.

This list is meant to be suggestive and not limiting.

I realise there are many object-level opportunities for better strategies but I didn't want to cover them in this meta-strategy.

It would be very easy to strawman this list, e.g. step 1 could be dismissed as a waste of time that people with half a brain don't need to cover. However, if you steelman each point it will hopefully make entire sense. (I would love this document to be stronger; if there is an obvious strawman I have probably missed it, so feel free to make a suggestion so that the point obviously reads in its steel form.)

 

Happy readings!


0. Make sure you have a growth mindset. Nearly anything can be learnt or improved on, aside from a few physical limits – e.g. being the best marathon runner is very difficult, but being a better marathon runner than you were yesterday is possible. (unknown time duration, changing one's mind)

 

  1. Make sure your chosen X is aligned with your actual goals (are you doing it because you want to?). When you want to learn a thing, is X that thing? (Example: if you want to exercise, maybe skiing isn't the best way to do it. Or maybe it is, because you live in a snow country.) (5-10 minutes)
  2. Check that you want to learn X and that it will be progress towards a goal (or is a terminal goal – i.e. learning to draw faces can be your terminal goal, or can help you to paint a person's portrait). (5 minutes, assuming you know your goals)
  3. Make a list of what you think that X is. Break it down. Followed by what you know about X, and if possible what you think you are missing about X. (5-30 minutes, no more than an hour)
  4. Do some research to confirm that your rough definition of X is actually correct. Confirm that what you know already is true, if not – replace that existing knowledge with true things about X. Do not jump into everything yet. (1-2 hours, no more than 5 hours)
  5. Figure out what experts in the area know (by topic area name), try to find what strategies experts in the area use to go about improving themselves. (expert people are usually a pretty good way to find things out) (1-2 hours, no more than about 5 hours)
  6. Find out what common mistakes are when learning X, and see if you can avoid them. (learn by other people's mistakes where possible as it can save time) (1-2 hours, no more than 5 hours)
  7. Check if someone is teaching about X. Chances are that someone is, and someone has listed what relevant things they teach about X. We live in the information age; it's probably all out there. If it's not, reconsider if you are learning the right thing. (if no learning is out there it might be hard to master without trial and error the hard way) (10-20mins, no more than 2 hours)
  8. Figure out the best resources on X. If this is taking too long; spend 10 minutes and then pick the best one so far. These can be books; people; wikipedia; Reddit or StackExchange; Metafilter; other website repositories; if X is actually safe – consider making a small investment and learn via trial and error. (i.e. frying an egg – the common mistakes probably won't kill you, you could invest in 50 eggs and try several methods to do it at little cost) (10mins, no more than 30mins)
  9. Confirm that these are still the original X, and not X2 or X3. (If you find you were actually looking for X2 or X3, go back over the early steps for Xn again.) (5mins)
  10. Consider writing to 5 experts and asking them for advice in X or in finding out about X. (5*20mins)
  11. Get access to the best resources possible. Estimate how much resource they will take to go over (time, money) and confirm you are okay with those investments. (postage of a book; a few weeks, 1-2 hours to order the thing maximum)
  12. Delve in; make notes as you go. If things change along the way, re-evaluate. (unknown, depends on the size of the area you are looking for.  consider estimating word-speed, total content volume, amount of time it will take to cover the territory)
  13. Write out the best things you needed to learn and publish them for others. (remembering you had foundations to go on – publish these as well) (10-20 hours, depending on the size of the field, possibly a summary of how to go about finding object-level information best)
  14. Try to find experiments you can conduct on yourself to confirm you are on the right track towards X, or ways to measure yourself. (measurement or testing is one of the most effective ways to learn) (1hour per experiment, 10-20 experiments)
  15. Try to teach X to other people. You can be empowering their lives, and teaching is a great way to learn, also making friends in the area of X is very helpful to keep you on task and enjoying X. (a lifetime, or also try 5-10 hours first, then 50 hours, then see if you like teaching)

Update: includes suggestion to search reddit, StackExchange; other web sources for the best resource.

Update: time estimate guide.

 

Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I?

6 PeterCoin 18 August 2015 12:53AM

Personal Statement

I like to think about big questions from time to time. A fancy that quite possibly causes me more harm than good. Every once in a while I come up with some idea and wonder "hey, this seems pretty good, I wonder if anyone is taking it seriously?" Usually, answering that results at worst in me wasting a couple of days on google and blowing $50 on amazon before I find someone who's going down the same path, and I can tell myself, "Well, someone's got that covered." This particular idea is a little more stubborn and the amazon bill is starting to get a little heavy. So I cobbled together this "paper" to get this idea out there and see where it goes.

I've been quite selective here and have only submitted it in two other places: Vixra and the FQXi forum. Vixra for posterity, in the bizarre case that it's actually right. FQXi because they play with some similar ideas (but the forum turned out to be not really vibrant for such things). I'm now posting it on Less Wrong because you guys seem to be the right balance of badass skeptics and open-minded geeks. In addition, I see a lot of cool work here on Anthropic Reasoning and the like, so it seems to go along with your theme.

Any and all feedback is welcome, I'm a good sport!

Abstract

A popular objection to the Many-worlds interpretation of Quantum Mechanics is that it allows for quantum suicide, where an experimenter creates a device that instantly kills him or leaves him be depending on the output of a quantum measurement; since he has no experience of the device killing him, he experiences quantum immortality. This is considered counter-intuitive and absurd. Presented here is a speculative argument that accepts the counter-intuitiveness and proposes it as a new approach to physical theory, without accepting some of the absurd conclusions of the thought experiment. The approach is based on the idea that the Universe is Fragile, in that only a fraction of its time-evolved versions retain the familiar structures of people and planets, but the fractions that do not are never observed. This presents us with a skewed view of physics, and only by accounting for this fact (which I propose calling the Continual Anthropic Principle) can we understand the true fundamental laws.

Preliminary reasoning

Will a supercollider destroy the Earth?

A fringe objection to the latest generation of high-energy supercolliders was that they might trigger some quantum event that would destroy the Earth, such as by turning it to strangelets (merely an example). To assuage those fears it has been noted that cosmic rays have been observed with higher energies than the collisions these supercolliders produce, so if a supercollider were able to create such Earth-destroying events, cosmic rays would have already destroyed the Earth. Since that hasn't happened, physics must not work that way and we thus must be safe.

A false application of the anthropic principle

One may try to cite the anthropic principle as an appeal against the conclusion that physics disallows Earth-destruction by said mechanism. If the Earth were converted to strangelets, there would be no observers on it. If the right sort of multiverse exists, some Earths will be lucky enough to escape this mode of destruction. Thus physics may still allow for strangelet destruction and supercolliders may still destroy the world. We can reject that objection by noting that if that were the case, it is far more probable that our planet would be alone in a sea of strangelet balls that were already converted by high-energy cosmic rays. Since we observe other worlds made of ordinary matter, we can be sure physics doesn't allow for the Earth to be converted into strange matter by interactions at Earth's energy level.

Will a supercollider destroy the universe?

Among the ideas on how supercolliders might destroy the world, there are some that destroy not just the Earth but the entire universe as well. A proposed mechanism is triggering the vacuum energy to collapse to a new lower energy state. By that mechanism the destructive event spreads out from the nucleation site at the speed of light and shreds the universe into something completely unrecognizable. In the same way that cosmic rays rule out an Earth-destroying event, it has been said that they rule out a universe-destroying event.

Quantum immortality and suicide

Quantum suicide is a thought experiment in which a device measures a random quantum event and kills an experimenter instantly upon one outcome, and leaves him alive upon the other. If Everett's many worlds is true, then no matter how many times the experiment is performed, the experimenter will only experience the outcome where he is not killed, thus experiencing subjective immortality. There are some pretty nutty ideas about quantum suicide and immortality, and this has been used as an argument against many-worlds. I find the idea of finding oneself, for example, perpetually avoiding fatal accidents or living naturally well beyond any reasonable time to be mistaken (see objections). I do, however, think that Max Tegmark came up with a good system of rules on his "crazy" page for how it might work: http://space.mit.edu/home/tegmark/crazy.html

The rules he outlines are: "I think a successful quantum suicide experiment needs to satisfy three criteria:

1. The random number generator must be quantum, not classical (deterministic), so that you really enter a superposition of dead and alive.

2. It must kill you (at least make you unconscious) on a timescale shorter than that on which you can become aware of the outcome of the quantum coin-toss - otherwise you'll have a very unhappy version of yourself for a second or more who knows he's about to die for sure, and the whole effect gets spoiled.

3. It must be virtually certain to really kill you, not just injure you.”

Have supercolliders destroyed the universe? 

Let's say that a given experiment has a certain "probability" (by a probabilistic interpretation of QM) of producing said universe-destroying event. This satisfies all 3 of Tegmark's conditions for a successful quantum suicide experiment. As such, the experimenter might conclude that said event cannot happen. However, he would be mistaken, and a corresponding percentage of successor states would in fact be ones where the event occurred. If the rules of physics are such that the event is allowed, then we have a fundamentally skewed perception of what the laws of physics are.

It's not a bug it's a feature!

If we presume such events could occur, we have no idea how frequent they are. There's no necessary reason why they need to be confined to rare high-energy experiments and cosmic rays. Perhaps they are dictated by more basic and fundamental interactions. For instance, certain events within an ordinary atomic nucleus could create a universe-destroying event. Even if these events occur at an astonishing rate, so long as there's a situation where the event doesn't occur (or is "undone" before the runaway effect can occur), it would not contradict our observations. The presumption that these events don't occur may be preventing us from understanding a simpler law that describes physics in a certain situation, in favor of more complex theories that limit behavior to that which we can observe.

Fragile Universe Hypothesis

Introduction

Because of this preliminary reasoning I am postulating what I call the "Fragile Universe Hypothesis". The core idea is that our universe is constantly being annihilated by various runaway events initiated by quantum phenomena. However, because for any such event there's always a possible path where the event does not occur, and since all possible paths are realized, we are presented with an illusion of stability. What we see as persistent structures in the universe (chairs, planets, galaxies) are persistent only because events that destroy them by and large destroy us as well. What we may think are fundamental laws of our universe are merely descriptions of the nature of possible futures consistent with our continued existence.

Core theory

The hypothesis can be summarized as postulating the following:

1. For a given event at Time T there are multiple largely non-interacting future successor events at T + ε (i.e. Everett Many Worlds is either correct or at least on the right track)

2. There are some events where some (but not all) successor events trigger runaway interactions that destroy the universe as we know it. Such events expand from the origin at c and immediately disrupt the consciousness of any being they encounter.

3. We experience only a subset of possible futures and thus have a skewed perspective of the laws of physics.

4. To describe the outcome of an experiment we must first calculate the possible outcomes and then filter out those that result in observer destruction (call this the "continual anthropic principle").
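To make the filtering step in point 4 concrete, here is a toy Monte Carlo sketch (entirely my own illustration, with an invented destruction probability) of how surviving observers end up with skewed statistics: branches in which the observer is destroyed are simply removed from the sample.

```python
import random

random.seed(0)
P_DESTROY = 0.3   # invented per-step chance that a branch destroys the observer

def surviving_fraction(steps=10, trials=100_000):
    survived = 0
    for _ in range(trials):
        destroyed = any(random.random() < P_DESTROY for _ in range(steps))
        if not destroyed:
            survived += 1
    return survived / trials

# Only about (1 - 0.3) ** 10, roughly 2.8%, of branches keep the observer, yet every
# surviving observer has seen zero destructive events and would estimate the
# per-step destruction probability as 0 rather than 0.3 - the skewed perspective.
print(f"branches with a surviving observer: {surviving_fraction():.3f}")
```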

Possible Objections

"If I get destroyed I die and will no longer have experiences. This is at face value absurd"

I'm sympathetic, and I'd say this requires a stretch of imagination to consider. But do note that under this hypothesis, no one will ever have an experience that isn't followed by a successive experience (see quantum immortality for discussion of death). So from our perspective our existence will go on unimpeded. As an example, consider a video game save. The game file can be saved, copied, compressed, decompressed, moved from medium to medium (with some files being deleted after being copied to a new location). We say that the game continues so long as someone plays at least one copy of the file. Likewise for us, we say life (or the universe as we know it) goes on so long as at least one successor continues.

"This sort of reasoning would result in having to accept absurdities like quantum immortality"

I don't think so. Quantum immortality (the idea that many worlds guarantees one immortality, as there will always be some future state in which one continues to exist) presumes that personhood is an all-or-nothing thing. In reality a person is more of a fragmented collection of mental processes. We don't suddenly stop having experiences as we die; rather the fragments unbind, some live on in the memory of others or in those experiencing the products of our expression, while others fade out. A destructive event of the kind proposed would absolutely be an all-or-nothing affair. Either everything goes, or nothing goes.

"This isn't science. What testable predictions are you making? Heck you don't even have a solid theory" 

Point taken! This is, at this point, speculation, but I think it might have the sort of elegance that good theories have. The questions that I have are:

1. Has this ever been seriously considered? (I’ve done some homework but undoubtedly not enough).

2. Are there any conceptual defeaters that make this a nonstarter?

3. Could some theories be made simpler by postulating a fragile universe and continual anthropic principle?

4. Could those hypothetical theories make testable predictions?

5. Have those tests been consistent with the theory?

My objective in writing this is to provide an argument against 2, and to start looking into 1 and 3. 4 and 5 are essential to good science as well, but we're simply not at that point yet.

Final Thoughts

The Copernican Principle for Many worlds

When we moved the Earth away from the center of the solar system, the orbits of the other planets became simpler and clearer. Perhaps physical law can be made simpler and clearer when we move the futures we will experience away from the center of possible futures. And like the solar system's habitable zone, perhaps only a small portion of futures are habitable.

Why confine the Anthropic Principle to the past? 

Current models of cosmology limit the impact of anthropic selection on the cosmos to the past: string landscapes, bubble universes or cosmic branes - these things all got fixed at some set of values 13 billion years ago, and the selection effect does no more work at the cosmic scale. Perhaps the selection effect is more fundamental than that. Could it be that 13 billion years ago is instead when anthropic selection merely switched from being creative in sowing our cosmic seeds to conservative in allowing them to grow?

Does random reward evoke stronger habits?

1 Bound_up 17 August 2015 09:03PM

http://measureofdoubt.com/2011/04/12/pulling-levers-killing-monsters-the-lure-of-unpredictable-rewards/ (how do I put a link like this in a word with blue letters?)

I've read that unpredictable rewards associated with a behavior actually encourage that behavior more effectively than consistent rewards.

The optimal habit-forming figure given in the link above is a 25% chance of reward for each instance of performing the behavior.

My hypothesis, then, is that if I want to establish a habit by rewarding myself upon successfully performing a certain task, I should reward myself only 25% of the time if I want to ingrain the habit as forcefully as possible into my unconscious.
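If you wanted to apply this mechanically, a trivial sketch (my own, not from the linked article) would be to roll a random number after each completed task and only take the reward about a quarter of the time:

```python
import random

def reward_due(p=0.25):
    """Return True with probability p; call once per completed instance of the habit."""
    return random.random() < p

if reward_due():
    print("Task done - take the reward this time.")
else:
    print("Task done - no reward this time.")
```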

 

Anyone else think so, or have any other research to add?

Predict - "Log your predictions" app

13 Gust 17 August 2015 04:20PM

As an exercise in programming for Android, I've made an app to log predictions you make and keep score of your results. Like PredictionBook, but with more of a personal daily exercise feel, in line with this post.

The "statistics" right now are only a score I copied from the old Credence calibration game, and a calibration bar chart.

Features I think might be worth adding:

  • Daily notifications to remember to exercise your prediction ability
  • Maybe with trivia questions you can answer if you don't have any personal prediction to make

I'm hoping for suggestions for features and criticism of the app design.

Here's the link for the apk, and here's the source code repository.

 

Edit:

2015-08-26 - Fixed a bug that broke the app on Android 5.0.2 (thanks Bobertron)

[LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem

12 gjm 17 August 2015 08:41AM

The excellent Scott Aaronson has posted on his blog a version of a talk he recently gave at SPARC, about Aumann's agreement theorem and related topics. I think a substantial fraction of LW readers would enjoy it. As well as stating Aumann's theorem and explaining why it's true, the article discusses other instances where the idea of "common knowledge" (the assumption that does a lot of the work in the AAT) is important, and offers some interesting thoughts on the practical applicability (if any) of the AAT.

(Possibly relevant: an earlier LW discussion of AAT.)

Open thread, Aug. 17 - Aug. 23, 2015

3 MrMind 17 August 2015 07:05AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Rational approach to finding life partners

1 c_edwards 16 August 2015 05:07PM

Speaking from personal experience, finding the right relationship can be HARD. I recently came across a rational take on finding relationship partners, much of which really resonated with my experiences:

http://waitbutwhy.com/2014/02/pick-life-partner.html

http://waitbutwhy.com/2014/02/pick-life-partner-part-2.html

 

(I'm still working my way through the Sequences, and lw has more than eight thousand articles with "relationship" in them. I'm not promising the linked articles include unique information)

Yvain's most important articles

20 casebash 16 August 2015 08:27AM

Important

  • Meditations on Moloch: An explanation of co-ordination problems within our society
  • Weak Men are Superweapons (supplement - feminists will like this one less)
  • The Virtue of Silence - silence is a hard virtue
  • You Kant Dismiss Universalizability - Kant is about not proposing rules that would be self-defeating
  • The Spirit of the First Amendment
  • Red Plenty - Why communism failed
  • All in all, another brick in the motte - Motte-and-bailey doctrine
  • Intellectual Hipsters and Meta-Contrarianism
  • Burdens - society owes people an existence
  • Reactionary Philosophy in an Enormous, Planet-sized Nutshell
  • Anti-reactionary FAQ
  • Right is the new Left
  • Archipelago and Atomic Communitarianism - different countries based on different principles
  • Parable of the talents - nature vs. nurture
  • Why I defend scoundrels
  • Nobody is perfect, Everything is Commensurable
  • The categories were made for man, not man for the categories - hairdryer incident
  • Non-conformism
  • Toxoplasma of rage - why the most divisive issues will always spread
  • Towards a theory of drama, Further towards a theory of drama
  • All debates are bravery debates
  • I can tolerate anything except the outgroup - what tolerance really means
  • Who by very slow decay - Euthanasia
  • Non-libertarian FAQ
  • Consequentialism FAQ
  • Efficient Charity: Do Unto Others
  • Eight Short Studies on Excuses
  • Generalising from one example
  • Game theory as a dark art
  • What is signaling really?
  • Book review: Chronicles of wasted time
  • The biodeterminists guide to parenting

Social Justice General

  • Offense versus harm minimisation
  • Fearful Symmetry - Politicization, Micro-aggressions, Hypervigilance
  • In favor of niceness, community and civilisation - Importance of the social contract
  • Radicalizing the romanceless - Complaints about "Nice Guys"
  • Living by the sword - whales and cancer
  • Social justice for the highly-demanding of rigour
  • Meditations on Privilege 1 - India (Meditation 2 - follow up)
  • Meditation 3 - Creepiness
  • Meditation 5 - True love and creepiness
  • Meditation 8 on Superweapons and Bingo
  • Triggers
  • I believe the correct term is "straw individual"
  • Five case studies on politicization

Social Justice Careful

  • Why I defend scoundrels part 2
  • Untitled - Arguments against nerds being privileged. How feminism makes some men afraid to talk to women.
  • Social Justice and Words, Words, Words - What privilege means vs. what feminists say it means
  • A Response to Apophemi on Triggers - Should the rationality community be a safe space?
  • Meditation on Applause Lights
  • Fetal Attraction: Abortion and the Principle of Charity
  • Arguments about Male Violence Prove too Much
  • Mitt Romney
  • I do not understand rape culture

Useful concepts

  • Introduction to Game Theory - main ones:
  • Unspoken ground assumptions of discussion
  • Revenge as a charitable act
  • Should you reverse any advice you hear?
  • Joint Over And Underdiagnosis
  • Hope! Change! - how much change can we expect from our politicians
  • What universal human experiences are you missing without realizing it?
  • A Thrive-survive Theory of the Political Spectrum - included primarily for the section on how to get into a Republican mindset
  • Phatic and anti-inductive
  • Read History of Philosophy Backwards
  • Against bravery debates
  • Searching for One-Sided Tradeoffs
  • Proving too much
  • Non-central fallacy
  • Schelling fences on slippery slopes
  • Purchase fuzzies and utilons separately
  • Beware isolated demands for rigour
  • Diseased thinking: dissolving questions about disease
  • Confidence levels inside and outside an argument
  • Least convenient possible world
  • Giving and accepting apologies
  • Epistemic learned helplessness
  • Approving reinforces low-effort behaviors - wanting/liking/approving
  • What's in a name
  • How not to lose an argument
  • Beware trivial inconveniences
  • When truth isn't enough
  • Why support the underdog?
  • Applied picoeconomics
  • A signaling theory of class x politics interaction
  • That other kind of status
  • A parable on obsolete ideologies
  • The Courtier's Reply and the Myers Shuffle
  • Talking snakes: A cautionary tale
  • Beware the man of one study
  • My id on defensiveness - Projective identification

Interesting

  • Bogus Pipeline, Bona Fide Pipeline
  • The Zombie Preacher of Somerset
  • Rational home buying
  • Apologia Pro Vita Sua - "drugs mysteriously find their own non-fungible money"
  • "I appreciate the situation"
  • A Babylon 5 Story
  • Money, money, everywhere, but not a cent to spend - that $5000 can be a crippling debt for some people
  • Social Psychology is a Flamethrower
  • Fish - Now by Prescription
  • An Iron Curtain has descended upon Psychopharmacology - Russian medicines being ignored
  • The Control Group is out of Control - parapsychology
  • Schizophrenia and geomagnetic storms
  • And I show you how deep the Rabbit Hole Goes - story, purely for entertainment value
  • Five years and one week of less wrong - interesting for readers of Less Wrong only
  • Highlights from my notes from another psychiatry conference - Schizophrenia
  • The apologist and the revolutionary - Anosognosia and neuroscience

You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides]

13 Liron 16 August 2015 05:51AM

Here's a 32-minute presentation I made to provide an introduction to some of the core LessWrong concepts for a general audience:

You Are A Brain [YouTube]

You Are a Brain [Google Slides] - public domain

I already posted this here in 2009 and some commenters asked for a video, so I immediately recorded one six years later. This time the audience isn't teens from my former youth group, it's employees who work at my software company where we have a seminar series on Thursday afternoons.

Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels

9 Transfuturist 15 August 2015 10:45PM

Press Release

Edit: Soylent's Reply, provided by Trevor_Blake

OAKLAND, Calif., Aug. 13, 2015 /PRNewswire-USNewswire/ -- As You Sow, a non-profit environmental-health watchdog, today filed a notice of intent to bring legal action against Soylent, a "meal replacement" powder recently featured in New York Times and Forbes stories reporting that workers in Silicon Valley are drinking their meals, eliminating the need to eat food. The 60-day notice alleges violation of California's Safe Drinking Water and Toxic Enforcement Act for failure to provide sufficient warning to consumers of lead and cadmium levels in the Soylent 1.5 product.

Test results commissioned by As You Sow, conducted by an independent laboratory, show that one serving of Soylent 1.5 can expose a consumer to a concentration of lead that is 12 to 25 times above California's Safe Harbor level for reproductive health, and a concentration of cadmium that is at least 4 times greater than the Safe Harbor level for cadmium. Two separate samples of Soylent 1.5 were tested.

According to the Soylent website, Soylent 1.5 is "designed for use as a staple meal by all adults." The startup recently raised $20 million in funding led by venture capital firm Andreessen Horowitz.

"Nobody expects heavy metals in their meals," said Andrew Behar, CEO of As You Sow. "These heavy metals accumulate in the body over time and, since Soylent is marketed as a meal replacement, users may be chronically exposed to lead and cadmium concentrations that exceed California's safe harbor level (for reproductive harm). With stories about Silicon Valley coders sometimes eating three servings a day, this is of very high concern to the health of these tech workers."

Lead exposure is a significant public health issue and is associated with neurological impairment, such as learning disabilities and lower IQ, even at low levels. Chronic exposure to cadmium has been linked to kidney, liver, and bone damage in humans.

Since 1992, As You Sow has been a leading enforcer of California's Safe Drinking Water and Toxic Enforcement Act, with enforcement actions resulting in removal of lead from children's jewelry, formaldehyde from portable classrooms, and lead from baby powder.
