Why people want to die

46 PhilGoetz 24 August 2015 08:13PM

Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty.  The futurist tells them that they think that way now, but that they'll change their minds when they're older.

The thing is, I don't see that happening.  I live in a small town full of retirees, and those few I've asked about it are waiting for death peacefully.  When I ask them about their ambitions, or things they still want to accomplish, they have none.

Suppose that people mean what they say.  Why do they want to die?

continue reading »

Yvain's most important articles

21 casebash 16 August 2015 08:27AM

Important

  • Meditations on Moloch: An explanation of co-ordination problems within our society
  • Weak Men are Superweapons (supplement - feminists will like this one less)
  • The Virtue of Silence - silence is a hard virtue
  • You Kant Dismiss Universalizability - Kant is about not proposing rules that would be self-defeating
  • The Spirit of the First Amendment
  • Red Plenty - Why communism failed
  • All in all, another brick in the motte - Motte-and-bailey doctrine
  • Intellectual Hipsters and Meta-Contrarianism
  • Burdens - society owes people an existence
  • Reactionary Philosophy in an Enormous, Planet-sized Nutshell
  • Anti-reactionary FAQ
  • Right is the new Left
  • Archipelago and Atomic Communitarianism - different countries based on different principles
  • Parable of the talents - nature vs. nurture
  • Why I defend scoundrels
  • Nobody is perfect, Everything is Commensurable
  • The categories were made for man, not man for the categories - hairdryer incident
  • Non-conformism
  • Toxoplasma of rage - why the most divisive issues will always spread
  • Towards a theory of drama, Further towards a theory of drama
  • All debates are bravery debates
  • I can tolerate anything except the outgroup - what tolerance really means
  • Who by very slow decay - Euthanasia
  • Non-libertarian FAQ
  • Consequentialism FAQ
  • Efficient Charity: Do Unto Others
  • Eight Short Studies on Excuses
  • Generalising from one example
  • Game theory as a dark art
  • What is signaling really?
  • Book review: Chronicles of wasted time
  • The biodeterminist's guide to parenting

Social Justice General

  • Offense versus harm minimisation
  • Fearful Symmetry - Politicization, Micro-aggressions, Hypervigilance
  • In favor of niceness, community and civilisation - Importance of the social contract
  • Radicalizing the romanceless - Complaints about "Nice Guys"
  • Living by the sword - whales and cancer
  • Social justice for the highly-demanding of rigour
  • Meditations on Privilege 1 - India (Meditation 2 - follow up)
  • Meditation 3 - Creepiness
  • Meditation 5 - True love and creepiness
  • Meditation 8 on Superweapons and Bingo
  • Triggers
  • I believe the correct term is "straw individual"
  • Five case studies on politicization

Social Justice Careful

  • Why I defend scoundrels part 2
  • Untitled - Arguments against nerds being privileged. How feminism makes some men afraid to talk to women.
  • Social Justice and Words, Words, Words - What privilege means vs. what feminists say it means
  • A Response to Apophemi on Triggers - Should the rationality community be a safe space?
  • Meditation on Applause Lights
  • Fetal Attraction: Abortion and the Principle of Charity
  • Arguments about Male Violence Prove too Much
  • Mitt Romney
  • I do not understand rape culture

Useful concepts

  • Introduction to Game Theory - main ones:
  • Unspoken ground assumptions of discussion
  • Revenge as a charitable act
  • Should you reverse any advice you hear?
  • Joint Over And Underdiagnosis
  • Hope! Change! - how much change can we expect from our politicians
  • What universal human experiences are you missing without realizing it?
  • A Thrive-survive Theory of the Political Spectrum - included primarily for the section on how to get into a Republican mindset
  • Phatic and anti-inductive
  • Read History of Philosophy Backwards
  • Against bravery debates
  • Searching for One-Sided Tradeoffs
  • Proving too much
  • Non-central fallacy
  • Schelling fences on slippery slopes
  • Purchase fuzzies and utilons separately
  • Beware isolated demands for rigour
  • Diseased thinking: dissolving questions about disease
  • Confidence levels inside and outside an argument
  • Least convenient possible world
  • Giving and accepting apologies
  • Epistemic learned helplessness
  • Approving reinforces low-effort behaviors - wanting/liking/approving
  • What's in a name
  • How not to lose an argument
  • Beware trivial inconveniences
  • When truth isn't enough
  • Why support the underdog?
  • Applied picoeconomics
  • A signaling theory of class x politics interaction
  • That other kind of status
  • A parable on obsolete ideologies
  • The Courtier's Reply and the Myers Shuffle
  • Talking snakes: A cautionary tale
  • Beware the man of one study
  • My id on defensiveness - Projective identification

Interesting

  • Bogus Pipeline, Bona Fide Pipeline
  • The Zombie Preacher of Somerset
  • Rational home buying
  • Apologia Pro Vita Sua - "drugs mysteriously find their own non-fungible money"
  • "I appreciate the situation"
  • A Babylon 5 Story
  • Money, money, everywhere, but not a cent to spend - that $5000 can be a crippling debt for some people
  • Social Psychology is a Flamethrower
  • Fish - Now by Prescription
  • An Iron Curtain has descended upon Psychopharmacology - Russian medicines being ignored
  • The Control Group is out of Control - parapsychology
  • Schizophrenia and geomagnetic storms
  • And I show you how deep the Rabbit Hole Goes - story, purely for entertainment value
  • Five years and one week of less wrong - interesting for readers of Less Wrong only
  • Highlights from my notes from another psychiatry conference - Schizophrenia
  • The apologist and the revolutionary - Anosognosia and neuroscience

My future posts; a table of contents.

17 Elo 30 August 2015 10:27PM

My future posts

I have been living in the LessWrong rationality space for at least two years now, and recently I have been more active than before. This has been deliberate. I plan to make more serious posts in the future, and in saying so I wanted to announce the posts I intend to make going forward.  This should do a few things:

  1. keep me on track
  2. keep me accountable to me more than anyone else
  3. keep me accountable to others
  4. allow others to pick which they would like to be created sooner
  5. allow other people to volunteer to create/collaborate on these topics
  6. allow anyone to suggest more topics
  7. meta: this post should help to demonstrate one person's method of developing rationality content and the time it takes to do that.
Feel free to PM me about item 6, or comment below.

Unfortunately these are not very well organised; they are presented in no particular order.  They are probably missing posts that would help link them all together, as well as posts on the skills required to understand some of the entries on this list.


Unpublished but written:

A very long list of sleep maintenance suggestions – I wrote up all the ideas I knew of; there are about 150. It is worth reviewing just to see if you can improve your sleep, because good sleep makes a massive difference to quality of life. (20mins to write an intro)

A list of techniques to help you remember names - remembering names is low-hanging social fruit that can improve many of your early social interactions with people. I wrote up a list of techniques to help. (5mins to post)

 

Posts so far:

The null result: a magnetic ring wearing experiment. - a fun one about how wearing magnetic rings was cool, but did not impart superpowers. (done)

An app list of useful apps for android - my current list of apps that I use; there are also some very good suggestions in the comments. (done)

How to learn X - how to attack the problem of learning a new area that you don't know a lot about (for a generic thing). (done)

A list of common human goals – for when you are plotting out goals that matter to you, so you can look over some common ones and see whether fulfilling them interests you. (done)

 

Future posts

Goals of your lesswrong group – Do you have a local group? Why? What do you want out of it (and do people know)? Setting goals, doing something in particular, having fun anyway, changing your mind. (4hrs)

 

Goals interrogation + Goal levels – Goal interrogation is about asking <is this thing I want to do actually a goal of mine> and <is this the best way to achieve that>; goal levels are something out of Sydney Lesswrong that help you have mutual long-term goals and supporting short-term goals. (2hrs)

 

How to human – A zero to human guide. A guide for basic functionality of a humanoid system. (4hrs)

 

How to effectively accrue property – Have you just spent more than an object's value acquiring it? How to think about that and try to do better. (5hrs)

 

List of strategies for getting shit done – working around the limitations of your circumstances and understanding what can get done with the resources you have at hand. (4hrs)

 

List of superpowers and kryptonites – on asking the questions "what are my superpowers?" and "what are my kryptonites?". Knowledge is power; working with your powers and working out how to avoid your kryptonites is a way to improve yourself. (6hrs over a week)

 

List of effective behaviours – small life-improving habits that add together to make awesomeness from nothing. And how to pick them up. (8hrs over 2 weeks)

 

Memory and notepads – writing notes as evidence, the value of notes (they are priceless) and what you should do. (1hr + 1hr over a week)

 

Suicide prevention checklist – feeling off? You should have already outsourced the hard work of "things I should check on about myself" to your past self. Make it easier for future you, especially in the times when you might be vulnerable. (4hrs)

 

"Make it easier for future you, especially in the times when you might be vulnerable" - as its own post, on curtailing bad habits. (5hrs)

 

A P=NP approach to learning – Sometimes you have to learn things the long way, but sometimes there is a shortcut: where you could say, "I wish someone had just taken me on the easy path early on". It's not a perfect idea, but start looking for the shortcuts where you might be saying "I wish someone had told me". Of course my line now is "but I probably wouldn't have listened anyway", which is something that can be worked on as well. (2hrs)

 

Rationalist's guide to dating – attraction, relationships, doing things with a known preference. Don't like stupid people? Don't try to date them. Think first; an exercise in thinking hard about things before trying trial-and-error on the world. (half written, needs improving, 2hrs)

 

Training inherent powers (weights, temperatures, smells, estimation powers) – practice makes perfect, right? Imagine if you always knew the temperature, the weight of things by lifting them, the composition of foods by tasting them, the distance between things without measuring. How can we train and improve these skills? (2hrs)

 

Strike to the heart of the question - the strongest one, not the one you want to defeat – Steelman, not strawman. Don't ask "how do I win at the question"; ask "am I giving the best answer to the best question I can give?". (2hrs)

Time-Binding

17 Viliam 14 August 2015 05:38PM

(I started reading Alfred Korzybski, the famous 20th-century rationalist. Instead of the more famous Science and Sanity I started with Manhood of Humanity, which was written first, because I expected it to be simpler and possibly to provide context necessary for the later book. I will post my re-telling of the book in shorter parts, to make writing and discussion easier. This post covers approximately the first quarter of the book.)

 

The central question of Manhood of Humanity is: "What is a human?" Answering this question correctly could help us design a civilization allowing the fullest human development. Failure to answer this question correctly will repeat the cycle of revolutions and wars.

We should aim to answer this question precisely, using the best ways of thinking typically seen in exact sciences -- as opposed to verbal metaphysics and tribal fights often seen in social sciences. We should make our "science of human" more predictive, which will likely also make it progress faster.

According to Korzybski, the unique quality of humans is what he calls "time-binding", described as "the capacity of an individual or a generation to begin where the former left off". Science itself is a glorious example of time-binding. On the other hand, we can observe the worst failures in psychiatric cases. This is a scale of our ability to adjust to facts and reality, and normal people are somewhere in between.

continue reading »

Instrumental Rationality Questions Thread

14 AspiringRationalist 22 August 2015 08:25PM

This thread is for asking the rationalist community for practical advice.  It's inspired by the stupid questions series, but with an explicit focus on instrumental rationality.

Questions ranging from easy ("this is probably trivial for half the people on this site") to hard ("maybe someone here has a good answer, but probably not") are welcome.  However, please stick to problems that you actually face or anticipate facing soon, not hypotheticals.

As with the stupid questions thread, don't be shy: everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better. Please be respectful of other people's admitting ignorance, and don't mock them for it; they're doing a noble thing.

(See also the Boring Advice Repository)

Versions of AIXI can be arbitrarily stupid

14 Stuart_Armstrong 10 August 2015 01:23PM

Many people (including me) had the impression that AIXI was ideally smart. Sure, it was uncomputable, and there might be "up to finite constant" issues (as with anything involving Kolmogorov complexity), but it was, informally at least, "the best intelligent agent out there". This was reinforced by Pareto-optimality results, namely that there was no computable policy that performed at least as well as AIXI in all environments, and strictly better in at least one.

However, Jan Leike and Marcus Hutter have proved that AIXI can be, in some sense, arbitrarily bad. The problem is that AIXI is not fully specified, because the universal prior is not fully specified. It depends on a choice of an initial computing language (or, equivalently, of an initial Turing machine).

For the universal prior, this will only affect it up to a constant (though this constant could be arbitrarily large). However, for the agent AIXI, it could force it into continually bad behaviour that never ends.

For illustration, imagine that there are two possible environments:

  1. The first one is Hell, which will give ε reward whenever the AIXI outputs "0"; but the first time it outputs "1", the environment will give no reward forever after.
  2. The second is Heaven, which gives ε reward for outputting "0" and 1 reward for outputting "1", and is otherwise memoryless.

Now simply choose a language/Turing machine such that the ratio P(Hell)/P(Heaven) is higher than the ratio 1/ε. In that case, for any discount rate, the AIXI will always output "0", and thus will never learn whether it's in Hell or not (because it's too risky to find out). It will observe the environment giving reward ε after receiving "0", behaviour which is compatible with both Heaven and Hell. This keeps P(Hell)/P(Heaven) constant, and ensures the AIXI never does anything else.
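
To make the arithmetic concrete, here is a small sketch (my own illustration, not from Leike and Hutter's paper) comparing the value of the safe policy against a single exploratory "1", assuming geometric discounting:

```python
# Sketch under stated assumptions: compare the value of always outputting "0"
# with the value of trying "1" once, under a prior that heavily favours Hell.
# Discounting is geometric with factor gamma.

def value_always_zero(eps: float, gamma: float) -> float:
    # Both Heaven and Hell pay eps per step for "0".
    return eps / (1 - gamma)

def value_try_one(eps: float, gamma: float, p_hell: float) -> float:
    p_heaven = 1 - p_hell
    v_heaven = 1 / (1 - gamma)  # Heaven: "1" pays 1, now and forever after
    v_hell = 0.0                # Hell: the first "1" forfeits all future reward
    return p_heaven * v_heaven + p_hell * v_hell

eps, gamma = 0.01, 0.9
p_hell = 200 / 201  # chosen so P(Hell)/P(Heaven) = 200 > 1/eps = 100

print(value_always_zero(eps, gamma))      # 0.1
print(value_try_one(eps, gamma, p_hell))  # ~0.0498: exploring looks worse
# The comparison is independent of gamma, so this AIXI never explores.
```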

In fact, it's worse than this. If you use the prior to measure intelligence, then an AIXI that follows one prior can be arbitrarily stupid with respect to another.

[Link] Game Theory YouTube Videos

14 James_Miller 06 August 2015 04:17PM

I made a series of game theory videos that carefully go through the mechanics of solving many different types of games.  I optimized the videos for my future Smith College game theory students who will either miss a class, or get lost in class and want more examples.   I emphasize clarity over excitement.   I would be grateful for any feedback.

Is semiotics bullshit?

13 PhilGoetz 25 August 2015 02:09PM

I spent an hour recently talking with a semiotics professor who was trying to explain semiotics to me.  He was very patient, and so was I, and at the end of an hour I concluded that semiotics is like Indian chakra-based medicine:  a set of heuristic practices that work well in a lot of situations, justified by complete bullshit.

I learned that semioticians, or at least this semiotician:

  • believe that what they are doing is not philosophy, but a superset of mathematics and logic
  • use an ontology, vocabulary, and arguments taken from medieval scholastics, including Scotus
  • oppose the use of operational definitions
  • believe in the reality of something like Platonic essences
  • look down on logic, rationality, reductionism, the Enlightenment, and eliminative materialism.  He said that semiotics includes logic as a special, degenerate case, and that semiotics includes extra-logical, extra-computational reasoning.
  • seems to believe people have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis
  • claims materialism and reason each explain only a minority of the things they are supposed to explain
  • claims to have a complete, exhaustive, final theory of how thinking and reasoning works, and of the categories of reality.

When I've read short, simple introductions to semiotics, they didn't say this.  They didn't say anything I could understand that wasn't trivial.  I still haven't found one meaningful claim made by semioticians, or one use for semiotics.  I don't need to read a 300-page tome to understand that the 'C' on a cold-water faucet signifies cold water.  The only example he gave me of its use is in constructing more-persuasive advertisements.

(Now I want to see an episode of Mad Men where they hire a semiotician to sell cigarettes.)

Are there multiple "sciences" all using the name "semiotics"?  Does semiotics make any falsifiable claims?  Does it make any claims whose meanings can be uniquely determined and that were not claimed before semiotics?

His notion of "essence" is not the same as Plato's; tokens rather than types have essences, but they are distinct from their physical instantiation.  So it's a tripartite Platonism.  Semioticians take this division of reality into the physical instantiation, the objective type, and the subjective token, and argue that there are only 10 possible combinations of these things, which therefore provide a complete enumeration of the possible categories of concepts.  There was more to it than that, but I didn't follow all the distinctions. He had several different ways of saying "token, type, unbound variable", and seemed to think they were all different.

Really it all seemed like taking logic back to the middle ages.

Predict - "Log your predictions" app

13 Gust 17 August 2015 04:20PM

As an exercise in programming Android, I've made an app to log the predictions you make and keep score of your results. It is like PredictionBook, but with more of a personal daily-exercise feel, in line with this post.

The "statistics" right now are only a score I copied from the old Credence calibration game, and a calibration bar chart.

Features I think might be worth adding:

  • Daily notifications to remember to exercise your prediction ability
  • Maybe with trivia questions you can answer if you don't have any personal prediction to make

I'm hoping for suggestions for features and criticism of the app design.

Here's the link for the apk (v0.4), and here's the source code repository. You can also download it from the Google Play Store.

 

Edit:

2015-08-26 - Fixed bug that broke on Android 5.0.2 (thanks Bobertron)

2015-08-28 - Change layout for landscape mode, and add a better icon

2015-08-31 -

  • Daily notifications
  • Buttons at the expanded-item-layout (ht dutchie)
  • Show points won/lost in the snackbar when a prediction is answered
  • Translation to Portuguese

 

You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides]

13 Liron 16 August 2015 05:51AM

Here's a 32-minute presentation I made to provide an introduction to some of the core LessWrong concepts for a general audience:

You Are A Brain [YouTube]

You Are a Brain [Google Slides] - public domain

I already posted this here in 2009 and some commenters asked for a video, so I immediately recorded one six years later. This time the audience isn't teens from my former youth group, it's employees who work at my software company where we have a seminar series on Thursday afternoons.

Book Review: Naive Set Theory (MIRI research guide)

13 David_Kristoffersson 14 August 2015 10:08PM

I'm David. I'm reading through the books in the MIRI research guide and will write a review for each as I finish them. By way of inspiration from how Nate did it.

Naive Set Theory

Halmos' Naive Set Theory is a classic and dense little book on axiomatic set theory, written from a "naive" perspective.

Which is to say, the book won't dig to the depths of formality or philosophy; it focuses on getting you productive with set theory. The point is to give someone who wants to dig into advanced mathematics a foundation in set theory, as set theory is a fundamental tool used in a lot of mathematics.

Summary

Is it a good book? Yes.

Would I recommend it as a starting point, if you would like to learn set theory? No. The book has a terse presentation which makes it tough to digest if you aren't already familiar with propositional logic, perhaps some set theory already, and a bit of advanced mathematics in general. There are plenty of other books that can get you started there.

If you do have a somewhat fitting background, I think this should be a very competent pick to deepen your understanding of set theory. The author shows you the nuts and bolts of set theory and doesn't waste any time doing it.

Perspective of this review

I will first refer you to Nate's review, which I found to be a lucid take on it. I don't want to be redundant and repeat the good points made there, so I want to focus this review on the perspective of someone with a bit weaker background in math, and try to give some help to prospective readers with parts I found tricky in the book.

What is my perspective? While I've always had a knack for math, I only read about 2 months of mathematics at introductory university level, and not including discrete mathematics. I do have a thorough background in software development.

Set theory has eluded me. I've only picked up fragments. It's seemed very fundamental but school never gave me a good opportunity to learn it. I've wanted to understand it, which made it a joy to add Naive Set Theory to the top of my reading list.

How I read Naive Set Theory

Starting on Naive Set Theory, I quickly realized I wanted more meat to the explanations. What is this concept used for? How does it fit in to the larger subject of mathematics? What the heck is the author expressing here?

I supplemented heavily with wikipedia, math.stackexchange and other websites. Sometimes, I read other sources even before reading the chapter in the book. At two points, I laid down the book in order to finish two other books. The first was Gödel's Proof, which handed me some friendly examples of propositional logic. I had started reading it on the side when I realized it was contextually useful. The second was Concepts of Modern Mathematics, which gave me much of the larger mathematical context that Naive Set Theory didn't.

Consequently, while reading Naive Set Theory, I spent at least as much time reading other sources!

A bit into the book, I started struggling with the exercises. It simply felt like I hadn't been given all the tools to attempt the task. So I concluded I needed a better introduction to mathematical proofs, ordered some books on the subject, and postponed investing in the exercises of Naive Set Theory until I had gotten that introduction.

Chapters

In general, if the book doesn't offer you enough explanation on a subject, search the Internet. Wikipedia has numerous competent articles, math.stackexchange is overflowing with content, and there are plenty of additional sources available on the net. If you get stuck, try playing around with examples of sets on paper or in a text file. That's universal advice for math.

I'll follow with some key points and some highlights of things that tripped me up while reading the book.

Axiom of extension

The axiom of extension tells us how to distinguish between sets: Sets are the same if they contain the same elements. Different if they do not.
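
A loose programming analogy (mine, not Halmos'): Python sets already obey extensionality, comparing by membership alone.

```python
# Same members, so the same set: order and repetition are invisible.
print({1, 2, 2, 3} == {3, 2, 1})  # True
```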

Axiom of specification

The axiom of specification allows you to create subsets by using conditions. This is pretty much what is done every time set builder notation is employed.
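
The analogy to a filtering comprehension is direct (again my own illustration, not the book's):

```python
A = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
evens = {x for x in A if x % 2 == 0}  # { x in A : x is even }
print(evens)  # {0, 2, 4, 6, 8}
```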

Puzzled by the bit about Russell's paradox at the end of the chapter? http://math.stackexchange.com/questions/651637/russells-paradox-in-naive-set-theory-by-paul-halmos

Unordered pairs

The axiom of pairs allows one to create a new set that contains the two original sets.

Unions and intersections

The axiom of unions allows one to create a new set that contains all the members of the original sets.

Complements and powers

The axiom of powers allows one to, out of one set, create a set containing all the different possible subsets of the original set.
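
As a finite illustration (my own sketch), the power set of a 3-element set has 2^3 = 8 members:

```python
from itertools import chain, combinations

def powerset(s):
    # All subsets of s, from the empty set up to s itself.
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

print(len(powerset({1, 2, 3})))  # 8
```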

Getting tripped up about the "for some" and "for every" notation used by Halmos? Welcome to the club:
http://math.stackexchange.com/questions/887363/axiom-of-unions-and-its-use-of-the-existential-quantifier
http://math.stackexchange.com/questions/1368073/order-of-evaluation-in-conditions-in-set-theory

Using natural language rather than logical notation is common practice in mathematical textbooks. You'd better get used to it:
http://math.stackexchange.com/questions/1368531/why-there-is-no-sign-of-logic-symbols-in-mathematical-texts

The existential quantifiers tripped me up a bit before I absorbed it. In math, you can freely express something like "Out of all possible x ever, give me the set of x that fulfill this condition". In programming languages, you tend to have to be much more... specific, in your statements.

Ordered pairs

Cartesian products are used to represent plenty of mathematical concepts, notably coordinate systems.

Relations

Equivalence relations and equivalence classes are important concepts in mathematics.

Functions

Halmos is using some dated terminology and is in my eyes a bit inconsistent here. In modern usage, we have: injective, surjective, bijective and functions that are none of these. Bijective is the combination of being both injective and surjective. Replace Halmos' "onto" with surjective, "one-to-one" with injective, and "one-to-one correspondence" with bijective.

He also confused me with his explanation of "characteristic function" - you might want to check another source there.

Families

This chapter tripped me up heavily because Halmos mixes in three things at the same time on page 36: 1. A confusing way of talking about sets. 2. A convoluted proof. 3. The n-ary cartesian product.

Families are an alternative way of talking about sets. An indexed family is a set, with an index and a function in the background. A family of sets means a collection of sets, with an index and a function in the background. For Halmos' build-up to n-ary cartesian products, the deal seems to be that he teases out order without explicitly using ordered pairs. Golf clap. Try this one for the math.se treatment: http://math.stackexchange.com/questions/312098/cartesian-products-and-families

Inverses and composites

The inverses Halmos defines here are more general than the inverse functions described on wikipedia. Halmos' inverses work even when the functions are not bijective.

Numbers

The axiom of infinity states that there is a set of the natural numbers.
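
Concretely, the set-theoretic naturals are built by the von Neumann construction, which this toy snippet imitates (my illustration; it uses frozensets since sets of sets must be hashable):

```python
def succ(n: frozenset) -> frozenset:
    return n | frozenset({n})  # successor: n+ = n ∪ {n}

zero = frozenset()
one, two = succ(zero), succ(succ(zero))
print(len(two))  # 2: each natural number is the set of all smaller ones
```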

The Peano axioms

The Peano axioms can be modeled on the set-theoretic axioms. The recursion theorem guarantees that recursive functions exist.

Arithmetic

The principle of mathematical induction is put to heavy use in order to define arithmetic.

Order

Partial orders, total orders, well orders -- these are powerful mathematical concepts and are used extensively.

Some help on the way:
http://math.stackexchange.com/questions/1047409/sole-minimal-element-why-not-also-the-minimum
http://math.stackexchange.com/questions/367583/example-of-partial-order-thats-not-a-total-order-and-why
http://math.stackexchange.com/questions/225808/is-my-understanding-of-antisymmetric-and-symmetric-relations-correct
http://math.stackexchange.com/questions/160451/difference-between-supremum-and-maximum

Also, keep in mind that infinite sets like subsets of ω can muck up expectations about order. For example, a totally ordered set can have multiple elements without a predecessor.

Axiom of choice

The axiom of choice lets you, from any collection of non-empty sets, select an element from every set in the collection. The axiom is necessary to make these kinds of "choices" with infinite sets. In finite cases, one can construct functions for the job using the other axioms. That said, the axiom of choice often makes the job easier in finite cases too, so it is also used where it isn't strictly necessary.

Zorn's lemma

Zorn's lemma is used in similar ways to the axiom of choice - making infinite many choices at once - which perhaps is not very strange considering ZL and AC have been proven to be equivalent.

robot-dreams offers some help in following the massive proof in the book.

Well ordering

A well-ordered set is a totally ordered set with the extra condition that every non-empty subset of it has a smallest element. This extra condition is useful when working with infinite sets.

The principle of transfinite induction means that if the presence of all strict predecessors of an element always implies the presence of the element itself, then the set must contain everything. Why does this matter? It means you can draw conclusions about infinite sets beyond ω, where mathematical induction isn't sufficient.

Transfinite recursion

Transfinite recursion is an analogue of the ordinary recursion theorem, in a similar way that transfinite induction is an analogue of mathematical induction -- recursive functions for infinite sets beyond ω.

In modern lingo, what Halmos calls a "similarity" is an "order isomorphism".

Ordinal numbers

The axiom of substitution is called the axiom (schema) of replacement in modern use. It's used for extending counting beyond ω.

Sets of ordinal numbers

The counting theorem states that each well ordered set is order isomorphic to a unique ordinal number.

Ordinal arithmetic

The misbehavior of commutativity in arithmetic with ordinals tells us a natural fact about ordinals: if you tack on an element in the beginning, the result will be order isomorphic to what it is without that element. If you tack on an element at the end, the set now has a last element and is thus not order isomorphic to what you started with.
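
In modern notation, this is the standard example (my restatement, not quoted from Halmos):

```latex
\[
  1 + \omega \;=\; \sup_{n < \omega} (1 + n) \;=\; \omega
  \qquad\text{whereas}\qquad
  \omega + 1 \;\neq\; \omega ,
\]
% since \omega + 1 has a last element and \omega does not,
% ordinal addition is not commutative.
```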

The Schröder-Bernstein theorem

The Schröder-Bernstein theorem states that if X dominates Y, and Y dominates X, then X ~ Y (X and Y are equivalent).

Countable sets

Cantor's theorem states that every set always has a smaller cardinal number than the cardinal number of its power set.
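
The one-line heart of the diagonal proof, in modern notation (standard argument, my restatement): given any function f from X to its power set, consider

```latex
\[
  D \;=\; \{\, x \in X : x \notin f(x) \,\}.
\]
% If D = f(y) for some y in X, then y \in D \iff y \notin D, a
% contradiction; so no f : X \to \mathcal{P}(X) is surjective.
```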

Cardinal arithmetic

Read this chapter after Cardinal numbers.

Cardinal arithmetic is an arithmetic where just about all the standard operators do nothing (beyond the finite cases).
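
Stated precisely (a standard fact, my restatement, assuming the axiom of choice):

```latex
\[
  \kappa + \lambda \;=\; \kappa \cdot \lambda \;=\; \max(\kappa,\lambda)
  \qquad\text{for infinite cardinals } \kappa, \lambda ,
\]
% while exponentiation still does real work:
% 2^{\aleph_0} > \aleph_0 by Cantor's theorem.
```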

Cardinal numbers

Read this chapter before Cardinal arithmetic.

The continuum hypothesis asserts that there is no cardinal number between that of the natural numbers and that of the reals. The generalized continuum hypothesis asserts that, for all cardinal numbers including aleph-0 and beyond aleph-0, the next cardinal number in the sequence is the power set of the previous one.

Concluding reflections

I am at the same time humbled by the subject and empowered by what I've learned in this episode. Mathematics is a truly vast and deep field. To build a solid foundation in proofs, I will now go through one or two books about mathematical proofs. I may return to Naive Set Theory after that. If anyone is interested, I could post my impressions of other mathematical books I read.

I think Naive Set Theory wasn't the optimal book for me at the stage I was. And I think Naive Set Theory probably should be replaced by another introductory book on set theory in the MIRI research guide. But that's a small complaint on an excellent document.

If you seek to get into a new field, know the prerequisites. Build your knowledge in solid steps. Which, I guess, sometimes requires that you test your limits to find out where you really are.

The next book I start on from the research guide is bound to be Computability and Logic.

Peer-to-peer "knowledge exchanges"

13 snarles 08 August 2015 03:33PM

I wonder if anyone has thought about setting up an online community dedicated to peer-to-peer tutoring.  The idea is that if I want to learn "Differential Geometry" and know "Python programming", and you want to learn "Python programming" and know "Differential geometry," then we can agree to tutor each other online.  The features of the community would be to support peer-to-peer tutoring by:

  • Facilitating matchups between compatible tutors
  • Allowing for more than two people to participate in a tutoring arrangement
  • Providing reputation-based incentives to honor tutoring agreements and to put effort into tutoring
  • Allowing other members to "sit in" on tutoring sessions, if they are made public
  • Allowing the option to record tutoring sessions
  • Providing members with access to such recorded sessions and "course materials"
  • Providing a forum to arrange other events

With such functions, the community would have some overlap with other online learning platforms, but the focus of the community would be to provide free, quality personalized teaching.
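
As a toy sketch of the core matchup step (hypothetical; no such system exists yet, and the names below are made up), pairing members whose "can teach" and "want to learn" sets complement each other:

```python
def find_matches(members):
    # members: dict of name -> {"teaches": set, "wants": set}
    matches = []
    names = sorted(members)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            a_to_b = members[a]["teaches"] & members[b]["wants"]
            b_to_a = members[b]["teaches"] & members[a]["wants"]
            if a_to_b and b_to_a:  # each side offers something the other wants
                matches.append((a, b, a_to_b, b_to_a))
    return matches

members = {
    "alice": {"teaches": {"Python programming"}, "wants": {"Differential geometry"}},
    "bob": {"teaches": {"Differential geometry"}, "wants": {"Python programming"}},
}
print(find_matches(members))
# [('alice', 'bob', {'Python programming'}, {'Differential geometry'})]
```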

The LessWrong community could build the first version of this peer tutoring system.  It has people with broad interests, high intellectual standards, and many engineers who could help develop some of the infrastructure.  The first iteration of the community would be small, and many of the above features (e.g. a reputation system, and tools for facilitating matchups) would not be needed.  The first problems we would need to solve are:
  • Where should we host the community? (e.g. Google groups?)
  • What are some basic ground rules to ensure the integrity of the community and ensure safety?
  • Where can we provide a place for people to list which subjects they want to learn and which subjects they can teach?
  • Which software should we use for tutoring?
  • How can people publicize their tutoring schedule in case others want to "sit in"?
  • How can people record their tutoring sessions if they wish, and how can they make these available?
  • How should the community be administrated?  Who should be put in charge of organizing the development of the community?
  • How should we recruit new members?

 

[LINK] Scott Aaronson: Common knowledge and Aumann's agreement theorem

12 gjm 17 August 2015 08:41AM

The excellent Scott Aaronson has posted on his blog a version of a talk he recently gave at SPARC, about Aumann's agreement theorem and related topics. I think a substantial fraction of LW readers would enjoy it. As well as stating Aumann's theorem and explaining why it's true, the article discusses other instances where the idea of "common knowledge" (the assumption that does a lot of the work in the AAT) is important, and offers some interesting thoughts on the practical applicability (if any) of the AAT.

(Possibly relevant: an earlier LW discussion of AAT.)

Lesswrong real time chat

11 Elo 04 September 2015 02:29AM

This is a short post to say that I have started and am managing a Slack channel for LessWrong.

 

Slack has only an email-invite option which means that I need an email address for anyone who wants to join.

 

There is a web interface and a mobile app that is better than Google Hangouts.

 

If you are interested in joining; consider this one requirement:

  • You must be willing to be charitable in your conversations with your fellow lesswrongers.

To be clear, this means (including but not limited to):

  • Steelmanning, not strawmanning, in discussion
  • Respect for others
  • Patience

So far every conversation we have had has been excellent; there have been no problems at all, and everyone is striving towards a better understanding of each other.  This policy does not come out of a recognition of a failure to be charitable, but is a standard to set moving forward.  I have no reason to expect it will be broken, but all the same I feel it is valuable to have.



I would like this to have several goals and purposes (some of which were collaboratively developed with other lesswrongers in the chat; if more come up in the future, that would be good too):
  • an aim for productive conversations, to make progress on our lives.
  • a brains trust for life-advice in all kinds of areas where, "outsource this decision to others" is an effective strategy.
  • collaborative creation of further rationality content
  • a safe space for friendly conversation on the internet (a nice place to hang out)
  • A more coherent and stronger connected lesswrong
  • Development of better ideas and strategies in how to personally improve the world.

So far the chat has been operating by private invite from me for about two weeks as a trial.  In the meta-sense we are still low on the numbers required to keep the conversation as lively as I would like, and it goes quiet; I expect that to change with this post.  I have personally gained two very good friends already, whom I now talk to every day.  (Which coincidentally slowed me down from posting this notice, because I was too busy with other things and learning from new people.)

I realise this type of medium is not for all.  But I am keen to make it work.

I also realise that when people PM me their email addresses, other people will not see how many of you have already signed up.  So assume that others have already signed up, and don't hesitate to join.  If you are wondering whether you have anything to contribute, that's exactly the type of person we want to be inviting; by having that thought you mark yourself as the type of person who tries harder.  We want you (and others) to talk with us.

Yudkowsky, Thiel, de Grey, Vassar panel on changing the world

11 NancyLebovitz 01 September 2015 03:57PM

30 minute panel

The first question was why isn't everyone trying to change the world, with the underlying assumption that everyone should be. However, it isn't obviously the case that the world would be better if everyone were trying to change it. For one thing, trying to change the world mostly means trying to change other people. If everyone were trying to do it, this would be a huge drain on everyone's attention. In addition, some people are sufficiently mean and/or stupid that their efforts to change the world make things worse.

At the same time, some efforts to change the world are good, or at least plausible. Is there any way to improve the filter so that we get more ambition from benign people without just saying everyone should try to change the world, even if they're Osama bin Laden?

The discussion of why there's too much duplicated effort in science didn't bring up the problem of funding, which is probably another version of the problem of people not doing enough independent thinking.

There was some discussion of people getting too hooked on competition, which is a way of getting a lot of people pointed at the same goal. 

Link thanks to Clarity

Rationality Compendium

11 ScottL 23 August 2015 08:00AM

I want to create a rationality compendium (a collection of concise but detailed information about a particular subject), and I want to know whether you think this would be a good idea. The rationality compendium would essentially be a series of posts serving three purposes: a guide that LessWrong newbies can use to discover which resources to look into further; a refresher of the main concepts for LessWrong veterans; and a best-practices document explaining techniques that can be used to apply the core LessWrong/rationality concepts. These techniques should preferably have been verified to be useful in some way. Perhaps there will be some training-specific posts in which we can track whether people are actually finding the techniques useful.

I only want to write this because I am lazy. In this context, I mean lazy as it is described by Larry Wall:

Laziness: The quality that makes you go to great effort to reduce overall energy expenditure.

I think that a rationality compendium would not only prove that I have correctly understood the available rationality material, but it would also ensure that I am actually making use of this knowledge. That is, applying the rationality materials that I have learnt in ways that allow me to improve my life.

If you think that a rationality compendium is not needed or would not be especially helpful, then please let me know. I also want to point out that I don't think I am necessarily the best person to do this; I am only doing it because I don't see it being done by others.

For the rationality compendium, I plan to write a series of posts which should, as much as possible, be:

  • Using standard terms: less wrong specific terms might be linked to in the related materials section, but common or standard terminology will be used wherever possible.
  • Concise: the posts should just contain quick overviews of the established rationality concepts. They shouldn't be introducing "new" ideas. The one exception to this is if a new idea allows multiple rationality concepts to be combined and explained together. If existing ideas require refinement, then this should happen in a separate post, which the rationality compendium may link to if the post is deemed to be of high quality.
  • Comprehensive: links to all related posts, wikis or other resources should be provided in a related materials section. This is so that readers can dive deeper into materials that pique their interest while the posts themselves stay concise. The aim of the rationality compendium is to create a resource that is a condensed and distilled version of the available rationality materials. This means that it is not meant to be light reading, as a large number of concepts will be presented in each post.
  • Collaborative: the posts should go through many series of edits based on the feedback in the comments. I don't think that I will be able to create perfect first posts, but I am willing to expend some effort to iteratively improve the posts until they reach a suitable standard. I hope that enough people will be interested in a rationality compendium so that I can gain enough feedback to improve the posts. I plan for the posts to stay in discussion for a long time and will possibly rerun posts if it is required. I welcome all kinds of feedback, positive or negative, but request that you provide information that I can use to improve the posts.
  • Be related only to rationality: For example, concepts from AI or quantum mechanics won’t be mentioned unless they are required to explain some rationality concepts.
  • Ordered: the points in the compendium will be grouped according to overarching principles. 
I will provide a link to the posts created in the compendium here:

How to learn a new area X that you have no idea about.

10 Elo 18 August 2015 05:42AM

This guide is in response to a request in the open thread.  I would like to improve it; if you have some improvement to contribute I would be delighted to hear it!  I hope it helps.  It was meant to be a written-down form of "wait-stop-think" before approaching a new area.

This list is meant to be suggestive, not limiting.

I realise there are many object-level opportunities for better strategies, but I didn't want to cover them in this meta-strategy.

It would be very easy to strawman this list, e.g. step 1 could look like a waste of time that anyone with half a brain doesn't need to cover.  However, if you steelman each point it will hopefully make complete sense.  (I would love this document to be stronger; if there is an obvious strawman I have probably missed it, so feel free to suggest a rewording that obviously reads in the steel form of each point.)

 

Happy readings!


0. Make sure you have a growth mindset. Nearly anything can be learnt or improved on. Aside from a few physical limits – e.g. being the best marathon runner is very difficult, but being a better marathon runner than you were yesterday is possible. (unknown time duration, changing one's mind)

 

  1. Make sure your chosen X is aligned with your actual goals (are you doing it because you want to?). When you want to learn a thing; is X that thing? (Example: if you want to exercise; maybe skiing isn't the best way to do it. Or maybe it is because you live in a snow country) (5-10 minutes)
  2. Check that you want to learn X and that it will be progress towards a goal (or is a terminal goal – i.e. learning to draw faces can be your terminal goal, or can help you to paint a person's portrait). (5 minutes, assuming you know your goals)
  3. Make a list of what you think that X is. Break it down. Followed by what you know about X, and if possible what you think you are missing about X. (5-30 minutes, no more than an hour)
  4. Do some research to confirm that your rough definition of X is actually correct. Confirm that what you know already is true, if not – replace that existing knowledge with true things about X. Do not jump into everything yet. (1-2 hours, no more than 5 hours)
  5. Figure out what experts in the area know (by topic area name), try to find what strategies experts in the area use to go about improving themselves. (expert people are usually a pretty good way to find things out) (1-2 hours, no more than about 5 hours)
  6. Find out what common mistakes are when learning X, and see if you can avoid them. (learn by other people's mistakes where possible as it can save time) (1-2 hours, no more than 5 hours)
  7. Check if someone is teaching about X. Chances are that someone is, and someone has listed what relevant things they teach about X. We live in the information age; it's probably all out there. If it's not, reconsider whether you are learning the right thing. (if no learning material is out there it might be hard to master without trial and error the hard way) (10-20mins, no more than 2 hours)
  8. Figure out the best resources on X. If this is taking too long; spend 10 minutes and then pick the best one so far. These can be books; people; wikipedia; Reddit or StackExchange; Metafilter; other website repositories; if X is actually safe – consider making a small investment and learn via trial and error. (i.e. frying an egg – the common mistakes probably won't kill you, you could invest in 50 eggs and try several methods to do it at little cost) (10mins, no more than 30mins)
  9. Confirm that these are still the original X, and not X2 or X3. (if you find you were actually looking for X2 or X3, go back over the early steps for Xn again) (5mins)
  10. Consider writing to 5 experts and asking them for advice in X or in finding out about X. (5*20mins)
  11. Get access to the best resources possible. Estimate how much resource they will take to go over (time, money) and confirm you are okay with those investments. (postage of a book; a few weeks, 1-2 hours to order the thing maximum)
  12. Delve in; make notes as you go. If things change along the way, re-evaluate. (unknown, depends on the size of the area you are looking for.  consider estimating word-speed, total content volume, amount of time it will take to cover the territory)
  13. Write out the best things you needed to learn and publish them for others. (remembering you had foundations to go on – publish these as well) (10-20 hours, depending on the size of the field, possibly a summary of how to go about finding object-level information best)
  14. Try to find experiments you can conduct on yourself to confirm you are on the right track towards X, or ways to measure yourself (measurement or testing is one of the most effective ways to learn). (1hour per experiment, 10-20 experiments)
  15. Try to teach X to other people. You can be empowering their lives, and teaching is a great way to learn, also making friends in the area of X is very helpful to keep you on task and enjoying X. (a lifetime, or also try 5-10 hours first, then 50 hours, then see if you like teaching)

Update: now includes the suggestion to search Reddit, StackExchange and other web sources for the best resources.

Update: added a time estimate guide.

 

Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun"

9 contravariant 28 August 2015 01:12AM

I recently encountered something that is, in my opinion, one of the most absurd failure modes of the human brain. I first encountered this after introspection on useful things that I enjoy doing, such as programming and writing. I noticed that my enjoyment of the activity doesn't seem to help much when it comes to motivation for earning income. This was not boredom from too much programming, as it did not affect my interest in personal projects. What it seemed to be was the brain categorizing activities into "work" and "fun" boxes. On one memorable occasion, after taking a break because I was exhausted with work, I entertained myself by programming some more, this time on a hobby personal project (as a freelancer, I pick the projects I work on, so this is not from being told what to do). Relaxing by doing the exact same thing that made me exhausted in the first place.

The absurdity of this becomes evident when you think about what distinguishes "work" from "fun" in this case, which is added value. Nothing changes about the activity except the addition of more utility, making a "work" strategy always dominate a "fun" strategy, assuming the activity is the same. If you are having fun doing something, handing you some money can't make you worse off. Yet making an outcome better makes you avoid it. This means the brain is adopting a strategy that has a (side?) effect of minimizing future utility, and it seems to be utility and not just money here: as anyone who took a class in an area that personally interested them knows, other benefits like grades recreate this effect just as well. This is the reason I think this is among the most absurd biases. I can understand akrasia, wanting the happiness now and hyperbolically discounting what happens later, or biases that make something seem like the best option when it really isn't. But knowingly punishing what brings happiness just because it also benefits you in the future? It's like the discounting curve dips into the negative region. I would really like to learn where the dividing line is between the kinds of added value that create this effect and the kinds that don't (money obviously does, and immediate enjoyment obviously doesn't). Currently I'm led to believe that the difference is present utility vs. future utility (as I mentioned above), or final vs. instrumental goals; please correct me if I'm wrong here.

This is an effect that has been studied in psychology and called the overjustification effect, called that because the leading theory explains it in terms of the brain assuming the motivation comes from the instrumental gain instead of the direct enjoyment, and then reducing the motivation accordingly. This would suggest that the brain has trouble seeing a goal as being both instrumental and final, and for some reason the instrumental side always wins in a conflict. However, its explanation in terms of self-perception bothers me a little, since I find it hard to believe that a recent creation like self-perception can override something as ancient and low-level as enjoyment of final goals. I searched LessWrong for discussions of the overjustification effect, and the ones I found discussed it in the context of self-perception, not decision-making and motivation. It is the latter that I wanted to ask for your thoughts on.

 

Manhood of Humanity

9 Viliam 24 August 2015 06:31PM

(This is my re-telling of Korzybski's Manhood of Humanity. First part here.)

continue reading »

AI, cure this fake person's fake cancer!

9 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

An idea for how we might successfully get useful work out of a powerful AI.

 

The ultimate box

Assume that we have an extremely detailed model of a sealed room, with a human in it and enough food, drink, air, entertainment, energy, etc... for the human to survive for a month. We have some medical equipment in the room - maybe a programmable set of surgical tools, some equipment for mixing chemicals, a loud-speaker for communication, and anything else we think might be necessary. All these objects are specified within the model.

We also have some defined input channels into this abstract room, and output channels from this room.

The AI's preferences will be defined entirely with respect to what happens in this abstract room. In a sense, this is the ultimate AI box: instead of taking a physical box and attempting to cut it out from the rest of the universe via hardware or motivational restrictions, we define an abstract box where there is no "rest of the universe" at all.

 

Cure cancer! Now! And again!

What can we do with such a setup? Well, one thing we could do is to define the human in such a way that they have some form of advanced cancer. We define what "alive and not having cancer" counts as, as well as we can (the definition need not be fully rigorous). Then the AI is motivated to output some series of commands to the abstract room that results in the abstract human inside not having cancer. And, as a secondary part of its goal, it outputs the results of its process.

continue reading »

A list of apps that are useful to me. (And other phone details)

9 Elo 22 August 2015 12:24PM

 

I have often thought, "Damn, I wish someone had made an app for that", and then when I search for it I can't find it.  Then I outsource the search to Facebook or other people, and they can usually say: yes, it's called X.  I put this down to my not knowing how to search for apps, more than anything else.

With that in mind; I wanted to solve the problem of finding apps for other people.

The following is a list of apps that I find useful (and use often) for productive reasons:


The environment

This list is long.  The most valuable ones are in the top section, which I use regularly.

Other things to mention:

Internal storage - I have a large internal memory card because I knew I would need lots of space.  So I played the "out of sight, out of mind" game and tried to give myself as much space as possible by buying a large internal card.

Battery - I use Anker external battery blocks to save myself the trouble of worrying about batteries.  If prepared, I leave my house with 2 days of phone charge (at 100% use).  I used to count "wins" of days I beat my phone battery (stayed awake longer than it) but they are few and far between.  Also, I doubled my external battery power, so it sits at two days not one (28000mAh + 2*460mAh spare phone batteries).

Phone - I have a Samsung S4 (Android, running KitKat) because it has a few features I found useful that were not found in many other phones - cheap, removable battery, external storage card, replaceable case.

Screen cover - I am using the one that came with the phone still

I carry a spare phone case; in the beginning I used to go through one each month, but now that I have a harder case than before, it hasn't broken.

MicroUSB cables - I went through a lot of effort to sort this out; it's still not sorted, but it's "okay for now".  The advice I have - buy several good cables (read online reviews about them), test them wherever possible, and realise that they die.  Also carry a spare or two.

Restart - I restart my phone probably most days when it gets slow.  It's got programming bugs, but this solution works for now.

The overlays

These sit on my screen all the time.

Data monitor - Gives an overview of bits per second uploaded or downloaded, updated every second.

CpuTemp - Gives an overlay of the current core temperature.  My phone is always hot; I run it hard with Bluetooth, GPS and WiFi blaring all the time.  I also have a lot of active apps.

Mindfulness bell - My phone makes a chime every half hour to remind me to check, "Am I doing something of high-value right now?"  It sometimes stops me from doing crap things.

Facebook chat heads - I often have them open; they have memory leaks and start slowing down my phone after a while, so I close and reopen them when I care enough.

 

The normals:

Facebook - communicate with people.  I do this a lot.

Inkpad - it's a note-taking app, but not an exceptionally great one; open to a better suggestion.

Ingress - it makes me walk; it gave me friends; it put me in a community.  Downside is that it takes up more time than you want to give it.  It's a mobile GPS game.  Join the Resistance.

Maps (google maps) - I use this most days; mostly for traffic assistance to places that I know how to get to.

Camera - I take about 1000 photos a month.  Generic phone-app one.

Assistive light - Generic torch app (widget) I use this daily.

Hello - SMS app.  I don't like it, but it's marginally better than the native one.

Sunrise calendar - I don't like the native calendar; I don't like this or any other calendar.  This is the least bad one I have found.  I have an app called "facebook sync" which helps with entering a fraction of the events in my life.

Phone, address book, chrome browser.

GPS logger - I keep a log of my current GPS location every 5 minutes.  If Google tracks me, I might as well track myself.  I don't use this data yet, but it's free for me to track; so if I can find a use for the historic data, that will be a win.

 

Quantified apps:

Fit - google fit; here for multiple redundancy

S Health - Samsung health - here for multiple redundancy

Fitbit - I wear a flex step tracker every day, and input my weight daily manually through this app

Basis - I wear a B1 watch, and track my sleep like a hawk.

Rescuetime - I track my hours on technology and wish it would give a better breakdown. (I also paid for their premium service)

Voice recorder - generic phone app; I record around 1-2 hours of things I do per week.  Would like to increase that.

Narrative - I recently acquired a life-logging device called a Narrative, and don't really know how to best use the data it gives.  But it's a start.

How are you feeling? - Mood tracking app - this one is broken but the best one I have found: it doesn't seem to reopen itself after a phone restart, so it won't remind you to enter a current mood.  I use a widget so that I can enter the mood quickly.  The best parts of this app are the way it lets you zoom out, and its 10-point scale.  I used to write a quick sentence about what I was feeling, but that took too much time so I stopped.

Stopwatch - "hybrid stopwatch" - about once a week I time something and my phone didn't have a native one.  This app is good at being a stopwatch.

Callinspector - tracks incoming and outgoing calls and gives summaries of things like who you most frequently call, how much data you use, etc.  Can also set data limits.

 

Misc

Powercalc - the best calculator app I could find

Night mode - for saving battery (it dims your screen).  I don't use this often, but it is good at what it does.  I would consider an app that dims the blue light emitted from my screen; however, I don't notice any negative sleep effects, so I have been putting off getting around to it.

Advanced signal status - about once a month I am in a place with low phone signal - this one makes me feel better about knowing more details of what that means.

Ebay - Being able to buy those $5 solutions to problems on the spot is probably worth more than the $5 of "impulse purchases" they might be classified as.

Cal - another calendar app that sometimes catches events that the first one misses.

ES file explorer - for searching the guts of my phone for files that are annoying to find.  Not as used or as useful as I thought it would be but still useful.

Maps.Me - I went on an exploring adventure to places without signal; so I needed an offline mapping system.  This map saved my life.

Wikipedia - information lookup

Youtube - I don't use it often, but it's there.

How are you feeling? (again) - I have this in multiple places to make it as easy as possible for me to enter in this data

Play store - Makes it easy to find.

Gallery - I take a lot of photos, but this is the native gallery and I could use a better app.

 

Social

In no particular order;

Facebook groups, Yahoo Mail, Skype, Facebook Messenger chat heads, Whatsapp, meetup, google+, Hangouts, Slack, Viber, OKcupid, Gmail, Tinder.

They do social things.  Not much to add here.

 

Not used:

Trello

Workflowy

pocketbook

snapchat

AnkiDroid - Anki memoriser app for a phone.

MyFitnessPal - looks like a really good app, have not used it 

Fitocracy - looked good

I got these apps for a reason; but don't use them.

 

Not on my front pages:

These I don't use as often; or have not moved to my front pages (skipping the ones I didn't install or don't use)

S memo - samsung note taking thing, I rarely use, but do use once a month or so.

Drive, Docs, Sheets - the Google package.  It's terrible to interact with documents on your phone, but I still sometimes access things from my phone.

bubble - I don't think I have ever used this

Compass pro - gives extra details about direction. I never use it.

(ingress apps) Glypher, Agentstats, integrated timer, cram, notify

TripView (public transport app for my city)

Convertpad - converts numbers to other numbers. Sometimes quicker than a google search.

ABC Iview - National TV broadcasting channel app.  Every program on this channel is uploaded to this app, I have used it once to watch a documentary since I got the app.

AnkiDroid - I don't need to memorise information in the way it is intended to be used; so I don't use it. Cram is also a flashcard app but I don't use it.

First aid - I know my first aid but I have it anyway for the marginal loss of 50mb of space.

Triangle scanner - I can scan details from NFC chips sometimes.

MX player - does videos better than native apps.

Zarchiver - Iunno.  Does something.

Pandora - Never used

Soundcloud - used once every two months, some of my friends post music online.

Barcode scanner - never used

Diskusage - Very useful.  Visualises where data is being taken up on your phone, helps when trying to free up space.

Swiftkey - Better than native keyboards.  Gives more freedom, I wanted a keyboard with black background and pale keys, swiftkey has it.

Google calendar - don't use it, but its there to try to use.

Sleepbot - doesn't seem to work with my phone; also I track sleep with other methods, and I forget to turn it on.  So it's entirely not useful in my life for sleep tracking.

My service provider's app.

AdobeAcrobat - use often; not via the icon though.

Wheresmydroid? - seems good to have; never used.  My phone is attached to me too well for me to lose it often.  I have it open most of the waking day maybe.

Uber - I don't use ubers.

Terminal emulator, AIDE, PdDroid party, Processing Android, An editor for processing, processing reference, learn C++ - programming apps for my phone, I don't use them, and I don't program much.

Airbnb - Have not used yet, done a few searches for estimating prices of things.

Heart rate - measures your heart rate using the camera/flash.  Neat, but not useful other than showing people that it's possible to do.

Basis - (B1 app), - has less info available than their new app

BPM counter - Neat if you care about what a "BPM" is for music.  Don't use often.

Sketch guru - fun to play with, draws things.

DJ studio 5 - I did a dj thing for a friend once, used my phone.  was good.

Facebook calendar Sync - as the name says.

Dual N-back - I don't use it.  I don't think it has value-giving properties.

Awesome calendar - I don't use it, but it comes with good recommendations.

Battery monitor 3 - Makes a graph of temperature and frequency of the cores.  Useful to see a few times.  Eventually it's a bell curve.

urbanspoon - local food places app.

Gumtree - Australian eBay (eBay also owns it now).

Printer app to go with my printer

Car Roadside assistance app to go with my insurance

Virgin air entertainment app - you can use your phone while on the plane and download entertainment from their in-flight system.


Two things now:

What am I missing?  Was this useful?  Ask me to elaborate on any app and why I use it.  If I get time I will do that anyway.

P.S. This took two hours to write.

P.P.S. I was intending to make, keep and maintain a list of useful apps; that is not what this document is.  If there are enough suggestions that it's time to make and keep a list, I will do that.

How to fix academia?

9 passive_fist 20 August 2015 12:50AM

I don't usually submit articles to Discussion, but this news upset me so much that I think there is a real need to talk about it.

http://www.nature.com/news/faked-peer-reviews-prompt-64-retractions-1.18202

A leading scientific publisher has retracted 64 articles in 10 journals, after an internal investigation discovered fabricated peer-review reports linked to the articles’ publication.

The cull comes after similar discoveries of ‘fake peer review’ by several other major publishers, including London-based BioMed Central, an arm of Springer, which began retracting 43 articles in March citing "reviews from fabricated reviewers". The practice can occur when researchers submitting a paper for publication suggest reviewers, but supply contact details for them that actually route requests for review back to the researchers themselves.

Types of Misconduct

We all know that academia is a tough place to be in. There is constant pressure to 'publish or perish', and people are given promotions and pay raises directly as a result of how many publications and grants they are awarded. I was awarded a PhD recently so the subject of scientific honesty is dear to my heart.

I'm of course aware of misconduct in the field of science. 'Softer' forms of misconduct include things like picking only results that are consistent with your hypothesis or repeating experiments until you get low p-values. This kind of thing might sometimes even happen non-deliberately and subconsciously, which is why it is important to disclose methods and data.

'Harder' forms of misconduct include making up data and fudging numbers in order to get published and cited. This is of course a very deliberate kind of fraud, but it is still easy to see how someone could be led to this kind of behaviour by virtue of the incredible pressures that exist. Here, the goal is not just academic advancement, but also obtaining recognition. The authors in this case are confident that even though their data is falsified, their reasoning (based, of course, on falsified data) is sound and correct and stands up to scrutiny.

What is the problem?

But the kind of misconduct mentioned in the linked article is extremely upsetting to me, beyond the previous types of misconduct. It is a person or (more likely) a group of people knowing full well that their publication would not stand up to serious scientific scrutiny. Yet they commit the fraud anyway, guessing that no one will ever actually scrutinize their work seriously and that everyone will take it at face value because it appears in a reputable journal. The most upsetting part is that they are probably right in this assessment.

Christie Aschwanden wrote a piece about this recently on FiveThirtyEight. She makes the argument that cases of scientific misconduct are still rare and not important in the grand scheme of things. I only partially agree with this. I agree that science is still mostly trustworthy, but I don't necessarily agree that scientific misconduct is too rare to be worth worrying about. It would be much more honest to say that we simply do not know the extent of scientific misconduct, because there is no comprehensive system in place to detect it. Surveys have indicated that as many as 1/3 of scientists admit to some form of questionable practices, with 2% admitting to downright fabrication or falsification of evidence. These figures could be wildly off the mark. It is, unfortunately, easy to commit fraud without being detected.

Aschwanden's conclusion is that the problem is that science is difficult. With this I agree wholeheartedly. And to this I'd add that science has probably become too big. A few years ago I did some research in the area of nitric oxide (NO) transmission in the brain. I did a search and found 55,000 scientific articles from reputable publications with "nitric oxide" in the title. Today this number is over 62,000. If you expand this to both the title and abstract, you get about 160,000. Keep in mind that these are only the publications that have actually passed the process of peer review.

I have read only about 1,000 articles total during the entirety of my PhD, and probably <100 in the actual level of depth required to locate flaws in reasoning. The problem with science becoming too big is that it's easy to hide things. There are always going to be fewer fact-checkers than authors, and it is much harder to argue logically about things than it is to simply write things. The more the noise, the harder it becomes to listen.

It was not always this way. The rate of publication is increasing rapidly, outstripping even the rate of growth in the number of scientists. Decades ago publications played only a minor role in the scientific process; they mostly served to disseminate important information to a large audience. Today, the opposite is true - most articles have a small audience (as in, people with the will and ability to read them), consisting of perhaps only a handful of individuals - often only the people in the same research group or institutional department. This leads to the situation where many publications actually receive most of their citations from friends or colleagues of the authors.

Some people have suggested that because of the recent high-level cases of fraud that have been uncovered, there is now increased scrutiny and fraud is going to be uncovered more rapidly. This may be true for the types of fraud that have already been uncovered, but fraudsters are always going to be able to stay ahead of the scrutinizers. Experience with other forms of crime shows this quite clearly. Before the article in Nature I had never even thought about the possibility of sending reviews back to myself. It simply never occurred to me. All of these considerations lead me to believe that the problem of scientific fraud may actually get worse, not better, over time - unless the root of the problem is attacked.

How Can it be Solved?

So how to solve the problem of scientific misconduct? I don't have any good answers. I can think of things like "Stop awarding people for mere number of publications" and "Gauge the actual impact of science rather than empty metrics like number of citations or impact factor." But I can't think of any good way to do these things. Some alternatives - like using, for instance, social media to gauge the importance of a scientific discovery - would almost certainly lead to a worse situation than we have now.

A small way to help might be to adopt a payment system for peer review. That is, to get published, you pay a certain amount of money for researchers to review your work. Currently, most reviewers offer their services for free (though some are allocated a certain amount of time for peer review as part of their academic salary). A pay system would at least give people an incentive to review work rigorously, rather than simply optimizing for the minimum amount of time invested in review. It would also reduce the practice of parasitic submissions (people submitting to short-turnaround-time, high-profile journals like Nature just to get feedback on their work for free) and decrease the volume of papers submitted for review. However, it would also incentivize a higher rate of rejection to maximize profits. And it would disproportionately impact scientists from places with less scientific funding.

What are the real options we have here to minimize misconduct?

Soylent has been found to contain lead (12-25x) and cadmium (≥4x) in greater concentrations than California's 'safe harbor' levels

9 Transfuturist 15 August 2015 10:45PM

Press Release

Edit: Soylent's Reply, provided by Trevor_Blake

OAKLAND, Calif., Aug. 13, 2015 /PRNewswire-USNewswire/ -- As You Sow, a non-profit environmental-health watchdog, today filed a notice of intent to bring legal action against Soylent, a "meal replacement" powder recently featured in New York Times and Forbes stories reporting that workers in Silicon Valley are drinking their meals, eliminating the need to eat food. The 60-day notice alleges violation of California's Safe Drinking Water and Toxic Enforcement Act for failure to provide sufficient warning to consumers of lead and cadmium levels in the Soylent 1.5 product.

Test results commissioned by As You Sow, conducted by an independent laboratory, show that one serving of Soylent 1.5 can expose a consumer to a concentration of lead that is 12 to 25 times above California's Safe Harbor level for reproductive health, and a concentration of cadmium that is at least 4 times greater than the Safe Harbor level for cadmium. Two separate samples of Soylent 1.5 were tested.

According to the Soylent website, Soylent 1.5 is "designed for use as a staple meal by all adults." The startup recently raised $20 million in funding led by venture capital firm Andreessen Horowitz.

"Nobody expects heavy metals in their meals," said Andrew Behar, CEO of As You Sow. "These heavy metals accumulate in the body over time and, since Soylent is marketed as a meal replacement, users may be chronically exposed to lead and cadmium concentrations that exceed California's safe harbor level (for reproductive harm). With stories about Silicon Valley coders sometimes eating three servings a day, this is of very high concern to the health of these tech workers."

Lead exposure is a significant public health issue and is associated with neurological impairment, such as learning disabilities and lower IQ, even at low levels. Chronic exposure to cadmium has been linked to kidney, liver, and bone damage in humans.

Since 1992, As You Sow has been a leading enforcer of California's Safe Drinking Water and Toxic Enforcement Act, with enforcement actions resulting in removal of lead from children's jewelry, formaldehyde from portable classrooms, and lead from baby powder.

Typical Sneer Fallacy

8 calef 01 September 2015 03:13AM

I like going to see movies with my friends.  This doesn't require much elaboration.  What might is that I continue to go see movies with my friends despite the radically different ways in which my friends happen to enjoy watching movies.  I'll separate these movie-watching philosophies into a few broad and not necessarily all-encompassing categories (you probably fall into more than one of them, as you'll see!):

(a): Movie watching for what was done right.  The mantra here is "There are no bad movies." or "That was so bad it was good."  Every movie has something redeeming about it, or it's at least interesting to try and figure out what that interesting and/or good thing might be.  This is the way that I watch movies, most of the time (say 70%).

 

(b): Movie watching for entertainment.  Mantra: "That was fun!".  Critical analysis of the movie does not provide any enjoyment.  The movie either succeeds in 'entertaining' or it fails.  This is the way that I watch movies probably 15% of the time.

 

(c): Movie watching for what was done wrong.  Mantra: "That movie was terrible."  The only enjoyment derived from the movie-watching comes from tearing the film apart at its roots--common conversation pieces include discussion of plot inconsistencies, identification of poor directing/cinematography/etc., and even alternative options for what could have 'fixed' the film, to the extent that the film could even be said to be 'fixed'.  I do this about ~12% of the time.

 

(d): Sneer. Mantra: "Have you played the drinking game?".  Vocal, public, moderately-drunken dog-piling on a film's flaws is the only way a movie can be enjoyed.  There's not really any thought put into the critical analysis.  The movie-watching is more an excuse to be rambunctious with a group of friends than it is to actually watch a movie.  I do this, conservatively, 3% of the time.

What's worth stressing here is that these are avenues of enjoyment.  Even when a (c) person watches a 'bad' movie, they enjoy it to the extent that they can talk at length about what was wrong with the movie. With the exception of the Sneer category, none of these sorts of critical analysis are done out of any sort of vindictiveness, particularly and especially (c).

So, like I said, I'm mostly an (a) person.  I have friends that are (a) people, (b) people, (c) people, and even (d) people (where being a (_) person means watching movies with that philosophy more than 70% of the time).

 

This can generate a certain amount of friction.  Especially when you really enjoy a movie, and your friend starts shitting all over it.

 

Or at least, that's what it feels like from the inside!  You might have really enjoyed a movie because you thought it was particularly well-shot, or because it evoked a certain tone really well, but here comes your friend who thought the dialogue was dumb, boring, and poorly written.  Fundamentally, you and your friend are watching the movie for different reasons.  So when you go to a movie with 6 people who are exclusively (c), category (c) can start looking a whole lot like category (d) when you're an (a) or (b) person.

And that's no fun, because (d) people aren't really charitable at all.  It can be easy to translate in one's mind the criticism "That movie was dumb" into "You are dumb for thinking that movie wasn't dumb".  Sometimes the translation is even true!  Sneer Culture is a thing that exists, and while its connection to my 'Sneer' category above is tenuous, my word choice is intentional.  There isn't anything wrong with enjoying movies via (d), but because humans are, well, human, a sneer culture can bloom around this sort of philosophy.

Being able to identify sneer cultures for what they are is valuable.  Let's make up a fancy name for misidentifying sneer culture, because the rationalist community seems to really like snazzy names for things:

Typical Sneer Fallacy: When you ignore or are offended by criticism because you've mistakenly identified it as coming purely from sneer.  In reality, the criticism was genuine - true in the sense that it represents someone's sincere beliefs - and not simply from a place of malice.

 

This is the point in the article where I make a really strained analogy between the different ways in which people enjoy movies, and how Eliezer has pretty extravagantly committed the Typical Sneer Fallacy in this reddit thread.

 

Some background for everyone who doesn't follow the rationalist and rationalist-adjacent tumblr-sphere:  su3su2u1, a former physicist, now data scientist, has a pretty infamous series of reviews of HPMOR.  These reviews are not exactly kind.  Charitably, I suspect this is because su3su2u1 is a (c) kind of person, or at least, that's the level at which he chose to interact with HPMOR.  For disclosure, I definitely (a)-ed my way through HPMOR.

su3su2u1 makes quite a few science criticisms of Eliezer.  Eliezer doesn't really take these criticisms seriously, and explicitly calls them "fake".  Then, multiple physicists come out of the woodwork to tell Eliezer he is wrong concerning a particular one involving energy conservation and quantum mechanics (I am also a physicist, and su3su2u1's criticism is, in fact, correct.  If you actually care about the content of the physics issue, I'd be glad to get into it in the comments.  It doesn't really matter, except insofar as this is not the first time Eliezer's discussions of quantum mechanics have gotten him into trouble) (Note to Eliezer: you probably shouldn't pick physics fights with the guy whose name is the symmetry of the standard model Lagrangian unless you really know what you're talking about (yeah yeah, appeal to authority, I know)).

I don't really want to make this post about stupid reddit and tumblr drama.  I promise.  But I think the issue was rather succinctly summarized, if uncharitably, in a tumblr post by nostalgebraist.

 

The Typical Sneer Fallacy is scary because it means your own ideological immune system isn't functioning correctly.  It means that, at least a little bit, you've lost the ability to determine what sincere criticism actually looks like.  Worse, not only will you not recognize it, you'll also misinterpret the criticism as a personal attack.  And this isn't singular to dumb internet fights.

Further, dealing with criticism is hard.  It's so easy to write off criticism as insincere if it means getting to avoid actually grappling with the content of that criticism:  You're red tribe, and the blue tribe doesn't know what it's talking about.  Why would you listen to anything they have to say?  All the blues ever do is sneer at you.  They're a sneer culture.  They just want to put you down.  They want to put all the reds down.

But the world isn't always that simple.  We can do better than that.

An accidental experiment in location memory

8 PhilGoetz 31 August 2015 04:50PM

I bought a plastic mat to put underneath my desk chair, to protect the wooden floor from having bits of stone ground into it by the chair wheels. But it kept sliding when I stepped onto it, nearly sending me stumbling into my large, expensive, and fragile monitor. I decided to replace the mat as soon as I found a better one.

Before I found a better one, though, I realized I wasn't sliding on it anymore. My footsteps had adjusted themselves to it.

This struck me as odd. I couldn't be sensing the new surface when stepping onto it and adjusting my step to it, because once I've set my foot down on it, it's too late; I've already leaned toward the foot in a way that would make it physically impossible to reduce my angular momentum, and the slipping seems instantaneous on contact. Nor was I consciously aware of the mat anymore. It's thin, transparent, and easy to overlook.

I could think of two possibilities: Either my brain had learned to walk differently in a small, precise area in front of my desk, or I noticed the mat subconsciously and adjusted my steps subconsciously. The latter possibility freaked me out a little, because it seems like the kind of thing my brain should tell me about. Adjusting my steps subconsciously I expect; noticing a new object or environment, I expect to be told about.

A few weeks later, the mat had gradually moved a foot or two out of position, so I moved it back. The next time I came back to my desk, hours later, having forgotten all about the mat, I immediately slipped on it.

So it seems my brain was not noticing the mat, but remembering its precise location. (It's possible this is instead some physical mechanism that makes the mat stick better to the floor over time, but I can't think how that would work.)

Have any of you had similar experiences?

List of common human goals

8 Elo 24 August 2015 07:58AM
List of common goal areas:
This list is meant to map out the area of goal-space.  It is non-exhaustive, and the descriptions are "including but not limited to" - some hints to help you understand where in idea-space these goals land.  When constructing this list I tried to imagine a large Venn diagram where the goals sometimes overlap.  The areas mentioned are areas that have an exclusive part to them; i.e. where knowledge overlaps with self-awareness, there are still parts of each that do not overlap, so both are mentioned.  If you prefer a more "focusing" or feeling-based description: imagine each of these goals is a hammer, designed with a specific weight to hit a certain note on a xylophone.  Often one hammer can produce the note that is meant for that key, and several other keys as well.  But sometimes it can't quite make them sound perfect.  What is needed is the right hammer for that block, to hit the right note and make the right sound.  Each of these "hammers" has some note that cannot be produced through the use of other hammers.

This list has several purposes:

  1. For someone with some completed goals who is looking to move forward to new horizons; help you consider which common goal-pursuits you have not explored and if you want to try to strive for something in one of these directions.
  2. For someone without clear goals who is looking to create them and does not know where to start.
  3. For someone with too many specific goals who is looking to consider the essences of those goals and what they are really striving for.
  4. For someone who doesn't really understand goals or why we go after them to get a better feel for "what" potential goals could be.

What to do with this list?

0. Agree to invest 30 minutes of effort into a goal confirmation exercise as follows.
  1. Go through this list (copy-paste it into your own document) and cross out the things you probably don't care about.  Some of these have overlapping solutions - projects you can do that fulfil multiple goal-space concepts. (5mins)
  2. For the remaining goals, rank them either "1 to n", in "tiers" of high to low priority, or generally order them in some way that is coherent to you.  (For serious quantification, consider giving them points - i.e. 100 points for achieving a self-awareness and understanding goal where a pleasure/creativity goal might be worth only 20 points in comparison; see the small sketch after this list.) (10mins)
  3. Make a list of your ongoing projects (5-10mins), and check whether they actually match up to your most preferred goals (or your number ranking) (5-10mins).  If not, make sure you have a really, really good excuse for yourself.
  4. Consider how you might like to do things differently, re-prioritising your current plans to fit more in line with your goals. (10-20mins)
  5. Repeat this task at an appropriate interval (6monthly, monthly, when your goals significantly change, when your life significantly changes, when major projects end)
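For the point-scoring variant mentioned in step 2, here is a tiny illustrative sketch in Python (the goal names come from the list below; the scores are placeholders for your own numbers):

    # Toy sketch of step 2's point-scoring variant; all scores are made up.
    goal_scores = {
        "self-awareness/understanding": 100,
        "health + mental": 80,
        "social": 60,
        "pleasure/recreation": 20,
    }

    # Rank goals from most to least important.
    ranked = sorted(goal_scores.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (goal, score) in enumerate(ranked, start=1):
        print(rank, goal, score)

Any project that serves only a low-scoring goal is then a candidate for the "really, really good excuse" test in step 3.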

Why have goals?

Your goals could change in life; you could explore one area and realise you actually love another area more.  It's important to explore and keep confirming that you are still winning your own personal race to where you want to be going.
It's easy to insist that goals serve to only disappoint or burden a person.  These are entirely valid fears for someone who does not yet have goals.  Goals are not set in stone; however they don't like to be modified either.  I like to think of goals as doing this:
(source: internet viral images) Pictures from the Internet aside, the best reason I have ever found for picking goals is to do exactly this: make choices that a reasonable you in the future will be motivated to stick to.  Outsource that planning and thinking of goal/purpose/direction to your past self.  Naturally you could feel like making goals is piling on the bricks (though there is a way to make goals that does not leave them piling on like bricks); you should instead think of it as rescuing future you from a day spent completely lost and wondering what you were doing, or a day spent questioning if "this" is something that is getting you closer to what you want to be doing in life.

Below here is the list.  Good luck.


personal:

Spirituality - religion, connection to a god, meditation, the practice of gratitude or appreciation of the universe, buddhism, feeling of  a greater purpose in life.
knowledge/skill + ability - learning for fun - just to know, advanced education, becoming an expert in a field, being able to think clearly, being able to perform a certain skill (physical skill), ability to do anything from run very far and fast to hold your breath for a minute, Finding ways to get into flow or the zone, be more rational.
self-awareness/understanding - to be at a place of understanding one’s place in the world, or to have an understanding of who you are; practising thinking from the eclectic perspectives of various other people and how that affects your understanding of the world.
health + mental - happiness (mindset) - Do you even lift? http://thefutureprimaeval.net/why-we-even-lift/, are you fit, healthy, eating right, are you in pain, is your mind in a good place, do you have a positive internal voice, do you have bad dreams, do you feel confident, do you feel like you get enough time to yourself?
Live forever - do you want to live forever - do you want to work towards ensuring that this happens?
art/creativity - generating creative works, in any field - writing, painting, sculpting, music, performance.
pleasure/recreation - are you enjoying yourself, are you relaxing, are you doing things for you.
experience/diversity - Have you seen the world?  Have you explored your own city?  Have you met new people, are you getting out of your normal environment?
freedom - are you tied down?  Are you trapped in your situation?  Are your burdens stacked up?
romance - are you engaged in romance?  could you be?
Being first - You did something before anyone; you broke a record, It’s not because you want your name on the plaque - just the chance to do it first.  You got that.
Create something new - invent something; be on the cutting edge of your field; just see a discovery for the first time.  Where the new-ness makes creating something new not quite the same as being first or being creative.

personal-world:

legacy - are you leaving something behind?  Do you have a name? Will people look back and say; I wish I was that guy!
fame/renown - Are you “the guy”?  Do you want people to know your name when you walk down the street?  Are there gossip magazines talking about you; do people want to know what you are working on in the hope of stealing some of your fame?  Is that what you want?
leadership, and military/conquer - are you climbing to the top?  Do you need to be in control?  Is that going to make the best outcomes for you?  Do you wish to destroy your enemies?  As a leader, do you want people following you, doing as you do, revering you?  And power - in the complex, “in control” and “flick the switch” ways that overlap with other goal-space areas.  Of course there are many forms of power; but if it’s something that you want, you can find fulfilment through obtaining it.
Being part of something greater - The opportunity to be a piece of a bigger puzzle, are you bringing about change; do we have you to thank for being part of bringing the future closer; are you making a difference.
Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people.  Do you have an established social network?  Do you have intimacy?
Family - do you have a family of your own?  Do you want one?  Are there steps that you can take to put yourself closer to there?  Do you have a pet? Having your own offspring? Do you have intimacy?
Money/wealth - Do you have money; possessions and wealth?  Does your money earn you more money without any further effort (i.e. owning a business, earning interest on your $$, investing)
performance - Do you want to be a public performer, get on stage and entertain people?  Is that something you want to be able to do?  Or do on a regular basis?
responsibility - Do you want responsibility?  Do you want to be the one who can make the big decisions?
Achieve, Awards - Do you like gold medallions?  Do you like to strive towards an award?
influence - Do you want to be able to influence people, change hearts and minds.
Conformity - The desire to blend in; or be normal.  Just to live life as is; without being uncomfortable.
Be treated fairly - are you getting the raw end of the stick?  Are there ways that you don't have to keep being the bad guy around here?
keep up with the Joneses - you have money/wealth already, but there is also the goal of appearing like you have money/wealth.  Being the guy that other people keep up with.
Validation/acknowledgement - Positive Feedback on emotions/feeling understood/feeling that one is good and one matters

world:

improve the lives of others (helping people) - in the charity sense of raising the lowest common denominator directly.
Charity + improve the world -  indirectly.  putting money towards a cause; lobby the government to change the systems to improve people’s lives.
winning for your team/tribe/value set - doing actions but on behalf of your team, not yourself. (where they can be one and the same)
Desired world-states - make the world into a desired alternative state.  Don't like how it is; are you driven to make it into something better?

other (and negative stimuli):

addiction (fulfil addiction) - addiction feels good from the inside and can be a motivating factor for doing something.
Virtual reality success - own all the currency/coin and all the cookie clickers, grow all the levels and get all the experience points!
Revenge - Get retribution; take back what you should have rightfully had, show the world who’s boss.
Negative - avoid (i.e. pain, loneliness, debt, failure, embarrassment, jail) - where you can be motivated to avoid pain - to keep safe, or avoid something, or “get your act together”.
Negative - stagnation (avoid stagnation) - Stop standing still.  Stop sitting on your ass and DO something.


Words:

This list, being written in words, will not mean the same thing to every reader - which is why I tried to include several categories that almost overlap with each other.  Some notable overlaps are: Legacy/Fame, Being first/Achievement, and Being first/Skill and ability.  But of course there are several more.  I really did try to keep the categories open and plural, not simplified.  My analogy of hammers and notes should be kept in mind when trying to improve this list.

I welcome all suggestions and improvements to this list.
I welcome all feedback to improve the do-at-home task.
I welcome all life-changing realisations as feedback from examining this list.
I welcome the opportunity to be told how wrong I am :D

Meta-information

This document has taken, in total, 7-10 hours of writing over about two weeks.
I have had it reviewed by a handful of people and lesswrongers before posting.  (I kept realising that someone I was talking to might get value out of it.)
I wrote this because I felt like it was the least-bad way I could think of to gather these ideas in one place, and to share these ideas and this way of thinking about them with you.

Please fill out the survey on whether this was helpful.

Edit: also included (not in the comments): desired world-states and live forever.

[Link] My review of Rationality: From AI to Zombies

8 James_Miller 12 August 2015 04:16PM

I wrote a review of Yudkowsky's Rationality: From AI to Zombies for The New Rambler.

Calling references: Rational or irrational?

7 PhilGoetz 28 August 2015 09:06PM

Over the past couple of decades, I've sent out a few hundred resumes (maybe, I don't know, 300 or 400--my spreadsheet for 2013-2015 lists 145 applications).  Out of those I've gotten at most two dozen interviews and a dozen job offers.

Throughout that time I've maintained a list of references on my resume.  The rest of the resume is, to my mind, not very informative.  The list of job titles and degrees says little about how competent I was.

Now and then, I check with one of my references to see if anyone called them.  I checked again yesterday with the second reference on my list.  The answer was the same:  Nope.  No one has ever, as far as I can recall, called any of my references.  Not the people who interviewed me; not the people who offered me jobs.

When the US government did a background check on me, they asked me for a list of references to contact.  My uncertain recollection is that they ignored it and interviewed my neighbors and other contacts instead, as if what I had given them was a list of people not to bother contacting because they'd only say good things about me.

Is this rational or irrational?  Why does every employer ask for a list of references, then not call them?

Rationality Reading Group: Part H: Against Doublethink

7 Gram_Stone 27 August 2015 01:22AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part H: Against Doublethink (pp. 343-361). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

H. Against Doublethink

81. Singlethink - The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.

82. Doublethink (Choosing to be Biased) - George Orwell wrote about what he called "doublethink", where a person is able to hold two contradictory thoughts in their mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems.

83. No, Really, I've Deceived Myself - Some people who have fallen into self-deception haven't actually deceived themselves. Some of them simply believe that they have deceived themselves, but have not actually done this.

84. Belief in Self-Deception - Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.

85. Moore's Paradox - People often mistake reasons for endorsing a proposition for reasons to believe that proposition.

86. Don't Believe You'll Self-Deceive - It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part I: Seeing with Fresh Eyes (pp. 365-406). The discussion will go live on Wednesday, 9 September 2015, right here on the discussion forum of LessWrong.

Personal story about benefits of Rationality Dojo and shutting up and multiplying

7 Gleb_Tsipursky 26 August 2015 04:38PM

My wife and I have been going to Ohio Rationality Dojo for a few months now, started by Raelifin, who has substantial expertise in probabilistic thinking and Bayesian reasoning, and I wanted to share how the dojo helped us make a rational decision about house shopping. We were comparing two houses. We had an intuitive favorite house (170 on the image) but decided to compare it to our second favorite (450) by actually shutting up and multiplying, based on exercises we did as part of the dojo.

What we did was compare each part of the house mathematically: for every part, we multiplied the value of that part to each of us by how much that person would use it, with separate scores for the two of us (A for my wife, Agnes Vishnevkin, and G for me, Gleb Tsipursky, on the image). Compared mathematically, 450 came out way ahead. It was hard to update our beliefs, but we did it, and we are now orienting toward that one as our primary choice. Rationality for the win!

Here is the image of our back-of-the-napkin calculations.
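For readers who want the shape of the calculation without the napkin, here is a toy version in Python. The part names and all the numbers below are invented for illustration, not our actual figures; the method is just value times use, per part, per person, summed per house.

    # Toy "shut up and multiply" house comparison; all numbers invented.
    # For each part: (value of that part) * (how much it would be used),
    # scored separately for A (Agnes) and G (Gleb), then summed.
    houses = {
        "170": {"kitchen": {"A": (8, 9), "G": (5, 4)},
                "yard":    {"A": (6, 2), "G": (7, 3)}},
        "450": {"kitchen": {"A": (9, 9), "G": (6, 5)},
                "yard":    {"A": (8, 6), "G": (8, 7)}},
    }

    for name, parts in houses.items():
        total = sum(value * use
                    for person_scores in parts.values()
                    for value, use in person_scores.values())
        print(name, total)   # 170 -> 125, 450 -> 215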

 

Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally

7 ScottL 23 August 2015 08:01AM

A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes. Rationality only guarantees that the agent will, to the utmost of their abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes. This is because most agents are not omniscient or omnipotent; they are instead fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. Therefore, a rational agent will be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will most of the time achieve better outcomes than an irrational agent.

Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.

Normative rationality describes the laws of thought and action; that is, how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality basically describes what is meant by the phrase "optimal reasoning". Of course, for limited agents true optimal reasoning is impossible and they must instead settle for bounded optimal reasoning, which is the closest approximation to optimal reasoning that is possible given the information available to the agent and the computational abilities of the agent. The laws of thought and action (what we currently believe optimal reasoning involves) are:

  • Logic  - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.
  • Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes’ Theorem, which tells you exactly how your probability for a statement should change as you encounter new information (a minimal worked sketch follows this list). Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the Frequentist perspective, which sees probability as the proportion of times the event would occur in a long run of repeated experiments. Less Wrong follows the Bayesian perspective.
  • Decision theory  - is about choosing actions based on the utility function of the possible outcomes. The utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action’s possible outcomes weighted by the probability that each outcome occurs. Decision theory can be divided into three parts:
    • Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.
    • Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.
    • Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
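To make the two workhorse formulas above concrete, here is a minimal sketch in Python (the probabilities and utilities are invented for illustration):

    # Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
    def bayes_update(prior, likelihood, evidence_prob):
        return likelihood * prior / evidence_prob

    # Expected utility: each outcome's utility weighted by its probability
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    # A 10% prior, evidence seen 90% of the time when the hypothesis is true,
    # and seen 18% of the time overall:
    print(bayes_update(0.10, 0.90, 0.18))              # 0.5

    # An action with a 30% chance of utility 100 and a 70% chance of utility -10:
    print(expected_utility([(0.3, 100), (0.7, -10)]))  # 23.0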

Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice or approximate the normative rationality model as best we can. We engage in what's called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, 'rationality' in this compendium will refer to rationality in the bounded sense of the word. In this sense, it means that the most rational choice for an agent depends on the agent's capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one. It is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error-prone.

Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. It is described by Baron in Thinking and Deciding, pg. 34:

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

The behaviours and thoughts that we consider to be rational for limited agents are much broader than those for the perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those thoughts and behaviours which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational, as it often leads to situations in which agents improve their internal representations or models of the world. We also consider wise resource allocation to be rational, because limited agents only have a limited amount of resources available to them. Therefore, if they can get a greater return on investment on the resources that they do use, then they will be more likely to be able to get closer to thinking optimally in a greater number of domains.

We also consider the rationality of particular choices to be in a state of flux. This is because the rationality of choices depends on the information that an agent has access to, and this is something which is frequently changing. This hopefully highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. In short, this is a problem for the suboptimal (irrational) agent, as it means that its rational choices are going to differ more from those of the perfect normative agent than a rational agent's would. The closer an agent's rational choices are to the rational choices of a perfect normative agent, the more rational that agent is.

It can also be said that the rationality of an agent depends in large part on the agent's truth-seeking abilities. The more accurate and up-to-date the agent's view of the world, the closer its rational choices will be to those of the perfect normative agents. It is because of this that a rational agent is one that is inextricably tied to the world as it is. It does not see the world as it wishes it, fears it or has seen it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt. If the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will change its preferred choice to the one that is now the most rational.

The other important part of rationality, besides truth seeking, is that it is about maximising the ability to actually achieve important goals. These two parts or domains of rationality - truth seeking and goal reaching - are referred to as epistemic and instrumental rationality.

  • Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.
  • Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, this is known as maximizing “expected utility”. It is important to note that it is about more than just reaching goals; it is also about discovering how to develop optimal goals.

As you move further and further away from rationality you introduce more and more flaws, inefficiencies and problems into your decision making and information gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas which is why, in large part, improving our rationality is just about mitigating, as much as possible, the influence of our biases and irrational propensities.

If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character does not treat rationality as if it were about optimality, but instead as if it means that:

  • You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people's behaviors and thoughts.
  • You should never make a decision until you have all the information. This is irrational because humans are not omniscient or omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.
  • You should never rely on intuition. This is irrational because intuition (System 1 thinking) does have many advantages over conscious and effortful deliberation (System 2 thinking), mainly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and these interactions had short feedback cycles, then it is often irrational not to rely on your intuitions.
  • You should not become emotional. This is irrational because while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not mean that we should try to eradicate emotions in ourselves. This is because emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points in regards to emotions:
    • The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened. It is irrational to feel fear in situations where you are not being threatened. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.
    • Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can't. In this particular area people tend to become a lot less rational as they age. As adults we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder at how it is that we have become so shackled by our own self-restraint.
    • Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as this would mean limiting or distorting the information that they have access to. It is possible that a rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.
    • By ignoring, avoiding and repressing emotions you limit the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information (e.g. body language, vocal inflections) that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.
  • You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information that they consider is because of resource or time limitations.

Related Materials

Wikis:

  • Rationality - the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality, and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.
  • Maths/Logic - Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.   
  • Probability theory - a field of mathematics which studies random variables and processes. 
  • Bayes' theorem - a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate (a minimal worked example follows this list).
  • Bayesian - Bayesian probability theory is the math of epistemic rationality, Bayesian decision theory is the math of instrumental rationality.
  • Bayesian probability - represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10. 
  • Bayesian decision theory - a decision theory which is informed by Bayesian probability.
  • Decision theory – is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals. 
  • Hollywood rationality - what Spock does, not what actual rationalists do.
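
Since several of the entries above turn on Bayes' theorem, here is a minimal worked example. The function name and the numbers (a hypothetical test with 90% sensitivity and a 5% false-positive rate, applied to a 10% prior) are mine, chosen only for illustration:

```python
# A minimal sketch of a Bayesian update, with invented numbers.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) given P(H), P(E | H) and P(E | ~H)."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

posterior = bayes_update(prior=0.10,
                         p_evidence_given_h=0.90,
                         p_evidence_given_not_h=0.05)
print(round(posterior, 3))  # 0.667: the evidence raises 10% to roughly 67%
```

Note how the posterior is a level of certainty in the Bayesian sense described above, not a claim about frequencies.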

Posts:

Suggested posts to write:

  • Bounded/ecological/grounded Rationality - I couldn't find a suitable resource for this on Less Wrong.

Academic Books:

Popular Books:

Talks:

Notes on decisions I have made while creating this post

 (these notes will not be in the final draft): 

  • I agree denotationally, but object connotatively, to 'rationality is systemized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended into everything. I also believe that I have basically covered the idea with: “Rationality maximizes expected performance, while perfection maximizes actual performance.”
  • I left out the 12 virtues of rationality because I don't like perfectionism. If perfectionism were not among the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and to developing suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process. If it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.
  • I couldn't find an appropriate link for bounded/ecological/grounded rationality. 

[Link] First almost fully-formed human [foetus] brain grown in lab, researchers claim

7 ESRogs 19 August 2015 06:37AM

This seems significant:

An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases. 

...

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed

...

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

...

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

...

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

...

For now, the team say they are focusing on using the brain for military research, to understand the effect of post traumatic stress disorder and traumatic brain injuries.

http://www.theguardian.com/science/2015/aug/18/first-almost-fully-formed-human-brain-grown-in-lab-researchers-claim

 

 

Truth seeking as an optimization process

7 ScottL 18 August 2015 11:03AM

From the costs of rationality wiki:

Becoming more epistemically rational can only guarantee one thing: what you believe will include more of the truth. Knowing that truth might help you achieve your goals, or cause you to become a pariah. Be sure that you really want to know the truth before you commit to finding it; otherwise, you may flinch from it.

The reason that truth seeking is often seen as being integral to rationality is that in order to make optimal decisions you must first be able to make accurate predictions. Delusions, or false beliefs, are self-imposed barriers to accurate prediction. They are surprise inducers. It is because of this that the rational path is often to break delusions, but you should remember that doing so is a slow and hard process that is rife with potential problems.

Below I have listed three scenarios in which a person could benefit from considering the costs of truth seeking. The first scenario is when seeking a more accurate measurement is computationally expensive and not really required. The second scenario is when you know that the truth will be emotionally distressing to another person and that this person is not in an optimal state to handle this truth. The third scenario is when you are trying to change the beliefs of others. It is often beneficial if you can understand the costs involved for them to change their beliefs as well as their perspective. This allows you to become better able to actually change their beliefs rather than to just win an argument.

 

Scenario 1: computationally expensive truth

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. – Donald Knuth

If optimization requires significant effort and only results in minimal gains in utility, then it is not worth it. If you only need to be 90% sure that something is true and you are currently 98% sure, then it is not worth spending extra effort to get to 99% certainty. For example, if you are testing ballistics on Earth, then it may be appropriate to use Newton's laws even though they are known to be inexact in some extreme conditions. This does not mean that optimization should never be done; sometimes that extra 1% of certainty is extremely important. What it does mean is that you should spend your resources wisely. The beliefs that you form should lead to an increased ability to anticipate accurately. You should also remember Occam's razor. If you commit yourself to a decision procedure that is accurate but slow and wasteful, then you will probably be outcompeted by others who spend their resources on more suitable and worthy activities.
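
To make the trade-off above concrete, here is a toy value-of-information sketch. All the numbers and names are invented; the point is only that extra certainty has a price that can be compared against its expected benefit:

```python
# A toy calculation: is it worth raising certainty from 98% to 99%?
# All quantities are hypothetical.

def expected_loss_avoided(certainty_now, certainty_after, loss_if_wrong):
    """Expected loss avoided by reducing the chance of acting on a falsehood."""
    return (certainty_after - certainty_now) * loss_if_wrong

cost_of_extra_work = 50.0   # invented cost of the additional investigation
loss_if_wrong = 1000.0      # invented loss from acting on a false belief

gain = expected_loss_avoided(0.98, 0.99, loss_if_wrong)  # ~0.01 * 1000 = 10.0
print(gain > cost_of_extra_work)  # False: here the extra certainty is not worth it
```

With these particular numbers the extra effort fails the test, which is exactly the situation the paragraph above describes.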

 

Scenario 2: emotionally distressing truth

Assume for a moment that you have a child and that you have just finished watching that child fail horribly at a school performance. If your child then asks you, while crying, how the performance was, do you tell them the truth in full or not? Most people would choose not to and would instead attempt to calm and comfort the child. To do otherwise is not seen as rational, but is instead seen as situationally unaware, rude and impolite. Obviously, some ways of telling the truth are worse than others, but overall, telling the full truth is probably not the most prudent thing to do in this situation. This is because the child is not in an emotional state that will allow them to handle the truth well. The truth in this situation is unlikely to lead to improvement and will instead lead to further stress and trauma, which will often cause future performance anxiety, premature optimization and other issues. For these reasons, I think that the truth should not be expressed in this situation. This does not mean that a rational person should forget what has happened. They should instead remember it so that they can bring it up when the child is in an emotional state that would allow them to better implement any advice that is given, for example, when practicing in a safe environment.

I want to point out that avoiding the truth is not what I am advocating. I am instead saying that we should be strategic about telling potentially face-threatening or emotionally distressing truths. I do believe that repression and avoidance of issues that have a persistent nature most often leads to exacerbation of, or resignation to, those issues. Hiding from the truth rarely improves the situation. Consider the child: if you never mention the performance because you don't want to cause pain, then they are still probably going to get picked on at school. Knowing this, we can say that the best thing to do is to bring up the truth and frame it in a particular situation where the child can find it useful and come to be able to handle it better.

 

Scenario 3: psychologically exhausting truth

If we remember that truth seeking involves costs, then we are more likely to be aware of how we can reduce those costs when we are trying to change the beliefs of others. If you are trying to convince someone and they do not agree with you, this may not be because your arguments are weak or because the other person is stupid. It may just be that there is a significant cost involved for them in either understanding your argument or updating their beliefs. If you want to convince someone, and also avoid the illusion of transparency, then it is best to take the following into account:

  • You should try to end arguments well and to avoid vitriol - the emotional contagion heuristic leads people to avoid contact with people or objects viewed as "contaminated" by previous contact with someone or something viewed as bad—or, less often, to seek contact with objects that have been in contact with people or things considered good. If someone gets emotional when you are in an argument with them, then you are going to be less likely to change their mind about that topic in the future. It is also a good idea to consider the peak-end rule, which basically means that you should try to end your arguments well.
  • If you find that someone is already closed off due to emotional contagion, then you should try a surprising strategy so that your arguments aren't stereotyped and avoided. As Eliezer said here:
  • The first rule of persuading a negatively disposed audience - rationally or otherwise - is not to say the things they expect you to say. The expected just gets filtered out, or treated as confirmation of pre-existing beliefs regardless of its content.

  • Processing fluency - is the ease with which information is processed. Ask yourself whether your argument is worded in such a way that it is fluent and easy to understand.
  • Cognitive dissonance - is a measure of how much your argument conflicts with the other person's pre-existing beliefs. Perhaps you need to convince them of a few other points first before your argument will work.
  • Inferential distance - is the amount of background information that they need access to in order to understand your argument.
  • Leave a line of retreat - think about whether they can admit that they were wrong without also looking stupid or foolish. There are generally two ways to win an argument. The first is to totally demolish the other person's position. The second is to actually change their mind. The first leaves them feeling wrong, stupid and foolish, which is often going to make them start rationalizing. The second just makes them feel wrong. You win arguments the second way by being reasonable and non-face-threatening. A good way to do this is through empathy and understanding the argument from the other person's position. It is important to see things as others would see them because we don't see the world as it is; we see the world as we are. The other person is not stupid or lying; they might just be in the middle of what I call an 'epistemic contamination cascade' (perhaps there is already a better name for this), which is when false beliefs lead to filters, framing effects and other false beliefs. Another potential benefit of viewing the argument from the other person's perspective is that you may come to realise that your own position is not as steadfast as you once believed.
  • Maximise the cost of holding a false belief - ask yourself whether there are any costs to them if they continue to hold a belief that you believe is false. One way to create some cost is to convince their friends and associates of your position. The extra social pressure may help in getting them to change their minds.
  • Give it time and get them inspecting their maps rather than information that has been filtered through their maps. It is possible that filtering and framing effects mean that your arguments are being distorted by the other person. Consider a depressed person: you can argue with them, but this is not likely to be very helpful. This is because while arguing you will need to contradict them, and this will probably lead to them blocking out what you are saying. I think that in these kinds of situations what you really need to do is to get them to inspect their own maps. This can be done by asking "what" or "how does that make you feel" types of questions. For example, “What are you feeling?”, “What’s going on?” and “What can I do to help?”. There are two main benefits to these types of questions over arguments. The first is that they get the person inspecting their own maps, and the second is that it is much harder for them to block out the responses, since they are the ones providing them. This is a related quote from Sarah Silverman's book:
  • My stepfather, John O'Hara, was the goodest man there was. He was not a man of many words, but of carefully chosen ones. He was the one parent who didn't try to fix me. One night I sat on his lap in his chair by the woodstove, sobbing. He just held me quietly and then asked only, 'What does it feel like?' It was the first time I was prompted to articulate it. I thought about it, then said, "I feel homesick." That still feels like the most accurate description--I felt homesick, but I was home. - Sarah Silverman

  • Remember the other-optimizing bias and that perspectival types of issues need to be resolved by the individual facing them. If you have a goal of changing another person's mind, then it often pays dividends to understand not only why they are wrong, but also why they think they are right or are at least unaware that they are wrong. This kind of understanding can only come from empathy. Sometimes it is impossible to truly understand what another person is going through, but you should always try, without condoning or condemning, to see things as they are from the other person's perspective. Remember that hatred blinds, and so does love. You should always be curious and seek to understand things as they are, not as you wish them, fear them or desire them to be. It is only when you can do this that you can truly understand the costs involved for someone else to change their mind.

 

If you take the point of view that changing beliefs is costly, then you are less likely to be surprised when others don't want to change their beliefs. You are also more likely to think about how you can make the process of changing their beliefs easier for them.

 

Some other examples of when seeking the truth is not necessarily valuable are:

  • Fiction writing and the cinematic experience
  • When the pragmatic meaning does not need truth, but the semantic meaning does. An example is "Hi. How are you?" and other similar greetings, which are peculiar because they look like questions or adjacency pairs, but function slightly differently. They are a kind of ritualised question in which the answer, or at least its level of detail, is normally pre-specified. If someone asks "How are you?", it is seen as aberrant to answer the question in full detail with the truth rather than simply with "fine", which may be a lie. If they actually do want to know how you are, then they will probably ask a follow-up question after the greeting, like "so, is everything good with the kids?".
  • Evolutionary biases which cause delusions, but may help with perspectival and self-confidence issues. For example, the sexual overperception bias in men. From a truth-maximization perspective, young men who assume that all women want them are showing severe social-cognitive inaccuracies, judgment biases, and probably narcissistic personality disorder. However, from an evolutionary perspective, the same young men are behaving more optimally. That is, the bias is an adaptive one which has consistently maximized the reproductive success of their male ancestors. Other examples are women's underestimation of men's commitment and positively biased perceptions of partners.

 

tl;dr: this post posits that truth seeking should be viewed as an optimization process. This means that it may not always be worth it.

[LINK] The Bayesian Second Law of Thermodynamics

7 shminux 12 August 2015 04:52PM

Sean Carroll et al. posted a preprint with the above title. Sean also has a discussion of it on his blog.

While I am a physicist by training, statistical mechanics and thermodynamics is not my strong suit, and I hope someone with expertise in the area can give their perspective on the paper. For now, here is my summary, apologies for any potential errors:

There is a tension between different definitions of entropy. Boltzmann entropy, which counts macroscopically indistinguishable microstates, always increases, except for extremely rare decreases. Gibbs/Shannon entropy, which quantifies our knowledge of a system, can decrease if an observer examines the system and learns something new about it. Jaynes had a paper on that topic, Eliezer discussed this in the Sequences, and spxtr recently wrote a post about it. Now Carroll and collaborators propose the "Bayesian Second Law" that quantifies this decrease in Gibbs/Shannon entropy due to a measurement:

[...] we derive the Bayesian Second Law of Thermodynamics, which relates the original (un-updated) distribution at initial and final times to the updated distribution at initial and final times. That relationship makes use of the cross entropy between two distributions [...] 

[...] the Bayesian Second Law (BSL) tells us that this lack of knowledge — the amount we would learn on average by being told the exact state of the system, given that we were using the un-updated distribution — is always larger at the end of the experiment than at the beginning (up to corrections because the system may be emitting heat)

This last point seems to resolve the tension between the two definitions of entropy, and has applications to non-equilibrium processes, where an observer is replaced with an outcome of some natural process, such as RNA self-assembly.
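
For readers who want the quantities behind this summary, the standard textbook definitions (not taken from the paper itself) are:

```latex
% Gibbs/Shannon entropy of a distribution p over microstates x:
S[p] = -\sum_x p(x) \log p(x)

% Cross entropy between the updated distribution q and the
% un-updated distribution p, the quantity the BSL is stated in terms of:
H(q, p) = -\sum_x q(x) \log p(x)
```

The BSL's "lack of knowledge" statement, as I read it, is a claim about how the second quantity changes between the initial and final times.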

 

Crazy Ideas Thread, Aug. 2015

7 polymathwannabe 11 August 2015 01:24PM

This thread is intended to provide a space for 'crazy' ideas: ideas that spontaneously come to mind (and feel great), ideas you have long wanted to share but never found the place and time for, and ideas you think should be obvious and simple - but nobody ever mentions them.

This thread itself is such an idea. Or rather the tangent of such an idea which I post below as a seed for this thread.

 

Rules for this thread:

  1. Each crazy idea goes into its own top level comment and may be commented there.
  2. Voting should be based primarily on how original the idea is.
  3. Meta discussion of the thread should go to the top level comment intended for that purpose. 

 


If this should become a regular thread, I suggest the following:

  • Use "Crazy Ideas Thread" in the title.
  • Copy the rules.
  • Add the tag "crazy_idea".
  • Create a top-level comment saying 'Discussion of this thread goes here; all other top-level comments should be ideas or similar'
  • Add a second top-level comment with an initial crazy idea to start participation.

Actually existing prediction markets?

6 Douglas_Knight 02 September 2015 10:24PM

What public prediction markets exist in the world today? Have you used one recently?

What attributes do they have that should make us trust them or not, such as liquidity and transaction costs? Do they distort the tails? Which are usable by Americans?

This post is just a request for information. I don’t have much to say.

Intrade used to be the dominant market, but it is gone, opening up this question. The most popular question on prediction markets has been the US Presidential election. If a prediction market wants to get off the ground, it should start with this question. Since the campaign is gearing up, markets that hope to fill the vacuum should exist right now, hence this post.

Many sports bookies give odds on the election. Bookmakers are not technically prediction markets, but they are awfully close, and I think the difference is not so important, though maybe they are less likely to provide historical data. They may well be the most liquid and accurate sources of odds. But the fact that they concentrate on sports is important. It means that they are less likely to expand into other forms of prediction and less likely to be available to Americans. I suspect that there are too many covering the election for an exhaustive list to be interesting, but feel free to point out interesting ones, such as the most liquid, the most accessible to Americans, or those with the most extensive coverage of non-sports events.

Betting is illegal in America. This is rarely enforced directly against individuals, but often creates difficulty depositing money or using the sites. I don’t think that they usually run into problems if they avoid sports and finance. In particular, Intrade was spun off of a sports bookie specifically to reach Americans.

Here are a few comments on Wikipedia’s list. It seems to be using a strict market criterion, so it includes two sports sites just because they are structured as markets. Worse, it might exclude bookies that I would like to know about. Not counting cryptocurrency markets (which I would like to hear about), it appears that there are no serious-money prediction markets. The closest is the New Zealand-based iPredict, which is limited to a total deposit of US$6000, and it takes 18 months to build up to that. The venerable Iowa Electronic Markets (restricted to federal elections) and the young NZ PredictIt have even smaller limits, in return for explicit legality in America. The list includes two play-money markets: Microsoft and Hypermind. Finally, it mentions the defunct play-money Scicast, most notable for its different topic: science and technology. Hypermind and Scicast came out of the IARPA contest. Not on the list, I should mention PredictionBook, which is close to being a play-money prediction market, but tuned in different directions, both in terms of the feedback it provides to participants and the way it encourages a proliferation of questions.

Update: In the previous paragraph, I discarded two sports bookies from Wikipedia's list. I did so because I thought that they had very little non-sports offerings, but in both cases I did a poor job of navigating them and underestimated the numbers. Smarkets still seems too small to be interesting, but Betfair does have solid political offerings and is rightfully at the top of the list.

Unlearning shoddy thinking

6 malcolmocean 21 August 2015 03:07AM

School taught me to write banal garbage because people would thumbs-up it anyway. That approach has been interfering with me trying to actually express my plans in writing because my mind keeps simulating some imaginary prof who will look it over and go "ehh, good enough".

Looking good enough isn't actually good enough! I'm trying to build an actual model of the world and a plan that will actually work.

Granted, school isn't necessarily all like this. In mathematics, you need to actually solve the problem. In engineering, you need to actually build something that works. But even in engineering reports, you can get away with a surprising amount of shoddy reasoning. A real example:

Since NodeJS uses the V8 JavaScript engine, it has native support for the common JSON (JavaScript Object Notation) format for data transfer, which means that interoperability between SystemQ and other CompanyX systems can still be fairly straightforward (Jelvis, 2011).

This excerpt is technically totally true, but it's also garbage, especially as a reason to use NodeJS. Sure, JSON is native to JS, but every major web programming language supports JSON. The pressure to provide citable justifications for decisions which were actually made for reasons more like "I enjoy JavaScript and am skilled with it" produces some deliberately confirmation-biased writing. This is just one pattern—there are many others.

I feel like I need to add a disclaimer here or something: I'm a ringed engineer, and I care a lot about the ethics of design, and I don't think any of my shoddy thinking has put any lives (or well-being, etc) at risk. I also don't believe that any of my shoddy thinking in design reports has violated academic integrity guidelines at my university (e.g. I haven't made up facts or sources).

But a lot of it was still shoddy. Most students are familiar with the process of stating a position, googling for a citation, then citing some expert who happened to agree. And it was shoddy because nothing in the school system was incentivizing me to make it otherwise, and I reasoned it would have cost more to only write stuff that I actually deeply and confidently believed, or to accurately and specifically present my best model of the subject at hand. I was trying to spend as little time and attention as possible working on school things, to free up more time and attention for working on my business, the productivity app Complice.

What I didn't realize was the cost of practising shoddy thinking.

Having finished the last of my school obligations, I've launched myself into some high-level roadmapping for Complice: what's the state of things right now, and where am I headed? And I've discovered a whole bunch of bad thinking habits. It's obnoxious.

I'm glad to be out.

(Aside: I wrote this entire post in April, when I had finished my last assignments & tests. I waited a while to publish it so that I've now safely graduated. Wasn't super worried, but didn't want to take chances.)

Better Wrong Than Vague

So today.

I was already aware of a certain aversion I had to planning. So I decided to make things a bit easier with this roadmapping document, and base it on one my friend Oliver Habryka had written about his main project. He had created a 27-page outline in google docs, shared it with a bunch of people, and got some really great feedback and other comments. Oliver's introduction includes the following paragraph, which I decided to quote verbatim in mine:

This document was written while continuously repeating the mantra “better wrong than vague” in my head. When I was uncertain of something, I tried to express my uncertainty as precisely as possible, and when I found myself unable to do that, I preferred making bold predictions to vague statements. If you find yourself disagreeing with part of this document, then that means I at least succeeded in being concrete enough to be disagreed with.

In an academic context, at least up to the undergrad level, students are usually incentivized to follow "better vague than wrong". Because if you say something the slightest bit wrong, it'll produce a little "-1" in red ink.

And if you and the person grading you disagree, a vague claim might be more likely to be interpreted favorably. There's a limit, of course: you usually can't just say "some studies have shown that some people sometimes found X to help". But still.

Practising being "good enough"

Nate Soares has written about the approach of whole-assed half-assing:

Your preferences are not "move rightward on the quality line." Your preferences are to hit the quality target with minimum effort.

If you're trying to pass the class, then pass it with minimum effort. Anything else is wasted motion.

If you're trying to ace the class, then ace it with minimum effort. Anything else is wasted motion.

My last two yearly review blog posts have followed the structure of talking about my year on the object level (what I did), the process level (how I did it) and the meta level (my more abstract approach to things). I think it's helpful to apply the same model here.

There are lots of things that humans often wish their neurology naturally optimized for. One thing that it does optimize for, though, is minimum energy expenditure. This is a good thing! Brains are costly, and they'd function less well if they always ran at full power. But this has side effects. Here, the relevant side effect is that, if you practice a certain process for a while, and it achieves the desired object-level results, you might lose awareness of the bigger-picture approach that you're trying to employ.

So in my case, I was practising passing my classes with minimum effort and not wasting motion, following the meta-level approach of whole-assed half-assing. But while the meta-level approach of "hitting the quality target with minimum effort" is a good one in all domains (some of which will have much, much higher quality targets), the process of doing the bare minimum to create something without any obvious glaring flaws is not a process that you want to be employing in your business. Or in trying to understand anything deeply.

Which I am now learning to do. And, in the process, unlearning the shoddy thinking I've been practising for the last 5 years.

Related LW post: Guessing the Teacher's Password

(This article crossposted from my blog)

Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I?

6 PeterCoin 18 August 2015 12:53AM

Personal Statement

I like to think about big questions from time to time, a fancy that quite possibly causes me more harm than good. Every once in a while I come up with some idea and wonder, "hey, this seems pretty good, I wonder if anyone is taking it seriously?" Usually, answering that results at worst in me wasting a couple of days on Google and blowing $50 on Amazon before I find someone who's going down the same path, and I can tell myself, "Well, someone's got that covered." This particular idea is a little more stubborn, and the Amazon bill is starting to get a little heavy. So I cobbled together this “paper” to get the idea out there and see where it goes.

I've been quite selective here and have only submitted it in two other places: Vixra and the FXQI forum. Vixra for posterity, in the bizarre case that it's actually right. FXQI because they play with some similar ideas (but the forum turned out to be not really vibrant for such things). I'm now posting it on Less Wrong because you guys seem to be the right balance of badass skeptics and open-minded geeks. In addition, I see a lot of cool work here on anthropic reasoning and the like, so it seems to fit your theme.

Any and all feedback is welcome, I'm a good sport!

Abstract

A popular objection to the many-worlds interpretation of quantum mechanics is that it allows for quantum suicide, where an experimenter creates a device that instantly kills him or leaves him be depending on the output of a quantum measurement; since he has no experience of the device killing him, he experiences quantum immortality. This is considered counter-intuitive and absurd. Presented here is a speculative argument that accepts the counter-intuitiveness and proposes it as a new approach to physical theory, without accepting some of the absurd conclusions of the thought experiment. The approach is based on the idea that the universe is fragile, in that only a fraction of its time-evolved versions retain the familiar structures of people and planets, but the versions that do not retain them are not observed. This presents us with a skewed view of physics, and only by accounting for this fact (which I propose calling the Continual Anthropic Principle) can we understand the true fundamental laws.

Preliminary reasoning

Will a supercollider destroy the Earth?

A fringe objection to the latest generation of high-energy supercolliders was that they might trigger some quantum event that would destroy the Earth, such as by turning it into strangelets (merely an example). To assuage those fears it has been noted that cosmic rays have been observed with higher energies than the collisions these supercolliders produce, so if a supercollider were able to create such Earth-destroying events, cosmic rays would have already destroyed the Earth. Since that hasn't happened, physics must not work that way, and we must therefore be safe.

A false application of the anthropic principle

One may try to cite the anthropic principle as an appeal against the conclusion that physics disallows Earth-destruction by said mechanism. If the Earth were converted to strangelets, there would be no observers on it. If the right sort of multiverse exists, some Earths will be lucky enough to escape this mode of destruction. Thus physics may still allow for strangelet destruction, and supercolliders may still destroy the world. We can reject that objection by noting that if that were the case, it is far more probable that our planet would be alone in a sea of strangelet balls that had already been converted by high-energy cosmic rays. Since we observe other worlds made of ordinary matter, we can be sure physics doesn't allow for the Earth to be converted into strange matter by interactions at Earth's energy level.

Will a supercollider destroy the universe?

Among the ideas on how supercolliders might destroy the world, there are some that destroy not just the Earth but the entire universe as well. A proposed mechanism is triggering the vacuum to collapse to a new, lower energy state. By that mechanism the destructive event spreads out from the nucleation site at the speed of light and shreds the universe into something completely unrecognizable. In the same way that cosmic rays rule out an Earth-destroying event, it has been said that this rules out a universe-destroying event.

Quantum immortality and suicide

Quantum suicide is a thought experiment in which a device measures a random quantum event and kills an experimenter instantly upon one outcome, leaving him alive upon the other. If Everett many-worlds is true, then no matter how many times the experiment is performed, the experimenter will only experience the outcome where he is not killed, thus experiencing subjective immortality. There are some pretty nutty ideas about quantum suicide and immortality, and this has been used as an argument against many-worlds. I find the idea of finding oneself, for example, perpetually avoiding fatal accidents or living naturally well beyond any reasonable time to be mistaken (see objections). I do however think that Max Tegmark came up with a good system of rules on his "crazy" page for how it might work: http://space.mit.edu/home/tegmark/crazy.html

The rules he outlines are: "I think a successful quantum suicide experiment needs to satisfy three criteria:

1. The random number generator must be quantum, not classical (deterministic), so that you really enter a superposition of dead and alive.

2. It must kill you (at least make you unconscious) on a timescale shorter than that on which you can become aware of the outcome of the quantum coin-toss - otherwise you'll have a very unhappy version of yourself for a second or more who knows he's about to die for sure, and the whole effect gets spoiled.

3. It must be virtually certain to really kill you, not just injure you.”

Have supercolliders destroyed the universe? 

Let's say that a given experiment has a certain "probability" (by a probabilistic interpretation of QM) of producing said universe-destroying event. This satisfies all three of Tegmark's conditions for a successful quantum suicide experiment. As such, the experimenter might conclude that said event cannot happen. However, he would be mistaken, and a corresponding fraction of successor states would in fact be ones where the event occurred. If the rules of physics are such that the event is allowed, then we have a fundamentally skewed perception of what the laws of physics are.

It's not a bug it's a feature!

If we presume such events could occur, we have no idea how frequent they are. There's no necessary reason why they need to be confined to rare high-energy experiments and cosmic rays. Perhaps they govern more basic and fundamental interactions. For instance, certain events within an ordinary atomic nucleus could create a universe-destroying event. Even if these events occur at an astonishing rate, so long as there's a path where the event doesn't occur (or is "undone" before the runaway effect can take hold), it would not contradict our observations. The presumption that these events don't occur may be preventing us from understanding a simpler law that describes physics in a certain situation, in favor of more complex theories that limit behavior to what we can observe.

Fragile Universe Hypothesis

Introduction

Because of this preliminary reasoning I am postulating what I call the "Fragile Universe Hypothesis". The core idea is that our universe is constantly being annihilated by various runaway events initiated by quantum phenomena. However, because for any such event there is always a possible path where the event does not occur, and since all possible paths are realized, we are presented with an illusion of stability. What we see as persistent structures in the universe (chairs, planets, galaxies) are persistent only because events that destroy them by and large destroy us as well. What we may think are fundamental laws of our universe are merely descriptions of the nature of possible futures consistent with our continued existence.

Core theory

The hypothesis can be summarized as postulating the following:

1. For a given event at Time T there are multiple largely non-interacting future successor events at T + ε (i.e. Everett Many Worlds is either correct or at least on the right track)

2. There are some events where some (but not all) successor events trigger runaway interactions that destroy the universe as we know it. Such events expand from the origin at c (the speed of light) and immediately disrupt the consciousness of any being they encounter.

3. We experience only a subset of possible futures and thus have a skewed perspective of the laws of physics.

4. To describe the outcome of an experiment we must first calculate the possible outcomes and then filter out those that result in observer destruction (call this the "continual anthropic principle").
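
Point 4 is essentially a conditioning operation: enumerate the successor states, discard those in which no observers survive, and renormalise what remains. A toy sketch, with all states and weights invented purely for illustration:

```python
# Toy illustration of the proposed "continual anthropic principle":
# observed outcome frequencies are raw outcome probabilities
# conditioned on observer survival. All numbers are invented.

raw_outcomes = {
    "stable_vacuum": 0.70,
    "rare_decay": 0.05,
    "vacuum_collapse": 0.25,   # destroys all observers in that branch
}
observer_survives = {
    "stable_vacuum": True,
    "rare_decay": True,
    "vacuum_collapse": False,
}

surviving = {s: p for s, p in raw_outcomes.items() if observer_survives[s]}
total = sum(surviving.values())
observed = {s: round(p / total, 3) for s, p in surviving.items()}
print(observed)  # {'stable_vacuum': 0.933, 'rare_decay': 0.067}
```

On this picture, experimenters would record the renormalised frequencies and never see the 25% of branches in which the runaway event occurred.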

Possible Objections

"If I get destroyed I die and will no longer have experiences. This is at face value absurd"

I'm sympathetic, and I'd say this requires a stretch of imagination to consider. But do note that under this hypothesis, no one will ever have an experience that isn't followed by a successive experience (see quantum immortality for discussion of death). So from our perspective our existence will go on unimpeded. As an example, consider a video game save. The game file can be saved, copied, compressed, decompressed, moved from medium to medium (with some files being deleted after being copied to a new location). We say that the game continues so long as someone plays at least one copy of the file. Likewise for us, we say life (or the universe as we know it) goes on so long as at least one successor continues.

"This sort of reasoning would result in having to accept absurdities like quantum immortality"

I don't think so. Quantum immortality (the idea that many worlds guarantees one immortality, as there will always be some future state in which one continues to exist) presumes that personhood is an all-or-nothing thing. In reality a person is more of a fragmented collection of mental processes. We don't suddenly stop having experiences as we die; rather the fragments unbind, some living on in the memory of others or in those experiencing the products of our expression, while others fade out. A destructive event of the kind proposed would absolutely be an all-or-nothing affair. Either everything goes, or nothing goes.

"This isn't science. What testable predictions are you making? Heck you don't even have a solid theory" 

Point taken! This is, at this point, speculation, but I think it might have the sort of elegance that good theories have. The questions that I have are:

1. Has this ever been seriously considered? (I’ve done some homework but undoubtedly not enough).

2. Are there any conceptual defeaters that make this a nonstarter?

3. Could some theories be made simpler by postulating a fragile universe and continual anthropic principle?

4. Could those hypothetical theories make testable predictions?

5. Have those tests been consistent with the theory?

My objective in writing this is to provide an argument against 2, and to start looking into 1 and 3. 4 and 5 are essential to good science as well, but we're simply not at that point yet.

Final Thoughts

The Copernican Principle for Many worlds

When we moved the Earth away from the center of the solar system, the orbits of the other planets became simpler and clearer. Perhaps physical law can be made simpler and clearer when we move the futures we will experience away from the center of possible futures. And like the solar system's habitable zone, perhaps only a small portion of futures are habitable.

Why confine the Anthropic Principle to the past? 

Current models of cosmology limit the impact of anthropic selection on the cosmos to the past: string landscapes, bubble universes or cosmic branes all got fixed at some set of values 13 billion years ago, and the selection effect does no more work at the cosmic scale. Perhaps the selection effect is more fundamental than that. Could it be that 13 billion years ago is instead when anthropic selection merely switched from being creative, in sowing our cosmic seeds, to conservative, in allowing them to grow?

Rationality Compendium: Principle 2 - You are implemented on a human brain

5 ScottL 29 August 2015 04:24PM

Irrationality is ingrained in our humanity. It is fundamental to who we are. This is because being human means that you are implemented on kludgy and limited wetware (a human brain). A consequence of this is that biases and irrational thinking are not mistakes, per se; they are not misfirings or accidental activations of neurons. They are the default mode of operation for wetware that has been optimized for purposes other than truth maximization.

 

If you want something to blame for the fact that you are innately irrational, then you can blame evolution. Evolution tends not to produce optimal organisms, but instead produces ones that are kludgy, limited and optimized for criteria relating to ancestral environments rather than for criteria relating to optimal thought.

 

A kludge is a clumsy or inelegant, yet surprisingly effective, solution to a problem. The human brain is an example of a kludge. It contains many distinct substructures dating from widely separated periods of evolutionary development. An example of this is the two kinds of processes in human cognition, one of which is fast (type 1) and the other slow (type 2).

There are many other characteristics of the brain that induce irrationality. The main ones are that:

  • The brain is innately limited in its computational abilities and so it must use heuristics, which are mental shortcuts that ease the cognitive load of making a decision.
  • The brain has a tendency to blindly reuse salient or pre-existing responses rather than developing new answers or thoroughly checking pre-existing solutions.
  • The brain does not inherently value truth. One of the main reasons for this is that many of the biases can actually be adaptive. An example of an adaptive bias is the sexual overperception bias in men. From a truth-maximization perspective, young men who assume that all women want them are showing severe social-cognitive inaccuracies, judgment biases, and probably narcissistic personality disorder. However, from an evolutionary perspective, the same young men are behaving in a more optimal manner, one which has consistently maximized the reproductive success of their male ancestors. Another similar example is the bias for positive perception of partners.
  • The brain acts more like a coherence maximiser than a truth maximiser, which makes people liable to believing falsehoods. If you want to believe something, or you are often in situations in which two things just happen to be related, then your brain is often by default going to treat them as if they were true.
  • The brain trusts its own version of reality much more than other people's. This makes people defend their beliefs even when doing so is extremely irrational. It also makes it hard for people to change their minds and to accept when they are wrong.
  • Disbelief requires System 2 thought. This means that if system 2 is otherwise engaged (busy), then we are liable to believe pretty much anything. System 1 is gullible and biased to believe. It is system 2 that is in charge of doubting and disbelieving.

One important non-brain-related factor is that we must make use of, and live with, our current adaptations. People cannot remake themselves to fulfill purposes suited to their current environment, but must instead make use of pre-existing machinery that has been optimised for other environments. This means that there is probably never going to be any miracle cure for irrationality, because eradicating it would require that you were so fundamentally altered that you were no longer human.

 

One of the first major steps on the path to becoming more rational is the realisation that you are not only irrational by default, but that you are always fundamentally compromised. This doesn't mean that improving your rationality is impossible. It just means that if you stop applying your knowledge of what improves rationality, then you will slip back into irrationality. This is because the brain is a kludge. It works most of the time, but in some cases its innate and natural course of action must be diverted if we are to be rational. The good news is that this kind of diversion is possible. This is because humans possess second-order thinking: they can observe their inherent flaws and systematic errors. They can then, through studying the laws of thought and action, apply second-order corrections and thereby become more rational.

 

The process of applying these second-order corrections, or training yourself to mitigate the effects of your propensities, is called debiasing. Debiasing is not a thing that you can do once and then forget about. It is something that you must either be doing constantly or must instill into habits so that it occurs without volitional effort. There are generally three main types of debiasing, described below:

  • Counteracting the effects of bias - this can be done by adjusting your estimates or opinions in order to avoid errors due to biases (see the sketch after this list). This is probably the hardest of the three types of debiasing, because to do it correctly you need to know exactly how much you are already biased, and this is something that people are rarely aware of.
  • Catching yourself when you are being, or could be, biased and applying a cognitive override. The basic idea behind this is that you observe and track your own thoughts and emotions so that you can catch yourself before you move too deeply into irrational modes of thinking. This is hard because it requires superb self-awareness skills, which often take a long time to develop and train. Once you have caught yourself, it is often best to resort to formal thought: algebra, logic, probability theory, decision theory, etc. It is also useful to instill habits in yourself that allow this observation to occur without conscious and volitional effort. It should be noted that incorrectly applying the first two methods of debiasing can actually make you more biased, and that this is a common problem faced by beginners to rationality training.
  • Understanding the situations which make you biased so that you can avoid them - the best way to achieve this is simply to ask yourself: how can I become more objective? You do this by taking your biased and faulty perspective as much as possible out of the equation. For example, instead of taking measurements yourself, you could have them taken automatically by some scientific instrument.
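
As a concrete sketch of the first type of debiasing, consider correcting a time estimate for a known tendency to underestimate (the planning fallacy). The correction factor below is invented; in practice it would come from your own track record:

```python
# Illustrative only: counteracting a bias by adjusting the estimate.
# The 1.5x factor is a made-up stand-in for a personally calibrated one.

def debiased_estimate(raw_estimate_hours, past_overrun_factor=1.5):
    """Scale a raw time estimate by how much similar estimates
    overran in the past (an outside-view correction)."""
    return raw_estimate_hours * past_overrun_factor

print(debiased_estimate(10.0))  # 15.0 hours, not the hoped-for 10
```

This also shows why this type is the hardest: the whole method stands or falls on knowing the correction factor, which is exactly what people are rarely aware of.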

Related Materials

Wikis:

  • Bias - refers to the obstacles to truth which are produced by our kludgy and limited wetware (brains) working exactly the way that they should. 
  • Evolutionary psychology - the idea of evolution as the idiot designer of humans - that our brains are not consistently well-designed - is a key element of many of the explanations of human errors that appear on this website.
  • Slowness of evolution - the tremendously slow timescale of evolution, especially for creating new complex machinery (as opposed to selecting on existing variance), is why the behavior of evolved organisms is often better interpreted in terms of what did in fact work in the past, rather than what would be optimal now.
  • Alief - an independent source of emotional reaction which can coexist with a contradictory belief. For example, the fear felt when a monster jumps out of the darkness in a scary movie is based on the alief that the monster is about to attack you, even though you believe that it cannot. 
  • Wanting and liking - The reward system consists of three major components:
    • Liking: The 'hedonic impact' of reward, comprised of (1) neural processes that may or may not be conscious and (2) the conscious experience of pleasure.
    • Wanting: Motivation for reward, comprised of (1) processes of 'incentive salience' that may or may not be conscious and (2) conscious desires.
    • Learning: Associations, representations, and predictions about future rewards, comprised of (1) explicit predictions and (2) implicit knowledge and associative conditioning (e.g. Pavlovian associations).
  • Heuristics and biases - program in cognitive psychology tries to work backward from biases (experimentally reproducible human errors) to heuristics (the underlying mechanisms at work in the brain). 
  • Cached thought – is an answer that was arrived at by recalling a previously-computed conclusion, rather than performing the reasoning from scratch.  
  • Sympathetic Magic - humans seem to naturally generate a series of concepts known as sympathetic magic, a host of theories and practices which have certain principles in common, two of which are of overriding importance: the Law of Contagion holds that two things which have interacted, or were once part of a single entity, retain their connection and can exert influence over each other; the Law of Similarity holds that things which are similar or treated the same establish a connection and can affect each other. 
  • Motivated Cognition - an academic/technical term for various mental processes that lead to desired conclusions regardless of the veracity of those conclusions.   
  • Rationalization - Rationalization starts from a conclusion, and then works backward to arrive at arguments apparently favoring that conclusion. Rationalization argues for a side already selected; rationality tries to choose between sides.  
  • Oops - there is a powerful advantage to admitting you have made a large mistake. It's painful. It can also change your whole life.
  • Adaptation executors - individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers. Our taste buds do not find lettuce delicious and cheeseburgers distasteful once we are fed a diet too high in calories and too low in micronutrients. Taste buds are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor. Evolution operates on too slow a timescale to re-adapt to new conditions (such as a modern diet).
  • Corrupted hardware - our brains do not always allow us to act the way we should. Corrupted hardware refers to those behaviors and thoughts that act for ancestrally relevant purposes rather than for stated moralities and preferences.
  • Debiasing - The process of overcoming bias. It takes serious study to gain meaningful benefits, half-hearted attempts may accomplish nothing, and partial knowledge of bias may do more harm than good. 
  • Costs of rationality - Becoming more epistemically rational can only guarantee one thing: what you believe will include more of the truth. Knowing that truth might help you achieve your goals, or cause you to become a pariah. Be sure that you really want to know the truth before you commit to finding it; otherwise, you may flinch from it.
  • Valley of bad rationality - It has been observed that when someone is just starting to learn rationality, they appear to be worse off than they were before. Others, with more experience at rationality, claim that after you learn more about rationality, you will be better off than you were before you started. The period before this improvement is known as "the valley of bad rationality".
  • Dunning–Kruger effect - is a cognitive bias wherein unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to recognize their ineptitude. Conversely, highly skilled individuals tend to underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others. 
  • Shut up and multiply - in cases where we can actually do calculations with the relevant quantities, the ability to shut up and multiply, i.e. to trust the math even when it feels wrong, is a key rationalist skill (a minimal worked example follows this list).
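
As a minimal example of what "shut up and multiply" asks for (the stakes and probabilities below are invented):

```python
# Invented numbers: a certain payoff versus a gamble with a higher
# expected value. "Shut up and multiply" says trust the arithmetic.

sure_thing = 400.0                   # utility of the certain option
gamble = 0.5 * 1000.0 + 0.5 * 0.0    # expected utility of the coin-flip gamble

print(gamble, ">", sure_thing, "->", gamble > sure_thing)  # 500.0 > 400.0 -> True
```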

Posts

Popular Books:

Papers:

  • Haselton, M. (2003). The sexual overperception bias: Evidence of a systematic bias in men from a survey of naturally occurring events. Journal of Research in Personality, 34-47.
  • Haselton, M., & Buss, D. (2000). Error Management Theory: A New Perspective on Biases in Cross-Sex Mind Reading. Journal of Personality and Social Psychology, 81-91.
  • Murray, S., Griffin, D., & Holmes, J. (1996). The Self-Fulfilling Nature of Positive Illusions in Romantic Relationships: Love Is Not Blind, but Prescient. Journal of Personality and Social Psychology, 1155-1180.
  • Gilbert, D. T., Tafarodi, R. W., & Malone, P. S. (1993). You can't not believe everything you read. Journal of Personality and Social Psychology, 65, 221-233.
 

Notes on decisions I have made while creating this post

 (these notes will not be in the final draft): 

  • This post doesn't have any specific details on debiasing or the biases. I plan to provide these details in later posts. The main point of this post is to convey the idea in the title.

Open Thread - Aug 24 - Aug 30

5 Elo 24 August 2015 08:14AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

View more: Next