
Actually existing prediction markets?

4 Douglas_Knight 02 September 2015 10:24PM

What public prediction markets exist in the world today? Have you used one recently?

What attributes do they have that should make us trust them or not, such as liquidity and transaction costs? Do they distort the tails? Which are usable by Americans?

This post is just a request for information. I don’t have much to say.

Intrade used to be the dominant market, but it is gone, opening up this question. The most popular question on prediction markets has been the US Presidential election. If a prediction market wants to get off the ground, it should start with this question. Since the campaign is gearing up, markets that hope to fill the vacuum should exist right now, hence this post.

Many sports bookies give odds on the election. Bookmakers are not technically prediction markets, but they are awfully close and I think the difference is not so important, though maybe they are less likely to provide historical data. They may well be the most liquid and accurate sources of odds. But the fact that they concentrate on sports is important. It means that they are less likely to expand into other forms of prediction and less likely to be available to Americans. I suspect that there are too many covering the election for an exhaustive list to be interesting, but feel free to point out interesting ones, such as the most liquid, most accessible to Americans, or with the most extensive coverage of non-sports events.

Betting is illegal in America. This is rarely enforced directly against individuals, but often creates difficulty depositing money or using the sites. I don’t think that they usually run into problems if they avoid sports and finance. In particular, Intrade was spun off of a sports bookie specifically to reach Americans.

Here are a few comments on Wikipedia’s list. It seems to be using a strict market criterion, so it includes two sports sites just because they are structured as markets. Worse, it might exclude bookies that I would like to know about. Not counting cryptocurrency markets (which I would like to hear about), it appears that there are no serious money prediction markets. The closest is New Zealand-based iPredict, which is limited to a total deposit of US$6000, and it takes 18 months to build up to that. The venerable Iowa Electronic Markets (restricted to federal elections) and the young NZ PredictIt have even smaller limits, in return for explicit legality in America. There are three play money markets: Microsoft, Hypermind, and Scicast. The last two came out of the IARPA contest. Scicast is notable for its different topic: science and technology. It closed for the summer and might resume in the fall, pending funding. Though not on the list, I should mention PredictionBook, which is close to being a play-money prediction market, but tuned in different directions, both in terms of the feedback it provides to participants and the way it encourages a proliferation of questions.

Stupid Questions September 2015

3 polymathwannabe 02 September 2015 06:26PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people admitting ignorance, and don't mock them for it; they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.


Bragging thread September 2015

3 polymathwannabe 02 September 2015 06:24PM

Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.

Remember, however, that this isn't any kind of progress thread. Nor is it any kind of proposal thread. This thread is solely for people to talk about the awesome things they have done. Not "will do". Not "are working on". Have already done. This is to cultivate an environment of object level productivity rather than meta-productivity methods.

So, what's the coolest thing you've done this month?

(Previous bragging thread)

Rough utility estimates and clarifying questions

1 Romashka 02 September 2015 12:55PM

Related to: diminishing returns, utility.

I, for example, really don't care that much about trillions of dollars being won in a lottery or offered by an alien AI iff I make 'the right choice'. I mostly deal with things on pretty linear scales, barring sudden gifts from my relatives and Important Life Decisions. So the below was written with trivialities in mind. Why? Because I think we should train our utility-assigning skilz just like we train our prior-probability-estimating ones.

However, I am far from certain we should do it exactly this way. Maybe this would lead to a shiny new bias. At least I vaguely think I already have it, and formalizing it shouldn't make me worse off. I have tried to apply to myself the category of 'risk-averse', but in the end, it didn't change my prevailing heuristic: 'Everything's reasonable, if you have a sufficient reason.' Like, a pregnant woman should not run if she cares about carrying her child, but even then she should run if the house is on fire. Maybe my estimates of 'sufficient' are different than other people's, but they have served me so far; and setting the particular goal of ridding self of particular biases seems less instrumentally rational than just checking how accurate my individual predictions/impressions/any kind of actionable thoughts are.

So I drew up this list of utility components and will try it out at my leisure, tweaking it ad hoc and paying with time and money and health for my mistakes.

Utility of a given item/action for a given owner/actor = produced value + reduced cost + saved future opportunities + fun.

PV points: -2 if A/I 'takes from tomorrow'*, -1 if 'harmful' only within the day, 0 if gives zero on net, 1 if useful within the day, 2 if 'gives to tomorrow'

*'tomorrow' is the foreseeable future :)

RC points: -3 if takes from the overall amount of money I have, less the *really* last-resort stash, -2 if takes from more than one-day-budget, -1 if takes from one-day-budget, 0 if zero on net, 1 if saves within a day (like 'saved on a ticket, might buy candy'), 2 if saves for 'tomorrow' on net

SFO points: -2 if 'really sucks', -1 if no, 0 if dunno, 1 if yes

F points: -1 if no, 0 if okay, 1 if yes, 2 if hell yes.

U(bout of flu) = -2 - 3 + 0 - 1 = -6. Even if I have flu, I might do research or call a friend or do something useful if it's not very bad, and then it will be only -5. On the other hand, I might get pneumonia, which really sucks, and then it will be -7. Knowing this, I can, when I feel myself going under, 1) make sure I don't get pneumonia, and 2) go through low-effort stuff I keep labelling 'slow-day-stuff'.
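
For concreteness, here is a minimal sketch of the rubric's arithmetic in code; the function and argument names are my own shorthand, not part of the original scheme:

```python
def utility(pv, rc, sfo, f):
    # Sum the four rubric components: produced value (PV), reduced cost (RC),
    # saved future opportunities (SFO) and fun (F).
    return pv + rc + sfo + f

# The flu example: PV = -2 (takes from tomorrow), RC = -3 (dips into the
# overall stash), SFO = 0 (dunno), F = -1 (no fun).
print(utility(pv=-2, rc=-3, sfo=0, f=-1))  # -6
```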

U(room of a house) = use + status - maintenance = U(weighted activities of, well, life) + U(weighted signalling activities, like polishing family china) - U(weighted repair activities).

U(route) = f(weather, price, time, destination, health, 'carrying' potential, changeability on short notice, explainability to somebody else) = U(clothes) + U(activities during commute) + U(shopping/exchanging things/..) + U(emergencies) + U(rescue missions).

What do you think? 

Should there be more people on the leaderboard?

2 casebash 02 September 2015 11:52AM

I'm wondering what the optimal number of people on the leaderboard would be. I suspect that people who appear on the leaderboard post more often because they want to remain on it. The other advantage is that if the leaderboard seems in reach, more people will compete to get on it. On the other hand, if too many people were added to the leaderboard, then "being on the leaderboard" would be worthless and people would only care if they had a high position.

There are currently 15 people on the leaderboard. I suspect that if there were 20 people on the leaderboard, that would increase the motivation effect, without significantly devaluing being on the leaderboard itself.

What do people think?

September 2015 Media Thread

3 ArisKatsaris 01 September 2015 10:42PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Yudkowsky, Thiel, de Grey, Vassar panel on changing the world

11 NancyLebovitz 01 September 2015 03:57PM

30 minute panel

The first question was why isn't everyone trying to change the world, with the underlying assumption that everyone should be. However, it isn't obviously the case that the world would be better if everyone were trying to change it. For one thing, trying to change the world mostly means trying to change other people. If everyone were trying to do it, this would be a huge drain on everyone's attention. In addition, some people are sufficiently mean and/or stupid that their efforts to change the world make things worse.

At the same time, some efforts to change the world are good, or at least plausible. Is there any way to improve the filter so that we get more ambition from benign people without just saying everyone should try to change the world, even if they're Osama bin Laden?

The discussion of why there's too much duplicated effort in science didn't bring up the problem of funding, which is probably another version of the problem of people not doing enough independent thinking.

There was some discussion of people getting too hooked on competition, which is a way of getting a lot of people pointed at the same goal. 

Link thanks to Clarity

Typical Sneer Fallacy

8 calef 01 September 2015 03:13AM

I like going to see movies with my friends.  This doesn't require much elaboration.  What might is that I continue to go see movies with my friends despite the radically different ways in which my friends happen to enjoy watching movies.  I'll separate these movie-watching philosophies into a few broad and not necessarily all-encompassing categories (you probably fall into more than one of them, as you'll see!):

(a): Movie watching for what was done right.  The mantra here is "There are no bad movies." or "That was so bad it was good."  Every movie has something redeeming about it, or it's at least interesting to try and figure out what that interesting and/or good thing might be.  This is the way that I watch movies, most of the time (say 70%).

 

(b): Movie watching for entertainment.  Mantra: "That was fun!".  Critical analysis of the movie does not provide any enjoyment.  The movie either succeeds in 'entertaining' or it fails.  This is the way that I watch movies probably 15% of the time.

 

(c): Movie watching for what was done wrong.  Mantra: "That movie was terrible."  The only enjoyment that is derived from the movie-watching comes from tearing the film apart at its roots--common conversation pieces include discussion of plot inconsistencies, identification of poor directing/cinematography/etc., and even alternative options for what could have 'fixed' the film, to the extent that the film could even be said to be 'fixed'.  I do this about ~12% of the time.

 

(d): Sneer. Mantra: "Have you played the drinking game?".  Vocal, public, moderately-drunken dog-piling of a film's flaws is the only way a movie can be enjoyed.  There's not really any thought put into the critical analysis.  The movie-watching is more an excuse to be rambunctious with a group of friends than it is to actually watch a movie.  I do this, conservatively, 3% of the time.

What's worth stressing here is that these are avenues of enjoyment.  Even when a (c) person watches a 'bad' movie, they enjoy it to the extent that they can talk at length about what was wrong with the movie. With the exception of the Sneer category, none of these sorts of critical analysis are done out of any sort of vindictiveness, particularly and especially (c).

So, like I said, I'm mostly an (a) person.  I have friends that are (a) people, (b) people, (c) people, and even (d) people (where being a (_) person means watching movies with that philosophy more than 70% of the time).

 

This can generate a certain amount of friction.  Especially when you really enjoy a movie, and your friend starts shitting all over it.

 

Or at least, that's what it feels like from the inside!  Because you might have really enjoyed a movie because you thought it was particularly well-shot, or it evoked a certain tone really well, but here comes your friend who thought the dialogue was dumb, boring, and poorly written.  Fundamentally, you and your friend are watching the movie for different reasons.  So when you go to a movie with 6 people who are exclusively (c), category (c) can start looking a whole lot like category (d) when you're an (a) or (b) person.

And that's no fun, because (d) people aren't really charitable at all.  It can be easy to translate in one's mind the criticism "That movie was dumb" into "You are dumb for thinking that movie wasn't dumb".  Sometimes the translation is even true!  Sneer Culture is a thing that exists, and while its connection to my 'Sneer' category above is tenuous, my word choice is intentional.  There isn't anything wrong with enjoying movies via (d), but because humans are, well, human, a sneer culture can bloom around this sort of philosophy.

Being able to identify sneer cultures for what they are is valuable.  Let's make up a fancy name for misidentifying sneer culture, because the rationalist community seems to really like snazzy names for things:

Typical Sneer Fallacy: When you ignore or are offended by criticism because you've mistakenly identified it as coming purely from sneer.  In reality, the criticism was genuine: it represents someone's sincere beliefs, and does not come simply from a place of malice.

 

This is the point in the article where I make a really strained analogy between the different ways in which people enjoy movies, and how Eliezer has pretty extravagantly committed the Typical Sneer Fallacy in this reddit thread.

 

Some background for everyone that doesn't follow the rationalist and rationalist-adjacent tumblr-sphere:  su3su2u1, a former physicist, now data scientist, has a pretty infamous series of reviews of HPMOR.  These reviews are not exactly kind.  Charitably, I suspect this is because su3su2u1 is a (c) kind of person, or at least, that's the level at which he chose to interact with HPMOR.  For disclosure, I definitely (a)-ed my way through HPMOR.

su3su2u1 makes quite a few science criticisms of Eliezer.  Eliezer doesn't really take these criticisms seriously, and explicitly calls them "fake".  Then, multiple physicists come out of the woodwork to tell Eliezer he is wrong concerning a particular one involving energy conservation and quantum mechanics (I am also a physicist, and su3su2u1's criticism is, in fact, correct.  If you actually care about the content of the physics issue, I'd be glad to get into it in the comments.  It doesn't really matter, except insofar as this is not the first time Eliezer's discussions of quantum mechanics have gotten him into trouble) (Note to Eliezer: you probably shouldn't pick physics fights with the guy whose name is the symmetry of the standard model Lagrangian unless you really know what you're talking about (yeah yeah, appeal to authority, I know)).

I don't really want to make this post about stupid reddit and tumblr drama.  I promise.  But I think the issue was rather succinctly summarized, if uncharitably, in a tumblr post by nostalgebraist.

 

The Typical Sneer Fallacy is scary because it means your own ideological immune system isn't functioning correctly.  It means that, at least a little bit, you've lost the ability to determine what sincere criticism actually looks like.  Worse, not only will you not recognize it, you'll also misinterpret the criticism as a personal attack.  And this isn't singular to dumb internet fights.

Further, dealing with criticism is hard.  It's so easy to write off criticism as insincere if it means getting to avoid actually grappling with the content of that criticism:  You're red tribe, and the blue tribe doesn't know what it's talking about.  Why would you listen to anything they have to say?  All the blues ever do is sneer at you.  They're a sneer culture.  They just want to put you down.  They want to put all the reds down.

But the world isn't always that simple.  We can do better than that.

Meta post: Did something go wrong?

3 Elo 31 August 2015 10:58PM

There is a post at this link: http://lesswrong.com/lw/mp3/proper_posture_for_mental_arts/

 

It does not appear in my discussion feed.  See the timestamps in the two screenshots below.

[two screenshots of the feed timestamps, not preserved here]

It's currently 0900 local time on 1/9/15.  The only reason I can think of for this being wrong is that it's a timezone publishing effect.  But I don't know how, or what to do about it...

It also does not appear in an incognito window view of the discussion.

 

Now what?

An accidental experiment in location memory

8 PhilGoetz 31 August 2015 04:50PM

I bought a plastic mat to put underneath my desk chair, to protect the wooden floor from having bits of stone ground into it by the chair wheels. But it kept sliding when I stepped onto it, nearly sending me stumbling into my large, expensive, and fragile monitor. I decided to replace the mat as soon as I found a better one.

Before I found a better one, though, I realized I wasn't sliding on it anymore. My footsteps had adjusted themselves to it.

This struck me as odd. I couldn't be sensing the new surface when stepping onto it and adjusting my step to it, because once I've set my foot down on it, it's too late; I've already leaned toward the foot in a way that would make it physically impossible to reduce my angular momentum, and the slipping seems instantaneous on contact. Nor was I consciously aware of the mat anymore. It's thin, transparent, and easy to overlook.

I could think of two possibilities: Either my brain had learned to walk differently in a small, precise area in front of my desk, or I noticed the mat subconsciously and adjusted my steps subconsciously. The latter possibility freaked me out a little, because it seems like the kind of thing my brain should tell me about. Adjusting my steps subconsciously I expect; noticing a new object or environment, I expect to be told about.

A few weeks later, the mat had gradually moved a foot or two out of position, so I moved it back. The next time I came back to my desk, hours later, having forgotten all about the mat, I immediately slipped on it.

So it seems my brain was not noticing the mat, but remembering its precise location. (It's possible this is instead some physical mechanism that makes the mat stick better to the floor over time, but I can't think how that would work.)

Have any of you had similar experiences?

My future posts; a table of contents.

16 Elo 30 August 2015 10:27PM

My future posts

I have been living in the lesswrong rationality space for at least two years now, and have recently been more active than before. This has been deliberate. I plan to make more serious, active posts in the future, so I wanted to announce the posts I intend to make going forward.  This should do a few things:

  1. keep me on track
  2. keep me accountable to me more than anyone else
  3. keep me accountable to others
  4. allow others to pick which they would like to be created sooner
  5. allow other people to volunteer to create/collaborate on these topics
  6. allow anyone to suggest more topics
  7. meta: this post should help to demonstrate one person's method of developing rationality content and the time it takes to do that.
feel free to PM me about 6, or comment below.

Unfortunately these are not very well organised; they are presented in no particular order.  They are probably missing posts that will help link them all together, as well as skills required to understand some of the posts on this list.


Unpublished but written:

A very long list of sleep maintenance suggestions – I wrote up all the ideas I knew of; there are about 150 or so; worth reviewing just to see if you can improve your sleep because the difference in quality of life with good sleep is a massive change. (20mins to write an intro)

A list of techniques to help you remember names. - remembering names is a low-hanging social value fruit that can improve many of your early social interactions with people. I wrote up a list of techniques to help. (5mins to post)

 

Posts so far:

The null result: a magnetic ring wearing experiment. - a fun one, about how wearing magnetic rings was cool but did not impart superpowers. (done)

An app list of useful apps for android - my current list of apps that I use; there are also some very good suggestions in the comments. (done)

How to learn X - how to attack the problem of learning a new area that you don't know a lot about (for a generic thing). (done)

A list of common human goals – for when plotting out goals that matter to you, so you can look over some common ones and see if fulfilling them interests you. (done)

 

Future posts

Goals of your lesswrong group – Do you have a local group; why? What do you want out of it (do people know)? setting goals, doing something particularly, having fun anyway, changing your mind. (4hrs)

 

Goals interrogation + Goal levels – Goal interrogation is about asking <is this thing I want to do actually a goal of mine> and <is this the best way to achieve that>; goal levels are something out of Sydney Lesswrong that help you have mutual long-term goals and supporting short-term goals. (2hrs)

 

How to human – A zero to human guide. A guide for basic functionality of a humanoid system. (4hrs)

 

How to effectively accrue property – Just spent more than the value of an object on it? How to think about that and try to do it better. (5hrs)

 

List of strategies for getting shit done – working around the limitations of your circumstances and understanding what can get done with the resources you have at hand. (4hrs)

 

List of superpowers and kryptonites – when asking the question "what are my superpowers?" and "what are my kryptonites?". Knowledge is power; working with your powers and working out how to avoid your kryptonites is a method to improve yourself. (6hrs over a week)

 

List of effective behaviours – small life-improving habits that add together to make awesomeness from nothing. And how to pick them up. (8hrs over 2 weeks)

 

Memory and notepads – writing notes as evidence, the value of notes (they are priceless) and what you should do. (1hr + 1hr over a week)

 

Suicide prevention checklist – feeling off? You should have already outsourced the hard work for "things I should check on about myself" to your past self. Make it easier for future you. Especially in the times that you might be vulnerable. (4hrs)

 

Make it easier for future you. Especially in the times that you might be vulnerable. - as its own post in curtailing bad habits. (5hrs)

 

A p=np approach to learning – Sometimes you have to learn things the long way; but sometimes there is a short cut. Where you could say, "I wish someone had just taken me on the easy path early on". It's not a perfect idea; but start looking for the shortcuts where you might be saying "I wish someone had told me". Of course my line now is, "but I probably wouldn't have listened anyway" which is something that can be worked on as well. (2hrs)

 

Rationalists guide to dating – attraction. Relationships. Doing things with a known preference. Don't like stupid people? Don't try to date them. Think first; an exercise in thinking hard about things before trying trial-and-error on the world. (half written, needs improving 2hrs)

 

Training inherent powers (weights, temperatures, smells, estimation powers) – practice makes perfect right? Imagine if you knew the temperature always, the weight of things by lifting them, the composition of foods by tasting them, the distance between things without measuring. How can we train these, how can we improve. (2hrs)

 

Strike to the heart of the question. The strongest one; not the one you want to defeat – Steelman not Strawman. Don't ask "how do I win at the question"; ask, "am I giving the best answer to the best question I can give". (2hrs)

Open Thread August 31 - September 6

3 Elo 30 August 2015 09:26PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Deworming a movement

-6 Clarity 30 August 2015 09:25AM

Over the last few days I've been reviewing the evidence for EA charity recommendations. Based on my personal experience alone, the community seems to be comprehensively inept, poor at marketing, extremely insular, and methodologically unsophisticated, but meticulous, transparent and well-intentioned. I currently hold the belief that EA movement building does more harm than good, and that it requires significant rebranding and shifts in its informal leadership, or it should die out before it damages the reputation of the rationalist community and our capacity to cooperate with communities that share mutual interests.

It's one thing to be ineffective and know it. It's another thing to be ineffective and not know it. It's yet another thing to be ineffective, not know it, yet champion effectiveness and make a claim to moral superiority.

In case you missed the memo: deworming is controversial, GiveWell doesn't engage with the meat of the debate, and my investigations of the EA community's spaces suggest that it's not at all known. I've even briefly posted about it elsewhere on LessWrong to see if there was unspoken knowledge about it, but it seems not. Given that it's the hot topic in mainstream development studies and related academic communities, I'm aghast at how unresponsive 'we' are.

What's actionable for us here? If you're looking for a high-reliability effective altruism prospect, do not donate to SCI or Evidence Action. And by extension, do not donate to EA organisations that donate to these groups, including GiveWell. I am assuming you will use those funds more wisely instead, say by buying healthier food for yourself.

For those who don't want to review the links for the more comprehensive analyses from Cochrane and GiveWell, here is one summary of the debate recommended in the Cochrane article:

Last month there was another battle in an ongoing dispute between economists and epidemiologists over the merits of mass deworming. In brief, economists claim there is clear evidence that cheap deworming interventions have large effects on welfare via increased education and ultimately job opportunities. It’s a best buy development intervention. Epidemiologists claim that although worms are widespread and can cause illnesses sometimes, the evidence of important links to health is weak and knock-on effects of deworming to education seem implausible. As stated by Garner “the belief that deworming will impact substantially on economic development seems delusional when you look at the results of reliable controlled trials.”

Aside: Framing this debate as one between economists and epidemiologists captures some of the dynamic of what has unfortunately been called the “worm wars” but it is a caricature. The dispute is not just between economists and epidemiologists. For an earlier round of this see this discussion here, involving health scientists on both sides. Note also that the WHO advocates deworming campaigns.

So. Deworming: good for educational outcomes or not?

On their side, epidemiologists point to 45 studies that are jointly analyzed in Cochrane reports. Among these they see few high quality studies on school attendance in particular, with a recent report concluding that they “do not know if there is an effect on school attendance (very low quality evidence).” Indeed they also see surprisingly few health benefits. One randomized control trial included one million Indian students and found little evidence of impact on health outcomes. Much bigger than all other trials combined; such results raise questions for them about the possibility of strong downstream effects. Economists question the relevance of this result and other studies in the Cochrane review.

On their side, the chief weapon in the economists’ arsenal has for some time been a paper from 2004 on a study of deworming in West Kenya by Ted Miguel and Michael Kremer, two leading development economists that have had an enormous impact on the quality of research in their field. In this paper, Miguel and Kremer (henceforth MK) claimed to show strong effects of deworming on school attendance not just for kids in treated schools but also for the kids in untreated schools nearby. More recently a set of new papers focusing on longer term impacts, some building on this study, have been added to this arsenal. In addition, on their side, economists have a few things that do not depend on the evidence at all: determination, sway, and the moral high ground. After all, who could be against deworming kids?

Group Rationality Diary, August 30 - September 12

2 Dahlen 30 August 2015 06:11AM

This is the public group rationality diary for August 30th - September 12th, 2015. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit

  • Obtained new evidence that made you change your mind about some belief

  • Decided to behave in a different way in some set of situations

  • Optimized some part of a common routine or cached behavior

  • Consciously changed your emotions or affect with respect to something

  • Consciously pursued new valuable information about something that could make a big difference in your life

  • Learned something new about your beliefs, behavior, or life that surprised you

  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Archive of previous rationality diaries

Rationality Compendium: Principle 2 - You are implemented on a human brain

5 ScottL 29 August 2015 04:24PM

Irrationality is ingrained in our humanity. It is fundamental to who we are. This is because being human means that you are implemented on kludgy and limited wetware (a human brain). A consequence of this is that biases and irrational thinking are not mistakes, per se; they are not misfirings or accidental activations of neurons. They are the default mode of operation for wetware that has been optimized for purposes other than truth maximization.

 

If you want something to blame for the fact that you are innately irrational, then you can blame evolution. Evolution tends not to produce optimal organisms, but instead produces ones that are kludgy, limited and optimized for criteria relating to ancestral environments rather than for criteria relating to optimal thought.

 

A kludge is a clumsy or inelegant, yet surprisingly effective, solution to a problem. The human brain is an example of a kludge. It contains many distinct substructures dating from widely separated periods of evolutionary development. An example of this is the two kinds of processes in human cognition, where one is fast (type 1) and the other is slow (type 2).

There are many other characteristics of the brain that induce irrationality. The main ones are that:

  • The brain is innately limited in its computational abilities and so it must use heuristics, which are mental shortcuts that ease the cognitive load of making a decision.
  • The brain has a tendency to blindly reuse salient or pre-existing responses rather than developing new answers or thoroughly checking pre-existing solutions.
  • The brain does not inherently value truth. One of the main reasons for this is that many of the biases can actually be adaptive. An example of an adaptive bias is the sexual overperception bias in men. From a truth-maximization perspective, young men who assume that all women want them are showing severe social-cognitive inaccuracies, judgment biases, and probably narcissistic personality disorder. However, from an evolutionary perspective, the same young men are behaving in a more optimal manner, one which has consistently maximized the reproductive success of their male ancestors. Another similar example is the bias for positive perception of partners.
  • The brain acts more like a coherence maximiser than a truth maximiser, which makes people liable to believe falsehoods. If you want to believe something, or you are often in situations in which two things just happen to be related, then your brain is by default going to treat them as if they were true.
  • The brain trusts its own version of reality much more than other people's. This makes people defend their beliefs even when doing so is extremely irrational. It also makes it hard for people to change their minds and to accept when they are wrong.
  • Disbelief requires System 2 thought. This means that if System 2 is occupied or depleted, we are liable to believe pretty much anything. System 1 is gullible and biased to believe; it is System 2 that is in charge of doubting and disbelieving.

One important non-brain-related factor is that we must make use of and live with our current adaptations. People cannot remake themselves to fulfill purposes suitable to their current environment, but must instead make use of pre-existing machinery that has been optimised for other environments. This means that there are probably never going to be any miracle cures for irrationality, because eradicating it would require that you were so fundamentally altered that you were no longer human.

 

One of the first major steps on the path to becoming more rational is the realisation that you are not only irrational by default, but that you are always fundamentally compromised. This doesn't mean that improving your rationality is impossible. It just means that if you stop applying your knowledge of what improves rationality then you will slip back into irrationality. This is because the brain is a kludge. It works most of the time, but in some cases its innate and natural course of action must be diverted if we are to be rational. The good news is that this kind of diversion is possible. This is because humans possess second order thinking. This means that they can observe their inherent flaws and systematic errors. Then, by studying the laws of thought and action, they can apply second order corrections and so become more rational.

 

The process of applying these second order corrections, or training yourself to mitigate the effects of your propensities, is called debiasing. Debiasing is not a thing that you can do once and then forget about. It is something that you must either be doing constantly or must instill into habits so that it occurs without volitional effort. There are generally three main types of debiasing, described below:

  • Counteracting the effects of bias - this can be done by adjusting your estimates or opinions in order to avoid errors due to biases. This is probably the hardest of the three types of debiasing because to do it correctly you need to know exactly how much you are already biased. This is something that people are rarely aware of.
  • Catching yourself when you are being or could be biased and applying a cognitive override. The basic idea behind this is that you observe and track your own thoughts and emotions so that you can catch yourself before you move too deeply into irrational modes of thinking. This is hard because it requires that you have superb self-awareness skills, and these often take a long time to develop and train. Once you have caught yourself, it is often best to resort to using formal thought in algebra, logic, probability theory or decision theory etc. It is also useful to instill habits in yourself that would allow this observation to occur without conscious and volitional effort. It should be noted that incorrectly applying the first two methods of debiasing can actually make you more biased, and that this is a common conundrum and problem faced by beginners to rationality training.
  • Understanding the situations which make you biased so that you can avoid them - the best way to achieve this is simply to ask yourself: how can I become more objective? You do this by taking your biased and faulty perspective as much as possible out of the equation. For example, instead of taking measurements yourself you could get them taken automatically by some scientific instrument.

Related Materials

Wikis:

  • Bias - refers to the obstacles to truth which are produced by our kludgy and limited wetware (brains) working exactly the way that they should. 
  • Evolutionary psychology - the idea of evolution as the idiot designer of humans - that our brains are not consistently well-designed - is a key element of many of the explanations of human errors that appear on this website.
  • Slowness of evolution - The tremendously slow timescale of evolution, especially for creating new complex machinery (as opposed to selecting on existing variance), is why the behavior of evolved organisms is often better interpreted in terms of what did in fact work.
  • Alief - an independent source of emotional reaction which can coexist with a contradictory belief. For example, the fear felt when a monster jumps out of the darkness in a scary movie is based on the alief that the monster is about to attack you, even though you believe that it cannot. 
  • Wanting and liking - The reward system consists of three major components:
    • Liking: The 'hedonic impact' of reward, comprised of (1) neural processes that may or may not be conscious and (2) the conscious experience of pleasure.
    • Wanting: Motivation for reward, comprised of (1) processes of 'incentive salience' that may or may not be conscious and (2) conscious desires.
    • Learning: Associations, representations, and predictions about future rewards, comprised of (1) explicit predictions and (2) implicit knowledge and associative conditioning (e.g. Pavlovian associations).
  • Heuristics and biases - a program in cognitive psychology that tries to work backward from biases (experimentally reproducible human errors) to heuristics (the underlying mechanisms at work in the brain).
  • Cached thought – an answer that was arrived at by recalling a previously-computed conclusion, rather than performing the reasoning from scratch.
  • Sympathetic Magic - humans seem to naturally generate a series of concepts known as sympathetic magic, a host of theories and practices which have certain principles in common, two of which are of overriding importance: the Law of Contagion holds that two things which have interacted, or were once part of a single entity, retain their connection and can exert influence over each other; the Law of Similarity holds that things which are similar or treated the same establish a connection and can affect each other. 
  • Motivated Cognition - an academic/technical term for various mental processes that lead to desired conclusions regardless of the veracity of those conclusions.   
  • Rationalization - Rationalization starts from a conclusion, and then works backward to arrive at arguments apparently favoring that conclusion. Rationalization argues for a side already selected; rationality tries to choose between sides.  
  • Oops - There is a powerful advantage to admitting you have made a large mistake. It's painful. It can also change your whole life.
  • Adaptation executors - Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers. Our taste buds do not find lettuce delicious and cheeseburgers distasteful once we are fed a diet too high in calories and too low in micronutrients. Tastebuds are adapted to an ancestral environment in which calories, not micronutrients, were the limiting factor. Evolution operates on too slow a timescale to re-adapt to new conditions (such as a new diet).
  • Corrupted hardware - our brains do not always allow us to act the way we should. Corrupted hardware refers to those behaviors and thoughts that act for ancestrally relevant purposes rather than for stated moralities and preferences.
  • Debiasing - The process of overcoming bias. It takes serious study to gain meaningful benefits, half-hearted attempts may accomplish nothing, and partial knowledge of bias may do more harm than good. 
  • Costs of rationality - Becoming more epistemically rational can only guarantee one thing: what you believe will include more of the truth. Knowing that truth might help you achieve your goals, or cause you to become a pariah. Be sure that you really want to know the truth before you commit to finding it; otherwise, you may flinch from it.
  • Valley of bad rationality - It has been observed that when someone is just starting to learn rationality, they appear to be worse off than they were before. Others, with more experience at rationality, claim that after you learn more about rationality, you will be better off than you were before you started. The period before this improvement is known as "the valley of bad rationality".
  • Dunning–Kruger effect - is a cognitive bias wherein unskilled individuals suffer from illusory superiority, mistakenly assessing their ability to be much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to recognize their ineptitude. Conversely, highly skilled individuals tend to underestimate their relative competence, erroneously assuming that tasks that are easy for them are also easy for others. 
  • Shut up and multiply - In cases where we can actually do calculations with the relevant quantities, the ability to shut up and multiply, to trust the math even when it feels wrong, is a key rationalist skill.

Posts

Popular Books:

Papers:

  • Haselton, M. (2003). The sexual overperception bias: Evidence of a systematic bias in men from a survey of naturally occurring events. Journal of Research in Personality, 34-47.
  • Haselton, M., & Buss, D. (2000). Error Management Theory: A New Perspective on Biases in Cross-Sex Mind Reading. Journal of Personality and Social Psychology, 81-91. 
  • Murray, S., Griffin, D., & Holmes, J. (1996). The Self-Fulfilling Nature of Positive Illusions in Romantic Relationships: Love Is Not Blind, but Prescient. Journal of Personality and Social Psychology, 1155-1180. 
  • Gilbert, D.T.,  Tafarodi, R.W. and Malone, P.S. (1993) You can't not believe everything you read. Journal of Personality and Social Psychology, 65, 221-233 
 

Notes on decisions I have made while creating this post

 (these notes will not be in the final draft): 

  • This post doesn't have any specific details on debiasing or the biases. I plan to provide these details in later posts. The main point of this post is to convey the idea in the title.

Calling references: Rational or irrational?

7 PhilGoetz 28 August 2015 09:06PM

Over the past couple of decades, I've sent out a few hundred resumes (maybe, I don't know, 300 or 400--my spreadsheet for 2013-2015 lists 145 applications).  Out of those I've gotten at most two dozen interviews and a dozen job offers.

Throughout that time I've maintained a list of references on my resume.  The rest of the resume is, to my mind, not very informative.  The list of job titles and degrees says little about how competent I was.

Now and then, I check with one of my references to see if anyone called them.  I checked again yesterday with the second reference on my list.  The answer was the same:  Nope.  No one has ever, as far as I can recall, called any of my references.  Not the people who interviewed me; not the people who offered me jobs.

When the US government did a background check on me, they asked me for a list of references to contact.  My uncertain recollection is that they ignored it and interviewed my neighbors and other contacts instead, as if what I had given them was a list of people not to bother contacting because they'd only say good things about me.

Is this rational or irrational?  Why does every employer ask for a list of references, then not call them?

New LW Meetup: Rochester NY

3 FrankAdamek 28 August 2015 04:49PM

This summary was posted to LW Main on August 21st. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.

continue reading »

Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun"

9 contravariant 28 August 2015 01:12AM

I recently encountered something that is, in my opinion, one of the most absurd failure modes of the human brain. I first encountered this after introspection on useful things that I enjoy doing, such as programming and writing. I noticed that my enjoyment of the activity doesn't seem to help much when it comes to motivation for earning income. This was not boredom from too much programming, as it did not affect my interest in personal projects. What it seemed to be was the brain categorizing activities into "work" and "fun" boxes. On one memorable occasion, after taking a break due to being exhausted with work, I entertained myself by programming some more, this time on a hobby personal project (as a freelancer, I pick the projects I work on, so this is not from being told what to do). Relaxing by doing the exact same thing that made me exhausted in the first place.

The absurdity of this becomes evident when you think about what distinguishes "work" and "fun" in this case, which is added value. Nothing changes about the activity except the addition of more utility, making a "work" strategy always dominate a "fun" strategy, assuming the activity is the same. If you are having fun doing something, handing you some money can't make you worse off. Yet making the outcome better makes you avoid it. This means the brain is adopting a strategy that has a (side?) effect of minimizing future utility, and it seems like it is utility and not just money here - as anyone who took a class in an area that personally interested them knows, other benefits like grades recreate this effect just as well. This is the reason I think this is among the most absurd biases - I can understand akrasia, wanting the happiness now and hyperbolically discounting what happens later, or biases that make something seem like the best option when it really isn't. But knowingly punishing what brings happiness just because it also benefits you in the future? It's like the discounting curve dips into the negative region. I would really like to learn where the dividing line is between which kinds of added value create this effect and which ones don't (like money obviously does, and immediate enjoyment obviously doesn't). Currently I'm led to believe that the difference is present utility vs. future utility (as I mentioned above), or final vs. instrumental goals, and please correct me if I'm wrong here.
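
For reference, hyperbolic discounting is usually modeled as valuing a reward of size A received after delay D at A / (1 + kD). The small sketch below (with an invented k) shows why the behaviour described above is so odd: however steeply you discount, a positive future reward keeps a positive present value, so the curve should never "dip into the negative region":

```python
def hyperbolic_value(amount, delay, k=1.0):
    # Standard hyperbolic discounting; k (chosen arbitrarily here)
    # controls how steeply future rewards are devalued.
    return amount / (1 + k * delay)

# A $100 reward stays worth *something* at any delay:
for delay in (0, 1, 5, 30):
    print(delay, round(hyperbolic_value(100, delay), 2))
# 0 100.0 / 1 50.0 / 5 16.67 / 30 3.23
```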

This is an effect that has been studied in psychology and called the overjustification effect, called that because the leading theory explains it in terms of the brain assuming the motivation comes from the instrumental gain instead of the direct enjoyment, and then reducing the motivation accordingly. This would suggest that the brain has trouble seeing a goal as being both instrumental and final, and for some reason the instrumental side always wins in a conflict. However, its explanation in terms of self-perception bothers me a little, since I find it hard to believe that a recent creation like self-perception can override something as ancient and low-level as enjoyment of final goals. I searched LessWrong for discussions of the overjustification effect, and the ones I found discussed it in the context of self-perception, not decision-making and motivation. It is the latter that I wanted to ask for your thoughts on.

 

Rationality Reading Group: Part H: Against Doublethink

7 Gram_Stone 27 August 2015 01:22AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part H: Against Doublethink (pp. 343-361). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

H. Against Doublethink

81. Singlethink - The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books. Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking.

82. Doublethink (Choosing to be Biased) - George Orwell wrote about what he called "doublethink", where a person was able to hold two contradictory thoughts in their mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems.

83. No, Really, I've Deceived Myself - Some people who have fallen into self-deception haven't actually deceived themselves. Some of them simply believe that they have deceived themselves, but have not actually done this.

84. Belief in Self-Deception - Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.

85. Moore's Paradox - People often mistake reasons for endorsing a proposition for reasons to believe that proposition.

86. Don't Believe You'll Self-Deceive - It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part I: Seeing with Fresh Eyes (pp. 365-406). The discussion will go live on Wednesday, 9 September 2015, right here on the discussion forum of LessWrong.

Personal story about benefits of Rationality Dojo and shutting up and multiplying

7 Gleb_Tsipursky 26 August 2015 04:38PM

My wife and I have been going to Ohio Rationality Dojo, started by Raelifin, who has substantial expertise in probabilistic thinking and Bayesian reasoning, for a few months now, and I wanted to share how the dojo helped us make a rational decision about house shopping. We were comparing two houses. We had an intuitive favorite house (170 on the image) but decided to compare it to our second favorite (450) by actually shutting up and multiplying, based on exercises we did as part of the dojo.

What we did was compare each part of the house mathematically, multiplying the value of that part of the house by how much we would use it, with separate values for the two of us (A for my wife, Agnes Vishnevkin, and G for me, Gleb Tsipursky, on the image). By comparing them mathematically, 450 came out way ahead. It was hard to update our beliefs, but we did it, and we are now orienting toward that one as our primary choice. Rationality for the win!

Here is the image of our back-of-the-napkin calculations.
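
The procedure amounts to a weighted sum: score each part of each house as (value of that part) × (how much it would be used), per person, and add everything up. A hypothetical sketch, since the real numbers exist only on the napkin image; all room names and scores below are invented:

```python
def house_score(parts):
    # parts maps a room to (value, use); total utility = sum of value * use.
    return sum(value * use for value, use in parts.values())

house_450 = {"kitchen": (5, 4), "office": (4, 5), "yard": (3, 2)}
house_170 = {"kitchen": (4, 4), "office": (3, 3), "yard": (5, 2)}
print(house_score(house_450), house_score(house_170))  # 46 35
```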

 

Sensation & Perception

1 ScottL 26 August 2015 01:13PM

(The below notes are pretty much my attempt to summarise the content in this sample chapter from this book. I am posting this in discussion because I don’t think I will get the time/be bothered enough to improve upon this, so I am posting it now and hope someone finds it interesting or useful. If you do find it interesting check out the full chapter, which goes into more detail)

We don’t experience the world directly; instead we experience it through a series of “filters” that we call our senses. We know that this is true because of cases of sensory loss. An example is Jonathan I., a 65-year-old New York painter who, following an automobile accident, suffered from cerebral achromatopsia as well as the loss of the ability to remember and to imagine colours. He would look at a tomato and, instead of seeing colours like red or green, would see only black and shades of grey. The problem was not that Jonathan's eyes no longer worked; it was that his brain was unable to process the neural messages for colour.

To understand why Jonathan cannot see colour, we first have to realise that incoming light travels only as far as the back of the eyes. There the information it contains is converted into neural messages in a process called transduction. We call these neural messages "sensations". These sensations only involve neural representations of stimuli, not the actual stimuli themselves. Sensations such as “red” or “sweet” or “cold” can be said to have been made by the brain. They also only occur when the neural signal reaches the cerebral cortex; they do not occur when you first interact with the stimuli. To us, the process seems so immediate and direct that we are often fooled into thinking that the sensation of "red" is a characteristic of the tomato or that the sensation of “cold” is a characteristic of ice cream. But they are not. What we sense is an electrochemical rendition of the world created by our brain and sensory receptors.

There is another separation between reality as it is and how we sense it. Organisms can only sense some types of stimulus within certain ranges. This is called the absolute threshold for different types of stimulation, and it is the minimum amount of physical energy needed to produce a sensory experience. It should be noted that a faint stimulus does not abruptly become detectable as its intensity increases. There is instead a fuzzy boundary between detection and non-detection, which means that a person’s absolute threshold is in fact not absolute at all. Instead, it varies continually with our mental alertness and physical condition.
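
One common way to formalize that fuzziness (my illustration, not the chapter's) is a psychometric function: the probability of detecting a stimulus rises smoothly with its intensity instead of switching from 0 to 1 at a hard threshold. A logistic curve is a typical choice; the parameters below are illustrative only:

```python
import math

def p_detect(intensity, threshold=5.0, slope=1.2):
    # Detection probability rises smoothly around the nominal threshold;
    # the logistic form and parameter values are illustrative assumptions.
    return 1 / (1 + math.exp(-slope * (intensity - threshold)))

for i in range(11):
    print(i, round(p_detect(i), 2))  # climbs gradually from ~0 toward ~1
```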

To understand why the thresholds vary, we can turn to signal detection theory. According to signal detection theory, sensation depends on the characteristics of the stimulus, the background stimulation, and the detector (the brain). Background stimulation makes it less likely, for example, for you to hear someone calling your name on a busy downtown street than in a quiet park. Your ability to hear them also depends on the condition of your brain, i.e. the detector, and, perhaps, whether it has been aroused by a strong cup of coffee or dulled by drugs or lack of sleep.
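As a toy illustration of the detector side of this, here is a minimal sketch computing the standard sensitivity index d' from hit and false-alarm rates (the rates are invented for illustration):

# Signal detection theory's d' (d-prime): how well a detector separates
# signal from noise, i.e. the difference between the z-transformed hit
# rate and false-alarm rate.
from scipy.stats import norm

hit_rate = 0.85          # P(report "signal" | signal present) - made up
false_alarm_rate = 0.20  # P(report "signal" | noise only)     - made up

d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
print(round(d_prime, 2))  # ~1.88; a dulled or distracted detector scores lower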

The thresholds also change as similar stimuli continue. This is called sensory adaptation, and it refers to the diminishing responsiveness of sensory systems to prolonged stimulation. An example of this would be when you adapt to the feeling of swimming in cold water. Unchanging stimulation generally shifts to the back of people's awareness, whereas intense or changing stimulation will immediately draw your attention.

So far, we have talked about how the sensory organs filter incoming stimuli and how they can only pick up certain types of stimuli. But there is also something more. We don't just sense the world; we perceive it as well. The brain, in a process called perception, combines sensations with memories, motives and emotions to create a representation of the world that fits our current concerns and interests. In essence, we impose our own meanings on sensory experience. People obviously have different memories, motives and current emotional states, and this means that we attach different meanings to our sensations, i.e. we have perceptual differences. Two people can look at the same political party or religion and come to starkly different conclusions about them.

The original post includes a picture summarising the whole process discussed so far (stimulation to perception).

From stimulation to perception, there are a great number of chances for errors to creep in and for you to either misperceive or not perceive some types of stimuli at all. These errors are often exacerbated by mistakes made by the brain. The brain, while brilliant and complex, is not perfect. The mistakes it can make include perceptual phenomena such as illusions, constancies, change blindness, and inattentional blindness. Illusions, for example, are when your mind deceives you by interpreting a stimulus pattern incorrectly. It is troubling that, despite all we know about sensation and perception, many people still uncritically accept the evidence of their senses and perceptions at face value.

Another important aspect of perception is that the different types of sensory stimuli, e.g. hearing and vision, need to be integrated. This process of sensory integration can be another source of perceptual phenomena. An example is the McGurk effect, in which the auditory component of one sound is paired with the visual component of another sound. This leads to an illusion: the perception of a third sound which is not actually spoken. You have to really see (or hear) this in action to understand it, so take a look at this short video which demonstrates the effect.

That was a quick summary of perception. But an important question still needs to be asked. Is sensory perception, and how its input gets organised in our minds, the sole basis of our internal representations of the world, or is there something else that might mitigate any creeping errors from perception? This question was asked by many philosophers. Kant, in particular, drew a distinction between a priori concepts (things that we know before any experience) and a posteriori concepts (things that we know only from experience). He pointed out that there are some things that we can't know from experience and must instead be born with. The work of Konrad Lorenz, though, pointed out that Kant's a priori concepts were really evolutionary a posteriori concepts. That is, we didn't learn them, but our ancestors did. We might believe X despite not having seen it with our own eyes, but this is only because our ancestors who believed X survived. If we couldn't navigate the world because our internal representations of it were too distant from how the world actually is, then we would have been less likely to survive and reproduce. What this means is that we can have a priori concepts, i.e. innate knowledge, but this innate knowledge is itself based on sensory perceptions of the world, just not yours. The types of a priori knowledge can be differentiated into the naturalistic a priori and the inference-from-premises a priori.

Is semiotics bullshit?

13 PhilGoetz 25 August 2015 02:09PM

I spent an hour recently talking with a semiotics professor who was trying to explain semiotics to me.  He was very patient, and so was I, and at the end of an hour I concluded that semiotics is like Indian chakra-based medicine:  a set of heuristic practices that work well in a lot of situations, justified by complete bullshit.

I learned that semioticians, or at least this semiotician:

  • believe that what they are doing is not philosophy, but a superset of mathematics and logic
  • use an ontology, vocabulary, and arguments taken from medieval scholastics, including Scotus
  • oppose the use of operational definitions
  • believe in the reality of something like Platonic essences
  • look down on logic, rationality, reductionism, the Enlightenment, and eliminative materialism.  He said that semiotics includes logic as a special, degenerate case, and that semiotics includes extra-logical, extra-computational reasoning.
  • seems to believe people have an extra-computational ability to make correct judgements at better-than-random probability that have no logical basis
  • claims materialism and reason each explain only a minority of the things they are supposed to explain
  • claims to have a complete, exhaustive, final theory of how thinking and reasoning works, and of the categories of reality.

When I've read short, simple introductions to semiotics, they didn't say this.  They didn't say anything I could understand that wasn't trivial.  I still haven't found one meaningful claim made by semioticians, or one use for semiotics.  I don't need to read a 300-page tome to understand that the 'C' on a cold-water faucet signifies cold water.  The only example he gave me of its use is in constructing more-persuasive advertisements.

(Now I want to see an episode of Mad Men where they hire a semiotician to sell cigarettes.)

Are there multiple "sciences" all using the name "semiotics"?  Does semiotics make any falsifiable claims?  Does it make any claims whose meanings can be uniquely determined and that were not claimed before semiotics?

His notion of "essence" is not the same as Plato's; tokens rather than types have essences, but they are distinct from their physical instantiation.  So it's a tripartite Platonism.  Semioticians take this division of reality into the physical instantiation, the objective type, and the subjective token, and argue that there are only 10 possible combinations of these things, which therefore provide a complete enumeration of the possible categories of concepts.  There was more to it than that, but I didn't follow all the distinctions. He had several different ways of saying "token, type, unbound variable", and seemed to think they were all different.

Really it all seemed like taking logic back to the middle ages.

Yudkowsky's brain is the pinnacle of evolution

-26 Yudkowsky_is_awesome 24 August 2015 08:56PM

Here's a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?

The answer:

Imagine two ant philosophers talking to each other. “Imagine," they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle."

Humans are such a being. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I can support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants do.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he's the one who happens to be making the effort, but because he's the only one who can make the effort.

The world was on its way to doom until September 11, 1979, a day which will later be declared a national holiday and which will replace Christmas as the biggest holiday. This was, of course, the day when the most important being that has ever existed or will exist was born.

Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk on the scale of what Yudkowsky has done in the 2000s. The same can be said about the field of ethics: ethics was an open problem in philosophy for thousands of years, yet Plato, Aristotle, and Kant don't really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone before. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it's unmeasurable, so basically something higher than 200. After Yudkowsky gets the Nobel Prize in Literature after getting recognition from a Hugo Award, a special council will be organized to study the intellect of Yudkowsky, and we will finally know by how many orders of magnitude Yudkowsky's IQ exceeds that of the most intelligent people in history.

Unless Yudkowsky's brain FOOMs first, MIRI will eventually build an FAI with the help of Yudkowsky's extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky's brain. In fact, in the process of computing this CEV, even Yudkowsky's harshest critics will reach such an understanding of Yudkowsky's extraordinary nature that they will beg and cry for the tiling to start as soon as possible, and there will be mass suicides because people will want to give away the resources and atoms of their bodies for Yudkowsky's brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events, but even he, with his vast intellect, will understand and accept that it's truly the best thing to do.

Why people want to die

45 PhilGoetz 24 August 2015 08:13PM

Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty. The futurist tells them that they think that way now, but that they'll change their minds when they're older.

The thing is, I don't see that happening.  I live in a small town full of retirees, and those few I've asked about it are waiting for death peacefully.  When I ask them about their ambitions, or things they still want to accomplish, they have none.

Suppose that people mean what they say.  Why do they want to die?

continue reading »

Manhood of Humanity

9 Viliam 24 August 2015 06:31PM

(This is my re-telling of Korzybski's Manhood of Humanity. First part here.)

continue reading »

The virtual AI within its virtual world

3 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

In a previous post, I talked about an AI operating only in a virtual world (ideas like this used to be popular, until it was realised that the AI might still want to take control of the real world to affect the virtual world; however, with methods like indifference, we can guard against this much better).

I mentioned that the more of the AI's algorithm that existed in the virtual world, the better. But why not go the whole way? Some people at MIRI and other places are working on agents modelling themselves within the real world. Why not have the AI model itself as an agent inside the virtual world? We can use quining to do this, for example.
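(For readers unfamiliar with the term: a quine is a program that contains, and can reproduce, its own source code. A minimal Python example, just to show the trick is possible:)

# A classic two-line quine: the template string is formatted with its
# own repr, so these two lines print exactly themselves.
s = 's = %r\nprint(s %% s)'
print(s % s)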

Then all the restrictions on the AI - memory capacity, speed, available options - can be specified precisely, within the algorithm itself. It will only have the resources of the virtual world to achieve its goals, and this will be specified within it. We could define a "break" in the virtual world (i.e. any outside interference that the AI could cause, were it to hack us to affect its virtual world) as something that would penalise the AI's achievements, or simply as something impossible according to its model or beliefs. It would really be a case of "given these clear restrictions, find the best approach you can to achieve these goals in this specific world".

It would be ideal if the AI's motives were given not in terms of achieving anything in the virtual world, but in terms of making the decisions that, subject to the given restrictions, would be most likely to achieve something if the virtual world were run in its entirety. That way the AI wouldn't care if the virtual world were shut down or anything similar. It should only seek to self-modify in ways that make sense within the world, and understand itself as existing completely within these limitations.

Of course, this would ideally require flawless implementation of the code; we don't want bugs developing in the virtual world that point to real-world effects (unless we're really confident we have properly coded the "care only about what would happen in the virtual world, not what actually does happen" motivation).

Any thoughts on this idea?

 

AI, cure this fake person's fake cancer!

9 Stuart_Armstrong 24 August 2015 04:42PM

A putative new idea for AI control; index here.

An idea for how we might successfully get useful work out of a powerful AI.

 

The ultimate box

Assume that we have an extremely detailed model of a sealed room, with a human in it and enough food, drink, air, entertainment, energy, etc. for the human to survive for a month. We have some medical equipment in the room - maybe a programmable set of surgical tools, some equipment for mixing chemicals, a loudspeaker for communication, and anything else we think might be necessary. All these objects are specified within the model.

We also have some defined input channels into this abstract room, and output channels from this room.

The AI's preferences will be defined entirely with respect to what happens in this abstract room. In a sense, this is the ultimate AI box: instead of taking a physical box and attempting to cut it out from the rest of the universe via hardware or motivational restrictions, we define an abstract box where there is no "rest of the universe" at all.

 

Cure cancer! Now! And again!

What can we do with such a setup? Well, one thing we could do is to define the human in such a way that they have some form of advanced cancer. We define what "alive and not having cancer" counts as, as well as we can (the definition need not be fully rigorous). Then the AI is motivated to output some series of commands to the abstract room that results in the abstract human inside not having cancer. And, as a secondary part of its goal, it outputs the results of its process.

continue reading »

Open Thread - Aug 24 - Aug 30

5 Elo 24 August 2015 08:14AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

List of common human goals

8 Elo 24 August 2015 07:58AM
List of common goal areas:
This list is meant to map out the area of goal-space.  It is non-exhaustive, and the descriptions are including-but-not-limited-to hints to help you understand where in idea-space these goals land.  When constructing this list I tried to imagine a large Venn diagram where the areas sometimes overlap.  The areas mentioned are ones that have an exclusive part to them; i.e. where knowledge overlaps with self-awareness, there are parts of each that do not overlap, so both are mentioned.  If you prefer a more "focussing" or feeling-based description: imagine each of these goals is a hammer, designed with a specific weight to hit a certain note on a xylophone.  Often one hammer can produce the note meant for its key and for several other keys as well.  But sometimes it can't quite make them sound perfect.  What is needed is the right hammer to hit the right note and make the right sound.  Each of these "hammers" has some note that cannot be produced through the use of any other hammer.

This list has several purposes:

  1. For someone with some completed goals who is looking to move forward to new horizons: to help them consider which common goal-pursuits they have not explored, and whether they want to strive for something in one of these directions.
  2. For someone without clear goals who is looking to create them and does not know where to start.
  3. For someone with too many specific goals who is looking to consider the essences of those goals and what they are really striving for.
  4. For someone who doesn't really understand goals or why we go after them to get a better feel for "what" potential goals could be.

What to do with this list?

0. Agree to invest 30 minutes of effort into a goal confirmation exercise as follows.
  1. Go through this list (copy-paste it into your own document) and cross out the things you probably don't care about.  Some of these have overlapping solutions: projects you can do that fulfil multiple goal-space concepts. (5mins)
  2. For the remaining goals, rank them either "1 to n", in "tiers" of high to low priority, or generally order them in some way that is coherent to you.  (For serious quantification, consider giving them points - i.e. 100 points for achieving a self-awareness and understanding goal, while a pleasure/creativity goal might be worth only 20 points in comparison; see the sketch after this list.) (10mins)
  3. Make a list of your ongoing projects (5-10mins), and check whether they actually match up to your most preferred goals (or your number ranking) (5-10mins).  If not, make sure you have a really, really good excuse for yourself.
  4. Consider how you might do things differently, reprioritising your current plans to fit more in line with your goals. (10-20mins)
  5. Repeat this task at an appropriate interval (6-monthly, monthly, when your goals significantly change, when your life significantly changes, when major projects end).
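As promised in step 2, here is a minimal Python sketch of the point-scoring variant (the goal names and point values are placeholders, not recommendations):

# Rank goal areas by the points you assigned them, highest first.
goals = {
    "self-awareness/understanding": 100,
    "health + mental": 80,
    "social": 50,
    "pleasure/recreation": 20,
}
for goal, points in sorted(goals.items(), key=lambda kv: kv[1], reverse=True):
    print(points, goal)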

Why have goals?

Your goals could change in life; you could explore one area and realise you actually love another area more.  It's important to explore and keep confirming that you are still winning your own personal race to where you want to be going.
It's easy to insist that goals serve only to disappoint or burden a person.  These are entirely valid fears for someone who does not yet have goals.  Goals are not set in stone, but they don't like to be modified either.  I like to think of goals as doing this:
(Image source: viral internet pictures.) Pictures from the Internet aside, the best reason I have ever found for picking goals is to do exactly this: make choices that a reasonable future you will be motivated to stick to, outsourcing the planning and thinking about goal/purpose/direction to your past self.  Naturally, you could feel like making goals is piling on the bricks (though there is a way to make goals that does not leave them piling on like bricks); instead, think of it as rescuing future you from a day spent completely lost, wondering what you were doing, or a day spent questioning whether "this" is something that is getting you closer to what you want to be doing in life.

Below here is the list.  Good luck.


personal:

Spirituality - religion, connection to a god, meditation, the practice of gratitude or appreciation of the universe, buddhism, feeling of  a greater purpose in life.
knowledge/skill + ability - learning for fun - just to know, advanced education, becoming an expert in a field, being able to think clearly, being able to perform a certain skill (physical skill), ability to do anything from run very far and fast to hold your breath for a minute, Finding ways to get into flow or the zone, be more rational.
self-awareness/understanding - to be at a place of understanding one's place in the world, or to have an understanding of who you are; practising thinking from the eclectic perspectives of various other people and how that affects your understanding of the world.
health + mental - happiness (mindset) - Do you even lift? http://thefutureprimaeval.net/why-we-even-lift/, are you fit, healthy, eating right, are you in pain, is your mind in a good place, do you have a positive internal voice, do you have bad dreams, do you feel confident, do you feel like you get enough time to yourself?
Live forever - do you want to live forever - do you want to work towards ensuring that this happens?
art/creativity - generating creative works, in any field - writing, painting, sculpting, music, performance.
pleasure/recreation - are you enjoying yourself, are you relaxing, are you doing things for you.
experience/diversity - Have you seen the world?  Have you explored your own city?  Have you met new people, are you getting out of your normal environment?
freedom - are you tied down?  Are you trapped in your situation?  Are your burdens stacked up?
romance - are you engaged in romance?  could you be?
Being first - You did something before anyone; you broke a record, It’s not because you want your name on the plaque - just the chance to do it first.  You got that.
Create something new - invent something; be on the cutting edge of your field; just see a discovery for the first time.  Where the new-ness makes creating something new not quite the same as being first or being creative.

personal-world:

legacy - are you leaving something behind?  Do you have a name? Will people look back and say; I wish I was that guy!
fame/renown - Are you "the guy"?  Do you want people to know your name when you walk down the street?  Are there gossip magazines talking about you; do people want to know what you are working on in the hope of stealing some of your fame?  Is that what you want?
leadership, and military/conquer - are you climbing to the top?  Do you need to be in control?  Is that going to make the best outcomes for you?  Do you wish to destroy your enemies?  As a leader, do you want people following you, doing as you do, revering you?  And power - in the complex "in control" and "flick the switch" senses that overlap with other goal-space areas.  Of course there are many forms of power; but if it's something that you want, you can find fulfilment through obtaining it.
Being part of something greater - The opportunity to be a piece of a bigger puzzle, are you bringing about change; do we have you to thank for being part of bringing the future closer; are you making a difference.
Social - are you spending time socially? No man is an island, do you have regular social opportunities, do you have exploratory social opportunities to meet new people.  Do you have an established social network?  Do you have intimacy?
Family - do you have a family of your own?  Do you want one?  Are there steps that you can take to put yourself closer to there?  Do you have a pet? Having your own offspring? Do you have intimacy?
Money/wealth - Do you have money; possessions and wealth?  Does your money earn you more money without any further effort (i.e. owning a business, earning interest on your $$, investing)
performance - Do you want to be a public performer, get on stage and entertain people?  Is that something you want to be able to do?  Or do on a regular basis?
responsibility - Do you want responsibility?  Do you want to be the one who can make the big decisions?
Achieve, Awards - Do you like gold medallions?  Do you like to strive towards an award?
influence - Do you want to be able to influence people, change hearts and minds.
Conformity - The desire to blend in; or be normal.  Just to live life as is; without being uncomfortable.
Be treated fairly - are you getting the raw end of the stick?  Are there ways that you don't have to keep being the bad guy around here?
keep up with the Joneses - you have money/wealth already, but there is also the goal of appearing like you have money/wealth.  Being the guy that other people keep up with.
Validation/acknowledgement - Positive Feedback on emotions/feeling understood/feeling that one is good and one matters

world:

improve the lives of others (helping people) - in the charity sense of raising the lowest common denominator directly.
Charity + improve the world - indirectly: putting money towards a cause; lobbying the government to change the systems to improve people's lives.
winning for your team/tribe/value set - doing actions but on behalf of your team, not yourself. (where they can be one and the same)
Desired world-states - make the world into a desired alternative state.  Don't like how it is; are you driven to make it into something better?

other (and negative stimuli):

addiction (fulfil addiction) - addiction feels good from the inside and can be a motivating factor for doing something.
Virtual reality success - own all the currency/coin and all the cookie clickers, grow all the levels and get all the experience points!
Revenge - Get retribution; take back what you should have rightfully had, show the world who’s boss.
Negative - avoid (i.e. pain, loneliness, debt, failure, embarrassment, jail) - where you can be motivated to avoid pain - to keep safe, or avoid something, or “get your act together”.
Negative - stagnation (avoid stagnation) - Stop standing still.  Stop sitting on your ass and DO something.


Words:

This list, being written in words, will not mean the same thing to every reader, which is why I tried to include several categories that almost overlap with each other.  Some notable overlaps are Legacy/Fame, Being first/Achievement, and Being first/Skill and ability.  But of course there are several more.  I really did try to keep the categories open and plural, not simplified.  My analogy of hammers and notes should be kept in mind when trying to improve this list.

I welcome all suggestions and improvements to this list.
I welcome all feedback to improve the do-at-home task.
I welcome all life-changing realisations as feedback from examining this list.
I welcome the opportunity to be told how wrong I am :D

Meta-information

This document in total has been 7-10 hours of writing over about two weeks.
I have had it reviewed by a handful of people and lesswrongers before posting.  (I kept realising that someone I was talking to might get value out of it)
I wrote this because I felt like it was the least-bad way that I could think of to:
  • gather these ideas in the one place, and
  • share these ideas and this way of thinking about them with you.

Please fill out the survey on whether this was helpful.

Edit: also included (not in the comments): desired world-states, and live forever.

The Sleeping Beauty problem and transformation invariances

1 aspera 23 August 2015 08:57PM

I recently read this blog post by Allen Downey in response to a reddit post in response to Julia Galef's video about the Sleeping Beauty problem. Downey's resolution boils down to a conjecture that optimal bets on lotteries should be based on one's expected state of prior information just before the bet's resolution, as opposed to one's state of prior information at the time the bet is made.

I suspect that these two distributions are always identical. In fact, I think I remember reading in one of Jaynes' papers about a requirement that any prior be invariant under the acquisition of new information. That is to say, the prior should be the weighted average of possible posteriors, where the weights are the likelihood that each posterior would be achieved after some measurement. But now I can't find this reference anywhere, and I'm starting to doubt that I understood it correctly when I read it.
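Written out, the property I have in mind would be the following: if the possible measurement outcomes $E_1, \dots, E_n$ form a partition, then the prior equals the likelihood-weighted average of the possible posteriors,

$$P(H) = \sum_{i=1}^{n} P(E_i)\, P(H \mid E_i).$$

(This identity is just the law of total probability, so perhaps the requirement I am half-remembering is something stronger; I am only assuming this is what it amounts to.)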

So I have two questions:

1) Is there such a thing as this invariance requirement? Does anyone have a reference? It seems intuitive that the prior should be equivalent to the weighted average of posteriors, since it must contain all of our prior knowledge about a system. What is this property actually called?

2) If it exists, is it a corollary that our prior distribution must remain unchanged unless we acquire new information?

Magic and the halting problem

-5 kingmaker 23 August 2015 07:34PM

It is clear that the Harry Potter book series is fairly popular on this site, e.g. the fanfiction. This fanfiction approaches the existence of magic objectively and rationally. I would suggest, however, that most if not all of the people on this site would agree that magic, as presented in Harry Potter, is merely fantasy. Our understanding of the laws of physics and our rationality forbid anything so absurd as magic; it is regarded by most rational people as superstition.


This position can be strengthened by grabbing a stick, pointing it at some object, chanting "wingardium leviosa" and waiting for the object to rise magically. When (or if) this fails to work, a proponent of magic may resort to special pleading and claim that, as we didn't believe it would work, it could not work, or that we need a special wand, or that we are a squib or muggle. The proponent can perpetually move the goalposts, since their idea of magic is unfalsifiable. But as it is unfalsifiable, it is rejected, in the same way that most of us on this site do not believe in any god(s). If magic were found to explain certain phenomena scientifically, however, then I, and I hope everyone else, would come to believe in it, or at least shut up and calculate.


I personally subscribe to the Many Worlds Interpretation of quantum mechanics, so I effectively "believe" in the multiverse. That means it is possible that somewhere in the universal wavefunction, there is an Everett branch in which magic is real. Or at least one in which, every time someone chants an incantation, by total coincidence the desired effect occurs. But how would the denizens of this universe be able to know that magic is not real, and that everything they had seen was sheer coincidence? Alan Turing pondered a related problem known as the halting problem, which asks whether a general algorithm can decide, for an arbitrary program, whether that program will finish or run forever. He proved that no such general algorithm exists, although for some specific programs the answer is obvious; e.g. this code segment will loop forever:

 

while (true) {
    // do nothing
}
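(For contrast, the standard argument for why no fully general halts-checker can exist can be sketched in Python; the halts function below is hypothetical, and the point of the argument is that no correct implementation of it is possible:)

# Suppose, for contradiction, that a perfect halting oracle existed.
def halts(program):
    """Hypothetically returns True iff program() would eventually halt."""
    raise NotImplementedError  # stand-in; no correct version can exist

def paradox():
    # Do the opposite of whatever the oracle predicts about this function:
    if halts(paradox):
        while True:
            pass  # predicted to halt, so loop forever
    # predicted to loop forever, so halt immediately

# Either answer halts(paradox) gives is wrong, which is the contradiction.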

 

So how would a person distinguish between pseudo-magic that will inevitably fail, and real magic that is the true laws of physics? The only way to be certain that magic doesn't exist in this Everett Branch would be for incantations to fail repeatedly and testably, but this may happen far into the future, long after all humans are deceased. This line of thinking leads me to wonder, do our laws of physics seem as absurd to these inhabitants as their magic seems to us? How do we know that we have the right understanding of reality, as opposed to being deceived by coincidence? If every human in this magical branch is deceived the same way, does this become their true reality? And finally, what if our entire understanding of reality, including logic, is mere deception by happenstance, and everything we think we know is false?

 

Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally

7 ScottL 23 August 2015 08:01AM

A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes. Rationality only guarantees that the agent will, to the utmost of their abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes, because most agents are not omniscient or omnipotent; they are instead fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. Therefore, a rational agent will be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will most of the time achieve better outcomes than an irrational agent.

Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.

Normative rationality describes the laws of thought and action. That is, how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality basically describes what is meant by the phrase "optimal reasoning". Of course, for limited agents true optimal reasoning is impossible and they must instead settle for bounded optimal reasoning, which is the closest approximation to optimal reasoning that is possible given the information available to the agent and the computational abilities of the agent. The laws of thought and action (what we currently believe optimal reasoning involves) are:

  • Logic  - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.
  • Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes' Theorem, which tells you exactly how your probability for a statement should change as you encounter new information. Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the frequentist perspective, which sees probability as the proportion of times the event would occur in a long run of repeated experiments. Less Wrong follows the Bayesian perspective.
  • Decision theory - is about choosing actions based on the utility function of the possible outcomes. The utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action's possible outcomes weighted by the probability that each outcome occurs (a toy calculation follows this list). Decision theory can be divided into three parts:
    • Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.
    • Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.
    • Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
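As flagged above, here is a minimal expected-utility sketch in Python; the actions, probabilities, and utilities are invented for illustration:

# Expected utility of an action: the probability-weighted average of the
# utilities of its possible outcomes. Pick the action that maximises it.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "take umbrella":  [(0.3, 5), (0.7, 8)],    # (P(rain), utility), (P(dry), utility)
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "take umbrella": expected utility 7.1 vs 4.0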

Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice, or approximate the normative rationality model as best we can. We engage in what's called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, 'rationality' in this compendium will refer to rationality in the bounded sense of the word. In this sense, it means that the most rational choice for an agent depends on the agent's capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one. It is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error-prone.

Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. It is described by Baron in Thinking and Deciding (pg. 34):

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

The set of behaviours and thoughts that we consider rational for limited agents is much larger than the set for perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational, as it often leads to situations in which agents improve their internal representations or models of the world. We also consider wise resource allocation to be rational, because limited agents only have a limited amount of resources available to them. If they can get a greater return on investment from the resources that they do use, then they will be more likely to get closer to thinking optimally in a greater number of domains.

We also consider the rationality of particular choices to be something that is in a state of flux. This is because the rationality of choices depends on the information that an agent has access to, and this is something which is frequently changing. This hopefully highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. In short, this is a problem for the suboptimal (irrational) agent, as it means that its rational choices are going to differ more from those of the perfect normative agent than a rational agent's would. The closer an agent's rational choices are to those of a perfect normative agent, the more rational that agent is.

It can also be said that the rationality of an agent depends in large part on the agent's truth-seeking abilities. The more accurate and up to date the agent's view of the world, the closer its rational choices will be to those of the perfect normative agent. It is because of this that a rational agent is one that is inextricably tied to the world as it is. It does not see the world as it wishes, fears or remembers it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt. If the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will change its preferred choice to the one that is now the most rational.

The other important part of rationality, besides truth seeking, is maximising the ability to actually achieve important goals. These two parts or domains of rationality, truth seeking and goal reaching, are referred to as epistemic and instrumental rationality.

  • Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.
  • Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, it is known as maximizing "expected utility". It is important to note that it is about more than just reaching goals; it is also about discovering how to develop optimal goals.

As you move further and further away from rationality you introduce more and more flaws, inefficiencies and problems into your decision making and information gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas which is why, in large part, improving our rationality is just about mitigating, as much as possible, the influence of our biases and irrational propensities.

If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character treats rationality not as if it were about optimality, but instead as if it meant that:

  • You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people's behaviors and thoughts.
  • You should never make a decision until you have all the information. This is irrational because humans are not omniscient or omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.
  • You should never rely on intuition. This is irrational because intuition (system 1 thinking) does have many advantages over conscious and effortful deliberation (system 2 thinking), mainly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and these interactions had short feedback cycles, then it is often irrational not to rely on your intuitions.
  • You should not become emotional. This is irrational because, while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not follow that we should try to eradicate emotions in ourselves. Emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points in regards to emotions:
    • The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened. It is irrational to feel fear in situations where you are not being threatened. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.
    • Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can't. In this particular area people tend to become a lot less rational as they age. As adults, we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder how it is that we have become so shackled by our own self-restraint.
    • Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as doing so would limit or distort the information that they have access to. It is possible that a rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.
    • By ignoring, avoiding and repressing emotions you are limiting the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information, e.g. body language, vocal inflections, that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.   
  • You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information that they consider is because of resource or time limitations.

Related Materials

Wikis:

  • Rationality - the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality, and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.
  • Maths/Logic - Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.   
  • Probability theory - a field of mathematics which studies random variables and processes. 
  • Bayes theorem - a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate.
  • Bayesian - Bayesian probability theory is the math of epistemic rationality, Bayesian decision theory is the math of instrumental rationality.
  • Bayesian probability - represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10. 
  • Bayesian Decision theory - Bayesian decision theory refers to a decision theory which is informed by Bayesian probability 
  • Decision theory – is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals. 
  • Hollywood rationality - What Spock does, not what actual rationalists do.

Posts:

Suggested posts to write:

  • Bounded/ecological/grounded Rationality - I couldn't find a suitable resource for this on less wrong.  

Academic Books:

Popular Books:

Talks:

Notes on decisions I have made while creating this post

 (these notes will not be in the final draft): 

  • I agree denotationally, but object connotatively, to 'rationality is systemized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended into everything.  I also believe that I have basically covered the idea with: "Rationality maximizes expected performance, while perfection maximizes actual performance."
  • I left out the 12 virtues of rationality because I don’t like perfectionism. If it was not in the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and developing tendencies for suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process. If it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.
  • I couldn't find an appropriate link for bounded/ecological/grounded rationality. 

Rationality Compendium

11 ScottL 23 August 2015 08:00AM

I want to create a rationality compendium (a collection of concise but detailed information about a particular subject) and I want to know whether you think this would be a good idea. The rationality compendium would essentially be a series of posts that will eventually serve three purposes: a guide that Less Wrong newbies can use to discover which resources to look into further, a refresher of the main concepts for Less Wrong veterans, and a guideline or best-practices document that explains techniques for applying the core Less Wrong/rationality concepts. These techniques should preferably have been verified to be useful in some way. Perhaps there will be some training-specific posts in which we can track whether people are actually finding the techniques useful.

I only want to write this because I am lazy. In this context, I mean lazy as it is described by Larry Wall:

Laziness: The quality that makes you go to great effort to reduce overall energy expenditure.

I think that a rationality compendium would not only prove that I have correctly understood the available rationality material, but it would also ensure that I am actually making use of this knowledge. That is, applying the rationality materials that I have learnt in ways that allow me to improve my life.

If you think that a rationality compendium is not needed or would not be overly helpful, then please let me know. I also want to point out that I do not think that I am necessarily the best person to do this and that I am only doing it because I don’t see it being done by others.

For the rationality compendium, I plan to write a series of posts which should, as much as possible, be:

  • Using standard terms: less wrong specific terms might be linked to in the related materials section, but common or standard terminology will be used wherever possible.
  • Concise: the posts should just contain quick overviews of the established rationality concepts. They shouldn’t be introducing “new” ideas. The one exception to this is if a new idea allows multiple rationality concepts to be combined and explained together. If existing ideas require refinement, then this should happen in a seperate post which the rationality compendium may provide a link to if the post is deemed to be high quality.
  • Comprehensive: links to all related posts, wikis or other resources should be provided in a related materials section, so that readers can dive deeper on materials that pique their interest while the posts themselves stay concise. The aim of the rationality compendium is to create a resource that is a condensed and distilled version of the available rationality materials. This means that it is not meant to be light reading, as a large number of concepts will be presented in each post.
  • Collaborative: the posts should go through many series of edits based on the feedback in the comments. I don't think that I will be able to create perfect first posts, but I am willing to expend some effort to iteratively improve the posts until they reach a suitable standard. I hope that enough people will be interested in a rationality compendium so that I can gain enough feedback to improve the posts. I plan for the posts to stay in discussion for a long time and will possibly rerun posts if it is required. I welcome all kinds of feedback, positive or negative, but request that you provide information that I can use to improve the posts.
  • Be related only to rationality: For example, concepts from AI or quantum mechanics won’t be mentioned unless they are required to explain some rationality concepts.
  • Ordered: the points in the compendium will be grouped according to overarching principles. 
I will provide a link to the posts created in the compendium here:

Self-confidence and status

4 asd 23 August 2015 07:36AM

I've seen the advice "be (more) confident" given so that a person may become more socially successful; for example, the self-confident get more raises. But I'm not sure whether self-confidence is the cause of becoming more socially successful, or the result of it. I don't think self-confidence can be separated from social status. I see it more as an intersocial function that gives participants information on whom to follow or listen to, and you can't act fully confident in isolation from others and their feelings. Artificially raised confidence sounds really hard, and where possible it sounds closer to arrogance or delusion; real confidence must have some connection to reality and to knowledge of what kind of value you provide. I mean, delusion might work if those around you are delusional too, but that seems pretty risky, and the surer way is to just stay connected to reality all the time. If what you're saying is not true or interesting, then it's hard to be confident when you're saying it, and others will usually quickly notify you in one way or another. If the truth is that you can't really provide much value, then I can't see how you would be able to feel as confident as those who provide more value. (Note that value might sometimes be quite complex and not the first thing that comes to mind; inept dictators might provide the kind of value that makes sense in an evolutionary context, even though they seem to do nothing useful at all.)

So, in short, self-confidence seems to be an approximation of the kind of value you provide in the real world, where value is to be thought of as something that is beneficial in an evolutionary context. There are some hacks to quickly raise your value, like dressing better or working out, but ultimately it comes down to raising your value in the real world; confidence must follow that, and not the other way around.

Thoughts? How does the "fake it till you make it" strategy appear in this context?

Instrumental Rationality Questions Thread

14 AspiringRationalist 22 August 2015 08:25PM

This thread is for asking the rationalist community for practical advice.  It's inspired by the stupid questions series, but with an explicit focus on instrumental rationality.

Questions ranging from easy ("this is probably trivial for half the people on this site") to hard ("maybe someone here has a good answer, but probably not") are welcome.  However, please stick to problems that you actually face or anticipate facing soon, not hypotheticals.

As with the stupid questions thread: don't be shy; everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better. And please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

(See also the Boring Advice Repository)

A list of apps that are useful to me. (And other phone details)

9 Elo 22 August 2015 12:24PM

 

I have noticed I often think "Damn, I wish someone had made an app for that", and when I search for it I can't find it. Then I outsource the search to Facebook or other people, and they can usually say: yes, it's called X. I put this down to an inability on my part to know how to search for an app, more than anything else.

With that in mind, I wanted to solve the problem of finding apps for other people.

The following is a list of apps that I find useful (and use often) for productive reasons:


The environment

This list is long.  The most valuable apps are in the top section, which I use regularly.

Other things to mention:

Internal storage - I have a large internal memory card because I knew I would need lots of space.  I played the "out of sight, out of mind" game and gave myself as much space as possible by buying a large card.

Battery - I use Anker external battery blocks to save myself the trouble of worrying about batteries.  If prepared, I leave my house with two days of phone charge (at 100% use).  I used to count "wins" on days I beat my phone battery (stayed awake longer than it did), but they are few and far between.  Also, I have doubled my external battery power, so it sits at two days, not one (28000mAh + 2*460mAh spare phone batteries).

Phone - I have a Samsung S4 (Android, running KitKat) because it has a few features I found useful that were not found in many other phones: cheap, removable battery, external storage card, replaceable case.

Screen cover - I am still using the one that came with the phone.

I carry a spare phone case.  In the beginning I used to go through one each month; now that I have a harder case than before, it hasn't broken.

MicroUSB cables - I went through a lot of effort to sort this out; it's still not sorted, but it's "okay for now".  The advice I have: buy several good cables (read online reviews first), test them wherever possible, and realise that they die.  Also carry a spare or two.

Restart - I restart my phone probably most days, when it gets slow.  The phone has software bugs, but this solution works for now.

The overlays

These sit on my screen all the time.

Data monitor - Gives an overview of bits per second uploaded or downloaded, updated every second.

CpuTemp - Gives an overlay of the current core temperature.  My phone is always hot, I run it hard with bluetooth, GPS and wifi blaring all the time.  I also have a lot of active apps.

Mindfulness bell - My phone chimes every half hour to remind me to check, "Am I doing something of high value right now?"  It sometimes stops me from doing crap things.

Facebook chat heads - I often have them open; they have memory leaks and start slowing down my phone after a while, so I close and reopen them when I care enough.

 

The normals:

Facebook - communicate with people.  I do this a lot.

Inkpad - a note-taking app, but not an exceptionally great one; open to a better suggestion.

Ingress - it makes me walk; it gave me friends; it put me in a community.  Downside is that it takes up more time than you want to give it.  It's a mobile GPS game.  Join the Resistance.

Maps (google maps) - I use this most days; mostly for traffic assistance to places that I know how to get to.

Camera - I take about 1000 photos a month.  The generic phone app.

Assistive light - Generic torch app (widget).  I use this daily.

Hello - SMS app.  I don't like it, but it's marginally better than the native one.

Sunrise calendar - I don't like the native calendar; I don't like this or any other calendar either, but this is the least bad one I have found.  I have an app called "Facebook sync" which helps by entering a fraction of the events in my life.

Phone, address book, chrome browser.

GPS logger - I keep a log of my current GPS location every 5 minutes.  If Google tracks me, I might as well track myself.  I don't use this data yet, but it's free for me to collect; so if I can find a use for the historic data, that will be a win.

 

Quantified apps:

Fit - Google Fit; here for multiple redundancy.

S Health - Samsung Health; here for multiple redundancy.

Fitbit - I wear a Flex step tracker every day, and manually input my weight daily through this app.

Basis - I wear a B1 watch, and track my sleep like a hawk.

Rescuetime - I track my hours on technology and wish it would give a better breakdown. (I also paid for their premium service)

Voice recorder - generic phone app; I record around 1-2 hours of things I do per week.  Would like to increase that.

Narrative - I recently acquired a life-logging device called a Narrative, and don't really know yet how best to use the data it gives.  But it's a start.

How are you feeling? - Mood-tracking app.  This one is broken but is the best one I have found: it doesn't seem to reopen itself after a phone restart, so it won't remind you to enter your current mood.  I use a widget so that I can enter my mood quickly.  The best parts of this app are the way it lets you zoom out, and its 10-point scale.  I used to write a quick sentence about what I was feeling, but that took too much time, so I stopped.

Stopwatch - "hybrid stopwatch" - about once a week I time something and my phone didn't have a native one.  This app is good at being a stopwatch.

Callinspector - tracks incoming and outgoing calls and gives summaries of things like whom you most frequently call, how much data you use, etc.  It can also set data limits.

 

Misc

Powercalc - the best calculator app I could find

Night mode - for saving battery (it dims your screen).  I don't use this often, but it is good at what it does.  I would consider an app that dims the blue light emitted from my screen; however, I don't notice any negative sleep effects, so I have been putting off getting around to it.

Advanced signal status - about once a month I am in a place with low phone signal; this app makes me feel better by giving more details about what that means.

Ebay - Being able to buy those $5 solutions to problems on the spot is probably worth more than the $5 of "impulse purchases" they might be classified as.

Cal - another calendar app that sometimes catches events that the first one misses.

ES file explorer - for searching the guts of my phone for files that are annoying to find.  Not as used or as useful as I thought it would be but still useful.

Maps.Me - I went on an exploring adventure to places without signal; so I needed an offline mapping system.  This map saved my life.

Wikipedia - information lookup

Youtube - don't use it often, but it's there.

How are you feeling? (again) - I have this in multiple places to make it as easy as possible for me to enter this data.

Play store - Kept here so it's easy to find.

Gallery - I take a lot of photos; this is the native gallery, and I could use a better app.

 

Social

In no particular order;

Facebook groups, Yahoo Mail, Skype, Facebook Messenger chat heads, Whatsapp, meetup, google+, Hangouts, Slack, Viber, OKcupid, Gmail, Tinder.

They do social things.  Not much to add here.

 

Not used:

Trello

Workflowy

pocketbook

snapchat

AnkiDroid - Anki memoriser app for a phone.

MyFitnessPal - looks like a really good app, have not used it 

Fitocracy - looked good

I got these apps for a reason, but don't use them.

 

Not on my front pages:

These I don't use as often, or have not moved to my front pages (skipping the ones I didn't install or don't use).

S memo - Samsung note-taking app; I rarely use it, but do use it once a month or so.

Drive, Docs, Sheets - The Google package.  It's terrible to interact with documents on your phone, but I still sometimes access things from my phone.

bubble - I don't think I have ever used this

Compass pro - gives extra details about direction. I never use it.

(ingress apps) Glypher, Agentstats, integrated timer, cram, notify

TripView (public transport app for my city)

Convertpad - converts numbers to other numbers. Sometimes quicker than a google search.

ABC Iview - National TV broadcasting channel app.  Every program on this channel is uploaded to this app; I have used it once since I got it, to watch a documentary.

AnkiDroid - I don't need to memorise information in the way it is intended to be used, so I don't use it.  Cram is also a flashcard app, but I don't use it either.

First aid - I know my first aid, but I have it anyway for the marginal loss of 50MB of space.

Triangle scanner - I can scan details from NFC chips sometimes.

MX player - does videos better than native apps.

Zarchiver - Iunno.  Does something.

Pandora - Never used

Soundcloud - used once every two months; some of my friends post music online.

Barcode scanner - never used

Diskusage - Very useful.  Visualises where data is being taken up on your phone; helps when trying to free up space.

Swiftkey - Better than native keyboards; gives more freedom.  I wanted a keyboard with a black background and pale keys, and Swiftkey has it.

Google calendar - don't use it, but it's there to try to use.

Sleepbot - doesn't seem to work with my phone; also, I track sleep with other methods, and I forget to turn it on, so it's entirely not useful in my life for sleep tracking.

My service provider's app.

AdobeAcrobat - use often; not via the icon though.

Wheresmydroid? - seems good to have; never used.  My phone is attached to me too well for me to lose it often; I have it open most of the waking day.

Uber - I don't use ubers.

Terminal emulator, AIDE, PdDroid party, Processing Android, An editor for processing, processing reference, learn C++ - programming apps for my phone; I don't use them, and I don't program much.

Airbnb - Have not used it yet; done a few searches to estimate prices of things.

Heart rate - measures your heart rate using the camera/flash.  Neat, but not useful other than showing people that it's possible to do.

Basis - (B1 app) - has less info available than their new app.

BPM counter - Neat if you care about what a "BPM" is for music.  Don't use often.

Sketch guru - fun to play with, draws things.

DJ studio 5 - I did a DJ thing for a friend once, used my phone.  It was good.

Facebook calendar Sync - as the name says.

Dual N-back - I don't use it.  I don't think it has value-giving properties.

Awesome calendar - I don't use it, but it comes with good recommendations.

Battery monitor 3 - Makes a graph of temperature and frequency of the cores.  Useful to see a few times; eventually it's a bell curve.

urbanspoon - local food places app.

Gumtree - Australian Ebay (eBay owns it now, too).

Printer app to go with my printer

Car Roadside assistance app to go with my insurance

Virgin air entertainment app - you can use your phone while on the plane and download entertainment from their in-flight system.


Two things now:

What am I missing? Was this useful?  Ask me to elaborate on any app and why I used it.  If I get time I will do that anyway. 

P.S. This took two hours to write.

P.P.S. - I was intending to make, keep, and maintain a list of useful apps; that is not what this document is.  If there are enough suggestions that it's time to make and keep such a list, I will do that.

Robert Aumann on Judaism

3 iarwain1 21 August 2015 07:13PM

Just came across this interview with Robert Aumann. On pgs. 20-27 he describes why and how he believes in Orthodox Judaism. I don't really understand what he's saying. Key quote (I think):

H (interviewer): Take for example the six days of creation; whether or not this is how it happened is practically irrelevant to one's decisions and way of conduct. It is on a different level.

A (Aumann): It is a different view of the world, a different way of looking at the world. That is why I prefaced my answer to your question with the story about the roundness of the world being one way of viewing the world. An evolutionary geological perspective is one way of viewing the world. A different way is with the six days of creation. Truth is in our minds. If we are sufficiently broad-minded, then we can simultaneously entertain different ideas of truth, different models, different views of the world.

H: I think a scientist will have no problem with that. Would a religious person have problems with what you just said?

A: Different religious people have different viewpoints. Some of them might have problems with it. By the way, I'm not so sure that no scientist would have a problem with it. Some scientists are very doctrinaire.

Anybody have a clue what he means by all this? Do you think this is a valid way of looking at the world and/or religion? If not, how confident are you in your assertion? If you are very confident, on what basis do you think you have greatly out-thought Robert Aumann?

Please read the source (all 7 pages I referenced, rather than just the above quote), and think about it carefully before you answer. Robert Aumann is an absolutely brilliant man, a confirmed Bayesian, author of Aumann's Agreement Theorem, Nobel Prize winner, and founder / head of Hebrew University's Center for the Study of Rationality. Please don't strawman his arguments or simply dismiss them!

Weekly LW Meetups

2 FrankAdamek 21 August 2015 04:23PM

Pro-Con-lists of arguments and onesidedness points

3 Stefan_Schubert 21 August 2015 02:15PM

Follow-up to Reverse Engineering of Belief Structures

Pro-con-lists of arguments such as ProCon.org and BalancedPolitics.org serve a useful purpose. They give an overview of complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.

 

I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM foods legalization, etc.), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).

Once you have this data, you could use it to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of your opponents' arguments are good, you're defined as not onesided. (The exact mathematical function you would choose could be discussed; a sketch of one option follows below.)
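
To make this concrete, here is a minimal sketch in Python of one scoring rule you might pick, assuming each user records a stance on the thesis plus 1-5 quality ratings for the individual arguments. The function, its input format, and the rescaling are illustrative assumptions, not a specification for the site:

```python
from statistics import mean

def onesidedness(stance, ratings):
    """One possible onesidedness score, in [0, 1].

    stance:  +1 if the user agrees with the thesis, -1 if they disagree.
    ratings: list of (side, rating) pairs, where side is +1 for a pro
             argument, -1 for a con argument, and rating is the user's
             1-5 assessment of that argument's quality.
    """
    own = [r for side, r in ratings if side == stance]
    other = [r for side, r in ratings if side != stance]
    if not own or not other:
        return 0.0  # one side unrated: onesidedness is undefined here
    # Gap between the average rating of own-side and other-side
    # arguments, rescaled from [-4, 4] to [0, 1].
    return (mean(own) - mean(other) + 4) / 8

# A user who agrees with the thesis and rates every pro argument 5
# and every con argument 1 is maximally onesided:
print(onesidedness(+1, [(+1, 5), (+1, 5), (-1, 1), (-1, 1)]))  # 1.0
# Seeing good and bad arguments on both sides gives the midpoint:
print(onesidedness(+1, [(+1, 4), (+1, 2), (-1, 4), (-1, 2)]))  # 0.5
```

Under this rule 0.5 means perfectly even-handed and 1.0 means maximally onesided; plenty of other functions (rank correlations, say) would serve just as well.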

Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.

There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think it's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-Sided":

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this.  Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

But there is no reason for complex actions with many consequences to exhibit this onesidedness property.  

Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:

Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer.  Arguments are soldiers.  Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back.  If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.

My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.

 

You could also add other features to a pro-con-list. For instance, you could classify arguments in different ways: ad hominem arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't classify them; you wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.

You could also combine these two features. For instance, some people might accept ad hominem arguments when they support their views, but not when they contradict them. That would make their use of ad hominem arguments onesided. A sketch of how such a profile might be computed follows.
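
Assuming the same hypothetical 1-5 ratings as above, plus a category label per argument (the categories are just examples), the profile could start as simply as this:

```python
from collections import defaultdict
from statistics import mean

def persuasion_profile(ratings):
    """Average rating the user gives each kind of argument.

    ratings: list of (category, rating) pairs, e.g. ("ad hominem", 2);
             arguments that are hard to classify are simply left out.
    """
    by_category = defaultdict(list)
    for category, rating in ratings:
        by_category[category].append(rating)
    return {category: mean(rs) for category, rs in by_category.items()}

print(persuasion_profile([
    ("ad hominem", 4), ("ad hominem", 5),
    ("consequentialist", 2), ("rights-based", 3),
]))
# {'ad hominem': 4.5, 'consequentialist': 2, 'rights-based': 3}
```

The onesidedness function sketched earlier could then be applied within each category to flag, for instance, a onesided use of ad hominem arguments.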

 

Yet another feature that could be added is a standard political compass. Since people fill in which theses they believe in (cannabis legalization, GM foods legalization, etc.), you could calculate which party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
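
A minimal sketch of that matching, assuming party positions on the same theses are stored as simple agree/disagree values; the parties, theses, and the plain agreement-fraction measure are all made up for illustration:

```python
def closest_party(user_stances, party_stances):
    """Pick the party whose stances agree most with the user's.

    user_stances:  dict mapping thesis -> +1 (agree) or -1 (disagree).
    party_stances: dict mapping party name -> a dict of the same form.
    """
    def agreement(party):
        shared = [t for t in user_stances if t in party_stances[party]]
        if not shared:
            return 0.0
        return sum(user_stances[t] == party_stances[party][t]
                   for t in shared) / len(shared)
    return max(party_stances, key=agreement)

# Made-up theses and parties, purely for illustration:
user = {"cannabis legalization": +1, "GM foods legalization": -1}
parties = {
    "Party A": {"cannabis legalization": +1, "GM foods legalization": +1},
    "Party B": {"cannabis legalization": +1, "GM foods legalization": -1},
}
print(closest_party(user, parties))  # Party B
```

A real implementation would also have to handle ties, and might weight theses by how strongly the user cares about them.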

 

Suggestions of more possible features are welcome, as well as general comments - especially about implementation.

Glossary of Futurology

2 mind_bomber 21 August 2015 05:51AM

Hi guys,

So I've been curating this glossary over at https://www.reddit.com/r/Futurology/.  I want it to be sort of an introduction to future-focused topics: a list of words that the layman can read and be inspired by.  I try to stay away from household words (e.g. cyberspace), science fiction topics (e.g. Dyson sphere), words that describe themselves (e.g. self-driving cars), obscure and rarely used words (e.g. betelgeuse-brain), and words that can't be found in most dictionaries (e.g. Roko's Basilisk (I've been meaning to remove that one)).  Most of the glossary is from words and phrases I find on the /r/Futurology forum.  I have a whole other list of potential words for the glossary, collected and just waiting for the day to be added (e.g. particle accelerator, aerogel, proactionary principle).  I find curating the glossary to be more of an art than a science.  I try to balance the list between science, technology, philosophy, ideology, and sociology.  I like to find related topics to expand the list (e.g. terraforming & geoengineering).  Even though the glossary is in alphabetical order, I want it to read somewhat like a story.

Anders Sandberg of the Future of Humanity Institute, Oxford, told me "I like the usefulness of your list..."

 

I'm interested to know what you guys think.

 

Glossary located below.  (The "See /r/..." notation is native to the reddit website; it links glossary entries to subreddits (other reddit pages) related to that word or phrase.)


