Information Theory vs Harry Potter [LINK]

-7 Academian 25 April 2012 05:40PM

Needed: A large database of statements for true/false exercises

3 Academian 13 April 2012 02:26AM

Does anybody know where to find a large database of statements that are roughly 50% likely to be true or false?  These would be used for confidence calibration / Bayesian updating exercises for CMR/HRP.

One way to make such a database would be to buy a bunch of trivia games with True/False questions, and type each statement and its negation into a computer.  A possible problem is that trivia questions tend to be selected for surprising/counterintuitive truth values, though I'm not sure that's the case.  I'd be happy to acquire an already-made database of this form, but ideally I'd like statements that are "more neutral" in terms of how counterintuitive they are.

Any thoughts on where we might find a database like this to use/buy?

Thanks for any help!

Revision: We actually want a database of two-choice questions. This way, the player won't get trained on a base rate of 50% of statements in the world being true... they'll just learn that when there are two possible answers, exactly one is correct.  In the end, the database should look something like this (warning: I made up the "correct" answers):

Question: "Which is diagnosed more often in America (2011)?"; 
Answers: (a) "the cold", (b) "allergies"; 
Correct Answer: (a); 
Tags: {medical}

Question: "Which city has a higher average altitude?"; 
Answers: (a) "Chicago", (b) "Las Vegas"; 
Correct Answer: (a)
Tags: {geography}

Question: "Who sold more albums while living?"; 
Answers: (a) "Michael Jackson", (b) "Elvis Presley"; 
Correct Answer: (b)
Tags: {history, pop-culture, music}

Question: "Was the price of IBM stock higher or lower at the start of the month after the Berlin wall fell, compared with the start of the previous month?"; 
Answers: (a) "higher", (b) "lower"; 
Correct Answer: (a)
Tags: {history, finance}
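For concreteness, here's a minimal sketch (in Python) of how one record in such a database might be represented. The class and field names are my own invention for illustration, not a settled schema:

```python
from dataclasses import dataclass, field

@dataclass
class TwoChoiceQuestion:
    """One record in a hypothetical two-choice calibration database."""
    question: str
    answers: tuple      # exactly two option strings: (a) and (b)
    correct: str        # "a" or "b"
    tags: frozenset = field(default_factory=frozenset)

# One of the example records from above:
q = TwoChoiceQuestion(
    question="Which city has a higher average altitude?",
    answers=("Chicago", "Las Vegas"),
    correct="a",
    tags=frozenset({"geography"}),
)

# Look up the text of the correct answer.
print(q.answers[ord(q.correct) - ord("a")])  # → Chicago
```

When presenting a question, the (a)/(b) order could also be shuffled per player, so nobody learns a positional base rate either.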

 

 

Teaching rationality made me better (at research and other things)

2 Academian 02 February 2012 06:10PM

Hi all,

I just wanted to loudly recommend the position of designing and writing rationality curriculum --- to anyone who is interested --- as a potential way to make yourself more awesome.  After helping teach mini-camp last year, I definitely experienced a huge increase in motivation for my own research, and in turn, productivity.  Somehow, giving serious thought to rationality advice for a large group and *actually delivering it* made me internalize even more deeply some things I thought I'd already absorbed completely. 

... and my sense that more is possible is still tingling :)

So yeah, definitely give it a shot if you think you might be good at it!

Mini-camp was indeed awesome, and so was Luke (just add Bayes)

2 Academian 02 September 2011 08:55AM

Yep, I'm saying that without hard data.  But I was there.  So let me say it again, in response to numerous comments I've seen complaining that no judgement should be passed until a quantitative analysis confirms it:

Mini-camp was awesome.  Note that mini-camp was far from the first time I've travelled to an event to surround myself with like-minded peers working toward common goals...  I find such events extremely motivating and enjoyable, which is why I've been to many such workshops, inside and outside academia (~3 per year for the past 10 years).

Yet mini-camp is still topping my charts.  Specifically, the camp is tied for the title of the most life-altering workshop-like event of my life, and the tie is with the workshop that got me onto my PhD topic (graphical causal modelling), so that's saying something.

In particular, I've been visibly-to-myself-and-others more motivated and hard-working since the camp.  I've had more energy for learning and adaptation, and I find Luke to have been a highly inspiring input to that result.

(I'm talking about Luke because his position is the one being discussed right now, but I got a lot of really inspiring ideas and motivation from Anna before, during, and after the camp as well.)

Hard data will be great to have, but it's hard to get, especially certifiably causal data (though the prospect is not hopeless, with enough conditional independence tests), and doubly so since the camp was planned and executed on short notice.  

In the meantime, let's do a little Bayes.  First, assign priors to how well you expect a week-long sustained interaction between growth-oriented rationalists to go.  (If your prior is something like 80% failure, I'd like to know where you're getting your growth-oriented rationalists.)  Now which of the following theories, "failure" or "success", assigns a higher likelihood to the following observations?

-----

1. People wrote these: 

https://docs.google.com/spreadsheet/pub?key=0AnoM_ZsIBBwEdGNicUMzRkNJNzRKLVpEb2RxZzU3V0E

In particular, 

“The week I spent in minicamp had by far the highest density of fun and learning I have ever experienced. It's like taking two years of college and condensing it to a week: you learn just as much and you have just as much fun. The skills I've learned will help me set and achieve my own life goal, and the friends I've made will help me get there.” --Alexei

“This was an intensely positive experience. This was easily the most powerful change self-modification I've ever made, in all of the social, intellectual, and emotional spheres. I'm now a more powerful person than I was a week ago -- and I can explain exactly how and why this is true.

At mini-camp, I've learned techniques for effective self-modification -- that is, I have a much deeper understanding of how to change my desires, gather my willpower, channel my time and cognitive resources, and model and handle previously confusing situations. What's more, I have a fairly clear map of how to build these skills henceforth, and how to inculcate them in others. And all this was presented in such a way that any sufficiently analytical folk -- anyone who has understood a few of the LW sequences, say -- can gain in extreme measures.” --Matt Elder / Fiddlemath

“I expected a week of interesting things and some useful tools to take away. What I got was 8 days of constant, deep learning, challenges to my limits that helped me grow. I finally grokked that I can and should optimize myself on every dimension I care about, that practice and reinforcement can make me a better thinker, and that I can change very quickly when I'm not constrained by artificial barriers or stress.

I would not recommend doing something like this right before another super-busy week, because I was learning at 100% of capacity and will need a lot of time to unpack all the things I learned and apply them to my life, but I came away with a clear plan for becoming better. It is now a normal and easy thing for me to try things out, test my beliefs, and self-improve. And I'm likely to be much more effective at making the world a better place as well, by prioritizing without fear.

The material was all soundly-researched and effectively taught, with extremely helpful supplemental exercises and activities. The instructors were very helpful in and out of session. The other participants were excited, engaged, challenging, and supportive.

I look forward to sharing what I've learned with my local Lesswrong meetup and others in the area. If that's even 1/4 as awesome as my time at the Mini-Camp, it will make our lives much better.” --Ben Hoffman / Benquo

“I really can't recommend this camp enough! This workshop broke down a complex and intertwined set of skills labelled in my brain as "common sense" and distinguished each part so that I could work on them separately. Sessions on motivation, cognition, and what habits to build to not fool yourself were particularly helpful. This camp was also the first example that I've seen of people taking current cognitive science and other research, decoding it, and showing people what's been documented to work so that they can use it too. It feels to me now as though the coolest parts of the sequences have been given specific exercises and habits to build off of. This camp, and the people in it, have changed my path for the better.” --David Jones / TheDave

 

2. I wrote this post.

3. Eliezer wants to keep Luke as a permanent hire.

4. Whatever other comments you've seen/heard about the camp from people who attended.

-----

Is this a biased sample?  Probably.  Is it hard data?  Easy to quantify?  Not so much.  Might this be a big conspiracy by Luke-originating ninja bloggers?  Perhaps.  But really... which theory assigns the higher likelihood here?  Success, or failure?
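To make the likelihood comparison concrete, here's a toy numeric version in Python. Every number below is made up purely for illustration; the point is just that observations like these testimonials are far more probable under "success" than under "failure", so even an even-odds prior gets pushed way up:

```python
# Toy Bayesian update; all probabilities are invented for illustration.
prior_success = 0.5
prior_failure = 0.5

# Likelihood of seeing glowing testimonials under each theory (made up):
p_obs_given_success = 0.6
p_obs_given_failure = 0.05

# Bayes' rule: P(success | obs) = P(obs | success) P(success) / P(obs)
posterior_success = (prior_success * p_obs_given_success) / (
    prior_success * p_obs_given_success + prior_failure * p_obs_given_failure
)
print(round(posterior_success, 3))  # → 0.923
```

Biased sampling would shrink the likelihood ratio, but it would take a lot of bias to flip which theory the observations favor.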

Let's allow the arguments that can be made about the minicamp to be made, rather than ritualistically abstaining from decision-making until numbers show up.

That, and I really hope Luke stays with SingInst :)

Frequentist vs Bayesian breakdown: interpretation vs inference

24 Academian 30 August 2011 03:58PM

Suppose we have two different human beings, Connor and Diane, who agree to interpret their subjective anticipations as probabilities, thereby commonly earning them the title "Bayesian".  On a particular project or venture, they might disagree on whether to use Trick A or Trick B to decide the next step in the project.  It might be that Trick A is commonly labelled a "Frequentist inference method" and B a "Bayesian inference method".  Why might they disagree?

As far as I can see, there are 3 disagreements that get labelled "Bayesian vs Frequentist" debates, and conflating them is a problem:

(1) Whether to interpret all subjective anticipations as probabilities.

(2) Whether to interpret all probabilities as subjective anticipations.

(3) Whether, on a particular project, to use Statistical Trick B instead of Statistical Trick A to infer the best course of action, when B is commonly labelled a "Bayesian method" and A is a "Frequentist method".

(Regarding 3, UC Berkeley professor Michael Jordan offers a good heuristic for how statistical tricks get labelled as Bayesian or Frequentist, in terms of which terms in a loss function one treats as fixed or variable.  I recommend watching the first twenty minutes of his video lecture on this if you're not familiar.)

The question "is Connor a Bayesian or a Frequentist?" is commonly posed as though Connor's position on 1, 2, and 3 must be either "yes, yes, yes" or "no, no, no".  I don't believe this is so often the case. For example, my position is:

(1) - Yes.  Insofar as we have subjective anticipations, I agree normatively that they should behave and update as probabilities.

(2) - Don't care much. Expressions like P(X|Y) and P(X and Y) are useful for denoting both subjective anticipations and proportions of a whole, and in particular, proportions of real future events.  Whether to use the word "probability" is a terminological question.  Personally I try to reserve the word "probability" for when I mean subjective anticipations, and say "proportion" when I mean proportions of real future events, but this is word choice.  Unfortunately this word choice is strongly associated and confused with positions on (1) and (3).

(3) - It depends.  In statistical inference, we commonly consider data sets x, world models M, and parameters θ that specify the model M more precisely.  I consider the separation of belief into M and θ to be purely formal. When guessing the next data set y, one considers expressions of the form P(x|M,θ) in some way.  If I'm already very confident in a specific world model M, and expect θ to actually vary from situation to situation, I'll probably try to estimate the parameters θ from x in a way that has the best expected success rate across all possible data sets M would generate. You might say here that I "trust the model more than the data" (though what I really don't trust are the changing model parameters), and this is a trick commonly referred to as "Frequentist".  If I'm not confident in the model M, or expect the parameters θ to be the same in many future situations, I'll probably try to estimate M,θ from x in a way that has the best expected success rate conditional on x.  You might say here that I "trust the data more than the model", and label this a "Bayesian" trick.  
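As a toy illustration of the two tricks (my own example, not from the post or from Jordan's lecture): estimating a Bernoulli parameter θ from data, once by maximum likelihood (the move usually labelled "Frequentist") and once by the posterior mean under a uniform prior (the move usually labelled "Bayesian"):

```python
# Hypothetical observations x: ten Bernoulli(θ) draws, 7 successes.
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

# "Frequentist trick": maximum-likelihood estimate of θ, chosen for
# good average behavior across data sets the model would generate.
theta_mle = sum(data) / len(data)

# "Bayesian trick": posterior mean under a uniform Beta(1, 1) prior,
# i.e. conditioning on the observed data x.
alpha = 1 + sum(data)               # prior successes + observed successes
beta = 1 + len(data) - sum(data)    # prior failures + observed failures
theta_post_mean = alpha / (alpha + beta)

print(theta_mle, round(theta_post_mean, 3))  # → 0.7 0.667
```

With this much data the two estimates nearly agree; they come apart when data are scarce or the prior is strong, which is exactly when the choice between tricks matters.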

Throughout (3), since my position in (1) is not changing, a member of the Bayes Tribe will say I'm "really a Bayesian all along", but I don't want to continue with this conflation of position names.  It's true that if I use the "Frequentist trick", it will be because I've updated in favor of it, i.e. my subjective confidence levels in the various theory elements are appropriate for it. 

... But from now on, when the term "Bayesian" or "Frequentist" arises in a debate, my plan is to taboo the terms immediately, and proceed to either dissolve the issue into (1), (2), and (3) above, or change the conversation if people don't have the energy or interest for that length of conversation.

Do people agree with this breakdown?  I think I could be persuaded otherwise and would of course appreciate it if I were :)

 


 

ETA: I think the wisdom to treat beliefs as anticipation controllers and update our confidences based on evidence might be too precious to alienate people from it with the label "Bayesian", especially if the label is as ambiguous as my breakdown has found it to be.

Michael Jordan dissolves Bayesian vs Frequentist inference debate [video lecture]

6 Academian 30 August 2011 01:12AM

UC Berkeley professor Michael Jordan, a leading researcher in machine learning, has a great reduction of the question "Are your inferences Bayesian or Frequentist?". The reduction is basically "Which term are you varying in the loss function?". He calls this the "decision theoretic perspective" on the debate, and his terminology fits well with LessWrong interests.

I don't have time to write a top-level post about this (maybe someone else does?), but I quite liked the lecture, and thought I should at least post the link!

http://videolectures.net/mlss09uk_jordan_bfway/

The discussion gets much clearer starting at the 10:11 slide, which you can click on and skip to if you like, but I watched the first 10 minutes anyway to get a sense of his general attitude.

Enjoy! I recommend watching while you eat, if it saves you time and the food's not too distracting :)

Measuring aversion and habit strength

79 Academian 27 May 2011 01:05AM

tl;dr: Strong aversions don't always originate from strong feelings (see Ugh fields). It's useful to measure the strength of an aversion by how effectively it averts your thoughts/behavior instead of how saliently you can feel it, or even remember feeling it. If there's a low-cost behavior that you somehow always "end up not doing", that's evidence for a mechanism steering you away from it. Try to find it, and defy it.

Story

Right after writing Break your habits: be more empirical, someone asked me to a live music show, and I declined, with some explanation about being busy. This felt a little forced, and I realized: I always decline live music shows. This counts as a habit. The interesting thing was that I declined them for many different, unrelated reasons. This was evidence for something more systemic, because it would be a coincidence if random, unrelated reasons always came up to prevent me from attending live music.

So I asked myself if I really disliked live music. Emotions returned: "Not really. It's not awesome, but it's not terrible." Now, there was a time when I would have stopped thinking there. My time is valuable, and mediocrity is enough to stop me from doing anything, right?

But wait... is it? Is it enough to always stop me? If it was only mediocre, and not terrible, then surely on one of the many occasions I could have seen live music, there would have been sufficient justification to go... a particularly good composer, a particularly interesting group of people to go with, a particular need to get out and do something different... but no, somehow I always didn't go.

And that's when I realized I probably had an aversion to live music: some brain mechanism that consistently and effectively averted me from seeing it, and in this case, not something I could feel. In particular, it wasn't accompanied by any sense of "Ugh". So since I couldn't feel the aversion, I took an outside view to ask what could have caused it, if it indeed exists...

continue reading »

List of literally false statements in the Bible

-13 Academian 20 May 2011 08:10AM

Jehovah's Witnesses aim to interpret the Bible literally, which is in some sense admirable, because that is the only way a text can do much to constrain one's anticipations about reality.  By contrast, if one aims to interpret a religious text only "metaphorically", then there are so many possible meanings that it does essentially nothing to constrain one's anticipations.

For example, when one accepts the best scientific knowledge about the origin of Earth, one believes that it was not in fact created in 6 days, and that the literal meaning of the English Bible is false in this case.  Christians who accept the true age of Earth are not usually bothered by this, and resort to a "metaphorical" interpretation wherein "days" are metaphors for longer periods.

But if you only believe that each statement in the Bible has some metaphorical interpretation which is true, it doesn't tell you much about the world at all.  The Bible asserts that God exists... but since we're only taking things metaphorically now, maybe God doesn't actually literally exist.  Maybe He's pretend.  Maybe there in fact is no God, but there is a rainforest, and God is a metaphor for the rainforest.  Or for the sun.  Who knows.  Since there is no way to tell which metaphor is the right one, believing that the Bible is "metaphorically true" basically tells you nothing.

Jehovah's Witnesses seem to understand this, so they're not going there.  They're sticking to the literal Word of the Lord.  Which makes me interested:

What verses of the Bible can we cite that are false in their literal interpretation, according to accepted scientific or well-founded historical knowledge?

Thanks to anyone who contributes!

Best videos inspiring first interest in rationality or the singularity

4 Academian 13 April 2011 02:32AM

When faced with a decision that might be really important — if, say, the life of a loved one may be at risk — many people, though unfortunately not all, are moved to a sense of responsibility whereby they suddenly care more about being right than about looking right, feeling right, or even feeling good. It's when we have something to protect that many of us are most motivated to transcend our usual desires to "win the debate", "uphold our beliefs", or "have faith", and instead actually try to become right ... to have the best shot we can at saving the day with the decisions we make.

The upcoming technological singularity — an event where the lives of all our loved ones may or may not hang in the balance — is for many people a great inspiration to become more rational. Also, most of us want to convince others to be more rational, and videos are a powerful way to reach people, so I want to know:

What do you think is the online video that best inspires a strong initial interest in rationality or the singularity?

Please upvote each comment that contains a video that you approve for this purpose, not just your favorite; I want to use approval voting here so we get a robust ordering on the videos. Also to this end, please post at most one video per comment.

If you don't already have a favorite but want one, a place to start looking is the Singularity Summit videos at vimeo.com. Vimeo allows you to like/dislike videos, so that's another way you can donate information.

Some things to consider when voting:

  • Public appeal --- is this a video you'd want sent out on a mailing list to a bunch of random but educated people? (We want people in positions of intellectual or political influence to promote rationality.)
  • First exposure --- if this is someone's first exposure to thinking about rationality or the singularity, will it keep their interest?
  • Video quality --- is the camerawork respectable, or does it make people want to stop watching?
  • Speaker personality --- will people be annoyed at the speaker and lose attention, or be inspired?
  • OMG factor --- does the video have enough punchlines that make people say "oh my gosh I want to be more rational"?

I will periodically update this post with a list of links and their ratings, so we all have an easily accessible source of high-quality presentations we can send to our friends and colleagues to inspire rationality :)


List of videos; last updated April 17, 2011.

  1. (11 points since April 13, 2011; vote here) Open-Mindedness, by QualiaSoup.  
  2. (3 points since April 14, 2011; vote here) What is the Singularity?  
  3. (3 points since April 13, 2011; vote here) Why didn't anybody tell me?  
  4. (2 points since April 13, 2011; vote here) Reaching the stars is easy...  
  5. (2 points since April 13, 2011; vote here) Hans Rosling: No more boring data: TEDTalks  
  6. (1 point since April 15, 2011; vote here) What is "rationality"?

Yes, a blog.

88 Academian 19 November 2010 01:53AM

When I recommend LessWrong to people, their gut reaction is usually "What? You think the best existing philosophical treatise on rationality is a blog?"

Well, yes, at the moment I do.

"But why is it not an ancient philosophical manuscript written by a single Very Special Person with no access to the massive knowledge the human race has accumulated over the last 100 years?"

Besides the obvious? Three reasons: idea selection, critical mass, and helpful standards for collaboration and debate.

Idea selection.

Ancient people came up with some amazing ideas, like how to make fire, tools, and languages. Those ideas have stuck around, and become integrated in our daily lives to the point where they barely seem like knowledge anymore. The great thing is that we don't have to read ancient cave writings to be reminded that fire can keep us warm; we simply haven't forgotten. That's why more people agree that fire can heat your home than on how the universe began.

Classical philosophers like Hume came up with some great ideas, too, especially considering that they had no access to modern scientific knowledge. But you don't have to spend thousands of hours reading through their flawed or now-uninteresting writings to find their few truly inspiring ideas, because their best ideas have become modern scientific knowledge. You don't need to read Hume to know about empiricism, because we simply haven't forgotten it... that's what science is now. You don't have to read Kant to think abstractly about Time; thinking about "timelines" is practically built into our language nowadays.

See, society works like a great sieve that remembers good ideas, and forgets some of the bad ones. Plenty of bad ideas stick around because they're viral (self-propagating for reasons other than helpfulness/verifiability), so you can't always trust an idea just because it's old. But that's how any sieve works: it narrows your search. It keeps the stuff you want, and throws away some of the bad stuff so you don't have to look at it.

LessWrong itself is an update patch for philosophy to fix compatibility issues with science and render it more useful. That it would exist now rather than much earlier is no coincidence: right now, it's the gold at the bottom of the pan, because it's taking the idea filtering process to a whole new level. Here's a rough timeline of how LessWrong happened:

continue reading »
