Credit card that donates to SIAI.
Luke posted about this on the SIAI blog. Essentially, you can get a credit card with a cash-back rewards program that automatically donates money to the Singularity Institute.
This seems to me like a dream come true. Am I missing something? Are there any catches? Is there a better rewards program I could use to save more money, so I can donate more to SIAI?
Futurama does an episode on nanotechnology.
Futurama, Season 6, Episode 15: Benderama
Naturally, the episode's goal is entertainment rather than an accurate portrayal of the technology.
Quoting from the Wikipedia article:
The premise of "Benderama" is based around the transhumanist theory of grey goo, an end-of-the-world scenario in which out-of-control self-replicating robots consume all matter on Earth while building more of themselves.
The episode is not officially available for viewing online (but you can still find it, of course).
Considering all scenarios when using Bayes' theorem.
Disclaimer: this post is directed at people who, like me, are not Bayesian/probability gurus.
Recently I found an opportunity to use Bayes' theorem in real life to help myself update in the following situation (presented in a gender-neutral way):
Let's say you are wondering if a person is interested in you romantically. And they bought you a drink.
A = they are interested in you.
B = they bought you a drink.
P(A) = 0.3 (Just an assumption.)
P(B) = 0.05 (Approximately 1 out of 20 people who might be at all interested in you will buy you a drink, for one reason or another.)
P(B|A) = 0.2 (Approximately 1 out of 5 people who are interested in you will buy you a drink, most likely because they are interested, though possibly for some other reason.)
These numbers seem valid to me, and I can't see anything that's obviously wrong. But when I actually use Bayes' theorem:
P(A|B) = P(B|A) * P(A) / P(B) = 0.2 * 0.3 / 0.05 = 1.2
Uh-oh! Where did I go wrong? See if you can spot the error before continuing.
Turns out:
P(B|A) = P(A∩B) / P(A) ≤ P(B) / P(A) = 0.05 / 0.3 ≈ 0.167
BUT
P(B|A) = 0.2 > 0.167
I've made a mistake in estimating my probabilities, even though each estimate felt intuitive. Yet I don't immediately see where I went wrong when I look at the original estimates! What's the best way to prevent this kind of mistake?
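One mechanical safeguard is to check the estimates for coherence before plugging them in: since the event B includes every case where A and B both happen, P(B) can never be smaller than P(B|A) * P(A). A minimal sketch of this check in Python, using the estimates above:

```python
# Coherence check: P(B) >= P(A and B) = P(B|A) * P(A) must always hold,
# because the event B includes every case where A and B both happen.

p_A = 0.3          # P(they are interested in you)
p_B = 0.05         # P(they bought you a drink)
p_B_given_A = 0.2  # P(they bought you a drink | they are interested)

p_A_and_B = p_B_given_A * p_A  # 0.06
if p_A_and_B > p_B:
    print(f"Incoherent: P(A and B) = {p_A_and_B:.2f} exceeds P(B) = {p_B:.2f}, "
          f"so P(A|B) = {p_A_and_B / p_B:.2f} would exceed 1.")
else:
    print(f"P(A|B) = {p_A_and_B / p_B:.2f}")
```

Running this on the estimates above flags the problem immediately, before any "probability" of 1.2 ever shows up.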
I feel pretty confident in my estimates of P(A) and P(B|A). However, estimating P(B) is rather difficult because I need to consider many scenarios.
I can compute P(B) more precisely by considering all the scenarios that could lead to B happening, using the law of total probability (see the wiki article):
P(B) = Σ_i P(B|H_i) * P(H_i)
Let's do a quick breakdown, by motive, of everyone who might buy you a drink (out of the pool of people who might be at all interested in you):
- P(misc. reasons) = 0.05; P(B|misc) = 0.01
- P(they are just friendly and buy drinks for everyone they meet) = 0.05; P(B|friendly) = 0.8
- P(they want to be friends) = 0.3; P(B|friends) = 0.1
- P(they are interested in you) = 0.6; P(B|interested) = P(B|A) = 0.2
So, P(B) = 0.05 * 0.01 + 0.05 * 0.8 + 0.3 * 0.1 + 0.6 * 0.2 = 0.1905
And P(A|B) = 0.2 * 0.3 / 0.1905 ≈ 0.315 (very different from 1.2!)
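For concreteness, here is the same computation as a short Python sketch (the dictionary layout and hypothesis labels are mine):

```python
# Law of total probability: P(B) = Σ_i P(B|H_i) * P(H_i),
# where the hypotheses H_i are exhaustive and their priors sum to 1.

hypotheses = {
    # label: (prior P(H_i), likelihood P(B|H_i))
    "misc. reasons":        (0.05, 0.01),
    "friendly to everyone": (0.05, 0.80),
    "wants to be friends":  (0.30, 0.10),
    "interested in you":    (0.60, 0.20),
}

assert abs(sum(prior for prior, _ in hypotheses.values()) - 1.0) < 1e-9

p_B = sum(prior * likelihood for prior, likelihood in hypotheses.values())
print(f"P(B) = {p_B:.4f}")                # 0.1905

print(f"P(A|B) = {0.2 * 0.3 / p_B:.3f}")  # ~0.315, with the original P(A) = 0.3
# Note: the breakdown itself assigns a prior of 0.6 to "interested in you";
# using that as P(A) instead would give P(A|B) = 0.2 * 0.6 / 0.1905 ≈ 0.63.
```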
Once I started thinking about all possible scenarios, I found one I hadn't considered explicitly -- some people buy drinks for everyone they meet -- which adds a good amount of probability (0.05 * 0.8 = 0.04) to B happening. (Those types of people are rare, but they WILL buy you a drink.) There are also other interesting assumptions that are made explicit:
- Out of all the people under consideration in this problem, there are twice as many people who would be romantically interested in you vs. people who would want to be your friend.
- People who are interested in you will buy you a drink twice as often as people who want to be your friend.
The moral of the story is to consider all possible scenarios (models/hypotheses) that can lead to the event you have observed. It's possible you are missing some scenarios which, once considered, will significantly alter your probability estimates.
Do you know any other ways to make the use of Bayes' theorem more accurate? (Please post in the comments; links to previous posts of this sort are welcome.)
Discussion for Eliezer Yudkowsky's paper: Timeless Decision Theory
I have not seen any place to discuss Eliezer Yudkowsky's new paper, titled Timeless Decision Theory, so I decided to create a discussion post. (Have I missed an existing post or discussion?)
Life-tracking application for Android
Hi, LessWrong.
I just finished my application for Android devices, LifeTracking, which was motivated by discussions here, primarily the discussions about akrasia and about measuring/tracking your own actions. I don't want to make this sound like an advertisement (the application is completely free anyway), but I would really like to get feedback from you and hear your comments, criticism, and suggestions. If there are enough LessWrong-specific feature requests, I will make a separate application just for that.
Here is a brief description of the app:
The LifeTracking application lets you track any value (like your weight or your LessWrong karma), as well as any time-consuming activities (like sleeping, working, or reading Harry Potter fanfic). You can view the data visually, edit it, and analyze it.
The goal of the application is to help you know yourself and your schedule better. Hopefully, when you graph various aspects of your life side-by-side, you will come to a better understanding of yourself. Also, this way you will not have to rely on your faulty memory to retain all that data.
You can download the app from the Market (link only works from Android devices) or download the .apk directly. Screenshots: [1], [2], [3], [4], [5], [6].
Edit: LifeTracking website
And while we are on the topic of mobile apps, what other applications would you like to see made? For example, another useful application would be a "personal prediction tracker": you enter various short-term predictions along with your confidence level, and later enter the actual result. You can classify each prediction and then see whether you are over- or under-confident in certain areas. (I remember seeing a website that does something similar, but I can't find it now.)
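For concreteness, here is a minimal sketch (in Python, with made-up data) of the kind of calibration report such a prediction tracker might produce:

```python
from collections import defaultdict

# Hypothetical input: (stated confidence, whether the prediction came true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

# Group outcomes by stated confidence and compare to the observed hit rate.
buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    verdict = "over-confident" if hit_rate < confidence else "calibrated (or under-confident)"
    print(f"stated {confidence:.0%}, observed {hit_rate:.0%} "
          f"over {len(outcomes)} predictions: {verdict}")
```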