Linkposts now live!

You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.
Some general norms, subject to change:
- It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
- It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
- It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
- It's not okay to post duplicates.
2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)
Politics
The LessWrong survey has a very involved section dedicated to politics. In previous analyses the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but also at what beliefs are associated with a given affiliation. The charts below summarize most of the results.
Political Opinions By Political Affiliation

Miscellaneous Politics
There were also some other questions in this section which aren't covered by the above charts.
Voting
| Group | Turnout |
|---|---|
| LessWrong | 68.9% |
| Australia | 91% |
| Brazil | 78.90% |
| Britain | 66.4% |
| Canada | 68.3% |
| Finland | 70.1% |
| France | 79.48% |
| Germany | 71.5% |
| India | 66.3% |
| Israel | 72% |
| New Zealand | 77.90% |
| Russia | 65.25% |
| United States | 54.9% |
Calibration And Probability Questions
Calibration Questions
I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.
All my calibration questions were meant to satisfy a few essential properties:
- They should be 'self-contained', i.e. something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
- They should, at least to a certain extent, be Fermi Estimable.
- They should progressively scale in difficulty, so you can see whether somebody understands basic probability or not. (E.g., in an 'or' question, do they put a probability of less than 50% on being right?)
At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
Probability Questions
| Question | Mean | Median | Mode | Stdev |
|---|---|---|---|---|
| Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? | 49.821 | 50.0 | 50.0 | 3.033 |
| What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? | 44.599 | 50.0 | 50.0 | 29.193 |
| What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? | 75.727 | 90.0 | 99.0 | 31.893 |
| ...in the Milky Way galaxy? | 45.966 | 50.0 | 10.0 | 38.395 |
| What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? | 13.575 | 1.0 | 1.0 | 27.576 |
| What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? | 15.474 | 1.0 | 1.0 | 27.891 |
| What is the probability that any of humankind's revealed religions is more or less correct? | 10.624 | 0.5 | 1.0 | 26.257 |
| What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? | 21.225 | 10.0 | 5.0 | 26.782 |
| What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? | 25.263 | 10.0 | 1.0 | 30.510 |
| What is the probability that our universe is a simulation? | 25.256 | 10.0 | 50.0 | 28.404 |
| What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? | 83.307 | 90.0 | 90.0 | 23.167 |
| What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? | 76.310 | 80.0 | 80.0 | 22.933 |
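The first row of the table doubles as an attention check ("so I can automatically throw away all surveys that don't follow the rules"). As a sketch of how such a filter might look (my own illustration; the `CoinFlipProbability` field name is hypothetical, not the survey export's real column name):

```python
# Hedged sketch of the attention-check filter implied by the first
# question: discard any survey whose fair-coin answer isn't 50%.
# The "CoinFlipProbability" field name is an assumption.
def keep_rule_followers(rows, tolerance=0.5):
    return [r for r in rows
            if r.get("CoinFlipProbability") is not None
            and abs(float(r["CoinFlipProbability"]) - 50.0) <= tolerance]

rows = [{"CoinFlipProbability": 50.0}, {"CoinFlipProbability": 80.0}]
print(len(keep_rule_followers(rows)))  # 1 -- the 80% answer is dropped
```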
The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.
Futurology
This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to the previous years' questions.
Cryonics
Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.
sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";
14
sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");
34
LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.
The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.
Singularity
SingularityYear
By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.
Mean: 8.110300081581755e+16
Median: 2080.0
Mode: 2100.0
Stdev: 2.847858859055733e+18
I didn't bother to filter out the silly answers for this. Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
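If you did want to filter them, clipping answers to a plausible window before summarizing is enough to stop a handful of joke answers from dominating the mean. A minimal sketch (assuming the answers are already parsed into numbers; the plausible range is my own arbitrary choice):

```python
import statistics

def summarize_years(years, lo=2016, hi=3000):
    """Summary stats over answers clipped to a plausible window,
    so joke answers like 10**18 can't swamp the mean."""
    kept = [y for y in years if lo <= y <= hi]
    return {"n": len(kept),
            "mean": statistics.mean(kept),
            "median": statistics.median(kept)}

print(summarize_years([2040, 2080, 2100, 8.1e16]))
# {'n': 3, 'mean': 2073.33..., 'median': 2080} -- the 8.1e16 answer is dropped
```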
Genetic Engineering
Well that's fairly overwhelming.
I find it amusing how the strict "No" group shrinks considerably after this question.
This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.
sqlite> select count(*) from data where GeneticImprovement="Yes";
1100
>>> 1100 + 176 + 262 + 84  # total responses across the four answer options
1622
>>> 1100 / 1622  # fraction answering "Yes"
0.6781750924784217
67.8% are willing to genetically engineer their children for improvements.
These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.
All three of these seem largely consistent with people's personal preferences about modification. Were I so inclined, I could do a deeper analysis that actually takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.
Technological Unemployment
LudditeFallacy
Do you think the Luddite's Fallacy is an actual fallacy?
Yes: 443 (30.936%)
No: 989 (69.064%)
We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.
UnemploymentYear
By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.
Mean: 2102.9713740458014
Median: 2050.0
Mode: 2050.0
Stdev: 1180.2342850727339
The question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it. Still, it's an interesting question that would be fun to look at in comparison to the estimates for the Singularity.
EndOfWork
Do you think the "end of work" would be a good thing?
Yes: 1238 (81.287%)
No: 285 (18.713%)
Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.
EndOfWorkConcerns
If machines end all or almost all employment, what are your biggest worries? Pick two.
| Answer | Count | Percent |
|---|---|---|
| People will just idle about in destructive ways | 513 | 16.71% |
| People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst | 543 | 17.687% |
| The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty | 1066 | 34.723% |
| The machines won't need us, and we'll starve to death or be otherwise liquidated | 416 | 13.55% |
The plurality of worries are about elites who refuse to share their wealth.
Existential Risk
XRiskType
Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?
(The second figure in parentheses is the percentage-point change from the last survey.)
Nuclear war: 326 (20.6%, +4.8%)
Asteroid strike: 64 (4.1%, -0.2%)
Unfriendly AI: 271 (17.2%, +1.0%)
Nanotech / grey goo: 18 (1.1%, -2.0%)
Pandemic (natural): 120 (7.6%, +0.1%)
Pandemic (bioengineered): 355 (22.5%, +1.9%)
Environmental collapse (including global warming): 252 (16.0%, +1.5%)
Economic / political collapse: 136 (8.6%, -1.4%)
Other: 35 (2.2%)
Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.
Charity And Effective Altruism
Charitable Giving
Income
What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.
Sum: 66054140.47384
Mean: 64569.052271593355
Median: 40000.0
Mode: 30000.0
Stdev: 107297.53606321265
IncomeCharityPortion
How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000
Sum: 2389900.6530000004
Mean: 2914.5129914634144
Median: 353.0
Mode: 100.0
Stdev: 9471.962766896671
XriskCharity
How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?
Sum: 169300.89
Mean: 1991.7751764705883
Median: 200.0
Mode: 100.0
Stdev: 9219.941506342007
CharityDonations
How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.
| Charity | Sum | Mean | Median | Mode | Stdev |
|---|---|---|---|---|---|
| Against Malaria Foundation | 483935.027 | 1905.256 | 300.0 | None | 7216.020 |
| Schistosomiasis Control Initiative | 47908.0 | 840.491 | 200.0 | 1000.0 | 1618.785 |
| Deworm the World Initiative | 28820.0 | 565.098 | 150.0 | 500.0 | 1432.712 |
| GiveDirectly | 154410.177 | 1429.723 | 450.0 | 50.0 | 3472.082 |
| Any kind of animal rights charity | 83130.47 | 1093.821 | 154.235 | 500.0 | 2313.493 |
| Any kind of bug rights charity | 1083.0 | 270.75 | 157.5 | None | 353.396 |
| Machine Intelligence Research Institute | 141792.5 | 1417.925 | 100.0 | 100.0 | 5370.485 |
| Any charity combating nuclear existential risk | 491.0 | 81.833 | 75.0 | 100.0 | 68.060 |
| Any charity combating global warming | 13012.0 | 245.509 | 100.0 | 10.0 | 365.542 |
| Center For Applied Rationality | 127101.0 | 3177.525 | 150.0 | 100.0 | 12969.096 |
| Strategies for Engineered Negligible Senescence Research Foundation | 9429.0 | 554.647 | 100.0 | 20.0 | 1156.431 |
| Wikipedia | 12765.5 | 53.189 | 20.0 | 10.0 | 126.444 |
| Internet Archive | 2975.04 | 80.406 | 30.0 | 50.0 | 173.791 |
| Any campaign for political office | 38443.99 | 366.133 | 50.0 | 50.0 | 1374.305 |
| Other | 564890.46 | 1661.442 | 200.0 | 100.0 | 4670.805 |
This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.
Effective Altruism
Vegetarian
Do you follow any dietary restrictions related to animal products?
Yes, I am vegan: 54 (3.4%)
Yes, I am vegetarian: 158 (10.0%)
Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)
No: 996 (62.9%)
EAKnowledge
Do you know what Effective Altruism is?
Yes: 1562 (89.3%)
No but I've heard of it: 114 (6.5%)
No: 74 (4.2%)
EAIdentity
Do you self-identify as an Effective Altruist?
Yes: 665 (39.233%)
No: 1030 (60.767%)
The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up or not, but if we take the numbers at face value it experienced an 11.13% increase in membership.
EACommunity
Do you participate in the Effective Altruism community?
Yes: 314 (18.427%)
No: 1390 (81.573%)
Same issue as last; taking the numbers at face value, community participation went up by 5.727%.
EADonations
Has Effective Altruism caused you to make donations you otherwise wouldn't?
Yes: 666 (39.269%)
No: 1030 (60.731%)
Wowza!
Effective Altruist Anxiety
EAAnxiety
Have you ever had any kind of moral anxiety over Effective Altruism?
Yes: 501 (29.6%)
Yes but only because I worry about everything: 184 (10.9%)
No: 1008 (59.5%)
There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.
It certainly appears to be. But is moral anxiety effective? Let's look:
Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574
Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807
Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913
Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312
It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?
Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.
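Since the post doesn't show the code behind those conditionals, here is a minimal sketch of how they fall out of the raw counts (the field names mirror the question IDs above, but the export's actual column names and answer strings are assumptions):

```python
# Minimal sketch: conditional probability as a ratio of counts.
# "EAIdentity" and "EAAnxiety" mirror the question IDs in this post;
# the real survey export's names and values are assumptions.
def conditional_probability(rows, event, given):
    given_rows = [r for r in rows if given(r)]
    return sum(1 for r in given_rows if event(r)) / len(given_rows)

# Toy data standing in for the real survey rows.
rows = [
    {"EAIdentity": "Yes", "EAAnxiety": "Yes"},
    {"EAIdentity": "No",  "EAAnxiety": "Yes"},
    {"EAIdentity": "No",  "EAAnxiety": "No"},
    {"EAIdentity": "Yes", "EAAnxiety": "No"},
]
p = conditional_probability(
    rows,
    event=lambda r: r["EAIdentity"] == "Yes",
    given=lambda r: r["EAAnxiety"] == "Yes",
)
print(p)  # 0.5 on this toy data, the same shape as the stat above
```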
EAOpinion
What's your overall opinion of Effective Altruism?
Positive: 809 (47.6%)
Mostly Positive: 535 (31.5%)
No strong opinion: 258 (15.2%)
Mostly Negative: 75 (4.4%)
Negative: 24 (1.4%)
EA appears to be doing a pretty good job of getting people to like it.
Interesting Tables
| Affiliation | Income | Charity Contributions | % Income Donated To Charity | Total Survey Charity % | Sample Size |
|---|---|---|---|---|---|
| Anarchist | 1677900.0 | 72386.0 | 4.314% | 3.004% | 50 |
| Communist | 298700.0 | 19190.0 | 6.425% | 0.796% | 13 |
| Conservative | 1963000.04 | 62945.04 | 3.207% | 2.612% | 38 |
| Futarchist | 1497494.11 | 166254.0 | 11.102% | 6.899% | 31 |
| Left-Libertarian | 9681635.61 | 416084.0 | 4.298% | 17.266% | 245 |
| Libertarian | 11698523.0 | 214101.0 | 1.83% | 8.885% | 190 |
| Moderate | 3225475.0 | 90518.0 | 2.806% | 3.756% | 67 |
| Neoreactionary | 1383976.0 | 30890.0 | 2.232% | 1.282% | 28 |
| Objectivist | 399000.0 | 1310.0 | 0.328% | 0.054% | 10 |
| Other | 3150618.0 | 85272.0 | 2.707% | 3.539% | 132 |
| Pragmatist | 5087007.61 | 266836.0 | 5.245% | 11.073% | 131 |
| Progressive | 8455500.44 | 368742.78 | 4.361% | 15.302% | 217 |
| Social Democrat | 8000266.54 | 218052.5 | 2.726% | 9.049% | 237 |
| Socialist | 2621693.66 | 78484.0 | 2.994% | 3.257% | 126 |
| Community | Count | % In Community | Sample Size |
|---|---|---|---|
| LessWrong | 136 | 38.418% | 354 |
| LessWrong Meetups | 109 | 50.463% | 216 |
| LessWrong Facebook Group | 83 | 48.256% | 172 |
| LessWrong Slack | 22 | 39.286% | 56 |
| SlateStarCodex | 343 | 40.98% | 837 |
| Rationalist Tumblr | 175 | 49.716% | 352 |
| Rationalist Facebook | 89 | 58.94% | 151 |
| Rationalist Twitter | 24 | 40.0% | 60 |
| Effective Altruism Hub | 86 | 86.869% | 99 |
| Good Judgement(TM) Open | 23 | 74.194% | 31 |
| PredictionBook | 31 | 51.667% | 60 |
| Hacker News | 91 | 35.968% | 253 |
| #lesswrong on freenode | 19 | 24.675% | 77 |
| #slatestarcodex on freenode | 9 | 24.324% | 37 |
| #chapelperilous on freenode | 2 | 18.182% | 11 |
| /r/rational | 117 | 42.545% | 275 |
| /r/HPMOR | 110 | 47.414% | 232 |
| /r/SlateStarCodex | 93 | 37.959% | 245 |
| One or more private 'rationalist' groups | 91 | 47.15% | 193 |
| Affiliation | EA Income | EA Charity | Sample Size |
|---|---|---|---|
| Anarchist | 761000.0 | 57500.0 | 18 |
| Futarchist | 559850.0 | 114830.0 | 15 |
| Left-Libertarian | 5332856.0 | 361975.0 | 112 |
| Libertarian | 2725390.0 | 114732.0 | 53 |
| Moderate | 583247.0 | 56495.0 | 22 |
| Other | 1428978.0 | 69950.0 | 49 |
| Pragmatist | 1442211.0 | 43780.0 | 43 |
| Progressive | 4004097.0 | 304337.78 | 107 |
| Social Democrat | 3423487.45 | 149199.0 | 93 |
| Socialist | 678360.0 | 34751.0 | 41 |
The call of the void
Original post: http://bearlamp.com.au/the-call-of-the-void
L'appel du vide - The call of the void.
When you are standing on the balcony of a tall building, looking down at the ground, on some track your brain says, "what would it feel like to jump?". When you are holding a kitchen knife, it thinks, "I wonder if this is sharp enough to cut myself with". When you are waiting for a train, it asks, "what would it be like to step in front of that train?". Maybe it's happened with rope around your neck, or power tools, or "what if I take all the pills in the bottle". Or touch these wires together, or crash the plane, crash the car, just veer off. Lean over the cliff... try to anger the snake, stick my fingers in the moving fan... Or the acid. Or the fire.
There's a strange phenomenon where our brains seem to do this "I wonder what the consequences of this dangerous thing are" routine, and we don't know why it happens. There has only been one paper (sorry, it's behind a paywall) on the concept, and all it really did is identify the phenomenon. I quite like the paper for quoting both Captain Jack Sparrow (“You know that feeling you get when you're standing in a high place… sudden urge to jump?… I don't have it”, Pirates of the Caribbean: On Stranger Tides, 2011) and Freud ("a drive to return to an inanimate state of existence", 1922).
Taking a look at their method: they surveyed 431 undergraduates about their experiences of what they coined the High Place Phenomenon (HPP). They found that 30% of their respondents had experienced HPP, and tried to measure whether it was related to anxiety or suicide. They also proposed a theory.
...we propose that at its core, the experience of the high place phenomenon stems from the misinterpretation of a safety or survival signal. (e.g., “back up, you might fall”)
I want to believe it, but today there are literally no other papers on the topic, and no evidence either way. So all I can say is: we don't really know. S'weird. Dunno.
This week I met someone who uncomfortably described their experience of toying with l'appel du vide. I explained to them how this is a common and confusing phenomenon, and they said with relief, "it's not like I want to jump!". Around 5 years ago (before I knew its name) an old friend recounted the experience of wondering what it would be like to step in front of moving buses (with discomfort) any time she was near a bus. I have coaxed a friend out of the middle of a road (they weren't drunk and weren't on drugs at the time), and dragged friends out of the ocean. I have it with knives, in a way that borders on OCD behaviour: the desire to look at and examine the sharp edges.
What I do know is this: it's normal. Very normal. Even if it's not 30% of the population, it could easily be 10 or 20%. Everyone has a right to know that it happens, that it's normal, and that you're not broken if you experience it. It's just as common a shared human experience as dreams of your teeth falling out, of flying, of running away from groups of people, or of being underwater. Or the experience of rehearsing what you want to say before making a phone call. Or walking into a room for a reason and forgetting what it was.
Next time you are struck with l'appel du vide, don't get uncomfortable. Accept that it's a neat thing that brains do, and that it's harmless. Experience it. And together with me, wonder why. Wonder what evolutionary benefit has given so many of us l'appel du vide.
And be careful.
Meta: this took one hour to write.
Do you want to be like Kuro5hin? Because this is how you get to be like Kuro5hin.
I log in this morning on a whim, and notice I have -15 karma. I dig around for a bit and find this:
http://lesswrong.com/lw/nsm/open_thread_jul_25_jul_31_2016/ddjm
To be clear, that's a block of four comments, each at -10, for no apparent good reason other than that Eugine Nier has a vendetta against Elo. I've apparently just been hit as splash damage, since I had the gall to try posting on an Elo comment thread.
I dig a little more, and I find this:
http://lesswrong.com/user/Elo/overview/
That's Elo's page, and I see a pile of discussion-grade posts that are all bulk-downvoted below visibility, again for no apparent good reason.
I find myself incredibly disincentivized to post or comment as a result of this. My feeble amount of karma has taken literally years to build up, and to see sizable fractions of it wiped out any time I step on a Eugine Nier landmine is bullshit. Sure, it's silly to value karma, but I value it anyway, and if a year of incidental effort can be burned in two days because one guy wants to be an asshole to me, then I'm done here.
This has been going on for months. Years even.
I understand the staff of LW are pressed for time. I understand nobody understands how the code works. I understand that maintaining the site is hard. However, reality is that which does not go away when we close our eyes, and reality does not care: no matter how difficult the problems are, the fact remains that this sort of thing is abusive and it is actively driving people off the site.
If you value LW, fix this. Use the force harder, site owners.
On the other hand, if you want LW to turn into another Kuro5hin, then keep doing what you're doing.
Prediction: 50% odds this post will be downvoted below visibility within two days due to Eugine, and will basically disappear without a trace.
Prediction: if this isn't dealt with soon, 50% odds I'll stop visiting LW completely other than as an article archive by year end, because there's no goddamned point in trying to use the discussion system.
Deepmind Plans for Rat-Level AI
Demis Hassabis gives a great presentation on the state of Deepmind's work as of April 20, 2016. Skip to 23:12 for the statement of the goal of creating a rat-level AI -- "an AI that can do everything a rat can do," in his words. From his tone, it sounds like this is a short-term rather than a long-term goal.
I don't think Hassabis is prone to making unrealistic plans or stating overly bold predictions. I strongly encourage you to scan through Deepmind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because it seems like they add a new paper about twice a month.) The outfit seems to be systematically knocking down all the "Holy Grail" milestones on the way to GAI, and this is just Deepmind. The papers they've put out in just the last year or so concern successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and these are just the most stand-out achievements. There's even a paper co-authored by Stuart Armstrong concerning Friendliness concepts on that list.
If we really do have a genuinely rat-level AI within the next couple of years, I think that would justify radically moving forward expectations of AI development timetables. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would view it as a form of proof that "general" intelligence does not require some mysterious ingredient that we haven't discovered yet.
A Review of Signal Data Science
I took part in the second Signal Data Science cohort earlier this year. Since I found out about Signal through a SlateStarCodex post a few months back (it was also covered here on Less Wrong), I thought it would be good to return the favor and write a review of the program.
The tl;dr version:
Going to Signal was a really good decision. Before the program I had been doing teaching work and some web development consulting to make ends meet; now I have a job offer as a senior machine learning researcher1. The time I spent at Signal was definitely necessary for me to get this job offer, as well as another very attractive data science offer that is my "second choice" job. I haven't paid anything to Signal up front, but I will have to pay them a fraction of my salary for the next year, capped at 10% and a maximum total payment of $25k.
The longer version:
Obviously a ~12 week curriculum is not going to be a magic pill that turns a nontechnical, averagely intelligent person into a super-genius with job offers from Google and Facebook. In order to benefit from Signal, you should already be somewhat above average in terms of intelligence and intellectual curiosity. If you have never programmed and/or never studied mathematics beyond high school2, you will probably not benefit from Signal, in my opinion. Also, if you don't already understand statistics and probability to a good degree, they will not have time to teach you. What they will do is teach you how to be really good with R, have you do some practical machine learning, and have you learn some SQL, all of which are hugely important for passing data science job interviews. As a bonus, you may be lucky enough (as I was) to explore more advanced machine learning techniques with other program participants or alumni and build some experience for yourself as a machine learning hacker.
As stated above, you don't pay anything up front, and cheap accommodation is available. If you are in a situation similar to mine, not paying up front is a huge bonus. The salary fraction is comparatively small, too, and it only lasts for one year. I almost feel like I am underpaying them.
This critical comment by fluttershy almost put me off, and I'm glad it didn't. The program is not exactly "self-directed" - there is a daily schedule and a clear path to work through, though they are flexible about it. Admittedly there isn't a constant feed of staff time for your every whim - ideally there would be 10-20 Jonahs, one per student; there's no way to offer that kind of service at a reasonable price. Communication between staff and students seemed to be very good, and key aspects of the program were well organised. So don't let perfect be the enemy of good: what you're getting is an excellent focused training program to learn R and some basic machine learning, and that's what you need to progress to the next stage of your career.
Our TA for the cohort, Andrew Ho, worked tirelessly to make sure our needs were met, both academically and in terms of running the house. Jonah was extremely helpful when you needed to debug something or clarify a misunderstanding. His lectures on selected topics were excellent. Robert's Saturday sessions on interview technique were good, though I felt that over time they became less valuable as some people got more out of interview practice than others.
I am still in touch with some of the people I met in my cohort; even though I had to leave the country, I consider them pals and we keep in touch about how our job searches are going. People have offered to recommend me to companies as a result of Signal. As a networking push, going to Signal is certainly a good move.
Highly recommended for smart people who need a helping hand to launch a technical career in data science.
1: I haven't signed the contract yet as my new boss is on holiday, but I fully intend to follow up when that process completes (or not). Watch this space.
2: or equivalent - if you can do mathematics such as matrix algebra, know what the normal distribution is, understand basic probability theory such as how to calculate the expected value of a dice roll, etc, you are probably fine.
Even Odds
(Cross-posted on my personal blog, which has LaTeX, and is easier to read.)
Let's say that you are at your local Less Wrong meetup, and someone makes some strong claim and seems very sure of himself: "blah blah blah resurrected blah blah alicorn princess blah blah 99 percent sure." You think he is probably correct, you estimate a 67 percent chance, but you think he is way overconfident. "Wanna bet?" you ask.
"Sure," he responds, and you both check your wallets and have 25 dollars each. "Okay," he says, "now you pick some betting odds, and I'll choose which side I want to pick."
"That's crazy," you say, "I am going to pick the odds so that I cannot be taken advantage of, which means that I will be indifferent between which of the two options you pick, which means that I will expect to gain 0 dollars from this transaction. I wont take it. It is not fair!"
"Okay," he says, annoyed with you. "We will both write down the probability we think that I am correct, average them together, and that will determine the betting odds. We'll bet as much as we can of our 25 dollars with those odds."
"What do you mean by 'average' I can think of at least four possibilities. Also, since I know your probability is high, I will just choose a high probability that is still less than it to maximize the odds in my favor regardless of my actual belief. Your proposition is not strategy proof."
"Fine, what do you suggest?"
You take out some paper, solve some differential equations, and explain how the bet should go.
Satisfied with your math, you share your probability, he puts 13.28 on the table, and you put 2.72 on the table.
"Now what?" He asks.
A third meetup member quickly takes the 16 dollars from the table and answers, "You wait."
I will now derive a general algorithm for determining a bet from two probabilities and a maximum amount of money that people are willing to bet. This algorithm is both strategy proof and fair. The solution turns out to be simple, so if you like, you can skip to the last paragraph, and use it next time you want to make a friendly bet. If you want to try to derive the solution on your own, you might want to stop reading now.
First, we have to be clear about what we mean by strategy proof and fair. "Strategy proof" is clear. Our algorithm should ensure that neither person believes that they can increase their expected profit by lying about their probabilities. "Fair" will be a little harder to define. There is more than one way to define "fair" in this context, but there is one way which I think is probably the best. When the players make the bet, they both will expect to make some profit. They will not both be correct, but they will both believe they are expected to make profit. I claim the bet is fair if both players expect to make the same profit on average.
Now, let's formalize the problem:
Alice believes S is true with probability p. Bob believes S is false with probability q. Both players are willing to bet up to d dollars. Without loss of generality, assume p+q>1. Our betting algorithm will output a dollar amount, f(p,q), for Alice to put on the table, and a dollar amount, g(p,q), for Bob to put on the table. Then if S is true, Alice gets all the money, and if S is false, Bob gets all the money.
From Alice's point of view, her expected profit is p(g(p,q))+(1-p)(-f(p,q)).
From Bob's point of view, his expected profit is q(f(p,q))+(1-q)(-g(p,q)).
Setting these two values equal, and simplifying, we get that (1+p-q)g(p,q)=(1+q-p)f(p,q), which is the condition that the betting algorithm is fair.
For convenience of notation, we will define h(p,q) by h(p,q)=g(p,q)/(1+q-p)=f(p,q)/(1+p-q).
Now, we want to look at what will happen if Alice lies about her probability. If instead of saying p, Alice were to say that her probability was r, then her expected profit would be p(g(r,q))+(1-p)(-f(r,q)), which equals p(1+q-r)h(r,q)+(1-p)(-(1+r-q)h(r,q))=(2p-1-r+q)h(r,q).
We want this value, as a function of r, to be maximized when r=p; differentiating with respect to r and then setting r=p gives -h+(-1+r+q)(dh/dr)=0.
Separation of variables gives us (1/h)dh=1/(-1+r+q)dr,
which integrates to ln(h)=C+ln(-1+r+q),
which exponentiates to h=e^C(-1+r+q); at r=p this is h=e^C(-1+p+q).
This gives the solution f(p,q)=e^C(-1+p+q)(1+p-q)=e^C(p^2-(1-q)^2) and g(p,q)=e^C(-1+p+q)(1+q-p)=e^C(q^2-(1-p)^2).
It is quick to verify that this solution is actually fair, and both players' expected profit is maximized by honest reporting of beliefs.
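If you'd rather have a computer do that verification, here is a quick symbolic check (my own sketch, using sympy, with e^C set to 1 since it only scales the stakes):

```python
import sympy as sp

p, q, r = sp.symbols('p q r')

# The derived stakes with e^C = 1, written as functions of Alice's
# reported probability r (honest reporting means r = p).
f = (-1 + r + q) * (1 + r - q)   # Alice's stake
g = (-1 + r + q) * (1 + q - r)   # Bob's stake

alice_profit = p * g - (1 - p) * f   # Alice's expected profit
bob_profit = q * f - (1 - q) * g     # Bob's expected profit

# Fair: both expectations agree when Alice reports honestly.
print(sp.simplify((alice_profit - bob_profit).subs(r, p)))  # 0

# Strategy proof: Alice's expected profit is stationary at r = p...
print(sp.simplify(sp.diff(alice_profit, r).subs(r, p)))     # 0
# ...and concave in r, so the stationary point is a maximum.
print(sp.simplify(sp.diff(alice_profit, r, 2)))             # -2
```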
The value of the constant multiplied out in front can be anything, and the most either player could ever have to put on the table is equal to this constant. Therefore, if both players are willing to bet up to d dollars, we should define e^C=d.
Alice and Bob are willing to bet up to d dollars, Alice thinks S is true with probability p, and Bob thinks S is false with probability q. Assuming p+q>1, Alice should put in d(p^2-(1-q)^2), while Bob should put in d(q^2-(1-p)^2). I suggest you use this algorithm next time you want to have a friendly wager (with a rational person), and I suggest you set d to 25 dollars and require both players to say an odd integer percent to ensure a whole number of cents. (With odd integer percents, p-(1-q) and p+(1-q) are each an even number of hundredths, so 25(p-(1-q))(p+(1-q)) comes out to a whole number of cents.)
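To make that recipe concrete, here is the whole procedure as a few lines of code (my own sketch, not from the original post; the example reproduces the story above, where he says 99% and you say 67%):

```python
def even_odds_stakes(p, q, d=25.0):
    """Alice believes S with probability p; Bob believes not-S with
    probability q; each is willing to bet up to d dollars.
    Returns (Alice's stake, Bob's stake). Winner takes the pot:
    Alice if S turns out true, Bob if it turns out false."""
    assert p + q > 1, "p + q <= 1 means no real disagreement to bet on"
    alice_stake = d * (p**2 - (1 - q)**2)
    bob_stake = d * (q**2 - (1 - p)**2)
    return alice_stake, bob_stake

# The story's numbers: he is 99% sure S is true; you give S 67%,
# i.e. you believe not-S with probability 0.33.
print(even_odds_stakes(0.99, 0.33))  # approximately (13.28, 2.72)
```

A pleasant corollary of the derivation: with these stakes, each player's expected profit by their own lights works out to d(p+q-1)^2, which is zero exactly when they don't actually disagree.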