Neutralizing Physical Annoyances
Once in a while, I learn something about a seemingly unrelated topic - such as freediving - and take away a trick that is well known and "obvious" within that topic, but generally useful and largely unknown outside it. Case in point: you can use equalization techniques from diving to relieve pressure in your ears when you descend in a plane or a fast lift. I give some other examples below.
Ears
Reading about a few equalization techniques took me maybe 5 minutes, and after reading this passage once I was able to successfully use the "Frenzel Maneuver":
The technique is to close off the vocal cords, as though you are about to lift a heavy weight. The nostrils are pinched closed and an effort is made to make a 'k' or a 'guh' sound. By doing this you raise the back of the tongue and the 'Adam's Apple' will elevate. This turns the tongue into a piston, pushing air up.
(source: http://freedivingexplained.blogspot.com.mt/2008/03/basics-of-freediving-equalization.html)
Hiccups
A few years ago, I started doing deep relaxation regularly after yoga. At some point, I learned how to relax my throat so that air can escape freely from the stomach. Since then, whenever I start hiccuping, I relax my throat and the hiccups stop immediately, every time. I am now 100% hiccup-free.
Stiff Shoulders
I spent a few hours with a friend who does massage, and they taught me some basics. After that, it became natural for me to self-massage my shoulders after long stretches of sitting work. I can't imagine living without this anymore.
Other?
If you know more, please share!
MIRI AMA plus updates
MIRI is running an AMA on the Effective Altruism Forum tomorrow (Wednesday, Oct. 11): Ask MIRI Anything. Questions are welcome in the interim!
Nate also recently posted a more detailed version of our 2016 fundraising pitch to the EA Forum. One of the additions is about our first funding target:
We feel reasonably good about our chance of hitting target 1, but it isn't a sure thing; we'll probably need to see support from new donors in order to hit our target, to offset the fact that a few of our regular donors are giving less than usual this year.
The "Why MIRI's Approach?" section also touches on topics that we haven't discussed in much detail in the past but plan to write blog posts about in the future. In particular:
Loosely speaking, we can imagine the space of all smarter-than-human AI systems as an extremely wide and heterogeneous space, in which "alignable AI designs" is a small and narrow target (and "aligned AI designs" smaller and narrower still). I think that the most important thing a marginal alignment researcher can do today is help ensure that the first generally intelligent systems humans design are in the “alignable” region. I think that this is unlikely to happen unless researchers have a fairly principled understanding of how the systems they're developing reason, and how that reasoning connects to the intended objectives.
Most of our work is therefore aimed at seeding the field with ideas that may inspire more AI research in the vicinity of (what we expect to be) alignable AI designs. When the first general reasoning machines are developed, we want the developers to be sampling from a space of designs and techniques that are more understandable and reliable than what’s possible in AI today.
In other news, we've uploaded a new intro talk on our most recent result, "Logical Induction," that goes into more of the technical details than our previous talk.
See also Shtetl-Optimized and n-Category Café for recent discussions of the paper.
2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)
Politics
The LessWrong survey has a very involved section dedicated to politics. Previous analyses didn't fully realize its benefits. In the 2016 analysis we can look not just at the political affiliation of a respondent, but at which beliefs are associated with each affiliation. The charts below summarize most of the results.
Political Opinions By Political Affiliation

Miscellaneous Politics
There were also some other questions in this section which aren't covered by the above charts.
Voting
| Group | Turnout |
|---|---|
| LessWrong | 68.9% |
| Australia | 91% |
| Brazil | 78.90% |
| Britain | 66.4% |
| Canada | 68.3% |
| Finland | 70.1% |
| France | 79.48% |
| Germany | 71.5% |
| India | 66.3% |
| Israel | 72% |
| New Zealand | 77.90% |
| Russia | 65.25% |
| United States | 54.9% |
Calibration And Probability Questions
Calibration Questions
I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read, and that sucked up an incredible amount of time; it's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.
All my calibration questions were meant to satisfy a few essential properties:
- They should be 'self-contained', i.e. something you can reasonably answer, or at least try to answer, with a 5th-grade science education and normal life experience.
- They should, at least to a certain extent, be Fermi Estimable.
- They should progressively scale in difficulty, so you can see whether somebody understands basic probability. (E.g., in an 'or' question, do they assign a probability of less than 50% to being right?)
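That last check rests on inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B), which can never be smaller than either marginal probability alone. A minimal sketch with illustrative numbers (not survey data):

```python
def p_or(p_a: float, p_b: float, p_a_and_b: float) -> float:
    """Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)."""
    return p_a + p_b - p_a_and_b

# Illustrative numbers only; the joint probability is assumed for the sketch.
p_a, p_b, p_both = 0.6, 0.3, 0.2
print(round(p_or(p_a, p_b, p_both), 2))  # 0.7 -- never below max(p_a, p_b)
```

So a respondent who gives an 'or' question a probability below the larger disjunct is violating basic probability theory.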
At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
Probability Questions
| Question | Mean | Median | Mode | Stdev |
|---|---|---|---|---|
| Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? | 49.821 | 50.0 | 50.0 | 3.033 |
| What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? | 44.599 | 50.0 | 50.0 | 29.193 |
| What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? | 75.727 | 90.0 | 99.0 | 31.893 |
| ...in the Milky Way galaxy? | 45.966 | 50.0 | 10.0 | 38.395 |
| What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? | 13.575 | 1.0 | 1.0 | 27.576 |
| What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? | 15.474 | 1.0 | 1.0 | 27.891 |
| What is the probability that any of humankind's revealed religions is more or less correct? | 10.624 | 0.5 | 1.0 | 26.257 |
| What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? | 21.225 | 10.0 | 5.0 | 26.782 |
| What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? | 25.263 | 10.0 | 1.0 | 30.510 |
| What is the probability that our universe is a simulation? | 25.256 | 10.0 | 50.0 | 28.404 |
| What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? | 83.307 | 90.0 | 90.0 | 23.167 |
| What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? | 76.310 | 80.0 | 80.0 | 22.933 |
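For reference, the summary columns in the table above can be reproduced with Python's statistics module; a sketch with made-up responses (in percent), not actual survey data:

```python
import statistics

# Hypothetical answers to one probability question, in percent.
answers = [50, 50, 50, 45, 55, 50, 48, 52]

print(statistics.mean(answers))    # mean
print(statistics.median(answers))  # median
print(statistics.mode(answers))    # mode (most common answer)
print(statistics.stdev(answers))   # sample standard deviation
```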
The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.
Futurology
This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to those from previous years.
Cryonics
Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.
sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";
14
sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");
34
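Long chains of `OR` comparisons like this are easy to get wrong (a branch that omits the `Cryonics=` silently compares a bare string instead); SQLite's `IN` operator is more robust. A minimal sketch using Python's sqlite3 with toy rows standing in for the survey data (column names and answer strings follow the queries above):

```python
import sqlite3

# Toy stand-in for the survey table; column names follow the queries above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (CryonicsNow TEXT, Cryonics TEXT)")
conn.executemany(
    "INSERT INTO data VALUES (?, ?)",
    [
        ("Yes", "Yes - signed up or just finishing up paperwork"),
        ("Yes", "No - would like to sign up but can't afford it"),
        ("No", "No - still considering it"),
    ],
)

wants_or_has = (
    "Yes - signed up or just finishing up paperwork",
    "No - would like to sign up but unavailable in my area",
    "No - would like to sign up but haven't gotten around to it",
    "No - would like to sign up but can't afford it",
)
placeholders = ",".join("?" * len(wants_or_has))
query = (
    "SELECT count(*) FROM data "
    f"WHERE CryonicsNow='Yes' AND Cryonics IN ({placeholders})"
)
(count,) = conn.execute(query, wants_or_has).fetchone()
print(count)  # 2 for this toy data
```

Parameterizing the answer strings also sidesteps the quoting headaches from the apostrophes in the survey options.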
LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.
The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.
Singularity
SingularityYear
By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.
Mean: 8.110300081581755e+16
Median: 2080.0
Mode: 2100.0
Stdev: 2.847858859055733e+18
I didn't bother to filter out the silly answers for this. Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
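The absurd mean and standard deviation come from a handful of enormous year answers. A quick sketch of how filtering to a plausible window changes the summary (made-up responses, not the survey data, and the cutoff is an assumption):

```python
import statistics

# Made-up answers illustrating the skew; not the actual survey responses.
years = [2040, 2060, 2080, 2100, 2150, 9_999_999_999_999]

print(statistics.mean(years))    # blown up by the single absurd answer
print(statistics.median(years))  # robust to it

plausible = [y for y in years if 2016 <= y <= 3000]  # assumed cutoff
print(statistics.mean(plausible))
print(statistics.median(plausible))
```

The median barely moves under filtering, which is why it's the more trustworthy headline number here.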
Genetic Engineering
Well that's fairly overwhelming.
I find it amusing how the strict "No" group shrinks considerably after this question.
This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with exactly the same distribution, but only 13 or so responses are filtered out anyway.
sqlite> select count(*) from data where GeneticImprovement="Yes";
1100
>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217
67.8% are willing to genetically engineer their children for improvements.
These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.
All three of these seem largely consistent with people's personal preferences about modification. Were I so inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.
Technological Unemployment
LudditeFallacy
Do you think the Luddite's Fallacy is an actual fallacy?
Yes: 443 (30.936%)
No: 989 (69.064%)
We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.
UnemploymentYear
By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.
Mean: 2102.97
Median: 2050.0
Mode: 2050.0
Stdev: 1180.23
The question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it. Still, it's an interesting question that would be fun to compare against the Singularity estimates.
EndOfWork
Do you think the "end of work" would be a good thing?
Yes: 1238 (81.287%)
No: 285 (18.713%)
Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.
EndOfWorkConcerns
If machines end all or almost all employment, what are your biggest worries? Pick two.
| Answer | Count | Percent |
|---|---|---|
| People will just idle about in destructive ways | 513 | 16.71% |
| People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst | 543 | 17.687% |
| The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty | 1066 | 34.723% |
| The machines won't need us, and we'll starve to death or be otherwise liquidated | 416 | 13.55% |
The plurality of worries are about elites who refuse to share their wealth.
Existential Risk
XRiskType
Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?
Nuclear war: +4.800% 326 (20.6%)
Asteroid strike: -0.200% 64 (4.1%)
Unfriendly AI: +1.000% 271 (17.2%)
Nanotech / grey goo: -2.000% 18 (1.1%)
Pandemic (natural): +0.100% 120 (7.6%)
Pandemic (bioengineered): +1.900% 355 (22.5%)
Environmental collapse (including global warming): +1.500% 252 (16.0%)
Economic / political collapse: -1.400% 136 (8.6%)
Other: 35 (2.217%)
Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.
Charity And Effective Altruism
Charitable Giving
Income
What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.
Sum: 66054140.47
Mean: 64569.05
Median: 40000.0
Mode: 30000.0
Stdev: 107297.54
IncomeCharityPortion
How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000
Sum: 2389900.65
Mean: 2914.51
Median: 353.0
Mode: 100.0
Stdev: 9471.96
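Dividing the two reported sums gives a rough overall giving rate for the sample (raw sums, ignoring respondents who skipped either question):

```python
# Sums reported above for annual income and charitable donations.
total_income = 66_054_140.47
total_donated = 2_389_900.65

rate = total_donated / total_income
print(f"{rate:.1%}")  # roughly 3.6% of reported income donated
```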
XriskCharity
How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?
Sum: 169300.89
Mean: 1991.78
Median: 200.0
Mode: 100.0
Stdev: 9219.94
CharityDonations
How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.
| Charity | Sum | Mean | Median | Mode | Stdev |
|---|---|---|---|---|---|
| Against Malaria Foundation | 483935.027 | 1905.256 | 300.0 | None | 7216.020 |
| Schistosomiasis Control Initiative | 47908.0 | 840.491 | 200.0 | 1000.0 | 1618.785 |
| Deworm the World Initiative | 28820.0 | 565.098 | 150.0 | 500.0 | 1432.712 |
| GiveDirectly | 154410.177 | 1429.723 | 450.0 | 50.0 | 3472.082 |
| Any kind of animal rights charity | 83130.47 | 1093.821 | 154.235 | 500.0 | 2313.493 |
| Any kind of bug rights charity | 1083.0 | 270.75 | 157.5 | None | 353.396 |
| Machine Intelligence Research Institute | 141792.5 | 1417.925 | 100.0 | 100.0 | 5370.485 |
| Any charity combating nuclear existential risk | 491.0 | 81.833 | 75.0 | 100.0 | 68.060 |
| Any charity combating global warming | 13012.0 | 245.509 | 100.0 | 10.0 | 365.542 |
| Center For Applied Rationality | 127101.0 | 3177.525 | 150.0 | 100.0 | 12969.096 |
| Strategies for Engineered Negligible Senescence Research Foundation | 9429.0 | 554.647 | 100.0 | 20.0 | 1156.431 |
| Wikipedia | 12765.5 | 53.189 | 20.0 | 10.0 | 126.444 |
| Internet Archive | 2975.04 | 80.406 | 30.0 | 50.0 | 173.791 |
| Any campaign for political office | 38443.99 | 366.133 | 50.0 | 50.0 | 1374.305 |
| Other | 564890.46 | 1661.442 | 200.0 | 100.0 | 4670.805 |
This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.
Effective Altruism
Vegetarian
Do you follow any dietary restrictions related to animal products?
Yes, I am vegan: 54 (3.4%)
Yes, I am vegetarian: 158 (10.0%)
Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)
No: 996 (62.9%)
EAKnowledge
Do you know what Effective Altruism is?
Yes: 1562 (89.3%)
No but I've heard of it: 114 (6.5%)
No: 74 (4.2%)
EAIdentity
Do you self-identify as an Effective Altruist?
Yes: 665 (39.233%)
No: 1030 (60.767%)
The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but taking the numbers at face value it experienced an 11.13% increase in membership.
EACommunity
Do you participate in the Effective Altruism community?
Yes: 314 (18.427%)
No: 1390 (81.573%)
Same issue as above: taking the numbers at face value, community participation went up by 5.727%.
EADonations
Has Effective Altruism caused you to make donations you otherwise wouldn't?
Yes: 666 (39.269%)
No: 1030 (60.731%)
Wowza!
Effective Altruist Anxiety
EAAnxiety
Have you ever had any kind of moral anxiety over Effective Altruism?
Yes: 501 (29.6%)
Yes but only because I worry about everything: 184 (10.9%)
No: 1008 (59.5%)
There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.
It certainly appears to be. But is moral anxiety effective? Let's look:
Sample Size: 244
Average amount donated by people anxious about EA who aren't EAs: 257.54
Sample Size: 679
Average amount donated by people not anxious about EA who aren't EAs: 479.75
Sample Size: 249
Average amount donated by EAs anxious about EA: 1841.53
Sample Size: 314
Average amount donated by EAs not anxious about EA: 1837.82
It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?
Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.
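For what it's worth, that conditional probability can be re-derived from the anxious-donor sample sizes reported earlier, assuming those samples cover the same respondents:

```python
# Counts taken from the donation samples reported above (an assumption:
# that they cover the same respondents as the probability calculation).
ea_anxious = 249        # EAs anxious about EA
non_ea_anxious = 244    # non-EAs anxious about EA

p_ea_given_anxiety = ea_anxious / (ea_anxious + non_ea_anxious)
print(round(p_ea_given_anxiety, 3))  # about 0.505, consistent with the 0.5 above
```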
EAOpinion
What's your overall opinion of Effective Altruism?
Positive: 809 (47.6%)
Mostly Positive: 535 (31.5%)
No strong opinion: 258 (15.2%)
Mostly Negative: 75 (4.4%)
Negative: 24 (1.4%)
EA appears to be doing a pretty good job of getting people to like them.
Interesting Tables
| Affiliation | Income | Charity Contributions | % Income Donated To Charity | Total Survey Charity % | Sample Size |
|---|---|---|---|---|---|
| Anarchist | 1677900.0 | 72386.0 | 4.314% | 3.004% | 50 |
| Communist | 298700.0 | 19190.0 | 6.425% | 0.796% | 13 |
| Conservative | 1963000.04 | 62945.04 | 3.207% | 2.612% | 38 |
| Futarchist | 1497494.11 | 166254.0 | 11.102% | 6.899% | 31 |
| Left-Libertarian | 9681635.61 | 416084.0 | 4.298% | 17.266% | 245 |
| Libertarian | 11698523.0 | 214101.0 | 1.83% | 8.885% | 190 |
| Moderate | 3225475.0 | 90518.0 | 2.806% | 3.756% | 67 |
| Neoreactionary | 1383976.0 | 30890.0 | 2.232% | 1.282% | 28 |
| Objectivist | 399000.0 | 1310.0 | 0.328% | 0.054% | 10 |
| Other | 3150618.0 | 85272.0 | 2.707% | 3.539% | 132 |
| Pragmatist | 5087007.61 | 266836.0 | 5.245% | 11.073% | 131 |
| Progressive | 8455500.44 | 368742.78 | 4.361% | 15.302% | 217 |
| Social Democrat | 8000266.54 | 218052.5 | 2.726% | 9.049% | 237 |
| Socialist | 2621693.66 | 78484.0 | 2.994% | 3.257% | 126 |
| Community | Count | % In Community | Sample Size |
|---|---|---|---|
| LessWrong | 136 | 38.418% | 354 |
| LessWrong Meetups | 109 | 50.463% | 216 |
| LessWrong Facebook Group | 83 | 48.256% | 172 |
| LessWrong Slack | 22 | 39.286% | 56 |
| SlateStarCodex | 343 | 40.98% | 837 |
| Rationalist Tumblr | 175 | 49.716% | 352 |
| Rationalist Facebook | 89 | 58.94% | 151 |
| Rationalist Twitter | 24 | 40.0% | 60 |
| Effective Altruism Hub | 86 | 86.869% | 99 |
| Good Judgement(TM) Open | 23 | 74.194% | 31 |
| PredictionBook | 31 | 51.667% | 60 |
| Hacker News | 91 | 35.968% | 253 |
| #lesswrong on freenode | 19 | 24.675% | 77 |
| #slatestarcodex on freenode | 9 | 24.324% | 37 |
| #chapelperilous on freenode | 2 | 18.182% | 11 |
| /r/rational | 117 | 42.545% | 275 |
| /r/HPMOR | 110 | 47.414% | 232 |
| /r/SlateStarCodex | 93 | 37.959% | 245 |
| One or more private 'rationalist' groups | 91 | 47.15% | 193 |
| Affiliation | EA Income | EA Charity | Sample Size |
|---|---|---|---|
| Anarchist | 761000.0 | 57500.0 | 18 |
| Futarchist | 559850.0 | 114830.0 | 15 |
| Left-Libertarian | 5332856.0 | 361975.0 | 112 |
| Libertarian | 2725390.0 | 114732.0 | 53 |
| Moderate | 583247.0 | 56495.0 | 22 |
| Other | 1428978.0 | 69950.0 | 49 |
| Pragmatist | 1442211.0 | 43780.0 | 43 |
| Progressive | 4004097.0 | 304337.78 | 107 |
| Social Democrat | 3423487.45 | 149199.0 | 93 |
| Socialist | 678360.0 | 34751.0 | 41 |
UC Berkeley launches Center for Human-Compatible Artificial Intelligence
Source article: http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/
UC Berkeley artificial intelligence (AI) expert Stuart Russell will lead a new Center for Human-Compatible Artificial Intelligence, launched this week.
Russell, a UC Berkeley professor of electrical engineering and computer sciences and the Smith-Zadeh Professor in Engineering, is co-author of Artificial Intelligence: A Modern Approach, which is considered the standard text in the field of artificial intelligence, and has been an advocate for incorporating human values into the design of AI.
The primary focus of the new center is to ensure that AI systems are beneficial to humans, he said.
The co-principal investigators for the new center include computer scientists Pieter Abbeel and Anca Dragan and cognitive scientist Tom Griffiths, all from UC Berkeley; computer scientists Bart Selman and Joseph Halpern, from Cornell University; and AI experts Michael Wellman and Satinder Singh Baveja, from the University of Michigan. Russell said the center expects to add collaborators with related expertise in economics, philosophy and other social sciences.
The center is being launched with a grant of $5.5 million from the Open Philanthropy Project, with additional grants for the center’s research from the Leverhulme Trust and the Future of Life Institute.
Russell is quick to dismiss the imaginary threat from the sentient, evil robots of science fiction. The issue, he said, is that machines as we currently design them in fields like AI, robotics, control theory and operations research take the objectives that we humans give them very literally. Told to clean the bath, a domestic robot might, like the Cat in the Hat, use mother’s white dress, not understanding that the value of a clean dress is greater than the value of a clean bath.
The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.
“AI systems must remain under human control, with suitable constraints on behavior, despite capabilities that may eventually exceed our own,” Russell said. “This means we need cast-iron formal proofs, not just good intentions.”
One approach Russell and others are exploring is called inverse reinforcement learning, through which a robot can learn about human values by observing human behavior. By watching people dragging themselves out of bed in the morning and going through the grinding, hissing and steaming motions of making a caffè latte, for example, the robot learns something about the value of coffee to humans at that time of day.
“Rather than have robot designers specify the values, which would probably be a disaster,” said Russell, “instead the robots will observe and learn from people. Not just by watching, but also by reading. Almost everything ever written down is about people doing things, and other people having opinions about it. All of that is useful evidence.”
Russell and his colleagues don’t expect this to be an easy task.
“People are highly varied in their values and far from perfect in putting them into practice,” he acknowledged. “These aspects cause problems for a robot trying to learn what it is that we want and to navigate the often conflicting desires of different individuals.”
Russell, who recently wrote an optimistic article titled “Will They Make Us Better People?,” summed it up this way: “In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”
European Soylent alternatives
A person at our local LW meetup (not active on LW.com) tested various Soylent alternatives that are available in Europe and wrote a post about them:
______________________
Over the course of the last three months, I've sampled some of the European Soylent alternatives to determine which ones would work for me long-term.
- The prices are always for the standard option and might differ for e.g. High Protein versions.
- The prices are always for the amount with the cheapest marginal price (usually around a one-month supply, i.e. 90 meals).
- Switching your diet to Soylent alternatives quickly leads to increased flatulence for some time - I'd recommend a slow adoption.
- You can pay for all of them with Bitcoin.
- The list is sorted by overall awesomeness.
So here's my list of reviews:
Joylent:
Taste: 7/10
Texture: 7/10
Price: 5eu / day
Vegan option: Yes
Overall awesomeness: 8/10
This one is probably the European standard for nutritionally complete meal replacements.
The texture is nice, the taste is somewhat sweet, and the flavors aren't very intense.
They have an OK number of different flavors, but I've reduced my orders to Mango (plus some Chocolate).
They offer a morning version with caffeine and a sports version with more calories/protein.
They also offer Twennybars (similar to a cereal bar, but each covers 1/5 of your daily needs), which everyone who tasted them really liked. They're nice for those lazy times when you just don't feel like pouring the powder, adding water, and shaking before you get your meal. They do cost 10eu per day, though.
I also like their general style. Every interaction with them was friendly, fun, and uncomplicated.
Veetal:
Taste: 8/10
Texture: 7/10
Price: 8.70 / day
Vegan option: Yes
Overall awesomeness: 8/10
This seems to be the "natural" option; apparently they add all those healthy ingredients.
The texture is nice, and the taste is sweeter than most, but not very sweet.
They don't offer flavors, but the base taste is fine, and it also works well with some cocoa powder.
It's my favorite breakfast now, and I've had it on about 54 of the last 60 days.
It would have taken first place if not for the relatively high price.
Mana:
Taste: 6/10
Texture: 7/10
Price: 6.57 / day
Vegan option: Only Vegan
Overall awesomeness: 7/10
Mana is one of the very few choices that taste salty rather than sweet. Among all the ones I've tried, it tastes the most similar to a classic meal.
It has a somewhat oily aftertaste that was a bit unpleasant in the beginning but is fine now that I've gotten used to it.
They ship the oil in small bottles separate from the rest, which you pour into your shaker along with the powder. This roughly doubles the complexity of preparing a meal.
The packages feel somewhat recycled/biodegradable, which I don't like so much but which isn't actually a problem.
It still made the list of meals I want to consume regularly because it tastes so different from the others (and probably has a different nutritional profile?).
Nano:
Taste: 7/10
Texture: 7/10
Price: 1.33eu / meal*
*I couldn't figure out whether they calculate with 3 or 5 meals per day.
**The price is for an order of 666 meals; I guess 222 meals at 1.5eu/meal is the more reasonable order.
Vegan option: Only Vegan
Overall awesomeness: 7/10
Has a relatively sweet taste, and only comes in the standard vanilla-ish flavor.
They offer a Veggie hot meal, which is the only one besides Mana that doesn't taste sweet. It tastes very much like a vegetable soup but was a bit too spicy for me. (It's also a bit more expensive.)
Nano has a very future-y feel about it that I like. It comes in single-meal packages, which I don't like too much, but that's personal preference.
Queal:
Taste: 7/10
Texture: 6/10
Price: 6.5 / day
Vegan option: No
Overall awesomeness: 7/10
Is generally similar to Joylent (especially in flavor) but seems strictly inferior (their flavors sound more fun - but don't actually taste better).
Nutrilent:
Taste: 6/10
Texture: 7/10
Price: 5 / day
Vegan option: No
Overall awesomeness: 6/10
Taste and flavor are also similar to Joylent, but it tastes a little worse. It comes in single-meal packages, which I don't fancy.
Jake:
Taste: 6/10
Texture: 7/10
Price: 7.46 / day
Vegan option: Only Vegan
Overall awesomeness: 6/10
Has a silky taste/texture (I didn't even know that was a thing before I tried it). Only has one flavor (vanilla), which is okay-ish.
Also offers a light and a sports option.
Huel:
Taste: 1/10
Texture: 6/10
Price: 6.70 / day
Vegan option: Only Vegan
Overall awesomeness: 4/10
The taste was unanimously rated as awful by every single person I gave it to for tasting. The vanilla-flavored version was a bit less awful than the unflavored version, but still...
It also has the worst packaging - huge bags that make it hard to pour and are generally inconvenient to handle.
Apart from that, it's OK, I guess?
Ambronite:
Taste: ?
Texture: ?
Price: 30 / day
Vegan option: Only Vegan
Overall awesomeness: ?
The price was prohibitive for testing - they advertise it as being very healthy and natural and stuff.
Fruiticio:
Taste: ?
Texture: ?
Price: 5.76 / day
Vegan option: No
Overall awesomeness: ?
They offer one variety for women and one for men. I didn't see any way to find out which of those I was supposed to order, so I had to give up on the ordering process at that point. (I guess you'd have to ask your doctor which one is for you?)
Conclusion:
Meal replacements are awesome, especially when you don't have much time to make or eat a "proper" meal.
I generally don't feel full after drinking them, but I also stop being hungry.
I assume they're healthier than the average European diet.
The texture and flavor do get a bit dull after a while if I use only meal replacements.
On a usual day I currently have one serving each of Joylent, Veetal, and Mana (plus one or two "non-replaced" meals).
A Review of Signal Data Science
I took part in the second Signal Data Science cohort earlier this year, and since I found out about Signal through a SlateStarCodex post a few months back (it was also covered here on LessWrong), I thought I'd return the favor and write a review of the program.
The tl;dr version:
Going to Signal was a really good decision. Before the program I had been making ends meet with teaching work and some web development consulting; now I have a job offer as a senior machine learning researcher1. The time I spent at Signal was definitely necessary for me to get this offer, as well as another very attractive data science offer that is my "second choice" job. I haven't paid Signal anything up front, but I will pay them a fraction of my salary for the next year, capped at 10% of salary and a maximum total payment of $25k.
The longer version:
Obviously a ~12-week curriculum is not going to be a magic pill that turns a nontechnical person of average intelligence into a super-genius with job offers from Google and Facebook. To benefit from Signal, you should already be somewhat above average in intelligence and intellectual curiosity. If you have never programmed and/or never studied mathematics beyond high school2, you will probably not benefit from Signal, in my opinion. Also, if you don't already understand statistics and probability reasonably well, they will not have time to teach you. What they will do is teach you to be really good with R, have you do some practical machine learning, and teach you some SQL, all of which are hugely important for passing data science job interviews. As a bonus, you may be lucky enough (as I was) to explore more advanced machine learning techniques with other program participants or alumni and build some experience for yourself as a machine learning hacker.
As stated above, you don't pay anything up front, and cheap accommodation is available. If you are in a situation similar to mine, not paying up front is a huge bonus. The salary fraction is comparatively small, too, and it only lasts for one year. I almost feel like I am underpaying them.
This critical comment by fluttershy almost put me off, and I'm glad it didn't. The program is not exactly "self-directed" - there is a daily schedule and a clear path to work through, though they are flexible about it. Admittedly there isn't a constant feed of staff time for your every whim - ideally there would be 10-20 Jonahs, one per student; there's no way to offer that kind of service at a reasonable price. Communication between staff and students seemed to be very good, and key aspects of the program were well organised. So don't let perfect be the enemy of good: what you're getting is an excellent focused training program to learn R and some basic machine learning, and that's what you need to progress to the next stage of your career.
Our TA for the cohort, Andrew Ho, worked tirelessly to make sure our needs were met, both academically and in terms of running the house. Jonah was extremely helpful when you needed to debug something or clarify a misunderstanding. His lectures on selected topics were excellent. Robert's Saturday sessions on interview technique were good, though I felt that over time they became less valuable as some people got more out of interview practice than others.
I am still in touch with some of the people I met in my cohort even though I had to leave the country; I consider them friends, and we keep each other updated on how our job searches are going. People have offered to recommend me to companies as a result of Signal. As a networking move alone, going to Signal is a good one.
Highly recommended for smart people who need a helping hand to launch a technical career in data science.
1: I haven't signed the contract yet as my new boss is on holiday, but I fully intend to follow up here when that process completes (or doesn't). Watch this space.
2: or equivalent - if you can do mathematics such as matrix algebra, know what the normal distribution is, and understand basic probability theory (e.g. how to calculate the expected value of a die roll), you are probably fine.
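To make the footnote's bar concrete, here is a minimal sketch of the die-roll calculation (my own illustration, not part of the program's materials):

```python
from fractions import Fraction

def expected_value(outcomes):
    """Expected value of a discrete uniform random variable:
    the average of all equally likely outcomes."""
    return sum(Fraction(x) for x in outcomes) / len(outcomes)

# A fair six-sided die: E[X] = (1+2+3+4+5+6)/6 = 7/2 = 3.5
print(expected_value(range(1, 7)))
```

If working out something like this by hand feels routine, you clear the prerequisite comfortably.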
Superintelligence and physical law
It's been a few years since I read http://lesswrong.com/lw/qj/einsteins_speed/ and the rest of the quantum physics sequence, but I recently learned about the company Nutonian, http://www.nutonian.com/. Basically, it's a narrow AI system that looks at unstructured data and tries out billions of candidate models to fit it, favoring those built from simpler math. It gets applied to all sorts of fields, including physics. It can't find Newton's laws from three frames of a falling apple, but it did find the Hamiltonian of a double pendulum from its motion data after a few hours of processing: http://phys.org/news/2009-12-eureqa-robot-scientist-video.html
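As a toy illustration of the underlying idea (this is my own sketch, not Nutonian's actual algorithm, which is far more sophisticated): enumerate candidate formulas, fit each one to the data, and score each by fit error plus a penalty for complexity, so the simplest adequate model wins.

```python
import math

def candidates():
    # (name, complexity, model) -- a hypothetical, hand-picked search space;
    # real symbolic regression generates formulas automatically
    yield ("a*x + b",      2, lambda x, a, b: a * x + b)
    yield ("a*x**2 + b",   3, lambda x, a, b: a * x * x + b)
    yield ("a*sin(x) + b", 4, lambda x, a, b: a * math.sin(x) + b)

def fit(model, xs, ys):
    # crude grid search over parameters (a, b) in place of proper regression
    best = (float("inf"), None)
    for a in (i / 10 for i in range(-50, 51)):
        for b in (i / 10 for i in range(-50, 51)):
            err = sum((model(x, a, b) - y) ** 2 for x, y in zip(xs, ys))
            best = min(best, (err, (a, b)))
    return best

def search(xs, ys, penalty=0.1):
    # score = data misfit + penalty per unit of formula complexity
    scored = []
    for name, complexity, model in candidates():
        err, params = fit(model, xs, ys)
        scored.append((err + penalty * complexity, name, params))
    return min(scored)

xs = [i / 2 for i in range(10)]
ys = [3.0 * x * x + 1.0 for x in xs]   # secretly quadratic data
print(search(xs, ys)[1])               # picks "a*x**2 + b"
```

The complexity penalty is what encodes the preference for "simpler math": a sine model that fits marginally better than a line would still lose unless the improvement outweighs its extra complexity.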
[Link] My Interview with Dilbert creator Scott Adams
In the second half of the interview we discussed several topics of importance to the LW community including cryonics, unfriendly AI, and eliminating mosquitoes.
https://soundcloud.com/user-519115521/scott-adams-dilbert-interview
Jocko Podcast
I've recently been extracting extraordinary value from the Jocko Podcast.
Jocko Willink is a retired Navy SEAL commander, jiu-jitsu black belt, management consultant and, in my opinion, master rationalist. His podcast typically consists of detailed analysis of some book on military history or strategy followed by a hands-on Q&A session. Last week's episode (#38) was particularly good and if you want to just dive in, I would start there.
As a sales pitch, I'll briefly describe some of his recurring talking points:
- Extreme ownership. Take ownership of all outcomes. If your superior gave you "bad orders", you should have challenged the orders or adapted them better to the situation; if your subordinates failed to carry out a task, then it is your own instructions to them that were insufficient. If the failure is entirely your own, admit your mistake and humbly open yourself to feedback. By taking on this attitude you become a better leader and through modeling you promote greater ownership throughout your organization. I don't think I have to point out the similarities between this and "Heroic Morality" we talk about around here.
- Mental toughness and discipline. Jocko's language around this topic is particularly refreshing, speaking as someone who has spent too much time around "self help" literature, in which I would partly include Less Wrong. His ideas are not particularly new, but it is valuable to have an example of somebody who reliably executes on his philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.
- Decentralized command. This refers specifically to his leadership philosophy. Every subordinate needs to truly understand the leader's intent in order to execute instructions in a creative and adaptable way. Individuals within a structure need to understand the high-level goals well enough to be able to act in almost all situations without consulting their superiors. This tightens the OODA loop on an organizational level.
- Leadership as manipulation. Perhaps the greatest surprise to me was the subtlety of Jocko's thinking about leadership, probably because I brought in many erroneous assumptions about the nature of a SEAL commander. Jocko talks constantly about using self-awareness, detachment from one's ideas, control of one's own emotions, awareness of how one is perceived, and perspective-taking of one's subordinates and superiors. He comes off more as HPMOR!Quirrell than as a "drill sergeant".
The Q&A sessions, in which he answers questions asked by his fans on Twitter, tend to be very valuable. It's one thing to read the bullet points above, nod your head and say, "That sounds good." It's another to have Jocko walk through the tactical implementation of these ideas in a wide variety of daily situations, ranging from parenting difficulties to office misunderstandings.
For a taste of Jocko, maybe start with his appearance on the Tim Ferriss podcast or the Sam Harris podcast.