2009: "Extreme Rationality: It's Not That Great"
2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"
2013: "How about testing our ideas?"
2014: "Truth: It's Not That Great"
2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"
2016: "In Defense Of Putting Babies In Wood Chippers"
2016: "In Defense Of Putting Babies In Wood Chippers"
Heck, I could write that post right now. But what's it got to do with truth and such?
Yes. I assume this is why she's collecting these ideas.
Katja doesn't speak for all of MIRI when she says above what "MIRI is interested in".
In general MIRI isn't in favor of soliciting storytelling about the singularity. It's a waste of time and gives people a false sense that they understand things better than they do, by focusing their attention on highly salient but ultimately unlikely scenarios.
Then you should reduce your confidence in what you consider obvious.
So MIRI is interested in making a better list of possible concrete routes to AI taking over the world.
I wouldn't characterize this as something that MIRI wants.
To clarify, One Medical partnered with us on this event... but they are not materially involved in expanding MIRI themselves. They're simply an innovative business near us in Berkeley that wanted to support our work. I know it's somewhat unprecedented to see MIRI with strong corporate support, but trust me, it's a good thing. One Medical's people did a ton of legwork and made it super easy to host over 100 guests at that event with almost no planning needed on our part. They took care of everything so we could just focus on our work. A perfect partnership in...
Thanks. That was what I thought, but I haven't read Causality yet.
Fixed. Thanks.
The unreasonably low estimates would suggest things like "I'm net reducing factory-farming suffering if I eat meat and donate a few bucks, so I should eat meat if it makes me sufficiently happier or healthier to earn and donate an extra $5 indulgence." And there are some people going around making that claim, based on the extreme low-ball cost estimates.
Correct. I make this claim. If vegetarianism is that cheap, it's reasonable to bin it with other wastefully low-value virtues like recycling paper, taking shorter showers, turning off lights, voting, "staying informed", volunteering at food banks, and commenting on Less Wrong.
Yep, you're right. I've never used the Open Threads so I didn't know that. Thanks.
Unfortunately, the Open Thread is rather difficult to find. You have to already know it exists, because it quickly gets lost among the new articles -- at least a third of which would be better placed in the Open Thread. So the problem makes itself worse, unless someone reminds other people to use it... which always feels like starting a conflict with the author; and there are not even obvious guidelines. So thanks for not getting offended.
Americans can only report their health derivative (dx/dt) :)
A lot of the most unhealthy groups in the US are also poor and somewhat outside the reach of casual academic sampling.
I assumed that at first too. It turns out even removing the poor or minorities from the sample doesn't fix this gap.
I guess the study used the modifier "wealthy" along with "developed" to explain its choice of reference class. I looked at the list and it didn't seem obviously cherry-picked. What countries would you add?
The guts of the study list one (of many) possible causes:
"getting health care depends more on the market and on each person’s financial resources in the U.S. than elsewhere".
Insurance companies should point out to their detractors that they provide a valuable service by making healthcare so inaccessible that Americans no longer have any idea how they're doing. And that given this absence of knowledge, Americans assume they're doing great.
I received a letter telling me in no uncertain terms that if [US Customs] found another shipment of modafinil addressed to me, they would prosecute me as a drug smuggler.
You mean something like this? That's not really as meaningful as it seems. There is always some legal risk associated with doing anything, since there are so many US laws that no one has even managed to count them, but a pretty serious search through legal databases turns up no records of people being prosecuted for modafinil importation, ever. So that letter is 100% posturing by US Custo...
Yeah, don't be discouraged. LW is just like that sometimes. If you link to something with little or no commentary, it really needs to be directly about rationality itself or be using lots of LW-style rationality in the piece. This was a bit too mainstream to be appreciated widely here (even in discussion).
Glad to see you're posting though! You still in ATL and learning about FAI? I made a post you might like. :)
Just to clarify, I recommend the book "Probability and Computing" but the course I'm recommending is normally called something along the lines of "Combinatorics and Discrete Probability" at most universities. So the online course isn't as far off base as it may have looked. However, I agree there are better choices that cover more exactly what I want. So I've updated it with a more on-point Harvard Extension course.
The MIT and CMU courses both cover combinatorics and discrete probability. They are probably the right thing to take or very close to it if you're at those particular schools.
Thanks again for the feedback Klao.
Fixed. Thanks.
Yep, SI has summer internships. You're already in Berkeley, right?
Drop me an email with the dates you're available and what you'd want out of an internship. My email and Malo's are both on our internship page:
http://singularity.org/interns/
Look forward to hearing from you.
Well, I figure I don't really want to recommend a ton of programming courses anyway. I'm already recommending what I presume is more than a bachelor's degree's worth of courses once pre-reqs and outside requirements at these universities are taken into account.
So if someone takes just one course, they'll learn much more that helps them later in this curriculum from the applied functional programming course than from its imperative counterpart. And the number of functional programming courses people take in a traditional math or CS program is normally zero. So I have...
Ahh. Yeah, I'd expect that kind of content is way too specific to be built into initial FAI designs. There are multiple reasons for this, but off the top of my head,
I expect design considerations for Seed AI to favor smaller designs that emphasize only essential components, both for a superior ability to show desirable provability criteria and for improved design timelines.
All else equal, I expect that the less arbitrary decisions or content the human programmers provide to influence the initial dynamic of FAI, the better.
And my broadest answer is:
I don't think those courses would impoverish anyone's mind. I expect people to take courses that aren't on this list without me having to tell them to. But I wouldn't expect courses drawn from these subjects to be mainstream recommendations for Friendliness researchers who were doing things like formalizing and solving problems relating to self-referencing mathematical structures and things along those lines.
Good question. If I remember correctly, Berkeley teaches from it and one person I respect agreed it was good. I think the impenetrability was considered more of a feature than a bug by the person doing the recommending. IOW, he was assuming that people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.
Part of my motivation for posting this here was to improve my recommendations. So I'm happy to change the rec to something more accessible if we can crowd-source something like a consensus best choice here on LW that's still good for the smartest readers.
[he was assuming that] people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.
Is this actually true? My current guess is that even though for a given level of training, smarter people can get through harder texts, they will learn more if they go through easier texts first.
Fixed. Thanks.
The functional/imperative distinction is not a real one
How is the distinction between functional and imperative programming languages "not a real one"? I suppose you mean that there's a continuum of language designs between purely functional and purely imperative. And I've seen people argue that you can program functionally in Python or emulate imperative programming in Haskell. Sure. That's all true. It doesn't change the fact that functional-style programming is manifestly more machine checkable in the average (and best) case.
...it's less imp
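To make the style difference concrete, here's a minimal sketch in Python (purely illustrative; the function names are mine, not from any of the recommended courses):

```python
from functools import reduce

# Imperative style: the result is built up by mutating local state, so a
# checker has to reason about every intermediate value of 'total'.
def total_imperative(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Functional style: no mutation, no intermediate state. The output is a
# pure function of the input, which is easier to reason about (and, in a
# language like Haskell, easier to machine-check).
def total_functional(xs):
    return reduce(lambda acc, x: acc + x, xs, 0)

assert total_imperative([1, 2, 3]) == total_functional([1, 2, 3]) == 6
```

Both compute the same sum, but the second version has no time-varying state for a verifier to track, which is the sense in which the functional style is more machine checkable.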
How is the distinction between functional and imperative programming languages "not a real one"?
"Not a real one" is sort of glib. Still, I think Jim's point stands.
The two words "functional" and "imperative" do mean different things. The problem is that, if you want to give a clean definition of either, you wind up talking about the "cultures" and "mindsets" of the programmers that use and design them, rather than actual features of the language. Which starts making sense, really, when you note...
But I'm not sure where that is best covered.
Yeah, universities don't reliably teach a lot of things that I'd want people to learn to be Friendliness researchers. Heuristics and Biases is about the closest most universities get to the kind of course you recommend... and most barely have a course on even that.
I'd obviously be recommending lots of Philosophy and Psychology courses as well if most of those courses weren't so horribly wrong. I looked through the course handbooks and scoured them for courses I could recommend in this area that wouldn't steer ...
Believe me, Luke and I are sad beyond words every day of our lives that we have to continue recommending people read a blog to learn philosophy and a ton of other things that colleges don't know how to teach yet. We don't particularly enjoy looking crazy to everyone outside of the LW bubble.
This doesn't look as bad as it looks like it looks. Among younger mathematicians, I think it's reasonably well-known that the mathematical blogosphere is of surprisingly high quality and contains many insights that are not easily found in books (see, for example, Fie...
PS - I had some initial trouble formatting my table's appearance. It seems to be mostly fixed now. But if an admin wants to tweak it somehow so the text isn't justified or it's otherwise more readable, I won't complain! :)
I believe Coq is already short and has been proven using other proving programs that are themselves short and validated. So I believe the tower of formal validation that exists for these techniques is pretty well secured. I could be wrong about that, though... I'd be curious to know the answer.
Relatedly, there are a lot of levels you can go with this. For instance, I wish someone would create more tools like CompCert (the formally verified C compiler) for writing formally validated programs.
Martel (1997) estimates a considerably higher annualized death rate of 3,500 from meteorite impacts alone (she doesn’t consider continental drift or gamma-ray bursts), but the internal logic of safety engineering demands we seek a lower bound, one that we must put up with no matter what strides we make in redistribution of food, global peace, or healthcare.
Is this correct? I'd expect this lower bound to be superior to the one above (10 deaths/year) for the purpose of calculating our present safety factor... unless we're currently able to destroy earth-threatening meteorites and no one told me.
To paraphrase Kornai's best idea (which he's importing from outside the field):
A reasonable guideline is limiting the human caused xrisk to several orders of magnitude below the natural background xrisk level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway.
I like this idea (as opposed to foolish proposals like driving risks from human made tech down to zero), but I expect someone here could sharpen the xrisk level that Kornai suggests. Here's a disturbing note from the appendix where he d...
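To get a rough feel for what the guideline implies numerically, here's a toy calculation in Python; the population figure and the "3 orders of magnitude" margin are my own placeholder assumptions, not Kornai's numbers:

```python
# Toy illustration of Kornai's guideline: keep human-caused xrisk several
# orders of magnitude below the natural background level. Every number
# below is an assumption plugged in for illustration only.
WORLD_POPULATION = 7e9

backgrounds = [
    ("safety-engineering lower bound", 10),      # deaths/year, from above
    ("Martel (1997) meteorite estimate", 3500),  # deaths/year, from above
]

SAFETY_MARGIN = 1e-3  # "several orders of magnitude" -- assumed 3 here

for label, deaths_per_year in backgrounds:
    per_capita = deaths_per_year / WORLD_POPULATION
    budget = per_capita * SAFETY_MARGIN
    print(f"{label}: background {per_capita:.1e}/person/year -> "
          f"human-caused budget {budget:.1e}/person/year")
```

On these made-up inputs, the human-caused budget lands somewhere around 1e-12 to 5e-10 per person per year, which shows how much the choice of background estimate (10 vs. 3,500 deaths/year) moves the target.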
Note that this was most of the purpose of the Fellows program in the first place -- [was] to help sort/develop those people into useful roles, including replacing existing management
FWIW, I never knew the purpose of the VF program was to replace existing SI management. And I somewhat doubt that you knew this at the time, either. I think you're just imagining this retroactively, given that that's what ended up happening. For instance, the internal point system used to score people in the VF program had no points for correctly identifying organizational i...
I didn't notice any factual inaccuracies
Although multiple quotes were manufactured and misattributed.
I preferred the original version that appeared on your private website.
Once you sanitized it for LW by making it more abstract and pedantic, it lost many of the most biting, hilarious asides that made it fun and entertaining to read.
Nope, I was wrong. It is the case that agents require equal priors for AAT to hold. AAT is like proving that mixing the same two colors of paint will always result in the same shade, or that two equal numbers multiplied by another number will still be equal.
What a worthless theorem!
I guess when I read that AAT required "common priors" I assumed Aumann must mean known priors or knowledge of each other's priors, since equal priors would constitute both 1) an asinine assumption and 2) a result not worth reporting. Hanson's assumption that humans sho...
They need to have the same priors? Wouldn't that make AAT trivial and vacuous?
I thought the requirement was that priors just weren't pathologically tuned.
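Here's a toy sketch of the triviality complaint in Python (mine, purely illustrative; note the actual theorem is stronger, covering agents with different private evidence whose posteriors are common knowledge):

```python
# If two agents share the same prior AND see the same evidence, Bayes
# alone forces them to agree -- the "same paint, same shade" reading.

def posterior(prior, likelihoods, data):
    """Bayes update of a discrete prior given observed data."""
    unnorm = {h: p * likelihoods[h](data) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Two hypotheses about a coin's bias, with a shared 50/50 prior.
prior = {"bias_0.3": 0.5, "bias_0.7": 0.5}
likelihoods = {
    "bias_0.3": lambda heads: 0.3 ** heads * 0.7 ** (5 - heads),
    "bias_0.7": lambda heads: 0.7 ** heads * 0.3 ** (5 - heads),
}

alice = posterior(prior, likelihoods, data=4)  # both saw 4 heads in 5 flips
bob = posterior(prior, likelihoods, data=4)
assert alice == bob  # same prior + same evidence = forced agreement
```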
Those "generics" you're talking about are ordered by your friends from overseas. The average American won't take advantage of Modafinil until they can pay x10 as much to buy it in a pharmacy in their neighborhood.
People are too risk-averse to try things that work. Hmm... if only there were some sort of drug they could take to make them smarter?
I think the bigger difference between CBT and psychoanalysis is something like -- CBT: "Your feelings are the residue of your thoughts, many of which are totally wrong and should be countered by your therapist and you, because human brains are horribly biased." vs. psychoanalysis: "Your feelings are a true reflection of what an awful, corrupt, contemptible, morally bankrupt human being you are. As your therapist, I will agree with and validate anything you believe about yourself, since anything you report about yourself must be true by definitio...
Thanks for doing the research on this. It actually makes me feel a lot better knowing how low these base rates are.
Let me try again.
In 2009, each licensed driver drove an average of 14,000 miles.
For cars, the fatality rate per 100M VMT was 0.87 (the exact number is on page 22 of my original link). 14,000 miles/year * 0.87 deaths/100,000,000 miles = 0.0001218 deaths/year = 0.1218 millideaths/year. Taking the inverse, 1 in 8,210 such drivers will die each year. Now, my math is hiding subtle assumptions - Traffic Safety Facts 2009 gives the fatality rate for passenger car occupants per vehicle miles traveled. This is affected by how many people occupy a given car! Their definition of moto...
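For anyone who wants to check it, here's the same arithmetic as a small Python snippet (same figures as above, nothing new):

```python
# Reproducing the arithmetic above: annual driving fatality risk for a
# typical licensed driver, using the 2009 figures quoted in this thread.
miles_per_year = 14_000        # average miles per licensed driver, 2009
deaths_per_100m_vmt = 0.87     # passenger-car fatality rate per 100M VMT

deaths_per_year = miles_per_year * deaths_per_100m_vmt / 100_000_000
print(f"{deaths_per_year:.7f} deaths/year")            # 0.0001218
print(f"{deaths_per_year * 1000:.4f} millideaths/yr")  # 0.1218
print(f"1 in {1 / deaths_per_year:,.0f}")              # 1 in 8,210
```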
I know lukeprog personally, but I suppose I should call him lukeprog on LW for other people's benefit. Thanks for the reminder.
I'm concerned with the overuse of the term "applause light" here.
An applause light is not as simple as "any statement that pleases an in-group". The way I read it, a charge of applause lights requires all of the following to hold:
1) There are no supporting details to provide the statement with any substance.
2) The statement is a semantic stopsign.
3) The statement exists purely to curry favor with an in-group.
4) No policy recommendations follow from that statement.
I don't see a bunch of applause lights when I read this post. I see a post...
Thanks MinibearRex.
I've added ads on Google AdWords that will start coming up for this in a couple of days, once the new ads get approved, so that anyone searching for something even vaguely like "how to think better" or "how to figure out what's true" will get pointed at Less Wrong. Not as good as owning the top 3 spots in the organic results, but some folks click on ads, especially when it's in the top spot. And we do need to make landing on the path towards rationality less of a stroke of luck and more a matter of certainty for those who are looking.
I also thought you meant that Bill O'Reilly had (surprisingly) written the best book ever on the Lincoln shooting when you said "But I was wrong."
Thanks for the helpful comments! I was uninformed about all those details above.
These posts are not about GiveWell's process.
One of the posts has the sub-heading "The GiveWell approach", and all of the analysis in both posts uses examples of charities you're comparing. I agree you weren't just talking about the GiveWell process... you were talking about a larger philosophy of science you have that informs things like the GiveWell process.
I recognize that you're making sophisticated arguments for your points. Especially the assumptions that you...
Louie, I think you're mischaracterizing these posts and their implications. The argument is much closer to "extraordinary claims require extraordinary evidence" than it is to "extraordinary claims should simply be disregarded." And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.
I wrote more in my comment followup on the first post about why an aversion to arguments that seem similar to "Pascal's Mugging" does not entail ...
Your comments are a cruel reminder that I'm in a world where some of the very best people I know are taken from me.
Hi, here are the details of whom I spoke with and why:
Carl Shulman pointed out how absurd this was: If GiveWell had existed 100 years ago, they would have argued against funding the eradication of smallpox. Their process forces them to reject the possibility that an intervention could be that effective.
To clarify what I said in those comments:
Holden had a few posts that 1) made the standard point that one should use both prior and evidence to generate one's posterior estimate of a quantity like charity effectiveness, 2) used example prior distributions that assigned vanishingly low probability to outcomes f...
Holden seems to have spoken with Jasen "and others", so at least two people. I don't think it's fair to say that speaking with 1/3 of the people in an organization is as unrepresentative as speaking with 1/3,000,000 of the Boy Scouts. And since Holden sent SIAI his notes and got their feedback before publishing, they had a second chance to correct any misstatements made by the guy they gave him to interview.
So calling this interview "a complete lie" seems very unfair.
I agree that GiveWell's process is limited, and I'm interested in the GiveWell Labs project.
That's cool. Where did you hear that?
In Silicon Valley. With a group of people who know about LessWrong but are dubious about its instrumental value.