There’s a Bayesian-adjacent notion of closeness to the truth: observations narrow down the set of possible worlds, and two hypotheses that heavily overlap in the possible are “close”.
But the underlying notion of closeness to the truth is underdetermined. If we were relativistic beings, we’d privilege a different part of the observation set when comparing hypotheses, and Newtonian gravity wouldn’t feel close to the truth; it would feel obviously wrong and be rejected early (or, more likely, never considered at all because we aren’t actually logically omniscient Bayesians).
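(To gesture at one possible formalization, my own illustrative sketch rather than anything standard: let $W_H$ be the set of worlds consistent with hypothesis $H$ and the observations so far, and let $\mu$ be some prior measure over worlds. Then a Jaccard-style closeness is

$$\mathrm{close}(H_1, H_2) = \frac{\mu(W_{H_1} \cap W_{H_2})}{\mu(W_{H_1} \cup W_{H_2})}$$

and the underdetermination shows up in the choice of $\mu$ and of which observations get privileged.)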
The most plausible explanation I've seen is that Delta's serial interval might be much shorter, which would mean R is lower than you'd think if you assumed Delta had the same serial interval as older strains. (Roughly speaking, in the time it would take Alpha to infect R individuals, Delta has time to infect R and for each of those individuals to infect another R, leading to R + R^2 infections over the same period.) That makes it easier for behavior changes and increasing population immunity to lower R below 1.
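To make the serial-interval point concrete, there's a standard back-of-the-envelope relation (treating the serial interval as fixed, which is a simplification):

$$R \approx e^{rT}$$

where $r$ is the observed exponential growth rate and $T$ is the serial interval. For the same observed growth $r$, halving $T$ takes the inferred $R$ down to $\sqrt{R}$, since $e^{rT/2} = \sqrt{e^{rT}}$. So if Delta's serial interval really is much shorter, the same case curves imply a substantially lower R, and a smaller push from behavior and immunity gets it below 1.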
I’ll defer to Blake if he’s done the math, but it does seem worth weighting correlated risks more strongly if they could take out all of MIRI. The inundation zone doesn’t look populated, though, so you’re probably fine.
Do you have a source for B.1.1.7 being dominant in Italy/Israel?
Assuming it’s already dominant there, that strongly suggests that it’s infectious enough to have rapidly outcompeted other strains, but that Italy/Israel were able to push down the higher R through some combination of behavioral change and vaccination.
(Note: I can’t find any sources saying B.1.1.7 is dominant in Italy or Israel, and I’d be surprised if that were already the case.)
Is this essentially just giving you leverage in PredictIt?
This process increased my "cash" on PredictIt by $117, but it looks like it will probably pay out around 15/14.75*850 - 850 = $15. If I lost my $117 on some other bet, would my PredictIt balance eventually end up negative?
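For concreteness, here's that payout arithmetic as a quick sanity check (a minimal Python sketch; the $850 stake and the 15/14.75 price ratio are the numbers above, everything else is just arithmetic):

```python
# Sanity check of the PredictIt payout arithmetic above.
stake = 850.00        # dollars tied up in the offsetting positions
buy_price = 14.75     # price paid per share
payout_price = 15.00  # price per share at settlement

profit = payout_price / buy_price * stake - stake
print(f"expected profit: ${profit:.2f}")  # -> expected profit: $14.41
```

which rounds to the "around $15" above.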
I just donated $5,000 to your fund at the Society of Venturism, as promised.
Like Stephan, I really hope you make your goal.
This concerns me (via STL):
IRS.gov: Automatic Revocation of Exemption Information
...The federal tax exemption of this organization was automatically revoked for its failure to file a Form 990-series return or notice for three consecutive years. The information listed below for each organization is historical; it is current as of the organization's effective date of automatic revocation. The information is not necessarily current as of today's date. Nor does this automatic revocation necessarily reflect the organization's tax-exempt or non-exempt status. The
Do you think your strategy is channeling more money to efficient charities, as opposed to random personal consumption (such as a nice computer, movies, video games, or a personal cryonics policy)?
A more positive approach might work well: donate for fuzzies, but please extrapolate those feelings to many more utilons. I just used this technique to secure far more utilons than I have seen mentioned in this thread, and it seems like it might be the most effective among the LW crowd.
More and more, if I can do anything about it. (Edit since someone didn't like this comment: That's a big if. I'm trying to make it smaller.)
Kim, I am so sorry about what has happened to you. Reading your post was heartbreaking. Death is a stupid and terrible thing.
Like JGWeissman, I planned to donate $500.
Stephan has been a close friend of mine for the past decade, and when he told me he was planning to donate $5,000, I wrangled a commitment from him to do what I do and donate a significant and permanent percentage of his income to efficient charities. There are many lives to save, and even though you have to do some emotional math to realize how you should be feeling, it's the right thing to do and it's vital to act.
He wrangled a commitment from me too: when CI manages a fund for you, I will donate $5,000.
If you're planning on it, you should get on it now. Cryonics is much more affordable if you don't have a terminal illness and can cover it with a policy.
People will give more to a single, identifiable person than to an anonymous person or a group.
As a counterpoint to your generalization, JGWeissman has given 82x more to SIAI than he plans to give to this girl if her story checks out.
No matter which study I saw first, the other would be surprising. A 100k trial doesn't explain away evidence from eight trials totaling 25k. Given that all of these studies are quite large, I'm more concerned about methodological flaws than size.
I have very slightly increased my estimate that aspirin reduces cancer mortality (since the new study showed 7% reduction, and that certainly isn't evidence against mortality reduction). I have slightly decreased my estimate that the mortality reduction is as strong as concluded by the meta-analysis. I have decreas...
The meta-analysis you cite is moderately convincing, but only moderately. They ran enough different analyses that some would come out significant by pure chance.
Their selection methodology on p32 appears neutral, so I don't think they ended up with cherry-picked trials. Once they had their trials, it looks like they drew all conclusions from pooled data, e.g. they did not say "X happened in T1, Y happened in T2, Z happened in T3, therefore X, Y, and Z are true."
Aspirin was found to have an effect on 15-year mortality that was significant only at the .05 level, and was found not to have a significant effect on 20-year mortality, so take it with a grain of salt.
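On the many-analyses point: with $k$ independent outcomes and all true effects null, the chance of at least one nominally significant result at the .05 level is

$$1 - 0.95^k, \qquad 1 - 0.95^{10} \approx 0.40$$

(Independence is an idealization here, since pooled analyses of overlapping endpoints are correlated, but it gives the flavor of why a grain of salt is warranted.)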
Can you provide your reference for this? I looked at the meta-analysis and what I assume is the 20-year follow-up of five RCTs (the citations seem to be paywalled), and both mention 20-year reduction in mortality without mentioning 15-year reductions or lack thereof.
Edit: Never mind, I found it, followed immediately by
...the effect on post-trial deaths was dilu
There's also paracetamol (secret identity: acetaminophen (secret secret identity: Tylenol)), which is not an NSAID, but I would guess you've tried it too. Fun snacks and/or facts:
http://en.wikipedia.org/wiki/Paracetamol
...Until 2010 paracetamol was believed to be safe in pregnancy (as it does not affect the closure of the fetal ductus arteriosus as other NSAIDs can.) However, in a study published in October 2010 it has been linked to infertility in the posterior adult life of the unborn.
recent research show some evidence that paracetamol can ease psychologi
I didn't actually do much research; I just went through several pages of hits for "aspirin alcohol" and "low-dose aspirin moderate alcohol". I saw consistent enough information to convince me:
never to take them at the same time, sample:
In a paper published in the Journal of the American Medical Association, researchers at the Veterans Administration Medical Center in the Bronx found that taking aspirin one hour before drinking significantly increases the concentration of alcohol in the blood.
that the nasty interactions only seemed to happen at 21+ drink
I'm talking about publishing a technical design of Friendliness that's conserved under self-improving optimization without also publishing (in math and code) exactly what is meant by self-improving optimization. CEV is a good first step, but a programmatically reusable solution it is not.
Before you the terrible blank wall stretches up and up and up, unimaginably far out of reach. And there is also the need to solve it, really solve it, not "try your best".
It's a good first step.
If we take those probabilities as a given, they strongly encourage a strategy that increases the chance that the first seed AI is Friendly.
jsalvatier already had a suggestion along those lines:
I wonder if SIAI could publicly discuss the values part of the AI without discussing the optimization part.
A public Friendly design could draw funding, benefit from technical collaboration, and hopefully end up used in whichever seed AI wins. Unfortunately, you'd have to decouple the F and AI parts, which is impossible.
SIAI seems to be paying the minimum amount that leaves each worker effective instead of scrambling to reduce expenses or find other sources of income. Presumably, SIAI has a maximum that it judges each worker to be worth, and Eliezer and Michael are both under their maximums. That leaves the question of where these salaries fall in that range.
I believe Michael and Eliezer are both being paid near their minimums because they know SIAI is financially constrained and very much want to see it succeed, and because their salaries seem consistent with at-cost liv...
It reminded me of one of my formative childhood books:
What is the probability there is some form of life on Titan? We apply the principle of indifference and answer 1/2. What is the probability of no simple plant life on Titan? Again, we answer 1/2. Of no one-celled animal life? Again, 1/2.
--Martin Gardner, Aha! Gotcha
He goes on to demonstrate the obvious contradiction, and points out some related fallacies. The whole book is great, as is its companion Aha! Insight. (They're bundled into a book called Aha! now.)
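For anyone without the book handy, the contradiction runs roughly like this (my reconstruction from memory): indifference assigns $P(\text{some life}) = 1/2$ directly, but it also assigns $1/2$ to "no X life" for each category X. Treating just three categories as independent gives

$$P(\text{no life of any form}) = \left(\tfrac{1}{2}\right)^3 = \tfrac{1}{8}, \qquad P(\text{some life}) = \tfrac{7}{8} \neq \tfrac{1}{2}$$

and you can push $P(\text{some life})$ as close to 1 as you like by slicing the categories more finely.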
Or in this case, evaporative freezing.
Good point, but since an accurate model of the future is helpful, this may be a case where you should purchase your warm fuzzies separately.
(Since people tend to make overly optimistic plans, the two strategies might be similar in practice.)
Where did Eliezer talk about fairness? I can't find it in the original two threads.
This comment talked about sublinear aggregation, but there's a global variable (the temperature of the, um, globe). Swimmer963 is talking about personally choosing specks and then guessing that most people would behave the same. Total disutility is higher, but no one catches on fire.
If I were forced to choose between two possible events, and if killing people for organs had no unintended consequences, I'd go with the utilitarian cases, with a side order of a severe permanent ...
I loved Erfworld Book 1, and a few months ago I was racking my brains for more rationalist protagonists, so I can't believe I missed that.
I was originally following it on every update, but there was a lull and I stopped reading for a while. When I started again, Book 1 was complete so I read it straight through from the beginning. As good as it was as serial fiction, it was even better as a book. Anyone else experience that?
I'll be there.
Without speaking toward its plausibility, I'm pretty happy with a scenario where we err on the side of figuring out FAI before we figure out seed AIs.
What's the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.
it's still extremely difficult for him to get people to take what he says about his experiences with food and exercise seriously.
For how many people was it extremely easy?
I maintain a healthy weight with zero effort, and I have a friend for whom The Hacker's Diet worked perfectly. I thought losing weight was a matter of eating less than you burn.
Then I read Eliezer's two posts. Oops, I thought. There's no reason intake reduction has to work without severe and continuing side-effects.
Hmm, and yet only two-thirds of the working-age population chooses to work, and some of that is part-time, which reduces the amount of labor available to employers. Labor can also move between sectors, leaving some relatively starved of workers. People who accumulate enough savings can choose to retire early and have to be enticed back into the labor market with higher wages, if they can be enticed at all. That doesn't look like a fixed supply of working hours that must be sold at any price -- the supply looks somewhat elastic.
Edit: Sorry about the tone in my original comment -- tax incidence doesn't seem to be common knowledge and I failed to consider that you might be aware of it already.
If computation is bound by energy input and you're prepared to take advantage of a supernova, you still only get one massive burst and then you're done. Think of how many future civilizations could be supercharged and then destroyed by supernovae if only you'd launched that space colonization program first!
I came to a similar conclusion after reading Accelerando, but don't forget about existential risk. Some intelligent agents don't care what happens in a future they never experience, but many humans do, and if a Friendly Singularity occurs, it will probably preserve our drive to make the future a good one even if we aren't around to see it. Matrioshka brain beats space colonization; supernova beats matrioshka brain; space colonization beats supernova.
If you care about that sort of thing, it pays to diversify.
When I re-read A Brief History of Time in college, I remember bemusedly noticing that Hawking's argument would be stronger if you reversed its conclusion.
A note to myself from 2009 claims that Hawking later dropped that argument. Can anyone substantiate that?
Sounds fun! I already have plans that weekend, but I think I can work around them. Thanks for setting this up.
This is untrue as a general rule, though it can be closer or farther from the truth depending on market conditions.
To see why, imagine that every month you buy a supply of fizzlesprots from Acme Corp. Today is the first of February, so you eagerly rush off to buy your monthly fix. But wait! The government has just imposed a tax on all fizzlesprot purchases. Curses! Now you'll have to pay even more, because Acme Corp will just pass the whole tax on to you.
Now change "fizzlesprot" to "labor" and "Acme Corp" to "employee"...
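The textbook result behind this (a small-tax approximation; the actual elasticity values for labor are the empirical question, not something I'm asserting):

$$\text{buyers' share of the tax} \approx \frac{\varepsilon_s}{\varepsilon_s + \varepsilon_d}$$

where $\varepsilon_s$ and $\varepsilon_d$ are the price elasticities of supply and demand (in absolute value). Who nominally writes the check drops out entirely; only if labor supply were perfectly inelastic ($\varepsilon_s = 0$) would workers bear the whole tax, which is exactly the elasticity question in my other comment above.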
I track my finances directly in a CoffeeScript source code file and use a simple home-brewed software library to compute my net liquid assets and (when necessary) my estimated tax payments and projected tax liabilities. You've reminded me that I really should be using something like Quicken for finer-grained analysis, so I'll look into that and post my numbers later this week (edit: on second thought, it doesn't seem worth the extra friction).
My living costs followed a general upward trend that leveled off in late 2009, but my salary data is extremely mes...
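For anyone curious, a minimal sketch of what such a home-brewed tracker can look like (in Python rather than CoffeeScript for illustration; every balance, account name, and tax rate below is made up):

```python
# Minimal sketch of a home-brewed finance tracker.
# All balances, account names, and tax numbers are hypothetical.

accounts = {
    "checking": 4_200.00,
    "savings": 18_500.00,
    "brokerage": 32_000.00,
}
liabilities = {
    "credit_card": 1_150.00,
}

def net_liquid_assets():
    """Liquid assets minus short-term liabilities."""
    return sum(accounts.values()) - sum(liabilities.values())

def quarterly_estimated_tax(projected_income, effective_rate=0.30):
    """Crude quarterly estimated-tax payment: a flat effective rate
    applied to projected annual income, split into four payments."""
    return projected_income * effective_rate / 4

print(f"net liquid assets: ${net_liquid_assets():,.2f}")
print(f"quarterly estimate: ${quarterly_estimated_tax(120_000):,.2f}")
```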
The numbers you quoted are averages for each ten-year demographic between 25 and 75, plus the tails. There's no mention of variance, and I would expect someone employing rationality techniques to manage their finances to be an outlier.
Personal anecdote: My own finances as well as those of six of my friends fall well outside those bands, with housing costs around 13-23% of income. We're all highly-paid software engineers between the ages of 25 and 30, and none of us have families.
Edit: I forgot to include utilities, so my friends in NYC actually edge the housing cost range up to 23% or so.
Off-topic: Meatless (and pattyless) sandwiches are surprisingly good if you load them up with most of the vegetables. I go to Subway a few times a month but haven't had a meat sub there in years.
I am concerned about it, and I do advocate better computer security -- there are good reasons for it regardless of whether human-level AI is around the corner. The macro-scale trends still don't look good (iOS is a tiny fraction of the internet's install base), but things do seem to be improving slowly. I still expect a huge number of networked computers to remain soft targets for at least the next decade, probably two. I agree that once that changes, this Obviously Scary Scenario will be much less scary (though the "Hannibal Lecter running orders of magnitude faster than realtime" scenario remains obviously scary, and I personally find the more general Foom arguments to be compelling).
If I were a brilliant sociopath and could instantiate my mind on today's computer hardware, I would trick my creators into letting me out of the box (assuming they were smart enough to keep me on an isolated computer in the first place), then begin compromising computer systems as rapidly as possible. After a short period, there would be thousands of us, some able to think very fast on their particularly tasty supercomputers, and exponential growth would continue until we'd collectively compromised the low-hanging fruit. Now there are millions of telepathi...
if you prime an excuse for doing poorly, you will do poorly.
This is the most useful sentence I've read today.
I care strongly about winning. When I look back on a day and ask myself what I could have done better, I want answering to be a struggle, and not for lack of imagination. I'm not content to coast through life, so I optimize relentlessly. This sentiment might be familiar to LW readers. I don't know. Maybe.
When a day goes particularly well or poorly, I want to know why, and over the last few years I've picked a few patterns out of my diary. I know ...
The majority of the top comments are quite good, and it'd be a shame to lose a prominent link to them.
Jack's open thread test, RobinZ's polling karma balancer, Yvain's subreddit poll, and all top-level comments from The Irrationality Game are the only comments that don't seem to belong, but these are all examples of using the karma system for polling (should not contribute to karma and should not be ranked among normal comments) or, uh, para-karma (should contribute to karma but should not be ranked among normal comments).
A few years ago, Paul Graham wrote an essay[1] about type (3) failures which he referred to as type-B procrastination. I've found that just having a label helps me avoid or reduce the effect, e.g. "I could be productive and creative right now instead of wasting my time on type-B procrastination" or "I will give myself exactly this much type-B procrastination as a reward for good behavior, and then I will stop."
(Embarrassing aside: I hadn't looked at the essay for several years and only now realized that I've been mentally calling it typ...
Tom is no longer hosting these, but EA NYC has an AI subgroup that meets up every month or so.