I still think that if you want to know where X is on someone's TODO list, you should ask that instead of asking for their full TODO list. This feels nearly as wrong as asking for someone's top 5 movies of the year, instead of whether or not they liked Oppenheimer (when you want to know if they liked Oppenheimer).
I don't think this level of trickery is a good idea.
If you're working with someone honest, you should ask for the info you want. On the other hand, if you're working with someone who will obfuscate when asked "Are you working on X?", I don't see a strong reason to believe that they will give better info when instead asking about their top priorities.
In regard to Waymo (and Cruise, though I know less there) in San Francisco: at the last CPUC meeting on allowing Waymo to charge for driverless service, the vote was delayed. Last I checked, Waymo operates in more areas of SF and at more times of day than Cruise.
https://abc7news.com/sf-self-driving-cars-robotaxis-waymo-cruise/13491184/
I feel like Paul's right that the only crystal clear 'yes' is Waymo in Phoenix, and the other deployments are more debatable (due to scale and scope restrictions).
You gave the caveats, but I'm still curious to hear what companies you felt had this engineer vs manager conflict routinely about code quality. Mostly, I'd like to know so I can avoid working at those companies.
I suspect the conflict might be exacerbated at places where managers don't write code (especially if they've never written code). My managers at Google and Waymo have tended to be very supportive of code health projects. The discussion of how to trade off code debt and velocity is also very explicit. We've gotten pretty clear guidance in some quarte...
Fair point that GiveWell has updated their RFMF and increased their estimated cost per QALY.
I do think that 300K EAs doing something equivalent to eliminating the global disease burden is substantially more plausible than 66K doing so. This seems trivially true since more people can do more than fewer people. I agree that it still sounds ambitious, but saying that ~3X the people involved in the Manhattan project could eliminate the disease burden certainly sounds easier than doing the same with half the Manhattan project's workforce size.
This is gett...
I'm surprised that you think that direct work has such a high impact multiplier relative to one's normal salary. The footnote seems to suggest that you expect someone who could get a $100K salary trying to earn to give could provide $3M in impact per year.
I think GiveWell still estimates it can save a life for ~$6K on the margin, which is ~50 QALYs.
(1 life / $6K) × (50 QALY / life) × ($3M / EA-year) ≈ 25K QALY per EA-year
Which both seems like a very high figure and seems to imply that 66K EAs would be sufficient to do good equivalent to totally elimi...
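The back-of-the-envelope math above can be sanity-checked with a few lines (all figures come from the comment itself, not independent estimates):

```python
# Sanity check of the QALY-per-EA-year estimate above.
cost_per_life = 6_000            # GiveWell marginal cost per life saved, USD
qalys_per_life = 50              # QALYs gained per life saved
impact_per_ea_year = 3_000_000   # implied direct-work impact, USD per EA-year

qalys_per_ea_year = impact_per_ea_year / cost_per_life * qalys_per_life
print(qalys_per_ea_year)            # 25000.0 QALYs per EA-year
print(qalys_per_ea_year * 66_000)   # total QALYs/year if 66K EAs hit this figure
```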
I'm not sure that contagiousness is a good reason to believe that an (in)action is particularly harmful, outside of the multiplier contagiousness creates by generating a larger total harm. It seems clear that we'd all agree that murder is much worse than visiting a restaurant with a common cold, despite the fact that the latter is a contagious harm.
Although there is a good point that the analogy breaks down: a DUI doesn't cause harm during your job (assuming you don't drive for work), whereas being unvaccinated does cause expected harm to colleagues and customers.
I think you're correct that the difference between R0 and Rt is that Rt takes into account the proportion of the population already immune.
However, R0 is still dependent on its environment. A completely naive (uninfected) population of hermits living in caves hundreds of miles distant from one another has an R0 of 0 for nearly anything. A completely naive population of immunocompromised packed-warehouse rave attendees would probably have an R0 of 100+ for measles.
I don't know if there is another Rte type variable that tries to define the infectivenes...
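The textbook relationship described above, Rt as R0 scaled by the still-susceptible fraction, can be sketched as follows (the R0 value here is an illustrative round number, not a measured parameter):

```python
def effective_r(r0, immune_fraction):
    """Rt: R0 scaled by the fraction of the population still susceptible."""
    return r0 * (1 - immune_fraction)

# A measles-like R0 of 12 in a population where 90% are immune:
print(effective_r(12, 0.90))  # ~1.2, right around the epidemic threshold
```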
A one point improvement (measured on a ten point scale) feels like a massive change to expect. I like the guts to bet that it'll happen and change your mind otherwise, but I'm curious if you actually expected that scale of change.
For me, a one point change requires super drastic measures (e.g. getting two hours too little sleep for 3+ days straight). Although I may well be arbitrarily compressing too much of my life into the 6-9 range on the ten point scale.
Fair points!
I don't know if I'd consider JPAL directly EA, but they at least claim to conduct regular qualitative fieldwork before/after/during their formal interventions (source from Poor Economics, I've sadly forgotten the exact point but they mention it several times). Similarly, GiveDirectly regularly meets with program participants for both structured polls and unstructured focus groups if I recall correctly. Regardless, I agree with the concrete point that this is an important thing to do and EA/rationality folks are less inclined to collect unstructured qualitative feedback than its importance deserves.
I found it immensely refreshing to see valid criticisms of EA. I very much related to the note that many criticisms of EA come off as vague or misinformed. I really appreciated that this post called out specific instances of what you saw as significant issues, and also engaged with the areas where particular EA aligned groups have already taken steps to address the criticisms you mention.
I think I disagree on the degree to which EA folks expect results to be universal and generalizable (this is in response to your note at the end of point 3). As a concrete...
Picking a Schelling point is hard. Since the post focused on very recent results, I thought that a one year time horizon was an obvious line. Vanguard does note that the performance numbers I quoted are time weighted averages.
You are of course correct that over the long run you should expect closer to 5-8% returns from the stock market at large.
I currently have a roughly 50/50 split between VTIAX and VTSAX. I would of course not expect to continue to get 30% returns moving forward (I expect 5% return after inflation), but that is the figure I got when I selected a one year time horizon for showing my return on Vanguard.com.
If I instead compute from 01/2020 to 01/2021, I had a roughly 18% rate of return. I don't know how your Wealthfront is set up, but I'll note that I have a relatively aggressive split of 100% stocks and nothing in bonds.
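For context on what a 5% long-run real return implies, a quick compounding sketch (the principal is a hypothetical figure):

```python
def compound(principal, annual_return, years):
    """Value of a lump sum after compounding at a fixed annual return."""
    return principal * (1 + annual_return) ** years

# $10K at a 5% real return for 30 years:
print(round(compound(10_000, 0.05, 30)))  # ~43219, a bit over 4x in real terms
```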
You're claiming you've been correctly noticing good investment opportunities over a several month period. What has been your effective return over the last year (real return on all actual investments, not hypothetical)?
I feel like the strongest way to address the "If you are so smart why aren't you rich?" question is to show that you are in fact rich.
My Vanguard has gotten a 30.4% return over the last year. I have a very simple strategy: everything in the basic large funds (I can share the exact mix if it's relevant). Your advice is substantially harder to execute than this, so it would be great to know the actual relative return.
"You're claiming you've been correctly noticing good investment opportunities over a several-month period." This is not what I am arguing. I am arguing that you can check the EMH right now and notice it is false.
The actual answer to your question is unfairly favorable to me given market conditions. I put a relatively large percentage of money into crypto so my overall portfolio is up more than 200% over the last twelve months. This is not replicable going forward. Pretty much everything in crypto is up but Solana started spiking later than other coins because...
knot shirt, force under door, pull
twist knob back and forth for hours, slowly wearing through a hole
kick down door
rip out teeth, use to scratch through door
text owner
call police
call friends to open door
hire locksmith
windows
air duct
pry away floorboards
give up concern with the door
die
smash phone, cut through door
hack through electronic door with phone
order pizza to door
post criminal pictures from phone, include locations (SWAT self)
break apart phone, use wires and battery to burn hinge
break phone, use screen protector as 'credit card trick'
rope ladder using
These questions are way too 'Eureka!'/trivia for my taste. The first question relies on language specifics and then is really much more of a 'do you know the moderately weird sorting algorithms' question than an actual algorithms question. The second involves an oddly dilution-resistant test. The third again seems to test moderately well-known crypto trivia.
I've conducted ~60 interviews for Google and Waymo. To be clear, when I or most other interviewers I've seen say that a question covers 'algorithms' it means that it co...
Donated 5k. I think LessWrong is a big reason why I got into EA and quantified self (which had downstream effects, including my getting married), and it exposed me to many useful ideas.
I'm not sure about the marginal value of my donation, but I'm viewing this more as payment for services rendered. I think I owe LessWrong a fair amount, since I expect my counterfactual life to be much worse.