Ariel
Ariel has not written any posts yet.

At the point of death, presumably, the person whose labour is seized no longer exists. I think that's a good point to consider, since I also estimate that a significant amount of the resistance to the idea of no inheritance rests on the assumption that the dead person's will remains a moral factor after their death.
I tend to agree that in such a world there would be more consumption rather than saving approaching old age, but I'm not sure whether that's a problem, or how big a problem it would be, and there are ways for governments to nudge that ratio through monetary policy.
I also don't agree that you're effectively limiting people's power to affect causes they care... (read more)
Thank you, that was very informative.
I don't find the "probability of inclusion in final solution" model very useful, compared to "probability of use in future work" (similarly for their expected value versions) because
Given my model, I think 20% generalizability is worth a person's time. Given yours, I'd say 1% is enough.
I see what you mean with regard to the number of researchers. I do wonder a lot about the amount of waste from multiple researchers unknowingly coming up with the same research (a different problem from the one you pointed out); the uncoordinated solution to that is to work on niche problems and ideas (which coincidentally seem less likely to individually generalize).
Could you share your intuition for why the solution space in AI alignment research is large, or larger than in cancer research? I don't have an intuition about the solution space in alignment vs. a "typical" field, but I strongly think cancer research has a huge space and can't think of anything... (read more)
Besides reiterating Ryan Greenblat's objection to the assumption of a single bottleneck problem, I would also like to add that there is a priori value in having many weakly generalizable solutions even if only a few will have a posteriori value.
Designing only best-worst-case subproblem solutions while waiting for Alice would be like restricting strategies in a game to ones that are agnostic to the opponent's moves, or only founding startups that solve a modal person's problem. That's not to say that generalizability isn't a good quality, but I think the claim in the article goes a little too far.
There's one common reason I sometimes undervalue weakly generalizable solutions (it's not in response to any claim in the article, but... (read more)
For [1], could you point at some evidence, if you have any on hand? My impression from TAing STEM courses at an Ivy League school is that the homework load and the standards for its grading (as with the exams) are very light, compared to what I remember from my previous experience at a foreign state university.
It wasn't at all what I expected, and (along with other signals of the university's implied priorities) it shaped my view that the main services the university offers its current and former students are networking opportunities and a signal of prestige.