All of Ariel's Comments + Replies

Ariel1-1

For [1], could you point at some evidence, if you have any on hand? My impression from TAing STEM at an Ivy League school is that the homework load and the standards for its grading (as with the exams) are very light compared to what I remember from my previous experience at a foreign state university.

 

It wasn't at all what I expected, and it shaped my view (along with other signals of the university's implied preferences) that the main services the university offers its current and former students are networking opportunities and a signal of prestige.

2Quinn
I don't know what legible/transferable evidence would be. I've audited a lot of courses at a lot of different universities. Anecdote, sorry.
Ariel1-6

At the point of death, presumably, the person whose labour is seized does not exist. I think that's a good point to consider, since I also estimate that a significant amount of resistance to the idea of no inheritance assumes the dead person's will is a moral factor after their death.

 

I tend to agree that in such a world there would be more consumption rather than saving approaching old age, but I'm not sure whether that's a problem, or how big a problem it would be, and there are ways for governments to nudge that ratio through monetary policy.

 

I al... (read more)

8ulyssessword
Yes, I make that assumption. I believe I'm in very good company there, with both the general public and (many, but not all) decision theories/moral systems recognizing agreements that carry on past death. Why would you think otherwise?

I'm not quite sure what this post's hypothetical is supposed to be, but sure. Let's say that charitable donations are fully exempt from the tax.

People don't care about charity to any substantial extent. Donation rates are around 4%, whereas raising a child averages 15%ish per child for nearly half of a parent's career, never mind the non-financial investments in their wellbeing. It's not a complete restriction on giving, but it cuts out the most important one in many people's lives. Allowing charitable donations as an alternative to simple taxation does shift the needle a bit, but not enough to substantially alter the argument IMO.

No, they absolutely are not. Spending your money before your death is heavily constrained by uncertainty. The chance of sudden unexpected death between ages 20 and 64 totals about 1.5% (calculated from here), and the anti-loophole protections would catch more. Even outside of the worst-case scenarios, you will always die before a sufficiently optimistic estimate (and if you aren't optimistic enough? Have fun living out your last days while completely broke, I guess.)

To be clear, I was talking about the parents being good stewards by managing the wealth for the benefit of future generations (i.e. Bob, and perhaps his kids). I have opinions about how effective the government would be compared to the children, but those differences pale in comparison to tearing everything down to get the last drop of value out before you die and lose it all.
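The ~1.5% figure above comes from compounding annual mortality over ages 20-64. A minimal sketch of that arithmetic, using an assumed flat annual rate of sudden unexpected death (the real calculation would use actuarial life-table rates that vary by age):

```python
# Hypothetical flat annual probability of sudden, unexpected death.
# Real life-table rates vary by age; this is only to show the compounding.
annual_sudden_death_rate = 0.00034

p_survive = 1.0
for age in range(20, 65):  # 45 years, ages 20 through 64
    p_survive *= 1 - annual_sudden_death_rate

p_sudden_death = 1 - p_survive
print(f"{p_sudden_death:.2%}")  # ~1.5% under this assumed rate
```

The point carries through regardless of the exact rate: even a small per-year hazard, compounded over a working life, leaves a non-trivial chance of dying before any "spend it all first" plan completes.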
Ariel*90

Thank you, that was very informative.

I don't find the "probability of inclusion in the final solution" model very useful, compared to "probability of use in future work" (similarly for their expected-value versions), because

  1. I doubt that central problems are a good model for science or problem solving in general (or even in the navigation analogy).
  2. I see value in impermanent improvements (e.g. current status of HIV/AIDS in rich countries) and in future-discounting our value estimations.
  3. Even if a good description of a field as a central problem and satellite proble
... (read more)
1Elliot Callender
How much would you say (3) supports (1) on your model? I'm still pretty new to AIS and am updating from your model.

I agree that marginal improvements are good for fields like medicine, and perhaps for AIS too. E.g. I can imagine self-other overlap scaling to near-ASI, though I'm doubtful about stability under reflection. I'll put 35% on us finding a semi-robust solution sufficient to not kill everyone.

I think that the distribution of success probability of typical optimal-from-our-perspective solutions is very wide for both of the ways we describe generalizability; within that, we should weight generalizability more heavily than my understanding of your model does.

Earlier:

Is this saying people should coordinate in case valuable solutions aren't in the a priori generalizable space?
Ariel40

I see what you mean with regard to the number of researchers. I do wonder a lot about the amount of waste from multiple researchers unknowingly coming up with the same research (a different problem from the one you pointed out); the uncoordinated solution to that is to work on niche problems and ideas (which, coincidentally, seem less likely to individually generalize).

Could you share your intuition for why the solution space in AI alignment research is large, or larger than in cancer? I don't have an intuition about the solution space in alignment vs. a "ty... (read more)

2Elliot Callender
I was being careless / unreflective about the size of the cancer solution space, by splitting the solution spaces of alignment and cancer differently; nor do I know enough about cancer to make such claims. I split the space into immunotherapies, things which target epigenetics / stem cells, and "other", where in retrospect the latter probably holds the optimal solution. This groups many small problems with possibly weakly general solutions into a "bottleneck", as you mentioned.

Later:

For a given problem, generalizability is how likely a given sub-solution is to be part of the final solution, assuming you solve the whole problem. You might choose to model expected utility, if that differs between full solutions; I chose not to here because I natively separate generality from power.

I agree that "more general is better", with a linear or slightly superlinear association with success probability (superlinear because you can make plans which rely more heavily on the solution). We were already making different value statements about "weakly" vs "strongly" general; putting concrete probabilities / ranges might reveal that we agree about the baseline distribution of generalizability and disagree only on semantics. I.e. thresholds are only useful for communication.

Perhaps a better way to frame this is in ratios of tractability (how hard a solution is to identify and solve) and usefulness (conditional on the solution working) between solutions with different levels of generalizability. E.g. suppose some solution w is 5x less general than g. Then you'd expect, for the types of problems and solutions humans encounter, that w will be more than 5x as tractable * useful as g. I disagree in expectation, meaning for now I target most of my search at general solutions.

My model of the central AIS problems:

1. How to make some AI do what we want? (under immense functionally adversarial pressures)
  1. Why does the AI do things? (Abstractions / context-dependent heuristics; how do agents split reality given
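The ratio framing above can be sketched numerically. A toy multiplicative model (all numbers made up for illustration; "generality" stands in for the probability a sub-solution ends up in the final solution):

```python
# Toy expected-value model for choosing between a weakly and a strongly
# general solution. The multiplicative form and all numbers are assumptions
# for illustration, not part of the original argument.
def expected_value(generality, tractability, usefulness):
    # EV of working on a sub-solution under this simple model
    return generality * tractability * usefulness

g = expected_value(generality=0.25, tractability=1.0, usefulness=1.0)  # strongly general
w = expected_value(generality=0.05, tractability=6.0, usefulness=1.0)  # 5x less general, 6x as tractable

# If tractability * usefulness rises faster than generality falls,
# the weakly general solution wins under this model.
print(w > g)
```

This makes the disagreement concrete: the question is whether, empirically, the tractability * usefulness ratio typically exceeds the generality ratio for the problems humans actually face.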
Ariel215

Besides reiterating Ryan Greenblatt's objection to the assumption of a single bottleneck problem, I would also like to add that there is a priori value in having many weakly generalizable solutions even if only a few will have a posteriori value.

Designing only best-worst-case subproblem solutions while waiting for Alice would be like restricting strategies in a game to ones agnostic to the opponent's moves, or only founding startups that solve a modal person's problem. That's not to say that generalizability isn't a good quality, but I think the claim in the... (read more)

9Noosphere89
I would go further and say that one of the core problem-solving strategies used to attack hard problems, especially NP-complete/NP-hard problems, is to ask for less robustly generalizable solutions, and to be more willing to depend on assumptions that might work but also might not, because trying to find a robustly generalizable solution is too hard.

Indeed, one form of problem relaxation is to assume more structure in the problem you are studying, so that you get instances that can actually be solved in a reasonable amount of time.

I think there's something to the "focus on bottlenecks" point, but I also think that trying to be too general, instead of specializing and admitting your work might be useless, is a key reason why people fail to progress.
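A classic instance of "assume more structure" is maximum independent set: NP-hard on general graphs, but exactly solvable in O(n log n) on interval graphs (where vertices are intervals and edges are overlaps) by the earliest-finish greedy. A minimal sketch:

```python
# Maximum independent set is NP-hard in general, but assuming interval
# structure lets a simple greedy solve it exactly: repeatedly take the
# interval that finishes earliest among those not conflicting with
# previous picks.
def max_nonoverlapping(intervals):
    """Largest set of pairwise non-overlapping (start, end) intervals."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:  # no overlap with the previously chosen interval
            chosen.append((start, end))
            last_end = end
    return chosen

print(max_nonoverlapping([(1, 3), (3, 5), (5, 7), (2, 6)]))
# -> [(1, 3), (3, 5), (5, 7)]
```

The relaxation trade-off is exactly the one described above: the greedy is fast and provably optimal, but only under the assumed structure; drop the interval assumption and the guarantee evaporates.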
3Elliot Callender
I think general solutions are especially important for fields with big solution spaces / few researchers, like alignment. If you were optimizing for, say, curing cancer, it might be different (I think both the paradigm- and subproblem-spaces are smaller there). From my reading of John Wentworth's Framing Practicum sequence, implicit in his (and my) model is that solution spaces for these sorts of problems are a priori enormous. We (you and I) might also disagree on what a priori feasibility would count as "weakly" vs "strongly" generalizable; I think my transition is around 15-30%.