Humans often really really want something in the world to happen
This sentence is adjacent to my core concern regarding AI alignment, and to why I'm not particularly reassured by difficulty-of-superhuman-performance or return-on-compute arguments regarding AGI: we don't need superhuman AI to deal superhuman-seeming amounts of damage. Indeed, even today's "perfectly-sandboxed" models (in that, according to the most reliable publicly-available information, none of the most cutting-edge models are allowed direct read/write access to the systems which wo...
“Imagine a square circle, and now answer the following questions about it…”.
Just use the Chebyshev (aka maximum, or L∞) metric.
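For concreteness, a minimal sketch: the Chebyshev distance between two points is just the largest coordinate-wise gap.

```python
def chebyshev(p, q):
    """Chebyshev (maximum / L-infinity) distance: the largest coordinatewise difference."""
    return max(abs(a - b) for a, b in zip(p, q))

print(chebyshev((1, 5), (4, 3)))  # → 3
```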
...I think a somewhat-more-elegant toy model might look something like the following: Alice's object-level preferences are $a$, and Beth's are $b$. Alice's all-things-considered preferences are $A = a + \alpha \hat{b}$, and Beth's are $B = b + \beta \hat{a}$. Here, $\hat{a}$ & $\hat{b}$ represent Beth's current beliefs about Alice's desires and vice-versa, and the parameters $\alpha$ & $\beta$ represent how much Alice cares about Beth's object-level desires and vice-versa. The latter could arise from admiration of the other person, fear of pissing them off, or v
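A numeric sketch of one such toy model (the linear form and all names here are my assumptions): each person's all-things-considered preference is her own object-level preference plus a caring weight times her belief about the other person's preference.

```python
def all_things_considered(own, belief_about_other, caring_weight):
    """Toy linear model: own object-level preference plus a caring weight
    times (your belief about) the other person's object-level preference."""
    return own + caring_weight * belief_about_other

# Alice mildly favors an outcome (+1) but believes Beth strongly dislikes it (-3);
# caring about Beth with weight 0.5 flips Alice's all-things-considered preference:
print(all_things_considered(own=1.0, belief_about_other=-3.0, caring_weight=0.5))  # → -0.5
```

One nice feature of the linear form: setting the caring weight to zero recovers pure object-level preferences.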
trains but not dinosaurs
Did you get this combo from this video, or is this convergent evolution?
This argument is in no small part covered in
https://worrydream.com/refs/Lockhart_2002_-_A_Mathematician's_Lament.pdf
which is also available as a book with five times the page count for $10.
Then you should pay them 10 years of generous salary to produce a curriculum and write model textbooks. You need both of those. (If you let someone else write the textbook, the priors say that the textbook will probably suck, and then everyone will blame the curriculum authors. And you, for organizing this whole mess.) They should probably also write model tests.
The problem underg...
The negative examples are the things that fail to exist because there aren't enough people with that overlap of skills. The Martian for automotive repair might exist, but I haven't heard of it.
Zen and the Art of Motorcycle Maintenance?
Why "selection" could be a capacity which would generalize: albeit to a (highly-lossy) first approximation, most of the most successful models have been based on increasingly-general types of gamification of tasks. The more general models have more general tasks. Video can capture sufficient information to describe almost any action which humans do or would wish to take along with numerous phenomena which are impossible to directly experience in low-dimensional physical space, so if you can simulate a video, you can operate or orchestrate reality.
One way to identify counterfactually-excellent researchers would be to compare the magnitude of their "greatest achievement" against their secondary discoveries. The credit that parade leaders get often propagates their future success, so the people who do more with that boost are the ones who should be given extra credit for originality (their idea) as opposed to novelty (their idea first). Newton and Leibniz both had remarkably successful and diverse achievements, which suggests that they were relatively high in counterfactual impact in most...
Right, and the correct value is 37/72, not 19/36, because exactly half of the remaining 70/72 players lose (in the limit).
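The arithmetic checks out (assuming, as I read it, that 2/72 of the players have already won outright):

```python
from fractions import Fraction

already_won = Fraction(2, 72)           # assumption: players who win outright
remaining   = Fraction(70, 72)
win_prob = already_won + remaining / 2  # exactly half of the rest lose in the limit
print(win_prob)                         # → 37/72
```

Note that counting 3/72 outright winners instead would give 38/72 = 19/36, which is presumably where the original figure came from.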
I think that this post relies too heavily on a false binary. Specifically, describing all arguments as "good faith" or "bad faith" completely ignores the (to my intuition, far likelier) possibility that most arguments begin primarily in good faith (my guess is 90% or so, but maybe I just tend not to hold arguments with people below 70%), and that people then adjust according to their perception of their interlocutor(s), audience (if applicable), and the importance of the issue being argued. Common signals of arguments in particularly bad faith advanced by othe...
As linked by @turchin, Ayn Rand already took "Rational Egoism" and predecessors took "Effective Egoism." Personally, I think "Effective Hedonism" ought to be reserved for improving the efficiency of your expenditures (of time, money, natural resources, etc.) in generating hedons for yourself and possibly your circles of expanding moral concern (e.g. it's not ineffective hedonism to buy a person you care about a gift which they'll enjoy, and not entirely egocentric, and while you are allowed to care about your values in the world in this framewo...
I believe the availability side of this is what organizational-level calendars are for.
For the preference side, it's handy to share a physiological time zone (e.g. having similar availability and "best working hours" regardless of actual geography), to precommit to some minimum waiting period (e.g. "rolling an RNG with anyone who chimes in within 5 minutes" rather than "who's free?") to reduce the fastest-hand-raise problem, and, if you end up noticing a preference, to weight the RNG accordingly.
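A minimal sketch of that weighted-RNG step (all names here are hypothetical): collect everyone who volunteers within the waiting period, then draw one person, with weights if a preference has emerged.

```python
import random

def pick_volunteer(volunteers, weights=None, seed=None):
    """Draw one volunteer uniformly, or with the given preference weights."""
    rng = random.Random(seed)
    return rng.choices(volunteers, weights=weights, k=1)[0]

# Uniform draw among everyone who chimed in within the 5-minute window:
print(pick_volunteer(["Ana", "Bo", "Cy"]))
# Weighted draw: Cy has expressed a preference for this kind of task, so 2x weight:
print(pick_volunteer(["Ana", "Bo", "Cy"], weights=[1, 1, 2]))
```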