All of CTVKenney's Comments + Replies

Very nice piece, and thank you for your service

"cloudy brother of bacteria" should probably be "cloudy broth of bacteria".

Do you understand mathematically what operation we're doing when we say two species or organisms have xx% similar genomes? Each genome is, I guess, several sequences of ATCG, but how do you get a percent similarity for two sequences of different lengths?

1Bolverk
Sequence 1 length: 3
Sequence 2 length: 6
Alignment length: 6
Identity: 3/6 (50.00%)
Similarity: 3/6 (50.00%)
Gaps: 3/6 (50.00%)

```
---AGC
   |||
AGCAGC
```

Like this. The difference between the lengths is counted as non-matching. https://en.vectorbuilder.com/tool/sequence-alignment.html
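To make the operation concrete, here is a minimal sketch of global (Needleman-Wunsch) alignment with assumed scores (match +1, mismatch -1, gap -1). Real tools like the one linked above use more elaborate scoring schemes, but the percent-identity arithmetic at the end is the same:

```python
# A minimal sketch of global (Needleman-Wunsch) alignment with assumed
# scores: match = +1, mismatch = -1, gap = -1. Percent identity is then
# matches / alignment length, exactly as in the output above.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] against b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back through the table to recover one optimal alignment.
    top, bot = [], []
    i, j = n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            top.append(a[i - 1]); bot.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            top.append(a[i - 1]); bot.append('-'); i -= 1
        else:
            top.append('-'); bot.append(b[j - 1]); j -= 1
    return ''.join(reversed(top)), ''.join(reversed(bot))

s1, s2 = needleman_wunsch("AGC", "AGCAGC")
matches = sum(x == y for x, y in zip(s1, s2))
print(s1)  # ---AGC (AGC--- is equally optimal)
print(s2)  # AGCAGC
print(f"Identity: {matches}/{len(s1)} ({100 * matches / len(s1):.2f}%)")
```

This is also why "percent similarity" between genomes is only defined relative to a scoring scheme: different choices of match, mismatch, and gap penalties can produce different optimal alignments and therefore different percentages.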

The VARIANCE of a random variable seems like one of those ad hoc metrics. I would be very happy for someone to come along and explain why I'm wrong on this. If you want to measure, as Wikipedia says, "how far a set of numbers is spread out from their average value," why use E[ (X - mean)^2 ] instead of E[ |X - mean| ], or more generally E[ |X - mean|^p ]? The best answer I know of is that E[ (X - mean)^2 ] is easier to calculate than those other ones.

3Adele Lopez
Maybe entropic uncertainty (conjectured by Everett as part of his "Many Worlds" thesis, and proved by Hirschman and Beckner) is along the lines of what you're looking for. It's a generalization of the Heisenberg uncertainty principle that applies even when the variance isn't well defined.
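For reference, one common statement of the result (my formulation, following the Hirschman/Beckner form; the constant depends on the Fourier convention $\hat\psi(\xi) = \int \psi(x)\, e^{-2\pi i x \xi}\, dx$):

$$H\big(|\psi|^2\big) + H\big(|\hat\psi|^2\big) \;\ge\; \log\frac{e}{2}, \qquad H(\rho) = -\int \rho(x) \log \rho(x)\, dx.$$

The differential entropies here can stay finite even for heavy-tailed densities (e.g. Cauchy-like distributions) whose variance diverges, which is why this applies where the variance-based Heisenberg inequality does not.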

Variance has more motivation than just being a measure of how spread out the distribution is. Variance has the property that if two random variables are independent, then the variance of their sum is the sum of their variances. By the central limit theorem, if you add up a sufficiently large number of independent and identically distributed random variables, the distribution you get is well-approximated by a distribution that depends only on mean and variance (and not on any other measure of spread-out-ness). Since it is the variance of the distributions you we... (read more)
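A quick numerical illustration of that additivity property (my own sketch; the two distributions are chosen arbitrarily). Variance adds across independent variables, while the mean absolute deviation E[ |X - mean| ] does not:

```python
# Variance is additive over independent random variables;
# mean absolute deviation (MAD) is not.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1_000_000)   # Var = 1
y = rng.uniform(size=1_000_000)       # Var = 1/12

def mad(z):
    """Mean absolute deviation from the mean, E[|Z - E[Z]|]."""
    return np.mean(np.abs(z - z.mean()))

print(np.var(x) + np.var(y), np.var(x + y))   # both ~1.083: additive
print(mad(x) + mad(y), mad(x + y))            # ~0.99 vs roughly 0.8: not additive
```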

Dear CraigMichael,

I am by no means a guru. It seems like you prefer Apollo Creed problems to Clubber Lang problems because you're more able to motivate yourself to do Apollo Creed problems. I feel the same way. I find it exciting to start new projects, and grueling to continue my existing projects. My advice:

If you need to solve a Clubber Lang problem, then in moments of clarity, you should establish habits/systems to solve the Clubber Lang problem that don't require you to be motivated on any given day.

 E.g. go for a jog even when you're not feeling ... (read more)

Money should be able to guarantee that, over several periods of play, you perform not too much worse than an actual expert. Here: https://www.cs.cmu.edu/afs/cs.cmu.edu/academic/class/15859-f11/www/notes/lecture16.pdf are lecture notes about an idealized CS version of this problem.
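Here is a toy sketch of the classic "prediction with expert advice" / multiplicative-weights setting that this kind of guarantee comes from (my own illustration, which may differ in details from the linked notes; the loss data is made up). Betting proportionally to weights, and shrinking the weight of experts who do badly, keeps your total loss close to that of the best single expert in hindsight:

```python
# Multiplicative weights over n experts for T rounds.
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """losses: (T, n) array with losses[t, i] in [0, 1] for expert i at round t."""
    T, n = losses.shape
    weights = np.ones(n)
    total = 0.0
    for t in range(T):
        probs = weights / weights.sum()      # spread your money over experts
        total += probs @ losses[t]           # expected loss this round
        weights *= (1 - eta) ** losses[t]    # penalize experts that lost
    return total

rng = np.random.default_rng(0)
losses = rng.random((1000, 5))               # made-up loss sequence
print(multiplicative_weights(losses))        # not much worse than...
print(losses.sum(axis=0).min())              # ...the best expert's total loss
```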

4johnswentworth
Cool piece! I don't think it's particularly relevant to the problems this post is talking about, since things like "how do we evaluate success?" or "what questions should we even be asking?" are core to the problem; we usually don't have lots of feedback cycles with clear, easy-to-evaluate outcomes. (The cases where we do have lots of feedback cycles with clear, easy-to-evaluate outcomes tend to be the "easy cases" for expert evaluation, and those methods you linked are great examples of how to handle the problem in those cases.) Drawing from some of the examples:

* Evaluating software engineers is hard because, unless you're already an expert, you can't just look at the code or the product. The things which separate the good from the bad mostly involve long-term costs of maintenance and extensibility.
* Evaluating product designers is hard because, unless you're already an expert, you won't consciously notice the things which matter most in a design. You'd need to e.g. a/b test designs on a fairly large user base, and even then you need to be careful about asking the right questions to avoid Goodharting.
* In the smallpox case, the invention of clinical trials was exactly what gave us lots of clear, easy-to-evaluate feedback on whether things work. Louis XV only got one shot, and he didn't have data on hand from prior tests.

With regard to the rootclaim link, I agree that it would be good to try to adapt what they've done to our own beliefs. However, I want to urge some caution about the actual calculation shown on that website. The event to which they give a whopping 81% probability, "the virus was developed during gain-of-function research and was released by accident," is a conjunction of two independent theses. We have to be very cautious about such statements, as pointed out in Rationality: A-Z, here: https://www.lesswrong.com/s/5g5TkQTe9rmPS5vvM/p/Yq6aA4M3JKWaQepPJ
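As a sanity check on any such conjunction (my own arithmetic, not rootclaim's): a conjunction is never more probable than its least probable conjunct,

$$P(A \wedge B) = P(A)\,P(B \mid A) \le \min\big(P(A),\, P(B)\big),$$

so an 81% conjunction requires each thesis to be at least 81% probable on its own. With hypothetical numbers, even $P(A) = P(B \mid A) = 0.9$ yields only $0.9 \times 0.9 = 0.81$ for the joint claim.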

I mean to include all the alternatives that involve the virus passing through a laboratory before spreading to humans, so all the options you list are included. There's nothing wrong with asking about the probability of a composite event.