This is a special post for quick takes by abstractapplic. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Has some government or random billionaire sought out Petrov's heirs and made sure none of them have to work again if they don't want to? It seems like an obviously sensible thing to do from a game-theoretic point of view.

It seems like an obviously sensible thing to do from a game-theoretic point of view.

Hmm, seems highly contingent on how well-known the gift would be? And even if potential future Petrovs are vaguely aware that this happened to Petrov's heirs, it's not clear that it would be an important factor when they make key decisions; if anything it would probably feel pretty speculative/distant as a possible positive consequence of doing the right thing. Especially if those future decisions are not directly analogous to Petrov's, such that it's not clear whether it's the same category. But yeah, mainly I just suspect this type of thing won't get enough attention to end up shifting important decisions in the future? Interesting idea, though -- upvoted.

. . . Is there a way a random punter could kick in, say, $100k towards Elon's bid? Either they end up spending $100k on shares valued at somewhere between $100k and $150k; or, more likely, they make the seizure of OpenAI $100k harder at no cost to themselves.

That's one millionth of the bid, 0.0001%. I expect the hassle of the paperwork to handle there being more than one bidder to be more trouble than it's worth, akin to declaring a dollar you picked up on the street on your income tax forms.

True. But if things were opened up this way, realistically more than one person would want to get in on it. (Enough to cover an entire percentage point of the bid? I have no idea.)

You want to be an insignificant, and probably totally illiquid, junior partner in a venture with Elon Musk, and you think you could realize value out of the shares? In a venture whose long-term "upside" depends on it collecting money from ownership of AGI/ASI? In a world potentially made unrecognizable by said AGI/ASI?

All of that seems... unduly optimistic.

I once saw an advert claiming that a pregnancy test was “over 99% accurate”. This inspired me to invent an only-slightly-worse pregnancy test, which is over 98% accurate. My invention is a rock with “NOT PREGNANT” scrawled on it: when applied to a randomly selected human being, it is right more than 98% of the time. It is also cheap, non-invasive, endlessly reusable, perfectly consistent, immediately effective and impossible to apply incorrectly; this massive improvement in cost and convenience is obviously worth the ~1% decrease in accuracy.

I think they meant over 99% when used on a non-randomly selected human who's bothering to take a pregnancy test. Your rock would run maybe 70% or so on that application.

This is a general problem with the measure of accuracy. In binary classification, with two events $A$ and $B$, "accuracy" is broadly defined as the probability of the "if and only if" biconditional, $P(A \leftrightarrow B)$. Which is equivalent to $P((A \land B) \lor (\neg A \land \neg B))$. It's the probability of both events having the same truth value, of either both being true or both being false.

In terms of diagnostic testing it is the probability of the test being positive if and only if the tested condition (e.g. pregnancy) is present.

The problem with this is that the number is strongly dependent on the base rates. If pregnancy is rare, say it has a base rate of 2%, the accuracy of the rock test (which always says "not pregnant", i.e. is always negative) is $P(A \land B) + P(\neg A \land \neg B) = 0 + 98\% = 98\%$.
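To make the base-rate dependence concrete, here is a minimal sketch (assuming, as above, a ~2% base rate in a random sample of humans, and taking the parent comment's guess of roughly 30% among people who actually take a pregnancy test) showing that an always-negative test's accuracy is just one minus the base rate:

```python
# Accuracy of an always-negative test (the "rock test") as a function of base rate.
# accuracy = P(test positive AND condition present) + P(test negative AND condition absent)
# The rock never says "pregnant", so the first term is zero and accuracy = 1 - base_rate.

def always_negative_accuracy(base_rate: float) -> float:
    """Accuracy of a test that always returns 'negative'."""
    true_positives = 0.0              # the rock never flags anyone as pregnant
    true_negatives = 1.0 - base_rate  # everyone who isn't pregnant is classified correctly
    return true_positives + true_negatives

print(always_negative_accuracy(0.02))  # randomly selected humans (assumed ~2% pregnant) -> 0.98
print(always_negative_accuracy(0.30))  # people taking a pregnancy test (assumed ~30%)   -> 0.7
```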

Two better measures are Pearson/Phi correlation (which ranges from -1 to +1), and the odds ratio, which ranges from 0 to $\infty$, but which can also be scaled to the range [-1, +1] and is then called Yule's Y.

Both correlation and Yule's Y are 0 when the two events are statistically independent, but they differ in when they assume their maximum and minimum values. Correlation is 1 if both events always co-occur (imply each other), and -1 if they never co-occur (each event implies the negation of the other). Yule's Y is 1 if at least one event implies the other, or the negation of at least one event implies the negation of the other. It is -1 if at least one event implies the negation of the other, or the negation of at least one event implies the other.

This also means that correlation is still dependent on the base rates (e.g. marginal probability of the test being positive, or of someone being pregnant) because the measure can only be maximal if both events have equal base rates (marginal probability), or minimal if the base rate of one event is equal to the base rate of the negation of the other. This is not the case for odds ratio / Yule's Y. It is purely a measure of statistical dependence. Another interesting fact: The correlation of two events is exactly equal to Yule's Y if both events have base rates of 50%.
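For concreteness, here is a small sketch comparing the measures on a single made-up 2×2 contingency table (the counts are purely illustrative, chosen so that the condition is rare):

```python
import math

# Hypothetical 2x2 contingency table of counts (illustrative numbers only):
#                     condition present   condition absent
#   test positive            a = 10             b = 5
#   test negative            c = 15             d = 970
a, b, c, d = 10, 5, 15, 970
n = a + b + c + d

# Accuracy: probability that test and condition agree.
accuracy = (a + d) / n

# Phi (Pearson) correlation of two binary variables, in [-1, +1].
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

# Odds ratio, in [0, infinity); unaffected by the marginal base rates.
odds_ratio = (a * d) / (b * c)

# Yule's Y rescales the odds ratio to [-1, +1] via (sqrt(OR) - 1) / (sqrt(OR) + 1).
yule_y = (math.sqrt(odds_ratio) - 1) / (math.sqrt(odds_ratio) + 1)

print(f"accuracy={accuracy:.3f}  phi={phi:.3f}  OR={odds_ratio:.1f}  Yule's Y={yule_y:.3f}")
# -> accuracy≈0.980  phi≈0.507  OR≈129.3  Yule's Y≈0.838
```

Here the accuracy comes out high mostly because the condition is rare, while phi and Yule's Y respond only to how strongly the test and the condition actually depend on each other.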

Enemy HP: 72/104 

Fractionalist cast Reduce-4!

It succeeded!

Enemy HP: 18/26

"What important truth do you believe, which most people don't?"

"I don't think I possess any rare important truths."

How could we test the inverse? How do we test whether others believe in rare important truths? Because obviously, if they are rare, that implies we probably don't share them, and therefore don't believe they are truthful or important.
"Mel believes in the Law of Attraction; he believes it is very important even though it's a load of hooey."

I suppose there are "known unknowns" and things which we know are significant but are kept secret (e.g. the Google PageRank algorithm; or, in 2008, the 'appetite' for debt in European bond markets was a very important belief, and those who believed the right level avoided disaster): we believe there is something to believe, but don't know what the sine-qua-non belief is.
 

I should probably get into the habit of splitting my comments up. I keep making multiple assertions in a single response, which means when people add (dis)agreement votes I have no idea which part(s) they're (dis)agreeing with.

I used to implicitly believe that when I have a new idea for a creative(/creative-adjacent) project, all else being equal, I should add it to the end of my to-do list (FIFO). I now explicitly believe the opposite: that the fresher an idea is, the sooner I should get started making it a reality (LIFO). This way:

  • I get to use the burst of inspired-by-a-new-idea energy on the project in question.
  • I spend more time working on projects conceived by a me with whom I have a lot in common.

The downsides are:

  • Some old ideas will end up near the bottom of the pile until I die or the Singularity happens. (But concepts are cheaper than execution, and time is finite.)
  • I get less time to polish ideas in my head before committing pen to paper. (But maybe that's good?)

Thoughts on this would be appreciated.

Personally I use a mix of heuristics based on how important the new idea is, how quickly it could be executed, and how painful it would be to execute in the future once the excitement dies down.

The more ADHD you are, the stronger the "burst of inspired-by-a-new-idea energy" effect will be, so that should weigh into the decision accordingly.
