Predicting the distribution of outcomes is much more achievable and much more useful than determining the one true way some specific thing will evolve. It's achievable in principle, unlike making a specific pointlike prediction of where a molecular ensemble is going to end up given a starting configuration (a quantum-level dependency, perhaps, not merely a matter of chaos). And it's actually useful, in that it shows which configurations have tightly distributed outcomes and which don't, unlike that specific pointlike prediction.
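A toy illustration of what I mean, with the logistic map standing in for the ensemble (purely my own stand-in example, not a molecular simulation): the individual endpoints in the chaotic regime are unpredictable, but the spread of outcomes over a perturbed ensemble is easy to compute and immediately shows which starting configuration is tightly distributed and which isn't.

```python
import random

def settle(r, x, steps=500):
    """Iterate the logistic map x <- r*x*(1-x) and return the final state."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def outcome_spread(r, x0=0.4, noise=1e-6, samples=1000):
    """Range of final states over an ensemble of slightly perturbed starts."""
    finals = [settle(r, x0 + random.uniform(-noise, noise))
              for _ in range(samples)]
    return max(finals) - min(finals)

# Same tiny uncertainty in the starting point, very different outcome spreads:
print("r=2.5 (settles to a fixed point):", outcome_spread(2.5))  # ~0
print("r=3.9 (chaotic):", outcome_spread(3.9))                   # ~0.9
```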
I see. I figured U/A meant something like that. I think it's potentially useful to consider that case, but I wouldn't design a system entirely around it.
In terms of explaining the result, I think Schulze is much better. You can do that very compactly and with only simple, understandable steps. The best I can see doing with RP (Ranked Pairs) is more time-consuming, and the steps have the potential to be more complicated.
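To illustrate how compact the demonstration can be, here's a sketch of the standard beatpath computation (my own toy code, not tied to any particular implementation; the pairwise matrix at the bottom is a made-up 10-voter cycle):

```python
def schulze_winners(candidates, d):
    """Schulze / beatpath method.
    d[x][y] = number of voters preferring candidate x to candidate y.
    Step 1: keep only pairwise victories as direct path strengths.
    Step 2: strengthen paths through intermediates (widest-path pass).
    Step 3: winners beat or tie every rival on strongest-path strength."""
    p = {x: {y: (d[x][y] if d[x][y] > d[y][x] else 0)
             for y in candidates if y != x}
         for x in candidates}
    for k in candidates:
        for i in candidates:
            if i == k:
                continue
            for j in candidates:
                if j in (i, k):
                    continue
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    return [x for x in candidates
            if all(p[x][y] >= p[y][x] for y in candidates if y != x)]

# Hypothetical 10-voter cycle: A beats B 7-3, B beats C 6-4, C beats A 8-2.
d = {'A': {'A': 0, 'B': 7, 'C': 2},
     'B': {'A': 3, 'B': 0, 'C': 6},
     'C': {'A': 8, 'B': 4, 'C': 0}}
print(schulze_winners(['A', 'B', 'C'], d))  # ['C']
```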
As far as promotion is concerned, I haven't run into it; since it's so similar to RP, I think non-algorithmic factors like I mentioned above begin to be more important.
~~~~
The page you linked there has some undefined terms like u/a (it says it's defined in previous articles, but I don't se...
What are the improved Condorcet methods you're thinking of? I do recall seeing that Ranked Pairs and Schulze have very favorable strategy-backfire to strategy-works ratios in simulations, but I don't know for sure what you're thinking of. If those are it, then, if you approach it right, Schulze isn't that hard to work through and demonstrate an election result (Wikipedia now has an example).
95% of the sperm reaching the endpoint, then, if they're not independent.
And, as with sperm, it may be that there were many more insects than needed to fulfill their role. Like, if 20 sperm reach an egg, you can lose 95% of them and end up just as pregnant.
Not entirely true; low sperm counts are associated with low male fertility in part because sperm carry enzymes which clear the way for other sperm - so a single sperm isn't going to get very far.
That dialog reminds me of some scenes from Friendship is Optimal, only even more morally off-kilter than CelestAI, which is saying something.
I have no RSS monkeying going on, and Wei Dai and Kaj Sotala have the same font size as you or me.
Instructions unclear, comment stuck in ceiling fan?
That does not always produce a reduced fraction, of course. In order to do that, you need to go find a GCF just like before... but I agree, that should be presented as an *optimization* after teaching the basic idea.
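Something like this, as a sketch (the function name is just illustrative; `gcd` from the standard library does the GCF lookup):

```python
from math import gcd

def add_fractions(a, b, c, d):
    """Add a/b + c/d by cross-multiplying, then reduce with the GCF (gcd)."""
    num = a * d + c * b          # cross-multiply and add the numerators
    den = b * d                  # the common denominator is just the product
    g = gcd(num, den)            # the reduction is a separate, optional pass
    return num // g, den // g

# 1/6 + 1/10 -> 16/60 before reducing, 4/15 after dividing by gcd(16, 60) = 4
print(add_fractions(1, 6, 1, 10))
```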
Where is that second quote from? I can't find it here.
Mostly, yes. Feynman gets a lot of credit for making QED comprehensible, even though he didn't make it in the first place.
It seems like you're relying on the existence of exponentially hard problems to mean that taking over the world is going to BE an exponentially hard problem. But you don't need to solve every problem. You just need to take over the world.
Like, okay, the three-body problem is 'incomputable' in the sense that it has chaotically sensitive dependence on initial conditions in many cases. So… don't rely on specific behavior in those cases over long time horizons without the ability to make small adjustments to keep things on track.
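A toy version of that, with made-up masses and starting positions and a naive softened integrator (so take the numbers as illustrative only): nudge one coordinate by a part in a billion, integrate both copies, and see how far apart they end up; for a chaotic configuration the gap typically grows to many orders of magnitude beyond the nudge.

```python
import numpy as np

def accelerations(pos, masses, G=1.0, eps=1e-3):
    """Pairwise Newtonian gravity on point masses in the plane.
    A small softening term eps keeps close encounters from blowing up."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / (r @ r + eps) ** 1.5
    return acc

def integrate(pos, vel, masses, dt=1e-3, steps=30000):
    """Leapfrog (velocity Verlet) integration; returns final positions."""
    pos, vel = pos.copy(), vel.copy()
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
    return pos

masses = np.array([1.0, 1.0, 1.0])
pos0 = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
vel0 = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])

pos1 = pos0.copy()
pos1[2, 1] += 1e-9   # a one-part-in-a-billion nudge to one body

a = integrate(pos0, vel0, masses)
b = integrate(pos1, vel0, masses)
print("final separation between the two runs:", np.linalg.norm(a - b))
```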
If the AI can detect most of the ha...