I am into something that can be called "meta-politics": institutional reform. That is, crafting decision-making algorithms to have good characteristics — incentives, participation, etc. — independent of the object-level goals of politics. I think this is "meta" in a different way than what you're talking about in this article; in short, it's prescriptive meta, not descriptive meta. And I think that makes it "OK"; that is, largely exempt from the criticisms in this article.
Would you agree?
I believe that Bitcoin is a substantial net negative for the world. I think that blockchain itself, even without proof of work, is problematic as a concept — with some real potential upsides, but also real possibly-intrinsic downsides even apart from proof of work. I'd like a world where all PoW-centric cryptocurrency was not a thing (with possible room for PoW as a minor ingredient for things like initial bootstrapping), and crypto in general was more an area of research than investment for now. I think that as long as >>90% of crypto is PoW, it's better (for me, at least) to stay away entirely rather than trying to invest in some upstart PoS coin.
#2. Note that even if ETH does switch in the future, investing in ETH today is still investing in proof-of-work. Also, as long as BTC remains larger and doesn't switch, I suspect there's likely to be spillover between ETH and BTC such that it would be difficult to put energy into ETH without to some degree propping up the BTC ecosystem.
I feel it's worth pointing out that all proof-of-work cryptocurrency is based on literally burning use-value to create exchange-value, and that this is not a sustainable long-term plan. And as far as I can tell, non-proof-of-work cryptocurrency is mostly a mirage or even a deliberate red herring / bait-and-switch.
I'm not an expert, but I choose not to participate on moral grounds. YMMV.
I realize that what I'm saying here is probably not a new idea to most people reading, but it seems clearly enough true to me that it bears repeating anyway.
If anyone wants links to further arguments in this regard, from me rather than Google, I'd be happy to provide.
Thanks for pointing that out. My arguments above do not apply.
I'm still skeptical. I buy anthropic reasoning as valid in cases where we share an observation across subjects and time (e.g., "we live on a planet orbiting a G2V-type star", "we inhabit a universe that appears to run on quantum mechanics"), but not in cases where each observation is unique (e.g., "it's the year 2021, and there have been about 107,123,456,789 (plus or minus a lot) people like me ever"). I am far less confident of this than I stated for the arguments above, but I'm still reasonably confident, and my expertise does still apply (I've thought about it more than just what you see here).
Our sense-experiences are "unitary" (in some sense which I hope we can agree on without defining rigorously), so of course we use unitary measure to predict them. Branching worlds are not unitary in that sense, so carrying over unitarity from the former to the latter seems an entirely arbitrary assumption.
A finite number (say, the number of particles in the known universe), raised to a finite power (say, the number of Planck time intervals before dark energy tears the universe apart), gives a finite number. No need for divergence. (I think both of those are severe overestimates of the actual possible branching, but they're reasonable as handwavy demonstrations that finite upper bounds exist.)
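As a handwavy version of that bound (the specific framing here is my own illustrative choice, nothing rigorous): if each of at most $T$ branching opportunities can distinguish at most $S$ outcomes, then

$$N_{\text{branches}} \;\le\; S^{\,T},$$

and with $S$ bounded by something like the $\sim 10^{80}$ particles in the observable universe and $T$ bounded by the (large but finite) number of remaining Planck intervals, $S^T$ is astronomically large but still finite; divergence never enters into it.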
I don't think the point you were arguing against is the same as the one I'm making here, though I understand why you think so.
My understanding of your model is that, simplifying away relativistic issues so that "simultaneous" has a single unambiguous meaning, total measure across quantum branches of a simultaneous time slice is preserved; and your argument is that, otherwise, we'd have to assign equal measure to each unique moment of consciousness, which would lead to ridiculous "Boltzmann brain" scenarios. I'd agree that your argument is convincing that different simultaneous branches have different weight according to the rules of QM, but that does not at all imply that total weight across branches is constant across time.
I didn't do this problem, but I can imagine I might have been tripped up by the fact that "hammer" and "axe" are tools and not weapons. In standard D&D terminology, these are often considered "simple weapons", distinct from "martial weapons" like the warhammer and battleaxe, but still within the category of "weapons".
I guess that the "toolish" abstractions might have tipped me off, though. And even if I had made this mistake, it would only have mattered for "simple-weapon" tools with a modifier.
This is certainly a cogent counterargument. Either side of this debate relies on a theory of "measure of consciousness" that is, as far as I can tell, not obviously self-contradictory. We won't work out the details here.
In other words: this is a point on which I think we can respectfully agree to disagree.
I think both your question and self-response are pertinent. I have nothing to add to either, save a personal intuition that large-scale fully-quantum simulators are probably highly impractical. (I have no particular opinion about partially-quantum simulators — even possibly using quantum subcomponents larger than today's computers — but they wouldn't change the substance of my not-in-a-sim argument.)
Yes, your restatement feels to me like a clear improvement.
In fact, considering it, I think that if algorithm A is "truly more intelligent" than algorithm B, then, letting f(x) be the compute it takes for B to perform as well as or better than A running with compute x, I'd expect f(x) could even be super-exponential in x. Exponential would be the lower bound; that's what you'd get from a mere incremental improvement in pruning. From this perspective, anything polynomial would be "just implementation", not "real intelligence".
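To be a bit more explicit about the function I mean (this formalization is mine, not the original post's): define

$$f(x) \;=\; \min\{\, y : \mathrm{perf}_B(y) \ \ge\ \mathrm{perf}_A(x) \,\},$$

i.e. the compute budget B needs in order to match A running on compute budget $x$. In the framing above, $f$ merely polynomial in $x$ would be "just implementation", $f$ exponential is roughly what an incremental pruning improvement buys, and "truly more intelligent" would show up as $f$ growing super-exponentially.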
Though I've posted 3 more-or-less-strong disagreements with this list, I don't want to give the impression that I think it has no merit. Most specifically: I strongly agree that "Institutions could be way better across the board", and I've decided to devote much of my spare cognitive and physical resources to gaining a better handle on that question specifically in regards to democracy and voting.
Third, separate disagreement: This list states that "vastly more is at stake in [existential risks] than in anything else going on". This seems to reflect a model in which "everything else going on" — including power struggles whose overt stakes are much much lower — does not substantially or predictably causally impact outcomes of existential risk questions. I think I disagree with that model, though my confidence in this is far, far less than for the other two disagreements I've posted.
Separate point: I also strongly disagree with the idea that "there's a strong chance we live in a simulation". Any such simulation must be either:
Strongly disagree about the "great filter" point.
Any sane understanding of our prior on how many alien civilizations we should have expected to see is structured (or at least has much of its structure) more or less like the Drake equation: a series of terms, each with more or less prior uncertainty around it, that multiply together to give an outcome. Furthermore, that point is, to some degree, fractal: the terms themselves can often and substantially (though not always and completely) be understood as products of sub-terms.
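For concreteness, the canonical form of the Drake equation, just as a reminder of the "product of uncertain terms" structure I mean:

$$N \;=\; R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L,$$

where each factor (star-formation rate, fraction of stars with planets, and so on down to civilization lifetime) carries its own prior uncertainty, often spanning orders of magnitude.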
By the Central Limit Theorem...
I'm not sure if this comment goes best here, or in the "Against Strong Bayesianism" post. But I'll put it here, because this is fresher.
I think it's important to be careful when you're taking limits.
I think it's true that "The policy that would result from a naive implementation of Solomonoff induction followed by expected utility maximization, given infinite computing power, is the ideal policy, in that there is no rational process (even using arbitrarily much computing power) that leads to a policy that beats it."
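(For clarity, the "naive implementation" I have in mind is the standard AIXI-style recipe, written very schematically and in my own notation; the real thing does a full expectimax over future action sequences rather than a one-shot expectation:

$$a_t^{*} \;=\; \arg\max_{a_t}\ \mathbb{E}_{h \,\sim\, M(\,\cdot \mid h_{<t},\, a_t)}\big[\,U(h)\,\big], \qquad M(h) \;\propto\; \sum_{p\,:\,p\ \text{outputs}\ h} 2^{-\ell(p)},$$

where $M$ is the Solomonoff mixture over environments, $\ell(p)$ is the length of program $p$, and $U$ is the utility function.)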
But say somebody offered you ...
PLACE is compatible with primaries; primaries would still be used in the US.
Thus, PLACE has all the same (weak) incentives for the local winner to represent any nonpartisan interests of the local district, along with strong incentives to represent the interests of their party × district combo. The extra (weaker) incentives for the other winners who have the district in their territory to represent the interests of their different party × district combos, filling out the matrix, make PLACE's representation strictly better.
Also worth noting that both AV and FPTP are winner-take-all methods, unlike the proportional methods I discuss here. The AV referendum question was essentially "do you want to take a disruptive half-step that lines you up for maybe, sometime in the future, actually fixing the problem?"; I'm not the only one who believes it was intentionally engineered to fail.
It seems that most of what you're talking about are single-winner reforms (including single-winner pathologies such as center squeeze). In particular, the RCV you're talking about is RCV1, single-winner, while the one I discuss in this article is RCV5, multi-winner; there are important differences. For discussing single-winner, I'd recommend the first two articles linked at the top; this article is about multi-winner reforms.
Personally, I think that the potential benefits of both kinds of reform are huge, but there are some benefits that only multi-winner ...
Formally speaking, nothing. Indirectly speaking: the candidate is a Schelling point for voters in those districts, especially if they aren't excited by that party's candidate in their own district. So those voters are a potential source of direct votes for that candidate, which helps them win not just directly, but also by moving them up in the preference order that gets filled in on ballots cast for other candidates.
This is not an article about the specific circumstances in the US. Suffice it to say that, while you make good points, I stand by my assessment that prospects for electoral reform in the US sometime in the next decade are better than they have been at any point in my 25 years of engagement with the issue. That doesn't mean hopes are high in an absolute sense, but they're noticeably higher than they were.
You're right: the sentence you quoted is only a small part of the necessary ingredients for reform. Finding a proposal that's minimally disruptive to incumbents (except those who owe their seat to gerrymandering) is key to getting something passed; and even then, it's a heavy lift.
The 4 methods I chose here are the ones I think have the best chances, from exactly those perspectives. It's still a long shot, but IMO realistic enough to be worth talking about.
You've described, essentially, a weighted-seats closed-list method.
List methods: meh. It's actually possible to be biproportional — that is, to represent both party/faction and geography pretty fairly — so reducing it to just party (and not geography or even faction) is a step down IMO. But you can make reasonable arguments either way.
Closed methods (party, not voters, decides who gets their seats): yuck. Why take power from the people to give it to some party elite?
Weighted methods: who knows, it's scarcely been tried. A few points:
You seem to be comparing Arrow's theorem to Lord Vetinari, implying that both are undisputed sovereigns? If so, I disagree. The part you left out about Arrow's theorem — that it only applies to ranked voting methods (not "systems") — means that its dominion is far more limited than that of the Gibbard-Satterthwaite theorem.
As for the RL-voting paper you cite: thanks, that's interesting. Trying to automate voting strategy is hard; since most voters most of the time are not pivotal, the direct strategic signal for a lea...
V 0.7.2: A terminology change. New terms: Retroactive Power, Effective Voting Equality, Effective Choice, Average Voter Effectiveness. (The term "effective" is a nod to Catherine Helen Spence.) The math is the same except for some ultimately-inconsequential changes in when you subtract from 1. Also, started adding a closed-list example from Israel; not done yet.
I am rewriting the overall "XXX: a xxx proportionality metric" section because I've thought of a more-interpretable metric. So, where it used to be "Representational fairness: an overall proportionality metric", now it will be "Vote wastage: a combined proportionality metric". Here's the old version, before I erase it:
Since we've structured RQ_d as an "efficiency" — 100% at best, 0% at worst — we can take each voter's "quality-weighted voter power" (QWVP) to be the sum of t...
Finding "Z-best" is not the same as finding the posterior over Z, and in fact differs systematically. In particular, because you're not being a real Bayesian, you're not getting the advantage of the Bayesian Occam's Razor, so you'll systematically tend to get lower-entropy-than-optimal (aka more-complex-than-optimal, overfitted) Zs. Adding an entropy-based loss term might help — but then, I'd expect that H already includes entropy-based loss, so this risks double-counting.
The above critique is specific and nitpicky...
Thank you.
Bit of trivia on Switzerland and voting methods: I've heard (but have not seen primary sources for) that in 1798 the briefly-independent city-state of Geneva used the median-based voting method we anachronistically know as "Bucklin", after its US-based reinventor. This was at the (posthumous) suggestion of the Marquis de Condorcet. Notably, that suggestion was not to use what we know of as "Condorcet" voting, as that would have been logistically too complex for the time.
Also, if I'm not mistaken, Swiss municipal councils us...
Rewritten to reflect Thomas Sepulchre's contribution. Which is awesome, by the way.
Or in other words...
V 0.5.1: the main changes since the previous version 0.5.0 are a complete rewrite of the "Tentative Answer" section based on a helpful comment by a reader here, with further discussion of that solution, including the new Shorter "Solution" Statement subsection. I also added a sketch to visualize the loss I'm using.
(Comment rewritten from scratch after comment editor glitched.)
This article is not about what I expected from the title. I've been thinking about "retroactively allocating responsibility", which sounds a lot like "assigning credit", in the context of multi-winner voting methods: which voters get credit for ensuring a given candidate won? The problem here is that in most cases no individual voter could change their vote to have any impact whatsoever on the outcome; in ML terms, this is a form of "vanishing gradient". The s...
Nice. Thank you!!!
This corresponds to the Shapley-Shubik index. I had previously discounted this idea but after your comment I took another look and I think it's the right answer. So I'm sincerely grateful to you for this comment.
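For anyone else reading along, here's a minimal sketch of what the Shapley-Shubik index computes, on a toy weighted-voting game. The weights and quota below are made up for illustration; in the voting-method application, the "running total passes the quota" test would be replaced by the relevant "this voter's addition makes the coalition sufficient to elect the candidate" test.

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index for a weighted majority game.

    weights: dict mapping voter -> voting weight
    quota:   total weight needed for a coalition to win
    Returns a dict mapping voter -> fraction of orderings in which
    that voter is pivotal (tips the running total past the quota).
    """
    voters = list(weights)
    pivot_counts = {v: 0 for v in voters}
    n_orderings = 0
    for order in permutations(voters):
        running = 0
        for v in order:
            running += weights[v]
            if running >= quota:  # v is the pivotal voter in this ordering
                pivot_counts[v] += 1
                break
        n_orderings += 1
    return {v: Fraction(c, n_orderings) for v, c in pivot_counts.items()}

# Toy example (made-up numbers): three voters with unequal weights, majority quota.
print(shapley_shubik({"A": 50, "B": 30, "C": 20}, quota=51))
# -> A gets 2/3 of the power, B and C get 1/6 each, even though the raw
#    weights are 50/30/20; that's the kind of non-obvious credit
#    assignment the index is designed to capture.
```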
This is very well-said, but I still want to dispute the possibility of "perfect alignment". In your clustering analogy: the very existence of clusters presupposes definitions of entities-that-correspond-to-points, dimensions-of-the-space-of-points, and measurements-of-given-points-in-given-dimensions. All of those definitions involve imperfect modeling assumptions and simplifications. Your analogy also assumes that a normal-mixture-model is capable of perfectly capturing reality; I'm aware that this is provably asymptotically true for an inf...
Doesn't matter until the switch is done.