All of homunq's Comments + Replies

homunq10

Is "do whatever action you predict to maximize the electricity in this particular piece of wire" really "general"? You're basically claiming that the more intelligent someone is, the more likely they are to wirehead. With humans, in my experience, and for a loose definition of "wirehead", the pattern seems to be the opposite; and that seems to me to be solid enough in terms of how RL works that I doubt it's worth the work to dig deep enough to resolve our disagreement here.

homunq10

I've posted on LW before, but I posted again here after a long hiatus because of recent AI news, and entirely unaware of the good heart thing; then made several comments after reading the original post, but thinking it was a joke. Now I understand why the site was so strangely active.

homunq30

"An animal looking curiously in the mirror, but the reflection is a different kind of animal; in digital style."

"A cat looking curiously in the mirror, but the reflection is a different kind of animal; in digital style."

"A cat looking curiously in the mirror, but the reflection is a dog; in digital style."

Curious to see how it handles modified-reflection and lack-of-specificity.

2Dave Orr
Posted. Interesting sequence, clearly shows some of the limits of its understanding.
homunq30

Another thing whose True Name is probably a key ingredient for alignment (and which I've spent a lot of time trying to think rigorously about): collective values.

Which is interesting, because most of what we know so far about collective values is that, for naive definitions of "collective" and "values", they don't exist. Condorcet, Arrow, Gibbard and Satterthwaite, and (crucially) Sen have all helped show that.

I personally don't think that means that the only useful things one can say about "collective values" are negative results like the ones above. I th... (read more)

1MSRayne
I'm pretty sure there is no such thing as collective values. Individual egregores (distributed agents running on human wetware, like governments, religions, businesses, etc) can have coherent values, but groups of people in general do not. Rather, there are more and less optimal (in the sense of causing minimal total regret - I'm probably thinking of Pareto optimality here) mechanisms for compromising. The "collective values" that emerge are the result of the process, not something inherent before the process begins, and further, different processes will lead to different "collective values", the same way that different ways of thinking and making decisions will lead a person to prioritize their various desires / subagents differently. It does look, though, as if some mechanisms for compromising work better than others. Markets and democracies work very differently, but nearly everyone agrees either one is better than dictatorship.
homunq130

I think this post makes sense given the premises/arguments that I think many people here accept: that AG(S)I is either amazingly good or amazingly bad, and that getting the good outcome is a priori vastly improbable, and that the work needed to close the gap between that prior and a good posterior is not being done nearly fast enough.

I don't reject those premises/arguments out of hand, but I definitely don't think they're nearly as solid as I think many here do. In my opinion, the variance in goodness of reasonably-thinkable post-AGSI futures is mind-boggl... (read more)

3johnlawrenceaspden
We might. High dimensional space, tiny target area for anything particularly emotionally salient. Like finding a good book in the Library of Babel. Mostly the universe gets turned into random rubbish.
homunq10

Sure, humans are effectively ruthless in wiping out individual ant colonies. We've even wiped out more than a few entire species of ant. But our ruthfulness about our ultimate goals — well, I guess it's not exactly ruthfulness that I'm talking about...

...The fact that it's not in our nature to simply define an easy-to-evaluate utility function and then optimize, means that it's not mere coincidence that we don't want anything radical enough to imply the elimination of all ant-kind. In fact, I'm pretty sure that for a large majority of people, there's no ut... (read more)

homunq10

I guess we're using different definitions of "friendly/unfriendly" here. I mean something like "ruthlessly friendly/unfriendly" in the sense that humans (neurotic as they are) aren't. (Yes, some humans appear ruthless, but that's just because their "ruths" happen not to apply. They're still not effectively optimizing for future world-states, only for present feels.)

I think many of the arguments about friendly/unfriendly AI, at least in the earlier stages of that idea (I'm not up on all the latest) are implicitly relying on that "ruthless" definition of (un... (read more)

2jimmy
It's possible that it "wouldn't use all its potential power" in the same sense that a high IQ neurotic mess of a person wouldn't use all of their potential power either if they're too poorly aligned internally to get out of bed and get things done. And while still not harmless, crazy people aren't as scary as coherently ruthless people optimized for doing harm. But "People aren't ruthless" isn't true in any meaningful sense. If you're an ant colony, and the humans pave over you to make a house, the fact that they aren't completely coherent in their optimization for future states over feelings doesn't change the fact that their successful optimization for having a house where your colony was has destroyed everything you care about. People generally aren't in a position of that much power over other people such that reality doesn't strongly suggest that being ruthful will help them with their goals. When they do perceive that to be the case, you see an awful lot of ruthless behavior. Whether the guy in power is completely ruthless is much less important than whether you have enough threat of power to keep him feeling ruthful towards your existence and values. When you start positing superintelligence, and it gets smart enough that it actually can take over the world regardless of what stupid humans want, that becomes a real problem to grapple with. So it makes sense that it gets a lot of attention, and we'd have to figure it out even if it were just a massively IQ and internal-coherence boosted human. With respect to the "smart troubled person, dumb therapist" thing, I think you have some very fundamental misconceptions about human aims and therapy. It's by no means trivial to explain in a tangent of a LW comment, but "if the person knew how to feel better in the future, they would just do that" is simply untrue. We do "optimize for feelings" in a sense, but not that one. People choose their unhappiness and their suffering because the alternative is subjectively worse (a
homunqΩ010

Why does the AI even "want" failure mode 3? If it's a RL agent, it's not "motivated to maximize its reward", it's "motivated to use generalized cognitive patterns that in its training runs would have marginally maximized its reward". Failure mode 3 is the peak of an entirely separate mountain from the one RL is climbing, and I think a well-designed box setup can (more-or-less "provably") prevent any cross-peak bridges in the form of cognitive strategies that undermine this. 

That is to say: yes, it can (or at least, it's not provable that it can't) ... (read more)

4Donald Hobson
Consider the strategy "do whatever action you predict to maximize the electricity in this particular piece of wire in your reward circuitry". This is a very general cognitive pattern that would maximize reward in the training runs. Now there are many different cognitive patterns that maximize reward in the training runs. But this is one simple one, so it's at least reasonably plausible that it is used. What I was thinking when I wrote it was more like this: when someone proposes a fancy concrete and vacuum box, they are claiming that the fancy box is doing something. None of your "the AI is an RL agent, so it shouldn't want to break out" works any differently whether the box is a fancy concrete vacuum faraday cage, or just a cardboard box. A fancy box is only useful if there is something the AI wants, but is unable to do. To an RL agent, if it hasn't tried to break out of the box, then breaking out of the box is a case of generalization. For that matter, other forms of wireheading are also a case of generalization.
homunq40

One way of dividing up the options is: fix the current platform, or find new platform(s). The natural decay process seems to be tilting towards the latter, but there are downsides: the diaspora loses cohesion, and while the new platforms obviously offer some things the current one doesn't, they are worse than the current one in various ways (it's really hard to be an occasional lurker on FB or tumblr, especially if you are more interested in the discussion than the "OP").

If the consensus is to fix the current platform, I suggest trying the simple... (read more)

homunq00

I disagree. I think the issue is whether "pro-liberty" is the best descriptive term in this context. Does it point to the key difference between things it describes and things it doesn't? Does it avoid unnecessary and controversial leaps of abstraction? Are there no other terms which all discussants would recognize as valid, if not ideal? No, no, and no.

0Lumifer
Would you like to suggest a better term for the subject of this subthread, then?
homunq00

Whether something is a defensible position, and whether it should be embedded in the very terms you use when more-neutral terms are available, are separate questions.

If you say "I'm pro-liberty", and somebody else says "no you're not, and I think we could have a better discussion if you used more specific terms", you don't get to say "why won't you accept me at face value".

0Lumifer
Oh, but I do :-) The issue in this subthread is whether the call for liberty is a terminal goal in itself or is it a proxy for some other, hidden goal (here -- laissez-faire capitalism).
homunq00

When you say "Nothing short of X can get you to Y", the strong implication is that it's a safe bet that X will at least not move you away from Y, and sometimes move you toward it. So OK, I'll rephrase:

The OP suggests that colonization is in fact a proven way to turn at least some poor countries into more productive ones.

homunq00

Note that my post just above was basically an off-the-cuff response to what I felt was a ludicrously wrong assumption buried in the OP. I'm not an expert on African history, and I could be wrong. I think that I gave the OP's idea about the level of refutation it deserved, but I should have qualified my statements more ("I'd guess..."), so I certainly didn't deserve 5 upvotes for this (5 points currently; I deserve 1-3 at most).

homunq80

I think that it's worth being more explicit in your critique here.

The OP suggests that colonization is in fact a proven way to turn poor countries into productive ones. But in fact, it does the opposite. Several parts of Africa were at or above average productivity before colonization¹, and well below after; and this pattern has happened at varied enough places and times to be considered a general rule. The examples of successful transitions from poor countries to rich ones—such as South Korea—do not involve colonization.

¹Note that I'm considering the tria... (read more)

-2Salemicus
Nope. Provide a quote or retract. What I actually said was that nothing short of colonisation is known to work.
5Jiro
If you redefine colonization, you can get the results you wish. Also, South Korea (and Taiwan) were colonized by Japan and while their main success happened after the end of colonization, if you're going to blame Africa's after-colonization state on colonization, you need to credit these countries' after-colonization state to colonization as well.
6VoiceOfRa
Evidence? Seriously what on earth are you talking about?
427chaos
Also, it's reprehensible. I would probably be willing to accept reprehensible policies, very reluctantly, if they actually did result in productive countries. But when neither means nor ends are good, and the results of past failed attempts still cause massive suffering today, giving even passing credit to colonialist ideas is an enormous red flag. It's in the same realm as Holocaust denial imo. I don't think OP was seriously endorsing colonialism, but I'm also not highly confident he wasn't; neoreactionaries frequent this site, after all. Just so the intensity of my position is clear, the hopefully-not-an-endorsement of colonialism alone wouldn't have motivated me to downvote, I'm usually pretty good at avoiding that armchair online-activist failure mode, but I found the main argument pretty weak as well. Had either one of those flaws not been present, I'd have been willing to overlook the other.
homunq150

I think you can make this critique more pointed. That is: "pro-liberty" is flag-waving rhetoric which makes us all stupider.

I dislike the "politics is a mind-killer" idea if it means we can't talk about politically touchy subjects. But I entirely agree with it if it means that we should be careful to keep our language as concrete and precise as possible when we approach these subjects. I could write several paragraphs about all the ways that the term "pro-liberty" takes us in the wrong direction, but I expect that most of you can figure all that out for yourselves.

0Lumifer
Do you believe that existing human societies can evolve (or made to move) in the direction of more liberty or in the direction of less liberty? Is this axis meaningful to you?
2[anonymous]
Basically, the use of a flagrant applause light to disguise confused thinking in this article is a big red flag. He is advocating a peculiar view of economic development held in some very right-wing economics circles, but economic development is not altruistic in the sense that the Effective Altruism movement means: it does not generate utility for any particular human individuals who happen to be suffering now, and in fact often generates very few "hedons" per marginal dollar invested, because the rich are well into diminishing happiness returns to marginal wealth increases.
homunq10

It appears that you need to be logged in from FB or twitter to be fully non-guest. That seems like a... strange... choice for an anti-akrasia tool.

(Tangentially related to above, not really a reply)

1Lachouette
As tkadlubo says, most people choose to visit as guests. Otherwise you are free to create an account on tinychat.com and visit the chat after logging in, which is what I do. It allows you to PM people and potentially become a moderator, neither of which are necessary for just participating in the pomodoros.
8tkadlubo
You don't need to use your Twitter or Facebook credentials. You even don't want to, since tinychat will spam your feeds. Logging in as tinychat guest is the status quo for pretty much everyone on the LWSH.
homunq00

Fair enough. Thanks. Again, I agree with some of your points. I like blemish-picking as long as it doesn't require open-ended back-and-forth.

homunq20

You're raising some valid questions, but I can't respond to all of them. Or rather, I could respond (granting some of your arguments, refining some, and disputing some), but I don't know if it's worth it. Do you have an underlying point to make, or are you just looking for quibbles? If it's the latter, I still thank you for responding (it's always gratifying to see people care about issues that I think are important, even if they disagree); but I think I'll disengage, because I expect that whatever response I give would have its own blemishes for you to find.

In other words: OK, so what?

1Lumifer
Some people find blemish-finding services valuable, some don't :-)
homunq00

Full direct democracy is a bad idea because it's incredibly inefficient (and thus also boring/annoying, and also subject to manipulation by people willing to exploit others' boredom/annoyance). This has little or nothing to do with whether people's preferences correlate with their utilities, which is the question I was focused on. In essence, this isn't a true Goldilocks situation ("you want just the right amount of heat") but rather a simple tradeoff ("you want good decisions, but don't want to spend all your time making them").

As to t... (read more)

0Lumifer
No, I don't think so. It is a bad idea even in a society technologically advanced enough to make it efficient, and even if it's invoked infrequently enough not to be annoying. People's preferences are many, multidimensional, internally inconsistent, and dynamic. I am not quite sure what you want to correlate to a single numerical value of "utility". Why are you considering only these two options? The connection is that what is a "better" voting system depends on the context, context that includes things like rule of law, etc.
homunq00

(small note: the sentence you quote from me was unclear. "because" related to "presume", not "saying". But your response to what I accidentally said is still largely cogent in relation to what I meant to say, so the miscommunication isn't important. Still, I've corrected the original. Future readers: Lumifer quoted me correctly.)

homunq20

The model is not easy to subject to full, end-to-end testing. It seems reasonable to test it one part at a time. I'm doing the best I can to do so:

  • I've run an experiment on Amazon Mechanical Turk involving hundreds of experimental subjects voting in dozens of simulated elections to probe my strategy model.

  • I'm working on getting survey data and developing statistical tools to refine my statistical model (mostly, posterior predictive checks; but it's not easy, given that this is a deeper hierarchical model than most).

  • In terms of the utilitarian assumpt

... (read more)
0Lumifer
Democracy is complicated. For a simple example, consider full direct democracy: instant whole-population referendums on every issue. I am not sure anyone considers this a good idea -- successful real-life democratic systems (e.g. the US) are built on limited amounts of democracy which is constrained in many ways. Given this, democracy looks to be a Goldilocks-type phenomenon where you don't want too little, but you don't want too much either. And, of course, democracy involves much more than just voting -- there are heavily... entangled concepts like the rule of law, human rights, civil society, etc.
homunq10

I presume you're saying that utility-based simulations are not credible. I don't think you're actually trying to say that they're not numerical estimates. So let me explain what I'm talking about, then say what parts I'm claiming are "credible".

I'm talking about monte-carlo simulations of voter satisfaction efficiency. You use some statistical model to generate thousands of electorates (that is, voters with numeric utilities for candidates); a media model to give the voters information about each other; and a strategy model to turn information, u... (read more)
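
A toy version of the loop described here can be sketched in a few lines. This is an illustrative simplification under stated assumptions — impartial-culture random utilities, honest plurality voting, and no media or strategy model — not the actual simulation code being discussed:

```python
import random

def simulate_vse(n_voters=99, n_cands=3, n_elections=2000, seed=0):
    """Toy Monte Carlo estimate of voter satisfaction efficiency (VSE)
    for honest plurality voting. Utilities are i.i.d. uniform draws
    (an "impartial culture" assumption, purely for illustration)."""
    rng = random.Random(seed)
    total_won = total_best = total_avg = 0.0
    for _ in range(n_elections):
        # Each voter draws an independent utility for each candidate.
        utils = [[rng.random() for _ in range(n_cands)] for _ in range(n_voters)]
        # Honest plurality: each voter votes for their top candidate.
        tallies = [0] * n_cands
        for u in utils:
            tallies[u.index(max(u))] += 1
        winner = tallies.index(max(tallies))
        # Social utility of each candidate: sum over all voters.
        social = [sum(u[c] for u in utils) for c in range(n_cands)]
        total_won += social[winner]
        total_best += max(social)
        total_avg += sum(social) / n_cands
    # VSE scale: 1.0 = always elects the utility maximizer,
    # 0.0 = no better than picking a candidate at random.
    return (total_won - total_avg) / (total_best - total_avg)

print(round(simulate_vse(), 3))
```

Swapping in a different ballot rule (approval, ranked methods) or a non-uniform electorate model only changes the inner loop, which is what makes this kind of simulation convenient for comparing voting systems.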

1Lumifer
Actually, no, that's not what I mean. I have no problems with numerical estimates in general. What I mean by "credible", in this context, is "shown to be relevant to real-life situations" and "supported by empirical data". You've constructed a model. You've played with this model and have an idea of how it behaves in different regimes. That's all fine. But then you imply that this model reflects the real world and it's at this point that I start to get sceptical and ask for evidence. Not evidence of how your model works, but evidence that the map matches the territory.
homunq10

[ ] Wow, these people are smart.
[ ] Wow, these people are dumb.
[ ] Wow, these people are freaky.
[ ] That's a good way of putting it, I'll remember that.

(For me, it's all of the above. "Insight porn" is probably the biggest, but it doesn't dominate.)

homunq10

Electology is an organization dedicated to improving collective decision making — that is, voting. We run on a shoestring; somewhere in the lowish 5 digits $ per year. We've helped get organizations such as the German Pirate Party and the various US state Libertarian Parties to use approval voting, and gotten bills brought up in several states (no major victories so far, but we're just starting.)

Is a better voting system worth it, even if most people still vote irrationally? I'd say emphatically yes. Plurality voting is just a disaster as a system, filled w... (read more)

-2Lumifer
The first three words here are in contradiction with the last three words... :-/
homunq20

In terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust.

Improvements to collective decision making seem to be potentially an even bigger win. I mean, voting reform; the kind of thing advocated by Electology. Disclaimer: I'm a board member.

Why do I think that? Individual human decisionmaking has already been optimized by evolution. Sure, that optimization doesn't fit perfectly with a modern need for rationality, but... (read more)

0[anonymous]
Disclaimer: I now support you. What do you need done, what's your vision, and where do you work? Making democracy work better has been a pet drive of mine for an extremely long time. EDIT: Upon your website loading and my finding that you push Approval Voting, I am now writing in about volunteering.
homunq00

One idea for measurement in a randomized trial:

In order to apply, you have to list 4 people who would definitely know how awesome you're being a year from now, and give their contact info. Then, choose 1 of those people 6 months later and 1 person a year later and ask them how awesome the person is being. When you ask, include a "rubric" of various stories of various awesomeness levels, in which the highest levels are not always just $$$ but sometimes are. Ask the people you're asking to please not contact the person specifically to check awesome... (read more)

homunq00

I think you've misunderstood the question. As I understand it, it's not "is the distribution of startup values a power law" but "do startups distribute their profits to employees according to a power law".

0Vaniver
I hear that ownership is distributed roughly so that founders get 1/f, and early employees get 1/n^2, where f is the number of founders and n is the employee number (counting the first non-founder as employee f+1). (Both are obviously proportional; there's some constant term in there.)
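
The folk rule Vaniver quotes can be made concrete with a small sketch. The normalization step (dividing so the shares sum to 1) is my own assumption about what the "constant term" means, purely for illustration:

```python
def equity_shares(f, n_employees):
    """Sketch of the quoted folk rule: each of f founders gets weight 1/f,
    and employee number n (counting the first hire as n = f + 1) gets
    weight 1/n^2. Weights are normalized to sum to 1. Illustrative only."""
    weights = [1.0 / f] * f
    weights += [1.0 / n ** 2 for n in range(f + 1, f + 1 + n_employees)]
    total = sum(weights)
    return [w / total for w in weights]

# Two founders, three early employees: founders dominate,
# and employee shares fall off quadratically with hire number.
shares = equity_shares(f=2, n_employees=3)
print([round(s, 4) for s in shares])
```

Note this is a power law in employee number, which is the distinction the question above hinges on (profits per employee, not the distribution of startup valuations).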
homunq-20

Wish I could both up- and down-vote this comment. +1 for interesting, cogent observation; -1 for following that up with facile beakering. So instead I upvoted this comment and downvoted your reply below (which deserves the downvote in its own right).

(I just made up the word "beakering". It means doing TV science, with beakers and bafflegab, in real life. A lot of amateur evo-something and neuro-something involve beakering.)

homunq00

Would be better if you didn't say whom you ended up agreeing with. Most people here have either a halo or horns on Eliezer, and discounting that is distracting.

homunq50

That's simpler to say, but not at all simpler to do.

homunq20

Bump.

(I realize you're busy, this is just a friendly reminder.)

Also, I added one clause to my comment above: the bit about "imperfectly measured", which is of course usually the case in the real world.

1Thrasymachus
Belatedly updated. Thanks for your helpful comments!
homunq160

Great article overall. Regression to the mean is a key fact of statistics, and far too few people incorporate it into their intuition.

But there's a key misunderstanding in the second-to-last graph (the one with the drawn-in blue and red "outcome" and "factor"). The black line, indicating a correlation of 1, corresponds to nothing in reality. The true correlation is the line from the vertical tangent point at the right (marked) to the vertical tangent point at the left (unmarked). If causality indeed runs from "factor" (height)... (read more)
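
The slope claim can be checked numerically. A small simulation (assuming, for illustration, standardized jointly normal factor and outcome with correlation r) recovers a best-fit prediction line of slope r, not 1:

```python
import random

def regression_slope(r=0.5, n=100_000, seed=1):
    """Simulate standardized (factor, outcome) pairs with correlation r.
    With both variables standardized, the least-squares line has slope r:
    an individual one standard deviation above the mean on the factor is
    predicted to be only r standard deviations above the mean on the
    outcome -- which is regression to the mean."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        # outcome = r * factor + independent noise, giving correlation r
        y = r * x + (1 - r ** 2) ** 0.5 * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    # Least-squares slope (data is already centered at zero in expectation).
    sxy = sum(a * b for a, b in zip(xs, ys))
    sxx = sum(a * a for a in xs)
    return sxy / sxx

print(round(regression_slope(), 3))
```

The "correlation 1" diagonal corresponds to no prediction line that the data supports; only the slope-r line does.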

3Thrasymachus
Thanks for this important spot - I don't think it is a nitpick at all. I'm switching jobs at the moment, but I'll revise the post (and diagrams) in light of this. It might be a week though, sorry!
homunq10

No argument here. It's hard to build a good social welfare function in theory (ie, even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

1Davidmanheim
I had upvoted you. Also, I used Arrow as a shorthand for that class of theorem, since they all show that a class of group decision problem is unsolvable - mostly because I can never remember how to spell Satterthewaite.
homunq30

I think you've done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily- and short-term- measurable, while (it seems to me) the most effective interventions are often neither.

In other words: how do you avoid the pathologies of No Child Left Behind, where "reform" becomes synonymous with optimizing to a flawed (and ultimately, costly) metric?

This issue is touched by the original post, but not at all deeply.

homunq70

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is based on a restriction to ordinal cases.)
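
A minimal sketch of the cardinal construction: society ranks candidates by summed scores. The ballots below and the cardinal reading of IIA are illustrative assumptions (Arrow's own setting is ordinal, which is exactly the restriction at issue):

```python
def score_ranking(ballots):
    """Cardinal (score/range) aggregation: rank candidates by summed scores.
    Each candidate's total depends only on the scores given to that
    candidate, so adding or removing a third candidate never changes the
    relative order of the other two -- IIA in the cardinal sense. Unanimity
    holds too: if everyone scores A above B, A's total exceeds B's."""
    candidates = ballots[0].keys()
    totals = {c: sum(b[c] for b in ballots) for c in candidates}
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical 0-9 score ballots from three voters.
ballots = [
    {"A": 9, "B": 4, "C": 0},
    {"A": 2, "B": 8, "C": 5},
    {"A": 7, "B": 3, "C": 1},
]
print(score_ranking(ballots))  # → ['A', 'B', 'C']
```

There is no dictator here either: every voter's scores enter the totals symmetrically.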

1Davidmanheim
Thank you for the clarification; despite this, cardinal utility is difficult because it assumes that we care about different preferences the same amount, or definably different amounts. Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.
homunq10

Upvoted because I think this is a real issue, though I'm far from sure whether I'd put it at "worst".

homunq40

... And that is not a new idea either. "Allow me to play the devil's advocate for a moment" is a thing people say even when they are expressing support before and after that moment.

homunq00

Can anyone explain why the parent was downvoted? I don't get it. I hope there's a better reason than the formatting fail.

1TheOtherDave
I suppose, given the context, I should say out loud that it wasn't me, both because I don't find it downvoteworthy and because I make a practice of not downvoting comments that reply to mine or that I reply to. I endorse not trying to read much into one or two downvotes... the voting behavior of arbitrarily selected individuals in a group like this doesn't necessarily mean much.
0Vaniver
The way to fix the formatting is to use a \ in front of the asterisk whenever you want to actually display it. This is also necessary for underscores, which some people use in their usernames. I didn't downvote it, and don't have interesting speculation as to why it was downvoted.
homunq20

This is a key question. The general answer is:

  1. For realistic cases, there is no such theorem, and so the task of choosing a good system is a lot about choosing one which doesn't reward strategy in realistic cases.

  2. Roughly speaking, my educated intuition is that strategic payoffs grow insofar as you know that the distinctions you care about are orthogonal to what the average/modal/median voter cares about. So insofar as you are average/modal/median, your strategic incentive should be low; which is a way of saying that a good voting system can have low str

... (read more)
homunq10

Yup. That's what people say. I don't know what the general rule is, but it's definitely right for this case.

homunq30

I, too, hope that our disagreement will soon disappear. But as far as I can see, it's clearly not a semantic disagreement; one of us is just wrong. I'd say it's you.

So. Say there are 3 voters, and without loss of generality, voter 1 prefers A>B>C. Now, for every one of the 21 distinct combinations for the other two, you have to write down who wins, and I will find either an (a priori, determinative; not mirror) dictator or a non-IIA scenario.

ABC ABC: A

ABC ACB: A

ABC BAC: ?... you fill in these here

ABC BCA: ?

ABC CAB: .

ABC CBA: .

ACB ACB: .

ACB BAC:

ACB BC... (read more)
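
The count of 21 comes from treating the other two voters' ballots as an unordered pair (which suffices for an anonymous rule): with 6 strict orderings of three candidates, there are C(7, 2) = 21 combinations with repetition. A quick enumeration, for illustration:

```python
from itertools import combinations_with_replacement, permutations

# Voter 1 is fixed at A > B > C. Each other voter holds one of the
# 6 strict orderings; for an anonymous rule only the unordered pair
# of their ballots matters, giving C(6 + 1, 2) = 21 cases to fill in.
orderings = ["".join(p) for p in permutations("ABC")]
pairs = list(combinations_with_replacement(orderings, 2))
print(len(pairs))  # → 21
for v2, v3 in pairs:
    print("ABC", v2, v3)
```

(If the rule were not anonymous, the ordered count would be 6 × 6 = 36 instead.)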

3Vaniver
Thanks; I see it now. Editing my earlier posts.
homunq50

I'm sorry, you really are wrong here. You can't make up just one scenario and its result and say that you have a voting rule; a rule must give results for all possible scenarios. And once you do, you'll realize that the only ones which pass both unanimity and IIA are the ones with an a priori dictatorship. I'm not going to rewrite Arrow's whole paper here but that's really what he proved.

0Vaniver
I think I see how the grandparent was confusing. I was assuming that the voting rule was something like plurality voting, with enough sophistication to make it a well-defined rule. What I meant to do was define two dictatorship criteria which differ from Arrow's, which apply to individuals under voting rules, rather than applying to rules. Plurality voting (with a bit more sophistication) is a voting rule. Bob choosing for everyone is a voting rule. But the rule where Bob chooses for everyone has an a priori dictator- Bob. (He's also an a posteriori dictator, which isn't surprising.) Plurality voting as a voting rule does not empower an a priori dictator as I defined that in the grandparent. But it is possible to find a situation under plurality voting where an a posteriori dictator exists; that is, we cannot say that plurality voting is free from a posteriori dictators. That is what the nondictatorship criterion (which is applied to voting rules!) means- for a rule to satisfy nondictatorship, it must be impossible to construct a situation where that voting rule empowers an a posteriori dictator. Because unanimity and IIA imply not nondictatorship, for any election which satisfies unanimity and IIA, you can carefully select a ballot and report just that ballot as the group preferences. But that's because it's impossible for the group to prefer A>B>C with no individual member preferring A>B>C, and so there is guaranteed to be an individual who mirrors the group, not an individual who determines the group. Since individuals determining group preferences is what is politically dangerous, I'm not worried about the 'nondictatorship' criterion, because I'm not worried about mirroring. I've read it; I've read Yu's proof; I've read Barbera's proof, I've read Geanakoplos's proof, I've read Hansen's proof. (Hansen's proof does follow a different strategy from the one I discussed, but I came across it after writing the grandparent.) I'm moderately confident I know what the
homunq00

Under Arrow's terms, this still counts as a dictator, as long as the other ballots have no effect. (Not "no net effect", but no effect at all.)

In other words: if I voted for myself, and everyone else voted for Kanye, and my ballot happened to get chosen, then I would win, despite being 1 vote against 100 million.

It may not be the traditional definition of dictatorship, but it sure ain't democracy.
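To make this concrete, here is a minimal sketch (my own illustration, not from the thread; the helper name is hypothetical) of a random-ballot rule. Once the lot is drawn, the chosen ballot alone fixes the winner; every other ballot could be discarded unread:

```python
import random

def random_ballot_winner(ballots, seed):
    """Draw one ballot by lot; that single ballot decides the outcome."""
    rng = random.Random(seed)
    return ballots[rng.randrange(len(ballots))]

# 1 vote for "me" against 100 for "Kanye" (standing in for 100 million).
ballots = ["me"] + ["Kanye"] * 100
winner = random_ballot_winner(ballots, seed=7)

# For a fixed draw, rewriting every *other* ballot cannot change the result:
drawn = random.Random(7).randrange(len(ballots))
altered = [b if i == drawn else "anyone else" for i, b in enumerate(ballots)]
assert random_ballot_winner(altered, seed=7) == winner
```

This is the sense in which a ballot-by-lot rule has an Arrow-style dictator once the draw is fixed, even though no voter is privileged before the draw.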

homunq40

Again, you're simply not understanding the theorem. If a system fails non-dictatorship, that really does mean that there is an a priori dictator. That could be that one vote is chosen by lot after the ballots are in, or it could be that everybody (or just some special group or person) knows beforehand that Mary's vote will decide it. But it's not that Mary just happens to turn out to be the pivotal voter between a sea of red on one side and blue on the other.

I realize that this is counterintuitive. Do you think I have to be clearer about it in the post?

0selylindi
Yes, please.
1Douglas_Knight
I would say that Arrow simply excludes that possibility. As you said, he only considers systems that "consistently give the same winner...for the same voter preferences." Nothing wrong with your analysis, but I think it's an important disclaimer.
0fubarobfusco
This is the case that doesn't sound like an a-priori dictator to me, because you don't know who the dictator will be, and thus can't do anything to manipulate the outcome by dint of there being a dictator.
-1Vaniver
By an a priori dictatorship, I mean there is some individual 1 such that $F(R_1,\ldots,R_N)=R_1$ for all $(R_2,\ldots,R_N)\in L(A)^{N-1}$. By an a posteriori dictatorship, I mean there is some individual 1 such that $\exists (R_2,\ldots,R_N)\in L(A)^{N-1}$ s.t. $F(R_1,\ldots,R_N)=R_1$ for all $R_1$.

There is obviously not an a priori dictatorship for all voting environments under all aggregation rules that satisfy unanimity and IIA. For example, if 9 people prefer A>B>C, and 1 person prefers B>C>A, then society prefers A, regardless of how any specific individual changes their vote (so long as only one vote is changed). (Note the counterfactual component of my statement: there needs to be an individual who can change the social preference function, not just identify the social preference function.)

Every proof of the theorem that I can see operates exactly this way; I'm still not seeing what specific step you think I misunderstand.
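On tiny elections the counterfactual distinction can be checked by brute force. A sketch under my own framing (the function names are hypothetical; Borda scoring appears only as a convenient non-dictatorial contrast, not as part of Arrow's setup):

```python
from itertools import permutations, product

ALTS = ("A", "B", "C")
ORDERS = list(permutations(ALTS))  # all 6 strict rankings of 3 alternatives

def dictatorship(profile):
    return profile[0]  # voter 0's ballot *is* the social ranking

def borda(profile):
    # Social ranking by Borda score, ties broken alphabetically.
    scores = {a: 0 for a in ALTS}
    for ranking in profile:
        for pts, alt in enumerate(reversed(ranking)):
            scores[alt] += pts
    return tuple(sorted(ALTS, key=lambda a: (-scores[a], a)))

def is_a_priori_dictator(rule, i, n):
    # F(R_1,...,R_N) == R_i for EVERY profile.
    return all(rule(p) == p[i] for p in product(ORDERS, repeat=n))

def is_a_posteriori_dictator(rule, i, n):
    # SOME fixed choice of the other ballots makes voter i decisive.
    for others in product(ORDERS, repeat=n - 1):
        if all(rule(others[:i] + (r,) + others[i:]) == r for r in ORDERS):
            return True
    return False

print(is_a_priori_dictator(dictatorship, 0, 2))  # True
print(is_a_priori_dictator(borda, 0, 2))         # False
```

The counterfactual requirement is the `all(... == r for r in ORDERS)` check: voter i must be able to move the social ranking to any ballot by changing only their own vote.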
homunq30

Wait until I get to explaining SODA; a voting system where you can vote for one and still get better results.

As for comparing different societies: there are of course societies with different electoral systems, and I think some systems do tend to lead to better governance than in the US/UK, but the evidence is weak and VERY confounded. It's certainly impossible to clearly demonstrate a causal effect; and would be, even assuming such an effect existed and were sizeable. I will talk about this more as I finish this post.

homunq10

Thanks, I'll work on that.

homunq00

Your probability theory here is flawed. The question is not about P(A&B), the probability that both are true, but about P(A|B), the probability that A is true given that B is true. If A is "has cancer" and B is "cancer test is positive", then we calculate P(A|B) as P(B|A)P(A)/P(B); that is, if there's a 1/1000 chance of cancer and the test is right 99/100, then P(A|B) is (.99 × .001) / (.001 × .99 + .999 × .01), which is about 1 in 10.
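A quick sketch of the arithmetic (a generic Bayes calculation of my own, not code from the thread):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]."""
    numerator = sensitivity * prior
    return numerator / (numerator + false_positive_rate * (1 - prior))

# 1/1000 base rate, test right 99 times out of 100:
p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.01)
print(round(p, 3))  # 0.09, i.e. about 1 in 10
```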

0homunq
Can anyone explain why the parent was downvoted? I don't get it. I hope there's a better reason than the formatting fail.
0TheOtherDave
That's fair.
homunq40

I'll certainly have more content that addresses these questions as the post develops. For now, I'll simply respond to your misunderstanding about Arrow. The problem is not that there will always be an a posteriori pivotal voter, but that (to satisfy the other criteria) there must be an a priori dictator. In other words, you would get the same election result by literally throwing away all ballots but one without ever looking at them. This is clearly not democracy.

1Sniffnoy
This is kind of late and not exactly on point, but I wanted to add in two points about Arrow's Theorem that I hope will help clarify things somewhat.

1. There is a generalization of Arrow's Theorem to the case where the set of voters may be infinite. In this more general case, the conclusion of the theorem (where here I'm considering the premises to be IIA and unanimity) is not that there must be a dictator, but rather that the voting system is given by an ultrafilter on the set of voters. That is to say, there exists an ultrafilter U on the set of voters such that the candidate elected is precisely the candidate such that the set of people voting for that candidate is large with respect to U. The fact that if there are only finitely many voters there must be a dictator then follows, as any ultrafilter on a finite set is principal.

2. In a nephew/niece comment, homunq mentions sortition (select a voter at random to be dictator) as an example. I don't think this is correct? That is to say, Arrow's Theorem only applies to deterministic voting systems, and "dictatorship" refers to having a pre-specified dictator. There may be a generalization to nondeterministic systems for all I know, but if so, I don't think it's normally considered part of the theorem per se, and I don't think sortition is normally considered an example of dictatorship.
-1Vaniver
It's not clear to me why you think that's a misunderstanding; the statement of the theorem is not that the dictator is an a priori dictator, just that there never exists a situation where an individual can completely determine society's preferences. The proof is a construction of a situation given the first two fairness axioms and at least three alternatives, where one voter will be a pivotal voter who can completely determine society's preferences. But if you don't care about the third axiom, you don't care about the proof. Okay, in a deeply divided but balanced situation, the one non-partisan can pick whether we go left, right, or to the middle; this isn't a huge tragedy. (The collapse of the scale of preferences is a huge tragedy.)
homunq40

This is still in-progress, and I'm going to get to some of that later. Here's my defense of the current summary:

  • First, it's just a summary. If it could include all the subtleties of the article, I wouldn't need to write the article.
  • Second, even if the public voting systems (muni, state, and national) wherever you happen to live continue to be stupid ones, understanding voting systems better is useful knowledge. You should understand bad voting systems if they affect you, and good voting systems if you're in organizations that could use them.
  • Third, I do
... (read more)
homunq10

It's easy, but not helpful, to use "postmodern" as a shorthand for "bad ideas" of some kind. Something like Sturgeon's law ("90% of everything is crap") applies to postmodernism as to everything else, and I'd even agree that it's a kind of thinking that is more likely than average to come unmoored from reality, but that doesn't mean that it's barren of all insight. Especially today, at least 20 years after its heyday, and considering that even in its heyday it was a very rare academic department indeed where drinking the kool-... (read more)
