Thank you for the response <3
"...I would like to prove to the Court Philosopher that I'm right and he's wrong."
This part of the story tickles me more, reading it a second time.
I like to write stories that mean different things to different people ...this story isn't a puzzle at all. It is a joke about D&D-style alignment systems.
And it kinda resonates with this bit. In both cases there's a certain flexibility. The flexibility itself is unexpected, but reasonably safe... which is often a formula for comedy? It is funny to see the flexibility in Phil as he "goes social", and also funny to see it in you as you "go authorial" :-)
It is true that many systems other than the best one have some favorable properties when compared to FPTP.
I like methods that are cloneproof and which can't be spoofed by irrelevant alternatives, and if there is ONLY a choice between "something mediocre" and "something mediocre with one less negative feature" then I guess I'll be in favor of hill climbing since "some mysterious force" somehow prevents "us" from doing the best thing.
However, I think cloneproofness and independence are "nice to haves", whereas the Condorcet criterion is probably a "need to have".
((The biggest design fear I have is actually the "participation criterion". One of the very very few virtues of FPTP is that it at least satisfies the criterion where someone showing up and "wasting their vote on a third party" doesn't cause their least preferred candidate to jump ahead of a more preferred candidate. But something similar can happen in every method I know of that reliably selects the Condorcet Winner when one exists :-(
Mathematically, I've begun to worry that maybe I should try to prove that Condorcet and Participation simply cannot both be satisfied at the same time?
Pragmatically, I'm not sure what it looks like to "attack people's will to vote" (or troll sad people into voting in ways that harm their interests and have the sad people fight back righteously by insisting that they shouldn't vote, because voting really will net harm their interests).
One can hope that people will simply "want to vote" because it makes civic sense, but it actually looks like a huge number of humans are biased to feel like a peasant, and to have a desire to be ruled? Or something? And maybe you can just make it "against the law to not vote" (like in Australia) but maybe that won't solve the problems that could hypothetically "sociologically arise" from losing the participation criterion in ways that might be hard to foresee.))
In general, I think people should advocate for the BEST thing. The BEST thing I currently know of for picking an elected civilian commander in chief is "Ranked Pairs tabulation over Preference Ballots (with a law that requires everyone to vote during the two day Voting Holiday)".
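For concreteness, Ranked Pairs tabulation is small enough to sketch in Python. This is a minimal illustration (assuming complete ranked ballots, and glossing over tie-handling subtleties that a real election law would need to pin down), not a production election system:

```python
from itertools import combinations

def ranked_pairs_winner(ballots, candidates):
    """Tideman's Ranked Pairs: lock in the strongest pairwise
    victories first, skipping any that would create a cycle."""
    # Tally pairwise preferences: wins[(a, b)] = voters ranking a above b.
    wins = {(a, b): 0 for a in candidates for b in candidates if a != b}
    for ballot in ballots:  # each ballot is a list, best first
        rank = {c: i for i, c in enumerate(ballot)}
        for a, b in combinations(candidates, 2):
            if rank[a] < rank[b]:
                wins[(a, b)] += 1
            else:
                wins[(b, a)] += 1
    # Majorities sorted by margin of victory, strongest first.
    majorities = sorted(
        ((wins[(a, b)] - wins[(b, a)], a, b)
         for (a, b) in wins if wins[(a, b)] > wins[(b, a)]),
        reverse=True)
    locked = set()  # directed edges (a, b) meaning "a beats b"

    def reaches(start, goal):
        # Depth-first search over the locked edges.
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(y for (x, y) in locked if x == node)
        return False

    for _, a, b in majorities:
        if not reaches(b, a):  # locking a->b must not create a cycle
            locked.add((a, b))
    # Winner: the candidate with no locked edge pointing at them.
    return next(c for c in candidates
                if not any(y == c for (_, y) in locked))
```

Even when the pairwise results form a cycle (no Condorcet Winner exists), the lock-in order resolves it by trusting the strongest majorities first; when a Condorcet Winner does exist, this method always elects them.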
Regarding approval ratings on products using stars...
...I'd like to point out that a strategic voter using literal "star" voting should generally collapse down to "5 stars for the good ones, 0 stars for everyone else".
This is de facto approval voting, and a strategic voter doing approval voting learns to restrict their approval to ONLY the "electable favorite", which de facto gives you FPTP all over again.
And FPTP is terrible.
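The first step of that collapse can be made concrete with a toy model (the utilities and the pivot-probability model here are my own illustrative assumptions, not anyone's real election data): a voter who maximizes expected utility over every possible 0-to-5 star ballot ends up scoring everyone at the extremes.

```python
from itertools import product

MAX = 5  # star ballots score each candidate 0..5

# Hypothetical three-candidate race; utils are one voter's private values.
utils = {"A": 1.0, "B": 0.6, "C": 0.0}
pairs = [("A", "B"), ("A", "C"), ("B", "C")]

def expected_utility(ballot):
    """Toy pivot model: each pair is equally likely to be in a near-tie,
    and the chance the voter's ballot tips the tie toward x grows
    linearly with the score gap (s_x - s_y)."""
    eu = 0.0
    for x, y in pairs:
        tilt = (ballot[x] - ballot[y]) / (2 * MAX)  # in [-0.5, 0.5]
        eu += (utils[x] + utils[y]) / 2 + tilt * (utils[x] - utils[y])
    return eu

names = list(utils)
best = max((dict(zip(names, s)) for s in product(range(MAX + 1), repeat=3)),
           key=expected_utility)
print(best)  # every score lands on 0 or 5: a de facto approval ballot
```

Under this model the honest intermediate score for B is strictly dominated: the optimal ballot gives 5 stars to every candidate above the voter's expected-value threshold and 0 to everyone below it, which is exactly an approval ballot.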
Among the quoted takes, this one, about the sadness of star rating systems, was the best, because it was practical and placed the blame where it properly belongs: on the designers and maintainers of the central parts of the system.
Nobe: On Etsy you lose your “star seller” rating if it dips below 4.8. A couple of times I’ve gotten 4 stars and I’ve been beside myself wondering what I did wrong even when the comment is like “I love it, I’ll cherish it forever”
If you look at LessWrong, you'll find a weirdly large number of people into Star Voting but they don't account for "the new meta" that it would predictably introduce. (Approval voting also gets some love, but less.)
My belief is that LW-ers who are into these things naively think that "stars on my ballot would be a proxy for my utility estimate, and utility estimates would be the best thing (and surely everyone (just like me) would not engage in strategic voting to break this pleasing macro property of the aggregation method (that arises if everyone is honest and good and smart like I am))".
Which makes sense, for people from LessWrong, who are generally not cynical enough about how a slight admixture of truly bad-faith (or just really stupid) players, plus everyone else "coping with reality", often leads to bad situations.
Like the bad situation you see on Etsy, with Etsy's rating system.
...
It's weird to me that LW somehow stopped believing (or propagating the belief very far?) that money is the unit of caring.
When you propagate this belief quite far, I think you end up with assurance contracts instead of voting for almost all "domestic" or "normal" issues.
And when you notice how using money to vote in politics is often considered a corrupt practice, it's pretty natural to get confused.
You wouldn't let your literal enemy in literal war spend the tiny amount it would (probably) cost to bribe your own personal commander in chief to be nice to your enemy while your enemy plunders your country at a profit (relative to the size of the bribe)...
...and so then you should realize that your internal political system NEEDS security mindset, and you should be trying to get literally the most secure possible method to get literally the best possible "trusted component" in your communal system for defending the community.
The reason THIS is necessary is that we live in a world of Hobbesian horror. This is the real state of affairs on the international stage. There are no global elected leaders who endorse globally acceptable moral principles for the entire world.
(((Proposing to elect such a person democratically over all the voters in China, India, Africa, and the Middle East swiftly leads reasonable and wise Americans to get cold feet. I'm not so crazy as to propose this... yet... and I don't want to talk about multi-cultural "fully collective" extrapolated volition here. But I will say that I personally suspect "extrapolated volition and exit rights" is probably better than "collective extrapolated volition" when it comes to superintelligent benevolence algorithms.)))
In lots of modern spy movies, the "home office" gets subverted, and the spy hero has to "go it alone".
That story-like trope is useful for symbolically and narratively explaining the problem America is facing, since our constitution has this giant festering bug in the math of elections, and it's going to be almost impossible for us to even patch the bug.
The metaphorical situation where "the hero can't actually highly trust the home office in this spy movie" is the real situation for almost all of us, because "Americans (and people outside of America) can't actually highly trust America's president selected by America's actual elections"... because in the movie, the home office was broken because it was low security, and in real life our elections are broken because they have low security... just like Etsy's rating systems are broken because they are badly designed.
Creating systemic and justified trust is the EXACT issue shared across all examples: random spy movies, each US election, and Etsy.
A traditional way to solve this is to publicly and verifiably select a clear single human leader (assuming we're not punting, and putting AI in charge yet) to be actually trusted.
You need someone who CAN and who SHOULD have authority over your domestic intelligence community, because otherwise your domestic intelligence community will have no real public leader. Once you're in that state of affairs, you have no way to know they haven't gone entirely off the rails into 100% private corruption: the pure hedonic enjoyment of private power over weak humans who can't defend themselves, exercised by people who gain sexual enjoyment from watching humans suffer at their hands.
Biden was probably against that stuff? I think that's part of why he insisted on getting out of Afghanistan?
But our timeline got really really really lucky that an actually moral man might have been in the White House for a short period of history, from 2020 to 2024. But that was mostly random.
FPTP generates random presidents.
Approval voting collapses down to FPTP under strategy and would (under optimization pressure) also generate random presidents.
Star voting collapses down to approval voting under strategy and would (under optimization pressure) also generate random presidents.
I've thought about this a lot, and I think that the warfighting part of a country needs an elected civilian commander in chief, and the single best criterion for picking someone to fill that role is the Condorcet Criterion. From there I'm not as strongly certain, but I think the most secure ways to hit that criterion with a practical implementation that has quite a few other properties include Schulze and Ranked Pairs ballot tabulation...
...neither of which use "stars", which is a stupid choice for preference aggregation!!
Star voting is stupid.
Years ago I heard from someone, roughly, that "optics is no longer science, just a field of engineering, because there are no open questions in optics anymore, we now 'merely' learn the 'science' of optics to become better engineers".
(This was in a larger discussion about whether and how long it would take for anything vaguely similar to happen to "all of physics", and talking about the state of optics research was helpful in clarifying whether or not "that state of seeming to be fully solved" would count as a "fully solved" field for other fields for various people in the discussion.)
In searching just now, I find that Stack Exchange also mentions ONLY the Abraham-Minkowski question as an actual suggestion about open questions in optics... and it is at -1, with four people quibbling with the claim! <3
Thank you for surprising me in a way that I was prepared to connect to a broader question about the sociology of science and the long run future of physics!
I hit ^f and searched for "author" and didn't find anything, and this is... kind of surprising.
For me, nothing about Harry Potter's physical existence as a recurring motif in patterns of data inscribed on physical media in the physical world makes sense without positing a physically existent author (and in Harry's case a large collection of co-authors who did variational co-authoring in a bunch of fics).
Then I can do a similar kind of "obtuse interest in the physical media where the data is found" when I think about artificial reward signals in digital people... in nearly all AIs, there is CODE that implements reinforcement learning signals...
...possibly ab initio, in programs where the weights, and the "game world", and the RL schedule for learning weights by playing in the game world were all written at the same time...
...possibly via transduction of real measurements (along with some sifting, averaging, or weighting?) such that the RL-style change in the AI's weights can only be fully predicted by knowing not only the RL schedule, but also whatever more-distant-thing is being measured, well enough to predict the measurements in advance.
The code that implements the value changes during the learning regime, as the weights converge on the ideal, is "the author of the weights" in some sense...
...and then of course almost all code has human authors who physically exist. And of course, with all concerns of authorship we run into issues like authorial intent and skill!
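As a toy model of the "ab initio" case above, here is a sketch where the weights, the "game world" (a two-armed bandit with made-up payoff rates), and the RL schedule are all written in the same small program, so the update rule rather than any human hand-tuning is what "authors" the final weights:

```python
import random

random.seed(42)  # deterministic run for this sketch

# A tiny "game world": two slot-machine arms with made-up payoff rates.
payoff = {0: 0.3, 1: 0.7}
weights = [0.0, 0.0]   # the weights this code will "author"
alpha = 0.01           # learning rate of the RL schedule

for _ in range(5000):
    arm = random.randrange(2)                          # explore uniformly
    reward = 1.0 if random.random() < payoff[arm] else 0.0
    # RL signal: nudge the chosen arm's weight toward the observed reward.
    weights[arm] += alpha * (reward - weights[arm])

print(weights)  # ends up near [0.3, 0.7]: the schedule authored the values
```

The final weights are fully determined by the code on the page, yet no line of the code literally contains them, which is the sense in which the learning rule (and, one step back, its human programmer) authored them.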
It is natural, at this juncture to point out that "the 'author' of the conscious human experience of pain, pleasure, value shifts while we sleep, and so on (as well as the 'author' of the signals fed to this conscious process from sub-conscious processes that generate sensoria, or that sample pain sensors, to create a subjective pain qualia to feed to the active self model, and so on)" is the entire human nervous system as a whole system.
And the entire brain as a whole system is primarily authored by the human genome.
And the human genome is primarily authored by the history of human evolution.
So like... One hypothesis I have is that you're purposefully avoiding "being Pearlian enough about the Causes of various Things" for the sake of writing a sequence with bite-sized chunks, that can feel like they build on each other, with the final correct essay and the full theory offered only at the end, with links back to all the initial essays with key ideas?
But maybe you guys just really really don't want to be forced down the Darwinian sinkhole, into a bleak philosophic position where everything we love and care about turns out to have been constructed by Nature Red In Tooth And Claw and so you're yearning for some kind of platonistic escape hatch?
I definitely sympathize with that yearning!
Another hypothesis is that you're trying to avoid "invoking intent in an author" because that will be philosophically confusing to most of the audience, because it explains a "mechanism with ought-powers" via a pre-existing "mechanism with ought-powers", which then cannot (presumably?) produce a close-ended "theory of ought-powers" that starts from nothing and explains how they work from scratch in a non-circular way?
Personally, I think it is OK to go "from ought to ought to ought" in a good explanation, so long as there are other parts to the explanation.... So minimally, you would need two parts, that work sort of like a proof by induction. Maybe?
First, you would explain how something like "moral biogenesis" could occur in a very very very simple way. Some Catholic philosophers call this "minimal unit" of moral faculty "the spark of conscience"; a technical term that sometimes comes up is "synderesis".
Then, to get the full explanation and "complete the inductive proof", the theorist would explain how any generic moral agent with the capacity for moral growth could go through some kind of learning step (possibly experiencing flavors of emotional feedback on the way) and end up better morally calibrated at the end.
Together the two parts of the theory could explain how even a small, simple, mostly venal, mostly stupid agent with a mere scintilla of moral development, and some minimal bootstrap logic, could grow over time towards something predictably and coherently Good.
(Epistemics can start and proceed analogously... The "epistemic equivalent of synderesis" would be something like a "uniform bayesian prior" and the "epistemic equivalent of moral growth" would be something like "bayesian updating".)
Whether the overall form of the Good here is uniquely convergent for all agents is not clear.
It would probably depend at least somewhat on the details of the bootstrap logic, and the details of the starting agent, and the circumstances in which development occurs? Like... surely in epistemics you can give an agent a "cursed prior" to make it unable to update epistemically towards a real truth via only bayesian updates? (Likewise I would expect at least some bad axiological states, or environmental setups, to be possible to construct if you wanted to make a hypothetically cursed agent as a mental test of the theory.)
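The "cursed prior" case is easy to exhibit concretely. In this sketch (the coin biases and the prior are my own toy choices), an agent whose prior assigns exactly zero probability to the true hypothesis can observe evidence forever and never recover:

```python
import random

random.seed(0)
biases = [0.2, 0.5, 0.8]   # hypothesis space: possible coin biases
cursed = [0.5, 0.0, 0.5]   # prior that rules out the true bias, 0.5
true_bias = 0.5

def bayes_update(prior, heads):
    """One Bayesian update on a single coin flip."""
    likelihoods = [b if heads else 1.0 - b for b in biases]
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

belief = cursed
for _ in range(1000):
    belief = bayes_update(belief, random.random() < true_bias)

print(belief[1])  # exactly 0.0 forever: zero prior mass can never grow
```

Multiplying a likelihood by a prior of zero always yields zero, so no sequence of observations can ever lift the true hypothesis back into the support; the agent just shuffles its remaining credence between the two wrong hypotheses.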
So...
The best test case I could come up with for separating out various "metaphysical and ontology issues" around your "theory of Thingness" as it relates to abstract data structures (including ultimately perhaps The Algorithm of Goodness (if such a thing even exists)) was this smaller, simpler, less morally loaded, test case...
(Sauce is figure 4 from this paper.)
Granting that the Thingness Of Most Things rests in the sort of mostly-static brute physicality of objects...
...then noticing and trying to deal with a large collection of tricky cases lurking in "representationally stable motifs that seem thinglike despite not being very Physical" that almost all have Physical Authors...
...would you say that the Lorenz Attractor (pictured above) is a Thing?
If it is a Thing, is it a thing similar to Harry Potter?
And do you think this possible-thing has zero, one, or many Authors?
If it has non-zero Authors... who are the Authors? Especially: who was the first Author?
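For anyone who wants to summon the possible-thing onto their own screen, here is a minimal sketch (plain fixed-step Euler integration with the standard textbook parameters; fine for illustration, too crude for serious numerical work):

```python
# Lorenz's 1963 system with the classic parameters (sigma=10, rho=28,
# beta=8/3), integrated with a crude fixed-step Euler scheme.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

# Two nearby starting points: their paths separate (chaos), yet both
# keep tracing the same bounded butterfly-shaped attractor.
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)
for _ in range(20000):
    a = lorenz_step(a)
    b = lorenz_step(b)
print(a)
print(b)
```

The motif shows up the same way on every machine that runs equations of this shape, which is part of what makes it such a clean test case for "representationally stable motifs that seem thinglike despite not being very Physical".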
There's a long-time contributor to LessWrong who has been studying this stuff since at least 2011 in a very mechanistic way, with lots of practical experimental data. His blog is still up, and still has circa-2011 essays like "What Trance Says About Rationality".
What I'd prefer is to have someone do data science on all that content, and find the person inside of wikipedia who is least bad, and the most good, according to my preferences and ideals, and then I'd like to donate $50 to have all their votes count twice as much in every vote for a year.
Remember the OP?
The question is "How could a large number of venal idiots attacking The Internet cause more damage than all the GDP of all the people who create and run The Internet via market mechanisms?"
I'm claiming that the core issue is that The Internet is mostly a public good, and there is no known way to turn dollars into "more or better public goods" (not yet anyway) but there are ways to ruin public goods, and then charge for access to an unruined simulacrum of a public good.
All those votes... those are a cost (and one invisible to the market, mostly). And they are only good if they reliably "generate the right answer (as judged from far away by those who wish Wikipedia took its duties as a public goods institution more seriously and coherently)".
Are you a wikipedian? Is there some way that I could find all the wikipedians and just appeal to them directly and fix the badness more simply? I like fixing things simply when simple fixes can work... :-)
(However, in my experience, most problems like this are caused by conflicts of interest, and it has seemed to me in the past that when pies are getting bigger, people are more receptive to ideas of fair and good justice, whereas when pies are getting smaller people's fallenness becomes more prominent.)
I'm not saying Jimbo is still ruining things. For all I know he's not even on the board of directors of Wikipedia anymore. I haven't checked. I'm simply saying that there are clear choices that were made in the deep past that seem to have followed a logic that would naturally help his pocketbook and naturally hurt natural public interests, and these same choices seem to still be echoing all the way up to the present.
I'm with Shankar and that meme: Stack Exchange used to be good, but isn't any more.
Regarding Wikipedia, I've had similar thoughts, but they caused me to imagine how to deeply restructure Wikipedia so that it can collect and synthesize primary sources.
Perhaps it could contain a system for "internal primary sources" where people register as such, and start offering archived testimony (which could then be cited in "purely secondary articles") similarly to the way random people hired by the NYT are trusted to offer archived testimony suitable for inclusion in current Wikipedia stuff?
This is the future. It runs on the Internet. Shall this future be democratic and flat, or full of silos and tribalism?
The thing I object to, Christian, is that "outsiders" are the people Wikipedia should properly be trying to serve but Wikipedia (like most public institutions eventually seem to do?) seems to have become insular and weird and uninterested in changing their mission to fulfill social duties that are currently being neglected by most institutions.
Wikipedia seems, to me, from the outside (as someone it is presumably nominally "hoping to serve by summarizing all the world's trustworthy knowledge"), to not actually be very good at governance, or vetting people who can or can't lock pages, or allocating power wisely, or choosing good operating policies.
Some of it I understand. "Fandom" used to be called "Wikia" and was (maybe still is?) run by Jimbo as a terrible and ugly "for profit, ad infested" system of wikis.
He naturally would have wanted Wikipedia to have a narrow mandate so that "the rest of the psychic energy" could accumulate in his for-profit monstrosity, I think? But I don't think this breakup and division into subfields served the world.
And, indeed, I think it would be good for Wikipedia to import all the articles across all of Fandom that it can legally import as "part of RETVRNING to inclusionism" <3
I already think that "the entire shape of the zeitgeist in America" is downstream of non-trivial efforts by more than one state actor. Those links explain documented cases of China and Russia both trying to foment race war in the US, but I could pull links for other subdimensions of culture (in science, around the second amendment, and in other areas) where this has been happening since roughly 2014.
My personal response is to reiterate over and over in public that there should be a coherent response by the governance systems of free people, so that, for example, TikTok should either (1) be owned by human people who themselves have free speech rights and rights to a jury trial, or else (2) should be shut down by the USG via taxes, withdrawal of corporate legal protections, etc...
...and also I just track actual specific people, and what they have personally seen and inferred and probably want and so on, in order to build a model of the world from "second hand info".
I've met you personally, Jan, at a conference, and you seemed friendly and weird and like you had original thoughts based on original seeing, and so even if you were on the payroll of the Russians somehow... (which, to be clear, I don't think you are) ...hey: Cyborgs! Neat idea! Maybe true. Maybe not. Maybe useful. Maybe not.
Whether or not your cyborg ideas are good or bad can be screened off from whether or not you're on the payroll of a hostile state actor. Attending primarily to local validity is basically always possible, and nearly always helpful :-)