I think a lot of people have high time-discounting rates, which gives them a pretty adversarial relationship with their future selves, such that contracts that allow them to commit to arbitrary future actions are bad. For example, imagine a drug addict being offered the chance to commit themselves to slavery a month from now in exchange for some drugs right now. I would argue that the existence of this offer is net negative overall from a humanitarian perspective.
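To make the arithmetic behind that worry concrete, here's a toy sketch (the discount rates and the reward/cost numbers are entirely made up for illustration): with a steep enough per-day discount, even a catastrophic cost a month out feels smaller than a small immediate reward.

```python
# Illustrative only: how a high time-discount rate makes a terrible future
# trade look acceptable to the present self. All numbers are hypothetical.

def present_value(future_value: float, daily_discount: float, days: int) -> float:
    """Exponentially discount a value that arrives `days` from now."""
    return future_value * (daily_discount ** days)

immediate_reward = 100.0      # subjective value of the drugs right now
future_cost = 1_000_000.0     # subjective cost of enslavement a month from now
days_until_cost = 30

for daily_discount in (0.99, 0.80, 0.60):
    discounted_cost = present_value(future_cost, daily_discount, days_until_cost)
    accepts = immediate_reward > discounted_cost
    print(f"discount={daily_discount}: future cost feels like "
          f"{discounted_cost:,.0f} today -> accepts the deal: {accepts}")

# At discount=0.99 the future cost still dominates; at discount=0.60 it shrinks
# to almost nothing, so the present self takes a deal the future self will
# bitterly regret.
```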
I think there is a part of human psychology, and of human culture, that expects large commitments to be accompanied by significant-seeming rituals, in order to help people grok the significance of what they are committing to (for example, marriage). As such, I think it would be important that this platform limit the types of things people can commit to, to stuff that wouldn't be extremely costly to their future selves (though this is already mostly covered by modern contract law, which mostly prevents you from signing contracts that are extremely costly for your future self).
But this leads to a moral philosophy question: are high time-discounting rates okay, and is your future self actually less important in the moral calculus than your present self?
Entrenched interests with a large body of people attached would be able to use such a thing more effectively than the public at large. Consider the case of coal power: it appears that most people have a soft preference for coal power plants to go away; coal power plant workers, coal miners, and coal mining towns have a very strong preference for them to remain or expand.
This could easily mean the coal industry uses this mechanism effectively to advance coal interests, or at least that overcoming coal's counter-efforts becomes the minimum threshold for any coordination against it.
There doesn't seem to be a way to disentangle inadequate-equilibrium shifting from equilibrium shifting in general, including the case of shifting to the same equilibrium, which might be thought of as equilibrium fixing.
That being said, even if such activity takes place, it will (presumably) be rendered transparent by the mechanism. I think the new information gained across all equilibria could outweigh an evil shift or an evil fix on some of them.
Addendum: I think the key variable here is how much cheaper and easier the mechanism makes coordination, relative to the cost at which people with only soft preferences will actually participate.
It feels like, because the groups already working to drive inadequate equilibria are already investing in coordination mechanisms, making coordination cheaper and easier will be a net loss until the soft-preference threshold is crossed.
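A toy numeric model of that threshold claim (all group sizes, preference strengths, and costs are invented): if people participate only when their preference is stronger than the cost of participating, then lowering the cost mostly amplifies the small, strongly motivated group until it drops below the soft-preference level, at which point the much larger diffuse group finally tips the balance.

```python
# Toy model of the "soft-preference threshold" claim. All numbers are invented.
# Someone participates in a coordination effort iff their preference strength
# exceeds the cost of participating.

def participants(group_size: int, preference_strength: float, cost: float) -> int:
    return group_size if preference_strength > cost else 0

entrenched = dict(group_size=50_000, preference_strength=100.0)       # e.g. coal workers
diffuse_public = dict(group_size=5_000_000, preference_strength=2.0)  # soft preference

for cost in (50.0, 10.0, 1.0):
    pro_coal = participants(cost=cost, **entrenched)
    anti_coal = participants(cost=cost, **diffuse_public)
    winner = "entrenched interests" if pro_coal > anti_coal else "diffuse public"
    print(f"cost={cost:>5}: entrenched={pro_coal:,} vs public={anti_coal:,} -> {winner}")

# At costs of 50 or 10 only the entrenched group clears the bar, so cheaper
# coordination amplifies them. Only once the cost falls below the soft
# preference (2.0 here) does the much larger public tip the balance.
```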
A couple of thoughts:
Lack of follow-through means that too few people actually change and the new equilibrium is not achieved. This makes future coordination more difficult as people lose faith in coordination attempts in general.
If I were truly cynical, I could create or join a coordination effort for something I was against, spam a lot of fake accounts, get the coordination conditions met, and watch it fail due to poor follow-through. Now people lose faith in the idea of coordinating to make that change.
Not sure how likely this is, how easy it is to counter, or how much worse than the status quo coordination attempts can get...
In general, a commitment means little if there's no punishment for failing to follow through. If a platform can't impose a punishment on those who fail to follow through, it is not particularly good, and maybe not even the thing we're talking about.
Regarding Sybil attacks: in New Zealand, there's a state-funded auth system called RealMe that ensures one account per person. You use it for filing taxes. I've seen non-government services (crypto trading platforms) that use it the way any other site would use Facebook or Google auth (it's...
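For what that pattern enables, here's a hypothetical sketch (this is not RealMe's actual API; it just assumes the identity provider hands back some stable per-person identifier after its own login flow):

```python
# Hypothetical sketch of the one-account-per-person pattern a verified identity
# provider (RealMe, or anything comparable) makes possible. `verified_person_id`
# stands in for whatever stable identifier the provider returns; the provider's
# real API and login flow are not shown here.

import hashlib

class Registry:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def register(self, verified_person_id: str) -> bool:
        """Allow at most one platform account per verified person.

        Only a salted hash of the identifier is stored, so the platform never
        keeps the raw identity token itself.
        """
        digest = hashlib.sha256(b"platform-salt:" + verified_person_id.encode()).hexdigest()
        if digest in self._seen:
            return False  # second account for the same person: rejected
        self._seen.add(digest)
        return True

registry = Registry()
print(registry.register("person-123"))  # True: first account
print(registry.register("person-123"))  # False: Sybil attempt blocked
```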
For profitable ventures, the reciprocal-commitment way of doing things would be to build a co-op by getting everyone to commit to paying in large amounts of their own money to keep the lights on for the first 6 months, iff enough contributing members are found.
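A minimal sketch of that conditional-commitment logic (the member names, amounts, and runway target are made up): pledges only bind if the combined total clears the target, otherwise nobody owes anything.

```python
# Minimal assurance-contract sketch for the co-op example. The member names,
# amounts, and runway target are hypothetical.

from dataclasses import dataclass

@dataclass
class Pledge:
    member: str
    amount: float  # money committed iff the campaign succeeds

def settle(pledges: list[Pledge], runway_target: float) -> dict[str, float]:
    """Collect pledges only if they jointly cover the runway target.

    Returns what each member actually owes: their full pledge if the target
    is met, nothing otherwise.
    """
    total = sum(p.amount for p in pledges)
    if total >= runway_target:
        return {p.member: p.amount for p in pledges}
    return {p.member: 0.0 for p in pledges}

pledges = [Pledge("alice", 20_000), Pledge("bob", 15_000), Pledge("carol", 30_000)]
print(settle(pledges, runway_target=60_000))   # 65k >= 60k: everyone pays in
print(settle(pledges, runway_target=100_000))  # short of target: nobody pays
```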
The current alternative is getting an investor. Investors, as a mechanism for shifting equilibria, have a lot of filters that make unviable ideas less likely to receive funding (the investor has an interest in making good bets, and experience in making them) and that insulate the workers from risk (if the venture fails, it's the investor who eats the cost, not the workers).
It's conceivable that having reciprocal commitment technologies would open the way for lots of hardship as fools wager a lot of their own money on projects that never could have succeeded. It's conceivable that the reason the investor system isn't creating the change we want to see is that those changes aren't really viable yet under any system and "enabling" them would just result in a lot of pain. (I hope this isn't generally true, but in some domains it probably is.)
Yes, powerful tools for coordinated action have the potential to be used for bad. One way to think about it is just asking, "What if ISIS discovered my tool?", so of course we'd want some policy that rules out violence. But it gets trickier with laws: should actions that break the law be ruled out? What about events like the Arab Spring? Surely the laws weren't that good there... and we may even say that the violence was justified.
Basically, there are two questions: how do we make sure this tool moves us from one Nash equilibrium to a better Nash equilibrium, and not the other way around? And how do we make sure it does so in a reasonable and ethical way?
Has Kickstarter enabled evil, or turned out to be net negative? Its purpose seems very similar, after all. (And what policing does it have? Any weird effects/results?)
So, my default hypothesis is "this is going to be a net-positive thing." But it seemed worthwhile (or at least interesting) to spend some time thinking about how it might go wrong, security mindset style.
I do think that this is a fairly different class of tool than Kickstarter, and apart from using the name as shorthand, I'd expect fairly different effects. Sites that make it easier to sign petitions or otherwise credibly commit to crowd-action on social change seem more like the reference class to draw lessons from.
Reference class you inquired about: "Sites that make it easier to sign petitions or otherwise credibly commit to crowd-action on social change".
I was gesturing at "coordination", or "coordinating funding", because the data might exist for Kickstarter. (A streetlight search.)
There also might be a relationship between coordinating and funding: if not for monetary purposes, then to gauge commitment (how many people will probably show up?). It could also help solve some of the issues mentioned in people's answers.
Following up on "If a 'Kickstarter for Inadequate Equilibria' was built, do you have a concrete inadequate equilibrium to fix?"
I think a kickstarter for coordinated action would be net positive, but it's the sort of general-purpose, powerful tool that might turn out badly in ways I can't easily predict. It might give too much power to mobs of people who don't know what they're doing, or who have weird/bad goals.
How bad might it be if misused? What equilibria might we end up in, in a world where everyone freely has access to such a tool?