It seems like you're gesturing in a similar direction to Big History. I wonder if you'd like to highlight what you see as the distinctions?
My impression is that the OP's point is that history is valuable and deep without needing to go back as far as the Big Bang -- that there's a lot of insight in connecting the threads of different regional histories to understand how human society works, without extending the timeline even further.
The second and most already-implemented way is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is 'doom,' each player has an incentive to change the game.
This is cooperation. The hard part is jumping out and getting the other players to change games with you, not whether better games exist.
Moloch has discovered reciprocal altruism since iterated prisoner's dilemmas are a pretty common feature of the environment, but because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even if the payoff matrix stays the same.
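The reciprocal-altruism point can be made concrete with a toy simulation. This is an illustrative sketch using the standard textbook payoff values (3/0/5/1), not numbers from the comment above: in a single round defection dominates, but once the game is iterated, tit-for-tat players end up far better off than mutual defectors.

```python
# A minimal iterated prisoner's dilemma with standard textbook payoffs
# (these numbers are illustrative, not from the comment above).
PAYOFFS = {  # (my move, their move) -> my payoff; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over repeated play."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)  # each strategy sees only the opponent's past moves
        b = strategy_b(moves_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection, the 'doomed' game
```

Over 100 rounds, two tit-for-tat players each collect 300 while two defectors each collect 100 -- the "better game" exists, but as the comment notes, reaching it requires both players to switch at once.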
Even if you have an incentive to switch, you need to notice the incentive before it can get you to change your mind. Since many switches require all the players to cooperate and switch at the same time, it's unlikely that groups will accidentally start playing the better game.
Convincing people that the other game is indeed better is hard when evaluating incentives is difficult. Add too much complexity and it's easy to imagine that you're hiding something. Getting past this requires trust, in a context where we may be correct to distrust people -- e.g. if only lawyers know enough law to write contracts, they have an incentive to add loopholes that lawyers can find, or at least to make contracts complicated enough that only lawyers can understand them, so that you need to keep hiring lawyers to use your contracts. And in fact contracts are generally complicated, full of loopholes, and basically require lawyers to deal with.
Also, most people don't know about Nash equilibria, economics, game theory, etc., and it would be nice to be able to do things in a world with sub-utopian levels of understanding incentives. Also, trying to explain game theory to people as a substep of getting them to switch to another game runs into the same kind of justified mistrust as the lawyer example -- if they don't know game theory and you're saying that game theory says you're right, and evaluating arguments is costly and noisy, and they don't trust you at the start of the interaction, it's reasonable to distrust you even after the explanation, and not switch games.
Interesting. I didn't know about the x4 limitation. Since that puts a natural limit on downvoting, I see no problem in principle with 'mass' downvoting. If you don't have the freedom to actually spend your karma on (mass) downvotes, then the problem is not the downvoting but the limit.
The limit ensures that your downvotes need to be compensated by correspondingly valued contributions. If more people exercised their downvoting share, this 'mass downvoting' wouldn't even have been noticeable.
The problem may be that it is applied to individuals. But even though that can be perceived as unfair, it is still strictly a choice available to the voter (not much different from voting on the popularity of people instead of the quality of their comments, which is seldom done nowadays).
My proposal would be to either a) reduce the limit to x2, or b) change the limit to x1 *per person* (if that can be implemented easily).
This is conditional on attackers not artificially accumulating karma by upvoting themselves (via multiple accounts). Such self-voting can in principle be either detected or prevented by network flow algorithms like Advogato's ( http://www.advogato.org/trust-metric.html ) but that requires significant changes to the karma logic.
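To illustrate the flow idea: below is a toy sketch, not Advogato's actual algorithm (which uses capacities that decrease with distance from a seed account). The assumed setup: a trusted 'seed' account, certification edges with capacities, and each account feeding one unit of "membership" into a sink. A sockpuppet ring is bottlenecked by the few edges leading into it from legitimate users, so self-certification among puppets adds nothing.

```python
from collections import defaultdict, deque

def max_flow(edges, source, sink):
    """Edmonds-Karp max flow. `edges` maps (u, v) -> capacity."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in edges.items():
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual edge
    total = 0
    while True:
        # BFS for an augmenting path with remaining capacity
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total
        # Push the bottleneck capacity along the path found
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= bottleneck
            cap[(v, u)] += bottleneck
        total += bottleneck

# Hypothetical certification graph. Mallory's sockpuppets certify each
# other, but the only capacity into that cluster is the single
# alice -> mallory edge, so the puppets gain essentially nothing.
edges = {
    ('seed', 'alice'): 2, ('seed', 'bob'): 2,
    ('alice', 'mallory'): 1,
    ('mallory', 'sock1'): 1, ('mallory', 'sock2'): 1,
    ('sock1', 'sock2'): 1, ('sock2', 'sock1'): 1,
    # every account has one unit of "membership" feeding the sink
    ('alice', 'sink'): 1, ('bob', 'sink'): 1, ('mallory', 'sink'): 1,
    ('sock1', 'sink'): 1, ('sock2', 'sink'): 1,
}
print(max_flow(edges, 'seed', 'sink'))  # 3: only alice, bob, mallory are covered
```

Adding more sockpuppets (or more edges among them) cannot raise the flow, because the cut between the trusted accounts and the puppet cluster stays the same.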
Note: I'm not affiliated with Advogato, but I'd really like to see the basic principle (the network flow) applied more to voting algorithms in general.
I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to 'spend karma' on some goal or another. It seems that mass downvoting doesn't really fit the goal of filtering content -- it just lets you know that someone is either trolling LW in general, or just really doesn't like someone in a way that they aren't articulating in a PM or response to a comment/article.
Expected value (it tells you not to play slot machines)
Casinos are apparently still making money, so I question the extent to which this has been adopted by the Masses.
That just means that the sanity waterline isn't high enough that casinos have no customers -- it could be the case that there used to be lots of people who went to casinos, and the waterline has been rising, and now there are fewer people who do.
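To make the expected-value point about slot machines concrete, here's a sketch with an invented payout table (the probabilities and payouts below are hypothetical; real machines differ, but commercial slot machines are negative-EV by design):

```python
# Expected value of one pull on a hypothetical slot machine.
# The payout table is invented for illustration only.
cost_per_pull = 1.00
payouts = [           # (probability, payout in dollars)
    (0.001, 200.00),  # jackpot
    (0.05,   5.00),   # small win
    (0.10,   1.00),   # money back
]
# the remaining probability mass (0.849) pays nothing
ev = sum(p * x for p, x in payouts) - cost_per_pull
print(f"EV per $1 pull: ${ev:+.3f}")  # EV per $1 pull: $-0.450
```

Each pull loses 45 cents on average under these assumed numbers; knowing how to run this calculation is exactly what "expected value tells you not to play slot machines" means.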
Extending the literally worst part of most people's lives for as long as you can, to the tune of over 20% of medical spending in the US.
The presence of an object (or even my own finger) near the center of my forehead causes a tingling sensation, which can even shift directions (but still, always centered on my forehead) as the object moves.
I have the same, though it seems to be stronger when the finger is right in front of my nose. It always stops if the finger touches me.
Hobbes uses a similar argument in Leviathan -- people are inclined towards not starting fights unless threatened, but if people feel threatened they will start fights. But people disagree about what is and isn't threatening, and so (Hobbes argues) there needs to be a fixed set of definitions that all of society uses in order to avoid conflict.
Yes, most x-risk reduction will have to come about through explicit work on x-risk reduction at some point.
It could still easily be the case that working on improving the living standards of the world's poorest people is an effective route to x-risk reduction. In practice, scarcely anyone is going to work on x-risk as long as their own life is precarious, and scarcely anyone is going to do useful work on x-risk reduction if they are living somewhere that doesn't have the resources to do serious scientific or engineering work. So interventions that aim, in the longish term, to bring the whole world up to something like current affluent-West living standards seem likely to produce a much larger population of people who might be interested in reducing x-risk and better conditions for them to do such work in.
See the point about why it's weird to think that new affluent populations will work more on x-risk if current affluent populations don't do so at a particularly high rate.
Also, it's easier to move specific people to a country than it is to raise the standard of living of entire countries. If you're doing raising-living-standards as an x-risk strategy, are you sure you shouldn't be spending money on locating people interested in x-risk instead?
Effective altruism, being centrally planned
Hold on a second. This is news to me.
What is it about EA being centrally planned?
My guess is that Eli is referring to the fact that the EA community seems to largely donate wherever GiveWell says to donate, and that a lot of the discourse centers on trying to figure out all of the effects of a particular intervention, weighing it against all other factors, and then coming up with a plan of what to do. Such a plan is incredibly sensitive to being right about the prioritization, the facts of the situation, etc., in a way that will cause you to predictably fail to do as well as you could -- due to factors like a lack of on-the-ground feedback suggesting other important areas, misunderstanding people's values, errors in reasoning, and a lack of diversity in attempts, so that if one part fails nothing gets accomplished.
I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) that doesn't suffer from the "we're figuring out what other people value" problem as much as other things, but I also think that that's almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.
I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but that the community (at least the Facebook group) is somewhat slow to catch up.
To clarify, this meet-up is not at MIT, even though it's the third Sunday?
Yes. This meetup is at the Citadel.