The karma system here seems to bully people into conforming to popular positions and philosophies. I don't see it having a positive impact on rationalism, or reducing bias. And it seems to create an echo chamber. This may sound like a cheap shot, but it's not: I've observed more consistent objectivity on certain reddit forums (including some that have nothing to do with philosophy or science).
I'm curious what those reddit forums are, got any examples to link to? Ideally with comparison examples of shitty LW conversations?
Is there any karma system that seems better? I am highly skeptical that in-general adopting a reddit "one person one vote" system would do anything. It seems to have a very predictable endpoint around the "what gets upvoted is what's most accessible/popular" attractor, which like, is fine for some parts of the internet to have, but seems particularly bad for a place like LW. Also, having agreement/approval split out helps a bunch IMO.
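To make that split concrete, here is a minimal sketch of two-axis voting; the class, field names, and votes below are all invented for illustration, not LessWrong's actual implementation:

```python
# Minimal sketch of two-axis voting (illustrative only; names and
# numbers are invented, not LessWrong's actual implementation).
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    karma: int = 0       # "this is a good contribution"
    agreement: int = 0   # "I think this is true"

    def vote(self, karma_delta: int, agreement_delta: int) -> None:
        """Record one voter's separate quality and truth judgments."""
        self.karma += karma_delta
        self.agreement += agreement_delta

# A well-argued exposition of an unpopular position can land here:
c = Comment("clear argument for an unpopular view")
c.vote(+1, -1)  # "good comment, but I disagree"
c.vote(+1, -1)
c.vote(-1, -1)  # one reader dislikes it outright
print(c.karma, c.agreement)  # -> 1 -3
```

The point of the split is that "is this worth reading" and "is this true" stop competing for one signal, which blunts the popularity attractor that a flat one-vote score converges to.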
One thing that might be worth experimenting with is how visible karma/upvotes are. Some examples:
(Haven’t really thought about this at all, quite plausible that there are lots of problems this would cause)
Not what I’d expect of reddit. Do you have particular subreddits in mind? I’d personally like to spend my time in places like the ones you described.
I consistently get upvotes and lots of disagrees when I post thoughts on alignment, which is much more encouraging than downvotes.
For a bad exposition of an unpopular position, it's tempting to attribute its reception to properties of the position. I think LW does an OK job at appreciating good expositions of unpopular positions on relevant topics. I sometimes both karma-upvote and agreement-downvote the same comment.
I've noticed the sanity waterline has in fact been going up on other websites over the past ten years. I suspect prediction market ethos, YouTube science education, things like Brilliant, and general rationalist idea leakage are to blame.
That is awesome if true! But I worry that maybe this is instead about your selection of the sources you read.
Maybe over the years you learned to recognize the stupid sources and reject them quickly, and also over the years you have accumulated a nice collection of smart sources. That would be a pessimistic hypothesis.
I also have a bounded-optimistic hypothesis, which is that only a fixed small fraction of people are rationality-compatible... but thanks to the spreading of our memes, now these people are better exposed to rationality, better exposed to each other, and more likely to blog.
In other words, I assume that the greatest improvement happened with the group of people who "have a potential to be rational, but need a nudge to make it click". As opposed to the past, these people now (1) have other sources that can nudge them; and (2) if they succeed in becoming rational, they won't be alone in doing so.
Seems to me that in the past, there were rational individuals out there, but they were a small minority on any website. And if they made their own blogs, most commenters there would be deeply irrational. Today there is a rationalist community (and a much larger community of people influenced by it even if they don't consider themselves part of it), and if you start your own blog, you can attract readers from there. There is a sufficiently large group of people who share a tacit understanding of what rational interaction looks like.
But I assume that for an average person on the internet, things are probably the same as before, or worse.
@j-5 looking forward to your reply here. as you can see, it's likely you'll be upvoted for expanding, so I hope it's not too spooky to reply.
@Shankar Sivarajan, why did you downvote the request for examples?
My attempt at warning anyone who might be tempted to provide an example that it's a trap, since I've fallen into this one before: if you misunderstand the ritual and provide true examples of actually unpopular positions, you're gonna get strongly downvoted (and likely trigger rate-limiting).
What you're supposed to do is provide false examples that you then get simultaneously karma-upvoted & agreement-downvoted for, which reinforces the community's self-conception of encouraging disagreement.
EDIT: @the gears to ascension Thanks in part to your downvotes, I can't reply to your comment. What do you think "bullying people into conformity" would look like other than those with lots of karma being able to brute-force the site into looking more like what they prefer? To call this an "issue" is risible: this is deliberate design. Well-Kept Gardens, etc.
On your "Haha" react to the gears to ascension's words:
Mmm. If someone provides real examples and gets downvoted for them, isn't that stronger evidence of issues here?
There are three separate things here: the presence of examples of any kind, the distinction between real and not-real examples, and the (voting) response to an attempt at examples that is considered not-real. We could in principle collect data on all three, and if you provide the labeling, we can also see how your distinction between real and not-real examples differs from someone else's.
Absence of such data makes the issue murkier and less resolved, so requests for more data seem clearly beneficial regardless of anyone's motivations or positions. This holds especially if the proponents of the unpopular position are the ones framing the data (there is no need to stay trapped in the framing of a particular request), since that presents a greater opportunity for someone to change their mind, which is progress compared to the no-op of nobody changing their mind. Data is an asymmetric weapon: it convinces more effectively in the direction of the true state of things that generated it.
"Mmm. If someone provides real examples and gets downvoted for them, isn't that stronger evidence of issues here?"

I think you are actually just wrong about this claim. Also, if you in particular provide examples of where you've been downvoted, I expect they will be ones that carry my downvote because you were being an ass, rather than because of any factual claim they contain. (And I expect to get downvoted for saying so!)
j is likely to not have that problem, unless they too are systematically being an ass and calling it an opinion. If you describe ideas and don't do this shit Shankar and I are doing, I think you probably won't be downvoted.
I've been thinking about how most (maybe all) thought and intelligence is simulation. Whether we're performing a mathematical calculation, planning our day, or betting on a basketball game, it's all the same mental exercise of simulating reality. This might mean the ultimate potential of AI is the ability to simulate reality at higher and higher resolutions.
As an aside, it also seems that all scientific knowledge, and maybe every intellectual endeavor, is simulation. Any attempt to understand or explain our reality is simulation. The essence of intelligence is reality simulation: our brains are reality simulators, and the ultimate purpose of intelligence is to simulate potential realities.
When people muse about reality being someone's dream, that might not be terribly far from the true nature of our universe.
The use of hidden advantage-conferring strategies by political contestants could make those contestants less likely to represent the will of the people once elected, since such strategies can help them win despite appealing less to voters' actual desires than they otherwise would have.
This problem could be mitigated by requiring political campaigns to document all internal communications that transpire as part of the campaign and to disclose them at the conclusion of the election. This would both a) raise voters' awareness of the strategies political candidates are using and b) share those strategies with future candidates, thereby eliminating the advantage they confer. The premise here is that candidates should win or lose primarily (or preferably, entirely) based on how well their policy positions represent the voters, how much faith the voters have in their commitment to those positions, and how well voters believe they would enact those policies.
(I realize donor money is the other major problem corrupting politics, and there may be different solutions for that)
The more I think about AI, the more it seems like the holy grail of capitalism. If AI agents can be both producers and consumers in a society, then they can be arbitrarily multiplied to expand an economy, and they cost far less than humans in terms of labor, space, resources, healthcare, housing, etc. At the extreme, this seems to solve every conceivable economic problem of modern societies, since it can create arbitrarily large tax revenue without the need to scale government services per AI agent the way they must be scaled per human.
I guess it's possible that, long-term, AI could obsolete money and capitalism, but presumably there could be a transitional period where we experience something like the aforementioned super-capitalism.
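To put rough numbers on that intuition, here is a toy back-of-the-envelope model; every parameter below is an invented placeholder, not an estimate:

```python
# Toy model of the claim above: net fiscal contribution scales with the
# number of agents when per-agent public-service cost is near zero.
# All parameters are made-up placeholders, not real estimates.

def net_revenue(agents: int, output_per_agent: float,
                tax_rate: float, service_cost_per_agent: float) -> float:
    """Tax collected on the agents' output, minus services provided to them."""
    return agents * (output_per_agent * tax_rate - service_cost_per_agent)

# Humans: taxes are largely offset by per-person services
# (schools, healthcare, pensions, ...).
print(net_revenue(agents=1_000_000, output_per_agent=60_000,
                  tax_rate=0.3, service_cost_per_agent=17_000))   # 1.0e9

# Hypothetical AI agents: same output and tax rate, negligible
# per-agent services, so net revenue grows almost linearly as
# agents are multiplied.
print(net_revenue(agents=1_000_000, output_per_agent=60_000,
                  tax_rate=0.3, service_cost_per_agent=500))      # 1.75e10
```

Of course, the whole claim rides on service_cost_per_agent actually staying near zero once compute, energy, and depreciation are counted.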