Comments
J11-29

The karma system here seems to bully people into conforming to popular positions and philosophies. I don't see it having a positive impact on rationalism or reducing bias, and it seems to create an echo chamber. This may sound like a cheap shot, but it's not: I've observed more consistent objectivity on certain Reddit forums, including some that have nothing to do with philosophy or science.

J10

You'd have to ask a moral realist, but I think they would say Hitler caused the Holocaust, so Hitler is bad.

J10

The use of hidden strategies that afford an advantage could make political contestants less likely to represent the will of the people once elected, if those strategies helped them win while appealing to the desires of the people less than they otherwise would have.

This problem could be mitigated by requiring that political campaigns document all internal communications that transpire as part of the campaign and disclose them at the conclusion of the election. This would both (a) raise awareness among voters about the strategies candidates are using and (b) share those strategies with future candidates, thereby eliminating the advantage they afford. The premise here is that candidates should win or lose primarily (or preferably, entirely) based on how well their policy positions represent the voters, how much faith the voters have in their commitment to those positions, and how well voters believe they would enact those policies.

(I realize donor money is the other major problem corrupting politics, and there may be different solutions for that)

J10

This was intended as agreement with the post it's replying to.

J10

True... I don't know why I used the word 'only' there, actually. Bad habit of using hyperbole, I guess. There are certainly many unknown-unknown threats that inspire the idea of a 'singularity'. Every step humanity takes to develop AI now feels like a huge leap of faith.

Personally, I'm optimistic, or at least unworried, but that's probably partly because I know I'm going to die before things could get to a point where, e.g., humans are in slave camps or some other nightmarish scenario transpires. But I just don't think a superintelligence would choose a path that humans would clearly resist when it could simply incentivize us to voluntarily do what it wants. Humans are far easier to deal with when they're duped into doing something they think they want to do, and it shouldn't be that hard for a superintelligence to figure out how to manipulate us that way. Using force or fear to control humans is probably the least efficient option.

I also have little doubt that corporations and state actors are already exploring how to use GPT-type AI for, e.g., propaganda and other kinds of social and psychological manipulation. That's what marketing is, after all, and algorithms designed to manipulate our behavior already drive the internet.

J10

I've been thinking about how most (maybe all) thought and intelligence is simulation. Whether we're performing a mathematical calculation, planning our day, or betting on a basketball game, it's the same mental exercise of simulating reality. This might mean the ultimate potential of AI is the ability to simulate reality at higher and higher resolutions.
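As a toy illustration of prediction-as-simulation (everything here is invented for the example, not a real model), a Monte Carlo sketch in Python: estimating a bet's odds by simulating the game many times, where "higher resolution" would just mean a richer simulator.

```python
import random

def simulate_game(team_a_strength, team_b_strength):
    """Crudely simulate one basketball game as 100 scoring chances per team."""
    score_a = sum(random.random() < team_a_strength for _ in range(100))
    score_b = sum(random.random() < team_b_strength for _ in range(100))
    return score_a > score_b

def win_probability(team_a_strength, team_b_strength, n_sims=10_000):
    """Estimate P(team A wins) by running the simulation many times."""
    wins = sum(simulate_game(team_a_strength, team_b_strength) for _ in range(n_sims))
    return wins / n_sims

# More trials (or a simulator that models players, fatigue, matchups, etc.)
# means a higher-resolution picture of the possible realities.
print(win_probability(0.52, 0.48))
```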

As an aside, it also seems that all scientific knowledge, and maybe every intellectual endeavor, is simulation: any attempt to understand or explain our reality simulates it. In that sense, our brains are reality simulators, and the ultimate purpose of intelligence is to simulate potential realities.

When people muse about reality being someone's dream, that might not be terribly far from the true nature of our universe.

J10

The more I think about AI, the more it seems like the holy grail of capitalism. If AI agents can be both producers and consumers in a society, they can be multiplied arbitrarily to expand an economy, and they cost far less than humans in terms of labor, space, resources, healthcare, housing, etc. Taken to the extreme, this seems to solve every conceivable economic problem of modern societies, since it can create arbitrarily large tax revenue without government services having to scale per AI agent the way they must per human.
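A back-of-the-envelope sketch of that claim (every number below is an invented placeholder, purely to show the shape of the argument):

```python
# Toy fiscal model: net contribution of one agent to the public purse.
# All figures are invented placeholders, not real estimates.

def net_contribution(output_value, tax_rate, public_service_cost):
    """Taxes paid on economic output minus government services consumed."""
    return output_value * tax_rate - public_service_cost

human = net_contribution(output_value=100_000, tax_rate=0.3,
                         public_service_cost=25_000)   # healthcare, housing, schools...
ai_agent = net_contribution(output_value=100_000, tax_rate=0.3,
                            public_service_cost=2_000)  # mostly compute and power

print(f"human: {human:,.0f}, ai_agent: {ai_agent:,.0f}")
# The point: if per-agent service cost is near zero, adding agents scales
# tax revenue without scaling government spending the way adding humans would.
```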

I guess it's possible that, long term, AI could make money and capitalism obsolete, but presumably there would be a transitional period where we experience something like the super-capitalism described above.

J10

Hmmm, I guess I don't really use the terms 'investing' and 'trading' interchangeably.

J0-6

Humans are the most destructive entity on earth, and my only fear with AI is that it ends up being too human.
