The use of hidden strategies (ones that afford an advantage) by political contestants could make those contestants less likely to represent the will of the people once elected, if those strategies helped them win despite appealing less to the desires of the people than they otherwise would have.
This problem could be mitigated by requiring that political campaigns document all internal communications that transpire as part of the campaign and disclose them at the conclusion of the election. This would both a) raise awareness among voters about strategies ...
True... I don't know why I used the word 'only' there, actually. Bad habit of using hyperbole, I guess. There are certainly many unknown unknown threats that inspire the idea of a 'singularity'. Every step humanity is taking to develop AI feels like a huge leap of faith now.
Personally, I'm optimistic, or at least unworried, but that's probably partly because I know I'm going to die before things could get to the point where, e.g., humans are in slave camps or some other nightmarish scenario transpires. But I just don't think a superintelligence would choose a ...
I've been thinking about how most (maybe all) thought and intelligence are simulation. Whether we're performing a mathematical calculation, planning our day, or betting on a basketball game, it's all the same mental exercise of simulating reality. This might mean the ultimate potential of AI is the ability to simulate reality at higher and higher resolutions.
As an aside, it also seems that all scientific knowledge is simulation, and maybe even all intellectual endeavors. Any attempt to understand or explain our reality is simulation. The essence of intellig...
The more I think about AI, the more it seems like the holy grail of capitalism. If AI agents can themselves be both producers and consumers in a society, then they can be arbitrarily multiplied to expand an economy, and they have far smaller costs in terms of labor, space, resources, healthcare, housing, etc. compared to humans. At the extreme, this seems to solve every conceivable economic problem with modern societies, as it could create arbitrarily large tax revenue without the need to scale government services per AI agent the wa...
Humans are the most destructive entity on Earth, and my only fear with AI is that it ends up being too human.
For arbitrary time horizons nothing is 'safe', but that just means our economy shifts to a new model. It doesn't mean the outcome is bad for humans. I don't know if it makes sense to worry about which part of the ship will become submerged first, because everyone will rush for the other parts and those jobs will be too competitive. It might be better to worry about how to pressure the political system to take proactive action to rearchitect our economy. UBI and/or a shorter workweek are inevitable, and the sooner we sort out how to implement them the better....
Thanks! Just read some summaries of Parfit. Do you know any literature that addresses this issue within the context of a) impacts to other species, or b) using artificial minds as the additional population? I assume total utilitarianism assumes arbitrarily growing physical space for populations to expand into and would not apply to finite spaces or resources (I think I recall Bostrom addressing that).
Reading up on Parfit also made me realize that Deep Utopia really has prerequisites, and you were right that it's probably more readily understood by those with a philosophy background. I didn't really understand what he was saying about utilitarianism until I read about Parfit.
This is a major theme in Star Trek: The Next Generation, where they refer to it as the Prime Directive. It always bothered me when they violated the Prime Directive and intervened, because it seemed like an act of moral imperialism. But I guess that's just my morals (an objection to moral imperialism) conflicting with theirs.
A human monoculture seems bad for many reasons analogous to the ones that make an agricultural monoculture bad, though. Cultural diversity and heterogeneity should make our species more innovative and more robust to potential fut...
I've only skimmed it, but so far I'm surprised Bostrom didn't discuss a possible future where AI 'agents' act as both economic producers and consumers. Human population growth would seem to be bad in a world where AI can accommodate human decline (i.e. protecting modern economies from the loss of consumers and producers), since finite resources are a pie that gets divided into smaller or larger slices depending on the number of humans it is allocated among. And larger slices would seem to increase average well-being. Maybe he addressed this and I missed it in my skim.
The karma system here seems to bully people into conforming to popular positions and philosophies. I don't see it having a positive impact on rationalism or reducing bias. And it seems to create an echo chamber. This may sound like a cheap shot, but it's not: I've observed more consistent objectivity on certain reddit forums (including some that have nothing to do with philosophy or science).
I'm curious what those reddit forums are, got any examples to link to? Ideally with comparison examples of shitty LW conversations?
Not what I’d expect of reddit. Do you have particular subreddits in mind? I’d personally like to spend my time in places like the ones you described.
Is there any karma system that seems better? I am highly skeptical that adopting a Reddit-style "one person, one vote" system across the board would do anything. It seems to have a very predictable endpoint around the "what gets upvoted is what's most accessible/popular" attractor, which is fine for some parts of the internet, but seems particularly bad for a place like LW. Also, having agreement/approval split out helps a bunch, IMO. For what it's worth, here's a minimal sketch of what that split looks like as a data structure.
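(The names below are hypothetical illustrations, not LW's actual schema.) The point is just that each comment tallies two independent axes, so a reader can reward a comment's quality while registering disagreement with its claim:

```python
# Minimal sketch of two-axis voting: "karma" (quality/approval) and
# "agreement" are tallied independently per comment. Hypothetical
# names for illustration only, not LessWrong's actual implementation.
from dataclasses import dataclass

@dataclass
class Comment:
    karma: int = 0      # "this is a good contribution"
    agreement: int = 0  # "I think this claim is true"

    def vote(self, karma_delta: int = 0, agreement_delta: int = 0) -> None:
        self.karma += karma_delta
        self.agreement += agreement_delta

# A reader can upvote quality while downvoting the claim itself:
c = Comment()
c.vote(karma_delta=1, agreement_delta=-1)
print(c)  # Comment(karma=1, agreement=-1)
```

Under a single-axis system those two signals get collapsed into one number, which is part of why disagreement reads as punishment.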