All of J's Comments + Replies

J11-29

The karma system here seems to bully people into conforming to popular positions and philosophies. I don't see it having a positive impact on rationalism, or reducing bias. And it seems to create an echo chamber. This may sound like a cheap shot, but it's not: I've observed more consistent objectivity on certain reddit forums (including some that have nothing to do with philosophy or science).

4the gears to ascension
@j-5 looking forward to your reply here. as you can see, it's likely you'll be upvoted for expanding, so I hope it's not too spooky to reply. @Shankar Sivarajan, why did you downvote the request for examples?
6the gears to ascension
I've noticed the sanity waterline has in fact been going up on other websites over the past ten years. I suspect we are to blame.

I'm curious what those reddit forums are, got any examples to link to? Ideally with comparison examples of shitty LW conversations?

Not what I’d expect of reddit. Do you have particular subreddits in mind? I’d personally like to spend my time in places like the ones you described.

9Hastings
I consistently get upvotes and lots of disagrees when I post thoughts on alignment, which is much more encouraging than downvotes.
8Vladimir_Nesov
For a bad exposition of an unpopular position, it's tempting to attribute its reception to properties of the position. I think LW does an OK job at appreciating good expositions of unpopular positions on relevant topics. I sometimes both karma-upvote and agreement-downvote the same comment.

Is there any karma system that seems better? I am highly skeptical that in-general adopting a reddit "one person one vote" system would do anything. It seems to have a very predictable endpoint around the "what gets upvoted is what's most accessible/popular" attractor, which like, is fine for some parts of the internet to have, but seems particularly bad for a place like LW. Also, having agreement/approval split out helps a bunch IMO.

You'd have to ask a moral realist, but I think they would say Hitler caused the Holocaust, so Hitler is bad.

1ABlue
Alright, based on your phrasing I had thought it was something you believed.  I'm open to moral realism and I don't immediately see how phenomena being objectively bad would imply that physics is objectively bad.
2Shmi
Consider moral constructivism.
1ABlue
Why does something causing something bad make that thing itself bad?

The use of hidden strategies (ones that afford an advantage) by political contestants could make those contestants less likely to represent the will of the people once elected, if those strategies helped them win despite appealing to the desires of the people less than they otherwise would have.

This problem could be mitigated by requiring that political campaigns document all internal communications that transpire as part of the campaign and disclose them at the conclusion of the election. This would both a) raise awareness among voters about strategies ... (read more)

This was intended as agreement with the post it's replying to.

True... I don't know why I used the word 'only' there, actually. Bad habit of using hyperbole, I guess. There are certainly many unknown unknown threats that inspire the idea of a 'singularity'. Every step humanity is taking to develop AI feels like a huge leap of faith now.

Personally, I'm optimistic, or at least unworried, but that's probably partly because I know I'm going to die before things could get to a point where e.g. humans are in slave camps or some other nightmarish scenario transpires. But I just don't think a superintelligence would choose a ... (read more)

I've been thinking about how most (maybe all) thought and intelligence are simulation. Whether we're performing a mathematical calculation, planning our day, or betting on a basketball game, it's all the same mental exercise of simulating reality. This might mean the ultimate potential of AI is the ability to simulate reality at higher and higher resolutions.

As an aside, it also seems that all scientific knowledge is simulation and maybe even all intellectual endeavors. Any attempt to understand or explain our reality is simulation. The essence of intellig... (read more)

2Vladimir_Nesov
That's model-based RL (with a learned model).
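(A toy sketch of that framing, purely illustrative and not from this thread: in model-based RL the agent learns a model of its world from experience and then "thinks" by simulating imagined rollouts inside that learned model before acting. The 1-D environment, function names, and numbers below are all made up for illustration.)

```python
# Toy sketch of model-based RL as "thinking by simulation" (illustrative only).
# The agent learns a crude transition model from experience, then plans by
# imagining rollouts inside that learned model instead of acting in the world.
import random
from collections import defaultdict

class ToyWorld:
    """Hypothetical environment: walk left/right on a line; reward at position 5."""
    def __init__(self):
        self.pos = 0

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.pos = max(-5, min(5, self.pos + action))
        return self.pos, (1.0 if self.pos == 5 else 0.0)

# 1. Gather experience and learn a (tabular) model of the world.
model = defaultdict(lambda: defaultdict(list))  # model[state][action] -> observed (next_state, reward) pairs
env = ToyWorld()
state = env.pos
for _ in range(2000):
    action = random.choice((-1, 1))
    next_state, reward = env.step(action)
    model[state][action].append((next_state, reward))
    state = next_state

def simulate(state, action):
    """Predict an outcome using the learned model -- the 'simulation' step."""
    outcomes = model[state][action]
    return random.choice(outcomes) if outcomes else (state, 0.0)

# 2. Plan by simulating short imagined rollouts in the learned model.
def plan(state, depth=6, rollouts=30):
    best_action, best_value = None, float("-inf")
    for first_action in (-1, 1):
        total = 0.0
        for _ in range(rollouts):
            s, a, value = state, first_action, 0.0
            for _ in range(depth):
                s, r = simulate(s, a)
                value += r
                a = random.choice((-1, 1))  # random continuation policy
            total += value
        if total > best_value:
            best_value, best_action = total, first_action
    return best_action

print(plan(0))  # usually prints 1: imagined futures favor moving toward the reward
```

The loop is the point: the agent never touches the real environment while deciding, only its learned model, which is the "simulating reality" picture framed as model-based RL with a learned model.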

The more I think about AI, the more it seems like the holy grail of capitalism. If AI agents can themselves be both producers and consumers in a society, then they can be arbitrarily multiplied to expand an economy, and they have far smaller costs (labor, space, resources, healthcare, housing, etc.) compared to humans. At the extreme, this seems to solve every conceivable economic problem with modern societies, as it can create arbitrarily large tax revenue without the need to scale government services per AI agent the wa... (read more)

Hmmm I guess I don't really use the terms 'investing' and 'trading' interchangeably.

Humans are the most destructive entities on Earth, and my only fear with AI is that it ends up being too human.

1J
This was intended as agreement with the post it's replying to.
3RogerDearnaley
The most dangerous currently on Earth, yes. That AI which picked up unaligned behaviors from human bad examples could be extremely dangerous, yes (I've written other posts about that). That that's the only possibility we need to worry about, I disagree — paperclip maximizers are also quite a plausible concern and are absolutely an x-risk.
Answer by J30

For arbitrary time horizons nothing is 'safe', but that just means our economy shifts to a new model. It doesn't mean the outcome is bad for humans. I don't know if it makes sense to worry about which part of the ship will become submerged first, because everyone will rush for the other parts and those jobs will be too competitive. It might be better to worry about how to pressure the political system to take proactive action to rearchitect our economy. UBI and/or a shorter workweek are inevitable, and the sooner we sort out how to implement them the better.... (read more)

4RogerDearnaley
I think active stock-market investing, or running your own company, in a post-AGI world is about as safe as rubbing yourself down in chum before jumping into a shark feeding frenzy. Making money on the stock market is about being better than the average investor at making predictions. If the average investor is an ASI, then you're clearly one of the suckers. One obvious strategy would be to just buy stock and hold it (which I think may be what you were actually suggesting). But in an economy as turbulent as a post-AGI FOOM, that's only going to work for a certain amount of time before your investments turn sour, and your judgement of when to sell and buy something else puts you back in the stock market losing game. So I think that leaves something comparable to an ASI-managed fund, or an index fund. I don't know that that strategy is safe either, but it seems less clearly doomed than either of the previous ones.
J10

Thanks! Just read some summaries of Parfit. Do you know any literature that addresses this issue within the context of a) impacts to other species, or b) using artificial minds as the additional population? I assume total utilitarianism presumes arbitrarily growing physical space for populations to expand into, and would not apply to finite spaces or resources (I think I recall Bostrom addressing that).

Reading up on Parfit also made me realize that Deep Utopia really has prerequisites, and you were right that it's probably more readily understood by those with a philosophy background. I didn't really understand what he was saying about utilitarianism until just reading about Parfit.

J12

This is a major theme in Star Trek: The Next Generation, where they refer to it as the Prime Directive. It always bothered me when they violated the Prime Directive and intervened because it seemed like it was an act of moral imperialism. But I guess that's just my morals (an objection to moral imperialism) conflicting with theirs.

A human monoculture seems bad for many reasons analogous to the ones that make an agricultural monoculture bad, though. Cultural diversity and heterogeneity should make our species more innovative and more robust to potential fut... (read more)

J10

I've only skimmed it, but so far I'm surprised Bostrom didn't discuss a possible future where AI 'agents' act as both economic producers and consumers. Human population growth would seem to be bad in a world where AI can accommodate human decline (i.e., protecting modern economies from the loss of consumers and producers), since finite resources will be a pie that gets divided into either smaller or larger slices depending on the number of humans around to allocate them to. And larger slices would seem to increase average well-being. Maybe he addressed it, but I missed it in my skim.

2PeterMcCluskey
You seem to assume we should endorse something like average utilitarianism. Bostrom and I consider total utilitarianism to be closer to the best moral framework. See Parfit's writings if you want deep discussion of this topic.