aaq

An engineering student at Northwestern University.


Comments

aaq · 2y · 30

1a -> Broadly agree. "Weaker" is an interesting word to pick here; I'm not sure whether an anarcho-primitivist society would be considered weaker or stronger than a communist one systemically. Maybe it depends on timescale. Of course, if this were the only lever of this size we had to move x-risk up and down, we'd be in a tough position - but I don't think anyone takes that view seriously.

1b -> Logically true, but I do see strong reason to think short-term x-risk is mostly anthropogenic. That's why we're all here.

2 -> I do agree it would probably take a while.

3a -> Depends on how coarse- or fine-grained the distribution of resources is; a simple linear optimizer would probably do the same job better for most coarser distribution schemes (a minimal sketch of what I mean appears after 3b).

3b -> Kind of. I'm looking into them as a curiosity.
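
To unpack "simple linear optimizer" from 3a: something like the sketch below, which allocates a fixed supply across regions to maximize total utility under per-region caps. The regions, utilities, and caps are hypothetical numbers of my own, purely for illustration.

```python
# Minimal sketch of a linear optimizer for coarse resource distribution.
# All numbers are hypothetical; the point is only the shape of the problem.

from scipy.optimize import linprog

utility_per_unit = [3.0, 2.0, 1.5]          # value of one unit sent to each region
total_supply = 100.0                        # units available to distribute
region_caps = [(0, 60), (0, 60), (0, 60)]   # per-region allocation bounds

# linprog minimizes, so negate utilities to maximize total utility,
# subject to the allocations summing to at most the total supply.
result = linprog(
    c=[-u for u in utility_per_unit],
    A_ub=[[1.0, 1.0, 1.0]],
    b_ub=[total_supply],
    bounds=region_caps,
)
print(result.x)  # optimal allocation: [60, 40, 0]
```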

aaq · 2y · 30

Why is it a stretch?

aaq · 2y · 10

AI development is a tragedy of the commons

Per Wikipedia:

In economic science, the tragedy of the commons is a situation in which individual users, who have open access to a resource unhampered by shared social structures or formal rules that govern access and use, act independently according to their own self-interest and, contrary to the common good of all users, cause depletion of the resource through their uncoordinated action.

The usual example of a TotC is a fishing pond: everyone wants to fish as much as possible, but fish are not infinite, and if you catch them faster than they can reproduce, you end up with fewer and fewer fish per catch.
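
A minimal simulation of that dynamic, with logistic fish growth; every number here is invented for illustration:

```python
# Toy fishing-pond commons: logistic fish reproduction vs. total harvest.
# Growth rate, capacity, and catch sizes are invented for illustration.

def final_stock(fishers: int, catch_per_fisher: float, years: int = 50,
                stock: float = 1000.0, growth_rate: float = 0.3,
                capacity: float = 1000.0) -> float:
    """Fish stock after `years` of uncoordinated harvesting."""
    for _ in range(years):
        stock += growth_rate * stock * (1 - stock / capacity)  # reproduction
        stock = max(stock - fishers * catch_per_fisher, 0.0)   # harvest
    return stock

print(final_stock(10, 5.0))  # modest catches: the stock stabilizes
print(final_stock(10, 9.0))  # slightly greedier catches: the stock collapses to 0
```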

AI development seems to have a similar dynamic: everyone has an incentive to build more and more powerful AIs, because there is a lot of money to be made in doing so. But each more powerful AI also increases the likelihood of an unstoppable AGI being made.

There are some differences, but I think this is the underlying dynamic driving AI development today. The biggest point of difference: whereas one person's overfishing eventually causes a noticeable negative effect on the other fishers, and at the very least does not improve their catches, one firm building a more powerful AI probably does improve the economic situation of the other people who leverage it, up until a critical point.
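
To make the non-monotonicity concrete, here is a toy externality curve; the square-root shape and the critical threshold are pure assumptions on my part:

```python
# Toy externality curve: spillover payoff to *other* actors as one firm's AI
# capability grows. The shape and threshold are invented; the point is only
# rise-then-cliff, versus overfishing's monotone harm to others.

CRITICAL = 8.0  # hypothetical capability level where an unstoppable AGI appears

def spillover(capability: float) -> float:
    if capability >= CRITICAL:
        return float("-inf")        # catastrophe for everyone
    return capability ** 0.5        # below the threshold, others benefit

for c in [1.0, 4.0, 7.0, 9.0]:
    print(c, spillover(c))          # rises with capability, then falls off a cliff
```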

Are there other tragedies of the commons that exhibit such non-monotonic behavior?

aaq · 2y · 10

Would AGI still be an x-risk under communism?

1-bit verdict

Yes.

2-bit verdict

Absolutely, yes.

Explanation

An artificial general intelligence (AGI) is a computer program that can perform at least as well as an average human being across a wide variety of tasks. The concept is closely linked to that of a general superintelligence, which can perform better than even the best human being across a wide variety of tasks.

There are reasons to believe most, perhaps almost all, general superintelligences would end up causing human extinction. AI safety is a cross-disciplinary field spanning mathematics, economics, computer science, and philosophy which tackles the problem of how to stop such superintelligences.

AI alignment is a subfield of AI safety which studies theoretical conditions under which superintelligences aligned with human values can emerge. Another branch, which might be called AI deterrence, aims instead to make the production of unaligned superintelligences less likely in the first place.

One of the primary reasons someone might want to create a superintelligence, even while understanding the risks involved, is the vast economic value such a program could generate. From a deterrence perspective, it then makes sense to ask how this profit motive might be curtailed before catastrophe. Why not communism?

Unfortunately, this is almost certainly a bad move. Communism, at almost every scale it has been tried, has never been able to escape the rampant black markets that appear when price signals are distorted. There is no reason to suspect such black markets wouldn't have just as strong a profit motive to create stronger and stronger AGIs. Indeed, because black markets are already illegal, this may worsen the problem: well-funded teams producing AGI away from the eyes of the broader public are likely to generate less pushback, and to be better equipped to avoid deterrence-oriented legislation, than an open-market team such as OpenAI is.

aaq · 2y · 70

Towards a #1-flavored answer, a Hansonian fine-insured bounty system seems like it might scale well for enforcing cooperation against AI research.

https://www.overcomingbias.com/2018/01/privately-enforced-punished-crime.html

aaq · 4y · 10

Metcalfe's (revised!) law states that the value of a communications network of n users grows at about n log n.

I frequently give my friends the advice that they should aim to become pretty good at 2 synergistic disciplines (CS and EE for me, for example), but I have wondered in the past why I don't give them the advice to become okay at 4 or 5 synergistic disciplines instead.

It just struck me these ideas might be connected in some way, but I am having trouble figuring out exactly how.
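
One toy way to connect them, with the disclaimer that the whole model is my own guess: treat each discipline as a node, fix total study time so per-discipline skill falls as 1/n, and let cross-discipline synergy scale like the revised Metcalfe value.

```python
# Toy model: fixed study time split across n disciplines, skill per
# discipline proportional to 1/n, synergy scaling like n*log(n+1).
# Every modeling choice here is an assumption, not a claim from the comment.

import math

def career_value(n: int, total_time: float = 1.0) -> float:
    skill_each = total_time / n        # spread thinner => weaker in each
    synergy = n * math.log(n + 1)      # revised-Metcalfe-style network value
    return skill_each * synergy        # simplifies to log(n + 1)

for n in range(1, 7):
    print(n, round(career_value(n), 3))
# Output grows like log(n+1): 0.693, 1.099, 1.386, ... -- positive but slow,
# so even a small real-world cost per extra discipline could favor stopping at 2.
```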

aaq · 4y · 10

Try to think about this in terms of expected value. In your specific example they do score more, but this is probabilistic thinking, so we want to think about the long-run trend.

Suppose we no longer know what the answer is, and you are genuinely 50/50 on it being either A or B. This is what you truly believe: you don't think there's a chance in hell it's C. If you sit there and ask yourself, "Maybe I should do a 50-25-25 split, just in case", you're going to immediately realize, "Wait, that's moronic. I'm throwing away 25% of my points on something I am certain is wrong. This is like betting on a 3-legged horse."

Now let's say you do a hundred of these questions, and most of your 50-50s do come up as one of the two answers you named. Your opponent consistently does 50-25-25s, and so they end up more wrong than you overall, because half the time the answer lands on one of their 25s, not their 50.
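
A quick expected-value check; the linear scoring rule here (you earn whatever points you placed on the correct answer) is my assumption about the quiz format, not something stated in the thread:

```python
# Expected score when the truth really is 50/50 between A and B, never C,
# under an assumed linear rule: you earn the points placed on the right answer.

def expected_score(alloc: dict) -> float:
    return 0.5 * alloc["A"] + 0.5 * alloc["B"]

confident = {"A": 50, "B": 50, "C": 0}
hedged    = {"A": 50, "B": 25, "C": 25}

print(expected_score(confident))  # 50.0
print(expected_score(hedged))     # 37.5 -- the 25 points on C are pure waste
```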

It's not a game of being more correct, it's a game of being less wrong.

aaq · 4y · -10

I disagree with your first point; I think the 50:25:25:0 thing is the point. It's hard to swallow, because admitting ignorance rather than appearing falsely confident always is, but that's why it makes for such a good value to train.
