
The first and third kinds of power are a bit vague.

They overlap, but don't fully cover the "I have a gun and you don't" kind of power.

It's not dominance, because whether the person without the gun gets shot doesn't depend on displays of dominance, at least as long as the situation isn't already about dominance before the gun comes into play.

It's also not a form of getting-things-done power, because the gun creates a potential that exists even if no one involved currently has any reason to shoot anyone.

> Agents want to be liquid.

An agent created in a computer would be an exception to that?

> it would make the problem more tractable

The problem of creating a strong AI and surviving, that is. We'd still get Hanson's billions of self-directed ems.

Thanks!

> It has been explored (multiple times even on this site), and doesn't avoid doom. It does close off some specific paths that might otherwise lead to doom, but not all or even most of them.

Do you have any specific posts in mind?

To be clear, I'm not suggesting that, because of this possibility, we can just hope this is how it plays out and that we'll get lucky.

If we could find a hard limit like this, however, it seems like it would make the problem more tractable. Such a limit doesn't have to exist simply because we want it to, but searching for it still seems like a good idea.

There are a hundred problems to solve, but it seems like it could at least avoid the main bad scenario: that of an AI rapidly self-improving. Improving its hardware wouldn't be trivial for a human-level AI, and it wouldn't have the options present in other scenarios. Scaling beyond a single machine also seems likely to be a significant barrier.

It could still create millions of copies of itself. That's a problem too, but a better problem to have than a single AI with no coordination overhead.

This should be mitigated by pools of mutual trust that naturally form whenever there's a loop in the trust graph.
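As a minimal sketch of what "a loop in the trust graph" buys you, assume trust is modeled as a directed who-trusts-whom graph; the pools of mutual trust are then the strongly connected components, i.e. the groups whose members all reach each other through some chain of trust edges that loops back. The graph model, the function name, and the example names below are illustrative assumptions only.

```python
# Sketch: model trust as a directed graph and find "pools of mutual trust"
# as its strongly connected components. Every loop in the trust graph puts
# its members into the same pool; nodes on no loop end up in singleton pools.
from collections import defaultdict

def trust_pools(edges):
    """Return the strongly connected components of a directed trust graph.

    edges: iterable of (truster, trusted) pairs.
    Uses Tarjan's algorithm.
    """
    graph = defaultdict(list)
    nodes = set()
    for a, b in edges:
        graph[a].append(b)
        nodes.update((a, b))

    index, lowlink = {}, {}
    stack, on_stack = [], set()
    pools = []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index[v] = lowlink[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            pool = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                pool.add(w)
                if w == v:
                    break
            pools.append(pool)

    for v in nodes:
        if v not in index:
            strongconnect(v)
    return pools

# Alice -> Bob -> Carol -> Alice is a loop, so they form one pool of mutual
# trust; Dave trusts Alice but nobody trusts Dave back, so he stays outside.
print(trust_pools([("alice", "bob"), ("bob", "carol"),
                   ("carol", "alice"), ("dave", "alice")]))
```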

When the number of layers grows, the only thing that really works is metrics that cannot be Goodharted. Whenever those metrics exist, money becomes a perfectly good expression of success.

It might work to completely prohibit more than one layer of middle management. Instead, when middle manager Bob wants more people, he and his boss Alice come up with a contract that can't be gamed too much. Alice spins out Bob's org subtree into a new organization, and then it becomes Bob's job to buy the service from the new org as necessary. Alice also publishes the contract, so that entrepreneurs can swoop in and offer a better/cheaper service.

The ability to put up with bullshit is valuable: bullshit cannot be ignored once it is reified into real-world objects, documents, and habits.

Markdown has syntax for quotes: a line with `> this` on it will look like

> this

Honestly, "fiction" was enough of a spoiler. "As a child, we were always told that every sapient life is precious" made it a certainty.

Suggestion: "a sangaku proving the Pythagorean theorem". I wonder if it can do visual explanations.
