All of ewjordan's Comments + Replies

If someone asks the tool-AI "How do I create an agent-AI?" and it gives an answer, the distinction is moot anyway, because one leads to the other.

Given human nature, I find it extremely difficult to believe that nobody would ask the tool-AI that question, or something that's close enough, and then implement the answer...

ewjordan (170)

Even if we accepted that the tool vs. agent distinction was enough to make things "safe", objection 2 still boils down to "Well, just don't build that type of AI!", which is exactly the same keep-it-in-a-box/don't-do-it argument that most normal people make when they consider this issue. I assume I don't need to explain to most people here why "We should just make a law against it" is not a solution to this problem, and I hope I don't need to argue that "Just don't do it" is even worse...

More specifically, fast forwa...

Strange7 (0)
If computing power is that much cheaper, it will be because tremendous resources, including but certainly not limited to computing power, have been continuously devoted over the intervening decades to making it cheaper. There will be correspondingly fewer yet-undiscovered insights for a seed AI to exploit in the course of its attempted takeoff.
Eliezer Yudkowsky (9)
There isn't that much computing power in the physical universe. I'm not sure even smarter AIXI approximations are effective on a moon-sized nanocomputer. I wouldn't fall over in shock if a sufficiently smart one did something effective, but mostly I'd expect nothing to happen. There's an awful lot that happens in the transition from infinite to finite computing power, and AIXI doesn't solve any of it.
Shmi (0)
My point is that either Objection 2 holds, or tools are equivalent to agents. If one thinks the latter is true (EY doesn't), then one should work on proving it. I have no opinion on whether it's true or not (I am not a domain expert).

Is there anything that can be done about it?

I don't know how much of a problem it is, but there's definitely something that can be done about it: instead of a "dumb" karma count, use some variant of PageRank on the vote graph.

In other words, every person is a node, and every upvote a person gets from another user is a directed edge (signed, so downvotes count negatively). Every person starts with a base amount of karma, and you then iteratively update each user's karma by weighting every inbound vote by the karma of the voter (a rough sketch of this iteration is given below).

When I say "v...
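A minimal sketch of the vote-graph idea described above, assuming a damped, PageRank-style update in which each voter's influence is split across their outgoing votes; the function name weighted_karma, the damping factor, and the data layout are illustrative assumptions, not an actual LessWrong implementation:

```python
def weighted_karma(votes, users, base=1.0, damping=0.85, iters=50, tol=1e-9):
    """Illustrative karma-weighted voting. votes is a list of
    (voter, target, sign) tuples with sign = +1 (upvote) or -1 (downvote)."""
    karma = {u: base for u in users}

    # Split each voter's influence across their outgoing votes, PageRank-style.
    out_degree = {u: 0 for u in users}
    for voter, _, _ in votes:
        out_degree[voter] += 1

    for _ in range(iters):
        # Everyone keeps a small base amount of karma regardless of votes.
        new_karma = {u: base * (1 - damping) for u in users}
        for voter, target, sign in votes:
            # Weight each inbound vote by the current karma of the voter.
            share = karma[voter] / out_degree[voter]
            new_karma[target] += damping * sign * share
        converged = max(abs(new_karma[u] - karma[u]) for u in users) < tol
        karma = new_karma
        if converged:
            break
    return karma

# Example: B and C upvote A, A downvotes C.
users = ["A", "B", "C"]
votes = [("B", "A", +1), ("C", "A", +1), ("A", "C", -1)]
print(weighted_karma(votes, users))
```

One subtlety the sketch glosses over: with signed edges a user's karma can go negative, which would flip the sign of all their outgoing votes, so a real system would probably clamp voter weight at zero (or track a separate non-negative reputation score) before weighting.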

Will_Sawin (0)
I think they do store the votes, because otherwise you'd be able to upvote something twice. However, my understanding is that changing LessWrong, even something as basic as which posts are displayed on the front page, is difficult, so it makes sense that they haven't implemented this.