Alex_U00

In a very real sense, wouldn't an AGI itself be a descendant of humanity? It's not obvious, anyway, that there would be big categorical differences between an AGI and humanity 200+ years down the road, after we've been merged/cyborged/upgraded, etc., to the hilt, all with technologies made possible by the AGI. This goes back to Phil's point above -- it seems a little short-sighted to place undue importance on the preservation of this particular incarnation, or generation, of humanity, when what we really care about is some fuzzy concept of "human intelligence" or "culture."

Alex_U00

An AGI that's complicit in the phasing out of humanity (presumably as humans merge with it, or with an off-shoot of it, e.g., via uploading), to the point that "not much would remain that's recognizably human," would seem to be at odds with its coded imperative to remain "friendly." At the very least, I think this concern highlights the trickiness of formalizing a definition of "friendliness," which AFAIK no one has yet done.

Alex_U00

Scientist 2's theory is more susceptible to overfitting the data; we have no reason to believe it's particularly generalizable. His theory could, in essence, simply be restating the known results and then giving a more or less random prediction for the next one. Let's make it 100,000 trials rather than 20 (and say that Scientist 1 has based his yet-to-be-falsified theory on the first 50,000 trials), and stipulate that Scientist 2 is a neural network -- then the answer seems clear.
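
To make the overfitting point concrete, here's a minimal sketch in Python. The data-generating rule (noisy index parity), the 50,000-trial split, and both "theories" are hypothetical stand-ins I've invented for illustration; nothing here comes from the original thought experiment beyond its structure:

```python
import random

random.seed(0)

# Hypothetical data: each trial's outcome follows a simple underlying
# rule (the parity of its index), flipped 5% of the time by noise.
N = 100_000

def truth(i):
    return i % 2

data = [truth(i) if random.random() > 0.05 else 1 - truth(i) for i in range(N)]

split = 50_000
train, test = data[:split], data[split:]

# Scientist 1: a simple two-parameter theory fit to the first 50,000
# trials -- predict the majority outcome observed at each index parity.
majority = {}
for p in (0, 1):
    outcomes = [y for i, y in enumerate(train) if i % 2 == p]
    majority[p] = round(sum(outcomes) / len(outcomes))

def predict_simple(i):
    return majority[i % 2]

# Scientist 2: a "theory" with one free parameter per known trial -- it
# restates the observed results exactly and can only guess beyond them.
lookup = dict(enumerate(train))

def predict_memorizer(i):
    return lookup.get(i, random.randint(0, 1))

def accuracy(predict, offset, outcomes):
    hits = sum(predict(offset + i) == y for i, y in enumerate(outcomes))
    return hits / len(outcomes)

print("fit to known trials:  S1 %.3f  S2 %.3f"
      % (accuracy(predict_simple, 0, train), accuracy(predict_memorizer, 0, train)))
print("next 50,000 trials:   S1 %.3f  S2 %.3f"
      % (accuracy(predict_simple, split, test), accuracy(predict_memorizer, split, test)))
```

Scientist 2's memorized "theory" scores perfectly on the known results and at chance on everything new, while the simple theory does about equally well on both -- which is the asymmetry the thought experiment is pointing at.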