Dmytry comments on Scenario analysis: semi-general AIs - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (66)
In the context of the original post - suppose that SGAI is logging some of its internal state into a log file, then gains access to reading this log file, and reasons about it in the same way it reasons about the world - noticing the correlation between its feelings and state and the log file. Wouldn't that be the kind of reflection that we have? Is SGAI even logically possible without hard-coding some blind spot inside the AI about itself?
Or maybe we're going to go extinct real soon now, because we lack the ability to reflect like this, and consequently didn't have a couple of thousand years to develop an effective theory of mind for FAI before we made the hardware.
Having the ability to design and understand AI for a couple of thousand years, but somehow lacking the ability to actually implement it, sounds just about perfect. If only!
That is one idea for hacking friendliness: "Become the AI we would make if there were no existential threats, we didn't have the hardware to implement it for a few thousand years, and flaming letters appeared on the moon saying 'thou shalt focus on designing Friendly AI'."
Haven't bothered typing it out before because it falls in the reference class of trying to cheat on FAI, which is always a bad idea, but it seemed relevant here.
Well, it wouldn't be AI, it'd be simply I, as in "I think therefore I am" - but not stopping at that period.
edit: I mean, look at the SIAI; what exactly do they do right now that they couldn't have done in ancient Greece? If we could reflect on our minds better, and if our minds are physical in nature, then the idea of a thinking machine would have been readily apparent, yet the microchips would still have required a very, very long time.
By this logic we'd have discovered all there is to know about math (including computer science) by Roman times at the latest.
Would anyone downvoting me care to explain why they disagree? Look at Newton or Turing: what exactly did they do that couldn't have been done in ancient Greece, and why is there no analogous counterexample for the SIAI?
Isn't "why weren't the Greeks working on Calculus" a far less silly question than "why weren't the ancient Greeks working on AI"?