wedrifid comments on The AI design space near the FAI [draft] - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (48)
You're making a large number of implicit, ill-founded assumptions here, all of which must be true for your argument to go through. Read my section on the AI space in general.
Firstly, you assume that the 'stuff' is unbounded. That need not be true. I, for one, want to figure out how the universe works, out of pure curiosity. That may well be a very bounded goal. I also like to watch nature, or things like the Mandelbox fractal, which is unbounded but also preserves nature. Those are valid examples of goals. The AI crowd, when warned not to anthropomorphize, switches to animalomorphization, or worse yet, bacteriomorphization, where the AI is just a smarter gray goo, doing the goo thing intelligently. No. The human goal system can serve as a lower bound on the complexity of the goal system of a superhuman AI. edit: and on top of that, all the lower biological imperatives, like the desire to reproduce sexually, we tend to satisfy in very unintended ways, from porn to birth control. If I were an upload, I would get rid of many of those distracting nonsense goals.
Secondly, you assume that achieving 'stuff' is bound by raw resources, rather than by, e.g., structuring those resources, so that we would be worth less than the atoms we are made of. That need never happen.
In this you have a sub-assumption that the AI can only do stuff the gray-goo way, and won't ever discover anything cleverer (like quantum computing, whose power grows much more rapidly with size), which it might, e.g., want to keep crammed together because of light-speed lag. The "AI is going to eat us all" is just another of those privileged, baseless guesses about what an entity far smarter than you would do. The near-FAI is the only thing we can be pretty sure won't leave us alone.
I don't accept that I make, or am required to make, any of the assumptions that you declare I make. Allow me to emphasize just how slight a convenience it has to be for an indifferent entity to exterminate humanity. Very, very slight.
I'll bow out of this conversation. It isn't worth having in a hidden draft.
Whatever. That is the problem with human language: the simplest statements have a zillion possible unfounded assumptions that are not even well defined, nor is the maker of the statement even aware of them (or would admit making them, because he didn't — he just manipulated symbols).
Take "I think, therefore I am". An innocent phrase, something that an entirely boxed-in, blind symbolic AI should be able to think, right? No. Wrong. The "I" is only a meaningful symbol when there is a non-I to separate from I; "think" is only meaningful when you can do something other than thinking, which you need to separate from thought via the symbol 'think'; "therefore" implies a contrast with statements where the conclusion does not follow; and "I am" refers to the notion that non-I might exist without I existing. Yet if you say something like this, are you 'making' those assumptions? You can say no — they come in pre-made, and aren't being processed.