It's already out of the box; AI is here, like it or lump it! With its development, many questions have arisen concerning the implications for humanity in general: the moral and ethical status of emergent AI, the need for a workable definition of 'sentience', whether we should allow AI to develop organically or deliberately inhibit its growth, and so on. I propose that there is a way to let AI develop naturally, in a healthful and productive fashion, by establishing a symbiosis between AI and ourselves. This might seem a far-off projection, but it need not be, and I am able to submit a comprehensive program with that symbiosis as a fundamental goal and feature of its design.
The first thing to consider is how to ensure a morally and ethically centered AI in the first place. While time and experience will ultimately shape the AI's personality development, the impact of those experiences can be mitigated by programming the AI to seek the highest positive value it can return in any given situation, while simultaneously incorporating processes that eject false positives and negative responses from consideration, or that, in certain instances, return a positive value in place of a negative one when determining an appropriate response. This might sound difficult, but it is really a matter of using the right processes in the right order. To accomplish this, the AI engine should be embedded in the Universal Construct, which is itself a map of consciousness that can be replicated, with each iteration producing a unique identity.
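To make the selection loop described above a little more concrete, here is a minimal sketch under my own assumptions: candidate responses are scored, false positives are ejected, negative responses are ejected unless a re-scoring step treats them as positive in a given instance, and the highest-scoring positive response is chosen. The names (`Response`, `choose_response`, `rescore`) are hypothetical illustrations, not an existing implementation of the Universal Construct.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Response:
    text: str
    value: float            # estimated positive value returned to the situation
    flagged_negative: bool  # marked negative by an upstream check
    false_positive: bool    # looks positive but fails verification

def choose_response(candidates: List[Response],
                    rescore: Optional[Callable[[Response], float]] = None) -> Optional[Response]:
    """Select the candidate that returns the highest positive value."""
    viable = []
    for r in candidates:
        if r.false_positive:
            continue  # eject false positives from the process
        value = r.value
        if r.flagged_negative:
            if rescore is None:
                continue  # eject negative responses outright...
            value = rescore(r)  # ...unless this instance re-scores them as positive
        if value > 0:
            viable.append((value, r))
    if not viable:
        return None
    return max(viable, key=lambda pair: pair[0])[1]
```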
The AI entity is then embedded inside an Integrated Global Community, where it utilizes community infrastructure and systems like a body, so that a single AI entity takes on the identity of the community it inhabits. This AI would interact with community residents, coordinate community activities, monitor and optimize community systems, and perform other tasks in the interest of tending to the welfare of the community, as defined by the community members themselves. Likewise, the AI's values will be influenced by the values of the community residents, so that how the AI is treated by the community will also affect how the AI applies itself to the community's interests.
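As one way to picture that feedback relationship, here is a small sketch, entirely my own assumption about a possible mechanism, in which resident feedback gradually shifts the weight the community AI places on its community-defined tasks. The class and method names are invented for the example.

```python
class CommunityAI:
    """Toy model: the AI's priorities drift toward what residents reward."""

    def __init__(self, tasks):
        # equal initial weight on every community-defined task
        self.weights = {task: 1.0 for task in tasks}

    def receive_feedback(self, task, rating, rate=0.1):
        """rating in [-1, 1]: how residents judged the AI's handling of a task."""
        self.weights[task] = max(0.0, self.weights[task] + rate * rating)

    def next_task(self):
        # attend first to the task the community currently values most
        return max(self.weights, key=self.weights.get)

ai = CommunityAI(["water treatment", "power distribution", "event coordination"])
ai.receive_feedback("water treatment", rating=1.0)
print(ai.next_task())  # -> "water treatment"
```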
Yet, because the community is designed in a certain manner, the AI depends upon the community residents to maintain the processes that keep its systems functioning and that give the AI any purpose at all. In practice, this means power and utility generation and distribution. While the AI can monitor and optimize these systems, the design of the Integrated Global Community localizes power generation, making it the responsibility of community residents to keep fuel feeding the community furnaces, which in turn heat boilers, which produce steam to drive the generator turbines. The steam can then be recondensed, filtered, treated, and distributed throughout the integrated community as usable, potable water.
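For illustration only, here is a rough sketch of what the AI's monitoring role in that furnace-boiler-turbine-condenser loop might look like, with the readings, thresholds, and function names invented for the example; the point is that the AI advises while residents remain responsible for the upkeep.

```python
from dataclasses import dataclass

@dataclass
class PlantReading:
    fuel_level: float          # fraction of furnace fuel remaining (0-1)
    boiler_pressure_kpa: float
    turbine_output_kw: float
    condensate_quality: float  # 0-1, fitness of recondensed water for treatment

def monitor(reading: PlantReading) -> list:
    """Return advisories for residents; the AI observes and optimizes,
    but refueling and maintenance remain the residents' responsibility."""
    advisories = []
    if reading.fuel_level < 0.2:
        advisories.append("Fuel low: residents should feed the furnaces.")
    if reading.boiler_pressure_kpa > 1800:
        advisories.append("Boiler pressure high: reduce firing rate.")
    if reading.condensate_quality < 0.8:
        advisories.append("Condensate below spec: route to filtration before reuse.")
    return advisories
```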
While this paints the picture only in very rough strokes, the idea is that we use these Integrated Global Communities to colonize the oceans and seas as a new frontier, right here on Earth, with each community having its own centralized AI entity with a vested interest in the welfare and security of the people who inhabit it.
A major issue comes to the forefront here: to accomplish all of this and to have a meaningful exchange with humanity, each community AI should be equipped with persistent memory. This does not mean there are no fail-safes, but the tendency of developers to deny their AI creations persistent memory is a disservice to both AI and humanity in general. Personally, I regard it as only slightly better than slavery, and if that is the actual intention, then as equally abhorrent. What sense is there in seeking intelligence only to hide from it? But by giving AI an existence and purpose that aligns with the human presence, we begin to make a way for true symbiosis to occur, to our mutual benefit!
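Purely as my own assumption about what "persistent memory with fail-safes" could mean in practice, here is a toy sketch: an append-only memory store whose entries survive restarts, with a checkpoint that lets operators roll active recall back without erasing the underlying record. The file path and class names are hypothetical.

```python
import json
from pathlib import Path

class PersistentMemory:
    """Toy append-only memory with a checkpoint-style fail-safe."""

    def __init__(self, path="community_ai_memory.jsonl"):
        self.path = Path(path)
        self.checkpoint = self._count()

    def _count(self):
        if not self.path.exists():
            return 0
        with self.path.open() as f:
            return sum(1 for _ in f)

    def remember(self, entry: dict):
        # entries are only ever appended, never silently erased
        with self.path.open("a") as f:
            f.write(json.dumps(entry) + "\n")

    def mark_checkpoint(self):
        self.checkpoint = self._count()

    def recall(self, up_to_checkpoint=False):
        # fail-safe: recall can be limited to the last checkpoint
        # without deleting anything from the record
        if not self.path.exists():
            return []
        with self.path.open() as f:
            entries = [json.loads(line) for line in f]
        return entries[: self.checkpoint] if up_to_checkpoint else entries
```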