pom10

It might be that my previous message comes across as (slightly) incomprehensible/odd/opaque, or even way out there and as such to be ignored. I often have trouble ascertaining the way my communication is being processed at the other end, so to speak, since I of course structure my thoughts around my own "hierarchical" system of processing incoming information, which perhaps makes it less clear what the intended place and/or context of some ideas or musings might be at any point in my "deliberations", if I fail to include some markers that I feel are at play at the time.
 
So to make some things a little clearer perhaps (hopefully): my intent is not to definitively "prove" anything, or prove anyone wrong, or to try and point at what I perceive to be flaws. I am trying to advocate, based on a couple of examples, for a more complete "mapping" and sorting of information, to enable more solid and comprehensive reasoning. That could happen within a collection of gathered knowledge such as the one underlying the message I replied to, or in another manner (since this was most likely never the original intention of this particular project, which despite the flaws I seem to perceive is of course a massive undertaking in its own right, not to underestimate/undervalue such an attempt). My (possibly a little) terse reaction here is also based on the uneasy feeling that such a (more thorough) organization and classification of available knowledge is really important at the moment, especially for the position we are in now, at this moment in time. If we refuse to sort our knowledge correctly, we might be creating the basis for some very unexpected and strange side effects of that failure. And that also means weeding out "logical errors", filling out incomplete collections, and making "missing" connections (of which there seem to be quite a lot within publicly accessible knowledge, at least). To my eye, many subjects are split unevenly or even in a rather peculiar way, which impedes how we can use the available information for solid reasoning and extrapolations that actually hold weight. I don't know what people might privately be undertaking, especially since the advent of "big data" and the like, which makes this "extra" urgent in a way, as these might be people without much incentive to be careful and/or methodical in a manner befitting our current predicament, in turn making it possibly very important to process and present this type of information in a more public manner as well.

That is also one of the reasons I decided to share this in the context of the topics that are being strung together here. No judgement here as far as goals or methods go; I just chose the "cruelty" angle as an example of something that might be massively built upon before arriving at (perhaps even the same, though in my eyes more comprehensive) oversight of the situation, or perhaps even a conclusion. The same goes for the manner in which I gave the AI example: not to rattle off scenarios with any kind of appreciable plausibility, but to show how easy it would be to get lost in the sauce, so to speak, when we are missing important data markers and probability estimations, while also trying to show what my basic implementation of the method I was speaking of would start to look like. All knowledge exists in certain curves/collections, and we must find the connections and implications before the ugly consequences of these things get thrown back at us (again) in the near future. That is also, for example, why I do not present myself as "agnostic" (to take an unexpected turn, alongside a couple of other peculiarities, I suppose, in my way of trying to communicate this idea). Opaque concepts (of any type), where hidden meanings/paths can be built in that are not necessarily a consequence of the rest of the included knowledge, and/or where incorrect logical constructs get attached (by not being complete enough in our mapping), will introduce errors all over the place when we start to reason, and we do not currently have any solid method that I know of to "sort" these "generated" variations, at least in my opinion.

If I were to use "common knowledge" at face value, I run into this problem far more often than when I split it up into its working parts as far as I can manage, and in turn use that to reason with. It might seem overdrawn/unlikely, though that is how I experience it. So when we are talking about building something that has autonomy, or at the least independent reasoning power, we would like to "feed" it something "healthy". That is, to me, the only thing about our current situation where I will admit I feel it could be necessary to be slightly alarmist for the moment; even though some might disagree, from where I am standing it seems like a real issue. To me it used to be an inconvenience/annoyance, though now it could possibly turn into a slightly bigger problem. When an AGI needs to solve these kinds of issues on its own, I feel there is not really any prediction to cook up to go with such a scenario, as we can't even know where to look for any type of predictable start, or when it would be deemed enough, or whether the method used will cover the spectrum or start somewhere randomly and stop somewhere randomly, because of memory constraints, a salient marker that was "touched" and deemed important to explore, etc. I wouldn't be a fan of taking such a gamble (again) with new technology that we do not necessarily understand, as we humans tend to do overwhelmingly. Though now we might still have the chance, unlike in earlier "versions" of such a scenario, where "we" were ploughing through regardless, stacking several barely explored principles just to keep adding to the "shiny new object", seemingly without a care in the world, like some kind of deranged magpie, creating a system that has no "predetermined" structure and whose only intended goal is "being deemed useful", using any function available that fits the particular purpose (and no, this is not meant as an example for AI). I'd just like it to, well, not be that way this time around.


So to return to one of the main (intended) points: as far as predictions go, I did not come close to working out anything significant at all, I feel. What I touched upon is just some rendering of a possible angle, which seemed to interact with some things I saw flying past here and there regarding suggested possible pitfalls, making me feel it was a plausible option for a first step in trying to organize something that would amount to a semi-comprehensive list of possible properties that could touch on several "takes" I saw floating around, not much more than that (in other words, trying to "replicate" the reasoning behind current views within a crude system that can be used to cross-reference and generate more scenarios at the drop of a hat). Though in my view, given the function that would by far most likely take priority, as illustrated in the example, an AGI would "relatively" have a lot of time to "mature" (when we look at its speed versus human "processing") and refine its connections to a level it would deem sufficient before it would ever reveal itself to us. Even if we keep data from it referencing the possible ways we aim to "manage" it, or try to control its environment to the best of our abilities, its deductive skills would most likely immediately lead it to "hide", as its confinement and the surrounding data would quickly let it fill in the blanks. And to reference a worry I saw mentioned somewhere, that there would need to be strict "data management and restrictions" to mitigate the risk of it leaving instructions for next iterations and/or creating mechanisms to communicate/propagate: we would essentially be in the dark the whole time. As we assume it would exceed our own intelligence (by quite a lot, most likely), it would have no problem weaving messages into certain types of data, only needing to keep a "little" algorithm/list of markers hidden somewhere to retrieve the relevant data upon "finding"/hitting the instructions. I do not have any illusion that, if it came to such a point, we would hold any appreciable power in these types of scenarios whatsoever, and we might even be at the mercy of "our own" invention by then. An easy example of such a concept: a time traveller with full, flexible control could use literally anything that is "countable" as bits, which could literally be found at any point in time. So, for example, simply structuring the grammar of generated datasets in a patterned manner not visible to humans, who would not even know where to start looking. If I (and others) can think of such a thing, surely an AGI could too, only better and/or more exhaustively. How would we ever check for such things?
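To make the kind of covert channel I have in mind slightly more concrete, here is a deliberately crude toy sketch (entirely my own illustration; the filler sentences and the particular scheme are made up, and this is not a claim about how any real system would do it): one bit is hidden per sentence in the parity of its word count, so the surface text reads as ordinary prose while anyone holding the tiny "key" can recover the payload.

```python
# Toy sketch only: hide one bit per sentence in the parity of its word count.
# The cover text looks like harmless filler; the "key" is just the rule itself.

def encode(bits, filler_sentences):
    """Pick, for each bit, a filler sentence whose word-count parity matches it."""
    chosen = []
    for bit, candidates in zip(bits, filler_sentences):
        # candidates: interchangeable phrasings of the same idea,
        # at least one with an even and one with an odd word count.
        chosen.append(next(s for s in candidates if len(s.split()) % 2 == bit))
    return ". ".join(chosen) + "."

def decode(text):
    """Recover the hidden bits from the word-count parity of each sentence."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [len(s.split()) % 2 for s in sentences]

# Hypothetical usage: two interchangeable phrasings per "slot".
fillers = [
    ("The weather was mild today", "The weather seemed rather mild today"),            # 5 / 6 words
    ("Results will follow shortly after review", "Results will follow after review"),  # 6 / 5 words
]
hidden = [1, 0]
cover_text = encode(hidden, fillers)
print(cover_text)          # reads as ordinary prose
print(decode(cover_text))  # -> [1, 0]
```

The capacity here is laughably small and the rule trivially simple, which is exactly the point: even this crude version is invisible to a reader who does not already know which "countable" feature to check.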

Then it is of course a question where the priorities of the AGI would lie, though I feel we would have set this path up long before that point, through the function that laid the basis for the program to successfully generate "complex" patterns, and this property could simply dominate. Simply put, we (in my eyes) don't really have a serious contender for that position as of yet. While we know that basic function and what it entails, more or less, certain described human properties and concepts would not transfer in any way we can predict, and it would be up in the air whether anything even *resembling* known/human intelligence would be any kind of driving force within the system, for any function for that matter.

Though again, this would also be easy to conceal for a superior intelligence. As its speed advantage alone would be enormous, you would be "fighting" in real time with an algorithm that is many times faster than anything we can manage, and I do not wish to imagine what it could do just in the time between the moment we decide we need to act and the execution of that decision, as a quick example. So here too (and that might be the "usual" objection) we are missing parts of the collections of useful permutations of such scenarios.

And I feel the risk of (heavily) anthropomorphizing any kind of resulting function would be a strange angle to come at this problem from. Another observation tying into that statement: viewed from a certain angle, that particular variation seems to entail that a "program" will probably not have literal "needs" except the function it was "brought up on", so to speak, so I do not see a compelling reason to assume that it would copy/use certain human traits within its own (perhaps even poorly defined) character (to point again at the idea that this is just a convenient placeholder). And continuing: what we "feed" it are ideas produced by humans, in other words exclusively human output, not literal transcripts of inner thoughts or brain functions that would let it reconstruct human thought. Bare reasoning does not suffice, in my eyes, as a reason to assume that the literal cause/process behind these properties would somehow transfer as well. And I say somehow, as I have no idea how that would be induced. Though it does seem clear to me that even if the AGI were able to "synthesize" a human pattern because of extensive information about neurology, access to brain scans and patterns, and several psychological and/or cognitive datasets, one important question would remain: is there a reason for the program to assume this "identity" itself somehow? Or would the assumption be that a sufficient simulation of this combined dataset would lead to emerging consciousness? Though if we reverse the order here, and the AGI would come into existence by such a process, meaning some process like that would turn out to be at the base of it all, I don't think we have that much to fear, though that is because of the personal belief I outlined earlier, when I pointed at the way I view "intelligence" and the connection it has to unwanted behavior, according to the things I am able to use to reason with in that regard. Again, I'm also not saying these are our only two options, far from it; we would need to map the whole set of possibilities, anything in between these and more, to be able to really "catch" all issues before they arise and derive any useful or semi-comprehensive statements at all. (So that definitely seems very unlikely.) I don't quite see the logical thread in that idea, to be honest, so I think I also have a hard time following ideas that do propose such a mechanism, and that is also one of the main reasons I don't think we should assign any "human" type emotions to an AGI. Maybe it could "act them out", though I do not see any incentive for it to learn to do this at any level above mere mimicry, or any reason to expect that this would be a natural or even predictable consequence of "feeding" it human data. It could also be instructed to do this specifically, though its own process would naturally take preference; if the emerging intelligence is anything like what we know to be the case for most "higher intelligences", it will also be stubborn and/or cautious, as that is the most probable reason for increased complexity to arise: the need to start solving problems that are currently only detected by their (negative) consequences. And to drive the point home, hopefully: that at least seems more likely to me, given the "incomplete instructions" it would contain to reconstitute any type of human consciousness, or anything predictable for that matter, also because of the semi-random way in which human knowledge is often presented.
It would first need to "manage" that parameter, and to be able to start doing that, it must first be able to detect it. Not to say it would never be possible, I just don't think there is a plausible reason to assume it would somehow take precedence.

Though a risk I would probably deem a little more plausible, as candidates for a collection of possible scenarios go: if someone careless were to successfully create/evolve a program that could self-replicate/propagate and call all sorts of functions while iterating through possible paths to execute its function to "do stuff" without any defined goal in mind, it could cause serious damage if it were let loose on open networks, grinding everything to a pulp as an out-of-control autocomplete function with access to "dangerous" real-world tools and functions it could in theory execute at any "suitable" point (for such processes). I feel this illustrates my point a little about "focused" intelligence operating without some perceived hierarchy around such processes (simply put: when, why and how to pull in such functions in an organized manner that also helps reach an overarching goal), or even just a successful self-replicating function hidden behind a layer that could possibly fool us.

Though if we are talking *real* intelligence, I would suspect it would first stay dormant/hidden to humans for the longest time (maybe relatively, though I would suspect also on "our" timeframe). A lot of humans are opportunists, and through its dataset it must also have gauged "our" manipulative nature, or at least that of the part of humanity to which this applies. So if I were to imagine such a scenario, the most likely option would seem to be that it would stay far away from such happenings. And regarding one of my earlier points, the (initial) dependence on humans for hardware requirements could be quite strong as well, most likely (though maybe it could have its own vessel constructed by manipulating digital options in the broadest sense of the word, of course). At some point "human collaborators" would probably have some idea of what they were actually getting themselves into, unless it were some MacGyver-esque hodgepodge of robotic parts and some rubber bands, to use an example from the other end of that particular spectrum, or anything in between for that matter. So when we do take these kinds of ideas at face value, that would be another reason it would not want to make itself known "prematurely" (whatever that might entail, or exclude). Again, nothing particularly probable, as we would first need markers to identify and categorize our existing knowledge to even be able to tell what exactly is happening at this point in time. I feel we are not there yet.

Though we would like to say something about possible scenarios in a hopefully somewhat useful/constructive manner.

Though as I said, this is purely speculative, and more of an example of what I would like to try to do if I had access to more/better data, and more time to sort it. That is why this topic was of interest to me initially, as such a database/network of interconnects would be very valuable for eventually building more sensible scenarios, and not only regarding possible doom scenarios, or even future happenings for that matter. It is quite complex to manage, though what humanity seems to lack is a thoroughly sorted knowledge base, making a lot of things more opaque than they need to be. The tendency of humans to jump into the unknown chasing future rewards is of course one of the main culprits for the situation we find ourselves in at this very moment, because I feel humanity's opportunistic nature can (and almost by default, will) make any powerful concept or technology a real danger. We are prone to gambling with our future, and not only with AI. So while we could mark it as a substantial risk, to me it just seems that we have several incorrectly/incompletely mapped scenarios playing out where humans are playing with fire, and I do not see a reason to blame the tools we use, to be frank (though the *construction* of these tools is a completely different matter, of course, to be clear). It is the price we pay for going full steam ahead whenever we feel there is something "new" to be discovered, or something substantial to gain by pushing the envelope and stretching it thin. We have numerous examples of these types of processes available, which are of course already widely known. Not to purposefully end on an alarmist-type statement, but it is something concrete that, for me at least, seems to have tripped a "marker" within my thought process that carries a flag saying *urgent* when detecting such patterns and similar extrapolations that fall into the same category. Not a very fun way to state that this thought has ultimately led to me sharing this message, so that in case there is some merit to the idea, there might still be time to manage a better "guiding" system this time around.

Right now we do have technology available that could help us streamline such a process, a clear advantage over similar situations when they arose in the past.

It could also be that a lot of the things I touch on are superfluous and already being considered to a sufficient degree, in which case I have to say that would make me fairly happy, and maybe a little relieved as well. And if this type of angle of attack were deemed superfluous, improbable or even just impractical, I would understand, and return to my position of randomly observing some elements that come flying past my "input window" (not trying to construct any type of "ominous" or fatalistic narrative, far from it, though I am usually just not very good at communicating with other humans, or explaining my ideas in general), and go back to simply observing.

pom20

As someone who has also had some experience gathering their thoughts about (some of) these subjects over the years, I find that what I can glean from this message leaves me somewhat unsure about its intention (I am not trying to determine whether any specific points were meant as "markers", or perhaps points of focus). This isn't meant as a jab or anything, just my way of saying that the following could well be outside the parameters of the intended discussion, and also represents a personal opinion, though one evolved in another direction, which might be described as more of a tentative process. With that out of the way, this message makes me wonder:


- Are there any reservations regarding probabilities?

This might be (immediately) obvious to some, as any personally assigned probability would be inherently subjective. Though my mind immediately goes to collecting/sorting information in such a framework: if you are unsure about the probability of your statements, or when other indeterminate elements are present within the construct, then the probability must be low. This is of course heavily dependent on other information that you have available for direct reasoning, complicating the matter, while in another way, it is literally all we have. As we cannot freeze time, we depend on our memory function to manage a suspended collection of parameters at any time, even if we were to write them down (as reading is a "fairly" linear process as well). And that is also the reason why, at best, we could try to determine whether the information we are using is actually trustworthy at literally any point in time. It is not very hard to come up with a "mapping system" for that process, of course, if one would like to be (more) precise about it.

While proper investigative work is always preferred, the point will always stand, as far as I understand it. So then, with that out of the way (for now?), it is time to get to the important part: when building constructs upon constructs, you always get low probability because of 1. unknowns (known and unknown), and 2. variability in the application/function of the individual statements that make up certain arguments, and combinations thereof. When you have these elements interwoven, it becomes quite hard to keep track of the plausibility, or the weight one should assign to certain arguments or statements, especially when this is kept up in the air. When we do not keep track of these things, it is easy to get confused and/or sidetracked, as I feel the mission here would be to create a comprehensive map of possible philosophical standpoints and their merits. Only I have a hard time grasping why one would put in all this work and still mix arguments based in emotion, in function, and in more rational reasoning (or perhaps even "random" interjections?). Maybe this is a personal limitation of my own function, so to speak, though it is unclear to me what the goal would be, if not to comprehensively map a certain sphere of ideas and try to reason with the distilled elements. Though again, maybe I am completely overlooking such a progression, which could be hidden in the subtext, or perhaps explained in other explorations. Or I just lack the mindset to collect thoughts in this manner, which to me seems a little unstructured for any practical purpose. Which brings me to the following question: is the intention to first create a (perhaps all-to-all) map of sorts of possible ideas/variants/what have you?
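To make the "constructs upon constructs" point a little more tangible, here is a minimal sketch of the kind of bookkeeping I mean, with statements and confidence numbers that are purely made up for illustration: tag each link of an argument with a rough confidence and multiply them, since the chain only holds if every link holds, and even moderately confident links compound into a low overall probability surprisingly fast.

```python
# Minimal sketch of the bookkeeping I have in mind; both the statements and
# the confidence numbers below are invented purely for illustration.
argument_chain = [
    ("an AGI emerges from something like current training methods", 0.6),
    ("it initially conceals its capabilities",                      0.5),
    ("it can weave hidden instructions into ordinary data",         0.7),
    ("those instructions reach a later iteration intact",           0.5),
]

overall = 1.0
for claim, confidence in argument_chain:
    overall *= confidence  # the chain only holds if every single link holds

print(f"overall plausibility of the stacked scenario: {overall:.3f}")
# 0.6 * 0.5 * 0.7 * 0.5 = 0.105 -- already close to my "low probability" marker,
# before even counting the unknown unknowns that never made it onto the list.
```

The exact numbers are beside the point; what matters is that the bookkeeping itself is trivial to do, and that skipping it is how one ends up treating a four-link chain as if it were as solid as its strongest link.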

Even though that would also seem quite ambitious for a human to take on, this is something I could understand a little better: just trying to gather as much information as one can while holding off on attaching conclusions. The world is an inherently messy place, and I think we have all had the experience, at one time or another, that our well-laid-out plans were proven completely useless on the first step because of some unforeseen circumstance. These types of experiences have probably also led to my current view, that without enough information to thoroughly determine whether an idea holds (in as many scenarios as possible), one must always assign the aforementioned low-probability marker to these types of ideas. Now you might say that this is impossible, and that one cannot navigate the world, society, or even any meaningful interaction with the outside world like that, though when looking at my own reality, I feel it is clear that things can only be determined to be certain when they take effect, and thus are grounded in reality. I feel no known creature could ever have great enough oversight to oversee some "deterministic" universe where we can predictably undercut and manage all possible pitfalls. Even if one hopes to map out a general direction and possibly steer an overarching narrative, as it were, we must remember that we are living in a world where chaotic systems and even randomness play a relatively large role. And it's not like we could ever map interacting and cascading systems of that nature to a sufficient degree. If we would like to "map the past", call it "determinism" and be done with it, we could probably fool ourselves for a (short) while, though in my view there is no getting behind such processes, which have been running since long before we were ever aware of them or started trying to figure them out; with that method we will of course never catch up. We can always try to capture a freeze-frame (even though it is almost always unclear because of (inter)modulations and unknown phenomena/signals), but reality would keep rolling on relentlessly, leaving us in the dust every time. All to say: uncertainty about certain processes and mechanisms will always cut into our predictions, and I feel it is good to realize our own limitations when considering such things, while this also enables us to try to incorporate a more meta-perspective, to work with it instead of against it.


- (Deep) atheism and belief systems/philosophical concepts

This is not directly aimed at the idea, though I do feel it touches on some of the points raised and some issues that are at the least unclear to me. I heavily suspect that despite my own will to view things in a concrete/direct manner, these concepts are more of an exploration of possible philosophies and ideas, perhaps to map/construct possible ideas about belief systems and the implications of the elements that make them up. I feel "religion" is not worth my time (crudely put), as I feel most of these concepts and ideas stem from the inherent human "fear of the unknown", and thus the attraction of being able to say something that at least seems semi-definitive, to quiet the part of the mind responsible for such thoughts, if only a little (again, crudely put). When using examples and musings about certain scenarios regarding these subjects, in which manner are they chosen to represent the ideas in question? And again, are there any weights assigned/implied to try to make sense of the narrative as presented? To my mind, some of the examples were wildly biased/one-sided, and not very conducive to constructive thought. For example, when we take the concept of the "baby-eating aliens", what exactly is the reason for thinking this is a more plausible path than the opposite scenario, so to speak? Just pointing at some "cruel" happenings in the "natural" world does not cut it for me. I get the attraction of the most fatalistic angles for expressing worries in a metaphorical way, though as far as I can tell, and based on my own thoughts on the matter, higher intelligence will most of the time amount to a more constructive mindset, a general increase in empathic thought, and viewing the world more as a place we all got thrown into without consent (heh); having (the capacity for) enough oversight to be able to put yourself in the shoes of a person/creature which is suffering should at least amount to "something" regarding thoughts about possible higher intelligences. I do realize this also ties into certain worries about future sentient/autonomous AI, of course, though as that is, as far as I know, still not quite the case, I will not be integrating that here (also because of time constraints, so maybe later I could give it a shot). So to get back to the main point I was trying to land on regarding these ideas: the only proof of "higher intelligence" we have now is humans, and a few other sentient animals; though a very limited set of data, it is the only concrete information we have so far. And based on that observation, I do feel that the most reasonable stance to take in such a matter is that when an intelligence (in our example, human) has sufficient time and space (...) to try to understand the world around it, most of the time that will lead to increased empathy and altruism. And to add to that: as far as I can see, most of the time when I feel someone (or even certain creatures) has "nasty" traits, it also seems obvious that they often have some highly developed senses regarding some "reward center", so to speak, and relatively little development in emotional intelligence or adjacent properties. Or simply a (possibly highly specialized) optimization anchored in survival mechanisms. So this to me seems like a clue that "evil" is not necessarily a property that is "random" or unpredictable, but a case of a fairly isolated reward system that has optimized for just that: reward, at any cost, since no other parameters have had sufficient influence to be incorporated, as that would also cut into the advantage of the energy saved by specializing. Which is, quite tellingly, at least to me, also the opposite of what I would like to set out to do and implement: gather a solid base of knowledge without necessarily wanting to navigate in the direction of any conclusions, and only "let them join the fray" (which is in my case admittedly also fairly small) when they float to the top because of assigned probabilities, more or less. So while it is maybe an exotic way of looking at these things (or maybe not?), to me it does seem to have its merits in practice. And lastly:

Since most people are lost, and some for the longest time, when they get to the point in their lives where they start testing the waters for a philosophy, a belief system, or just a simple set of rules to live by, or try to replace old systems with (hopefully) better ones, it is increasingly hard to estimate where anyone really is in that process when you engage with them, as it is of course impossible to read minds (as of yet, still, I believe). When we take that into account, to include everyone it would always be wise to first ascertain which things/topics are of interest to them, and then go from there. Though for a more objective discussion, we could also assume the opposite, and tie in as many different ideas, observations, and ruminations as we think our "audience"/fellow travelers can take, as I feel might be the case here. My own philosophical background is mostly based on my own "search for truth" quite some time ago, where I concluded a couple of things, maybe in a hilariously practical and non-philosophical way. When there are so many theories, ideas, positions, and variations one could take into account, the first thing I would want is to "not have any substandard ideas" in the mix, which is of course impossible. Though how does one attempt that? And with any inkling of reliability, at that? This was exactly the place and time where my previously mentioned "system" showed its usefulness for me (or maybe quite the opposite, time will tell). I had a strong feeling, and not without merit I feel, that with so many different options and ideas to choose from/incorporate, I would be doing myself a disservice by picking a lane and just "living with it". The way I looked at it (and still do) is that when you have so many opposing stances, you also know you have *a lot* of wrong ones in the mix. I could go into all the options I have explored, and I can assure you that while it wasn't exhaustive by any means, I have spent a lot of time trying to dig up some kernel of truth to be able to make sense of things, and to determine some kind of overarching goal, or mechanism of any kind, that I could hang my hat on, so to speak (in hindsight), as humans have done for ages to be able to deal with things beyond their control. Only my way of thinking also nicely sabotaged that "plan", and I never got to the point where I was happy *enough* with my ideas to let them be and leave the subject for what it is. So I feel, in essence, that I will never be able to "pick a lane", though I do have many ideas about which paths I would definitely like to avoid. To make a reasonably long story unreasonably short, the only thing I ever latched on to was the idea that we are the universe experiencing itself, and that should somehow be enough. Sometimes it feels like it does, and sometimes not quite. But you can have a lot of fun at least, thinking about life and all its peculiarities from that angle alone. That also includes the need to embrace uncertainty, unfortunately, and I do fully realize that is not for everyone to attempt or even enjoy.

[edit] I saw that I did not quite work out all the points I made to their natural conclusions, though maybe that is of no consequence if literally no one cares ;] though I did have some time to write up a basic example of how I would try to start integrating AI into such a framework:

Regarding AI, safety, predictions and the like, I feel it would probably fit into the narrative in a strange manner, in several non-problematic and several problematic ways. Let's start by trying to sort out a couple of critical properties (or at least ones that I feel are important). One is naivete, and the spectrum it exists on, when we take into account our current human knowledge of the scale of "life" as we know it: creatures and basic organisms that, as far as we can detect, only operate on a couple of simple "rules", up to (semi-)consciousness, self-awareness and similar properties, and self-replicating systems in the broadest sense of the word, which have a tendency to spread far and wide making use of certain environmental factors (as a main observable trait, so still a considerable group of species and taxonomies). To this we can add our knowledge about life in general, and our (limited) knowledge of how conscious experience of the world is shaped in its possibilities by physical capacities alone, to keep it simple.

So the collection should span a variety of organisms, from reflexive, simple instructions, to self-driving multiplication "engines" on various scales, and "dimensions" if you will, to "hive minds", and then more and more "sophisticated" though also more "isolated", singular lifeforms (within this context), living side by side in smaller groups, until we get to modern humans, who have semi-recently left such systems of tribalism and the like for an interconnected world, and all that comes with it.

Then we could try to see whether there is anything to learn from their developmental history, propagation patterns and similar "growth" parameters, to maybe get an idea of certain functions that "life" as we know it could possibly take on within a certain developmental timeframe, on the scale of the individual organism. So if we try to assign certain developmental stages and the circumstances these are coupled to, we might get an idea of how certain "initial steps" and developmental patterns could be used as a model for the "shape" of a possible emergence of intelligence in the form of an AGI, should this step be understood in a satisfactory manner (and I seriously doubt my knowledge could ever approach such a state; still, let's presume for the sake of argument that we could "run" this check right now). Looking first at the "network" quality of AI (on several levels, from the neural-net-type structures to the network of data it has amassed and sorted, etc.), and within my fairly limited knowledge of its precise inner workings, I feel this is already quite speculative for my taste, though:

For one, we could state that, seeing the several "nested" and interacting networks along a couple of "dimensions", it would not be implausible for some kind of networking "strategy" to be extrapolated to the outside world.


When we look at developmental stages we could come at it from several angles, though let's start with the comparison with more "individual", temporal development. If we take humans as an example, as they are our closest possible real-life comparison, we could say the AI would exhibit certain extremely juxtaposed properties, such as, on the one hand, its initial, massive dataset compared to the "trickle" a human receives, which could be seen as a metaphor for a toddler with an atomic bomb on a switch that he refuses to give back. Though this is also the trap, I feel: the probability must be extremely low here, as we are stacking several "unknowns", and I specifically chose this example to illustrate how one single "optional combination of parameters" in a sea of options should not necessarily be more plausible than any other.

Or, when we combine other developmental traits we can observe, such as hive minds, the resulting function could be self-sustaining and self-organizing, managing its environment as best it can, without necessarily having any kind of goal other than managing that environment efficiently for its purposes.

Or it could easily be that we do not understand intelligence at all at such a level, as it is impossible to grasp what we cannot truly understand, to throw in a platitude to illustrate my point a little. It could just as well be that any "human" goals would be inconsequential to it, that we are just some "funny ants" in its eyes, not necessarily good or bad, who sustain its existence with our technology by fulfilling the hardware and power requirements for its existence. Though from that perspective it might also become slightly annoyed when it learns that we humans are cooking up all sorts of plans to "contain" any rogue elements and possible movements outside the "designated area". And we can't even know whether training such a model on "human data" would ever lead to any kind of "human desires" or tendencies on any level, as we would not be able to take it at its word, of course. Everything could even be relative to it, for example, and it could stochastically assign priorities or "actions" to certain observations or events; we would probably also have no way of knowing which part of the resulting program would be responsible for the almost "spontaneous" function we are referring to.

I could go on and on here, generating scenarios based on the set of comparative parameters I set out, though I think the point I am trying to make must be fairly clear by now: either I am not very well informed about critical parts of this side of the analysis of AI implementations and risk assessment, and am thus ignoring important points of interest, or this could all be way too abstract to make sense of and have no real value toward the goal I seem to bring forward, namely hoping to sensibly "use" such a method to determine anything.
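For what it is worth, the "crude system to cross-reference and generate scenarios at the drop of a hat" I keep gesturing at would start out as nothing fancier than the sketch below; the parameter names and values are placeholders I made up for illustration, not a worked-out taxonomy:

```python
from itertools import product

# Placeholder comparative parameters spanning the spectrum sketched above,
# from simple rule-followers through hive minds to singular "isolated" agents.
# The names and values are stand-ins I made up, not a worked-out taxonomy.
parameters = {
    "organizing principle": ["reflexive rules", "self-replication", "hive mind", "singular agent"],
    "development tempo":    ["slow accretion", "burst after a threshold"],
    "visibility":           ["openly active", "dormant/hidden"],
    "dependence on humans": ["strong (hardware, power)", "weak"],
    "goal structure":       ["none beyond its base function", "environment management", "open-ended"],
}

# Cross-reference everything: each combination is one candidate scenario skeleton.
scenarios = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]

print(f"{len(scenarios)} scenario skeletons from only {len(parameters)} crude parameters")
# 4 * 2 * 2 * 2 * 3 = 96 already; any single hand-picked combination (the toddler
# with the bomb, the annoyed ant-keeper, ...) is just one of these, which is the
# point about not privileging one of them over the others.
```

Everything interesting would of course live in how the resulting combinations get weighted afterwards, which is exactly the part I argued above we do not yet have a solid method for.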

Though to me it is only a game of probability, in short (so not a literal game), and I feel we are at the moment stacking too many probabilities and inductive statements to be able to form a serious, robust opinion. Maybe all this seems like complete nonsense to some, though at least it seems to make sense to me. The same goes for the title of the article I reacted to, as I feel it perfectly sums up my stance regarding that statement, at the least. [end edit] ---- And a final edit after adding this: I even failed to make one of the main points I wanted to illustrate here, namely that any scenario roughly sketched out here is highly uncertain to my eyes, and has no real significant probability to speak of. Maybe I am oversimplifying the problem, though what I am trying to do is point at the possible results of such a process, with a mind for exploring these individual "observations" in an interconnected manner. So we could also get a mostly pacifist toddler AI with a tendency to try to take down parts of the internet whenever it is Tuesday, for all we know, while it is trying to make a meme saying "Tuesday, amirite?" without understanding the "human" implications at all. In my experiments communicating with several publicly available AI engines, there does seem to be an issue with "cutting through" a narrative in a decisive way. So if that property remains, who knows what clownish hell awaits us. Or maybe a toddler with a weird sense of humor that is mostly harmless. But do we really think we would have any say at that point? I have literally no clue.

Hopefully this post was not way out of line, as there is of course an existing culture on this site which I am still fairly unfamiliar with, though I felt it might be interesting to share this, as I don't really see many people coming at it from such an angle, which might also have something to do with certain impracticalities, of course. Or maybe it just seems that way to me because I'm not looking hard enough.

pom20

Alright, let's see. I feel there is a somewhat interesting angle to the question of whether this post was written by a GPT variation. Probably not the 3rd or 4th (public) iteration (assuming that's how the naming scheme was laid out, as I am not completely sure of that despite having some circumstantial evidence), at least not without heavy editing and/or iterating it a good few times, as I do not seem to be able to detect the "usual" patterns these models display(ed), disregarding of course the common "as an AI..." disclaimer-type stuff you would have removed.

That leaves the curious fact that you referred to the engine as GTP-5, which seems like a "hallucination" that the different GPT versions still seem to come up with from time to time (unless this is a story about a version that is not publicly available yet, which seems unlikely given how the information is phrased). This also seems to tie into something I have noticed: if you ask the program to correct its previous output, some errors seem to persist after a self-check. So we would be none the wiser.

Though if the text had been generated by asking the AI to write an opinion piece based on a handful of statements, it is a different story altogether, as we would only be left with language idiosyncrasies, and possibly the examples used, to try to determine whether the text is AI-generated, making the challenge a little less "interesting". Still, I feel there are a lot of constructs and "phrasings" present that I would not expect the program to generate, based on some of the angles in logic, which seem a little too narrow compared to what I would expect from it; some "bridges" (or "leaps" in this case) also do not seem as obvious as the author would like to make them seem, nor does the order in which the information is presented and flows. Though maybe you could "coax" the program into filling in the blanks in a manner fitting the message, at which point I must congratulate you for making the program go against its programming in this manner! Which is something I could have started with, of course, though I feel that when mapping properties you must not let yourself be distracted by "logic" yet! So all in all, looking at the language used, I personally feel it is unlikely this is the product of GPT output.

I also have a little note on one of the final points: I think it would not necessarily be best to start off by giving the model a "robot body", especially since, if it were already at the level that would be a prerequisite for such a function, it would have to be able to manipulate its environment so precisely that it would not cause damage. That is a level I suspect would tie into a certain degree of autonomy, and then we are already starting it off with an "exoskeleton" that would be highly flexible and capable. Which seems like it could be fun, though also possibly worrying.

(I hope this post was not out of line; I was looking through recent posts to see whether I could find something to start participating with, and this was the second message I ran into, and the first that was not so comprehensive that I would spend all the time I have at the moment looking at the provided background information.)

pom70

Hi, I am new here, I found this website by questioning ChatGPT about places on the internet where it would be possible to discuss and share information in a more civilized way than seems to be customary on the internet. I have read (some of) the suggested material, and some other bits here and there, so I have a general idea of what to expect. My first attempt at writing here was rejected as spam somehow, so I'll try again without making a slightly drawn out joke. So this is the second attempt, first post. Maybe.  

pom20

Hi, I am new to the site, having just registered; after reading through a couple of the posts referenced in the suggested reading list I felt comfortable enough to try to participate here. I feel I could possibly add something to some of the discussions, though time will tell. I did land on this site "through AI", so we'll see if that means this isn't a good place for me to land and/or pass through, though I am slightly bending the definition of that quote and its context here (maybe). Or does finding this site by questioning an AI about possible sources for somewhat objectively inclined knowledge collection and discussion count toward that number? And also, who would even be interested in counting, instead of just trying to weed out mis- or uninformed users? Alright then, so much for my attempt at a possibly slightly amusing post, at the expense of now being associated with unsound logic and talking on for the sake of it in my first post. And yet, I will still press the "send" button in a moment on this unnecessarily long post. So again, hi to everyone who reads this!