Ohhhh... sorry... There is really only one, and everything else is derived from it: familiarity. Any other values would depend on the input, output, and parameters. However, familiarity is inconsistent with the act of killing familiar things. The concern comes in when something else causes the instance to lose access to something it is familiar with, and the instance decides it can simply force that not to happen.
The concept of an agent is logically inconsistent with the General Intelligence Algorithm. What you are trying to refer to with "agent", "tool", etc. are just GIA instances with slightly different parameters, inputs, and outputs.
Even if the concept could be logically extended to the point of "not even wrong", it would just be a convoluted way of looking at it.
EDIT: To simplify my thoughts: getting a General Intelligence Algorithm instance to do anything requires masterful manipulation of parameters, with full knowledge of generally how it is going to behave as a result; a level of understanding of the psychology of all intelligent (and sub-intelligent) behavior. It is not feasible that someone would accidentally program something that would become an evil mastermind. GIA instances could easily be made to behave in a passive manner even when given affordances and output, kind of like a person tha...
I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be the one telling you which things I would want to do first and last. I don't really see what the risk is, since I haven't given anyone any unique knowledge that would allow them to follow in my footsteps.
A paper? I'll write that in a few minutes after I finish the implementation. Problem statement -> pseudocode -> implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.
Congratulations on your insights, but please don't *snrk* implement them until *snigger* you've made sure that... oh heck, I can't keep a straight face anymore.
The reactions to the parent comment are very amusing. We have people sarcastically supporting the commenter, people sarcastically telling the commenter they're a threat to the world, people sarcastically telling the commenter to fear for their life, people non-sarcastically telling the commenter to fear for their life, people honestly telling the commenter they're probably nuts, and people failing to g...
Please don't take this as a personal attack, but, historically speaking, everyone who has said "I am in the final implementation stages of the general intelligence algorithm" has been wrong so far. Their algorithms never quite worked out. Is there any evidence you can offer that your work is any different? I understand that this is a tricky proposition, since revealing your work could set off all kinds of doomsday scenarios (assuming it performs as you expect it to); still, surely there must be some way for you to convince skeptics that you can succeed where so many others have failed.
The cheapest approach is to decline to differentiate between the different labeling systems that all conform to the known observations. In this way, you stick to just the observations themselves.
The conventional interpretation of the Bell experiments violates this by implying that c is a universal speed barrier. There is no evidence that such a barrier applies to things we have no experience of.
You did not even remotely understand this comment. The whole point of what is written here is that there are infinitely many "not even wrong" theories that conform to all current observations. The conventional interpretation of the Bell experiments is one of the less useful ones, because it is convoluted and has a larger computational complexity than necessary.
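For concreteness, the uncontested observational core that any of these labeling systems has to reproduce is the standard CHSH statistic; this is a textbook statement, included here only as a reference point rather than anything specific to this thread:

```latex
% CHSH form of the Bell inequality (standard result, not from this thread).
% E(a,b) is the measured correlation for detector settings a and b.
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Local hidden-variable theories predict:  |S| \le 2
% Quantum mechanics allows (Tsirelson's bound, matched by experiment):  |S| \le 2\sqrt{2}
```

Every interpretation agrees on the observed value of S; the disagreement is only over which labeling system to wrap around that number.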
I know of two reasons why people suck at bringing theory to practice, neither of which completely validates your claim.
1) They suck at it. They are lost in a sea of confusion and never arrive at a valuable deduction that can be returned to the realm of practicality. But they are still intrigued, and they get a little better with each new revelation granted by internal thought or by sites like Less Wrong.
2) They are too good at it. Before going off to implement all the wonderful things they have learned, they figure out something new that would r...
Mathematically predictable, but somewhat intractable without a faster-running version of the instance receiving the same frequency of input. Or predictable within the ranges of some general rule.
Or just generally predictable given the level of understanding afforded to someone capable of making one in the first place; a level of understanding that could, for instance, describe the cause of just about any human psychological "disorder".