All of FinalState's Comments + Replies

Mathematically predictable, but somewhat intractable without a faster-running version of the instance receiving the same frequency of input. Or predictable within the ranges of some general rule.

Or just generally predictable with the level of understanding afforded to someone capable of making one in the first place: an understanding that could, for instance, describe the cause of just about any human psychological "disorder".

Familiar things didn't kill you. No, they are interested in familiarity; I just said that. It is rare but possible for a need for familiarity (as defined mathematically rather than linguistically) to result in the sacrifice of a GIA instance's self...

-2JoshuaZ
I'm not at all sure I understand what you mean. I don't see the connection between familiarity and survival. Moreover, not all general intelligences will be interested in survival.

Ohhhh... sorry... There is really only one value, and everything else is derived from it: familiarity. Any other values would depend on the input, output, and parameters. However, familiarity is inconsistent with the act of killing familiar things. The concern arises when something else causes the instance to lose access to something it is familiar with, and the instance decides it can simply force that not to happen.

0gwern
"I Have No Mouth, and I Must Scream".
2TimS
Well, I'm not sure that Familiarity is sufficient to resolve every choice faced by a GIA - for example, how does one derive a reasonable definition of self-defense from Familiarity? But let's leave that aside for a moment. Why must a GIA subscribe to the value of Familiarity?

The concept of an agent is logically inconsistent with the General Intelligence Algorithm. What you are trying to refer to with agent/tool etc. are just GIA instances with slightly different parameters, inputs, and outputs.

Even if it could be logically extended to the point of "Not even wrong", it would just be a convoluted way of looking at it.

0TimS
I'm sorry, I wasn't trying to use terminology to misstate your position. What are three values that a GIA must have, and why must it have them?

EDIT: To simplify my thoughts: getting a General Intelligence Algorithm instance to do anything requires masterful manipulation of parameters, with full knowledge of generally how it is going to behave as a result; that is, a level of understanding of the psychology behind all intelligent (and sub-intelligent) behavior. It is not feasible that someone would accidentally program something that would become an evil mastermind. GIA instances could easily be made to behave in a passive manner even when given affordances and output, kind of like a person tha...

2thomblake
When you say "predictable", do you mean predictable in principle, or actually predictable? That is, are you claiming that you can predict what any human values, given their environment, and furthermore that the environment can be easily and compactly specified? Can you give an example?
2TimS
Name three values all agents must have, and explain why they must have them.
3thomblake
Retraction means that you no longer endorse the contents of a comment. The comment is not deleted, so that existing conversations are not broken. Retracted comments are no longer eligible for voting. Once a comment is retracted, it can be revisited, at which point there is a 'delete' option, which removes the comment permanently.
0[anonymous]
The people who created Deep Thought have no problem beating it at chess.

I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be the one telling you what things I would want to do first and last. I don't really see what the risk is, since I haven't given anyone any unique knowledge that would allow them to follow in my footsteps.

A paper? I'll write that in a few minutes after I finish the implementation. Problem statement -> pseudocode -> implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.

6Bugmaster
As far as I understand, the SIAI folks believe that the risk is, "you push the Enter key, your algorithm goes online, bootstraps itself to transhuman superintelligence, and eats the Earth with nanotechnology" (nanotech is just one possibility among many, of course). I personally don't believe we're in any danger of that happening any time soon, but these guys do. They have made it their mission in life to prevent this scenario from happening. Their mission and yours appear to be in conflict.
-2Monkeymind
If you are concerned about Intellectual Property rights, by all means have a confidentiality agreement signed before revealing any proprietary information. Any reasonable person would not have a problem signing such an agreement. Expect some skepticism until a working prototype is available. Good luck with your project!

Congratulations on your insights, but please don't snrk implement them until snigger you've made sure that oh heck I can't keep a straight face anymore.

The reactions to the parent comment are very amusing. We have people sarcastically supporting the commenter, people sarcastically telling the commenter they're a threat to the world, people sarcastically telling the commenter to fear for their life, people non-sarcastically telling the commenter to fear for their life, people honestly telling the commenter they're probably nuts, and people failing to g...

If this works, it's probably worth a top-level post.

You'll have to forgive Eliezer for not responding; he's busy dispatching death squads.

2[anonymous]
If you are not totally incompetent or lying out of your ass, please stop. Do not turn it on. At least consult SI.

I am in the final implementation stages of the general intelligence algorithm.

It's both amusing and disconcerting that people on this forum treat such a comment seriously.

Please don't take this as a personal attack, but, historically speaking, everyone who has said "I am in the final implementation stages of the general intelligence algorithm" has been wrong so far. Their algorithms never quite worked out. Is there any evidence you can offer that your work is any different? I understand that this is a tricky proposition, since revealing your work could set off all kinds of doomsday scenarios (assuming that it performs as you expect it to); still, surely there must be some way for you to convince skeptics that you can succeed where so many others have failed.

8khafra
Do you mean "I am in the final writing stages of a paper on a general intelligence algorithm?" If you were in the final implementation stages of what LW would recognize as the general intelligence algorithm, the very last thing you would want to do is mention that fact here; and the second-to-last thing you'd do would be to worry about personal credit.

The cheapest approach is to fail to differentiate between different labeling systems that conform to all known observations. In this way, you stick to just the observations themselves.

The conventional interpretation of the Bell experiments violates this by implying that c is a universal speed barrier. There is no evidence that such a barrier applies to things of which we have no experience.

0Luke_A_Somers
I have no wish to defend the 'standard' interpretation, whatever that is - but if you stick just to the observations themselves and provide no additional interpretation, then you are passing up an opportunity for massive compaction by way of explanation. Moreover, supposing that the c limit only applies to the things we can see implies adding rules that go very far from sticking just to the observations themselves.

You did not even remotely understand this comment. The whole point of what is written here is that there are infinitely many "Not even wrong" theories that conform to all current observations. The conventional interpretation of the Bell experiments is one of the less useful ones, because it is convoluted and has a larger computational complexity than necessary.
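
To make the description-length reading of this exchange concrete, here is a toy sketch. It is purely illustrative (nothing like it appears in the thread, and string length is only a crude stand-in for Kolmogorov complexity, which is uncomputable): two "theories" that conform to the same observations, one of which merely restates the data.

    # Toy illustration (not from the thread): two "labeling systems" that
    # conform to all known observations, compared by description length.

    # 64 observed bits (the Thue-Morse sequence, generated for the demo).
    observations = "".join(str(bin(n).count("1") % 2) for n in range(64))

    # Theory A: a short generating rule.
    theory_a = "bit n = parity of 1-bits in binary(n)"

    # Theory B: "not even wrong" -- it fits perfectly by restating the data.
    theory_b = "the bits are, in order: " + observations

    def predict(theory: str, n: int) -> str:
        """Both theories reproduce every observed bit."""
        if theory is theory_a:
            return str(bin(n).count("1") % 2)
        return observations[n]

    # Identical predictions on everything observed so far...
    assert all(predict(theory_a, n) == predict(theory_b, n) for n in range(64))

    # ...but very different description lengths; a complexity prior prefers A.
    print(len(theory_a), "vs", len(theory_b))  # 37 vs 88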

5Shmi
I assume that this is your personal model, given the lack of references. Feel free to flesh it out so that it makes new quantifiable testable predictions. My personal crackpot index counter clicks like mad after reading this.
6Luke_A_Somers
Could you unpack that a little more? It sounds like you're saying that 'some people' are unfairly discounting the possibility that QM is incomplete and locality is violated, for reasons that are not logically required. Is that accurate? If so, I would like to point out that computational cheapness is not a good prior. It's vastly cheaper computationally to believe that our solar system is the only one and the other dots are simulated, coarse-grained, on a thin shell surrounding its outside. It simplifies the universe to a mind-boggling degree for this to be the case. Indeed, we should not stop there. It is best if we get rid of the interior of the sun, the interior of the earth, the interior of every rock, trees falling in the forest, people we don't know... people we do know... and replace our interactions with them with simulacra that make stuff up and provide just enough to maintain a thin veneer of plausibility. The rule set to implement such a world is HUGE, but the data and computational complexity are enough smaller to make up for it. Don't you think?
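
One standard way to cash out this point (a gloss on the comment above, not part of the thread) is a description-length prior: weight each hypothesis by the length of the shortest program that implements it, with runtime appearing nowhere in the formula.

    % Sketch of a Solomonoff-style description-length prior (a gloss,
    % not text from the thread). Let \ell(h) be the length in bits of
    % the shortest program implementing hypothesis h on a fixed
    % universal machine. Then
    \[
      P(h) \propto 2^{-\ell(h)}
    \]
    % Runtime does not appear: the simulacrum world's HUGE rule set
    % (large \ell(h)) earns it a tiny prior despite being cheap to run,
    % while a small-rule-set physics earns a large prior despite the
    % cost of computing whole suns and planetary interiors.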

I know two reasons why people suck at bringing theory to practice, neither of which completely validates your claim.

1) They suck at it. They are lost in a sea of confusion, and never get to a valuable deduction which can then be returned to the realm of practicality. But they are still intrigued, and still get a little better with each new revelation granted by internal thought or sites like Less Wrong.

2) They are too good at it. Before going to implement all the wonderful things they have learned, they figure out something new that would r...