Comment author: thomblake 23 May 2012 05:13:55PM 1 point

> It is true that the specifics of body and environment drive some specific human values, but those are just side effects of X in that environment, and X in different environments only changes so much, and in predictable ways.

When you say "predictable", do you mean predictable in principle, or actually predictable in practice?

That is, are you claiming that you can predict what any human will value given their environment, and furthermore that the environment can be easily and compactly specified?

Can you give an example?

Comment author: FinalState 24 May 2012 11:53:09AM 0 points

Mathematically predictable, but somewhat intractable without a faster-running version of the instance receiving input at the same frequency. Or predictable within the ranges of some general rule.

Or just generally predictable given the level of understanding afforded to someone capable of making one in the first place, an understanding that could, for instance, describe the cause of just about any human psychological "disorder".
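Nothing in this thread specifies the GIA or its inputs, so the following is only a minimal sketch of the kind of prediction being claimed: if an instance is deterministic and you can feed a copy its exact input stream, you can replay it faster than real time. The `agent_step` interface and the running-sum toy are my own assumptions, not anything from the comment.

```python
def replay_prediction(agent_step, input_stream, state):
    """Predict a deterministic instance's outputs by replaying its
    exact input stream faster than real time. Assumes determinism
    and access to the same inputs at the same frequency."""
    outputs = []
    for x in input_stream:
        state, out = agent_step(state, x)
        outputs.append(out)
    return outputs

# Toy deterministic "instance": a running sum of its inputs.
def step(state, x):
    state = state + x
    return state, state

print(replay_prediction(step, [1, 2, 3, 4], state=0))  # [1, 3, 6, 10]
```

On this reading, the intractability conceded above is exactly the cost of obtaining the faster-running copy; without one, the replay cannot outpace the original.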

Comment author: JoshuaZ 23 May 2012 09:27:17PM -1 points

> Because it is the proxy for survival.

I'm not at all sure I understand what you mean. I don't see the connection between familiarity and survival. Moreover, not all general intelligences will be interested in survival.

Comment author: FinalState 23 May 2012 09:34:17PM -2 points

Familiar things didn't kill you; that is the connection. No, they are interested in familiarity, not survival. I just said that. It is rare but possible for a need for familiarity (as defined mathematically rather than linguistically) to result in the sacrifice of a GIA instance's self...
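FinalState never gives the mathematical definition he alludes to. Purely for concreteness, here is a minimal sketch of one common way familiarity can be formalized: as log-probability of an observation under the agent's empirical distribution of past observations. The frequency table and smoothing scheme below are my assumptions, not anything from the GIA.

```python
import math
from collections import Counter

class FamiliarityModel:
    """Toy familiarity score: log-probability of an observation under
    the empirical distribution of past observations. An illustrative
    assumption only; not FinalState's actual definition."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, event):
        self.counts[event] += 1
        self.total += 1

    def familiarity(self, event):
        # Laplace smoothing: unseen events get low but finite familiarity.
        p = (self.counts[event] + 1) / (self.total + len(self.counts) + 1)
        return math.log(p)  # higher = more familiar

m = FamiliarityModel()
for e in ["sun rises", "sun rises", "rain", "sun rises"]:
    m.observe(e)

print(m.familiarity("sun rises"))  # relatively high
print(m.familiarity("supernova"))  # much lower: unfamiliar
```

Under a score like this, an agent maximizing expected familiarity prefers states it has seen before, which at least gestures at why destroying familiar things would score badly.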

Comment author: TimS 23 May 2012 05:02:44PM 0 points

I'm sorry, I wasn't trying to use terminology to misstate your position.

What are three values that a GIA must have, and why must it have them?

Comment author: FinalState 23 May 2012 08:53:16PM -2 points

Ohhhh... sorry... There is really only one, and everything else is derived from it: familiarity. Any other values would depend on the input, output, and parameters. However, familiarity is inconsistent with the act of killing familiar things. The concern comes in when something else causes the instance to lose access to something it is familiar with, and the instance decides it can simply force that not to happen.

Comment author: TimS 23 May 2012 03:56:33PM 1 point

Name three values all agents must have, and explain why they must have them.

Comment author: FinalState 23 May 2012 04:23:04PM * -2 points

The concept of agent is logically inconsistent with the General Intelligence Algorithm. The things you are trying to refer to with "agent", "tool", etc. are just GIA instances with slightly different parameters, inputs, and outputs.

Even if the concept could be logically extended to the point of "Not even wrong", it would just be a convoluted way of looking at things.

Comment author: FinalState 23 May 2012 03:31:45PM * 0 points

EDIT: To edit and simplify my thoughts: getting a General Intelligence Algorithm instance to do anything requires masterful manipulation of parameters, with full knowledge of how it is generally going to behave as a result, i.e., a level of understanding of the psychology of all intelligent (and sub-intelligent) behavior. It is not feasible that someone would accidentally program something that would become an evil mastermind. GIA instances could easily be made to behave in a passive manner even when given affordances and outputs, kind of like a person who is happy to assist in any way possible because they are generally warm or high or something.

You can define the most important elements of human values for a GIA instance, because most human values are a direct logical consequence of something that cannot be separated from the GIA... i.e., if general motivation X accidentally drove intelligence (see: Orthogonality Thesis) and it also drove positive human values, then positive human values would be unavoidable. It is true that the specifics of body and environment drive some specific human values, but those are just side effects of X in that environment, and X in different environments only changes so much, and in predictable ways.

You can directly implant knowledge/reasoning into a GIA instance. The easiest way to do this is to train one under very controlled circumstances, and then copy the pattern. This reasoning would then condition the GIA instance's interpretation of future input. However, under conditions which directly disprove the value of that reasoning in obtaining X, the GIA instance would un-integrate that pattern and integrate a new one. This can be influenced with parameter weights.
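The GIA itself is unpublished, so the mechanism below is not FinalState's; but "train under controlled circumstances, then copy the pattern" has an obvious analogue in copying a trained model's parameters into a fresh instance. A minimal sketch, assuming the learned pattern is just a weight vector:

```python
import copy

class Instance:
    """Toy linear learner standing in for the hypothetical GIA
    instance; the real algorithm is unspecified in this thread."""

    def __init__(self, n_inputs):
        self.weights = [0.0] * n_inputs

    def predict(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x))

    def train(self, data, lr=0.1, epochs=200):
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                self.weights = [w + lr * err * xi
                                for w, xi in zip(self.weights, x)]

    def export_pattern(self):
        return copy.deepcopy(self.weights)

    def implant_pattern(self, pattern):
        # "Directly implant knowledge/reasoning": overwrite wholesale.
        self.weights = copy.deepcopy(pattern)

# Train one instance under very controlled circumstances...
teacher = Instance(2)
teacher.train([((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0)])

# ...then copy the pattern into a fresh instance.
student = Instance(2)
student.implant_pattern(teacher.export_pattern())
assert student.predict((1.0, 0.0)) == teacher.predict((1.0, 0.0))
```

"Un-integrating" a disconfirmed pattern would correspond here to later training simply overwriting the implanted weights; the comment leaves the role of parameter weights in gating that unspecified.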

I suppose this could be a concern regarding the potential generation of an anger instinct. This HEAVILY depends on all the parameters, however, and on any outputs given to the GIA instance. Also, robots and computers do not have to eat, and so have no instincts associated with killing things in order to do so... Nor do they have reproductive instincts...

Comment author: FinalState 23 May 2012 10:18:08AM -2 points

This one is actually true.

Comment author: Luke_A_Somers 25 April 2012 05:32:14PM 4 points

Could you unpack that a little more? It sounds like you're saying that 'some people' are unfairly discounting the possibility that QM is incomplete and locality is violated, for reasons that are not logically required. Is that accurate?

If so, I would like to point out that computational cheapness is not a good prior. It's vastly computationally cheaper to believe that our solar system is the only one and that the other dots are simulated, coarse-grained, on a thin shell surrounding it. It simplifies the universe to a mind-boggling degree for this to be the case. Indeed, we should not stop there. It is best if we get rid of the interior of the sun, the interior of the earth, the interior of every rock, trees falling in the forest, people we don't know... people we do know... and replace our interactions with them with simulacra that make stuff up and provide just enough to maintain a thin veneer of plausibility.

The rule set needed to implement such a world is HUGE, but the data and computational complexity are smaller by enough to make up for it.

Don't you think?

Comment author: FinalState 27 April 2012 01:25:51AM -1 points

The cheapest approach is to decline to differentiate between labeling systems that all conform to known observations. In this way, you stick to just the observations themselves.

The conventional interpretation of the Bell experiments violates this by treating c as a universal speed barrier. There is no evidence that such a barrier applies to things we have no experience of.
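Both sides of this exchange are implicitly appealing to a minimum-description-length style criterion: among models that make identical predictions, keep the one with the shorter description. A rough sketch, using compressed source length as a crude stand-in for description length (the measure, like the toy models, is my assumption; neither commenter specifies one):

```python
import inspect
import zlib

# Two "labeling systems" that agree on every observation we test.
def model_plain(x):
    return x * x

def model_convoluted(x):
    total = 0
    for _ in range(x):
        total += x
    return total

# Empirically equivalent on all checked (non-negative) inputs...
assert all(model_plain(x) == model_convoluted(x) for x in range(100))

def description_length(fn):
    """Crude proxy: compressed byte length of the source code."""
    return len(zlib.compress(inspect.getsource(fn).encode()))

# ...so an MDL-style rule prefers the cheaper description.
for fn in (model_plain, model_convoluted):
    print(fn.__name__, description_length(fn))
```

A real MDL argument needs a fixed universal description language rather than zlib, but the ordering illustrates the point in dispute: empirical equivalence alone does not determine which labeling system is cheapest to carry around.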

Comment author: shminux 25 April 2012 06:31:51PM 3 points

> It's really simple. The hidden variables are not local. General Relativity does not apply in the case of the particles below a certain size.

I assume that this is your personal model, given the lack of references. Feel free to flesh it out so that it makes new quantifiable testable predictions.

> Some people however, tribals to be specific, are more interested in protecting legacies than they are with using the computationally cheaper belief set. The cost is reduced frequency of new inspirations of understanding.

My personal crackpot index counter clicks like mad after reading this.

Comment author: FinalState 27 April 2012 01:15:49AM * 0 points

You did not even remotely understand this comment. The whole point of what is written here is that there are infinitely many "Not even wrong" theories that conform to all current observations. The conventional interpretation of the Bell experiments is one of the less useful ones, because it is convoluted and has a larger computational complexity than necessary.

Comment author: FinalState 24 April 2012 04:22:39PM * 3 points

I know two reasons why people suck at bringing theory into practice, neither of which completely validates your claim.

1) They suck at it. They are lost in a sea of confusion, and never get to a valuable deduction which can then be returned to the realm of practicality. But they are still intrigued, and they still get a little better with each new revelation granted by internal thought or by sites like Less Wrong.

2) They are too good at it. Before going to implement all the wonderful things they have learned, they figure out something new that would require updating their implementation approach. Then another thing. And another. Then they die.

I suffered from (1) for a while when I was younger, and now suffer from (2). I have found that the best way to overcome this is to convince other people of what I have figured out thus far. They take what I give them and run with it in their practical applications.

The act of explaining it to others is the thing that survives from your "dojo" model into the optimal approach to theory. It causes you to better understand the material yourself and to have more things to explain. It is this that brought me from identifying as an epistemologist to identifying as a mathematician who could create problem statements from the knowledge I had and provide functional solutions that could be programmed into computers, or analyzed by computers to create optimal solutions. Before that, I felt I was at my best when providing concise and elegant descriptions of functional knowledge that people could easily integrate into their approach.

A lot of that knowledge was thought-experiment versions of the type of stuff you read on Less Wrong. So to sum up: this site presents ready-to-consume, concise, functional knowledge, and promotes communication between people on interesting subjects. I understand a lot of people are going to be stuck at (1) for the foreseeable future, but so was I at one point. In the meantime, they can spread that ready-to-consume knowledge.
