Comments

Yes, there is a strong collective mind made of communication through words, but it's a very self-deceptive mind. It tries to redefine common words, and with them ideas that other parts of the mind did not intend to redefine, and those parts of the mind later find their memory has been corrupted. It's why people start expecting to pay money when they agree to get something "free". Intuition is much more honest: it's based on floating-point values at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people's brains, Human AI Net will bypass the conscious level and access the core of the problems that lead to conscious disagreements. Words are a corrupted interface, so any AI built on them will have errors.

To the LessWrong and Singularity community, I offered an invitation to influence this plan for the singularity by designing its details. Downvoting an invitation will not cancel the event, but if you can convince me that my plan may result in Unfriendly AI, then I will cancel it. Since I have considered many possibilities, I do not expect such a reason exists. Would your time be better spent calculating the last digit of the friendliness probability over all of mind space, or working to fix any problems you may see in a singularity plan that is in progress and will finish before yours?

It's not a troll. It's a very confusing subject, and I don't know how to explain it better unless you ask specific questions.

When he says "intelligent design", he is not referring to the common theory that there is some god that is not subject to the laws of physics which created physics and everything in the universe. He says reality created itself as a logical consequence of having to be a closure. I don't agree with everything he says, but based only on the logical steps that lead up to that, him and Yudkowsky should have interesting things to talk about. Both are committed to obey logic and get rid of their assumptions, so there should be no unresolvable conflicts, but I expect lots of conflicts to start with.

I suggest Christopher Michael Langan, as roland said. His "Cognitive-Theoretic Model of the Universe (CTMU)" (download it at http://ctmu.org) is very logical and conflicts in interesting ways with how Yudkowsky thinks of the universe at the most abstract level. Langan derives the need for an emergent unification of "syntax" (like the laws of physics) and "state" (like positions and times of objects), and argues that the universe must be a closure. I think he means the only possible states/syntaxes are very abstractly similar to quines. He proposes a third category, neither deterministic nor random but somewhere between the two, that fits into his logical model in subtle ways.
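In case the term is unfamiliar: a quine is a program whose output is exactly its own source code. A minimal example (my addition for illustration, not something Langan gives):

```python
# A minimal Python quine: running it prints its own source code exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```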

QUOTE: The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.

When he says a system tends toward a "generalized utility function", I think he means, for example, that our physics makes objects follow geodesics, so the geodesic would be their utility function.
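A hedged illustration of that reading (my own gloss in standard notation, not Langan's): a geodesic is the path that extremizes a length/action functional, and that functional plays the role of the "utility" being optimized.

```latex
% Geodesic as an extremum of a length functional (standard form):
S[x] \;=\; \int \sqrt{\, g_{\mu\nu}\, \frac{dx^{\mu}}{d\lambda}\, \frac{dx^{\nu}}{d\lambda} \,}\; d\lambda ,
\qquad \delta S = 0 .
% Reading it as a "utility": the realized path is the one that extremizes S,
% just as an agent's realized choice maximizes its utility function.
```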

The cache problem is worst for language because language is usually made entirely of cache. Most words/phrases are understood by example instead of by reading a dictionary or thinking of your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated by making you think about something relevant to the phrase which is not in most people's cache of it. It's something which, by definition, should be included but which for other reasons will usually not be included.

"Affirmative action" means for certain categories including religion and race, those who tend to be discriminated against are given preference when the choices are approximately equal.

Most people have caches for common races and religions, especially about black people in the USA because of the history of slavery there. A higher quantity of relevant events builds more cache, and more cache makes the phrase harder to define.

Someone who thinks they practice affirmative action for religion would usually redefine "affirmative action" when they sneeze and, instead of hearing "God bless you", hear "Devil bless you. I hope you don't discriminate against devil worshippers." Usually the definition is updated to end with "except for devil worshippers", and/or an exclusion is added to the cache. Then one may reconsider previous, incorrect uses of the phrase "affirmative action": the cache did not mean what they thought it meant.

We should distrust all language until we convert it from cache to definitions.

Language usually is not verified and stays as cache. It appears to be low-pressure because no pressure is remembered; it's expected to always stay cache. It's experienced as high-pressure when one chooses a different definition. High pressure is what causes us to reevaluate our beliefs, and with language, reevaluating our beliefs is what leads to high pressure. Since neither tends to come first, neither usually happens. Many things work that way, but it applies to language the most.

Example of changing cache to definition resulting in high pressure to change back to cache: using the same words for both sides of a war, regardless of which side your country is on, can be the result of defining those words. A common belief is that soldiers should be respected and enemy combatants deserve what they get. Language is full of stateful words like those. If you think in stateful words, then the cost of learning is multiplied by the number of states at each branch in your thinking. If you don't convert cache to definition (so you can verify later caches of the same idea), then such trees of assumptions and contexts go unverified; they merge with other such trees and form a tangled mess of exceptions to every rule, which eventually prevents you from defining anything based on those caches. That's why most people think it's impossible to have no contradictions in your mind, and why they choose to believe new things which they know have unsolvable contradictions.
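A small programming analogy for the cache-versus-definition point (my own illustrative sketch; the names are hypothetical): a value copied into a cache quietly goes stale when the definition it was derived from changes, and only recomputing from the definition catches the drift.

```python
# Illustrative sketch: a cached belief vs. one recomputed from its definition.
# All names are hypothetical; this is only an analogy for cached language.

definitions = {"free": "costs no money"}

# Cache the meaning once, by example, and never look back.
cached_meaning = definitions["free"]

# Later the shared definition drifts.
definitions["free"] = "costs no money up front, then billed monthly"

def meaning_from_cache():
    # Fast, feels like "low pressure" -- but silently stale.
    return cached_meaning

def meaning_from_definition():
    # Slower to re-derive, "high pressure" -- but tracks the real definition.
    return definitions["free"]

print(meaning_from_cache())       # costs no money  (stale belief)
print(meaning_from_definition())  # costs no money up front, then billed monthly
```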

I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.

I misunderstood. I thought you were saying it was your goal to prove that, rather than that you thought it would not be proven. My question does not make sense.
