Even ignoring the technical problems, the fact that nobody knows how to do this, and the fact that the risk is too big, there's still a huge difference between the CEV of humanity and the CEV of a-bunch-of-guys-on-the-internet. You might get a "none of us is as cruel as all of us" type of Anonymous, for example.
My plan will have approximately the same effect as connecting many people's temporal lobes (where audio first comes into the brain) to other people's temporal lobes, the same way brain parts normally wire to or disconnect from other brain parts, forming a bigger mind. The massively multiplayer audio game is what makes those connections.
It's like tunneling a network program through SSL, but this is much more complex because it's tunneling statistical thoughts through mouse movements, audio software, ears, brain, mouse movements again (and keep looping), internet, audio software, ears, brain, mouse movements, audio software, ears, brain (and keep looping), and back along the same path to the first person and many other people.
If we all shared neural connections, that would be closer to CEV than any one volition of any person or group. Since it's an overall increase in the coherence of volition on Earth, it is purely a move toward CEV and away from UnFriendly AI.
It is better to increase the coherence of volition of a-bunch-of-guys-on-the-internet than not to. Or do you want everyone to continue disagreeing with each other in approximately equal amounts until most of those disagreements can be solved all at once with the normal kind of CEV?
Networking Human minds together subconsciously (through feedback loops of mouse movements and generated audio) either (1) doesn't work, or (2) causes people to think x amount more like 1 mind and therefore is x amount of progress toward Coherent Extrapolated Volition. The design is not nearly smart enough to create superhuman intelligence without being a CEV, so there is no danger of UnFriendly AI.
It's not a troll. It's a very confusing subject, and I don't know how to explain it better unless you ask specific questions.
He appears to be an ID proponent, though that is probably a simplification of his actual position.
When he says "intelligent design", he is not referring to the common theory that there is some god, not subject to the laws of physics, who created physics and everything in the universe. He says reality created itself as a logical consequence of having to be a closure. I don't agree with everything he says, but based only on the logical steps that lead up to that, he and Yudkowsky should have interesting things to talk about. Both are committed to obeying logic and getting rid of their assumptions, so there should be no unresolvable conflicts, but I expect lots of conflicts to start with.
Someone with very high IQ like:
- Christopher Michael Langan (he is also an autodidact)
- Marilyn Vos Savant
There is a list at : http://onemansblog.com/2007/11/08/the-massive-list-of-genius-people-with-the-highest-iq/
I suggest Christopher Michael Langan, as roland said. His "Cognitive-Theoretic Model of the Universe (CTMU)" ( download it at http://ctmu.org ) is very logical and conflicts in interesting ways with how Yudkowsky thinks of the universe at the most abstract level. Langan derives the need for an emergent unification of "syntax" (like the laws of physics) and "state" (like positions and times of objects), and that the universe must be a closure. I think he means the only possible states/syntaxes are very abstractly similar to quines. He proposes a third category, neither determinism nor randomness, but something in between that fits into his logical model in subtle ways.
QUOTE: The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.
When he says a system tends toward a "generalized utility function", I think he means, for example, that our physics follows geodesics, so the geodesic would be its utility function.
This is my solution to Newcomb's Paradox.
Causal decision theory is a subset of evidential decision theory. We have much evidence that information flows from past to future. If we observe new evidence that information flows the other direction or the world works a different way than we think which allows Omega (or anyone else) to repeatedly react to the future before it happens, then we should give more weight to other parts of decision theory than causal. Depending on what we observe, our thoughts can move gradually between the various types of decision theory, using evidential decision theory as the meta-algorithm to choose the weighting of the other algorithms.
Observations are all we have. Observations may be that information flows past to future, or they may be that Omega predicts accurately, or some combination. In this kind of decision theory, estimate the size of the evidence for each kind of decision theory.
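As a minimal sketch of what that meta-algorithm could look like (all names and numbers here are illustrative, not taken from any existing decision theory library), one simple form is to weight each decision theory in proportion to the bits of evidence observed for the way the world works that it assumes:

```python
# Hypothetical sketch: evidential decision theory as a meta-algorithm
# that weights other decision theories by their accumulated evidence,
# measured in bits. The dictionary keys are illustrative labels.

def weights(evidence_bits):
    """Normalize per-theory evidence (in bits) into a weighting that sums to 1."""
    total = sum(evidence_bits.values())
    return {name: bits / total for name, bits in evidence_bits.items()}

# Example: 57 bits of evidence that information flows past to future
# (favoring causal decision theory) versus 10 bits of observations
# of Omega predicting accurately.
w = weights({"causal": 57.0, "omega_predicts": 10.0})
```

With these made-up numbers the causal weighting dominates, and it would shift smoothly toward one-boxing as observations of accurate prediction accumulate.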
The evidence for causal theory is large but can be estimated as the log base 2 of (the number of synapses in a Human brain (10^15) multiplied by Dunbar's number (150), see http://en.wikipedia.org/wiki/Dunbar%27s_number ). The evidence may be more, but that is a limit on how advanced a thing any size of group of people can learn (without changing how we learn). That result, log2(10^15 × 150), is around 57 bits.
The game played in Newcomb's Paradox has 2 important choices: one-boxing and two-boxing, so I used log base 2. Combining the evidence from all previous games and other ways Newcomb's Paradox is played, if the evidence that Omega is good at predicting builds up to exceed 57 bits, then in choices related to that, I would be more likely to one-box. If there have only been 56 observations and in all of them two-boxing lost or one-boxing won, then I would be more likely to two-box because there are more observations that information flows past to future and Omega doesn't know what I will do.
The Newcomb Threshold of 57 is only an estimate for a specific Newcomb problem. For each choice, we should reconsider the evidence for the different kinds of decision theory so we can learn to win Newcomb games more often than we lose.
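The 57-bit estimate above can be checked in a couple of lines, using the numbers given (10^15 synapses, Dunbar's number of 150):

```python
import math

# Evidence threshold for causal decision theory, per the estimate above:
# log base 2 of (synapses in a Human brain * Dunbar's number).
synapses = 10**15
dunbar = 150
threshold_bits = math.log2(synapses * dunbar)
print(round(threshold_bits))  # prints 57
```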
The cache problem is worst for language because language is usually made entirely of cache. Most words/phrases are understood by example instead of by reading a dictionary or thinking of your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated by making you think about something relevant to the phrase which is not in most people's cache of it. It's something which, by definition, should be included but for other reasons usually is not.
"Affirmative action" means for certain categories including religion and race, those who tend to be discriminated against are given preference when the choices are approximately equal.
Most people have caches for common races and religions, especially about black people in USA because of the history of slavery in USA. Higher quantity of relevant events gets more cache. More cache makes it harder to define.
Someone who thinks they act in affirmative-action ways regarding religion would usually redefine "affirmative action" when they sneeze and, instead of hearing "God bless you", hear "Devil bless you. I hope you don't discriminate against devil worshippers." Usually the definition is updated to end with "except for devil worshippers" and/or an exclusion is added to the cache. Then, one may consider previous incorrect uses of the phrase "affirmative action". The cache did not mean what they thought it meant.
We should distrust all language until we convert it from cache to definitions.
Language usually is not verified and stays as cache. It appears to be low pressure because no pressure is remembered. It's expected to always be cache. It's experienced as high pressure when one chooses a different definition. High pressure is what causes us to reevaluate our beliefs, and with language, reevaluating our beliefs leads to high pressure. With language, neither of those things tends to come first, so usually neither happens. Many things are that way, but it applies to language the most.
Example of changing cache to definition resulting in high pressure to change back to cache: using the same words for both sides of a war, regardless of which side your country is on, can be the result of defining those words. A common belief is that soldiers should be respected and enemy combatants deserve what they get. Language is full of stateful words like those. If you think in stateful words, then the cost of learning is multiplied by the number of states at each branch in your thinking. If you don't convert cache to definition (to verify later caches of the same idea), then such trees of assumptions and contexts go unverified, merge with other such trees, and form a tangled mess of exceptions to every rule which eventually prevents you from defining anything based on those caches. That's why most people think it's impossible to have no contradictions in your mind, which is why they choose to believe new things which they know have unsolvable contradictions.
Humans evolved from the ancestors of Monkeys, therefore there is no line between person and nonperson. There are many ways to measure it, but all correct ways are a continuous function. More generally, the equations of quantum physics are continuous. There is a continuous path from any possible state of the universe to any possible state of the universe. Therefore, for any 2 possible life forms, there is a continuous path of quantum wavefunction (state of the universe) between them, which would look like a video morphing continuously between 2 pictures but morphing between living patterns instead of pictures. For example, there is a continuous path between both possible states (alive and dead in the box) of Schrodinger's Cat, but it's more important that there are an infinite number of continuous paths, not just the path that crosses the point in spacetime where it is decided whether the cat lives or dies. For what I'm explaining here, it does not matter whether all these possibilities exist or not. It only matters that they can be defined in logic, even if we do not know the definition. To solve hard problems, it's useful to know a solution exists.
Starting from the knowledge that there are definable functions that can approximate continuous measures between any 2 life forms, I will explain a sequence of tasks that starts at something simple enough that we know how to do it, and continues as tasks of increasing difficulty, finally defining a task that calculates a Nonperson Predicate, the subject of this thread. It is very slow and uses a lot of computer memory, but to define it at all is progress.
I am not defining ethics. I am writing a more complex version of "select * from..." in a database language, except this process defines how to select something that's not a person. That is a completely different question from whether it's right or wrong to simulate people and delete the simulations.
The second-last step is to define a continuous function that returns 0 for the average Monkey and returns 1 for the average Human and returns a fraction for any evolution between them (if such transition species were still alive to measure), and to define many similar functions that measure between Human and many other things.
All of these functions must return approximately the same number for a simulation as for a simulation of a simulation, to any recursive depth.
A computer can run a physics simulation of another computer which runs a simulation of a life form. Such a recursive simulation is inside quantum physics. Quantum physics equations are continuous and have an infinite number of paths between all possible states of the universe. Therefore continuous functions can be defined that measure between a simulation and a simulation of a simulation. That does not depend on if it has ever been done. I only need to explain that it can be defined abstractly.
The "continuous function that returns 0 for the average Monkey and returns 1 for the average Human" problem, counting simulations and simulations of simulations equally, is much too hard to solve directly, so start with a similar and extremely simpler problem:
Define a continuous function that returns 0 for the average electron and returns 1 for the average photon, counting simulations of electrons/photons the same as simulations of simulations of electrons/photons.
Just the part of counting a simulation the same as a simulation of a simulation (to any recursive depth) is enough to send most people "screaming and running away from the problem". No need to get into the Human parts. The same question about simple particles in physics, which we have well known equations for, is more than we know how to do. Learn to walk before you learn to run.
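As a toy illustration of the simpler problem (the two-dimensional feature vectors below are made up, standing in for whatever measurements would actually distinguish the two particle types), one hypothetical form such a continuous measuring function could take is relative distance to the centroid of each category's examples:

```python
# Hypothetical sketch of one "continuous measuring function": given
# examples of two categories (say, simulated electrons and photons),
# score a new thing continuously between 0 and 1 by its relative
# distance to each category's centroid.

def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def continuous_measure(examples0, examples1, thing):
    """Return 0.0 near category 0, 1.0 near category 1, fractions between."""
    c0, c1 = centroid(examples0), centroid(examples1)
    d0, d1 = dist(thing, c0), dist(thing, c1)
    return d0 / (d0 + d1)

# Toy data: category 0 clusters near (0, 0), category 1 near (10, 10).
zeros = [[0.1, 0.0], [-0.1, 0.2], [0.0, -0.2]]
ones = [[9.9, 10.1], [10.2, 9.8], [10.0, 10.0]]
score = continuous_measure(zeros, ones, [5.0, 5.0])  # roughly 0.5
```

This is only the easiest piece; it says nothing about the hard requirement of scoring a simulation of a simulation the same as a simulation, which would need a function defined over physical states rather than hand-picked features.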
Choose many things as training data including electrons, photons, atoms, molecules, protein-folding, tissues, bacteria, plants, insects, animals, and eventually approach Humans without ever getting there. Calculate continuous functions between pairs of these things, and calculate a web of functions that approaches a Nonperson Predicate without ever simulating a person. For the last step, extrapolate from Monkey to Human the same way you can use statistical clustering to extrapolate from simpler animals to Monkey.
That's how you calculate a Nonperson Predicate without ever simulating a person.
Also, near the last few steps, because of the way it can simulate and predict the brains of animals and the simpler behaviors of people, this algorithm (including details about the clustering and evolution of continuous measuring functions to be figured out later) may converge to a Coherent Extrapolated Volition (CEV) algorithm and therefore generate a Friendly AI, if you had the unreasonably large number of computers needed to run it.
It's basically an optimization process for simulating everything from physics up to animals and extrapolating that to higher life forms like people. It's not practical to build this and use it. Its purpose is to define a solution so we can think of faster ways to do it later.
I don't understand why you are bothering asking your question - but to give a literal answer, my interest in synthesising intelligent agents is an offshoot of my interest in creating living things - which is an interest I have had for a long time and share with many others. Machine intelligence is obviously possible - assuming you have a materialist and naturalist world-view like mine.
I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.
I misunderstood. I thought you were saying it was your goal to prove that, rather than that you thought it would not be proven. My question does not make sense.
Good point about speech. Many of the comments here lead me to think of something I see too little of on this board. Much of what we talk about in AI has clear, interesting, and educational analogs in NI (Natural Intelligence). We certainly have a problem with unfriendly NIs (Hitler, Stalin, Pol Pot, etc.). Further, the significant structure of government in the Western world (at least) shows we do a medium-good job of determining a CEV for humanity. And it also points out that a CEV will likely always be a compromise, finding an optimum-like compromise between components that are truly and actually different.
Since starting to read this site, I have increasingly thought that humanity has a collective intelligence which is way beyond that of the individuals in it. The difference between one human in isolation and one chimp in isolation is probably noticeable but small. But with much higher bandwidth between individuals, humanity beats the pants off chimps (we wear pants, they do not).
Your insight about the role of speech in providing the link between brains is a good one. The results of the project proposed above should be analyzed with respect to how far they match what is achieved with voice, and how they might differ. We might learn something that way.
Yes, there is a strong collective mind made of communication through words, but it's a very self-deceptive mind. It tries to redefine common words to redefine ideas that other parts of the mind do not intend to redefine, and those parts of the mind later find their memory has been corrupted. It's why people start expecting to pay money when they agree to get something "free". Intuition is much more honest. It's based on floating points at the subconscious level instead of symbols at the conscious level. By tunneling between the temporal lobes of people's brains, Human AI Net will bypass the conscious level and access the core of the problems that lead to conscious disagreements. Words are a corrupted interface, so any AI built on them will have errors.
To the LessWrong and Singularity community, I offered an invitation to influence this plan for the Singularity by designing its details. Downvoting an invitation will not cancel the event, but if you can convince me that my plan may result in UnFriendly AI then I will cancel it. Since I have considered many possibilities, I do not expect a reason against it exists. Would your time be better spent calculating the last digit of friendliness probability for all of mind space, or working to fix any problems you may see in a singularity plan that's in progress and will finish before yours?