He appears to be an ID proponent, though that is probably a simplification of his actual position.
How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it
http://www.meetup.com/technology-singularity-detonator
9 people joined in the last 5 hours and the first meetup hasn't even happened yet. This is the meetup description, including technical designs and how it leads to a singularity:
The plan is to detonate an intelligence explosion (leading to a technology singularity) starting with open-source Java artificial intelligence (AI) software that networks people's minds together through the internet, using the realtime interactive psychology of feedback loops between mouse movements and generated audio. "Technological singularity refers to the hypothetical future emergence of greater-than human intelligence through technological means." http://en.wikipedia.org/wiki/Technological_singularity

Computer programming is not required to join the group, but some kind of technical or abstract thinking skill is. We are going to make this happen, not talk about it endlessly like so many other AI groups do.

Audivolv 0.1.7 is a very early version of the user interface. The final version will be a massively multiplayer audio game unlike any existing game. It will learn from mouse movements in realtime instead of requiring good/bad buttons to train it. The core AI systems have not been created yet; Audivolv is just the user interface for them. http://sourceforge.net/projects/audivolv The whole system will be 1 file you double-click to run, and it works immediately on Windows, Mac, or Linux. This does not include Audivolv yet and has some parts that may be removed: http://sourceforge.net/projects/humanainet

It must be a "Friendly AI", which means it will be designed not to turn out like the Terminator movies or similar science fiction. It will work toward more productive goals and help the Human species. http://en.wikipedia.org/wiki/Friendly_artificial_intelligence My plan to make that happen is for it to be made of many people's minds and many computers, so it is us. It becomes smarter when we become smarter. One of the effects of that will be to greatly increase Dunbar's Number, which is the number of people or organizations a person can intelligently interact with before forgetting others. Dunbar's number is estimated around 150 today.
http://en.wikipedia.org/wiki/Dunbar%27s_number
This only requires the AI to be as smart as a parrot, since the people using the program do most of the thinking and the AI only organizes their thoughts statistically enough to decide who should connect to whom, in the way evolved code is traded (and verified to use only math, so it's safe) between computers automatically, in this massively multiplayer audio game. We will detonate a technology singularity using only the intelligence of a parrot plus the intelligence of the people using the program. This is very surprising to most people, who think huge grids of computers and experts are required to build Human intelligence in a machine. This is a shortcut, and will have much better results because it is us, so it has no reason to act against us, the way an AI made only of software might.
Infrastructure
Communication between these programs through the internet will be done as a Distributed Hash Table. The most important part of that is that each key (hash of some file bytes) has a well-defined distance to each other key, a distance(hash1, hash2) function, which gives the correct direction to search the network to find the bytes of any hash, or to statistically verify (but not with certainty) that it's not in the network. There may be a way to do it with certainty, but for my purposes approximate searching will work.
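The post doesn't name a specific distance function. A Kademlia-style XOR metric is one common choice for DHTs, and a minimal sketch of it, assuming the keys are equal-length SHA-256 hashes passed around as byte arrays, might look like this:

```java
import java.math.BigInteger;

public class HashDistance {
    // Kademlia-style XOR distance between two equal-length hashes.
    // A longer shared prefix means a smaller distance, which gives the
    // well-defined "direction to search" described above.
    public static BigInteger distance(byte[] hash1, byte[] hash2) {
        if (hash1.length != hash2.length)
            throw new IllegalArgumentException("hashes must be equal length");
        byte[] xor = new byte[hash1.length];
        for (int i = 0; i < hash1.length; i++)
            xor[i] = (byte) (hash1[i] ^ hash2[i]);
        return new BigInteger(1, xor); // interpret the XOR bytes as unsigned
    }
}
```

The XOR metric is symmetric and satisfies the triangle inequality, so each hop can provably move the search closer to the target key.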
In the same Distributed Hash Table, there will be public keys, used like filenames or identities, whose content can be modified only by whoever has the private key. If code evolves to include calculations based on your mouse movements and the mouse movements of 5 other people in realtime, then the numbers from those other mouse movements (between -1 and 1 for each of 2 dimensions, for each of 5 people) will be digitally signed, so everyone who uses the evolved code will know it is using the same people's continuing mouse movements rather than a modified copy. The code can be modified, but that would have a different hash and would be considered on its own merits, without the knowledge about the previous code and its specific connections to specific people. This will be done in realtime, not something to be saved and loaded later from a hard drive. Each new mouse position (or a few of them sent at once) will be digitally signed and broadcast to the network, the same as any other data broadcast to the network.
http://en.wikipedia.org/wiki/Distributed_hash_table
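The signing step above can be sketched with the standard java.security API. The key algorithm (RSA with SHA-256) and the flat doubles-in-a-buffer packet layout are my assumptions, not anything the post specifies:

```java
import java.nio.ByteBuffer;
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class SignedMousePacket {
    // Sign a batch of mouse positions (x,y values in [-1,1]) so other
    // players can verify the stream really comes from the same identity.
    public static byte[] sign(PrivateKey key, double[] positions)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update(toBytes(positions));
        return sig.sign();
    }

    public static boolean verify(PublicKey key, double[] positions, byte[] signature)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update(toBytes(positions));
        return sig.verify(signature);
    }

    private static byte[] toBytes(double[] positions) {
        ByteBuffer buf = ByteBuffer.allocate(positions.length * 8);
        for (double p : positions) buf.putDouble(p);
        return buf.array();
    }
}
```

Any tampering with the position values makes verify return false, which is what lets evolved code trust that a variable really carries a specific person's mouse stream.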
Similarly, but more fuzzily, the psychology of the feedback loops between mouse movements and automatically evolving Java code will be used as a distance function, and a second network will be organized that way, so you can search the network in the direction of other people whose psychology is more similar to your current state of mind and how you're using the program. This decentralized network will be searchable by your subconscious thoughts, because subconscious thoughts are expressed in how your mouse movements cause the code to evolve.
As you search this network automatically by moving your mouse, you will trade evolved code with those computers, always automatically verifying that the code uses only math and no file access, no java.lang.System class, nor anything else not provably safe. You will experience the downloaded code as it gradually connects to the code evolved for your mouse movements, code which generates audio as 44100 audio amplitudes (numbers between -1 and 1) per second per speaker.
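The post doesn't say how the "only uses math" check works. A real verifier would inspect bytecode; the following token-whitelist filter is only an illustrative sketch, and the allowed-token set is entirely my assumption:

```java
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SafeCodeCheck {
    // Rough sketch of "verify the code only uses math": reject evolved
    // source that mentions any identifier outside an allowed whitelist.
    // This is illustrative only; a production verifier would analyze
    // bytecode, not source tokens.
    private static final Pattern IDENTIFIER =
        Pattern.compile("[A-Za-z_][A-Za-z0-9_.]*");
    private static final Set<String> ALLOWED = Set.of(
        "Math.sin", "Math.cos", "Math.abs", "double", "return", "x", "y");

    public static boolean looksSafe(String source) {
        Matcher m = IDENTIFIER.matcher(source);
        while (m.find()) {
            if (!ALLOWED.contains(m.group())) return false;
        }
        return true;
    }
}
```

Because the identifier pattern includes dots, a reference like java.lang.System.exit is matched as a single token and rejected as a whole.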
Some of the variables in the evolved code will be the hash of other evolved code. Each evolved code will have a hash, probably from the SHA-256 algorithm, so it could be a length-64 hex string written in the code. Each variable will be a number between -1 and 1. No computer will have all the codes for all its variables; for those it doesn't have, it will use them simply as variables. If it has those codes, then there is an extra behavior: giving that code an amount of influence proportional to the value of the variable, or deleting the code if the variable stays negative for too long. In that way, evolved code will decide which other evolved code to download and how much influence each evolved code should have on the array of floating-point numbers in the local computer.
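That influence-and-deletion rule can be sketched in a few lines. The deletion threshold (how long "too long" is) is an assumption; the post doesn't give one:

```java
import java.util.HashMap;
import java.util.Map;

public class InfluenceTable {
    // Each evolved code is named by its hash; each variable holds a number
    // in [-1,1]. A positive value grants the named code proportional
    // influence; a value that stays non-positive for too many updates
    // causes the code to be deleted locally.
    private final Map<String, Double> influence = new HashMap<>();
    private final Map<String, Integer> negativeTicks = new HashMap<>();
    private static final int DELETE_AFTER_TICKS = 100; // assumed threshold

    public void update(String codeHash, double variable) {
        if (variable > 0) {
            influence.put(codeHash, variable); // proportional influence
            negativeTicks.put(codeHash, 0);    // reset the deletion counter
        } else {
            int ticks = negativeTicks.merge(codeHash, 1, Integer::sum);
            if (ticks >= DELETE_AFTER_TICKS) {
                influence.remove(codeHash);    // delete long-negative code
                negativeTicks.remove(codeHash);
            }
        }
    }

    public double influenceOf(String codeHash) {
        return influence.getOrDefault(codeHash, 0.0);
    }
}
```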
Since the decentralized network will be searched by psychology (instead of text or pixels in an image or other things search engines know how to do today), and since it's connected to each person's subconscious mind through mouse/music feedback loops, the effect will be a collective mind made of many people and computers. We are Human AI Net. Do you want to be temporarily assimilated?
Alternative To Brain Implants
Statistically reading inputs and writing outputs to neurons, subconsciously, without extra hardware.
A neuron is a brain cell that connects to thousands of other neurons and slowly adjusts its electricity and chemical patterns as it learns.
An incorrect assumption has extremely delayed the creation of technology that transfers thoughts between 2 brains. That assumption is that, to quickly transfer large amounts of information between a brain and a computer, you need hardware that connects directly to neurons.
Eyes and ears transfer a lot of information to a brain, but the other part of that assumption is eyes and ears are only useful for pictures and sounds that make sense and do not appear as complete randomness or whitenoise. People assume anything that sounds like radio static (a typical random sound) can't be used to transfer useful information into a brain.
Most of us remember what a dial-up modem sounds like. It sounds like information is in it, but it's too fast for Humans to understand. That's true of the dial-up modem sound only because it's digital and designed for a modem instead of for Human ears, which can hear around 1500 tones and, simultaneously, a volume for each. The dial-up modem can only hear 1 tone that oscillates between 1 and 0, with no volume, just 1 or 0. It gets 56000 of those 1s and 0s per second. Human ears are analog, so they have no such limits, but brains can think at most at around 100 changes per second.
If volume can have 20 different values per tone, then Human ears can hear up to 1500*100*log_base_2(20) = ~650000 bits of information per second. If you could take full advantage of that speed, you could transfer a book every few seconds into your brain, but the next bottleneck is your ability to think that fast.
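The arithmetic above checks out: log base 2 of 20 is about 4.32 bits per volume reading, and multiplying through gives roughly 650,000 bits per second.

```java
public class EarBandwidth {
    public static void main(String[] args) {
        int tones = 1500;            // distinct tones the ear can resolve
        int changesPerSecond = 100;  // how fast the brain can track changes
        int volumeLevels = 20;       // distinguishable volumes per tone

        // bits per volume reading = log2(volumeLevels), about 4.32
        double bitsPerTone = Math.log(volumeLevels) / Math.log(2);
        double bitsPerSecond = tones * changesPerSecond * bitsPerTone;

        // roughly 650,000 bits/second, matching the figure in the text
        System.out.printf("%.0f bits/second%n", bitsPerSecond);
    }
}
```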
If you use ears the same way dial-up-modems use a phone line, but in a way designed for Human ears and Human brains instead of computers, then your ears are much faster data transfer devices than brain implants, and the same is true for transferring information as random-appearing grids of changing colors through your eyes. We have computer speakers and screens for input to brains. We still have some work to do on the output speeds of mouse and keyboard, but there are electricity devices you can wear on your head for the output direction. For the input direction, eyes and ears are currently far ahead of the most advanced technology in their data speeds to your brain.
So why do businesses and governments keep throwing huge amounts of money at connecting computer chips directly to neurons? They should learn to use eyes and ears to their full potential before putting so many resources into higher-bandwidth connections to brains. They're not coming close to using the bandwidth they already have to brains.
Intuitively most people know how music can affect their subconscious thoughts. Music is a low bandwidth example. It has mostly predictable and repeated sounds. The same voices. The same instruments. What I'm talking about would sound more like radio static or whitenoise. You wouldn't know what information is in it from its sound. You would only understand it after it echoed around your neuron electricity patterns in subconscious ways.
Most people have only a normal computer available, so the brain-to-computer direction of information flow has to be low bandwidth. It can be mouse movements, gyroscope based game controllers, video camera detecting motion, or devices like that. The computer-to-brain direction can be high bandwidth, able to transfer information faster than you can think about it.
Why hasn't this been tried? Because science proceeds in small steps. This is a big step from existing technology but a small step in the way most people already have the hardware (screen, speakers, mouse, etc). The big step is going from patterns of random-appearing sounds or video to subconscious thoughts to mouse movements to software to interpret it statistically, and around that loop many times as the Human and computer learn to predict each other. Compared to that, connecting a chip directly to neurons is a small step.
It's a feedback loop: computer, random-appearing sound or video, ears or eyes, brain, mouse movements, and back to computer. It's very indirect, but it uses hardware that has evolved for millions of years, compared to the low-bandwidth hardware they implant in brains. Eyes and ears are much higher bandwidth, and we should be using them in feedback loops for brain-to-brain and brain-to-computer communication.
What would it feel like? You would move the mouse and instantly hear the sounds change based on how you moved it. You would feel around the sound space for the abstract patterns of information you're looking for, and you would learn to find them. When many people are connected this way through the internet, using only mouse movements and abstract random-like sounds instead of words and pictures, thoughts will flow between the brains of different people, thoughts that they don't know how to put into words. They would gradually learn to think more as 1 mind. Brains naturally learn to communicate with any system connected to them. Brains don't care how they're connected. They grow into a larger mind. It happens between the parts of your brain, and it will happen between people using this system through the internet.
Artificial intelligence software does not have to replace us or compete with us. The best way to use it is to connect our minds together. It can be done through brain implants, but why wait for that technology to advance and become cheap and safe enough? All you need is a normal computer and the software to connect our subconscious thoughts and statistical patterns of interaction with the computer.
Dial-up modem sounds were designed for computers. These interactive sounds/videos would be designed for Human ears/eyes and the slower but much bigger and parallel way the data goes into brains. For years I've been carefully designing free open-source software http://HumanAI.net - Human and Artificial Intelligence Network, or Human AI Net - to make this work. It will be software that does for Human brains what dial-up modems do for computers, and it will sound a little like a dial-up modem at first but start to sound like music when you learn how to use it. I don't need brain implants to flow subconscious thoughts between brains over internet wires.
Intelligence is the most powerful thing we know of. The brain implants are simply overkill, even if they become advanced enough to do what I'll use software and psychology to do. We can network our minds together and amplify intelligence and share thoughts without extra hardware. After that's working, we can go straight to quantum devices for accessing brains without implants. Let's do this through software and skip the brain-implant paradigm. If it works just a little, it will be enough that our combined minds will figure out how to make it work a lot more. That's how I prefer to start a http://en.wikipedia.org/wiki/Technological_singularity

We don't need businesses and militaries to do it first. We have the hardware on our desks. We're only missing the software. It doesn't have to be smarter-than-Human software. It just has to be smart enough to connect our subconscious thoughts together. The authorities have their own ideas about how we should communicate and how our minds should be allowed to think together, but their technology was obsolete before it was created. We can do everything they can do without brain implants, using only software and subconscious psychology. We don't need smarter-than-Human software, or anything nearly that advanced, to create a technology singularity. Who wants to help me change the direction of Human evolution using open-source (GNU GPL) software? Really, you can create a technology singularity starting from software with the intelligence of a parrot, as long as you use it to connect Human minds together.
When he says "intelligent design", he is not referring to the common theory that there is some god, not subject to the laws of physics, which created physics and everything in the universe. He says reality created itself as a logical consequence of having to be a closure. I don't agree with everything he says, but based only on the logical steps that lead up to that, he and Yudkowsky should have interesting things to talk about. Both are committed to obeying logic and getting rid of their assumptions, so there should be no unresolvable conflicts, but I expect lots of conflicts to start with.
Someone with very high IQ like:
- Christopher Michael Langan (he is also an autodidact)
- Marilyn Vos Savant
There is a list at : http://onemansblog.com/2007/11/08/the-massive-list-of-genius-people-with-the-highest-iq/
I suggest Christopher Michael Langan, as roland said. His "Cognitive-Theoretic Model of the Universe (CTMU)" ( download it at http://ctmu.org ) is very logical and conflicts in interesting ways with how Yudkowsky thinks of the universe at the most abstract level. Langan derives the need for an emergent unification of "syntax" (like the laws of physics) and "state" (like positions and times of objects) and that the universe must be a closure. I think he means the only possible states/syntaxes are very abstractly similar to quines. He proposes a third category, not determinism or random, but somewhere between that fits into his logical model in subtle ways.
QUOTE: The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.
When he says a system which tends toward a "generalized utility function", I think he means, for example, our physics follow a geodesic, so geodesic would be their utility function.
The cache problem is worst for language because it's usually made entirely of cache. Most words/phrases are understood by example instead of by reading a dictionary or thinking of your own definitions. I'll give an example of a phrase most people have an incorrect cache for. Then I'll try to cause your cache of that phrase to be updated by making you think about something relevant to the phrase which is not in most people's cache of it. It's something which, by definition, should be included but for other reasons usually will not be.
"Affirmative action" means for certain categories including religion and race, those who tend to be discriminated against are given preference when the choices are approximately equal.
Most people have caches for common races and religions, especially about black people in USA because of the history of slavery in USA. Higher quantity of relevant events gets more cache. More cache makes it harder to define.
Someone who thinks they act in affirmative-action ways for religion would usually redefine "affirmative action" when they sneeze and, instead of hearing "God bless you", hear "Devil bless you. I hope you don't discriminate against devil worshippers." Usually the definition is updated to end with "except for devil worshippers" and/or an exclusion is added to the cache. Then one may reconsider previous, incorrect uses of the phrase "affirmative action". The cache did not mean what they thought it meant.
We should distrust all language until we convert it from cache to definitions.
Language usually is not verified and stays as cache. It appears to be low pressure because no pressure is remembered. It's expected to always be cache. It's experienced as high pressure when one chooses a different definition. High pressure is what causes us to reevaluate our beliefs, and with language, reevaluating our beliefs leads to high pressure. With language, neither of those things tends to come first, so usually neither happens. Many things are that way, but it applies to language the most.
Example of changing cache to definition resulting in high pressure to change back to cache: using the same words for both sides of a war, regardless of which side your country is on, can be the result of defining those words. A common belief is that soldiers should be respected and enemy combatants deserve what they get. Language is full of stateful words like those. If you think in stateful words, then the cost of learning is multiplied by the number of states at each branch in your thinking. If you don't convert cache to definition (to verify later caches of the same idea), then such trees of assumptions and contexts go unverified; they merge with other such trees and form a tangled mess of exceptions to every rule, which eventually prevents you from defining anything based on those caches. That's why most people think it's impossible to have no contradictions in your mind, which is why they choose to believe new things which they know have unsolvable contradictions.
I don't understand why you are bothering asking your question - but to give a literal answer, my interest in synthesising intelligent agents is an offshoot of my interest in creating living things - which is an interest I have had for a long time and share with many others. Machine intelligence is obviously possible - assuming you have a materialist and naturalist world-view like mine.
I misunderstood. I thought you were saying it was your goal to prove that instead of you thought it would not be proven. My question does not make sense.
These "Whole Brain Emulation" discussions are surreal for me. I think someone needs to put forward the best case they can find that human brain emulations have much of a chance of coming before engineered machine intelligence.
The efforts in that direction I have witnessed so far seem feeble and difficult to take seriously - while the case that engineered machine intelligence will come first seems very powerful to me.
Without such a case, why spend so much time and energy on a discussion of what-if?
Why do you consider the possibility of smarter-than-Human AI at all? The difference between the AI we have now and that is bigger than the difference between the 2 technologies you are comparing.
I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.
That's exactly the high awareness I was talking about, and most people don't have it. I wouldn't be surprised if most people here failed at it, if it presented itself in their real lives.
I mean, are you saying you wouldn't save the burning orphans?
We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.
We have checks and balances of political power, but that works between entities on roughly equal political footing, and doesn't do much for those outside of that process. We can collectively use physical power to control some criminals who abuse their own limited powers. But we don't have anything to deal with supervillains.
There is fundamentally no check on violence except more violence, and 10,000 accelerated uploads could quickly become able to win a war against the rest of the world.
It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks.
Most people would not act like a Friendly AI, so "Whole Brain Emulation" only leads to "fewer risks" if you know exactly which brains to emulate and have the ability to choose those brains.
If whole brain emulation (for your specific brain) is expensive, it might result in the emulated brain coming from a person who starts wars and steals from other countries to get rich.
Most people prefer that 999 people from their country live even at the cost of 1000 people from another country dying, given no other known differences between those 1999 people. Also, unlike a "Friendly AI", their choices are not consistent. Most people will leave the choice at whatever was going to happen if they did not choose, even if they know there are no other effects (like jail) from choosing. If the 1000 people were going to die, unknown to any of them, to save 999, then most people would think "It's none of my business, maybe god wants it to be that way" and let the extra 1 person die. A "Friendly AI" would maximize lives saved if nothing else is known about all those people.
There are many examples of why most people are not close to acting like a "Friendly AI", even if we removed all the bad influences on them. We should build software to be a "Friendly AI" instead of emulating brains, and only emulate brains for different reasons, except maybe the few brains that think like a "Friendly AI". It's probably safer to do it completely in software.
My conclusions: it seems there is Far-Near and Near-Near, and if you ever again find yourself with time to meta-think that you are operating in Near mode, then you're actually in Far mode. So I will be more suspicious of hypothetical thought experiments from now on.
When one watches the movie series called "Saw", they will experience the "near mode" of thinking much more than through the examples given in this thread. "Saw" is about people trapped in various situations, enforced by mechanical means only (no psychotic person to beg for mercy, the same way you can't beg the train to stop), where they must choose which things to sacrifice to save a larger number of lives, sometimes including their own life. For example, the first "Saw" movie starts with 2 dying people trapped in an abandoned basement, with their legs chained to the wall, and the only way the first person can escape is to cut off his foot with the saw. Many times in the movie series, the group of trapped people chose whose turn it was to go into the next dangerous area to get the key to the next room. Similarly, the psychotic person who puts the people in those situations thinks he is doing it for their own good, because he chooses people who have little respect for their own lives, and through the process of escaping his horrible traps some of them have a better state of mind after escaping than before. I'm not saying that would really work, but that's the main subject of the movies and is shown in many ways simultaneously. These are good examples of how to avoid "meta thinking" and really think in "near mode": watch the "Saw" movies.
It's not a troll. It's a very confusing subject, and I don't know how to explain it better unless you ask specific questions.