New music powers
Original post: http://bearlamp.com.au/new-music-powers/
I have written before about how I am pretty terrible at replaying music in my head. This gives me the ability (appalling, to musically oriented people) to do things like listen to the same song on repeat 500 times or more in a row without being bothered by it either way. I never cared about the idea beyond a sense of "this is interesting but irrelevant".
Being indifferent to music has left me completely useless at holding a musical preference, at exploring the value of music by going to music events, or at participating in musical experiences.
This week something changed! Or, more accurately, last week. Last week I was listening to a piece for the n'th time while quite badly sleep deprived. As I listened, the music started falling apart. Different parts of the music changed volume, so that I could isolate different instruments and follow different features of the music. At the time, being a bit sleep deprived, I took it as a warning that maybe it was time to go to bed. Hint hint: you're going a little nuts.
Today I noticed I can still do it. When I am no longer sleep deprived I can pay attention to music in a different way than I used to be able to. I can single out the drums and only "listen" to that part, or the guitar, or the vocals. (it's pop music on the radio).
Of course, the reason I bothered to write about it, and the reason it's interesting, is, as half the readers can probably imagine: I told a musical friend of mine that I had developed new powers, and he said,
Wait, people can't normally do that?
So I get to add this to the pile of typical-mind, sensory-perception assumptions we make when we interpret our own individual world through our own senses. What if yours worked a bit differently? How much would that fundamentally change how you operate as a human? How much you assume about the world around you and how it works? And how everyone else works?
Question: What are your natural assumptions about how your senses work? Have you ever noticed anyone else acting on different basic natural assumptions?
Meta: this took 45mins to write.
My mind must be too highly trained
I've played various musical instruments for nearly 40 years now, but some simple things remain beyond my grasp. Most frustrating is sight reading while playing piano. Though I've tried for years, I can't read bass and treble clef at the same time. To sight-read piano music, when you see this:

you need your right hand to read it as C D E F, but your left hand to read it as E F G A. To this day, I can't do it, and I can only learn piano music by learning the treble and bass clef parts separately to the point where I don't rely on the score for more than reminders, then playing them together.
Merry Newtonmas LW. Have some rationalist music.
Related to: So You've Changed Your Mind
Basically, my band did an album whose theme was "change your mind", largely inspired by the LW sequence. It's not Bayes' Theorem in rhyming form, but the subject matter and spirit of it should (hopefully) resonate with LW readers.
Anyway, for obvious reasons I'm curious what you'll think of it, and so in the spirit of giving here's a direct link to download the album for free. If you don't want to download the whole thing immediately, you can also stream each song via Bandcamp.
Sound-wise, you'll probably like it if you like 90s rock/pop/alternative.
How to detonate a technology singularity using only parrot level intelligence - new meetup.com group in Silicon Valley to design and create it
http://www.meetup.com/technology-singularity-detonator
9 people joined in the last 5 hours and the first meetup hasn't even happened yet. This is the meetup description, including the technical designs and how they lead to a singularity:
The plan is to detonate an intelligence explosion (leading to a technology singularity) starting with an open-source Java artificial intelligence (AI) software which networks people's minds together through the internet, using the realtime interactive psychology of feedback loops between mouse movements and generated audio. "Technological singularity refers to the hypothetical future emergence of greater-than-human intelligence through technological means." http://en.wikipedia.org/wiki/Technological_singularity
Computer programming is not required to join the group, but some kind of technical or abstract thinking skill is. We are going to make this happen, not talk about it endlessly like so many other AI groups do.
Audivolv 0.1.7 is a very early version of the user interface. The final version will be a massively multiplayer audio game unlike any existing game. It will learn based on mouse movements in realtime instead of requiring good/bad buttons to train it. The core AI systems have not been created yet; Audivolv is just the user interface for that. http://sourceforge.net/projects/audivolv The whole system will be 1 file you double-click to run, and it works immediately on Windows, Mac, or Linux. This does not include Audivolv yet and has some parts that may be removed: http://sourceforge.net/projects/humanainet
It must be a "Friendly AI", which means it will be designed not to turn out like the Terminator movies or similar science fiction. It will work toward more productive goals and help the Human species. http://en.wikipedia.org/wiki/Friendly_artificial_intelligence My plan to make that happen is for it to be made of many people's minds and many computers, so it is us. It becomes smarter when we become smarter. One of the effects of that will be to greatly increase Dunbar's Number, which is the number of people or organizations that a person can intelligently interact with before forgetting others. Dunbar's number is estimated at around 150 today.
http://en.wikipedia.org/wiki/Dunbar%27s_number
This only requires the AI to be as smart as a parrot, since the people using the program do most of the thinking; the AI only organizes their thoughts statistically enough to decide who should connect to whom, in the way evolved code is traded (and verified to use only math, so it's safe) between computers automatically in this massively multiplayer audio game. We will detonate a technology singularity using only the intelligence of a parrot plus the intelligence of the people using the program. This is very surprising to most people, who think huge grids of computers and experts are required to build Human intelligence in a machine. This is a shortcut, and it will have much better results because it is us, so it has no reason to act against us, the way an AI made only of software might.
Infrastructure
Communication between these programs through the internet will be done as a Distributed Hash Table. The most important part of that is that each key (the hash of some file bytes) has a well-defined distance to each other key, a distance(hash1, hash2) function, which proves the correct direction to search the network to find the bytes of any hash, or to statistically verify (but not with certainty) that it's not in the network. There may be a way to do it with certainty, but for my purposes approximate searching will work.
In the same Distributed Hash Table there will be public keys, used like filenames or identities, whose content can be modified only by whoever has the private key. If code evolves to include calculations based on your mouse movements and the mouse movements of 5 other people in realtime, then the numbers from those other mouse movements (between -1 and 1 for each of 2 dimensions, for each of 5 people) will be digitally signed, so everyone who uses the evolved code will know it is using the same people's continuing mouse movements rather than a modified copy of the code. The code can be modified, but the modification would have a different hash and would be considered on its own merits, without the knowledge about the previous code and its specific connections to specific people. This will be done in realtime, not saved and loaded later from a hard drive. Each new mouse position (or a few of them sent at once) will be digitally signed and broadcast to the network, the same as any other data broadcast to the network.
http://en.wikipedia.org/wiki/Distributed_hash_table
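The post doesn't name a specific distance metric, but the property it describes (a distance between keys that tells every hop which neighbor is closer to the target) is exactly what Kademlia-style DHTs get from the XOR metric. A minimal Python sketch, with SHA-256 keys as the text suggests; the function names are illustrative:

```python
import hashlib

def key_of(data: bytes) -> int:
    # Key = SHA-256 hash of the file bytes, interpreted as an integer.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def distance(hash1: int, hash2: int) -> int:
    # Kademlia-style XOR metric: symmetric, zero only for identical keys,
    # so every hop can tell which known peer is "closer" to the target.
    return hash1 ^ hash2

def closest_peer(target: int, peers: list[int]) -> int:
    # One lookup step: forward the query to the known peer whose ID is
    # nearest the target key under the XOR metric.
    return min(peers, key=lambda p: distance(p, target))
```

Because each step strictly reduces the distance to the target, a lookup converges in O(log n) hops, which is what makes approximate searching of the whole network practical.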
Similarly, but more fuzzily, the psychology of feedback loops between mouse movements and automatically evolving Java code will be used as a distance function, and a second network will be organized that way, so you can search the network in the direction of other people whose psychology is more similar to your current state of mind and how you're using the program. This decentralized network will be searchable by your subconscious thoughts, because subconscious thoughts are expressed in how your mouse movements cause the code to evolve.
As you search this network automatically by moving your mouse, you will trade evolved code with those computers, always automatically verifying that the code uses only math and no file access or java.lang.System class or anything else not provably safe. You will experience the downloaded code as it gradually connects to the code evolved for your mouse movements, code which generates audio as 44100 audio amplitudes (numbers between -1 and 1) per second per speaker.
Some of the variables in the evolved code will be the hashes of other evolved code. Each evolved code will have a hash, probably from the SHA-256 algorithm, so it could be a length-64 hex string written in the code. Each variable will be a number between -1 and 1. No computer will have all the codes for all its variables, but for those it doesn't have, it will use them simply as variables. If it has those codes, then there is an extra behavior of giving that code an amount of influence proportional to the value of the variable, or deleting the code if the variable stays negative for too long. In that way, evolved code will decide which other evolved code to download and how much influence each evolved code should have on the array of floating-point numbers in the local computer.
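The influence rule described above can be sketched in a few lines. This is a Python sketch of the idea, not the project's actual (Java) code; the function name, the pruning rule, and the `patience` threshold are illustrative assumptions:

```python
def mix(signals, weights, negative_ticks, patience=100):
    """Combine the outputs of evolved-code modules, each keyed by its hash.

    signals:        {hash: latest output in [-1, 1]}
    weights:        {hash: the variable in [-1, 1] controlling influence}
    negative_ticks: {hash: how long that variable has stayed negative}
    """
    dropped = []
    for h, w in weights.items():
        if w < 0:
            # Track how long the controlling variable has been negative;
            # delete the code once it has stayed negative for too long.
            negative_ticks[h] = negative_ticks.get(h, 0) + 1
            if negative_ticks[h] > patience:
                dropped.append(h)
        else:
            negative_ticks[h] = 0
    # Influence is proportional to the (positive part of the) variable.
    total = sum(max(w, 0.0) for w in weights.values()) or 1.0
    mixed = sum(signals[h] * max(weights[h], 0.0) for h in signals) / total
    return mixed, dropped
```

With two modules at equal weight whose outputs are 1 and -1, the mixed value is 0; a module whose weight stays negative longer than `patience` ticks ends up in `dropped`.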
Since the decentralized network will be searched by psychology (instead of text or pixels in an image or other things search engines know how to do today), and since it's connected to each person's subconscious mind through mouse/music feedback loops, the effect will be a collective mind made of many people and computers. We are Human AI Net; do you want to be temporarily assimilated?
Alternative To Brain Implants
Statistical inputs to and outputs from neurons, subconsciously, without extra hardware.
A neuron is a brain cell that connects to thousands of other neurons and slowly adjusts its electricity and chemical patterns as it learns.
An incorrect assumption has extremely delayed the creation of technology that transfers thoughts between 2 brains. That assumption is that, to quickly transfer large amounts of information between a brain and a computer, you need hardware that connects directly to neurons.
Eyes and ears transfer a lot of information to a brain, but the other part of that assumption is that eyes and ears are only useful for pictures and sounds that make sense and do not appear as complete randomness or white noise. People assume anything that sounds like radio static (a typical random sound) can't be used to transfer useful information into a brain.
Most of us remember what a dial-up modem sounds like. It sounds like information is in it, but it's too fast for Humans to understand. That's true of the dial-up modem sound only because it's digital and is designed for a modem instead of for Human ears, which can hear around 1500 tones and, simultaneously, a volume for each. The dial-up modem can only hear 1 tone that oscillates between 1 and 0, and no volume, just 1 or 0. It gets 56000 of those 1s and 0s per second. Human ears are analog, so they have no such limits, but brains can think at most at 100 changes per second.
If volume can have 20 different values per tone, then Human ears can hear up to 1500*100*log_base_2(20)=650000 bits of information per second. If you could take full advantage of that speed, you could transfer a book every few seconds into your brain, but the next bottleneck is your ability to think that fast.
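The arithmetic behind that figure, taking the text's own numbers at face value (1500 tones, 100 changes per second, 20 volume levels), works out to roughly 650,000 bits per second:

```python
import math

tones = 1500    # distinct tones the text claims the ear resolves
changes = 100   # changes per second the brain can follow
levels = 20     # distinguishable volume levels per tone

# Each tone carries log2(levels) bits per change.
bits_per_second = tones * changes * math.log2(levels)
print(round(bits_per_second))  # prints 648289, i.e. roughly 650,000 bits/s
```

At about 81 kilobytes per second, that is indeed on the order of a plain-text book every few seconds, as the text claims.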
If you use ears the same way dial-up modems use a phone line, but in a way designed for Human ears and Human brains instead of for computers, then your ears are much faster data-transfer devices than brain implants, and the same is true for transferring information as random-appearing grids of changing colors through your eyes. We have computer speakers and screens for input to brains. We still have some work to do on the output speeds of mouse and keyboard, but there are electrical devices you can wear on your head for the output direction. For the input direction, eyes and ears are currently far ahead of the most advanced technology in their data speeds to your brain.
So why do businesses and governments keep throwing huge amounts of money at connecting computer chips directly to neurons? They should learn to use eyes and ears to their full potential before putting so many resources into higher-bandwidth connections to brains. They're not coming close to using the bandwidth they already have to brains.
Intuitively most people know how music can affect their subconscious thoughts. Music is a low bandwidth example. It has mostly predictable and repeated sounds. The same voices. The same instruments. What I'm talking about would sound more like radio static or whitenoise. You wouldn't know what information is in it from its sound. You would only understand it after it echoed around your neuron electricity patterns in subconscious ways.
Most people have only a normal computer available, so the brain-to-computer direction of information flow has to be low bandwidth. It can be mouse movements, gyroscope based game controllers, video camera detecting motion, or devices like that. The computer-to-brain direction can be high bandwidth, able to transfer information faster than you can think about it.
Why hasn't this been tried? Because science proceeds in small steps. This is a big step from existing technology, but a small step in the sense that most people already have the hardware (screen, speakers, mouse, etc.). The big step is going from patterns of random-appearing sounds or video, to subconscious thoughts, to mouse movements, to software that interprets them statistically, and around that loop many times as the Human and computer learn to predict each other. Compared to that, connecting a chip directly to neurons is a small step.
It's a feedback loop: computer, random-appearing sound or video, ears or eyes, brain, mouse movements, and back to the computer. It's very indirect, but it uses hardware that has evolved for millions of years, compared to the low-bandwidth hardware they implant in brains. Eyes and ears are much higher bandwidth, and we should be using them in feedback loops for brain-to-brain and brain-to-computer communication.
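The computer's half of that loop can be sketched as a toy: map the latest mouse position to audio parameters and synthesize the next buffer of samples; the listener's next mouse move closes the loop. Only the 44100 samples-per-second figure comes from the text; the particular pitch/volume mapping here is an illustrative assumption (the real system would evolve its mapping):

```python
import math

SAMPLE_RATE = 44100  # amplitudes per second per speaker, as in the text

def synthesize(mouse_x: float, mouse_y: float, n_samples: int = 441):
    """One trip around the loop: mouse position -> short audio buffer.

    mouse_x and mouse_y are in [-1, 1], as described above. Here x picks
    a pitch (one octave around A440) and y a volume.
    """
    freq = 440.0 * 2.0 ** mouse_x      # x in [-1, 1] -> 220..880 Hz
    volume = (mouse_y + 1.0) / 2.0     # y in [-1, 1] -> 0..1
    return [
        volume * math.sin(2.0 * math.pi * freq * t / SAMPLE_RATE)
        for t in range(n_samples)
    ]

buffer = synthesize(0.0, 1.0)  # centre mouse, full volume: a 440 Hz tone
```

Each 441-sample buffer is 10 ms of audio, so the mapping can respond to mouse movement roughly 100 times per second, matching the "100 changes per second" rate the text attributes to brains.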
What would it feel like? You would move the mouse and instantly hear the sounds change based on how you moved it. You would feel around the sound space for abstract patterns of information you're looking for, and you would learn to find it. When many people are connected this way through the internet, using only mouse movements and abstract random-like sounds instead of words and pictures, thoughts will flow between the brains of different people, thoughts that they don't know how to put into words. They would gradually learn to think more as 1 mind. Brains naturally learn to communicate with any system connected to them. Brains don't care how they're connected. They grow into a larger mind. It happens between the parts of your brain, and it will happen between people using this system through the internet.
Artificial intelligence software does not have to replace us or compete with us. The best way to use it is to connect our minds together. It can be done through brain implants, but why wait for that technology to advance and become cheap and safe enough? All you need is a normal computer and the software to connect our subconscious thoughts and statistical patterns of interaction with the computer.
Dial-up-modem sounds were designed for computers. These interactive sounds/videos would be designed for Human ears/eyes and the slower but much bigger and parallel way data goes into brains. For years I've been carefully designing free open-source software, http://HumanAI.net - Human and Artificial Intelligence Network, or Human AI Net - to make this work. It will be software that does for Human brains what dial-up modems do for computers, and it will sound a little like a dial-up modem at first but start to sound like music as you learn how to use it. I don't need brain implants to flow subconscious thoughts between your brains over internet wires.
Intelligence is the most powerful thing we know of. Brain implants are simply overkill, even if they become advanced enough to do what I'll use software and psychology to do. We can network our minds together, amplify intelligence, and share thoughts without extra hardware. After that's working, we can go straight to quantum devices for accessing brains without implants. Let's do this through software and skip the brain-implant paradigm. If it works just a little, it will be enough that our combined minds will figure out how to make it work a lot more. That's how I prefer to start a technological singularity: http://en.wikipedia.org/wiki/Technological_singularity
We don't need businesses and militaries to do it first. We have the hardware on our desks; we're only missing the software. It doesn't have to be smarter-than-Human software. It just has to be smart enough to connect our subconscious thoughts together. The authorities have their own ideas about how we should communicate and how our minds should be allowed to think together, but their technology was obsolete before it was created. We can do everything they can do without brain implants, using only software and subconscious psychology. We don't need smarter-than-Human software, or anything nearly that advanced, to create a technology singularity. Who wants to help me change the direction of Human evolution using open-source (GNU GPL) software? Really, you can create a technology singularity starting from software with the intelligence of a parrot, as long as you use it to connect Human minds together.
Music: Hatsune Miku 3D Live in Los Angeles on the 2nd July 2011
Hatsune Miku is a singing synthesizer application with a female persona, developed by Crypton Future Media using Yamaha's Vocaloid technology. She will be performing live at the NOKIA Theatre in the Los Angeles Convention Center on 2 July 2011, during ANIME EXPO 2011.
Vocaloid technology was first released in 2004 but didn't meet wide recognition at first. When designing the second generation of Vocaloids, Crypton Future Media asked the manga artist Kei to design an avatar for their upcoming synthesized voice. The first to be released, "CV01 - Hatsune Miku", became a runaway success, as thousands of internet users started making their own songs with Hatsune Miku. Sega made a game and helped organize the first solo live performance in Tokyo on March 9, 2010, titled "Miku no Hi Kanshasai 39's Giving Day" (Miku's Day Thanksgiving). The concert included some of the best songs made by users, performed by Hatsune Miku and a band of human musicians. The concert in LA on 2 July 2011 should be the second of its kind and the first of that level outside Japan (there was another event, without Sega's involvement, that is said to have been not that good; but now the organisers promise "a few improvements over the original event").
Examples of Vocaloids in action:
Vocaloid Sweet Ann (English): Let it be (the Beatles)
Vocaloid Hatsune Miku (Japanese): Nebula (Tripshots)
The previous event "Miku no Hi Kanshasai 39's Giving Day"(Miku's Day Thanksgiving) on March 9, 2010 at the Zepp Tokyo in Odaiba, Tokyo
Nice fragment on YouTube (HD too)
torrent of the whole thing in HD
What seems to be the official event page:
Hatsune Miku 3D Live in Los Angeles
It looks like the tickets are sold out, but maybe it will be possible to buy some at the event or something? I would try if I were not living half the globe away.
I think that it's fitting for a community of people interested in the possibility of a technological singularity to take interest in the kind of entertainment that pushes the envelope of what's possible with technology. Human voice was the last musical instrument unconquered by synthesis. Now this page of human history is turned, and for me, Miku is the symbol of it. If there are technically better voice synthesizers (which is quite possible, since Miku was released in 2007), they don't have their own concerts yet.
Sorry if this is inappropriate.
Music: The 21st Century Monads
The 21st Century Monads are an international musical collaboration whose songs address fundamental issues in philosophy, including specialized topics in contemporary analytic philosophy and the history of philosophy. The musical genres range from dance to folk. The songs are unique originals, not cheesy parodies.
Link: people.umass.edu/phil511/monads/
Via: m-phi.blogspot.com/2011/06/song-of-love-and-logic.html
It's all completely free. Here are some of the titles:
“We Can’t Stop Doing Metaphysics” MP3
“Your Body Is You” MP3
“Utilitarian Girlfriend” MP3
“My Paper Was Rejected Again” MP3
"In the land of P and not-P
You’d both be and not be mine" MP3
new Bright Eyes (Conor Oberst) single 'Singularity'
Bright Eyes' new (2011) single, 'Singularity', is maybe the best song ever about the Singularity. It has a catchy electro-pop refrain.
Lyrics:
Learning on the fly
How to gather and analyze
Nothing is living if nothing dies
What an exception to make
Roundly rejecting our faith

When singularity comes
We will be fully revealed
Wandering limitless fields

When singularity comes
We will be abstraction then
We will be buried within
All will be balanced
We will be one

Now we're on our way
All of our instincts accelerate
Nothing you imagine could keep this pace
We will know freedom at last
Finally make up for the past

When singularity comes
We will be faster than light
Whistling, skipping through time

When singularity comes
We will be children again
We will be cradled within
We will be perfect
We will be one

When singularity comes
Living in one mind
Every pin drop is amplified
Every outcome before it's tried
Will make a rag doll of God
Wind up our new music box

When singularity comes
We will be fully revealed
Wandering limitless fields

When singularity comes
We'll be completely awake
Neophyte make no mistake
We're in this together
We will be one

When singularity comes
When singularity comes
Oberst's comments on the song:
I don't know if you're familiar with the theory of singularity. This guy, Ray Kurzweil, who was the inventor of early synthesizers, he has this theory -- a few other people write about it too -- but essentially there's a point where artificial intelligence reaches beyond human intelligence and we fuse in with the internet and become what he calls "spiritual machines." Essentially, you stop having to die and stop having to eat. Our physical form is no longer important because you're able to maintain your consciousness by uploading it to the next frame, which sounds spooky and weird but I think it's 100% achievable, especially when you think about how fast new machines invent newer machines, which invent the newer machines. It's exponential growth. A person doesn't have to sit down and invent every one of these steps. His vision is really utopian, like this is the way forward. Humans, we're obviously going to destroy our planet and destroy our physical form, but we'll continue in this way.
Hat tip to Kevin.