Ohhhh... oh so many things I could substitute for the word 'Zebra'....

Eliezer,

How are you going to be 'sure' that there is no landmine when you decide to step?

Are you going to have many 'experts' check your work before you'll trust it? Who are these experts, if you are occupying the highest intellectual orbital? How will you know they're not yes-men?

Even if you can predict the full effects of your code mathematically (something I find somewhat doubtful, given that you will be creating something more intelligent than we are, and thus its actions will be by nature unpredictable to man), how can you be certain that the hardware it will run on will perform with the integrity you need it to?

If you have something that is changing itself towards 'improvement,' then won't the dynamic nature of the program leave it open to errors that might have fatal consequences? I'm thinking of a digital version of genetic mutation, in which your code is the DNA...

Like, let's say the superintelligence invents some sort of "code shuffling" mechanism for itself, whereby it can generate many new, useful functions in an expedited evolutionary manner (like we generate antibodies), but in the process accidentally does something disastrous.
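
To make that worry concrete, here is a minimal sketch in Python of such an evolutionary loop. Every name and number in it (the objective, the "aggressiveness" property, the mutation step) is invented for illustration and is not anyone's actual design; the only point is that whatever the measured objective does not score can drift freely while the loop reports steady 'improvement'.

```python
import random

# Minimal sketch of an evolutionary "code shuffling" loop. Candidates are
# plain parameter dictionaries; real self-modifying code would be far more
# complex, but the selection logic is the same: keep whatever scores at
# least as well on the one number the loop actually measures.

def measured_fitness(params):
    # The only thing the optimizer "sees": how close efficiency is to 1.0.
    return -abs(params["efficiency"] - 1.0)

def unmeasured_property(params):
    # A side effect nothing in the loop ever checks.
    return params["aggressiveness"]

def mutate(params):
    # Randomly perturb one parameter, like a point mutation in DNA.
    child = dict(params)
    key = random.choice(list(child))
    child[key] += random.gauss(0.0, 0.1)
    return child

candidate = {"efficiency": 0.2, "aggressiveness": 0.0}
for _ in range(2000):
    child = mutate(candidate)
    # Accept any child that scores at least as well; mutations that only
    # touch the unmeasured property are fitness-neutral and drift unchecked.
    if measured_fitness(child) >= measured_fitness(candidate):
        candidate = child

print("measured fitness:", round(measured_fitness(candidate), 3))
print("unmeasured side effect:", round(unmeasured_property(candidate), 3))
```

The toy numbers don't matter; the selection rule does. Nothing in it can notice the kind of side effect the paragraph above is worried about.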

The argument 'it would be too intelligent and well-intentioned to do that' doesn't seem to cut it, because the machine will be evolving from something of below-human intelligence into something above it, and it is not certain which types of intelligence it will develop faster, or what trajectory this 'general' intelligence will take. If we knew that, then we could program the intelligence directly and wouldn't need to make it recursively self-improving.

Eliezer, how do you envision the realistic consequences of mob-created AGI? Do you see it creeping up piece by piece with successive improvements until it reaches a level beyond our control, or do you see it as something that will explosively take over once one essential algorithm has been put into place, and that could happen any day?

If a recursively self-improving AGI were created today, on hardware with current memory capacity and speed, and it had access to the internet, how much damage do you suppose it could do?

A far more likely scenario than missing out on the mysterious essence of rightness by indulging the collective human id, I think, is that what 'humans' want as a compiled whole is not what we'll want as individuals. Phil might be aesthetically pleased by a coherent metamorality, and distressed if the CEV determines that what most people want is puppies, sex, and crack. Remember that the percentage of the population that actually engages in debates over moral philosophy is vanishingly small; everyone else just acts, frequently incoherently.

Actually, I CANNOT grasp what life being 'meaningful'... well, means. Meaningful to what? To the universe? That only makes sense if you believe there is some objective judge of which state of the universe is best. And then, why should we care? Cuz we should? HUH? Meaningful to us? Well, yes: we want things... Did you think that there was one thing all people wanted? Why would you think that necessary for evolution? What on earth did you think 'meaning' could be?

I second Valter and Ben. It's hard for me to grasp that you actually believed there was any meaning to life at all, let alone with high confidence. Any ideas on where that came from? The thought "But what if life is meaningless?" hardly seems like a "Tiny Note of Discord," but like a huge epiphany in my book. I was not raised with any religion (well, some atheist-communism, but still), and so never thought there was any meaning to life to begin with. I don't think this ever bothered me 'til I was 13 and recognized the concept of determinism, but that's another issue. Still, why would someone who believed that we're all just information-copying-optimization matter think there was any meaning to begin with?

Greindl, ah, but could one not be overconfident in one's ability to handle uncertainties? People might interpret your well-reasoned arguments about uncertain things as arrogant if you do not acknowledge the existence of unknown variables. Thus, you might say, "If there's a 70% probability of X, and a 50% probability of Y, then there's a clear 35% probability of Z," while another is thinking, "That arrogant fool hasn't thought about A, B, C, D, and E!" In truth, those factors may have been irrelevant, or so obvious that you didn't mention their impact, but all the audience heard was your definitive statement. I'm not arguing that there is a better style (you might confuse people, which would be far worse), but I do think there are ways that people can be offended by it without being irrational. Claiming otherwise seems very akin to Freud claiming his opponents had Oedipal complexes.
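
For what it's worth, the 35% in that hypothetical only follows under an assumption the speaker never states, namely that X and Y are independent and that Z requires both of them:

\[
P(Z) = P(X \cap Y) = P(X)\,P(Y) = 0.7 \times 0.5 = 0.35.
\]

If X and Y are correlated, or if Z depends on anything else, the product rule gives the wrong number, which is exactly the opening the hypothetical audience is pouncing on.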

There are also many factors that contribute to an assessment that someone is 'overconfident,' aside from their main writings.

Cool name, by the way. What are its origins?

By George! You all need to make a Hollywood blockbuster about the singularity and get all these national-security soccer moms screaming hellfire about regulating nanotechnology... "THE END IS NEAR!" I mean, with 'Left Behind' being so popular and all, your cause should fit right into the current milieu of paranoia in America.

I can see the preview now: children are quietly singing "My Country, 'Tis of Thee" in an old-fashioned classroom; a shot zooms from out the window to show suburban homes, a man taking out the trash with a dog, a woman gardening; a newscast can be overheard intermingling with the singing, "Ha ha, Mark, well, today's been a big day for science! Japanese physicist Uki Murakazi has unveiled his new, very tiny, and CUTE, I might add, hydrogen-fuel-creating nanobots..." Woman looks up as the sky starts to darken. Silence. 'What if all that ever mattered to you...' Lone voice: "Mommy?" Screaming chaos, school buses get sucked into some pit in the earth, up-close shots of a hot half-naked woman running away in a towel with a bruise, crying, firemen running pell-mell, buildings collapsing, the works... "What if all of it..." Dramatic "EUNK!" sound upon a black screen... Voices fade in: "God, where are you?" "I don't think we can stop it..." "Mommy? Where are we?" "Be prepared, because this September," violins making that very high-pitched mournful noise, the words "The Singularity is Near" appear on the screen.

It practically writes itself... Then, at the high point of the movie's popularity, you begin making press releases, giving interviews, etc., declaring that you find such doomsday scenarios (though not exactly as depicted) possible and an important security risk. Could backfire and make you look insane, I suppose... But even so, there's a lot of money in Hollywood; think about the Scientologists.

I understand that there are many ways in which nanotechnology could be dangerous, even to the point of posing extinction risks, but I do not understand why these risks seem inevitable. I would find it much more likely that humanity will invent some nanotech device that gets out of hand, poisons a water supply, kills several thousand people, and needs to be contained/quarantined, leading to massive regulation of nanotech development, rather than a nanotech mistake that immediately depressurizes the whole space suit, is impossible to contain, and kills us all.

A recursively improving, superintelligent AI, on the other hand, seems much more likely to fuck us over, especially if we're convinced it's acting in our best interest for the beginning of its 'life,' and problems only become obvious after it's already become far more 'intelligent' than we are.
