I actually know one of the guys working on it - I could ask him to come over here if you like.
This seems like a great idea - if we put together a concrete list of questions to ask, it could be worth his time to come over.
If anyone wants to ask any questions, leave a comment and maybe we can get some direct answers. (But make sure your question isn't already answered in the AMA first!)
I think we need to separate the concept of whole brain emulation from that of biology-inspired human-like AI. This actually looks pretty bad for Robin Hanson's singularity hypothesis, where the first emulations to perfectly emulate existing humans suddenly make the cost of labor drop dramatically. If this research pans out, then we could have a "soft takeoff", where AI slowly catches up to us and slowly overtakes us.
CNRG_UWaterloo, regarding mind uploads:
Being able to simulate a particular person's brain is incredibly far away. There aren't any particularly good ideas as to how we might reasonably read out that sort of information from a person's brain. That said, there are also lots of uses that a repressive state would have for any intelligent system (think of automatically scanning all surveillance camera footage). But you don't want a realistic model of the brain to do that -- it'd get bored exactly as fast as people do.
So we should expect machine labor to gradually replace human labor, exactly as it has since the beginning of the industrial revolution, as more and more capabilities are added, with "whole brain emulation" being one of the last features needed to make machines with all the capabilities of humans (if this step is even necessary). It's possible, of course, that we could wind up in a situation where the "last piece of the puzzle" turns out to be hugely important, but I don't see any particular reason to think that will happen.
I think we need to separate the concept of whole brain emulation from that of biology-inspired human-like AI.
This seems completely true. Part of the problem is that the media hype surrounding this stuff drops lines like this:
Spaun can recognize numbers, remember lists and write them down. It even passes some basic aspects of an IQ test, the team reports in the journal Science.... the simplified model of the brain, which took a year to build, captures many aspects of neuroanatomy, neurophysiology and psychological behaviour... They say Spaun can shift from task to task, "just like the human brain," recognizing an object one moment and memorizing a list of numbers the next. And like humans, Spaun is better at remembering numbers at the beginning and end of the list than the ones in the middle. Spaun's cognition and behaviour is very basic, but it can learn patterns it has never seen before and use that knowledge to figure out the best answer to a question. "So it does learn," says Eliasmith.
Basically: to explain this stuff to normal readers, writers anthropomorphize the hell out of the project, and you end up with words like 'intuition' and 'understanding' and 'learn' and 'remember' - which make the articles both sexier and way more misleading. The same thing happened with IBM's project and, to my understanding, the Blue Brain Project as well.
In 2007, the Department of Children, Youth, and Families (DCYF) held a seminar for the nonprofits vying for a piece of $78 million in funding. Grant seekers were told that in the next funding cycle, they would be required — for the first time — to provide quantifiable proof their programs were accomplishing something.
The room exploded with outrage. This wasn't fair. "What if we can bring in a family we've helped?" one nonprofit asked. Another offered: "We can tell you stories about the good work we do!" Not every organization is capable of demonstrating results, a nonprofit CEO complained. He suggested the city's funding process should actually penalize nonprofits able to measure results, so as to put everyone on an even footing. Heads nodded: This was a popular idea.
Actually, these objections might not be quite as insane as they sound at first.
The issue is that rigorously measuring results is hard, and frequently when people try to quantify results, they screw it up and force people to spend their time gaming a dysfunctional metric instead of doing real work. Just look at everyone who complains about academia pushing researchers to slice their work into the smallest publishable bites in order to maximize citations, instead of doing things in a way that'd be more useful for everyone. Or look at the software companies that used to measure programmer productivity in lines of code written, and - as far as I know - still haven't managed to come up with any very good objective metric for comparing their workers.
The fact is that there are plenty of cases where we know something, but don't have any way of showing it in an objective and easy-to-quantify way. A boss might know for sure who's a valuable researcher or programmer on the basis of her interactions with them, but be unable to prove it rigorously. And these are still relatively simple domains - take something very open-ended like "the impact of nonprofits", and things get even worse.
Given that people are generally bad at designing good ways of quantifying such things, and that bad measures will produce worse results than no measures at all, it can actually make perfect sense for somebody interested in helping people to object to the creation of such measures. Better (the thought goes) to give everyone money and end up funding both useless and high-impact organizations, than to concentrate all the money on a few organizations which are good at gaming the metrics and most probably all useless.
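To make that concrete, here's a toy simulation of the argument. Everything in it is a made-up assumption for illustration - the GAMING_TAX constant, the 5% funding cutoff, and the "gameability" weights are arbitrary, not data about any real funder:

```python
import random

random.seed(0)

# Toy model (all numbers hypothetical): each organization has some true
# impact, plus a separate skill at gaming whatever metric the funder uses.
orgs = [{"impact": random.gauss(0, 1), "gaming": random.gauss(0, 1)}
        for _ in range(1000)]

# Once funding depends on a metric, every organization diverts some real
# effort into looking good on it; that effort is a pure loss of impact.
GAMING_TAX = 0.5

def proxy_metric(org, gaming_weight):
    """What the funder observes; gaming_weight = how gameable the metric is."""
    return (1 - gaming_weight) * org["impact"] + gaming_weight * org["gaming"]

def mean_impact(selected):
    return sum(o["impact"] for o in selected) / len(selected)

baseline = mean_impact(orgs)  # "give everyone money": no metric, no gaming tax
print(f"fund everyone:               mean impact {baseline:+.2f}")

for w in (0.0, 0.5, 0.9):
    ranked = sorted(orgs, key=lambda o: proxy_metric(o, w), reverse=True)
    funded = mean_impact(ranked[:50]) - GAMING_TAX  # fund top 5% by the metric
    print(f"fund top 5%, gameability {w}: mean impact {funded:+.2f}")
```

In this toy world a metric that mostly tracks impact beats funding everyone, but a sufficiently gameable one does worse than no metric at all: the selection adds almost no information about true impact, while the effort diverted into gaming is lost regardless.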
The issue is that rigorously measuring results is hard, and frequently when people try to quantify results, they screw it up and force people to spend their time gaming a dysfunctional metric instead of doing real work.
This is a problem in business as well: attribution is hard enough that Marketo can charge thousands per month for tracking online advertising outcomes at companies with long, relationship-based B2B sales cycles (which might be aiming to make only a few huge sales per year).
John Wanamaker: "Half the money I spend on advertising is wasted; the trouble is I don't know which half."
I'm currently researching startup concepts surrounding two main themes - big data analysis/visualization and scientific research. I have a plan for making this happen, and at the current stage I'm setting as many meetings as possible with people who know about these topics. The goal is to map out how science works - where the money comes from, who does what, how labor is divided, what the problems are - and then start isolating big problems in the space that might be solved through data analysis or visualization. After that, I test and develop a business model hypothesis via Steve Blank's startup development process (as described in the Startup Owner's Manual).
But anyway, back to this month: I'm setting as many meetings as possible with scientific researchers, people who run labs, R&D managers, people in the NSF or other organizations, and other relevant individuals. So if any of you fall into these categories I'd love to talk to you! Private message me.
Here.
The following data are missing because I had no easy way to export them:
- Government budget appropriations or outlays for R&D
- R&D personnel by sector of employment and qualification
You will need the Beyond 20/20 Professional Browser to view the .ivt files.
Thanks! Do you know of any way to view .ivt files on a Mac without Bootcamp? Google yielded no answers.
How about adding "international conflict (or lack thereof)" as another dimension? The space race, after all, occurred (and is discussed) largely in the context of the cold war.
So a fantastic scenario would be that there is no such conflict, and FAI is developed multinationally, across political blocs; a pretty good scenario would be that two otherwise politically similar countries compete for prestige in being the first to develop FAI (which may positively affect funding and meme-status, but negatively affect security); and a sufficiently good scenario would be that the competition is between different political blocs, who nonetheless recognize that the development of FAI means making their own political organizations obsolete.
Sure - if you can format your scenarios into an easily copy-pastable format like that in the post, I'd be happy to add it.
This list is focused on scenarios where FAI succeeds by creating an AI that explodes and takes over the world. What about scenarios where FAI succeeds by creating an AI that provably doesn't take over the world? This isn't a climactic ending (although it may be a big step toward one), but it's still a success for FAI, since it averts a UFAI catastrophe.
(Is there a name for the strategy of making an oracle AI safe by making it not want to take over the world? Perhaps 'Hermit AI' or 'Anchorite AI', because it doesn't want to leave its box?)
This scenario deserves more attention than it has been getting, because it doesn't depend on solving all the problems of FAI in the right order. Unlike Nanny AI, which takes over the world but only uses its powers for certain purposes, Anchorite AI might be a much easier problem than full-fledged FAI, so it might be developed earlier.
In the form of the OP:
- Fantastic: FAI research proceeds much faster than AI research, so by the time we can make a superhuman AI, we already know how to make it Friendly (and we know what we really want that to mean).
- Pretty good: Superhuman AI arrives before we learn how to make it Friendly, but we do learn how to make an Anchorite AI that definitely won't take over the world. The first superhuman AIs use this architecture, and we use them to solve the harder problems of FAI before anyone sets off an exploding UFAI.
- Sufficiently good: The problems of Friendliness aren't solved in time, or the solutions don't apply to practical architectures, or the creators of the first superhuman AIs don't use them, so the AIs have only unreliable safeguards. They're given cheap, attainable goals; the creators have tools to read the AIs' minds to ensure they're not trying anything naughty, and killswitches to stop them; they have an aversion to increasing their intelligence beyond a certain point, and to whatever other failure modes the creators anticipate; they're given little or no network connectivity; they're kept ignorant of facts more relevant to exploding than to their assigned tasks; they require special hardware, so it's harder for them to explode; and they're otherwise designed to be safer if not actually safe. Fortunately they don't encounter any really dangerous failure modes before they're replaced with descendants that really are safe.
Thanks! I've added it to the post. I particularly like that you included the 'sufficiently good' scenario - I hadn't directly thought about that before.
On the contrary, adversarial questioners are often highly productive. I've already incited one of the best comments you've seen on LessWrong, haven't I?
Yes, my cognition is significantly motivated along these lines. Doesn't Hitler deserve some of the credit for the rapid development of computers and nuclear bombs? Perhaps I or someone like me will play a similar role in the development of AI?
On the contrary, adversarial questioners are often highly productive. I've already incited one of the best comments you've seen on LessWrong, haven't I?
Don't take too much credit. Steve_Rayhawk generated the comment by actively trying to help. But if his goal was to engage you in thoughtful and productive discussion, he probably failed, and it was probably a waste of his time to try. There happened to be this positive externality of an excellent comment - but that's the kind of thing that's generated as a result of doing your best to understand a complex issue, not adversarially mucking up the conversation about it.
Yes, my cognition is significantly motivated along these lines. Doesn't Hitler deserve some of the credit for the rapid development of computers and nuclear bombs? Perhaps I or someone like me will play a similar role in the development of AI?
Somehow I doubt that's the true cause of your behavior, but I'd be delighted to find out that I'm wrong.
What kind of jobs are you looking for, and what skills do you have (if you don't mind me asking)? If I know of a good match I can try to make a connection.