- Arrange to move to the SF Bay Area next summer
- Find a job in the SF Bay Area
- Gain a more positive attitude
- Increase openness
- Get into good physical condition
What kind of jobs are you looking for, and what skills do you have (if you don't mind me asking)? If I know of a good match I can try to make a connection.
I actually know one of the guys working on it - I could ask him to come over here if you like.
This seems like a great idea - if we put together a concrete list of questions to ask, it could be worth his time to come over.
If anyone wants to ask any questions, leave a comment and maybe we can get some direct answers. (But make sure your question isn't in the AmA, first!)
I think we need to separate the concept of whole brain emulation from that of biology-inspired human-like AI. This actually looks pretty bad for Robin Hanson's singularity hypothesis, in which the first emulations to perfectly emulate existing humans suddenly make the cost of labor drop dramatically. If this research pans out, then we could have a "soft takeoff", where AI slowly catches up to us, and slowly overtakes us.
CNRG_UWaterloo, regarding mind uploads:
Being able to simulate a particular person's brain is incredibly far away. There aren't any particularly good ideas as to how we might be able to reasonably read out that sort of information from a person's brain. That said, there are also lots of uses that a repressive state would have for any intelligent system (think of automatically scanning all surveillance camera footage). But you don't want a realistic model of the brain to do that -- it'd get bored exactly as fast as people do.
So we should expect machine labor to gradually replace human labor, exactly as it has since the beginning of the industrial revolution, as more and more capabilities are added, with "whole brain emulation" being one of the last features needed to make machines with all the capabilities of humans (if this step is even necessary). It's possible, of course, that we could wind up in a situation where the "last piece of the puzzle" turns out to be hugely important, but I don't see any particular reason to think that will happen.
I think we need to separate the concept of whole brain emulation from that of biology-inspired human-like AI.
This seems completely true. Part of the problem is that the media hype surrounding this stuff drops lines like this:
Spaun can recognize numbers, remember lists and write them down. It even passes some basic aspects of an IQ test, the team reports in the journal Science.... the simplified model of the brain, which took a year to build, captures many aspects of neuroanatomy, neurophysiology and psychological behaviour... They say Spaun can shift from task to task, "just like the human brain," recognizing an object one moment and memorizing a list of numbers the next. And like humans, Spaun is better at remembering numbers at the beginning and end of the list than the ones in the middle. Spaun's cognition and behaviour is very basic, but it can learn patterns it has never seen before and use that knowledge to figure out the best answer to a question. "So it does learn," says Eliasmith.
Basically: to explain this stuff to normal readers, writers anthropomorphize the hell out of the project, and you end up with words like 'intuition' and 'understanding' and 'learn' and 'remember' - which make the articles both sexier and way more misleading. The same thing happened with IBM's project and, to my understanding, the Blue Brain Project as well.
[LINK] AmA by computational neuroscientists behind 'the world's largest functional brain model'
Not sure if this has been covered on LW, but it seems highly relevant to WBE development. Link here:
http://www.reddit.com/r/IAmA/comments/147gqm/we_are_the_computational_neuroscientists_behind/
A few questioners mention the Singularity and make Skynet jokes.
The abstract from their paper in Science:
A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called “Spaun”) that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks.
I'm curious to see LWers' perspectives on the project.
In 2007, the Department of Children, Youth, and Families (DCYF) held a seminar for the nonprofits vying for a piece of $78 million in funding. Grant seekers were told that in the next funding cycle, they would be required — for the first time — to provide quantifiable proof their programs were accomplishing something.
The room exploded with outrage. This wasn't fair. "What if we can bring in a family we've helped?" one nonprofit asked. Another offered: "We can tell you stories about the good work we do!" Not every organization is capable of demonstrating results, a nonprofit CEO complained. He suggested the city's funding process should actually penalize nonprofits able to measure results, so as to put everyone on an even footing. Heads nodded: This was a popular idea.
Actually, these objections might not be quite as insane as they sound at first.
The issue is that rigorously measuring results is hard, and frequently when people try to quantify results, they screw it up and force people to spend their time gaming a dysfunctional metric instead of doing real work. Just look at everyone who complains about academia forcing researchers to publish everything they can in as small bites as possible in order to maximize citations, instead of being able to do things in a way that'd be more useful for everyone. Or look at the software companies that used to measure programmer productivity in terms of lines of code written, and - as far as I know - still haven't managed to come up with any very good objective metric for comparing their workers.
The fact is that there are plenty of cases where we know something, but don't have any way of showing it in an objective and easy-to-quantify way. A boss might know for sure who's a valuable researcher or programmer on the basis of her interactions with them, but be unable to prove it rigorously. And these are still relatively simple domains - take something very open-ended like "the impact of nonprofits", and things get even worse.
Given that people are generally bad at designing good ways of quantifying such things, and that bad measures will produce worse results than no measures at all, then it can actually make perfect sense for somebody interested in helping people to object to the creation of such measures. Better (the thought goes) to give everyone money and end up funding both useless and high-impact organizations, than to concentrate all the money to a few organizations which are good at gaming the metrics and most probably all useless.
The issue is that rigorously measuring results is hard, and frequently when people try to quantify results, they screw it up and force people to spend their time gaming a dysfunctional metric instead of doing real work.
This is a problem in business as well. Marketo is able to charge companies thousands per month for tracking online advertising outcomes in companies with long, relationship-based B2B sales cycles (who might be aiming to make a few huge sales per year).
John Wanamaker: "Half the money I spend on advertising is wasted; the trouble is I don't know which half."
I'm currently researching startup concepts surrounding two main themes - big data analysis/visualization and scientific research. I have a plan for making this happen, and at the current stage I'm setting as many meetings as possible with people who know about these topics. The goal is to map out how science works - where the money comes from, who does what, how labor is divided, what the problems are - and then start isolating big problems in the space that might be solved through data analysis or visualization. After that, I test and develop a business model hypothesis via Steve Blank's startup development process (as described in the Startup Owner's Manual).
But anyway, back to this month: I'm setting as many meetings as possible with scientific researchers, people who run labs, R&D managers, people in the NSF or other organizations, and other relevant individuals. So if any of you fall into these categories I'd love to talk to you! Private message me.
Here.
The following data are missing because I had no easy way to export them:
- Government budget appropriations or outlays for R&D
- R&D personnel by sector of employment and qualification
You will need the Beyond 20/20 Professional Browser to view the .ivt files.
Thanks! Do you know of any way to view .ivt files on a Mac without Bootcamp? Google yielded no answers.
How about adding "international conflict (or lack thereof)" as another dimension? The space race, after all, occurred (and is discussed) largely in the context of the cold war.
So a fantastic scenario would be that there is no such conflict, and it's developed multinationally and across multinational blocs; a pretty good scenario would be that two otherwise politically-similar countries compete for prestige in being the first to develop FAI (which may positively affect funding and meme-status, but negatively affect security), and a sufficiently good scenario would be that the competition is between different political blocs, who nonetheless can recognize that the development of FAI means making their own political organizations obsolete.
Sure - if you can format your scenarios into an easily copy-pastable format like that in the post, I'd be happy to add it.
That's actually a question I'm working through right now. Almost certainly something in programming. Probably something in web development, though I've been strongly considering trying to break into games, and other than that I'd like to do something actually worthwhile. I've worked programming web survey software for a long time now, and I'd ideally like to do something more important than market research with my life.
I've been working as what I guess you'd call a "full stack web developer" for about 5 years. I'm great at solving problems using algorithms, and passably good at all of the object-level things that go into front- and back-end web development. LAMP, not MS.
I also have some skill with academic research, philosophy, and probably some other amazingly useful stuff I won't remember till I need it. And I'm conversant with all of the literature on Machine Ethics and close to an expert on logic and Computer Ethics. Also, I have the stereotypical New England good work ethic.
EDIT: I forgot to mention, I'm really excited about online education (a la Udacity) and will probably look into opportunities there.
Cool, PM me your email address and I'll make a couple of connections. (Might be helpful to know your first name as well.)