timtyler comments on What if AI doesn't quite go FOOM? - Less Wrong
The proposed measures seem ineffective at preventing the creation of a seed AI.
The human genome weighs in at roughly 770 MB. This suggests that, given open research publications in neuroscience, the theory of algorithms, etc., someone could arrive at an insight that allows a small group of researchers to build and run a comparatively simple human-like ML algorithm. Such an algorithm can be expected to be highly parallelizable, and it could be run on a cluster of consumer-grade equipment (current GPUs deliver approximately 2 GFLOPS/$). So it would cost about $5M to $20M (infrastructure overhead included) to run a 1-petaflops (for specific tasks) cluster now (BTW, Google uses something like this).
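For concreteness, here is a back-of-the-envelope check of that cost estimate. The 2 GFLOPS/$ figure is the commenter's; the 10-40x infrastructure overhead multiplier is an assumption backed out of the quoted $5M-$20M range, not an established number:

```python
# Rough cost check for a 1 PFLOPS consumer-GPU cluster.
# GFLOPS_PER_DOLLAR is taken from the comment; OVERHEAD factors
# (hosting, networking, power, CPUs, RAM, ...) are assumed.
TARGET_FLOPS = 1e15               # 1 petaflops
GFLOPS_PER_DOLLAR = 2             # consumer GPU price/performance (comment's figure)

raw_gpu_cost = TARGET_FLOPS / (GFLOPS_PER_DOLLAR * 1e9)   # ~ $500,000 in GPUs alone
for overhead in (10, 40):         # assumed infrastructure overhead multipliers
    print(f"overhead x{overhead}: ~${raw_gpu_cost * overhead / 1e6:.0f}M")
# prints ~$5M and ~$20M, matching the range quoted in the comment
```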
Is there any way to prevent such a scenario other than a worldwide high-tech police state?
"Small groups of researchers" are surely highly likely to be beaten by larger groups with decent funding and access to lots of training data and fast machines. We are not at the "Wright brothers" stage - that was back in the 1950s.
Want the white hats to get there before the black hats? Making sure they are better funded is the best way, I figure. They are, in fact, quite a bit better funded. Effectively, consumers are voting with their feet. Though it is also true that some are "whiter" than others: I for one still pray that machine intelligence will not come from Microsoft.
Attempts to prevent a "race to the bottom" are likely to prove ineffective - and seem to be largely misguided. There is bound to be a race; the issue is which teams to back, and which teams to hinder.
And which information to conceal. Right?
As for "Wright brothers" situation, it's not so obvious. We have AI methods which work but don't scale well (theorem provers, semantic nets, expert systems. Not a method, but nevertheless worth mentioning: SHRDLU), we have well scaling methods, which lack generalization power (statistical methods, neural nets, SVMs, deep belief networks, etc.), and yet we don't know how to put it all together.
It looks like we are approaching the "Wright stage", where one will have all the equipment needed to put together a working prototype.
You got it backwards. These methods have generalization power, especially the SVM (achieving generalization is the whole point of the VC theory on which it's based), but don't scale well.
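A minimal sketch of the trade-off being described here, assuming scikit-learn (not a library either commenter mentions): a kernel SVM generalizes well from modest data, but its training time grows super-linearly with dataset size.

```python
# Kernel SVM: good test accuracy, but fit time grows roughly O(n^2)-O(n^3).
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

for n in (1_000, 4_000, 16_000):          # quadruple the sample size each step
    X, y = make_classification(n_samples=n, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf")               # kernel SVM
    t0 = time.time()
    clf.fit(X_tr, y_tr)
    print(f"n={n:6d}  fit={time.time() - t0:6.2f}s  "
          f"test acc={clf.score(X_te, y_te):.3f}")
```

The accuracy stays roughly flat while the fit time climbs steeply, which is the "generalizes but doesn't scale" point in code form.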
Yes, bad wording on my part. I meant something like the capability to represent and operate on complex objects, situations, and relations. However, that doesn't invalidate my (quite trivial) point that we don't yet have a practical theory of AGI.
The race participants are the ones with things to conceal, mostly. One could try and incentivise them to reveal things by using something like the patent system - but since machine intelligence is likely to start out as a server-side technology, patents seem likely to be irrelevant - you can just use trade secrets instead, since those have better security, don't need lawyers to enforce and have no expiration date. I discuss code-hiding issues here:
"Tim Tyler: Closed source intelligent machines"
I figure that we are well past the "Wright brothers" stage - in the sense that huge corporations are already involved in exploiting machine intelligence technology - and large sums of money are already being made out of it.
I don't understand. The difference between server-side and client-side is how you use it. It's just going to be "really powerful technology" and from there it will be 'server', 'client', a combination of the two, a standalone system, or something that does not reasonably fit that category (like Summer Glau).
Server side has enormous computer farms. Client side is mostly desktop and mobile devices - where there is vastly less power, storage and bandwidth available.
The server is like the queen bee - or with the analogy of multicellularity, the server is like the brain of the whole system.
The overwhelming majority of servers actually require less computing power than the average desktop. Many powerful computer farms don't particularly fit in the category of 'server', in particular it isn't useful to describe large data warehousing and datamining systems using a 'client-server' model. That would just be a pointless distraction.
I agree that the first machine intelligence is unlikely to be an iPhone app.
Right, but compare with the Google container data center tour.
I have little sympathy for the idea that most powerful computer farms are not "servers". It is not right: most powerful computer farms are servers. They run server-side software, and they serve things up to "clients". See:
http://en.wikipedia.org/wiki/Server_farm
I selected the word majority for a reason. I didn't make a claim about the outliers and I don't even make a claim about the 'average power'.
That is a naive definition of 'server'. "Something that you can access remotely and runs server software" is trivial enough that it adds nothing at all to our understanding of AIs to say it uses a server.
For comparison, just last week I had a task requiring the use of one of the servers I rent from some unknown server farm over the internet. The specific task involved automating a process and required client-side software (Firefox, among other things). The software I installed and used was all the software that makes up a client. It also performed all the roles of a client. By the list I mentioned earlier, that virtual machine is clearly "a combination of the two", and that fact is in no way a paradox. "Client" and "server" are just roles that a machine can take on, and they are far from the most relevant descriptions of the machines that will run an early AI.
"Server" is a red herring.
It's the servers in huge server farms where machine intelligence will be developed.
They will get the required power about 5-10 years before desktops do, and have more direct access to lots of training data.
Small servers in small businesses may be numerous - but they are irrelevant to this point - there seems to be no point in discussing them further.
Arguing about the definition of http://en.wikipedia.org/wiki/Computer_server would seem to make little difference to the fact that most powerful computer farms are servers. Anyhow, if you don't like using the term "server" in this context, feel free to substitute "large computer farm" instead - as follows:
"machine intelligence is likely to start out as a large computer farm technology"
Thanks, it's interesting, though I'm not very good at understanding spoken English; I was unable to decipher the part about robots in particular.
Nevertheless, I doubt that the R&D division of a single corporation can do all the work necessary to launch an AGI without open information from the scientific community. They can hide the details of their implementation, but they cannot hide the ideas their work is based upon. Going back to the Wright brothers: in 1910 there was already an internal combustion engine industry, Henry Ford was already making money, and aerodynamics had made some progress. All in all, I can't see a crucial difference.
The Ford Airplane Company did get in on aeroplanes - but in the 1920s. In 1910 there was no aeroplane business.
For the inventors of machine intelligence, I figure you have to look back to people like Alan Turing. What we are seeing now is more like the ramping up of an existing industrial process. Creating very smart agents is better seen as being comparable to breaking the sound barrier.