I talk to a guy on a private AGI IRC server sometimes. He now works for them. He does some really impressive AI work.
He can't talk about most of what he is working on now due to NDAs. But he did mention that he is working on (and has worked on in the past) evolving learning rules for AIs instead of hand-coding them.
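To make "evolving learning rules" concrete: instead of hard-coding a weight-update rule like backprop, you parameterize a local plasticity rule and let an evolutionary search find good parameters. The sketch below is purely illustrative (nothing here reflects his actual work): it evolves the coefficients of a simple rule `Δw = η(A·pre·err + B·pre + C·post + D)` so that a single linear unit learns a toy mapping. All names and the fitness task are my own assumptions.

```python
import random

random.seed(0)

def fitness(params, steps=30):
    """Apply the parameterized local rule to train one weight on y = 2x.

    Returns negative squared error (higher is better)."""
    A, B, C, D, eta = params
    w = 0.0
    data = [(x / 10.0, 2 * x / 10.0) for x in range(1, 11)]
    for _ in range(steps):
        for pre, target in data:
            post = w * pre
            err = target - post
            # candidate learning rule: a linear mix of local signals
            w += eta * (A * pre * err + B * pre + C * post + D)
            if not -1e6 < w < 1e6:  # guard against divergent rules
                return -1e9
    return -sum((t - w * p) ** 2 for p, t in data)

def evolve(pop_size=30, gens=40):
    """Truncation-selection evolutionary search over rule parameters."""
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 3]          # keep the best third
        pop = parents + [
            [g + random.gauss(0, 0.1) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

best = evolve()
```

A good evolved rule ends up resembling the delta rule (large positive `η·A`, the other coefficients near zero), which is the point: the search rediscovers a sensible update rule rather than having it hand-coded.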
I discussed AI risk with him, but he doesn't particularly care about it. He thinks an intelligence explosion is possible, but that an unfriendly AI wouldn't be so bad. It would just be the next step of evolution. I see the same view in some of the comments on that blog post, though I'm not sure if they are from members of that organization.
I see similar kinds of views about AI risk in even well respected and accomplished AI researchers like Jürgen Schmidhuber.
The other thing that's different about this company is that they come from the game industry. They appear to have written their own NN code from scratch in CUDA. It works on Windows and has a good user interface.
Culturally, it is much easier to block an annoying person and delete a poorly written post on Facebook than to ban him on Less Wrong. As a general rule, people like being kings of their own small kingdoms rather than merely citizens of a larger country; that's why people move to personal blogs. On Less Wrong you are a citizen: you cannot set your own rules, impose them on others, and expect that others will simply agree to them.
He prefers his Facebook audience. It's a more constructive environment, and there are people whose opinions he cares more about (I assume, he may have other reasons).
https://www.facebook.com/yudkowsky/posts/10153630593339228?pnref=story
You can click on the date a status was posted (under the poster's name) to get a direct link.
The website is full of hype and posturing but contains no technical details about these so-called AIs. I haven't downloaded the "brain simulator", and from the description on the website I have no idea what it is supposed to be.
I haven't yet used it for anything, but I did look through the docs and demos. It's a really cool piece of software: they wrote their own deep neural network library and gave it a very nice user interface for building and experimenting with complicated neural networks.
It's not only useful for research; it might be a great way to get non-experts and beginners into AI. They are also experimenting a lot with reinforcement learning on video games, which is a much more general branch of machine learning. DeepMind impressed a lot of people (and made a lot of money) when they demonstrated playing some simple Atari games. These people are aiming for AIs that can play Space Engineers.
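Deep RL at DeepMind or GoodAI scale won't fit in a comment, but the core idea behind "learning to play a game from rewards" is small. Here's a minimal tabular Q-learning sketch on a made-up toy environment (a 6-cell corridor with a reward at the end), which is my own illustration and not anyone's actual system:

```python
import random

random.seed(1)

# Toy "game": a 1-D corridor of 6 cells; start at cell 0, reward at cell 5.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]  # move left / move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value per (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.1

def choose(s):
    # epsilon-greedy, with random tie-breaking so early exploration isn't biased
    if random.random() < eps or q[s][0] == q[s][1]:
        return random.randrange(2)
    return 0 if q[s][0] > q[s][1] else 1

for episode in range(200):
    s, done = 0, False
    while not done:
        a = choose(s)
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: bootstrap from the best action in the next state
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# greedy policy per state: 1 means "go right"
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES)]
```

After training, the greedy policy walks right toward the goal from every cell. DeepMind's Atari work replaces the Q-table with a deep network over raw pixels, but the update rule is essentially this one.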
So, it's a neural network library and a visual design tool; that wasn't clear from the description on the website. In that case, it may be interesting.
Eliezer commented on FB about the post "Announcing GoodAI" (by Marek Rosa, GoodAI's CEO). I think this deserves some discussion, as it describes a quite effective approach to harnessing the crowd to improve the AI: