EY arguing that a UFAI threat is worth considering -- as a response to Bryan Caplan's scepticism about it. I think it's a repost from Facebook, though.
ETA: Caplan's response to EY's points. EY answers in the comments.
EY warns against extrapolating current trends into the future. Seriously?
This can be tested by estimating how much IQ screens off race/gender as a success predictor
Got any good references on that? Googling these kinds of terms doesn't lead to good links.
National average IQ is strongly correlated with national wealth and development indexes
I know, but the way it does so is bizarre (IQ seems to have a much stronger effect between countries than between individuals). Then I add the fact that IQ is very heritable, and also pretty malleable (Flynn effect), and I'm still confused.
Now, I'm not going to throw out all I previously believed on heredity and IQ and so on, but the picture just got a lot more complicated. Or "nuanced", if I wanted to use a positive term. Let's go with nuanced.
Got any good references on that? Googling these kinds of terms doesn't lead to good links.
I don't know if anybody already did it, but I guess it can be done by comparing the average IQ of various professions or high-performing and low-performing groups with their racial/gender makeup.
I know, but the way it does so is bizarre (IQ seems to have a much stronger effect between countries than between individuals).
This is probably just the noise (i.e. things like "blind luck") being averaged out.
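This averaging-out effect can be shown with a small simulation. The sketch below assumes a toy model in which individual success is IQ plus a large dose of independent "luck"; the numbers (50 countries, 200 people each, the noise scale) are made up purely for illustration. Averaging within each country cancels most of the individual noise, so the country-level correlation comes out far stronger than the individual-level one:

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy model: success = IQ + lots of individual-level luck.
# 50 "countries", 200 people each; countries differ in mean IQ.
iqs, successes, country_iq, country_success = [], [], [], []
for c in range(50):
    base = random.gauss(100, 10)            # country mean IQ
    c_iq, c_succ = [], []
    for _ in range(200):
        iq = random.gauss(base, 15)
        success = iq + random.gauss(0, 60)  # heavy individual noise ("luck")
        c_iq.append(iq)
        c_succ.append(success)
    iqs += c_iq
    successes += c_succ
    country_iq.append(statistics.mean(c_iq))
    country_success.append(statistics.mean(c_succ))

r_individual = corr(iqs, successes)
r_country = corr(country_iq, country_success)
print(f"individual-level r = {r_individual:.2f}")  # modest
print(f"country-level r   = {r_country:.2f}")      # much stronger
```

The noise term has standard deviation 60 per person, but only about 60/√200 ≈ 4 per country average, which is why the group-level correlation is so much cleaner.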
Then I add the fact that IQ is very heritable, and also pretty malleable (Flynn effect), and I'm still confused.
Heritability studies tend to be done on people living in the same country, of roughly the same age, which means that population-wide effects like the Flynn effect don't register.
Obviously racial effects go under this category as well. It covers anything visible. So a high heritability is compatible with genetics being a cause of competence, and/or prejudice against visible genetic characteristics being important ("Our results indicate that we either live in a meritocracy or a hive of prejudice!").
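The point that within-population heritability estimates can miss population-wide shifts can be illustrated with a sketch. This assumes a deliberately simple made-up model where IQ is a genetic component plus a cohort-wide environmental boost plus individual noise; the "Flynn effect" is modeled as a flat +10 points for the younger cohort, contributed entirely by environment:

```python
import random
import statistics

random.seed(0)

# Toy model (assumed): IQ = genetic component + cohort-wide environment + noise.
# Within-cohort heritability can be high even when a large between-cohort
# difference (the "Flynn effect" here) is purely environmental.

def cohort(env_boost, n=5000):
    people = []
    for _ in range(n):
        g = random.gauss(0, 12)   # genetic contribution
        e = random.gauss(0, 6)    # individual environment/noise
        people.append((g, 100 + env_boost + g + e))
    return people

old, young = cohort(0), cohort(10)  # younger cohort gets +10 "Flynn" points

def h2(people):
    """Share of within-cohort IQ variance explained by genes."""
    gs = [g for g, _ in people]
    iqs = [iq for _, iq in people]
    return statistics.variance(gs) / statistics.variance(iqs)

gap = (statistics.mean(iq for _, iq in young)
       - statistics.mean(iq for _, iq in old))
print(f"within-cohort 'heritability': {h2(old):.2f}")  # ~0.8 in both cohorts
print(f"between-cohort gap: {gap:.1f} points (all environmental here)")
```

A heritability study run inside either cohort would report ~0.8, because the +10 environmental boost is a constant within each cohort and so contributes no within-cohort variance at all.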
This can be tested by estimating how much IQ screens off race/gender as a success predictor, assuming that IQ tests are not prejudiced and things like the stereotype threat don't exist or are negligible.
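One way to make "screening off" concrete is a pair of regressions: regress success on group membership alone, then on group membership plus IQ, and see how much the group coefficient shrinks. The sketch below uses synthetic data from an assumed "pure meritocracy" world where success depends only on IQ but group membership correlates with measured IQ; all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy world: success depends only on IQ, but the two groups
# differ in mean measured IQ, so group predicts success *indirectly*.
n = 10_000
group = rng.integers(0, 2, n)              # 0/1 group label
iq = rng.normal(100 + 5 * group, 15)       # groups differ in mean IQ
success = 0.5 * iq + rng.normal(0, 10, n)  # success ignores group directly

def ols_coef(cols, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_group_alone = ols_coef([group], success)[1]
b_group_with_iq = ols_coef([group, iq], success)[1]

print(f"group coefficient, IQ omitted:  {b_group_alone:.2f}")   # ~2.5
print(f"group coefficient, IQ included: {b_group_with_iq:.2f}") # ~0
```

If IQ fully screens off group, the group coefficient collapses to roughly zero once IQ is in the regression; in a "hive of prejudice" world it would stay large.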
But is it possible that IQ itself is in part a positional good? Consider that success doesn't just depend on competence, but on social skills, ability to present yourself well in an interview, and how managers and peers judge you. If IQ affects or covaries with one or another of those skills, then we would be overemphasising the importance of IQ in competence. Thus attempts to genetically boost IQ could have less impact than expected. The person whose genome was changed would benefit, but at the (partial) expense of everyone else.
National average IQ is strongly correlated with national wealth and development indexes, which I think refutes the hypothesis that IQ mainly affects success as a positional quality, or a proxy thereof, at least at the level of personal interactions.
Almost any game that their AI can play against itself is probably going to work. Except stuff like Pictionary where it's really important how a human, specifically, is going to interpret something.
I know a little bit about training neural networks, and I think it would be plausible to train one on a corpus of well-played StarCraft games to give it an initial sense of what it's supposed to do, and then having achieved that, let it play against itself a million times. But I don't think there's any need to let it watch how humans play. If it plays enough games against itself, it will internalize a perfectly sufficient sense of "the metagame".
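The self-play idea can be sketched on a toy game. This is a minimal sketch, assuming tabular Q-learning on the subtraction game "Nim-21" (take 1-3 stones per turn; whoever takes the last stone wins) instead of a StarCraft-scale neural network; the only point is that a competent policy can emerge from self-play with no human games at all:

```python
import random

random.seed(0)

# Tabular Q-learning by pure self-play on Nim-21. State = stones left,
# actions = take 1-3. Both "players" share one Q-table; each move is
# evaluated against the opponent's best reply (negamax bootstrapping).

N, ACTIONS = 21, (1, 2, 3)
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, N + 1)}
ALPHA, EPS = 0.1, 0.2

def greedy(s):
    return max(Q[s], key=Q[s].get)

for episode in range(50_000):
    s = N
    while s > 0:
        a = random.choice(list(Q[s])) if random.random() < EPS else greedy(s)
        s2 = s - a
        # Winning move scores +1; otherwise my value is the negation of
        # the opponent's best value in the position I leave them.
        target = 1.0 if s2 == 0 else -max(Q[s2].values())
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2  # the opponent (same Q-table) moves next

# Optimal play leaves a multiple of 4 for the opponent.
print(greedy(21), greedy(10), greedy(7))  # expect: 1 2 3
```

After training, the greedy policy reproduces the known optimal strategy (always leave a multiple of 4), which it was never shown; it discovered it purely by playing itself.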
If we're talking about AI in RTS games, I've always dreamed of the day when I can "give orders" in an RTS and have the units carry the orders out in a relatively common-sense way instead of needing to be micromanaged down to the level of who they're individually shooting at.
IMHO, AI safety is a thing now because AI is a thing now and when people see AI breakthroughs they tend to think of the Terminator.
Under that hypothesis, shouldn't AI safety have become a "thing" (by which I assume you mean "gain mainstream recognition") back when Deep Blue beat Kasparov?
If you look up mainstream news articles written back then, you'll notice that people were indeed concerned. Also, maybe it's a coincidence, but The Matrix, which has an AI uprising as its main premise, came out two years later.
The difference is that in 1997 there weren't AI-risk organizations ready to capitalize on these concerns.
That looks like a judgment from availability bias. How do you think MIRI went about getting researchers and these better directors? And funding? And all those connections that seem to have led to AI safety being a thing now?
IMHO, AI safety is a thing now because AI is a thing now and when people see AI breakthroughs they tend to think of the Terminator.
Anyway, I agree that EY is good at getting funding and publicity (though not necessarily positive publicity); my comment was about his (lack of) proven technical abilities.
While this may be formally correct, the question is what it shows (or should show). On the other hand, MIRI does have quite some research output as well as impact on AI safety -- and that is what it set out for.
Most MIRI research output (papers, in particular the peer-reviewed ones) was produced under the direction of Luke Muehlhauser or Nate Soares. Under the direction of EY the prevalent outputs were the LessWrong sequences and Harry Potter fanfiction.
The impact of MIRI research on the work of actual AI researchers and engineers is more difficult to measure; my impression is that it has not been very large so far.
EY was influenced by E.T. Jaynes, who was really against neural networks and in favor of Bayesian networks. He thought NNs were unprincipled and not mathematically elegant, while Bayes nets were. I see the same opinions in some of EY's writings, like the one you link. And the general attitude that "non-elegant = bad" is basically MIRI's mission statement.
I don't agree with this at all. I wrote a thing here about how NNs can be elegant, and derived from first principles. But more generally, AI should use whatever works. If that happens to be "scruffy" methods, then so be it.
I don't agree with this at all. I wrote a thing here about how NNs can be elegant, and derived from first principles.
Nice post.
Anyway, according to some recent works (ref, ref), it seems to be possible to directly learn digital circuits from examples using some variant of backpropagation. In principle, if you add a circuit size penalty (which may well be the tricky part) this becomes time-bounded maximum a posteriori Solomonoff induction.
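The core trick -- relaxing a discrete circuit into something differentiable -- can be shown in miniature. This sketch is not from the referenced papers (which use much richer parameterizations); it only demonstrates the relaxation itself, on an assumed minimal setup: a single 2-input gate whose truth table is stored as four logits, trained by gradient descent on examples of XOR and then rounded back to bits. A circuit-size penalty would show up here as an extra regularization term on the logits:

```python
import math

# Relax a 2-input digital gate into a soft truth table of four logits,
# fit the logits to XOR examples by gradient descent, then round to bits.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

logits = [0.0, 0.0, 0.0, 0.0]  # one logit per input combination
data = [((a, b), a ^ b) for a in (0, 1) for b in (0, 1)]
lr = 1.0

for step in range(500):
    for (a, b), y in data:
        i = 2 * a + b           # which truth-table entry fires
        p = sigmoid(logits[i])  # soft gate output in (0, 1)
        # Cross-entropy loss gradient w.r.t. the logit is simply p - y.
        logits[i] -= lr * (p - y)

learned = [round(sigmoid(z)) for z in logits]
print(learned)  # expect [0, 1, 1, 0] -- the XOR truth table
```

Because the soft gate is differentiable everywhere, ordinary backpropagation applies; the discreteness only comes back at the end, when the trained logits are rounded.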
He has ability to attract groups of people and write interesting texts. So he could attract good programmers for any task.
He has the ability to attract self-selected groups of people by writing texts that these people find interesting. He has shown no ability to attract, organize and lead a group of people to solve any significant technical task. The research output of SIAI/SI/MIRI has been relatively limited and most of the interesting stuff came out when he was not at the helm anymore.
Why does that surprise you? None of EY's positions seem to be dependent on trend-extrapolation.
Other than a technological singularity with artificial intelligence explosion to a god-like level?