A possibly incomplete list of novel stuff that Eliezer did (apart from popularizing his method of thinking, which I also like):
1) Locating the problem of friendly AI and pointing out the difficult parts.
2) Coming up with the basic idea of TDT (it was valuable to me even unformalized).
3) The AI-box experiments.
4) Inventing interesting testcases for decision-making, like torture vs. dust specks and Pascal's mugging.
While most of that list seems accurate, one should note that torture vs. dust specks is a variant of a standard issue with utilitarianism that would be discussed in a lot of intro-level philosophy classes.
Actually I was talking about analytical philosophy. At least "continental" philosophy (I hate that term, honestly, it's very Anglo-centric) is built out of huge, systematized books (which don't allow bickering as easily: people make their point and then move on to explain what they have come up with), and since there are fewer works to reference, it's easier to make a genealogy of big core books and work your way along that. Criss-crossing papers in periodical publications are far less convenient, especially if you don't have a subscription or access to archives.
No, what exhausted my patience with the continentals is that the vampire bloodlines are long rather than tangled, and that some authors seem to go out of their way not to be understood. For example, Nietzsche's early books were freaking full of contemporary pop references that make annotations indispensable, and turn the reading into a sort of TV Tropes Wiki Walk through XIXth-century media rather than the lesson in human nature it's supposed to be. Not to say it isn't fun, but such things have a time and place and that is not it. Others seem to rely on Department Of Redundancy Department to reinforce their points:...
Please don't downvote the hell out of me, I'm just trying to create a future reference for this sort of annoyance.
It is actually very important. He is a figurehead when it comes to risks from AI. To better assess the claims he makes, including the SIAI's capability to mitigate risks from AI, we need to know whether he is one hell of an entrepreneur or a really good mathematician, or else whether the other people who work for the SIAI are sufficiently independent of his influence.
Yeah, but given how easy it is to collect karma points simply by praising him even without substantiating the praise (yes, I have indulged in "karma whoring" once or twice), I was afraid of the backlash.
A recurring theme here seems to be "grandiose plans, left unfinished". I really hope this doesn't happen with this project. The worst part is, I really understand the motivations behind those "castles in the sky" and... bah, that's for another thread.
...given how easy it is to collect karma points simply by praising him even without substantiating the praise...
There is praise everywhere on the Internet, and in the case of Yudkowsky it is very much justified. People actually criticize him as well. The problem is some of the overall conclusions, extraordinary claims, and ideas. They might be few compared to the massive amount of rationality writing, but they can easily outweigh all the other good deeds if they are faulty.
Note that I am not saying that any of those ideas are wrong, but I think people here are too focused on, and dazzled by, the mostly admirable and overall valuable writings on the basics of rationality.
Really smart and productive people can be wrong, especially if they think they have to save the world. And if someone admits:
I mean, it seems to me that where I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project...
...I am even more inclined to judge the output of that person in the light of his goals.
To put it bluntly, people who focus on unfriendly AI might miss the weak spots that are more likely to be unf...
On a related note... has Eliezer successfully predicted anything? I'd like to see his beliefs pay rent, so to speak. Has his interpretation of quantum mechanics predicted any phenomena which have since been observed? Has his understanding of computer science and AI led him to accurately predict milestones in the field before they have happened?
Well, here are my two cents. (1) It isn't strictly correct to call him an AI researcher. A more correct classification would be something like AGI theorist; more accurate still would be FAI theorist. (2) Normal anomaly mentioned his TDT stuff, but of course that is only one of his papers. Will Newsome mentioned CFAI. I would add to that list the Knowability of FAI paper, his paper coauthored with Nick Bostrom, Coherent Extrapolated Volition, Artificial Intelligence as a Positive and Negative Factor in Global Risk, and LOGAI.
He (as I understand it, though perhaps I am wrong about this) essentially invented the field (Friendly Artificial General Intelligence) as an area for substantial study and set out some basic research programs. The main one of these seems to be a decision theory for an agent in a general environment, one capable of overcoming the issues that current decision theories have: mainly, that they do not always recommend the action we would recognize as having the greatest utility relative to our utility function.
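(To illustrate the kind of issue meant here, a minimal sketch of the standard Newcomb-style test case, with numbers I've made up purely for illustration: a decision theory that holds the predictor's choice fixed always recommends taking both boxes, even though agents who take only the opaque box predictably end up far richer.)

```python
# Illustrative only: expected payoffs in Newcomb's problem with a predictor
# that is correct with probability p. Numbers are assumptions, not from the thread.

p = 0.99            # assumed predictor accuracy
OPAQUE = 1_000_000  # opaque box contents if one-boxing was predicted
CLEAR = 1_000       # transparent box contents (always there)

# If you one-box: with probability p the predictor foresaw it and filled the box.
one_box = p * OPAQUE

# If you two-box: with probability p the opaque box is empty; with probability
# 1 - p the predictor erred and you get both.
two_box = p * CLEAR + (1 - p) * (CLEAR + OPAQUE)

print(f"one-boxing expects {one_box:,.0f}")  # ~990,000
print(f"two-boxing expects {two_box:,.0f}")  # ~11,000
```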
He got Peter Thiel to donate $1.1 million to the SIAI, which you should take as a sign of EY's potential and achievements.
Innovation in any area is a team effort. In his efforts to create friendly AI, EY has at least one huge accomplishment: creating a thriving organization devoted to creating friendly AI. Realistically, this accomplishment is almost certainly more significant than any set of code he alone could have written.
He got Peter Thiel to donate $1.1 million to the SIAI, which you should take as a sign of EY's potential and achievements.
Isn't that potentially double-counting evidence?
He got Peter Thiel to donate $1.1 million to the SIAI, which you should take as a sign of EY's potential and achievements.
It shows marketing skill. That doesn't necessarily indicate competence in other fields - and this is an area where competence is important. Especially so if you want to participate in the race - and have some chance of actually winning it.
He got Peter Thiel to donate $1.1 million to the SIAI, which you should take as a sign of EY's potential and achievements.
That's a huge achievement. But don't forget that he wasn't able to convince Thiel that the SIAI is the most important charity:
In February 2006, Thiel provided $100,000 of matching funds to back the Singularity Challenge donation drive of the Singularity Institute for Artificial Intelligence.
vs.
In September 2006, Thiel announced that he would donate $3.5 million to foster anti-aging research through the Methuselah Mouse Prize foundation.
...
In May 2007, Thiel provided half of the $400,000 matching funds for the annual Singularity Challenge donation drive.
vs.
On April 15, 2008, Thiel pledged $500,000 to the new Seasteading Institute, directed by Patri Friedman, whose mission is "to establish permanent, autonomous ocean communities to enable experimentation and innovation with diverse social, political, and legal systems".
I wouldn't exactly say that he was able to convince him of risks from AI.
And the logical next question... what is the greatest technical accomplishment of anyone in this thriving organization? Ideally in the area of AI. Putting together a team is an accomplishment proportional to what we can expect the team to accomplish. If there is anyone on this team who has done good things in the area of AI, some credit would go to EY for convincing that person to work on friendly AI.
Eh, it looks like we're becoming the New Hippies or the New New Age. The "sons of Bayes and 4chan" instead of "the sons of Marx and Coca-Cola". Lots of theorizing, lots of self-improvement and wisdom-generation, some of which is quite genuine, lots of mutual reassuring that it's the rest of the world that's insane and of breaking free of oppressive conventions... but under all the foam surprisingly little is actually getting done, apparently.
However, humanity might look back on us forty years from now and say: "those guys were pretty awesome, they were so avant la lettre; of course, the stuff they thought was so mindblowing is commonplace now, and lots of what they did was pointless flailing, but we still owe them a lot".
Perhaps I am being overly optimistic. At least we're having awesome fun together whenever we meet up. It's something.
What is "divulgation"? (Yes, I googled it.) My best guess is that you are not a native speaker of English and this is a poor translation of the cognate you are thinking of.
Yes, "divulgation" (or cognates thereof) is the word used in Romance languages to mean what we call "popularization" in English.
It would probably be more accurate to classify him as a researcher into Machine Ethics than broader Artificial Intelligence, at least after 2001-2003. To the best of my knowledge he doesn't claim to be currently trying to program an AGI; the SIAI describes him as "the foremost researcher on Friendly AI and recursive self-improvement," not an AI researcher in the sense of somebody actively trying to code an AI.
Flare.
(As far as "technical stuff" goes, there's also some of that, though not much. I still think Eliezer's most brilliant work was CFAI; not because it was correct, but because the intuitions that produced it are beautiful intuitions. For some reason Eliezer has changed his perspective since then, though, and no one knows why.)
Looking at Flare made me lower my estimation of Eliezer's technical skill, not raise it. I'm sure he's leveled up quite a bit since, but the basic premise of the Flare project (an XML-based language) is a bad technical decision made due to a fad. Also, it never went anywhere.
I think jimrandomh is slightly too harsh about Flare: the idea of using a pattern-matching object database as the foundation of a language, rather than a bolted-on addition, is at least an interesting concept. However, Eliezer seems to have focused excessively on bizarre details like supporting HTML in code comments, and on a reference-counting garbage collector that would supposedly be unlike anything to come before (even though the way he described it sounded pretty much exactly like the kind of reference-counting GC that had been in use for decades). More generally, he made grandiose, highly detailed plans that were mostly impractical and/or far too ambitious for a small team to hope to implement in anything less than a few lifetimes. And then the whole thing was suddenly abandoned unfinished.
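(For anyone unfamiliar with the comparison being made: below is a minimal sketch of the decades-old reference-counting scheme in question. The names are my own illustration, not anything taken from Flare; the point is simply that each object carries a count of live references and is reclaimed the moment that count drops to zero.)

```python
# A minimal sketch of classic reference counting, as used for decades
# (e.g. in CPython's object model). Names here are illustrative only.

class RefCounted:
    def __init__(self, payload):
        self.payload = payload
        self.refcount = 0

def incref(obj):
    obj.refcount += 1

def decref(obj):
    obj.refcount -= 1
    if obj.refcount == 0:
        # No remaining references: reclaim the object immediately.
        print(f"freeing {obj.payload!r}")

# Usage: two references to the same object, released one at a time.
node = RefCounted("example-node")
incref(node)   # first reference
incref(node)   # second reference
decref(node)   # one reference still alive, nothing happens
decref(node)   # count hits zero, object is "freed"
```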
The last three experiments had bigger (more than two orders of magnitude, I think) outside cash stakes. I suspect Russell and D. Alex may have been less indifferent about that than I was, i.e. I think the record shows that Eliezer acquitted himself well with low stakes ($10, or more when the player is indifferent about the money) a few times, but failed with high stakes.
I think the record shows that Eliezer acquitted himself well with low stakes ($10, or more when the player is indifferent about the money) a few times, but failed with high stakes.
Which suggests to me that as soon as people actually feel a bit of real fear, rather than just role-playing, they become mostly immune to Eliezer's charms.
That sounds like you are trying to rouse anger, or expressing a personal dislike, but not much like an argument.
The AI-box experiments have the flavor of (and presumably are inspired by) the Turing test - you could equally have accused Turing at the time of being "unscientific" in that he had proposed an experiment that hadn't even been performed and would not be for many years. Yes, they are a conceptual rather than a scientific experiment.
The point of the actual AI-box demonstration isn't so much to "prove" something, in the sense of demonstrating a particular exploitable regularity of human behaviour that a putative UFAI could use to take over people's brains over a text link (though that would be nice to have). Rather, it is that prior to the demonstration one would have assigned very little probability to the proposition "Eliezer role-playing an AI will win this bet".
As such, I'd agree that they "prove little" but they do constitute evidence.
Eliezer invented Timeless Decision Theory. Getting a decision theory that works for self-modifying or self-copying agents is in his view an important step in developing AGI.
Eliezer invented Timeless Decision Theory.
He hasn't finished it. I hope he does, and I will be impressed. But I don't think that answers what Raw_Power asks for. Humans are the weak spot when it comes to solving friendly AI. In my opinion it is justified to ask whether Eliezer Yudkowsky (but also the other people within the SIAI) are the right people for the job.
If the SIAI openly admits that it doesn't have the horsepower yet to attempt some hard problems, that would raise my confidence in their capability. That's no contradiction, because it would pose a solvable short-term goal that can be supported by contributing money and finding experts who can judge the mathematical talent of job candidates.
It would probably be more accurate to classify him as a researcher into Machine Ethics than broader Artificial Intelligence, at least after 2001-2003. To the best of my knowledge he doesn't claim to be currently trying to program an AGI; the SIAI describes him as "the foremost researcher on Friendly AI and recursive self-improvement," not an AI researcher in the sense of somebody actively trying to code an AI.
Reading this has made me rather more ticked off about the philosopher-bashing that sometimes goes on here ("Since free will is about as easy as a philosophical problem in reductionism can get, while still appearing 'impossible' to at least some philosophers").
The annoying thing about those is that we only have the participants' word for it, AFAIK. They're known to be trustworthy, but it'd be nice to see a transcript if at all possible.
Reading this has made me a bit more ticked off about the philosopher-bashing that goes on round here.
Basically this: "Eliezer Yudkowsky writes and pretends he's an AI researcher but probably hasn't written so much as an Eliza bot."
While the Eliezer S. Yudkowsky site has lots of divulgation articles and his work on rationality is of indisputable value, I find myself at a loss when I want to respond to this. Which frustrates me very much.
So, to avoid this sort of situation in the future, I have to ask: What did the man, Eliezer S. Yudkowsky, actually accomplish in his own field?
Please don't downvote the hell out of me, I'm just trying to create a future reference for this sort of annoyance.