I wonder why Eliezer doesn't want to say anything concrete about his work with Marcello? ("Most of the real progress that has been made when I sit down and actually work on the problem is things I'd rather not talk about")
There seem to be only two plausible reasons:
For 1., someone else stealing his work and finishing a provably friendly AI first would be a good thing, would it not? Losing the chance to do it himself shouldn't matter as much as the fate of the future intergalactic civilization to an altruist like him. Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then the ideas would probably only help AI researchers who work on transparent design, are aware of the friendliness problem, and take friendliness seriously enough to mine the work of the main proponent of friendliness for useful ideas. Wouldn't giving these people a relative advantage compared to e.g. connectionists be a good thing? Unless he thinks that AGI would then suddenly be very ...
Maybe his work on provable friendliness would reveal ideas on AI design that could be used to produce an unfriendly AI? But even then the ideas would probably only help AI researchers who work on transparent design
All FAIs are AGIs, most of the FAI problem is solving the AGI problem in particular ways.
What would be way cool is a description of the question along with the link, though I realize that might be a bit of work.
1.
What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?
By that I mean things like: Do you have a reading schedule (x number of hours daily, etc.)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (e.g. reading certain magazines or books, watching films, etc.) to focus on what is more important? And so on.
2.
Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):
http://yudkowsky.net/obsolete/bookshelf.html
Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).
3.
What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?
4.
Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus,...
20: What is the probability that this is the ultimate base layer of reality?
Eliezer gave the joke answer to this question, because this is something that seems impossible to know.
However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability to AI by 2070, with the probability starting to approach .99 even before 2070. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans w...
In my posts, I've argued that indexical uncertainty like this shouldn't be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy.
Your answer to question #8 doesn't mention how you convinced your parents to let you drop out of school at age 12...
I couldn't figure out a way to "play all", so I put everything but the Vimeo one on a YouTube playlist.
Thanks for putting all this together! It would be great if you could put the question text above each of the videos in the post so readers can scan through and find questions they're most interested in.
Re: autodidacticism & Bayesian enlightenment
For comparison, I did a lot of self-education, but also had a conventional education (ending with a BA in Computer Science). I think I was introduced to Bayesianism in a probability class in college, and it was also the background assumption in a couple of economics courses that I took for fun (Game Theory and Industrial Organization). It seems to me that choosing pure autodidacticism probably delayed Eliezer's Bayesian enlightenment by at least a couple of years.
Society is supported by "hydraulic pressure", myriad flows of wealth/matter/energy/information and human effort each holding the others up. It's a layered, cyclic graph: technology depends on the surplus food of agriculture, agriculture depends on the efficiencies of technology. It's a massively connected graph. It has non-obvious dependencies even at short range - think what computer gamers have done for Moore's law, or music pirates for broadband. It has dependencies across time. It has a lot of dependencies in which the supporter does not ...
Oh, so that's what Eliezer looks like! I had imagined him as a wise old man with long white hair and beard. Like Tellah the sage, in Final Fantasy IV.
From answer 5 there is a great quote from Eliezer:
Reality is one thing... your emotions are another.
It's about how we don't feel the importance of the singularity.
I'd find it incredibly useful to be able to download these videos, so I can watch them on my TV rather than on the PC. I'm doing so one by one via a rather painful process that doesn't work for Vimeo at the moment; if anyone can make it easier that would be wonderful!
EDIT: A torrent of the videos would seem the most straightforward way.
In video 12, Eliezer says that the SIAI is probably not going to be funding any ad hoc AI programs that may or may not produce any lightning bolts of "a-ha!" or eureka moments.
He also says that he believes that any recursively self-improving AI must be created to very high standards of precision (so that we don't die in the process)...
Given these two things, what exactly is the SIAI going to be funding?
Working on papers submitted to peer-reviewed scientific journals is not marketing but research.
If SIAI wants to build some credibility, then it needs some publications in scientific journals. Doing so could help to ensure further funding and the development of actual implementations.
I think that it is a very good idea to first formulate and publish the theoretical basis for the work they intend to do, rather than just saying: we need money to develop component X of our friendly AI.
Of course a possible outcome will be that the scientific community will deem the research shallow, unoriginal or unrealistic to implement. However, it is necessary to publish the ideas before they can be reviewed.
So my take on this is that SIAI is merely asking for a chance to demonstrate their skills rather than for blind commitment.
None of the YouTube videos seem to be linked in the post, but they are available here: https://www.youtube.com/@MichaelGrahamRichard/videos
Specifically in response to #11, it sounds like you really need more help but can't find anyone right now. What about more broadly reaching out to mathematicians of sufficient caliber?
One idea: throw a mini-conference for super-genius level mathematicians. Whether or not they believe in the possibility of AI, a lot of them would probably be delighted to come if you gave them free airfare, hotel stay, and continental breakfast. Would this be productive?
On UFAI, you should liaise with Shane Legg: his recent estimate for human-level, brain-structure-copying AI not subject to FAI-style proofs puts the peak chance around 2028. This would be AI that duplicates brain algorithms with similar conventional AI algorithms, not a neuron-for-neuron copy.
I'm really curious, why exactly was this interview made via video?
It seems much less useful than, well, posts and textual comments.
Just a quick question here...
While I agree with everything that Eliezer is saying (in the videos up to #5; I have not yet watched the remaining 25 videos), I think that some of his comments could be taken hugely out of context if care is not given to think of this ahead of time.
For instance, he rightly makes the claim that this point in history is crunch time for our species (although I have some questions about the specific consequences he believes might befall us if we fail), and for the intergalactic civilization to which we will eventua...
Eliezer and I continue to look rather alike. I still don't have a full beard, but I put on some weight last year and my face pudged up a bit, accentuating the similarity. I took a short vid of myself with a Flip camcorder and ran it next to my laptop screen while running one of the YouTube vids, and it was pretty uncanny. Incidentally, elizombies.jpg is nowhere to be found... :-( .
Question 25: I'm surprised that Orthodox Judaism would disincline people to choose cryonics; I thought it was a religion strongly oriented towards living this life well rather than towards an afterlife.
I read about an ethics of life extension conference where the only people who were unambiguously in favor of life extension were the Orthodox Jews.
What am I missing?
I'm surprised you found my "success creating rationalists" Q confusing. What are the factors of success? How many, how good, how successful are the teaching techniques, can the techniques scale to more than just a clique (or to trees-of-cliques), is the teacher-pupil-teacher cycle properly closed, and so on.
Regarding #5, I think the world needs more tuberculosis drugs more urgently than it needs more FAI research. The future will take care of itself; I don't expect to be around to see it anyway.
On October 29th, I asked Eliezer and the LW community if they were interested in doing a video Q&A. Eliezer agreed and a majority of commenters were in favor of the idea, so on November 11th, I created a thread where LWers could submit questions. Dozens of questions were asked, generating a total of over 650 comments. The questions were then ranked using the LW voting system.
On December 11th, Eliezer filmed his replies to the top questions (skipping some), and sent me the videos on December 22nd. Because voting continued after that date, the order of the top questions in the original thread has changed a bit, but you can find the original question for each video (and the discussion it generated, if any) by following the links below.
Thanks to Eliezer and everybody who participated.
Update: If you prefer to download the videos, they are available here (800 MB, .wmv format, sort the files by 'date created').
Link to question #1.
Link to question #2.
Link to question #3.
Link to question #4.
Eliezer Yudkowsky - Less Wrong Q&A (5/30) from MikeGR on Vimeo.
Link to question #5.
(Video #5 is on Vimeo because YouTube doesn't accept videos longer than 10 minutes, and I only found out after uploading about a dozen. I would gladly have put them all on Vimeo, but there's a 500 MB/week upload limit and these videos add up to over 800 MB.)
Link to question #6.
Link to question #7.
Link to question #8.
Link to question #9.
Link to question #10.
Link to question #11.
Link to question #12.
Link to question #13.
Link to question #14.
Link to question #15.
Link to question #16.
Link to question #17.
Link to question #18.
Link to question #19.
Link to question #20.
Link to question #21.
Link to question #22.
Link to question #23.
Link to question #24.
Link to question #25.
Link to question #26.
Link to question #27.
Link to question #28.
Link to question #29.
Link to question #30.
If anything is wrong with the videos or links, let me know in the comments or via private message.