What is your information diet like? Do you control it deliberately (do you have a method; is it, er, intelligently designed), or do you just let it happen naturally?
By that I mean things like: Do you have a reading schedule (x hours daily, etc.)? Do you follow the news, or try to avoid information with a short shelf-life? Do you frequently stop yourself from doing things that you enjoy (f.ex. reading certain magazines or books, watching films, etc.) to focus on what is more important? And so on.
During a panel discussion at the most recent Singularity Summit, Eliezer speculated that he might have ended up as a science fiction author, but then quickly added:
I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.
Shortly thereafter, Peter Thiel expressed a wish that all the people currently working on string theory would shift their attention to AI or aging; no disagreement was heard from anyone present.
I would therefore like to ask Eliezer whether he in fact believes that the only two legitimate occupations for an intelligent person in our current world are (1) working directly on Singularity-related issues, and (2) making as much money as possible on Wall Street in order to donate all but minimal living expenses to SIAI/Methuselah/whatever.
How much of existing art and science would he have been willing to sacrifice so that those who created it could instead have been working on Friendly AI? If it be replied that the work of, say, Newton or Darwin was essential in getting us to our current perspective wherein we have a hope of intelligently tackling this problem, might the same not hold true in yet unknown ways for string theorists? And what of Michelangelo, Beethoven, and indeed science fiction? Aren't we allowed to have similar fun today? For a living, even?
Somewhat related, AGI is such an enormously difficult topic, requiring intimate familiarity with so many different fields, that the vast majority of people (and I count myself among them) simply aren't able to contribute effectively to it.
I'd be interested to know if he thinks there are any important singularity-related issues that are somewhat more accessible - more in need of contributions of man-hours than of genius-level intellect. Is the only way a person of more modest talents can contribute through donations?
Almost no-one can do FAI, or even recognize talent at a level capable of doing FAI, but if more people were doing the easier things it wouldn't be nearly so hard to find people who could do the core work.
SIAI keeps supporting this attitude, yet I don't believe it, at least in the way it's presented. A good mathematician who comes to understand the problem statement and succeeds in weeding out the standard misunderstandings can contribute as well as anyone, at this stage where we have no field. Creating a programme that would allow people to reliably get to work on the problem requires material to build upon, and there is still nothing, of whatever quality. Systematizing the connections with existing science, trying to locate the FAI project's place in it, is something that requires only expertise in that science and an understanding of the FAI problem statement. At the very least, a dozen steps in, we'll have a useful curriculum to get folks up to speed in the right direction.
Your "Bookshelf" page is 10 years old (and contains a warning sign saying it is obsolete):
http://yudkowsky.net/obsolete/bookshelf.html
Could you tell us about some of the books and papers that you've been reading lately? I'm particularly interested in books that you've read since 1999 that you would consider to be of the highest quality and/or importance (fiction or not).
What is a typical EY workday like? How many hours/day on average are devoted to FAI research, and how many to other things, and what are the other major activities that you devote your time to?
Could you please tell us a little about your brain? For example, what is your IQ, at what age did you learn calculus, do you use cognitive-enhancing drugs or brain fitness programs, are you neurotypical, and why didn't you attend school?
Why exactly do majorities of academic experts in the fields that overlap your FAI topic, who have considered your main arguments, not agree with your main claims?
I also disagree with the premise of Robin's claim. I think that when our claims are worked out precisely and clearly, a majority agree with them, and a supermajority of those who agree with Robin's part (new future growth mode, get frozen...) agree.
Still, among those who take roughly Robin's position, I would say that an ideological attraction to libertarianism is BY FAR the main reason for disagreement. Advocacy of a single control system just sounds too evil for them to bite that bullet however strong the arguments for it.
You have written a lot of words. Just how many of your words would someone have had to read to make you feel a substantial need to explain the fact that they are world-class AI experts and disagree with your conclusions?
Roughly, what I expect to happen by default is no modular analysis at all - just snap consideration and snap judgment. I feel little need to explain such.
You, or somebody anyway, could still offer a modular causal model of that snap consideration and snap judgment. For example:
What cached models of the planning abilities of future machine intelligences did the academics have available when they made the snap judgment?
What fraction of the academics are aware of any current published AI architectures which could reliably reason over plans at the level of abstraction of "implement a proxy intelligence"?
What false claims have been made about AI in the past? What decision rules might academics have learned to use, to protect themselves from losing prestige for being associated with false claims like those?
How much do those decision rules...
I confess, it doesn't seem to me on a gut level like this is either healthy to obsess about, or productive to obsess about. It seems more like worrying that my status isn't high enough to do work, than actually working. If someone shows up with amazing analyses I haven't considered, I can just listen to the analyses then. Why spend time trying to guess who might have a hidden deep analysis I haven't seen, when the prior is so much in favor of them having made a snap judgment, and it's not clear why if they've got a deep analysis they wouldn't just present it?
I think that on a purely pragmatic level there's a lot to be said for the Traditional Rationalist concept of demanding that Authority show its work, even if it doesn't seem like what ideal Bayesians would do.
There's certainly always a point in considering specific arguments. But to be nervous merely that someone else has a different view, one ought, generally speaking, to suspect (a) that they know something you do not or at least (b) that you know no more than them (or far more rarely (c) that you are in a situation of mutual Aumann awareness and equal mutual respect for one another's meta-rationality). As far as I'm concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It's not that I have specific reason to distrust these people - the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.
I don't actually spend time obsessing about that sort of thing except when you're asking me those sorts of questions - putting so much energy into self-justification and excuses would just slow me down if Horvitz showed up tomorrow with an argument I hadn't considered.
I'll say again: I think there's much to be said for the Traditional Rationalist ideal of - once you're at least inside a science and have enough expertise to eva...
You admit you have not done much to make it easy to show them your reasons. You have not written up your key arguments in a compact form using standard style and terminology and submitted it to standard journals. You also admit you have not contacted any of them to ask them for their reasons; Horvitz would have to "show up" for you to listen to him. This looks a lot like a status pissing contest; the obvious interpretation: since you think you are better than them, you won't ask them for their reasons, and you won't make it easy for them to understand your reasons, as that would admit they are higher status. They will instead have to acknowledge your higher status by coming to you and doing things your way. And of course they won't, since by ordinary standards they have higher status. So you ensure there will be no conversation, and with no conversation you can invoke your "traditional" (non-Bayesian) rationality standard to declare you have no need to consider their opinions.
"and then take out whatever time was needed to collect the OB/LW posts in our discussion into a sequence with summaries."
Why? No one in the academic community would spend that much time reading all that blog material for answers that would be best given concisely in a published academic paper. So why not spend the time? Unless you think you are so much of an expert in the field that you don't need the academic community. If that is the case, where are your publications, where are your credentials, and where is the proof of this expertise ("expert" being a term that is applied based on actual knowledge and accomplishments)?
"Since I don't expect senior traditional-AI-folk to pay me any such attention short of spending a HUGE amount of effort to get it and probably not even then, I haven't, well, expended a huge amount of effort to get it."
Why? If you expect to make FAI, you will undoubtedly need the academic community's help, unless you plan to do this whole project by yourself or with purely amateur help. I think you would admit that in its current form SIAI has a 0 probability of creating FAI first. That being said, your best hope is to convince others that the cause is worthwhile, and if that is the case, you are looking at the professional and academic AI community.
I am sorry, I prefer to be blunt... that way there is no mistaking meanings.
How can you be so confident that you know so much about this topic that no world class AI expert could possibly know something relevant that you do not?
You changed what I said into a bizarre absolute. I am assuming no such thing. I am just assuming that, by default, world class experts on various topics in narrow AI, produce their beliefs about the Singularity by snap judgment rather than detailed modular analysis. This is a prior and hence an unstable probability - as soon as I see contrary evidence, as soon as I see the actual analysis, it gets revoked.
but they clearly disagree with that assessment of how much consideration your arguments deserve.
They have no such disagreement. They have no idea I exist. On the rare occasion when I encounter such a person who is physically aware of my existence, we often manage to have interesting though brief conversations despite their having read none of my stuff.
Academic authorities are not always or even usually wrong when they disagree with less authoritative contrarians.
Science only works when you use it; scientific authority derives from science. If you've got Lord Kelvin running around saying that you can't have flying m...
Robin, why do most academic experts (e.g. in biology) disagree with you (and Eliezer) about cryonics? Perhaps a few have detailed theories on why it's hopeless, or simply have higher priorities than maximizing their expected survival time; but mostly it seems they've simply never given it much consideration, either because they're entirely unaware of it or assume it's some kind of sci-fi cult practice, and they don't take cult practices seriously as a rule. But clearly people in this situation can be wrong, as you yourself believe in this instance.
Similarly, I think most of the apparent "disagreement" about the Singularity is nothing more than unawareness of Yudkowsky and his arguments. As far as I can tell, academics who come into contact with him tend to take him seriously, and their disagreements are limited to matters of detail, such as how fast AI is approaching (decades vs. centuries) and the exact form it will take (uploads/enhancement vs. de novo). They mainly agree that SIAI's work is worth doing by somebody. Examples include yourself, Scott Aaronson, and David Chalmers.
Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend to arrive at actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.
I know at one point you believed in staying celibate, and currently your main page mentions you are in a relationship. What is your current take on relationships, romance, and sex, how did your views develop, and how important are those things to you? (I'd love to know as much personal detail as you are comfortable sharing.)
Could you (Well, "you" being Eliezer in this case, rather than the OP) elaborate a bit on your "infinite set atheism"? How do you feel about the set of natural numbers? What about its power set? What about that thing's power set, etc?
From the other direction, why aren't you an ultrafinitist?
ZFC's countable model isn't that weird.
Imagine a computer programmer, watching a mathematician working at a blackboard. Imagine asking the computer programmer how many bytes it would take to represent the entities that the mathematician is manipulating, in a form that can support those manipulations.
The computer programmer will do a back of the envelope calculation, something like: "The set of all natural numbers" is 30 characters, and essentially all of the special symbols are already in Unicode and/or TeX, so probably hundreds, maybe thousands of bytes per blackboard, depending. That is, the computer programmer will answer "syntactically".
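(A minimal aside, not from the original comment: a quick sketch that just makes the programmer's "syntactic" count concrete. The phrase and the 30-character figure are taken from the paragraph above.)

```python
# Back-of-the-envelope "syntactic" answer: count what is literally written
# on the blackboard, not the infinite set the notation refers to.
phrase = "The set of all natural numbers"

print(len(phrase))                  # 30 characters, as estimated above
print(len(phrase.encode("utf-8")))  # 30 bytes in UTF-8 (plain ASCII text)
```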
Of course, the mathematician might claim that the "entities" that they're manipulating are more than just the syntax, and are actually much bigger. That is, they might answer "semantically". Mathematicians are trained to see past the syntax to various mental images. They are trained to answer questions like "how big is it?" in terms of those mental images. A math professor asking "How big is it?" might accept answers like "it's a subset of the integers" or "It's a superse...
What was the story purpose and/or creative history behind the legalization and apparent general acceptance of non-consensual sex in the human society from Three Worlds Collide?
Here's my attempt at explaining Eliezer's explanation. It's based heavily on my experiences as someone who's apparently quite atypical in a relevant way. This may require a few rounds of back-and-forth to be useful - I have more information about the common kind of experience (which I assume you share) than you have about mine, but I don't know if I have enough information about it to pinpoint all the interesting differences. Note that this information is on the border of what I'm comfortable sharing in a public area, and may be outside some people's comfort zones even to read about: If anyone reading is easily squicked by sexuality talk, they may want to leave the thread now.
I'm asexual. I've had sex, and experienced orgasms (anhedonically, though I'm not anhedonic in general), but I have little to no interest in either. However, I don't object to sex on principle - it's about as emotionally relevant as any other social interaction, which can range from very welcome to very unwelcome depending on the circumstances and the individual(s) with whom I'm socializing*. Sex tends to fall on the 'less welcome' end of that scale because of how other people react to it - I'm aware that othe...
Your approach to AI seems to involve solving every issue perfectly (or very close to perfection). Do you see any future for more approximate, rough and ready approaches, or are these dangerous?
Autodidacticism
Eliezer, first, congratulations on having the intelligence and courage to voluntarily drop out of school at age 12! Was it hard to convince your parents to let you do it? AFAIK you are mostly self-taught. How did you accomplish this? Who guided you, did you have any tutor/mentor? Or did you just read/learn what was interesting and keep going for more, one field of knowledge opening pathways to the next one, etc.?
EDIT: Of course I would be interested in the details, like what books did you read when, and what further interests did they spark, etc... Tell us a little story. ;)
http://news.ycombinator.com/item?id=195959
"Oh, dear. Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...
All right, this much of a hint:
There's no super-clever special trick to it. I just did it the hard way.
Something of an entrepreneurial lesson there, I guess."
If you were to disappear (freak meteorite accident), what would the impact on FAI research be?
Do you know other people who could continue your research, or that are showing similar potential and working on the same problems? Or would you estimate that it would be a significant setback for the field (possibly because it is a very small field to begin with)?
How young can children start being trained as rationalists? And what would the core syllabus / training regimen look like?
For people not directly involved with SIAI, is there specific evidence that it isn't run by a group of genuine sociopaths with the goal of taking over the universe to fulfill (a compromise of) their own personal goals, which are fundamentally at odds with those of humanity at large?
Humans have built-in adaptions for lie detection, but betting a decision like this on the chance of my sense motive roll beating the bluff roll of a person with both higher INT and CHA than myself seems quite risky.
Published writings about moral integrity and ethical injunctions count for little in this regard, because they may have been written with the specific intent to deceive people into supporting SIAI financially. The fundamental issue seems rather similar to the AI-Box problem: You're dealing with a potential deceiver more intelligent than yourself, so you can't really trust anything they say.
I wouldn't be asking this for positions that call for merely human responsibility, like being elected to the highest political office in a country, having direct control over a bunch of nuclear weapons, or anything along those lines; but FAI implementation calls for much more responsibility than that.
If the a...
I guess my main answers would be, in order:
1) You'll have to do with the base probability of a highly intelligent human being a sociopath.
2) Elaborately deceptive sociopaths would probably fake something other than our own nerdery...? Even taking into account the whole "But that's what we want you to think" thing.
3) All sorts of nasty things we could be doing and could probably get away with doing if we had exclusively sociopath core personnel, at least some of which would leave visible outside traces while still being the sort of thing we could manage to excuse away by talking fast enough.
4) Why are you asking me that? Shouldn't you be asking, like, anyone else?
What are your current techniques for balancing thinking and meta-thinking?
For example, trying to solve your current problem, versus trying to improve your problem-solving capabilities.
Could you give an up-to-date estimate of how soon non-Friendly general AI might be developed? With confidence intervals, and by type of originator (research, military, industry, unplanned evolution from non-general AI...)?
Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?
Updating top level with expanded question:
I ask because that's my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?
So far I've donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is "bigger", it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I'm still around when a friendly AGI is launched if it ends up taking a while (usual caveats: existential risks, etc.).
It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc), while more dollars for the SIAI doesn't seem like it would have quite such a direct effect on the rate of progress (but since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect that additional money would have).
If you don't want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.
Hi there MichaelGR,
I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.
Regarding what SIAI could do with a marginal $1000, the one sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations allows us to more significantly reduce existential risk.
In more detail:
Existential risk can be reduced by (among other pathways):
SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projec...
You can donate to FHI too? Dang, now I'm conflicted.
Wait... their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.
Crisis averted by tiny obstacles.
at 8 expected current lives saved per dollar donated
Even though there is a large margin of error, this is at least 1500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI has (roughly speaking) only two possible modes, a total failure mode and a total win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler... It's a shame warm fuzzies scale up so badly...
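(A rough worked version of the arithmetic in the comment above, for readers who want to check it. The 8-lives-per-dollar figure and the 1500x ratio are the comment's own numbers; the ~1,200 figure for Schindler is the commonly cited count; the benchmark cost per life is simply back-solved from those inputs, not a vetted cost-effectiveness estimate.)

```python
# Back-solving the comment's own rough numbers.
lives_per_dollar_siai = 8     # "8 expected current lives saved per dollar donated"
effectiveness_ratio = 1500    # "at least 1500 times more effective"

# Implied cost per life for the best conventional death-averting charity:
implied_cost_per_life = effectiveness_ratio / lives_per_dollar_siai
print(implied_cost_per_life)  # 187.5 dollars per life

# The Schindler comparison: Oskar Schindler is usually credited with
# saving roughly 1,200 people.
donation = 150
print(donation * lives_per_dollar_siai)  # 1200 expected lives
```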
If Omega materialized and told you Robin was correct and you are wrong, what do you do for the next week? The next decade?
Are the book(s) based on your series of posts on OB/LW still happening? Any details on their progress (title? release date? e-book or real book? approached publishers yet? only technical books, or a popular book too?), or on why they've been put on hold?
What do you view as your role here at Less Wrong (e.g. leader, preacher, monk, moderator, plain-old contributor, etc.)?
Is there any published work in AI (whether or not directed towards Friendliness) that you consider does not immediately, fundamentally fail due to the various issues and fallacies you've written on over the course of LW? (E.g. meaningfully named Lisp symbols, hiddenly complex wishes, magical categories, anthropomorphism, etc.)
ETA: By AI I meant AGI.
In one of the discussions surrounding the AI-box experiments, you said that you would be unwilling to use a hypothetical fully general argument/"mind hack" to cause people to support SIAI. You've also repeatedly said that the friendly AI problem is a "save the world" level issue. Can you explain the first statement in more depth? It seems to me that if anything really falls into "win by any means necessary" mode, saving the world is it.
This comes to mind:
But why not become an expert liar, if that's what maximizes expected utility? Why take the constrained path of truth, when things so much more important are at stake?
Because, when I look over my history, I find that my ethics have, above all, protected me from myself. They weren't inconveniences. They were safety rails on cliffs I didn't see.
I made fundamental mistakes, and my ethics didn't halt that, but they played a critical role in my recovery. When I was stopped by unknown unknowns that I just wasn't expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.
You can't duplicate this protective effect by trying to be clever and calculate the course of "highest utility". The expected utility just takes into account the things you know to expect. It really is amazing, looking over my history, the extent to which my ethics put me in a recoverable position from my unanticipated, fundamental mistakes, the things completely outside my plans and beliefs.
Ethics aren't just there to make your life difficult; they can protect you from Black Swans. A startling assertion, I know, but not one entirely irrelevant to current affairs.
I admit to being curious about various biographical matters. So for example I might ask:
What are your relations like with your parents and the rest of your family? Are you the only one to have given up religion?
Do you feel lonely often? How bad (or important) is it?
(Above questions are a corollary of:) Do you feel that, as you improve your understanding of the world more and more, there are fewer and fewer people who understand you and with whom you can genuinely relate on a personal level?
What practical policies could politicians enact that would increase overall utility? When I say "practical", I'm specifically ruling out policies that would increase utility but which would be unpopular, since no democratic polity would implement them.
(The background to this question is that I stand a reasonable chance of being elected to the Scottish Parliament in 19 months' time).
Previously, you endorsed this position:
Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.
One counterexample has been proposed a few times: holding false beliefs about oneself in order to increase the appearance of confidence, given that it's difficult to directly manipulate all the subtle signals that indicate confidence to others.
What do you think about this kind of self-deception?
In 2007, I wrote a blog post titled Stealing Artificial Intelligence: A Warning for the Singularity Institute.
Short summary: After a few more major breakthroughs, when AGI is almost ready, AI will no doubt appear on the radar of many powerful organizations, such as governments. They could spy on AGI researchers and steal the code when it is almost ready (or ready, but not yet Certified Friendly) and launch their copy first, but without all the care and understanding required.
If you think there's a real danger there, could you tell us what the SIAI is doing to minimize it? If it doesn't apply to the SIAI, do you know if other groups working on AGI have taken this into consideration? And if this scenario is not realistic, could you tell us why?
What criteria do you use to decide upon the class of algorithms / computations / chemicals / physical operations that you consider "conscious" in the sense of "having experiences" that matter morally? I assume it includes many non-human animals (including wild animals)? Might it include insects? Is it weighted by some correlate of brain / hardware size? Might it include digital computers? Lego Turing machines? China brains? Reinforcement-learning algorithms? Simple Python scripts that I could run on my desktop? Molecule movements in the wall behind John Searle's back that can be interpreted as running computations corresponding to conscious suffering? Rocks? How does it distinguish interpretations of numbers as signed vs. unsigned, or ones complement vs. twos complement? What physical details of the computations matter? Does it regard carbon differently from silicon?
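(A small illustrative sketch, not part of the original question, showing the kind of interpretation-dependence the last few questions point at: the same stored byte read under three different numeric conventions. The particular bit pattern is arbitrary.)

```python
# One byte with the high bit set; the bits are fixed, but the number they
# "mean" depends entirely on the convention used to read them.
bits = 0b11111010

as_unsigned = bits                      # 250
as_twos_complement = bits - 256         # -6  (the usual signed byte)
as_ones_complement = -((~bits) & 0xFF)  # -5  (the older representation)

print(as_unsigned, as_twos_complement, as_ones_complement)
```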
2) How does one affect the process of increasing the rationality of people who are not ostensibly interested in objective reasoning and people who claim to be interested but are in fact attached to their biases?
I find that question interesting because it is plain that the general capacity for rationality in a society can be improved over time. Once almost no one understood the concept of a bell curve or a standard deviation, but now the average person has a basic understanding of how these concepts apply to the real world.
It seems to me that we really are faced with the challenge of explaining the value of empirical analysis and objective reasoning to much of the world. Today the Middle East is hostile towards reason, though it presumably doesn't have to be this way.
So again, my question is how do more rational people affect the reasoning capacity in less rational people, including those hostile towards rationality?
You've achieved a high level of success as a self-learner, without the aid of formal education.
Would this extrapolate as a recommendation of a path every fast-learner autodidact should follow — meaning: is it a better choice?
If not, in which scenarios would not going after formal education be more advisable for someone? (Feel free to add as many caveats and 'ifs' as necessary.)
In the spirit of considering semi-abyssal plans, what happens if, say, next week you discover a genuine reduction of consciousness and it turns out that... there's simply no way to construct the type of optimization process you want without it being conscious, even if very different from us?
I.e., what if it turned out that The Law had the consequence that "to create a general mind is to create a conscious mind. No way around that"? Obviously that shifts the ethics a bit, but my question is basically, if so, well... "now what?" What would have to be done differently, in what ways, etc.?
What five written works would you recommend to an intelligent lay-audience as a rapid introduction to rationality and its orbital disciplines?
What are the hazards associated with making random smart people who haven't heard about existential dangers more intelligent, mathematically inclined, and productive?
Let E(t) be the set of historical information available up until some time t, where t is some date (e.g. 1934). Let p(A|E) be your estimate of the probability an optimally rational Bayesian agent would assign to the event "Self-improving artificial general intelligence is discovered before 2100" given a certain set of historical information.
Consider the function p(t)=p(A|E(t)). Presumably as t approaches 2009, p(t) approaches your own current estimate of p(A).
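(The same setup in display form, purely restating the question's definitions; A abbreviates the event named above.)

```latex
p(t) = p\bigl(A \mid E(t)\bigr), \qquad \lim_{t \to 2009} p(t) \approx p(A)\ \text{(your current estimate)}.
```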
Describe the function p(t) since about 1900. What events - research discoveries, economic trends, technological developments, sci-fi novel publications, etc. - caused the largest changes in p(t)? Is it strictly increasing, or does it fluctuate substantially? Did the publication of any impossibility proofs (e.g. No Free Lunch) cause strong decreases in p(t)? Can you point to any specific research results that increased p(t)? What about the "AI winter" and related setbacks?
What single technique do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?
Can you make a living out of this rationality / SI / FAI stuff... or do you have to be independently wealthy?
What was the most useful suggestion you got from a would-be FAI solver? (I'm putting separate questions in separate comments per MichaelGR's request.)
What is the background that you most frequently wish would-be FAI solvers had when they struck up conversations with you? You mentioned the Dreams of Friendliness series; is there anything else? You can answer this question in comment form if you like.
In terms of your intellectual growth, what were your biggest mistakes or most harmful habits, and what, if anything, would you do differently if you had the chance?
What single source of material (book, website, training course) do you think is most useful for a smart, motivated person to improve their own rationality in the decisions they encounter in everyday life?
Of the questions you decide not to answer, which is most likely to turn out to be a vital question you should have publicly confronted?
Not a question you don't want to answer but would probably have bitten the bullet on anyway. The question you would have avoided completely if it weren't for my question.
[Edit - "If I thought they were vital, I wouldn't avoid" would miss the point, as not wanting to consider something suppresses counterarguments to dismissing it. Take a step back - which question is most likely to be giving you this reaction?]
He will simply ignore questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
I am 99.99% certain that he will not ignore such questions.
If you thought an AGI couldn't be built, what would you dedicate your life to doing? Perhaps another formulation, or a related question: what is the most important problem/issue not directly related to AI?
Okay: Goedel, Escher, Bach. You like it. Big-time.
But why? Specifically, what insights should I have assimilated from reading it that are vital for AI and rationalist arts? I personally feel I learned more from Truly Part of You than all of GEB, though the latter might have offered a little (unproductive) entertainment.
I am sure you're familiar with the Bulletin of the Atomic Scientists' "Doomsday Clock", so: if you were in charge of a Funsday Clock, showing the time until a positive singularity, what time would it be on? Any recent significant changes?
(Idea of Funsday Clock blatantly stolen from some guy on Twitter.)
Please estimate your probability of dying in the next year (5 years). Assume your estimate is perfectly accurate. What additional probability of dying in the next year (5 years) would you willingly accept for a guaranteed and safe increase of one (two, three) standard deviation(s) in terms of intelligence?
Which areas of science or angles of analysis currently seem relevant to the FAI problem, and which of those you've studied seem irrelevant? What about those that fall on the "AI" side of things? Fundamental math? Physics?
Boiling down rationality
Eliezer, if you only had 5 minutes to teach a human how to be rational, how would you do it? The answer has to be more or less self-contained, so "read my posts on LW" is not valid. If you think that 5 minutes is not enough, you may extend the time to a reasonable amount, but it should be doable in one day at maximum. Of course, it would be nice if you actually performed the answer in the video. By perform I mean "Listen, human, I will teach you to be rational now..."
EDIT: When I said perform I meant it as opposed to...
If you conceptualized the high-level tasks you must attend to in order to achieve (1) FAI-understanding and (2) FAI-realization in terms of a priority queue, what would be the current top few items in each queue (with numeric priorities on some arbitrary scale)?
Previously, in Ethical Injunctions and related posts, you said that, for example,
You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.
It seems like you're saying you will not and should not break your ethical injunctions because you are not smart enough to anticipate the consequences. Assuming this interpretation is correct, how smart would a mind have to be in order to safely break ethical injunctions?
Within the next 20 years or so, would you consider having a child and raising him/her to be your successor? Would you adopt? Have you donated sperm?
Edit: the first two questions are conditional on you not being satisfied with the progress on FAI.
Akrasia
Eliezer, you mentioned suffering from writer's molasses, and your solution was to write daily on OB/LW. I consider this a clever and successful overcoming of akrasia. What other success stories from your life in relation to akrasia could you share?
Do you think that just explaining biases to people helps them substantially overcome those biases, or does it take practice, testing, and calibration to genuinely improve one's rationality?
How would a utopia deal with humans' seemingly contradictory desires - the desire to go up in status and the desire to help lower-status people go up in status? Because helping lower-status people go up in status will hurt our own status positions. I remember you mentioning that in your utopia you would prefer not to reconfigure the human mind. So how would you deal with such a problem?
(If someone finds the premise of my question wrong, please point it out)
Eliezer, in Excluding the Supernatural, you wrote:
Ultimately, reductionism is just disbelief in fundamentally complicated things. If "fundamentally complicated" sounds like an oxymoron... well, that's why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't.
"Fundamentally complicated" does sound like an oxymoron to me, but I can't explain why. Could you? What is the contradiction?
To what extent is the success of your FAI project dependent upon the reliability of the dominant paradigm in Evolutionary Psychology (a la Tooby & Cosmides)?
Old, perhaps off-the-cuff, and perhaps outdated quote (9/4/02): “ well, the AI theory assumes evolutionary psychology and the FAI theory definitely assumes evolutionary psychology” (http://www.imminst.org/forum/lofiversion/index.php/t144.html).
Thanks for all your hard work.
Do you think that morality or rationality recommends placing no intrinsic weight or relevance on either a) backwards-looking considerations (e.g. having made a promise) as opposed to future consequences, or b) essentially indexical considerations (e.g. that I would be doing something wrong)?
5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be.
This disadvantages questions which are posted late (to a greater extent than would give people an optimal incentive to post questions early). (It also disadvantages questions which start with a low number of upvotes by historical accident and then are displayed low on the page, and are not viewed as much by users who might upvote them.)
It's not your fault; I just wish the LW software had a stati...
How does the goal of acquiring self-knowledge (for current humans) relate to the goal of happiness (insofar as such a goal can be isolated)?
If one aimed to be as rational as possible, how would this help someone (today) become happy? You have suggested (in conversation) that there might be a tradeoff such that those who are not perfectly rational might exist in an "unhappy valley". Can you explain this phenomenon, including how one could find themselves in such a valley (and how they might get out)? How much is this term meant to indicate an anal...
It seems like, if I'm trying to make up my mind about philosophical questions (like whether moral realism is true, or whether free will is an illusion) I should try to find out what professional philosophers think the answers to these questions are.
If I found out that 80% of professional philosophers who think about metaethical questions think that moral realism is true, and I happen to be an anti-realist, then I should be far less certain of my belief that anti-realism is true.
But surveys like this aren't done in philosophy (I don't think). Do you thin...
What does the fact that when you were celibate you espoused celibacy say about your rationality?
I have questions. You say we must have one question per comment. So, I will have to make multiple posts.
1) Is there a domain where rational analysis does not apply?
What is the practical value (e.g., predicted impact) of the Less Wrong website (and similar public communication regarding rationality) with respect to FAI and/or existential risk outcomes?
(E.g., Is there an outreach objective? If so, for what purpose?)
Well, Eliezer's reply to this comment prompts a follow-up question:
In "Free to optimize", you alluded to "the gift of a world that works on improved rules, where the rules are stable and understandable enough that people can manipulate them and optimize their own futures together". Can you say more about what you imagine such rules might be ?
In reference to this comment, can you give us more information about the interface between the modules? Also, what leads you to believe that a human-level intelligence can be decomposed nicely in such a fashion?
Sticking with biography/family background:
Anyone who has read this poignant essay knows that Eliezer had a younger brother who died tragically young. If it is not too insensitive of me, may I ask what the cause of death was?
Do you think a cog psych research program on “moral biases” might be helpful (e.g., regarding existential risk reduction)?
[The conceptual framework I am working on (philosophy dissertation) targets a prevention-amenable form of "moral error" that requires (a) the perpetrating agent's acceptance of the assessment of moral erroneousness (i.e., individual relativism to avoid categoricity problems), and (b) that the agent, for moral reasons, would not have committed the error had he been aware of the erroneousness (i.e., sufficiently motivating v. moral indifference, laziness, and/or akrasia).]
Of all the people you have ever met in your life, whom would you consider, if anyone, to be just a few hairs' lengths away from your level? If properly taught, do you think this person could become the next Eliezer Yudkowsky?
Do you act all rational at home... or do you switch out of work mode and stuff pizza and beer in front of the TV like any normal akrasic person? (And if you do act all rational, what do your partner/family/housemates make of it? Do any of them ever give you a slap upside the head?)
:-)
Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?
Worded differently, how have you come to the conclusion that "maximizing utility" is the goal to be optimized, as opposed to, say, virtue seeking?
Let's say someone (today, given present technology) has the goal of achieving rational self-insight into one's thinking processes and the goal of being happy. You have suggested (in conversation) such a person might find himself in an "unhappy valley" insofar as he is not perfectly rational. If someone today -- using current hedonic/positive psychology -- undertakes a program to be as happy as possible, what role would rational self-insight play in that program?
There seems to be two problems or components of the singularity program which are interchanged or conflated. Firstly, there is the goal of producing a GAI, say on the order of human intelligence (e.g., similar to the Data character from Star Trek.) Secondly, there is the goal or belief that a GAI will be strongly self-improving, to the extent that it reaches a super-human intelligence.
It is unclear to me that achieving the first goal means that the second goal is also achievable, or of a similar difficulty level. For example, I am inclined to think that w...
What if the friendly AI finds that our extrapolated volition is coherent and contains the value of 'self-determination' and concludes that it cannot meddle too much in our affairs? "Well, humankind, it looks like you don't want to have your destiny decided by a machine. My hands are tied. You need to save yourselves."
How do you reconcile the obvious conflict between the rationalization for why you persist in solving the hardest problem in all the cosmos, and the probability of it not being completed in time?
To unpack that: your pursuit of a theory of provably correct, recursively self-improving seed AGI is a daunting task, to say the least,
What is your current position on the FOOM effect, in which the exploding intelligence quickly acquires the surrounding matter for its own use, solving its computing needs by transforming everything nearby into something more computationally optimal - and does so by physical operations that are not at all obvious, yet entirely permitted and achievable from the "pure calculating" already granted to the ("seed") AI?
You(EY)'ve mentioned moral beliefs from time to time, but I don't recall you addressing morality directly at length. A commonly expressed view in rationalist circles is that there is no such thing, but I don't think that is your view. What is a moral judgement, and how do you arrive at them?
ETA: As Psy-Kosh points out, he has, so scratch that unless EY has something more to say on the matter.
What do you think of this paper arguing that Godel's reasoning is not constructively valid?
At the most recent Singularity Summit, you said:
I have to remind myself that it's not what's the most fun to do, it's not even what you have talent to do, it's what you need to do that you ought to be doing.
Aren't these the same thing? Are you saying that what you're doing now is not the most fun thing you could be doing?
In 2000, you said this:
When voting in the United States, follow this algorithm: Vote Libertarian when available; otherwise, vote for the strongest third party available (usually Reform, unless they have a really evil candidate); then vote for any candidate who isn't a lawyer; then vote Republican (at present, they're slightly better).
Would you stick by your assertion that in 2000, the republicans were "slightly better", and who or what did you mean they were slightly better for? From where I'm standing, albeit with the benefit of hindsight, i...
As a question for everyone (and as a counter argument to CEV),
Is it okay to take an individual human's rights of life and property by force as opposed to volitionally through a signed contract?
And the use of force does include imposing on them, without their signed volitional consent, such optimizations as the coherent extrapolated volition of humanity, but could maybe(?) not include their individual extrapolated volition.
A) Yes B) No
I would tentatively categorize this as one possible empirical test for Friendly AI. If the AI chooses A, this could point to an Unfriendly AI which stomps on human rights, which would be Really, Really Bad.
As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
4) If you reference certain things that are online in your question, provide a link.
5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]
Suggestions
Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting sort out the wheat from the chaff.
It's okay to attempt humor (but good luck, it's a tough crowd).
If a discussion breaks out about a question (f.ex. to ask for clarifications) and the original poster decides to modify the question, the top level comment should be updated with the modified question (make it easy to find your question, don't have the latest version buried in a long thread).
Update: Eliezer's video answers to 30 questions from this thread can be found here.