[LINK] Extra Credits: The Singularity
The show Extra Credits released a video today about the singularity. The show's audience is video gamers. Some of these gamers may not know about the singularity, but may become interested upon hearing about it.
The video itself is a very basic, non-technical introduction to the concept of an intelligence explosion, discussed in the context of video games. At the end, they plug the SIAI as a place to go for more info.
I thought the video and the plug were pretty awesome, and wanted to share. If you think they're awesome, too, then take a second to give the video a view. Let's positively reinforce this kind of behavior. Here's the link:
A Much Better Life?
(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)
The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization
Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.
Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.
LW Philosophers versus Analytics
By and large, I would bet money that the devoted, experienced, and properly sequenced LWer is a better philosopher than the average current philosophy major concentrating in the analytic tradition. I say this because I have regular philosophical conversations with both populations, and notice many philosophical desiderata lacking in my conversations with my classmates, from my school and others, that I find abundantly on this website. Those desiderata are roughly the twelve virtues. I find that though my classmates have healthy doses of curiosity, empiricism, and even scholarship, they are lacking in evenness, lightness, relinquishment, precision, perfectionism, and true humility.
How could that be? LW has built a huge positivized reductionist metaphysics, and a Bayesian epistemology which can almost be read as a self-improvement manual. These are unprecedented, and in some circles outrageous, truths. This is not to mention the original work that has been done in LW posts and comment trees on meta-ethics, ethics, biases, mathematics, rationality, quantum physics, economics, self-hacking, etc. We have here a self-updating, reliably transmittable, well-oiled machine, the likes of which philosophy has only rarely seen.
What is even more impressive to me about LW as a philosophical movement is that it seems to be nearly self-contained when it comes to philosophy. Most experienced LWers probably haven't read very much Kant, maybe some Wittgenstein or Quine; yet where philosophers spend their lives attacking these problems by building disconnected and competing philosophical systems specifically designed for each task, LWers can still somehow solve them using roughly one rather generally successful epistemology and metaphysics, which can together be called LWism.
So if you agree that LW does better philosophy than analytic philosophers, let's put our money where our mouths are, as our own philosophy suggests we should. I will post a series of discussion posts, each concentrating on one current-ish question from academic philosophy. In each post, I will cover the essentials of the problem, as well as provide external resources on it. Each post will also include a list of posts from the sequences which are recommended before participation. Each question will be one on which professional philosophers hold less than a 2-to-1 consensus, i.e., if more than two-thirds of professional philosophers agree on an answer, we won't bother, so as not to waste our time with small fish.
You will then cooperate in comment trees to find solutions and decide amongst them. Then I'll compare the LW solutions to the solutions given by a random sampling of vaguely successful analytic philosophers (I will use a university search for my sampling). I will compare the ratio of types of solutions in the two populations, and look for solutions that occur in one population but not the other; then I'll post the results, hopefully the next week. (Edit: This process of comparison will be the hardest part of this project for me, and if anyone with training or experience in statistics would like to help, please let me know, and we can work on the comparison and the report together.) My prediction is that we will be able to quickly reach a high consensus on many issues that analytics have not internally resolved.
The series will be called the "Enthusiastic Youngsters Formally Tackle Analytic Problems Test," or "the Eyftapt series" [pronounced: afe-taped]. Alternatively, Eyftapt could stand for the "Eliezer Yudkowsky and Friends Train Amazing Philosophers Test." Besides shedding moderate light on our philosophical competence and toolbox as compared with those of analytic philosophy, I'd also like to learn what LW training offers that analytics are currently missing, so that we can focus on that kind of training for our own benefit, and so that we can offer some advice to the analytics. That is, assuming my prediction that we'll do better is correct. This will not be as easy as comparing solutions, and I may need much more data than what I'll get out of this series, but it couldn't hurt to have a bunch of LWers doing difficult philosophy added to the available data.
What do you guys and gals think? Might you be interested in something like this? Mind you, it would be in discussion posts, since the main point is to discuss each issue.
(I know some of you cats don't like "philosophy", just call it "arguing about systems and elucidating messy language and thought in order to answer questions" instead. That is what I think we do better.)
BTW, if you have some problem you think we should work on, or if you think we would be really good at solving some problem, or really bad at it, compared to non-LW philosophy, message me or comment below, and I'll give you credit for the suggestion. These are the topics I have already decided on: universals/nominalism, correspondence/deflation/coherency, grue/induction, science realism/constructivism, what is math?, scientific underdetermination, a priori knowledge?, radical translation, the analytic/synthetic division, proper name/description, the deduction/induction division, modality and possible worlds, what does it mean for a grammatical sentence to be meaningless and how do you tell?, meta-philosophy (i.e., questions about philosophy), and finally, personal identity, roughly to be posted in that order.
(Edited after first posting; I just realized it may be worth mentioning that:)
I was not happy about coming to this view. I have always thought of myself as an aspiring analytic philosopher, and even got attached to the aesthetics of analytic philosophy. I thought of analytic philosophy as the new science of philosophy that finally got it right. It bothered me to no end that I had been led to have more faith in the philosophical maturity and competence of a bunch of amateurs on a blog than in the experts and students of the field that I planned to spend the rest of my life on. I have committed myself to the methods of academic-analytic philosophy publicly, in speeches and to my closest friends, colleagues, and family; to turn around in under a year and say that that was all naive enthusiasm, and that there's this blog of college kids that do it better, made me look very stupid in more than one pair of eyes I cared, and care, about. More than once, I have dissolved a question in my philosophy and cog-sci classes into an obvious cognitive error, explained why we are built to make this error, and left the class with little to do. Professors have praised me for this, and have even started approaching me outside of class to ask where I got my analysis from; their faces often showed sincere awe when I told them: "I made it up myself, but all the methods I used are neatly organized, generalized, and exemplified in this text called the 'sequences' on this blog of youngsters called 'Less Wrong'. It's only a few hundred pages, and kinda reads like G.E.B."
One day, a few months back, one of my professors with whom I am on particularly friendly terms asked me: "Every time we are in class and there is a question, you use this blog of yours, and it seems it gives you an answer for everything, so why are you still studying the analytics, instead of just studying your blog?" I think he meant the question sardonically, but that is not how I took it. I took it as a serious question about how to optimize my time if my goal is to do good philosophy. Not having a good answer to this question, and craving one, probably more than anything, is what prompted me to think of doing this series.
I may be wrong; it may be that LW has just as hard a time forming consensus on the issues that analytics have a hard time with, though I doubt it. But I am much more confident of this: even though I had very good training, have a very high GPA, have read every classic philosophy text I could get my hands on, and had been reading several modern philosophy journals, all before I even knew about LW, LW has done more for my philosophical maturity, competence, and persuasiveness than the entirety of the rest of my training. And I wouldn't doubt that many others have had similar thoughts.
Umbilical cord stem cell banking for future medical use
I had not heard of this until recently, and I doubt I am alone. I will quote from this source, which looks OK (there is also this).
What is the idea?
[D]uring the 1970s, researchers discovered that umbilical cord blood could supply the same kinds of blood-forming (hematopoietic) stem cells as a bone marrow donor. And so, umbilical cord blood began to be collected and stored.
How the cells are collected:
After a vaginal delivery, the umbilical cord is clamped on both sides and cut. In most cases, an experienced obstetrician or nurse collects the cord blood before the placenta is delivered. One side of the umbilical cord is unclamped, and a small tube is passed into the umbilical vein to collect the blood. After blood has been collected from the cord, needles are placed on the side of the surface of the placenta that was connected to the fetus to collect more blood and cells from the large blood vessels that fed the fetus.
How the cells are stored:
After cord-blood collection has taken place, the blood is placed into bags or syringes and is usually taken by courier to the cord-blood bank. Once there, the sample is given an identifying number. Then the stem cells are separated from the rest of the blood and are stored cryogenically (frozen in liquid nitrogen) in a collection facility, also known as a cord-blood bank.
How the cells can be used:
Then, if needed, blood-forming stem cells can be thawed and used in either autologous procedures (when someone receives his or her own umbilical cord blood in a transplant) or allogeneic procedures (when a person receives umbilical cord blood donated from someone else — a sibling, close relative, or anonymous donor).
The major upside:
The primary reason that parents consider banking their newborn's cord blood is that they have a child or close relative with, or a family medical history of, diseases that can be treated with bone marrow transplants.
The monetary cost:
The expense of collecting and storing the cord blood can be a deciding factor for many families. At a commercial cord-blood bank, you'll pay approximately $1,000-$2,000 to store a sample of cord blood, in addition to an approximately $100 yearly maintenance fee. You might also pay an additional fee of several hundred dollars for the cord-blood collection kit, courier service to the cord-blood bank, and initial processing.
Is it useful as "biological insurance"? Some say no:
Some doctors and organizations, such as the American Academy of Pediatrics (AAP), have expressed concern that cord-blood banks may capitalize on the fears of vulnerable new parents by providing misleading information about the statistics of bone marrow transplants.... The AAP doesn't recommend cord-blood banking for families who don't have a history of disease. That's because research has not yet determined the likelihood that a child would ever need his or her own stem cells, nor has it confirmed that transplantation using self-donated cells rather than cells from a relative or stranger is safer or more effective. According to the AAP, "private storage of cord blood as 'biological insurance' is unwise. However, banking should be considered if there is a family member with a current or potential need to undergo a stem cell transplantation."
Some say yes:
Other doctors and researchers support saving umbilical cord blood as a source of blood-forming stem cells in every delivery — mainly because of the promise that stem-cell research holds for the future. Most people would have little use for stem cells now, but research into the use of stem cells for treatment of disease is ongoing — and the future looks promising.
Three questions for LW:
1) Based on the above and/or outside information, would you recommend this procedure for two parents in their mid-20s living in the US with a middle class income whose babies will have no known risks for disease?
2) This seems like a good analogy for brain preservation in liquid nitrogen (cryonics). Why do you think that umbilical cord stem cell banking is FDA regulated whereas brain preservation procedures are not?
3) Instead of privately banking them, it is possible to donate umbilical cord stem cells to public banks, for the benefit of others. Similarly, it is possible to donate one's postmortem brain (and organs) to the public, for the benefit of others. In both cases, there is the third option to do neither. In making these decisions, how do you weigh your own interests against those of others?
New Q&A by Nick Bostrom
Underground Q&A session with Nick Bostrom (http://www.nickbostrom.com) on existential risks and artificial intelligence with the Oxford Transhumanists (recorded 10 October 2011).
Rationality Dojo Examples?
Early on in my exposure to Less Wrong I encountered the metaphor of Rationality as Martial Art. I assumed at some point I would be a member of an active Rationality Dojo, regularly training and becoming progressively more formidable as I learned the Art.
Several years later, though I meet regularly with an awesome local group whose company I greatly enjoy, I still feel as though my training has not yet begun.
Can anyone point to an example of an active Rationality Dojo? What do you do there (Games? Exercises? Kata?)? Who are the instructors? The closest examples that I've seen are the Mega- and Mini-camps; can anyone shed some additional light on what went on there?
Less Wrong/Rationality Symbol or Seal?
Hey Everyone,
I was wondering if the LW community has a particular symbol or sign that would serve to act as a graphical representation of the community?
Something we could wear or include in things like business cards, that would act as an acknowledgement to others of our commitment to rationality.
Any such thing exist, and if not, any good ideas?
I think the letters LW work pretty well if you could make them look more appealing.
Toward an overview analysis of intelligence explosion
A few months ago, Anna Salamon and I began to write an academic overview of intelligence explosion scenarios — something we could hand to people to explain all our major points in one brief article.
We encountered two major problems.
First: The Summit happened, taking all of our time. Then I was made Executive Director, taking all of my time in a more persistent way.
Second: Being thorough and rigorous in an overview of intelligence explosion requires deep knowledge of a huge spectrum of science and philosophy: history of AI progress, history of planning for the future mattering, AI architectures, hardware progress, algorithms progress, massive datasets, neuroscience, factors in the speed of scientific progress, embryo selection, whole brain emulation, properties of digital minds, AI convergent instrumental values, self-improvement dynamics, takeoff scenarios, heuristics and biases, unipolar and multipolar intelligence explosion scenarios, human values and value extrapolation, decision theory, arms races, human dynamics of technological development, technological forecasting, the economics of machine intelligence, anthropics, evolution, AI-boxing, and much more. Because we were trying to write a short article, we kept having to consume and compress an entire field of knowledge into a single paragraph (or even a single sentence!) with the perfect 2-8 citations, which occasionally meant several days of work for a single paragraph. (This is an extreme example, but it's the kind of problem we often encountered, in different degrees.)
So, we've decided to take a different approach and involve the broader community.
We'll be posting short snippets, short pieces of the puzzle, for feedback from the community. Sometimes we'll pose questions, or ask for references about a given topic, or ask for suggested additions to the dialectic we present.
In the end, we hope to collect and remix the best and most essential snippets, incorporate the feedback and additions provided by the community, and write up the final article.
Think of it as a Polymath Project for intelligence explosion analysis. It's collaborative science and philosophy. Members of Less Wrong tend to be smart, and each one has deep knowledge of one or a few fields that we may not have. We hope you'll join us, and contribute your expertise to this project.
I'll keep a table of contents of all the snippets here, as they are published.
Draft #1:
- Introduction
- Types of digital intelligence
- Why designing digital intelligence gets easier over time
- How long before digital intelligence?
- From digital intelligence to intelligence explosion
- [not finished]
- Snippet 1
- ...
Also see:
Intelligence Explosion analysis draft: introduction
I invite your feedback on this snippet from an intelligence explosion analysis that Anna Salamon and I have been working on.
This snippet is a possible introduction to the analysis article. Its purpose is to show readers that we aim to take seriously some common concerns about singularity thinking, to bring readers into Near Mode about the topic, and to explain the purpose and scope of the article.
Note that the target style is serious but still more chatty than a normal journal article.
_____
The best answer to the question, "Will computers ever be as smart as humans?" is probably "Yes, but only briefly."
Vernor Vinge
Humans may create human-level artificial intelligence in this century (Bainbridge 2006; Baum, Goertzel, and Goertzel 2011; Bostrom 2003; Legg 2008; Sandberg and Bostrom 2011). Shortly thereafter, we may see an “intelligence explosion” or “technological Singularity” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole (Chalmers 2010).
How likely is this, and what should we do about it? Others have discussed these questions previously (Turing 1950; Good 1965; Von Neumann 1966; Solomonoff 1985; Vinge 1993; Yudkowsky 2001, 2008a; Russell and Norvig 2010, sec. 26.3); we will build on their thinking in our review of the subject.
Singularity Skepticism
Many are skeptical of Singularity arguments because they associate such arguments with detailed storytelling — the “if and then” fallacy of “speculative ethics” by which an improbable conditional becomes a supposed actual (Nordmann 2007). They are right to be skeptical: hundreds of studies show that humans are overconfident of their beliefs (Moore and Healy 2008), regularly overestimate the probability of detailed visualized scenarios (Tversky and Kahneman 2002), and tend to seek out only information that confirms their current views (Nickerson 1998). AI researchers are not immune from these errors, as evidenced by a history of over-optimistic predictions going back to the 1956 Dartmouth conference on AI (Dreyfus 1972).
Nevertheless, mere mortals have at times managed to reason usefully and somewhat accurately about the future, even with little data. When Leo Szilard conceived of the nuclear chain reaction, he realized its destructive potential and filed his patent in a way that kept it secret from the Nazis (Rhodes 1995, 224–225). Svante Arrhenius' (1896) models of climate change lacked modern climate theory and data but, by making reasonable extrapolations from what was known of physics, still managed to predict (within 2°C) how much warming would result from a doubling of CO2 in the atmosphere (Crawford 1997). Norman Rasmussen's (1975) analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident that previous experts had not (McGrayne 2011, 180).
In planning for the future, how can we be more like Rasmussen and less like the Dartmouth conference? For a start, we can apply the recommendations of cognitive science on how to ameliorate overconfidence and other biases (Larrick 2004; Lilienfeld, Ammirati, and Landfield 2009). In keeping with these recommendations, we acknowledge unknowns and do not build models that depend on detailed storytelling. For example, we will not assume the continuation of Moore's law, nor that hardware trajectories determine software progress. To avoid nonsense, it should not be necessary to have superhuman reasoning powers; all that should be necessary is to avoid believing we know something when we do not.
One might think such caution would prevent us from concluding anything of interest, but in fact it seems that intelligence explosion may be a convergent outcome of many or most future scenarios. That is, an intelligence explosion may have fair probability, not because it occurs in one particular detailed scenario, but because, like the evolution of eyes or the emergence of markets, it can come about through many different paths and can gather momentum once it gets started. Humans tend to underestimate the likelihood of such “disjunctive” events, because they can result from many different paths (Tversky and Kahneman 1974). We suspect the considerations in this paper may convince you, as they did us, that this particular disjunctive event (intelligence explosion) is worthy of consideration.
First, we provide evidence which suggests that, barring global catastrophe and other disruptions to scientific progress, there is a significant probability we will see the creation of digital intelligence within a century. Second, we suggest that the arrival of digital intelligence is likely to lead rather quickly to intelligence explosion. Finally, we discuss the possible consequences of an intelligence explosion and which actions we can take now to influence those results.
These questions are complicated, the future is uncertain, and our chapter is brief. Our aim, then, can only be to provide a quick survey of the issues involved. We believe these matters are important, and our discussion of them must be permitted to begin at a low level because there is no other place to lay the first stones.
References for this snippet
- Bainbridge (2006). Managing Nano-Bio-Info-Cogno Innovations
- Baum, Goertzel & Goertzel (2011). How Long Until Human-Level AI?
- Bostrom (2003). Ethical Issues in Advanced Artificial Intelligence
- Chalmers (2010). The Singularity: A Philosophical Analysis
- Legg (2008). Machine Super Intelligence
- Sandberg & Bostrom (2011). Machine Intelligence Survey
- Turing (1950). Computing Machinery and Intelligence
- Good (1965). Speculations Concerning the First Ultraintelligent Machine
- Von Neumann (1966). Theory of Self-Reproducing Automata
- Solomonoff (1985). The Time Scale of Artificial Intelligence
- Vinge (1993). The Coming Technological Singularity
- Yudkowsky (2001). Creating Friendly AI
- Yudkowsky (2008a). Artificial Intelligence as a Positive and Negative Factor in Global Risk
- Russell & Norvig (2010). Artificial Intelligence: A Modern Approach, 3rd ed.
- Nordmann (2007). If and Then: A Critique of Speculative Nanoethics
- Moore & Healy (2008). The Trouble with Overconfidence
- Tversky & Kahneman (2002). Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment
- Nickerson (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
- Dreyfus (1972). What Computers Can't Do
- Rhodes (1995). The Making of the Atomic Bomb
- Arrhenius (1896). On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground
- Crawford (1997). Arrhenius' 1896 Model of the Greenhouse Effect in Context
- Rasmussen (1975). Reactor Safety Study (WASH-1400)
- McGrayne (2011). The Theory That Would Not Die
- Larrick (2004). Debiasing
- Lilienfeld, Ammirati & Landfield (2009). Giving Debiasing Away
- Tversky & Kahneman (1974). Judgment Under Uncertainty: Heuristics and Biases