Young Americans believe they have the best health in the world...

1 Louie 04 February 2013 06:03AM

Of course, it turns out they're actually last among developed nations in real health outcomes.

The U.S. ranks #1 among 17 affluent, western countries in the percentage of people aged 5 to 34 who rate their health as good. Unfortunately, when doctors look at people’s actual health, at indicators such as obesity, diabetes, and simply the chance that someone will die before his or her next birthday, the U.S. ranks last: young Americans are #17 out of 17 in real health.

Course recommendations for Friendliness researchers

62 Louie 09 January 2013 02:33PM

When I first learned about Friendly AI, I assumed it was mostly a programming problem. As it turns out, it's actually mostly a math problem. That's because most of the theory behind self-reference, decision theory, and general AI techniques hasn't been formalized and solved yet. Thus, when people ask me what they should study in order to work on Friendliness theory, I say "Go study math and theoretical computer science."

But that's not specific enough. Should aspiring Friendliness researchers study continuous or discrete math? Imperative or functional programming? Topology? Linear algebra? Ring theory?

I do, in fact, have specific recommendations for which subjects Friendliness researchers should study. And so I worked with a few of my best interns at MIRI to provide recommendations below:

  • University courses. We carefully hand-picked courses on these subjects from four leading universities — but we aren't omniscient! If you're at one of these schools and can give us feedback on the exact courses we've recommended, please do so.
  • Online courses. We also linked to online courses, for the majority of you who aren't able to attend one of the four universities whose course catalogs we dug into. Feedback on these online courses is also welcome; we've only taken a few of them.
  • Textbooks. We have read nearly all the textbooks recommended below, along with many of their competitors. If you're a strongly motivated autodidact, you could learn these subjects by diving into the books on your own and doing the exercises.

Have you already taken most of the subjects below? If so, and you're interested in Friendliness research, then you should definitely contact me or our project manager Malo Bourgon (malo@intelligence.org). You might not feel all that special when you're in a top-notch math program surrounded by people who are as smart or smarter than you are, but here's the deal: we rarely get contacted by aspiring Friendliness researchers who are familiar with most of the material below. If you are, then you are special and we want to talk to you.

Not everyone cares about Friendly AI, and not everyone who cares about Friendly AI should be a researcher. But if you do care and you might want to help with Friendliness research one day, we recommend you consume the subjects below. Please contact me or Malo if you need further guidance. Or when you're ready to come work for us.

 

Cognitive Science — COGSCI C127 / PHIL 190 / 6.804J / 85-213 / Free Online
If you're endeavoring to build a mind, why not start by studying your own? It turns out we know quite a bit: human minds are massively parallel, highly redundant, and although parts of the cortex and neocortex seem remarkably uniform, there are definitely dozens of special purpose modules in there too. Know the basic details of how the only existing general purpose intelligence currently functions.
Heuristics and Biases — ECON 119 / IPS 207A / 15.847 / 80-302 / Free Online
While cognitive science will tell you all the wonderful things we know about the immense, parallel nature of the brain, there's also the other side of the coin. Evolution optimized our brains for rapid thought operations that complete in 100 steps or fewer. Your brain is going to make stuff up to cover up that it's mostly cutting corners. These errors don't feel like errors from the inside, so you'll have to learn how to patch the ones you can and then move on.

PS - We should probably design our AIs better than this.
Functional Programming — COMPSCI 61A / MATH 198 / 6.005 / 15-150 / Free Online
There are two major branches of programming: functional and imperative. Unfortunately, most programmers only learn imperative languages (like C++ or Python). I say unfortunately because these languages achieve all their power through what programmers call "side effects". The major downside for us is that this means they can't be efficiently machine-checked for safety or correctness. The first self-modifying AIs will hopefully be written in functional programming languages, so learn something useful like Haskell or Scheme.
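The contrast above can be sketched even in Python (a hypothetical illustration with made-up function names; a real course would use Haskell or Scheme):

```python
total = 0

def add_imperative(x):
    # Imperative style: mutates shared state (a "side effect"), so the
    # result depends on hidden context, not just the arguments.
    global total
    total += x
    return total

def add_functional(running_total, x):
    # Functional style: the output depends only on the inputs, so the
    # same call always returns the same answer -- a property an
    # automated checker can rely on.
    return running_total + x

assert add_functional(0, 5) == add_functional(0, 5) == 5
assert add_imperative(5) != add_imperative(5)   # returns 5, then 10
```

The second assertion is the whole problem in miniature: calling the same function twice with the same argument gives two different answers.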
Discrete Math — MATH 55 / CME 305 / 6.042J/18.062J / 21-228 / Free Online
Much like programming, there are two major branches of mathematics as well: Discrete and continuous. It turns out a lot of physics and all of modern computation is actually discrete. And although continuous approximations have occasionally yielded useful results, sometimes you just need to calculate it the discrete way. Unfortunately, most engineers squander the majority of their academic careers studying higher and higher forms of calculus and other continuous mathematics. If you care about AI, study discrete math so you can understand computation and not just electricity.

Also, you should pick up enough graph theory in this course to handle the basic mechanics of decision theory -- which you're gonna want to learn later.
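As a taste of those graph-theory mechanics, here's a minimal sketch (hypothetical example) of testing reachability in a directed graph with breadth-first search:

```python
from collections import deque

def reachable(graph, start, goal):
    # Breadth-first search: explore the graph one layer at a time,
    # remembering which nodes we've already seen.
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# A tiny directed graph: d -> a -> b -> c
g = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"]}
assert reachable(g, "a", "c") and not reachable(g, "c", "a")
```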
Linear Algebra — MATH 110 / MATH 113 / 18.06 / 21-341 / Free Online
Linear algebra is the foundation of quantum physics and a huge amount of probability theory. It even shows up in analyses of things like neural networks. You can't possibly get by in machine learning (later) without speaking linear algebra. So learn it early in your scholastic career.
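A minimal sketch of what "speaking linear algebra" means computationally, using nothing beyond plain Python: a matrix acting on a vector, i.e. a linear map applied to a point.

```python
def mat_vec(A, v):
    # Matrix-vector product: each output coordinate is the dot product
    # of one row of A with v.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Example: the 90-degree rotation matrix [[0, -1], [1, 0]] sends the
# x-axis unit vector (1, 0) to the y-axis unit vector (0, 1).
assert mat_vec([[0, -1], [1, 0]], [1, 0]) == [0, 1]
```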
Set Theory — MATH 135 / MATH 161 / 24.243 / 21-602 / Free Online
Like learning how to read in mathematics. But instead of building up letters into words, you'll be building up axioms into theorems. This will introduce you to the program of using axioms to capture intuition, finding problems with the axioms, and fixing them.
Mathematical Logic — MATH 125A / CS 103 / 24.241 / 21-600 / Free Online
The mathematical equivalent of building words into sentences. Essential for the mathematics of self-modification. And even though Sherlock Holmes and other popular depictions make it look like magic, it's just lawful formulas all the way down.
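As a tiny illustration of "lawful formulas all the way down" (hypothetical example): a machine can brute-force a truth table to check a classical tautology.

```python
from itertools import product

def de_morgan_holds(p, q):
    # De Morgan's law: not(p and q) is equivalent to (not p) or (not q).
    return (not (p and q)) == ((not p) or (not q))

# Checking every row of the truth table verifies the tautology.
assert all(de_morgan_holds(p, q) for p, q in product([True, False], repeat=2))
```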
Efficient Algorithms and Intractable Problems — COMPSCI 170 / CS 161 / 6.046J / 15-451 / Free Online
Like building sentences into paragraphs. Algorithms are the recipes of thought. One of the more amazing things about algorithm design is that it's often possible to tell how long a process will take to solve a problem before you actually run the process to check it. Learning how to design efficient algorithms like this will be a foundational skill for anyone programming an entire AI, since AIs will be built entirely out of collections of algorithms.
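The claim that you can bound a process's cost before running it can be made concrete with a toy sketch (hypothetical code): counting probes in linear versus binary search.

```python
def linear_probes(xs, target):
    # Linear scan: worst case looks at all n elements -- O(n).
    for i, x in enumerate(xs):
        if x == target:
            return i + 1            # probes used
    return len(xs)

def binary_probes(xs, target):      # xs must be sorted
    # Binary search: halves the interval each step -- O(log n).
    lo, hi, probes = 0, len(xs) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return probes
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return probes

xs = list(range(1024))
assert linear_probes(xs, 1023) == 1024   # worst case: n probes
assert binary_probes(xs, 1023) <= 11     # at most ~log2(1024) + 1 probes
```

We knew the `<= 11` bound before running anything: that's the point of analyzing the algorithm rather than timing it.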
Numerical Analysis — MATH 128A / CME 206 / 18.330 / 21-660 / Free Online
There are ways to systematically design algorithms that are only slightly wrong when the input data has tiny errors. And then there are the programs written by amateur programmers who never took this class. Most programmers skip this course because it's not required. But for us, getting the right answer is very much required.
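A classic sketch of why this matters, using only Python's standard library: naive floating-point summation accumulates rounding error that a compensated summation algorithm (here `math.fsum`) avoids.

```python
import math

# 0.1 has no exact binary representation, so each addition rounds a little.
naive = sum([0.1] * 10)
assert naive != 1.0                       # accumulated rounding error

# fsum tracks the lost low-order bits and recovers the exact answer.
assert math.fsum([0.1] * 10) == 1.0
```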
Computability and Complexity — COMPSCI 172 / CS 154 / 6.840J / 15-453 / Free Online
This is where you get to study computing at its most theoretical. Learn about the Church-Turing thesis, the universal nature and applicability of computation, and how, just like AIs, everything else is algorithms... all the way down.
Quantum Computing — COMPSCI 191 / CS 259Q / 6.845 / 33-658 / Free Online
It turns out that our universe doesn't run on Turing Machines, but on quantum physics. And something called BQP is the class of algorithms that are actually efficiently computable in our universe. Studying the efficiency of algorithms relative to classical computers is useful if you're programming something that only needs to work today. But if you need to know what is efficiently computable in our universe (at the limit) from a theoretical perspective, quantum computing is the only way to understand that.
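A minimal sketch of the underlying math (hypothetical example): simulating a single qubit classically. A Hadamard gate sends |0> to an equal superposition, and measurement probabilities are the squared amplitudes.

```python
import math

# The Hadamard gate as a 2x2 matrix.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1.0, 0.0]                                  # the |0> state
state = [sum(H[i][j] * state[j] for j in range(2))  # apply H
         for i in range(2)]
probs = [amp * amp for amp in state]                # Born rule

assert all(abs(p - 0.5) < 1e-9 for p in probs)      # 50/50 outcomes
```

Simulating n qubits this way takes 2^n amplitudes, which is exactly why quantum computers might efficiently compute things classical ones can't.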
Parallel Computing — COMPSCI 273 / CS 149 / 18.337J / 15-418 / Free Online
There's a good chance that the first true AIs will have at least some algorithms that are inefficient. So they'll need as much processing power as we can throw at them. And there's every reason to believe that they'll be run on parallel architectures. There are a ton of issues that come up when you switch from assuming sequential instruction ordering to parallel processing. There's threading, deadlocks, message passing, etc. The good part about this course is that most of the problems are pinned down and solved: You're just learning the practice of something that you'll need to use as a tool, but won't need to extend much (if any).
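One of those message-passing patterns can be sketched in a few lines (hypothetical example): workers pull tasks from a shared thread-safe queue, so no two threads mutate the same state directly, and sentinel values shut everything down cleanly.

```python
import threading
import queue

tasks, results = queue.Queue(), queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:          # sentinel: time to shut down
            return
        results.put(n * n)     # all communication goes through queues

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(100):
    tasks.put(n)
for _ in threads:
    tasks.put(None)            # one sentinel per worker
for t in threads:
    t.join()

total = sum(results.get() for _ in range(100))
assert total == sum(n * n for n in range(100))
```

The queues serialize all shared access, which is what makes this pattern deadlock-free without any explicit locks.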
Automated Program Verification — EE 219C / MATH 293A / 6.820 / 15-414 / Free Online
Remember how I told you to learn functional programming way back at the beginning? Now that you've written your code in functional style, you'll be able to run automated and interactive theorem provers on it to help verify that your code matches your specs. Errors don't make programs better, and all large programs that aren't formally verified are reliably *full* of errors. Experts who have thought about the problem for more than 5 minutes agree that incorrectly designed AI could cause disasters, so world-class caution is advisable.
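Real verification needs tools like Coq, but the underlying idea of checking an implementation against a spec can be sketched in miniature (hypothetical example; exhaustive checking over a finite domain stands in for a proof):

```python
def spec_abs(x):
    # The specification: what absolute value *means*.
    return x if x >= 0 else -x

def impl_abs(x):
    # A hypothetical implementation under test.
    return int((x * x) ** 0.5)

# Mechanically check the implementation against the spec over a finite
# domain. A theorem prover would do this over *all* inputs, symbolically.
mismatches = [x for x in range(-100, 101) if spec_abs(x) != impl_abs(x)]
assert mismatches == []
```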
Combinatorics and Discrete Probability — COMPSCI 174 / CS 109 / 6.042J / 21-301 / Free Online
Life is uncertain and AIs will handle that uncertainty using probabilities. Also, probability is the foundation of the modern concept of rationality and the modern field of machine learning. Probability theory has the same foundational status in AI that logic has in mathematics. Everything else is built on top of probability.
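A worked instance of discrete probability (the classic birthday problem, computed exactly with combinatorics rather than simulated):

```python
from math import comb

def p_shared_birthday(n, days=365):
    # P(at least one shared birthday) = 1 - P(all n birthdays distinct).
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days
    return 1 - p_all_distinct

assert p_shared_birthday(23) > 0.5   # the famous 23-person threshold
assert comb(4, 2) == 6               # "4 choose 2": ways to pick the pair
```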
Bayesian Modeling and Inference — STAT 210A / STATS 270 / 6.437/438 / 36-266 / Free Online
Now that you've learned how to calculate probabilities, how do you combine and compare all the probabilistic data you have? Like many choices before, there is a dominant paradigm (frequentism) and a minority paradigm (Bayesianism). If you learn the wrong method here, you're deviating from a knowably correct framework for integrating degrees of belief about new information, and embracing a collection of special-purpose, ad-hoc statistical solutions that often break silently and without warning. Also, quite embarrassingly, frequentism's ability to get things right is bounded by how well it later turns out to have agreed with Bayesian methods anyway. Why not just do the correct thing from the beginning and not have your lunch eaten by Bayesians every time the two of you disagree?
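A worked Bayesian update, with hypothetical numbers chosen purely for illustration: a diagnostic test with 99% sensitivity and 95% specificity, for a condition with a 1% base rate.

```python
prior = 0.01                   # P(condition)
p_pos_given_cond = 0.99        # sensitivity
p_pos_given_not = 0.05         # false-positive rate (1 - specificity)

# Law of total probability, then Bayes' rule.
p_pos = prior * p_pos_given_cond + (1 - prior) * p_pos_given_not
posterior = prior * p_pos_given_cond / p_pos

# Despite the "99% accurate" test, P(condition | positive) is only ~17%,
# because the prior is so low. Ignoring the base rate is exactly the
# kind of silent breakage the paragraph above warns about.
assert abs(posterior - 1 / 6) < 1e-9
```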
Probability Theory — MATH 218A/B / MATH 230A/B/C / 6.436J / 36-225/21-325 / Free Online
No more applied probability: here be theory! Deep probability theory is something you're going to have to extend one day to help build up the field of AI. So you need to know, inside and out, why everything you're doing works.
Machine Learning — COMPSCI 189 / CS 229 / 6.867 / 10-601 / Free Online
Now that you've chosen the right branch of math, the right kind of statistics, and the right programming paradigm, you're prepared to study machine learning (aka statistical learning theory). There are lots of algorithms that leverage probabilistic inference. Here you'll start learning techniques like clustering and mixture models -- things that cash out as precise, technical definitions of concepts that normally have rather confused or confusing English definitions.
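As a sketch of one such technique, here is a toy one-dimensional k-means clustering (hypothetical example): "cluster" cashes out as "alternate between assigning points to their nearest center and moving each center to its cluster's mean".

```python
def kmeans_1d(points, centers, steps=10):
    for _ in range(steps):
        clusters = {c: [] for c in centers}
        # Assignment step: each point joins its nearest center.
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(ps) / len(ps) for c, ps in clusters.items() if ps]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers = kmeans_1d(data, [0.0, 5.0])
assert abs(centers[0] - 1.0) < 0.1 and abs(centers[1] - 10.0) < 0.1
```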
Artificial Intelligence — COMPSCI 188 / CS 221 / 6.034 / 15-381 / Free Online
We made it! We're finally doing some AI work! Logical inference, heuristic development, and the other techniques here will leverage all the stuff you just learned in machine learning. While modern, mainstream AI has many useful techniques to offer you, the authors will tell you outright that "the princess is in another castle". Or rather, there isn't a princess of general AI algorithms anywhere -- not yet. We're gonna have to go back to mathematics and build our own methods ourselves.
Incompleteness and Undecidability — MATH 136 / PHIL 152 / 18.511 / 80-311 / Free Online
Probably the most celebrated results in mathematics are the negative results of Kurt Gödel: no finite set of axioms can allow all arithmetic statements to be decided as either true or false... and no set of self-referential axioms can even "believe" in its own consistency. Well, that's a darn shame, because recursively self-improving AI is going to need to side-step these theorems. Eventually, someone will unlock the key to overcoming this difficulty with self-reference, and if you want to help us do it, this course is part of the training ground.
Metamathematics — MATH 225A/B / PHIL 151 / 18.515 / 21-600 / Free Online
Working within a framework of mathematics is great. Working above mathematics -- on mathematics -- with mathematics, is what this course is about. This would seem to be the most obvious first step to overcoming incompleteness somehow. Problem is, it's definitely not the whole answer. But it would be surprising if there were no clues here at all.
Model Theory — MATH 229 / MATH 290B / 24.245 / 21-603 / Free Online
One day, when someone does side-step self-reference problems enough to program a recursively self-improving AI, the guy sitting next to her who glances at the solution will go "Gosh, that's a nice bit of Model Theory you got there!"

Think of Model Theory as a formal way to understand what "true" means.
Category Theory — MATH 245A / MATH 198 / 18.996 / 80-413 / Free Online
Category theory is the precise way that you check if structures in one branch of math represent the same structures somewhere else. It's a remarkable field of meta-mathematics that nearly no one knows... and it could hold the keys to importing useful tools to help solve dilemmas in self-reference, truth, and consistency.
Outside recommendations
 
 
Harry Potter and the Methods of Rationality
Highly recommended book of light, enjoyable reading that predictably inspires people to realize FAI is an important problem AND that they should probably do something about that.

You can start reading this immediately, before any of the above courses.
 
Global Catastrophic Risks
A good primer on xrisks and why they might matter. SPOILER ALERT: They matter.

You can probably skim read this early on in your studies. Right after HP:MoR.
 
The Sequences
Rationality: the indispensable art of non-self-destruction! There are manifold ways you can fail at life... especially since your brain is made out of broken, undocumented spaghetti code. You should learn more about this ASAP. That goes double if you want to build AIs.

I highly recommend you read this before you get too deep into your academic career. For instance, I know people who went to college for 5 years, while somehow managing to learn nothing. That's because instead of learning, they merely recited the teacher's password every semester until they could dump whatever they "learned" out of their heads as soon as they walked out of the final. Don't let this happen to you! This, and a hundred other useful lessons like it about how to avoid predictable, universal errors in human reasoning and behavior await you in The Sequences!
 
Good and Real
A surprisingly thoughtful book on decision theory and other paradoxes in physics and math that can be dissolved. Reading this book is 100% better than continuing to go through your life with a hazy understanding of how important things like free will, choice, and meaning actually work.

I recommend reading this right around the time you finish up your quantum computing course.
 
MIRI Research Papers
MIRI has already published 30+ research papers that can help orient future Friendliness researchers. The work is pretty fascinating and readily accessible for people interested in the subject. For example: How do different proposals for value aggregation and extrapolation work out? What are the likely outcomes of different intelligence explosion scenarios? Which ethical theories are fit for use by an FAI? What improvements can be made to modern decision theories to stop them from diverging from winning strategies? When will AI arrive? Do AIs deserve moral consideration? Even though most of your work will be more technical than this, you can still gain a lot of shared background knowledge and more clearly see where the broad problem space is located.

I'd recommend reading these anytime after you finish reading The Sequences and Global Catastrophic Risks.
 
Universal Artificial Intelligence
A useful book on "optimal" AI that gives a reasonable formalism for studying how the most powerful classes of AIs would behave under conservative safety design scenarios (i.e., lots and lots of reasoning ability).

Wait until you finish most of the coursework above before trying to tackle this one.

 

Do also look into: Formal Epistemology, Game Theory, Decision Theory, and Deep Learning.

 

Bounding the impact of AGI

17 Louie 18 December 2012 07:47PM

For those of you interested, András Kornai's paper "Bounding the impact of AGI" from this year's AGI-Impacts conference at Oxford had a few interesting ideas (which I've excerpted below).

Summary:

  1. Acceptable risk tolerances for AGI design can be determined using standard safety engineering techniques from other fields
  2. Mathematical proof is the only available tool to secure the tolerances required to prevent intolerable increases in xrisk
  3. Automated theorem proving will be required so that the proof can reasonably be checked by multiple human minds

Safety engineering

Since the original approach of Yudkowsky (2006) to friendly AI, which sought mathematical guarantees of friendliness, was met with considerable skepticism, we revisit the issue of why such guarantees are essential. In designing radioactive equipment, a reasonable guideline is to limit emissions to several orders of magnitude below the natural background radiation level, so that human-caused dangers are lost in the noise compared to the pre-existing threat we must live with anyway. In the full paper, we take the “big five” extinction events that occurred within the past half billion years as background, and argue that we need to design systems with a failure rate below 10−63 per logical operation.

What needs to be emphasized in the face of this requirement is that the very best physical measurements have only one part in 1017 precision, not to speak of social and psychological phenomena where our understanding is considerably weaker. What this means is that guarantees of the requisite sort can only be expected from mathematics, where our measurement precision is already considerably better.

 

How reliable is mathematics?

The period since World War II has brought incredible advances in mathematics, such as the Four Color Theorem (Appel and Haken 1976), Fermat’s Last Theorem (Wiles 1995), the classification of finite simple groups (Gorenstein 1982, Aschbacher 2004), and the Poincare conjecture (Perelman 1994). While the community of mathematicians is entirely convinced of the correctness of these results, few individual mathematicians are, as the complexity of the proofs, both in terms of knowledge assumed from various branches of mathematics and in terms of the length of the deductive chain, is generally beyond our ken. Instead of a personal understanding of the matter, most of us now rely on argumentum ad verecundiam: well Faltings and Ribet now think that the Wiles-Taylor proof is correct, and even if I don’t know Faltings or Ribet at least I know and respect people who know and respect them, and if that’s not good enough I can go and devote a few years of my life to understand the proof for good. Unfortunately, the communal checking of proofs often takes years, and sometimes errors are discovered only after a decade has passed: the hole in the original proof of the Four Color Theorem (Kempe 1879) was detected by Heawood in 1890. Tomonaga in his Nobel lecture (1965) describes how his team’s work in 1947 uncovered a major problem in Dancoff (1939):

Our new method of calculation was not at all different in its contents from Dancoff’s perturbation method, but had the advantage of making the calculation more clear. In fact, what took a few months in the Dancoff type of calculation could be done in a few weeks. And it was by this method that a mistake was discovered in Dancoff’s calculation; we had also made the same mistake in the beginning.

To see that such long-hidden errors are by no means a thing of the past, and to observe the ‘web of trust’ method in action, consider the following example from Mohr (2012).

The eighth-order coefficient A1(8) arises from 891 Feynman diagrams of which only a few are known analytically. Evaluation of this coefficient numerically by Kinoshita and coworkers has been underway for many years (Kinoshita, 2010). The value used in the 2006 adjustment is A1(8) = -1.7283(35) as reported by Kinoshita and Nio (2006). However, (...) it was discovered by Aoyama et al. (2007) that a significant error had been made in the calculation. In particular, 2 of the 47 integrals representing 518 diagrams that had not been confirmed independently required a corrected treatment of infrared divergences. (...) The new value is (Aoyama et al., 2007) A1(8) = 1.9144(35); (111) details of the calculation are given by Aoyama et al. (2008). In view of the extensive effort made by these workers to ensure that the result in Eq. (111) is reliable, the Task Group adopts both its value and quoted uncertainty for use in the 2010 adjustment.

 

Assuming no more than three million mathematics and physics papers published since the beginnings of scientific publishing, and no less than the three errors documented above, we can safely conclude that the overall error rate of the reasoning used in these fields is at least 10−6 per paper.

 

The role of automated theorem-proving

That human reasoning, much like manual arithmetic, is a significantly error-prone process comes as no surprise. Starting with de Bruijn’s Automath (see Nederpelt et al 1994) logicians and computer scientists have invested significant effort in mechanized proof checking, and it is indeed only through such efforts, in particular through the Coq verification (Gonthier 2008) of the entire logic behind the Appel and Haken proof that all lingering doubts about the Four Color Theorem were laid to rest. The error in A1(8) was also identified by using FORTRAN code generated by an automatic code generator (Mohr et al 2012).

To gain an appreciation of the state of the art, consider the theorem that finite groups of odd order are solvable (Feit and Thompson 1963). The proof, which took two humans about two years to work out, takes up an entire issue of the Pacific Journal of Mathematics (255 pages), and it was only this year that a fully formal proof was completed by Gonthier’s team (see Knies 2012). The effort, 170,000 lines, 15,000 definitions, 4,200 theorems in Coq terms, took person-decades of human assistance (15 people working six years, though many of them part-time) even after the toil of Bender and Glauberman (1995) and Peterfalvi (2000), who have greatly cleaned up and modularized the original proof, in which elementary group-theoretic and character-theoretic argumentation was completely intermixed.

The classification of simple finite groups is two orders of magnitude bigger: the effort involved about 100 humans, the original proof is scattered among 20,000 pages of papers, the largest (Aschbacher and Smith 2004a,b) taking up two volumes totaling some 1,200 pages. While everybody capable of rendering meaningful judgment considers the proof to be complete and correct, it must be somewhat worrisome at the 10−64 level that there are no more than a couple of hundred such people, and most of them have something of a vested interest in that they themselves contributed to the proof. Let us suppose that people who are convinced that the classification is bug-free are offered the following bet by some superior intelligence that knows the answer. You must enter a room with as many people as you can convince to come with you and push a button: if the classification is bug-free you will each receive $100, if not, all of you will immediately die. Perhaps fools rush in where angels fear to tread, but on the whole we still wouldn’t expect too many takers.

Whether the classification of finite simple groups is complete and correct is very hard to say – the planned second generation proof will still be 5,000 pages, and mechanized proof is not yet in sight. But this is not to say that gaining mathematical knowledge of the required degree of reliability is hopeless, it’s just that instead of monumental chains of abstract reasoning we need to retreat to considerably simpler ones. Take, for example, the first Sylow Theorem, that if the order of a finite group G is divisible by some prime power pn, G will have a subgroup H of this order. We are absolutely certain about this. Argumentum ad verecundiam of course is still available, but it is not needed: anybody can join the hive-mind by studying the proof. The Coq verification contains 350 lines, 15 definitions, 90 theorems, and took 2 people 2 weeks to produce. The number of people capable of rendering meaningful judgment is at least three orders of magnitude larger, and the vast majority of those who know the proof would consider betting their lives on the truth of this theorem an easy way of winning $100 with no downside risk.

 

Further remarks

Not only do we have to prove that the planned AGI will be friendly, the proof itself has to be short enough to be verifiable by humans. Consider, for example, the fundamental theorem of algebra. Could it be the case that we, humans, are all deluded into thinking that an n-th degree polynomial will have roots? Yes, but this is unlikely in the extreme. If this so-called theorem is really a trap laid by a superior intelligence we are doomed anyway, humanity can find its way around it no more than a bee can find its way around the windowpane. Now consider the four-color theorem, which is still outside the human-verifiable range. It is fair to say that it would be unwise to create AIs whose friendliness critically depends on design limits implied by the truth of this theorem, while AIs whose friendliness is guaranteed by the fundamental theorem of algebra represent a tolerable level of risk. 

Recently, Goertzel and Pitt (2012) have laid out a plan to endow AGI with morality by means of carefully controlled machine learning. Much as we are in agreement with their goals, we remain skeptical about their plan meeting the plain failure engineering criteria laid out above.

LessWrong podcasts

38 Louie 03 December 2012 08:44AM

Today we're announcing a partnership with Castify to bring you Less Wrong content in audio form. Castify gets blog content read by professional readers and delivers it to their subscribers as a podcast so that you can listen to Less Wrong on the go. The founders of Castify are big fans of Less Wrong so they're rolling out their beta with some of our content.


[Embedded Castify player]
 Note: The embedded player (above) isn't live as of this posting, but should be deployed soon.

To see how many people will use this, we're having the entire Mysterious Answers to Mysterious Questions core sequence read and recorded. We thought listening to it would be a great way for new readers to get caught up and for others to check out the quality of Castify's work. We will be adding more Less Wrong content based on community feedback, so let us know which content you'd like to see more of in the comments.

For instance: Which other sequences would you like to listen to? Would there be interest in an ongoing podcast channel for the promoted posts?

 

New WBE implementation

16 Louie 30 November 2012 11:16AM

It usually isn't profitable to pay attention to science news, since science journalists largely misinterpret new "breakthroughs". But I am somewhat interested in this story about "artificial brains" coming out of Canada.

Most large neuron simulations I've read about before don't actually do anything. But apparently there's a somewhat large new WBE implementation at the University of Waterloo that performs sub-humanly on several tasks while having similar weaknesses to human brains.

Curious what others think of this recent development.

 

Sign up to be notified about new LW meetups in your area

30 Louie 03 November 2012 10:38PM

LessWrong has rolled out a new feature on the user preference pages under "Location":

[Screenshot of the new meetup-notification checkbox]
This is UNCHECKED by default.  Also, we don't know by default where you live.

If you think you might want to meet up with other LWers, please input your location and check this box. Once enough people have signed up in a new area, it will make starting a LW meetup there that much more straightforward for the future organizer.  (You'll just get that email; they won't see your email address.)  In general, it seems like being able to know something about where LWers live would be helpful, so please consider entering your approximate location even if you don't check the box.

Thanks to Wesley Moore for deploying this upgrade to the LW backend.

The Singularity Institute is hiring an executive assistant near Berkeley

14 Louie 22 January 2012 07:47AM

The Singularity Institute is hiring an executive assistant for Executive Director Luke Muehlhauser.

Right now his limiter (besides the need for some sleep and recreation) is not (1) cognitive exhaustion after a certain number of hours or (2) akrasia, but instead (3) needing to spend lots of time doing things that don't need to be done by him: e.g. hunting down the best product for X and buying it, shopping for food, finding names and email addresses for the top 30 researchers in field X, finding motorcycle classes and a motorcycle so he can stop paying so much for cabs when he doesn't have time for public transport, scheduling meetings with dozens of donors and collaborators, finding a good location for activity X, preparing an itinerary and buying plane tickets, and hundreds of other small things. (Some of these are 'life' things, some of these are SI things, but hours are hours.) Luke may also ask his executive assistant to handle certain tasks for other SI staffers.

Benefits:

  • Work directly with some of the central figures of Less Wrong, especially Luke(prog)
  • Work from home most of the time, with a somewhat self-determined schedule
  • Trial period at $15/hr for 20 hrs/week; if all goes well then get hired full-time at SI's standard starting salary of $3k/month

Responsibilities:

  • Represent our organization in a professional manner at all times
  • Manage scheduling and appointments for Luke
  • Prepare and manage correspondence professionally and accurately
  • Coordinate travel arrangements for Luke
  • Online and local shopping and transport
  • Internet research
  • Whatever else Luke needs done

Job requirements:

  • Good interpersonal skills and strong team-player attitude
  • Capable of clean, professional written communication with proper spelling, punctuation, and grammar
  • Positive, friendly and helpful attitude
  • Ability to handle sensitive and/or confidential material and information
  • Must pay strong attention to detail
  • Professional demeanor, dedicated and reliable, conscientious
  • Computer & internet literate
  • Own a car
  • Live in or near Berkeley, CA

Bonus points if you...

  • ...have read the Core Sequences
  • ...have experience as a personal or executive assistant
  • ...have even better creative non-fiction writing skills than are required for professional correspondence
  • ...are handy with Google Scholar
  • ...know a good amount of math, statistics, computer science, or cognitive science
  • ...have some skills in graphic design / presentation design

How to apply:

Send an email to jobs@intelligence.org with the subject line "Executive Assistant Position". Include your cover letter as plain text in the email body, and attach your résumé in PDF format.

 

Singularity Institute $100,000 end-of-year fundraiser only 20% filled so far

5 Louie 27 December 2011 09:24PM

** cross-posted from http://singinst.org/2011winterfundraiser/ **

Contains detailed info about accomplishments and plans at SI. Thanks for supporting our work!  -Louie


ARTIFICIAL INTELLIGENCE MORE RELEVANT THAN EVER

Recent books like Machine Ethics from Cambridge University Press and Robot Ethics from MIT Press, along with the U.S. military-funded research that resulted in Governing Lethal Behavior in Autonomous Robots show that the world is waking up to the challenges of building safe and ethical AI. But these projects focus on limited AI applications and fail to address the most important concern: how to ensure that smarter-than-human AI benefits humanity. The Singularity Institute has been working on that problem longer than anybody, a full decade before the Singularity landed on the cover of TIME magazine.


ACCOMPLISHMENTS IN 2011

2011 was our biggest year yet. Since the year began, we have:

  • Held our annual Singularity Summit in New York City, with more than 900 in attendance. Speakers included inventor and futurist Ray Kurzweil, economist Tyler Cowen, PayPal co-founder Peter Thiel, Skeptic publisher Michael Shermer, Mathematica and WolframAlpha creator Stephen Wolfram, neuroscientist Christof Koch, MIT physicist Max Tegmark, and famed Jeopardy! contestant Ken Jennings.
  • Held a smaller Singularity Summit in Salt Lake City.
  • Held a one-week Rationality Minicamp and a ten-week Rationality Boot Camp.
  • Created the Research Associates program, which currently has 7 researchers coordinating with Singularity Institute.
  • Published our Singularity FAQ, IntelligenceExplosion.com, and Friendly-AI.com.
  • Wrote three chapters for Springer's upcoming volume The Singularity Hypothesis, along with four other research papers.
  • Began work on a new, clearer website design with lots of new content, which should go live Q1 2012.
  • Began outlining open problems in Singularity research to help outside collaborators better understand our research priorities.

 

FUTURE PLANS YOU CAN HELP SUPPORT

In the coming year, we plan to do the following:

  • Hold our annual Singularity Summit, in San Francisco this year.
  • Improve organizational transparency by creating a simpler, easier-to-use website that includes Singularity Institute planning and policy documents.
  • Publish a document of open research problems in Singularity research, to clarify the research space and encourage other researchers to contribute to our mission.
  • Add additional skilled researchers to our Research Associates program.
  • Publish a well-researched document making the case for existential risk reduction as optimal philanthropy.
  • Diversify our funding sources by applying for targeted grants and advertising our affinity credit card program.


We appreciate your support for our high-impact work. As Skype co-founder Jaan Tallinn said:

    We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines… Since we have only one shot at getting this transition right, the importance of Singularity Institute's work cannot be overestimated.

 


Now is your last chance to make a tax-deductible donation in 2011.

If you'd like to support our work: please donate now!

$100 off for Less Wrong: Singularity Summit 2011 on Oct 15 - 16 in New York

6 Louie 28 September 2011 02:01AM

There's still time left to register for the Singularity Summit in New York. But hurry because there are only a few weeks left!

Register now so you can meet Eliezer, AnnaSalamon, Lukeprog, and more! I'm particularly excited about several invited speakers, such as neuroscientist Christof Koch (who I've blogged about here) and author Sharon Bertsch McGrayne who recently published The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy.

Register now and use the $100 off Less Wrong discount code: LW2011. Hope to see you next month in New York!


 


Last day to register for Foresight@Google - Mountain View, California

2 Louie 21 June 2011 08:33PM

For those in the California bay area, I thought I'd post some details about the upcoming Foresight conference.

25th Anniversary Conference Celebration and Reunion Weekend
Google HQ in Mountain View, CA
June 25-26 2011

Speakers and panelists include: 
• WILLIAM ANDREGG - Founder/CEO of Halcyon Molecular
• MIKE GARNER, PhD - Chair of ITRS Emerging Research Materials
• MIKE NELSON - CTO of NanoInk
• LUKE NOSEK - Co-founder of PayPal, Founders Fund Partner
• Keynote: BARNEY PELL, PhD - Co-founder/CTO of Moon Express
• PAUL SAFFO, PhD - Wired, NYT-published strategist & forecaster
• SIR FRASER STODDART, PhD - Knighted for creation of molecular "switches" and a new field of nanochemistry
• THOMAS THEIS, PhD - IBM's Director of Physical Sciences
• Keynote: JIM VON EHR - Founder/President of Zyvex, the world's first successful molecular nanotech company

For the full speaker roster, as well as information on the 25th Anniversary Banquet, see the conference website: 

Space is limited! 

For $50 off, our supporters can register with the discount code: SIAI
