
[Link] Some notes on Rationality in Harry Potter and the Methods of Rationality

12 mstevens 12 June 2012 12:44PM

http://www.etla.org/hpmor_notes.html

This is an ongoing project of mine, although I haven't worked on it in a while. I've been trying to extract the references to rationality (the actual "methods of rationality") from HPMoR. It also ended up including a few quotes that seemed interesting about how the story's going. I've linked references where I could find them.

I've only got as far as Chapter 40. Any extra submissions welcome.

At least one person (User:DavidGerard) suggested it deserved to be posted as a discussion link.

[Link] A superintelligent solution to the Fermi paradox

-1 Will_Newsome 30 May 2012 08:08PM

Here.

Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.

The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and describe it more completely in its own discussion post once more posts are up, hopefully including a few from people besides me, and once the archive gives a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong, I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take an interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the URL.

I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong, that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong should probably be put as comments on this discussion post. Thanks all!

[link] Nine Ways to Bias Open-Source AGI Toward Friendliness

8 Kaj_Sotala 22 May 2012 09:56AM

Ben Goertzel and Joel Pitt: Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology, Vol. 22, Issue 1, February 2012, pp. 116–141.

Abstract

While it seems unlikely that any method of guaranteeing human-friendliness (“Friendliness”) on the part of advanced Artificial General Intelligence (AGI) systems will be possible, this doesn’t mean the only alternatives are throttling AGI development to safeguard humanity, or plunging recklessly into the complete unknown. Without denying the presence of a certain irreducible uncertainty in such matters, it is still sensible to explore ways of biasing the odds in a favorable way, such that newly created AI systems are significantly more likely than not to be Friendly. Several potential methods of effecting such biasing are explored here, with a particular but non-exclusive focus on those that are relevant to open-source AGI projects, and with illustrative examples drawn from the OpenCog open-source AGI project. Issues regarding the relative safety of open versus closed approaches to AGI are discussed and then nine techniques for biasing AGIs in favor of Friendliness are presented:

1. Engineer the capability to acquire integrated ethical knowledge.
2. Provide rich ethical interaction and instruction, respecting developmental stages.
3. Develop stable, hierarchical goal systems.
4. Ensure that the early stages of recursive self-improvement occur relatively slowly and with rich human involvement.
5. Tightly link AGI with the Global Brain.
6. Foster deep, consensus-building interactions between divergent viewpoints.
7. Create a mutually supportive community of AGIs.
8. Encourage measured co-advancement of AGI software and AGI ethics theory.
9. Develop advanced AGI sooner not later.

In conclusion, and related to the final point, we advise the serious co-evolution of functional AGI systems and AGI-related ethical theory as soon as possible, before we have so much technical infrastructure that parties relatively unconcerned with ethics are able to rush ahead with brute force approaches to AGI development.

I'd say it's worth a read: they have pretty convincing criticism of the possibility of regulating AGI (section 3). I don't think their approach will work if there's a hard takeoff or a serious hardware overhang, though it could work if there isn't. It might also work if a hard takeoff were possible, but not immediately after the first AGI systems are developed.

[LINK] International variation in IQ – the role of parasites

5 David_Gerard 14 May 2012 12:08PM

This is interesting; I wonder if there's anything to it: International variation in IQ – the role of parasites (paper) by Christopher Hassall of Carleton University.

It strikes me as the sort of thing that could be as big an issue as lead in the environment. Raise the sanity waterline: improve health!

[META] Recent Posts for Discussion and Main

9 Oscar_Cunningham 13 May 2012 10:42AM

This link

http://lesswrong.com/r/all/recentposts

gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.

Test your forecasting ability, contribute to the science of human judgment

3 MichaelBishop 05 May 2012 03:07PM

As XFrequentist mentioned last August, the Intelligence Advanced Research Projects Activity (IARPA) is sponsoring a forecasting tournament "with the goal of improving forecasting methods for global events of national (US) interest. One of the teams (The Good Judgement Team) is recruiting volunteers to have their forecasts tracked. Volunteers will receive an annual honorarium ($150), and it appears there will be ongoing training to improve one's forecast accuracy (not sure exactly what form this will take)."

You can pre-register here.

Last year, approximately 2400 forecasters were assigned to one of eight experimental conditions.  I was the #1 forecaster in my condition.  It was fun, and I learned a lot, and eventually they are going to give me a public link so that I can brag about this until the end of time.  I'm participating again this year, though I plan to regress towards the mean.

I'll share the same info XFrequentist did last year below the fold because I think it's all still relevant.

continue reading »

[link] Why We Reason (psychology blog)

4 [deleted] 18 April 2012 11:40AM

Why We Reason is an excellent psychology blog that has a great deal of subject matter in common with Less Wrong. Some of the topics discussed on the blog include social psychology, judgement and decision making, neuroscience, cognitive biases, and creativity. And there's even a hint of the kind of "cognitive philosophy" practiced on Less Wrong.

The author, Sam McNerney, is blessed with the rare gift of being able to distill psychology topics for a lay audience, and his posts are very lucid.

There's also a handy archive of every post on the site.

[LINK] Neil deGrasse Tyson on killer asteroids

2 David_Gerard 03 April 2012 06:32PM

LessWrong is not big on discussion of non-AI existential risks. But Neil deGrasse Tyson notes killer asteroids not just as a generic problem, but as a specific one, naming Apophis as an imminent hazard.

So treat this as your exercise for today: what are the numbers, what is the risk, what are the costs, what actions are appropriate? Assume your answers need to work in the context of a society that's responded to the notion of anthropogenic climate change with almost nothing but blue vs. green politics.

[LINK] Poem: There are no beautiful surfaces without a terrible depth.

15 JenniferRM 27 March 2012 05:30PM

The poem is from someone whose online pseudonym is atiguhya padma.  I'll quote the first verse, the refrain, and the beginning of the second verse to give you enough flavor to decide if you want to follow the link.  There are about 9 verses total.

continue reading »

[LINK] Freeman Dyson reviews "Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything"

5 David_Gerard 23 March 2012 12:03AM

Freeman Dyson writes in the New York Review of Books about people who took up the crackpot offer. Not just complete cranks, but eminent scientists such as Eddington who got into crankery in their later years.

New thing I learnt: Dyson was not only a good friend of Immanuel Velikovsky, but also considers him a greatly underappreciated poet.

[LINK] Judea Pearl wins 2011 Turing Award

20 [deleted] 15 March 2012 04:32PM

Link to ACM press release.

In addition to their impact on probabilistic reasoning, Bayesian networks completely changed the way causality is treated in the empirical sciences, which are based on experiment and observation. Pearl's work on causality is crucial to the understanding of both daily activity and scientific discovery. It has enabled scientists across many disciplines to articulate causal statements formally, combine them with data, and evaluate them rigorously. His 2000 book Causality: Models, Reasoning, and Inference is among the most influential works in shaping the theory and practice of knowledge-based systems. His contributions to causal reasoning have had a major impact on the way causality is understood and measured in many scientific disciplines, most notably philosophy, psychology, statistics, econometrics, epidemiology and social science.

While that "major impact" still seems to me to be in the early stages of propagating through the various sciences, hopefully this award will inspire more people to study causality and Bayesian statistics in general.

Nick Bostrom: Moral uncertainty – towards a solution? [link, 2009]

-6 Kevin 08 March 2012 11:07AM

[link] New Scientist, on the distant future

1 fortyeridania 07 March 2012 10:15AM

The magazine has a bunch of articles dealing with what the world may be like 98,000 years hence. What with the local interest in the distant future, and with prediction itself, I thought I'd bring it to your attention.

http://www.newscientist.com/special/deep-future

Sapir-Whorf, Savings, and Discount Rates [Link]

1 Nic_Smith 04 March 2012 08:03AM

The language you speak may affect how you approach your finances, according to a working paper by economist Keith Chen (seen via posts by Frances Woolley at the Worthwhile Canadian Initiative and Economy Lab). It appears that languages that require more explicit future tense are associated with lower savings. A few interesting quotes from a quick glance:

...[I]n the World Values Survey a language’s FTR [Future-Time Reference] is almost entirely uncorrelated with its speakers’ stated values towards savings (corr = -0.07). This suggests that the language effects I identify operate through a channel which is independent of conscious attitudes towards savings. [emphasis mine]

Something else that I wasn't previously aware of:

Loewenstein (1988) finds a temporal reference-point effect: people demand much more compensation to delay receiving a good by one year (from today to a year from now) than they are willing to pay to move up consumption of that same good (from a year from now to today).
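To see why this is an anomaly, here is a quick derivation (my own illustration, not from the paper). Under standard exponential discounting with a one-year discount factor $\delta$, a good worth $v$ today is worth $\delta v$ if received a year from now, so the compensation $c$ demanded to delay and the payment $p$ offered to expedite should coincide:

$$\delta v + c = v \implies c = (1-\delta)v, \qquad v - p = \delta v \implies p = (1-\delta)v$$

Standard theory predicts $c = p$; Loewenstein's finding that $c$ is much larger than $p$ is what marks this as a reference-point effect rather than ordinary discounting.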

[link] How habits control our behavior, and how to modify them

25 Kaj_Sotala 21 February 2012 07:23AM

The New York Times just recently ran an article titled "How Companies Learn Your Secrets", which is partly about data mining and partly about habits. I thought the bits on habits offered many valuable insights on how to improve our behavior. Excerpts:

The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges. What’s unique about cues and rewards, however, is how subtle they can be. Neurological studies like the ones in Graybiel’s lab have revealed that some cues span just milliseconds. And rewards can range from the obvious (like the sugar rush that a morning doughnut habit provides) to the infinitesimal (like the barely noticeable — but measurable — sense of relief the brain experiences after successfully navigating the driveway). Most cues and rewards, in fact, happen so quickly and are so slight that we are hardly aware of them at all. But our neural systems notice and use them to build automatic behaviors.

Habits aren’t destiny — they can be ignored, changed or replaced. But it’s also true that once the loop is established and a habit emerges, your brain stops fully participating in decision-making. So unless you deliberately fight a habit — unless you find new cues and rewards — the old pattern will unfold automatically. [...]

Luckily, simply understanding how habits work makes them easier to control. Take, for instance, a series of studies conducted a few years ago at Columbia University and the University of Alberta. Researchers wanted to understand how exercise habits emerge. In one project, 256 members of a health-insurance plan were invited to classes stressing the importance of exercise. Half the participants received an extra lesson on the theories of habit formation (the structure of the habit loop) and were asked to identify cues and rewards that might help them develop exercise routines.

The results were dramatic. Over the next four months, those participants who deliberately identified cues and rewards spent twice as much time exercising as their peers. Other studies have yielded similar results. According to another recent paper, if you want to start running in the morning, it’s essential that you choose a simple cue (like always putting on your sneakers before breakfast or leaving your running clothes next to your bed) and a clear reward (like a midday treat or even the sense of accomplishment that comes from ritually recording your miles in a log book). After a while, your brain will start anticipating that reward — craving the treat or the feeling of accomplishment — and there will be a measurable neurological impulse to lace up your jogging shoes each morning.

Our relationship to e-mail operates on the same principle. When a computer chimes or a smartphone vibrates with a new message, the brain starts anticipating the neurological “pleasure” (even if we don’t recognize it as such) that clicking on the e-mail and reading it provides. That expectation, if unsatisfied, can build until you find yourself moved to distraction by the thought of an e-mail sitting there unread — even if you know, rationally, it’s most likely not important. On the other hand, once you remove the cue by disabling the buzzing of your phone or the chiming of your computer, the craving is never triggered, and you’ll find, over time, that you’re able to work productively for long stretches without checking your in-box. [...]

When they got back to P.& G.’s headquarters, the researchers watched their videotapes again. Now they knew what to look for and saw their mistake in scene after scene. Cleaning has its own habit loops that already exist. In one video, when a woman walked into a dirty room (cue), she started sweeping and picking up toys (routine), then she examined the room and smiled when she was done (reward). In another, a woman scowled at her unmade bed (cue), proceeded to straighten the blankets and comforter (routine) and then sighed as she ran her hands over the freshly plumped pillows (reward). P.& G. had been trying to create a whole new habit with Febreze, but what they really needed to do was piggyback on habit loops that were already in place. The marketers needed to position Febreze as something that came at the end of the cleaning ritual, the reward, rather than as a whole new cleaning routine.

The company printed new ads showing open windows and gusts of fresh air. More perfume was added to the Febreze formula, so that instead of merely neutralizing odors, the spray had its own distinct scent. Television commercials were filmed of women, having finished their cleaning routine, using Febreze to spritz freshly made beds and just-laundered clothing. Each ad was designed to appeal to the habit loop: when you see a freshly cleaned room (cue), pull out Febreze (routine) and enjoy a smell that says you’ve done a great job (reward). When you finish making a bed (cue), spritz Febreze (routine) and breathe a sweet, contented sigh (reward). Febreze, the ads implied, was a pleasant treat, not a reminder that your home stinks.

And so Febreze, a product originally conceived as a revolutionary way to destroy odors, became an air freshener used once things are already clean. The Febreze revamp occurred in the summer of 1998. Within two months, sales doubled. A year later, the product brought in $230 million. Since then Febreze has spawned dozens of spinoffs — air fresheners, candles and laundry detergents — that now account for sales of more than $1 billion a year. Eventually, P.& G. began mentioning to customers that, in addition to smelling sweet, Febreze can actually kill bad odors. Today it’s one of the top-selling products in the world. [...]

But when some customers were going through a major life event, like graduating from college or getting a new job or moving to a new town, their shopping habits became flexible in ways that were both predictable and potential gold mines for retailers. The study found that when someone marries, he or she is more likely to start buying a new type of coffee. When a couple move into a new house, they’re more apt to purchase a different kind of cereal. When they divorce, there’s an increased chance they’ll start buying different brands of beer.

Consumers going through major life events often don’t notice, or care, that their shopping habits have shifted, but retailers notice, and they care quite a bit. At those unique moments, Andreasen wrote, customers are “vulnerable to intervention by marketers.” In other words, a precisely timed advertisement, sent to a recent divorcee or new homebuyer, can change someone’s shopping patterns for years. [...]

Before I met Andrew Pole, before I even decided to write a book about the science of habit formation, I had another goal: I wanted to lose weight.

I had got into a bad habit of going to the cafeteria every afternoon and eating a chocolate-chip cookie, which contributed to my gaining a few pounds. Eight, to be precise. I put a Post-it note on my computer reading “NO MORE COOKIES.” But every afternoon, I managed to ignore that note, wander to the cafeteria, buy a cookie and eat it while chatting with colleagues. Tomorrow, I always promised myself, I’ll muster the willpower to resist.

Tomorrow, I ate another cookie.

When I started interviewing experts in habit formation, I concluded each interview by asking what I should do. The first step, they said, was to figure out my habit loop. The routine was simple: every afternoon, I walked to the cafeteria, bought a cookie and ate it while chatting with friends.

Next came some less obvious questions: What was the cue? Hunger? Boredom? Low blood sugar? And what was the reward? The taste of the cookie itself? The temporary distraction from my work? The chance to socialize with colleagues?

Rewards are powerful because they satisfy cravings, but we’re often not conscious of the urges driving our habits in the first place. So one day, when I felt a cookie impulse, I went outside and took a walk instead. The next day, I went to the cafeteria and bought a coffee. The next, I bought an apple and ate it while chatting with friends. You get the idea. I wanted to test different theories regarding what reward I was really craving. Was it hunger? (In which case the apple should have worked.) Was it the desire for a quick burst of energy? (If so, the coffee should suffice.) Or, as turned out to be the answer, was it that after several hours spent focused on work, I wanted to socialize, to make sure I was up to speed on office gossip, and the cookie was just a convenient excuse? When I walked to a colleague’s desk and chatted for a few minutes, it turned out, my cookie urge was gone.

All that was left was identifying the cue.

Deciphering cues is hard, however. Our lives often contain too much information to figure out what is triggering a particular behavior. Do you eat breakfast at a certain time because you’re hungry? Or because the morning news is on? Or because your kids have started eating? Experiments have shown that most cues fit into one of five categories: location, time, emotional state, other people or the immediately preceding action. So to figure out the cue for my cookie habit, I wrote down five things the moment the urge hit:

Where are you? (Sitting at my desk.)

What time is it? (3:36 p.m.)

What’s your emotional state? (Bored.)

Who else is around? (No one.)

What action preceded the urge? (Answered an e-mail.)

The next day I did the same thing. And the next. Pretty soon, the cue was clear: I always felt an urge to snack around 3:30.

Once I figured out all the parts of the loop, it seemed fairly easy to change my habit. But the psychologists and neuroscientists warned me that, for my new behavior to stick, I needed to abide by the same principle that guided Procter & Gamble in selling Febreze: To shift the routine — to socialize, rather than eat a cookie — I needed to piggyback on an existing habit. So now, every day around 3:30, I stand up, look around the newsroom for someone to talk to, spend 10 minutes gossiping, then go back to my desk. The cue and reward have stayed the same. Only the routine has shifted. It doesn’t feel like a decision, any more than the M.I.T. rats made a decision to run through the maze. It’s now a habit. I’ve lost 21 pounds since then (12 of them from changing my cookie ritual).
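The five-question log near the end of the excerpt is easy to mechanize if you'd rather not use Post-it notes. A minimal sketch in Python (my own illustration; the field names simply mirror the article's five cue categories):

```python
import csv
import os
from datetime import datetime

FIELDS = ["time", "location", "emotion", "people", "preceding_action"]

def log_cue(location, emotion, people, preceding_action, path="cues.csv"):
    """Append one answered five-question record; the time is filled in automatically."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(FIELDS)  # header row for the five cue categories
        writer.writerow([datetime.now().isoformat(timespec="minutes"),
                         location, emotion, people, preceding_action])

# Example entry, mirroring the article's sample answers:
log_cue("desk", "bored", "no one", "answered an e-mail")
```

After a few days of entries, sorting the file by time of day should make a cue like "always around 3:30" stand out immediately.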

[LINK] The NYT on Everyday Habits

6 Alex_Altair 18 February 2012 08:23AM

The New York Times just published this article on how companies use data mining and the psychology of habit formation to effectively target ads.

The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges.

It has some decent depth of discussion, including an example of the author actually using the concepts to stop a bad habit. The article is based on an upcoming book by the same author titled The Power of Habit.

I haven't seen emphasis on this particular phenomenon—habits consisting of a cue, routine, and reward—on LessWrong. Do people think it's a valid, scientifically supported phenomenon? The article gives this impression but, of course, doesn't cite specific academic work on it. It ties in easily to the System 1/System 2 theory as a System 1 process. How much of System 1 as a whole can be explained as an implementation of this cue, routine, reward process?

And most importantly, how can this fit into the procrastination equation as a tool to subvert akrasia and establish good habits? 

Let's look at each of the four factors. If you've formed a habit, it means that the reward happened consistently, which means you have high expectancy. Given that it is a reward, the value is at least positive, but probably not large. Since habits mostly work on small time scales, delay is probably very small. And maybe increased habit formation means your impulsiveness is low. Each of these effects would increase motivation. In addition, because it's part of System 1, there is little energy cost to performing the habit, like there would be with many other conscious actions.
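For reference, the procrastination equation (Piers Steel's temporal motivation theory) is usually stated as:

$$\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{1 + \text{Impulsiveness} \times \text{Delay}}$$

So the argument above amounts to: a habit pushes the numerator up (high expectancy, positive value) while keeping the denominator close to 1 (near-zero delay, and perhaps low effective impulsiveness).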

Does this explanation sound legitimate, or like an argument for the bottom line?

Personally, I can tell that context is a strong cue for behavior at work, school, and home. When I go into work, I'm automatically motivated to perform well, and that motivation remains for several hours. When I go into class, I'm automatically ready to focus on difficult material, or even enthusiastically take a test. Yet when I go home, something about the context switches that off, and I can't seem to get anything done at all. It might be worth significant experimentation to find out what cues trigger each mode, and to change my contexts to induce the one I want.

What do you think?

Edit: this phenomenon has been covered on LW in the form of operant conditioning in posts by Yvain.

[LINK] The Hacker Shelf, free books.

9 [deleted] 14 February 2012 04:52PM

Yes, this is a repost from Hacker News, but I want to point out some books that are of LW-related interest.

The Hacker Shelf is a repository of freely available textbooks. Most of them are about computer programming or the business of computer programming, but there are a few that are perhaps interesting to the LW community. All of these were publicly available beforehand, but I'm linking to the aggregator in hopes that people can think of other freely available textbooks to submit there.

The site is in its beginning explosion phase; in the time it took to write this post, it doubled in size. If previous sites are any indication, it will crest in a month or so. People will probably lose interest after three months, and after a year the site will probably silently close shop.

MacKay, Information Theory, Inference, and Learning Algorithms

I really wish I had an older version of this book; the newer one has been marred by a Cambridge UP ad on the upper margin of every page. Publishers ruin everything.

The book covers reasonably concisely the basics of information theory and Bayesian methods, with some game theory and coding theory (in the sense of data compression) thrown in on the side. The style takes after Knuth, but refrains from the latter's more encyclopedic tendencies. It's also the type of book that gives a lot of extra content in the exercises. It unfortunately assumes a decent amount of mathematical knowledge — linear algebra and calculus, but nothing you wouldn't find on the Khan Academy.

Hacker Shelf review, book website.

Easley and Kleinberg, Networks, Crowds, and Markets

There's just a lot of stuff in this book, most of it of independent interest. The thread that ties the book together is graph theory, and with it they cover a great deal of game theory, voting theory, and economics. There are lots of graphs and pictures, and the writing style is pretty deliberate and slow-paced. The math is not very intense; all their probability spaces are discrete, so there's no calculus, and only a few touches of linear algebra.

Hacker Shelf review, book website.

Gabriel, Patterns of Software

This is a more fluffy book about the practice of software engineering. It's rather old, but I'm linking to it anyway because I agree with the author's feeling that the software engineering discipline has more or less misunderstood Christopher Alexander's work on pattern languages. The author tends to ramble on. I think there's some good wisdom about programming practices and organizational management in general that one could abstract away from this book.

Hacker Shelf link, book website (scroll down).

Nisan et al., Algorithmic Game Theory

I hesitate to link this because the math level is exceptionally high, perhaps high enough that anyone who can read the book probably knows the better part of its contents already. But game/decision theory is near and dear to LW's heart, so perhaps someone will gather some utility from this book. There's an awful lot going on in it. A brief selection: a section on the relationship between game theory and cryptography, a section on computation in prediction markets, and a section analyzing the incentives of information security.

Hacker Shelf review, book.

[LINK] Autistic woman banned from having sex in latest Court of Protection case

-6 mstevens 06 February 2012 11:53AM

Autistic woman banned from having sex in latest Court of Protection case

This reminded me of previous LW comments about how we restrict the rights of children for their own good.

On the one hand, children can't understand the risks so we stop them having sex.

But on the other hand, animals can't understand the risks and we happily let them continue having sex.

 

[Link] Cognitive Sciences Stack Exchange opened

21 Kaj_Sotala 05 February 2012 08:07AM

This is probably of interest to many here: Cognitive Sciences Stack Exchange.

For those who aren't in the know, the Stack Exchange family of forums is a set of sites where users may post questions and answers. They are divided by subject matter, each trying to collect a community of experts who can collectively answer any well-defined question relating to the domain. The "About Stack Exchange" page boasts that 90% of questions get great answers, "often stunningly quickly". Probably the most famous SE site is Stack Overflow, the computer programming site that started it all.

I find the creation of a Cogsci SE to be quite exciting, as it seems like it could quickly become an invaluable resource for anyone interested in the subject matter. I encourage people to take a look and contribute if they can, or lurk if they can't - there are a number of interesting questions and answers already. (For instance, I found this answer about biofeedback quite interesting.) I already contributed one answer myself.

In addition to helping contribute to an improved understanding of cognitive science, this might also be a good opportunity for LWers to make a bit of a name for themselves among net-savvy cogsci academics. No idea if that's actually useful, but it might be a bit of a pleasant ego boost if you don't have anything better to do with your time. ;-)

[Link] Reconstructing Speech from Human Auditory Cortex

8 [deleted] 01 February 2012 12:50PM

Link to full paper.

Abstract (emphasis mine):

How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
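For readers curious what a "linear model based on the auditory spectrogram" can look like in practice, a common approach to stimulus reconstruction (a sketch of the general technique, not the paper's actual pipeline) is ridge regression from time-lagged neural activity onto each frequency band of the spectrogram:

```python
import numpy as np

def ridge_reconstruct(neural, spec, lags=10, lam=1.0):
    """Fit a linear map from time-lagged neural activity to a spectrogram.

    neural: (T, n_electrodes) array of band-limited cortical activity
    spec:   (T, n_freq) auditory spectrogram of the heard speech
    """
    # Design matrix: lagged copies of the neural signal, side by side.
    X = np.hstack([np.roll(neural, k, axis=0) for k in range(lags)])
    X[:lags] = 0.0  # discard rows contaminated by the wrap-around of np.roll
    # Ridge solution: W = (X'X + lam*I)^{-1} X'Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ spec)
    return X @ W  # reconstructed spectrogram, shape (T, n_freq)

# Toy call with random data, just to show the shapes involved:
rng = np.random.default_rng(0)
recon = ridge_reconstruct(rng.normal(size=(500, 64)), rng.normal(size=(500, 32)))
```

The paper's point is that a model of this linear form sufficed for slow fluctuations, while fast features like syllable onsets required a nonlinear modulation-energy representation.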

 

Fissure opens in chess AI scene [link]

-4 Kevin 23 January 2012 02:14AM

[Humor] [Link] Eclipse Maid, a posthuman maid role-playing game

8 Kaj_Sotala 28 December 2011 01:18PM

What do you get when you put together Eclipse Phase, the science-fiction RPG of posthuman horror, and Maid, the light comedy anime-themed RPG? The answer, of course, is... Eclipse Maid.

In the distant future, humanity's age has passed. Runaway technological development has led to the obsolescence of the human race, and the Solar System is now ruled by vast, posthuman intelligences that explore realms of science and philosophy unimaginable to the unenhanced mind. Their most idle musings spawn computational vistas more complex than entire human civilisations as they plumb the very secrets of the cosmos.

Incomprehensibly sophisticated as they may be, however, the posthumans have difficulty dealing with what they euphemistically term the “analogue world”. To be blunt, they're really quite hopeless when it comes to physical matters. For all their cognitive puissance, they haven't yet freed themselves from certain physical needs – energy, security, computational machinery on which to run – and so they create servants to carry out their will, defend their physical forms from rivals and hostile Outsiders, and generally keep things tidy.

Thus, even in the age of humanity's eclipse, there are maids.

The Ego (mind) Origins Table contains entries such as Blank ("You're a brand-new digital sentience, created from scratch to serve your Master"), Fork ("You're a scaled-down copy of your Master's own program. You have so many identity issues"), Uplift ("The Master gave you intelligence to serve him. Were you animal, or something weird like a plant?"), and Offspring ("You're actually a larval posthuman AI, serving your "parent" or another Master as a form of vocational training").

The selection of Morphs (physical bodies) includes ones such as Chibimorph, Giant Flying Space Whale, Spideroid ("This Morph resembles an armoured crab or spider the size of a small car. They're designed for combat and reconnaissance, but a hardware glitch causes Egos sleeved into them to become curious and philosophical"), Braincase ("A brain in a jar; you communicate using a built-in video screen with a picture of your face on it. While sleeved into this Morph, your intellect is vastly expanded, but you're easily tipped over"), Nekomorph, and Spectator ("A hovering metallic sphere with numerous camera-eyes mounted on prehensile robotic stalks. It's equipped with eye lasers for self-defence"). Special Morph qualities range from Blushes Easily ("This Morph turns red at the least provocation - even if this makes no sense whatsoever") to Solar Powered ("Efficient, environmentally friendly, and useless in the dark").

Possible Masters for your maids range from sapient starships to planetary minds to hive minds. You might enjoy reading the PDF even if you didn't know anything about role-playing games.

Thanks to Risto Saarelma for the pointer.

Details of lab-made bird flu won't be revealed [link]

8 Kevin 25 December 2011 12:17AM

The Controls are Lying: A Note on the Memetic Hazards of Video Games [Link]

18 Nic_Smith 19 December 2011 05:25AM

Chris Pruett writes on the Robot Invader blog:

Good player handling code is often smoke and mirrors; the player presses buttons and sees a reasonable result, but in between those two operations a whole lot of code is working to ensure that the result is the best of many potential results.  For example, my friend Greggman discovered that Mario 3's jumping rules change depending on whether or not a level has slopes in it. Halo's targeting reticle famously slows as it passes over an enemy to make it easier to target with an analog stick without using an auto-aim system. When Spider-Man swings, he certainly does not orient about the spot where his web connects to a building (at least, he didn't in the swinging system I wrote).

Good player handling code doesn't just translate the player's inputs into action, it tries to discern the player's intent. Once the intended action has been identified, if the rules of the game allow it, good player handling code makes the action happen–even if it means breaking the rules of the simulation a little. The goal of good handling code isn't to maintain a "correct" simulation, it's to provide a fun game. It sucks to miss a jump by three centimeters. It sucks to take the full force of a hit from a blow that visually missed. It sucks to swing into a brick wall at 80 miles per hour instead of continuing down the street. To the extent that the code can understand the player's intent, it should act on that intent rather than on the raw input. Do what I mean, not what I say.

I suppose this explains why I am better at arcade bowling games than I am at actual bowling. More seriously, while I had some vague awareness of this, I am slightly surprised at the breadth (Mario 3!?) and depth to which this "control re-interpretation" takes place.
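The Halo reticle trick quoted above is simple to state in code. A toy sketch (my own, certainly not Bungie's implementation): scale stick sensitivity down whenever the reticle overlaps a target, so that small corrections become cheap without ever overriding the player's input.

```python
def reticle_velocity(stick_input, base_speed, over_target, friction=0.4):
    """Per-frame reticle velocity with "sticky" slowdown over targets.

    stick_input: analog stick deflection in [-1.0, 1.0]
    over_target: True when the reticle currently overlaps an enemy
    friction:    fraction of normal speed applied while over a target
    """
    speed = base_speed * (friction if over_target else 1.0)
    return stick_input * speed

# Sweeping across an enemy at full deflection: the reticle lingers on the
# target without ever snapping to it.
print(reticle_velocity(1.0, 10.0, over_target=False))  # 10.0 px/frame in open space
print(reticle_velocity(1.0, 10.0, over_target=True))   # 4.0 px/frame over the enemy
```

This is the "discern the player's intent" point in miniature: the input mapping, not the simulation, is where the game quietly helps you.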

Larry King: I want to be frozen

9 Kevin 05 December 2011 02:11AM

I know celebrities cryocrastinate just as much as anyone else, but King seems like the kind of guy to go through with it.

http://www.cnn.com/2011/12/02/showbiz/larry-king-i-want-to-be-frozen/index.html?hpt=hp_t3

One pixel electronic contact lens tested on rabbits [link]

2 Kevin 22 November 2011 11:31AM

[Link] Cognitive bias modification as a treatment for depression

9 RolfAndreassen 19 November 2011 05:11AM

This seems relevant to LessWrong, both as an extreme example of how biases can hurt people and as a possible rationality technique. Depression is presumably at the outer end of some spectrum; to the extent that it's caused by cognitive mistakes, people in the middle of the spectrum should be able to benefit from undoing the same mistakes. 

http://www.sciencedaily.com/releases/2011/11/111117202935.htm

[Link] TED- Cynthia Kenyon: Experiments that Hint at Longer Lives

7 [deleted] 19 November 2011 02:20AM

TEDGlobal 2011- Cynthia Kenyon: Experiments that Hint at Longer Lives

Video Link: http://www.ted.com/talks/cynthia_kenyon_experiments_that_hint_of_longer_lives.html

Speaker's Bio (from TED):

Cynthia Kenyon is revolutionizing our understanding of aging. As an expert in biochemistry and biophysics at the University of California at San Francisco, she is particularly interested in the influence that genetics have on age-related diseases (from cancer to heart failure) in living things.

Her biggest breakthrough was figuring out that there’s a “universal hormonal control for aging”: carbohydrate intake, which can have a dramatic effect on how two critical genes behave, reducing insulin production and boosting repair and renovation activities. So far, her theory has proved true for worms, mice, rats, and monkeys — and she suspects it applies to humans, too.

By studying aging, Kenyon believes that she and other scientists (many of whom have successfully duplicated her experiments) will be able to pinpoint the molecules responsible for the onset of age-related diseases in people and prevent them. She’s co-founded a drug-development company called Elixir Pharmaceuticals to do just that.

She says: "The link between aging and age-related disease suggests an entirely new way to combat many diseases all at once; namely, by going after their greatest risk factor: aging itself."

"Ten years ago, we thought aging was probably the result of a slow decay, a sort of rusting. But Professor Kenyon has shown that it’s ... controlled by genes. That opens the possibility of slowing it down with drugs."

Jeff Holly, Bristol University

 

Summary:

Different animals have different lifespans (e.g. mice: about 2 years vs. bats: about 50 years). There must be something in the genes controlling aging, so you should be able to change those genes to extend lifespan.

They did experiments on a roundworm and found that mutations damaging a gene called daf-2 could double the worm's lifespan. The daf-2 gene encodes a hormone receptor that promotes aging (similar to the receptors that promote food uptake and growth).

They did the same mutation in flies and mice, and it worked on them.

People who lived to 90-100 in a population of Ashkenazi Jews were more likely to have mutations to daf-2 that make it work less well. They are also less susceptible to things like cancer and other age-related diseases.

She explains how and why it works.

Follow-up questions covered animals that don't seem to age, and whether we could achieve this by changing genes instead of developing a medication (she says that would be a bad idea).

The nocebo effect [link]

1 Kevin 14 November 2011 12:59PM

A Brief Rant On The Future Of Interaction Design [link]

1 Kevin 11 November 2011 09:15AM

Michael Lewis on Kahneman and Tversky! [link]

7 Kevin 09 November 2011 12:49AM

[Link] Awesome interactive visualization article

17 Morendil 13 October 2011 12:46PM

"Up and Down the Ladder of Abstraction"

Have you seen something similar to explain Bayesian updating? If not, how would one go about doing that?
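On the "how would one go about doing that" question: the computational core of such a visualization is only a few lines; the hard part is the interaction design. A minimal sketch of the update one would animate (a coin-bias example of my own choosing):

```python
import numpy as np

theta = np.linspace(0, 1, 101)             # candidate coin biases
prior = np.ones_like(theta) / len(theta)   # uniform prior over biases

def update(belief, heads):
    """One Bayesian update: posterior is proportional to likelihood times prior."""
    likelihood = theta if heads else (1 - theta)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# An interactive page would redraw this curve after every flip.
belief = prior
for flip in [True, True, False, True]:     # observe H, H, T, H
    belief = update(belief, flip)
print(theta[np.argmax(belief)])            # posterior mode after HHTH: 0.75
```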

The rest of the site and in particular the "Kill Math Project" may also be of interest to LWers. Author Bret Victor, whose CV includes "designed the initial user interface concepts for the iPad", comes across overall as a particularly awesome fellow.

[link] Women in Computer Science, Where to Find More Info?

3 magfrump 23 September 2011 09:11PM

I recently ran across the following link:

A Campus Champion for Women in Computer Science

Which discusses a new president at Harvey Mudd College, and specifically her work in making the computer science major more accessible to women.  This seems neat and interesting except... barely any details are provided whatsoever.

They mention that the introductory computer science course was split into different courses, one of which is taught in Python. Maria Klawe's webpage at Harvey Mudd mentions that these steps were taken as part of a three-part plan, and says "I encourage you to read more", but there are no obvious links on the page to any specifics.

Is anyone here from Harvey Mudd who knows more, or knows how to find out more?  For example, did the increase in female computer scientists go along with an increase in the size of the program (as is implied), or was there a displacement of male computer scientists?  Is the success limited to the one department, or are other engineering and science majors also attracting more women?  I noticed the front page mentioning that Harvey Mudd was recently named the top engineering school in the US, so presumably the gains don't result from "dumbing down" the program, but I'd like to see more information.

Reconstructing visual experiences from brain activity evoked by natural movies. [link]

5 Kevin 23 September 2011 05:10AM

Consolidated link thread, September 2011

6 Kaj_Sotala 19 September 2011 07:04AM

Recently the Discussion section has been full of link threads, most of them with a pretty low karma score and few if any comments. While many of them are interesting, I'd prefer to have fewer of them around. Right now they clutter up the Discussion section so that it's getting hard to find the threads with actual discussion going on.

Therefore I'd suggest having regular link threads, in the same manner as rationality quotes and open threads. If you're only posting a link together with a brief description or excerpt, and it isn't something really really interesting, please post it as a comment in a link thread.

Journal article about politics and mindkilling

30 CronoDAS 07 September 2011 07:46AM

I just found a link to a paper written in 2003 by Geoffrey L. Cohen of Yale University.

"Party over Policy: The Dominating Impact of Group Influence on Political Beliefs"

Abstract:

Four studies demonstrated both the power of group influence in persuasion and people’s blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one’s political party. This effect overwhelmed the impact of both the policy’s objective content and participants’ ideological beliefs (Studies 1–3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.

That's written in journal-ese, so I'll post a translation from the article I found that contained the link:

My favorite study (pdf) in this space was by Yale’s Geoffrey Cohen. He had a control group of liberals and conservatives look at a generous welfare reform proposal and a harsh welfare reform proposal. As expected, liberals preferred the generous plan and conservatives favored the more stringent option. Then he had another group of liberals and conservatives look at the same plans, but this time, the plans were associated with parties.

Both liberals and conservatives followed their parties, even when their parties disagreed with their preferences. So when Democrats were said to favor the stringent welfare reform, for example, liberals went right along. Three scary sentences from the piece: “When reference group information was available, participants gave no weight to objective policy content, and instead assumed the position of their group as their own. This effect was as strong among people who were knowledgeable about welfare as it was among people who were not. Finally, participants persisted in the belief that they had formed their attitude autonomously even in the two group information conditions where they had not.”

Also, the final study conducted had subjects write editorials either in support of or against a single policy proposal. The differences in how people responded in the "no group information" condition and the "my political party supports / opposes" conditions are also illuminating...

Podcast on Cryonics by 'Stuff you should know'

3 FiftyTwo 04 September 2011 10:36PM

The podcast 'Stuff you should know' has done an episode on cryonics.

 

Available here: 

http://podcasts.howstuffworks.com/hsw/podcasts/sysk/2011-08-30-sysk-cryonics.mp3

 

I don't know much about the subject, but what do people think of it as a depiction of cryonics in popular culture? 

[Link] “How to seem good at everything: Stop doing stupid shit”

14 kpreid 04 September 2011 05:37PM

Possibly interesting article on winning: How to seem good at everything: Stop doing stupid shit

Summary, as I interpreted it: in practicing a skill, focus on raising the minimum quality of the individual actions that make up the skill (because that is where the greatest marginal benefit lies).

[This article previously posted as an open thread comment.]

[LINK] Report on the Fourth Conference on Artificial General Intelligence

9 Kaj_Sotala 03 September 2011 04:42PM

http://hplusmagazine.com/2011/09/01/report-on-the-fourth-conference-on-artificial-general-intelligence/

Ben Goertzel
September 1, 2011
The Fourth Conference on Artificial General Intelligence (AGI-11) was held on Google’s campus in Mountain View (Silicon Valley), California, in the first week of August 2011.   This was the largest AGI conference yet, with more than 200 people attending, and it had a markedly different tone from the prior conferences in the series.  A number of participants noted that there was less of an out-of-the-mainstream, wild-eyed maverick feel to the proceedings, and more of a sense of “business as usual” or “normal science” – a sense in the air that AGI is obviously an important, feasible R&D area to be working on, albeit a bit “cutting-edge” compared to the majority of (more narrowly specialized) AI R&D.

I think this difference in tone was due partly to the Google and Bay Area location, and partly to the fact that the conference was held in close spatiotemporal proximity to two larger and older AI-related conferences, AAAI-11 and IJCNN-11.  IJCNN was just before AGI in San Jose, and AAAI was just after AGI in San Francisco — so a number of academic AI researchers who usually go to the larger conferences but not AGI decided to try out AGI as well this year.   Complementing this academic group, there was also a strong turnout from the Silicon Valley software industry, and the Bay Area futurist and transhumanist community.

The first day of the conference was occupied by tutorials on the LIDA and OpenCog systems, and the Church probabilistic logic programming language.  The second day comprised two workshops: one on self-programming in AGI systems, and the other the traditional “Future of AGI” workshop, which was particularly lively due to the prominence of future-of-technology issues in Bay Area culture (the conference site was not so far off from the headquarters of a variety of futurist organizations like Singularity University, the Singularity Institute for AI, the Foresight Institute, etc.).  Most of the talks from the Future of AGI workshop have corresponding papers or presentations on the conference’s schedule page.

These may be of particular interest to H+ Magazine readers!

The final two days constituted the conference proper, with technical talks corresponding to papers in the conference proceedings, which were published in Springer’s Lecture Notes in AI book series.  Videos of the conference talks, including the workshops and tutorials, will be posted by Google during the next months, and linked from the conference website.

Peter Norvig, Google’s head of research and the co-author of the best-selling AI textbook (whose latest edition does mention AGI, albeit quite briefly), gave brief opening remarks.  He didn’t announce any grand Google AGI initiatives, making clear that his own current research focus is elsewhere than the direct pursuit of powerful artificial general intelligence.  Yet, he also made clear that he sees a lot of the research going on at Google as part of an overall body of work that is ultimately building toward advanced AGI.

The four keynote speeches highlighted different aspects of the AGI field, as well as the strongly international nature of the AGI community.

Ernst Dickmanns, from Germany, reviewed his pioneering work on self-driving cars from the 1980s, which in some ways was more advanced than the current work on self-driving cars being conducted by Google and others.   He wrapped up with a discussion of general lessons for AGI implied by his experience with self-driving cars, including the importance of adaptive learning and of “dynamic vision” that performs vision in a manner closely coordinated with action.

Aaron Sloman, from Britain, discussed “toddler theorems” – the symbolic understandings of the world that young children learn and create based on their sensorimotor and cognitive experiences.  He challenged the researchers in the audience to understand and model the kind of learning and world-modeling that crows or human babies do, and sketched some concepts that he felt would be useful for this sort of modeling.

MIT’s Ed Boyden reviewed his recent work on optogenetics, one of the most exciting and rapidly developing technologies for imaging the brain – a very important area, given the point raised in the conference’s Special Track on Neuroscience and AGI, that the main factor holding back the design of AGI systems based on human brain emulation is currently the lack of appropriate tools for measuring what’s happening in the brain.  We can’t yet measure the brain well enough to construct detailed dynamic brain simulations.  Boyden’s work is one of the approaches that, step by step, is seeking to overcome this barrier.

Zhongzhi Shi, from the Chinese Academy of Sciences in Beijing, described his integrative AGI architecture, which incorporates aspects from multiple Western AGI designs into a novel overall framework.  He also stressed the importance of cloud computing for enabling practical experimentation with complex AGI architectures like the one he described.

As well as the regular technical AGI talks, there was a Special Session on Neuroscience and AGI, led by neuroscientist Randal Koene, who is probably the world’s most successful advocate of mind uploading, or what he now calls “substrate independent minds.”   Most of the AGI field today is only loosely connected to neuroscience; and yet, in principle, nearly every AGI researcher would agree that careful emulation of the brain is one potential path to AGI, with a high probability of succeeding eventually.  The Special Session served to bring neuroscientists and AGI researchers together, to see what they could learn from each other.  Neuroscience is not yet at the point where one can architect an AGI based solely on neuroscience knowledge, yet there are many areas where AGI can draw inspiration from neuroscience.  Demis Hassabis emphasized the fact that AGI currently lacks any strong theories of how sensorimotor processing interfaces with abstract conceptual processing, and suggested some ways that neuroscience may provide inspiration here, e.g. analysis of cortical-hippocampal interactions.   Another point raised in discussions was that reinforcement learning could potentially gain inspiration from study of the various ways in which the brain treats internal intrinsic rewards (alerting or surprisingness) comparably to explicit external rewards.

Three prizes were awarded at the conference: two Kurzweil Prizes and one Solomonoff Prize.

The Kurzweil Prize for Best AGI Paper was awarded to Linus Gisslen, Matt Luciw, Vincent Graziano and Juergen Schmidhuber for their paper entitled Sequential Constant Size Compressors and Reinforcement Learning.   This paper represents an effort to bridge the gap between the general mathematical theory of AGI (which in its purest form applies only to AI programs achieving massive general intelligence via using unrealistically much processing power) and the practical business of building useful AGI programs.  Specifically, one of the key ideas in the general theory of AGI is “reinforcement learning” – learning via reward signals from the environment – but the bulk of the mathematical theory of reinforcement learning makes the assumption that the AI system has complete visibility into the environment.   Obviously this is unrealistic — no real-world intelligence has full knowledge of its environment.   The award-winning paper describes a novel, creative method of using recurrent neural networks to apply reinforcement learning methods to partially-observable environments, indicating a promising research direction for those who wish to make reinforcement learning algorithms that scale up to the kinds of real-world problems human-level AGIs will have to deal with.
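To make the idea concrete for non-specialists: the trick is to let a recurrent network fold the whole observation history into a fixed-size state, then run ordinary reinforcement learning on that state instead of on the (insufficient) current observation. A structural sketch only, of my own devising and far simpler than the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBS, N_HID, N_ACT = 4, 16, 2

# A fixed random recurrent compressor stands in for a trained RNN: the
# hidden state h_t summarizes the observation history o_1..o_t.
W_in = rng.normal(0.0, 0.5, (N_HID, N_OBS))
W_rec = rng.normal(0.0, 0.3, (N_HID, N_HID))

def step_state(h, obs):
    return np.tanh(W_in @ obs + W_rec @ h)

# Ordinary one-step Q-learning, but on the recurrent state h rather than
# on the raw, partially observable observation.
W_q = np.zeros((N_ACT, N_HID))

def q_update(h, action, reward, h_next, done, alpha=0.1, gamma=0.9):
    target = reward + (0.0 if done else gamma * np.max(W_q @ h_next))
    td_error = target - (W_q @ h)[action]
    W_q[action] += alpha * td_error * h

# One illustrative transition: a cue seen at t=0 is still encoded in the
# state after a blank observation, which is what makes learning possible.
h0 = np.zeros(N_HID)
h1 = step_state(h0, np.array([1.0, 0.0, 0.0, 0.0]))
h2 = step_state(h1, np.zeros(N_OBS))
q_update(h2, action=0, reward=1.0, h_next=h2, done=True)
```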

The 2011 Kurzweil Award for Best AGI Idea was awarded to Paul Rosenbloom for his paper entitled From Memory to Problem Solving: Mechanism Reuse in a Graphical Cognitive Architecture. Rosenbloom has a long history in the AI field, including a role co-creating the classic SOAR AI architecture in the 1980s.  While still supporting the general concepts underlying his older AI work, his current research focuses more heavily on scalable probabilistic methods – but more flexible and powerful ones than Bayes nets, Markov Logic Networks and other currently popular techniques.   Extending his previous work on factor graphs as a core construct for scalable uncertainty management in AGI systems, his award-winning paper shows how the factor graph mechanisms described for memory can also be used for problem-solving tasks.  In the human brain there is no crisp distinction between memory and problem-solving, so it is conceptually satisfying to see AGI approaches that also avoid this sort of crisp distinction.  It is as yet unclear to what extent any single mechanism can be used to achieve all the capabilities needed for human-level AGI; but it is a very interesting and valuable research direction to take a single powerful and flexible mechanism like factor graphs and see how far one can push it, and Dr. Rosenbloom’s paper is a wonderful example of this sort of work.

The 2011 Solomonoff AGI Theory Prize – named in honor of AGI pioneer Ray Solomonoff, who passed away in 2010 — was awarded to Laurent Orseau and Mark Ring, for a pair of papers titled Self-Modification and Mortality in Artificial Agents and Delusion, Survival, and Intelligent Agents.   These papers explore aspects of theoretical generally intelligent agents inspired by Marcus Hutter’s AIXI model (a theoretical AGI system that would achieve massive general intelligence using infeasibly much computational resources, but that may potentially be approximated by more feasible AGI approaches).   The former paper considers some consequences of endowing an intelligent agent of this nature with the ability to modify its own code; and the latter analyzes aspects of what happens when this sort of theoretical intelligent agent is interfaced with the real world.   These papers constitute important steps in bridging the gap between the abstract mathematical theory of AGI, and the real-world business of creating AGI systems and embedding them in the world.

While there was a lot of strong and interesting research presented at the AGI-11 conference, I think it’s fair to say that there were no dramatic breakthroughs presented.  Rather, there was more of a feeling of steady incremental progress.   Also, compared to previous years, there was less of a feeling of separate, individual research projects working in a vacuum – the connections between different AI approaches seem to be getting clearer each year, in spite of the absence of a clearly defined common vocabulary or conceptual framework among various AGI researchers.   Links were built between abstract AGI theory and practical work, and between neuroscience and AGI engineering.  Hybridization of previously wholly different AGI architectures was reported (e.g. the paper I presented, describing the incorporation of aspects of Joscha Bach’s MicroPsi system in my OpenCog system).  All signs of a field that’s gradually maturing.

These observations lead me inexorably to some more personal musings on AGI.  I can’t help wondering: Can we get to human-level AGI and beyond via step-by-step, incremental progress, year after year?

It’s a subtle question, actually.   It’s clear that we are far from having a rigorous scientific understanding of how general intelligence works.   At some point, there’s going to be a breakthrough in the science of general intelligence – and I’m really looking forward to it!  I even hope to play a large part in it.  But the question is: will this scientific breakthrough come before or after the engineering of an AGI system with powerful, evidently near-human-level capability?

It may be that we need a scientific breakthrough in the rigorous theory of general intelligence before we can engineer an advanced AGI system.  But … I presently suspect that we don’t.  My current opinion is that it should be possible to create a powerful AGI system via proceeding step-by-step from the current state of knowledge – doing engineering inspired by an integrative conceptual, not quite fully rigorous understanding of general intelligence.  If this is right, then we can build a system that will have the impact of a “Sputnik of AGI,” via combining variants of existing algorithms in a reasonable cognitive architecture in a manner guided by a solid conceptual understanding of mind.   And then, by studying this Sputnik AGI system and its successors and variants, we will be able to arrive at the foreseen scientific breakthrough in the science of general intelligence.  This of course is what my colleagues and I are trying to do with the OpenCog project – but the general point I’m making here is independent of our specific OpenCog AGI design.

Anyway, that’s my personal view of the near to mid term future of AGI, which I advocated in asides during my OpenCog tutorial, and various discussions at the Future of AGI Workshop.   But my view on these matters is far from universal among AGI researchers – even as the AGI field matures and becomes less marginal, it is still characterized by an extremely healthy diversity of views and attitudes!   I look forward to ongoing discussions of these matters with my colleagues in the AGI community as the AGI conference series proceeds and develops.  Mostly, it’s awesome to even have a serious AGI community.  It’s hard sometimes to remember that 10 years ago this was far from the case!

[Link] TED Talk on Perceived Value

12 Alicorn 17 August 2011 12:49AM

Rory Sutherland, an ad man (who missed his calling as a comedian), gives this talk on perceived value versus "real" value, and comes down in favor of more of the first.  He also dabbles in history, status, behavioral economics, and the importance of user interface design.
