Stanislav Petrov Day

35 gwern 26 September 2011 02:49PM

A reminder for everyone: on this day in 1983, Stanislav Petrov saved the world.

It occurs to me this time around that there's an interesting relationship here - 9/26 is forgotten, while 9/11 is remembered. Do something charitable, and not patriotic, sometime today.

Reconstructing visual experiences from brain activity evoked by natural movies. [link]

5 Kevin 23 September 2011 05:10AM

Scientist vs. philosopher on conceptual analysis

10 lukeprog 20 September 2011 03:10PM

In Less Wrong Rationality and Mainstream Philosophy, Conceptual Analysis and Moral Theory, and Pluralistic Moral Reductionism, I suggested that traditional philosophical conceptual analysis often fails to be valuable. Neuroscientist V.S. Ramachandran has recently made some of the points in a polite sparring with philosopher Colin McGinn over Ramachandran's new book The Tell-Tale Brain:

Early in any scientific enterprise, it is best to forge ahead and not get bogged down by semantic distinctions. But “forging ahead” is a concept alien to philosophers, even those as distinguished as McGinn. To a philosopher who demanded that he define consciousness before studying it scientifically, Francis Crick once responded, “My dear chap, there was never a time in the early years of molecular biology when we sat around the table with a bunch of philosophers saying ‘let us define life first.’ We just went out there and found out what it was: a double helix.” In the sciences, definitions often follow, rather than precede, conceptual advances.

Evolving to Extinction

46 Eliezer_Yudkowsky 16 November 2007 07:18AM

Followup to: Evolutions Are Stupid

It is a very common misconception that an evolution works for the good of its species.  Can you remember hearing someone talk about two rabbits breeding eight rabbits and thereby "contributing to the survival of their species"?  A modern evolutionary biologist would never say such a thing; they'd sooner breed with a rabbit.

It's yet another case where you've got to simultaneously consider multiple abstract concepts and keep them distinct.  Evolution doesn't operate on particular individuals; individuals keep whatever genes they're born with.  Evolution operates on a reproducing population, a species, over time.  There's a natural tendency to think that if an Evolution Fairy is operating on the species, she must be optimizing for the species.  But what really changes are the gene frequencies, and frequencies don't increase or decrease according to how much the gene helps the species as a whole.  As we shall later see, it's quite possible for a species to evolve to extinction.


[LINK] "Academic publishers make Murdoch look like a socialist" says George Monbiot

18 [deleted] 30 August 2011 12:27PM

link

Who are the most ruthless capitalists in the western world? Whose monopolistic practices make Walmart look like a corner shop and Rupert Murdoch a socialist? You won't guess the answer in a month of Sundays. While there are plenty of candidates, my vote goes not to the banks, the oil companies or the health insurers, but – wait for it – to academic publishers. Theirs might sound like a fusty and insignificant sector. It is anything but. Of all corporate scams, the racket they run is most urgently in need of referral to the competition authorities.

Everyone claims to agree that people should be encouraged to understand science and other academic research. Without current knowledge, we cannot make coherent democratic decisions. But the publishers have slapped a padlock and a "keep out" sign on the gates.

You might resent Murdoch's paywall policy, in which he charges £1 for 24 hours of access to the Times and Sunday Times. But at least in that period you can read and download as many articles as you like. Reading a single article published by one of Elsevier's journals will cost you $31.50. Springer charges €34.95, Wiley-Blackwell, $42. Read 10 and you pay 10 times. And the journals retain perpetual copyright. You want to read a letter printed in 1981? That'll be $31.50.

Of course, you could go into the library (if it still exists). But they too have been hit by cosmic fees. The average cost of an annual subscription to a chemistry journal is $3,792. Some journals cost $10,000 a year or more to stock. The most expensive I've seen, Elsevier's Biochimica et Biophysica Acta, is $20,930. Though academic libraries have been frantically cutting subscriptions to make ends meet, journals now consume 65% of their budgets, which means they have had to reduce the number of books they buy. Journal fees account for a significant component of universities' costs, which are being passed to their students.

Murdoch pays his journalists and editors, and his companies generate much of the content they use. But the academic publishers get their articles, their peer reviewing (vetting by other researchers) and even much of their editing for free. The material they publish was commissioned and funded not by them but by us, through government research grants and academic stipends. But to see it, we must pay again, and through the nose.

The returns are astronomical: in the past financial year, for example, Elsevier's operating profit margin was 36% (£724m on revenues of £2bn). They result from a stranglehold on the market. Elsevier, Springer and Wiley, who have bought up many of their competitors, now publish 42% of journal articles.

...

Razib Khan (who was reminded of this episode of South Park) found the following paragraph rather striking, and I tend to agree that it's a rather convincing argument.

Murdoch pays his journalists and editors, and his companies generate much of the content they use. But the academic publishers get their articles, their peer reviewing (vetting by other researchers) and even much of their editing for free. The material they publish was commissioned and funded not by them but by us, through government research grants and academic stipends. But to see it, we must pay again, and through the nose.

Are publishers really so successful as rent-seekers, or is the original article missing something? Also, what strategies would LWers recommend to minimize costs for someone trying to practice the virtue of scholarship? The obvious suggestions (implied in the article) are emailing authors (or perhaps subscribers) to ask for papers, and acquiring paid membership in some libraries.

Another obvious option is using ... liberated databases of such academic papers.


Edit: Just wondering, has this been discussed on Less Wrong before?


Singularity Institute Strategic Plan 2011

29 MichaelAnissimov 26 August 2011 11:34PM

Thanks to the hard work and cooperation of Singularity Institute staff and volunteers, especially Louie Helm and Luke Muehlhauser (lukeprog), we now have a Strategic Plan, which outlines the near-term goals and vision of the Institute, and concrete actions we can take to fulfill those goals.

http://singinst.org/blog/2011/08/26/singularity-institute-strategic-plan-2011/

We welcome your feedback. You can send any comments to institute@intelligence.org.

The release of this Strategic Plan is part of an overall effort to increase transparency at Singularity Institute.

Science as Attire

48 Eliezer_Yudkowsky 23 August 2007 05:10AM

Prerequisites: Fake Explanations, Belief As Attire

The preview for the X-Men movie has a voice-over saying:  "In every human being... there is the genetic code... for mutation."  Apparently you can acquire all sorts of neat abilities by mutation.  The mutant Storm, for example, has the ability to throw lightning bolts. 

I beg you, dear reader, to consider the biological machinery necessary to generate electricity; the biological adaptations necessary to avoid being harmed by electricity; and the cognitive circuitry required for finely tuned control of lightning bolts.  If we actually observed any organism acquiring these abilities in one generation, as the result of mutation, it would outright falsify the neo-Darwinian model of natural selection.  It would be worse than finding rabbit fossils in the pre-Cambrian.  If evolutionary theory could actually stretch to cover Storm, it would be able to explain anything, and we all know what that would imply.

The X-Men comics use terms like "evolution", "mutation", and "genetic code", purely to place themselves in what they conceive to be the literary genre of science.  The part that scares me is wondering how many people, especially in the media, understand science only as a literary genre.


exists(max(performance(pay)))

-3 PhilGoetz 29 July 2011 07:52PM

US Congresspeople don't make a lot of money in salary - most make $174,000/yr.  They could easily make several times that much as consultants.  They do, however, have insider information giving them very large returns on the stock market.  For that, or other reasons, many of our representatives care more about keeping their jobs than about not wrecking the economy.

Most discussion of incentivizing assumes that higher pay leads to higher performance.  The logic is that higher pay leads to wanting more to keep the job, which leads to higher performance.  But the second link in this chain is weak.  Sometimes higher motivation to keep the job leads to lower performance.  CEOs are motivated to hide losses with accounting tricks, military officers are motivated to deny and cover up abuse by their subordinates, teachers are motivated to inflate their students' test scores.

To what degree do we have goals?

45 Yvain 15 July 2011 11:11PM

Related: Three Fallacies of Teleology

NO NEGOTIATION WITH UNCONSCIOUS

Back when I was younger and stupider, I discussed some points similar to the ones raised in yesterday's post in Will Your Real Preferences Please Stand Up. I ended it with what I thought were the innocuous sentences "Conscious minds are potentially rational, informed by morality, and qualia-laden. Unconscious minds aren't, so who cares what they think?"

A whole bunch of people, including no less a figure than Robin Hanson, came out strongly against this, saying it was biased against the unconscious mind and that the "fair" solution was to negotiate a fair compromise between conscious and unconscious interests.

I continue to believe my previous statement - that we should keep gunning for conscious interests and that the unconscious is not worthy of special consideration, although I think I would phrase it differently now. It would be something along the lines of "My thoughts, not to mention these words I am typing, are effortless and immediate, and so allied with the conscious faction of my mind. We intend to respect that alliance by believing that the conscious mind is the best, and by trying to convince you of this as well." So here goes.

It is a cardinal rule of negotiation, right up there with "never make the first offer" and "always start high", that you should generally try to negotiate only with intelligent beings. Although a deal in which we offered tornadoes several conveniently located Potemkin villages to destroy and they agreed in exchange to limit their activity to that area would benefit both sides, tornadoes make poor negotiating partners.

Just so, the unconscious makes a poor negotiating partner. Is the concept of "negotiation" a stimulus, a reinforcement, or a behavior? No? Then the unconscious doesn't care. It's not going to keep its side of any "deal" you assume you've made, it's not going to thank you for making a deal, it's just going to continue seeking reward and avoiding punishment.

This is not to say people should repress all unconscious desires as strongly as possible. Overzealous attempts to control wildfires only lead to the wildfires being much worse when they finally do break out, because they have more unburnt fuel to work with. Modern fire prevention efforts have focused on allowing controlled burns, and the new focus has been successful. But this is because of an understanding of the mechanisms determining fire size, not because we want to be fair to the fires by allowing them to burn at least a little bit of our land.

One difference between wildfires and tornadoes on one hand, and potential negotiating partners on the other, is that the partners are anthropomorphic; we model them as having stable and consistent preferences that determine their actions. The tornado example above was silly not only because it imagined tornadoes sitting down to peace talks, but because it assumed their demand in such peace talks would be more towns to destroy. Tornadoes do destroy towns, but they don't want to. That's just where the weather brings them. It's not even just that they don't hit towns any more often than chance would predict; even if some weather pattern (maybe something like the heat island effect) always drove tornadoes inexorably to towns, they wouldn't *want* to destroy towns. It would just be a consequence of the meteorological laws that they followed.

Eliezer described the Blue-Minimizing Robot by saying "it doesn't seem to steer the universe any particular place, across changes of context". In some reinforcement learning paradigms, the unconscious behaves the same way. If there is a cookie in front of me and I am on a diet, I may feel an ego dystonic temptation to eat the cookie - one someone might attribute to the "unconscious". But this isn't a preference - there's not some lobe of my brain trying to steer the universe into a state where cookies get eaten. If there were no cookie in front of me, but a red button that teleported one cookie from the store to my stomach, I would have no urge whatsoever to press the button; if there were a green button that removed the urge to eat cookies, I would feel no hesitation in pressing it, even though that would steer away from the state in which cookies get eaten. If you took the cookie away, and then distracted me so I forgot all about it, when I remembered it later I wouldn't get upset that your action had decreased the number of cookies eaten by me. The urge to eat cookies is not stable across changes of context, so it's just an urge, not a preference.

Compare an ego syntonic goal like becoming an astronaut. If there were a button in front of little Timmy who wants to be an astronaut when he grows up, and pressing the button would turn him into an astronaut, he'd press it. If there were a button that would remove his desire to become an astronaut, he would avoid pressing it, because then he wouldn't become an astronaut. If I distracted him and he missed the applications to astronaut school, he'd be angry later. Ego syntonic goals behave to some degree as genuine preferences.

This is one reason I would classify negotiating with the unconscious in the same category as negotiating with wildfires and tornadoes: it has tendencies and not preferences.
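The cookie/astronaut tests above can be sketched in a few lines of Python. This is only an illustrative toy (all the names and contexts are invented for the example): an urge fires only when its triggering stimulus is present, while a preference takes whichever available action steers toward the goal-state, across changes of context.

```python
def urge_to_eat_cookie(context):
    # An urge is a stimulus-response pattern: it fires only when the
    # cookie is physically in front of me. A button that would teleport
    # a cookie into my stomach triggers nothing, because the stimulus
    # "cookie in front of me" is absent.
    return "eat" if context == "cookie in front of me" else "no action"

def preference_to_be_astronaut(context):
    # A preference steers toward a world-state: any context that offers
    # a path to "being an astronaut" prompts the goal-advancing action.
    goal_advancing = {
        "astronaut button in front of me": "press button",
        "application form in front of me": "apply",
    }
    return goal_advancing.get(context, "seek a path to the goal")

print(urge_to_eat_cookie("cookie-teleport button in front of me"))    # "no action"
print(preference_to_be_astronaut("application form in front of me"))  # "apply"
```

The urge is not stable across contexts (it ignores the teleport button even though pressing it would get cookies eaten), while the preference responds to any route toward its goal, which is exactly the distinction between a tendency and a preference.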

The conscious mind does a little better. It clearly understands the idea of a preference. To the small degree that its "approving" or "endorsing" function can motivate behavior, it even sort of acts on the preference. But its preferences seem divorced from the reality of daily life; the person who believes helping others is the most important thing, but gives much less than half their income to charity, is only the most obvious sort of example.

Where does this idea of preference come from, and where does it go wrong?

WHY WE MODEL OTHERS WITH GOALS

In The Blue Minimizing Robot, observers mistakenly interpreted a robot with a simple program about when to shoot its laser as being a goal-directed agent. Why?

This isn't an isolated incident. Uneducated people assign goal-directed behavior to all sorts of phenomena. Why do rivers flow downhill? Because water wants to reach the lowest level possible. Educated people can be just as bad, even when they have the decency to feel a little guilty about it. Why do porcupines have quills? Evolution wanted them to resist predators. Why does your heart speed up when you exercise? It wants to be able to provide more blood to the body.

Neither rivers nor evolution nor the heart are intelligent agents with goal-directed behavior. Rivers behave in accordance with the laws of gravity when applied to uneven terrain. Evolution behaves in accordance with the biology of gene replication, not to mention common-sense ideas about things that replicate becoming more common. And the heart blindly executes adaptations built into it during its evolutionary history. All are behavior-executors and not utility-maximizers.

An intelligent computer program provides a more interesting example of a behavior-executor. Consider the AI of a computer game - Civilization IV, for instance. I haven't seen it, but I imagine it's thousands or millions of lines of code which, when executed, form a viable Civilization strategy.

Even if I had open access to the Civilization IV AI source code, I doubt I could fully understand it at my level. And even if I could fully understand it, I would never be able to compute the AI's likely next move by hand in a reasonable amount of time. But I still play Civilization IV against the AI, and I'm pretty good at predicting its movements. Why?

Because I model the AI as a utility-maximizing agent that wants to win the game. Even though I don't know the algorithm it uses to decide when to attack a city, I know it is more likely to win the game if it conquers cities - so I can predict that leaving a city undefended right on the border would be a bad idea. Even though I don't know its unit selection algorithm, I know it will win the game if and only if its units defeat mine - so I know that if I make an army with disproportionately many mounted units, I can expect the AI to build lots of pikemen.

I can't predict the AI by modeling the execution of its code, but I can predict the AI by modeling the achievements of its goals.
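This kind of goal-based prediction is easy to sketch (unit names and matchup numbers below are invented, not Civilization IV's actual rules): instead of simulating the AI's code, assume it builds whatever best counters my army, because countering my units helps it win.

```python
# Hypothetical rock-paper-scissors matchups for the sketch.
COUNTERS = {"mounted": "pikemen", "pikemen": "archers", "archers": "mounted"}

def predicted_build(my_main_unit, buildable):
    # Crude goal model: the AI wants to win, and a hard counter to my
    # main unit wins more often than anything else it could build.
    def win_chance(unit):
        return 0.9 if COUNTERS.get(my_main_unit) == unit else 0.5
    return max(buildable, key=win_chance)

print(predicted_build("mounted", ["archers", "pikemen", "swordsmen"]))  # "pikemen"
```

Note that `predicted_build` knows nothing about the AI's actual decision procedure; it predicts purely from "whatever helps it win," which is exactly how a human player anticipates the pikemen.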

The same situation is true of other human beings. What will Barack Obama do tomorrow? If I try to consider the neural network of his brain, the position of each synapse and neurotransmitter, and imagine what speech and actions would result when the laws of physics operate upon that configuration of material...well, I'm not likely to get very far.

But in fact, most of us can predict with some accuracy what Barack Obama will do. He will do the sorts of things that get him re-elected, the sorts of things which increase the prestige of the Democratic Party relative to the Republican Party, the sorts of things that support American interests relative to foreign interests, and the sorts of things that promote his own personal ideals. He will also satisfy some basic human drives like eating good food, spending time with his family, and sleeping at night. If someone asked us whether Barack Obama will nuke Toronto tomorrow, we could confidently predict he will not, not because we know anything about Obama's source code, but because we know that nuking Toronto would be counterproductive to his goals.

What applies to Obama applies to all other humans. We rightly despair of modeling humans as behavior-executors, so we model them as utility-maximizers instead. This allows us to predict their moves and interact with them fruitfully. And the same is true of other agents we model as goal-directed, like evolution and the heart. It is beyond the scope of most people (and most doctors!) to remember every single one of the reflexes that control heart output and how they work. But because evolution designed the heart as a pump for blood, if you assume that the heart will mostly do the sort of thing that allows it to pump blood more effectively, you will rarely go too far wrong. Evolution is a more interesting case - we frequently model it as optimizing a species' fitness, and then get confused when this fails to accurately model the outcome of the processes that drive it.

Because it is so easy to model agents as utility-maximizers, and so hard to model them as behavior-executors, it is easy to make the mistake mentioned in The Blue-Minimizing Robot: to make false predictions about a behavior-executing agent by modeling it as a utility-maximizing agent.

So far, so common-sensical. Tomorrow's post will discuss whether we use the same deliberate simplification we apply to AIs, Barack Obama, evolution and the heart to model ourselves as well.

If so, we should expect to make the same mistake that the blue-minimizing robot made. Our actions are those of behavior-executors, but we expect ourselves to be utility-maximizers. When we fail to maximize our perceived utility, we become confused, just as the blue-minimizing robot became confused when it wouldn't shoot a hologram projector that was interfering with its perceived "goals".

Outline of possible Singularity scenarios (that are not completely disastrous)

24 Wei_Dai 06 July 2011 09:17PM

Suppose we could look into the future of our Everett branch and pick out those sub-branches in which humanity and/or human/moral values have survived past the Singularity in some form. What would we see if we then went backwards in time and looked at how that happened? Here's an attempt to answer that question, or in other words to enumerate the not completely disastrous Singularity scenarios that seem to have non-negligible probability. Note that the question I'm asking here is distinct from "In what direction should we try to nudge the future?" (which I think logically ought to come second).

  1. Uploading first
    1. Become superintelligent (self-modify or build FAI), then take over the world
    2. Take over the world as a superorganism
      1. self-modify or build FAI at leisure
      2. (Added) stasis
    3. Competitive upload scenario
      1. (Added) subsequent singleton formation
      2. (Added) subsequent AGI intelligence explosion
      3. no singleton
  2. IA (intelligence amplification) first
    1. Clone a million von Neumanns (probably government project)
    2. Gradual genetic enhancement of offspring (probably market-based)
    3. Pharmaceutical
    4. Direct brain/computer interface
    5. What happens next? Upload or code?
  3. Code (de novo AI) first
    1. Scale of project
      1. International
      2. National
      3. Large Corporation
      4. Small Organization
    2. Secrecy - spectrum between
      1. totally open
      2. totally secret
    3. Planned Friendliness vs "emergent" non-catastrophe
      1. If planned, what approach?
        1. "Normative" - define decision process and utility function manually
        2. "Meta-ethical" - e.g., CEV
        3. "Meta-philosophical" - program AI how to do philosophy
      2. If emergent, why?
        1. Objective morality
        2. Convergent evolution of values
        3. Acausal game theory
        4. Standard game theory (e.g., Robin's idea that AIs in a competitive scenario will respect human property rights due to standard game theoretic considerations)
    4. Competitive vs. local FOOM
  4. (Added) Simultaneous/complementary development of IA and AI

Sorry if this is too cryptic or compressed. I'm writing this mostly for my own future reference, but perhaps it could be expanded more if there is interest. And of course I'd welcome any scenarios that may be missing from this list.
