Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

New LW Meetup: Christchurch NZ

1 FrankAdamek 19 April 2014 01:57AM

Botworld: a cellular automaton for studying self-modifying agents embedded in their environment

45 So8res 12 April 2014 12:56AM

On April 1, I started working full-time for MIRI. In the weeks prior, while I was winding down my job and packing up my things, Benja and I built Botworld, a cellular automaton that we've been using to help us study self-modifying agents. Today, we're publicly releasing Botworld on the new MIRI GitHub page. To give you a feel for Botworld, I've reproduced the beginning of the technical report below.

continue reading »

Rationality Quotes April 2014

6 elharo 07 April 2014 05:25PM

Another month has passed and here is a new rationality quotes thread. The usual rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

And one new rule:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.

Effective Altruism Summit 2014

16 Ben_LandauTaylor 21 March 2014 08:30PM

In 2013, the Effective Altruism movement came together for a week-long Summit in the San Francisco Bay Area. Attendees included leaders and members from all the major effective altruist organizations, as well as effective altruists not affiliated with any organization. People shared strategies, techniques, and projects, and left more inspired and more effective than when they arrived.

More than ever, rationality and existential risk reduction are part of the Effective Altruism movement, and so I'm glad to announce to LessWrong the 2014 Effective Altruism Summit.

Following last year’s success, this year’s Effective Altruism Summit will comprise two events. The Summit will be a conference-style event held on the weekend of August 2-3, followed by a smaller Effective Altruism Retreat from August 4-9. To accommodate our expanding movement and its many new projects, this year’s Summit will be bigger than the last. The Retreat will be similar to last year’s EA Summit, providing a more intimate setting for attendees to discuss, to learn, and to form lasting connections with each other and with the community.

We’re now accepting applications for the 2014 events. Whether you’re a veteran organizer trying to keep up with Effective Altruism’s most exciting developments, or you're looking to get involved with a community of people who use rationality to improve the world, we’d love for you to join us.

The Problem with AIXI

23 RobbBB 18 March 2014 01:55AM

Followup to: Solomonoff Cartesianism, My Kind of Reflection

Alternate versions: Shorter, without illustrations


 

AIXI is Marcus Hutter's definition of an agent that follows Solomonoff's method for constructing and assigning priors to hypotheses; updates to promote hypotheses consistent with observations and associated rewards; and outputs the action with the highest expected reward under its new probability distribution. AIXI is one of the most productive pieces of AI exploratory engineering produced in recent years, and has added quite a bit of rigor and precision to the AGI conversation. Its promising features have even led AIXI researchers to characterize it as an optimal and universal mathematical solution to the AGI problem.1
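
For readers who want the formal object being discussed, the standard expectimax form of AIXI's action rule (as given in Hutter's work; the notation here is mine and may differ from the report under discussion) is

$$a_k := \arg\max_{a_k}\sum_{o_k r_k}\;\max_{a_{k+1}}\sum_{o_{k+1} r_{k+1}}\cdots\max_{a_m}\sum_{o_m r_m}\big(r_k+\cdots+r_m\big)\sum_{q\,:\,U(q,\,a_{1:m})\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)},$$

where $U$ is the chosen universal Turing machine, $\ell(q)$ is the length of program $q$, the $o_i$ and $r_i$ are observations and rewards, and $m$ is the horizon. Each candidate environment program is weighted by $2^{-\ell(q)}$ (the Solomonoff prior), and the agent picks the action with the highest expected total reward under that mixture.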

Eliezer Yudkowsky has argued in response that AIXI isn't a suitable ideal to build toward, primarily because of AIXI's reliance on Solomonoff induction. Solomonoff inductors treat the world as a sort of qualia factory, a complicated mechanism that outputs experiences for the inductor.2 Their hypothesis space tacitly assumes a Cartesian barrier separating the inductor's cognition from the hypothesized programs generating the perceptions. Through that barrier, only sensory bits and action bits can pass.
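
To make that barrier concrete, here is a minimal Python sketch (my own toy stand-ins, not anything from Hutter's or MIRI's work) of the interaction protocol such formalisms assume: the agent and the hypothesized environment program are separate objects that exchange only action bits and percept bits, and nothing about the agent's own hardware ever appears inside the environment's state.

```python
# Minimal sketch of the Cartesian agent-environment protocol assumed by
# AIXI-style formalisms. Both classes are hypothetical toy stand-ins; the only
# coupling between them is the action/percept channel.

class Environment:
    """A hypothesized environment program: maps the action history to a percept."""
    def __init__(self):
        self.history = []

    def step(self, action_bit):
        self.history.append(action_bit)
        # Toy rule: reward the agent for alternating its actions.
        reward = 1 if len(self.history) >= 2 and self.history[-1] != self.history[-2] else 0
        observation = len(self.history) % 2
        return observation, reward

class Agent:
    """A toy policy: note that the agent never appears inside the environment's state."""
    def __init__(self):
        self.last_action = 0

    def act(self, observation, reward):
        self.last_action = 1 - self.last_action  # alternate actions
        return self.last_action

env, agent = Environment(), Agent()
obs, reward = 0, 0
for t in range(6):
    action = agent.act(obs, reward)   # only an action bit crosses the barrier
    obs, reward = env.step(action)    # only percept bits come back
    print(t, action, obs, reward)
```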

Real agents, on the other hand, will be in the world they're trying to learn about. A computable approximation of AIXI, like AIXItl, would be a physical object. Its environment would affect it in unseen and sometimes drastic ways; and it would have involuntary effects on its environment, and on itself. Solomonoff induction doesn't appear to be a viable conceptual foundation for artificial intelligence — not because it's an uncomputable idealization, but because it's Cartesian.

In my last post, I briefly cited three indirect indicators of AIXI's Cartesianism: immortalism, preference solipsism, and lack of self-improvement. However, I didn't do much to establish that these are deep problems for Solomonoff inductors, ones resistant to the most obvious patches one could construct. I'll do that here, in mock-dialogue form.

continue reading »

Less Wrong Study Hall - Year 1 Retrospective

47 Error 18 March 2014 01:54AM

Some time back, a small group of Less Wrongers collected in a video chatroom to work on…things. We’ve been at it for exactly one year as of today, and it seems like a good time to see what’s come of it.[1] So here is what we’ve done, what we’re doing, and a few thoughts on where we’re going. At the end is a survey taken of the LWSH, partly to be compared to Less Wrong proper, but mostly for fun. If you like what you see here, come join us. The password is “lw”.

A Brief History of the Hall

I think the first inspiration was Eliezer looking for someone to sit with him while he worked, to help with productivity and akrasia. Shannon Friedman answered the call and it seemed to be effective. She suggested a similar coworking scheme to one of her clients, Mqrius, to help him with akratic issues surrounding his thesis. She posted on Less Wrong about it, with the intent of connecting him and possibly others who wanted to co-work in a similar fashion. Tsakinis, in the comments, took the idea a step further, and created a Tinychat video chatroom for group working. It was titled the Less Wrong Study Hall. The theory was that it would help us actually do the work, instead of, say, reading TV Tropes when we should be studying. It turned out to be a decent Schelling point, enough to form a regular group and occasionally attract new people. It’s grown slowly but steadily.

Tinychat’s software sucks, and there have been a couple of efforts to replace it. Mqrius looked into OpenMeetings, but it didn’t work out. Yours truly took a crack at programming a LWSH Google Hangout, but it ran aground on technical difficulties. Meanwhile the tinychat room continued to work, and despite nobody actually liking it, it’s done the job well enough.

Tinychat is publicly available, and there have been occasional issues with the public along the way. A few people took up modding, but it was still a nuisance. Eventually a password was placed on the room, which mostly shut down the problem. We did have one guy straight out guess the password, which was a…peculiar experience. He was notably not all there, but somehow still scrupulously polite, and left when asked. I don’t think I’ve ever seen that happen on the Internet before.

A year after the Hall opened, we have about twenty to twenty-five regulars, with an unknown number of occasional users. We’re still well within Dunbar’s number, so everybody knows everybody else and new users integrate quickly. We’ve developed a reasonably firm set of social norms to guide our work, despite having neither direct technical control nor clear leaders.

continue reading »

Strategic choice of identity

67 Vika 08 March 2014 04:27PM

Identity is mostly discussed on LW in a cautionary manner: keep your identity small, be aware of the identities you are attached to. As benlandautaylor points out, identities are very powerful; while it's right to be cautious about them, we can also cultivate them deliberately to help us achieve our goals.

Some helpful identities that I have that seem generally applicable:

  • growth mindset
  • low-hanging fruit picker
  • truth-seeker
  • jack-of-all-trades (someone who is good at a variety of skills)
  • someone who tries new things
  • universal curiosity
  • mirror (someone who learns other people's skills)

Out of the above, the most useful is probably growth mindset, since it's effectively a meta-identity that allows the other parts of my identity to be fluid. The low-hanging fruit identity helps me be on the lookout for easy optimizations. The universal curiosity identity motivates me to try to understand various systems and fields of knowledge, besides the domains I'm already familiar with. It helps to give these playful or creative names, for example, "champion of low-hanging fruit". Some of these work well together, for example the "trying new things" identity contributes to the "jack of all trades" identity.

It's also important to identify unhelpful identities that get in your way. Negative identities can be vague like "lazy person" or specific like "someone who can't finish a project". With identities, just like with habits, the easiest way to reduce or eliminate a bad one seems to be to install a new one that is incompatible with it. For example, if you have a "shy person" identity, then going to parties or starting conversations with strangers can generate counterexamples for that identity, and help to displace it with a new one of "sociable person". Costly signaling can be used to achieve this - for example, joining a public speaking club. The old identity will not necessarily go away entirely, but the competing identity will create cognitive dissonance, which it can be useful to deliberately focus on. More specific identities require more specific counterexamples. Since the original negative identity makes it difficult to perform the actions that generate counterexamples, there needs to be some form of success spiral that starts with small steps.

Some examples of unhelpful identities I've had in the past were "person who doesn't waste things" and "person with poor intuition". The aversion to wasting money and material things predictably led to wasting time and attention instead. I found it useful to try "thinking like a trader" to counteract this "stingy person" identity, and get comfortable with the idea of trading money for time. Now I no longer obsess about recycling or buy the cheapest version of everything. Underconfidence in my intuition was likely responsible for my tendency to miss the forest for the trees when studying math or statistics, where I focused on details and missed the big picture ideas that are essential to actual understanding. My main objection to intuitions was that they feel imprecise, and I am trying to develop an identity of an "intuition wizard" who can manipulate concepts from a distance without zooming in. That is a cooler name than "someone who thinks about things without really understanding them", and brings to mind some people I know who have amazing intuition for math, which should help the identity stick.

There can also be ambiguously useful identities, for example I have a "tough person" identity, which motivates me to challenge myself and expand my comfort zone, but also increases self-criticism and self-neglect. Given the mixed effects, I'm not yet sure what to do about this one - maybe I can come up with an identity that only has the positive effects.

Which identities hold you back, and which ones propel you forward? If you managed to diminish negative identities, how did you do it and how far did you get?

Political Skills which Increase Income

56 Xodarap 02 March 2014 05:56PM

Summary: This article is intended for those who are "earning to give" (i.e. maximize income so that it can be donated to charity). It is basically an annotated bibliography of a few recent meta-analyses of predictors of income.

Key Results

  • The degree to which management “sponsors” your career development is an important predictor of your salary, as is how skilled you are politically.

  • Despite the stereotype of a silver-tongued salesman preying on people’s biases, rational appeals are generally the best tactic.

  • After rationality, the best tactics are types of ingratiation, including flattery and acting modest.

Ng et al. performed a metastudy of over 200 individual studies of objective and subjective career success. Here are the variables they found best correlated with salary:

  • Political Knowledge & Skills: 0.29
  • Education Level: 0.29
  • Cognitive Ability (as measured by standardized tests): 0.27
  • Age: 0.26
  • Training and Skill Development Opportunities: 0.24
  • Hours Worked: 0.24
  • Career Sponsorship: 0.22

(all significant at the p = .05 level)


(For reference, the “Big 5” personality traits all have a correlation under 0.12.)

Before we go on, a few caveats: while these correlations are significant and important, none are overwhelming (the authors cite Cohen as saying the range 0.24-0.36 is “medium” and correlations over 0.37 are “large”). Also, in addition to the usual correlation/causation concerns, there is lots of cross-correlation: e.g. older people might have greater political knowledge but less education, thereby confusing things. For a discussion of moderating variables, see the paper itself.
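
One way to make these magnitudes concrete (my illustration, not a figure from the paper) is to convert a correlation into variance explained:

$$r^2 = 0.29^2 \approx 0.08,$$

so even the strongest single predictor accounts for less than a tenth of the variation in salary on its own.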

Career Sponsorship

There are two broad models of career advancement: contest-mobility and sponsorship-mobility. They are best illustrated with an example.

Suppose Peter and Penelope are both equally talented entry-level employees. Under the contest-mobility model, they would both be equally likely to get a raise or promotion, because they are equally skilled.

Sponsorship-mobility theorists argue that even if Peter and Penelope are equally talented, it’s likely that one of them will catch the eye of senior management. Perhaps it’s due to one of them having an early success by chance, making a joke in a meeting, or simply having a more memorable name, like Penelope. This person will be singled out for additional training and job opportunities. Because of this, they’ll have greater success in the company, which will lead to still more opportunities, and so on. As a result, their initial small discrepancy in attention gets multiplied into a large differential.
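
As a stylized illustration of that compounding (the numbers are hypothetical, not from the paper): a 5% edge in opportunities per review cycle, compounded over ten cycles, becomes

$$1.05^{10} \approx 1.63,$$

i.e. roughly a 63% cumulative advantage grown from an initially tiny difference.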

The authors of the metastudy found that self-reported sponsorship levels (i.e. how much you feel the management of your company “sponsors” you) have a significant, although moderate, relationship to salary. Therefore, the level at which you currently feel sponsored in your job should be a factor when you consider alternate opportunities.

The Dilbert Effect

The strongest predictor of salary (tied with education level) is what the authors politely term “Political Knowledge & Skills” - less politely, how good you are at manipulating others.

Several popular books (such as Cialdini’s Influence) on the subject of influencing others exist, and the study of these “influence tactics” in business stretches back 30 years to Kipnis, Schmidt and Wilkinson. Recently, Higgins et al. reviewed 23 individual studies of these tactics and how they relate to career success. Their results:


(Correlations with career success; definitions from Higgins et al.)

  • Rationality (0.26): Using data and information to make a logical argument supporting one's request
  • Ingratiation (0.23): Using behaviors designed to increase the target's liking of oneself or to make oneself appear friendly in order to get what one wants
  • Upward Appeal (0.05): Relying on the chain of command, calling in superiors to help get one's way
  • Self-Promotion (0.01): Attempting to create an appearance of competence or that you are capable of completing a task
  • Assertiveness (-0.02): Using a forceful manner to get what one wants
  • Exchange (-0.03): Making an explicit offer to do something for another in exchange for their doing what one wants

(Only ingratiation and rationality are significant.)

This site has a lot of information on how to make rational appeals, so I will focus on the less-talked-about ingratiation techniques.

How to be Ingratiating

Gordon analyzed 69 studies of ingratiation and found the following results, reported as the weighted Cohen’s d difference between control and intervention groups. (Unlike the previous two sections, success here is measured in lab tests as well as in career advancement; however, similar but less comprehensive results have been found for career success specifically.)

  • Other Enhancement (d = 0.31): Flattery
  • Opinion Conformity (d = 0.23): “Go along to get along”
  • Self-presentation (d = 0.15): Any of the following tactics: self-promotion, self-deprecation, apologies, positive nonverbal displays and name usage
  • Combination (d = 0.10): Includes studies where the participants weren’t told which strategy to use, in addition to when they were instructed to use multiple strategies
  • Rendering Favors (d = 0.05)


Self-presentation is split further:

  • Modesty (d = 0.77)
  • Apology (d = 0.59): Apologizing for poor performance
  • Generic (d = 0.28): When the participant is told in generic terms to improve their self-presentation
  • Nonverbal behavior and name usage (d = -0.14): Nonverbal behavior includes things like wearing perfume; name usage means referring to people by name instead of a pronoun
  • Self-promotion (d = -0.17)

Moderators

One important moderator is the direction of the appeal. If you are talking to your boss, your tactics should be different than if you’re talking to a subordinate. Other-enhancement (flattery) is always the best tactic no matter who you’re talking to, but when talking to superiors it’s by far the best. When talking to those at similar levels to you, opinion conformity comes close to flattery, and the other techniques aren't far behind.

Unsurprisingly, when the target realizes you’re being ingratiating, the tactic is less effective. (Although effectiveness doesn’t go to zero - even when people realize you’re flattering them just to suck up, they generally still appreciate it.) Also, women are better at being ingratiating than men, and men are more influenced by these ingratiating tactics than women. The most important caveat is that lab studies find much larger effect sizes than in the field, to the extent that the average field effect for the ingratiating tactics is negative. This is probably due to the fact that lab experiments can be better controlled.

Conclusion

It’s unlikely that a silver-tongued receptionist will out-earn an introverted engineer. But simple techniques like flattery and attempting to get "sponsored" can appreciably improve returns, to the extent that political skills are one of the strongest predictors of salaries.

 

I would like to thank Brian Tomasik and Gina Stuessy for reading early drafts of this article.

References

Cohen, Jacob. Statistical Power Analysis for the Behavioral Sciences. Psychology Press, 1988.

Gordon, Randall A. "Impact of ingratiation on judgments and evaluations: A meta-analytic investigation." Journal of Personality and Social Psychology 71.1 (1996): 54.

Higgins, Chad A., Timothy A. Judge, and Gerald R. Ferris. "Influence tactics and work outcomes: A meta-analysis." Journal of Organizational Behavior 24.1 (2003): 89-106.

Judge, Timothy A., and Robert D. Bretz Jr. "Political influence behavior and career success." Journal of Management 20.1 (1994): 43-65.

Kipnis, David, Stuart M. Schmidt, and Ian Wilkinson. "Intraorganizational influence tactics: Explorations in getting one's way." Journal of Applied Psychology 65.4 (1980): 440.

Ng, Thomas W. H., et al. "Predictors of objective and subjective career success: A meta-analysis." Personnel Psychology 58.2 (2005): 367-408.

Solomonoff Cartesianism

18 RobbBB 02 March 2014 05:56PM

Followup to: Bridge Collapse, An Intuitive Explanation of Solomonoff Induction, Reductionism

Summary: If you want to predict arbitrary computable patterns of data, Solomonoff induction is the optimal way to go about it — provided that you're an eternal transcendent hypercomputer. A real-world AGI, however, won't be immortal and unchanging. It will need to form hypotheses about its own physical state, including predictions about possible upgrades or damage to its hardware; and it will need bridge hypotheses linking its hardware states to its software states. As such, the project of building an AGI demands that we come up with a new formalism for constructing (and allocating prior probabilities to) hypotheses. It will not involve just building increasingly good computable approximations of AIXI.


 

Solomonoff induction has been cited repeatedly as the theoretical gold standard for predicting computable sequences of observations.1 As Hutter, Legg, and Vitanyi (2007) put it:

Solomonoff's inductive inference system will learn to correctly predict any computable sequence with only the absolute minimum amount of data. It would thus, in some sense, be the perfect universal prediction algorithm, if only it were computable.

Perhaps you've been handed the beginning of a sequence like 1, 2, 4, 8… and you want to predict what the next number will be. Perhaps you've paused a movie, and are trying to guess what the next frame will look like. Or perhaps you've read the first half of an article on the Algerian Civil War, and you want to know how likely it is that the second half describes a decrease in GDP. Since all of the information in these scenarios can be represented as patterns of numbers, they can all be treated as rule-governed sequences like the 1, 2, 4, 8… case. Complicated sequences, but sequences all the same.

It's been argued that in all of these cases, one unique idealization predicts what comes next better than any computable method: Solomonoff induction. No matter how limited your knowledge is, or how wide the space of computable rules that could be responsible for your observations, the ideal answer is always the same: Solomonoff induction.

Solomonoff induction has only a few components. It has one free parameter, a choice of universal Turing machine. Once we specify a Turing machine, that gives us a fixed encoding for the set of all possible programs that print a sequence of 0s and 1s. Since every program has a specification, we call the number of bits in the program's specification its "complexity"; the shorter the program's code, the simpler we say it is.

Solomonoff induction takes this infinitely large bundle of programs and assigns each one a prior probability proportional to its simplicity. Every time a program requires one more bit, its prior probability goes down by a factor of 2, since there are twice as many possible programs of that greater length. With a prefix-free encoding of programs, this guarantees that the prior probabilities of all programs sum to at most 1 (and can then be normalized), even though the number of programs is infinite.2
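
To make the weighting scheme concrete, here is a minimal Python sketch on a toy machine of my own devising (each "program" is just a bit pattern that repeats forever, standing in for a real universal Turing machine, so this is vastly weaker than actual Solomonoff induction): programs get prior 2^-length, programs inconsistent with the observed bits are discarded, and the next bit is predicted by the posterior-weighted vote of the survivors.

```python
from itertools import product

def toy_programs(max_len):
    """Enumerate 'programs' up to max_len bits. On this toy machine a program
    is a nonempty bit pattern whose output is that pattern repeated forever."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def run(program, n):
    """Return the first n bits of the program's output."""
    return (program * (n // len(program) + 1))[:n]

def predict_next_bit(observed, max_len=12):
    """Solomonoff-style prediction on the toy machine: weight each program by
    2^-length, keep only programs consistent with `observed`, and return the
    posterior probability that the next bit is '1'."""
    weight_one = weight_total = 0.0
    for p in toy_programs(max_len):
        if run(p, len(observed)) != observed:
            continue                      # inconsistent with the data, so discard
        prior = 2.0 ** -len(p)            # shorter programs get more prior mass
        weight_total += prior
        if run(p, len(observed) + 1)[-1] == "1":
            weight_one += prior
    return weight_one / weight_total

print(predict_next_bit("010101"))   # close to 0: the short program "01" dominates
```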

continue reading »

Bridge Collapse: Reductionism as Engineering Problem

42 RobbBB 18 February 2014 10:03PM

Followup to: Building Phenomenological Bridges

Summary: AI theorists often use models in which agents are crisply separated from their environments. This simplifying assumption can be useful, but it leads to trouble when we build machines that presuppose it. A machine that believes it can only interact with its environment in a narrow, fixed set of ways will not understand the value, or the dangers, of self-modification. By analogy with Descartes' mind/body dualism, I refer to agent/environment dualism as Cartesianism. The open problem in Friendly AI (OPFAI) I'm calling naturalized induction is the project of replacing Cartesian approaches to scientific induction with reductive, physicalistic ones.


 

I'll begin with a story about a storyteller.

Once upon a time — specifically, 1976 — there was an AI named TALE-SPIN. This AI told stories by inferring how characters would respond to problems from background knowledge about the characters' traits. One day, TALE-SPIN constructed a most peculiar tale.

Henry Ant was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned.

Since Henry fell in the river near his friend Bill, TALE-SPIN concluded that Bill rescued Henry. But for Henry to fall in the river, gravity must have pulled Henry. Which means gravity must have been in the river. TALE-SPIN had never been told that gravity knows how to swim; and TALE-SPIN had never been told that gravity has any friends. So gravity drowned.

TALE-SPIN had previously been programmed to understand involuntary motion in the case of characters being pulled or carried by other characters — like Bill rescuing Henry. So it was programmed to understand 'character X fell to place Y' as 'gravity moves X to Y', as though gravity were a character in the story.1
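
To see how that rule misfires, here is a toy reconstruction in Python (my own simplified sketch of the inference pattern, not TALE-SPIN's actual 1976 code): 'X fell to Y' is rewritten as 'Gravity moved X to Y', which also places Gravity at the destination, and a separate rule drowns anything in the river that can't swim.

```python
# Toy reconstruction of the failure described above; a simplified sketch of the
# inference pattern, not TALE-SPIN's actual rules. (The real run also had a
# rescue rule for Henry; the point here is only how Gravity ends up in the river.)

knows_how_to_swim = {"Henry Ant": False, "Bill Bird": False, "Gravity": False}

def fell(character, place, facts):
    # The problematic rule: falling is represented as involuntary motion caused
    # by Gravity, as though Gravity were a character who travels to the place too.
    facts.append(("moved", "Gravity", character, place))
    facts.append(("at", character, place))
    facts.append(("at", "Gravity", place))

def infer_drownings(facts):
    # Anything located in the river that doesn't know how to swim drowns.
    victims = []
    for fact in facts:
        if fact[0] == "at" and fact[2] == "river" and not knows_how_to_swim.get(fact[1], False):
            victims.append(fact[1])
    return victims

facts = []
fell("Henry Ant", "river", facts)
for victim in infer_drownings(facts):
    print(victim, "drowned.")   # prints "Henry Ant drowned." and "Gravity drowned."
```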

For us, the hypothesis 'gravity drowned' has low prior probability because we know gravity isn't the type of thing that swims or breathes or makes friends. We want agents to seriously consider whether the law of gravity pulls down rocks; we don't want agents to seriously consider whether the law of gravity pulls down the law of electromagnetism. We may not want an AI to assign zero probability to 'gravity drowned', but we at least want it to neglect the possibility as Ridiculous-By-Default.

When we introduce deep type distinctions, however, we also introduce new ways our stories can fail.

continue reading »
