You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Dark Arts: Defense in Reputational Warfare

1 OrphanWilde 03 December 2015 03:03PM

First, the Dark Arts are, as the name implies, an art, not a science.  Likewise, defending against them is an art.  An artful attacker can turn expected defenses against you; if you can be anticipated, you can be defeated.  The rules, therefore, are guidelines.  I'm going to stage the rules in narrative form; they don't need to be staged that way, because life doesn't follow a narrative.  The narrative exists to give them context, to give the reader a sense of the purpose of each rule.

Rule #0: Never follow the rules if they would result in a worse outcome.  

 


 

Now, generally, the best defense is to never get attacked in the first place.  Security through obscurity is your first line of defense.  Translations of Sun Tzu vary somewhat, but your ideal form is to be formless, by which I mean: do not be a single point of attack or defense.  If there's a mob in your vicinity, the ideal place is neither outside it nor leading it, but as a faceless stranger within it.  Even better is to be nowhere near a mob.  This is the fundamental basis of not being targeted; the other two rules derive from this one.

Rule #1: Do not stand out.

 

Sometimes you're picked out.  There's a balancing art with this next piece; you don't want to stand out, to be a point of attack, but if somebody is picking faces, you want to look slightly more dangerous than your neighbor, you want to look like a hard target.  (But not when somebody is looking for hard targets.  Obviously.)

Rule #2: Look like an unattractive target.

 

The third aspect of this is somewhat simpler, and I'll borrow the phrasing from HPMoR:

Rule #3: "I will not go around provoking strong, vicious enemies" - http://hpmor.com/chapter/19

 

The first triplet of rules is, by and large, about -not- being attacked in the first place.  These are starting points; Rule #1, for example, culminates in not existing at all.  You can't attack what doesn't exist.  Rule #1 is the fundamental strategy of Anonymous.  Rule #2 is about encouraging potential attackers to look elsewhere; where Rule #1 is passive, Rule #2 is its passive-aggressive form.  It's the fundamental strategy of home security - why else do you think security companies put signs in the yard saying the house is protected?  Rule #3 is obvious: don't make enemies in the first place, and particularly don't make dangerous enemies.  It has critical importance beyond its obvious nature, however - an enemy might not care if they get hurt in the process of hurting you, which limits your strategies for dealing with them considerably.

 


 

You've messed up the first three rules.  You're under attack.  What now?  Manage the Fight.  Your attacker starts with the home field advantage - they attacked you under the terms they are most comfortable with.  Change the terms, immediately.  Do not concede that advantage.  Like Rule #1, Rule #4 is the basis of your First Response; Rules #5 and #6 derive from it.  The simplest approach is the least obvious: immediate surrender, but on your terms.  If you're accused of something, admit to the weakest and least harmful version of whatever is true (be specific, and deny as necessary), and say you're aware of your problem and working on improving.  This works whether or not there's an audience, but works best if there is one.

Rule #4: Change the terms of the fight to favor yourself, or disfavor your opponent.

 

Sometimes, the best response to an attack is no response at all.  Is anybody (important) going to take it seriously?  If not, then the very worst thing you can do is respond, because that validates the attack.  If you do need to respond, respond as lightly as possible; do not respond as if the accusation is serious or matters, because that lends weight to it.  If there's no audience, or a limited audience, responding gives your attacker an opportunity to continue the attack.  If there's a risk of them physically assaulting you, ignoring them is probably a bad idea; a polite non-response is ideal in that situation.  (For crowds that pose a risk of physically assaulting you... you need more rules than I'm going to write here.)

Rule #5: Use the minimum force necessary to respond.

 

It's tempting to attack back: don't.  You'll escalate the situation, and escalation favors whoever is better at this; worse, in a public Dark Arts battle, even the better player is going to take some hits.  Nobody wins.  Instead, mine the battlefield, and make sure your opponent sees you mining it.  If you're accused of something, suggest that both you and your opponent know the accused thing isn't as uncommon as generally represented.  Hint at shared knowledge.  Make it clear you'll take them down with you.  If they're actually good at this, they'll get the hint.  (This is another reason it's critically important not to make enemies: against somebody who doesn't mind going down with you, this strategy fails.)

Rule #6: Make escalation prohibitively costly.

 

You might recognize some elements of martial arts here.  There are similarities, enough that one is useful to the other, but they are not the same.

 


 

You're in a fight, and your opponent is persistent, or you messed up and now things are serious.  What now?  First, continue to Manage the Fight.  Your goal now is to end the fight; the total damage you're going to suffer is a function of both the amplitude of escalation and the length of the fight.  You've failed to manage the amplitude; manage the length.

Rule #7: End fights fast.

 

At this point you've been reasonable and defensive, and that hasn't worked.  Now you need to go on the offensive.  Your defense should be light and easy, continuing to react with the lightest necessary touch and to ignore anything you don't need to react to; your attack should be brutal, and put your opponent on the defensive immediately.  Attack them first on the basis of their harassment of you, then build up to any personal attacks you've been holding back on - your goal is to impart the tone of somebody who has been put-upon and has had enough.

Rule #8: Hit hard.

 

And immediately stop.  If you've pulled off your counterattack right, they'll offer up defenses.  Just quit the battle.  Do not be tempted by a follow-up attack; you were angry, you vented your anger, you're done.  By not following up on the attack, by not attacking their defenses, you're leaving them no reasonable way to respond.  Any continuing attacks can be safely ignored; they will look completely pathetic going forward.

Rule #9: Recognize when you've won, and stop.

 

Defense follows different rules than attack.  In defense, you aren't trying to inflict wounds, you're trying to avoid them.  Ending the fight quickly is paramount to this.

The Winding Path

6 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition. The "supernatural" component generally included in the definition is superfluous, because supernatural merely refers to that which isn't part of nature - which is to say, reality - an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you'd have developed a good AI.  It is engaging in superstition: it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.
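The green-room agent described above can be sketched in a few lines. This is a hypothetical illustration; the class name and the death threshold are my own inventions, not anything from the post:

```python
from collections import Counter

class SuperstitiousAgent:
    """Avoids any room feature that has co-occurred with death,
    without ever modeling the actual cause of death."""

    def __init__(self, death_threshold=5):
        self.death_threshold = death_threshold
        self.deaths_by_color = Counter()

    def record_death(self, room_color):
        self.deaths_by_color[room_color] += 1

    def will_enter(self, room_color):
        # The agent treats the color as dangerous once enough deaths
        # have co-occurred with it -- a superstition, since the color
        # may be entirely incidental to the real cause.
        return self.deaths_by_color[room_color] < self.death_threshold

agent = SuperstitiousAgent()
for _ in range(5):
    agent.record_death("green")

print(agent.will_enter("green"))  # False: avoids green-walled rooms
print(agent.will_enter("blue"))   # True: no bad associations yet
```

The point of the sketch is that the agent's rule is useful precisely so long as it remains discardable: fresh evidence (surviving a green room) could and should overturn it.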

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping, in assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones, and more importantly, identifying which answers are indeed wrong, which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.

 


 

Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback into the aesthetic interpretation of Less Wrong, rather than the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

I Want To Believe: Rational Edition

4 27chaos 18 November 2014 08:00PM

Relevant: http://lesswrong.com/lw/k7h/a_dialogue_on_doublethink/

I would like this conversation to operate under the assumption that there are certain special times when it is instrumentally rational to convince oneself of a proposition whose truth is indeterminate, and when it is epistemically rational as well. The reason I would like this conversation to operate under this assumption is that I believe questioning this assumption makes it more difficult to use doublethink for productive purposes. There are many other places on this website where the ethics or legitimacy of doublethink can be debated, and I am already aware of its dangers, so please don't mention such things here.

I am hoping for some advice. "Wanting to believe" can be both epistemically and instrumentally rational, as in the case of certain self-fulfilling prophecies. If believing that I am capable of winning a competition will cause me to win, believing that I am capable of winning is rational both in the instrumental sense that "rationality is winning" and in the epistemic sense that "rationality is truth".

I used to be quite good at convincing myself to adopt beliefs of this type when they were beneficial. It was essentially automatic, I knew that I had the ability and so applying it was as trivial as remembering its existence. Nowadays, however, I'm almost unable to do this at all, despite what I remember. It's causing me significant difficulties in my personal life.

How can I redevelop my skill at this technique? Practicing will surely help, and I'm practicing right now so therefore I'm improving already. I'll soon have the skill back stronger than ever, I'm quite confident. But are there any tricks or styles of thinking that can make it more controllable? Any mantras or essays that will help my thought to become more fluidly self-directed? Or should I be focused on manipulating my emotional state rather than on initiating a direct cognitive override?

I feel as though the difficulties I've been having become most pronounced when I'm thinking about self-fulfilling prophecies that do not have guarantees of certainty attached. The lower my estimated probability that the self-fulfilling prophecy will work for me, the less able I am to use the self-fulfilling prophecy as a tool, even if the estimated gains from the bet are large. How might I deal with this problem, specifically?

Intentionally Raising the Sanity Waterline

12 Gleb_Tsipursky 13 November 2014 08:25PM

Hi all, I’m a social entrepreneur, professor, and aspiring rationalist. My project is Intentional Insights. This is a new nonprofit I co-founded with my wife and other fellow aspiring rationalists in the Columbus, OH Less Wrong meetup. The nonprofit emerged from our passion to promote rationality among the broad masses. We use social influence techniques, create stories, and speak to emotions. We orient toward creating engaging videos, blogs, social media, and other content that an aspiring rationalist like yourself can share with friends and family members who would not be open to rationality proper due to the Straw Vulcan misconception. I would appreciate any advice and help from fellow aspiring rationalists. The project is described more fully below, but for those for whom that’s tl;dr, there is a request for advice and allies at the bottom.

Since I started participating in the Less Wrong meetup in Columbus, OH and reading Less Wrong, what seems like ages ago, I can hardly remember my past thinking patterns. Because of how much awesomeness it brought to my life, I have become one of the lead organizers of the meetup. Moreover, I find it really beneficial to bring rationality into my research and teaching as a tenure-track professor at Ohio State, where I am a member of the Behavioral Decision-Making Initiative. Thus, my scholarship brings rationality into historical contexts, for example in my academic articles on agency, emotions, and social influence. In my classes I have students engage with the Checklist of Rationality Habits and other readings that help advance rational thinking.

As do many aspiring rationalists, I think rationality can bring such benefits to the lives of many others, and also help improve our society as a whole by leveling up rational thinking, secularizing society, and thus raising the sanity waterline.  Our experience in the Columbus Less Wrong group has shown that to do this, we need to get people interested in rationality by showing them its benefits and how it can solve their problems, delivering complex ideas in an engaging and friendly fashion targeted at a broad public, using active learning strategies, and connecting rationality to what they already know.  This is what I do in my teaching, and it is the current best practice in educational psychology.  It has worked great with my students when I began to teach them rationality concepts.  Yet I do not know of any current rationality trainings that do this.  Currently, such education in rationality is available mainly through the excellent, intense 4-day workshops of the Center for Applied Rationality (CFAR), usually held in the San Francisco area, which are aimed at a "select group of founders, hackers, and other ambitious, analytical, practically-minded people."  We are targeting a much broader and less advanced audience, the upper 50-85%, while CFAR primarily targets the top 5-10%.  We have had great interactions with Anna Salamon, Julia Galef, Kenzi Amodei, and other CFAR folks, and plan to collaborate with them on various ways to do rationality outreach.  Besides CFAR, there are also some online classes on decision-making from Clearer Thinking, as well as some other resources we list on the Intentional Insights resources page.  However, we really wanted to see something oriented at the broad public, which can gain a great deal from a much lower level of education in rationality made accessible and relevant to everyday lives and concerns, and delivered in a fashion perceived as interesting, fun, and friendly by mass audiences, as we aim to do with our events.

Intentional Insights came from this desire. This nonprofit explicitly orients toward getting the broad masses interested in and learning about rationality by providing fun and engaging content delivered in a friendly manner. What we want to do is use various social influence methods and promote rationality as a self-improvement/leadership development offering for people who are not currently interested in rational thinking because of the Straw Vulcan image, but who are interested in self-improvement, professional development, and organizational development. As people become more advanced, we will orient them toward more advanced rationality, at Less Wrong and elsewhere. Now, there are those who believe rationality should be taught only to those who are willing to put in the hard work and effort to overcome the high barrier to entry of learning all the jargon. However, we are reformers, not revolutionaries, and believe that some progress is better than no progress. And the more aspiring rationalists engage in various projects aimed to raise the sanity waterline, using different channels and strategies, the better. We can all help and learn from each other, adopting an experimental attitude and gathering data about what methods work best, constantly updating our beliefs and improving our abilities to help more people gain greater agency.

The channels of delivery locally are classes and workshops. Here is what one college student participant wrote after a session: “I have gained a new perspective after attending the workshop. In order to be more analytical, I have to take into account that attentional bias is everywhere. I can now further analyze and make conclusions based on evidence.” This and similar statements seem to indicate some positive impact, and we plan to gather evidence to examine whether workshop participants adopt more rational ways of thinking and how the classes influence people’s actual performance over time.

We have a website that takes this content globally, as well as social media such as Facebook and Twitter. The website currently has:

- Blog posts, such as on agency; polyamory and cached thinking; and life meaning and purpose. We aim to make them easy to read and engaging, to get people interested in rational thinking. These are targeted at a high school reading level - the type of fun posts aspiring rationalists can share with friends or family members whom they may want to get into rationality, or at least show what rationality is all about.
- Videos with similar content to the blog posts, such as on evaluating reality clearly, and on meaning and purpose
- A resources page, with links to prominent rationality venues, such as Less Wrong, CFAR, HPMOR, etc.

It will eventually have:

- Rationality-themed merchandise, including stickers, buttons, pens, mugs, t-shirts, etc.
- Online classes teaching rationality concepts
- A wide variety of other products and offerings, such as e-books and apps

Now, why my wife and I, and the Columbus Less Wrong group? To this project, I bring my knowledge of educational psychology, research expertise, and teaching experience; my wife her expertise as a nonprofit professional with an MBA in nonprofit management; and other Board members include a cognitive neuroscientist, a licensed therapist, a gentleman adventurer, and other awesome members of the Columbus, OH, Less Wrong group.

Now, I can really use the help of wise aspiring rationalists to help out this project:

1) If you were trying to get the Less Wrong community engaged in the project, what would you do?

2) If you were trying to promote this project broadly, what would you do? What dark arts might you use, and how?

3) If you were trying to get specific groups and communities interested in promoting rational thinking in our society engaged in the project, what would you do? What dark arts might you use, and how?

4) If you were trying to fundraise for this project, what would you do? What dark arts might you use, and how?

5) If you were trying to persuade people to sign up for workshops or check out a website devoted to rational thinking, what would you do? How would you tie it to people’s self-interest and everyday problems that rationality might solve? What dark arts might you use, and how?

6) If you were trying to organize a nonprofit devoted to doing all the stuff above, what would you do to help manage its planning and organization? What about managing relationships and group dynamics?

Besides the advice, I invite you to ally with us and collaborate on this project in whatever way is optimal for you. Money is very helpful right now as we are fundraising to pay for costs associated with starting up the nonprofit, around $3600 through the rest of 2014, and you can donate directly through our website. Your time, intellectual capacity, and any specific talents would also be great, on things such as giving advice and helping out on specific tasks/projects, developing content in the form of blogs, videos, etc., promoting the project to those you know, and other ways to help out.

Leave your thoughts in comments below, or you can get in touch with me at gleb@intentionalinsights.org. I hope you would like to ally with us to raise the sanity waterline!

 

EDIT: Based on your feedback, we've decided that this post on polyamory and cached thinking is probably a bad fit for what we want to promote right now. We've removed it from the main index of our site. Thanks for helping!

How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence

9 mszegedy 24 April 2014 09:41PM

It took me until my third reading of The Things They Carried to realize that it contained something very valuable to rationalists. In "The Logical Fallacy of Generalization from Fictional Evidence," EY explains how using fiction as evidence is bad not only because fiction is deliberately wrong in particular ways that make it more interesting, but more importantly because it does not provide a probabilistic model of what happened, and gives at best a bit or two of evidence that looks like a hundred or more bits of evidence.

Some background: The Things They Carried is a book by Tim O'Brien that reads as an autobiography in which he recollects various stories from being a soldier in the Vietnam War. However, O'Brien often repeats himself, writing the same story over again, but with details or entire events changed. It is actually a fictional autobiography; O'Brien was in the Vietnam War, but all the stories are fictional.

In The Things They Carried, Tim O'Brien not only explains how generalization from fictional evidence is bad, but also offers his own solution to the problem, one that actually works: it gives the reader a useful probabilistic model of what happened in a way that actually interests the reader. He does this by telling his stories many times, changing significant things about them. Literally: he contradicts himself, writing out the same story but with things changed. The best illustration of the principle in the book is the chapter "How to Tell a True War Story," found here (PDF warning, and bad typesetting warning).

A reader is not inclined to read a list of probabilities, but they are inclined to read a bunch of short stories. He talks about this practice a lot in the book itself, writing, "All you can do is tell it one more time, patiently, adding and subtracting, making up a few things to get at the real truth. … You can tell a true war story if you just keep on telling it." He always says war story, but the principle generalizes. At one point, he has a character represent the forces that act on conventional writing, telling a storyteller that he cannot say that he doesn't know what happened, and that he cannot insert any analysis.

O'Brien also writes about a lot of other things I don't want to mention more than briefly here, such as the specific ways in which the model that conventional war stories give of war is wrong, and specific ways in which the audience misinterprets stories. I recommend the book very much, especially if you think writing "tell multiple short stories" fiction is a great idea and want to do it.

I apologize if this post has been made before.

EDIT: Tried to clarify the idea better. I added an example with an excerpt.

EDIT 2: Added a better excerpt.

EDIT 3: Added a paragraph about background.

How to become a PC?

15 DataPacRat 26 January 2014 06:49PM

"Cryonics has a 95% chance of failure, by my estimation; it would be downright /embarrassing/ to die on the day before real immortality is discovered. Thus, I want to improve my general health and longevity."

That thought has gotten me through three weeks of gradually increasing exercise and diet improvement (I'm eating an apple right now) - but my enthusiasm is starting to flag. So I'm looking for new thoughts that will help me keep going, and keep improving. A few possibilities that I've thought of:

Pride: "If I'm so smart, then I should be able to do /better/ than those other people who don't even know about Bayesian updates, let alone the existence of akrasia..."

Sloth: "If I stop now, it's going to be /so much/ harder and more painful to start up again, instead of just keeping on keeping on..."

Desire: "I already like hiking and camping - if I keep this up, I'll be able to carry enough weight to finally take that long trip I've occasionally considered..."

Curiosity: "I'm as geeky a nerd as you can find. I wonder how far I can hack my own body?"

Pride again: "I already keep a hiker's first-aid kit in my pocket, and make other preparations for events that happen rarely. How stupid do I have to be not to put at least that much effort into making my everyday life easier?"

 

Does anyone have any experience in such self-motivation? Does this set of mental tricks seem like a sufficiently viable approach? Are there any other approaches that seem worth a shot?

[LINK] What Can Internet Ads Teach Us About Persuasion?

5 JQuinton 22 October 2013 03:31PM

How "one weird trick" conquered the Internet. Some excerpt I found interesting:

“Research on persuasion shows the more arguments you list in favor of something, regardless of the quality of those arguments, the more that people tend to believe it,” Norton says. “Mainstream ads sometimes use long lists of bullet points—people don’t read them, but it’s persuasive to know there are so many reasons to buy.” OK, but if more is better, then why only one trick? “People want a simple solution that has a ton of support.”

I actually see this technique used in a lot of religious apologetics. There's even a name for one of them: The Gish Gallop. Would it be fair to say that this technique is taking advantage of a naive or intuitive understanding of Bayesian updates?

What about all the weirdness? “A word like ‘weird’ is not so negative, and kind of intriguing,” says Oleg Urminsky of the University of Chicago Booth School of Business. “There’s this foot-in-the-door model. If you lead with a strong, unbelievable claim it may turn people off. But if you start with ‘isn’t this kind of weird?’ it lowers the stakes.” The model also explains why some ads ask you to click on your age first. “Giving your age is low-stakes but it begins the dialogue. The hard sell comes later.”

The "click on your age" first gambit seems a bit like Cached Selves.

“People tend to think something is important if it’s secret,” says Michael Norton, a marketing professor at Harvard Business School. “Studies find that we give greater credence to information if we’ve been told it was once ‘classified.’ Ads like this often purport to be the work of one man, telling you something ‘they’ don’t want you to know.” The knocks on Big Pharma not only offered a tempting needle-free fantasy; they also had a whiff of secret knowledge, bolstering the ad’s credibility

Humanity's love affair with secrecy and its importance seems to go back quite a bit. The world's largest religion seems to have started out as one of many mystery religions in the Greco-Roman world at the time.

Dark Arts 101: Winning via destruction and dualism

-13 PhilGoetz 21 September 2013 01:53AM

Recalling first that life is a zero-sum game, it is immediately obvious that the quickest and easiest path to success is not to accomplish things yourself—that's a game for heroes and other suckers—but to tear down the accomplishments and reputations of others.  Destruction is easy.  The difficulty lies in constructing the situation so that the destruction is to your net benefit.

continue reading »

[link] An attempt to reduce Epistemic Viciousness in the martial arts / an empirical analysis of WSD training.

7 beoShaffer 31 December 2011 07:50PM

http://kojutsukan.blogspot.com/2011/12/womens-self-defence-courses-effective.html

http://kojutsukan.blogspot.com/2011/12/womens-self-defence-effective-or-not-pt.html

 

The linked articles' theoretical topic, the effectiveness of women's self-defence (WSD) courses, is not of particular interest to LW. However, they are also a pushback against epistemic viciousness in the martial arts.  The author analyzes WSD courses in the light of actual studies on sexual violence and the effectiveness of various methods of resistance.  They also make several direct references to the problems with martial arts epistemology and some of their causes.  Thus, I recommend them to anyone interested in the martial arts and rationality.  For that matter I recommend the entire blog, which is largely about the science of martial arts.

Optimal User-End Internet Security (Or, Rational Internet Browsing)

1 [deleted] 09 September 2011 06:23PM

Hacking and Cracking, Internet security, Cypherpunk. I find these topics fascinating as well as completely over my head.

Yet, there are still some things that can be said to a layman, especially by the ever-poignant Randall Munroe:

Password Strength

Password Reuse

I'm guilty on both charges (reusing poorly formulated passwords, not stealing them).
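The first comic's argument can be sketched numerically. This is a rough sketch, and the 2048-word dictionary (about 11 bits per word, matching the comic's estimate for a passphrase of common words) is an assumption:

```python
import math

def passphrase_entropy_bits(num_words, wordlist_size=2048):
    # Each word drawn uniformly and independently from the list
    # contributes log2(wordlist_size) bits of entropy.
    return num_words * math.log2(wordlist_size)

print(passphrase_entropy_bits(4))  # 44.0 bits for a four-word passphrase
```

Four random common words yield roughly 44 bits, which the comic argues beats a short, hard-to-remember password built from substitutions and punctuation.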

These arguments may just be the tip of the iceberg of a much larger problem that needs optimizing: Social Engineering, or mainly how it can be used against our interests (to quote Person 2, "It doesn't matter how much security you put on the box.  Humans are not secure."). I get the feeling that I'm not managing my risks on the Internet as well as I should.
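Munroe's "Password Strength" comic is essentially an argument about entropy: a passphrase of several random common words beats a short, mangled password. As a hedged sketch (the toy wordlist and sizes below are illustrative assumptions, not from the comic), the strength of such a passphrase can be estimated like this:

```python
import math
import secrets

def passphrase(wordlist, n_words=4):
    """Pick n_words uniformly at random from wordlist.
    Entropy in bits = n_words * log2(len(wordlist))."""
    words = [secrets.choice(wordlist) for _ in range(n_words)]
    bits = n_words * math.log2(len(wordlist))
    return " ".join(words), bits

# Toy 8-word list for illustration; a real list (e.g. Diceware's ~7776
# words) gives 4 * log2(7776) ≈ 51.7 bits for four words.
toy = ["correct", "horse", "battery", "staple",
       "orbit", "lantern", "mosaic", "pepper"]
phrase, bits = passphrase(toy, 4)
print(phrase, f"({bits:.1f} bits)")
```

With the toy list the entropy is only 12 bits; the point is that strength comes from the size of the word pool and the number of words, not from clever substitutions.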

So the questions I ask are: In what ways do our cognitive biases come into play when we surf the Internet and interact with others? Which of these biases can we actively protect against, and how? I've enforced HTTPS when available, as well as kept my Internet use iconoclastic rather than typical, but I doubt that's a comprehensive list.

I don't know how usefully I can contribute, but I hope that many on Less Wrong can.

Topic Search Poll Results and Short Reports

6 Nic_Smith 09 August 2011 06:28AM

At the end of June, I asked Less Wrong to vote for "What topic[s] would be best for an investigation and brief post?" in order to direct a search for topics to examine here. My thanks to everyone that participated (especially since the comments hint that the poll format was not well-liked). The most-wanted topics follow, and the complete list can be found on Google Docs -- maps and graphs related to the poll are also available on All Our Ideas. A score for a topic in the results below is an "estimated [percent] chance that it will win against a randomly chosen idea."

  1. Systems theory -- 71.6
  2. Leadership -- 70.7
  3. Linguistics (general) -- 70.7
  4. Finance -- 67.0
  5. Bayesian approach to business -- 60.7
  6. Lisp (Programming language) -- 59.7
  7. Anthropology (general) -- 59.4
  8. Sociology (general) -- 59.2
  9. Political Science (general) -- 58.5
  10. Historiography (the methods of history) -- 58.3
  11. Logistics -- 56.8
  12. Sociology of Political Organizations -- 56.0
  13. Military Theory -- 52.1
  14. Diplomacy -- 51.1

Systems theory, in first place, is a topic that I found while rummaging through online sources, including Wikipedia, for items to add to the poll; it's described there as the "study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. [....] In this context the word systems is used to refer specifically to self-regulating systems, i.e. that are self-correcting through feedback." Leadership seems to fall into both the social and "being effective" categories of interest, but has only lightly been touched on in previous discussion here despite a lot of ink spilled on the topic elsewhere -- the top Google results for "leadership" on this site are currently Calcsam's post on community roles and a book review for the Arbinger Institute's Leadership and Self Deception. "To Lead, You Must Stand Up" also comes to mind.
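The "self-correcting through feedback" notion in that Wikipedia description can be illustrated with a minimal thermostat-style loop (a hypothetical sketch, not anything from the poll or its sources):

```python
def regulate(temp, setpoint=20.0, gain=0.5, steps=20):
    """Proportional negative feedback: each step, correct a fixed
    fraction of the error between current state and setpoint."""
    history = [temp]
    for _ in range(steps):
        temp += gain * (setpoint - temp)  # error shrinks geometrically
        history.append(temp)
    return history

traj = regulate(5.0)
print(traj[0], traj[-1])  # starts at 5.0, converges toward 20.0
```

Negative feedback of this kind is the basic mechanism behind the "self-regulating systems" the description refers to, from thermostats to homeostasis.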

How to Use It

The spreadsheet includes columns for "Currently Investigated By" and "Writeup URLs" -- feel free to add your name or writeup links. If you already know a thing or two about one of the above topics, share your knowledge in a comment below or in a discussion post as appropriate, similar to the earlier "What can you teach us?" If you want to survey what currently exists on a topic, grab a few books, investigate, and then let us know what you found. When a related post instead of just a comment is appropriate, I recommend the tag "topic_search". As mentioned previously, even investigations that end in a comment to this post saying a topic isn't useful for LW are still themselves useful for the search.

Shortening the Unshortenable Way

-2 Duk3 26 July 2011 06:44AM

 

or

A Starting Point for Defense against Flexible Dark Artists and Circumstances

 

In On Seeking a Shortening of the Way the assertion “Maybe we're not geniuses because we don't bother paying attention to ordinary things” caught my eye. Certainly! I said. Obviously if we were able to pay the appropriate amount of attention to every occurrence so as to gain enough data to update our models in an optimal way, we would rapidly increase our overall ability to model the world and increase our probability of insights at the level currently considered ‘genius.’

 

And then I remembered that I can’t really do that, on account of having crappy models of what is actually important, and thinking that I can’t improve those models quickly. Whoops! I, like so many others, fail to know how much attention to pay to ordinary things so as to become a genius. C’est la vie. Fortunately the lesson here was not the factuality of the statement, which is high, but a reminder that you could probably gain benefits from paying more attention and being more disciplined in your thought.

Which is even better because it’s great advice, and eminently doable. Thanks, Yvain! So I set about paying attention to how I currently pay attention and, as usual, paid attention to the cues I get about how other people pay attention, assuming that I make the mistakes they do at least some of the time.

And then I realized… wait a minute, whenever other people aren’t actually paying attention is when I could most easily shanghai them into doing things they normally wouldn’t do (were I a dark artist. Hypothetically.). So learning how to pay more attention, and to pay attention in the correct way, is probably the best reflexive method of avoiding being Dutch-booked by people who are highly adaptable dark artists.

And here’s my low-hanging fruit of techniques to build the foundational reflexes for shortening the way. The goal is to avoid being inattentive in certain sorts of situations where I noted personal susceptibility to being taken advantage of by changing situations or flexible con artists.

Summary: Act like Suspicious, Smart, Rich People Do. Assume everyone and everything is both an opportunity and an encounter with a parasite, and don’t act like it unless it’s socially convenient. How do you do this, you say. It sounds more difficult than that, you say. On the contrary, skeptical sir! I will now present an exercise which rapidly becomes reflexive, and which separates the exercise from the situation so that you can learn the requisite acting skills separately! Try this!

Ask yourself, for new people, situations, arguments, and facts: what is this worth to me? What risks do I run by paying attention to this? What opportunities lie in this, if my understanding of it is correct? What risks do I run, if my understanding of it is incorrect? And you can go as much deeper as you think is valuable or are mentally capable of sustaining.

For the step-by-steppers out there (I salute you!), here’s explicitly How To start doing this in a low-cost way.

Step 1: In your journal for daily events (If you’re not keeping one of these go buy a journal and start. Without a daily log how do you know you’re actually making progress?) use Pen and Paper (The Great Equalizer!) and write down your understanding of a couple of important topics and a few simple topics (the simple topics shouldn’t take as long… right?). This will be a lot of work! But it’s only for one day, and developing this mental habit in particular and your ability to do rational yet seemingly onerous things for a brief period each day will both be massively valuable.

Step 2: When That Gets Boring, elaborate with pros and cons, an analysis of arguments, or other techniques that professionals use when it’s important (Imagine a lawyer not analyzing their opponent’s arguments, and then imagine yourself as their client.).  Do a Fermi calculation (here's some practice) if it involves a number of things you don’t understand well.

Step 3: Avoid abusing this method to convince yourself you don't need to run the numbers by pretending someone else, someone biased, wrote the analysis. (Those darned Biased people, cropping up even in your own journal!) Think of how future versions of yourself will look at your thought processes (you'll be smarter then... wiser... with a knowledge of common logical fallacies and the heuristics and biases literature) (you might even read Gödel, Escher, Bach or something and blow your mind. Anything is possible!). Look over your previous analyses before deciding (sleep on it and wait on it). Developing a decent set of evidence for Fermi calculations and calibration exercises will let you use the same thought processes to do this right when you don't have time to run the numbers.

Step 4: Profit.
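The Fermi-calculation habit from Step 2 can be sketched in code; the piano-tuner numbers below are the classic hypothetical estimates for a city of a million people, not data:

```python
def fermi(factors):
    """Multiply rough order-of-magnitude estimates.
    factors maps a label to an estimate, so each assumption
    stays visible in the journal entry."""
    result = 1.0
    for name, value in factors.items():
        result *= value
    return result

# Hypothetical example: how many piano tuners in a city of 1 million?
estimate = fermi({
    "households": 1e6 / 2.5,                      # ~2.5 people per household
    "pianos per household": 0.05,
    "tunings per piano per year": 1,
    "tuner-years per tuning": 1 / (2 * 50 * 5),   # 2/day, 5 days/wk, 50 wks
})
print(round(estimate))  # a few dozen tuners, to within an order of magnitude
```

The answer matters less than the habit: each factor is a named guess you can revisit later, which is exactly what the journal in Step 1 is for.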

 

 

Please vote -- What topic would be best for an investigation and brief post?

4 Nic_Smith 30 June 2011 04:50AM

Followup to: Systematic Search for Useful Ideas

I've set up a pairwise poll for this question and additional suggestions are welcome. My original proposal was to examine topics that haven't already been covered here, but instead of that, I'd like to ask people to consider the existing level of discussion on a topic in evaluating what would be "best."

ETA: There are currently over 500 pairs. You don't have to go through all of them -- answer as many or as few as you like.

Proposal: Systematic Search for Useful Ideas

6 Nic_Smith 01 June 2011 12:09AM

LessWrong is a font of good ideas, but the topics and interests usually expressed and explored here tend to cluster over a few areas. As such, high-value topics may still be present for the community in other fields, which can be systematically explored rather than waiting for a random encounter. Additionally, there seems to be interest here in examining a wider variety of topics. In order to do this, I suggest creating a community list of areas to look into (besides the usual AI, Cog Sci, Comp Sci, Econ, Math, Philosophy, Psych, Statistics, etc.) and then reading a bit on the basics of these fields. In addition to potentially uncovering useful ideas per se, this also might offer the opportunity to populate the textbooks resource list and engage in not-random acts of scholarship.

Everyone Split Up, There’s a Lot of Ideosphere to Cover

A rough sketch of how I think the project will work follows. I’ll be proceeding with this and tackling at least one or two subjects as long as there’s at least a few other people interested in working on it too.

Step 1, Community Evaluation: Using All Our Ideas or similar, generate a list of fields to investigate.
Step 2, Sign-Up: People have the best sense of what they already know and their abilities, so at this point anyone that wants to can pick a subject that’s best for them to look into.
Step 3, Study: I imagine this will mostly involve self-directed reading of a handful of texts, watching some online videos, and maybe calling up one or two people -- in other words, nothing too dramatic. If a vein of something interesting is found, it’s probably better that it’s “marked” for further follow-up rather than further examined alone.
Step 4, Post: Some of these investigations will not reveal anything -- that’s actually a good thing (explained below); for these, a short “Looked into it, nothing here” sort of comment should suffice. Subjects with bigger findings should get bigger, more detailed comments/posts.

Evaluation of Proposal

As a first step, I’ll use a variation of the Heilmeier questions which is an (admittedly idiosyncratic) mix of the original version and gregv’s enhanced version.

  • What are you trying to do? Articulate your objectives using absolutely no jargon.
    Produce comments or posts providing very brief overviews of fields of knowledge, not previously discussed here, with notes pertaining to Less Wrong topics and interests.
  • Who cares? How many people will benefit?
    This post is partially an attempt to determine that, but there seems to be at least some interest in more variety on the site (see above). Additionally, the posts should be a good general resource for anyone that stumbles across them, and might even make good content for search purposes.
  • Why hasn't someone already solved this problem? What makes you think what stopped them won't stop you?
    The idea is roughly book club meets Wikipedia, but with an emphasis on creating a small evaluative body of knowledge rather than a massive descriptive encyclopedia, and with a LessWrong twist. The sharper focus should make the results more useful to go through than just hitting “random page” in yon encyclopedia.
  • How much have projects like this cost (time equivalent)?
    Some have the ability to take on “whole fields of knowledge in mere weeks” but that’s not typical -- investigating a subject in this case is roughly comparable in complexity to taking an introductory class or two, which people without any previous training normally accomplish over a period of about three to four months at a pace which is not especially strenuous, and with fairly light monetary costs beyond tuition/fees (which aren't applicable here).
  • What are the midterm and final "exams" to check for success?
    For each individual investigation, a good “midterm” check would be for the person looking into a field to have a list of resources or texts they’re working on. The final “exam” is a posting indicating whether anything useful or interesting was found, and if so, what.
  • If y [this community search] fails to solve x [uncover useful knowledge in fields previously under-examined on LessWrong], what would that teach you that you (hopefully) didn't know at the beginning?
    Quite possibly, this could be a good thing -- it indicates that the mix of topics on LessWrong is approximately right, and things can continue on. In this case, we’d end up seeing a bunch of short “nothing interesting here” comments, and can rest more or less assured that further investigation into even more minute detail is unnecessary. This is conditional on not-terrible scholarship and a reasonably good priority list from step 1.

What would an Incandescence about FAI look like?

1 VNKKET 01 May 2011 08:30PM

This post spoils Greg Egan's Incandescence.

Incandescence is a success story about some people who notice an existential threat and avoid it using science and engineering.  We see them figure out how gravity works, which is more interesting than it might sound, partly because their everyday experiences are full of gravitational effects that we don't notice on Earth.  At first they do science out of pure curiosity, but it turns into an urgent collective action problem when they discover that their orbit will lead them towards all sorts of disasters, including falling into a black hole.  The solution, it turns out, is to move some dirt around.

Has anyone considered writing a success story about using Friendly AI to solve an existential threat?