You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Dark Arts: Defense in Reputational Warfare

1 OrphanWilde 03 December 2015 03:03PM

First, the Dark Arts are, as the name implies, an art, not a science; so is defending against them.  An artful attacker can turn expected defenses against you; if you can be anticipated, you can be defeated.  The rules below, therefore, are guidelines.  I'm going to stage the rules in narrative form; they don't need to be, because life doesn't follow a narrative.  The narrative exists to give them context, to give the reader a sense of the purpose of each rule.

Rule #0: Never follow the rules if they would result in a worse outcome.  

 


 

Now, generally, the best defense is never to get attacked in the first place.  Security through obscurity is your first line of defense.  Translations of Sun Tzu vary somewhat, but the ideal form is to be formless - by which I mean, do not present a single point of attack or defense.  If there's a mob in your vicinity, the ideal place is neither outside it nor leading it, but as a faceless stranger within it.  Even better is to be nowhere near a mob.  This is the fundamental basis of not being targeted; the next two rules derive from this one.

Rule #1: Do not stand out.

 

Sometimes you're picked out anyway.  There's a balancing act in this next piece: you don't want to stand out, to be a point of attack, but if somebody is picking faces, you want to look slightly more dangerous than your neighbor - you want to look like a hard target.  (But not when somebody is looking for hard targets.  Obviously.)

Rule #2: Look like an unattractive target.

 

The third aspect of this is somewhat simpler, and I'll borrow the phrasing from HPMoR:

Rule #3: "I will not go around provoking strong, vicious enemies" - http://hpmor.com/chapter/19

 

The first triplet of rules, by and large, is about -not- being attacked in the first place.  These are starting points; Rule #1, for example, culminates in not existing at all - you can't attack what doesn't exist.  Rule #1 is the fundamental strategy of Anonymous.  Rule #2 is about encouraging potential attackers to look elsewhere; where Rule #1 is passive, Rule #2 is its passive-aggressive form.  It's the fundamental strategy of home security - why else do you think security companies put signs in the yard saying the house is protected?  Rule #3 is obvious: don't make enemies in the first place, and particularly don't make dangerous enemies.  It has critical importance beyond its obvious nature, however - some enemies won't care if they get hurt in the process of hurting you, and that limits your strategies for dealing with them considerably.

 


 

You've messed up the first three rules.  You're under attack.  What now?  Manage the Fight.  Your attacker starts with the home-field advantage - they attacked you on the terms they are most comfortable with.  Change the terms, immediately.  Do not concede that advantage.  Like Rule #1 in the first triplet, Rule #4 is the basis of your First Response; Rules #5 and #6 derive from it.  The simplest approach is the least obvious: immediate surrender, but on your terms.  If you're accused of something, admit to the weakest and least harmful version of whatever is true (be specific, and deny as necessary), and say you're aware of the problem and working on improving.  This works whether or not there's an audience, but works best if there is one.

Rule #4: Change the terms of the fight to favor yourself, or disfavor your opponent.

 

Sometimes, the best response to an attack is no response at all.  Is anybody (important) going to take it seriously?  If not, then the very worst thing you can do is respond, because responding validates the attack.  If you do need to respond, respond as lightly as possible; do not respond as if the accusation is serious or matters, because that lends it weight.  If there's no audience, or a limited audience, responding gives your attacker an opportunity to continue the attack.  If there's a risk of them physically assaulting you, ignoring them is probably a bad idea; a polite non-response is ideal in that situation.  (For crowds that pose a risk of physical assault... you need more rules than I'm going to write here.)

Rule #5: Use the minimum force necessary to respond.

 

It's tempting to attack back: don't.  You'll escalate the situation, and escalation favors whoever is better at this; worse, in a public Dark Arts battle, even the better player is going to take some hits.  Nobody wins.  Instead, mine the battlefield, and make sure your opponent sees you mining it.  If you're accused of something, suggest that both you and your opponent know the accused thing isn't as uncommon as generally represented.  Hint at shared knowledge.  Make it clear you'll take them out with you.  If they're actually good at this, they'll get the hint.  (This is why it's critically important not to make enemies.  You really, really don't want somebody around who doesn't mind going down with you - against them, this strategy becomes difficult.)

Rule #6: Make escalation prohibitively costly.

 

You might recognize some elements of martial arts here.  There are similarities, enough that one is useful to the other, but they are not the same.

 


 

You're in a fight, and your opponent is persistent, or you messed up and now things are serious.  What now?  First, continue to Manage the Fight.  Your goal now is to end the fight; the total damage you're going to suffer is a function of both the amplitude of escalation and the length of the fight.  You've failed to manage the amplitude; manage the length.

Rule #7: End fights fast.

 

At this point you've been reasonable and defensive, and that hasn't worked.  Now you need to go on the offensive.  Your defense should stay light and easy - continue to react with the lightest necessary touch, and continue to ignore anything you don't need to react to - but your attack should be brutal, and put your opponent on the defensive immediately.  Attack them first on the basis of their harassment of you, then build up to any personal attacks you've been holding back - your goal is to convey the tone of somebody who has been put upon and has finally had enough.

Rule #8: Hit hard.

 

And immediately stop.  If you've pulled off your counterattack right, they'll offer up defenses.  Just quit the battle.  Do not be tempted by a follow-up attack; you were angry, you vented your anger, you're done.  By not following up on the attack, by not attacking their defenses, you're leaving them no reasonable way to respond.  Any continuing attacks can be safely ignored; they will look completely pathetic going forward.

Rule #9: Recognize when you've won, and stop.

 

Defense follows different rules than attack.  In defense, you aren't trying to inflict wounds, you're trying to avoid them.  Ending the fight quickly is paramount to this.

The Winding Path

6 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition.  The "supernatural" component generally included in the definition is superfluous, because "supernatural" merely refers to that which isn't part of nature - which is to say, reality - an elaborate way of describing something whose relationship to nature is either not yet understood or nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you design an AI for a game which, after five rounds of being killed whenever it enters rooms with green-colored walls, starts avoiding rooms with green-colored walls, you've developed a good AI.  It is engaging in superstition - it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path; there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.
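The green-room AI can be made concrete.  Here's a minimal sketch (my construction, not from the post) of a tally-based avoidance learner - note that it superstitiously avoids every feature that co-occurred with its deaths, not just the causal one:

```python
from collections import defaultdict

class SuperstitiousAgent:
    """Avoids any room feature that has repeatedly co-occurred with death,
    with no model of the actual cause - pattern recognition, nothing more."""

    def __init__(self, threshold=5):
        self.death_counts = defaultdict(int)  # feature -> deaths while present
        self.threshold = threshold

    def observe_death(self, room_features):
        for feature in room_features:
            self.death_counts[feature] += 1

    def will_enter(self, room_features):
        return all(self.death_counts[f] < self.threshold for f in room_features)

agent = SuperstitiousAgent()
for _ in range(5):
    agent.observe_death({"green_walls", "flickering_torch"})

print(agent.will_enter({"green_walls"}))       # False: the "superstition" kicks in
print(agent.will_enter({"flickering_torch"}))  # False too - correlation, not cause
print(agent.will_enter({"blue_walls"}))        # True: no bad history, no avoidance
```

The belief is cheap to discard: reset the tally for a feature and the agent walks right back in, which is exactly the "only mistake is stopping" point.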

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping - assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones - and, more importantly, it requires identifying which answers are indeed wrong, which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.

 


 

Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback on the aesthetic interpretation of Less Wrong, rather than on the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

I Want To Believe: Rational Edition

4 27chaos 18 November 2014 08:00PM

Relevant: http://lesswrong.com/lw/k7h/a_dialogue_on_doublethink/

I would like this conversation to operate under the assumption that there are certain special times when it is both instrumentally and epistemically rational to convince oneself of a proposition whose truth is indeterminate.  The reason is that I believe questioning this assumption makes it more difficult to use doublethink for productive purposes.  There are many other places on this website where the ethics or legitimacy of doublethink can be debated, and I am already aware of its dangers, so please don't mention such things here.

I am hoping for some advice. "Wanting to believe" can be both epistemically and instrumentally rational, as in the case of certain self-fulfilling prophecies. If believing that I am capable of winning a competition will cause me to win, believing that I am capable of winning is rational both in the instrumental sense that "rationality is winning" and in the epistemic sense that "rationality is truth".

I used to be quite good at convincing myself to adopt beliefs of this type when they were beneficial.  It was essentially automatic: I knew that I had the ability, so applying it was as trivial as remembering its existence.  Nowadays, however, I'm almost unable to do this at all, despite what I remember.  It's causing me significant difficulties in my personal life.

How can I redevelop my skill at this technique? Practicing will surely help, and I'm practicing right now so therefore I'm improving already. I'll soon have the skill back stronger than ever, I'm quite confident. But are there any tricks or styles of thinking that can make it more controllable? Any mantras or essays that will help my thought to become more fluidly self-directed? Or should I be focused on manipulating my emotional state rather than on initiating a direct cognitive override?

I feel as though the difficulties I've been having become most pronounced when I'm thinking about self-fulfilling prophecies that do not have guarantees of certainty attached. The lower my estimated probability that the self-fulfilling prophecy will work for me, the less able I am to use the self-fulfilling prophecy as a tool, even if the estimated gains from the bet are large. How might I deal with this problem, specifically?

Intentionally Raising the Sanity Waterline

12 Gleb_Tsipursky 13 November 2014 08:25PM

Hi all, I’m a social entrepreneur, professor, and aspiring rationalist. My project is Intentional Insights. This is a new nonprofit I co-founded with my wife and other fellow aspiring rationalists in the Columbus, OH Less Wrong meetup. The nonprofit emerged from our passion to promote rationality among the broad masses. We use social influence techniques, create stories, and speak to emotions. We orient toward creating engaging videos, blogs, social media, and other content that an aspiring rationalist like yourself can share with friends and family members who would not be open to rationality proper due to the Straw Vulcan misconception. I would appreciate any advice and help from fellow aspiring rationalists. The project is described more fully below, but for those for whom that’s tl;dr, there is a request for advice and allies at the bottom.

Since I started participating in the Less Wrong meetup in Columbus, OH and reading Less Wrong, what seems like ages ago, I can hardly remember my past thinking patterns. Because of how much awesomeness it brought to my life, I have become one of the lead organizers of the meetup. Moreover, I find it really beneficial to bring rationality into my research and teaching as a tenure-track professor at Ohio State, where I am a member of the Behavioral Decision-Making Initiative. Thus, my scholarship brings rationality into historical contexts, for example in my academic articles on agency, emotions, and social influence. In my classes I have students engage with the Checklist of Rationality Habits and other readings that help advance rational thinking.

As do many aspiring rationalists, I think rationality can bring such benefits to the lives of many others, and can also help improve our society as a whole by leveling up rational thinking, secularizing society, and thus raising the sanity waterline.  Our experience in the Columbus Less Wrong group has shown that, for that, we need to get people interested in rationality by showing them its benefits and how it can solve their problems, while delivering complex ideas in an engaging and friendly fashion targeted at a broad public, using active learning strategies, and connecting rationality to what they already know.  This is what I do in my teaching, and it is the current best practice in educational psychology.  It has worked great with my students when I began teaching them rationality concepts.  Yet I do not know of any current rationality trainings that do this.  Currently, such education in rationality is available mainly through the excellent, intense 4-day workshops run by the Center for Applied Rationality (CFAR), usually held in the San Francisco area, which are aimed at a "select group of founders, hackers, and other ambitious, analytical, practically-minded people."  We are targeting a much broader and less advanced audience - the upper 50-85% - while CFAR primarily targets the top 5-10%.  We have had great interactions with Anna Salamon, Julia Galef, Kenzi Amodei, and other CFAR folks, and plan to collaborate with them on various forms of rationality outreach.  Besides CFAR, there are also some online classes on decision-making from Clearer Thinking, as well as some other resources we list on the Intentional Insights resources page.  However, we really wanted to see something oriented at the broad public, which can gain a great deal from a much lower level of education in rationality made accessible and relevant to everyday lives and concerns, and delivered in a fashion perceived as interesting, fun, and friendly by mass audiences, as we aim to do with our events.

Intentional Insights came from this desire. This nonprofit explicitly orients toward getting the broad masses interested in and learning about rationality by providing fun and engaging content delivered in a friendly manner. What we want to do is use various social influence methods and promote rationality as a self-improvement/leadership development offering for people who are not currently interested in rational thinking because of the Straw Vulcan image, but who are interested in self-improvement, professional development, and organizational development. As people become more advanced, we will orient them toward more advanced rationality, at Less Wrong and elsewhere. Now, there are those who believe rationality should be taught only to those who are willing to put in the hard work and effort to overcome the high barrier to entry of learning all the jargon. However, we are reformers, not revolutionaries, and believe that some progress is better than no progress. And the more aspiring rationalists engage in various projects aimed to raise the sanity waterline, using different channels and strategies, the better. We can all help and learn from each other, adopting an experimental attitude and gathering data about what methods work best, constantly updating our beliefs and improving our abilities to help more people gain greater agency.

The channels of delivery locally are classes and workshops. Here is what one college student participant wrote after a session: “I have gained a new perspective after attending the workshop. In order to be more analytical, I have to take into account that attentional bias is everywhere. I can now further analyze and make conclusions based on evidence.” This and similar statements seem to indicate some positive impact, and we plan to gather evidence to examine whether workshop participants adopt more rational ways of thinking and how the classes influence people’s actual performance over time.

We have a website that takes this content globally, as well as social media such as Facebook and Twitter.  The website currently has:

- Blog posts, such as on agency; polyamory and cached thinking; and life meaning and purpose.  We aim to make them easy-to-read and engaging to get people interested in rational thinking.  These will be targeted at a high school reading level, the type of fun posts aspiring rationalists can share with their friends or family members whom they may want to get into rationality, or at least explain what rationality is all about.
- Videos with similar content to the blog posts, such as on evaluating reality clearly, and on meaning and purpose.
- A resources page, with links to prominent rationality venues, such as Less Wrong, CFAR, HPMOR, etc.

It will eventually have:

- Rationality-themed merchandise, including stickers, buttons, pens, mugs, t-shirts, etc.
- Online classes teaching rationality concepts.
- A wide variety of other products and offerings, such as e-books and apps.

Now, why my wife and I, and the Columbus Less Wrong group? To this project, I bring my knowledge of educational psychology, research expertise, and teaching experience; my wife her expertise as a nonprofit professional with an MBA in nonprofit management; and other Board members include a cognitive neuroscientist, a licensed therapist, a gentleman adventurer, and other awesome members of the Columbus, OH, Less Wrong group.

Now, I could really use the help of wise aspiring rationalists on this project:

1) If you were trying to get the Less Wrong community engaged in the project, what would you do?

2) If you were trying to promote this project broadly, what would you do? What dark arts might you use, and how?

3) If you were trying to get specific groups and communities interested in promoting rational thinking in our society engaged in the project, what would you do? What dark arts might you use, and how?

4) If you were trying to fundraise for this project, what would you do? What dark arts might you use, and how?

5) If you were trying to persuade people to sign up for workshops or check out a website devoted to rational thinking, what would you do? How would you tie it to people’s self-interest and everyday problems that rationality might solve? What dark arts might you use, and how?

6) If you were trying to organize a nonprofit devoted to doing all the stuff above, what would you do to help manage its planning and organization? What about managing relationships and group dynamics?

Besides the advice, I invite you to ally with us and collaborate on this project in whatever way is optimal for you. Money is very helpful right now as we are fundraising to pay for costs associated with starting up the nonprofit, around $3600 through the rest of 2014, and you can donate directly through our website. Your time, intellectual capacity, and any specific talents would also be great, on things such as giving advice and helping out on specific tasks/projects, developing content in the form of blogs, videos, etc., promoting the project to those you know, and other ways to help out.

Leave your thoughts in comments below, or you can get in touch with me at gleb@intentionalinsights.org. I hope you would like to ally with us to raise the sanity waterline!

 

EDIT: Based on your feedback, we've decided that this post on polyamory and cached thinking is probably a bad fit for what we want to promote right now. We've removed it from the main index of our site. Thanks for helping!

[LINK] What Can Internet Ads Teach Us About Persuasion?

5 JQuinton 22 October 2013 03:31PM

How "one weird trick" conquered the Internet. Some excerpt I found interesting:

“Research on persuasion shows the more arguments you list in favor of something, regardless of the quality of those arguments, the more that people tend to believe it,” Norton says. “Mainstream ads sometimes use long lists of bullet points—people don’t read them, but it’s persuasive to know there are so many reasons to buy.” OK, but if more is better, then why only one trick? “People want a simple solution that has a ton of support.”

I actually see this technique used in a lot of religious apologetics. There's even a name for one of them: The Gish Gallop. Would it be fair to say that this technique is taking advantage of a naive or intuitive understanding of Bayesian updates?

What about all the weirdness? “A word like ‘weird’ is not so negative, and kind of intriguing,” says Oleg Urminsky of the University of Chicago Booth School of Business. “There’s this foot-in-the-door model. If you lead with a strong, unbelievable claim it may turn people off. But if you start with ‘isn’t this kind of weird?’ it lowers the stakes.” The model also explains why some ads ask you to click on your age first. “Giving your age is low-stakes but it begins the dialogue. The hard sell comes later.”

The "click on your age" first gambit seems a bit like Cached Selves.

“People tend to think something is important if it’s secret,” says Michael Norton, a marketing professor at Harvard Business School. “Studies find that we give greater credence to information if we’ve been told it was once ‘classified.’ Ads like this often purport to be the work of one man, telling you something ‘they’ don’t want you to know.” The knocks on Big Pharma not only offered a tempting needle-free fantasy; they also had a whiff of secret knowledge, bolstering the ad’s credibility.

Humanity's love affair with secrecy and its importance seems to go back quite a bit. The world's largest religion seems to have started out as one of many mystery religions in the Greco-Roman world at the time.

Dark Arts 101: Winning via destruction and dualism

-13 PhilGoetz 21 September 2013 01:53AM

Recalling first that life is a zero-sum game, it is immediately obvious that the quickest and easiest path to success is not to accomplish things yourself—that's a game for heroes and other suckers—but to tear down the accomplishments and reputations of others.  Destruction is easy.  The difficulty lies in constructing the situation so that the destruction is to your net benefit.

continue reading »

No need for gravity?

-7 graviton 17 October 2012 10:22AM

I don’t know where else to go with this idea.  I’m not a physicist, and it could be obviously wrong for some reason I’m missing, but it seems to me that there is a small chance I’ve figured out how to remove one of the fundamental forces from our models of the universe: gravity, to be specific.

So we’ve all heard of dark energy, the force driving the accelerating expansion of the universe.  Presumably it comes from somewhere, perhaps from every piece of matter in the universe, perhaps only stars, perhaps only black holes, but as long as it’s not all coming from a single source, it’s probably coming from something that is relatively common, and primarily found within galaxies.  And if I magically came to KNOW that it does all come from one source, I would do very little beyond deleting a few arrows in my diagrams to change this post. 

And as the Hubble deep field scans showed, there are A LOT of galaxies in any direction you look (at least from Earth).  So, countless galaxies in all directions are emitting dark energy, with tiny rays from each one hitting our galaxy, as well as galaxies in every other direction.

Of course our galaxy doesn’t have a plastic shell around it for dark energy to hit.  It’s specific things in the galaxy that get hit by specific emissions of dark energy, just like it’s specific objects that emit those emissions.

Here are some galaxies.  Beyond the ones shown, there are more and more in all directions for as far as anyone knows so far.  Please note that nothing in any of these diagrams is drawn to scale.   


  Figure 1

Any one of them emits dark energy pretty uniformly in all directions, as it does with light.
Figure 2

Given the sheer number of other galaxies off in all directions, the total dark energy hitting a galaxy would look something like this.  The magnitude of the dark energy forces coming in from the rest of the universe ought to be a lot more than what our one little galaxy puts out. 

Now let’s look inside this galaxy, at a single solar system.  Dark energy converges from all directions, as at the perimeter of the galaxy, since the galaxy is mostly empty space, and (I’m presuming) a more or less negligible amount is added from other objects within the galaxy. 


Now let’s consider a single planet within the solar system.

The sun casts a shadow in the dark energy field; some of the dark energy headed in the direction of our planet strikes the sun along the way and never makes it to the planet.  To a lesser extent, the planet shields the sun as well.  As the planet revolves around the sun, there is always a void in the otherwise all-pervasive dark energy field in the direction of the sun.  As rudimentary as these diagrams are, you might as well just rotate your monitor if you really need a visual (keeping the screen on the same plane).

Each dark energy vector has an equal opposite cancelling it out, except in the shadow.  Any other imbalance would be the same for the planet as for the star, and therefore would not alter their positions relative to each other - even if it were all coming from one direction.

The planet always has one region that is only hit with the sun’s own dark energy emissions (if it has any), whereas all other sides are being hit with dark energy from all of the rest of the universe (in that direction), hence the illusion of gravity. 
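A quick back-of-the-envelope check (mine, not the post's) of why a shadow mechanism of this sort would at least mimic the form of Newtonian gravity - this is essentially the historical Le Sage "push gravity" idea:

```latex
% Isotropic flux exerts pressure P from every direction; everything cancels
% except the solid angle \Omega blocked by the sun (radius R_s, distance r).
% For r \gg R_s the occluded solid angle is
\[
  \Omega \approx \frac{\pi R_s^2}{r^2},
\]
% so the net uncancelled push toward the sun scales as
\[
  F_{\text{net}} \propto P\,\Omega \propto \frac{1}{r^2},
\]
% reproducing gravity's inverse-square falloff.
```

Note, though, that this net force scales with the occluder's cross-sectional area rather than its mass, which is one of the classic objections to shadow-gravity models.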

Similarly, a supermassive black hole at the center of a galaxy would cast a dark energy shadow on everything else in the galaxy, so the net push of all the dark energy vectors hitting a given star is toward the center of the galaxy, keeping it in orbit.  This would be true of any given moment; the direction of the greatest push rotates around, but it is always toward the black hole. 

The planets around the star are shielded by the black hole roughly the same amount as the star is, but they are much more strongly affected by the shielding from the star than the black hole is. 

Now, consider 2 galaxies:

Each emits a little bit of dark energy of its own, and is mostly empty space so that much of the dark energy from other galaxies beyond it passes right through.  I'm thinking no one object within a galaxy is emitting more dark energy than it shields its neighbors from.  There is some galaxy-to-galaxy shielding, but it is very weak due to the amount of dark energy that can pass right through without hitting anything.  This would be consistent with the galaxies spreading out from each other without being ripped apart by the force causing them to spread.  So, between galaxies, there is a repulsive effect, while within galaxies, there is primarily an effect of shielding from the all-pervasive repulsion from dark energy. 

 

Optimal User-End Internet Security (Or, Rational Internet Browsing)

1 [deleted] 09 September 2011 06:23PM

Hacking and Cracking, Internet security, Cypherpunk. I find these topics fascinating as well as completely over my head.

Yet, there are still some things that can be said to a layman, especially by the ever-relevant Randall Munroe:

Password Strength

Password Reuse

I'm guilty on both charges (reusing poorly formulated passwords, not stealing them).
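The first comic's point can be made quantitative.  A small sketch (the pool and word-list sizes here are my illustrative assumptions, not taken from the comic): the entropy of a uniformly random secret is its length times the log base 2 of the pool it's drawn from.

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` symbols drawn uniformly from `pool_size` options."""
    return length * math.log2(pool_size)

# Four random words from a modest 2048-word list:
print(entropy_bits(2048, 4))  # 44.0 bits - and it's memorable

# A truly random 8-character password over ~94 printable ASCII characters
# has more entropy (~52 bits), but human-chosen passwords follow predictable
# substitution patterns and land far below that - which is the comic's point.
print(round(entropy_bits(94, 8)))  # 52
```

The memorability is the real argument: random words keep their full entropy because humans can actually remember them without falling back on guessable patterns.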

These arguments may just be the tip of the iceberg of a much larger problem that needs optimizing: Social Engineering, or mainly how it can be used against our interests (as the second speaker in the comic puts it, "It doesn't matter how much security you put on the box.  Humans are not secure.").  I get the feeling that I'm not managing my risks on the Internet as well as I should.

So the questions I ask are: In what ways do our cognitive biases come into play when we surf the Internet and interact with others?  Which of these biases can we actively protect against, and how?  I've enforced HTTPS when available, and kept my Internet use iconoclastic rather than typical, but I doubt that's a comprehensive list.

I don't know how usefully I can contribute, but I hope that many on Less Wrong can.

Shortening the Unshortenable Way

-2 Duk3 26 July 2011 06:44AM

 

or

A Starting Point for Defense against Flexible Dark Artists and Circumstances

 

In On Seeking a Shortening of the Way, the assertion “Maybe we're not geniuses because we don't bother paying attention to ordinary things” caught my eye. Certainly! I said. Obviously, if we were able to pay the appropriate amount of attention to every occurrence so as to gain enough data to update our models in an optimal way, we would rapidly increase our overall ability to model the world and increase our probability of insights at the level currently considered ‘genius.’

 

                And then I remembered that I can’t really do that, on account of having crappy models of what is actually important, and thinking that I can't improve those models quickly. Whoops! I, like so many others, fail to know how much attention to pay to ordinary things so as to become a genius. C’est la vie. Fortunately, the lesson here was not the factuality of the statement (which is high), but the reminder that you could probably gain benefits from paying more attention and being more disciplined in your thought.

                Which is even better because it’s great advice, and eminently doable. Thanks, Yvain! So I set about paying attention to how I currently pay attention and, as usual, paid attention to the cues I get about how other people pay attention, assuming that I make the mistakes they do at least some of the time.

                And then I realized… wait a minute, whenever other people aren’t actually paying attention is when I could most easily shanghai them into doing things they normally wouldn’t do (were I a dark artist. Hypothetically.). So learning how to pay more attention, and to pay attention in the correct way, is probably the best reflexive method of avoiding being Dutch-booked by highly adaptable dark artists.

                And here’s my low-hanging fruit of techniques to build the foundational reflexes for shortening the way. The goal is to avoid being inattentive in certain sorts of situations where I noted personal susceptibility to being taken advantage of by changing situations or flexible con artists.

                Summary: Act like Suspicious, Smart, Rich People Do. Assume everyone and everything is both an opportunity and an encounter with a parasite, and don’t act like it unless it’s socially convenient. How do you do this, you say. It sounds more difficult than that, you say. On the contrary, skeptical sir! I will now present an exercise which rapidly becomes reflexive, and which separates the exercise from the situation, so that you can learn the requisite acting skills separately! Try this!

Ask yourself, for new people, situations, arguments, and facts: What is this worth to me? What risks do I run by paying attention to this? What opportunities lie in this, if my understanding of it is correct? What risks do I run, if my understanding of it is incorrect? You can go as much deeper as you think is valuable or are mentally capable of sustaining.

                  For the step-by-steppers out there (I salute you!), here’s explicitly How To start doing this in a low-cost way.

Step 1: In your journal for daily events (If you’re not keeping one of these go buy a journal and start. Without a daily log how do you know you’re actually making progress?) use Pen and Paper (The Great Equalizer!) and write down your understanding of a couple of important topics and a few simple topics (the simple topics shouldn’t take as long… right?). This will be a lot of work! But it’s only for one day, and developing this mental habit in particular and your ability to do rational yet seemingly onerous things for a brief period each day will both be massively valuable.

Step 2: When That Gets Boring, elaborate with pros and cons, an analysis of arguments, or other techniques that professionals use when it’s important (Imagine a lawyer not analyzing their opponent’s arguments, and then imagine yourself as their client.).  Do a Fermi calculation (here's some practice) if it involves a number of things you don’t understand well.
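For anyone who hasn't tried one, a Fermi calculation is just a chain of rough factors multiplied together.  Here's a minimal sketch of the classic "piano tuners in Chicago" estimate; every number below is a deliberate order-of-magnitude guess, which is the whole point of the exercise:

```python
# Each factor is an explicit, labeled guess, so a future reader of the
# journal (or a skeptical future you) can challenge any one of them
# without redoing the whole estimate.
factors = {
    "people in Chicago": 3e6,
    "households per person": 1 / 2.5,       # ~2.5 people per household
    "pianos per household": 1 / 20,         # guess: 1 in 20 households owns one
    "tunings per piano per year": 1,
    "tuners per yearly tuning": 1 / (2 * 5 * 50),  # 2 tunings/day, 5 days, 50 weeks
}

estimate = 1.0
for name, value in factors.items():
    estimate *= value

print(f"Estimated piano tuners in Chicago: {estimate:.0f}")
```

With these guesses the chain comes out to about 120 tuners.  The answer matters less than the habit: any factor you can't even guess at is exactly the thing you don't understand well.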

Step 3: Avoid abusing this method to convince yourself you don't need to run the numbers, by pretending someone else, someone biased, wrote the analysis. (Those darned biased people, cropping up even in your own journal!) Think of how future versions of yourself will look at your thought processes (you'll be smarter then... wiser... with a knowledge of common logical fallacies and the heuristics-and-biases literature) (you might even read Gödel, Escher, Bach or something and blow your mind. Anything is possible!). Look over your previous analyses before deciding (sleep on it and wait on it). Developing a decent body of evidence from Fermi calculations and calibration exercises will let you use the same thought processes correctly when you don't have time to run the numbers.

Step 4: Profit.