New vs. Business-as-Usual Future

2 katydee 05 November 2013 02:13AM

What, in a broad sense, does the future look like? We don't know, and while many have historically made predictions, the track record for such predictions is less than impressive. I have noted that there appear to be two main types of view about the future-- the "new future" and the "business-as-usual future." In order to simplify this discussion, let's restrict it only to the coming century-- the period between 2013 and 2113.

The "new future" is, generally speaking, the idea that the coming century is going to be very different from the present; the "business-as-usual future" is, generally speaking, the idea that the coming century is going to be very similar to the present.

Here are some characteristics of the new future:

  • Some large-scale event occurs that alters human experience forever-- an intelligence explosion leading to a technological singularity, existential risks leading to human suppression or extinction, global climate change on a massive scale, etc.
  • Society changes a lot, and in fundamental ways that are difficult to understand. Daily life is vastly altered.
  • If future history even exists after the dramatic change, it views the coming century as being a critical moment where everything became vastly different, on par with or exceeding the significance of the development of agriculture.

Here are some characteristics of the business-as-usual future:

  • The intelligence explosion doesn't happen. AI continues to advance in much the same way that it has for the last several decades. More tasks that humans can perform become automated, but in slow and predictable ways. Intelligence amplification doesn't happen or doesn't yield generally useful results.
  • The world doesn't end. Global warming ends up being just another doomsday scare. Perhaps a lot of people die in the Third World, but the rest of the world adapts and keeps going much like it always has. Yellowstone doesn't explode. No asteroids hit the earth. There isn't a nuclear war.
  • Society doesn't change very much except in superficial ways. Daily life is more or less the same.
  • Wars might happen. Nations might collapse. But wars have been happening and nations have been collapsing for thousands of years. By and large, the coming century is viewed by future history as not particularly unlike those that came before.

Reference class forecasting seems to indicate that the business-as-usual future is quite likely. But as we know, this is far from a textbook case of reference class forecasting, and applying such techniques may not be helpful. What, then, is a good method of establishing what you think the future will look like?
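To make the reasoning concrete, reference class forecasting here amounts to estimating a base rate from a chosen reference class of past cases. The sketch below is purely illustrative-- the reference class (the last hundred centuries) and the labels for which centuries count as "transformative" are invented for the example, and the whole difficulty is that the choice of reference class is contestable:

```python
# Toy sketch of reference class forecasting: estimate the probability of a
# "new future" century from the base rate in a chosen reference class.
# The reference class and labels below are invented for illustration.

def base_rate(reference_class):
    """Fraction of past cases in the reference class that were 'new futures'."""
    transformative = sum(1 for case in reference_class if case["transformative"])
    return transformative / len(reference_class)

# Hypothetical reference class: the last 100 centuries, of which (say) two
# were transformative on the scale of the development of agriculture.
centuries = [{"transformative": i in (0, 95)} for i in range(100)]

print(base_rate(centuries))  # -> 0.02: business-as-usual looks quite likely
```

The output depends entirely on which cases you admit into the reference class-- which is exactly why this is far from a textbook application of the technique.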

Aliveness in Training

9 katydee 31 October 2013 01:17AM

Related: The Martial Art of Rationality

One principle in the martial arts is that arts that are practiced with aliveness tend to be more effective.

"Aliveness" in this case refers to a set of training principles focused on simulating conditions in an actual fight as closely as possible in training. Rather than train techniques in a vacuum or against a compliant opponent, alive training focuses on training with movement, timing, and energy under conditions that approximate those where the techniques will actually be used.[1]

A good example of training that isn't alive would be methods that focused entirely on practicing kata and forms without making contact with other practitioners; a good example of training that is alive would be methods that focused on verifying the efficacy of techniques through full-contact engagement with other practitioners.

Aliveness tends to create an environment free from epistemic viciousness-- if your technique doesn't work, you'll know because you won't be able to use it against an opponent. Further, if your technique does work, you'll know that it works because you will have applied it against people trying to prevent you from doing so, and the added confidence will help you better apply that technique when you need it.

Evidence from martial arts competitions indicates that those who practice with aliveness are more effective than others. One of the chief reasons that Brazilian jiu-jitsu (BJJ) practitioners were so successful in early mixed martial arts tournaments was that BJJ-- a martial art that relies primarily on grappling and the use of submission holds and locks to defeat the opponent-- can be trained safely with almost complete aliveness, whereas many other martial arts cannot.[2]

Now, this is not to say that one should only attempt to practice martial arts under completely realistic conditions. For instance, no martial arts school that I am aware of randomly ambushes or attempts to mug its students on the streets outside of class in order to test how they would respond under truly realistic conditions.[3]

Even in the age of sword duels, people would train with blunt weapons and protective armor rather than sharp weapons and ordinary clothes. Would training with sharp weapons and ordinary clothes be more alive than training with blunt weapons and protective armor? Certainly, but the trainees wouldn't be! And yet training with blunt weapons is still useful-- the fact that training does not fully approximate realistic conditions does not intrinsically mean it is bad.

That being said, generally speaking, martial arts training that is more alive-- that better approximates realistic fighting conditions-- is more effective, within reasonable safety margins. There is a growing consensus among students of martial arts who are looking for effective self-defense techniques that the specific martial art one practices is not hugely relevant, and that what matters more is the extent to which the training does or doesn't use aliveness.

 

Aliveness and Rationality

So, that's all well and good-- but how can we apply these principles to rationality practice?

While martial arts training has very clear methods of measuring whether or not skills work (can I apply this technique against a resisting opponent?), rationality training is much murkier-- measuring rationality skills is a nontrivial problem.

Further, under normal circumstances the opponent that you are resisting when applying rationality techniques is your own brain, not an external enemy.[4] This makes applying appropriate levels of resistance in training difficult, because it's very easy to cheat yourself. The best method that I have found thus far is lucid dreaming, as forcing your dreaming brain to recognize its true state through the various hallucinations and constructed memories associated with dreaming is no easy task.

That being said, I make no claims to special or unique knowledge in this area. If anyone has suggestions for useful methods of "live" rationality practice, I'd love to hear them.

 

 

[1] For further explanation, see Matt Thornton's classic video "Why Aliveness?"

[2] If your plan is to choke someone until they fall unconscious, it is possible to safely train for this with nearly complete aliveness by wrestling against an opponent and simply releasing the chokehold before they actually fall unconscious. By contrast, it is much harder to safely train to punch someone into unconsciousness, and harder still to safely train to break people's necks.

[3] The game of Assassins does do this, but usually follows rules that are constrained enough to make it a suboptimal method of training.

[4] There are some contexts in which rationality techniques are applied in order to overcome an external enemy. Competitive games and some sports are a good method of finding practice in this respect. For instance, in order to be a competitive Magic: The Gathering player, you need to engage many epistemic and instrumental rationality skills. Competitive poker can offer similar development.

[Link] You and Your Research

16 katydee 20 October 2013 12:05AM

I've seen Richard Hamming's classic talk You And Your Research referenced several times on LessWrong and figured I would post the full version. The introduction is reproduced below:

The title of my talk is, "You and Your Research." It is not about managing research, it is about how you individually do your research. I could give a talk on the other subject - but it's not, it's about you. I'm not talking about ordinary run-of-the-mill research; I'm talking about great research. And for the sake of describing great research I'll occasionally say Nobel-Prize type of work. It doesn't have to gain the Nobel Prize, but I mean those kinds of things which we perceive are significant things. Relativity, if you want, Shannon's information theory, any number of outstanding theories - that's the kind of thing I'm talking about.

Now, how did I come to do this study? At Los Alamos I was brought in to run the computing machines which other people had got going, so those scientists and physicists could get back to business. I saw I was a stooge. I saw that although physically I was the same, they were different. And to put the thing bluntly, I was envious. I wanted to know why they were so different from me. I saw Feynman up close. I saw Fermi and Teller. I saw Oppenheimer. I saw Hans Bethe: he was my boss. I saw quite a few very capable people. I became very interested in the difference between those who do and those who might have done.

When I came to Bell Labs, I came into a very productive department. Bode was the department head at the time; Shannon was there, and there were other people. I continued examining the questions, "Why?" and "What is the difference?" I continued subsequently by reading biographies, autobiographies, asking people questions such as: "How did you come to do this?" I tried to find out what are the differences. And that's what this talk is about.

I consider this talk good and useful not only for those interested in research, but for those interested in achieving much of anything. Check it out!

Better Rationality Through Lucid Dreaming

10 katydee 18 October 2013 08:48PM

Note: this post is no longer endorsed by the author, for reasons partially described here.

In the spirit of radioing back to describe a path:

The truly absurd thing about dreams lies not with their content, but with the fact that we believe them. Perfectly outrageous and impossible things can occur in dreams without the slightest hesitance to accept them on the part of the dreamer. I have often dreamed myself into bizarre situations that come complete with constructed memories explaining how they secretly make sense!

However, sometimes we break free from these illusions and become aware of the fact that we are dreaming. This is known as lucid dreaming and can be an extremely pleasant experience. Unfortunately, relatively few people experience lucid dreams "naturally"; fortunately, lucid dreaming is also a skill, and like any other skill it can be trained.

While this is all very interesting, you may be wondering what it has to do with rationality. Simply put, I have found lucid dreaming perhaps the best training currently available when it comes to increasing general rationality skills. It is one thing to notice when you are confused by ordinary misunderstandings or tricks; it is another to notice while your own brain is actively constructing memories and environments to fool you!

I've been involved in lucid dreaming for about eight years now and teaching lucid dreaming for two, so I'm pretty familiar with it on a non-surface level. I've also been explicitly looking into the prospect of using lucid dreaming for rationality training purposes since 2010, and I'm fairly confident that it will prove useful for at least some people here.

If you can get yourself to the point where you can consistently induce lucid dreaming by noticing the inconsistencies and absurdities of your dream state,[1] I predict that you will become a much stronger rationalist in the process. If my prediction is correct, lucid dreaming allows you to hone rationality skills while also having fun, and best of all permits you to do this in your sleep!

If this sounds appealing to you, perhaps the most concise and efficient resource for learning lucid dreaming is the book Lucid Dreaming, by Dr. Stephen LaBerge. However, this is a book and costs money. If you're not into that, a somewhat less efficient but much more comprehensive view of lucid dreaming can be found on the website dreamviews.com. I further recommend that anyone interested in this check out the Facebook group Rational Dreamers. Recently founded by LW user BrienneStrohl, this group provides an opportunity to discuss lucid dreaming and related matters in an environment free from some of the mysticism and confusion that otherwise surrounds this issue.

All in all, it seems that lucid dreaming may offer a method of training your rationality in a way that is fun,[2] interesting, and takes essentially none of your waking hours. Thus, if you are interested in increasing your general rationality, I strongly recommend investigating lucid dreaming. To be frank, my main concern about lucid dreaming as a rationality practice is simply that it seems too good to be true.

 

[1] Note that this is only one of many ways of inducing lucid dreaming. However, most other techniques that I have tried are not necessarily useful forms of rationality practice, effective as they might be.

[2] And, to be honest, "fun" is an understatement.

Making Fun of Things is Easy

32 katydee 27 September 2013 03:10AM

Making fun of things is actually really easy if you try even a little bit. Nearly anything can be made fun of, and in practice nearly anything is made fun of. This is concerning for several reasons.

First, if you are trying to do something, whether or not people are making fun of it is not necessarily a good signal as to whether it's actually good.[1] A lot of good things get made fun of, and a lot of bad things get made fun of. Optimally, only bad things would get made fun of, making it easy to determine what is good and bad - but this doesn't appear to be the case.

Second, if you want to make something sound bad, it's really easy. If you don't believe this, just take a politician or organization that you like and search for some criticism of it. It should generally be trivial to find people that are making fun of it for reasons that would sound compelling to a casual observer - even if those reasons aren't actually good. But a casual observer doesn't know that and thus can easily be fooled.[2]

Further, the fact that it's easy to make fun of things means that a clever person can find themselves unnecessarily contemptuous of anything and everything. This sort of premature cynicism is a failure mode I've noticed in many otherwise very intelligent people. Finding faults with things is pretty trivial, but you can quickly go from "it's easy to find faults with everything" to "everything is bad." This is an undesirable mode of thinking - even if everything really were bad, believing so wouldn't be particularly helpful.

[1] Whether or not something gets made fun of by the right people is a better indicator. That said, if you know who the right people are you usually have access to much more reliable methods.

[2] If you're still not convinced, take a politician or organization that you do like and really truly try to write an argument against that politician or organization. Note that this might actually change your opinion, so be warned.

What's Your Hourly Rate?

6 katydee 11 September 2013 06:52PM

Here's an interesting post about calculating the value of your free time and why it might not be as simple as some tend to think.

Optimize Your Settings

14 katydee 29 July 2013 09:10PM

Related to: The Good News of Situationist Psychology

Perhaps the most significant teaching social psychology has to offer is that most of our behaviors are determined by situational factors inherent to our settings, not by our personal qualities.[1]

Some consider this depressing-- for instance, the Milgram experiments in obedience to authority and Stanford prison experiment are often cited as examples of how settings can cause otherwise-good people to participate in and even support unethical and dangerous behavior. However, as lukeprog points out in The Good News of Situationist Psychology, this principle can also be considered uplifting. After all, if our settings have such an effect on our behavior, they are thus a powerful tool that we can employ to make ourselves more effective.[2]

 

Changing Your Physical Settings

One relatively easy place to start making such changes is in your personal life. I have found that great productivity increases can be gained through relatively minor changes in lifestyle-- or even seemingly-trivial matters such as the position of physical (or sometimes digital) objects in your environment!

For instance, I recently noticed a tendency in myself to "wake up" and then waste the next twenty or thirty minutes aimlessly browsing the Internet on my laptop in bed before actually getting up and eating breakfast, showering, going to work, etc. Since I value time, especially morning time, substantially, I decided that action should be taken to avoid this.

At first, I figured that once I had noticed the problem I could simply apply willpower and avoid it, but this proved less than effective-- it turns out that my willpower is not at its strongest when I first wake up and am still a little groggy![3] I then decided to apply the principles of situational psychology to the situation. The most obvious setting contributing to the problem was that I was using an alarm app on my computer to wake up in the morning, and turning off this alarm caused me to interact with the computer.

So I picked up an IKEA alarm clock, turned off my alarm app, and moved my computer to the kitchen instead of my room-- problem solved. In my new settings, browsing in bed was outright ridiculous-- I'd have to wake up, go downstairs to the kitchen, pick up my computer, and bring it back up to my room with me. Not a likely course of events!

 

Changing Your Mental Settings

While physical environments can certainly produce changes in behavior,[4] social and intellectual environments can too.

For instance, one of my friends from undergrad took an interesting approach when choosing what major to take. He knew that he wanted a solid private-sector income that would allow him to support a family, but didn't particularly care what field it was in. Overall, he wanted to ensure that whatever major he chose would have the highest possible chance of getting him a good job without unusual effort or circumstances.

Therefore, during winter term of his sophomore year, prior to declaring, he went around to all the seniors he could get to talk to him and asked them what their major was, what they were doing post-graduation, and how much money they anticipated making. He found that the CS majors tended to have more private-sector job prospects and higher average starting salaries than students in other fields, so he decided to declare a CS major.[5]
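The friend's procedure is, at bottom, a group-and-average over survey responses: bucket reported starting salaries by major and pick the major with the highest mean. A minimal sketch (the survey data and salary figures below are invented for illustration, and this simplification ignores the job-prospects dimension he also weighed):

```python
# Sketch of the survey method: group seniors' reported starting salaries
# by major and pick the major with the highest average. Data is invented.
from collections import defaultdict

def best_major(responses):
    """responses: list of (major, expected_starting_salary) pairs."""
    by_major = defaultdict(list)
    for major, salary in responses:
        by_major[major].append(salary)
    averages = {major: sum(s) / len(s) for major, s in by_major.items()}
    return max(averages, key=averages.get)

survey = [("CS", 95000), ("CS", 105000), ("History", 45000),
          ("Economics", 70000), ("History", 50000), ("Economics", 72000)]

print(best_major(survey))  # -> CS
```

Note the obvious caveats: small samples, self-reported and anticipated (not actual) salaries, and selection effects in who agreed to talk. The method's virtue is not statistical rigor but that it beats unstructured guessing.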

While I don't think my friend's approach is necessarily the best possible option for determining what to do with your life, it certainly beats the sort of unstructured guessing that I've seen many others do. By considering academic majors as settings and examining what setting produced the best result on average, my friend managed to find a field and career that he's by all indications quite happy in-- and with a minimal amount of risk and stress involved.

 

Conclusion

Human psychology is greatly influenced by situational factors, and in more ways than a naive reasoner might expect. If you're looking to improve your life across any particular axis, one good way to start is by examining your current physical, social, and intellectual settings and paying close attention to how changes in those settings might help accomplish your goals.

 

[1] If you don't believe that this is true, I advise simulating that you do and going on anyway. I find this method effective enough for me and others and easy enough to implement that it seems well worth testing, even if you don't fully believe in the claims behind it. At worst, it might become a potential epistemic/instrumental tradeoff.

[2] See for instance Joseph Heath and Joel Anderson, Procrastination and the Extended Will (2009).

[3] In the course of researching and writing this post, I encountered some objections to the resource expenditure theory of willpower (many of which have already been summarized here by Jess_Riedel). I suspect that my belief that my willpower is weaker while tired or just awakening may be limiting in the same sense that believing willpower is a limited resource appears limiting, but I have yet to test this at the time of writing.

[4] If you're interested in seeing other examples of ways in which we can structure the physical objects around us in order to become more productive, you may wish to check out Alicorn's How to Have Things Correctly and fowlertm's related How to Have Space Correctly. Several of Alyssa Vance's Random Life Tips also relate to this matter.

[5] The friend in question is now employed as a software engineer at a tech company and by all indications loves his job. Note though that this post isn't saying "you should be a CS major." Things change over time, and what was a good choice for one person and one time may not be a good choice for another person or another time.

Epistemic and Instrumental Tradeoffs

20 katydee 19 May 2013 07:49AM

Related: What Do We Mean By "Rationality?"

Epistemic rationality and instrumental rationality are both useful. However, some things may benefit one form of rationality yet detract from another. These tradeoffs are often not obvious, but can have serious consequences.

For instance, take the example of learning debate skills. While involved in debate in high school, I learned how to argue a position quite convincingly, muster strong supporting evidence, prepare rebuttals for counterarguments, prepare deflections for counterarguments that are difficult to rebut, and so on.

I also learned how to do so regardless of what side of a topic I was assigned to.

My debate experience has made me a more convincing and more charismatic person, improved my public speaking skills, and bolstered my ability to win arguments. Instrumentally speaking, this can be a very useful skillset. Epistemically speaking, this sort of preparation is very dangerous, and I later had to unlearn many of these thought patterns in order to become better at finding the truth.

For example, when writing research papers, the type of motivated cognition used when searching for evidence to bolster a position in a debate is often counterproductive. Similarly, when discussing what the best move for my business to make is, the ability to argue convincingly for a position regardless of whether it is right is outright dangerous, and lessons learned from debate may actually decrease the odds of making the correct decision-- if I'm wrong but convincing and my colleagues are right but unconvincing, we could very well end up going down the wrong path!

Epistemic and instrumental goals may also conflict in other ways. For instance, Kelly (2003)[1] points out that, from an epistemic rationality perspective, learning movie spoilers is desirable, since they will improve your model of the world. Nevertheless, many people consider spoilers to be instrumentally negative, since they prefer the tension of not knowing what will happen while they watch a movie.

Bostrom (2011)[2] describes many more situations where having a more accurate model of the world can be hazardous to various instrumental objectives. For instance, knowing where the best parties are held on campus can be a very useful piece of knowledge to have in many contexts, but can become a distracting temptation when you're writing your thesis. Knowing that one of your best friends has just died can be very relevant to your model of the world, but can also cause you to become dangerously depressed. Knowing that Stalin's wife didn't die from appendicitis can be useful for understanding certain motivations, but can be extraordinarily dangerous to know if the secret police come calling.

Thus, epistemic and instrumental rationality can in some cases come into conflict. Some instrumental skillsets might be better off neglected for reasons of epistemic hygiene; similarly, some epistemic ventures might yield information that it would be instrumentally better not to know. When developing rationality practices and honing one's skills, we should take care to acknowledge these tradeoffs and plan accordingly.

 

[1] Kelly, T., (2003). Epistemic Rationality as Instrumental Rationality: A Critique. Philosophy and Phenomenological Research, 66(3), pp. 612-640.

[2] Bostrom, N., (2011). Information Hazards: A Typology of Harms from Knowledge. Review of Contemporary Philosophy, 10, pp. 44-79.

Use Search Engines Early and Often

0 katydee 05 May 2013 08:33AM

The Internet contains vast amounts of useful content. Unfortunately, it also contains vast amounts of garbage, superstimulus hazards, and false, meaningless, or outright harmful information. One skill that is therefore quite useful in the modern day is using search engines well, which allows you to separate the wheat from the chaff. In doing so, one can often uncover preexisting work that solves your problem, answers to relevant factual questions, and so on. It is rare to find a situation where search engines are outright useless-- at the very least they tend to point you in the direction of useful information.

Further, the time cost of setting up and refining a search is extremely low, meaning that most of the time "just Google it" should in fact be your default response to a situation where you don't have very much information.[1] Overall, I consider one's ability to use search engines-- and, just as importantly, one's ability to recognize what types of situations can benefit from using them-- a basic but fairly significant instrumental rationality skill.

Much of the above sounds extremely obvious, and in point of fact it should be-- but the fact remains that people don't use search engines anywhere near as often as they seemingly should. I've frequently found myself in situations where someone in the same room as me asks me a trivially searchable factual question while we are both using computers. Worse still, I've been in situations where people do the same over IRC! The existence of lmgtfy indicates that others have noticed this issue before, and yet it remains a problem.

So, how can we do better?

One easy trick that I've found very helpful is to use Goodsearch instead of Google. Goodsearch is a service that automatically donates a cent to a charity of your choice whenever you search.[2] Further, it can be installed into your search toolbar in Firefox, making the activation cost of using Goodsearch rather than Google essentially zero if, like me, you tend to search in the search bar instead of the URL field. Goodsearch has had profound effects on my tendency to perform searches because it gives me a little hit of "doing good" every time I perform a search, thus encouraging me to do so in more situations, thus causing me to accrue more money via Goodsearch, etc.

This has not only made me more productive by causing me to search more, but has also added positive externalities to every search I conduct. Earlier, I would have said that I frequently used search engines to find out information about a new topic or project-- now I would say that I do this nearly automatically as the first step in most situations where I need some information before proceeding. The potential information gained from a search is very high, the costs of performing a search are very low, and with Goodsearch you can donate a little bit to charity while you do so.

If you're reading this in Firefox and haven't already spent large amounts of time getting used to advanced search methods in other engines (and maybe even if you have), I strongly suggest navigating over to Goodsearch, signing up for an account, and installing the Goodsearch App to make it your default toolbar search. For me, this proved to be a big win-- opportunities to increase instrumental rationality for only a minimal time expenditure while also earning free money for charity are not exactly common!

 

[1] Note that there are some things you might not want to Google. I would, for instance, be very careful about what terms I used if I were looking into the history of political assassinations.

[2] Before anyone gets too clever, there are restrictions.

Rationality Quotes May 2013

6 katydee 03 May 2013 08:02PM

Here's another installment of rationality quotes. The usual rules apply:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
  • No more than 5 quotes per person per monthly thread, please.

 
