Curriculum suggestions for someone looking to teach themselves contemporary philosophy

7 quanticle 31 May 2013 04:20AM

Hello LessWrong,

I just (finally) finished Good and Real, by Gary Drescher. It was a very stimulating read, and I'd like to continue learning philosophy on my own. However, I'm running into a bootstrapping problem. I don't know what I don't know, and therefore, I don't know where I should get started. I've tried searching the LessWrong archive to see if anyone has made a post outlining a curriculum for someone looking to teach themselves the fundamentals of modern philosophy and logic, but either my Google-fu is weak or no such post exists. So, what should someone who is looking to reduce the inferential distance between themselves and modern philosophical thought read, and in what order?

Or, do you all think this is a quixotic quest that I should give up on?

Ruthless Extrapolation

0 quanticle 13 July 2012 08:51PM

Ruthless Extrapolation

Article Summary: One of humanity's key adaptations is the ability to spot trends, which lets us anticipate and preemptively adapt to future conditions. However, this ability has its limits. We're very good at seeing first derivatives, but terrible at seeing higher-order changes in those trends. This leaves us vulnerable when a first-derivative trend unexpectedly shifts. The article's example is energy: our adaptation to continually increasing energy usage leaves us vulnerable to a future in which ever-increasing energy resources are no longer available.

I have two questions regarding the linked article. First, is there a name for this cognitive bias? The author uses "Ruthless Extrapolation", which I find quite fetching, but I suspect it's well known enough to have a name already. Second, what assumptions do we make that could be described as ruthless extrapolation? It seems to me that many in the Singularity Studies community simply assume that CPU transistor densities will continue to increase indefinitely, which certainly looks like a case of ruthless extrapolation. What would happen to whole-brain emulation if we woke up tomorrow and found that the most powerful CPU possible had a transistor density only two or four times higher than an Ivy Bridge Core i7?
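The first-derivative failure mode is easy to demonstrate numerically. Here is a toy sketch (my own illustration, not from the linked article; the logistic model and all numbers are assumptions chosen for the example) of measuring a constant growth rate during the steep phase of a saturating process and "ruthlessly" extrapolating it forward:

```python
# Toy illustration: extrapolating a first-derivative trend fails when
# the underlying process saturates. We assume logistic growth with
# capacity K = 100; a constant-slope fit to the steep middle phase
# projects far past the true ceiling.

import math

def logistic(t, K=100.0, r=0.5, t0=10.0):
    """Logistic curve: looks like steady growth near its midpoint t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Observe the steep phase (t = 6..14) and estimate a constant growth rate.
t_obs = list(range(6, 15))
y_obs = [logistic(t) for t in t_obs]
slope = (y_obs[-1] - y_obs[0]) / (t_obs[-1] - t_obs[0])  # first derivative only

# Linear extrapolation to t = 30 vs. the true, saturating value.
t_future = 30
predicted = y_obs[-1] + slope * (t_future - t_obs[-1])
actual = logistic(t_future)

print(f"linear extrapolation at t={t_future}: {predicted:.1f}")  # far above 100
print(f"actual saturating value at t={t_future}: {actual:.1f}")  # ~100
```

The linear fit projects a value well over twice the true capacity, even though it matched the observed data perfectly; nothing in the first derivative alone warns you that the curve is about to bend.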

Bertrand Russell's Ten Commandments

3 quanticle 06 May 2012 07:52PM

Bertrand Russell's Ten Commandments for teachers.

  1. Do not feel absolutely certain of anything.
  2. Do not think it worth while to proceed by concealing evidence, for the evidence is sure to come to light.
  3. Never try to discourage thinking for you are sure to succeed.
  4. When you meet with opposition, even if it should be from your husband or your children, endeavour to overcome it by argument and not by authority, for a victory dependent upon authority is unreal and illusory.
  5. Have no respect for the authority of others, for there are always contrary authorities to be found.
  6. Do not use power to suppress opinions you think pernicious, for if you do the opinions will suppress you.
  7. Do not fear to be eccentric in opinion, for every opinion now accepted was once eccentric.
  8. Find more pleasure in intelligent dissent than in passive agreement, for, if you value intelligence as you should, the former implies a deeper agreement than the latter.
  9. Be scrupulously truthful, even if the truth is inconvenient, for it is more inconvenient when you try to conceal it.
  10. Do not feel envious of the happiness of those who live in a fool’s paradise, for only a fool will think that it is happiness.

I find this to be of use not just for teachers but for rationalists in general. #8, in particular, is an especially eloquent formulation of Aumann's Agreement Theorem.

[LINK] Signalling and irrationality in Software Development

9 quanticle 21 November 2011 04:24PM

Why Software Projects are terrible and how not to fix them (by Drew Crawford):

Unless you are having a meeting with the one person who is going to use the software that you’re writing, you’re not meeting with the real customer.  You’re meeting with a person who has to explain to someone who can explain to someone who can explain what you’re saying to the real customer.  It’s not enough to convince the person you’re sitting in the room with that Agile is a good idea.  He has to convince his boss.  That person has to convince his boss.  That person has to convince the sales team.  The sales team has to convince the customer.  If the customer is b2b, your contact at the customer organization has to convince his boss.  Who convinces his boss.  Who convinces the real customer.  Maybe.  Unless that sale is also b2b.  This is a very long game of telephone.  If the guy you’re talking to is thinking “This sounds like a really good idea but I’m concerned I can’t sell this upstairs,” you are dead in the water.  At any point in the chain, if somebody thinks that, you are dead in the water.  You can’t just say “It’s objectively better,” you have to show how he can turn around and sell the idea to someone else.

Put yourself in the middle manager’s shoes.  If the project goes bad, he has to “look busy”.  He has to put more developers on the project, call a meeting and yell at people, and other arbitrary bad ideas.  Not because he thinks those will solve the problem.  In fact, managers often do this in spite of the fact that they know it’s bad.   Because that’s what will convince upper management that they’re doing their best.

In other words, it's all about signaling, isn't it? Managers will take actions that actively harm the project's progress if those actions make them look "decisive" and "in charge". I've seen this on many projects I've been on, and it took me a while to realize that my managers weren't stupid or ignorant. The organization I was working in simply put a higher priority on process than on results. My managers, therefore, quite rationally did things that maximized their apparent value in the eyes of their bosses, even when it hurt the project (and, as a result, the entire organization).

Crawford then goes on to detail why organizations with such maladaptive practices survive:

Yes, businesses are under pressure to gravitate toward bad engineering practices, but shouldn’t they be under equal market pressure to compete against companies that are using actually good software engineering practices?  Shouldn’t, at some point, bad companies simply implode under their own weight? Why sure, in the long run.  But as Keynes succinctly put it, “In the long run, we’ll all be dead.”  Eventually is a long time.  It’s months, years, or decades.  A project can be failing a long time before management is clued in.  And even longer before management’s management is clued in.  And it can be ages before it hits the user.

I think this is something that we as rationalists sometimes forget. Irrationality has momentum. Humans had been thinking intuitively for thousands (hundreds of thousands, even) of years before we figured out how to think with rigorous rationality. Even if rationality had a massive advantage over intuitive thinking in everyday situations (it doesn't, as far as I can tell), it would take a very long time for rational thought to propagate through society.

So the next time you get frustrated at some bit of wanton irrationality, remind yourself, "Momentum," before you get frustrated.


EDIT: Fixed spelling as per RolfAndreassen's post.

How did you come to find LessWrong?

5 quanticle 21 November 2011 03:32PM

I was reflecting the other day on how I learned about LessWrong. As best as I can recall/retrace, I learned about LessWrong from gwern, whom I met essentially by chance in the #wikipedia IRC channel. I'm wondering how typical my experience is. How did you come to LessWrong?

EDIT: Optional follow-up question: Do you think that we (the community) are doing enough to bring in new users to LessWrong? If not, what do you think could be done to increase awareness of LessWrong amongst potential rationalists?