To put it bluntly, I think it made me smarter. Not more intelligent in the IQ sense - I remain between 125 and 130 - but quicker to notice confusion, see contradictions and avoid dead-ends. So I waste a little less time on predictably fruitless endeavors, my thinking is much more consistent (after a lot of house-cleaning), and I have clear priorities that help me decide right even when pressed for time. These changes have also made me more aware of mistakes others make, and more certain in rejecting them. I have had to learn to point out the mistakes of others more nicely and effectively, but I'm nowhere near good enough at that yet.
I learned a lot about artificial intelligence and machine learning, and am now introducing machine learning methods into my work environment.
I met a bunch of great people, especially at Secular Solstices.
I got a felt impression of how huge the smarter-than-me population actually is, and how sharply limited my abilities are. This helped me start an earnest search for the best task I can do at my level of ability. Similarly, I got an acute sense of how people at different levels of cognitive ability see the world entirely differently - independently of cultural and economic factors, just depending on the quantity and quality of interpretations and implications they're able to draw from their perception.
I got rid of a lot of false beliefs and a couple of people who continue to hold them. This freed up lots of attentional resources, which I partly reinvested into better beliefs and better people. From the latter I learned more good skills, such as standing up for my needs and empathetic communication.
The rest of the freed-up attention largely went into a huge art project that is incredibly satisfying.
I got better at modeling rational thought processes in other people, which helps in negotiations and got me a quite comfortable salary. I've come to rely on this quite a bit, and like to think it makes me an effective communicator. But at the same time, people for whom this sense fails (whom I cannot model as rational agents) feel unsettling to me, and I increasingly try to avoid them.
Perhaps most of all, I appear to make fewer stupid mistakes. The absence of something is always hard to notice, but it feels like I'm paying some kind of stupidity tax all the time, and that tax rate has gone down. Not an effect you notice after a day or two, but over the years, the benefits accumulate.
Hi everybody, first post, long-time lurker. For the first few months after I discovered LessWrong via HPMOR it was no more than a shiny distraction, until I read The Motivation Hacker by Nick Winter, which was mentioned in some post here. After I read it, it was clear for the first time that I could gain knowledge and skills in almost any area I wanted, and things that I had merely wished to know before became real possibilities.
More concretely: since then I have begun learning programming (Python, Haskell, and currently Java) by myself, and I am now a few months from launching my first app for Android and iOS, a skill I always wanted but took no action to acquire, for no reason other than that the choice was not even in my mind. I have been learning German (currently intermediate level), I plan to become an effective altruist after I get a more stable income, and I am taking various online courses on economics, neurobiology, and programming. Finally, I now think in terms of the consequences of my actions instead of whether something is my responsibility or not. I do almost all of this with the help of pomodoros, Beeminder, and spaced repetition.
I think it was the combination of the Zeitgeist of LessWrong with a vivid example of someone taking action, from reading The Motivation Hacker, that helped me develop a growth mindset.
P.S.: Sorry for the English, it's my second language.
May I ask what the utility of Haskell is? Or rather, in what field does it have one? Functional programming as a shortcut is great, but Python has that covered. Even C# LINQ has that covered; for most people, pragmatic functional programming is about writing domain-specific query languages, as a lot of complicated programming can be reduced to input, massage the data, output. The rest is often just library-juggling. As opposed to this pragmatically functional stuff, purely functional programming is largely about avoiding bugs of certain types, but in my experience 95% of bugs come not from those types, but from misunderstanding requirements, or from the requirements themselves being sloppy and chaotic. Pure functionality is largely about programming like a mathematician, strictly formal, with everything the result of reasoning instead of just cobbling things together by trial and error, which tends to characterize most programming; but the kind of bugs this formalist attitude cuts down on is not really the kind that actually annoys users. So I wonder what utility you found in Haskell.
I do not have much experience with functional programming, but I'll try to answer anyway:
There is a huge difference between programming as an academical discipline, and programming as in "what 99% of programmers in private sector do". The former is like painting portraits that will survive centuries, the latter is like painting walls. Both kinds of "painters" use colors, although the wall painters are usually okay with using only white color and the most experienced of them are really proud to use a paint roller with the great skill that only comes from years of experience. The portrait painters usually have extremely low square-meters-per-hour productivity.
Programming as a science is about writing effective and provably correct algorithms, and designing abstract tools to make writing and proving of the algorithms easier. There is often a lot of math involved. Here are some random keywords: Turing completeness, Computational complexity, Formal languages, Lambda calculus...
Programming as a craft is about using existing tools (programming languages and libraries) and solving real-life problems with them. Also about developing practical skills that make cooperation and maintenance of larger projects easier.
So, if programming as a craft is what 99% of programmers do, why do we even need the science? It's because without the science we wouldn't have the tools. At some point of history, procedural programming was an academic toy, and all commercial programmers were happy to use GOTO all the time. At some other point, object-oriented programming was an academic toy that didn't seem to bring any improvement to real-life problems. Today, procedures and objects are our daily bread, at least in Java/C# development, although most "object-oriented" developers don't really understand the Liskov substitution principle and couldn't solve the circle-ellipse problem properly. I've met people who had years of experience in JavaScript, who were surprised to hear that you can use objects in JavaScript. But if someone does not understand the nature of objects in JavaScript, they would never be able to create jQuery.
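(To make the circle-ellipse remark concrete, here is a rough, hypothetical Java sketch; the class names and contract are invented for illustration, not taken from any particular codebase. The point is that a mutable Circle cannot honor the Ellipse contract, which is exactly a Liskov substitution violation.)

```java
// Hypothetical sketch of the circle-ellipse problem.
class Ellipse {
    protected double width;
    protected double height;

    void setWidth(double w)  { this.width = w; }   // contract: changes width only
    void setHeight(double h) { this.height = h; }  // contract: changes height only

    double area() { return Math.PI * (width / 2) * (height / 2); }
}

class Circle extends Ellipse {
    // To stay a circle, width and height must always be equal,
    // so each setter silently changes the other dimension as well.
    @Override void setWidth(double w)  { this.width = w; this.height = w; }
    @Override void setHeight(double h) { this.width = h; this.height = h; }
}

class Client {
    // Code written against Ellipse breaks when given a Circle:
    // it assumes setHeight() leaves the width untouched.
    static double stretchedArea(Ellipse e) {
        e.setWidth(2.0);
        e.setHeight(10.0); // for a Circle, this also resets the width to 10.0
        return e.area();   // not the 2x10 ellipse the caller expected
    }
}
```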
in my experience 95% of bugs come not from those types, but from misunderstanding requirements, or from the requirements themselves being sloppy and chaotic
Yes, this is because in real life, most people are idiots. Of course, when the people who are paid for writing requirements can't do it well, most problems will be related to the requirements. But if you manage to fix this problem (for example, if you cooperate with smart people, or write the requirements yourself), then most of the remaining bugs will come from somewhere else.
There is a kind of improvement that comes strictly in the form of limitations. Such as "thou shalt not use GOTO". Or think about declaring variables and functions as "public" and "private"; all this does is forbid you to do something you would be able to do in a different language. And yet somehow these limitations make programming easier. Programs without GOTO commands are on average much easier to read, although there are specific situations where a specific algorithm could have been easier with some GOTO. (Also we have the "break" and "continue" commands to create so-called one-and-a-half loops that some purists frown upon.) Declaring a variable or method private allows you to change it without having to read the rest of the program. And these limitations work much better when supported by the IDE. You want your IDE to flag an edit-time error when a private variable is used from a different class, because then you have immediate feedback.
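(A minimal sketch of what that edit-time support buys you; the Account/Report classes are invented purely for illustration. The compiler, and therefore the IDE, rejects any access to the private field from outside the declaring class, which is what makes local reasoning about that field possible.)

```java
// Hypothetical example: the restriction is what makes the guarantee.
class Account {
    private long balanceCents;          // only Account itself may touch this

    void deposit(long cents) { balanceCents += cents; }
    long balance()           { return balanceCents; }
}

class Report {
    long total(Account a) {
        // return a.balanceCents;       // compile error: balanceCents has private access
        return a.balance();             // forced to go through the public interface
    }
}
```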
Functional programming, as I understand it (and I am not an expert), seems like two related concepts: First, you can use functions as values. (You could simulate it in Java by various anonymous Runnable classes and adapters, but that would be a lot of boilerplate code, at least before Java 8.) Second, you have immutable objects, and functions without side effects. The first thing is syntactic sugar, but the second thing is the kind of limitation that can make programming easier. Yes, you can have immutable objects in Java, but the compiler will not check this property for you. (There is no such thing as telling your standard Java IDE: "this class is supposed to be immutable; if there is some way I didn't notice that could actually mutate its values, please underline the code in red".) So it is like a language that allows you to create your own convention about which variables and methods can be used only within the class that declared them, but does not provide any edit-time support for the "private" keyword.
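(For the "functions as values" half, here is a rough illustration of the boilerplate difference mentioned above, contrasting a pre-Java-8 anonymous class with a Java 8 lambda-style method reference; the sorting example itself is my own, not from the comment.)

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class SortExample {
    public static void main(String[] args) {
        List<String> words = new ArrayList<>(Arrays.asList("pear", "fig", "banana"));

        // Before Java 8: passing behavior means wrapping it in an anonymous class.
        Collections.sort(words, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // With Java 8: the function itself is the value being passed.
        words.sort(Comparator.comparingInt(String::length));

        System.out.println(words); // [fig, pear, banana]
    }
}
```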
Checking the immutability of an object at compile time is important, because when you decide to use this trait heavily (analogously to when you decide to use object-oriented programming heavily, as opposed to only having a class or two in otherwise procedural code), and you make a mistake and one of those classes actually happens to be mutable, the whole concept falls apart. What is the advantage of using almost exclusively immutable objects? It makes multi-threaded development and caching of values much easier. If you are going to develop an application that heavily uses multiple processors, or even multiple computers, for doing one computation, you will want to use immutable objects. Otherwise you will have to do a lot of thread-related micromanagement, and most likely you will produce a lot of bugs. On the other hand, if you add a few controlled exceptions to the immutability rule (handled via monads) to otherwise functional code, you can still write the usual kind of applications. So, functional programming seems superior, because it expands your abilities.
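(And for the "compiler does not check immutability" half, a sketch of the failure mode described above: a Java class that looks immutable but is not, with nothing in the language flagging it. The Order class is invented for illustration.)

```java
import java.util.ArrayList;
import java.util.List;

// Looks immutable: the class is final, all fields are final, there are no setters.
final class Order {
    private final String customer;
    private final List<String> items;

    Order(String customer, List<String> items) {
        this.customer = customer;
        this.items = items;            // oops: keeps a reference to the caller's list
    }

    List<String> items()  { return items; }    // oops: hands the mutable list back out
    String customer()     { return customer; }
}

class Demo {
    public static void main(String[] args) {
        List<String> cart = new ArrayList<>();
        cart.add("book");

        Order order = new Order("Alice", cart);
        cart.add("laptop");                      // the "immutable" Order just changed
        order.items().clear();                   // and callers can empty it, too

        System.out.println(order.items());       // [] -- and the compiler never objected
    }
}
```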
Unfortunately, the more powerful programming concepts require better abstract thinking, and most people, including most professional programmers, are not good at it. They have to be dragged kicking and screaming into using new concepts, and even then they will stubbornly try to write the old code using the new syntax. (Just like many people write C++ code with Java syntax and call it "Java programming"; which usually means they treat interfaces as some inferior form of abstract classes, and complain about Java not having multiple inheritance. Also, their methods have hundreds of lines.) And because the worse programmers are more frequent in the job market, it actually makes sense for companies to use inferior technologies. (For example, this is why PHP still exists despite having no advantage over Python.)
Thanks, this is an excellent explanation. Let me add something. You say you need OOP in JavaScript or else you cannot make jQuery. But what is jQuery needed for? Largely to fix what sucks about JavaScript. Why does JavaScript suck? Partially browser issues, but largely because it is used far beyond its intended purposes. Software companies kept ignoring that HTML is for documents meant for reading, and on top of the puny HTML forms that were originally just meant to do things like post a comment or register your address for purchasing from a webshop, they built whole CRM and accounting and whatnot systems. They treated HTML and JavaScript as a General Client for doing any client-server app. There is even an online version of Photoshop and similar crazy things. Meanwhile, everybody else who used the proper tools for the job and used desktop software for the things where it makes sense, so basically everyone who did not try to force a tool into something it was not meant for, did not have the whole problem to begin with. Of course, in a sense that meant a competitive disadvantage, but that does not always matter so much.
My point is simply that it is not really just stupidity. It is about not needing to go out on the edge: not needing to build a query language into one that was totally not meant for it, and not needing to massage an XHTML document into a GUI, because you use documents for documents and GUIs for GUIs.
But you are right, programming science is great. Just... don't expect to use it if you work for people who don't need to do almost-impossible things :)
JavaScript is probably the most underestimated programming language ever. I am not going into technical details here, but here are the keywords: prototypes, first-class functions. It is a language designed to be embedded in an application; and web browsers are just one of the possibilities -- ActionScript in Flash programming is almost the same language; you can use JavaScript in your own applications; you can develop websites in JavaScript.
What people perceive as "JavaScript sucks" almost always means "web browsers suck". If you wrote web-browser scripts in Java or C++, you would run into exactly the same kinds of problems if each browser (and each version of each browser) provided you with different objects, with different method and variable names, or even with the same methods and variables that did different things. (Actually, this is how Microsoft once tried to destroy Java: by providing a widely used but slightly different version, to make Java programs behave differently in different environments.)
I'm a professional programmer and I know Haskell, but I've only ever written one real Haskell program (an AI for double-move chess). Nevertheless I recommend it. All I can tell you is that if you master it -- I mean really master it, not learn to write Python in Haskell -- then your Python programming will reach a new level as well. You will be able to solve problems that once seemed intractable, which you'd persuade your product manager to scope out.
It used to be that you could get this effect by learning Lisp, but I don't think that works anymore; too many of Lisp's good ideas have since been taken up by more ordinary languages.
I'm taking a class in Haskell, and I'd really like to know this too. Haskell is annoying. It's billed as "not verbose", but it's so terse that reading other people's code and learning from it is difficult. (Note: the person I'm on a project with likes one-letter variable names, so that's a bit of a confounder.)
That sounds like math! :) I suck at math precisely due to its lack of verbosity: as I am more used to reading essays than equations, my brain is used to reading fast and filtering out large chunks of what I read. This shallowness works very well for reviewing philosophy, but in math, missing just one letter leads to not understanding it.
This is, weirdly, how I know that much of programming is applied math: it does not feel so to me. In programming, it is taboo to call a variable by some Greek letter instead of calling it UnitPriceIncludingTax. This leads to me reading code easily and reading math badly.
I did not go very far in Haskell; I was in an exploratory phase, and the lack of libraries for Haskell made me go to Java. My only prior experience with programming had been creating maps and mods with the WarCraft 3 graphical interface. I took online courses and read books on Python because it was easy, and then Haskell just because it is from another paradigm; it helped me understand more deeply recursion, types, and many basic things that were hidden in Python (it being a high-level language).
I finally settled on Java because of its support, libraries, and compatibility with Android. I'm not trying to "know it all" in computer science or programming, not for lack of curiosity but because of opportunity cost: I'm learning what I need to learn, and dedicating a fraction of my focus to learning seemingly unrelated things to take care of the unknown unknowns.
Java as a platform or as a language? The platform is great, but why use the language when the platform also offers Clojure or Scala? While I complained elsewhere that the lack of verbosity makes math hard to read for me, Java is the opposite: the boilerplate verbosity pisses me off. After mentally processing a line like SomeWhateverFactory someWhateverFactory = new SomeWhateverFactory(), I have learned nothing about what the program actually does; it conveys precisely zero information about the actual human utility it delivers. Writing this bullshit may be made easier by tools, but reading it is not.
I'm working with the Java language right now, but it's true that I'm considering using Scala after I finish my current project.
I'll be editing this as I think of more things.
In particular acting more "Slytherin" and taking advantage of stupid people instead of just becoming frustrated with them.
Which chapters, if I may ask? I don't really intend to read the whole thing anytime soon, but this can be useful.
I have a really bad memory for that sort of stuff and can't recall, sorry. Others would probably know though. Maybe post in open thread?
I've learned that it's probably optimal to strive for the most high-earning high-productivity career you can accomplish and donate your extra income to effective altruism and that I'm not going to go and try to do that.
There are other things to optimize, and Less Wrong would cheerfully endorse your optimizing for some combination of:
Someone needs to do those things that the high-earning people are paying for, and if you don't think you'd be happy with an earnings-optimized career, you might be that person.
Less than a month in, I guess, but I'm not drinking booze (drank for 20 years) and am more vigorous in my training. I was always an atheist, but now I understand better why.
I don't really understand it when Scott Alexander writes that new people expect superpowers. Are there any? Did anyone do anything really unusual with it? I don't really expect a huge change, because rationality is all about aiming the arrow better, but it does not change how strongly you pull the bow or how many arrows you have in the quiver.
I am probably in the minority who does not read HPMOR. I just don't understand the point; why mix two incompatible worlds? Isn't rationalism obviously a better candidate for SF than F? Like Larry Niven's Neutron Star? Or is that precisely the point? I like my fantasy irrational and my sci-fi rational; it just feels like the proper way things should be... I probably lack a certain sense of humor here.
The crucial point of HPMoR is that rationality is NOT genre specific. No matter what world you live in, no matter how crazy the laws of physics look, rationality is still important to understanding how the world works.
I am probably in the minority who does not read HPMOR. I just don't understand the point; why mix two incompatible worlds?
HPMOR is quite good. You don't need to understand the point before you read it.
I don't really expect a huge change, because rationality is all about aiming the arrow better, but it does not change how strongly you pull the bow or how many arrows you have in the quiver.
To continue your metaphor, a small improvement in aiming can in some situations significantly increase the ratio of arrows that hit the target. Of course, this assumes that precision was the problem, instead of e.g. distance or lack of arrows. Returning from the metaphor, the benefits of (LW-style) rationality probably also depend on what kind of problems you solve, and what kind of irrational things you were doing before.
What would be the kind of situation where one gets a lot of quick gains from rationality? Seems like a situation where people have to make decisions which later have a big impact on the outcome. (Not the kind where the smart decision is merely 10% more effective than the usual decision; unless those 10% allow you to overcome some critical threshold.)
Decisions like choosing a school, a career, a life partner, whether to join a cult, or whether to move to a different city, etc. Or possibly creating useful habits that give you a multiplier on what you were already doing, such as using spaced repetition or pomodoros, avoiding the planning fallacy or other biases, etc. -- Either making a few good big changes, or developing reliable good habits. (Also, avoiding big bad changes, and getting rid of bad habits.)
Here is an interesting thing. EY often warns people not to try long chains of reasoning, as probability drops with every step, or not to try to think too far ahead. But things like choosing a career or a partner are precisely those things where you cannot just think one or two steps ahead, i.e. where you could predict with some reasonably high probability; you have to think far ahead while knowing you don't really have much of a chance of predicting how things will work out.
This is one of the cases where I think Taleb's anti-fragility shines. It is hard to filter the good stuff out of all the overly brilliant showing-off in his books, but this is part of the good stuff: the idea is not so much to plan ahead, but to make the kinds of choices that are resilient to, or even gain from, surprises that you did not foresee at all.
ESR calls it maximizing the breadth of your option tree, as in not choosing narrow paths. Choosing so that in the future you have many choices, many options available. So when the unforeseen, unpredicted thing happens, you have many options to deal with it. This is probably what anti-fragility is: basically avoiding commitment to a narrow path for as long as possible.
But alas, this also has huge drawbacks! Avoiding commitment to narrow paths can often mean languishing in lukewarm tepidity, as high-achieving people have always chosen narrow paths and worked their butts off to get ahead on them, because narrow paths concentrate the effort more. And avoiding commitment means you are a generalist, and if you want to live in a city, that sucks: cities, with their high population densities, want specialists. Usually. And having options can very well be a bad thing psychologically: paradox of choice, akrasia and all that. I know a guy who never rented his apartments but always bought them on a mortgage, the idea being that he does not have the willpower to save up voluntarily, but if he is committed to paying a mortgage then he will do it, and that builds equity better than voluntary saving, and it pushes him to get better jobs and negotiate harder. Precommitment.
So maximizing future options is both a very good and very bad idea.
useful habits that give you a multiplier on what you were already doing, such as using spaced repetition or pomodoros, avoiding the planning fallacy or other biases
This always confuses me. Are most people entrepreneurs here or what? I don't need better time management because I don't have enough tasks to fill out my workday, and even if I could fill it I wouldn't, as it would not result in a raise or promotion, since those tasks are generally not the visible ones. I don't need to memorize anything; I can just look things up as I need them. I speak two foreign languages (English is foreign to me) and I never memorized words, I just read books with dictionaries until it stuck. The planning fallacy happens to people who plan aggressively, but why the hell would people want to do that, really? Why do Silicon Valley programmers do that? Why is their environment so competitive, or why is mine not? I just make a comfortable guess and multiply it by three to six, based on how many other tasks there seem to be or what my holiday planning looks like. Works all the time. You don't need complicated planning if your planning is already lazy as fsck. Why is it that most methods here are about avoiding being cocksure, while I rarely had that problem, because being undecided about everything was far more comfy and lazy? I feel like somehow the methods are optimized for a very competitive, confident, driven, accomplishment-oriented approach. Probably it requires that you feel that you get rewarded for the things you do. This was always missing for me; in my life experience, what you do and what you get are really loosely related. Or a goal-rich, target-rich environment.
The eternal conflict between exploration and exploitation. Keeping your options open is what keeps the good options within your reach, and prevents you from going too far down blind alleys. But in the end, if you have walked through the whole shop and didn't buy anything, you leave empty-handed. At some point you have to have a job (or other source of income), and people are going to pay you for something specific.
I think this is even more complicated when people are not explicitly aware of the skills they really have. They may feel like they don't specialize in anything, when in fact they do. For example, I have a friend working in IT whose programming skills are not very impressive: he can do simple things in many systems, but is not very good at math, cannot write complicated algorithms, and is not really nerdy enough to spend evenings obsessing over some technical detail. Yet somehow his career was at least as successful as mine. What he lacked in programming skills, he compensated for with great communication and leadership skills. But he didn't realize this was his real strong point; he identified with being a programmer, because that's what most of his friends were. It took him a few years to fully realize that he is more fit for the role of a manager or consultant in an IT company, and that instead of trying to learn yet another programming language (he somehow believed that his lack of mathematical skills could be fixed by finding the "right" programming language, which is a delusion many bad programmers and IT managers seem to share), he should rather find a position where he gets paid explicitly for doing what he is good at. This more or less doubled his salary, and he is no longer worried about not sufficiently understanding some abstract things his nerdy friends debate about. -- So he actually was a specialist all the time, but in a skill he didn't think of as essential for his job.
Are most people entrepreneurs here or what?
I think entrepreneurs are a minority here, but still a larger fraction than in the general population. Also, other types of people need motivation and efficiency while working relatively alone, for example PhD students.
I don't need better time management because I don't have enough tasks to fill out my workday, and even if I could fill it I wouldn't, as it would not result in a raise or promotion, since those tasks are generally not the visible ones.
Do you have any goals outside of your work where being more productive could help you reach them better? My promotion options are also rather limited (and as far as they exist, this website seems more relevant than LW). But I also have other goals, where productivity helps. I am doing the productivity stuff for myself, not for my boss.
The planning fallacy happens to people who plan aggressively, but why the hell would people want to do that ... I just make a comfortable guess and multiply it by three to six
I most frequently think about the planning fallacy when correcting the estimates of my colleagues at work. For example, last week: We had to do 3 critical things, each of them requiring the same resources for at least 1 day. So my colleague immediately sends an e-mail to the customer promising that it will be done in 3 days. Which in reality means 2.5 days, because then we have to travel to the customer, fill in the paperwork, install the stuff, and hope that nothing goes wrong. And it assumes there will be no non-trivial bugs in a project that wasn't maintained for a month, doesn't have proper documentation, and whose two programmers, including the previous team leader, left the company during that month. And my colleague just doesn't care: she sends the promise to the customer, puts my e-mail in the copy, and the problem is "solved". She doesn't even tell me; if I had missed the e-mail, she would only have told me on the third day. So me and a few helpful coworkers voluntarily stayed at work for 12 hours a day, fixed a few horrible bugs, completed the stuff in 3.5 days (that included waiting half a day until a broken server was fixed), delivered the result to the customer... and the next day I was invited to the CEO's office, where my colleague blamed me for failing the customer and for "making her look stupid". (And the only thing that saved my ass was completely unrelated to my skills or work; it was a random piece of office-politics advice from the internet that I had decided to test experimentally at work a few days earlier, and luckily it worked.) -- Uhm, okay, this is not really about the planning fallacy, but about a completely fucked up system. But the planning fallacy appears here all the time. Pretty much all the deadlines we have ever set were unrealistic, and all of them were made like this: "don't think about details, just make a very simplified model, imagine the best-case scenario for that model, and write it down as the official estimate".
I feel like somehow the methods are optimized for a very competitive, confident, driven, accomplishment-oriented approach. Probably it requires that you feel that you get rewarded for the things you do. This was always missing for me; in my life experience, what you do and what you get are really loosely related.
Heh, my work experience also suggests that what I do and what I get are loosely related, and I think this years-long experience has also contributed to my laziness. (It is hard to get motivated when your unconscious insists that what you do is completely unrelated to the outcome, and it is hard to make yourself think otherwise when you have a ton of experimental evidence supporting that.) But I think life outside of work doesn't have to be like this. If I decide to make a computer game in my free time, it is up to me. I do have a computer and a development environment, I know programming, I do have a few hours of free time every week... and it is my choice how to use them.
What he lacked in programming skills, he compensated for with great communication and leadership skills. But he didn't realize this was his real strong point; he identified with being a programmer, because that's what most of his friends were.
This is interesting - I have never assumed people would not know themselves. Now I wonder if I know my own strengths and weaknesses. I communicate so little that I have no idea what opinion people have of me. No feedback at all. I don't remember anyone ever telling me something is my fault when some things did not work out as expected. I don't really remember any praise either beyond the kind of praise that is mostly just politeness.
Do you have any goals outside of your work where being more productive could help you reach them better?
Yes, but they are not open-ended. They are more structured: trainings at specific times of the week, etc. I tend to think the other way around, and this is what weirds me out. I won't set a target body weight for myself with Beeminder; I would rather decide I am not happy with the current one, make a change, and see what happens. If I'm still not happy, another change. I commit to the method, not the goal. I started boxing to lose weight and gain courage, but right now I care about boxing, not weight or courage, if that makes sense. This is because otherwise it would be hard to keep up the willpower. Looking at a mountain 10 km away and walking to it is hard if you keep your eyes on it and constantly think "I want to get there, I want to get there". But if you just remove the goal from your mind and identify with the walking, just telling yourself you are a walky guy, this is just what you are, it is in your nature to walk, it is very easy. So I guess I have all sorts of goals, but they are buried under the methods to reach them. The disadvantage is not being able to change methods if they don't work well; the advantage is not needing a lot of willpower.
So my colleague immediately sends an e-mail to the customer promising that it will be done in 3 days
I think your story is more about not caring at all, because it is not her problem how much the people in the other department suffer. This sounds familiar; this is why we hated salespeople when I worked at consulting companies. Perhaps it can be fixed much higher up with different incentives (no commission paid on sold services that were fulfilled in overtime; instead that commission goes to the people who fulfill them), although the most ingenious solution I have seen, when I worked in the UK, was that the business owner liked to do programming. He did some sales too but mostly left sales and almost all of the project management to others. He would basically pick up various development subprojects in various projects and do them. This made the sales and project management people careful about not over-promising and not pissing off the people who have to fulfill the promises, as it could happen that it was the boss who had to fulfill them.
The best solution for working at per-hour-billed consulting companies is: don't. I think this almost necessarily sucks, because the incentives are all screwed up. Normally people sell results, and the time it took to provide them is a cost. Billing per hour means selling costs, while the customers expect and want to pay for results. This is a contradiction that cannot be resolved. A closely related issue is that businesses see internal and external costs differently: they gladly pay someone X salary per year to do a job, because they visualize it as good old Billy working hard at entering data in accounting, and he supports a family with this pay, so it is all well; but paying an external company 0.1X to automate half his job is seen as far less emotionally appealing, because it is some money-hungry strangers out there with their weird computer magic. I think this kind of efficiency violates a sense of fairness. At any rate, my solution was to not work for consulting companies again, but to find a big enough customer and do it internally. This also has its drawbacks, but the level of trust is much higher.
But if you just remove the goal from your mind and identify with the walking, just telling yourself you are a walky guy, this is just what you are, it is in your nature to walk, it is very easy.
I like Scott Adams's statement of this approach: here and here. (The first link is a cached link because it looks like the original content has moved.)
Yes, I think it is something similar. Of course, it has its failure modes too. Specifically, it is easy to fake.
I have a certain hunch that it has historical and cultural forerunners. I think Anglo-American culture was always goal-oriented, more focused on specific achievements, more of a how-to-get-what-you-want attitude. And the German-Czech style of fairly late-comer capitalism was more in the direction of: just be a conscientious person who does things by the book and puts in the effort to do things really right, so basically have a system, not goals, and take whatever results you get. My point is that effectively both cultures or systems are right. Historically, the first one is basically so efficient that it created the centres of power that run the world today, but the second one is also remarkable, because it had a much shorter time and much more constrained resources, and compared to that it built something remarkable too, so it is probably a good approach as well. I think goal-orientedness works better for people who are natural individualists, and system-orientedness for people who have more of a collective mindset, perhaps. Goals are individual; systems are usually built on shared standards.
(And the only thing that saved my ass was completely unrelated to my skills or work; it was a random piece of office-politics advice from the internet that I had decided to test experimentally at work a few days earlier, and luckily it worked.)
Asides like this should be forbidden as cruelty to animals... I mean readers. I think the kind and compassionate thing to do is to either say what it is, link to it, or never, ever mention it.
The eternal conflict between exploration and exploitation.
I vaguely remember having read one article about this, but was not aware it is a big topic. Got linx?
I didn't have any specific article in mind. It is just a topic that I am aware of in my life. For example, I love learning new things, but instead of using them I often just jump to learning another thing. This seemed like widening my options, until a few years later I realized that I kept forgetting the old things and that I had actually never used most of them. Thus learning is an enjoyable hobby for me, but to make it useful, I have to go beyond mere learning.
There is such a thing as "learning too much", or more precisely, being so obsessed with learning that you never actually use what you learned. (The problem is not having much knowledge per se, but zero application of that knowledge beyond mere signalling.) And this is a mistake that probably many smart people make, and you can get a lot of applause for promoting it as the most noble way of life. On the other hand, as Steve Jobs allegedly said: "Real artists ship."
Of course there is also the opposite mistake of doing some stuff every day for years, and never taking time to learn how to do it better. But among educated people this is considered a low-status mistake, while learning many useless things is a high-status mistake.
A short burst of optimism that I may not die (the Kurzweil phase), followed by the realisation that there's a significant chance that humanity might die in my lifetime.
So, in terms of effects on my life, as in what I've done differently: Not much.
The biggest change, I think, is that I no longer feel alone. Not in the sense of not having anyone in my life, but rather that I now know people who think in roughly the same way I do about roughly the same things I do. To put it in jargon, I have, for the first time, an in-group, a tribe. This is not an effect you should underestimate.
I have also changed my life in some ways and my outlook on the world has grown more realistic, I think. I think about things differently and am more willing to make trade-offs rather than just be paralyzed with indecision. I'm more attentive to opportunities (in all areas of life) and I'm more willing to go for those opportunities.
The most specific change I can point to is that I use my free time a lot better. Used to be that I just sat around playing videogames I didn't really enjoy in full. I can now notice when that is happening and stop doing it, which is a huge improvement on several levels. (Sometimes I then start playing a game I'll probably enjoy a lot more or continue learning to program Python or work on my math skills in Khan Academy.)
Almost the same for me (just replace Python with Android or Unity, and "playing videogames I don't really enjoy" with "reading websites I don't really enjoy").
My father-in-law became suddenly more tolerable once I decided to look for evidence disconfirming my impression of him as an overbearing, condescending, deaf [noun]. It's not like I can find it often, but now I have a sport of looking! (And I value evidence that confirms my hypothesis far less.)
I've gone from hoping we find life in the solar system to really hoping we don't.
I notice I am confused a lot.
I can use Bayes theorem well enough to calculate that the probability of me fully understanding Bayes theorem is 0%.
My thoughts about my future have changed from wondering aimlessly about how I can change my life and the world for the better and not being able to do anything about it, to knowing quite well how I can change my life and the world for the better and still not doing anything about it.
I'm referring to the great filter theory which I first learned about on Less Wrong: http://mason.gmu.edu/~rhanson/greatfilter.html
I changed my intended college major from biomedical engineering to neuroscience+compsci.
I give more money to better charities than I probably would have otherwise.
I have a regular exercise habit that I cultivated with ideas I got from LW.
I might never have read Gödel, Escher, Bach if not for LW.
LW recommended Good and Real, the book that convinced me to become vegetarian and then vegan.
I've picked up various other good habits of thought, and a much better understanding of metaethics, but those are the concretely visible ones.
ETA: also, LW convinced me that I should sign up for cryonics, but I haven't yet because I'm still in school and don't have the money, so I don't know if it counts.
I changed my intended college major from biomedical engineering to neuroscience+compsci.
As a biomedical engineering undergrad, can I ask you what prompted this decision and how the two options compare to each other, in your opinion?
I wanted to do research that would have practical implications for the human condition, and I thought working on genetic diseases was the best way to do that. Various lesswrong memes convinced me that working toward uploading by advancing neuroscience was a better alternative. Also, the exposure to cognitive science on LW and the idea that human intelligence is the Most Important Thing made neuroscience seem a lot more interesting. I can't say much about the comparison, since I changed my plans while still in high school, but I'm glad I did it. For one thing, if I hadn't, I wouldn't have discovered how much I love to code.
Various lesswrong memes convinced me that working toward uploading by advancing neuroscience was a better alternative.
In what kind of timeframe do you consider uploading to be relevant?
Maybe sometime before I die of old age, if I'm very lucky, or sufficiently shortly afterward that it's worth getting cryonics and hoping. Probably sometime within the next 100-200 years, if something else doesn't make it unnecessary by then.
Yes
1) Mealsquares & Romeo and Yvain's advice (sleep apnea & fitness)
2) Meeting intelligent people without needing to pay for it if my life didn't work out in that fashion
3) Having the chance to say the word epistemic without people thinking I'm crazy
4) A whole bunch of other stuff that I'll edit in later
Very thankful for the bay area rationalist community
I know I posted before that I didn't think I got that much out of the sequences. But now I'm studying philosophy and a lot of questions that I used to think were very challenging now are questions that I can answer. In particular, I manage to dissolve a lot of philosophical paradoxes or arguments much quicker than I used to be able to. This gives me more time to devote to my other subjects.
I also think that some of the rationality discussion has helped to improve some of my decisions, but it is very hard to pinpoint which.
In particular, I manage to dissolve a lot of philosophical paradoxes or arguments much quicker than I used to be able to.
Any particular approach you would like to cite there? I've found Peirce's pragmatic maxim to be a good shorthand for a lot of that. "This is just word games with bad definitions."
http://wiki.lesswrong.com/wiki/The_map_is_not_the_territory is very important.
It's also very helpful training to look at various paradoxes such as the Liar Paradox, The Unexpected Hanging Paradox, the Raven Paradox and so on.
I went from being dead set on physics to a joint major in math and physics, following up with a master's in math in a while, and instead of thinking "I should learn programming some time", I am currently sitting in a lecture on Python. Had I not encountered LessWrong, I would have been a quirky, regular physicist who lives their life thinking about some Big Questions, but never actually coming to an answer. I would also not have become an atheist quite as quickly. I intend to keep being less wrong in the future, and I've seen results - positive, as well as negative due to what some people consider too much honesty. But all in all, I have improved my odds of reaching my goals.
I've probably spent slightly more time on this site and slightly less time on other sites. Other than that, I've been introduced to a few e-friends, and found a few good blogs to follow.
The weirdest aspect is that the general mood is more useful to me than the list of methods. I constantly feel encouraged to have the courage not to repeat tribal-membership-signalling truths but to try to find out real ones. I constantly feel encouraged to have the courage not to think "I must say X because Y people already dislike me and I need the support of X people", but to "come out of the closet" as an independent thinker without allegiances.
A very good place to practice is https://www.reddit.com/r/purplepilldebate/ - the whole color coding encourages people to don a tribal jersey and say things like "I as a bluepiller think..." or "you redpiller guys think that...". Injecting a "no-pill" attitude into this takes quite some courage and is useful for exercising and growing this kind of courage-muscle, and with LW I feel I have a non-tribe tribe at my back who understand what I am trying to do, and whom I can lean back on when I feel like redpillers think I am bluepill, bluepillers think I am redpill, and both distrust me. (I am not using this nickname there.)
And another useful thing is that I feel encouraged not to simply adopt a lofty "I am above both" attitude, which is very, very tempting but fruitless and narcissistically smug; feeling superior to mud-wrestling tribes is a textbook classy intellectual move but totally vacuous. Rather, I feel encouraged to go down into the mud and fish truths out of it, some red, some blue.
Made me think more explicitly in terms of Bayesian epistemology. I think this would be the biggest boost I got from being a LW member. It helped tons. I'm now significantly better at evaluating evidence and handling uncertainty. In a more general sense than this, the field of probability as a whole and the importance it has in our lives also increased in salience to me.
LW also made the idea of mental models of the world more salient to me. I had it before as well, but not as sharply defined and applicable to everyday situations.
Another nifty benefit was that LessWrong is an important node in a network of very interesting websites, blogs, and thinkers, and it opened up basically a whole new realm full of smart people.
LessWrong is probably the first website I visit daily that can be broadly considered "intellectual" in theme; the rest of them are hobby stuff. It's an important contributor to the fact that I haven't yet let my brain rot.
(This list turned out longer than I expected; only the first item on it was salient to me, but the more I thought about it, the more contributions I could think up.)
Overall, I feel that LessWrong made a noticeably significant positive impact on my life.
How has LessWrong changed my life? I would say that I have learned a lot in regards to Bayesianism and epistemology. I became a transhumanist and developed an interest in cryonics before I knew LessWrong existed.
I've gone from rock-bottom self-esteem and hopeless crying, to... rock-bottom self-esteem and StepfordSmiling. LessWrong has helped me become much less self-centered by providing the skills to quantify exactly how I am not, in fact, worth anything to anyone, and am, in fact, entitled to nothing.
I talk about transhumanism and cryonics instead of nihilism and suicide.
I went from feeling like I'm always in hostile territory waiting to die, to feeling like I'm always out in the cold looking in on something beautiful that will never include me.
I get much less enjoyment and relaxation from my pastimes because I've internalized the fact that escapism isn't.
It took an hour to compose this post instead of ten minutes, because I have a more realistic expectation of the results.
Made life-changing amounts of money. (If you include the coins.)
I discovered the idea of Bitcoin rather early on, via LessWrong.
From there, the Cypherpunks mailing list. The Hal Finney connection. The Wei Dai connection.
This was interesting stuff. I did some mental arithmetic. Started optimizing for coins. (Thanks, Clippy!)
With only limited success, or so it seemed. Some time passed.
And with every passing day, Bitcoin didn't die.
And here we are.
I've been wondering what effect joining lesswrong and reading the sequences has on people.
How has lesswrong changed your life?
What have you done differently?
What have you done?