[LINK] How to increase conscientiousness
I wrote an interactive blog post, How To Increase Conscientiousness, with some steps that I think might increase your conscientiousness. I'm not sure whether it works, but I would love to see some curious low-conscientiousness people try it and post their results here.
If you do it, please do it before reading the comments on this post, as they may contain spoilers.
If you are feeling especially helpful, also take a Big Five personality test like this one and report your percentile result on Conscientiousness.
[LINK] Why Your Customers Would Be Happier If You Charged More
Surprising material to discover on Less Wrong, I know, but it contains some core insights about effectiveness, entrepreneurship, and freelancing that I think people here will appreciate.
Quotes:
So you have to understand that if you make an amazing product and you’ve tested it and you know it will help, it is your obligation to get it out to the market as aggressively as possible.
[Patrick notes: I think this is important enough to emphasize, twice. If you got into this business to make peoples’ lives better, and you have produced something which will succeed with that, and you are aware of truth about reality such as “better marketed products beat better engineered products every single bloody time”, then you have an obligation to get better at marketing yourself. To do otherwise is to compromise the value of your offering to the world based on selfish desires such as appeasing your own vanity (“Everyone should realize how great my work is without me needing to tell them”) or indulging your own unspoken fears (“If this were really good, it would sell itself, so if I try selling it, it must not be good.”)]
I was inventing excuses – in real time! – as to why I couldn’t have possibly delivered the value he already reported having gotten from the conversation we just had.
Heading off a near-term AGI arms race
I know people have talked about this in the past, but now seems like an important time for some practical brainstorming here. Hypothetical: the recent $15mm Series A funding of Vicarious by Good Ventures and Founders Fund sets off a wave of $450mm in funded AGI projects of approximately the same scope, over the next ten years. Let's estimate a third of that ($150mm) goes to paying for man-years of actual, low-level, basic AGI capabilities research. At roughly $100k per man-year, that's about 1500 man-years. Anything which can show something resembling progress can easily secure another few hundred man-years to continue making progress.
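The back-of-envelope estimate above can be spelled out explicitly. This is a minimal sketch of the implied arithmetic; the $100k fully-loaded cost per man-year is an assumption (it is the rate that makes the post's numbers come out to 1500):

```python
# Back-of-envelope estimate of funded AGI research effort.
# Assumptions: $450mm total hypothetical funding, one third spent
# directly on research salaries, ~$100k fully-loaded cost per
# man-year (the rate implied by the figure of 1500 man-years).
total_funding = 450_000_000
research_fraction = 1 / 3
cost_per_man_year = 100_000

man_years = total_funding * research_fraction / cost_per_man_year
print(int(man_years))  # 1500
```

Changing the assumed cost per man-year scales the estimate inversely, so the 1500 figure is best read as an order-of-magnitude guess rather than a precise count.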
Now, if this scenario comes to pass, it seems like one of the worst-case scenarios: if AGI is possible today, that's a lot of highly incentivized, funded research to make it happen, without strong safety incentives. The scenario seems to depend on VCs realizing the high potential impact of an AGI project, and on the companies having access to good researchers.
The Hacker News thread suggests that some people (VCs included) probably already realize the high potential impact, without much consideration for safety:
...I think this [is] exactly the sort of innovation timeline real venture capitalists should be considering - funding real R&D that could have a revolutionary impact even if the odds are against it.
The company to get all of this right will be the first two trillion dollar company.
Is there any way to reverse this trend in public perception? Is there any way to reduce the number of capable researchers? Are there any other angles of attack for this problem?
I'll admit to being very scared.
How to enjoy being wrong
Related to: Reasoning Isn't About Logic, It's About Arguing; It is OK to Publicly Make a Mistake and Change Your Mind.
Examples of being wrong
A year ago, in arguments or in thought, I would often:
- avoid criticizing my own thought processes or decisions when discussing why my startup failed
- overstate my expertise on a topic (how to design a program written in assembly language), then have to quickly justify a position and defend it based on limited knowledge and cached thoughts, rather than admitting "I don't know"
- defend a position (whether doing an MBA is worthwhile) based on the "common wisdom" of a group I identify with, without any actual knowledge, or having thought through it at all
- defend a position (whether a piece of artwork was good or bad) because of a desire for internal consistency (I argued it was good once, so felt I had to justify that position)
- defend a political or philosophical position (libertarianism) which seemed attractive, based on cached thoughts or cached selves rather than actual reasoning
- defend a position ("cashiers like it when I fish for coins to make a round amount of change"), hear a very convincing argument for its opposite ("it takes up their time, other customers are waiting, and they're better at making change than you"), but continue arguing for the original position. In this scenario, I actually updated -- thereafter, I didn't fish for coins in my wallet anymore -- but still didn't admit it in the original argument.
- defend a policy ("I should avoid albacore tuna") even when the basis for that policy (mercury risk) has been countered by factual evidence (in this case, the amount of mercury per can is so small that you would need to eat 10 cans per week before it even begins to register).
- provide evidence for a proposition ("I am getting better at poker") where I actually thought it was just luck, but wanted to believe the proposition
- attempt, when someone asked "why did you [do a weird action]?", to justify the action in terms of reasons that "made logical sense", rather than admitting that I didn't know why I made the choice, or examining myself to find out why.
Now, I very rarely get into these sorts of situations. If I do, I state out loud: "Oh, I'm rationalizing," or perhaps "You're right," abort that line of thinking, and retreat to analyzing reasons why I emitted such a wrong statement.
We rationalize because we don't like admitting we're wrong. (Is this obvious? Do I need to cite it?) One possible evo-psych explanation: rationalization is an adaptation which improved fitness by making it easier for tribal humans to convince others to their point of view.
Over the last year, I've self-modified to mostly not mind being wrong, and in some cases even enjoy being wrong. I still often start to rationalize, and in some cases get partway through the thought, before noticing the opportunity to correct the error. But when I notice that opportunity, I take it, and get a flood of positive feedback and self-satisfaction as I update my models.