Cleaning out my computer I found some old LW-related stuff I made for graphic editing practice. Now that we have a store and all, maybe someone here will find it useful:
Why is LessWrong not an Amazon affiliate? I recall buying at least one book because it was mentioned on LessWrong, and I haven't been around here long. I can't find any reliable data on the number of active LessWrong users, but I'd guess it would number in the 1000s. Even if only 500 are active, and assuming only 1/4 buy at least one book mentioned on LessWrong per year, a mean purchase value of $20 (books mentioned on LessWrong probably tend towards the academic, expensive side), and a 15% referral fee, that would work out at $375/year.
IIRC, it only took me a few minutes to sign up as an Amazon affiliate. They (stupidly) require a different account for each Amazon website, so 4 accounts × 5 minutes (.com, .co.uk, .de, .fr), +20 for a GeoIP database, +3-90 to set up URL rewriting (a wide range, since coding often takes far longer than anticipated; I'd be happy to code this) would give a 'worst case' scenario of $173 in annualized returns per hour of work.
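For concreteness, here is that back-of-the-envelope arithmetic as a quick script; every input, including the 15% referral fee, is a guess rather than data:

```python
# Back-of-the-envelope estimate of Amazon affiliate returns for LW.
# All inputs are guesses; tweak them and see how the answer moves.

active_users = 500          # conservative guess at active LW users
buyer_fraction = 0.25       # fraction who buy >= 1 book per year via LW links
mean_purchase = 20.0        # mean purchase value in dollars
referral_fee = 0.15         # assumed Amazon referral rate

annual_revenue = active_users * buyer_fraction * mean_purchase * referral_fee
print(f"Estimated annual revenue: ${annual_revenue:.0f}")  # -> $375

# Worst-case setup time, in minutes:
setup = 4 * 5 + 20 + 90     # 4 affiliate accounts, GeoIP database, URL rewriting
print(f"Setup time: {setup} min")
print(f"Annualized return per hour of setup: "
      f"${annual_revenue / (setup / 60):.0f}/hr")  # -> ~$173
```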
Now, the math is somewhat questionable, but the idea seems like a low-risk, low-investment and potentially high-return one, and I note that Metafilter and StackOverflow do this, though sadly I could not find any information on the returns they see from this. So, is there any reason why nobody has done this, or did nobody just think of it/get around to it?
The entire world media seems to have had a mass rationality failure about the recent suicides at Foxconn. There have been 10 suicides there so far this year, at a company which employs more than 400,000 people. That is significantly lower than the base rate of suicide in China. However, everyone is up in arms about the 'rash', 'spate', or 'wave' of suicides going on there.
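A rough sanity check of the base-rate claim (a sketch of my own; the rate used here is an assumption, since published suicide-rate estimates for China around this time vary from roughly 13 to 23 per 100,000 per year):

```python
# Rough sanity check: expected suicides at a 400,000-person company
# under a national base rate. The rate is an assumption; published
# estimates for China circa 2010 range from ~13 to ~23 per 100,000/year.

employees = 400_000
base_rate = 20 / 100_000    # assumed annual suicide rate

expected_per_year = employees * base_rate
print(f"Expected suicides per year at base rate: {expected_per_year:.0f}")  # -> 80

# 10 suicides in roughly the first five months of the year annualizes
# to ~24, well below the ~80 the base rate would predict.
print(f"Observed, annualized: {10 * 12 / 5:.0f}")
```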
When I first read the story, it came with a plausible explanation of what causes these suicides, written by a guy who's usually pretty on the ball. Partly due to the neatness of the explanation, it took me a while to realise that there was nothing to explain.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. It's even harder to achieve this when the fiction comes ready-packaged with a plausible explanation (especially one which fits neatly with your political views).
That's what I thought as well, until I read this post from "Fake Steve Jobs". Not the most reliable source, obviously, but he does seem to have a point:
But, see, arguments about national averages are a smokescreen. Sure, people kill themselves all the time. But the Foxconn people all work for the same company, in the same place, and they’re all doing it in the same way, and that way happens to be a gruesome, public way that makes a spectacle of their death. They’re not pill-takers or wrist-slitters or hangers. ... They’re jumpers. And jumpers, my friends, are a different breed. Ask any cop or shrink who deals with this stuff. Jumpers want to make a statement. Jumpers are trying to tell you something.
Now I'm not entirely sure of the details, but if it's true that all the suicides in the recent cluster consisted of jumping off the Foxconn factory roof, that does seem to be more significant than just 15 employees committing suicide in unrelated incidents. In fact, it seems like it might even be the case that there are a lot more suicides than the ones we've heard about, and the cluster of 15 are just those who've killed themselves via this particular, highly visible method...
Marginal Revolution linked to A Fine Theorem, which has summaries of papers in decision theory and other relevant econ, including the classic "agreeing to disagree" results. A paper linked there claims that the probability settled on by Aumann-agreers isn't necessarily the same as the one they'd reach if they shared their information, which is something I'd been wondering about. In retrospect this seems obvious. Suppose Mars and Venus both appear in the sky only when the apocalypse is near, and one agent sees Mars while the other sees Venus. If they exchange their information, they conclude the apocalypse is near; but if the probabilities for Mars and Venus are symmetrical, then no matter how long they exchange probability estimates, each will conclude the other probably saw the same planet they did. The same thing should happen in practice when two agents figure out different halves of a chain of reasoning. Do I have that right?
ETA: it seems, then, that if you're actually presented with a situation where you can communicate only by repeatedly sharing probabilities, you're better off just conveying all your info by using probabilities of 0 and 1 as Morse code or whatever.
ETA: the paper works out an example in section 4.
I thought of a simple example that illustrates the point. Suppose two people each roll a die privately. Then they are asked, what is the probability that the sum of the dice is 9?
Now if one sees a 1 or 2, he knows the probability is zero. But let's suppose both see 3-6. Then there is exactly one value for the other die that will sum to 9, so the probability is 1/6. Both players exchange this first estimate. Now, curiously, although they agree, it is not common knowledge that this value of 1/6 is their shared estimate. After hearing 1/6, each knows that the other die is one of the four values 3-6. So actually the probability is calculated by each as 1/4, and this is now common knowledge (why?).
And of course this estimate of 1/4 is not what they would come up with if they shared their die values; they would get either 0 or 1.
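Here's a minimal enumeration of this example (a sketch of my own, just to verify the numbers):

```python
from fractions import Fraction

# Enumerate the two-dice example: each player sees their own die and
# estimates P(sum == 9), then they exchange estimates and update.

TARGET = 9
DIE = range(1, 7)

def estimate(my_roll, other_possible):
    """P(sum == TARGET) given my roll and the values the other die might have."""
    hits = sum(1 for o in other_possible if my_roll + o == TARGET)
    return Fraction(hits, len(other_possible))

# Round 1: each player knows only their own roll; the other die is uniform on 1-6.
round1 = {r: estimate(r, DIE) for r in DIE}
print(round1)  # rolls 1-2 give 0; rolls 3-6 all give 1/6

# Round 2: hearing "1/6" reveals the other roll is in {3,4,5,6}.
revealed = [r for r in DIE if round1[r] == Fraction(1, 6)]
round2 = {r: estimate(r, revealed) for r in revealed}
print(round2)  # every roll in 3-6 now gives 1/4, which is common knowledge
```

Simulating the variation that follows would require the full iterated-announcement protocol, which I'll leave out so as not to spoil the puzzle.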
Here is a remarkable variation on that puzzle. A tiny change makes it work out completely differently.
Same setup as before, two private dice rolls. This time the question is, what is the probability that the sum is either 7 or 8? Again they will simultaneously exchange probability estimates until their shared estimate is common knowledge.
I will leave it as a puzzle for now in case someone wants to work it out, but it appears to me that in this case, they will eventually agree on an accurate probability of 0 or 1. And they may go through several rounds of agreement where they nevertheless change their estimates - perhaps related to the phenomenon of "violent agreement" we often see.
Strange how this small change to the conditions gives such different results. But it's a good example of how agreement is inevitable.
Observation: The May open thread, part 2, had very few posts in its last days, whereas this one has exploded within the first 24 hours of its opening. I know I deliberately withheld content from it, since once a thread is superseded by a new one, few people go back and look at the posts in the previous one. This would predict a slowing down of content in the open threads as the month draws to a close, and a sudden burst at the start of the next month: a distortion that is an artifact of the way we organise discussion. Does anybody else follow the same rule for their open thread postings? Is there something that should be done to solve this artificial throttling of discussion?
Some sites have gone to an every Friday open thread; maybe we should do it weekly instead of monthly, too.
I think my only other comment here has been "Hi." But, the webcomic SMBC has a treatment of the prisoner's dilemma today and I thought of you guys.
http://www.kk.org/quantifiedself/2010/05/eric-boyd-and-his-haptic-compa.php
'Here is Eric Boyd's talk about the device he built called North Paw - a haptic compass anklet that continuously vibrates in the direction of North. It's a project of Sensebridge, a group of hackers that are trying to "make the invisible visible".'
The technology itself is pretty interesting; see also http://www.wired.com/wired/archive/15.04/esp.html
So I've started drafting the very beginnings of a business plan for a Less Wrong (book) store-ish type thingy. If anybody else is already working on something like this and is advanced enough that I should not spend my time on this mini-project, please reply to this comment or PM me. However, I would rather not be inundated with ideas as to how to operate such a store yet: I may make a Less Wrong post in the future to gather ideas. Thanks!
...In my experience, happy people tend to be more optimistic and more willing to take risks than sad people. This makes sense, because we tend to be more happy when things are generally going well for us: that is when we can afford to take risks. I speculate that the emotion of happiness has evolved for this very purpose, as a mechanism that regulates our risk aversion and makes us more willing to risk things when we have the resources to spare.
Incidentally, this would also explain why people falling in love tend to be intensely happy at first. In order to get and keep a mate, you need to be ready to take risks. Also, if happiness is correlated with resources, then being happy signals having lots of resources, increasing your prospective mate's chances of accepting you. [...]
I was previously talking with Will about the degree to which people's happiness might affect their tendency to lean towards negative or positive utilitarianism. We came to the conclusion that people who are naturally happy might favor positive utilitarianism, while naturally unhappy people might favor negative utilitarianism. If this theory of happiness is true, then that makes perfect sense.
Searle has some weird beliefs about consciousness. Here is his description of a "Fading Qualia" thought experiment, where your neurons are replaced, one by one, with electronics:
... as the silicon is progressively implanted into your dwindling brain, you find that the area of your conscious experience is shrinking, but that this shows no effect on your external behavior. You find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when the doctors test your vision, you hear them say, ‘‘We are holding up a red object in front of you; please tell us what you see.’’ You want to cry out, ‘‘I can’t see anything. I’m going totally blind.’’ But you hear your voice saying in a way that is completely out of your control, ‘‘I see a red object in front of me.’’
(J.R. Searle, The rediscovery of the mind, 1992, p. 66, quoted by Nick Bostrom here.)
This nightmarish passage made me really understand why the more imaginative people who do not subscribe to a computational theory of mind are afraid of uploading.
My main criticism of this story would be: What does Searle think is the physical manifestation of those panicked, helpless thoughts?
I don't have Searle's book, and may be missing some relevant context. Does Searle believe normal humans with unmodified brains can consciously affect their external behavior?
If yes, then there's a simple solution to this fear: do the experiment he describes, and then gradually return the test subject to his original, all-biological condition. Ask him to describe his experience. If he reports (now that he's free of non-biological computing substrate) that he actually lost his sight and then regained it, then we'll know Searle is right, and we won't upload. Nothing for Searle to fear.
But if, as I gather, Searle believes that our "consciousness" only experiences things and is never a cause of external behavior, then this is subject to the same criticism as Searle's support of zombies.
Namely: if Searle is right, then the reason he is giving us this warning isn't because he is conscious. Maybe in fact his consciousness is screaming inside his head, knowing that his thesis is false, but is unable to stop him from publishing his books. Maybe his consciousness is already blind, and has been blind from birth due to a rare developmental accident, and it doesn't know what words he types in his books at all. Why should we listen to him, if his words about conscious experience are not caused by conscious experience?
To the powers that be: Is there a way for the community to have some insight into the analytics of LW? That could range from periodic reports, to selective access, to open access. There may be a good reason why not, but I can't think of it. Beyond generic transparency brownie points, since we are a community interested in popularising the website, access to analytics may produce good, unforeseen insights. Also, authors would be able to see the viewership of their articles and related keyword searches, and so be better able to adapt their writing to the audience. For me, a downside of posting here instead of on my own blog is the inability to access analytics. Obviously I still post here, but this is a downside that may not have to exist.
LW too focused on verbalizable rationality
This comment got me thinking about it. Of course LW, being a website, can only deal with verbalizable information (rationality). So what are we missing? Skill sets that are not verbalizable and have to be learned in other, practical ways: interpersonal relationships being just one of many. I also think the emotional brain is part of it. There might be people here who are brilliant thinkers yet emotionally miserable because of their personal context or upbringing, and I think dealing with that would be important. I think a holistic approach is required. Eliezer has already suggested the idea of a rationality dojo. What do you think?
This post is about the distinctions between Traditional and Bayesian Rationality, specifically the difference between refusing to hold a position on an idea until a burden of proof is met versus Bayesian updating.
Good quality government policy is an important issue to me (it's my Something to Protect, or the closest I have to one), and I tend to approach rationality from that perspective. This gives me a different perspective from many of my fellow aspiring rationalists here at Less Wrong.
There are two major epistemological challenges in policy advice, in addition to the normal difficulties we all have to deal with:
1) Policy questions fall almost entirely within the social sciences. That means the quality of evidence is much lower than it is in the physical sciences. Uncontrolled observations, analysed with statistical techniques, are generally the strongest possible evidence, and sometimes you have nothing but theory or professional instinct to work with.
2) You have a very limited time in which to find an answer. Cabinet Ministers often want an answer within weeks; a timeframe measured in months is luxurious. And often a policy proposal is too sensitive to discuss with the...
Forgive me if this is beating a dead horse, or if someone brought up an equivalent problem before; I didn't see such a thing.
I went through a lot of comments on dust specks vs. torture. (It seems to me like the two sides were miscommunicating in a very specific way, which I may attempt to make clear at some point.) But now I have an example that seems to be equivalent to DSvsT, is easily understandable via my moral intuition, and gives the "wrong" (i.e., not purely utilitarian) answer.
Suppose I have ten people and a stick. The appropriate infinite...
DSvsT was not directly an argument for utilitarianism, it was an argument for tradeoffs and quantitative thinking and against any kind of rigid rules, sacred values, or qualitative thinking which prevents tradeoffs. For any two things, both of which have some nonzero value, there should be some point where you are willing to trade off one for the other - even if one seems wildly less important than the other (like dust specks compared to torture). Utilitarianism provides a specific answer for where that point is, but the DSvsT post didn't argue for the utilitarian answer, just that the point had to be at less than 3^^^3 dust specks. You would probably have to be convinced of utilitarianism as a theory before accepting its exact answer in this particular case.
The stick-hitting example doesn't challenge the claim about tradeoffs, since most people are willing to trade off one person getting hit multiple times with many people each getting hit once, with their choice depending on the numbers. In a stadium full of 100,000 people, for instance, it seems better for one person to get hit twice than for everyone to get hit once. Your alternative rule (maximin) doesn't allow some tradeoffs, so it leads to implausible conclusions in cases like this 100,000x1 vs. 1x2 example.
I have a theory: Super-smart people don't exist, it's all due to selection bias.
It's easy to think someone is extremely smart if you've only seen the sample of their most insightful thinking. But every time that happened to me, and I found that such a promising person had a blog or something like that, it universally took very little time to find something terribly brain-hurtful they've written there.
So the null hypothesis is: there's a large population of fairly-smart-but-nothing-special people who think and publish their thoughts a lot. Because the best ...
I was thinking something similar just today:
Some people think out loud. Some people don't. Smart people who think out loud are perceived as "witty" or "clever." You learn a lot from being around them; you can even imitate them a little bit. They're a lot of fun. Smart people who don't think out loud are perceived as "geniuses." You only ever see the finished product, never their thought processes. Everything they produce is handed down complete as if from God. They seem dumber than they are when they're quiet, and smarter than they are when you see their work, because you have no window into the way they think.
In my experience, there are far more people who don't think out loud in math than in less quantitative fields. This may be part of why math is perceived as so hard; there are all these smart people who are hard to learn from, because they only reveal the finished product and not the rough draft. Rough drafts make things look feasible. Regular smart people look like geniuses if they leave no rough drafts. There may really be people who don't need rough drafts in the way that we mundanes do -- I've heard of historical figures like that, and those really are savants -- but it's possible that some people's "genius" is overstated just because they're cagey about expressing half-formed ideas.
The Unreasonable Effectiveness of My Self-Exploration by Seth Roberts.
This is an overview of his self-experiments (to improve his mood and sleep, and to lose weight), with arguments that self-experimentation, especially on the brain, is remarkably effective in finding useful, implausible, low-cost improvements in quality of life, while institutional science is not.
There's a lot about status and science (it took Roberts 10 years to start getting results, and it's just too risky for scientists' careers to take on projects which last that long), and some int...
Good article on the abuse of p-values: http://www.sciencenews.org/view/feature/id/57091/title/Odds_are,_its_wrong
I've been reading the Quantum Mechanics sequence, and I have a question about Many-Worlds. My understanding of MWI and the rest of QM is pretty much limited to the LW sequence and a bit of Wikipedia, so I'm sure there will be no shortage of people here who have a better knowledge of it and can help me.
My question is this: why are the Born probabilities a problem for MWI?
I'm sure it's a very difficult problem; I think I just fail to understand the implications of some step along the way. FWIW, my understanding of the Born probabilities mainly clicks here:
So... If a quantum event has a 30% chance of going LEFT and a 70% chance of going right . . . you'll have a 30% probability of observing LEFT and a 70% probability of observing RIGHT.
So why is this surprising?
The surprising (or confusing, mysterious, what have you) thing is that quantum theory doesn't talk about a 30% probability of LEFT and a 70% probability of RIGHT; what it talks about is how LEFT ends up with an "amplitude" of 0.548 and RIGHT with an "amplitude" of 0.837. We know that the observed probability ends up being the square of the absolute value of the amplitude, but we don't know why, or how this even makes sense as a law of physics.
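In code form, a toy illustration of the Born rule (my own sketch; it shows the rule itself, not an explanation of it):

```python
import math

# Toy illustration of the Born rule: amplitudes are what the theory
# tracks; observed probabilities are their squared absolute values.

amp_left = complex(math.sqrt(0.3), 0)   # ~0.548
amp_right = complex(math.sqrt(0.7), 0)  # ~0.837

p_left = abs(amp_left) ** 2
p_right = abs(amp_right) ** 2

print(f"amplitudes:    {amp_left.real:.3f}, {amp_right.real:.3f}")
print(f"probabilities: {p_left:.2f}, {p_right:.2f}")  # 0.30, 0.70
assert abs(p_left + p_right - 1.0) < 1e-9  # amplitudes are normalized
```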
I would have thought everyone here would have seen this by now, but I hadn't until today so it may be new to someone else as well:
Charlie Munger on the 24 Standard Causes of Human Misjudgment
After more-or-less successfully avoiding it for most of LW's history, we've plunged headlong into mind-killer territory. I'm a little bit worried, and I'm intrigued to find out what long-time LWers, especially those who've been hesitant about venturing that direction, expect to see as a result over the next month or two.
It doesn't look encouraging. The discussions just don't converge, they meander all over the place and leave no crystalline residue of correct answers. (Achievement unlocked: Mixed Metaphor)
Are there any rationalist psychologists?
Also, more specifically but less generally relevant to LW: as a person being pressured to make use of psychological services, are there any rationalist psychologists in the Denver, CO area?
The blog of Scott Adams (author of Dilbert) is generally quite awesome from a rationalist perspective, but one recent post really stood out for me: Happiness Button.
Suppose humans were born with magical buttons on their foreheads. When someone else pushes your button, it makes you very happy. But like tickling, it only works when someone else presses it. Imagine it's easy to use. You just reach over, press it once, and the other person becomes wildly happy for a few minutes.
What would happen in such a world?
...
Is everyone missing the obvious subtext in the original article - that we already live in just such a world but the button is located not on the forehead but in the crotch?
Perhaps some people would give their button-pushing services away for free, to anyone who asked. Let's call those people generous, or as they would become known in this hypothetical world: crazy sluts.
Pushing someone's happiness button is like doing them a favor, or giving them a gift. Do we have social customs that demand favors and gifts always be exchanged simultaneously? Well, there are some customs like that, but in general no, because we have memory and can keep mental score.
William Saletan at Slate is writing a series of articles on the history and uses of memory falsification, dealing mainly with Elizabeth Loftus and the ethics of her work. Quote from the latest article:
...Loftus didn't flinch at this step. "A therapist isn't supposed to lie to clients," she conceded. "But there's nothing to stop a parent from trying something like [memory modification] with an overweight child or teen." Parents already lied to kids about Santa Claus and the tooth fairy, she observed. To her, it was a no-brainer: "A
This might be old news to everyone "in", or just plain obvious, but a couple days ago I got Vladimir Nesov to admit he doesn't actually know what he would do if faced with his Counterfactual Mugging scenario in real life. The reason: if today (before having seen any supernatural creatures) we intend to reward Omegas, we will lose for certain in the No-mega scenario, and vice versa. But we don't know whether Omegas outnumber No-megas in our universe, so the question "do you intend to reward Omega if/when it appears" is a bead jar guess.
Thought I might pass this along and file it under "failure of rationality". Sadly, this kind of thing is increasingly common -- getting deep in education debt, but not having increased earning power to service the debt, even with a degree from a respected university.
Summary: Cortney Munna, 26, went $100K into debt to get worthless degrees and is deferring payment even longer, making interest pile up further. She works in an unrelated area (photography) for $22/hour, and it doesn't sound like she has a lot of job security.
We don't find out until...
One reason why the behavior of corporations and other large organizations often seems so irrational from an ordinary person's perspective is that they operate in a legal minefield. Dodging the constant threats of lawsuits and regulatory penalties while still managing to do productive work and turn a profit can require policies that would make no sense at all without these artificially imposed constraints. This frequently comes off as sheer irrationality to common people, who tend to imagine that big businesses operate under a far more laissez-faire regime than they actually do.
Moreover, there is the problem of diseconomies of scale. Ordinary common-sense decision criteria -- such as e.g. looking at your life history as you describe it and concluding that, given these facts, you're likely to be a responsible borrower -- often don't scale beyond individuals and small groups. In a very large organization, decision criteria must instead be bureaucratic and formalized in a way that can be, with reasonable cost, brought under tight control to avoid widespread misbehavior. For this reason, scalable bureaucratic decision-making rules must be clear, simple, and based on strictly defined ca...
For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".
Someone who avoids carrying debt (i.e., paying interest) is not a good revenue source, any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payments with a maximal interest/principal ratio.
This is another one of those Hanson-esque "X is not about X-ing" things.
(Wherein I seek advice on what may be a fairly important decision.)
Within the next week, I'll most likely be offered a summer job where the primary project will be porting a space weather modeling group's simulation code to the GPU platform. (This would enable them to start doing predictive modeling of solar storms, which are increasingly having a big economic impact via disruptions to power grids and communications systems.) If I don't take the job, the group's efforts to take advantage of GPU computing will likely be delayed by another year or two. Th...
Should we buy insurance at all?
There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt. If this is the case, should we get rid of all our insurance? If not, why not?
There is a small remark in Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making saying that all insurance has negative expected utility: we pay too high a price for too little a risk; otherwise insurance companies would go bankrupt.
No -- Insurance has negative expected monetary return, which is not the same as expected utility. If your utility function obeys the law of diminishing marginal utility, then it also obeys the law of increasing marginal disutility. So, for example, losing 10x will be more than ten times as bad as losing x. (Just as gaining 10x is less than ten times as good as gaining x.)
Therefore, on your utility curve, a guaranteed loss of x can be better than a 1/1000 chance of losing 1000x.
ETA: If it helps, look at a logarithmic curve and treat it as your utility as a function of some quantity. Such a curve obeys diminishing marginal utility. At any given point, your utility increases less than proportionally going up, but more than proportionally going down.
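Here's a worked example with log utility and illustrative numbers of my own choosing:

```python
import math

# Illustrative numbers: the premium exceeds the expected loss (negative
# expected *money*), yet with log utility the insured option still has
# higher expected *utility*.

wealth = 100_000.0
premium = 100.0              # certain cost of insuring
loss = 90_000.0              # possible uninsured loss
p = 1 / 1000                 # probability of the loss

print(f"Expected monetary cost, insured:   {premium:.0f}")
print(f"Expected monetary cost, uninsured: {p * loss:.0f}")  # 90 < 100

u = math.log                 # a utility curve with diminishing marginal utility

eu_insured = u(wealth - premium)
eu_uninsured = (1 - p) * u(wealth) + p * u(wealth - loss)
print(f"EU insured:   {eu_insured:.6f}")    # ~11.511925
print(f"EU uninsured: {eu_uninsured:.6f}")  # ~11.510622 -> insuring wins
```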
(Incidentally, I actually wrote an embarrassing article arguing in favor of the thesis roland presents, and you can still probably find it on the internet....
Guided by Parasites: Toxoplasma Modified Humans
a ~20 minute interview (absolutely worth every minute) with Dr. Robert Sapolsky, a leading researcher in the study of Toxoplasma and its effects on humans. This is a must-see. Also, towards the end there is discussion of the effect of stress on telomere shortening. Fascinating stuff.
I'm not certain this comment will be coherent, but I would like to compose it before I lose my train of thought. (I'm in an atypical mental state, so I easily could forget the pieces when feeling more normal.) The writing below sounds rather choppy and emphatic, but I'm actually feeling neutral and unconvinced. I wonder if anyone would be able to 'catch this train' and steer it somewhere else perhaps..?
It's an argument for dualism. Here is some background:
I've always been a monist: believing that everything should be coherent from within this reality. Th...
In Harry Potter and the Methods of Rationality, Quirrell talks about a list of the thirty-seven things he would never do as a Dark Lord.
Eliezer, do you have a full list of 37 things you would never do as a Dark Lord and what's on it?
Ah. I'm pretty sure it isn't a real list because of the number 37. 37 is one of the most common numbers for people to pick when they want to pick a small "random" number. Humans in general are very bad at random number generation. More specifically, they are more likely to pick an odd number, and given a specific range of the form 1 to n, they are most likely to pick a number that is around 3n/4. The really clear examples are from 1 to 4 (around 40% pick 3), 1 to 10 (I don't remember the exact number, but I think around 30% pick 7), and then 1 to 50, where a very large percentage will pick 37. The upshot is that if you ever see an incomplete list claiming to have 37 items, you should assign a high probability that the rest of the list doesn't exist.
What does 'consciousness' mean?
I'm having an email conversation with a friend about Nick Bostrom's simulation argument and we're now trying to figure out what the word "consciousness" means in the first place.
People here use the C-word a lot, so it must mean something important. Unfortunately I'm not convinced it means the same thing for all of us. What does the theory that "X is conscious" predict? If we encounter an alien, what would knowing that it was "conscious" or "not conscious" tell us? How about if we encou...
In A Technical Explanation of Technical Explanation, Eliezer writes,
...You should only assign a calibrated confidence of 98% if you're confident enough that you think you could answer a hundred similar questions, of equal difficulty, one after the other, each independent from the others, and be wrong, on average, about twice. We'll keep track of how often you're right, over time, and if it turns out that when you say "90% sure" you're right about 7 times out of 10, then we'll say you're poorly calibrated.
...
What we mean by "probability"
First I'd like to point out a good interview with Ray Kurzweil, which I found more enjoyable than a lot of his monotonous talks. http://www.motherboard.tv/2009/7/14/singularity-of-ray-kurzweil
As a follow-up, I am curious whether anyone has attempted to mathematically model Ray's biggest and most disputed claim, which is the acceleration rate of technology. Most dispute the claim by pointing out that the data points are somewhat arbitrary and invoke data dredging. It would be interesting if the claim were based more on a model than on basically a regressi...
I couldn't post an article due to lack of karma, so I had to post here :P
I notice this site is pretty much filled with proponents of MWI, so I thought it'd be interesting to see if there is anyone on here who is actually against MWI, and if so, why?
After reading through some posts it seems the famous Probability, Preferred Basis and Relativity problems are still unsolved.
Are there any more?
Any recommendations for how much redundancy is needed to make ideas more likely to be comprehensible?
There's a general rule in writing that if you don't know how many items to put in a list, you use three. So if you're giving examples and you don't know how many to use, use three. Don't know if that helps, but it's the main heuristic I know that's actually concrete.
I sometimes look at human conscious thought as software which is running on partially re-programmable hardware.
The hardware can be reprogrammed by two actors - the conscious one, mostly indirectly, and the unconscious one, which seems to have direct access to the wiring of the whole mechanism (including the bits that represent the conscious actor).
I haven't yet seen a coherent discussion of this kind of model - maybe it exists and I'm missing it. Is there already a coherent discussion of this point of view on this site, or somewhere else?
In the same vein as Roko's investigation of LessWrong's neurotypicalness, I'd be interested to know the spread of Myers-Briggs personality types that we have here. I'd guess that we have a much higher proportion of INTPs than the general population.
An online Myers-Briggs test can be found here, though I'm not sure how accurate it is.
Anyone here live in California? Specifically, San Diego county?
The judicial election on June 8th has been subject to a campaign by a Christian conservative group. You probably don't want them to win, and this election is traditionally a low turnout one, so you might want to put a higher priority on this judicial election than you normally would. In other words, get out there and vote!
To whom it may concern:
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
(After the critical success of part II, and the strong box office sales of part III in spite of mixed reviews, will part IV finally see the June Open Thread jump the shark?)