I wouldn't be too surprised to learn that people are capable of independently thinking that they have highly atypical minds while simultaneously falling prey to the typical mind fallacy
... because knowing or believing that you have an atypical mind gives you almost no information about how everyone else is thinking.
And because the differences we notice and care about are the ones that provide satisfying explanations of prominent experiences, such as loneliness or frustration with others' behavior. A quality of the self that works 'behind the scenes', the kind that only comes up when we talk about theory of mind at a fairly high level, will not usually seem like a candidate for such explanations. For example, I've known I was smarter than average since childhood, but it took me until college to notice that I was color blind. And color blindness is fairly concrete; by contrast, the relationship between categories and central examples is almost certainly different in my head than in the average guy's on the street, but there's no real way of knowing.
(Or perhaps I'm falling prey to the typical mind fallacy, natch.)
Well, personally I think:
a very small number > P(intolerance of homosexuality will destroy civilization) > P(intolerance of homosexuality will save civilization) > 10^-30
But some people would disagree with me.
I wasn't actually trying to imply that we shouldn't tolerate homosexuality - I hope this was clear, otherwise I need to work on communicating unambiguously. I was trying to make the meta point that right-wing opinions don't have to be powered by hate, but perhaps they often are because people can't separate emotions and logic.
I wasn't actually trying to imply that we shouldn't tolerate homosexuality - I hope this was clear, otherwise I need to work on communicating unambiguously.
This was clear, yes. No worries!
I was trying to make the meta point that right-wing opinions don't have to be powered by hate, but perhaps they often are because people can't separate emotions and logic.
It is certainly possible that, in the territory, homosexuality is an existential threat. I believe the Westboro Baptists have a model that describes such a case, to name a famous example. A person who believes that the evidence favors such a territory is morally obliged to take anti-gay positions, assuming that they value human life at all. In other words, yes, there is a utilitarian calculation that justifies homophobia under certain conditions.
But if I'm not mistaken, the intersection of 'evidence-based skeptical belief system' and 'believes that homosexuality is an existential threat' is quite small (partially because the former is a smallish group, partially because the latter is rare within that group, partially because most of the models in which homosexuality is an existential threat tend to invoke a wrathful God). But that's an empirical claim, not a political stance.
Since we're asking a political question, rather than exploring the theoretical limits of human belief systems, it's fair to talk about coalitions and social forces. In that domain, to the extent that there are empirical claims being made at all, it's clear that the political influence aligned with and opposed to the gay rights movement is almost entirely a matter of motivated cognition.
To generalize out from the homosexuality example, I think it's trivially true that utilitarian calculations could put you in the position to support or oppose any number of things on the basis of existential threats. I mean, maybe it turns out that we're all doomed unless we systematically exterminate all cephalopods or something. But even if that were true, then the political forces that motivated many people to unite behind the cause of squid-stomping would not resemble a convincing utilitarian argument. So, if you're asking what causes anti-squid hysteria to be a politically relevant force, rather than a rare and somewhat surprising idea that you occasionally find on the fringes of the rationalosphere, then utilitarianism isn't really an explanation.
If you're looking for a reason to think that any given person with otherwise abhorrent politics might, actually, be a decent human, then yes, you can get there. But if you're looking for a reason why those politics exist, then this kind of calculation will fall short.
I wouldn't be too surprised to learn that people are capable of independently thinking that they have highly atypical minds while simultaneously falling prey to the typical mind fallacy. In general, I expect myself to spend more time thinking about the overt things that make me feel unique, without necessarily being aware of the things that underlie those differences. With the TMF, it's the unexamined assumptions that get you.
I have been thinking about politics again, this time from a meta level and considering motivations for positions.
Among my peer group and much of the media, the dominant model seems to be 'anyone who has center-right views is consumed by hate and/or a useful idiot for the evil ones, and anyone who has further right views is a jackbooted fascist'.
Now, given that the views they cannot tolerate are nothing compared to the NRxers, in a way this strikes me as absurd hysteria. But in another way this makes sense (except for the overreaction). I don't think most people really grasp that, for instance, P(women are better at maths than men on average) should be independent of whether one wants it to be true, or whether one hates women. And while LWers probably grasp this in theory, I would doubt that these beliefs and values are actually uncorrelated among LWers, since we are not perfect Bayesian reasoners (or, to put it another way, there is a difference between knowing the path and walking the path).
So far, this is probably fairly obvious. It's also fairly clear that, unless everyone believes you to be a perfect Bayesian reasoner, it is certainly possible that by holding certain beliefs you are signalling moral stances even though this should be independent.
When I worried that the correlation between testosterone and politics means that political opinions are hopelessly biased by emotions, it was pointed out to me that it could be valid for emotions to affect values if not probability estimates. At the time I accepted this, but now I have largely changed my mind, at least WRT politics on LW.
The reason is that whatever we value, we should hold that the survival of civilisation is a subgoal. (Voluntary human extinction movement excepted).
As an example, there are NRxers who believe that there is a substantial probability that tolerance of homosexuality will destroy civilisation. I don't believe this, but to leave a line of retreat... well, IIRC future civilisation could be between 30 orders of magnitude and infinitely bigger than current civilisation, depending on the laws of physics. I put it to you that if
P(tolerance of homosexuality will destroy civilisation) - P(tolerance of homosexuality will save civilisation) > 10^-30
Then a utilitarian has to be against tolerance of homosexuality, and it doesn't matter whether you hate gays or not, it doesn't matter if you have gay friends or indeed if you are gay. It's a simple (edit: actually, it's quite complicated) cost-benefit calculation. (Although, of course, this does not mean that campaigning on this point would be a productive use of your time.)
If you have a different utility function than 'value all human-equivalent life-years equally', then I think this argument should still hold with only slight changes. 10^30 is a very big number, after all.
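To make the arithmetic behind this explicit, here is a toy expected-value comparison. All numbers are purely illustrative, taken from the assumptions above (a future roughly 30 orders of magnitude larger, a probability difference just above 10^-30):

```python
# Toy expected-value sketch of the argument above.
# All numbers are purely illustrative, taken from the stated assumptions.

current_value = 1.0                   # value of current civilisation (normalised to 1)
future_value = current_value * 1e30   # future could be ~30 orders of magnitude bigger

# Suppose the probability difference just clears the stated threshold:
p_diff = 1e-29                        # P(destroys) - P(saves), slightly above 10^-30

# Expected loss, in units of current-civilisation value:
expected_loss = p_diff * future_value

# Even this minuscule probability difference dominates the calculation:
print(expected_loss > current_value)  # True: huge stakes outweigh tiny probabilities
```

The point of the sketch is just that multiplying a tiny probability by an astronomically large stake can still yield an expected loss larger than everything we currently have, which is why the argument goes through regardless of anyone's feelings about gays.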
I should emphasise that I'm not saying that this does justify homophobia. For one thing, I think that a general principle of not defecting against people who do not defect against you could arguably help save civilisation. What I am saying is that the issue of whether we should tolerate homosexuality (for instance) should be a matter of probability estimates and values almost all of us hold in common. Whether one actually loves or hates gays is irrelevant.
That different rationalists hold wildly differing, and moreover polarised, opinions on this matter (as with various other political matters) is bad news in light of Aumann's agreement theorem, and suggests motivated cognition and so forth.
Or perhaps it is a sign that deontological or virtue ethics have advantages? I am aware that what I have written probably sounds shockingly cold and calculating to many people.
EDIT: I am not trying to say that tolerance of homosexuality fails the cost-benefit calculation. Nor am I trying to pick on left-wing people for saying that their opponents are evil; I used to think that anyone who was against homosexuality was evil, but then I changed my mind. I realise the right wing also uses 'my political opponents are evil' rhetoric, but the left tries to frame everything as heroic rebels vs the evil empire, with an almost complete refusal to discuss or consider actual policies, whereas I think the right discusses actual policies more.
And whoever just downvoted every single comment in the thread, you are not helping.
P(tolerance of homosexuality will destroy civilisation) - P(tolerance of homosexuality will save civilisation) > 10^-30
Do you have a reason to consider this, and not the inverse [i.e. P(intolerance of homosexuality will destroy civilization)-P(intolerance of homosexuality will save civilization)>10^-30]?
I don't think this is even a Pascal's mugging as such, just a framing issue.
I don't quite understand gratitude journaling. First of all, gratitude is the same thing as gratefulness or thankfulness, right? If so, it means you are glad because you got stuff you did not really earn, stuff that was not yours by right and was not owed to you, right? Because when a debt is paid or you get paid for your work, you don't feel grateful; that is yours by right.
So to me, gratitude journaling seems to drive your focus toward the things you got without earning them. Is that supposed to help people who have self-esteem problems? SSC wrote about how most depressed people feel like a burden; how the heck does feeling grateful for things one does not really earn or deserve make one feel less of a burden?
What am I missing here?
If anything, I would experiment with achievement journaling.
This is (I think) an extension of mindfulness practice. So the ultimate point of the exercise is to help you conscientiously notice and assign weight to a certain class of experience. Your feeling of entitlement is opposed to that in the sense that humans tend not to notice a well-functioning machine. So if we put a dollar in a vending machine and candy comes out, we might enjoy the candy, or be sad about not having a dollar any more, but we rarely take any time to be excited about how great it is to have a machine that performs the swap. Same with getting a paycheck.
Ideally, gratitude journaling expands the class of things you have to be happy about. It adds the vending machine as an object of joy, rather than an 'inert' object that catches our attention only when it fails.
Be critical of these sorts of factoids. Aristotle was a 'wise man', which in that pre-scientific time meant more seemingly-wise than actually-wise regarding most topics (although Aristotle was better than his contemporaries, to be fair). You can take Aristotle's claim that the self resides in the heart and not in the brain as weak evidence that most people of the time thought it was in the brain, not the heart, just as today. His view got recorded for history because it was contrarian.
In ancient Greece, it was common knowledge that the liver was the thinking organ. This is obvious, because it is purple (the color of royalty) and triangular (mathematically and philosophically significant).
Hysteresis exists. Complex models are often time-dependent, and initial states may not be recoverable at all.
In the immediate sense, the world we experience obviously has the quality of irreversible change. On a larger scale, our cosmos could easily be such a system; even without ChaosMote's excellent statistical treatment, we can't be sure that, just because things in general continue to happen, an event like the big bang could happen an infinite number of times. No matter how wide the scope of your analysis, it may be that the 'final answer' is that we are indeed working within a single time-dependent system.
I think that the most reasonable thing to assume is that every possible kind of reality exists. Why? Well, there seems to be no good reason for it not to. To assume that the universe is the sole reality is one assumption too many for me, and I say the fewer assumptions there are, the better.
If I'm not mistaken, this is the strong assumption underlying the whole post. And I would encourage you to consider this claim in probabilistic terms, rather than just working within a believe/disbelieve binary. What is your degree of confidence in this proposition, and why?
The amount of fossil fuels extracted in a year is equal to the amount of fossil fuels burned in a year (give or take reserves, which will even out in the long run). So if fossil fuel extraction were reduced, CO2 emissions would be reduced, regardless of any taxes, cap-and-trade, alternative energy sources, etc that may or may not be in effect. Indeed, the only way that traditional environmental measures such as the above can reduce carbon emissions is if their effect on fossil fuel prices eventually causes less extraction.
Therefore it seems logical that the best way to reduce CO2 emissions is to pay fossil fuel extractors to reduce their extraction rate. This should not cost the extractors too much, because they will still own the resources and will be able to monetise them eventually. But environmentalists do not favour such subsidies to, e.g., Saudi Arabia, and when I have brought up this suggestion to environmentalists, they have looked at me funny and suggested the issue was complicated, but they have never provided any direct reason why this should be a bad idea. This makes me think I am missing something obvious and that this is a silly idea.
Is there academic literature on this or similar concepts? Why isn't this a good idea for reducing CO2 emissions?
If nothing else, because it would be prohibitively expensive. Globally, something like 70 million barrels of oil are produced per day. The total value of all barrels produced in a year varies depending on the price of oil, but at a highish but realistic $100/bbl, you're talking about two and a half trillion US dollars per year. If you were to reduce the supply by introducing a 'buyer' (read: subsidy to defer production) for some large percentage of those barrels, then the price would go even higher; this project would probably cost more than the entire global military budget combined, with no immediate practical or economic benefits.
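The back-of-the-envelope figure above can be checked directly, using the rough numbers from this comment rather than current market data:

```python
# Rough annual value of global oil production, using the figures above.
# These are the comment's illustrative numbers, not current market data.

barrels_per_day = 70e6     # ~70 million barrels produced per day worldwide
price_per_barrel = 100.0   # a highish but realistic price, in USD
days_per_year = 365

annual_value = barrels_per_day * price_per_barrel * days_per_year
print(annual_value)  # 2.555e12: roughly two and a half trillion US dollars per year
```

And that is just the value of the production itself; as noted, buying out a large fraction of it would push the price, and hence the cost of the subsidy, even higher.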
Norman Borlaug is the poster child of how to use genetic manipulation for large-scale impact as an individual, so I don't think your degree is pointed in the wrong direction. But it is the nature of established institutions to fail at revolutionary thinking, so a survey of the 'heavyweights' in your field will tend to be disappointing.
We have only crappy guesses about the completion date for the AGI project, and the success of FAI in particular is contingent on how well our civilization runs in the interim. For example, wartime research might involve risky choices in AGI development, because of a more urgent need for rapid deployment; an arms race for the 'first' AGI would be terrible for our chances of FAI. Genomics won't help us build a mind, but it can help foster an environment where that research is more likely to go well (see Borlaug again). You might, say, investigate the regulatory networks surrounding genes correlated with sociability or IQ.
Do you believe that you can reliably distinguish 'problems that cannot be solved by humans' from 'problems that humans could solve in principle but haven't yet'? Personally, I'm very bad at this, especially when the solutions involve unexpected lateral thinking. While I do agree that AGI is more or less the last human invention, I doubt that it's the next one; we haven't run out of other things to invent, and I'd be surprised if that were the case in the narrower area of genomics.
It's probably worth pointing out that you are at the exact stage in your PhD that is most known for general burnout. This looks suspiciously like such an event, with an atypical LW filter. So, this: "I think a large part of my lack of enthusiasm comes from my belief that advances in artificial intelligence..." is likely to be false, since many of your colleagues are experiencing similar feelings at a similar time.