Closet survey #1
What do you believe that most people on this site don't?
I'm especially looking for things that you wouldn't even mention if someone wasn't explicitly asking for them. Stuff you're not even comfortable writing under your own name. Making a one-shot account here is very easy, go ahead and do that if you don't want to tarnish your image.
I think a big problem with a "community" dedicated to being less wrong is that it will make people more concerned about APPEARING less wrong. The biggest part of my intellectual journey so far has been the acquisition of new and startling knowledge, and that knowledge doesn't seem likely to turn up here in the conditions that currently exist.
So please, tell me the crazy things you're otherwise afraid to say. I want to know them, because they might be true.
Comments (653)
In what is probably an increasing order of controversial beliefs:
Libertarianism is correct, at least in the broader sense of the word (in the sense under which Milton Friedman qualifies as a libertarian). I know this isn't the most controversial belief, but it's still a minority belief, according to the 2012 survey.
Productivity (in the sense of "improve your productivity" LW posts) isn't that important as long as you're above a certain threshold, the threshold needed to do enough work to support yourself, save for the future, and have money to spend for fun. Excessive optimization for productivity (which describes many productivity posts on LW) leads to a less happy life.
The differences between men and women are overblown and are mostly socially caused. They are not so great that men and women should be treated differently. Normative gender roles should be abolished. Feminism is good.
The arguments commonly presented in favor of vegetarianism/veganism are weak. They presuppose that people care more about animal suffering than they really do (and subtly and unintentionally try to shame those that don't care as much), and that people are more capable of reducing animal suffering with their dietary choices than they really are.
Human value isn't irreducibly complex. It boils down to pleasure/happiness. Wireheading is the optimal state.
There is an objective morality (for humans), and it's ethical egoism.
I'd love to subscribe to your newsletter.
I can't shake off the suspicion of solipsism.
Don't worry, you're not the one who exists.
I don't think what I'm about to post is strictly in keeping with the intended comment material, but I'm posting it here because I think this is where I'll get the best feedback.
The majority of humans don't have a concrete reason for why they value moral behavior. If you ask a human why they value the life or happiness of others, they'll throw out some token response laden with fallacies, and when pressed they'll respond with something along the lines of "I just feel like it's the right thing". In my case, it's the opposite. I have a rather long list of reasons not to kill people, starting with the problems that would result if I programmed an AI with those inclinations. There's also the desire for people not to kill and torture me. But where other people have a negative inclination toward killing people, flaying them alive, etc., I don't. Where other people have a neural framework that encourages empathy, plus inconsequential intellectual arguments to support it, I have a neural framework that encourages massive levels of suffering in others, plus intellectual arguments restricting my actions away from my intuitive desires.
On to my point. Understandably, it is rather difficult for me to express this unconventional aspect of myself in fleshy-space (I love that term). So I don't have any supported ideas of how common non-conventional ethical inclinations are, or how they're expressed. I wanted to open this up for discussion of our core ethical systems, normative and non-normative. In particular I am interested in seeing if others have similar inclinations to mine and how they deal / don't deal with them.
Many singularitarians have a bias toward expecting a singularity in their own lifetime or shortly after it. (I assign a single-digit percentage probability to a singularity before 2100, and something like 25-40% within the next 500 years.)
Old Culture gets way too little credit, but most of the people who realize this or appear to realize this are reactionaries who either can't imagine different, much better Old Cultures or are neither utilitarian nor consensual with respect to participation in said cultures.
I'm not sure what you mean by this.
Late to this (only by 4 years... so fifty smartphone generations), but LOVE the idea.
I believe - firmly, and with conviction - that the modal politician is a parasitic megalomaniacal sociopath who should be prevented at all costs from obtaining power; that the State (and therefore democracy) is an entirely illegitimate way of ameliorating public goods problems and furthering 'social objectives'.
Hence my nick (which I invented).
the optimal political/social structure is one in which we encourage megalomaniacal sociopaths to do good, because they tend to be effective. This is the best part of capitalism.
I'm not so sure about that. The outcomes implied by an ASPD diagnosis (not quite identical to "sociopath", but close enough for use here, I think) are better than some disorders, but still pretty rough -- including in measures of occupational success.
We might object that these are self-selected as people whose lives have been damaged enough by their problem that they seek treatment, but personality disorder criteria are so vague that I can't think offhand of a better way of grounding the word.
To a large extent you're right, but I think it's not inaccurate to say that, e.g., CEOs of corporations are more likely to be examples of effective sociopaths. I can't remember where I read that statistic, but the rate of psychopathy among wealthy CEOs is higher than the population average.
I've heard the same statistic, but there are a lot more ASPD diagnoses than there are CEOs. The former can be overrepresented among the latter (perhaps because it confers an advantage in business if you also have a bunch of other rare prerequisites) without the disorder being good news for its sufferers' effectiveness on average.
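The base-rate point above can be made concrete with a quick calculation (all numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from any study):

```python
# Toy base-rate check (every number here is assumed for illustration):
# ASPD can be several times overrepresented among CEOs while still
# being bad news for the average person with the diagnosis, simply
# because the two groups differ enormously in size.

population = 250_000_000   # adults (assumed)
p_aspd = 0.01              # ~1% ASPD prevalence (assumed)
n_ceos = 200_000           # number of CEOs (assumed)
p_aspd_given_ceo = 0.04    # 4x overrepresentation among CEOs (assumed)

n_aspd = population * p_aspd            # people with ASPD
n_aspd_ceos = n_ceos * p_aspd_given_ceo # CEOs with ASPD

# Fraction of people with ASPD who are CEOs:
p_ceo_given_aspd = n_aspd_ceos / n_aspd
print(f"{p_ceo_given_aspd:.4%}")  # well under 1%
```

Even with a fourfold overrepresentation, only a tiny fraction of people with the diagnosis end up as CEOs, so the overrepresentation tells us little about average outcomes for the group.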
There's a definite selection effect for ASPD.
In general, any mental health diagnosis is usually conditioned on a significant disruption of the sufferer's life -- if you're a sociopath but it doesn't affect you in any way, you're typically not diagnosed. This is usually on the DSM checklist for a diagnosis, and while I don't know offhand whether ASPD is the same, I'd bet that it is.
The comment you're replying to is definitely questionable, though. It seems like a very prematurely-halted optimization process if the "optimal" structure is optimized towards encouraging less than one percent of humans to do good things.
obviously we want to encourage lots of humans to do good things but I think it's extra important to encourage the 1 percent of humans that would otherwise do evil things to do good things.
Less than one percent.
Do you think it's optimally important? As in, the optimal social structures are weighted specifically towards this subgroup?
There's far lower hanging fruit than that. Most people don't even know about the Milgram experiments.
Don't make the optimal the enemy of the good.
Anyway, part of my point is that this is already basically being accomplished through capitalism, so we don't need to focus on it. It's low-hanging fruit that's already being plucked by our system, which gives money to people who are of benefit to lots of people.
It is, yes, but that doesn't necessarily preclude "effective" -- the diagnosis can be based on disruption of any part of the patient's life. It's entirely possible for the behavior associated with a disorder to improve outcomes in one domain (employment, say), while disrupting others (i.e. family life) enough for the label to stick. That's what I was trying to get at with my qualification about occupational success.
(Meta-comment.) These 2009-era comments raise political/controversial points and meta-commentary I associate with latter-day LW, not OG LW, which surprises me a bit. (Examples below.) Given the more recent signs of escalating political tensions on LW, I wouldn't have expected these older comments to hit the same beats as, say, Multiheaded's analogous thread from this year, but a bunch did.
It looks like the political/controversial points provoked less argument here than in the 2012 post. I'd guess this is down to increasing political heterogeneity on LW over time, but maybe it's just because there are more people here now. (Or maybe Multiheaded's more dramatic framing in the 2012 post primed people to argue more vigorously? Dunno.)
"the most important application of improving rationality is not projects like friendly AI or futarchy, but ordinary politics"
"Forbidden topics!"
"I've heard reports that cause me to assign a non-neglible probability on the chance that sexual relations with between children and adults aren't necessarily as harmful as they may seem."
"In western societies, it's an orthodoxy, a moral fashion, to say that sex between children/adolescents and adults is bad. This can be clearly seen because people who argue against the orthodoxy are not criticised for being wrong, but condemned for being bad."
"within [sic?] human races there are probably genetically-determined differences in intelligence and temperment, [sic] and that these differences partically explain differences in wealth between nations"
"it's important to not downvote contributors to this survey if they sound honest, but voice silly-sounding or offending opinions"
"That both women and men are far happier living with traditional gender roles. That modern Western women often hold very wrong beliefs about what will make them happy, and have been taught to cling to these false beliefs even in the face of overwhelming personal evidence that they are false."
"I believe that there are very significant correlations between intelligence and race. [...] I believe that the reasons white people enslaved black people, and not the other way around is due to average intelligence differences."
"There is a very strong pressure to be "Politically Correct", and it seems that most beliefs that would be tagged with "Politically Correct" are tagged with that because they cannot be tagged with "Correct"."
"Men and women think differently. Ditto that modern Western women hold very wrong beliefs about what will make them happy."
"As a matter of individual rights as well as for a well working society, all information should be absolutely free; there should be no laws on the collection, distribution or use of information. Copyright, Patent and Trademark law are forms of censorship and should be completely abolished."
"Bearing children is immoral."
"All discussion of gender relations on LessWrong, OvercomingBias, or any similar forum, will converge on GenderFail." (This last one's from April 2010, but still.)
It's unclear to me that this is that LW specific. If you asked any large sample of western Internet users for anonymous and unaccountable statements of controversial opinions would you get results that are that different? If not, then it's more a description of the Internet.
The only thing that's LW specific is the suggestion that the most effective use of rationality is going to be politics.
I guess satt's point is that back in 2010 that stuff wasn't discussed outside “Closet survey” and threads like that, whereas more recently people have done that in otherwise regular threads causing some drama and mind-killing (though IMO certain LWers overstate the extent to which this is a problem).
Keeping discussions of potentially mind-killing topics quarantined to specially designated threads may be a superior solution to either banning them altogether or allowing them throughout the site.
My point was more that I had a causal model in my head (much higher proportion of LWers thinking/talking about controversial topics in 2012 → more LW drama in 2012), but realized it was wrong when I read the comments here, felt confused, and noticed I was confused. (It's a pretty mundane example of noticing confusion but I doubt I'm the only one whose mental model was wrong in this way.)
Coincidentally, I just found a sort of similar post by taw when I was idly Googling "reference class tennis". It mentions climate change scientists as examples of politicized science, and namedrops "race and IQ, nuclear winter, and pretty much everything in macroeconomics" as times when "such science was completely wrong". Also, although taw's ultimate point was actually about reference class forecasting, a lot of the comments focused on his object-level examples of scientific controversy instead. That happened back in 2009 as well.
As for what to do about drama, I'll hold off on making suggestions. It's not something top-down policy is likely to fix without unhappy side effects, and LW's ultimately an entertainment device for me (albeit one that sometimes makes me think). If it turns into something un-fun, I'll just go and procrastinate with something else.
[emphasis added]
Wow. Essentially, they prophesied Elevatorgate.
It isn't prophecy if you have a large-n sample.
It's reference class forecasting!
Creating working AGI software components is a necessary step towards making AGI, and you don't have any hope of understanding the problem of AGI until you've worked on it at the software level.
I do not believe that the Singularity is likely to happen any time soon, even in astronomical terms. Furthermore, I am far from convinced that, even if the Singularity were to happen, the transhuman AI would be able to achieve quasi-godlike status (i.e., it may never be able to reshape entire planets in a matter of minutes, rewrite everyone's DNA, travel faster than light, rewrite the laws of physics, etc.). In light of this, I believe that worrying about the friendliness of AI is kind of a waste of time.
I think I have good reasons for these beliefs, and I operate by Crocker's Rules, FWIW...
Anything that does not have sufficient intelligence to be considered a threat does not even remotely qualify as a 'Singularity'. (Your 'even if' really means 'just not gonna happen'.)
What dlthomas said. A hyper-intelligent AI could still pose a major existential threat, even if it did not have something like gray goo at its disposal. For example, it could convince us puny humans to launch our nuclear arsenals at each other, or destroy the world's economy, or come up with some sort of a memetic basilisk, etc. Assuming, of course, that such an AI could exist at all (which I am quite uncertain about), and that such feats of intelligence are in fact possible at all (I kinda doubt that basilisk one, for example).
Anything that cannot "reshape entire planets in a matter of minutes, rewrite everyone's DNA, travel faster than light, rewrite the laws of physics, etc" cannot possibly be intelligent enough to qualify as a threat? That seems an odd statement, given that some of those are thought to be impossible.
No. That isn't implied by what I said.
The relevant sentence is "In light of this, I believe that worrying about the friendliness of AI is kind of a waste of time". If that to which the label 'singularity' is applied is not sufficiently powerful for worrying about friendliness then the label is most certainly applied incorrectly.
As I'd already mentioned, I am far from convinced that a sufficiently powerful AI will emerge any time soon. Furthermore, I believe that such an AI will still be constrained by the laws of physics, regardless of how smart it is, which will put severe limits on its power. I also believe that our current understanding of the laws of physics is more or less accurate; i.e., the AI won't suddenly discover how to make energy from nothing or how to travel faster than light, regardless of how much CPU power it spends on the task. So far so good; but I am also far from convinced that bona fide "gray goo" self-replicating molecular nanotechnology -- which is the main tool in any Singularity-grade AI's toolbox -- is anything more than a science fictional plot device, given our current understanding of the laws of physics.
Maybe supersmart AIs are so good at disregarding the known laws of physics that they exist already.
I find it amusing that there are actual mechanisms that "our current understanding of the laws of physics" predicts will allow both of these (zero-point energy and Alcubierre drives, respectively).
The Alcubierre drive is a highly speculative idea that would require exotic matter with negative mass, which is not considered possible according to mainstream theories of matter such as the Standard Model and its common extensions and variations.
Zero-point energy is a property of quantum systems. According to mainstream quantum mechanics, zero-point energy can't be withdrawn to perform physical work (without spending more energy to alter the underlying physical system).
Among the perpetual motion/free energy crowd, "zero-point energy" is a common buzzword, but these people are fringe scientists at the very best, and more commonly just crackpots or outright fraudsters.
Ah ... no.
Not exactly. ZPE has measurable and, in some cases, exploitable effects. I'm not saying it'll ever be practical to use it as a power source (except maybe for nanotech), but it can most definitely be used to perform work. For example, the Casimir effect. I note that Wikipedia (which I can't edit from this library computer) makes this claim, but the citation provided does not; I'm not sure if it's a simple mistake or someone backing up their citation-less claim with an impressive-sounding source.
Well yeah, anyone claiming to have an actual working free energy machine is lying or crazy. Just like anyone claiming to have flown to Venus or programmed a GAI. Likewise, anyone claiming to have almost achieved such technology is probably conning you. But that doesn't mean it's physically impossible or that it will never be achieved.
Uhm, I'm not a physicist, but that's a short paper (in letter-to-the-editor format) regarding wormholes, which was published in 1988. The Alcubierre drive was proposed in 1994. Maybe somebody used an FTL drive to go back in time and write the paper :D
Anyway, while I don't have the expertise to properly evaluate it, the paper looks somewhat handwavy:
One can imagine the Moon being made of cheese, but that doesn't make it physically plausible.
AFAIK, there are multiple interpretations of the Casimir effect, but in most of them it is maintained that the phenomenon doesn't violate conservation of energy and can't be used to extract energy out of the quantum vacuum.
It can, in theory, be used to convert mass to energy directly. Bias quantum foam flux over an event horizon - and this need not be a gravitational event horizon; an optical one ought to work - and one side of the horizon will radiate Hawking radiation, while the other will accumulate negative-mass particles. These should promptly annihilate with the first bit of matter they encounter, vanishing back into the foam and clearing the energy debit of the Hawking radiation - effectively making the entire system a mass-to-energy conversion machine. Which does not violate CoE.
One second.. http://arxiv.org/pdf/1209.4993v1.pdf
AKA: A theoretical way to make a mass-annihilation powered laser amplifier. No way to tell if this is good physics without actually building the setup, but the theory all seems sound.
Eh... Only.. Do not point that lab bench at me, please? The amplification ought to stop when the diamond turns into a plasma cloud..
I understand (I can't get past the paywall) that it describes how the Casimir effect creates a region that violates the positive energy condition, proving that it's not a law of physics. This is only part of their more general point (which is time machines - which are, of course, equivalent to FTL drives in any case, though harder to build).
The quote is handwavy. Then again, I don't know much about quantum foam. OTOH, considering their paper concerns a mechanism for holding wormholes open, it's not an unreasonable proposition (and it's not the only way to get a wormhole, after all, merely a possible way.)
The Casimir effect isn't the only example. ZPE keeps liquid helium liquid and probably contributes (although it's not the only contributor) to the expansion of the universe. Conservation of energy simply doesn't apply on a quantum scale; it's an emergent property of quantum mechanics, like, say, chairs.
Wrong link? The abstract (full text is paywalled) says:
I don't see any connection to Alcubierre drives. Classic Kip Thorne, though.
Without even pretending to be anything other than an amateur layman in such questions, I found this on arxiv, quote:
(Lastly, if you're wondering why I'm replying to you a lot, it's just because you are a prolific commenter with whom I occasionally disagree.)
looks embarrassed
I just grabbed a citation from someone talking about how the Casimir effect can be used to create negative energy (in the context of stabilizing wormholes). I should probably have checked that; I would have found it wasn't actually in the abstract.
Nevertheless! My point was that negative energy is pretty obviously physically possible, since it's what predicts the Casimir effect working. (There has been some attempt to claim the Casimir effect is actually predicted by some other theories, but that's not widely accepted.)
From what I understand it may be closer to say "doesn't rule out" rather than "predict will allow". Even that much of a possibility is somewhat mind-blowing.
Um, the current definition of speed prohibits FTL motion.
Travel, on the other hand, is a much looser term. Alcubierre drives, in theory, travel faster than their speed would suggest by distorting space. Until recently they were merely interesting mathematical curiosities, but recently new variations that allow them to be constructed by a non-godlike tech level have been discovered.
Only locally. And 'local' is rather malleable (which is the principle Alcubierre drives theoretically rely on).
It's distance and time which are more malleable; if light travels through a vacuum and arrives in x time, the arrival point is defined as being x distance away from the departure point of the light when it arrives. The Alcubierre drive would (given a couple of facts not in evidence) allow you to change the distance. Light emitted from you at the time of departure would still beat you to the destination.
Sure, so long as the space being traversed remains consistent. Which it doesn't (always) given General Relativity. Hence Alcubierre drives.
No, it wouldn't. The drive in question is described thus:
Notice the link there to faster than light travel. That title is a literal description.
For emphasis: This is General and not Special Relativity.
Fair enough. I might recommend cutting your quote down to the relevant bit for clarity and brevity. I should have got your intended meaning with a few more cycles invested, but anything you can do to make the reader's job easier is a win.
Ok, I put some [...] in.
Fertility and intelligence are negatively correlated.
Religiosity and intelligence seem to be negatively correlated.
Therefore all the efforts of Dawkins, Yudkowsky etc. to make the world more rational seem to be futile or at least inefficient. Pretty scary...
Gah. Classical evolution is over. To clarify: Evolution is real, but it is also glacially slow. Social changes are orders of magnitude faster, and technology faster still.
The odds of selective effects causing any changes whatsoever to human nature before someone rewrites our genome like it is a manuscript in dire need of editing are zero. The time scales are wrong. As far as evolution is concerned, if it takes us 300 years to master genetic engineering, and another 500 before the laws against it stop being enforced, then that is a bullet Darwin cannot dodge. And after that point, evolution is no longer blind.
Fertility and intelligence may be correlated, but that by itself does not say much about intelligence and birth rate. Correlation does not imply causation, and even where causation holds, there may be unlisted factors that produce results opposite to those anticipated when only two factors are taken into consideration.
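The "unlisted factors" point is essentially Simpson's paradox: a confounder that separates the population into subgroups can make the pooled correlation point the opposite way from the within-group relationship. A minimal sketch with made-up data (the subgroup means and slopes below are arbitrary assumptions, chosen only to exhibit the reversal):

```python
# Toy Simpson's-paradox demo: two subgroups in which x and y are
# positively related, arranged so that the pooled correlation over
# the whole population comes out strongly negative.
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

groups = []
for base_x, base_y in [(0.0, 10.0), (10.0, 0.0)]:  # two subgroup centers
    gx, gy = [], []
    for _ in range(500):
        x = base_x + random.gauss(0, 1)
        gx.append(x)
        gy.append(base_y + 0.9 * (x - base_x) + random.gauss(0, 0.5))
    groups.append((gx, gy))

for gx, gy in groups:
    print("within-group r = %.2f" % pearson(gx, gy))   # positive in each group

pooled_x = groups[0][0] + groups[1][0]
pooled_y = groups[0][1] + groups[1][1]
print("pooled r = %.2f" % pearson(pooled_x, pooled_y))  # strongly negative
```

The between-group differences dominate the pooled statistic, so a naive two-variable correlation can be the reverse of what holds inside every subgroup.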
I believe that every single social interaction is linked to power/hierarchy. (See Robert Greene's books.)
I also believe that most on LW simply opt out of their local/most proximate hierarchy (and may actively and/or secretly seek to discredit it), as Paul Graham did in high school. In one of his articles he talked of how he wanted to be more intelligent than popular. That is dominance in one field instead of another. (A tip to entrepreneurs: aim to be #1 in your field or don't start at all.)
If it's not their most proximate hierarchy then it is the one they internalized during their youth. Parents? Friends?
I believe that's human, good, and perhaps to an extent "WEIRD". I remember reading an old quote of an Amerindian chief talking of the unrest in the eyes of Europeans, and also how Thomas Jefferson (or was it Franklin?) talked of the indolence of Amerindians.
Culture is an internalization of the power/hierarchy in place, natural or not. As is everything else of a social nature - pretty much everything else manmade. (In theory, not science; that's what I love about it. If you take out the "in theory" and the human nature of scientists.)
http://www.paulgraham.com/nerds.html
His argument boils down to nerd kids being exceptionally smart, and caring much more about being smart than being popular, hence failing at the latter.
I think this argument is overly general, as it can be applied to any kind of excellence: jock kids are exceptionally athletic and care much more about being athletic than being popular; hence, according to Graham's argument, they should be failing at being popular, while in the American school system they succeed.
I wonder whether this "popular jock, unpopular nerd" phenomenon is specific to the American, and perhaps to a lesser extent Western, culture. AFAIK, in East Asian cultures such as Japan and South Korea, school popularity is positively correlated with scholastic performance, probably with good reason, since in these countries scholastic performance is highly correlated with future income and social status.
The closest Japanese equivalent to the Western 'nerd' or 'geek' is the 'otaku'. The word otaku typically refers to social ineptitude, an excessive fixation on pop culture items such as manga, anime, videogames and associated paraphernalia, and general tendency to withdraw from normal social interactions and escape to a fantasy world.
While perhaps many Western nerds can be considered otaku or near-otaku, Japanese otaku are not, in general, nerds, in the Western meaning of "socially awkward smart person". I don't know about IQ scores, but AFAIK, otaku usually have lower-than-average scholastic performance.
I suppose that escapism is the result of social isolation, which results from underperforming on whatever measure of success your local society values. Different societies value different things.
FWIW, it did not exist at the schools I went to in Edinburgh, Scotland, in the 1960s, nor at university (Edinburgh and Oxford) in the 70s. There were sports; some excelled in them and some didn't, like anything else. In my later years at school, one of the options for sports (a compulsory subject for all) was chess. From over here, the jock/nerd thing looks like an exclusively American phenomenon that only exists elsewhere, where it exists at all, by contagion from the original source. "Jock" is an American word. I don't see it used here.
For that matter, the idea of the "popularity totem pole" didn't exist either. Everyone had their own circle of friends. There was no such thing as being "popular". I have no idea what it's like in British schools these days, but "popular" in that specific sense isn't a concept I hear used.
See the comments to this post.
It's true that athletics are very demanding (I remember vividly the absurd amounts of time my high school's football team demanded of its members), but in practice, athletics does seem to somehow escape the double-bind of 'you cannot serve two masters'.
Is it general physical fitness and attractiveness? Yes, I bet that's part of it (although it makes one wonder if there's a causation/correlation confusion). Is it immediate advantages from intimidation due to physical size? I remember the football players at my highschool benefited a bit from this, from simply being huge, but it doesn't seem adequate. Is it the tribal nature of sports, in warring against the enemy school, where athletics short-circuits the need to earn popularity the hard way by players wrapping themselves in the proverbial flag? It'd explain why the competitive sports like football seem to elicit the most admiration of its athletes (and huge donations from alumni), and various track and field events ignored by most students. I like this as the biggest factor.
If I were going the correlation route, I'd probably appeal to the same excuses universities make in choosing on non-academic merits: the kids who do aggressive sports are generally more likely to succeed spectacularly in business or life and earn lots of money which they can donate back. (Consider the Terman study which found massive lifetime income returns to being extraverted.) So when the girls flock helplessly around the football team, making them 'popular' even though they are specializing in football and not 'being popular', they are executing an effective choice of future allies and boyfriends. (How many football stars marry their highschool sweetheart and go on to success...?)
I can only speak from my extensive anime-watching experience (he said, self-mockingly), but I get the impression that athletics is a great way to popularity and girls in Japan as well. Yes, the 'ideal student' archetype will be great at sports and academics, but that's true in the US as well, and it seems that if you can't have both, better to go with sports.
There are far fewer well-defined mathematical relations and operations on the set of "utilities" (aka "utility values", although the word 'value' is misleading since it suggests a number) than most self-stated utilitarians routinely use; for example, multiplying utilities by a scalar makes no sense, the sum of two utilities can only be defined under very strict conditions, and comparing two utilities under only slightly less strict ones.
Consequently, from a rigorous point of view, utilitarianism makes very little sense and is in no way intellectually compelling. Most utilitarians satisfy themselves with a naive approach that allows them to build an internally consistent rule set, much in the same way as theology or classical physics. But the "utilities" they talk about have lost most of their connection to reality - to subjects' preferences/happiness - and more closely resemble an imaginary karma score.
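The point about ill-defined operations can be illustrated with the standard fact that VNM utility functions are unique only up to a positive affine transformation u → a·u + b (a > 0). A minimal sketch with toy numbers (the agents, options, and utility values below are all invented for illustration):

```python
# Toy demo: rescaling one agent's utility function with a valid
# positive affine transformation (u -> 10*u + 5) leaves that agent's
# own choices unchanged, but flips the result of naively summing
# utilities across agents -- so the "utilitarian sum" depends on an
# arbitrary choice of representation.

def best_option(utilities):
    """Return the option an agent picks, given a dict {option: utility}."""
    return max(utilities, key=utilities.get)

alice = {"park": 1.0, "cinema": 3.0}
bob   = {"park": 2.0, "cinema": 1.0}

# An equally valid representation of Bob's preferences:
bob_rescaled = {k: 10.0 * v + 5.0 for k, v in bob.items()}

# Bob's own choice is unaffected by the rescaling ...
assert best_option(bob) == best_option(bob_rescaled) == "park"

# ... but the cross-agent sum is not: it picks a different option
# depending on which representation of Bob we use.
sum1 = {k: alice[k] + bob[k] for k in alice}
sum2 = {k: alice[k] + bob_rescaled[k] for k in alice}
print(best_option(sum1), best_option(sum2))  # cinema park
```

Since both representations of Bob encode exactly the same preferences, a sum of utilities across agents is not well-defined without some extra (and non-obvious) interpersonal normalization.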
I would have chosen the original ending of Three Worlds Collide over the "true" ending, and would be, if not entirely pleased, at least optimistic with respect to the outcome of Failed Utopia #4-2.
Judging from the comments on Failed Utopia #4-2, you are far from alone on that one. Even EY, for all that he asserted that people were just claiming to be OK with it to be contrary, eventually conceded that he would choose that world over the current state of affairs. As would I.
That was because they didn't have the same impending doom of existential risk hanging directly over their heads and people weren't dying all the time, it wasn't a function of "yay more people are HAPPY".
Yes.
I didn't mean to suggest that you viewed it as a perfect win condition, nor that you believed peoples' HAPPY level was the most important factor; sorry if it came across that way.
I believe there's a significant probability of economic collapse in large developed countries in the next fifty years. (Possibilities: fiscal collapse, default, financial crash resulting in a true depression.) I believe that it's worth effort and money to plan for this eventuality.
I believe that choosing to focus attention on uplifting things is the most practical use of one's mind. (This is more controversial than it sounds: it means placing a noticeably higher value on high culture than low culture, and it means that making cynical observations corrodes most people's ability to be productive.)
I believe that the personal really is political. That is, many "political" isms are actually total sets of values about interpersonal relationships and the good life. So you can't really talk about values and ethics without ever bringing up contemporary politics, because often people's personal creeds in daily life actually are libertarian, feminist, conservative, socialist, etc. Therefore rules like "don't talk politics" imply that we don't talk about values either.
I do not believe in utilitarianism of any sort, as an account of how people should behave, how they do behave, or how artificial people might be designed to behave. People do not have utility functions and cannot use utility functions, and they will never prove useful in AGI.
Bayesian reasoning is no more a method for discovering truth than predicate calculus is. In particular, it will never be the basis for constructing an AGI.
Almost all writings on how to build an AGI are nothing more than word salad.
In common with most people here, I expect AGI to be possible. However, I may be unlike most people here in that I have no idea how to build one.
The bar to take seriously any proposed way of building an AGI is at least this high: a real demo that scares Eliezer with what could be done with it right now, never mind if and when it might foom.
All discussion of gender relations on LessWrong, OvercomingBias, or any similar forum, will converge on GenderFail. (Google "RaceFail" to see what I'm comparing this to. The current GenderFail isn't as bad as LiveJournal's great RaceFail 2009, but it's the same process in miniature.)
Some things are right, some things are wrong, and it is possible to tell the difference.
In your opinion, what might be some methods for discovering truth?
Observing, thinking, having ideas, and communicating with other people doing these things. Nothing surprising there. No-one has yet come up with a general algorithm for discovering new and interesting truths; if they did it would be an AGI.
Taking a wider view of this, it has been observed that every time some advance is made in the mathematics or technology of information processing, the new development is seized on as a model for how minds work, and since the invention of computers, a model for how minds might be made. The ancient Greeks compared it to a steam-driven machine. The Victorians compared it to a telephone exchange. Freud and his contemporaries drew on physics for their metaphors of psychic energies and forces. When computers were invented, it was a computer. Then holograms were invented and it was a hologram. Perceptrons fizzled because they couldn't even compute an XOR, neural networks achieved Turing-completeness but no-one ever made a brain out of them, and logic programming is now just another programming style.
Bayesian inference is just the latest in that long line. It may be the one true way to reason about uncertainty, as predicate calculus is the one true way to reason about truth and falsity, but that does not make of it a universal algorithm for thinking.
I didn't get the impression that Bayesian inference itself was going to produce intelligence; the impression I have is that Bayesian inference is the best possible interface with reality. Attach a hypothesis-generating module to one end and a sensor module to the other and that thing will develop the correctest-possible hypotheses. We just don't have any feasible hypothesis-generators.
I do get that impression from people who blithely talk of "Bayesian superintelligences". Example. What work is the word "Bayesian" doing there?
In this example, a Bayesian superintelligence is conceived as having a prior distribution over all possible hypotheses (for example, a complexity-based prior) and using its observations to optimally converge on the right one. You can even make a theoretically optimal learning algorithm that provably converges on the best hypothesis. (I forget the reference for this.) Where this falls down is the exponential explosion of hypothesis space with complexity. There's no use in a perfect optimiser that takes longer than the age of the universe to do anything useful.
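A toy sketch of that explosion (my own illustration, not any specific proposal): treat each hypothesis as a boolean function of k input bits, do exact Bayesian updating with a uniform prior, and count the hypotheses. The space has size 2**(2**k), so exhaustive enumeration dies long before anything "superintelligent".

```python
# Exact Bayesian inference over an enumerated hypothesis space.
# A hypothesis is a boolean function of k input bits, represented
# as its truth table; there are 2**(2**k) of them.
from itertools import product

def enumerate_hypotheses(k):
    """All boolean functions of k bits, as truth-table tuples."""
    inputs = list(product([0, 1], repeat=k))
    return inputs, list(product([0, 1], repeat=len(inputs)))

def posterior(k, observations):
    """Exact update with a uniform prior: keep only hypotheses
    consistent with the (input, output) observations."""
    inputs, hyps = enumerate_hypotheses(k)
    index = {x: i for i, x in enumerate(inputs)}
    consistent = [h for h in hyps
                  if all(h[index[x]] == y for x, y in observations)]
    return len(hyps), len(consistent)

total, surviving = posterior(3, [((0, 0, 0), 1), ((1, 1, 1), 0)])
print(total, surviving)  # 256 hypotheses on 3 bits; each observation halves them
# At k = 6 the space already holds 2**64 hypotheses, so the
# "provably optimal" enumerate-and-update scheme is hopeless.
```

The update itself is trivial and optimal; the cost is entirely in the size of the space being updated, which is the comment's point.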
Thank you, that was very enlightening. I see now where you were coming from.
I still think that some breakthroughs are more -equal- fundamental and some methods are more correct, that is, efficient in seeking the truth. Perhaps attempts to first point out some specific interesting features of human consciousness (or intelligence, or brain) and only then try to analyse and replicate them would meet more success. In that sense logic and neural networks are successful, while Bayesian inference is not.
I wonder if you are familiar with TRIZ? It strikes me as positively loony, but it is a not-outright-unsuccessful attempt at a general algorithm for discovering new, uh, counterintuitive implications of known natural laws. Not truths per se, but pretty close.
I've read a book on it, as it happens. It seemed quite a useful set of schemas for generating new ideas in industrial design, but of course not a complete algorithm.
I've peeked at your profile and the linked page. See, I'm currently enrolled in a linguistics program, and I was considering dedicating some time to The Art of Prolog, so I've researched what Prolog software there is and wasn't especially impressed. Could I maybe ask you for advice as to what kind of side project Prolog is suited for? I'm familiar with Lisp and C and I've dabbled with Haskell and Coq, and I would really really like to write something at least marginally useful.
I think Prolog, like Lisp, is mainly useful for being a different way of thinking about computation. The only practical industrial uses of Prolog I've ever heard of are some niche expert systems, a tool for exploring Unix systems for security vulnerabilities, and an implementation of part of the Universal Plug and Play protocol.
It would be a significant part of an AGI. Even the hardest part. But not enough to be considered an AGI itself.
Corporations literally get away with murder. The corporation is a recent innovation, not something that has always been with us. This recent social contract that governs corporations is deeply flawed, in that it holds no one accountable for consequences that would be regarded as criminal if resulting from the actions of a person. A recent case in point is the wave of suicides in the French national telecom giant.
(Ideas below are still works in progress, listed in descending order of potential disagreement:)
Bearing children is immoral. Eliezer has stated that he is not adult enough to have children, but I wonder if we will ever be adult enough, including in a post-singularity environment.
The second idea probably isn't as controversial: early suicide (outside of any moral dilemma, battlefield, euthanasia situation, etc.) is in some cases rational and moral. Combined with cryonics, it is the only sensible option for, e.g., senile dementia patients. But this group can be expanded, even without cryonics.
Some have mentioned that modern school systems are broken, but I'll go even further and say that mandatory education is a huge waste of time and money, for all involved. Many, perhaps most, need to know only basic literacy and arithmetic. The rest should be taught on a want-to-know basis or similar. As a corollary, I don't think many or even most people can be brought into the fold of science or rationality.
(Curiously, the original poster wondered if our crazy beliefs might be true, but many responses, including my own, are value, not fact, judgments.)
I love saying crazy things that I can support, and I thrive on the attention given to the iconoclast, so I find it impossible to answer this.
The only beliefs that I wouldn't feel comfortable saying here are beliefs that I want to be true, want to argue for, but I know would get shredded. This is one reason I try to hang out with smart, argumentative people - so that my concern about being shredded in an argument forces me to more carefully evaluate my beliefs. (With less intelligent people, I could say false things and still win arguments).
This is great. I haven't waded my way through the whole EvoPsy debate (yet), and then there's global warming, several flavors of natural selection argument, and who knows what else. Mind if I vent some arguments with you?
I've had a bone to pick with Yudkowsky* ever since reading Three Worlds Collide. I haven't gathered all of my thoughts yet, or put them in a proper essay, but since you asked, here's a quick synopsis (paraphrasing Clausewitz).
I think people nowadays overestimate the value of human life. Generally speaking, we ain't worth that much - and up until about four hundred years ago, killing each other was our primary source of entertainment.
As long as we have individuals, conflict is inevitable; and a society where the conflict's extremes have been narrowed down to nothing but sassy comments and politicking, well... that seems like a pretty boring place to live.
Speaking from experience, yelling at people solves a lot of problems. And I know a few individuals who would be much less of a trainwreck if they'd been given the punch in the face they deserved. I think we've got no call to be judging the Baby Eaters for their biology - any more than the Orgasmiums have for judging us. Misery can be just as much fun, if you approach it with the proper mindset, and I think HBO's Rome does a brilliant job of describing a society with more reasonable standards. At the end of the day, it beats playing checkers, doesn't it?
:-) Just look at our entertainment - we love a protagonist who suffers.
*It's a very small bone. A chicken bone, really.
To whom?
You may want to read some of the Romantics, if you haven't already. Especially Ralph Waldo Emerson and Nietzsche, who don't necessarily normally fall into that category.
I was hoping to get more interesting replies to this post.
It seems you all more or less agree about how the world works, and what's left is people mooning about their personal ethical preferences or niggling issues in already vague areas, or minor doubts about this and that.
I believe Jesus is entirely mythical, quarks don't exist, 9/11 and the London tube bombings were inside jobs, and flying saucers are the manifestation of a non-human, superior intelligence.
This rationalist community is a dry husk of libertarians, mathematicians, and various other people who don't get invited to parties. I find it very depressing...
I don't have much of a vested interest in being or remaining human. I've often shocked friends and acquaintances by saying that if there were a large number of intelligent life forms in the universe and I had my choice, I doubt I'd choose to be human.
I'm going to be an elven wizard.
Are there (many) people on here who don't agree with you?
Depending on how we define "human," I might... I'm not sure. But I'm fairly confident that if I did, my definition of "human" would come out so broad that it would shock swestrup's friends and acquaintances even more.
Whenever I hear an unsupported vote against conventional wisdom on a web forum, e.g. "adult-preteen intercourse isn't very harmful", I don't update my view much. Absent a well-argued case for the unconventional position, I assume that such beliefs reflect some strong self-interested bias (sufficient to overcome strong societal pressure) and not fearless rational investigation - to say nothing of trolls.
I also strongly discount unreasoned votes in favor of the consensus, especially on issues subject to strong conformity pressure.
It seems that this survey is not intended to solicit arguments for particular controversial anthropological or political beliefs. Does the site accept them at all? I'd expect not, except as case studies for some general claim, due to the risk of attracting cranks.
I agree. See my comment for this post. My position is controversial, but pretty coherent. At least, no one came up with a counterargument; I was just downvoted a lot. So, my opinion is a pretty good example of what the poster is looking for, yet such opinions inherently will not do well. Really, this forum is antithetical to this post.
I think school, as conventionally operated, is a scandalous waste of brain plasticity and really amounts mostly to a combination of "signaling" and a corral.
I'm not sure what should replace it. There are things kids need to know - math, general knowledge, epistemology, reasoning, literacy as communication, and the skills of unsupervised study and research. (School doesn't overtly teach most of the above - it puts you under impossible pressure and assumes that like a tomato pip you will be squeezed into moving in the right direction.)
There are also a ton of things they might like to learn, out of interest.
I am not sure those two categories of learning ought to be bundled up. Especially, while I can understand forcing a study of the first category, it seems obviously counterproductive to force the second.
I've read some responses touching on the same issue, but my point is different enough that I thought I'd do my own.
I believe that possession of child pornography, or any other kind of pornography, should be legal. I don't have enough information to decide whether the actual making of child pornography is harmful in the long term to the children, but I believe that having easy access to it would allow would-be child molesters to limit themselves to viewing things that have already happened and can't be undone.
I would say that the prominence of hentai and lolicon in Japan is a smaller step in the same direction, and seems to have worked well there.
In context it's interesting that Japanese children's manga routinely has bawdy jokes, sexualized slapstick and "fan service". This may be an outsider's mistaken view but there doesn't seem to be any serious attempt to fence children into a contrived asexual sandpit.
I agree that's interesting, but remember these manga are not actually written by children, nor bought or read exclusively by children.
There is no such thing as a consumer-driven economy.
meaning what?
OK, here goes. I could probably produce a list of things that all y'all'd disagree with, though I'm pleased to see that routine neonatal circumcision = bad isn't among them. But I'll just go for the jugular:
Flush toilets are the greatest evil in the world.
Edit: OK, so why the downvote? Presumably not because you disagree.
You might get a better response if you actively claimed RNC isn't bad.
Also, if you provided some clarifying explanation for the toilet claim. "Greatest evil in the world" is pretty extreme - try modifying it downwards.
Something that I don't so much believe as assign a higher probability than other people.
There is a limit to how much technology humans can have, how much of the universe we can understand, and how complicated the devices we make can be. This isn't necessarily a universal IQ limit but more of an asymptotic limit that our evolved brains can't surpass. And this limit is lower, perhaps substantially so, than what we would need to do a lot of the cool stuff like achieve the singularity and start colonizing the universe.
I think it's even possible that some sort of asymptotic limit is common to all evolved life. This may well be a solution to the Fermi paradox: not that they aren't out there, but no one is smart enough to actually leave their rock.
I have wondered about the assumption that technological / scientific / economic progress can continue forever, and I am also suspicious of the idea that arbitrary degrees of hyper-intelligence are possible. I suspect that all things have limits, and that mother nature long ago found most of those limits.
I don't know how many people here would agree with the following, but my position on it is extreme relative to the mainstream, so I think it deserves a mention:
As a matter of individual rights as well as for a well-functioning society, all information should be absolutely free; there should be no laws on the collection, distribution or use of information.
Copyright, Patent and Trademark law are forms of censorship and should be completely abolished. The same applies to laws on libel, slander and exchange of child pornography.
Information privacy is massively overrated; the right to remember, use and distribute valuable information available to a specific entity should always override the right of other entities not to be embarrassed or disadvantaged by these acts.
People and companies exposing buggy software to untrusted parties deserve to have it exploited to their disadvantage. Maliciously attacking software systems by submitting data crafted to trigger security-critical bugs should not be illegal in any way.
Limits: The last paragraph assumes that there are no Langford basilisks; if such things do in fact exist, preventing basilisk deaths may justify censorship - based on the purely practical observation that fixing the human mind would likely not be possible shortly after discovery.
All of the stated policy opinions apply to societies composed of roughly human-intelligent people only; they break down in the presence of sufficiently intelligent entities.
In addition, if it were possible to significantly ameliorate existential risks by censoring certain information, that would justify doing so - but I can't come up with a likely case for that happening in practice.
Isn't yelling "fire!" in a crowded theater a kind of Langford basilisk?
Agreed.
Normally, when people say they believe "all information should be free", I suspect they don't really mean this, but since you claim your position is very "extreme", perhaps you really do mean it?
I think information, such as what is the PIN to my bank account, or the password to my LessWrong.com account, should not be freely accessible.
You don't believe there is value in anonymity? E.g. being able to criticize an oppressive government, without fear of retribution from said government?
You make a good point; I didn't phrase my original statement as well as I should have. What I meant was that there shouldn't be any laws (within the limits mentioned in my original post) preventing people or companies from using, storing and passing on information. I didn't mean to imply keeping secrets should be illegal. If a person or company wants to keep something secret, and can manage to do so in practice, that should be perfectly legal as well.
As a special case, using encryption and keeping the keys to yourself should be a fundamental right, and doing so shouldn't lead to e.g. a presumption of guilt in a legal case.
I believe there can be value in anonymity, but the way to achieve it is by effectively keeping a secret either through technological means or by communicating through trusted associates. If doing so is infeasible without laws on use of information, I don't think laws would help, either.
I think governments that would like to be oppressive have significantly more to fear from free information use than their citizens do.
When you use the PIN to your bank account you expect both the bank and ATM technicians and programmers to respect your secret. There are laws that either force them not to remember the PIN or impose punishment for misusing their position of trust. I don't see how such situations or cases of blackmail would be resolved without assuming one person's right to have their secrets not made public by others.
I'm not just nitpicking. I would love to see a watertight argument against communication perversions. Have you written anything on the topic?
Agreed.
Also, if you pile on technological improvements but still try to keep patents etc, you end up in the crazy situation where government intrusiveness has to grow without bounds and make hegemonic war on the universe to stop anyone, anywhere from popping a Rolex out of their Drexlerian assembler.
I very strongly agree, except for the matter of trademarks. Trademarks make brand recognition easier and reduce transaction costs. Also enforcing trademarks is more along the lines of preventing fraud, since trademarks are limited only in identifying items in specific classes of items (rather clumsily worded, but I'm trying to be concise and legalities don't exactly lend themselves to concision.)
I don't agree with it. You can't believe everything you read in Wired. The "information should be free" movement is just modern techno-geek Marxism, and it's only sillier the second time around.
All software is buggy. All parties are untrusted.
That may be so now, but that doesn't mean it's impossible to change it. That the current default state for software is "likely insecure" reflects the fact that the market price for software security is lower than the cost of providing it.
Laws against software attacks raise the cost of performing such attacks, and therefore lower the incentives for people to ensure the software they use is secure. I think it would be worth a try to take that illegality away, and see if the market responds by coming up with ways to make software secure.
You can't get really good physical security without expending huge amounts of resources: physical security doesn't scale well. Software security is different in principle: If you get it right, it doesn't matter how many resources an attacker can get to try and subvert your system over a data channel - they won't succeed.
I believe that some improvements in rationality have negative consequences which outweigh their positive ones.
That said, it might be easy to make too much of this. I agree that, on average, marginal improvements in rationality lead to far superior outcomes for individuals and society.
Could you give an example of such a negative consequence?
With probability 50% or greater, the long-term benefits of the invasion of Iraq will outweigh the costs suffered in the short term.
Do you still maintain the statement, in 2015 with ISIL attacks?
I can see the reasoning though I don't quite agree for two reasons.
1) If the Lancet report is at all accurate that's a lot of deaths for the long-term benefits to make up for.
2) How much more extreme has that made the rest of the middle east? How has it hurt the possibility of peace in Israel.
I was, and still am, against the start of the war, though I've been fairly consistent in thinking they should stay since then. (Oddly enough, I thought the surge was a good idea when virtually no-one else did, though I have since started to think it didn't really do anything, now that everyone is moving on board!)
Costs and benefits to whom? America and allies, Iraq, or the world in general?
I believe I'm immortal (and so is everyone else). This is from a combination of a kind of Mathematical Platonism (as eujay mentions below) and Quantum Immortality.
This belief in 'all possible worlds', combined with a non-causal framework for the embedding of consciousness, means that just because of the anthropic principle and perhaps some weird second-order effects, it is quite possible that we will experience rather odd phenomena in the world. Hence, things like ghosts, ESP and such may not be so far-fetched.
Also, I am not a Bayesian. I simply do not think the mind really operates according to such quantitatively defined parameters. It is fuzzy and qualitative. I, for one, have never said I believed in something at, say, 60% probability - and if I did, I would be lying.
Just because odd things occur does not mean that other odd things, like ghosts and ESP, exist. What mechanisms for these do you believe in, and why do you believe in them? Why do humans have ESP, and what mechanism fuels this? What exactly are ghosts, and why should the chemical processes in the human brain transfer over to this 'ghost' mechanism after they cease functioning? I guess I just want to ask: what do you believe, and why do you believe it? Just because extraordinarily odd things have happened does not remove the need for extraordinary evidence to explain other extraordinarily odd things.
Surely you have varying degrees of confidence in various statements. Think about what sort of odds you would need to bet on various predicted future events. You need to read up on calibrating your estimates.
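A minimal sketch of the betting-odds framing (my own illustration, with made-up numbers, not anything the commenter specified): a stated probability p implies the odds at which you should be willing to bet, and miscalibration shows up as expected loss.

```python
# If you "believe at 60%", a fair bet against you pays out in the
# ratio (1 - p) : p, i.e. 2 : 3 against for p = 0.6.
def fair_odds(p):
    """Odds against the event implied by stating probability p."""
    return (1 - p) / p

def expected_gain(p_true, p_stated, stake=1.0):
    """Expected profit from backing the event at the odds implied by
    p_stated, when its real frequency is p_true."""
    payout = fair_odds(p_stated) * stake  # winnings if the event happens
    return p_true * payout - (1 - p_true) * stake

print(fair_odds(0.6))           # ~0.667, i.e. 2:3 against
print(expected_gain(0.6, 0.6))  # ~0.0: calibrated beliefs break even
print(expected_gain(0.5, 0.6))  # negative: overconfidence loses money
```

The calibration exercise is exactly this: if repeatedly betting at the odds your stated probabilities imply would lose you money, the stated numbers are wrong, however fuzzy the underlying feeling is.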
You are saying that "being a Bayesian" describes a belief about how the mind works. That's like saying you're not a Calculian because you don't believe the mind natively uses calculus. Most Bayesians would probably say it's a belief about how to get the right answer to a problem.
I don't think this qualifies as a belief; it's just something I have noticed.
My dreams are always a collection of images (assembled into a narrative, naturally) of things I thought about precisely once the prior day. Anything I did not think about, or thought about more than a single time, is not included. I like to use this to my advantage to avoid nightmares, but I have also never had a sex dream. The fact that other people seem to have sex dreams is good evidence that my experience is rare or unique, but I have no explanation for it.
My nightmares are some of my most interesting dreams, so I don't try to avoid them.
I used to have really interesting nightmares too. Unfortunately, nightmares need a charge of fear to sustain them, and I haven't really been afraid of anything in the last few years, so no more nightmares. My dreams have been a lot more disorganized and less memorable since.
I sometimes suspect that mass institutionalized schooling is net harmful because it kills off personal curiosity and fosters the mindset that education necessarily consists of being enrolled in a school and obeying commands issued by an authority (as opposed to learners directly seeking out knowledge and insight from self-chosen books and activities). I say sometimes suspect rather than believe because my intense emotional involvement with this issue causes me to doubt my rationality: therefore I heavily discount my personal impressions on majoritarian grounds.
I don't actually believe it as such, but I think J. Michael Bailey et al. are onto something.
OK, you're the second person in this thread I've seen advocating this view, so maybe my pro-school view is the minority one here.
The idea of curiosity is very compelling, but how often does productive curiosity actually occur in people who don't go to school? Modern society has lots of things to be curious about: television, video games, fan fiction, skateboarding, model rockets, etc. The level of interestingness doesn't correlate with the level of importance (examples of fields with potential large improvements for humanity: theoretical physics, chemistry, computer science, artificial intelligence, biology, etc.). If you believe model rockets are a sure lead-in to theoretical physics or chemistry, I think you're being overly optimistic.
The most important effect of school is providing an external force that gets people to study these (relatively) boring but important fields. Also, you get benefits like learning to speak in public, being able to use expensive school facilities, having lots of other people to converse with on the topics you're learning, etc. To do boring things on your own, you need self-discipline, which is hard to come by. School does a great job of augmenting self-discipline.
By the way, I thought about school much the same way you did until I left high school (two years early) and went to community college. I can't explain why, but for some reason it's a million times better.
Well, in community college, you're now the "customer", and determine what you want to study, and how to study. It still provides a framework, but you're much freer in that framework. The question is to what extent can we get similar benefits in earlier schooling. AFAICT, the best way to do so would be to make more of it optional. (Another pet project of mine would be to separate grading/certification and teaching. They're very different things, and having the same entity do both of them seems like a recipe for altering one to make the other look good.)
"...separate grading/certification and teaching...."
John Stuart Mill advocates that in the last chapter of On Liberty. He wanted the state to be in charge of testing and certification, but get out of the teaching business altogether (except for providing funding for educating the poor). I like the idea.
I should really get around to reading On Liberty one of these days.
I really think this is the domino that could trigger reform throughout the entire system. The problem is that there are only a few professions that require a specific, critical skill-set which can be easily tested and which completion of a degree does not guarantee.
I agree in principle. The problem is kids in the workplace. When you're gardening and making necklaces, the children can float around among the adults, learn by observation, and from one another. When both parents are sitting in front of a computer all day...
So you found OB and his other writings 40 years ago?
Also, kudos for spending a lot of time studying theoretical physics.
When I really get depressed I speculate that drug abuse could be the explanation of the Fermi Paradox, the reason we can't find any ET's. If it were possible to change your emotions to anything you wanted, alter modes of thought, radically change your personality, swap your goals as well as your philosophy of life at the drop of a hat it would be very dangerous.
Ever want to accomplish something but been unable to because it's difficult? Well, just change your goal in life to something simple and do that; better yet, flood your mind with a feeling of pride for a job well done and don't bother accomplishing anything at all. Think all this is a terrible idea, and stupid as well? No problem - just change your mind (and I do mean CHANGE YOUR MIND), and now you think it's a wonderful idea.
Complex mechanisms just don't do well in positive feedback loops - not electronics, not animals, not people, not ET's, and not even Jupiter brains. I mean, who wouldn't want to be a little bit happier than they are? If all you had to do is move a knob a little, what could it hurt? Oh, that's much better - maybe a little bit more, just a bit more, a little more.
The world could end not in a bang or a whimper but in an eternal mindless orgasm. I'm not saying this is definitely going to happen but I do think about it a little when I get down in the dumps.
Doubtful. The first person to invent an 'expansionist' drug that turned users into hyper-competitive, rapidly-reproducing, high-achieving types - basically, a pill for being a Mormon - would have lots of offspring, lots of success, etc. Many people choose to abuse heroin, but many people also choose to abuse Adderall, or to use Piracetam or other similar substances. The success-druggies will outbreed and outcompete the orgasm-druggies, leading to more intense success-drugs and perpetuating the cycle.
What you've just said is a perfect example of the way in which the "far" brain's intuitive modeling of minds inaccurately predicts REAL human behavior, especially with respect to emotions.
Positive motivation actually consists of associating a positive emotion with goal completion... and this requires you to have a taste of the feeling you'll get when you complete the goal. (i.e., "Oh boy, I can almost taste that food now!").
So what actually happens when you give yourself the feeling of pride in a job well done before the job is done? You get more motivated, not less, as long as you link that emotion to the desired future state, as compared to the current state of reality.
It's worth us worrying about as far as our future is concerned, but to be the sole explanation of the Fermi Paradox (rather than just a contributing factor) it would have to have happened to at least an overwhelming majority of extraterrestrial civilizations, many of whom would presumably have considered the problem beforehand.
There is no such thing as a free market.
Yeah there are, they're just really small. Just the other day I asked if someone would come in on their day off from work in order to cover for me. I paid them, and they performed the service. All this went down without any government intervention, coercion, or use of force.
If you mean that there is not a single country on Earth that contains ONLY free markets then you are absolutely right.
I see a dilemma here.
If I think of your transaction in isolation, it's free but not a market: it's a bargaining problem.
If I think of your transaction as part of the broader labour market, it's not really free; it's influenced by government regulations & macroeconomic policies, if only through their effects on the general price level, the general wage level, and the supply & demand for labour.
I reckon your transaction is an example of what mtraven's talking about rather than a counterexample!
Scientific materialism is overrated -- because the things we care about (like rationalism, or truth, or well-being) are not material things. The current theories for how ideas are implemented in the material world (such as AI) are grossly inadequate to the task.
Responding to the question "What do you believe that most people on this site don't?":
I believe that people who try and sound all "edgy" and "serious" by intoning what they believe to be "blunt truths" about race/gender differences are incredibly annoying for the most part. I just want to roll my eyes when I see that kind of thing, and not because I'm a "slave to political correctness", but because I see so many poorly defined terms being bandied about and a lot of really bad science besides.
(And I am not going to get into a big explanation right here, right now, of why I think what I think in this regard -- I'm confident enough in this area here to take whatever status hit my largely-unqualified statement above brings. If I write an explanation at some point it will be on my own terms and I frankly don't care who does or doesn't think I'm smart in the meantime.)
Racial differences and gender differences are very different topics. Especially if we are interested in discussing whether, or the extent to which, they are rooted in biology.
I agree (and I see sex/gender as a far more valid biological concept than "race", for the record), but I've noticed a correlation between people who would describe themselves in terms like "race realist" and people who think there's good evidence for women being "less suited" to math and science than men, cognitively speaking. (And again, getting deeply into this right now is not something I'm going to do; it would be wandering off-topic, for one thing.)
I think people should be allowed to sell their organs if they want to. We don't consider it immoral to pay a surgeon to transplant a kidney, or to pay the nurse who helps him, so I don't see why it's immoral to pay the person who provides that kidney. I also think we should pay people in medical experiments. Pharmaceutical companies could hire private rating agencies to judge proposed human experiments much as Standard & Poor's rates bonds; that way people would know what they're getting into. The pain/danger index would range from slightly uncomfortable/probably harmless to agony/probably fatal, and payment would be tied to that index. A market would develop, open to anybody who was interested. It would be in the financial interest of the drug companies to make the tests as safe and comfortable as possible. All parties would benefit, medical research would get a huge boost, and everybody would have a new way to make money if they chose to do it.
I also think that if you believe in capital punishment it is foolish to kill the condemned before performing some medical experiments on him first.
Maybe I'm just projecting, but I doubt the first thing is a controversial position here.
I think we do pay people in medical experiments.
Cryonics membership is a rational choice.
My chances of surviving death through resuscitation are good (as such things as chances to beat death go), but would be better if I convinced more people that cryonics is a rational choice.
In my day to day I am more concerned with my job than convincing others on the subject of cryonics, even though the latter is probably more valuable to my long term happiness. Am I not aware of what I value? Why do I not structure my behavior to match what I believe I value? If I believed that cryonics would buy me an additional 1000 years of life wouldn't 10 years of total dedication to its cause be worthwhile? Does this mean that I do not actually believe in cryonics, but only profess to believe in cryonics?
Americans no longer significantly value liberty and this will be to the detriment of our society.
A large number of Americans accept the torture of religious enemies as necessary and just.
Male circumcision is more harmful than we realize and one cause (among many) of sexual dysfunction among couples.
Most humans would be happier if polyamory was socially acceptable and encouraged.
I don't believe in male bisexuality, though I do believe in it for women.
From your other comments, I believe you're confusing "I don't believe men who say they are bisexual" with "I don't believe men can be bisexual."
It's clear to me that, in American society at least, the majority of bisexual men are to be found among the ranks of men who would never identify as anything but straight, sometimes even to the men they have sex with(!). Conversely, many of the men that DO identify as bisexual are merely finding a graceful way to transition to a homosexual love life.
Thus, even if it's true that a man who identifies as bisexual is most likely gay (though I doubt it -- especially among men who have been out as bisexual for more than, say, 5 years), that is not an indication that male bisexuality doesn't exist -- only that self-professed bisexuality is scantily coterminous with a bisexual orientation in males.
Being wrong in the way that you are wrong will probably not damage the accuracy of your insight when conversing with individuals about their sexuality (you'll correctly assign a high probability to his being gay if he says he's bisexual), but it probably WILL damage that accuracy when analyzing human populations in the abstract (you'll incorrectly assign a low probability to the existence of large ranks of males who engage in and enjoy sexual relations with both men and women).
As I've said elsewhere, possibly even on this thread... if my culture makes it more difficult for men to identify as queer than as straight, then even if sexual orientation varies (like many other things) continuously within the population, I should expect the majority of more-than-negligibly male-oriented men to identify as straight.
I am a male bisexual. I believe this with a high level of probability, primarily due to my ability to have erections from naked or sexual pictures of both genders. Also the fact that I have felt heavy romantic interest for both genders would seem to indicate that this is very possible.
If you want documented research done into male bisexuality, look into the research of Alfred Kinsey. He researched all forms of sexuality extensively, and was a male bisexual himself.
Edit: Also, the society I have been raised in has practically no instances of homophobia, so I don't believe that could be a factor.
If it's not against the implicit rules of this thread to ask, on what evidence do you believe this?
BTW, it may not be obvious, but I can tell you that ciphergoth is not talking about a hypothetical example.
Define bisexuality.
Voted up from -1 because I want you to clarify. Do you believe that bisexuality is ubiquitous in women, but present in only some men? Or that it is completely absent in men, but present though not ubiquitous in women? Or any other combination of absent, present, or ubiquitous in either women or men?
I think bisexuality is present (but not ubiquitous) in women, and extremely rare in men.
You believe it's much rarer than female bisexuality, or you believe there are literally zero instances? If you met a man who had slept with several men and several women and continued to sleep with both, what would you tend to assume about his sexuality?
That the psychoanalytic theory of psychodynamics is in some sense true, and that it is a useful way to approach the mind. My belief comes from personal experience in psychotherapy, albeit a quite unorthodox one. I have found that explanations in Freudian terms such as the unconscious, ego, superego, Eros and Thanatos help to greatly clarify my mental life in a way that is not only extremely useful but also seems quite accurate.
I should clarify that I reject just about everything to come out of academic psychoanalytic theory, especially in literary theory (I'm an English major), and that most clinicians fail to correlate it with real mental phenomena. I know that this sounds--and should sound--extremely suspect to any rationalist. But a particular therapist has convinced me very strongly that she is selling something real, not only from my personal experience in therapy, but in how she successfully treats extremely successful people and how I don't know anyone who wins at life quite so hard as she does.
This would make an awesome Edge topic if they could offer sufficient assurance of anonymous answers.
Their 2005 annual question was pretty close to this one and has many fascinating answers: What do you believe is true even though you cannot prove it?
It wasn't anonymous or pseudo-anonymous though.
I don't like libertarianism. It makes some really good points, and clearly there are lots of things government should stay out of, but the whole narrative of government as the evil villain that can never do anything right strikes me as more of a heroic myth than a useful way to shape policy. This only applies to libertarians who go overboard, though. I like Will Wilkinson, but I hate Lew Rockwell.
I think the better class of mystics probably know some things about the mind the rest of us don't. I tend to trust yogis who say they've achieved perfect bliss after years of meditation, although I think there's a neurological explanation (and would like to know what it is). I think Crowley's project to systematize and scientifically explain mysticism had some good results even though he did go utterly off the deep end.
I am not sure I will sign up for cryonics, although I am still seriously considering it. The probability of ending up immortal and stuck in a dystopia where I couldn't commit suicide scares me too much.
I have a very hard time going under 2-3% belief in anything that lots of other people believe. This includes religion, UFOs, and ESP. Not astrology though, oddly enough; I'll happily go so low on that one it'd take exponential notation to describe properly.
I like religion. I don't believe it, I just like it. Greek mythology is my favorite, but I think the Abrahamic religions are pretty neat too.
I am a very hard-core utilitarian, and happily accept John Maxwell's altruist argument. I sorta accept Torture vs. Dust Specks on a rational but not an emotional level.
I am still not entirely convinced that irrationality can't be fun. I sympathize with some of those Wiccans who worship their gods not because they believe in them but just because they like them. Of course, I separate this from belief in belief, which really is an evil.
Plenty of libertarians agree with you on #1.
Personally I'd prefer an eternity of being tortured by an unFriendly AI to simple death. Is that controversial?
If I have surgery, I want anesthesia; if I have a pain flare at 6 or above, I take sleeping pills and try to sleep. So I prefer losing a few hours of conscious life to experiencing moderate to severe pain for a few hours. I would not want to be anesthetized for six months I'd otherwise spend at a 6, but I would if it were a 7.
I think the criterion is "Yeah, screaming in pain, but can I watch Sherlock?". If I can do moderately interesting things then I can just get used to the pain, but if the pain is severe enough to take over my whole mind then no dice. Transhuman torture is definitely the latter.
I'm not sure it's fair to compare "anesthetized for six months" to "dead, permanently".
Well, I don't have much experience with death and eternal life. What goes wrong in extrapolating from hours or months to eternity?
Well, you wake up after the six months. Unless you expect to wake up from death (in which case it's a perfectly logical argument, I think) then there does seem to be a difference. As I said, I'm not sure if this difference is relevant, but it seems like it might be.
I'm curious about your personal experiences with physical pain. What is the most painful thing you've experienced and what was the duration?
I'm sympathetic to your preference in the abstract, I just think you might be surprised at how little pain you're actually willing to endure once it's happening (not a slight against you, I think people in general overestimate what degree of physical pain they can handle as a function of the stakes involved, based largely on anecdotal and second hand experience from my time in the military).
At the risk of being overly morbid, I have high confidence (>95%) that I could have you begging for death inside of an hour if that were my goal (don't worry, it's certainly not). An unfriendly AI capable of keeping you alive for eternity just to torture you would be capable of making you experience worse pain than anyone ever has in the history of our species so far. I believe you that you might sign a piece of paper to pre-commit to an eternity of torture vice simple death. I just think you'd be very very upset about that decision. Probably less than 5 minutes into it.