All of ZoltanBerrigomo's Comments + Replies

If you limit yourself to a subset of features such that you are no longer writing in a Turing-complete format, then you may be able to have a program capable of automatically proving that code reliably.

Right, that is what I meant.

Here is my attempt at a calculation. Disclaimer: this is based on googling. If you are actually knowledgeable in the subject, please step in and set me right.

There are 10^11 neurons in the human brain.

A neuron will fire about 200 times per second.

It should take a constant number of flops to decide whether a neuron will fire -- say 10 flops (no need to solve a differential equation; neural networks usually use discrete heuristics for something like this).

I want a society of 10^6 orcs running for 10^6 years

As you suggest, let's let the simulation ru... (read more)
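Since the comment is cut off above, here is a minimal sketch (in Python) of how the arithmetic with those figures might be completed; the seconds-per-year constant and the final total are my own assumptions, not part of the original comment.

```python
# Back-of-the-envelope flop count for the orc society, using the figures
# quoted above (all of them rough assumptions).
NEURONS_PER_BRAIN = 1e11     # neurons in a human (or orc) brain
FIRING_RATE_HZ = 200         # firings per neuron per second
FLOPS_PER_DECISION = 10      # flops to decide whether a neuron fires
NUM_ORCS = 1e6               # size of the simulated society
SIM_YEARS = 1e6              # simulated time
SECONDS_PER_YEAR = 3.15e7    # assumption: roughly 31.5 million seconds per year

flops_per_brain_second = NEURONS_PER_BRAIN * FIRING_RATE_HZ * FLOPS_PER_DECISION
total_flops = flops_per_brain_second * NUM_ORCS * SIM_YEARS * SECONDS_PER_YEAR
print(f"{total_flops:.1e} flops")  # roughly 6.3e33 flops in total
```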

Still, an inability to realise what you are doing seems rather dangerous.

So far, all I've done is post a question on lesswrong :)

More seriously, I do regret it if I appeared unaware of the potential danger. I am of course aware of the possibility that experiments with AI might destroy humanity. Think of my post above a suggesting a possible approach to investigate -- perhaps one with some kinks as written (that is why I'm asking a question here) but (I think) with the possibility of one day having rigorous safety guarantees.

0Slider
I don't mean the writing of this post but, in general, the principle of trying to gain utility from minimising self-awareness. Usually you don't make processes as opaque as possible to increase their chances of going right. On the contrary, the transparency of at least social and political processes is seen as pretty important. If we are going to create minilife just to calculate 42, seeing it get calculated should not be a super extra temptation. Preventing the "interrupt/tamper" decision by limiting options is a rather backwards way of doing that; it would be better to argue why it should not be chosen even if it were available.

I think this calculation is too conservative. The reason (as I understand it) is that neurons are governed by various differential equations, and simulating them accurately is a pain in the ass. We should instead assume that deciding whether a neuron will fire takes a constant number of flops.

I'll write another comment which attempts to redo your calculation with different assumptions.

It seems to me that by the time we can do that, we should have figured out a better way to create AI.

But will we have figured out a way to reap the gains of AI safely for humanity?

but more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate.

Good point -- this undermines a lot of what I wrote in my update 1. For example, I have no idea if F = m d^3x/dt^3 would result in a world that is capable of producing intelligent beings.

I should at some point produce a version of the above post with this claim, and other questionable parenthetical remarks I made, deleted, or at least acknowledging that they require further argumentation; they are not necessary fo... (read more)

I guess I am willing to bite the bullet and say that, as long as entity X prefers existence to nonexistence, you have done it no harm by bringing it into being. I realize this generates a number of repulsive-sounding conclusions, e.g., it becomes ethical to create entities which will live, by our 21st century standards, horrific lives.

At least some of them will tell you they would rather not have been born.

If one is willing to accept my reasoning above, I think one can take one more leap and say that, statistically, as long as the vast majority of these entities prefer existing to never having been brought into being, we are in the clear.

1compartmentalization
If you use the entities' preferences to decide what's ethical, then everything is (or can be), because you can just adjust their preferences accordingly.

The theorem you cite (provided I understood you correctly) does not preclude the possibility of checking whether a program written in a certain pre-specified format will have bugs. Bugs here are defined to be certain undesirable properties (e.g., looping forever, entering certain enumerated states, etc.).

Baby versions of such tools (which automatically check whether your program will have certain properties from inspecting the code) already exist.
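To make this concrete, here is a toy sketch of such a "baby" check. It is not any particular existing tool; it simply uses Python's ast module, and the property checked is my own illustrative choice: it inspects source code and flags `while True:` loops that contain no break statement, i.e., one crude, decidable approximation of "loops forever".

```python
import ast

def has_breakless_infinite_loop(source: str) -> bool:
    """Flag `while True:` loops with no `break` anywhere inside them."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            # A literal `while True:` has a constant-True test.
            if isinstance(node.test, ast.Constant) and node.test.value is True:
                if not any(isinstance(child, ast.Break) for child in ast.walk(node)):
                    return True
    return False

SUSPECT = "while True:\n    x = 1  # never breaks out\n"
print(has_breakless_infinite_loop(SUSPECT))  # True
```

This is only a heuristic over one narrow property, not a proof of correctness; real static analyzers are far more sophisticated, but the principle -- deriving facts about behavior by inspecting code within a restricted setting -- is the same.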

1HungryHobo
If the language and format you're using is Turing complete, then you can't write a program which is guaranteed to find all bugs. If you limit yourself to a subset of features such that you are no longer writing in a Turing-complete format, then you may be able to have a program capable of automatically proving that code reliably. Static analysis tools do exist but still don't guarantee 100% accuracy and are generally limited to the first couple of levels of abstraction. Keep in mind that if you want to be 100% certain of no bugs, you also have to prove the compiler, the checker, any code your program interacts with, and the hardware on which the code runs.

Nice idea...I wrote an update to the post suggesting what seemed to me to be a variation on your suggestion.

About program checking: I agree completely. I'm not very informed about the state of the art, but it is very plausible that what we know right now is not yet up to task.

0HungryHobo
I'm not sure it's just a matter of what we know right now; it's mathematically provable that you can't create a program which can find all security flaws or prove every provable piece of code, so bugs are pretty much inevitable no matter how advanced we become.

I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped up humans like that,

Why not? You are pretty smart, and all you are is a combination of 10^11 or so very "dumb" neurons. Now imagine a "being" which is actually a very large number of human-level intelligences, all interacting...

0Silver_Swift
Yeah, that didn't come out as clear as it was in my head. If you have access to a large number of suitable less intelligent entities, there is no reason you couldn't combine them into a single, more intelligent entity. The problem I see is the computational resources required to do so.

Some back-of-the-envelope math: I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn't accurate (anymore), it's probably still a good enough place to start. You mention running the simulation for a million years of simulated time; let's assume that we can let the simulation run for a year rather than seconds. That is still 8 orders of magnitude faster than the simulated cat. But we're not interested in what a really fast cat can do; we need human-level intelligence. According to a quick wiki search, a human brain contains about 100 times as many neurons as a cat brain. If we assume that this scales linearly (which it probably doesn't), that's another 2 orders of magnitude. I don't know how many orcs you had in mind for this scenario, but let's assume a million (this is a lot fewer humans than it took in real life before mathematics took off, but presumably this world is more suited for mathematics to be invented); that is yet another 6 orders of magnitude of processing power that we need.

Putting it all together, we would need a computer that has at least 10^16 times more processing power than modern supercomputers. Granted, that doesn't take into account a number of simplifications that could be built into the system, but it also doesn't take into account the other parts of the simulated environment that require processing power. Now I don't doubt that computers are going to get faster in the future, but 10 quadrillion times faster? It seems to me that by the time we can do that, we should have figured out a better way to create AI.
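For what it's worth, the orders of magnitude in the comment above can be tallied mechanically. A minimal sketch taking the comment's own figures at face value (the variable names and grouping are mine):

```python
# Orders of magnitude over today's supercomputers, per the estimate above.
speed_gap = 8        # 10^6 simulated years in 1 real year, starting from a
                     # cat-brain simulation that runs at 1% of real time
human_vs_cat = 2     # ~100x more neurons, assumed (dubiously) to scale linearly
population = 6       # 10^6 orcs instead of a single brain

total = speed_gap + human_vs_cat + population
print(f"~10^{total} x current supercomputers")  # ~10^16
```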

It is very, very difficult not to give a superintelligence any hints of how the physics of our world work.

I wrote a short update to the post which tries to answer this point.

Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware

I believe they should have no ability whatsoever to detect fluctuations in the speed of the simulation.

Consider how the world of World of Warcraft appears to an orc inside the game. Can it tell the speed at which the hardware is running the game?

It can't. What it can ... (read more)

1Silver_Swift
You are absolutely correct, they wouldn't be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating-point values).

About update 1: It knows our world very likely has something approximating Newtonian mechanics; that is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings that chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information. I don't know if the AI would be able to figure out anything useful from it, but I wouldn't bet the future of humanity on it.

About update 2: That does work, provided that it is implemented correctly, but it only works for problems that can be automatically verified by non-AI algorithms.

These are good points. Perhaps I should not have said "interact" but chosen a different word instead. Still, its ability to play us is limited since (i) we will be examining the records of the world after it is dead, and (ii) it has no opportunity to learn anything about us.

Edit: we might even make it impossible for it to game us in the following way. All records of the simulated world are automatically deleted upon completion -- except for a specific prime factorization we want to know.

This is a really bad argument for safety.

You are right, of ... (read more)

  1. When talking about dealing and (non)interacting with real AIs, one is always talking about a future world with significant technological advances relative to our world today.

  2. If we can formulate something as a question about math, physics, chemistry, biology, then we can potentially attack it with this scheme. These are definitely problems we really want to solve.

  3. It's true that if we allow AIs more knowledge and more access to our world, they could potentially help us more -- but of course the number of things that can go wrong has to increase as well. Perhaps a compromise which sacrifices some of the potential while decreasing the possibilities that can go wrong is better.

1+1=2 is true by definition of what 2 means

Russell and Whitehead would beg to differ.

0ChristianKl
I learned math with the Peano axioms, and we considered the symbol 2 to refer to 1+1, 3 to (1+1)+1, and so on. However, even if you consider it to be more complicated, it still stays an analytic statement and isn't a synthetic one. If you define 2 differently, what's the definition of 2?
2gjm
"True by definition" is not at all the same as "trivial" or "easy". In PM the fact that 1+1=2 does in fact follow from R&W's definition of the terms involved.

Sometimes you're dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence.

And how should you (rationally) decide which kind of domain you are in?

Answer: using reason, not emotions.

Example: if you notice that your emotions have been a good guide in understanding what other people are thinking in the past, you should trust them in the future. The decision to do this, however, is an application of inductive reasoning.

1Kaj_Sotala
Sure.

No. CFAR rationality is about aligning system I and system II. It's not about declaring system I outputs to be worthy of being ignored in favor of system II outputs.

I believe you are nitpicking here.

If your reason tells you 1+1=2 but your emotions tell you that 1+1=3, being rational means going with your reason. If your reason tells you that ghosts do not exist, you should believe this to be the case even if you really, really want there to be evidence of an afterlife.

CFAR may teach you techniques to align your emotions and reason, but this does not ... (read more)

6Kaj_Sotala
Being rational involves evaluating various claims and empirical facts, using the best evidence that you happen to have available. Sometimes you're dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence. Both are information-processing systems that have evolved to make sense of the world and orient your behavior appropriately; they're just evolved for dealing with different tasks. This means that in some domains explicit reasoning will provide better evidence, and in some domains emotions will provide better evidence. Rationality involves figuring out which is which, and going with the system that happens to provide better evidence for the specific situation that you happen to be in.
0ChristianKl
One of the claims is analytic. 1+1=2 is true by definition of what 2 means. There's little emotion involved. When it comes to an issue such as whether there is evidence for the existence of ghosts, neither rationality after Eliezer's sequences nor CFAR argues that emotions play no role. Noticing when you feel the emotion of confusion because your map doesn't really fit is important. The beauty of mathematical theories is a guiding star for mathematicians. Basically any task that doesn't need emotions or intuitions is better done by computers than by humans. To the extent that humans outcompete computers, there's intuition involved.

Sure, you can work towards feeling more strongly about something, but I don't believe you'll ever be able to match the emotional fervor the partisans feel -- I mean here the people who stew in their anger and embrace their emotions without reservation.

As a (rather extreme) example, consider Hitler. He was able to sway a great many people with what were appeals to anger and emotion (though I acknowledge there is much more to the phenomenon of Hitler than this). Hypothetically, if you were a politician from the same era, say a rational one, and you understood that the way to persuade people is to tap into the public's sense of anger, I'm not sure you'd be able to match him.

7gjm
"The best lack all conviction, and the worst / Are full of passionate intensity" -- W B Yeats "The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt" -- Bertrand Russell
1ChristianKl
Julian Assange was one of the first people to bring tears to my eyes when he spoke and I saw him live. At the same time, Julian's manifesto is rational to the extent that it makes its case with graph theory. Interestingly, the "We Lost The War" speech that articulated the doctrine that we need to make life easier for whistleblowers by providing a central venue to which they can post their documents was 10 years ago. A week ago there was a "Ten years after 'We Lost The War'" talk at this CCC congress. Rop Gonggrijp closes by describing the new doctrine as: I think that's the core strategy.

We don't want eternal September, so it's no problem if the core community uses language that's not understood by outsiders. We can have our cuddle piles and feel good with each other. Cuddle piles produce different emotions than anger, but they also create emotions that produce strong bonds. If we really need strong charismatic speakers who are world class at persuasion, I think that Valentine is currently at that level (as is Julian Assange in the hacker community). It's not CFAR's mission to maximize for charisma, but nothing that CFAR does prevents people from maximizing charisma. If someone wants to develop themselves into that role, Valentine wrote down his body language secrets in http://lesswrong.com/lw/mp3/proper_posture_for_mental_arts/ .

A great thing about the prospects of our community is that there's money seeking Effective Altruist uses. As EA grows, there might be an EA person running for office in a few years. If other EA people consider his run to have prospects for making a large positive impact, he can raise money from them. But as Rop says in the speech, we should play for the long term. We don't need a rationalist to run for office next year.

To the extent that it does require luck, that simply means that it's important to have more people with rationality + competence + caring. If you have many people, some will get lucky.

The "little bit of luck" in my post above was something of an understatement; actually, I'd suggest it requires a lot of luck (among many other things) to successfully change the world.

I think you might be pattern-matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.

Not sure if I am, but I believe I am making a correct claim abo... (read more)

3ChristianKl
No. CFAR rationality is about aligning system I and system II. It's not about declaring system I outputs to be worthy of being ignored in favor of system II outputs. The alternative is working towards feeling more strongly for the fundamental principles than caring about the fights. A person who cares strongly for his cause doesn't need to fake emotions.

A very interesting and thought provoking post -- I especially like the Q & A format.

I want to quibble with one bit:

How can I tell there aren't enough people out there, instead of supposing that we haven't yet figured out how to find and recruit them?

Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems. Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compe

... (read more)
5ChristianKl
To the extent that it does require luck, that simply means that it's important to have more people with rationality + competence + caring. If you have many people, some will get lucky. I think the term "unreasonable confidence" can be misleading. It's possible to very confidently say "I don't know". At the LW Community Camp in Berlin, I consider Valentine of CFAR to have been the most charismatic person in attendance. When speaking with Valentine, he said things like: "I think it's likely that what you are saying is true, but I don't see a reason why it has to be true." He also very often told people that he might be wrong and that people shouldn't trust his judgements as strongly as they do. I think you might be pattern-matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.
1Gleb_Tsipursky
Really good point! In fact, there is a specific challenge in that the rationality community itself lashes back against rationalists using such tactics, as I have experienced myself. So this is a particularly challenging area for impacting the world.

For those people who insist, however, that the only thing that is important is that the theory agrees with experiment, I would like to make an imaginary discussion between a Mayan astronomer and his student...

These are the opening words of a ~1.5 minute monologue in one of Feynman's lectures; I won't transcribe the remainder but it can be viewed here.

Not sure...I think confidence, sales skills, and ability to believe and get passionate about BS can be very helpful in much of the business world.

Side-stepping the issue of whether rationalists actually "win" or "do not win" in the real world, I think a priori there are some reasons to suspect that people who exhibit a high degree of rationality will not be among the most successful.

For example: people respond positively to confidence. When you make a sales pitch for your company/research project/whatever, people like to see that you really believe in the idea. Often, you will win brownie points if you believe in whatever you are trying to sell with nearly evangelical fervor... (read more)

0ChristianKl
CFAR's Valentine manages to have very high charisma. He also goes out of his way to tell people not to believe him too much and to say explicitly that he's not certain. In http://lesswrong.com/lw/mp3/proper_posture_for_mental_arts/ he suggests: Having this strong sense of something worth protecting seems to be more important than believing that individual ideas are necessarily correct. You don't develop a strong sense of something worth protecting by doing debiasing techniques, but at the same time it's a major part of rationality!CFAR and rationality!HPMOR. At the same time, there are people in this community plagued by akrasia who don't have that strong sense on an emotional level.
0Adam Zerner
I agree with your point about the value of appearing confident, and that it's difficult to fake.* I think it's worth bringing up, but I don't think it's a particularly large component of success. It depends on the field, but I still don't think there are many fields where it's a notably large component of success (maybe sales?). *I've encountered this. I'm an inexperienced web developer, and people sometimes tell me that I should be more confident. At first this hurt me very slightly. Almost negligibly so. Recently, I've been extremely fortunate to get to work with a developer who also reads LW and understands confidence. I actually talked to him about this today, and he mirrored my thoughts that with most people, appearing more confident might benefit me, but that with him it makes sense to be honest about my confidence levels (like I have been).

I'm very fond of this bit by Robin Hanson:

A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions.

[This comment is no longer endorsed by its author]
2Vaniver

I think the lumping of various disciplines into "science" is unhelpful in this context. It is reasonable to trust the results of the last round of experiments at the LHC far more than the occasional psychology paper that makes the news.

I've not seen this distinction made as starkly as I think it really needs to be made -- there is a lot of difference from physics and chemistry, where one can usually design experiments to test hypotheses, to geology and atmospheric science, where one mostly fits models to whatever data happens to be available, to psychology, where the results of experiments seem to be very inconsistent and publication bias is a major cause of false research results.

0telomerase
...and then on to any specific field which has political uses, where "publication bias" can reach Lysenko levels ;) So just never study psychology and you won't go crazy. It all works out...

I agree with 99.999% of what you say in this comment. In particular, you are right that the parody only works in the sense of the first of your bulleted points.

My only counterpoint is that I think this is how almost every reader will understand it. My whole post is an invitation to consider a hypothetical in which people say about strength what they now say about intelligence and race.

I confess that I have not read much of what has been written on the subject, so what I am about to say may be dreadfully naive.

A. One should separate the concept of effective altruism from the mode-of-operation of the various organizations which currently take it as their motto.

A.i. Can anyone seriously oppose effective altruism in principle? I find it difficult to imagine someone supporting ineffective altruism. Surely, we should let our charity be guided by evidence, randomized experiments, hard thinking about tradeoffs, etc etc.

A.ii. On the other han... (read more)

2Ishaan
I emphatically don't, but yes, one can. The quantitative/reductionist attitude you've outlined here biases us towards easily measurable causes. Some examples of difficult-to-measure causes include: 1) all forms of funding-hungry research, scientific or otherwise; 2) most x-risks, including this forum's favorite AI risk; 3) causes which claim to influence social, economic, military, and political matters in complex but possibly high-impact ways; 4) (typically local and community-driven) causes which do good via subtle virtuous cycles, human connections, and various other intangibles.

I'm not sure I understand your criticism. I don't mean this in a passive aggressive sense, I really do not understand it. It seems to me that "the stupid," so to speak, perfectly carries over between the parody and the "original."


A. Imagine I visit country X, where everyone seems to be very buff. Gyms are everywhere, the parks are full of people practicing weight-lifting, and I notice people carrying heavy objects with little visible effort. When I return home, I remark to a friend that people in X seem to be very strong.

My friend ... (read more)

1gjm
There are multiple things that could be wrong with your friend's response.

* It could draw a wrong inference given its premises.
* It could have wrong premises.
* It could, even if its conclusion is technically correct in some sense, be misleading.
* It could, even if its conclusion is correct, be a just plain weird response.

As regards the first of these, the three situations are indeed very closely analogous. (Not exactly -- you chose different arguments in the different cases. A: ill-defined, ugly-history, no-good-measure, within-versus-between. B: ill-defined, ugly-history, within-versus-between. C: ill-defined, ugly-history, no-good-measure.)

As regards the second, I think not so closely. For instance, so far as I know the notion of strength doesn't have at all the sort of ugly history that the notion of race does, nor the (less ugly) sort that the notion of intelligence does. (Maybe it has a different sort of ugly history, something to do with metaphorical use of "strength" by warmongering politicians perhaps.) And I bet you can get something nearer to a usable culture-independent test of strength than to a usable culture-independent test of intelligence. And, as I said earlier, I greatly doubt that there's more variation in anything anyone expects to be a matter of strength within strong/weak than between those groups. (Incidentally, you wrote "more variation between ... than among", which is the wrong way around for that argument.)

I'm not sure how much we should care about the third and fourth of those points, since I think we agree that the conclusion is dubious at best in each case. But for sure the context is different (e.g., one reason why people who think there's no such thing as race or intelligence bother to say so is that there are other people saying: oh yes there are, and look, it turns out that this traditionally disadvantaged race to which I happen not to belong is less intelligent than this traditionally advantaged race to which I happen

First, I'm not so sure: if someone is actually inconsistent, then pointing out the inconsistency may be the better (more charitable?) thing to do rather than pretending the person had made the closest consistent argument.

For example: there are a lot of academics who attack reason itself as fundamentally racist, imperialistic, etc. They back this up with something that looks like an argument. I think they are simply being inconsistent and contradictory, rather than meaning something deep not apparent at first glance.

More importantly, I think your conjectur... (read more)

0Douglas_Knight
Ableism is a lot more recent (or at least more recently popular) than the idea that intelligence does not exist. I don't think it's very relevant.

For what it's worth, I have not downvoted any of your posts. Although we seem to be on opposite sides of this debate, I appreciate the thoughtful disagreement my post has received.

-1gjm
And for what it's worth, I thought you probably hadn't. (Indiscriminate downvoting doesn't tend to go hand in hand with reasoned and reasonable disagreement.)

I would bet the opposite on #4, but that is beside the point. On #4 and #6, the point is that even if everything I wrote was completely correct -- e.g., if the scientific journals were actually full of papers to the effect that there is no such thing as a universal test of strength because people from different cultures lift things differently -- it would not imply there is no such thing as strength.

On #5, the statement that race is a social construct is implicit. Anyway, as I said in the comment above, there are a million similar statements that are being... (read more)

2gjm
Certainly true, but the most conspicuous problem with the parody argument in this case isn't that; it's that the statements about scientific journals etc. are spectacularly false. (Much less so for intelligence.) So someone reading your parody sees the parody argument and correctly says "wow, that's really stupid" -- but what they're probably noticing is stupid is something that doesn't carry across to the original. And (I'm repeating things that have been said already in this discussion) while indeed any quantity of scientific papers finding problems with a universal strength test wouldn't imply that there is no such thing as strength, they would give good reason to avoid treating strength as a single simple thing that can be easily compared across cultures -- and that is the version of the "no universal test, so no such thing as intelligence" argument that's actually worth engaging with, if you are interested in making intellectual progress rather than just mocking people who say silly oversimplified and overstated things. Yes, I know; that's why I said "It's true, though, that some people say race is a social construct".

First, only some of the attacks I cited were brief and sketchy; others were lengthier. Second, I have cited a few such attacks due to time and space constraints, but in fact they exist in great profusion. My personal impression is that the popular discourse on intelligence and race is drowning in confused rhetoric along the lines of what I parodied.

Finally, I think the last possibility you cite is on point -- there are many, many people who are not thinking very clearly here. As I said, I think these people also have come to dominate the debate on this su... (read more)

A. I think at least some people do mean that concepts of intelligence and race are, in some sense, inherently meaningless.

When people say

"race does not exist because it is a social construct"

or that race does not exist because

"amount of variation within races is much larger than the amount of variation between races,"

I think it is being overly charitable to read that as saying

"race is not a scientifically precise concept that denotes intrinsic, context-independent characteristics."

B. Along the same lines, I believe I am... (read more)

5Alejandro1
It is true that normally, taking people at their word is charitable. But if someone says that a concept is meaningless (when discussing it in a theoretical fashion), and then proceeds to use it informally in ordinary conversation (as I conjectured that most people do with race and intelligence), then we cannot take them literally at their word. I think that something like my interpretation is the most charitable in this case.

See the reply I just wrote to gjm for an explanation of my motivations.

When I was writing this, I thought the intent to parody would be clear; surely no one could seriously suggest we have to strike strength from our dictionaries? I seem to have been way off on that. Perhaps that is a reflection on the internet culture at large, where these kinds of arguments are common enough not to raise any eyebrows.

Anyway, I went one step further and put "parody" in the title.

0Algon
Ah, that makes sense. I would probably put something like "this is a parody of the arguments used for 'there is no such thing as intelligence', etc." as some people (AKA me) might not pick up on what you're parodying. Though perhaps I'm just in a small minority, and I don't read internet debates as often as others do. Thanks for the clarification, by the way.

I was not trying to suggest that intelligence and strength are as alike as race and strength. Rather, I was motivated by the observation that there are a number of arguments floating around to the effect that,

A. Race doesn't exist

B. Intelligence doesn't exist.

and, actually, to a lesser extent,

C. Rationality doesn't exist (as a coherent notion).

The arguments for A,B,C are often dubious and tend to overlap heavily; I wanted to write something which would show how flawed those arguments are through a reductio ad absurdum.

To put it another way, even if str... (read more)

2gjm
(Separate comment because I'm making an entirely separate point.) I wonder whether any nontrivial proposition, however well supported, could survive the treatment you are meting out here. The procedure seems to be: (1) find a number of brief and sketchy attacks on something, (2) do a search-and-replace to turn them into attacks on something else, (3) quote maybe a sentence or two from each, and (4) protest that none of these one-or-two-sentence attacks suffices to establish that the thing they're attacking is bad or unreal. I'm not sure how supportable the claims "race doesn't exist" and "intelligence doesn't exist" are (though clearly the answer will depend a lot on exactly how those claims are interpreted) but I'm quite certain that if either of them is true then a decent argument for it will take (let's say) at least a page or two. If someone says "race doesn't exist" or "intelligence doesn't exist" followed by a one-sentence soundbite, they probably aren't trying to "establish the conclusion" so much as gesturing towards how an argument for the conclusion might go. (Or maybe they really think their soundbite is enough, but in that case what we should conclude is that the person in question isn't thinking very clearly and that if we really want to evaluate their claims we need to find a better statement of them.)
1gjm
This sort of analogical reductio ad absurdum only succeeds in so far as whatever makes the parody arguments visibly bad applies to the original arguments too. This is more or less true for your arguments when they are parodying "no such thing as intelligence" (though I don't think the conclusion "there is no such thing as strength" is particularly absurd, if it's understood in a way parallel to what people mean when they say there's no such thing as intelligence). But it's clearly not true, e.g., for #4. If you divide the human species up into races and look at almost any characteristic we have actual cause to be interested in, then the within-group differences do come out larger than the between-group differences. Whereas, e.g., if you divide people up into those who can and those who can't lift a 60kg weight above their heads, I bet the between-group differences for many measures of strength will be bigger than the within-group differences. #5 is interesting because what Toni Morrison actually calls "a social construct" at the other end of the link is racism, not race. It's true, though, that some people say race is a social construct. But so far as I can see the things they mean by this don't have much in common with anything anyone would seriously claim about strength. #6 takes a not-very-convincing argument from authority against belief in race and turns it into a completely absurd argument from authority against belief in strength, because in fact there are good scientists saying that race is an illusion or a social construct or something of the sort, and there aren't good scientists saying the same thing about strength. It seems to me that your parodies of arguments in class A are consistently less successful than those of arguments in class B -- which is entirely unsurprising because intelligence and strength are similar things, whereas race and strength are much less so. [EDITED to fix a weird formatting problem. I think start-of-line octothorpes must
4Alejandro1
When people say things like "intelligence doesn't exist" or "race doesn't exist", charitably, they don't mean that the folk concepts of "intelligence" or "race" are utterly meaningless. I'd bet they still use the words, or synonyms for them, in informal contexts, analogously to how we informally use "strength". (E.g. "He's very smart"; "They are an interracial couple"; "She's stronger than she looks".) What they object to is treating them as scientifically precise concepts that denote intrinsic, context-independent characteristics. I agree with gjm that your parody arguments against "strength" seem at least superficially plausible if read in the same way that the opponents of "race" and "intelligence" intend theirs.

Hmm, on second thought, I added a [/parody] tag at the end of my post - just in case...

0Algon
You know, I must applaud you. You really surprised me there. After reading that I could only say "What?" Was this meant as a prank or just as a humorous piece? I'm quite curious to know your intentions here.
0Elo
This addition was helpful. Although I do wonder if the concept can be steelmanned a bit.
6Furcas
My cursor was literally pixels away from the downvote button. :)

For what it's worth, I have observed a certain reverence in the way great mathematicians are treated by their less accomplished colleagues that can often border on the creepy. This seems to be somewhat specific to math, in that it exists in other disciplines with lesser intensity.

But I agree, "dysfunctional" seems to be a more apt label than "cult." May I also add "fashion-prone?"

The links you give are extremely interesting, but, unless I am missing something, it seems that they fall short of justifying your earlier statement that math academia functions as a cult. I wonder if you would be willing to elaborate further on that?

2JonahS
I'll be writing more about this later. The scariest thing to me is that the most mathematically talented students are often turned off by what they see in math classes, even at the undergraduate and graduate levels. Math serves as a backbone for the sciences, so this may be badly undercutting scientific innovation at a societal level. I honestly think that it would be an improvement on the status quo to stop teaching math classes entirely.

Thurston characterized his early math education as follows: "I hated much of what was taught as mathematics in my early schooling, and I often received poor grades. I now view many of these early lessons as anti-math: they actively tried to discourage independent thought. One was supposed to follow an established pattern with mechanical precision, put answers inside boxes, and 'show your work,' that is, reject mental insights and alternative approaches."

I think that this characterizes math classes even at the graduate level, only at a higher level of abstraction. The classes essentially never offer students exposure to free-form mathematical exploration, which is what it takes to make major scientific discoveries with significant quantitative components.