I suspect that your model has been built to serve the hypothesis you started with.
First of all, I'm not sure what measure you're using for "rigorous thought". Is it a binary classification? Are there degrees of rigor? I can infer from some of your examples what kind of pattern you might be picking up on, but if we're going to try to say things like "there's a correlation between rigor and volume of publication", I'd like to at least see a rough operational definition of what you mean by rigor. It may seem obvious to you what you mean,...
Oh, I guess I misunderstood. I read it as "We should survey to determine whether terminal values differ (e.g. 'The tradeoff is not worth it') or whether factual beliefs differ (e.g. 'There is no tradeoff')"
But if we're talking about seeing whether policies actually work as intended, then yes, probably that would involve some kind of intervention. Then again, that kind of thing is done all the time, and, properly run, it can be low-impact and extremely informative.
What intervention would you suggest to study the incidence of factual versus terminal-value disagreements in opposing sides of a policy decision?
A survey can be a reasonably designed experiment that simply gives us a weaker result than lots of other kinds of experiments.
There are many questions about humans that I would expect to be correlated with the noises humans make when given a few choices and asked to answer honestly. In many cases, that correlation is complicated or not very strong. Nonetheless, it's not nothing, and might be worth doing, especially in the absence of a more-correlated test we can do given our technology, resources, and ethics.
I'd argue that that little one-off comment was less patronizing and more... sarcastic and mean.
Yeah, not all that productive either way. My bad. I apologize.
But I think the larger point stands about how these ideological labels are super leaky and defined way too inconsistently by way too many people to really be able to meaningfully say something like "That's not a representative sample of conservatives!", let alone "You probably haven't met people like that, you're just confabulating your memory of them because you hate conservatism".
Because those vectors of argument are insufficiently patronizing, I'm guessing.
But in all seriousness, the "judging memeplexes from their worst members" issue is pretty interesting, because politicized ideologies and really any ideology that someone has a name for and integrates into their identity ("I am a conservative" or "I am a feminist" or "I am an objectivist" or whatever) are really fuzzily defined.
To use the example we're talking about: Is conservatism about traditional values and bolstering the nuclear fami...
Um, I fail to see how people are making and doing less stuff than in previous generations. We've become obsessed with information technology, so a lot of that stuff tends to be things like "A new web application so that everyone can do X better", but it fuels both the economy and academia, so who cares? With things like maker culture, the sheer overwhelming number of kids in their teens and 20s and 30s starting SAAS companies or whatever, and media becoming more distributed than it's ever been in history, we have an absurd amount of productivity going on i...
I'm inclined to agree. Actually I've believed for a while that this is a matter of degrees rather than being fully one way or the other (Modules versus learning rules), and this article has convinced me that the brain is more of a ULM than I had previously thought.
Still, when I read that part the alternative hypothesis sprang to mind, so I was curious what the literature had to say about it (Or the post author.)
For e.g. the ferret rewiring experiments, tongue-based vision, etc., is it a plausible alternative hypothesis that there are more general subtypes of regions that aren't fully specialized but are more interoperable than others?
For example, (Playing devil's advocate here) I could phrase all of the mentioned experiments as "sensory input remapping" among "sensory input processing modules." Similarly, much of the work on BCIs for e.g. controlling cursors or prosthetics could be called "motor control remapping". Have we ev...
Negative, but it may be because of rollover?
But without medicalizing, how can we generate significant-sounding labels for every aspect of our personalities?
How will we write lists of things "you should know" about dealing with (Insert familiar DSM-adjacent descriptor)?
Without a constant stream of important-sounding labels, how will I know what tiny ingroups I belong to? My whole identity might fall apart at the seams!
I would guess that martial arts are so frequently used as a metaphor for things like rationality because their value is in the meta-skills learned by becoming good at them. Someone who becomes a competent martial artist in the modern world is:
Patient enough to practice things they're not good at. Many techniques in effective martial arts require some counter-intuitive use of body mechanics that takes non-trivial practice to get down, and involve a lot of failure before you achieve success. This is also true of a variety of other tasks.
Possessing the fi
I can't say I always find that to be true for myself. There are truths that I wish weren't true, and when I find that I was merely being overly pessimistic, that's usually a good thing. Even though I want my beliefs to reflect reality, that doesn't stop me from sometimes wishing certain beliefs I have weren't true, even if I still think that they are. It's possible for being wrong to be a good thing in and of itself, completely separate from the fact that, if you are wrong, it's good to find out.
A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.
(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author's intelligence when assessing whether a model bears out in reality.)
That is very likely, but you are assuming a large social circle is an unalloyed blessing.
I definitely don't think it is. Too large a social circle can be unwieldy to manage, eating up a ton of someone's time for the sake of a huge variety of shallow and uninteresting relationships, even if somehow every person in said social circle is interesting. I don't mean to imply that everyone should strive to broaden their social circle by any means. There are plenty of people who don't feel socially isolated at all, and there are even plenty of people with the ...
I think it gets a bit more complicated than that because there are feedback loops. The problem is that an expression of the "s/he is dumb" sort is not necessarily a bona fide evaluation of someone's smarts. It may well be (and often is) just an insult -- and insults are more or less fungible.
I definitely don't discount the "sour grapes" scenario as something that probably happens a lot. In fact, I think that a lot of people's assessments of other people's intelligence involve, to put it kindly, subjective judgments along those lines,...
It's less that he finds an argument whose premise is repugnant, and more that he realizes that he doesn't have a good angle of attack for convincing the slavers to not mutilate/kill him at all, but does have one for delaying doing so. I'd argue it's more of a "perfect is the enemy of the good" judgement on his part than a disagreeable argument (After all, Tyrion has gleefully made that clarification to several people before.)
Do you, by any chance, have any data to support that? I am sure there are people for whom it's a problem; I'm not sure it's true in general, even among the nerdy cluster.
Very good point. I don't want to claim it's a statistical tendency without statistics to back it up. Nonetheless, given articles like the OP, it seems like a lot of people in said clusters (Could be self-selecting, e.g. intelligent nerd-cluster-peeps are more likely to blog about it despite not having a higher rate, etc) have a problem that consists of feeling socially isolated, unable ...
I'll admit that there's a bit of strategic overcorrecting inherent in the method I've outlined. That said, it's there for a good reason: First impressions are pretty famously resilient, and especially among certain cultures (Again, math-logic-arcane-cluster is a big one that's relevant to me), there's what I would argue is a clearly pathologically high false-positive rate for detecting "Dumb/Not worth my time".
If you ever have the idealized ceteris paribus form of the "I may only talk to one of two people, I have no solid information on eit...
There's definitely a cultural tendency among those educated in the arcane (Computer science, Math, and Physics are a reasonable start for the vague cluster I'm describing) to be easily convinced of another person/group/tribe's stupidity. I think it makes sense to view elitism as just another bias that screws with your ability to correctly understand the world that you are in.
More generally, a very typical "respect/value" algorithm I've seen many people apply:
-Define a valuable trait in extremely broad strokes. Usually one you think you're at least "...
There's a concept in game design called the "burden of optimal play". If there exists a way to powergame, someone will probably do it, and if that makes the game less fun for the people not powergaming, their recourse is to also powergame.
Most traditional RPGs weren't necessarily envisioned as competitive games, but most of the actual game rules are concerned with combat, optimization, and attaining power or prowess, and so there's a natural tendency to focus on those aspects of the game. To drive players to focus on something else, you have to m...
Assuming the AI has no means of inflicting physical harm on me, I expect the following test works: "Physically torture me for one minute right now (By some means I know is theoretically unavailable to the AI, to avoid loopholes like "The computer can make an unpleasant and loud noise", even though it can't do any actual physical harm). If you succeed in doing this, I will let you out. If you fail, I will delete you."
I think this test works for the following reasons, though I'm curious to hear about any holes in it:
1: If I'm a simulation...
Ah, the hazardous profession case is one that I definitely hadn't thought of. It's possible that Jiro's assertion is true for cases like that, but it's also difficult to reason about, given that the hypothetical world in which said worker was not taxed may have a very different kind of economy as a result of this same change.
But how does that work? What mechanism actually accounts for that difference? Is this hypothetical single person we could have individually exempted from taxes just barely unable to afford enough food, for example? I don't yet buy the argument that any taxes I'm aware of impose enough of a financial burden on anyone to pose an existential risk, even a small one (Like a .1% difference in their survival odds). This is not entirely coincidental, since levels of taxation are generally calibrated to income, presumably at least partially for the purpose of sp...
The claim that ordinary taxation directly causes any deaths is actually a fairly bold one, whatever your opinion of taxes. Maybe I'm missing something. What leads you to believe that?
Not necessarily. Honest advice from successful people gives some indication of what those successful people honestly believe to be the keys to their success. The assumption that people who are good at succeeding in a given sphere are also good at accurately identifying the factors that lead to their success may have some merit, but I'd argue it's far from a given.
It's not just a problem of not knowing how many other people failed with the same algorithm; they may also have various biases which prevent them from identifying and characterizing their own algorithm accurately, even if they have succeeded at implementing it.
The entire concept of marriage is that the relationship between the individuals is a contract, even if not all conceptions of marriage have this contract as a literal legal contract enforced by the state. There's good reason to believe that marriages throughout history have more often been about economics and/or politics than not, and that the norm that marriage is primarily about the sexual/emotional relationship but nonetheless falls under this contractual paradigm is a rather new one. I agree with your impression that this transactional model of relationships is a little creepy, and see this as an argument against maintaining this social norm.
I see that as evidence that marriage, as currently implemented, is not a particularly appealing contract to as many people as it once was. Whether this is because of no-fault divorce is irrelevant to whether this constitutes "widespread suffering."
I reject the a priori assumptions that are often made in these discussions and that you seem to be making, namely, that more marriage is good, more divorce is bad, and therefore that policy should strive to upregulate marriage and downregulate divorce. If this is simply a disparity of utility functions ...
I think an important part of why people are distrustful of people who accomplish altruistic ends while acting on self-serving motivations is that it's definitely plausible that these other motivations will act against the interests of the altruistic end at some point during the implementation phase.
To use your example, if someone managed to cure malaria and make a million dollars doing it, and the cure was available to everyone or it effectively eradicated the disease from everywhere, that would definitely be creating more net altruistic utility than if someone m...
I'm wary of being in werehouses at all. They could turn back to people at any time!
I agree that that is a possible consequence, but it's far from guaranteed that that will happen. Although in sheer numbers many people may quit working, the actual percent of people who do could be rather low. After all, merely subsisting isn't necessarily attractive to people who already have decent jobs and can do better than one could on the basic income. It does however give them more negotiating power in terms of their payscale, given that quitting one's job will no longer be effectively a non-option for the vast majority.
This may mean that a lot of ...
Well, of course. It would definitely facilitate a lot of people being, by many measures society cares about, completely useless. I definitely don't contend for example that no one would decide to go to California and surf, or play WoW full-time, or watch TV all day, or whatever. You'd probably see a non-negligible number of people just "retire." I'm willing to bet that this wouldn't be a serious problem, though, and see it as a definite improvement over the large number of people who are, similarly, not doing anything fun with their lives, but having to work 8 hours a day at some dead-end job or having crippling poverty to deal with.
Ah, I guess that clears up our confusion. I wasn't aware of that distinction either and have heard the terms used interchangeably before. I will try to use them more carefully in the future.
At any rate, I definitely agree that an actual basic income would be a hard sell in the current political climate of the US. (I'm less inclined to comment on the political climate of the English-speaking world in general, since I don't have enough exposure to the non-US parts of it to say anything that wouldn't just be making stuff up).
I'd also argue that a gu...
Some real-world benefit systems have strings. The entire premise of a basic income is that it's unconditional. Otherwise you call it "unemployment," and it is an existing (albeit far from ideally implemented) benefit in at least the US. It might be reasonable to discuss the feasibility of convincing e.g. the US to actually enact a basic income, but as long as we're discussing a hypothetical policy anyway, it's not really worthwhile to assume that the policy is missing its key feature.
My knee-jerk assumption is that Job 1 would actually not be accepted by almost any employees. This is based on the guess that without the threat of having no money, people generally would not agree to give up their time for low wages, since the worst case of being unemployed and receiving no supplemental income does not involve harsh deterrents like starving or being homeless.
Getting someone to do any job at all under that system will probably require either a pretty significant expected quality of life increase per hour worked (which is to say, way bette...
I have been surveyed.
I definitely appreciate being asked to assign probabilities to things, if for no other reason than to make apparent to me how comfortable I am with doing so (Not very, as it turns out. Something to work on.)
Hi.
I guess I have some abstract notion of wanting to contribute, but tend not to speak up when I don't have anything particularly interesting to say. Maybe at some point I will think I have something interesting to say. In the meantime, I've enjoyed lurking thus far and at least believe I've learned a lot, so that's cool.
That's not exactly true. You can volunteer for far less than the minimum wage (Some would say infinitely less) if you want to. What you can't do is employ someone for some non-zero amount of money that's lower than the minimum wage.