I remember Bas van Fraassen (probably quoting or paraphrasing someone else, but I remember van Fraassen's version) saying that the requirements for finding truth were, in decreasing order of importance, luck, courage, and technique (and this surely applies to most endeavours, not just the search for truth). But although technique comes last, it's the one you have the most control over, so it makes sense to focus your attention there, even though its effect is the smallest. Of course, he is, like me, a philosopher, so perhaps we just share your bias toward caring about rationality.
The chances of the LLM being able to do this depend heavily on how similar the subjects discussed in the alien language are to things humans discuss. Removing areas where there is most likely to be similarity would reduce the chance that the LLM would find matching patterns in both. Indeed, that we're imagining aliens for the example already probably greatly increases the difficulty for the LLM.
Agreed. An AI powerful enough to be dangerous is probably in particular better at writing code than us, and at least some of those trying to develop AI are sure to want to take advantage of that by having the AI rewrite itself to be more powerful (and so, they hope, better at doing whatever they want the AI for). So even if the technical difficulties in making code hard to change that others have mentioned could be overcome, it would be very hard to convince everyone making AIs to limit them in that way.
Some components of experience, like colors, feel simple introspectively. The story of their functions is not remotely simple, so that story feels like it must be talking about a totally different thing from the obviously simple experience of the color. Some people try to make this seem more reasonable than it is by defining an experience as consisting entirely of how things seem to us, and so as incapable of being otherwise than it seems, but this is just game playing; we are not that infallible on any s...
Looking at the listed philosophers is not the best way to understand what's going on here. The category of rationalists is not "philosophers like those guys," it is one of a pair of opposed categories (the other being the empiricists) into which various philosophers fit to varying degrees. It is less appropriate for the ancients than for Descartes, Spinoza, and Leibniz (those three are really the paradigm rationalists). And the Wikipedia article is taking a controversial position in putting Kant in the rationalist category. Kant was aware of...
The healthcare system capacity shouldn't be a flat line, though I admit that the reports I've seen suggest that not nearly enough effort has been devoted to ramping up to deal with the emergency. But obviously if there is an upward slope to capacity (and there are efforts to increase production of ventilators, to pick one of the most troublesome bottlenecks), that increases the benefit of curve flattening efforts.
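A toy back-of-the-envelope illustration of that last point (the numbers and curve shapes below are made up purely for illustration, not a real epidemiological model): compare the unmet demand above capacity for a sharp versus a flattened case curve, first against flat capacity and then against capacity that ramps up over time.

```python
import math

# Toy illustration (made-up numbers, not a real epidemiological model):
# flattening the curve reduces unmet demand even against flat capacity,
# but it helps more when capacity is ramping up, because the delayed
# cases arrive when capacity is higher.

def cases(t, peak, peak_day, width):
    """Daily severe cases, modeled as a simple bell curve."""
    return peak * math.exp(-((t - peak_day) ** 2) / (2 * width ** 2))

def excess(curve, capacity, days=120):
    """Total case-days by which demand exceeds capacity."""
    return sum(max(curve(t) - capacity(t), 0.0) for t in range(days))

unflattened = lambda t: cases(t, peak=1000, peak_day=30, width=10)
flattened = lambda t: cases(t, peak=600, peak_day=50, width=50 / 3)  # same total cases, spread out

flat_capacity = lambda t: 300.0
rising_capacity = lambda t: min(300.0 + 5.0 * t, 600.0)  # capacity ramping up over time

for name, cap in [("flat capacity", flat_capacity), ("rising capacity", rising_capacity)]:
    benefit = excess(unflattened, cap) - excess(flattened, cap)
    print(f"{name}: flattening avoids about {benefit:.0f} case-days of unmet demand")
```

With these particular made-up numbers, the unmet demand avoided by flattening is noticeably larger in the rising-capacity scenario, which is just the point about the upward slope.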
Your requirements are very slightly too strong. If you have more than 6 cards in a suit, the number of them that have to be top cards is reduced. In your second example, a spade suit of A,K,Q,8,7,6,5,4,3,2 would have served just as well, as even if all the opposing spades were in one hand, playing out the A,K,Q would force them all out, making the remaining spades also winners.
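A minimal sketch of the counting behind this, assuming the top cards are cashed first and ignoring entry problems (the function name and example lengths are just for illustration):

```python
# Illustrative sketch: how many consecutive top cards (A, K, Q, ...) a suit
# needs for every card in it to take a trick, even in the worst case where
# one opponent holds all the missing cards. Assumes the top cards are cashed
# first and ignores entry problems.

def top_cards_needed(suit_length: int) -> int:
    missing = 13 - suit_length  # cards of the suit held by the opponents
    # Each time you cash a top card, the long opponent must follow suit, so
    # after `missing` rounds they are out and the rest of your suit runs.
    return min(suit_length, missing)

for length in (6, 7, 10, 13):
    print(length, "->", top_cards_needed(length))
# 6 -> 6 (all six must be top), 7 -> 6, 10 -> 3 (A,K,Q, as in the example), 13 -> 0
```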
Hmmm, thanks, but that research doesn't seem to make any effort to distinguish people with diagnosable dementia conditions from those without, and does mention that the rates can be quite different for different people, so I can't tell whether there's anything about it which contradicts what I thought I remembered encountering in other research.
I'm curious about your claim that at 60-70 years old people start rapidly becoming stupider for reasons we don't know. I thought that I recalled reading that while the various forms of dementia become immensely more common with age, those who are fortunate enough to avoid any of them experience relatively little cognitive decline. Unless you mean only to say that our present understanding of Alzheimer's and the other less common dementia disorders is relatively limited, so you're counting that as a reason we don't know (it is certainly something we don't know how to fix, so you win on that point).
The research indicates that most people's response to any social science result is "that's what I would have expected," although that doesn't actually seem to be true; you can get them to say they expected conflicting results. Have there really been no studies of when people say they think studies are surprising, comparing the results to what people actually predicted beforehand (I know Milgram informally surveyed what people expected before his study, but I don't think he did any rigorous analysis of expectations)? Perhaps people are as inaccurate in reporting what they find surprising as they are in reporting what they expected. It would certainly be interesting to know!
Over the course of a month? The reasons you give for thinking these stocks might go up aren't things that would reliably manifest in such a short time frame, and the market generally has been down recently. I don't think what you've described here is evidence of much of anything. Probably you're no good at active investing, because the evidence seems to suggest that nobody is (the winners are just the ones who get lucky), but the reason to think that is because of the general evidence for that, not because of your personal experience over the past month.
A lot of biological research is inherently slow, because you have to wait to observe effects on slow processes in living things. Probably the only way to get rapid research progress on immortality is to have vastly superior computer models, running on vastly superior computers, substitute for as much as possible of the slow observation of what really goes on in humans. Though testing those models for accuracy would probably still involve a lot of slow observation of humans. Anyway, making more powerful computer...
I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question generally didn't help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.
Overcoming these biases is very easy if you have an explicit theory which you use for moral reasoning, where results can be proved or disproved. Then you will always give the same answer, regardless of the presentation of details your moral theory doesn't care about.
Mathematicians aren't biased by being told "I colored 200 of 600 balls black" vs. "I colored all but 400 of 600 balls black", because the question "how to color the most balls" has a correct answer in the model used. This is true even if the model is unique to the ...
It is almost completely uncontroversial that meaning is not determined by the conscious intentions of individual speakers (the "Humpty Dumpty" theory is false). More sophisticated theories of meaning note that people want their words to mean the same as what other people mean by them (as otherwise they are useless for communication). So, bare minimum, knowing what a word means requires looking at a community of language users, not just one speaker. But there are more complications; people want to use their words to mean the same as what experts i...
Because those countries also have lower labor costs, so executives can report that they're saving money on labor costs and their company's stock will go up. More cynically, international operations require more management (to keep on top of shipping issues and deal with different government circumstances in the different countries where operations are going on), and the managers who make such decisions may approve of an outcome where more is spent on management and less on labor. Most of the research I've heard of suggests that it is not because such relocations are overall more profitable; that's very rarely the case.
Indeed. A more plausible alternative strategy for Germany would be to forget the invading Belgium plan, fight defensively on the western front, and concentrate their efforts against Russia at the beginning. Britain didn't enter the war until the violation of Belgian neutrality. Admittedly, over time French diplomats might have found some other way to get Britain into the war, but Britain was at least initially unenthusiastic about getting involved, so I think Miller is on the right track in thinking Germany's best hope was to look for ways to keep Britain out indefinitely.
Socrates initially offered as an alternative punishment that he be given free meals for the rest of his life; he never suggested that he should be paid money, though that's a quibble. More importantly, the final proposal he made (under pressure from his friends) was that he (well, his friends) pay a whopping huge fine. This may have partly backfired because it also reminded people that he had rich and unpopular friends, but it was a substantial penalty. Though you are right that exile would have been more likely to be acceptable to the jury, especially as you are also correct that he never promised to behave differently in the future (which exile, unlike a fine, would have made irrelevant).
Neither Plato nor Xenophon describes Socrates as someone who fails to acknowledge the gods that the city acknowledges. Even in Plato, any criticism of the traditional Greek religion is veiled, while in Xenophon Socrates' religious views are completely orthodox.
On why Socrates didn't choose exile, what Plato has Socrates say in Crito makes it sound like he thought fleeing would be harming the city. But I'm not sure that Socrates really makes a compelling case for why fleeing is bad anywhere in Plato's account. In Xenophon's version of the trial, Socrates al...
I'm torn. There are definitely differences between the way Less Wrong operates and the situation the article describes, but that's always going to be the case. It would be nice to see more studies, of course, examining how the details of the system matter, but none seem to be available. Absent that it kind of seems like special pleading to say "we do things slightly differently, so obviously it won't apply to us." On the other hand, only one study is rather weak evidence, and the differences do exist, even if we don't have any actual evidence that they matter. I really don't know if it makes sense to consider changing our system in light of this.
I agree that an AI with such amazing knowledge should be unusually good at communicating its justifications effectively (because it would be able to anticipate responses, etc.). I'm of the opinion that this is one of the numerous minor reasons for being skeptical of traditional religions; their supposedly all-knowing gods seem surprisingly bad at conveying messages clearly to humans. But to return to VAuroch's point, in order for the scenario to be "wildly inconsistent," the AI would have to be perfect at communicating such justifications, not merely unusually good. Even such amazing predictive ability does not seem to me sufficient to guarantee perfection.
As I said, I'm sympathetic to pragmatism. But I guess I'd turn the question around, and ask what you think pragmatism will improve. Serious researchers are pretty good at rationalizing how procedures that work fit into their paradigm (or just not thinking about it and using the procedures that work regardless of any conflicting absolutist principles they might have). I'm sure removing the hypocrisy would be of some benefit, but given the history it would also likely be extremely difficult; in what cases do you think it is clear that this would be the best ...
Pragmatists from Peirce through the positivists to Rorty have agreed with you that the goal is to avoid wasting time on theories of truth and meaning and instead focus on finding practical tools; they've only spoken of theories of truth when they thought there was no other way to make their points understandable to those too firmly entrenched in the philosophical mainstream (or, even more often, had such theories attributed to them by people who assumed that must be what they were up to despite their explicit disavowals). I'm not saying all of those pe...
I don't have time to re-read the whole book to come up with examples, and there is unhelpfully no index in my copy, but checking through the footnotes quickly, I found exactly two references to actual positivists (or close enough); a quick dismissive paragraph on Ernest Nagel's use of probability theory, and a passing reference to Philipp Frank's biography of Einstein. No references to Reichenbach or Hempel or Carnap. The closest he comes is perhaps the (one) reference to Goodman, who was heavily influenced by Carnap, but Kuhn cites Goodman favorably, whil...
Kuhn certainly knew physics better than he knew philosophy. The frequently mentioned "positivist" in his narrative is entirely made of straw. He discusses a lot of interesting ideas, and he wrote better than many people who had discussed similar ideas previously, but most of the ideas had been discussed previously, sometimes extensively; he was apparently simply not very aware of the previous literature in the philosophy of science.
The biggest problem is that twins raised apart are actually pretty rare, so almost any study of them goes to desperate lengths to just get enough of them for the study. This often involves fudging what they're willing to accept as "raised apart" to a degree no unbiased observer would be comfortable with, just to get sufficient numbers.
Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).
I'm pretty sure I was also a victim, if a rather recent and relatively small scale one, and I'm glad to see something was done. However much I told myself it wasn't really important, that karma's a horribly noisy measure, with a few slightly funny comments gaining me the majority of my karma while my most thoughtful contributions usually only gathered a handful, the block downvoting really did make me feel disinclined to post new comments. Banning seems like an extreme measure, and I guess I can see where people who think there should have been warnings ar...
It looks like this has been an unpopular suggestion, but I wouldn't discount motivation completely. A lot of early 20th century economists thought centrally planned economies were a great idea, based on the evidence of how productive various centrally planned war economies had been. Presumably there's some explanation for why central planning works better (or doesn't fail as badly) with war economies compared with peacetime economies, and I've always suspected that people's motivation to help the country in wartime was probably one of the factors.
The Logical Positivists were mostly pretty far left, but they mostly didn't engage in much political advocacy; though this was controversial among members of the movement (Neurath thought they should be more overtly political), most of them seemed to think that helping people think more clearly and make better use of science was a better way to encourage superior outcomes than advocating specific policies. They were also involved in various causes, though; many members of the Vienna Circle were involved in adult education efforts in Vienna, for example. Th...
Cool! I've looked for that manifesto online before, and failed to find it; thanks for the link! Too many people seem to get all of their knowledge of the Vienna Circle and Logical Positivism from its critics. It's good to look at the primary sources. The translation is a little clunky (perhaps too literal), but so much better than not having it available at all.
You make a lot of assumptions. When I said the grad student population was "racially diverse" I was not trying to give a more impressive sounding name to the fact that it included a decent number of Asians. It did, of course, but it also included plenty of people from Africa, the West Indies, the Middle East, and, well, pretty much everywhere.
Which I said nothing about. I referred to the undergraduate population (I wasn't an undergrad, but university campuses aren't particularly segregated between grad and undergrad populations). Actually, the grad student population generally was more racially diverse than the undergraduate population (mostly due to lots of international students among the grad students).
One reason for thinking that a measure of talent is poor might be that it is outperformed by other measures. There may not be genuinely good measures of talent. It does occur to me that some sort of retrospective measure based on results is probably better than what the admissions office uses, but that is surely still not a perfect measure, and is also obviously not a practical option to replace what the admissions office uses (unless someone invents a time machine). Another reason to think a measure of talent is poor, though, and this is probably more applicable her...
This would only be true if affirmative action were carried to the point where the percentage of black students in the elite schools exceeded the percentage of blacks in the general population. I don't have the numbers handy, but I did go to grad school at an Ivy, not terribly long ago, and that does not match my recollection of the racial make-up there. The undergraduate ranks seemed to be dominated by rich white kids.
I wouldn't be surprised if you disagreed with his point, but I'm a little surprised that you just don't understand it. The cutoff you speak of is in the admissions criteria, not in talent (there being no way to measure talent directly). VAuroch is pretty obviously of the opinion that admissions criteria are poor measures of talent, and that in particular minorities are more likely to score poorly on the admissions criteria for reasons other than talent. Again, not surprised if you disagree, but I'm very surprised you couldn't figure out that that was what he meant.
As far as I can tell, the far left position on sex is that most of the stereotypical sex differences are exaggerated, and most of the genuine differences are more the result of socialization rather than biology. I don't encounter anyone who goes further than that; I've never encountered anyone who would replace either "most" with an "all," or who would replace the "more" with an "entirely," in the case of sex, and I encounter a lot of people who are pretty far left (being fairly far left myself these days). The situa...
I admit that I encounter people who make a big deal of how edgy and contrarian they are for speaking out about innate differences in the face of the stifling politically correct consensus that race and sex don't matter at all. It's pretty amazing how they seem to be everywhere, given the supposedly universal consensus rejecting and suppressing such edgy, contrarian views.
Differences in the rate of absorption can definitely be important to addiction; oral amphetamines are not particularly addictive, but amphetamines taken in other ways that increase absorption rate are very addictive. And the last I checked the research on that, there wasn't much understanding of exactly why the line there is where it is. Perhaps alcohol just works completely differently, but it is also possible that drinking on an empty stomach, or drinking carbonated drinks, doesn't increase absorption enough to make a difference. Or perhaps it does make a difference, but not enough to have turned up in any research yet; this isn't an area where small effects would be easy to detect.
I'm not sure about the usefulness of grouping the kind of vague spirituality and religion mentioned in the first paper with the discussions of meditation in the other papers. As the last paper argues, I also would think it would be worthwhile to distinguish different forms of meditation. My general understanding of the state of the literature was that studies of the benefits of "spirituality and religion" were all over the place (it being an incredibly vague category). I also was under the impression that there have been a lot of studies of medit...
I use his texts in the philosophy courses I teach; I love that website. I hope it continues to be available for a long time.