The chances of the LLM being able to do this depend heavily on how similar the subjects discussed in the alien language are to things humans discuss. Removing the areas where similarity is most likely would reduce the chance that the LLM would find matching patterns in both. Indeed, the fact that we're imagining aliens for the example probably already greatly increases the difficulty for the LLM.
Agreed. An AI powerful enough to be dangerous is probably, in particular, better at writing code than we are, and at least some of those trying to develop AI are sure to want to take advantage of that by having the AI rewrite itself to be more powerful (and so, they hope, better at doing whatever they want it for). So even if the technical difficulties others have mentioned in making code hard to change could be overcome, it would be very hard to convince everyone making AIs to limit them in that way.
Logicians still can't agree whether the symbol for "if and only if" should be a triple bar or a double arrow. The odds that they'd all sign up for this, rather than having it become, at best, yet another competing standard, seem low.
Some components of experience, like colors, feel simple introspectively. The story of their functions is not remotely simple, so the story of their functions feels like it must be talking about a totally different thing from the obviously simple experience of the color. Though some people try to make this seem more reasonable than it is by defining an experience as consisting entirely of how things seem to us, and so as incapable of being otherwise than it seems, this is just game playing; we are not that infallible on any subject, introspective or otherwise. The obvious solution, that what seems simple just turns out to be complicated and is in fact what the complicated functional story is talking about, is surely the correct one. Don't let Chalmers' accent lull you into thinking he has some superior down-under wisdom; listen to the equally accented Australian materialists!
Looking at the listed philosophers is not the best way to understand what's going on here. The category of rationalists is not "philosophers like those guys"; it is one of a pair of opposed categories (the other being the empiricists) into which various philosophers fit to varying degrees. It is less appropriate for the ancients than for Descartes, Spinoza, and Leibniz (those three are really the paradigm rationalists). And the Wikipedia article is taking a controversial position in putting Kant in the rationalist category. Kant was aware of the categories (indeed, he is a major source of the tradition of grouping philosophers into those two categories), and did not consider himself to belong to either of them (his preferred terms for the categories were "dogmatists" for the rationalists and "skeptics" for the empiricists, which is probably enough on its own to give you a sense of how he viewed the two groups). There is admittedly a popular line of Kant interpretation which reads him as a kind of crypto-rationalist, but there are also those of us who read him as a crypto-empiricist, and not a few who take him at his word as being outside both categories.
In any event, the empiricist tradition has at least as much influence, if not more, on the LW crowd as the rationalist tradition, and really both categories work best for the early moderns and aren't great for categorizing most philosophers in the present era. So anybody familiar with the philosophical term is likely to find its application to this community initially confusing.
The healthcare system capacity shouldn't be a flat line, though I admit that the reports I've seen suggest that not nearly enough effort has been devoted to ramping up to deal with the emergency. But obviously if there is an upward slope to capacity (and there are efforts to increase production of ventilators, to pick one of the most troublesome restrictions), that increases the benefit of curve flattening efforts.
Your requirements are very slightly too strong. If you have more than 6 cards in a suit, the number of them that have to be top cards is reduced. In your second example, a spade suit of A,K,Q,8,7,6,5,4,3,2 would have served just as well: even if all three missing spades were in one hand, playing out the A, K, Q would force them all out, making the remaining spades winners too.
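The arithmetic behind this can be sketched quickly. This is my own illustration, not anything from the original comment: it just counts, in the worst case where every missing card of the suit sits in a single opponent's hand, how many top cards you must cash to exhaust the opposition, since each top card you play pulls at most one card from that hand.

```python
def top_cards_needed(suit_length):
    """Given how many cards of a 13-card suit you hold, return how many
    of them must be top cards to guarantee the rest become winners,
    assuming the worst case that all missing cards are in one hand."""
    missing = 13 - suit_length  # cards of the suit held by the opponents
    # each top card cashed forces out at most one opposing card
    return max(missing, 0)

# The 10-card spade suit A,K,Q,8,7,6,5,4,3,2 from the example:
# opponents hold only 3 spades, so three top cards (A, K, Q) suffice.
print(top_cards_needed(10))  # 3
```

With 7 cards you'd need 6 top cards, which is why the requirement only starts relaxing meaningfully once the suit grows well past half the deck's holding in that suit.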
Hmmm, thanks, but that research doesn't seem to make any effort to distinguish people with diagnosable dementia conditions from those without, and does mention that the rates can be quite different for different people, so I can't tell whether there's anything about it which contradicts what I thought I remembered encountering in other research.
I'm curious about your claim that at 60-70 years old people start rapidly becoming stupider for reasons we don't know. I thought I recalled reading that while the various forms of dementia become immensely more common with age, those who are fortunate enough to avoid all of them experience relatively little cognitive decline. Unless you mean only to say that our present understanding of Alzheimer's and the other, less common dementia disorders is relatively limited, so you're counting that as a reason we don't know (it is certainly something we don't know how to fix, so you win on that point).
I remember Bas van Fraassen (probably quoting or paraphrasing someone else, but I remember van Fraassen's version) saying that the requirements for finding truth were, in decreasing order of importance, luck, courage, and technique (and this surely applies to most endeavours, not just the search for truth). But although technique comes last, it's the one you have the most control over, so it makes sense to focus your attention there, even though its effect is the smallest. Of course, he is, like me, a philosopher, so perhaps we just share your bias toward caring about rationality.