My prediction is that giving such population-level arguments in response to being asked why they are by themselves is much less likely to result in being left alone (presumably, the goal) than saying their parents said it's okay, and so would show lower levels of instrumental rationality rather than demonstrate more agency.
I think those are good lessons to learn from the episode, but it should be pointed out that Copernicus' model also required epicycles in order to achieve approximately the same predictive accuracy as the most widely used Ptolemaic systems. Sometimes, later Kepler-inspired corrected versions of Copernicus' model are projected back into the past, making the history both less accurate and less interesting, but better able to fit a simplistic morality tale.
I don't have a solution to this, but I have a question that might rule in or out an important class of solutions.
The US spent about $75 billion in assistance to Ukraine. If both the US and EU pitched in an amount of similar size, that's $150 billion. There are about 2 million people in Gaza.
If you split the money evenly between each person and the country that was taking them in, how much of the population could you relocate? That is, Egypt gets $37,500 for allowing Yusuf in and Yusuf gets $37,500 for emigrating, Morocco gets $37,500 for allowing Fatim...
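Making the arithmetic explicit (a back-of-the-envelope sketch using only the figures above; the 50/50 split is the scheme just described):

```python
total_funds = 150e9      # combined US + EU contribution, in dollars (figure above)
population = 2_000_000   # approximate population of Gaza

per_relocation = total_funds / population  # dollars available per person relocated
migrant_share = per_relocation / 2         # paid to the person emigrating
host_share = per_relocation / 2            # paid to the receiving country

print(per_relocation, migrant_share, host_share)  # 75000.0 37500.0 37500.0
```

At $75,000 per head, the full $150 billion would, at least on paper, cover all 2 million people.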
Thanks, that's getting pretty close to what I'm asking for. Since posting the above, I've also found Katja Grace's Argument for AI x-risk from competent malign agents and Joseph Carlsmith's Is Power-Seeking AI an Existential Risk?, both of which seem like the kind of thing you could point an analytic philosopher at and ask them which premise they deny.
Any idea if something similar is being done to cater to economists (or other social scientists)?
Other intellectual communities often become specialized in analyzing arguments only of a very specific type, and because AGI-risk arguments aren't of that type, their members can't easily engage with those arguments. For example:
...if you look, say, at COVID or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested
IMO, Andrew Ng is the most important name that could have been there but isn't. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.
Consider the following rhetorical question:
Ethical vegans are annoyed when people suggest their rhetoric hints at violence against factory farms and farmers. But even if ethical vegans don't advocate violence, it does seem like violence is the logical conclusion of their worldview - so why is it taboo?
Do we expect the answer to this to be any different for vegans than for AI-risk worriers?
...there is hardly any mention about memorization on either LessWrong or EA Forum.
I'm curious how you came to believe this. IIRC, I first learned about spaced repetition from these forums over a decade ago, and hovering over the Memory and Mnemonics and Spaced Repetition tags on this very post shows 13 and 67 other posts on those topics, respectively. In addition, searching for "Anki" specifically is currently returning ~800+ comments.
Strictly speaking it is a (conditional) "call for violence", but we often reserve that phrase for atypical or extreme cases rather than the normal tools of international relations. It is no more a "call for violence" than treaties banning the use of chemical weapons (which the mainstream is okay with), for example.
It's a call for preemptive war; or rather, it's a call to establish unprecedented norms that would likely lead to a preemptive war if other nations don't like the terms of the agreement. I think advocating a preemptive war is well-described as "a call for violence" even if it's common for mainstream people to make such calls. For example, I think calling for an invasion of Iraq in 2003 was unambiguously a call for violence, even though it was done under the justification of preemptive self-defense.
Virtue ethics says to decide on rules ahead of time.
This may be where our understandings of these ethical views diverge. I deny that virtue ethicists are typically in a position to decide on the rules (ahead of time or otherwise). If what counts as a virtue isn't strictly objective, then it is at least intersubjective, and is therefore not something that can be decided on by an individual (at least not unilaterally). It is absurd to think to yourself "maybe good knives are dull" or "maybe good people are dishonest and cowardly", and when you do think such thoughts...
Another interesting case study:
Phineas Gage was an American railroad construction foreman remembered for his improbable survival of an accident in which a large iron rod was driven completely through his head, destroying much of his brain's left frontal lobe, and for that injury's reported effects on his personality and behavior over the remaining 12 years of his life...
We (and I mostly mean the US, where I'm located) seem to design our culture and our government in an incredibly convoluted, haphazard and error-prone way. No thought is given to the long-run consequences or the stability of our political decisions.
It's interesting to me that it looks that way to you, given that the architects of the American system (James Madison, John Jay, etc.) were explicitly attempting to achieve a kind of "defense in depth" (e.g., separation of powers between the branches, federalism with independent states, decentralized militia sys...
If "rationalist" is a taken as a success term, then why wouldn't "effective altruist" be as well? That is to say: if you aren't really being effective, then in a strong sense, you aren't really an "effective altruist". A term that doesn't presuppose you have already achieved what you are seeking would be "aspiring effective altruist", which is quite long IMO.
Would you agree with a person who told you that human testimony is not sufficient grounds for the belief in a natural event (say, that your friend was attacked by another, but there were no witnesses and it left no marks) because humans are not perfect, etc.?
If not, might that indicate the rest of your argument only holds in the case where the prior probability of miracles is extremely low (and potentially misses the crux of the disagreement between yourself and miracle-believing people)?
Every industry has downsides. Some industries have much larger downsides for some kinds of people. If you personally think the tradeoffs are such that overall you prefer to stay in finance, then by analogy perhaps others who are like you would as well.
Deontological and virtue-ethical frameworks have lots of resources for explaining why one shouldn't lie, but from a purely (naively) consequentialist perspective, encouraging people to enter your industry despite its problems would be wrong only if it would leave them worse off overall compared to their next best alternative. Does it?
This is the form I expect answers to "why do you believe x"-type questions to take. Thanks.
Note: That interfax.ru link doesn't seem to work from North American or European IP addresses, but you can view a snapshot on the Wayback Machine here.
I think that if LessWrong wants to be less wrong, then questions like "why do you believe that?" should not be downvoted.
As for the question itself, I know next to nothing about the situation at this NPP, but just from priors I'd give 70% that if someone shelled it, it was the Russian army, for two reasons (a numerical sketch follows the list):
1) It is easier to shoot at an NPP if you don't know what you're shooting at. The Russian army is much more likely to mistake this target for something else.
2) p(Russian government lies that it wasn't them | it was them) > p(Ukrainian government lies that it wasn't them | it was them) ...
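To make point (2) concrete, here is a minimal Bayes sketch. Only the 70% prior comes from the comment above; the denial probabilities are illustrative assumptions of my own, as is the simplification that an innocent party always (truthfully) denies:

```python
# Prior from the comment above; the remaining numbers are illustrative assumptions.
p_russia = 0.70               # prior: the Russian army did the shelling
p_deny_if_guilty_ru = 0.95    # assumed P(Russia denies | Russia did it)
p_deny_if_guilty_ua = 0.80    # assumed P(Ukraine denies | Ukraine did it)

# Observation: both governments deny responsibility. Assuming an innocent
# party always denies (truthfully), the likelihood of "both deny" under
# each hypothesis is just the guilty party's probability of lying.
likelihood_ru = p_deny_if_guilty_ru
likelihood_ua = p_deny_if_guilty_ua

posterior_ru = (p_russia * likelihood_ru) / (
    p_russia * likelihood_ru + (1 - p_russia) * likelihood_ua
)
print(round(posterior_ru, 2))  # 0.73
```

With these numbers, the asymmetry in inequality (2) nudges the 70% prior up only slightly, to about 73%; the direction of the update, not its size, is the point.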
I've personally known many people who have had serious medical problems that sure looked clearly like vaccine reactions.
I don't consider it a "serious medical problem", but I attempted to report (via the phone number on the paperwork given to me by the person who administered the shot at Walgreens) my 48-hour migraine + ~4 days of high blood pressure (as measured by my Omron home blood pressure monitor) after getting a Pfizer booster. I was told they don't need me to fill anything out because those are already known side effects.
Searching Google for ...
Aristotle seems (though he's vague on this) to be thinking in terms of fundamental attributes, while I'm thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who've met Vassar, the ones who weren't brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he's spooky or cultish; to them, he's obviously just a guy with an interesting perspective.
*As far as I know I didn't know any such people before 2020;...
Actually, several of the chapters of this book are very likely completely wrong and the rest are on shakier foundations than I believed 9 years ago (similar to other works of social psychology that accurately reported typical expert views at the time). See here for further elaboration.
I'm on the fence about recommending this book now, but please read skeptically if you do choose to read it.
I agree with your point that there are at least two distinct ways to interpret the non-central fallacy, and also with the OP's point that while ad hominem arguments are technically invalid, they can have high inductive strength in some circumstances. I'm mostly critiquing Scott's choice of examples for introducing the non-central fallacy, since mixing it with other fallacious forms of reasoning makes it harder to see what the non-central part is contributing to the mistake being made. For this reason, I prefer the theft example.
I think the Martin Luther King scenario is a particularly bad example for explaining the non-central fallacy, because it depends on a conjunction of fallacies, rather than isolating the non-central part. The inference from (1) MLK does/doesn't fit some category with negative emotional valence, to (2) his ideas are bad, just is the ad hominem fallacy (which is distinct from the non-central fallacy). The truth (or falsity) of Bloch's theorem is logically independent of whether or not André Bloch was a murderer (which he was).
I asked around about this on the ##hplusroadmap irc channel:
...15:59 < Jayson_Virissimo> Yeah, sorry. Was much more interested in the claim about peptide sourcing specifically.
16:00 < Jayson_Virissimo> Is that 4-5 weeks duration normal? How flexible is it, if at all?
16:01 < yashgaroth> some of them might offer expedited service, though I've never had cause to find out when ordering peptides and am not bothered to check...and it'd save you a week or two at most
16:02 < Jayson_Virissimo> What would you guess as to the ma
I've been working on an interactive flash card app to supplement classical homeschooling called Boethius. It uses a spaced-repetition algorithm to economize on the student's time and currently has exercises for (Latin) grammar, arithmetic, and astronomy.
Let me know what you think!
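For anyone unfamiliar with spaced repetition, here's a minimal SM-2-style sketch of the kind of scheduling such apps use (illustrative only, with the standard SM-2 constants; not Boethius' actual code):

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0  # days until the next review
    ease: float = 2.5      # multiplier by which the interval grows
    reps: int = 0          # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """SM-2-style update after a review; quality is a 0-5 recall grade."""
    if quality < 3:
        # Failed recall: restart the schedule for this card.
        card.reps = 0
        card.interval = 1.0
    else:
        card.reps += 1
        if card.reps == 1:
            card.interval = 1.0
        elif card.reps == 2:
            card.interval = 6.0
        else:
            card.interval *= card.ease
        # Ease drifts up after easy recalls and down after hard ones,
        # with the standard SM-2 floor of 1.3.
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

The economizing comes from the geometric growth of `interval`: well-known cards are seen exponentially less often, so review time concentrates on the cards the student is actually forgetting.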
It seems to me that there is some tension in the creed between (6), (9), and (11). On the one hand, we are supposed to affirm that "changes to one’s beliefs should generally also be probabilistic, rather than total", but on the other hand, we are using belief/lack of belief as a litmus test for inclusion in the group.