You discuss "compromised agents" of the FBI as if they would be lone, investigator-level agents. If there were any FBI/CIA/whatever cover-up, the version I would expect is that Epstein had incriminating information on senior FBI/CIA personnel, or on politicians. The incriminating information could simply be that the FBI/CIA knew Epstein was raping underage girls for 20 years and didn't stop him, or even protected him. In all your explanations of how impossible a crime Epstein's murder would be to pull off, the thing that makes it seem more plausible to me is if the initial conspirator isn't just someone trying to wave money around, but someone with authority.
Go ahead and mock, but this is what I thought the default assumption was whenever someone said "Epstein didn't kill himself" or "John McAfee didn't kill himself". I never assumed it would just be one or two lone, corrupt agents.
Now that I've had 5 months to let this idea stew, when I read your comment again just now, I think I understand it completely? After getting comfortable using "demons" to refer to patterns of thought or behavior which proliferate in ways not completely unlike some patterns of matter, this comment now makes a lot more sense than it used to.
without using "will"
Oh come on. Alright, but if your answer mentions future or past states, or references time at all, I'm dinging you points. Imaginary points, not karma points obviously.
So let's talk about this word, "could". Can you play Rationalist's Taboo against it?
Testing myself before I read further. World states which "could" happen are the set of world states which are not ruled impossible by our limited knowledge. Is "impossible" still too load-bearing here? Fine, let's get more concrete.
In a finite-size game of Conway's Life, each board state has exactly one following board state, which itself has only one following board state, and so on. This sequence of board states is a board's future. Each board state does not correspond to only a single previous board state, but rather a set of previous board states. If we only know the current... (read more)
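A minimal sketch of the asymmetry I'm describing, assuming a standard B3/S23 Life rule on a small fixed-size board with wrap-around edges (the wrap is my addition; above I only said "finite-size"):

```python
from collections import Counter
from itertools import product

N = 8  # fixed finite board size; the wrap-around edges are an assumption


def step(board):
    """One generation of Conway's Life on an N x N torus.

    `board` is the set of live (x, y) cells. Forward evolution is a
    function: every board state has exactly one successor.
    """
    neighbour_counts = Counter(
        ((x + dx) % N, (y + dy) % N)
        for x, y in board
        for dx, dy in product((-1, 0, 1), repeat=2)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, c in neighbour_counts.items()
            if c == 3 or (c == 2 and cell in board)}


# Forward: deterministic, one successor per state.
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(blinker))  # the unique next state (a horizontal blinker)

# Backward: many-to-one. A lone cell and two far-apart cells both die out,
# so the empty board has (at least) two distinct predecessor states.
print(step({(0, 0)}) == step({(0, 0), (4, 4)}) == set())  # True
```

Forward evolution is a function; the reverse relation is not, which is what I mean by each state corresponding to a set of previous states.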
So then this initial probability estimate, 0.5, is not, repeat not, a "prior".
1:1 odds seems like it would be a default null prior, especially because one round of Bayes' Rule updates it immediately to whatever your first likelihood ratio is, kind of like the other mathematical identities. If your priors represent "all the information you already know", then it seems like you (or someone) must have gotten there through a series of Bayesian inferences, but that series would have to start somewhere, right? If (in the real universe, not the ball & urn universe) priors aren't determined by some chain of Bayesian inference, but instead by some degree of educated guesses... (read more)
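To make the "identity" analogy concrete, here is a tiny sketch of the odds-form update with made-up numbers; starting from 1:1, the posterior after one update is just the first likelihood ratio:

```python
def update(prior_odds, likelihood_ratio):
    """Odds-form Bayes' rule: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Starting from 1:1 odds, one update leaves you holding exactly the first
# likelihood ratio, the way multiplying by 1 leaves a number unchanged.
print(update(1.0, 4.0))               # 4.0, i.e. 4:1 odds
print(update(update(1.0, 4.0), 0.5))  # 2.0 after a second (made-up) update
```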
Evil is a pattern of behavior exhibited by agents. In embedded agents, that pattern is absolutely represented by material. As for what that pattern is: evil agents harm others for their own gain. That seems to be the core of "evilness" in possibility space. Whenever I try to think of the most evil actions I can, they tend to correlate with harming others (especially one's equals, or one's inner circle, who would expect mutual cooperation) for one's own gain. Hamlet's uncle. Domestic abusers. Executives who ruin lives for profit. Politicians who hand out public money in exchange for bribes. Bullies who torment other children for fun. It's a learnable script, one which says "I can gain at others' expense", whether that gain is power, control, money, or just pleasure.
If your philosopher thinks "evil" is immaterial, does he also think "epistemology" is immaterial?
(I apologize if this sounds argumentative, I've just heard "good and evil are social constructs" far too many times.)
Not really. "Collapse" is not the only failure case. Mass starvation is a clear failure state of a planned economy, but it doesn't necessarily burn through the nation's stock of proletariat laborers immediately. In the same way that a person with a terminal illness can take a long time to die, a nation with failing systems can take a long time to reach the point where it ceases functioning at all.
How do lies affect Bayesian Inference?
(Relative likelihood notation is easier, so we will use that)
I heard a thing. Well, I more heard a thing about another thing. Before I heard about it, I didn't know one way or the other at all. My prior was the Bayesian null prior of 1:1. Let's say the thing I heard is "Conspiracy thinking is bad for my epistemology". Let's pretend it was relevant at the time, and didn't just come up out of nowhere. What is the chance that someone would hold this opinion, given that they are not part of any conspiracy against me? Maybe 50%? If I heard it in a Rationality influenced... (read more)
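A rough sketch of the update this is setting up, in odds form. The 50% figure is from the paragraph above; the likelihood under the conspiracy hypothesis is a placeholder I made up, since the rest of the reasoning is cut off here:

```python
def posterior_odds(prior_odds, p_e_given_h, p_e_given_not_h):
    """Odds-form update: multiply the prior odds by the likelihood ratio."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# H = "there is a conspiracy against me", E = hearing the statement above.
# P(E | not H) = 0.5 comes from the text; P(E | H) = 0.8 is purely a
# placeholder. A liar gets to choose E, and therefore which likelihood
# ratio gets multiplied into the 1:1 starting odds.
print(posterior_odds(1.0, 0.8, 0.5))  # 1.6, i.e. 1.6:1 toward the conspiracy
```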
Monopolies on the Use of Force
[Epistemic status & effort: exploring a question over an hour or so, and constrained to only use information I already know. This is a problem solving exercise, not a research paper. Originally written just for me; minor clarification added later.]
Is the use of force a unique industry, where a single monolithic [business] entity is the most stable state, the equilibrium point? From a business perspective, an entity selling the use of force might be thought of as operating in a "risk management" or "contract enforcement" industry. It might use an insurance-like business model, or function more like a contractor for large projects.
In a monopoly... (read 477 more words →)
AGI Alignment, or How Do We Stop Our Algorithms From Getting Possessed by Demons?
[Epistemic Status: Absurd silliness, which may or may not contain hidden truths]
[Epistemic Effort: Idly exploring idea space, laying down some map so I stop circling back over the same territory]
After going over Zvi's sequence on Moloch, I think the "demon" Moloch really is a pattern (or patterns) of thought and behaviour which destroys human value in certain ways. That is an interesting way of labeling things. We know patterns of thought exist, and we know humans can learn them through their experiences, or by being told them, or by reading them somewhere, or any other way that humans can learn things. ... (read more)
The Definition of Good and Evil
Epistemic Status: I feel like I stumbled onto this; it has passed a few filters for correctness; I have not rigorously explored it, and I cannot adequately defend it, but I think that is more my own failing than a failure of the idea.
I have heard it said that "Good and Evil are Social Constructs", or "Who's really to say?", or "Morality is relative". I do not like those at all, and I think they are completely wrong. Since then, I either found, developed, or came across (I don't remember how I got this) a model of Good and Evil, which has so far seemed accurate in every... (read 387 more words →)
I am so disappointed every time I see people using the persuasiveness filter. Persuasiveness is not completely orthogonal to correctness, but it is definitely linearly independent of it.