Purged Deviator

"he sounds convincing when he talks about X."

I am so disappointed every time I see people using the persuasiveness filter.  Persuasiveness is not completely orthogonal to correctness, but it is definitely linearly independent of it.

You discuss "compromised agents" of the FBI as if they would be lone, investigator-level agents.  If there were going to be any FBI/CIA/whatever cover-up, the version I would expect is that Epstein had incriminating information on senior FBI/CIA personnel, or on politicians.  The incriminating information could simply be that the FBI/CIA knew Epstein had been raping underage girls for 20 years and didn't stop him, or even protected him.  In all your explanations of how impossible a crime Epstein's murder would have been to pull off, what makes it seem more plausible to me is if the initial conspirator isn't just someone waving money around, but someone with authority.

Go ahead and mock, but this is what I thought the default assumption was whenever someone said "Epstein didn't kill himself" or "John McAfee didn't kill himself".  I never assumed it would just be one or two lone, corrupt agents.

Now that I've had 5 months to let this idea stew, when I read your comment again just now, I think I understand it completely?  After getting comfortable using "demons" to refer to patterns of thought or behavior which proliferate in ways not completely unlike some patterns of matter, this comment now makes a lot more sense than it used to.

  1. Yeah.  I wanted to assume they were being forced to give an opinion, so that "what topics a person is or isn't likely to bring up" wasn't a confounding variable.  Your point here suggests that a conspirator's response might be more like "I don't think about them", or some kind of null opinion.
  2. This sort of gets to the core of what I was wondering about, but am not sure how to solve: how lies tend to pervert Bayesian inference.  "Simulacra levels" may be relevant here.  I would think that a highly competent conspirator would want to give you only information that reduces your estimate that a conspiracy exists, but this seems sort of recursive, in that anything which would reduce that estimate has an increased likelihood of being said by a conspirator.  Would the effect of lies by bad-faith actors who know your priors be that certain statements just don't update your estimate, because the uncertainty means they don't actually add any new information?  (See the sketch after this list.)  I don't know what limit this reduces to, and I don't yet know what math I would need to solve it.
  3. Naturally. I think "backpropagation" might be related to certain observations affecting multiple hypotheses?  But I haven't brushed up on that in a while. 
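(Here is a toy sketch of the worry in point 2, entirely my own construction: if a bad-faith actor knows your likelihood model and only utters statements whose likelihood ratio is about 1:1, then in odds form your estimate never moves, no matter how much they say.  The statements and probabilities below are made up for illustration.)

```python
# Toy model: posterior odds (innocent : conspirator) after hearing some statements.
# A conspirator who knows the likelihood table can stick to ratio-1 statements,
# so their words add no information; only ratio != 1 statements move the estimate.
from fractions import Fraction

# Hypothetical likelihoods: (P(statement | innocent), P(statement | conspirator))
STATEMENTS = {
    "conspiracy thinking is bad for your epistemology": (Fraction(4, 5), Fraction(4, 5)),
    "I never think about conspiracies":                 (Fraction(1, 2), Fraction(1, 2)),
    "I am definitely not conspiring against you":       (Fraction(1, 10), Fraction(9, 10)),
}

def posterior_odds(prior_odds, heard):
    """Odds form of Bayes' Rule: multiply in each statement's likelihood ratio."""
    odds = prior_odds
    for s in heard:
        p_innocent, p_conspirator = STATEMENTS[s]
        odds *= p_innocent / p_conspirator
    return odds

prior = Fraction(1, 1)  # 1:1, no information either way

# Ratio-1 statements leave the estimate exactly where it started:
print(posterior_odds(prior, ["conspiracy thinking is bad for your epistemology",
                             "I never think about conspiracies"]))   # 1

# A statement whose likelihoods differ does move it (here, toward "conspirator"):
print(posterior_odds(prior, ["I am definitely not conspiring against you"]))  # 1/9
```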

Thank you, it does help!  I know some people who revel in conspiracy theories, and some who believe conspiracies are so unlikely that they dismiss any possibility of one out of hand.  I get left in the middle with the feeling that some situations "don't smell right", without having a provable, quantifiable excuse for why I feel that way.

without using "will"

Oh come on.  Alright, but if your answer mentions future or past states, or references time at all, I'm dinging you points.  Imaginary points, not karma points obviously.

So let's talk about this word, "could".  Can you play Rationalist's Taboo against it? 

Testing myself before I read further.  World states which "could" happen are the set of world states which are not ruled impossible by our limited knowledge.  Is "impossible" still too load-bearing here?  Fine, let's get more concrete.

In a finite-size game of Conway's Life, each board state has exactly one following board state, which itself has exactly one following board state, and so on.  This sequence of board states is the board's future.  A board state does not, however, correspond to only a single previous board state, but rather to a set of possible previous board states.  If we only know the current board state, then we do not know the previous board state, but we do know the set that contains it.  We call these the board states which could have been the previous board state.  From the complement of that set, we also know which boards could not have been the previous board.
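(As a concrete sketch, entirely my own: on a tiny 3x3 board with dead cells assumed beyond the edge, the state space is only 2^9 = 512 boards, so we can brute-force the whole "could have been the previous board" set.)

```python
# Brute-force the set of boards that could have preceded a given Life board,
# assuming a 3x3 grid with dead cells beyond the edge (my choice, to keep it small).
from itertools import product

SIZE = 3

def step(board):
    """Advance a board (tuple of tuples of 0/1) by one Life generation."""
    def live_neighbours(r, c):
        return sum(
            board[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE
        )
    return tuple(
        tuple(
            1 if (board[r][c] and live_neighbours(r, c) in (2, 3))
            or (not board[r][c] and live_neighbours(r, c) == 3)
            else 0
            for c in range(SIZE)
        )
        for r in range(SIZE)
    )

def could_have_preceded(current):
    """All boards whose successor is `current`: the 'could have been' set."""
    all_boards = (
        tuple(tuple(bits[r * SIZE + c] for c in range(SIZE)) for r in range(SIZE))
        for bits in product((0, 1), repeat=SIZE * SIZE)
    )
    return [b for b in all_boards if step(b) == current]

empty = tuple(tuple(0 for _ in range(SIZE)) for _ in range(SIZE))
print(len(could_have_preceded(empty)), "boards could have been the previous board")
```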

Going back up to our universe, what "could happen" is a set of things which our heuristics tell us contains one or more things which will happen.  What "can't happen" is a set of things which our heuristics tell us does not contain a thing that will happen.

A thing which "could have happened" is thus a thing which was in a set which our heuristics told us contained a thing that will (or did) happen.

If I say "No, that couldn't happen", I am saying that your heuristics are too permissive, i.e. your "could" set contains elements which my heuristics exclude.

I think that got the maybe-ness out, or at least replaced it with set logic.  The other key point is the limited information preventing us from cutting the "could" set down to one unique element.  I expect Eliezer to have something completely different.

So then this initial probability estimate, 0.5, is not repeat not a "prior".

1:1 odds seems like it would be a default null prior, especially because one round of Bayes' Rule immediately updates it to whatever your first likelihood ratio is, kind of like other mathematical identity elements.  If your priors represent "all the information you already know", then it seems like you (or someone) must have gotten there through a series of Bayesian inferences, but that series would have to start somewhere, right?  If (in the real universe, not the ball & urn universe) priors aren't determined by some chain of Bayesian inference, but instead by some mix of educated guesses, intuition, and dead reckoning, wouldn't that make the whole process subject to "garbage in, garbage out"?
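(As a sketch of that identity-element intuition, my own illustration with a made-up 3:1 likelihood ratio: in odds form, Bayes' Rule is just multiplication, so a 1:1 prior behaves exactly like multiplying by 1.)

```python
# Odds form of Bayes' Rule: posterior odds = prior odds * likelihood ratio.
# Starting from 1:1, one update lands exactly on the first likelihood ratio,
# the same way x * 1 = x.  The 3:1 ratio below is made up for illustration.
from fractions import Fraction

def bayes_update(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

null_prior = Fraction(1, 1)        # 1:1 odds: no information either way
first_evidence = Fraction(3, 1)    # hypothetical first likelihood ratio

posterior = bayes_update(null_prior, first_evidence)
print(posterior)                           # 3, i.e. exactly the likelihood ratio
print(float(posterior / (posterior + 1)))  # 0.75, converted back to a probability
```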

For a use case: (A) low internal resolution rounded my posterior probability to 0 or 1, and now new evidence no longer updates my estimates, or (B) I think some garbage crawled into my priors, but I'm not sure where.  In either case, I want to take my observations and rebuild my chain of inferences from the ground up, to figure out where I should be.  So... where is the ground?  If 1:1 odds is not the null prior, not the Bayesian Identity, then what is?

Evil is a pattern of behavior exhibited by agents.  In embedded agents, that pattern is absolutely represented by material.  As for what that pattern is: evil agents harm others for their own gain.  That seems to be the core of "evilness" in possibility space.  Whenever I try to think of the most evil actions I can, they tend to correlate with harming others (especially one's equals, or one's inner circle, who would expect mutual cooperation) for one's own gain.  Hamlet's uncle.  Domestic abusers.  Executives who ruin lives for profit.  Politicians who hand out public money in exchange for bribes.  Bullies who torment other children for fun.  It's a learnable script which says "I can gain at others' expense", whether that gain is power, control, money, or just pleasure.

If your philosopher thinks "evil" is immaterial, does he also think "epistemology" is immaterial?

(I apologize if this sounds argumentative, I've just heard "good and evil are social constructs" far too many times.)

Not really.  "Collapse" is not the only failure case.  Mass starvation is a clear failure state of a planned economy, but it doesn't necessarily burn through the nation's stock of proletariat laborers immediately.  In the same way that a person with a terminal illness can take a long time to die, a nation with failing systems can take a long time to reach the point where it ceases functioning at all.

How do lies affect Bayesian Inference?

(Relative likelihood notation is easier, so we will use that.)

I heard a thing.  Well, I more heard a thing about another thing.  Before I heard about it, I didn't know one way or the other at all.  My prior was the Bayesian null prior of 1:1.  Let's say the thing I heard is "Conspiracy thinking is bad for my epistemology".  Let's pretend it was relevant at the time, and didn't just come up out of nowhere.  What is the chance that someone would hold this opinion, given that they are not part of any conspiracy against me?  Maybe 50%?  If I heard it in a Rationality-influenced space, probably more like 80%?  Now, what is the chance that someone would share this as their opinion, given that they are involved in a conspiracy against me?  Somewhere between 95% and 100%, so let's say 99%.  Now, our prior is 1:1 and our likelihood ratio is 80:99, so our final prediction, of someone not being a conspirator vs. being a conspirator, is 80:99, or about 1:1.24.  Therefore, my expected probability of someone not being a conspirator went from 50% down to about 45%.  Huh.
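(For anyone who wants to check that arithmetic, here is the same calculation with exact fractions; the variable names are mine.)

```python
# The conspiracy example above, in odds form, with exact fractions.
from fractions import Fraction

prior_odds = Fraction(1, 1)                  # not-conspirator : conspirator
p_given_not_conspirator = Fraction(80, 100)  # chance of hearing it from a non-conspirator
p_given_conspirator = Fraction(99, 100)      # chance of hearing it from a conspirator

posterior_odds = prior_odds * (p_given_not_conspirator / p_given_conspirator)
print(posterior_odds)                                # 80/99, i.e. roughly 1:1.24
print(float(posterior_odds / (posterior_odds + 1)))  # ~0.447: about 45% not-conspirator
```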

For the love of all that is good, please shoot holes in this and tell me I screwed up somewhere.