LESSWRONG

Unnamed

Comments (sorted by newest)
Paranoia: A Beginner's Guide
Unnamed · 2d

I downvoted this post because it felt slippery. I kept running into parts that didn't fit together or otherwise seemed off.

If this were a Google Doc I might leave a bunch of comments quickly pointing to examples. I'll do that in list format here.

  • The post highlights the market for lemons model, but then the examples keep not fitting the lemons setup. Covid misinformation wasn't an adverse selection problem, nor was having spies in the government, nor was the Madman Theory situation.
  • "there are roughly three big strategies" is the kind of claim that I generally start out skeptical of, and this post failed to back it up.
  • The description of the CDC and what happened during covid seems basically inaccurate; e.g., that's not how concerns about surfaces played out.
  • The Madman Theory example doesn't feel like an example of the Nixon administration being in an adversarial information situation or being paranoid. It's trying to make threats in a game theory situation.
  • From the way you talk about the 3 strategies, I get the sense that you're saying: when you're in an adversarial information scenario, here are 3 options to consider. But in the examples of each strategy, the other 2 strategies don't really make sense. They are structurally different scenarios.
  • A thing that I think of as paranoia centrally involves selective skepticism: having some target that you distrust, but potentially being very credulous towards other sources of views on that topic such as an ingroup that also identifies that target as untrustworthy or engaging in confirmatory reasoning about your own speculative theories that don't match the untrusted target. That's missing from your theory and your examples.
  • The thing you're calling "blinding" includes approaches that seem pretty different. A source I distrust is claiming X, and X seems like the sort of thing they'd want me to believe, so I'll (a) doubt X, (b) find some other method of figuring out whether X is true that doesn't rely on that source, or (c) go someplace else so that X doesn't matter to me. I associate paranoia mainly with (a), or with doing an epistemically atrocious job of (b), or a self-deceiving variant of (c).
  • Despite all the examples, there's a lack of examples of the core thing: here's an epistemically adversarial situation, here's a person being paranoid, here's what that looks like, here's how that's relatively appropriate/understandable even if not optimal, and (perhaps also) here's how that involves a bunch of costs/badness (from the concluding paragraph this is maybe part of the core, though that wasn't apparent before that point).
8 Questions for the Future of Inkhaven
Unnamed · 4d

As a reader, I wish there were more filtering or signal boosting to help bring some Inkhaven posts to my attention.

There are a few ways that could happen. It could be something reddit-like, where there's a centralized place which at least has links to all the Inkhaven posts and people can upvote them. It could be something like LW curation, where some moderators pick a few posts to curate (possibly some of them could even be cross-posted and curated on LW). It could be a linkpost-style thing (as Vaniver has done some of) where people post links to some of their favorite Inkhaven posts.

I could imagine setting up Inkhaven with the intention of having the residents do linkposts. Maybe each Sunday is linkpost day, when residents are encouraged to make their daily post a linkpost (with no word requirement) where they link to 1-3 of their best posts from the past week, 3-10 other Inkhaven posts from the past week that they liked, and optionally a few things from elsewhere. Then on Monday there could be a centralized roundup post which links to all of those linkposts and all the posts which got multiple recommendations in those linkposts.

Experiment: Test your priors on Bernoulli processes.
Unnamed · 1mo

Explanation:

Hypothesis 1: The data are generated by a beta-binomial distribution, where first a probability x is drawn from a beta(a,b) distribution, and then 5 Bernoulli trials are run using that probability x. I had my coding assistant write code to solve for the a,b that best fit the observed data and show the resulting distribution for that a,b. It gave (a,b) = (0.6032,0.6040) and a distribution that was close but still meaningfully off given the million-experiment sample size (most notably, only .156 of draws from this model had 2 R's compared with the observed .162).

Hypothesis 2: With probability c the data points were drawn from a beta-binomial distribution, and with probability 1-c the experiment instead used p=0.5. This came to mind as a simple process that would result in more experiments with exactly 2 R's out of 4. With my coding assistant writing the code to solve for the 3 parameters a,b,c, this model came extremely close to the observed data - the largest error was .0003 and the difference was not statistically significant. This gave (a,b,c) = (0.5220,0.5227,0.9237).

I could have stopped there, since the fit was good enough that anything else I'd do would probably only differ in its predictions after a few decimal places, but instead I went on to Hypothesis 3: the beta distribution is symmetric with a=b, so the probability is 0.5 with probability 1-c and drawn from beta(a,a) with probability c. I solved for a,c with more sigfigs than my previous code used (saving the rounding till the end), and found that it was not statistically significantly worse than the asymmetric beta from Hypothesis 2. I decided to go with this one because on priors a symmetric distribution is more likely than an asymmetric distribution that is extremely close to being symmetric. Final result: draw from a beta(0.5223485278, 0.5223485278) distribution with probability 0.9237184759 and use p=0.5 with probability 0.0762815241. This yields the above conditional probabilities out to 6 digits.
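For concreteness, the final Hypothesis-3 model can be sketched in a few lines of Python. The parameter values are the ones quoted above; the structure (4 observed Bernoulli draws per experiment, with the quiz asking for the probability that the next draw is an R given 0-4 R's so far) is my reading of the setup, not something stated explicitly here.

```python
from math import comb, exp, lgamma

A = 0.5223485278   # fitted beta(a, a) parameter (from the comment)
C = 0.9237184759   # mixture weight on the beta-binomial component
N = 4              # observed draws per experiment (my assumption)

def log_beta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def beta_binom_pmf(k, n, a, b):
    # P(k successes in n draws | p ~ beta(a, b))
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def mixture_pmf(k):
    # With prob C, p ~ beta(A, A); with prob 1-C, p = 0.5 exactly
    return C * beta_binom_pmf(k, N, A, A) + (1 - C) * comb(N, k) * 0.5 ** N

def p_next_r(k):
    # Posterior predictive P(next draw is R | k R's in the first N draws).
    # Beta component: posterior mean of p is (k + A) / (N + 2A).
    num = (C * beta_binom_pmf(k, N, A, A) * (k + A) / (N + 2 * A)
           + (1 - C) * comb(N, k) * 0.5 ** N * 0.5)
    return num / mixture_pmf(k)

preds = [p_next_r(k) for k in range(N + 1)]  # preds[0] ≈ 0.111
```

By symmetry, preds[2] is exactly 0.5 and preds[k] + preds[4-k] = 1; with these parameters the values come out to approximately [0.111, 0.325, 0.5, 0.675, 0.889], matching the answer posted in the other Bernoulli-process comment below.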

OpenAI #15: More on OpenAI’s Paranoid Lawfare Against Advocates of SB 53
Unnamed · 1mo

Chris Lehane, the inventor of the original term ‘vast right wing conspiracy’ back in the 1990s to dismiss the (true) allegations against Bill Clinton by Monica Lewinsky

This is inaccurate in a few ways.

Lehane did not invent the term "vast right wing conspiracy", AFAICT; Hillary Clinton was the first person to use that phrase in reference to criticisms of the Clintons, in a 1998 interview. Some sources (including Lehane's Wikipedia page) attribute the term to Lehane's 1995 memo Communication Stream of Conspiracy Commerce, but I searched the memo for that phrase and it does not appear there. Lehane's Wikipedia page cites (and apparently misreads) this SFGate article, which discusses Lehane's memo in connection with Clinton's quote but does not actually attribute the phrase to Lehane.

The memo's use of the term "conspiracy" was about how the right spread conspiracy theories about the Clintons, not about how the right was engaged in a conspiracy against the Clintons. Its primary example involved claims about Vince Foster which it (like present-day Wikipedia) described as "conspiracy theories" (as you can see by searching the memo for the string "conspirac").

Also, Lehane's memo was published in July 1995, before the Clinton-Lewinsky sexual relationship began (Nov 1995), and so obviously wasn't a response to allegations about that relationship.

Lehane's memo did include some negative stories about the Clintons that turned out to be accurate, such as the Gennifer Flowers allegations. So there is some legitimate criticism of Lehane's memo, including how it presented all of these negative stories as part of a pipeline for spreading unreliable allegations about the Clintons, and didn't take seriously the possibility that they might be accurate. But it doesn't look like his work was mainly focused on dismissing true allegations.

The Most Common Bad Argument In These Parts
Unnamed · 1mo

Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time.

This description skips over the fallacy part of the fallacy. On its own, the sentence in quotes sounds like a potentially productive contribution to a discussion.

Experiment: Test your priors on Bernoulli processes.
Unnamed · 1mo

[0.111019, 0.324513, 0.5, 0.675487, 0.888981]

The Counterfactual Quiet AGI Timeline
Unnamed · 1mo

My leading guess is that a world without Yudkowsky, Bostrom, or any direct replacement looks a lot more similar to our actual world, at least by 2025. Perhaps: the exact individuals and organizations (and corporate structures) leading the way are different, progress is a bit behind where it is in our world (perhaps by 6 months to a year at this point), there is less attention to the possibility of doom and less focus on alignment work.

One thing that Yudkowsky et al. did was to bring more attention to the possibility of superintelligence and what it might mean, especially among the sort of techy people who could play a role in advancing ML/AI. But without them, the possibility of thinking machines was already a standard topic in intro philosophy classes, the Turing test was widely known, Deep Blue was a major cultural event, AI and robot takeover were standard topics in sci-fi, Moore's law was widely known, people like Kurzweil and Moravec were projecting when computers would pass human capability levels, and various people were trying to do what they could with the tech that they had. A lot of AI stuff was in the groundwater, especially for the sort of techy people who could play a role in advancing ML/AI. So in nearby counterfactual worlds, as there are advances in neural nets, people still have ideas like trying to get these new & improved computers to be better than humans at Go, or to be much better chatbots.

Yudkowsky was also involved in networking, e.g. helping connect founders & funders. But that seems like a kind of catalyst role that speeds up the overall process slightly, rather than summoning it where it otherwise would be absent. The specific reactions that he catalyzed might not have happened without him, but it's the sort of thing where many people were pursuing similar opportunities and so the counterfactual involves some other combination of people doing something similar, perhaps a bit later or a bit less well.

High-level actions don’t screen off intent
Unnamed · 2mo

e.g., Betty could cause one more girl to have a mentor either by volunteering as a Big Sister or by donating money to the Big Sisters program.

In the case where she volunteers and mentors the girl directly, it takes lots of bits to describe her influence on the girl being mentored. If you try to stick to the actions->consequences framework for understanding her influence, then Betty (like a gamer) is engaging in hundreds of actions per minute in her interactions with the girl - body language, word choice, tone of voice, timing, etc. What the girl gets out of the mentoring may not depend on every single one of these actions but it probably does depend on patterns in these micro-actions. So it seems more natural to think about Betty's fine-grained influence on the girl she's mentoring in terms of Betty's personality, motivations, etc., and how well she and the girl she's mentoring click, rather than exclusively trying to track how that's mediated by specific actions. If you wanted to know how the mentoring will go for the girl, you'd probably have questions about those sorts of things - "What is Betty like?", "How is she with kids?", etc.

In the case where Betty donates the money, the girl being mentored will still experience the mentoring in full detail, but most of those details won't be coming directly from Betty, so Betty's main role is describable with just a few bits (gave $X, which allowed them to recruit & support one more Big Sister). For example, for the specific girl who got a mentor thanks to Betty's donation, it probably doesn't make any difference what facial expression Betty was making as she clicked the "donate" button, or whether she's kind or bitter at the world. Though there are still some indirect paths to Betty influencing fine-grained details for girls who receive Big Sisters mentoring, as the post notes, since the organization could change its operations to try to appeal to potential donors like Betty.

High-level actions don’t screen off intent
Unnamed · 2mo

I think of this as coarse-grained influence vs. fine-grained influence, which basically comes down to how many bits are needed to specify the nature of the influence.

An epistemic advantage of working as a moderate
Unnamed · 3mo

I think there is a fair amount of overlap between the epistemic advantages of being a moderate (seeking incremental change from AI companies) and the epistemic disadvantages.

Many of the epistemic advantages come from being more grounded or having tighter feedback loops. If you're trying to do the moderate reformer thing, you need to justify yourself to well-informed people who work at AI companies; you'll get pushback from them; you're trying to get through to them.

But those feedback loops run through reality as interpreted by people at AI companies. So, to some degree, your thinking will get shaped to resemble their thinking. Those feedback loops will guide you towards relying on assumptions that they see as not requiring justification, using framings that resonate with them, accepting constraints that they see as binding, etc. Which will tend to lead to seeing the problem and the landscape from something more like their perspective, sharing their biases & blindspots, etc.

Posts

  • Using smart thermometer data to estimate the number of coronavirus cases (6y)
  • Case Studies Highlighting CFAR’s Impact on Existential Risk (9y)
  • Results of a One-Year Longitudinal Study of CFAR Alumni (10y)
  • The effect of effectiveness information on charitable giving (12y)
  • Practical Benefits of Rationality (LW Census Results) (12y)
  • Participation in the LW Community Associated with Less Bias (13y)
  • [Link] Singularity Summit Talks (13y)
  • Take Part in CFAR Rationality Surveys (13y)
  • Meetup : Chicago games at Harold Washington Library (Sun 6/17) (13y)
  • Meetup : Weekly Chicago Meetups Resume 5/26 (14y)
Wikitag Contributions

  • Alief, 4 years ago (+111/-18)
  • History of Less Wrong, 5 years ago (+154/-92)
  • Virtues, 5 years ago (+105/-99)
  • Time (value of), 5 years ago (+117)
  • Aversion, 14 years ago
  • Less Wrong/2009 Articles/Summaries, 15 years ago (+20261)
  • Puzzle Game Index, 15 years ago (+26)