Unnamed

Comments
Experiment: Test your priors on Bernoulli processes.
Unnamed · 5d · 30

Explanation:

Hypothesis 1: The data are generated by a beta-binomial distribution, where first a probability x is drawn from a beta(a,b) distribution, and then 5 flips are run using that probability x. I had my coding assistant write code to solve for the (a,b) that best fit the observed data and show the resulting distribution for that (a,b). It gave (a,b) = (0.6032, 0.6040) and a distribution that was close but still meaningfully off given the million-experiment sample size (most notably, only 0.156 of draws from this model had 2 R's, compared with the observed 0.162).
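
The comment doesn't show the assistant's code; here is a minimal sketch of such a fit using scipy. The observed histogram is a stand-in — only the 0.162 at k = 2 is quoted above, and the other bins are illustrative values roughly consistent with the final model described below.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

# Observed fraction of experiments with k R's among the 4 shown flips.
# Placeholder data: only the 0.162 at k = 2 is quoted in the comment.
obs = np.array([0.2528, 0.1662, 0.1620, 0.1662, 0.2528])
k = np.arange(5)

def loss(log_ab):
    a, b = np.exp(log_ab)  # optimize in log space so a, b stay positive
    return np.sum((betabinom.pmf(k, 4, a, b) - obs) ** 2)

fit = minimize(loss, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print(np.exp(fit.x))  # best-fit (a, b); the comment reports (0.6032, 0.6040)
```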

Hypothesis 2: With probability c the data points were drawn from a beta-binomial distribution, and with probability 1-c the experiment instead used p=0.5. This came to mind as a simple process that would result in more experiments with exactly 2 R's out of 4. With my coding assistant writing the code to solve for the 3 parameters (a,b,c), this model came extremely close to the observed data: the largest error was 0.0003, and the difference was not statistically significant. This gave (a,b,c) = (0.5220, 0.5227, 0.9237).
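
Extending the same sketch to the three-parameter mixture (again with the placeholder histogram; the reparametrizations just keep a, b positive and c in (0,1)):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, binom

obs = np.array([0.2528, 0.1662, 0.1620, 0.1662, 0.2528])  # placeholder, as above
k = np.arange(5)

def mixture_pmf(a, b, c):
    # With probability c, p ~ beta(a, b); with probability 1-c, p = 0.5.
    return c * betabinom.pmf(k, 4, a, b) + (1 - c) * binom.pmf(k, 4, 0.5)

def loss(params):
    a, b = np.exp(params[:2])              # keep a, b > 0
    c = 1.0 / (1.0 + np.exp(-params[2]))   # keep c in (0, 1)
    return np.sum((mixture_pmf(a, b, c) - obs) ** 2)

fit = minimize(loss, x0=np.array([0.0, 0.0, 2.0]), method="Nelder-Mead")
a, b = np.exp(fit.x[:2])
c = 1.0 / (1.0 + np.exp(-fit.x[2]))
print(a, b, c)  # the comment reports (0.5220, 0.5227, 0.9237)
```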

I could have stopped there, since the fit was good enough that anything else I'd do would probably only differ in its predictions after a few decimal places, but instead I went on to Hypothesis 3: the beta distribution is symmetric with a=b, so the probability is 0.5 with probability 1-c and drawn from beta(a,a) with probability c. I solved for (a,c) with more sigfigs than my previous code used (saving the rounding till the end), and found that this model was not statistically significantly worse than the asymmetric beta from Hypothesis 2. I decided to go with this one because on priors a symmetric distribution is more likely than an asymmetric distribution that is extremely close to being symmetric. Final result: draw from a beta(0.5223485278, 0.5223485278) distribution with probability 0.9237184759 and use p=0.5 with probability 0.0762815241. This yields the above conditional probabilities out to 6 digits.
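
For concreteness, a sketch of that final calculation: the posterior predictive for the 5th flip under the fitted mixture, which reproduces the five conditional probabilities (for 0 through 4 R's among the shown flips) to six digits.

```python
import numpy as np
from scipy.stats import betabinom, binom

A = 0.5223485278  # fitted symmetric beta parameter
C = 0.9237184759  # fitted weight on the beta-draw component

k = np.arange(5)                # number of R's in the 4 shown flips
bb = betabinom.pmf(k, 4, A, A)  # beta-binomial component
bn = binom.pmf(k, 4, 0.5)       # p = 0.5 component

# Posterior probability that this experiment used the beta draw, given k R's:
w = C * bb / (C * bb + (1 - C) * bn)

# The beta(A, A) posterior predictive after k R's in 4 flips is (k+A)/(4+2A);
# the p = 0.5 component predicts 0.5 regardless.
pred = w * (k + A) / (4 + 2 * A) + (1 - w) * 0.5
print(np.round(pred, 6))  # [0.111019 0.324513 0.5 0.675487 0.888981]
```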

OpenAI #15: More on OpenAI’s Paranoid Lawfare Against Advocates of SB 53
Unnamed · 6d · 153

Chris Lehane, the inventor of the original term ‘vast right wing conspiracy’ back in the 1990s to dismiss the (true) allegations against Bill Clinton by Monica Lewinsky

This is inaccurate in a few ways.

Lehane did not invent the term "vast right wing conspiracy", AFAICT; Hillary Clinton was the first person to use that phrase in reference to criticisms of the Clintons, in a 1998 interview. Some sources (including Lehane's Wikipedia page) attribute the term to Lehane's 1995 memo Communication Stream of Conspiracy Commerce, but I searched the memo for that phrase and it does not appear there. Lehane's Wikipedia page cites (and apparently misreads) this SFGate article, which discusses Lehane's memo in connection with Clinton's quote but does not actually attribute the phrase to Lehane.

The memo's use of the term "conspiracy" was about how the right spread conspiracy theories about the Clintons, not about how the right was engaged in a conspiracy against the Clintons. Its primary example involved claims about Vince Foster, which it (like present-day Wikipedia) described as "conspiracy theories" (as you can see by searching the memo for the string "conspirac").

Also, Lehane's memo was published in July 1995, which was before the Clinton-Lewinsky sexual relationship began (November 1995), and so it obviously wasn't a response to allegations about that relationship.

Lehane's memo did include some negative stories about the Clintons that turned out to be accurate, such as the Gennifer Flowers allegations. So there is some legitimate criticism of Lehane's memo, including how it presented all of these negative stories as part of a pipeline for spreading unreliable allegations about the Clintons and didn't take seriously the possibility that they might be accurate. But it doesn't look like his work was mainly focused on dismissing true allegations.

The Most Common Bad Argument In These Parts
Unnamed · 7d · 21

Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time.

This description skips over the fallacy part of the fallacy. On its own, the sentence in quotes sounds like a potentially productive contribution to a discussion.

Experiment: Test your priors on Bernoulli processes.
Unnamed · 7d · 31

[0.111019, 0.324513, 0.5, 0.675487, 0.888981]

The Counterfactual Quiet AGI Timeline
Unnamed · 14d · 2118

My leading guess is that a world without Yudkowsky, Bostrom, or any direct replacement looks a lot more similar to our actual world than this post suggests, at least by 2025. Perhaps: the exact individuals and organizations (and corporate structures) leading the way are different, progress is a bit behind where it is in our world (perhaps by 6 months to a year at this point), and there is less attention to the possibility of doom and less focus on alignment work.

One thing that Yudkowsky et al. did was to bring more attention to the possibility of superintelligence and what it might mean, especially among the sort of techy people who could play a role in advancing ML/AI. But without them, the possibility of thinking machines was already a standard topic in intro philosophy classes, the Turing test was widely known, Deep Blue was a major cultural event, AI and robot takeover were standard topics in sci-fi, Moore's law was widely known, people like Kurzweil and Moravec were projecting when computers would pass human capability levels, and various people were trying to do what they could with the tech that they had. A lot of AI stuff was already in the groundwater for that sort of person. So in nearby counterfactual worlds, as there are advances in neural nets, people still have ideas like trying to get these new & improved computers to be better than humans at Go, or to be much better chatbots.

Yudkowsky was also involved in networking, e.g. helping connect founders & funders. But that seems like a kind of catalyst role that speeds up the overall process slightly, rather than summoning it where it otherwise would be absent. The specific reactions that he catalyzed might not have happened without him, but it's the sort of thing where many people were pursuing similar opportunities and so the counterfactual involves some other combination of people doing something similar, perhaps a bit later or a bit less well.

High-level actions don’t screen off intent
Unnamed · 1mo · 50

e.g., Betty could cause one more girl to have a mentor either by volunteering as a Big Sister or by donating money to the Big Sisters program.

In the case where she volunteers and mentors the girl directly, it takes lots of bits to describe her influence on the girl being mentored. If you try to stick to the actions->consequences framework for understanding her influence, then Betty (like a gamer) is engaging in hundreds of actions per minute in her interactions with the girl - body language, word choice, tone of voice, timing, etc. What the girl gets out of the mentoring may not depend on every single one of these actions but it probably does depend on patterns in these micro-actions. So it seems more natural to think about Betty's fine-grained influence on the girl she's mentoring in terms of Betty's personality, motivations, etc., and how well she and the girl she's mentoring click, rather than exclusively trying to track how that's mediated by specific actions. If you wanted to know how the mentoring will go for the girl, you'd probably have questions about those sorts of things - "What is Betty like?", "How is she with kids?", etc.

In the case where Betty donates the money, the girl being mentored will still experience the mentoring in full detail, but most of those details won't be coming directly from Betty, so Betty's main role is describable with just a few bits (gave $X, which allowed them to recruit & support one more Big Sister). For example, for the specific girl who got a mentor thanks to Betty's donation, it probably doesn't make any difference what facial expression Betty was making as she clicked the "donate" button, or whether she's kind or bitter at the world. Though there are still some indirect paths to Betty influencing fine-grained details for girls who receive Big Sisters mentoring, as the post notes, since the organization could change its operations to try to appeal to potential donors like Betty.

High-level actions don’t screen off intent
Unnamed · 1mo · 40

I think of this as coarse-grained influence vs. fine-grained influence, which basically comes down to how many bits are needed to specify the nature of the influence.

An epistemic advantage of working as a moderate
Unnamed · 2mo · 53

I think there is a fair amount of overlap between the epistemic advantages of being a moderate (seeking incremental change from AI companies) and the epistemic disadvantages.

Many of the epistemic advantages come from being more grounded or having tighter feedback loops. If you're trying to do the moderate reformer thing, you need to justify yourself to well-informed people who work at AI companies, you'll get pushback from them, and you're trying to get through to them.

But those feedback loops are with reality as interpreted by people at AI companies. So, to some degree, your thinking will get shaped to resemble their thinking. Those feedback loops will guide you towards relying on assumptions that they see as not requiring justification, using framings that resonate with them, accepting constraints that they see as binding, etc., which will tend to lead to seeing the problem and the landscape from something more like their perspective, sharing their biases & blindspots, and so on.

Negative utilitarianism is more intuitive than you think
Unnamed · 2mo · 20

Instead, the intuitions at play here are mainly about how to set up networks of coordination between agents. This includes:

Voluntary interactions: limiting interaction with those who didn't consent, limiting effects (especially negative effects) on those who didn't opt in (e.g. Mill's harm principle)

Social roles: interacting with people in a particular capacity rather than needing to consider the person in all of their humanity & complexity

Boundaries/membranes: limiting what aspects of an agent and its inner workings others have access to or can influence

Negative utilitarianism is more intuitive than you think
Unnamed · 2mo · 26

Seems like a mistake to write these intuitions into your axiology.

Posts
29 · Using smart thermometer data to estimate the number of coronavirus cases · 6y · 8
11 · Case Studies Highlighting CFAR’s Impact on Existential Risk · 9y · 1
53 · Results of a One-Year Longitudinal Study of CFAR Alumni · 10y · 35
23 · The effect of effectiveness information on charitable giving · 12y · 0
31 · Practical Benefits of Rationality (LW Census Results) · 12y · 5
56 · Participation in the LW Community Associated with Less Bias · 13y · 50
14 · [Link] Singularity Summit Talks · 13y · 3
26 · Take Part in CFAR Rationality Surveys · 13y · 4
2 · Meetup : Chicago games at Harold Washington Library (Sun 6/17) · 13y · 0
2 · Meetup : Weekly Chicago Meetups Resume 5/26 · 13y · 0
Wikitag Contributions
Alief · 4 years ago · (+111/-18)
History of Less Wrong · 5 years ago · (+154/-92)
Virtues · 5 years ago · (+105/-99)
Time (value of) · 5 years ago · (+117)
Aversion · 14 years ago
Less Wrong/2009 Articles/Summaries · 14 years ago · (+20261)
Puzzle Game Index · 15 years ago · (+26)