Unnamed

Comments
The Counterfactual Quiet AGI Timeline
Unnamed · 3h · 31

My leading guess is that a world without Yudkowsky, Bostrom, or any direct replacement looks a lot more similar to our actual world, at least by 2025. Perhaps: the exact individuals and organizations (and corporate structures) leading the way are different, progress is a bit behind where it is in our world (perhaps by six months to a year at this point), and there is less attention to the possibility of doom and less focus on alignment work.

One thing that Yudkowsky et al. did is to bring more attention to the possibility of superintelligence and what it might mean, especially among the sort of techy people who could play a role in advancing ML/AI. But even without them, the possibility of thinking machines was already a standard topic in intro philosophy classes, the Turing test was widely known, Deep Blue was a major cultural event, AI and robot takeover were standard topics in sci-fi, Moore's law was widely known, people like Kurzweil and Moravec were projecting when computers would pass human capability levels, and various people were trying to do what they could with the tech that they had. A lot of AI stuff was already in the groundwater for exactly that sort of person. So in nearby counterfactual worlds, as there are advances in neural nets, people still have ideas like trying to get these new & improved computers to be better than humans at Go, or to be much better chatbots.

Yudkowsky was also involved in networking, e.g. helping connect founders & funders. But that seems like a kind of catalyst role that speeds up the overall process slightly, rather than summoning it where it otherwise would be absent. The specific reactions that he catalyzed might not have happened without him, but it's the sort of thing where many people were pursuing similar opportunities and so the counterfactual involves some other combination of people doing something similar, perhaps a bit later or a bit less well.

High-level actions don’t screen off intent
Unnamed · 20d · 50

e.g., Betty could cause one more girl to have a mentor either by volunteering as a Big Sister or by donating money to the Big Sisters program.

In the case where she volunteers and mentors the girl directly, it takes lots of bits to describe her influence on the girl being mentored. If you try to stick to the actions->consequences framework for understanding her influence, then Betty (like a gamer) is engaging in hundreds of actions per minute in her interactions with the girl - body language, word choice, tone of voice, timing, etc. What the girl gets out of the mentoring may not depend on every single one of these actions but it probably does depend on patterns in these micro-actions. So it seems more natural to think about Betty's fine-grained influence on the girl she's mentoring in terms of Betty's personality, motivations, etc., and how well she and the girl she's mentoring click, rather than exclusively trying to track how that's mediated by specific actions. If you wanted to know how the mentoring will go for the girl, you'd probably have questions about those sorts of things - "What is Betty like?", "How is she with kids?", etc.

In the case where Betty donates the money, the girl being mentored will still experience the mentoring in full detail, but most of those details won't be coming directly from Betty so Betty's main role is describable with just a few bits (gave $X which allowed them to recruit & support one more Big Sister). e.g., For the specific girl who got a mentor thanks to Betty's donation, it probably doesn't make any difference what facial expression Betty was making as she clicked the "donate" button, or whether she's kind or bitter at the world. Though there are still some indirect paths to Betty influencing fine-grained details for girls who receive Big Sisters mentoring, as the post notes, since the organization could change its operations to try to appeal to potential donors like Betty.

High-level actions don’t screen off intent
Unnamed · 20d · 40

I think of this as coarse-grained influence vs. fine-grained influence, which basically comes down to how many bits are needed to specify the nature of the influence.

An epistemic advantage of working as a moderate
Unnamed · 1mo · 53

I think there is a fair amount of overlap between the epistemic advantages of being a moderate (seeking incremental change from AI companies) and the epistemic disadvantages.

Many of the epistemic advantages come from being more grounded or having tighter feedback loops. If you're trying to do the moderate-reformer thing, you need to justify yourself to well-informed people who work at AI companies, you'll get pushback from them, and you're trying to get through to them.

But those feedback loops are with reality as interpreted by people at AI companies. So, to some degree, your thinking will get shaped to resemble their thinking. Those feedback loops will guide you towards relying on assumptions that they see as not requiring justification, using framings that resonate with them, accepting constraints that they see as binding, etc. Which will tend to lead to seeing the problem and the landscape from something more like their perspective, sharing their biases & blindspots, etc.

Negative utilitarianism is more intuitive than you think
Unnamed · 2mo · 20

Instead, the intuitions at play here are mainly about how to set up networks of coordination between agents. This includes:

Voluntary interactions: limiting interaction with those who didn't consent, limiting effects (especially negative effects) on those who didn't opt in (e.g. Mill's harm principle)

Social roles: interacting with people in a particular capacity rather than needing to consider the person in all of their humanity & complexity

Boundaries/membranes: limiting what aspects of an agent and its inner workings others have access to or can influence

Negative utilitarianism is more intuitive than you think
Unnamed · 2mo · 26

Seems like a mistake to write these intuitions into your axiology.

My Empathy Is Rarely Kind
Unnamed · 2mo · 70

Sounds like this agency thing is so central to how you orient to the world that you're bringing it with you when you try to empathize with someone (at least if they're human).

If you want to try to do something more like the sort of empathizing that other people talk about, you could try:

  1. Imagining some other scenario where the way that you would relate to that other scenario matches the way that this person is relating to the scenario that they're in.
  2. Finding a case where someone is showing some of the agency thing in a context where you wouldn't, and empathizing with them in that scenario. (This can also help with doing 1, if you also notice how you relate to this context where you wouldn't show that agency thing, because it adds to your collection of 'other scenarios' which could match how someone else engages with a scenario where you would have more agency.)
  3. Empathizing with people in contexts that are orthogonal to the agency thing, where how much agency a person is showing isn't central to what's happening. (Perhaps someone liking a book/movie/hobby/etc. that you don't like?)
Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
Unnamed · 2mo · 40

It's standard for parents to have these sorts of models about their kids' emotions, e.g. "She's cranky because she didn't get her nap."

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
Unnamed · 2mo · 40

Related: Feeling Rational

Generalized Hangriness: A Standard Rationalist Stance Toward Emotions
Unnamed · 2mo · 112

I've seen hangriness-style advice circulating on Twitter (via Zvi, so perhaps in the rationalist milieu) and TikTok (not in the rationalist milieu, afaict).

If you feel like you hate everyone, eat
If you feel like everyone hates you, sleep
If you feel like you hate yourself, shower
If you feel like everyone hates everyone, go outside
If you feel overwhelmed by your thoughts, write them down
If you feel lost and alone, call a friend 
If you feel stuck in the past, plan for the future
...

Posts

- Using smart thermometer data to estimate the number of coronavirus cases · 29 points · 6y · 8 comments
- Case Studies Highlighting CFAR’s Impact on Existential Risk · 11 points · 9y · 1 comment
- Results of a One-Year Longitudinal Study of CFAR Alumni · 53 points · 10y · 35 comments
- The effect of effectiveness information on charitable giving · 23 points · 11y · 0 comments
- Practical Benefits of Rationality (LW Census Results) · 31 points · 12y · 5 comments
- Participation in the LW Community Associated with Less Bias · 56 points · 13y · 50 comments
- [Link] Singularity Summit Talks · 14 points · 13y · 3 comments
- Take Part in CFAR Rationality Surveys · 26 points · 13y · 4 comments
- Meetup : Chicago games at Harold Washington Library (Sun 6/17) · 2 points · 13y · 0 comments
- Meetup : Weekly Chicago Meetups Resume 5/26 · 2 points · 13y · 0 comments
Wikitag Contributions

- Alief · 4 years ago · (+111/-18)
- History of Less Wrong · 5 years ago · (+154/-92)
- Virtues · 5 years ago · (+105/-99)
- Time (value of) · 5 years ago · (+117)
- Aversion · 14 years ago
- Less Wrong/2009 Articles/Summaries · 14 years ago · (+20261)
- Puzzle Game Index · 15 years ago · (+26)