by awg

This is a special post for quick takes by awg. Only they can create top-level comments.

"«Boundaries» and AI safety compilation" and "Embedded Agents" got me thinking:

Cancerous cells are misaligned subsystems with respect to the human body. Their misalignment results in behavior that violates the usual functional boundaries of other subsystems.


One thing I have observed in myself as I've followed AI more closely, especially as the pace has seemed to escalate in the past few weeks/months, is that my level of concern about climate change has dropped significantly. (Maybe irrationally, to some degree.) I find myself being bored by appeals to climate change risk at this point, especially longer-term risks. They feel paltry in comparison to the risks posed by AGI. Assuming timelines of less than 30-50 years, either AGI goes well and climate change becomes a solved problem, or AGI doesn't go well and climate change is no longer a concern.

A world model that has superintelligence in its future straightforwardly predicts that the expected harm from climate change is much smaller than standard estimates, which ignore this consideration. After making that update, the emotional impression of caring less correctly tracks the underlying concern.


EY gets mentioned in a recent Atlantic newsletter from writer Derek Thompson.