Tobes

I wanted to get some perspective on my life so I wrote my own obituary (in a few different ways).

They ended up being focussed on my relationship with ambition. The first is below:

Auto-obituary attempt one:

Thesis title: “The impact of the life of Toby Jolly”
a simulation study on a human connected to the early 21st century’s “Effective Altruism” movement

Submitted by:
Dxil Sind 0239β
for the degree of Doctor of Pre-Post-humanities
at Sopdet University 
August 2542

Abstract
Many (>500,000,000) papers have been published on the Effective Altruism (EA) movement, its prominent members, and their impact on the development of AI and the singularity during the 21st century’s time of perils. However, this is the first study of the life of Toby Jolly, a relatively obscure figure who was connected to the movement for many years. Through analysing the subject’s personal blog posts, self-referential tweets, and career history, I was able to generate a simulation centred on the life and mind of Toby. This simulation was run 100,000,000 times with a variety of parameters and the results were analysed. In this thesis I make the case that Toby Jolly had, through his work, a non-zero, positively-signed impact on the creation of our glorious post-human Emperium (Praise be to Xraglao the Great). My analysis of the simulation data suggests that his impact came via a combination of his junior operations work and minor policy projects, but also via his experimental events and self-deprecating writing.

One unusual way he contributed was by consistently trying to draw attention to how his thoughts and actions were so often the product of his own absurd and misplaced sense of grandiosity; a delusion driven by what he himself would describe as a “desperate and insatiable need to matter”. This work marginally increased self-awareness and psychological flexibility within the EA community. This flexibility subsequently improved the movement's ability to handle its minor role in the negotiations needed to broker power during the Grand Transition, thereby helping avoid catastrophe.

The outcomes of our simulations suggest that through his life and work Toby decreased the likelihood of a humanity-ending event by 0.0000000000024%. He is therefore responsible for an expected 18,600,000,000,000,000,000 quality-adjusted experience years across the light-cone, before the heat-death of the universe (using typical FLOP standardisation). Toby mattered.

Ethics note: as per standard imperial research requirements, we asked the first 100 simulations of Toby if they were happy being simulated. In all cases, he said, “Sure, I actually kind of suspected it…look, I have this whole blog about it.”

See my other auto-obituaries here :)

Tobes

I'd appreciate seeing the post that you mentioned, and part of me does worry that you are right.

Part of me worries that this is all just a form of group mental illness. That I have been sucked into a group that was brought together through a pathological obsession with groundless abstract prediction and a sad-childhood-memories-induced intuition that narratives about the safety of powerful actors are usually untrustworthy. That fears about AI are an extreme shadow of these underlying group beliefs and values. That we are just endlessly group-reinforcing our mental-ill-health-backed doomy predictions about future powerful entities. I put weight on this part of me having some or all of the truth.

But I have other parts that tell me that these ideas all just make sense. In fact, the more grounded, calm, and in touch with my thoughts and feelings I am, the more I think/feel that acknowledging AI risk is the healthiest thing that I do.