yanni kyriacos

Director & Movement Builder - AI Safety ANZ

Advisory Board Member (Growth) - Giving What We Can

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

Comments

I've decided to post something very weird because it might (in some small way) help shift the Overton Window on a topic: as long as the world doesn't go completely nuts due to AI, I think there is a 5%-20% chance I will reach something close to full awakening / enlightenment in about 10 years. Something close to this: 

Very quick thoughts on setting time aside for strategy, planning and implementation, since I'm into my 4th week of strategy development and experiencing intrusive thoughts about needing to hurry up on implementation:

  • I have a 52-week LTFF grant to do movement building in Australia (AI Safety).
  • I have set aside 4.5 weeks for research (interviews + landscape review + maybe survey) and strategy development (segmentation, targeting, positioning).
  • Then 1.5 weeks for planning (content, events, educational programs), during which I will get feedback from others on the plan and then iterate it.
  • This leaves me with 46/52 weeks to implement ruthlessly.

In conclusion, 6 weeks on strategy and planning seems about right. 2 weeks would have been too short, 10 weeks would have been too long, this porridge is juuuussttt rightttt.

Keen for feedback from people in similar positions.

Yeah, it is a private purchase, unlike eating, so it's less likely to create a social effect through abstaining (e.g. veganism). I will say though, I've been vegan for about 7 years and I don't think I've nudged anyone :|

I have an intuition that if you tell a bunch of people you're extremely happy almost all the time (e.g. walking around at 10/10) then many won't believe you, but if you tell them that you're extremely depressed almost all the time (e.g. walking around at 1/10) then many more would believe you. Do others have this intuition? Keen on feedback.

Two jobs in AI Safety Advocacy that AFAICT don't exist, but should and probably will very soon. Will EAs be the first to create them, though? There is a strong first-mover advantage waiting for someone:

1. Volunteer Coordinator - there will soon be a groundswell from the general population wanting to have a positive impact on AI. Most won't know how. A volunteer coordinator will help capture and direct their efforts positively, for example, by having them write emails to politicians.

2. Partnerships Manager - the President of the Voice Actors guild reached out to me recently. We had a surprising amount of crossover in concerns and potential solutions. Voice Actors are the canary in the coal mine. More unions (etc.) will follow very shortly. I imagine within a year there will be a formalised group of these different orgs advocating together.

Please help me find research on aspiring AI Safety folk!

I am two weeks into the strategy development phase of my movement building and almost ready to start ideating some programs for the year.

But I want these programs to solve the biggest pain points people experience when trying to have a positive impact in AI Safety.

Has anyone seen any research that looks at this in depth? For example, through an interview process and then a survey to quantify how painful the pain points are?

Some examples of pain points I've observed so far through my interviews with Technical folk:

  • I often felt overwhelmed by the vast amount of material to learn.
  • I felt there wasn't a clear way to navigate learning the required information.
  • I lacked an understanding of my strengths and weaknesses in relation to different AI Safety areas (i.e. personal fit / comparative advantage).
  • I lacked an understanding of my progress after getting started (e.g. am I doing well? Poorly? Fast enough?).
  • I regularly experienced fear of failure.
  • I regularly experienced fear of wasted effort / sunk cost.
  • Fear of admitting mistakes or starting over might prevent people from making necessary adjustments.
  • I found it difficult to identify my desired role / job (i.e. the end goal).
  • When I did think I knew my desired role, identifying the specific skills and knowledge required for it was difficult.
  • There is no clear career pipeline: do X, then Y, then Z, and then you have an A% chance of getting role B.
  • Finding time to get upskilled while working is difficult.
  • I found the funding ecosystem opaque.
  • A lot of discipline and motivation over potentially long periods was required to upskill.
  • I felt like nobody gave me realistic expectations as to what the journey would be like.

Thanks :) Uh, good question. Making some good links? Have you done much nondual practice? I highly recommend Loch Kelly :)

Hi Jonas! Would you mind saying a bit more about TMI + Seeing That Frees? Thanks!

Yesterday Greg Sadler and I met with the President of the Australian Association of Voice Actors. Like us, they've been lobbying for more and better AI regulation from government. I was surprised how much overlap we had in concerns and potential solutions:
1. Transparency and explainability of AI model data use (concern)

2. Importance of interpretability (solution)

3. Mis/dis information from deepfakes (concern)

4. Lack of liability for the creators of AI if any harms eventuate (concern + solution)

5. Unemployment without safety nets for Australians (concern)

6. Rate of capabilities development (concern)

They may even support the creation of an AI Safety Institute in Australia. Don't underestimate who could be allies moving forward!

Ilya Sutskever has left OpenAI https://twitter.com/ilyasut/status/1790517455628198322