We have a number of charities working on different aspects of AGI risk:

-  The theory of the alignment problem (MIRI, FHI, and others)

-  How to think about problems well (CFAR)

However, we don't have a body dedicated to creating and testing a coherent communication strategy to help postpone the development of dangerous AIs.

I'm organising an online discussion next Saturday about what we should do about this issue.

To find out when people can make it, I've created a Doodle poll here. I'm trusting that Doodle handles timezones well. The time slots should be between 1200 and 2300 UTC; let me know if they are not.

We'll be using the optimal brainstorming methodology.

Send me a message if you want an invite once the time has been decided.

I will take notes and post them here afterwards.

7 comments

I didn't organise this one so well, so it was a wash. No concrete plans for next time yet; other priorities may interfere.

My apologies for not being present. I did not put it into my calendar, and it slipped my mind. :(

The title says 2017/6/27. Should it be 2017-05-27?

Thanks! Fixing now.

Also, it looks like the last time slot is 2200 UTC. I can participate from 1900 onwards.

I will promote this in the AI Safety reading group tomorrow evening.

Can I get an email address to send the hangout invite to?

Also, I've nailed down the time, in case people see this in the comments:

time to be decided UTC

Grin.
