Comments

Seems like a useful tool to have available; glad someone's working on it.

Answer by plex, Dec 30, 2023

AI Safety Info's answer to "I want to help out AI Safety without making major life changes. What should I do?" is currently:

It's great that you want to help! Here are some ways you can learn more about AI safety and start contributing:

Learn More:

Learning more about AI alignment will provide you with good foundations for helping. You could start by absorbing content and thinking about challenges or possible solutions.

Consider these options:

Join the Community:

Joining the community is a great way to find friends who are interested and will help you stay motivated.

Donate, Volunteer, and Reach Out:

Donating to organizations or individuals working on AI safety can be a great way to provide support.

If you don’t know where to start, consider signing up for a navigation call with AI Safety Quest to learn what resources are out there and to find social support.

If you’re overwhelmed, you could look at our other article that offers more bite-sized suggestions.

Not all EA groups focus on AI safety; contact your local group to find out if it's a good match.

Life is Nanomachines

In every leaf of every tree
If you could look, if you could see
You would observe machinery
Unparalleled intricacy
 
In every bird and flower and bee
Twisting, churning, biochemistry
Sustains all life, including we
Who watch this dance, and know this key

Illustration: A magnified view of a vibrant green leaf, where molecular structures and biological nanomachines are visible. Hovering nearby, a bird's feathers reveal intricate molecular patterns. A bee is seen up close, its body showcasing complex biochemistry processes in the form of molecular chains and atomic structures. Nearby, a flower's petals and stem reveal the dance of biological nanomachines at work. Human silhouettes in the background observe with fascination, holding models of molecules and atoms.

Congratulations on launching!

Added you to the map, and your Discord to the list of communities, which is now a sub-page of aisafety.com.

One question: interpretability might well lead to systems powerful enough to pose an x-risk long before we have a strong enough understanding to direct a superintelligence, so publish-by-default seems risky. Are you considering adopting a non-publish-by-default policy? I know you talk about capabilities risks in general terms, but is this specific policy on the table?

Yeah, that could well be listed on https://ea.domains/. Would you be up for transferring it?

Internal Double Crux, a CFAR technique.

I think it's not super broadly known, but many CFAR techniques fit into that category, so it's around to some extent.

And yeah, brains are pretty programmable.

Right, it can be way easier to learn it live. My guess is you're doing something quite IDC-flavoured, but mixed with some other models of mind which IDC doesn't make explicit. Specific mind algorithms are useful, but exploring based on them and finding what fits you is often best.

Nice, glad you're getting value out of IDC and other mind stuff :)

Do you think an annotated reading list of mind stuff would be worth putting together?
