
Hmm. I agree that values are important: what does a superintelligent AI value?

My answer: to become superintelligent, an AI must value learning about increasingly complex things.

If you accept this point, then a superintelligent AI would prefer to study more complex phenomena (humanity) over less complex ones (computing pi).

So, the superintelligent AI would prefer to keep humans and their atoms around to study them.

One of my points was that humanity is complex enough that an AI couldn't simulate it perfectly without the real thing.

So, a superintelligent AI would keep us around because it would want to observe humanity, which can involve observing us in reality. I doubt an AI could "successfully calibrate simulations [of humanity]", as you mentioned.

Really engaging post. You've got a compelling style! Thanks for writing. I found it funny and thought-provoking.

> There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.

There might have been some irony in the article. But good tips!

Thanks, Gunnar. I've added your recipe to the article at https://www.tomdekan.com/design-blindness

Actually, Gunnar_Zarncke, may I add your soup recipe to the article as a fun addition?

I agree with you. You have to design/build a lot of things to develop a design instinct. 

In case you find it helpful, one 'executable strategy' for developing a design instinct is making art. This could be sketching your room, painting a tree in your garden, or making 3D models in Blender. These design activities tend to transfer well to other domains.

(Executable strategy concept: https://notes.andymatuschak.org/Executable_strategy)
