verbalshadow


Yes, that is one of the things in possibility space. I don't think unaligned means safe; we work with unaligned people all the time, and some of them aren't safe either.

The main thing I was hoping people would take from this is that an unaligned AI is a near-certainty. Alignment isn't the one-and-done goal so many people treat it as. Even if you successfully align an AI, all it takes is one failure to align and the genie is out of the bottle. One single point of failure becomes a cascading failure.

  • So let's imagine an ASI that works on improving itself. How does it ensure the alignment of an intelligence greater than itself?
  • With hundreds, maybe thousands, of people working to create AI, someone will fail at alignment.
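The second point is really an argument about independent trials. A minimal sketch, assuming each alignment attempt succeeds independently with some probability p (the values of p and n here are purely illustrative, not claims about real projects):

```python
# If each of n independent alignment attempts succeeds with probability p,
# the chance that *every* attempt succeeds is p ** n. Even a very high
# per-attempt success rate collapses as the number of attempts grows.

def p_all_aligned(p: float, n: int) -> float:
    """Probability that all n independent attempts succeed."""
    return p ** n

for n in (10, 100, 1000):
    # With p = 0.99: roughly 0.90 at n=10, 0.37 at n=100,
    # and well under 0.001 at n=1000.
    print(n, p_all_aligned(0.99, n))
```

This is why "one single point of failure" matters: the overall outcome is gated on every attempt succeeding, so the failure probability compounds.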

The future is unaligned. 

Are we taking that seriously? Working on alignment is great, but it is not the future we should be prepping for. Do you have a plan? I don't yet, but I'm thinking about the world where intelligences greater than mine abound (already true) and where we don't share the same interests (also already true).

This can be addressed by peer-to-peer tech and federation. PeerTube uses a few techniques that make it more tenable: hosting on the site itself, site-to-site sharing (ActivityPub), and BitTorrent to absorb heavy demand. The BitTorrent part is on by default, and there are already more than a few instances people can share on.
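To make the federation part concrete, here is a hypothetical sketch of how a PeerTube-like instance might announce a new video to federated peers with an ActivityPub-style activity. The field names follow the ActivityStreams vocabulary, but the account URL, video URL, and magnet link are made-up placeholders, not real PeerTube output:

```python
import json

def make_video_announce(actor: str, video_url: str, magnet: str) -> dict:
    """Build an ActivityPub-style Announce activity for a new video.

    Other instances that follow `actor` receive this and can mirror or
    list the video; the magnet URI lets viewers fetch chunks from peers,
    spreading load off the origin server.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Announce",
        "actor": actor,
        "object": {
            "type": "Video",
            "id": video_url,
            # BitTorrent magnet link for peer-assisted delivery.
            "url": [{"href": magnet, "mediaType": "application/x-bittorrent;x-scheme-handler/magnet"}],
        },
    }

activity = make_video_announce(
    "https://example-instance.org/accounts/alice",
    "https://example-instance.org/videos/123",
    "magnet:?xt=urn:btih:0000000000000000000000000000000000000000",
)
print(json.dumps(activity, indent=2))
```

The design point is that the origin site only has to serve the first copies; federation spreads discovery across instances, and the torrent swarm spreads the bandwidth across viewers.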