How are you approaching cognitive security as AI becomes more capable?
I'm worried about how increasingly capable AI could hijack my brain. Already:

* LLMs have reportedly driven some users into psychosis.
* AI-generated content racks up countless views.
* Voice cloning lets scammers impersonate loved ones, bosses, etc.
* AI engagement is difficult to distinguish from real user engagement on social...
I guess they're losing money in the short term but gaining training data and revenue (which helps them raise funds). It's not clear to me that this is harming the lab in expectation.