California's Safe Harbor level for lead is 0.5 µg/day. The CDC's safe level is 10 µg/day, and was 25 µg/day from 1985 to 1991. 12–25 times 0.5 µg is 6–12.5 µg, which is basically within the CDC's safe level, and was only found in two samples. (Also, as Soylent's own reply pointed out, they tested version 1.5, and 2.0 has a different recipe with even lower—but still safe—levels.)
As You Sow has also found lead and cadmium levels above California's Safe Harbor threshold in 26 chocolate products, including ones from Ghirardelli, Hershey, Mars, Trader Joe's, and Whole Foods. They seem to be more about drawing attention to California's Proposition 65 (and to themselves) than about actually promoting safety.
Note that the standard way of dealing with Proposition 65 is to just label the product "This product contains chemicals known to the State of California to cause cancer and birth defects or other reproductive harm," and then keep selling it, because the other 49 states don't care.
I'm glad Soylent responded quickly to this, and that most people aren't taking it as an excuse to be scared of Soylent. A few have been immediately blowing it up into wild speculation, for instance, that Rob Rhinehart is going crazy from lead poisoning by dog-fooding his own product (so to speak).
I thought a bit about it, but I think Tay is basically a software version of a parrot that repeats back what it hears. I don't think it has any commonsense knowledge or makes any serious attempt to understand that tweets are about a world that exists outside of Twitter. I.e., it has no semantics; it's just a syntax manipulator that uses some kind of probabilistic language model to generate grammatically correct sentences, plus a machine learning model to try to learn which kinds of sentences will get the most retweets or will most closely resemble other things people are tweeting about. Tay doesn't know what a "Nazi" actually is. I haven't looked into it in any detail, but I know enough to guess that that's how it works.
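To make the "parrot" idea concrete: the simplest possible version of a syntax-only text generator is a bigram Markov chain, which samples each next word purely from word-to-word transition counts in what it has seen, with no representation of meaning at all. This is a toy illustration, not Tay's actual architecture (which Microsoft never published); it just shows how a system can emit grammatical-looking text while "knowing" nothing about the world:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Build a bigram table: word -> list of words observed to follow it."""
    table = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def babble(table, start, max_words=10):
    """Generate text by repeatedly sampling an observed next word.
    No model of meaning -- only surface word-to-word statistics."""
    out = [start]
    for _ in range(max_words - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A model "trained" on hostile input will parrot hostile output,
# without any understanding of what the words refer to.
corpus = [
    "the bot repeats whatever it hears",
    "the bot learns whatever gets retweets",
]
table = train_bigrams(corpus)
print(babble(table, "the"))
```

A system like this will happily reproduce whatever patterns its input contains, which is exactly the failure mode a coordinated trolling campaign exploits.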
As such, the failure of Tay doesn't particularly tell us much about Friendliness, because friendliness research pertains to superintelligent AIs which would definitely have a correct ontology/semantics and understand the world.
However, it does tell us that a sufficiently stupid, amateurish attempt to harvest human values using an infrahuman intelligence won't reliably work. This is obvious to anyone who has been "in the trade" for a while, but it does seem to have surprised the mainstream media.
It's probably useful as a rude slap-in-the-face to people who are so ignorant of how software and machine learning work that they think friendliness is a non-issue.
Tay doesn't tell us much about deliberate Un-Friendliness. But Tay does tell us that a well-intentioned effort to make an innocent, harmless AI can go wrong for unexpected reasons. Even for reasons that, in hindsight, are obvious.
Are you sure that superintelligent AIs would have a "correct ontology/semantics"? They would have to have a useful one, in order to achieve their goals, but both philosophers and scientists have had incorrect conceptualizations that nevertheless matched the real world closely enough to be productive. And for an un-Friendly AI, "productive" translates to "using your atoms for its own purposes."