Phew! From the title I first thought it would be about some under-employed bureaucrats drawing up rights for the AIs themselves.
That actually would also be worthwhile. We will have AGI soon enough, after all, and I think it's hard to argue that it wouldn't be sentient and thus deserving of rights.
AIXI contains sentient minds, but isn't itself sentient. I suspect there are designs of minds that are highly competent at many problems yet have a mental architecture totally different from humans', such that if we had a clearer idea of what we meant by "sentient", we would agree the AI wasn't sentient.
Also, how long would we have sentient AI before the singularity? If the first sentient AI is a paperclipper that destroys the world, any bill of "sentient AI rights" is pragmatically useless.
SAFE AND EFFECTIVE SYSTEMS
[...] Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems.
It would be an interesting timeline if this language actually helped lobbyists shut down large AGI projects based on a lack of mitigation of foreseeable impacts.
The PDF can be found here: https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
Some quick notes regarding this linkpost:
My quick take:
Summary of the press release
Many technologies can or do pose a threat to democracy. There are plentiful cases where these technological "tools" limit their users more than they help them. Examples include problems with the use of technology in patient care, algorithmic bias in credit and hiring, and widespread breaches of user data privacy. Of course, automation is, generally speaking, helpful (e.g., agricultural production, severe weather prediction, disease detection), but we need to ensure that "progress must not come at the price of civil rights...".
The Office of Science and Technology Policy proposes five design principles for minimizing harm from automated systems:
Structure of the Report
(so you can have a sense of what's contained)
Other
The existence of this linkpost is due in part to casens commenting on the release of the report in the following Metaculus question, to which this report is relevant.
AI-Human Emulation Laws before 2025
Before 2025, will laws be in place requiring that AI systems that emulate humans must reveal to people that they are AI?