President Biden is taking new measures to ensure that the rapid progress of artificial intelligence is well managed. The Biden administration recently released a blueprint for an “AI Bill of Rights,” a set of five recommendations to ensure that AI systems are safe, fair, optional, and above all ethical.
Unlike the actual Bill of Rights, this document is not legally binding. Instead, the blueprint is there to formalize best practices from the major players in AI and machine learning. These measures include ensuring that AI is not biased by bad data, providing notifications when automation is used, and providing human-based alternatives to automated services, according to Venkat Rangapuram, CEO of data solutions provider Pactera Edge.
Here are the five “rights” outlined by the White House blueprint, and how companies can apply them when developing and using automated systems.
1. Ensure that automated systems are safe and effective.
The safety and security of users should always be of paramount importance in the development of AI systems, according to the blueprint. The administration argues that automated systems should be developed with public input, drawing on consultation with diverse groups of people who can identify potential risks and concerns, and that systems should undergo testing and monitoring before deployment to demonstrate their safety.
One example of harmful AI cited in the document involves Amazon, which installed AI-powered cameras in its delivery trucks to assess its drivers’ safety habits. The system improperly penalized drivers when other vehicles cut them off or when other events beyond their control occurred on the road, and as a result some drivers became ineligible for bonuses.
2. Protect users from algorithmic discrimination.
The second right addresses the tendency of automated systems to produce unfair outcomes by relying on data that fails to account for the systemic biases that exist in American society, such as facial recognition software that misidentifies people of color more often than white people, or a recruitment tool that rejects applications submitted by women.
To combat this, the blueprint points to the Algorithmic Bias Safeguards for Workforce, a set of best practices developed by a consortium of industry leaders including IBM, Meta, and Deloitte. The document outlines steps for educating employees about algorithmic bias, as well as guidance for implementing preventive measures in the workplace.
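One common preventive measure of the kind such safeguards describe is a routine audit of a system’s outcomes across demographic groups. The sketch below is a minimal, hypothetical illustration (the group names, data, and four-fifths threshold are not from the blueprint) of a disparate-impact check on a hiring model’s decisions:

```python
# Hypothetical audit sketch: compare selection rates across groups and flag
# any group whose rate falls below a chosen fraction of the reference group's.

def selection_rates(outcomes):
    """Fraction of positive (1) decisions per group.

    outcomes: dict mapping group name -> list of 0/1 decisions.
    """
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Each group's selection rate divided by the reference group's rate."""
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {group: rate / ref for group, rate in rates.items()}

# Invented example data: 0/1 hiring decisions for two groups of applicants.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 80% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% selected
}
ratios = disparate_impact_ratios(outcomes, reference_group="group_a")

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b falls well below the threshold here
```

A check like this does not prove a system is fair, but running it before and after deployment is one concrete way to surface the kind of biased outcomes the second right warns about.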
3. Protect users from abusive data policies.
According to the third right, each person should have agency over how their data is used. The blueprint suggests that designers and developers of automated systems seek user permission and respect user decisions regarding the collection, use, access, transfer, and deletion of personal data. It adds that any consent requests should be brief and written in plain language.
Designing automated systems that continually learn without feeling intrusive is a “hard balance” to strike, Rangapuram says, but he adds that letting individual users set their own level of comfort and privacy is a good first step.
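In code, letting users set their own comfort level often comes down to recording an explicit per-purpose choice and defaulting to “deny” when none exists. The following sketch is purely illustrative (the `ConsentRecord` class and purpose names are invented, not part of the blueprint):

```python
# Illustrative consent-gating sketch: data is only collected for purposes the
# user has explicitly allowed, and unset purposes default to denied.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    # Maps a purpose (e.g. "analytics") to the user's explicit True/False choice.
    choices: dict = field(default_factory=dict)

    def grant(self, purpose):
        self.choices[purpose] = True

    def revoke(self, purpose):
        self.choices[purpose] = False

    def allows(self, purpose):
        # No recorded choice means no consent: no silent opt-in.
        return self.choices.get(purpose, False)

def collect(record, purpose, payload):
    """Store data only when the user has explicitly consented to this purpose."""
    if not record.allows(purpose):
        raise PermissionError(f"no consent recorded for {purpose!r}")
    return {"purpose": purpose, "data": payload}

consent = ConsentRecord()
consent.grant("analytics")
collect(consent, "analytics", {"page": "home"})   # allowed
# collect(consent, "advertising", {...}) would raise PermissionError
```

The deny-by-default choice in `allows` reflects the blueprint’s emphasis on explicit permission rather than assumed consent.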
4. Provide users with notices and clarifications.
Consumers should always know when an automated system is being used and have enough information to understand how and why it contributes to outcomes that affect them, according to the fourth right.
Overall, Rangapuram says, negative public sentiment toward companies that collect data can hinder the advancement of new technology, so explaining how and why data is used has never been more important. By educating people about their data, companies can build trust with their users, which can make those users more willing to share their information.
5. Offer human alternatives and fallback options.
According to the blueprint, users should be able to opt out of automated systems in favor of a human alternative. At the same time, automated systems should have human fallback plans in case of technical issues. For example, the blueprint highlights customer service systems that use chatbots to respond to common customer complaints but redirect users to human agents for more complex problems.
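The chatbot-to-human handoff described above usually hinges on one decision: answer automatically only when the system is confident it understands the request, and escalate otherwise. This is a toy sketch of that routing logic (the intents, canned answers, and confidence threshold are all invented for illustration):

```python
# Hypothetical escalation sketch: a bot answers known, simple requests and
# hands everything else to a human queue.

KNOWN_ANSWERS = {
    "reset_password": "You can reset your password from the account page.",
    "store_hours": "We are open 9am-5pm, Monday through Friday.",
}

def classify(message):
    """Toy intent classifier: returns (intent, confidence)."""
    for intent in KNOWN_ANSWERS:
        if intent.replace("_", " ") in message.lower():
            return intent, 0.9
    return "unknown", 0.2

def route(message, threshold=0.75):
    """Reply automatically only above the confidence threshold; else escalate."""
    intent, confidence = classify(message)
    if confidence >= threshold and intent in KNOWN_ANSWERS:
        return {"handled_by": "bot", "reply": KNOWN_ANSWERS[intent]}
    return {"handled_by": "human", "reply": "Transferring you to an agent."}

print(route("How do I reset password?")["handled_by"])              # bot
print(route("My order arrived damaged and leaking")["handled_by"])  # human
```

The important design choice, in the spirit of the fifth right, is that the human path is the default: anything the system cannot confidently handle falls through to a person rather than to a dead end.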
Consider the experience of riding in a self-driving car: while the system may work perfectly, “you’ll still want a steering wheel in case something happens,” says Rangapuram.