
Mass. official warns emerging AI systems are subject to consumer protection, anti-bias laws

Massachusetts Attorney General Andrea Campbell answers a question during an interview at the State Attorneys General Association meetings, Nov. 14, 2023, in Boston. (Charles Krupa/AP)

Developers, suppliers and users of artificial intelligence must comply with existing state consumer protection, anti-discrimination and data privacy laws, the Massachusetts attorney general cautioned Tuesday.

In an advisory, Attorney General Andrea Campbell pointed to what she described as the widespread increase in the use of AI and algorithmic decision-making systems by businesses, including technology focused on consumers.

The advisory is meant in part to emphasize that existing state consumer protection, anti-discrimination and data security laws still apply to emerging technologies, including AI systems, just as they would in any other context, despite their complexity.

"There is no doubt that AI holds tremendous and exciting potential to benefit society and our commonwealth in many ways, including fostering innovation and boosting efficiencies and cost-savings in the marketplace," Campbell said in a statement.

"Yet, those benefits do not outweigh the real risk of harm that, for example, any bias and lack of transparency within AI systems can cause our residents," she added.

Falsely advertising the usability of AI systems, supplying an AI system that is defective, and misrepresenting the reliability or safety of an AI system are just some of the actions that could be considered unfair and deceptive under the state's consumer protection laws, Campbell said.

Misrepresenting audio or video content of a person to deceive another to engage in a business transaction or supply personal information as if to a trusted business partner — as in the case of deepfakes, voice cloning, or chatbots used to engage in fraud — could also violate state law, she added.

The goal, in part, is to encourage companies to ensure their AI products and services are free from bias before they enter the stream of commerce, rather than face consequences afterward.

Regulators also say companies should disclose to consumers when they interact with algorithms. A lack of transparency could violate consumer protection laws.

Elizabeth Mahoney of the Massachusetts High Technology Council, which advocates for the state's technology economy, said that because there might be some confusion about how state and federal rules apply to the use of AI, it's critical to spell out state law clearly.

"We think having ground rules is important and protecting consumers and protecting data is a key component of that," she said.

Campbell acknowledged in her advisory that AI has the potential to deliver great benefits to society even as it has also been shown to pose serious risks to consumers, including bias and a lack of transparency.

Developers and suppliers promise that their AI systems and technology are accurate, fair and effective even as they also claim that AI is a "black box," meaning that they do not know exactly how AI performs or generates results, she said in her advisory.

The advisory also notes that the state's anti-discrimination laws prohibit AI developers, suppliers and users from deploying technology that discriminates against individuals based on a legally protected characteristic, such as technology that relies on discriminatory inputs or produces discriminatory results that would violate the state's civil rights laws.

AI developers, suppliers and users must take steps to safeguard personal data used by AI systems and comply with the state's data breach notification requirements, she added.
