
For Immediate Release
September 11, 2025
Contact: Martin Mulkerrin, MMulkerrin@massaflcio.org
Joint Committee on Advanced IT Hosts Hearing on Bill to Place Guardrails on the Use of AI in the Workplace
On September 11, 2025, the Joint Committee on Advanced IT hosted a hearing on the F.A.I.R. Act, An Act Fostering Artificial Intelligence Responsibility (H.77/S.35). The bill would regulate AI in the workplace, guard against worker surveillance, and protect human autonomy and expertise.
“Working people aren’t buying Big Tech’s promise of a shiny AI future that will solve all of our problems. While technological advances can improve our lives in many ways, we need real guardrails around the use of AI and other technology on the job so that working people aren’t left in the dust,” said Chrissy Lynch, President of the Massachusetts AFL-CIO, which represents nearly 500,000 members in more than 800 local unions across the Commonwealth. “The FAIR Act prioritizes data privacy and human autonomy so we can capitalize on the beneficial aspects of this technology without compromising fundamental worker rights.”
Rapidly evolving technologies, such as AI and other algorithmic tools, are beginning to significantly impact the livelihoods of working people. Employers are increasingly using these technologies to make hiring, firing, and promotional decisions and have started incorporating these technologies into the job responsibilities of and expectations for their employees. In some industries, these technologies have begun to replace workers altogether – often with little support for the existing workforce. Workers are also continuously exploited through the collection, use, transfer, and sale of their data, including biometric and sensitive data.
The FAIR Act:
- restricts employers from relying wholly on data from Automated Decision-Making Systems (ADS) when making hiring, firing, and promotional decisions
- requires an independent impact assessment prior to the use of ADS
- bans the use of ADS to predict employee behavior or to interfere with protected activities and legal rights
- prohibits state or agency procurement of AI/ADS systems unless expressly permitted by law; any authorized use would require a completed impact assessment
- restricts the collection, use, transfer, and sale of employee data, including biometric and sensitive data
- prevents employers from using data collected to punish or retaliate against workers engaged in a protected activity
- requires written notice prior to the collection and use of worker data
- protects workers from any negative employment action for refusing to follow the output of an artificial intelligence system if they reasonably believe it would lead to an adverse outcome
- places final decision-making authority with human workers rather than a computer system
“The FAIR Act is an essential bill that encourages responsible AI innovation while ensuring worker protections so that technological progress can lift up and support workers, rather than leave entire workforces behind. At a time when automated systems can make critical decisions about hiring, promotions, and job security, we need clear guardrails that preserve human autonomy and accountability,” said Representative Tricia Farley-Bouvier (D-Pittsfield). “The FAIR Act strikes the right balance of embracing technology while protecting workers' rights, privacy, and the value of human decision-making in the workplace.”
“Massachusetts has always led the way on protecting workers’ rights, and as artificial intelligence rapidly advances and becomes more embedded in our workplaces, those protections must keep pace,” said Senator Dylan Fernandes (D-Falmouth). “The FAIR Act ensures that employees aren’t surveilled without consent, punished by flawed algorithms, or have their wages stolen by flawed AI systems. This legislation puts people first in a world increasingly driven by corporate greed focused on eroding workers’ rights.”
“The F.A.I.R. Act represents the kind of bold leadership we need from the states as we see the largest technology companies in the world unleash AI across the economy and in public services,” said Ed Wytkind, interim Executive Director of the AFL-CIO Technology Institute. “By establishing guardrails around automated decision-making and guarding against discriminatory AI practices in the workplace, Massachusetts has an opportunity to show the nation what responsible AI governance looks like. We can harness innovation while still protecting fundamental rights and human dignity.”
“The ACLU of Massachusetts is proud to fight alongside the labor movement to ensure new technologies uphold our basic rights and benefit workers at every level of our economy,” said Kade Crockford, Director of Technology and Justice Programs at ACLU of Massachusetts. “AI systems can be used to discriminate against job applicants, conduct invasive and unnecessary surveillance of workers on and off the job, and replace human decision-making, leading to mistakes and poor outcomes for workers and the communities they serve. While new technologies show promise in many areas of our economy, they must be properly regulated to ensure that all of us reap the benefits.”
“Workers in every sector of the economy need to be valued and respected, and this bill protects workers from the temptation employers may have to misuse AI. In the field of education, the MTA will fight to protect the agency of our members to exercise their professional judgment in the classroom and prevent public school districts and public colleges from resorting to automated evaluations or AI generated academic content,” said Deb McCarthy, Vice President of the Massachusetts Teachers Association, who endorsed the F.A.I.R. Act earlier this year.
“At SEIU 509, we represent workers in the public and private sectors caring for our most vulnerable populations, whether it be elders, at-risk children, people with mental illnesses, or developmental disabilities,” said Dave Foley, President of SEIU 509. “The people we serve deserve the human expertise of mental health clinicians and social workers, not to rely solely on AI technologies with a limited understanding of human nature.”
###