BOSTON (SHNS) – Technology’s new frontiers present unique regulatory challenges for government systems that often lag behind the pace of change in fast-developing tech sectors.
But some in the Legislature are trying to help orchestrate a response. Several bills before the Joint Committee on Advanced Information Technology, the Internet and Cybersecurity seek to get the ball rolling, as artificial intelligence (AI) becomes more advanced every day.
“The problem we face with AI today is that it’s being used broadly in society to replace human decision-making with little to no rules about testing these systems for accuracy, effectiveness or bias. And that has real tangible harms,” Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center, told the committee at a public hearing on Thursday.
Fitzgerald supports a Rep. Sean Garballey and Sen. Jason Lewis bill (H 64 / S 33) which would create a commission to study the use of automated decision-making by government agencies.
Last fall, EPIC released a report on the use of AI by government agencies in Washington, D.C., that found computers were assigning children to schools, and making decisions about policing resources and medical care.
“These systems can have a huge impact on people’s lives, and they don’t even know they’re being used. And they don’t know if they’re being tested for accuracy or bias,” Fitzgerald said. “Honestly oftentimes, the government doesn’t know either. They’re buying a system, trusting it works as advertised. So from a fiscal responsibility perspective, millions of taxpayer dollars are being spent on these systems that are untested, they’re unproven, they’re inaccurate and they often simply don’t work.”
The Garballey and Lewis bill would require the commission to create a catalog of the AI systems being used across state government, and make that list public. The commission would also advise the Legislature on regulations to put in place.
Similar commissions have been set up in Alabama, New York and Vermont.
Several people who testified on a slew of AI bills expressed fears about bias baked into computer programs that are then used to make decisions affecting people.
Rep. Jeffrey Rosario Turco, who called himself “totally ignorant on this AI stuff” and said “my seven-year-old could probably educate me” on the technology, asked Fitzgerald for an example of how an AI-based computer system could be biased.
Fitzgerald described a study done by MIT researcher Joy Buolamwini on racial and gender bias in AI services from large tech companies.
“She tested facial recognition systems and how it recognized white male faces, white female faces, Black male faces, Black female faces, and the error rate for Black female faces in particular, but also Black male faces was so much higher than it was for white faces, because it has learned on, you know, data that was historically white faces,” she said. “If you’re then talking about facial recognition that’s being used at the airport … and it’s misidentifying you, as someone who is on a list or something like that, that’s a real tangible harm.”
Rep. Tricia Farley-Bouvier, co-chair of the committee, said she wanted to “emphasize” how much AI “has the potential to really dig into discrimination.”
“If we’re putting in somebody’s data to decide whether this person is a good candidate for parole, and you’re using historical data? Well historically we haven’t done a really good job with that, there’s been a lot of discrimination on deciding who gets parole,” Farley-Bouvier said. “So if we’re using that historical data to then inform what we do in the future, we’re just doubling down on the discrimination that we have.”
Rep. Simon Cataldo, who co-sponsored the bill to create a commission to study AI, used AI writing software called ChatGPT to write part of his testimony.
“Here’s one passage from ChatGPT itself that I wanted to pass on to you. This is a quote: ‘AI algorithms can inadvertently perpetuate biases present in historical data or lead to discriminatory outcomes. Robust measures should be put in place to detect and mitigate biases, ensuring fairness and equity in the application of AI across government functions,'” Cataldo read to the committee.
If using ChatGPT to talk about regulating AI sounds familiar, Cataldo was taking a page out of Sen. Barry Finegold’s book. Finegold made headlines earlier this year for using the chatbot to write legislation.
Finegold filed a bill — written with the help of AI — seeking to regulate ChatGPT and future, similar chatbots. It would require their developers to program the models to include a distinctive watermark that could be used to detect plagiarism, to implement security and privacy measures protecting those using the tools, and to register each model’s capacity, training data and intended use with the attorney general’s office.
“Where we failed with Facebook and some other stuff is that we never really worked with them and put in place proper guardrails, and I think because of that, it got abused,” Finegold said in February. “Companies are looking for us to kind of set up the parameters. I’m sure there’s going to be other companies that are doing this and they will be competitive. I think if we play referee as the government, then everyone’s going to know the rules of the game and the proper way to operate.”
At the time Finegold filed the bill, several state agencies asked by the News Service whether they used ChatGPT said they were unaware of the program or did not know enough about it to answer. Since then, ChatGPT has set the record for the fastest-growing user base of any website and currently draws about 1.8 billion visits per month.
Oftentimes, government regulators struggle to keep up with the fast pace of technology development, and advocates urged lawmakers to move these regulatory bills forward.
“To illustrate the gravity of the case, let me share a story familiar to every student in Massachusetts public schools,” said Ivan Bezkrovnyi, a student at Newton North High School and president of the Massachusetts arm of youth-led AI advocacy group Encode Justice.
“School-issued Chromebooks are now equipped with a software that tracks every student’s online activities. Automated decision systems are capable of making significant decisions, like locking websites or forming assumptions about a student’s behavior and intentions just based on their browser search results,” Bezkrovnyi said. “My friends were searching nuclear science out of pure academic curiosity, and … the software mistakenly interprets the search as a potential threat.”
Though regulators are often playing catch-up with technology, bills similar to the legislation to form a commission to study AI in government and make safety recommendations were filed in each of the past two sessions. Last session, the bill was reported favorably by the Joint Committee on Advanced Information Technology, the Internet and Cybersecurity, but never made it to final passage.
“It is very possible for the state of Massachusetts to get a clearer sense of exactly what types of machine learning and automated decision systems are in use in our government right now,” said Kade Crockford of the Massachusetts ACLU. “This is the third session that a bill like this has been before this committee … We’d really like to get it done this session. I think it’s really time.”