The United States Department of the Treasury is just the latest federal agency to embrace advances in machine learning and artificial intelligence (AI). The department announced earlier this month that it is employing the data-driven technology to prevent fraud and improper payments.
That has already helped prevent and recover more than $4 billion in fraud and improper payments in fiscal year (FY) 2024 (October 2023 – September 2024), up from $652.7 million in FY23.
“This increase reflects dedicated efforts by Treasury’s Office of Payment Integrity (OPI), within the Bureau of the Fiscal Service (Fiscal Service), to enhance its fraud prevention capabilities and expand offerings to new and existing customers,” the Treasury Department said in a statement.
Online payment fraud is expected to surpass $362 billion by 2028, according to data from Juniper Research.
AI Battling Fraud
There have been concerns that AI tools, notably generative AI, could help empower criminals engaged in banking fraud and other financial scams. However, the Treasury Department is now using the same technology, which can analyze large quantities of data, to detect the patterns criminals commonly use to commit fraud.
The agency didn’t go into specific details, but it did lay out some of the key ways the technology is already being used:
- Expanding risk-based screening, resulting in $500 million in prevention.
- Identifying and prioritizing high-risk transactions, resulting in $2.5 billion in prevention.
- Expediting the identification of Treasury check fraud with machine learning, resulting in $1 billion in recovery.
- Implementing efficiencies in payment processing schedules, resulting in $180 million in prevention.
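Treasury hasn’t described the underlying models, but risk-based screening of this kind is commonly built around a model that scores each payment before disbursement and routes the riskiest ones for manual review. The sketch below illustrates one generic approach using an unsupervised anomaly detector; the features, data, and contamination threshold are assumptions for illustration, not details of Treasury’s actual systems.

```python
# Illustrative only: a generic anomaly-scoring approach to payment screening.
# The feature set, synthetic data, and 1% contamination threshold are
# assumptions for this sketch, not details of Treasury's pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical payments: amount (USD), payee account age (days),
# and number of payments to the same account in the past 30 days.
historical = np.column_stack([
    rng.lognormal(mean=7, sigma=1, size=10_000),   # payment amount
    rng.integers(30, 3_650, size=10_000),          # account age
    rng.poisson(lam=2, size=10_000),               # recent payment count
])

# Fit an unsupervised anomaly detector on past payments.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical)

# Score incoming payments; the lowest scores are the riskiest.
incoming = np.array([
    [1_200.0, 2_000.0, 1.0],    # routine-looking payment
    [95_000.0, 12.0, 14.0],     # large payment to a new, unusually busy account
])
scores = model.decision_function(incoming)
flags = model.predict(incoming)  # -1 = flag for review, 1 = pass

for payment, score, flag in zip(incoming, scores, flags):
    status = "REVIEW" if flag == -1 else "PASS"
    print(f"{status}: amount=${payment[0]:,.2f}, score={score:.3f}")
```

In practice, scores like these are typically combined with business rules, shared data sources such as Do Not Pay, and analyst review, rather than relied on as a single automated decision.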
“Treasury takes seriously our responsibility to serve as effective stewards of taxpayer money. Helping ensure that agencies pay the right person, in the right amount, at the right time is central to our efforts,” said Deputy Secretary of the Treasury Wally Adeyemo. “We’ve made significant progress during the past year in preventing over $4 billion in fraudulent and improper payments. We will continue to partner with others in the federal government to equip them with the necessary tools, data, and expertise they need to stop improper payments and fraud.”
In addition to protecting online payments, the Treasury Department has established and strengthened partnerships with new and high-risk programs to increase access to and usage of its payment integrity solutions. That has included working closely with federally funded, state-administered programs. In May of this year, Treasury and the Department of Labor announced a data-sharing partnership that could provide state unemployment agencies with access to “Do Not Pay Working System” data sources and services through the Unemployment Insurance Integrity Data Hub.
Data-Driven Protection
Exactly how the agency puts the technology to work seems to be a closely guarded secret, but Dr. Jim Purtilo, associate professor of computer science at the University of Maryland, shared some thoughts with ClearanceJobs.
“(Treasury) was not specific about what particular techniques were being applied for screening, but it does seem the agency is looking at far more data, and that alone certainly has the potential to fill in puzzle pieces and complete more pictures of fraud,” he explained.
“As with anything in this business, the quality of data will be key,” Purtilo added. “What is the accuracy of these prediction methods? False positives? False negatives?”
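Those questions have standard definitions. As a purely illustrative sketch (the counts below are hypothetical, not Treasury figures), false-positive and false-negative rates for a screening model are computed from confusion-matrix counts along these lines:

```python
# Illustrative only: measuring the quality of a fraud-screening model.
# The counts below are made up for the example, not Treasury data.
def screening_error_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute basic quality metrics from confusion-matrix counts."""
    return {
        "false_positive_rate": fp / (fp + tn),  # legitimate payments wrongly flagged
        "false_negative_rate": fn / (fn + tp),  # fraudulent payments missed
        "precision": tp / (tp + fp),            # share of flags that were real fraud
        "recall": tp / (tp + fn),               # share of fraud that was caught
    }

# Hypothetical audit of 100,000 screened payments.
metrics = screening_error_rates(tp=450, fp=1_200, tn=98_300, fn=50)
for name, value in metrics.items():
    print(f"{name}: {value:.4f}")
```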
Though the Department of the Treasury didn’t say as much, it is possible the same tools could be employed on tax returns to catch would-be tax cheats.
“Good on them for finding more cheats, but if the cost of doing so increases out of proportion to the input or more people are snared by a high false positive rate, then I doubt voters will conclude it is a win,” suggested Purtilo.
AI isn’t going anywhere, and Treasury’s effort could be a good sign that some federal agencies are being proactive in adopting it.
“Certainly many agencies and companies can win the same sorts of quality benefits by leveraging data in smarter ways, but we need to watch very closely what independent controls are used to track efficacy,” said Purtilo. “Many AI techniques defy clear explanation for a given result, and we would not want machines accusing people of things the bureaucrats simply don’t understand. The stronger predictive algorithms must be tied with stronger accountability.”