Computer processor speeds continue to increase, but anyone who has bought a personal computer in the past few years may have noticed that they aren't increasing at the rate seen even a decade ago, let alone the rates that were commonplace in the 1990s. From the 1970s to the early 2000s, Moore's Law, named for Gordon Moore, co-founder of Fairchild Semiconductor and Intel, held that the number of transistors on a chip, and with it processing performance, roughly doubled every two years.
That pace has slowed, and performance improvements from one generation to the next are no longer as dramatic as past leaps in processing power. Yet some technologies, including artificial intelligence (AI) and machine learning (ML), continue to demand ever-faster speeds. The downside is that faster processors generate excessive heat and consume far more power, creating engineering and efficiency challenges.
Machine Learning Is About Power
The issue is complicated by the fact that traditional ML models have been designed with a single focus: maximizing performance.
The Defense Advanced Research Projects Agency (DARPA), the research and development arm of the United States Department of Defense (DoD) responsible for developing emerging technologies for military use, has noted that this approach has delivered breakthroughs in language models, image recognition, and other areas, but it has come at a cost: training and running modern ML models consumes enormous amounts of electricity.
The growing demand for AI and ML poses significant concerns for energy grids, while also contributing to environmental issues, including carbon emissions and water usage.
“The relationship between computation and power consumption is just physics,” explained Dr. Jim Purtilo, associate professor of computer science at the University of Maryland.
“Each operation your program performs takes some minuscule amount of power, so when you perform more operations, you consume more power,” Purtilo told ClearanceJobs. “But this really adds up for AI. Large language models (LLMs) in particular require an immense amount of computation to provide the rich and authentic responses that people have come to expect. More arithmetic needs more power.”
Purtilo added that, "under the hood," these computations are simply trying to determine what an "average" response from other users on the internet would be, since that is typically where the data used to train LLMs comes from.
“Numerically, that looks just like finding the average of test scores in a classroom or batting performance of baseball players,” he continued. “Add up a column and divide by the count. However, with AI, we perform this computation over billions of values. That’s what creates the power demand.”
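To make that intuition concrete, the rough calculation below shows how tiny per-operation costs add up at LLM scale. It is a back-of-envelope sketch only: the energy per operation, model size, and response length are illustrative assumptions, not measurements of any particular system.

```python
# Back-of-envelope illustration (assumed figures, not measurements):
# energy scales with the number of arithmetic operations performed.

ENERGY_PER_OP_J = 1e-11      # assumed ~10 picojoules per multiply-accumulate
PARAMS = 70e9                # assumed 70-billion-parameter model
OPS_PER_TOKEN = 2 * PARAMS   # rough rule of thumb: ~2 operations per parameter per generated token
TOKENS = 500                 # assumed length of a single response

energy_joules = ENERGY_PER_OP_J * OPS_PER_TOKEN * TOKENS
print(f"~{energy_joules:,.0f} J for one response under these assumptions")
# Multiplied across millions of requests per day, the grid-scale concern becomes clear.
```

Under those assumed numbers, a single response costs on the order of hundreds of joules; the point is not the exact figure but that every additional operation carries a real, physical energy price.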
DARPA’s ML2P Efforts
To address power consumption, DARPA this week announced the Mapping Machine Learning to Physics (ML2P) program, which aims to change how AI systems balance performance against energy use.
According to engineers at DARPA, ML2P could close that gap by mapping ML model performance to physical electrical characteristics. That means measuring energy use in joules and embedding energy awareness into the design of AI systems, producing models that strike a better balance between accuracy and power consumption.
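DARPA has not published the program's technical formulation, but the general idea of relating performance to joules can be sketched simply. In the hedged example below, the function names and numbers are illustrative assumptions, not part of ML2P.

```python
# Minimal sketch of an energy-aware evaluation metric (illustrative only).
# Energy is power integrated over time: joules = watts * seconds.

def joules_consumed(avg_power_watts: float, runtime_seconds: float) -> float:
    """Energy in joules for a run at a measured average power draw."""
    return avg_power_watts * runtime_seconds

def accuracy_per_joule(accuracy: float, joules: float) -> float:
    """One possible way to express what performance each joule buys."""
    return accuracy / joules

# Assumed numbers: a model scoring 0.92 accuracy on a task while
# drawing 300 W for 120 seconds of inference over a test set.
energy = joules_consumed(avg_power_watts=300, runtime_seconds=120)   # 36,000 J
print(accuracy_per_joule(0.92, energy))  # ~2.6e-5 accuracy points per joule
```

A metric of this shape lets two models be compared not just on accuracy but on what each unit of energy buys, which is the trade-off the program highlights.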
“In an era where AI is increasingly deployed in power-constrained environments, such as at the tactical edge, energy efficiency is no longer optional,” explained Bernard McShea, founding program manager for ML2P. “With ML2P, we want to move beyond optimizing just for accuracy and instead understand, for every joule of electricity, what level of performance we’re getting back. That will enable us to build AI that is smarter, leaner, and more useful to the warfighter.”
Video: ML↦P – Mapping Machine Learning to Physics
Seeking Industry Insight
DARPA is now seeking insights from industry, and the agency has opened a solicitation calling for companies to submit proposals to take part in the ML2P program, which aims to improve the power efficiency and performance of ML models on existing hardware.
“Multiple individual awards are anticipated with a planned down-select, herein referred to as ‘go/no-go,’ at the end of Phase 1. The total ML2P budget is anticipated to be at or below $5.9M, which will be divided amongst multiple selected performers. More specifically, Phase 1 is anticipated to be at or below $3.5M and Phase 2 is anticipated to be $2.4M,” the program solicitation explained.
DARPA is now looking for experts from multiple domains, including electrical engineering, mathematics, logic, and ML, to help develop the next generation of “energy-aware” ML. ML2P will serve as a guide model, offering insight from prior design choices, while creating training functions that optimize the trade-off between energy consumption and model performance.
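One common way to fold energy into a training objective, consistent with the trade-off ML2P describes though not necessarily the program's own method, is to add an energy penalty to the loss. Everything in the sketch below, including the weighting and the joule figures, is an illustrative assumption.

```python
# Illustrative sketch of an energy-penalized training objective.
# ML2P's actual formulation is not public; this only shows the general shape
# of trading task performance against an energy estimate.

def energy_aware_loss(task_loss: float,
                      estimated_joules: float,
                      energy_weight: float = 1e-4) -> float:
    """Combined objective: task loss plus a weighted energy penalty.

    energy_weight controls the trade-off; larger values push the
    optimizer toward cheaper, lower-energy models at some cost in accuracy.
    """
    return task_loss + energy_weight * estimated_joules

# Assumed numbers: two candidate models with different accuracy/energy profiles.
big_model   = energy_aware_loss(task_loss=0.10, estimated_joules=5000)  # 0.60
small_model = energy_aware_loss(task_loss=0.18, estimated_joules=800)   # 0.26
print(big_model, small_model)  # the smaller model wins under this weighting
```

The design choice lives entirely in the weighting: set it too low and the objective collapses back to accuracy-at-any-cost; set it high enough and a modestly less accurate but far cheaper model comes out ahead.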
“Making models more efficient and performant is crucial as AI applications often require substantial computational resources, leading to high energy consumption,” added McShea. “By enabling principled simulation of machine learning model performance on general-purpose compute systems, it could provide insights into how hardware should be optimized for AI workloads.”
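Simulating how an ML workload maps onto general-purpose hardware is often reasoned about with a roofline-style estimate; the sketch below uses that standard approach as an illustration, with assumed hardware figures, and is not drawn from DARPA or ML2P.

```python
# Illustrative roofline-style estimate (a standard way to reason about how an
# ML workload maps onto hardware; not DARPA's or ML2P's method).

def attainable_flops(peak_flops: float, mem_bandwidth: float,
                     arithmetic_intensity: float) -> float:
    """Roofline model: throughput is capped by compute or by memory traffic."""
    return min(peak_flops, mem_bandwidth * arithmetic_intensity)

# Assumed hardware figures: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
peak, bw = 100e12, 1e12
for intensity in (10, 100, 1000):   # FLOPs performed per byte moved
    print(intensity, attainable_flops(peak, bw, intensity) / 1e12, "TFLOP/s")
# Low-intensity workloads are memory-bound: a faster chip alone won't help,
# which is the kind of insight a hardware-aware simulation can surface.
```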
DARPA has suggested that if this effort is successful, it could establish a new “paradigm for AI design,” where power efficiency and performance can “go hand in hand.”
The solicitation is now open, with proposals due by December 8. It calls for submissions from all responsible sources, including large and small businesses, nontraditional defense contractors, and research institutions.
Is This a Valid Effort?
It shouldn’t be surprising that DARPA is taking the lead in trying to solve this issue. The U.S. military has invested heavily in AI, ML, and LLMs, and it already understands the energy demands of other advanced systems.
“DARPA’s initiative is important and likely leads us to the next level of maturity in AI,” Purtilo told ClearanceJobs, noting that the early work with LLMs was all about getting the best predictions possible, bar none.
“Brute force methods helped create early leaders in this race. If burning more energy got us a better result, then we burned more energy,” Dr. Purtilo added. “Now the question is: how can we get comparable results for less energy, or more realistically, get sufficient results for predictable energy costs. Not all problems require being addressed with the biggest AI hammer in the toolbox. Some might do just as well with far more modest results that are immensely more cost-effective to compute. Knowing how to measure efficacy and relate it to power demand is a critical research objective.”