On Math and Numbers in Legislation
How a HUD grant relates to California’s AI bill, SB1047
This is my second newsletter. Although my initial plan was to broadly survey legislation on specific topics, an event last week inspired me to draw parallels between California’s SB1047 and my experience at the Department of Housing and Urban Development (HUD).
Last week, I attended a session where Scott Wiener, the dynamic state senator from San Francisco, defended his AI regulation bill, SB1047, before an audience of startup founders.
SB1047 targets large AI models. Specifically, it applies to foundation models that cost over $100 million to train and use more than 10^25 floating-point operations.
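For a sense of scale, here is a minimal back-of-envelope sketch of what a 10^25-operation training run implies. The per-GPU throughput and fleet size are my assumptions for illustration, not figures from the bill:

```python
# Back-of-envelope: how big is a 10^25 FLOP training run?
# Assumptions (mine, not the bill's): an H100-class GPU sustains ~1e15 FLOP/s,
# and a large lab trains on a fleet of 10,000 GPUs in parallel.

THRESHOLD_FLOPS = 1e25        # SB1047's compute threshold
GPU_FLOPS_PER_SEC = 1e15      # assumed sustained throughput per GPU
FLEET_SIZE = 10_000           # assumed fleet size

seconds_on_one_gpu = THRESHOLD_FLOPS / GPU_FLOPS_PER_SEC
gpu_years = seconds_on_one_gpu / (365 * 24 * 3600)
fleet_days = seconds_on_one_gpu / FLEET_SIZE / (24 * 3600)

print(f"~{gpu_years:,.0f} GPU-years of compute")                # ~317 GPU-years
print(f"~{fleet_days:.0f} days on a {FLEET_SIZE:,}-GPU fleet")  # ~12 days
```

Under these assumptions, only a handful of labs are anywhere near the threshold today, which matches Wiener’s framing below.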
According to Wiener, the bill only affects the largest companies developing foundation models. It mandates safety and risk assessments, which these companies already perform. Essentially, SB1047 grants the Attorney General enforcement power, holding companies to practices they already follow.
After Wiener’s defense, Andrew Ng, one of the leading thinkers in AI, delivered his rebuttal. Ng contends the bill is fundamentally flawed but suggests it could be made less detrimental.
Ng suggests removing the Frontier Model Division (FMD) provision. The FMD would determine the computing power threshold for covered models. Critics like Ng fear this provision could empower a new body of regulators to impose stricter requirements, stifling AI innovation.
And this gets into a larger conversation about math in legislation. Should numerical thresholds be set by the legislature or delegated to agencies? It’s a balance between adaptability and control.
At HUD, I encountered this tradeoff in the Community Development Block Grant (CDBG) formula, which allocates over $3 billion annually to cities and states based on factors like poverty, outdated housing, growth lag, and population.
The formula is embedded in legislation written 50 years ago and uses variables such as pre-1940 housing to allocate these funds. At the time, that meant housing roughly 35 years old or older. Now it means housing over 80 years old, and because very old housing is often aesthetically preserved in wealthy communities, this variable allocates just as much money to high-income communities as to low-income ones.
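To make that drift concrete, here is a deliberately toy sketch. The cities, numbers, and the rolling-age alternative are hypothetical, and the real CDBG formula weighs several factors, not just housing age:

```python
# Toy illustration — NOT the actual CDBG formula.
# The point: a cutoff year frozen in statute stops tracking "aging housing."

def old_housing_share(units_by_year_built, cutoff_year):
    """Fraction of a city's housing units built before cutoff_year."""
    old = sum(n for year, n in units_by_year_built.items() if year < cutoff_year)
    return old / sum(units_by_year_built.values())

CURRENT_YEAR = 2024
STATUTORY_CUTOFF = 1940             # hard-coded when the law was written
ROLLING_CUTOFF = CURRENT_YEAR - 35  # roughly what "old housing" meant in 1974

rust_belt = {1965: 70_000, 2000: 30_000}  # deteriorating postwar stock
preserved = {1930: 40_000, 1995: 60_000}  # wealthy city, preserved pre-war homes

print(old_housing_share(rust_belt, STATUTORY_CUTOFF))  # 0.0 — invisible to the statute
print(old_housing_share(rust_belt, ROLLING_CUTOFF))    # 0.7 — what a rolling age sees
print(old_housing_share(preserved, STATUTORY_CUTOFF))  # 0.4 — still fully counted
```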
And despite decades of HUD reports showing the formula is losing its ability to target the neediest communities, the formula has never changed. Why? Fixing it means some cities and states would receive less funding, so legislators can’t come to an agreement. Had the original legislation given HUD flexibility, the agency would have updated the formula by now.
The HOME grant program, which targets housing, allows HUD to determine the allocation formula within legislative guidelines. The guidelines even specify factors that should be considered in the formula. This approach strikes a balance between adaptability and control.
SB1047 does provide guidelines. The FMD can redefine what counts as a covered model, but the legislation hard-codes a $100 million floor, so the FMD can never make the law stricter. I caught a slightly irritated Wiener explaining this to a reporter after the event in response to Ng’s claims.
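In code, the structure looks roughly like this. The names are hypothetical and this is only my reading of the bill, but it shows why the floor caps how strict the FMD can ever be:

```python
# Sketch of SB1047's covered-model test as described above — names are mine.
STATUTORY_COST_FLOOR = 100_000_000  # hard-coded in the legislation

def is_covered(training_cost_usd, training_flops, fmd_flop_threshold=1e25):
    """The FMD may tune the FLOP threshold, but cannot reach below the cost floor."""
    return (training_cost_usd > STATUTORY_COST_FLOOR
            and training_flops > fmd_flop_threshold)

# Even with an aggressively low FLOP threshold, a $50M model stays exempt:
print(is_covered(50_000_000, 1e26, fmd_flop_threshold=1e20))  # False
print(is_covered(200_000_000, 1e26))                          # True
```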
Ng still opposes the legislation, arguing we should regulate the applications of AI models rather than the models themselves, akin to regulating cars rather than the motors that power all sorts of devices (toothbrushes, blenders, cars).
This may be a misguided analogy. Why use motors as the analogy and not cars themselves? Cars, to be sold publicly for use on public roads, are subject to safety and emissions regulations (and plenty of those regulations apply exclusively to the engine). Like AI models, cars can be used for a wide variety of tasks, each personalized to the user. And in most cases, the AI model is the final product, or very near it, even as it is adapted to many use cases. Maybe the motors/cars analogy is not useful at all, but if it is, I am tempted to say it sways toward regulating AI for public protection.
Policy adaptation is often slow, hindered by politics. Agency flexibility can help overcome this, especially when legislation includes rigid numerical thresholds.
In a fast-moving field, we should expect policy to lag behind AI. Perhaps 10^25 operations will look like nothing in a few years as GPUs and models improve. Without the FMD, raising that number would require new legislation, a harder path for the AI industry. Or, like the CDBG formula, it could remain the standard for the next 50 years.