Mark Kelly's government-heavy approach to AI
If staying ahead of China is the goal, higher taxes and more regulation aren't the way.
Despite its remarkable vagueness, Sen. Mark Kelly’s AI “roadmap” has attracted a bit of attention. That’s undoubtedly due both to his status as a possible presidential candidate and to his growing reputation as a serious legislator in an increasingly unserious institution.
Peering through the vagueness, however, Kelly’s roadmap appears more likely to reduce and delay the benefits from the adoption of artificial intelligence applications than to ameliorate any adverse disruptive effects.
Kelly’s roadmap rests on two unstated premises. First, that AI will be highly profitable for modelers and application developers. Second, that AI will result in significant and rapid labor market dislocations, including large job losses. There’s reason for skepticism about both premises.
Nvidia, which makes the best computer chips for AI functions, is, at present, hugely profitable. However, AI developers are not.
AI developers are vacuuming up large sums of investment capital, but aren’t even close to turning a profit. The most prominent American company, OpenAI, doesn’t even project becoming profitable until near the end of the decade. And that’s based on some phenomenal and highly speculative revenue growth projections.
The AI development sector is very competitive and the competition is expanding. Maybe a few companies will become dominant and fetch oversized profit margins, as has happened in some tech fields. But, at this point, there is no reason to assume that.
In fact, there’s not much reason to assume that even Nvidia will sustain its robust profit profile. In addition to the possibility of competitors catching up, the company is a ping-pong ball in the geopolitical jockeying for AI superiority between the United States and China, with a large percentage of its revenue hanging in the balance.
I’m in no position to judge the range of potential AI applications. However, whatever they are, their adoption is likely to be more gradual than much of the hype and fretting would suggest. The question is how quickly non-AI companies rejigger their processes to use AI, and the extent to which that displaces existing workers versus making them more productive. In many cases, that will not be an obvious, easy, or risk-free transition. There’s likely to be a period of exploration rather than rapid adoption of the full range of AI possibilities.
The heart of Kelly’s roadmap is the creation of what he calls the “AI Horizon Fund.” The Fund would be seeded by siphoning off revenue from AI developers, justified on the premise, premature at best, that they are generating oversized profits. Kelly doesn’t specify the precise mechanism by which AI developers would be charged for their contributions to the Fund, but whatever it turns out to be, it will function as an additional tax on those companies.
Let’s assume that AI developers turn out to be highly profitable. In that case, they would already be paying more to the federal government through the existing corporate income tax. What’s the basis for singling them out for an even higher effective tax rate than companies in other sectors?
Kelly says that one of the objectives of his roadmap is to keep the United States ahead of China in the AI race. If so, this additional tax would be highly counterproductive. Even if profitable, AI developers are unlikely to get into the business of distributing profits to shareholders through dividends, at least for a very long time. Instead, they will be plowing any after-tax profits into product development, attempting to stay alive on the cutting edge of the competition. In essence, Kelly is proposing to rob these companies of internally generated investment capital. That’s not a way to stay ahead of the Chinese.
The Fund itself is a weird entity. Kelly describes it as a trust fund and says that its spending would not be subject to the annual congressional appropriations process. However, he doesn’t say who exactly would make the spending decisions or how those people would be chosen.
The main focus of the Fund would be retraining workers displaced by AI applications. However, the federal government has a lousy track record on worker training programs, something Kelly’s white paper somewhat acknowledges – although he attributes it to underfunding rather than to the inherent limits on the government’s ability to anticipate future job needs in the private sector.
Kelly calls for a more extensive safety net to deal with AI-related job displacement, including higher unemployment benefits. Higher general unemployment benefits tend to produce a stickier labor market, as the experience of several Western European countries has demonstrated. However, if AI displaces jobs much faster than it creates them, a more extensive safety net could be an appropriate governmental response. This would be a worthwhile area for exploration and discussion.
The most disturbing feature of Kelly’s roadmap is a call for greater regulation of the adoption of AI applications by non-AI companies. Kelly wants to make such adoption subject to consultations with workers and especially unions. The largest potential gains from AI come precisely from its use by non-AI companies to improve productivity throughout the economy. Slowing that process down, or creating obstacles to it, will reduce or delay the benefits AI could deliver. The slower the process and the greater the obstacles, the larger the diminution of the economic benefits.
While I think Kelly is on the wrong track with his AI roadmap, he is on the right track regarding the kind of leadership we need from our elected officials. His previous big white paper was on revitalizing the American maritime industry, with a particular emphasis on shipbuilding.
Maritime capabilities and AI regulation aren’t politically sexy topics. But they are important ones, and ones not given to simple ideological appeals or demonizing political opponents – although I think Kelly’s position on both was influenced by a predisposition in favor of industrial policy. Of course, I have a predisposition against industrial policy.
Do we need an industrial policy regarding AI and does the federal government need to prepare for significant AI-related job losses? Those are serious and important questions, to which Kelly has attempted a serious initial response. As flawed as I think that response is, it is a welcome respite from our generally broken politics.
Reach Robb at robtrobb@gmail.com.
