Your Profit Hour

The AI Action Plan: Taking AI Innovation Seriously

by Matt Mittelsteadt

July 28, 2025

On July 23, the White House released its much-anticipated AI Action Plan. Spanning thirty topics across twenty-eight pages, this plan represents a truly far-reaching attempt to shape the AI future.

The Action Plan’s specifics represent a diverse grab bag of policy action aimed at supporting and shaping AI on all fronts. To speed AI infrastructure buildout, data center and energy permits are set to be fast-tracked. To maximize the impact of existing semiconductor incentives, “extraneous” Chips Act regulatory requirements will be removed. To ensure sufficient AI training data, efforts will aim at improving federal data accessibility. To kindle further research progress, investments will be made in machine learning, AI evaluations, next-generation manufacturing, and genome sequencing research. This is just a slice of what the Action Plan seeks to do. 

Given the scope, there is far more to say than can fit in a single post. This will be part one of a series on the AI Action Plan. As with any policy this big, there are substantial negatives worthy of discussion and attention. These will be addressed in future posts. Today, I will focus on the Action Plan’s biggest positive: its concentration on AI innovation.

Let’s dive in. 

A Thematic Sea Change

The Action Plan’s clear theme is its emphasis on innovation. In its introductory words: “To secure our future, we must harness the full power of American innovation.” This emphasis is a clear break from the recent policy past. Compare these words to the policy thesis that introduced President Biden’s AI executive order: “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.” The difference is stark. 

While both plans stress harnessing AI’s benefits, Biden’s plan was built around the belief that the path to a bright future runs through rules-driven risk management. Harnessing AI benefits meant harnessing AI itself.

To be clear, the Action Plan is hardly an “innovation at all costs” document. The authors are clear-eyed that AI is a general-purpose technology and can indeed be used for purposes both good and malign. The Action Plan, for instance, rightly recognizes that deepfake technology will soon challenge courts’ ability to authenticate the veracity of evidence. New rules and procedures are necessary and are called for. Still, the weights have shifted and the script has been flipped: rules and risk management are now secondary, while innovation, once an also-ran, has taken center stage.

Turning to specifics, the Action Plan has many positive pro-innovation provisions worth highlighting, but two elements stand apart: an analysis of AI regulatory bottlenecks and declared support for open-source and open-weight AI models that are free to use, distribute, and modify. These are worth deeper discussion, as they have the greatest potential for long-run innovation impact.

An AI Regulatory Analysis

The Action Plan’s most important provision is its most understated: the Office of Science and Technology Policy (OSTP) will launch a request for information to identify “current Federal regulations that hinder AI innovation and adoption.” Essentially, the government will be crowdsourcing industry information to create a list of AI regulatory bottlenecks.

This should have been done years ago. Given AI’s sheer diversity, regulatory bottlenecks are possible in nearly every sector and subdomain of federal policy. While deregulation has been discussed endlessly as a tool to unleash AI, the true state of regulatory affairs has never been fully clear. If successful, this request could finally give us a snapshot of the across-the-board deregulatory need.

Such clarity could collapse present policy uncertainty into directed action. With a clear deregulatory to-do list, executive branch decision-makers can pursue targeted, hopefully thoughtful technocratic fixes. Perhaps more importantly, for a Congress that desperately wants to “do something on AI” but has yet to agree on what that “something” might be, this list could provide meaningful legislative direction. 

This action has great potential to be a linchpin of American AI success. As I’ve written, “AI success requires breaking the tech out of the lab and putting it into people’s hands.” While the Plan’s many direct investments in AI could indeed speed innovation, the stated goal of “Winning the AI race” hinges on whether our rules allow those innovations to be diffused, used, and have an impact. By rationalizing or removing any such burdens, we have the unique chance to set the clean institutional table needed for AI benefit.

Efforts to “Encourage Open-Source and Open-Weight AI”

While positive attitudes towards open-source and open-weight AI modestly prevail today, it is easy to forget the open-source skepticism and even fear that dominated just two years ago. When Meta’s Llama model leaked into open use in 2023, anxiety was palpable. High-profile reporters, prominent policy researchers, and even United States Senators openly worried that such an “unrestrained and permissive” release could realize long-held fears of AI risk.

The 2023 Biden AI executive order reflected the uncertainties of the moment. Open models—cumbersomely named “Dual-use foundation models with widely available model weights”—were treated as at best a curiosity and at worst a potential hazard. The national security-tinged “dual use” framing was revealing: while benefits were possible, the first-order question was “Will this be a new form of weapon?” Given this overall “risk-to-be-studied” frame, open source found itself unfortunately sidelined in subsequent policy action. Most notably, there is no mention of open source in the NIST AI Risk Management Framework.

In the AI Action Plan, open models are refreshingly given their due. Overt skepticism is nonexistent, and an entire subsection is devoted to declaring the federal government will create a “supportive environment for open models.”

I believe this support is one of the Action Plan’s “big rock” elements because it signals that harsh open-source restrictions are likely off the table for the next three years. This is particularly important for AI export control policy, where the president’s power to regulate the global AI future is all but unrestrained, and action is most likely.

Such support is essential because of the bright future open source could enable. In 2025, the performance gap between open- and closed-source models has narrowed to just 1.5 percent. Today’s open-source AI models are not only free—they are capable. This matters deeply for the resource-limited. If AI remains effective and free, scientists, small businesses, and the developing world can all share the benefits of the AI future. Without open source, we risk consigning benefits to only those who can afford closed models. Supporting open-source means averting a future of AI haves and have-nots.

Open source is not just the key to AI equity; it may counterintuitively be the answer to some of the very safety concerns that have animated open-source skepticism. In the coming years, AI will almost certainly drive advanced cyber threats. The best—and perhaps only—way to respond will be through widely accessible defensive AI. Through free-to-use defensive options, the security of chronically under-resourced hospitals, schools, and developing nations could be made possible.

While federal backing of open source doesn’t guarantee such futures, it is a necessary first step. Policy support is kindling for private sector confidence. Knowing regulations won’t halt open source, developers can self-assuredly invest their time and efforts into improving these models. With this three-year window of support, open source will hopefully continue to thrive.

Conclusion

These two elements are worth deep discussion because they have the potential to enable the rapid, responsible AI diffusion essential to a positive future. Only if models are open and accessible and rules fit-to-purpose will this technology be broadly used and realize its full promise.

Naturally, the impact of these and other promising elements will depend entirely on implementation. While the general focus on innovation is indeed a positive step, the devil lies in the implementation details. Adverse political whims and the administration’s nationalist impulses may yet water down potential.

While this post has focused on the Plan’s pro-innovation positives, it’s essential to note that an overall pro-innovation tone doesn’t mean all elements will help. Certain provisions—notably its aim to tackle “AI ideological bias”—are significant risks to American competitiveness and a positive, productive AI future. Those risks will be the focus of part two. 
