Game Over for OS Foundation Models?

ronnit.w • September 8, 2023

In the fast-paced world of artificial intelligence, the European Union is taking a proactive stance to ensure ethical and responsible AI development. The proposed regulation, known as the EU AI Act, seeks to govern AI systems and their components. Three slightly different proposals are currently under debate between the European Commission and the legislative authorities (i.e. the Council of the European Union and the European Parliament), who must agree on any legislation passed at Union level. Amid these negotiations, the Open Source community has voiced concerns that the legislation is tailored to fit the structures of centralised, commercial software development to the detriment of the open source community.

The OS community calls for better representation in the AI Act

A consortium representing the OS community, including HuggingFace, GitHub, OpenFuture and a few other actors in the space, has published policy recommendations aimed at improving the conditions for open source development. Specifically, the criticisms and proposed improvements concern open source foundation models, which the authors argue stand to contribute tremendously to decentralised economic growth. (Interestingly, this claim seems to be contradicted by Nathaniel Whittemore’s observation that open source foundation models appear to be pushing AI start-ups out of business: the large companies expected to buy these start-ups’ products prefer to develop their own custom LLM systems in-house using open source foundation models.) In this blog post, we will delve into the nuances of the different regulatory proposals and discuss how they stand to impact the open source community.


What’s on the table

There are three proposals on the table, published in 2021, 2022 and 2023 respectively. As we all know, much has happened during this time, and while all proposals foresee exemptions for Open Source components, each reflects these developments to a differing degree (or fails to).

  1. Commission Proposal (from April 2021):
    This proposal exempts open source components from regulation unless they are integrated into a broader AI system. It further excludes systems developed for research purposes only. However, LLMs were not yet a thing when this proposal was written between 2019 and 2021. It can therefore be expected that this blanket exclusion of open source components will not stand. 

  2. Council Compromise Proposal (from February 2022):
    The Council's proposal goes a step further, separately defining and addressing General Purpose AI systems (GPAI), including LLMs. It also clearly foresees exemptions for open-source GPAI systems in the context of research and development.

  3. Parliament Coordinated Position (from June 2023):
    With more time to consider the advent of LLMs in depth and a growing understanding of the underlying technology, the Parliament’s proposal defines foundation models as a category of AI systems in its own right, with special requirements. While the Parliament also foresees exemptions for non-commercial, open source components and explicitly relieves OS developers of any downstream obligations, these exemptions do not extend to foundation models. Furthermore, the proposal retracts these exemptions where systems or system components are being tested live.


What the OS community thinks about it

While the European AI Regulation aims to foster responsible AI development, the consortium points out how the current proposals stand to structurally disadvantage Open Source foundation models in particular, and advocates changing the regulation accordingly. They argue that the current requirements for foundation models, and for all (components of) high-risk AI systems being tested under real-world conditions, fail to take into account the decentralised nature of OS software development. Without accommodating this circumstance, the authors insist, the AI Act threatens to make OS development of foundation models impossible. They put forward a number of policy recommendations to level the playing field between OS and closed-source development of AI systems.


  1. Defining Components Properly: To ensure legal certainty, the consortium calls for a clear definition of the term ‘component’ within the context of AI systems. 

  2. No Obligations for OS Developers: The consortium recommends that the regulation explicitly states that open source developers should face no obligations to cooperate with downstream users, as proposed by the Parliament.

  3. Establishment of European AI Office: The consortium supports the establishment of a European AI Office to provide regulatory clarity and facilitate inclusive deliberation. They argue that this office can help address emerging issues and provide guidance to developers.

  4. Enabling Limited Live-Testing: Limited live-testing should be permitted for OS systems within certain boundaries, subject to ‘sufficient’ documentation and transparency to users. The consortium does not address the uncertainty contained in its proposed phrasing (what, exactly, constitutes ‘sufficient’?).

  5. Proportional Requirements for Foundation Models: The consortium argues that open source foundation models below a certain size or significance should be subject to fewer requirements. In particular, the authors propose that obligations relating to the tracking of energy usage, the creation of a documented quality management system, and registration and record-keeping should be suspended for OS systems below a certain size. Data governance, risk assessment and technical documentation obligations would be maintained for foundation models of any size. The authors further propose that the AI Office define, regularly review and adjust the relevant thresholds for applicability.

  6. Inclusion of OS Developer Community: Finally, the envisioned advisory forum for AI regulation should include active participation from the open source developer community to ensure that the needs of the OS ecosystem are taken into consideration.


What are the chances?

In the tug of war that is the European legislative process, it remains to be seen who can make their voices heard. Especially with a view to General Purpose AI systems, the open source community is under fire for allegedly playing fast and loose with a dangerous technology, a narrative leveraged particularly (but by no means only) by the incumbent tech giants scared of having their lunch eaten. At the same time, the existing exceptions for open-source components demonstrate a base level of goodwill towards the OS ecosystem on the part of the institutions. While the OS consortium’s policy recommendations arrive late in the legislative process, the lack of existing proposals on foundation models from the Commission may leave some wiggle room during negotiations. I will keep an eye out for whether, and how, the OS community's recommendations find their way into the final regulation.

  • Use of Generative AI Transparency Notice

    ChatGPT 3.0 was used in the creation of this blog article. I read the policy recommendation, taking bullet-point notes on its content. I pasted these into ChatGPT and asked it to draft a blog post based on my notes. I then went over the output, using the proposed structure for orientation while remodelling and rephrasing the running text to reflect my own thoughts. I checked the generated output for accuracy against my own notes on the original proposal; any remaining errors are my own.
