<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:g-custom="http://base.google.com/cns/1.0" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
  <channel>
    <title>e59ed937</title>
    <link>https://www.aiactready.eu</link>
    <description />
    <atom:link href="https://www.aiactready.eu/feed/rss2" type="application/rss+xml" rel="self" />
    <item>
      <title>Text-Based Sentiment Analysis at the Workplace</title>
      <link>https://www.aiactready.eu/can-employers-run-sentiment-analysis-over-their-employee-s-slack-messages</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;h3&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The AI Act prohibits the use of AI for emotion recognition in the workplace. But does that include written text?
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/h3&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div&gt;&#xD;
  &lt;img src="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-1743364-cd1a861e.jpeg"/&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           Update:
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
             
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      
           The answer is no. The guidelines on prohibited uses of AI released by the AI Office in February 2025 have clarified that text is not considered biometric data. Sentiment analysis based on text is therefore not included in the prohibition - and, critically, also not under the high-risk use case category. This means employers have no obligation to consult workers' representatives, inform affected workers, or ensure the accuracy of predictions for this practice (under the AI Act).
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Could your employer run sentiment analysis over your Slack message logs to infer your sentiments towards the company? What about assessing employees' sentiments on a pending strategic decision, or for the purpose of leadership feedback? That depends - on two things.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      &lt;br/&gt;&#xD;
      
           1. Does Article 5 refer to 'Inference of emotions' generally or to 'Emotion Recognition Systems' specifically?
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Emotion recognition is considered a highly critical use of AI technology under the EU AI Act - especially at the workplace. In fact, ‘the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace [...]’ is prohibited under Article 5(1)(f) AI Act. At first sight, this appears to cover the scenario above. However, at least the Dutch data protection authority appears to apply the narrow definition for 'emotion recognition systems' to Article 5. The problem is, this definition covers only AI systems that infer emotions based on biometric data.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           This interpretation appears to be in line with the principle that prohibitions are an ultima ratio that should be very narrowly specified. Conversely, a prohibition that is phrased more generally than uses merely subject to regulatory requirements flies in the face of this principle. This suggests that the difference in terminology is the result of an oversight rather than a deliberate choice. On the other hand, one might argue that the specification of domains (in the workplace and in education institutions) already counts as more specific. Further, the intrusive nature of the technology cited in the motivation for the ban (Recital 44) certainly extends to the surveillance of thought as mediated through text. In conclusion, it could be argued either way.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           2. Can text be considered biometric data?
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Assuming that the Dutch DPA's interpretation stands, this then raises the question of whether text should be considered biometric data. Here is where it gets interesting: If text were indeed not considered biometric data,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           the practice would be completely unregulated
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            under the AI Act (although you'd still have some rights under GDPR).
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           So… is it? Combining the relevant definitions in the AI Act and the GDPR (as referenced in the AI Act), we need to think of biometric data as ‘
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           any information
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            relating to a natural person
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           resulting from specific technical processing
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            relating to the
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           physical, physiological or behavioural characteristics
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            of a natural person, such as facial images or dactyloscopic data, which allow or confirm the unique identification of that natural person.’ (AI Act Art. 3(34), GDPR Art. 4(1), (14)).
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            This definition leaves a lot of room for argumentation around whether or not text relates to 'behavioural characteristics' of a person (see here for an
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/pulse/sentiment-analysis-using-ai-system-prohibited-tomasz-zalewski-7hfze/" target="_blank"&gt;&#xD;
      
           argument why text is not biometric data
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            by Tomasz Zalewski, for example). While, in my experience, companies tend to err on the side of caution in the face of fines reaching into the millions, I would not be surprised if, in some corporate spreadsheet, someone somewhere worked out that it’s worth the gamble.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           In the meantime, I hope that the AI Office will respond to this uncertainty and clarify the intention behind the prohibition of emotion recognition in the workplace in favour of employees’ privacy and dignity. Regardless of its legal status, if the company you work for ventures into this territory, it may be time to join a union - or to look for a new job.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Thanks to everyone who contributed to this question via my initial LinkedIn post on this subject, notably
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/irma-mastenbroek-2b676688/" target="_blank"&gt;&#xD;
      
           Irma Mastenbroek
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            ,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/alexandermoltzau/" target="_blank"&gt;&#xD;
      
           Alex Moltzau
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            ,
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/tomaszzalewski/" target="_blank"&gt;&#xD;
      
           Tomasz Zalewski
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/arnoudengelfriet/" target="_blank"&gt;&#xD;
      
           Arnoud Engelfriet
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/diana-bia%C5%82ob%C5%82ocka-b%C5%82achnicka-7015556/" target="_blank"&gt;&#xD;
      
           Diana Białobłocka-Błachnicka
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.linkedin.com/in/andreas-h%C3%A4uselmann-851248140/" target="_blank"&gt;&#xD;
      
           Andreas Häuselmann
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            .
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-313690.jpeg" length="353001" type="image/jpeg" />
      <pubDate>Sun, 16 Mar 2025 10:18:31 GMT</pubDate>
      <author>ronnit.w@gmail.com</author>
      <guid>https://www.aiactready.eu/can-employers-run-sentiment-analysis-over-their-employee-s-slack-messages</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-313690.jpeg">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-313690.jpeg">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>(How) Does the AI Act Regulate AI Companions?</title>
      <link>https://www.aiactready.eu/ai-act-legislative-process</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;div&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    There are so many good reasons to communicate with site visitors. Tell them about sales and new products or update them with tips and information.
  
                    &#xD;
    &lt;/p&gt;&#xD;
  &lt;/div&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;div&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    Here are some reasons to make blogging part of your regular routine.
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;p&gt;&#xD;
      &lt;br/&gt;&#xD;
      &lt;b&gt;&#xD;
        
                        
      Blogging is an easy way to engage with site visitors
    
                      &#xD;
      &lt;/b&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/p&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    Writing a blog post is easy once you get the hang of it. Posts don’t need to be long or complicated. Just write about what you know, and do your best to write well.
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    Show customers your personality
  
                    &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    When you write a blog post, you can really let your personality shine through. This can be a great tool for showing your distinct personality.
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    Blogging is a terrific form of communication
  
                    &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    Blogs are a great communication tool. They tend to be longer than social media posts, which gives you plenty of space for sharing insights, handy tips and more.
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    It’s a great way to support and boost SEO
  
                    &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    Search engines like sites that regularly post fresh content, and a blog is a great way of doing this. Add relevant metadata to every post so search engines can find your content.
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    Drive traffic to your site
  
                    &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    Every time you add a new post, people who have subscribed to it will have a reason to come back to your site. If the post is a good read, they’ll share it with others, bringing even more traffic!
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    Blogging is free
  
                    &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    Maintaining a blog on your site is absolutely free. You can hire bloggers if you like or assign regular blogging tasks to everyone in your company.
  
                    &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    A natural way to build your brand
  
                    &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
                      
    A blog is a wonderful way to build your brand’s distinct voice. Write about issues that are related to your industry and your customers.
  
                    &#xD;
    &lt;/p&gt;&#xD;
  &lt;/div&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-8294823.jpeg" length="286545" type="image/jpeg" />
      <pubDate>Sat, 25 Jan 2025 09:09:19 GMT</pubDate>
      <guid>https://www.aiactready.eu/ai-act-legislative-process</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-8294823.jpeg">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-8294823.jpeg">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>Game Over for OS Foundation Models?</title>
      <link>https://www.aiactready.eu/game-over-for-os-foundational-models</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            In the fast-paced world of artificial intelligence, the European Union is taking a proactive stance to ensure ethical and responsible AI development. The proposed regulation, known as the EU AI Act, seeks to govern AI systems and their components. Three
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://www.europarl.europa.eu/cmsdata/272920/AI%20Mandates.pdf?utm_source=substack&amp;amp;utm_medium=email" target="_blank"&gt;&#xD;
      
           slightly different proposals
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            are currently under debate between the European Commission and the co-legislators (i.e. the Council of the European Union and the European Parliament), who must agree to any legislation passed at Union level. Amid these negotiations, the open source community has voiced concerns that the legislation is tailored to fit the structures of centralised, commercial software development to the detriment of the open source community.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           The OS community calls for better representation in the AI Act
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            A consortium representing the OS community, including HuggingFace, GitHub, OpenFuture and a few other actors in the space, has published
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://github.blog/wp-content/uploads/2023/07/Supporting-Open-Source-and-Open-Science-in-the-EU-AI-Act.pdf?utm_source=substack&amp;amp;utm_medium=email" target="_blank"&gt;&#xD;
      
           policy recommendations
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            aimed at improving the conditions for open source development. Specifically, the criticisms and proposed improvements are aimed at open source foundation models, which the authors argue stand to contribute tremendously to decentralised economic growth. (Interestingly, this claim seems to be contradicted by Nathaniel Whittemore’s observation that
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;a href="https://open.spotify.com/episode/1hqLlUwzrsV9seYfrwvGkx?si=09T4Coz4QKSSlT0NzyzVhQ" target="_blank"&gt;&#xD;
      
           open source foundation models appear to be pushing AI start-ups out of business
          &#xD;
    &lt;/a&gt;&#xD;
    &lt;span&gt;&#xD;
      
           . This is because the large companies expected to buy these start-ups’ products prefer to develop their own custom LLM systems in-house using open source foundation models).  In this blog post, we will delve into the nuances of the different regulatory proposals and discuss how they stand to impact the open source community.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           What’s on the table
          &#xD;
    &lt;/strong&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      
           There are three proposals on the table, published in 2021, 2022 and 2023 respectively. As we all know, much has happened during this time, and while all proposals foresee exemptions for open source components, they reflect these developments to varying degrees.
          &#xD;
    &lt;/span&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ol&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Commission Proposal (from April 2021)
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;span&gt;&#xD;
          
             :
            &#xD;
        &lt;/span&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            This proposal exempts open source components from regulation unless they are integrated into a broader AI system. It further excludes systems developed for research purposes only. However, LLMs were not yet a thing when this proposal was written between 2019 and 2021. It can therefore be expected that this blanket exclusion of open source components will not stand. 
             &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Council Compromise Proposal (from February 2022):
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            The Council's proposal goes a step further and separately defines and addresses General Purpose AI (GPAI) systems, including LLMs. It also clearly foresees exemptions for open-source GPAI systems in the context of research and development.
            &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Parliament Coordinated Position (from June 2023):
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
      &lt;span&gt;&#xD;
        
            With some more time to consider in depth the advent of LLMs and a growing understanding of the underlying technology, the Parliament’s proposal defines foundation models as a category of AI systems of its own, with special requirements. While the Parliament also foresees exemptions for non-commercial, open source components and explicitly relieves OS developers of any downstream obligations, these exemptions do not extend to foundation models. Furthermore, the proposal retracts these exemptions where systems or system components are being tested live.
            &#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ol&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
    &lt;strong&gt;&#xD;
      
           What the OS community thinks about it
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
           While the European AI Regulation aims to foster responsible AI development, the consortium points out how the current proposals stand to structurally disadvantage open source foundation models in particular, and advocates changing the regulation accordingly. They argue that the current requirements for foundation models, and for all (components of) high-risk AI systems being tested under real-world conditions, fail to take into account the decentralised nature of OS software development. Without accommodating this circumstance, the authors insist, the AI Act threatens to make OS development of foundation models impossible. They put forward a number of policy recommendations to level the playing field between OS and closed-source development of AI systems.
           &#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;ol&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Defining Components Properly:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
            To ensure legal certainty, the consortium calls for a clear definition of the term ‘component’ within the context of AI systems. 
            &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            No Obligations for OS Developers:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
            The consortium recommends that the regulation explicitly states that open source developers should face no obligations to cooperate with downstream users, as proposed by the Parliament.
            &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Establishment of European AI Office:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
            The consortium supports the establishment of a European AI Office to provide regulatory clarity and facilitate inclusive deliberation. They argue that this office can help address emerging issues and provide guidance to developers.
            &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Enabling Limited Live-Testing:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Limited live-testing should be permitted for OS systems within certain boundaries, subject to ‘sufficient’ documentation and transparency to users. The consortium does not address the uncertainty contained in this proposed phrasing (what, exactly, constitutes ‘sufficient’?).
            &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Proportional Requirements for Foundation Models:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
            The consortium argues that open source foundation models below a certain size or significance should be subject to fewer requirements. In particular, the authors propose that obligations relating to the tracking of energy usage, the creation of a documented quality management system, and registration and record-keeping should be suspended for OS systems below a certain size. Data governance, risk assessment and technical documentation obligations are proposed to be maintained for foundation models of any size. The authors further propose that the AI Office defines, regularly reviews and adjusts the relevant thresholds for applicability.
            &#xD;
        &lt;br/&gt;&#xD;
        &lt;br/&gt;&#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
    &lt;li&gt;&#xD;
      &lt;strong&gt;&#xD;
        
            Inclusion of OS Developer Community:
           &#xD;
      &lt;/strong&gt;&#xD;
      &lt;span&gt;&#xD;
        
            Finally, the envisioned advisory forum for AI regulation should include active participation from the open source developer community to ensure that the needs of the OS ecosystem are taken into consideration.
           &#xD;
      &lt;/span&gt;&#xD;
    &lt;/li&gt;&#xD;
  &lt;/ol&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;strong&gt;&#xD;
      &lt;br/&gt;&#xD;
      
           What are the chances?
          &#xD;
    &lt;/strong&gt;&#xD;
    &lt;span&gt;&#xD;
      &lt;br/&gt;&#xD;
      
           In the tug of war that is the European legislative process, it remains to be seen who can make their voices heard. Especially with a view to General Purpose AI systems, the open source community is under fire for allegedly playing fast and loose with a dangerous technology, a narrative leveraged particularly (but by no means only) by incumbent tech giants afraid of having their cake eaten. At the same time, the existing exceptions for open-source components demonstrate a base level of goodwill on the part of the institutions towards the OS ecosystem. While the OS consortium’s policy recommendations arrive late in the legislative process, the lack of existing proposals on foundation models by the Commission may leave some wiggle room during negotiations. I will keep an eye out for how the OS community's recommendations find their way into the final regulation, or otherwise.
          &#xD;
    &lt;/span&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-11034131.jpeg" length="162134" type="image/jpeg" />
      <pubDate>Fri, 08 Sep 2023 09:09:19 GMT</pubDate>
      <guid>https://www.aiactready.eu/game-over-for-os-foundational-models</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-11034131.jpeg">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irp.cdn-website.com/b16c8396/dms3rep/multi/pexels-photo-11034131.jpeg">
        <media:description>main image</media:description>
      </media:content>
    </item>
  </channel>
</rss>
