Text-Based Sentiment Analysis at the Workplace

ronnit.w • March 16, 2025

The AI Act prohibits the use of AI for emotion recognition in the workplace. But does that include written text?

Update: The answer is no. The guidelines on prohibited uses of AI released by the AI Office in February 2025 have clarified that text is not considered biometric data. Sentiment analysis based on text is therefore not covered by the prohibition - and, critically, also does not fall under the high-risk use case category. This means employers have no obligation (under the AI Act) to consult workers' representatives, inform affected workers, or ensure the accuracy of predictions for this practice.


Could your employer run sentiment analysis over your Slack message logs to infer your sentiments towards the company? What about assessing employees' sentiments on a pending strategic decision, or for the purpose of leadership feedback? That depends - on two things.
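To make the scenario concrete, here is a minimal sketch of what such a practice could look like technically. This is a toy lexicon-based scorer invented for illustration - the word lists, messages, and function names are all hypothetical, and a real deployment would use a trained ML model rather than word counting.

```python
# Toy sentiment scorer, for illustration only.
# The word lists and sample messages are invented; real systems
# would use a trained model, not simple word counting.

POSITIVE = {"great", "love", "excited", "happy", "good"}
NEGATIVE = {"frustrated", "hate", "quit", "bad", "unhappy"}

def message_sentiment(text: str) -> int:
    """Crude score: count of positive words minus count of negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def employee_sentiment(messages: list[str]) -> float:
    """Average per-message score across an employee's message log."""
    if not messages:
        return 0.0
    return sum(message_sentiment(m) for m in messages) / len(messages)

log = [
    "I love the new roadmap, great work",
    "honestly frustrated with the process",
]
print(employee_sentiment(log))  # average of +2 and -1 -> 0.5
```

Even this trivial sketch shows why the practice is sensitive: a single aggregate number per employee, derived from private messages, is exactly the kind of inference the questions below are about.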


1. Does Article 5 refer to 'Inference of emotions' generally or to 'Emotion Recognition Systems' specifically?


Emotion recognition is considered a highly critical use of AI technology under the EU AI Act - especially at the workplace. In fact, ‘the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace [...]’ is prohibited under Article 5 (f) AI Act. At first sight, this appears to cover the scenario above. However, at least the Dutch data protection authority appears to apply the narrow definition for 'emotion recognition systems' to Article 5. The problem is, this definition covers only AI systems that infer emotions based on biometric data.


This interpretation appears to be in line with the principle that prohibitions are an ultima ratio and should be specified very narrowly. Conversely, a prohibition that is phrased more generally than uses merely subject to regulatory requirements flies in the face of this principle - which suggests that the difference in terminology is the result of an oversight rather than a deliberate choice. On the other hand, one might argue that the specification of domains (the workplace and educational institutions) already counts as more specific. Further, the intrusive nature of the technology cited in the motivation for the ban (Recital 44) certainly extends to the surveillance of thought as mediated through text. In conclusion, it could be argued either way.


2. Can text be considered biometric data?

Assuming that the Dutch DPA's interpretation stands, this raises the question of whether text should be considered biometric data. Here is where it gets interesting: if text were indeed not considered biometric data, the practice would be completely unregulated under the AI Act (although you'd still have some rights under the GDPR).


So… is it? Combining the relevant definitions in the AI Act and the GDPR as referenced by the AI Act, we need to think of biometric data as ‘any information relating to a natural person resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data, which allow or confirm the unique identification of that natural person.’ (AI Act Art. 3(34), GDPR Art. 4(1), (14)).


This definition leaves a lot of room for argumentation around whether or not text relates to the 'behavioural characteristics' of a person (see here for an argument by Tomasz Zalewski why text is not biometric data, for example). While, in my experience, companies tend to err on the side of caution in the face of fines reaching into the millions, I would not be surprised if, in some corporate spreadsheet, someone somewhere concluded that it's worth the gamble.


In the meantime, I hope that the AI Office will respond to this uncertainty and clarify the intended scope of the prohibition of emotion recognition in the workplace in favour of employees' privacy and dignity. Regardless of its legal status, if the company you work for ventures into this territory, it may be time to join a union - or to look for a new job.


Thanks to everyone who contributed to this question via my initial LinkedIn post on this subject, notably Irma Mastenbroek, Alex Moltzau, Tomasz Zalewski, Arnoud Engelfriet, Diana Białobłocka-Błachnicka, and Andreas Häuselmann.

