Finnish Yle Introduces All-Encompassing AI Principles
By Georgi R. Chakarov
Last October, Finland’s public broadcasting company became one of the first media organizations in the world to introduce new principles of responsible artificial intelligence. The principles provide guidance on the use and development of artificial intelligence covering all operations within the company.

Johanna Törn-Mangs was responsible for developing Yle’s AI principles. In this exclusive interview with Georgi R. Chakarov, she shares some of the considerations Yle weighed while drawing up the principles and gives several examples of how Yle is already making effective use of various AI tools. Johanna also stresses the importance of the human factor and the dangers that uncontrolled use of AI could pose in the future, especially when it comes to misinformation.
Johanna, Finland’s national public broadcasting company - Yle - published its new principles of responsible artificial intelligence (AI) in October, becoming one of the first media organizations to do so. How do you define AI at Yle and how do you see its involvement in content, news production and broadcasting?
Since there is no agreed definition of AI, it is practical for us to define it as a broad term for any system that queries data for output. These kinds of systems have a lot in common in terms of the challenges and questions we need to deal with when building or using them. At Yle we are focusing on a responsible approach to AI, and we believe that AI ethics and responsibility will become as important as journalistic ethics in media organizations.

Yes, we want to pilot and experiment with new technologies, but it is not enough to figure out how to accelerate and maximize the use of AI. We need to consider how we really want to use AI - what is the responsible way for a public media company? It is crucial that our use of AI is aligned with our mission and values.

We believe that responsibility is not only an intrinsic value; it also frees up energy by reducing many uncertainties. When there is support and information available both on legality and on addressing ethical issues, development decisions become easier to make. Creativity needs boundaries, and these boundaries for responsible AI create psychological safety for experimenting with and utilizing AI. They encourage us to focus more on people instead of technology.

Yle’s most important value is trust, and we shall not do anything to compromise the audience’s trust in our news and other content. Without trust we cannot fulfill our mission. Transparency and reliability in AI development strengthen public trust in Yle.

One thing that was important when we wrote Yle’s principles of responsible artificial intelligence is that they approach AI broadly. It was important for us that the principles cover the entire company, not only journalism, and that they cover all technologies, not only generative AI. Writing them was a company-wide effort, because we believe that AI affects all parts of the company.

Yle is one of the first media organizations to draft principles of responsible artificial intelligence covering an entire company’s operations. They serve as the starting point for more detailed guidelines in the company - for example, a generative AI policy, guidelines for developing algorithmic systems and using machine learning in our services, rules on where and how we can use AI-generated images, and so on.

Yle’s main principle is that people are always responsible for decisions and the outcome of the use of AI. What kind of safeguards will you introduce and how will you make sure this flagship principle is being upheld at all times?
We are introducing a new governance model for the use and development of AI - 2024 will very much be about learning what works and what does not. The idea is that no application is without a responsible owner. We will also use the governance model to clarify accountability in case of any harm caused by AI and to align it with our current compliance work. We already have safeguards in the form of policies, checklists and compliance checks, but we will be developing these further to make them work better together and ensure nothing falls through the cracks.

How far along is Yle in terms of AI use? In which departments are you using this technology? Have you already aired/streamed AI-generated content?
Yle has been a pioneer: we have been conducting AI experiments for decades, and the company has also taken AI experiments into production. AI has been used to build recommendations for our streaming service Yle Areena, to write news, to cut video clips, to write news in plain language, to suggest headlines, to transcribe and translate texts, to read weather reports on local TV, to write news in Ukrainian, to create audio news from online texts, to create election graphics, to create audio drama, and so on.



Even though we have used AI for a long time, ChatGPT has been a game-changer, because since it was launched, using AI has become much cheaper and easier, and the end result is of far better quality than before. These factors have led to a rising interest among our employees - both journalists and other employee groups, like programmers and HR experts - in experimenting with, using and developing different AI tools.

AI can be used to automate more of our processes and make them more efficient, it can make us work more intelligently, and it can help us innovate new processes and products.

AI helps us in doing many routine tasks much faster and easier, like text-to-speech, speech-to-text, translations, making news in different languages, plain language for the disabled, video and audio editing, synthetic voices, analyzing big amounts of information in investigative journalism, and so on.

We believe that those media companies and individual journalists that are able to start using these tools will have an advantage, since it frees time and resources that can be put into creating new and unique content specifically for our target groups. This means that we can focus on doing journalism that requires human resources, like watchdog journalism, doing local stories by interviewing people in our communities, doing unique investigative journalism, demanding answers from politicians, and so on.

I’d like to describe two of Yle’s most interesting recently launched uses of AI in a little more detail:

Weather reports read by AI

The regional weather forecasts were presented in pictures, by showing a map with the weather forecast for the area, with music playing in the background. Because the regional TV news does not have a presenter who could have read the weather forecast, an alternative solution was sought. It was decided to try whether the weather forecast could be read out loud with the help of artificial intelligence. The text-based weather forecast sent by the Finnish Meteorological Institute was used as a basis, which each area’s producer at Yle modified for publication. After that, automated processes were created to have a synthetic voice read the text; the audio was then automatically sent back and placed over the weather map. In May 2023, this was put into production. The feedback has been positive all around: people have liked the AI’s female voice, and the service for visually impaired viewers has improved.
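The workflow described above - an edited text forecast turned into synthetic speech and attached to the regional weather map - can be illustrated with a minimal sketch. The snippet below is not Yle’s production code: it uses the open-source gTTS library as a stand-in for whatever speech-synthesis service Yle actually uses, and the file paths, region name and function names are hypothetical.

# Minimal sketch of a text-to-speech weather pipeline (not Yle's actual stack).
# gTTS stands in for the real speech synthesizer; paths and names are hypothetical.
from pathlib import Path

from gtts import gTTS


def synthesize_forecast(forecast_text: str, region: str, out_dir: Path) -> Path:
    """Turn an edited text forecast into a Finnish-language audio file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    audio_path = out_dir / f"{region}_forecast.mp3"
    gTTS(forecast_text, lang="fi").save(str(audio_path))  # synthetic voice
    return audio_path


if __name__ == "__main__":
    # In the real workflow the text comes from the Finnish Meteorological
    # Institute and is edited by each region's producer before synthesis.
    edited_text = "Huomenna enimmäkseen aurinkoista, lämpötila noin 15 astetta."
    audio = synthesize_forecast(edited_text, region="uusimaa", out_dir=Path("audio"))
    print(f"Audio ready to be placed over the weather map: {audio}")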

The use of AI-assisted playwriting and speech synthesis at Yle’s drama department

Yle’s drama department started experimenting with writing AI-assisted audio drama in 2017 together with researchers at the University of Helsinki. Paradise Family, made with Dramaturg.io but performed by human actors, was published in 2021. When ChatGPT was released in November 2022, Yle’s dramaturgist Juha-Pekka Hotinen wanted to test what was possible to do with it. He involved four other tech-savvy people in the project - a dramaturgist, a sound designer and two speech synthesis programmers. The aim was to test two things: how to write an audio drama with ChatGPT, and how to create actors for the drama using speech synthesis.

The team chose the subject of the drama: how the appreciation of art and culture has evolved. The two dramaturgists planned the scenes and instructed the AI to write according to them. They also had to explain to ChatGPT what a scene was. The dramaturgists described the main characters - for example their age and personality - to ChatGPT. They worked actively with ChatGPT, evaluating the dialogue and scenes the AI created and giving new instructions if the result wasn’t good or interesting enough. Sometimes only the seventh version of a scene was accepted… ChatGPT created the characters and wrote and rewrote the scenes and the dialogue according to the instructions. The voices of six real people were used to create the actors for the play. Some of them are employees at Yle, and their voices were taken from Yle’s TV archive. None of the voices was used as such; they were mixed to create a unique voice for each character and were not meant to be recognizable.
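To make the iterate-until-accepted workflow concrete, here is a minimal sketch of such a human-in-the-loop drafting loop using the OpenAI Python client. This is not the team’s actual tooling: the model name, the scene brief and the console-based feedback step are illustrative assumptions standing in for the dramaturgists’ own process and judgment.

# Sketch of a human-in-the-loop scene-drafting loop (illustrative, not Yle's tooling).
# Assumes the openai Python package; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENE_BRIEF = (
    "Write scene 3 of an audio drama about how the appreciation of art has evolved. "
    "Characters: LIISA, 62, retired curator, wry; MIKKO, 24, street artist, restless. "
    "A scene is one continuous exchange of dialogue in a single location."
)


def draft_scene(brief: str, previous_draft: str = "", feedback: str = "") -> str:
    """Ask the model for a scene draft, optionally steering it with human feedback."""
    messages = [{"role": "user", "content": brief}]
    if previous_draft and feedback:
        messages.append({"role": "assistant", "content": previous_draft})
        messages.append({"role": "user", "content": f"Rewrite the scene: {feedback}"})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content


# The dramaturgists' evaluation is manual; here it is reduced to a console prompt.
draft = draft_scene(SCENE_BRIEF)
for version in range(1, 8):  # "sometimes only the seventh version ... was accepted"
    print(f"--- draft {version} ---\n{draft}\n")
    notes = input("Feedback (press Enter to accept): ").strip()
    if not notes:
        break
    draft = draft_scene(SCENE_BRIEF, previous_draft=draft, feedback=notes)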



The 47-minute audio drama was ready in the spring of 2023 and was published on our streaming service Yle Areena and broadcast on radio in the fall. It is a drama in episodic form, consisting of different scenes and essays about the meaning and value of art in society from different perspectives. The audience thought it was interesting, weird, funny - but everybody knew that it was written and acted by AI, which clearly had an effect on the feedback. According to Yle’s dramaturgists, it was artistically on an average level: “we’ve published better audio dramas made by humans, but also worse”. Some quality problems remained - the speech synthesis was not perfect, and some scenes could have been worked on more. If these problems had been fixed, the quality would have risen substantially. On the other hand, ChatGPT proved to be quite creative, connecting things in unexpected ways.

What will be the new media experiences Yle has prepared thanks to AI?
We are accelerating the personalization of our services, and AI obviously has a central role there. We are utilizing it, for example, to enable multilingual services and to find links between pieces of content that help people discover new content. And, of course, for personal recommendations.

We believe that people will demand personalized news services, and we are working on developing a public service algorithm. However, at this point, we don’t believe in customized news only, because of our mission and values. We believe that as a public service company we need to build our values into our algorithm, which means that we will not only reinforce the audience’s pre-existing preferences, but also optimize for values like universality, serendipity, exposure to diversity, transparency and collective media experiences when we develop our recommendation algorithm. It will be a model with many different ways and opportunities to find and consume content, rather than one big algorithm that rules it all.
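To illustrate what “building values into the algorithm” can mean in practice, here is a minimal re-ranking sketch. It is not Yle’s public service algorithm - the scoring terms and weights are purely illustrative assumptions - but it shows how relevance can be blended with signals such as diversity and serendipity instead of relevance alone deciding the list.

# Illustrative re-ranking sketch; not Yle's actual recommendation algorithm.
# The weights and scoring terms are made-up assumptions that show how relevance
# can be blended with public-service values such as diversity and serendipity.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    relevance: float   # 0..1, match with the user's pre-existing preferences
    category: str      # e.g. "news", "drama", "sports"


def rerank(items: list[Item], history_categories: set[str],
           w_relevance: float = 0.6, w_diversity: float = 0.25,
           w_serendipity: float = 0.15) -> list[Item]:
    """Order items by a blend of relevance and value-based signals."""
    def score(item: Item) -> float:
        diversity = 0.0 if item.category in history_categories else 1.0
        serendipity = 1.0 - item.relevance  # crude proxy for less obvious picks
        return (w_relevance * item.relevance
                + w_diversity * diversity
                + w_serendipity * serendipity)
    return sorted(items, key=score, reverse=True)


if __name__ == "__main__":
    catalogue = [
        Item("Election analysis", 0.9, "news"),
        Item("AI audio drama", 0.4, "drama"),
        Item("Local ice hockey", 0.6, "sports"),
    ]
    for item in rerank(catalogue, history_categories={"news"}):
        print(item.title)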

What is the perception of AI in Finnish society? Have you seen an increase in the use of such tools in Finland?
In general, Finns tend to be pragmatic and positive about technology, and we have a high rate of digitalization in both our public and private sector customer services. Recent studies have found that the majority of people in Finland believe AI will boost industry efficiency and personal productivity. At the same time, a sizable share (around a third of respondents) thinks it will weaken privacy protections, decrease the number of jobs and make it harder to access accurate information. Generative AI is clearly shaping perceptions at the moment - many people who were not concerned or did not pay attention before are now forming their views about AI.

I think many media companies in Finland are now accelerating their use of AI. Some are creating new teams and writing their own AI principles. Some have also published new features or products made by AI.

Yle is constantly assessing AI-related risks. What has your research shown in this respect, and what are the biggest threats you see in using AI?
AI offers a lot of possibilities for Yle, and we have the resources and the knowledge to use AI in a responsible way. We also have a network of local journalists covering the whole country, and good possibilities to create unique journalism that no machine can deliver. However, there are a lot of risks, and for media companies it is crucial to focus on the responsible use of AI. One of the biggest ethical challenges is whether we can trust the largest AI companies and be sure that the solutions they develop live up to our ethical standards.

A big challenge for responsible media will be to make it possible for people to construct their world view with the help of information they can trust. Responsible media companies could cooperate with each other around watermarking and other proofs of authenticity.

The biggest challenge for media companies will, however, be the media landscape that we are operating in. Companies that have no background in media, and that do not have the same ethical standards as traditional media companies, will be able to flood the platforms with huge amounts of news and other content made by AI. There will be AI agents that decide what the audience will see. It will be harder for audiences to find reliable content among huge amounts of content that looks reliable even though it is not. We will be flooded by fake news, fake pictures, and so on. There has been a lot of discussion about misinformation, and I think those fears are justified, because AI makes it very easy to create content that looks real. We also have past examples of huge operations to influence elections around the world, and AI scales misinformation in an unprecedented way. One future risk that is seldom discussed is what it means for the whole internet when AI-generated content is fed as training material to yet another AI.

As we saw from both the writers’ and actors’ strikes in the US, one of the major concerns when using AI in the entertainment industry is the issue of copyright. How is Yle addressing those concerns?
Creators’ rights to their work, and to benefiting from it, are a rather big philosophical question, and we expect that discussion to remain lively in the coming years. Our principle for copyright states: “When we use AI, we take into account copyright as well as the rights of people working on creative tasks. At the same time, we are aware that machine learning challenges existing compensation models for creative work. We seek new methods together with other players in the industry.”

Juha-Pekka Hotinen in the Artlab studio in Helsinki, where Paradise Family was recorded


We believe that we are in a transformative phase regarding how creative work is compensated. We need to innovate and negotiate within our industry as well as with the technology players, while being firm about opposing the overly permissive practices of some technology companies that release products without thoroughly considering potential risks and ethical implications. We currently prohibit data scraping of our digital services for commercial purposes. We are also in constant dialogue with the copyright organizations we have agreements with.

As a public broadcaster, have you planned educational campaigns regarding the use of AI for your viewers/users?
We have said that our goal is that both Yle employees and those who use Yle’s services should have the opportunity to understand what the algorithms do and what data they use. We have discussed educational campaigns and our aim is to educate our audience, but we have no concrete plans yet.

Recently we have discussed our approach to AI with the other Nordic public service media companies multiple times, and we are constantly sharing best practices and strategies amongst us. At the moment, we are not developing together, but it is possible to cooperate around this in the future.
Johanna Törn-Mangs is responsible for developing Yle’s principles for responsible AI. She works as Director and Editor-in-Chief at Svenska Yle, the department specializing in content for the Swedish-speaking audience at Finland’s national public broadcasting company. Johanna has worked as a journalist in Finland for twenty years. She has held numerous management positions, specializing in leading digital media teams. She has also worked as a newspaper, online, TV and radio reporter, as well as a foreign correspondent in the US and Sweden. Johanna holds a Master of Science in Economics.