
Lyse case study: An interview with Øyvind Feed
Two waves of responsible AI – First came governance, now comes competence
Lyse has always had a clear goal – to create new opportunities building on its existing business. New technology and data have long been part of that journey. Underpinning everything are the company’s core values: team player, courageous, straightforward and responsible.
“As part of our focus on data science, several frameworks were established back in 2019 to ensure that we manage technology responsibly and in line with the group’s strategy. A key outcome of this work was the guideline ‘Ethical Use of Algorithms and Artificial Intelligence in the Lyse Group’, which was approved by group management in 2020,” says Øyvind Feed, CIO of Lyse AS.
When ChatGPT was launched in the autumn of 2022, Lyse carried out a risk assessment in accordance with this guideline. The assessment became the starting point for the establishment of the AI Council in 2023. The council was created by group management to ensure that Lyse adopts artificial intelligence in a safe and responsible way. At the same time, it was tasked with building competence and exploring new opportunities – with the expectation that the volume of AI services would grow as the technology moved from producing predictions to generating content.
Practical implementation and future outlook
“Since its inception, the AI Council has reviewed 68 different use cases. It continuously assesses new proposals, defines ethical boundaries and builds a shared practice for AI use. In this way, many small, practical initiatives have been implemented in everyday operations – in customer service, invoicing and work support. More complex areas such as power markets and grid operations have been phased in gradually and in a controlled way,” Feed continues.
Lyse is now building further on this foundation, as more advanced AI tools are introduced to increase speed and quality in development and operations. The principles from the first wave remain firm: human oversight, accountability, traceability and clear boundaries.
At Lyse Produksjon, AI is now used together with optimisation models to manage power production as efficiently as possible.
“As the systems become more automated, people will still be responsible for building, training and monitoring the solutions,” says Feed.
“Another good example is the subsidiary Lnett. The group’s infrastructure company has adopted AI in the planning of maintenance and expansion. With millions of images as a basis, the systems interpret terrain and installations, and suggest, among other things, where wooden poles should – and should not – be used,” he adds.
Lyse practises responsible AI through regular updates of its ethical guidelines and through clearly defined boundaries and restrictions. These include caution around meeting transcription and a firm ‘no’ to analysing colleagues.
Lyse is now preparing for the next wave: AI agents that will transform how customers interact with the company digitally – and how customer relationships are built in the future.