Development of generative AI policy at Lund University
Professor Eva Åkesson at Uppsala University. (Photo credit: Mikael Wallerstedt, Uppsala University)
Eva Åkesson, Professor of Chemical Physics at Lund University (Sweden) and former Rector of Uppsala University, kindly shares some of her observations from working on university policy for generative AI (GenAI) at Lund University. Her candid insights are beneficial for senior women leaders grappling with this challenging issue.
By Eva Åkesson with Patricia A. Maurice and Janet G. Hering
3 March 2026, DOI: 10.5281/zenodo.18526609
As a senior advisor at Lund University [1], together with a small group, I had the task of proposing a coordinated approach for the use of AI at the university, with a special focus on educational matters. The task involved dialogue with faculties, students, and relevant support structures to summarize existing co-intelligence procedures and policies at each faculty, as well as to map future needs and plans. The assignment also included conducting benchmarking studies of international universities. In addition, we were assigned to develop a university-wide policy for the use of generative AI (GenAI). This blog post draws upon our task force work and ensuing reports. It also includes observations from a debate I participated in at the 2025 EWORA (European Women Rectors Association) Conference in Lisbon. A report on the conference is now available [2], and Janet, whom I was pleased to meet at the conference, wrote a post about it [3]. I was delighted to be invited to write about my work on GenAI policy for this blog.
The year spent diving into the questions and concerns regarding GenAI at a university as ancient and prestigious as Lund was challenging but rewarding. Responding to GenAI is a challenge we all face in academia. After dialogue with faculty, we came to describe the development as moving from working “against” GenAI to working “with” it. We realized that GenAI currently represents a ‘Wild West’ environment. The initial phase after ChatGPT was launched in November 2022 was characterized by faculties’ resistance to the use of GenAI, and by concerns about potential cheating by students. Discussion evolved to how AI can be used, how assessment needs to change, and what implications the use of GenAI may have for pedagogy and didactics. Questions were raised such as how syllabi need to be revised, or whether there will be a shift or change in emphasis between learning outcomes. As I discussed GenAI with (blog editors) Janet and Patricia, we understood this is not the first time new technologies have disrupted education. The introduction of pocket calculators in the 1970s caused faculties in STEMM and Business schools to consider when and how students might be permitted to use them. Faculty approaches evolved from resistance to eventually requiring calculator use for many assignments and examinations. Of course, GenAI is immensely more powerful and transformative, and it is spreading far faster!
Some overarching observations about GenAI on campuses
GenAI is becoming ubiquitous and it is here to stay. It is extremely powerful and can increase efficiency. As an example, I used Microsoft Copilot AI tools to translate my notes from Swedish to English for this article. GenAI can offer enormous benefits and should be embraced, but we also need to be aware of potential pitfalls [4].
We also acknowledged that many students are focused on graduating as a gateway to a good job and future successes. Students want to know the rules. Students often question why, if they will be using GenAI throughout their lives, they are constrained in how they use it at school. On the other hand, university faculty know it is crucial to teach students not just facts and figures but how to think critically, to research and evaluate, and to be prepared for life-long learning. Faculty want to ensure that students do not lose their ability to learn, think critically and innovate because of GenAI. Students need to be taught and gain experience so that they can take responsibility for the accuracy and integrity of GenAI-assisted work.
Universities are intellectual leaders and we should lead in using AI ethically and responsibly to benefit society. Faculty and staff must set an example of transparent and ethical use of GenAI not just for students but for the public. Janet, Patricia, and I discussed how, in the US, there has been a recent backlash against universities for being ‘elite’ and ‘out of touch.’ Setting a good example of balanced and responsible GenAI use is one step towards regaining public trust. Moreover, people are growing wary of AI ‘slop’ (see [4]) and many are looking for better curated, more trustworthy materials, especially online. Again, universities can lead the way.
Some initial observations from my committee work at Lund include:
· There is considerable uncertainty and anxiety among both staff and students regarding what is permitted or appropriate when it comes to the use of GenAI. This is no surprise given how rapidly the GenAI landscape is changing.
· Digital competence varies greatly among both staff and students. There are pioneers and change agents, but also many who have never consciously used AI. As discussed in our previous post [4], GenAI is often hidden. There is a broad demand for skills development, but it can be difficult to prioritize and allocate time for this when everyone is already busy. There are also concerns that the necessary skills development is not always available or sufficiently systematic. This is a reminder that a diverse faculty with experienced professors and more junior faculty, many of whom have grown up in an AI world, can be a powerful combination when everyone is engaged.
· Students use AI tools to a greater extent than is obvious, often in secret due to uncertainty about what is permitted. They are asking for more openness, clear information, and dialogue about AI in education. There are also requests for teachers to be transparent and to discuss AI-related issues. Students are also concerned that they might be falsely accused of using GenAI, especially as programs designed to detect such cheating can give false positives [4].
· Information about AI tools, guidelines, and resources tends to be scattered and difficult to find. There is a need for a common, coordinated, and clear channel for AI-related policies and queries for both staff and students. Such a channel will need to be revisited and updated as GenAI and opinions evolve.
Through our efforts at Lund to benchmark and coordinate with other universities in Sweden and abroad, it became evident that many universities are in similar situations, facing uncertainty, a need for guidelines, and varying degrees of coordination. Several institutions have developed their own policies, training programs, and support structures for AI, which provide valuable points of comparison for Lund University. The situation at Lund is not special, but rather typical.
Keywords from our work at Lund
We identified three primary keywords for dealing with GenAI on campus:
TRUST – TRANSPARENCY – TIME
A fundamental principle of the policy is trust – that staff and students will use GenAI in a responsible, ethical, and transparent manner. The traditional academic principles of openness, honesty and integrity apply to any new technological or pedagogical development. The use of GenAI must be characterized by transparency, with staff and students open about when, how, and why GenAI is used. By using GenAI, time can be freed up for what is most valuable at a university: meetings between people, critical thinking, and academic development.
Approached properly, everyone’s work can be made more efficient. For example, professors and staff might devote less time to editing student drafts for translation and grammar. But this doesn’t mean students should not learn how to write well!
I encourage all senior women leaders to add another keyword:
TALK!
Much dialogue and exchange of experience is needed both among and between students and staff. In a world of AI where everything is changing fast, there is much uncertainty and anxiety on campuses. This is not something you can solve by yourself; it is something we must manage together.
Special opportunities and challenges of GenAI for women
As with any big transformation in society, GenAI offers both challenges and opportunities for women. One of the biggest challenges is bias, as we discussed in a previous post [4]. Most current GenAI relies on publicly available data, often limited to what is freely available and not behind a paywall. Yet there has traditionally been a paucity of data on women or of gender-disaggregated data [5]. Moreover, many publications in STEMM traditionally have been dominated by white male authors. Plus, GenAI programmers and software developers are primarily white and Asian men. If we are to use GenAI, we must try to ensure it does not bake in old biases.
That said, AI offers the ability to analyze — quickly and efficiently — large datasets to identify biases. Moreover, as addressed in a previous post on service work [6], women in academia often devote more time to ‘academic housework.’ GenAI could be used to take on or streamline some service work so that faculty can spend more time on higher-level tasks such as research and innovation. This might help to ensure a more level playing field.
Policy considerations going forward
At Lund University, and indeed at any university, an open culture and transparency should permeate the entire organization. At many higher education institutions, the focus has been on students’ use of GenAI, especially in connection with examinations. There is a need to broaden the perspective so that all activities are encompassed. Discussions are needed on how we indicate and acknowledge the use of GenAI in different contexts.
Many STEMM journals and societies have strict guidelines on the use and acknowledgment of AI, including GenAI. Hiding the use of AI in the creation of content or drafting of publications is considered a deviation from good research practice by the Swedish Research Council [7].
A good policy should aim to create a positive attitude towards the use of GenAI. Nonetheless, policies should not be naïve. They should acknowledge the significant problems of AI ‘slop’ (see [4]) and the potential for AI to be used for nefarious purposes such as harassment. Skills development is a fundamental prerequisite. Any university needs to examine both how the tools are used and who has access to them.
It is important that university policies regarding GenAI be developed in dialogue by a diverse group of people. Students need to be involved, along with faculty and staff. Women and members of marginalized groups need to be included, especially given the special challenges and opportunities relating to bias.
At Lund, the task force’s recent view was that GenAI is, in principle, covered by existing regulations and ethical guidelines, such as those against plagiarism and for following an instructor’s guidelines for classroom work. GenAI, however, involves many new tools, and new situations continue to arise. Lund’s policy on GenAI, published in December 2025, recognizes that the policy will need “to be updated in line with the need for change in the world around us and within the organisation” [8]. Thus, we must work together to interpret and find approaches going forward so as to meet the widespread concern and demand for clarity. In this context, the policy and its accompanying guidelines serve an important function for the organization.
Times are changing and the world is learning about the strengths, weaknesses, promise, and potential pitfalls of GenAI. To some degree, people are becoming less anxious and concerned. But tomorrow may bring new technologies posing new opportunities and challenges. So, I stress that everyone remember the keywords: Trust, Transparency, Time, and especially Talk! It is my hope that this blog post will be part of that all-important effort to communicate.
Questions for further consideration
Here are some questions our readers might consider going forward:
· If you are currently associated with a university, does it have a clearly articulated and openly available formal policy for use of GenAI?
· What new policy challenges might GenAI pose in the future?
· Do you personally use GenAI? Do your students? What have your observations and impressions been to date?
References and notes
[1] https://www.lunduniversity.lu.se/lucat/user/77ceb25d79c42f67fc388da34ccab08f
[2] https://www.ewora.org/news/159 (Accessed January 22, 2026)
[3] https://www.epistimi.org/blog/women-leading-european-universities-discuss-values-based-leadership
[5] Criado-Perez, C. (2019) Invisible Women. Abrams Press.
[7] Swedish Research Council, Good Research Practice 2024. Report available at: https://www.vr.se/english/analysis/reports/our-reports/2025-07-03-good-research-practice-2024.html (Accessed January 21, 2026).
[8] “Policy on Principles for the Use of Generative AI at Lund University” (2025) https://www.staff.lu.se/sites/staff.lu.se/files/2025-12/policy-on-principles-for-the-use-of-generative-AI-at-LU..pdf (Accessed February 8, 2026).