What every senior woman academic leader should know about artificial intelligence

Image credit: generated by P. Maurice using Copilot [1]

In this post on artificial intelligence (AI), we focus on the ‘Top 10’ things every senior woman leader in academia should know about AI, especially generative AI (GenAI). We hope this will help ensure its wise and ethical use.

By Patricia A. Maurice, Eva Åkesson, and Janet G. Hering

17 February 2026, DOI: 10.5281/zenodo.18526502

Research in artificial intelligence (AI) has existed for decades, and many academics have used AI for years, often unknowingly. For example, spellcheck methods have evolved from simple dictionary matching to complex Natural Language Processing (NLP) that takes context into account. Digital cameras on smartphones have long used AI to produce high-quality images. And anyone who has served as a reviewer or editor for scientific journals over the past few decades has likely grappled with the use of AI by ‘paper mills’ that churn out fake scientific papers.
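The spellcheck contrast above can be made concrete with a toy sketch (purely illustrative; no real spellchecker works this simply). Dictionary matching flags words absent from a word list, but it cannot catch a “real-word” error, such as ‘their’ used where ‘there’ belongs, which is exactly the gap that context-aware NLP methods fill:

```python
# Toy dictionary-matching spellchecker (illustrative only).
# The word list below is a made-up example, not a real lexicon.
DICTIONARY = {"the", "cat", "sat", "on", "mat", "their", "there"}

def dictionary_spellcheck(sentence):
    """Flag every word that does not appear in the dictionary."""
    return [w for w in sentence.lower().split() if w not in DICTIONARY]

# A non-word typo is caught, because "teh" is not in the word list:
print(dictionary_spellcheck("teh cat sat on the mat"))  # ['teh']

# But a real-word error sails through: "their" is a valid word,
# even though context calls for "there" here.
print(dictionary_spellcheck("the cat sat their"))  # []
```

Catching that second error requires modeling the surrounding words, which is what moved spellchecking from simple lookup tables into NLP.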

Generative AI, or GenAI, goes beyond more traditional AI, allowing users to create new content such as music, graphics, text, or even videos. GenAI tools rely on machine learning (ML) methods whereby a software ‘model’ is trained on existing large datasets to generate new content. Widespread use of GenAI exploded after the November 30, 2022 release of ChatGPT (GPT stands for Generative Pre-trained Transformer) by OpenAI [2]. Concerns about problems such as fake papers have soared since that date [3]. Almost immediately, academic publications and discussion forums such as The Chronicle of Higher Education, Times Higher Education, and The Conversation responded with an explosion of articles and opinion pieces on the effects of AI on higher education [4]. Opinions run the gamut from wild enthusiasm to dire warnings about the potential for GenAI to erode students’ research, writing, and critical thinking skills.

Senior women leaders must understand the potential benefits and risks of AI in order to respond quickly in this rapidly changing landscape. Here, we list the ‘Top 10’ things every senior woman in academia should know about AI today. This list is primarily cautionary, focused on promoting AI’s wise and ethical use on campus. Admittedly, the world of AI is changing so fast that our list may be out of date at any time.

The Top 10 things every senior woman leader should know about AI today

The list is organized into several themes: basics (1-4), environmental impacts (5), impacts on women and members of marginalized groups (6-9), and policy needs (10).

Some basic things to know about GenAI: 

1. The use of GenAI is exploding, and it is here to stay. A recent study by Graphite estimated that “in November 2024, the quantity of AI-generated articles being published on the web surpassed the quantity of human-written articles.” This study may even under-represent GenAI use, because GenAI is often incorporated as just one step in a larger process of drafting and editing [5]. Many corporations are betting on GenAI becoming increasingly important, although the climate may change as people recognize its limitations and drawbacks.

2. It can be hard or impossible to distinguish AI- versus non-AI-generated content. As described by Graphite, “There is a considerable disagreement about the accuracy of AI detection algorithms, and many argue that detecting AI is impossible, or at best, highly inaccurate” [5]. GenAI and the algorithms developed to detect it are already in an ‘arms race’. The inaccuracy of detection algorithms raises serious issues for academics intent on preventing students from cheating. Recent news articles have described students traumatized by unwarranted accusations that they used GenAI when they did not [6].

3. AI needs human oversight, shepherding, and fact-checking. This is highlighted by Merriam-Webster’s choice of ‘slop’ as its 2025 word of the year [7], defined as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” This derogatory term is a caution against hubris in the GenAI community.

Computer scientists and software engineers often stress that GenAI requires human expertise and oversight. A ‘wild west’ mentality in ‘hot’ fields like ML and GenAI can lead to misunderstandings, abuses, and potential backlash. In addition to technical experts, we need humanists who will address issues like ethical lapses and biases.

GenAI is commonly used to search large databases to answer queries and generate new material. Universities have a long tradition of knowledge generation, storage, and curation, and scholars are carefully trained to apply established research methods. Universities need to work with AI to ensure the resulting material is up-to-date, fact-checked, and presented in an unbiased, transparent, and ethical manner; only then will the outputs be trustworthy. Users must take ultimate responsibility for their work. Indeed, the official ChatGPT website includes the warning “ChatGPT can make mistakes. Check important info” directly under its ‘ask anything’ chat window/prompt box [2].

4. GenAI is changing the landscape for employment and the job market. A 2025 report by Stanford University researchers [8] found that in the US, “Early-career workers (ages 22-25) in AI-exposed occupations experienced 16% relative employment declines, controlling for firm-level shocks, while employment for experienced workers remained stable.” Examples of ‘AI-exposed occupations’ include software developers and customer service representatives. Companies that stop hiring and training young people will likely struggle to maintain a pipeline of experienced workers.

GenAI is not ‘free’ from an environmental standpoint:

5. AI, especially GenAI involving images or videos, requires enormous amounts of water, energy, and minerals [9].  At the 2024 World Economic Forum annual meeting in Davos, Switzerland, OpenAI CEO Sam Altman “warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope.” He highlighted the need for a breakthrough [10].

Energy is consumed not only by AI servers in their operation but also through supply-chain activities. As of 2025, “According to the International Energy Agency, 0.6% of global total carbon emissions comes from the data centres and data transmission networks due to their electricity consumption” [11]. AI-associated industry energy consumption could double by 2026, and researchers predict that data centers could consume 12% of total U.S. electricity by 2028 [12].

Large quantities of water are required to cool AI servers and for electricity generation.

Between 2024 and 2030, “the deployment of AI servers across the United States could generate an annual water footprint ranging from 731 to 1,125 million m3 … depending on the scale of expansion” [11]. Such staggering water consumption is already having consequences both for the environment and for water availability in parts of the American West and Midwest.

The U.S. Geological Survey provides data on the mineral resources needed for data centers, including information on the percent of each element the U.S. currently imports [9]. Considering the numbers (e.g., 80% of rare earth elements and 100% of tantalum, indium, germanium, and gallium), it’s no wonder that access to these resources is causing geopolitical tension.

GenAI may pose additional challenges for women and marginalized groups:

6. AI is likely to increase wealth and knowledge gaps between rich and poor and between women and men. According to a 2024 UNESCO report, there are 244 million fewer women than men using the internet worldwide, which means that fewer women have access to the potential benefits of internet-based AI [13]. A blog post at the Center for Global Development states that “While AI will, hopefully, boost macro-level productivity, it could widen income disparities within countries, benefiting highly skilled workers, displacing lower-skilled jobs in repetitive tasks, and concentrating wealth among those who control the technology. But the bigger, and far-less explored, concern is the inequality AI could amplify between nations” [14].

7. GenAI is being used to harass and threaten. In the 1990s, Patricia and her female colleagues, then young professors, were sometimes subjected to harassment such as finding photos of their faces pasted onto a Playboy centerfold. AI and the internet are making such harassment easier, more realistic, and even more brutal. In January 2026, Elon Musk’s AI tool Grok was banned in several countries for producing ‘nudified’ pictures of women and children. According to a report in The Guardian, “But the controversy may have been helpful for boosting public awareness of Grok. On Thursday, Musk shared a post claiming “popularity and real world usage are skyrocketing globally” – alongside a graph of “Grok” as a search term hitting a new high on Google Trends” [15].

As academics, we must have zero tolerance for AI-based harassment on campus. And as senior women leaders, we are likely to experience such harassment ourselves. According to the UN, “Women leaders, journalists, activists, and public figures face relentless gendered disinformation, deepfake attacks, and coordinated harassment campaigns designed to silence, shame, and push them out of public life” [16].

8. GenAI voraciously consumes data, which raises issues of bias and privacy. In her groundbreaking book Invisible Women: Data Bias in a World Designed for Men [17, 18], Caroline Criado Perez described how the lack of data on women, or of gender-disaggregated data, can have profound effects on women’s health and safety. As discussed in Patricia’s book Do Science Like a Girl, existing data biases can result in AI perpetuating a range of biases against women [19, 20]. Removal of data repositories focusing on women by the current US administration is likely to exacerbate these problems.

A recent Stanford Report article [21] described “widespread evidence of bias against older women on popular image and video sites and in the algorithms that power popular AI tools such as ChatGPT.” Such biases can have a huge impact on a woman’s life and career. For example, AI-based tools that employers use to review job applicants’ resumes “may give older men an advantage while putting older women and younger job seekers at a disadvantage. Where older women and younger people may have already experienced discrimination in hiring, the LLM not only reflects but actively reinforces this bias”. However, because AI companies do not disclose their ML training methods, it’s hard to know the exact source of biases [21] and thus how to prevent them.

Spurred on by GenAI, governments and corporations can be voracious collectors and users of large datasets, raising new privacy concerns. A 2025 Brookings Institution commentary noted that, “Perhaps the starkest example is in China, where AI enables surveillance on a widespread scale. Coupled with social media monitoring, cameras, and facial recognition, the technology enables authorities to track dissidents and government critics and identify their statements and locations” [22]. But such problems are not limited to authoritarian regimes. Many corporations and democratic governments are also on the prowl for our personal data.

9. AI development and programming are dominated by white and Asian men. According to a 2024 UNESCO report [23], only 30% of professionals in the AI sector are women, only 18% of C-Suite positions at AI startups are held by women, and only 16% of university faculty conducting AI research are women. The report argues that the “predominance of white males in the AI and machine learning fields is leading to gender bias in AI systems.” Other sources have noted, however, that China has become a leader in many aspects of GenAI [24], so the field is not exclusively white and male.

Finally, we need to consider policy:

10. AI is moving far faster than our laws and policies can keep pace. UNESCO has warned that, “Getting AI governance right is one of the most consequential challenges of our time, calling for mutual learning based on the lessons and good practices emerging from the different jurisdictions around the world” [25]. The AI industry generally resists regulation, which different governments are approaching in different ways. The 2024 report UNESCO Women for Ethical AI [23] also explored the need for more diverse contributors to AI policymaking.

Many universities, such as the University of Notre Dame (U.S.), where Patricia is an Emeritus Professor, have developed policies and guidelines for the use of AI, especially GenAI. These generally stress acting within broader existing university guidelines, behaving ethically, and taking responsibility for and acknowledging one’s use of AI [26]. We shall address policy in greater detail in an upcoming post, focused primarily on Eva’s experiences with AI policy at Lund University (Sweden).

Questions for further consideration

Here are some questions our readers might consider:

•     Do you agree with our ‘top 10’ list? If not, what do you think should have been included and/or excluded?

•     How do you, personally, use AI? Are you able to recognize all the ways AI is used in your life and career?

•     What are your greatest concerns about GenAI’s potential impacts on education and learning? How do you think GenAI might benefit education and learning?

References cited

[1] Figure generated by P Maurice using GenAI tools in her Microsoft Copilot subscription. 

[2] https://chatgpt.com/?openaicom_referred=true&model=auto (Accessed January 9, 2026)

[3] Liverpool, L. (2023) “AI intensifies fight against ‘paper mills’ that churn out fake research,” Nature 618, 222-223. (Note: open access)

[4] Jensen, L.X., Buhl, A., Sharma, A. et al. (2025) Generative AI and higher education: a review of claims from the first months of ChatGPT. High Educ 89, 1145–1161. https://doi.org/10.1007/s10734-024-01265-3 (Note: open access)

[5] https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans (Accessed December 31, 2025)

[6] https://www.rollingstone.com/culture/culture-features/student-accused-ai-cheating-turnitin-1234747351/ (Accessed December 31, 2025)

[7] https://www.merriam-webster.com/wordplay/word-of-the-year (Accessed January 7, 2026)

[8]  https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/ (Accessed January 2, 2026)

[9] https://www.usgs.gov/media/images/key-minerals-data-centers-infographic (Accessed January 22, 2026)

[10] Crawford, K. (2024) “Generative AI’s environmental costs are soaring — and mostly secret.” Nature 626, 693, https://doi.org/10.1038/d41586-024-00478-x (Note: open access)

[11] Xiao, T., Nerini, F.F., Matthews, H.D. et al. (2025) Environmental impact and net-zero pathways for sustainable artificial intelligence servers in the USA. Nat Sustain 8, 1541–1553. https://doi.org/10.1038/s41893-025-01681-y (Note: open access)

[12] Shehabi, A., Smith, S.J., Hubbard, A., Newkirk, A., Lei, N., Siddik, M.A.B., Holecek, B., Koomey, J., Masanet, E., Sartor, D. (2024) 2024 United States Data Center Energy Usage Report. Lawrence Berkeley National Laboratory, Berkeley, California. LBNL-2001637

[13] UNESCO (2024) Global education monitoring report 2024, gender report: technology on her terms, https://doi.org/10.54676/WVCF2762 (Accessed April 10, 2025)

[14] Schellekens, P. and Skilling, D. (2024) “Three Reasons Why AI May Widen Global Inequality”, Center for Global Development blog, https://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality (Accessed January 22, 2026).

[15] https://www.theguardian.com/technology/2026/jan/16/x-still-allowing-sexualised-images-grok-ai-nudification (Accessed January 17, 2026).

[16] https://www.ungeneva.org/en/news-media/news/2025/11/113136/ai-and-anonymity-fuel-surge-digital-violence-against-women (Accessed December 30, 2025)

[17] Criado-Perez, C. (2019). Invisible Women. Abrams Press.

[18] https://www.epistimi.org/blog/invisible-women-data-bias-in-a-world-designed-for-men-by-caroline-criado-perez

[19] Maurice, P.A. (2025) Do Science Like a Girl: How women in science are changing the world. Published electronically on amazon.com

[20] https://www.epistimi.org/blog/our-goal-as-women-should-not-be-to-do-science-as-well-as-any-man-but-to-do-it-better

[21] https://news.stanford.edu/stories/2025/10/ai-llms-age-bias-older-working-women-research (Accessed January 1, 2026)

[22] https://www.brookings.edu/articles/how-ai-can-enable-public-surveillance/ (Accessed January 2, 2026)

[23] UNESCO (2024) UNESCO Women for Ethical AI: outlook study on artificial intelligence and gender, available online at https://unesdoc.unesco.org/ark:/48223/pf0000391719 (Accessed January 1, 2026)

[24] https://www.cnbc.com/2024/07/10/china-is-global-leader-in-genai-experimentation-but-lags-us-in-implementation.html (Accessed January 1, 2026)

[25] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics (Accessed January 2, 2026)

[26] https://ai.nd.edu/ai-in-action/policies-and-guidelines/ (Accessed January 1, 2026)
