
Critical AI Literacy for Sustainable Scholarly Publishing

Chelsea Humphries

As a librarian, I am on the frontlines of scholarship. I support the development of scholars by assisting and instructing undergraduate and graduate students at my mid-sized Canadian university, I support faculty research and conduct my own, and I disseminate scholarly outputs by building our collections and performing outreach to promote them. Additionally, I work as a co-editor for a journal in the library and information sciences (LIS). Scholarly communication underpins every aspect of my various roles, and I see how generative AI is impacting its sustainability every day.

Generative text tools in particular, also known as Large Language Models (LLMs), are powerful in their ability to synthesize vast amounts of data in natural language. They are emerging in seemingly every digital product at our fingertips, and while they may open creative new avenues for research and education, they require a critical literacy that explicitly makes room for an informed stance of refusal and resistance before use, to avoid harms to scholarship, the people who perform it, and the world in which this work is done. This is an uncommon perspective, but one that is essential to ensuring sustainable scholarly publishing as these tools continue to emerge in, proliferate within, and impact academia.

There are many misconceptions and misapplications surrounding these tools, which I encounter every day in the library and in the classroom. For example, hallucination, or these tools’ tendency to fabricate information (often in overconfident language), surprises many scholars, no matter their stage of career or publishing goals; frequently, chatting with a generative text tool is incorrectly seen as equivalent to running a search in a search engine.1 This misunderstanding spills into a variety of areas. The assumption that these tools are all-knowing conversational search engines may undermine students’ abilities to build their critical thinking skills and practice the research and synthesis necessary to become credible scholars (and active, critical citizens of the world). Overreliance upon and cognitive offloading to AI tools are already being linked to diminished critical thinking skills in users (Gerlich; Kosmyna et al.). This is compounded as AI-generated summaries and “assistants” pervade digital products, providing a shortcut and possible alternative to engaging with challenging material directly.

These generated alternatives may also come to devalue the hard work of scholars who are creating new knowledge in their disciplines. That “information has value” is one of the core precepts of the Association of College and Research Libraries’ (ACRL) Framework for Information Literacy for Higher Education (2015), and librarians are particularly well-poised to discuss this devaluation and fight against it. Alongside generated text summaries, machine-generated texts are now entering publishers’ frontlists; these range from entirely generated texts with human editorial oversight2 to texts that blend human-written material with generated literature reviews (“Springer Nature”). It is easy to use generative tools to brainstorm, draft, edit, and translate text, and this has implications for editorial work and workers. But because these tools are prone to error3,4 and draw on natural resources at unsustainable rates, the expertise of scholars and of those in scholarly publishing should not be devalued but valued more highly, so that we can navigate generative AI use carefully and thoughtfully, deploying it strategically in a manner befitting its manifold and dramatic impacts upon the world. Librarians in scholarly publishing can and should advocate for themselves and for the other experts who create information.

The environmental impact of generative AI tools is staggering. We have known, nearly since their inception, that the energy and natural resource demands of generative AI data centres are high (Meredith). In particular, the increasing water footprint of generative AI threatens clean water supplies (Pengfei et al.); its demands on power are likewise unsustainable, with forecasts predicting that global data centre electricity consumption will more than double by 2030, exceeding the power demands of the entire country of Japan (International Energy Agency). This is a direct threat to the ecological sustainability of our planet and must be handled carefully, yet it often goes unconsidered or is obscured as users engage with seemingly incorporeal chats. Libraries are increasingly prioritizing sustainable practices, with the Association of Research Libraries (ARL) stating that it “believes in fostering a research and knowledge ecosystem that is financially, technologically, and ecologically sustainable,” and the American Library Association (ALA) “recogniz[ing] sustainability as a core value of the profession, highlighting libraries’ vital role in fostering a sustainable future and inspiring solutions for global challenges like climate change, social equity, and economic viability” (Tribelhorn). Voicing concerns regarding new, unsustainable, and pervasive technology, and offering informed refusal as a legitimate response, is not just an ethical consideration; it is a requirement of our profession.

To date, the AI competencies and AI literacy frameworks being developed in LIS seek to promote meaningful engagement with AI. There is very little mention of intentional disengagement. AI literacy is defined by one authority as “the ability to understand, use, and think critically about AI technologies and their impact on society, ethics, and everyday life” (Lo 120). This corresponds with a point made in the most recent draft of the ACRL’s AI Competencies for Academic Library Workers document: “Critical evaluation fosters healthy skepticism and ongoing assessment of AI-generated outputs, benefits, and challenges” (Assn. of College and Research Libraries, AI Competencies). These are admirable goals; however, they fall short of explicitly recognizing that engagement and “healthy skepticism” can also look like conscious and informed resistance. “Use” is not required. Like Leo S. Lo’s framework, the Canadian Association of Research Libraries’ (CARL’s) strategic plan includes a focus on AI, but it too falls short of being non-prescriptive, aiming to foster “the understanding and integration of generative artificial intelligence into support for research, teaching and learning and into library practices” (Canadian Assn. of Research Libraries). AI literacy can include, but should not presuppose, integration and use. This point is emerging in LIS scholarship (see Slater), but it is not yet at the fore.

Teaching generative AI literacy, rooted in human-centred approaches to AI, is central to the future of sustainable scholarly publishing. Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh’s Edinburgh Futures Institute, defines human-centred AI systems as “designed by people, for people, and with people, in such a way that the ultimate design aim is the promotion of human flourishing” (13). As a librarian with robust instruction duties, I lead conversations about generative AI in my guest lectures on information literacy to support intellectual and scholarly flourishing at my institution. I pull students and faculty into these conversations as equals, exploring their thoughts about these tools, why they do and do not choose to use them, how we understand these tools to work, and the implications of their presence in scholarly environments, while situating our affective responses (which range from effusive support to frustrated detraction). I regularly discuss who owns these tools and profits from them; intellectual property, copyright, and privacy in relation to training data, inputs, and outputs; and environmental impacts and what they mean for populations around the world. Frequently, I structure these conversations as a game of true and false, asking students and instructors questions about generative AI using anonymous live polling tools. Anonymity creates a low-stakes environment in which all thoughts, opinions, and questions can be voiced without personal judgement, and I have found that robust conversation usually ensues, both within the anonymous polling tool (I often use Mentimeter’s Q&A feature, and follow-up comments and questions are common as new topics are discussed) and aloud in the classroom.
I have also run similar activities and facilitated similar conversations among library staff and various faculty groups, encouraging curiosity and critical, practical evaluation methods for generative AI tool use (see, for example, Hervieux and Wheatley) that are impartial to specific tools and do not presuppose their value. To this end, I have also created a LibGuide in collaboration with my colleagues in the library that does not promote or discuss specific generative AI tools, but rather provides frameworks and guidelines for thinking about AI tools and evaluating them for use. My goal is agnostic and pragmatic: I aim to help scholars understand these tools, evaluate them, and make informed and defensible decisions about their use or resistance to their use in relation to their scholarly goals, encouraging transparency and human participation at every step along the way.

There is often a sense of inevitability surrounding generative AI: an assumption that these tools should and will become embedded in every aspect of our lives, and that anyone who chooses otherwise is a Luddite burying their head in the sand. This sense of inevitability can couple with a fear that not using these tools will render scholars and their work obsolete, which may in turn increase workloads as scholars attempt to master generative AI tools to stay relevant. Because this is frequently undertaken with little institutional guidance or support, the urgency for mastery is itself an unsustainable approach to professional development and learning; becoming AI literate and meaningfully engaging with generative AI tools does not necessitate mastery and use. The assumption of inevitability is rooted in capitalistic and neoliberal ways of thinking that emerge from Big Tech’s drive for profit. It is no coincidence that Google, Meta, Microsoft, Apple, and others are embedding generative AI into all of their products; they are not doing so for our benefit or to support human flourishing (I, for one, definitely did not ask for AI summaries to be appended to every search),5 but to increase their profits as we literally buy in to the narrative that we must adopt these tools as they emerge, regardless of their actual value. I believe that comprehensive and critical AI literacy should provide equal opportunity for informed use and informed resistance. Generative AI can be useful, but it must be human-centred. We must ask ourselves as we approach it: Is this tool something that we actually need? Will it help us solve the problems that we are working on as scholars? Does it further our goals and support our flourishing? Or are we allowing it, and its creators, to shape and define new problems for us? Who is in charge of our scholarly futures: us, or the technology that has been thrust upon us?
It is too late to ignore generative AI, but it is not too late to address our understanding of it and make informed decisions about its use and usefulness in scholarship and scholarly communications.

I encourage readers to explore these topics, to critically investigate their own scholarly goals, and to engage others in conversation about generative AI. Chat with librarians, researchers, instructors, editors, authors, and students. We should do what we do best as scholars: be curious, critical, and collaborative.

We cannot wait for “perfect” circumstances within which to begin these conversations; the pace of institutional policy development is slower than that of generative AI tools. While we should contribute to these larger institutional discussions wherever possible, we cannot bide our time. Generative AI is all around us, and we must meaningfully engage and disengage with it now or risk having the future of our scholarly communications work decided for us as generative AI continues to be more deeply entangled in our publishing processes and technologies.

Works Cited

Aksenfeld, Rita. “Springer Nature to Retract Machine Learning Book following Retraction Watch Coverage.” Retraction Watch. 16 July 2025, retractionwatch.com/2025/07/16/springer-nature-to-retract-machine-learning-book-following-retraction-watch-coverage/.

Association of College and Research Libraries. Framework for Information Literacy for Higher Education. American Library Association, 2015, www.ala.org/acrl/standards/ilframework.

Association of College and Research Libraries. AI Competencies for Academic Library Workers. 5 Mar. 2025, www.ala.org/sites/default/files/2025-03/AI_Competencies_Draft.pdf.

Baikady, Rajendra. Advancing Global Social Work: A Machine-Generated Literature Overview. Springer, 2025, doi.org/10.1007/978-981-96-1828-6.

Canadian Association of Research Libraries. CARL’s Strategic Focus, 2025-2028. 2025, www.carl-abrc.ca/about-carl/carls-strategic-focus-2025-2028/.

Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, vol. 15, no. 1, 2025, article 6, doi.org/10.3390/soc15010006.

Gireesan, K., and Jos Chathukulam. Democracy, Leadership and Governance – Application of Artificial Intelligence: A Machine-Generated Overview. Springer, 2024, doi.org/10.1007/978-981-99-7735-2.

Hervieux, Sandy, and Amanda Wheatley. “The ROBOT Test [Evaluation Tool].” The LibrAIry, 2020, thelibrairy.wordpress.com/2020/03/11/the-robot-test.

International Energy Agency. Energy and AI. 2025, www.iea.org/reports/energy-and-ai.

Khine, Myint Swe. Motivation Science: A Machine-Generated Literature Overview. Springer, 2024, doi.org/10.1007/978-981-97-9247-4.

Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv, 2025, doi.org/10.48550/arXiv.2506.08872.

Lo, Leo S. “AI Literacy: A Guide for Academic Libraries.” College and Research Libraries News, vol. 86, no. 3, 2025, pp. 120-22, crln.acrl.org/index.php/crlnews/article/view/26704/34625.

Meredith, Sam. “A ‘Thirsty’ Generative AI Boom Poses a Growing Problem for Big Tech.” CNBC, 6 Dec. 2023, www.cnbc.com/2023/12/06/water-why-a-thirsty-generative-ai-boom-poses-a-problem-for-big-tech.html.

Pengfei, Li, et al. “Making AI Less ‘Thirsty’: Uncovering and Addressing the Secret Water Footprint of AI Models.” arXiv, 2025, arxiv.org/abs/2304.03271.

Slater, Kailyn “Kay.” “Against AI: Critical Refusal in the Library.” Library Trends, vol. 73, no. 4, 2025, pp. 588-608, muse.jhu.edu/pub/1/article/968497.

Snoswell, Aaron J., et al. “A Weird Phrase is Plaguing Scientific Papers – and We Traced it Back to a Glitch in AI Training Data.” The Conversation, 15 Apr. 2025, theconversation.com/a-weird-phrase-is-plaguing-scientific-papers-and-we-traced-it-back-to-a-glitch-in-ai-training-data-254463.

“Springer Nature Advances its Machine-Generated Tools and Offers a New Book Format with AI-Based Literature Overviews.” Springer Nature, 4 May 2021, group.springernature.com/gp/group/media/press-releases/advances-its-machine-generated-tools-with-ai-based-lit-overviews/19129322.

Tribelhorn, Sarah. “The Vital Role of Sustainability in Academic Libraries.” ARLViews, 15 Aug. 2025, www.arl.org/blog/the-vital-role-of-sustainability-in-academic-libraries/.

Udaya, Muthyala, and Chada Ramamuni Reddy. Vocabulary, Corpus and Language Teaching: A Machine-Generated Overview. Springer, 2024, doi.org/10.1007/978-3-031-45986-3.

Vallor, Shannon. “Defining Human-Centred AI: An Interview with Shannon Vallor.” Human-Centred AI: A Multidisciplinary Perspective for Policy-Makers, Auditors, and Users, edited by Catherine Régis et al., Chapman and Hall/CRC, 2024, pp. 13-20, doi.org/10.1201/9781003320791.

Notes


  1. To generalize: as language-pattern machines, these tools are not inherently designed for information retrieval; they are designed to produce natural language according to learned probabilities. They will give you the most likely response to your query, as determined by their algorithms, training data, and other design features. The quality of these outputs must be checked by those with relevant expertise.↩︎

  2. See, for example, Gireesan and Chathukulam; Khine; Udaya and Reddy; Baikady.↩︎

  3. For example, the nonsense phrase “vegetative electron microscopy” has been appearing in an increasing number of scientific publications, and it is attributed to erroneous generative AI chat suggestions (Snoswell et al.).↩︎

  4. A recent title, Mastering Machine Learning: From Basics to Advanced, was retracted by Springer Nature after generated citations in the text were discovered to be nonexistent (Aksenfeld).↩︎

  5. For those interested in avoiding Google Gemini’s generated responses, at the time of writing, running a search and selecting “Web” or another filter at the top of the search’s landing page (instead of searching “All”) should eliminate the summary.↩︎