Introduction
Academia is undergoing a profound transformation driven by rapid technological advances, intensified global collaboration, the spread of for-profit scholarship, and the rise of AI, which we define as Algorithmic Intelligence (Gordo and Gray). Maintaining research integrity has become more important than ever in the face of new challenges and opportunities. The peer review processes (PRP) central to ensuring such integrity are under growing pressure due to the exponential increase in submissions. This has resulted in reviewer overload and significant delays in disseminating scholarly work. In response, Generative AI (GenAI) technologies are being explored as tools to support and partially automate several dimensions of PRP.
In an effort to enhance the peer review processes, publishers have introduced automated screening tools that allow editors to accelerate manuscript evaluation, verify compliance with journal policies, and identify suitable reviewers based on their expertise and previous performance. At the other end of the publishing chain, academic authors are increasingly employing what we label AI-assisted technologies, particularly generative conversational models such as ChatGPT and large language models (LLMs) more broadly, during the manuscript preparation phase (Dergaa et al. 616). Although these tools offer the potential to enhance and accelerate academic writing, their use also raises pressing ethical concerns around quality, authorship, authenticity, credibility, and accountability, with additional legal implications regarding copyright (Giray et al. 41).
There is also the paradox of speed. Accelerationism is popular across the political spectrum, but speeding up processes does not solve unsustainable dynamics. Yet accelerating knowledge acquisition drives scholarly publishing, as do proliferating crises. Sustainability, understood as maintaining an acceptable homeostasis, assumes that relentless material expansion cannot continue. And yet a balance must be struck between the dangers of a new tool such as AI and what it can offer. The only way to really find that balance is to act.
At our journal Teknokultura we are exploring how to use AI as part of our human-controlled peer review processes, working with the new wave of AI based on a generative process instead of just rejecting it or submitting to it. PRP are widely regarded as crucial for establishing research quality and scholarly legitimacy, while also playing a significant role in distributing academic prestige and recognition (Tennant and Ross-Hellauer 1, 12).
In this paper, we address emerging issues, challenges, and ethical considerations surrounding the integration of AI into PRP. After surveying the main ethical and related concerns raised in the literature, we focus on a notable blind spot: the absence, or at best marginal presence, of discussions on sustainability and the environmental impact of GenAI within the broader peer review ecosystem.
Our definition of sustainable publishing resonates with Antoine Fauchié’s notion of permapublishing, also featured in this special issue on sustainable publishing. Drawing from permacomputing, Fauchié emphasizes durability, sobriety, and long-term viability in editorial infrastructures. His call to decouple publishing workflows from extractive infrastructures, to empower editors and researchers through minimalist, self-hosted tools, and to depreciate resource-intensive systems, aligns closely with our argument for a slower, more environmentally grounded editorial culture. As we will show, addressing sustainability in PRP and publishing is not just a technical problem, it is political and epistemological as well.
Generative AI in Peer Review Processes: Functions, Risks and Emerging Concerns
The integration of GenAI systems into scholarly workflows is reshaping both academic communication and PRP. Louie Giray’s analysis of the views of members of the 170,000-person strong Facebook group Reviewer 2 Must Be Stopped! is an excellent overview of both the promises and perils of automating more of academic PRP, while also showing that the current system is not fit for purpose (146). Since 2023, cases of peer reviews clumsily produced with AI have multiplied. A study of reviews submitted to AI conferences in 2023 and 2024 estimates that up to 16.9% of them were “substantially modified by LLMs” (Liang et al. 1). The same is true of paper writing, and the numbers are no doubt still growing, in almost every discipline, not just AI research.
Although PRP have historically adapted to technological change over their 300-year history (Drozdz and Ladomery 1; Tennant et al. 5), the rise of GenAI introduces a new set of opportunities and challenges, as numerous opinion articles point out (Salah et al.; Schintler et al.; Sabet et al.). These include new ethical dilemmas (Schintler et al.; Seghier), especially around citation accuracy (Mehregan) and the need for new policies that implement transparency (Mollaki).
One of AI’s main benefits in PRP is improving their efficiency. A qualitative analysis of reviews of one paper, comparing human reviewers to AI peer review, found excellent quality with less time committed (Biswas et al.). Another study trained an AI peer review system on 3,300 papers and then compared its reviewing to humans on a new paper. Despite high correlations between machine and human evaluations, the research team had reservations about the quality of machine reviews (Checco et al.). From initial manuscript screening to review report drafting, AI tools are already streamlining various stages, according to a team of researchers in the Philippines who used a strategic planning tool called SWOT (Strengths, Weaknesses, Opportunities, and Threats) (Giray et al.).
Some aspects of quality assessment—such as readability checks or formatting—can reasonably be assisted or automated. AI might reduce desk rejections by flagging superficial issues (e.g., layout, graphic quality) and providing early feedback to authors without engaging reviewers unnecessarily. This could help mitigate “first impression” bias and allow reviewers to focus on scientific content (Checco et al.). AI also supports routine editorial tasks like plagiarism detection, according to a 2022 survey of 685 peer reviewers (Calamur and Ghosh). Two qualitative analyses of tools and reports agreed (Kousha and Thelwall; Jiffriya et al.).
Generative AI can assist with paper screening, integrity checks, and issue flagging, thus facilitating more focused and constructive feedback from human reviewers (Miao et al.). Mike Thelwall trained an AI model on 51 of his previously published articles and found its evaluations surprisingly aligned with his own, expressing a generally positive impression of its judgment (9). AI may also shorten review timelines (Mrowinski et al.; Farber), improve tone and clarity in reviewer comments (Verharen) and reduce workload by matching reviewers based on expertise (Kousha and Thelwall). GenAI and other algorithmic decision making, in this sense, promise to alleviate bottlenecks in editorial workflows (Björk and Solomon).
More recent developments push even further. Advances in LLMs suggest AI could support—or even replace—some complex human writing tasks. A team led by Lu Sun introduced MetaWriter, trained on five years of open peer review data, which highlights “common topics in the original peer reviews, extracts key points by each reviewer, and on request, provides a preliminary draft of a meta-review that can be further edited” (1). Similarly, as Sun et al. note elsewhere, another tool, ReviewFlow, “scaffolds novices using contextual reflection cues, in-situ knowledge support, and notes-to-outline synthesis” (16). These tools can also enhance clarity and coherence in review reports (Mehta et al.; Mollaki).
However, despite these advantages, integrating GenAI into peer review raises significant concerns. Issues around bias and transparency persist (Calamur and Ghosh; Nath et al.; García). Such bias may stem from initial impressions, theoretical or ideological orientations, language choices, social identity markers, or institutional prestige (Checco et al.). Laurie A. Schintler et al. warn that while AI may “alleviate some of the problems that confront peer review today, such as long decision and publication delays”, it also raises ethical concerns relating “to matters related to plagiarism and authorship in academic journal publishing” (2). Additional risks include breaches of authorship integrity, threats to confidentiality, and a general lowering of editorial standards (Chauhan and Currie; Mensah). The speed enabled by AI tools might shortcut rigorous peer scrutiny, resulting in weaker publications (Carobene et al.). Furthermore, GenAI systems are prone to fabricating content or references, which undermines trust in the review process (Giray et al.; Khalifa and Ibrahim). There is growing evidence that more advanced models tend to produce more errors—such as hallucinations—than earlier ones, and developers still cannot fully explain why (Metz and Weise). In scholarship, accuracy remains a fundamental ethical issue.
Additional ethical risks arise when reviewers rely too heavily on AI, potentially diminishing critical judgment. There is growing concern that human reviewers might be replaced, not just assisted. Overreliance could blur the boundaries between human and machine authorship, threatening academic originality and credibility. As Tiffany I. Leung and collaborators note in their editorial text “Authors must also be cautious of the potential for unintentional plagiarism […] or overt AI plagiarism (the authors passing off or taking credit for the production of statements that were generated by AI). Either form of plagiarism is deemed not acceptable” (par. 8—emphases in original). Because algorithms mirror the biases of their training data, they can perpetuate historical or sociocultural distortions (Limongi). This could lead to automation bias, loss of reviewer skill, and an unintended narrowing of what gets published, reducing epistemic diversity (Giray et al.). Other major concerns include the potential homogenization of academic perspectives through excessive reliance on AI and the lack of robust tools to detect AI-generated or modified content in manuscripts and peer reviews. These issues remain central in the current debates surrounding the integration of AI into the editorial process.
Putting Machines in a Human Loop: Oversight, Integrity and Responsibility in Generative AI Peer Review
PRP constitute the backbone of academic research, ensuring that scholarly work is evaluated by experts before it is published (Houghton). Traditionally, peer reviewers engage deeply with manuscripts to identify flaws and provide meaningful feedback. Now, however, with AI increasingly taking over some of these tasks, reservations have emerged about the possibility that reviewers may overly depend on AI outputs without adequately verifying them (Giray et al.). There are already multiple cases of individuals, some with considerable expertise, relying on flawed AI-generated content without appropriate scrutiny. It is not just major newspapers recommending summer-reading books that do not exist (Blair); it is also a major U.S. Department of Health report on children’s health full of nonexistent science (Mitchell), and a Stanford University expert, paid $600 an hour, filing official court documents with hallucinated cases for the State of Minnesota in a trial about the constitutionality of legislation on deepfakes and elections (Gray, Deepfakes, par. 18)! This is just the beginning.
While AI holds promise, it is clear that human reviewers remain essential for preserving authenticity and intellectual integrity. For instance, Carobene et al. contend that “we must approach the integration of AI in study design with discernment, ensuring that it serves as an adjunct to, rather than a replacement for, the nuanced and innovative contributions of human intellect” (842). From this perspective, academic publishing must center human flourishing, prioritizing equitable and meaningful knowledge creation and dissemination over purely technical optimization. While AI might streamline certain editorial logistics, it cannot replicate the critical judgment and interpretive depth that reviewers bring to research assessment (Mehta et al.). This is why it is not enough just to have a human in the loop (or, as the military says, “man in the loop”). The “loop,” which really means the system, has to be fundamentally human, and machines should be integrated at places where they can be helpful, but always be checked and controlled by people. If we had a scholarship system that mainly consisted of machines and humans were just looking for errors and editing what is fundamentally a machine product, we would have lost.
Except for a few proposals advocating fully automated or symmetrical hybrid models (Irfanullah; Weber; Bauchner and Rivara), most of the specialized literature supports the use of GenAI as a supplement to, not a substitute for, human oversight in peer review. Accordingly, AI should be used to support, but not replace, the expert judgment that ensures contextual depth and epistemic responsibility in evaluation (Mollaki; Perry). In short, while GenAI can offer meaningful assistance, human reviewers remain indispensable for maintaining academic rigour. The prevailing consensus calls for human-centered oversight of AI-enhanced peer review, treating AI as a tool or assistant, not a substitute, for human evaluators (Giray et al.; Sabet et al.; Carobene et al.; Seghier).
From this perspective, AI tools should be limited to narrowly defined tasks in which they offer clear benefits: automating manuscript triage, flagging technical or ethical issues (e.g., plagiarism, data anomalies), suggesting reviewer matches, summarizing manuscript content, or assisting in drafting and refining reviewer feedback (Schintler et al.; Seghier; Giray; Carobene et al.; Checco et al.; Thelwall; Miao et al.). Furthermore, these tools must be subject to rigorous testing for accuracy and reliability (Kankanhalli). Regular audits and performance assessments are necessary to ensure compliance with ethical standards and to detect potential biases or harm (Mensah; Addy et al.). Crucially, this approach also emphasizes the need to develop ethical frameworks and regulatory policies that are transparent, detailed, and responsive to the rapidly evolving nature of AI technologies in peer review (Mollaki; García; Ling and Yan).
From Authorship to Accountability: Ethical Frontiers in AI Peer Review
As AI continues to advance and permeate various domains of society, addressing its ethical implications becomes increasingly urgent. In the context of GenAI-assisted peer review, issues of accountability and transparency lie at the heart of current debates (Kousha and Thelwall; Chen et al.; Limongi). It is essential to critically examine the legitimacy of AI-assisted peer review, assessing its potential benefits and pitfalls in light of broader epistemic, social, and ethical concerns (Schintler et al.). Accountability is fundamental to ensuring that individuals and institutions are responsible for the ethical use of AI. This involves creating clear guidelines for AI in academic publishing, as well as mechanisms for monitoring and enforcement, which is not only a question of punishment but also of restitution and remediation, where possible. AI agents provide new tools for monitoring and quality control, but they also introduce novel uncertainties around responsibility and governance. As Ricardo Limongi notes, “the role of artificial intelligence agents offers new tools for monitoring and quality assurance but raises additional questions about accountability and control” (3) (see also Stahl and Eke).
As concerns around legitimacy intensify, an increasing number of publication guidelines now treat holding the relevant humans accountable as a core criterion for authorship (Schintler et al.). The notion of AI authorship does not make the responsibility of actual people disappear. If an AI tool produces flawed or inaccurate content, how can it be held responsible? And if human authors do not fully understand how the system generated the result, can they? Yes, they can. You do not need to know how a tool or weapon works to misuse it. To hold otherwise threatens foundational scientific values like transparency and epistemic responsibility (Teixeira da Silva). Nath et al. assert “[Y]et LLMs cannot take responsibility for their errors and transgressions, nor are they ever accountable for the integrity of a given work” (11). It is a question of agency.
Initially, most scientific journals and editors firmly opposed the integration of AI into scholarly production (Stokel-Walker; Balat and Bahsi). For instance, the International Conference on Machine Learning (ICML) banned submissions with AI-generated scientific content (Vincent). However, this resistance was short-lived. Within two years, major academic publishers had shifted toward permitting the inclusion of AI-generated text and visuals, provided that such usage is clearly disclosed and explained (Grove). Current guidelines now recommend that authors and reviewers disclose chatbot contributions and their extent (Miao et al.; Zielinski et al.).
As Vasiliki Mollaki notes, publishers must urgently develop and enforce policies on the ethical use of GenAI (248). These should be transparent, detailed, and actionable, especially for cases where a reviewer uses AI tools without proper disclosure. When such tools are introduced into reviewing processes, it becomes critical to enable transparency and accountability about automated decision processes, offering explanations and guidelines for their appropriate use. Furthermore, given the fast-evolving nature of this technology, existing standards require constant reevaluation and adaptation. Journals and academic institutions must therefore define clear criteria for when and how AI should be disclosed by both authors and reviewers (Limongi; Raman). For instance, authors should report AI contributions to text, visuals, or analysis, and reviewers should indicate if they used AI to edit, phrase, or draft their evaluations. Disclosing technical limitations is also crucial for building trust (Giray et al.; Seghier; Miao et al.). Absent consistent regulation, GenAI will become yet another source of criticism in an already contested peer review system, known for delays, inefficiency, and perceived bias, as well as its limited capacity to prevent fraud and misconduct (Castelo-Branco; Manchikanti et al.; Tennant et al.).
AI’s role in PRP also prompts higher-order philosophical questions: can a machine truly reason, evaluate, or exercise judgment like a human researcher? Laurie A. Schintler et al. argue that listing AI as authors or reviewers threatens accountability and responsibility in publishing, an issue aligned with the dominant human-centred perspective on ethical PRP (11).
Nevertheless, there are alternative viewpoints that call for collaboration, integration, and even partial delegation of reviewer tasks to AI. These ideas have historical roots in the exploration of future avenues for peer review presented nearly a decade ago in the foundational paper “A multi-disciplinary perspective on emergent and future innovations in peer review” by Tennant and a large team of co-authors (35). Although it does not explicitly address artificial intelligence or automation, Susan Haack’s work on peer review provides a valuable framework for reflecting on how emerging technologies might help mitigate some of the problems she identifies—for instance, by reducing biases or broadening the diversity of voices in the editorial process (800). More recent contributions, such as those by Howard Bauchner and Frederick P. Rivara, frame AI as an unavoidable horizon for scholarly publishing, arguing that “rather than avoiding AI, editors should embrace it” (2). Likewise, Ron Weber introduces the term “RoboReviewer” to propose that an AI system could be developed to “undertake high-quality reviews of papers” (87). Today, some experts predict that human-led PRP may soon be replaced altogether. Driven by time pressures and productivity/profitability demands, the academic publishing system is starting to entertain models of fully automated, AI-driven peer review. This prospect exemplifies a wider faith in technological solutionism, the notion that technology can “solve” complex societal problems by collapsing them into mere engineering challenges.
Critics argue, however, that the current peer review system is already inequitable and unsustainable (Irfanullah; Wynne and Kolachalama; Parrish). Haseeb Irfanullah highlights the structural exploitation of unpaid, invisible academic labor by profit-driven publishers (par. 6). PRP, he points out, rely heavily on professional goodwill and “good karma”, and disproportionately burden scholars in under-resourced regions, exacerbating burnout in the process (par. 4). The problem has been further intensified by the promotion of open access by many public administrations, which have been co-opted by the profit-driven logics of major publishers. In turn, those publishers have leveraged the interactions between digitalization and open access to anchor “platformization business models” within the global movement to provide society with access to scientific publications—and therefore, to knowledge. Transformative agreements often perpetuate this model, masking the political economy underlying PRP (Tabarés 154). These critiques point toward the need to imagine radically different infrastructures.
Megan DeWitt, in this special issue, explores how The Otter, a publication of the Network in Canadian History and Environment, challenges dominant academic publishing norms. With an ethos centered on care, accessibility, and community, the platform embraces contributions from academics, students, independent researchers, and activists alike. Its editorial practices intentionally move away from hierarchical gatekeeping, encouraging relational writing, reflexivity, and interdisciplinary collaboration. In this way, The Otter models what an environmentally and socially sustainable approach to knowledge circulation might look like, one that redistributes authority and redefines academic legitimacy. In a complementary register, the FEELed Lab’s contribution, “Using Research Blogs,” makes the case for blog-based publication as a site of ethical resistance. Emphasizing process over product and relationality over prestige, the authors frame blogging as a way to reorient research around accessibility, care, and collective meaning-making, particularly within student and community-engaged contexts.
This imbalance between contribution and reward undermines both the fairness and functionality of the review process. As a response, Irfanullah proposes a fully AI-automated peer review system, not just as a technical solution, but as an ethical corrective to a broken publishing model that extracts value from the many while rewarding only a few. But that is wildly optimistic about how well such systems can perform. As many have shown, especially in critiques of military AI (Gray, AI, 126), GenAI or any other AI is just not up to doing any important tasks on its own. There must be a better way.
Beyond Technological Solutionism and Liberal Generative AI Frameworks: Revisiting the Open Science Movement
Scholarship depends on openness, but pursuing profits and power leads to secrecy. The creation of scientific journals and professional associations with enforced standards of transparency shaped the science, and other scholarship, of today. Still, military and proprietary (for profit) research dominates many emerging technologies, such as AI. Today’s Open Science Movement (OSM) has articulated a set of interrelated principles including open scientific knowledge, open dialogue with other knowledge systems, open engagement of societal actors, and an open science infrastructure (Wakiaik and Betz; Gong).
Although existing norms in scientific research aim to preserve its ethics, integrity and quality, these may fall short in addressing the unique challenges posed by GenAI. For example, traditional data governance protocols do not adequately handle the scale of big data processed by AI, especially when dealing with personal or sensitive data (Chen et al.). A machine’s capacity to learn, infer, and generate knowledge challenges long-standing ideas of authorship and credibility. As such, research integrity, traditionally rooted in accuracy, honesty, and transparency, faces a new test. Ethics is no longer a complement to AI research but an academic necessity (Limongi).
The integration of AI into content generation introduces complex dilemmas around authorship and contribution (Koo). Differentiating human intellectual labor from AI-generated material raises questions of intellectual property, especially when multiple users reuse AI outputs for publication or commercial ends. Moreover, AI-generated content may closely resemble existing works, leading to potential copyright disputes. As GenAI expands into domains such as the PRP, it becomes imperative to establish clear and coherent guidelines that address the multifaceted legal and ethical aspects of intellectual property, fair use, and attribution (Chen et al.). Here OSM becomes a counterpoint: with its emphasis on collective stewardship of knowledge, open data governance, and equitable attribution, OSM provides a framework to rethink how intellectual property and reuse should function in the age of AI. At the same time, OSM faces its own tensions when openness collides with privacy, consent, and the risks of extractive big data practices.
Pathways toward the ethical integration of AI in research will require enhanced collaboration, transparency and accountability. According to Giray, effective use of AI in PRP should promote transparency, uphold ethical standards, protect privacy, ensure quality assurance, and support ongoing reviewer development (150). Many of these goals resonate with the foundational principles of the OSM, including the adoption of open-source GenAI models as an ethical avenue for scientific progress (González-Esteban and Patrici).
In this context, the OSM has, over time, established a foundational framework—both in terms of infrastructure and terminology—that supports the responsible development and application of GenAI in research. The adoption of GenAI models that align with open science values, such as the transparent disclosure of training data sources, would represent a meaningful step toward encouraging model creators to engage more substantively with open science principles (Hosseini et al.). Nonetheless, the prevalence of commercial models—often closed-source and resource-intensive developments—poses significant challenges to the incorporation of these tools into open science workflows. Moreover, the phenomenon of “openwashing” complicates the genuine implementation of openness, as many AI models are marketed as open but remain functionally closed, withholding critical components such as datasets, model weights, or documentation.
The OSM originated in the 17th century alongside the emergence of scientific journals, when public demand for access to knowledge compelled scientific communities to share resources (Machado). At its core, the movement emerged from a conflict between researchers seeking collaborative access to knowledge and institutions seeking profit through control of access (David). In terms of research assessment, OSM promotes open identities, open reports and open participation as key alternatives to traditional peer review. Journals that align with open science often adopt customized combinations of these practices based on their editorial aims. This flexibility allows for a more nuanced peer review process—one that balances openness with scholarly rigour, bias mitigation, and accountability. Aligned with this ethos, Andreas Finke and Thomas Hensel propose a decentralized, community-based peer review model governed by smart contracts and blockchains, aiming to improve transparency, speed and quality (2). Limongi also suggests that participatory models and open-source AI systems can ensure a fairer and more responsible integration of AI into scientific work (8-9). More broadly, open science and its core practices—open data, open access, and open peer review—could offer one of the most robust antidotes to the ethical and editorial risks of GenAI in PRP.
Under current conditions, the OSM can offer not only an ethical corrective but also a sustainability-oriented framework for hybrid AI publishing models. Reconsidering OSM’s relevance is not incompatible with developing regulatory frameworks for AI in academic publishing. In fact, it may provide an opportunity to reshape ethical standards and rethink regulation for a transformed scholarly communication landscape.
From the standpoint of “intelligent governance,” scholars like Pompeu Casanovas propose an AI that helps design an ethically responsible version of itself, capable of responding to the profound social implications of GenAI (15). Yet there are different challenges that neither the technological push nor humanistic or ethical approaches can solve by themselves. In particular, significant problems are still not being addressed in the extractive and exploitative nature of the peer review processes under platform logics and the “AI imperative.” We need to reconsider the use and adoption of AI into the peer review processes and to reflect on the sustainability of current platform business models anchored by the small number of giant for-profit academic publishers who dominate and gatekeep so much human knowledge.
Editorial Acceleration and Its Discontents: The Un/Sustainable Legacy of AI Solutionism
In an increasingly accelerated world, where urgency defines productivity and velocity eclipses reflection, the publishing industry and the economic logic underpinning scholarly production and impact markets have long embraced the consequences of such a pace. This is especially evident in the well-established business of paying to publish in high-impact journals through mechanisms such as publication fees and article processing charges (APCs), which finance open-access publication models. The peak expression of this commodification is found in the proliferation of hijacked and explicitly predatory journals.
Despite the flurry of recent discussions around GenAI-assisted peer review, most conversations remain anchored in technical logistics, questions of oversight and transparency, or proposals for regulatory frameworks aimed at restoring confidence in the system. Engaging with this literature has allowed us to critically reflect on the material conditions that make GenAI publishing possible and the broader discourse surrounding the future of peer review. It has also helped frame the editorial ecosystem’s obsession with speed, efficiency, and shorter decision-making cycles. Figure 1 illustrates the growing push for accelerated review models—often marketed as hybrid human/AI services completed in as little as five days.
This drive for ever-faster publishing aligns with the logics of accelerationism, a techno-political stance that sees technological and economic intensification as a catalyst for systemic transformation (Mardones). Two main currents can be distinguished: left accelerationism, which advocates using technology to transcend capitalism through automation and redistribution (Avanessian and Reis); and right accelerationism, which promotes unchecked capitalist expansion, believing acceleration will bring inevitable change (Land, Noumena; Teleoplexia). Critics argue that all forms of accelerationism worsen inequality and hasten planetary ecological collapse (Noys; Arias Gil).
The peer review ecosystem, in which GenAI-powered processes are perhaps the most visible accelerationist artifact, exists within this broader ideological terrain. In tandem, many ethical discussions surrounding AI in peer review rely on technological solutionism and disciplinary frameworks that often eclipse deeper issues of sustainability. This fixation on protecting the “human factor” or enhancing transparency risks obscuring the deeper flaws of the peer review system itself, flaws that long predate AI. While AI indeed poses ethical and social risks, including undermining integrity and trust, human-led peer review also suffers from longstanding problems such as bias, exploitation, opacity, and inequity (Resnik and Elmore; Schintler et al.; Tennant and Ross-Hellauer).
Moreover, the ethics discourse surrounding GenAI may be complicit in whitewashing other urgent problems, particularly the environmental costs of deploying AI at scale in peer review. Although the carbon footprint of AI models is occasionally mentioned, most literature on GenAI and PRP does not meaningfully engage with the environmental consequences of using LLMs in editorial workflows. As reiterated throughout this paper, dominant concerns remain limited to efficiency, bias, and workflow optimization, with little attention paid to sustainability. Thus, we argue that the current AI boom in academic publishing, driven by accelerationist logics, technological solutionism, and ethical minimalism, obscures both the flaws of the peer review system and the ecological footprint of AI integration.
These tendencies converge in a form of greenwashing aligned with the extractivist logic of green capitalism, in which environmental goals are superficially reconciled with capital accumulation. Green capitalism posits that growth can continue without ecological degradation, provided that sufficient technological and market-based solutions are implemented. The broader “green economy” similarly claims to balance economic development with sustainability and social justice.
Developing and operating even beneficial AI models, particularly extensive deep learning systems, requires substantial computing power and energy. Training an advanced model can consume hundreds or even thousands of hours of graphics processing unit (GPU) time, resulting in significant electricity usage and considerable carbon dioxide emissions that contribute to climate change.
Moreover, many commercial AI models are proprietary and lack transparency, making it difficult to fully evaluate their environmental impact and creating challenges for sustainability efforts. For instance, Patterson and his coauthors estimated that training GPT-3, a language model with 175 billion parameters, generated approximately 552 tons of carbon dioxide equivalent (CO₂eq) (7). They compared this to the emissions of a round-trip flight between San Francisco and New York, noting that GPT-3’s training emissions were roughly three times higher (13).
In response to these challenges, various tools have been developed to help quantify and address the environmental impact of AI systems. Resources such as the Machine Learning Emissions Calculator (Lacoste et al.) and CodeCarbon (CodeCarbon) aim to raise awareness of AI’s ecological footprint. Alongside newer AI-based benchmarking systems, these tools assist researchers and institutions in tracking and mitigating carbon emissions—despite the paradox that AI is being used to monitor the damage it helps generate.
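To make the scale of such estimates concrete, the accounting behind calculators of this kind is straightforward: energy drawn by the hardware, adjusted for datacenter overhead, multiplied by the carbon intensity of the local grid. The sketch below is illustrative only; the power, PUE, and grid-intensity values are assumptions, and the 0.429 kg CO₂eq/kWh figure is derived here from Patterson et al.’s reported totals rather than quoted directly from their paper.

```python
# Illustrative sketch of the accounting behind tools like the Machine
# Learning Emissions Calculator (Lacoste et al.). All parameter values
# below are assumptions, not measurements of any specific deployment.

def estimate_emissions_kg(gpu_power_watts: float, gpu_hours: float,
                          pue: float, grid_kg_per_kwh: float) -> float:
    """Rough kg of CO2eq for a training run: energy x grid carbon intensity.

    pue: power usage effectiveness, the datacenter overhead multiplier.
    """
    energy_kwh = (gpu_power_watts / 1000.0) * gpu_hours * pue
    return energy_kwh * grid_kg_per_kwh

# Hypothetical run: 300 W per GPU, 10,000 GPU-hours, PUE 1.1,
# grid intensity 0.4 kg CO2eq per kWh.
print(estimate_emissions_kg(300, 10_000, 1.1, 0.4))  # 1320.0 kg

# Cross-check of Patterson et al.'s GPT-3 figure: roughly 1,287 MWh of
# training energy; at a net intensity near 0.429 kg CO2eq/kWh this
# reproduces the reported ~552 metric tons of CO2eq.
gpt3_energy_kwh = 1287 * 1000
print(round(gpt3_energy_kwh * 0.429 / 1000))  # 552 metric tons
```

The point of the sketch is that the dominant variable is grid carbon intensity, which is precisely what the opacity of proprietary deployments makes hardest to verify.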
Beyond technical solutions, a collective intervention proposed by Carrie Karsgaard and colleagues—also included in this special issue—offers a more systemic response. Their piece, “The Pedagogy of Manifesto Making,” puts forward a Decarbonizing Manifesto that invites us to rethink the carbon-intensive infrastructure of scholarly publishing. Written as a form of situated and relational pedagogy, the manifesto challenges extractivist production models, prestige-driven evaluation, and hypermobility. Instead, it promotes care, slowness, and institutional responsibility as cornerstones of sustainable academic practice. Their proposal resonates strongly with the ethical and ecological imperatives addressed throughout this issue.
If care is not taken, however, these responses risk reproducing a Green AI narrative, a subdomain that promises sustainable AI by promoting measurement, tuning, and optimization without challenging underlying extractivist assumptions (Tomlinson et al.; Verdecchia et al.). Recent research calls for standardized protocols to quantify the climate impact of AI models and to prioritize sustainability in AI ethics frameworks (Iqbal et al.). Ultimately, unless sustainability becomes a central axis of both peer review reform and AI integration, we risk replacing one flawed system with another that is more opaque, extractive, and unsustainable than before.
As scholars in the precariat, working for free to make knowledge more democratic and more helpful to humanity’s quest for a just and sustainable world, we understand that sustainability is about more than publishing protocols or even the crucial issues of energy use and climate change. It is about us. We must learn to sustain our work, to practice sustainable activism, as Laurence Cox explains in his powerful overview and defense of the concept. He points out that as we burn out, so burns the world (530). There are good theories (Suzuki; Ede; Boggs and Kurashige) and wonderful practices that have helped us survive and even thrive. Our participation in mass social movements outside of scholarship, for example, has not just taught us a great deal; it has sustained us, just as working on Teknokultura sustains us, making our choice to be scholars something we can live with.
Grace Lee Boggs was a Chinese American activist who worked for peace until her death at 95. Famous for supporting the Detroit Black Power movement, pioneering sustainable agriculture, and always working to end war, she kept a positive spirit by remembering that we work for a better world, not just against the evil in this one. Every crisis is an opportunity, and scholarly publishing is certainly in crisis. As Boggs and Scott Kurashige write of revolution and sustainable activism:
“Every crisis, actual or impending, needs to be viewed as an opportunity to bring about profound changes in our society. Going beyond protest organizing, visionary organizing begins by creating images and stories of the future that help us imagine and create alternatives to the existing system” (xxi).
Works Cited
Addy, Alfred, et al. “Analysis of Ghana’s Public Health Act 2012 and AI’s Role in Augmenting Vaccine Supply and Distribution Challenges in Ghana.” Journal of Law, Policy and Globalization, no. 139, 2024, p. 14, doi.org/10.7176/JLPG/139-03.
Arias Gil, Enrique. Aceleracionismo y extrema derecha. ¿Hacia una nueva oleada terrorista? Agapea Libros, 2020.
Avanessian, Armen, and Mauro Reis, comps. Aceleracionismo. Estrategias para una transición hacia el postcapitalismo. Caja negra, 2017.
Balat, Ayşe, and İlhan Bahşi. “May Artificial Intelligence Be a Co-Author on an Academic Paper?” European Journal of Therapeutics, vol. 29, no. 3, 2023, pp. e12-e13, doi.org/10.58600/eurjther1688.
Bauchner, Howard, and Frederick P. Rivara. “Use of Artificial Intelligence and the Future of Peer Review.” Health Affairs Scholar, vol. 2, no. 5, 2024, p. qxae058, doi.org/10.1093/haschl/qxae058.
Biswas, Som, et al. “ChatGPT and the Future of Journal Reviews: a Feasibility Study.” The Yale Journal of Biology and Medicine, vol. 96, no. 3, 2023, p. 415, doi.org/10.59249/skdh9286.
Björk, Bo-Christer, and David Solomon. “The Publishing Delay in Scholarly Peer-Reviewed Journals.” Journal of Informetrics, vol. 7, no. 4, 2013, pp. 914-923, doi.org/10.1016/j.joi.2013.09.001.
Blair, Elizabeth. “How an AI-Generated Summer Reading List Got Published in Major Newspapers.” NPR, 20 May 2025, npr.org/2025/05/20/nx-s1-5405022/fake-summer-reading-list-ai. Accessed 8 June 2025.
Boggs, Grace Lee, and Scott Kurashige. The Next American Revolution: Sustainable Activism for the Twenty-First Century. University of California Press, 2012.
Calamur, Harini, and Roohi Ghosh. “Adapting Peer Review for the Future: Digital Disruptions and Trust in Peer Review.” Learned Publishing, vol. 37, no. 1, 2024, p. e1594, doi.org/10.1002/leap.1594.
Carobene, Anna, et al. “Rising Adoption of Artificial Intelligence in Scientific Publishing: Evaluating the Role, Risks, and Ethical Implications in Paper Drafting and Review Process.” Clinical Chemistry and Laboratory Medicine (CCLM), vol. 62, no. 5, 2024, pp. 835-843, doi.org/10.1515/cclm-2023-1136.
Casanovas, Pompeu. “La perversidad inducida.” arXiv Preprint, 2025, doi.org/10.48550/arXiv.2503.23432.
Castelo-Branco, Camil. “Peer Review? No Thanks!” Climacteric, vol. 26, no. 1, 2023, pp. 3-4, doi.org/10.1080/13697137.2022.2149006.
Chauhan, Chhavi, and George Currie. “The Impact of Generative Artificial Intelligence on Research Integrity in Scholarly Publishing.” The American Journal of Pathology, vol. 194, no. 12, 2024, pp. 2234-2238, doi.org/10.1016/j.ajpath.2024.10.001.
Checco, Alessandro, et al. “AI-Assisted Peer Review.” Humanities and Social Sciences Communications, vol. 8, no. 1, 2021, pp. 1-11, doi.org/10.1057/s41599-020-00703-8.
Chen, Ziyu, et al. “Research Integrity in the Era of Artificial Intelligence: Challenges and Responses.” Medicine, vol. 103, no. 27, 2024, p. e38811, doi.org/10.1097/MD.0000000000038811.
CodeCarbon. Tracking Carbon Emissions from Code Execution. CodeCarbon, 2023, codecarbon.io. Accessed 8 June 2025.
Cox, Laurence. “Sustainable Activism.” Routledge Handbook of Radical Politics, edited by Ruth Kinna and Uri Gordon, Routledge, 2019, pp. 524-38, mural.maynoothuniversity.ie/id/eprint/14186/1/Laurence%20Cox%20sustainable%20activism.pdf.
David, Paul A. “Can ‘Open Science’ be Protected from the Evolving Regime of IPR Protections?” Journal of Institutional and Theoretical Economics (JITE)/Zeitschrift für die gesamte Staatswissenschaft, vol. 160, 2004, pp. 9-34, doi.org/10.1628/093245604773861069.
Dergaa, Ismail, et al. “From Human Writing to Artificial Intelligence Generated Text: Examining the Prospects and Potential Threats of ChatGPT in Academic Writing.” Biology of Sport, vol. 40, no. 2, 2023, pp. 615-622, doi.org/10.5114/biolsport.2023.125623.
DeWitt, Jessica M. “Reimaging Academic Publishing: Community, Knowledge, and the Future Beyond Academia.” Imaginations and The Goose “Sustainable Publishing,” vol. 16, no. 1, 2025, pp. XX.
Drozdz, John A., and Michael R. Ladomery. “The Peer Review Process: Past, Present, and Future.” British Journal of Biomedical Science, vol. 81, 2024, p. 12054, doi.org/10.3389/bjbs.2024.12054.
Ede, Sharon. “What Jamie Oliver Can Teach Sustainability Activists,” Change Agency, 15 Jun 2012, uploads.strikinglycdn.com/files/8c6e840c-079b-4334-bc9a-35b2e6cf5d69/What%20Jamie%20Oliver%20Can%20Teach%20Sustainability%20Activists.pdf. Accessed 08 June 2025.
Farber, Shai. “Enhancing Peer Review Efficiency: A Mixed‐Methods Analysis of Artificial Intelligence‐Assisted Reviewer Selection Across Academic Disciplines.” Learned Publishing, vol. 37, no. 4, 2024, p. e1638, http://dx.doi.org/10.1002/leap.1638.
Fauchié, Antoine. “Permapublishing: pour des modes d’édition pérennes.” Imaginations and The Goose “Sustainable Publishing,” vol. 16, no. 1, 2025, pp. XX.
Finke, Andreas, and Thomas Hensel. “Decentralized Peer Review in Open Science: A Mechanism Proposal.” arXiv Preprint, 2024, doi.org/10.48550/arXiv.2404.18148.
García, Manuel B. “Using AI tools in Writing Peer Review Reports: Should Academic Journals Embrace the Use Of ChatGPT?” Annals of Biomedical Engineering, vol. 52, no. 2, 2024, pp. 139-140, doi.org/10.1007/s10439-023-03299-7.
Giray, Louie. “Benefits and Challenges of Using AI for Peer Review: A Study on Researchers’ Perceptions.” The Serials Librarian, vol. 85, no. 5-6, 2024, pp. 144-154, doi.org/10.1080/0361526X.2024.2428377.
Giray, Louie, et al. “Strengths, Weaknesses, Opportunities, and Threats of Using ChatGPT in Scientific Research.” International Journal of Technology in Education, vol. 7, no. 1, 2024, pp. 40-58, http://dx.doi.org/10.46328/ijte.618.
Gong, Ke. “Open Science: The Science Paradigm of the New Era.” Cultures of Science, vol. 5, no. 1, 2022, pp. 3-9, journals.sagepub.com/doi/pdf/10.1177/20966083221091867.
González-Esteban, Elsa, and Patrici Calvo. “Ethically Governing Artificial Intelligence in the Field of Scientific Research and Innovation.” Heliyon, vol. 8, 2022, p. e08946, doi.org/10.1016/j.heliyon.2022.e08946.
Gordo, Angel, and Chris Hables Gray. “Artificial Intelligence, QAnon, and the Gamification of Alienation.” Eighth International Conference on Communication & Media Studies, Madrid, Spain, 2023.
Gray, Chris Hables. AI, Sacred Violence, and War—The Case of Gaza. Palgrave Macmillan, 2025.
Gray, Chris Hables. “Political Deepfakes and Elections.” Free Speech Center, 6 Dec. 2024, firstamendment.mtsu.edu/article/political-deepfakes-and-elections/. Accessed 08 June 2025.
Grove, Jack. “Science Journals Overturn Ban on ChatGPT-Authored Papers.” Times Higher Education (THE), 16 Nov. 2023, timeshighereducation.com/news/science-journals-overturn-ban-chatgpt-authored-papers. Accessed 08 June 2025.
Haack, Susan. “Peer Review and Publication: Lessons for Lawyers.” Stetson Law Review, vol. 36, 2007, pp. 789-819, stetsonlawreview.org/wp-content/uploads/2022/02/36.3.05.Haack_.pdf.
Hosseini, Mohammad, et al. “Open Science at the Generative AI Turn: An Exploratory Analysis of Challenges and Opportunities.” Quantitative Science Studies, vol. 6, 2025, pp. 22-45, doi.org/10.1162/qss_a_00337.
Houghton, Frank. “Keep Calm and Carry On: Moral Panic, Predatory Publishers, Peer Review and the Emperor’s New Clothes.” Journal of Medical Library Association, vol. 110, no. 2, 2022, pp. 233-39, doi.org/10.5195/jmla.2022.1441.
Iqbal, Anam, et al. “A Review of the Good and Bad of AI for the Environment: Decarbonizing AI.” 2024 1st International Conference on Logistics (ICL), IEEE, 14-16 Aug. 2024, doi.org/10.1109/ICL62932.2024.10788577.
Irfanullah, Haseeb. “Ending Human-Dependent Peer Review.” The Scholarly Kitchen, 29 Sept. 2023, scholarlykitchen.sspnet.org/2023/09/29/ending-human-dependent-peer-review/. Accessed 08 June 2025.
Jiffriya, M., et al. “Plagiarism Detection Tools and Techniques: A Comprehensive Survey.” Journal of Science–FAS–SEUSL, vol. 2, no. 2, 2021, pp. 47-64, seu.ac.lk/jsc/publication/v2n2/Manuscript%205.pdf.
Kankanhalli, Atreyi. “Peer Review in the Age of Generative AI.” Journal of the Association for Information Systems, vol. 25, no. 1, 2024, pp. 76-84, doi.org/10.17705/1jais.00865.
Karsgaard, Carrie, et al. “The Pedagogy of Manifesto Making: Countering the Oily Entanglements of Academic Publishing.” Imaginations and The Goose “Sustainable Publishing,” vol. 16, no. 1, 2025, pp. XX.
Khalifa, Ahmed A., and Mariam A. Ibrahim. “Artificial Intelligence (AI) and ChatGPT Involvement in Scientific and Medical Writing, a New Concern for Researchers. A Scoping Review.” Arab Gulf Journal of Scientific Research, vol. 42, no. 4, 2024, pp. 1770-87, http://dx.doi.org/10.1108/AGJSR-09-2023-0423.
Koo, Malcolm. “The Importance of Proper Use of ChatGPT in Medical Writing.” Radiology, vol. 307, no. 3, 2023, p. e230312, doi.org/10.1148/radiol.230312.
Kousha, Kayvan, and Mike Thelwall. “Artificial Intelligence to Support Publishing and Peer Review: A Summary and Review.” Learned Publishing, vol. 37, no. 1, 2024, pp. 4-12, doi.org/10.1002/leap.1570.
Lacoste, Alexandre, et al. “Quantifying the Carbon Emissions of Machine Learning.” arXiv Preprint, 2019, doi.org/10.48550/arXiv.1910.09700.
Land, Nick. Fanged Noumena: Collected Writings 1987–2007. MIT Press, 2011.
Land, Nick. Teleoplexia: Ensayos sobre aceleracionismo y horror. Holobionte Ediciones, 2021.
Leung, Tiffany I., et al. “Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor.” Journal of Medical Internet Research, vol. 25, 2023, p. e51584, doi.org/10.2196/51584.
Liang, Weixin, et al. “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews.” arXiv Preprint, 2024, doi.org/10.48550/arXiv.2403.07183.
Limongi, Ricardo. “The Use of Artificial Intelligence in Scientific Research with Integrity and Ethics.” Future Studies Research Journal: Trends and Strategies, vol. 16, no. 1, 2024, p. e845, doi.org/10.24023/FutureJournal/2175-5825/2024.v16i1.845.
Ling, Xiaoxu, and Siyuan Yan. “Let’s Be Fair. What about an AI Editor?” Accountability in Research, vol. 31, no. 8, 2024, pp. 1253-54, http://dx.doi.org/10.1080/08989621.2023.2223997.
Machado, Jorge. “Open Data and Open Science.” Open Science, 2015, p. 189, academia.edu/15431919.
Manchikanti, Laxmaiah, et al. “Medical Journal Peer Review: Process and Bias.” Pain Physician, vol. 18, no. 1, 2015, pp. E1-E14, pubmed.ncbi.nlm.nih.gov/25675064/.
Mardones, Alejandro. “Máquinas y potencialidades para liberar el discurso aceleracionistas.” Teknokultura. Revista de Cultura Digital y Movimientos Sociales, 2024, pp. 1-8, doi.org/10.5209/tekn.98509.
Mehregan, Mina. “Scientific Journals Must Be Alert to Potential Manipulation in Citations and Referencing.” Research Ethics, vol. 18, no. 2, 2022, pp. 163-68, doi.org/10.1177/17470161211068745.
Mehta, Vini, et al. “The Application of ChatGPT in the Peer-Reviewing Process.” Oral Oncology Reports, 2024, p. 100227, doi.org/10.1016/j.oor.2024.100227.
Mensah, George Benneh. “AI Ethics.” African Journal of Regulatory Affairs (AJRA), vol. 2024, no. 1, 2024, pp. 32-45, doi.org/10.62839/AJFRA/2024.v1.I1.32-45.
Metz, Cade, and Karen Weise. “A.I. Is Getting More Powerful, But Its Hallucinations Are Getting Worse.” The New York Times, 5 May 2025, www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html?unlocked_article_code=1.FE8.z6an.o3hI0lPgv07s&smid=url-share. Accessed 08 June 2025.
Miao, Jing, et al. “Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review.” Clinics and Practice, vol. 14, no. 1, 2023, pp. 89-105, doi.org/10.3390/clinpract14010008.
Mitchell, Ottilie. “US Government Report Cited Non-Existent Sources, Academics Say.” BBC, 31 May 2025, www.bbc.com/news/articles/cdj98vrzpyvo. Accessed 08 June 2025.
Mollaki, Vasiliki. “Death of a Reviewer or Death of Peer Review Integrity? The Challenges of Using AI Tools in Peer Reviewing and the Need to Go Beyond Publishing Policies.” Research Ethics, vol. 20, no. 2, 2024, pp. 239-50, doi.org/10.1177/17470161231224552.
Mrowinski, Maciej J., et al. “Artificial Intelligence in Peer Review: How Can Evolutionary Computation Support Journal Editors?” PLoS ONE, vol. 12, no. 9, 2017, p. e0184711, doi.org/10.1371/journal.pone.0184711.
Nath, Karl A., et al. “AI in Peer Review: Publishing’s Panacea or a Pandora’s Box of Problems?” Mayo Clinic Proceedings, vol. 99, no. 1, Elsevier, 2024, doi.org/10.1016/j.mayocp.2023.11.013.
Noys, Benjamin. Malign Velocities: Accelerationism and Capitalism. John Hunt Publishing, 2014.
Parrish, Danielle E. “The Peer Review Crisis Continues: What Comes Next?” Journal of Social Work Education, vol. 60, no. 2, 2024, pp. 171-73, doi.org/10.1080/10437797.2024.2363167.
Patterson, David, et al. “Carbon Emissions and Large Neural Network Training.” arXiv Preprint, 2021, doi.org/10.48550/arXiv.2104.10350.
Perry, Jr., Earnest L. “Why Peer Review Matters.” Journalism History, vol. 50, no. 1, 2024, pp. 30-32, doi.org/10.1080/00947679.2023.2286830.
Raman, Raghu. “Transparency in Research: An Analysis of ChatGPT Usage Acknowledgment by Authors Across Disciplines and Geographies.” Accountability in Research, 2023, pp. 1-22, doi.org/10.1080/08989621.2023.2273377.
Resnik, David B., and Susan A. Elmore. “Ensuring the Quality, Fairness, and Integrity of Journal Peer Review: A Possible Role of Editors.” Science and Engineering Ethics, vol. 22, 2016, pp. 169-88, doi.org/10.1007/s11948-015-9625-5.
Sabet, Cameron John, et al. “Equity in Scientific Publishing: Can Artificial Intelligence Transform the Peer Review Process?” Mayo Clinic Proceedings: Digital Health, vol. 1, no. 4, 2023, pp. 596-600, doi.org/10.1016/j.mcpdig.2023.10.002.
Salah, Mohammed, et al. “Debate: Peer Reviews at the Crossroads—‘To AI or Not to AI?’” Public Money & Management, vol. 43, no. 8, 2023, pp. 781-82, doi.org/10.1080/09540962.2023.2264032.
Schintler, Laurie A., et al. “A Critical Examination of the Ethics of AI-Mediated Peer Review.” arXiv Preprint, 2023, doi.org/10.48550/arXiv.2309.12356.
Seghier, Mohamed L. “AI-Powered Peer Review Needs Human Supervision.” Journal of Information, Communication and Ethics in Society, vol. 23, no. 1, 2025, pp. 104-116, http://dx.doi.org/10.1108/JICES-09-2024-0132.
Stahl, Bernd Carsten, and Damian Eke. “The Ethics of ChatGPT–Exploring The Ethical Issues of an Emerging Technology.” International Journal of Information Management, vol. 74, 2024, p. 102700, doi.org/10.1016/j.ijinfomgt.2023.102700.
Stokel-Walker, Chris. “How Does Medicine Assess AI?” BMJ, vol. 383, 2023, doi.org/10.1136/bmj.p2362.
Sun, Lu, et al. “MetaWriter: Exploring the Potential and Perils of AI Writing Support in Scientific Peer Review.” Proceedings of the ACM on Human-Computer Interaction, vol. 8, CSCW1, 2024a, doi.org/10.1145/3637371.
Sun, Lu, et al. “ReviewFlow: Intelligent Scaffolding to Support Academic Peer Reviewing.” Proceedings of the 29th International Conference on Intelligent User Interfaces, 2024b, doi.org/10.48550/arXiv.2402.03530.
Suzuki, David. “Sustainable Activism: A Conversation with David Suzuki on His 75th Birthday.” Common Ground, March 2011. commonground.ca/iss/236/cg236_interview.shtml. Accessed 13 Mar. 2025.
Tabarés, Raúl. “Open Access, Responsibility and the ‘Platformization’ of Academic Publishing.” NOvation-Critical Studies of Innovation, vol. 2, 2020, pp. 147-167, doi.org/10.5380/nocsi.v0i2.91157.
Teixeira da Silva, Jaime A. “Is ChatGPT a Valid Author?” Nurse Education in Practice, vol. 68, 2023, p. 103600, doi.org/10.1016/j.nepr.2023.103600.
Tennant, Jonathan P., and Tony Ross-Hellauer. “The Limitations to Our Understanding of Peer Review.” Research Integrity and Peer Review, vol. 5, no. 1, 2020, p. 6, doi.org/10.1186/s41073-020-00092-1.
Tennant, Jonathan P., et al. “A Multi-Disciplinary Perspective on Emergent and Future Innovations in Peer Review.” F1000Research, vol. 6, 2017, p. 1151, doi.org/10.12688/f1000research.12037.3.
Thelwall, Mike. “Can ChatGPT Evaluate Research Quality?” Journal of Data and Information Science, vol. 9, no. 2, 2024, pp. 1-21, doi.org/10.48550/arXiv.2402.05519.
Tomlinson, Bill, et al. “The Carbon Emissions of Writing and Illustrating Are Lower for AI Than for Humans.” Scientific Reports, vol. 14, no. 1, 2024, p. 3732, doi.org/10.1038/s41598-024-54271-x.
Verdecchia, Roberto, et al. “A Systematic Review of Green AI.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 13, no. 4, 2023, p. e1507, doi.org/10.1002/widm.1507.
Verharen, Jeroen PH. “ChatGPT Identifies Gender Disparities in Scientific Peer Review.” Elife, vol. 12, 2023, p. RP90230, doi.org/10.7554/elife.90230.
Vincent, J. “Top AI Conference Bans Use of ChatGPT and AI Language Tools to Write Academic Papers.” The Verge, 06 Jan. 2023, www.theverge.com/2023/1/5/23540291/chatgpt-ai-writing-tool-banned-writing-academic-icml-paper. Accessed 08 June 2025.
Wakiaik, Amanda, and Sonya Betz. “Responsible and Sustainable Open Publishing: Q&A with Canada’s Largest Library-Based Open Publisher.” Imaginations and The Goose “Sustainable Publishing,” vol. 16, no. 1, 2025, pp. XX.
Weber, Ron. “The Other Reviewer: RoboReviewer.” Journal of the Association for Information Systems, vol. 25, no. 1, 2024, pp. 85-97, doi.org/10.17705/1jais.00866.
Wynne, R., and V. B. Kolachalama. “Integrating Artificial Intelligence into Scholarly Peer Review: A Framework for Enhancing Efficiency and Quality.” OSF Preprints, 16 May 2025, doi.org/10.31219/osf.io/s764u_v1.
Zielinski, Chris, et al. “Chatbots, Generative AI, and Scholarly Manuscripts: WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications.” Current Medical Research and Opinion, vol. 40, 2023, pp. 11-13, doi.org/10.25100/cm.v54i3.5868.
Image Notes
Figure 1: Leonard, Chris. “Let’s chat.” Scalene 36: We are 1! / REMOR / SIFT, 2025, scalene-peer-review.beehiiv.com/p/scalene-36-we-are-1-remor-sift.