How GenAI is Transforming Citation: What You Need To Know

Institutions must collaborate to develop frameworks recognizing GenAI’s limits.
We are in the midst of a technological revolution in AI, and academic integrity needs to evolve too.

GenAI has created a multifaceted academic integrity crisis, including the difficulty of reliably distinguishing AI-generated from human-written text. Plagiarism, citing, and referencing form the crux of heated debates about whether GenAI apps like ChatGPT should even be allowed in educational settings. What is undeniable is that GenAI tools are reshaping how we write, research, and think. Although these tools are promising, they also challenge fundamental academic practices, including citing and referencing. As more students, educators, and researchers rely on GenAI, the question arises: how do we maintain academic integrity when AI tools generate citations that may be unreliable or unverifiable?

As Oscar Wilde observed, “Most people are other people. Their thoughts are someone else’s opinions, their lives a mimicry, their passions a quotation.” Wilde’s insight is reflected in the academic practice of citation, where we acknowledge the voices of others that shape our own. Wilde lauded the process of recognizing those who came before us, since we are a combination of what we have read, experienced, and remembered, for the benefit of those who come after us. Citing sources is a way of recognizing and respecting those who said it first or best, and of paying homage to those from whom we learned and through whom our thoughts are formed, reformed, or adapted. The basic principle of citation, regardless of the convention (MLA, Chicago, APA, among others), is that if a citation is unambiguous, complete, and accurate, the reader can locate the source, whether slowly on a library bookshelf or instantly via a digital object identifier (DOI), a unique alphanumeric identifier for digital texts. A DOI is not the only way to cite digital documents, but it is popular because it is easy to use, widely accepted, and provides a persistent link to the document.
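The persistence of a DOI comes from a central resolver: prepending https://doi.org/ to any DOI yields a stable URL for the document, no matter where the publisher later hosts it. A minimal sketch (the DOI shown is illustrative):

```python
def doi_to_url(doi: str) -> str:
    """Turn a bare DOI into its persistent resolver URL on doi.org."""
    return f"https://doi.org/{doi.strip()}"

# An illustrative DOI; the resolver redirects to the current location
# of the document, which is what makes the citation link persistent.
print(doi_to_url("10.1000/182"))  # → https://doi.org/10.1000/182
```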

The main goal across the various citation styles is to show where information comes from. Although styles vary, they are based on subject-specific rules and traditions and an overarching effort to be fair, that is, to give credit where credit is due. Proper citation lets writers attribute the development of their ideas to their sources. Citation is integral to preserving academic integrity and preventing plagiarism. Citing our sources shows others how our thinking has evolved and helps readers assess the quality of our work. Insisting that students cite their sources should be neither an exercise in frustration nor literary policing, but rather a way of helping them develop a habit of attributing, acknowledging, and paying homage to the sources from which they learned. That is why it is important that sources be high quality, easy for others to locate, and preferably bona fide.

How and Why GenAI Has Created an Academic Integrity Crisis
Citations generated by GenAI apps are not always accurate. AI tools often produce incomplete or fabricated citations, turning references into placeholders that mislead rather than guide readers. Peer-reviewed publications have raised alarms about these so-called hallucinations: AI-generated citations are often invented outright or only partly accurate, leaving readers to guess which references are real and which actually support the text.

Another problem with GenAI-based citations is limited coverage. Specialized AI research tools such as scite.ai and scispace.ai (many others exist, and new ones continue to appear) offer properly formatted citations, but the breadth of articles available to them is limited. Furthermore, their full feature sets require paid subscriptions, making them an expensive option on top of that limitation.

Instead of citing sources, why not cite the GenAI itself? There are several problems with this too, including that of non-reproducible text.

Imagine a citation scheme for GenAI content that records the prompt, the date it was used, and the GenAI model that produced the text. Even so, there is no guarantee (in fact, the chances are remote) that the same model, given the same prompt, will generate the identical text the original author obtained. This undermines conventional citation entirely. Conventional citation assumes that the text in the cited source is intact and perpetually available (which is why modern citation conventions created DOIs, and their use has burgeoned). That assumption no longer holds in the GenAI era for technical reasons: GenAI models are non-deterministic; text is generated from a conversational context that cannot be captured in a simple citable prompt; models undergo frequent updates and fine-tuning; and responses are a mixture of learned patterns, probabilities, and neural computations. This means that a prompt given to GenAI today may generate completely different text tomorrow, making its output impossible to verify or replicate.
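The non-determinism is easy to see in miniature. Language models sample each next token from a probability distribution, so repeated runs of the same prompt can diverge. A toy sketch (the distribution and token names are hypothetical, not from any real model):

```python
import random

def sample_next_token(probs: dict, seed: int) -> str:
    """Sample one token from a toy next-token probability distribution.
    Different seeds stand in for different runs of the same prompt."""
    rng = random.Random(seed)
    tokens = list(probs.keys())
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token probabilities after some prompt.
probs = {"citation": 0.4, "reference": 0.35, "source": 0.25}

# Twenty "runs" of the same prompt: the sampled continuations differ,
# which is why an identical prompt need not reproduce identical output.
outputs = {sample_next_token(probs, seed=s) for s in range(20)}
print(outputs)
```

Because the sampled token varies across runs, a citation recording only the prompt and the model cannot let a reader re-derive the cited text.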

Because text generated by GenAI models cannot be regenerated in precisely the same way, is there any chance of reconciling what we mean by citing sources? One plausible solution seems to be elevating GenAI to the stature of a co-author. After all, when we collaborate to co-write a piece, both contributors are designated as authors of the text. Is that a viable solution? Perhaps not.

In 2023, the Committee on Publication Ethics (COPE), which comprises editors, publishers, and researchers committed to upholding publication ethics, issued a position statement disallowing the use of AI as a co-author, and it has since issued a revised statement that reads:
‘The use of artificial intelligence (AI) tools such as ChatGPT or Large Language Models in research publications is expanding rapidly. COPE joins organisations, such as WAME and the JAMA Network among others, to state that AI tools cannot be listed as an author of a paper.’

How We Can Fix the Citation Crisis and Uphold Academic Integrity
1 Accept that banning GenAI outright is unrealistic in a world increasingly reliant on AI tools for productivity and innovation. An outright ban also runs counter to the actively promoted concept of postplagiarism.
2 Use GenAI and disclose that it has been used, an approach that Martine Peters (2023) promotes. She created AI logos that she advocates using routinely, much as we use signatures in emails. However, disclosing AI use does not solve the problem of unverifiable citations, nor does it give readers access to source material.
3 Redefine what it means to co-author with AI. This could involve treating GenAI as a tool rather than a collaborator, with clear guidelines on its role in the research process.

Conclusion
As GenAI becomes integrated into academic and research practices, it challenges long-standing conventions of attribution, credit, and collaboration. Our current inability to rely on AI-generated citations, or to recreate AI-generated text precisely, calls for a re-examination of how we define academic integrity in the GenAI era. Traditional citation methods, designed for a world of static, verifiable texts, now feel inadequate for dynamic and probabilistic outputs.

Institutions, publishers, and researchers must collaborate to develop frameworks recognizing GenAI’s limits. This includes fostering transparency through disclosure, establishing standards for AI-assisted work, and exploring new ways to credit contributions that transcend human authorship. These solutions are not just about preserving academic norms; they are about ensuring that innovation in research and education remains ethical and credible.

What we are witnessing is not just a technical challenge but also a shift in how knowledge is created, shared, and valued. In embracing these changes, we must balance progress with accountability, ensuring that the integrity of academic work endures in this new era.

Rahul Kumar Brock University ([email protected])
Sarah Elaine Eaton University of Calgary ([email protected])