Artificial Intelligence

Artificial intelligence and academic integrity, post-plagiarism



Welcome to the post-plagiarism era. The emergence of ChatGPT in November 2022 created an almost instantaneous technological panic about the impact of artificial intelligence (AI) on education. This is not the first time new technology has generated mass panic. The introduction of the calculator in schools in the 1980s and the commercialisation of the internet in the 1990s had similar effects.

It is important to remember that ChatGPT did not simply appear out of nowhere. The company behind it, OpenAI, was founded in 2015. Precursors to ChatGPT included GPT-3, released in 2020, and earlier versions: GPT-2 in 2019 and the original Generative Pre-trained Transformer (GPT) in 2018.

So, the technology in its current form has been around for about half a decade. However, adaptive and predictive text tools were first developed in the 1980s to assist people with disabilities. Predictive text eventually became integrated into text messaging apps on mobile phones and is now used by millions of people every day.

The use of artificial intelligence tools does not automatically constitute academic dishonesty. It depends on how the tools are used. For example, apps such as ChatGPT can be used to help reluctant writers generate a rough draft that they can then revise and update.

Used in this way, the technology can help students learn. The generated text can also be used to help students learn the skills of fact-checking and critical thinking, since the outputs from ChatGPT often contain factual errors.

When students use tools or other people to complete homework on their behalf, that is considered a form of academic dishonesty because the students are no longer learning the material themselves. The key point is that it is the students, not the technology, who are to blame when they choose to have someone – or something – do their homework for them.

There is a difference between using technology to help students learn and using it to help them cheat. The same technology can be used for both purposes.

The issue of plagiarism

The question of whether artificial intelligence apps plagiarise is also tricky. Historically, plagiarism has been defined as copying text or ideas from others without proper attribution. There has long been an assumption that humans plagiarise from other humans.

In the case of artificial intelligence, tools linked to large language models do not plagiarise in any traditional sense. The text generated by AI apps should not be presumed to be plagiarised, even though the content has been harvested and then aggregated from a variety of online sources.

In many cases, the text itself is completely original – sometimes to a fault. ChatGPT, for example, can fabricate details, and the resulting text is not only original, it can also be completely untrue.

Mainstream media outlets have already been using AI tools for a few years. The New York Times, the Washington Post and Forbes have all been using machine learning tools to create news stories.

Human fact-checkers and editors still play a role in ensuring the stories are accurate and true. When the facts are not verified, it can have consequences for readers and damage the reputation of the publication.

Artificial intelligence tools are increasingly being used in industry. If we want to ensure that students who graduate from our universities have the skills they need to enter the workforce, it is essential to teach them how to use artificial intelligence tools responsibly, as they are likely to encounter them at work.

Some major scientific publishers have established rules for how artificial intelligence apps can be used in scholarly publishing, declaring that AI writing apps cannot be listed as co-authors, for example. One reason for this is that it is ultimately humans, not robots, who are held responsible for scientific results and their dissemination.

Instead, the use of AI should be declared in the introduction, methods section or acknowledgements of a scientific paper. This decision by major publishers signals that AI writing tools are just that – tools. They do not replace humans (at least not yet) and scientific advances – for better or for worse – remain a human responsibility.

A hybrid human-technology output

It will not be long before artificial intelligence is built into everyday word processing programmes such as Microsoft Word or Google Docs. Like the predictive text features of text messaging apps on mobile phones, the technology will become so commonplace that everyone everywhere will use it every day.

In 2021 I wrote Plagiarism in Higher Education: Tackling tough topics in academic integrity, a book in which I argued that technology would bring us into an age of post-plagiarism. This is an age in which it is normal for humans and technology to co-write text, and the result is a hybrid human-technology output.

In the age of post-plagiarism, humans use artificial intelligence apps to enhance and elevate creative outputs as a normal part of everyday life. We will soon be unable to detect where the human-written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable.

The key is that even though people can relinquish full or partial control to artificial intelligence apps, allowing the technology either to write for them or to write with them, humans remain responsible for the result. It is important to prepare young learners and university students for this reality, which is not a distant future, but already the present.

Sarah Elaine Eaton is associate professor in the Werklund School of Education at the University of Calgary in Canada.


