ChatGPT has been quickly and widely adopted by PhD students, organically and without a defined place in the PhD curriculum or research practice. Together with other tools, AI will become part of the standard toolkit of every PhD student. More formal guidelines are needed to ensure that we make the best use of these revolutionary tools.
…and there was ChatGPT.
On 30 November 2022, ChatGPT was released to the public. In January, during my team's weekly catch-up, we discussed it for the first time. One of us shared a problem, and the spontaneous answer proposed was "Ask ChatGPT." At that moment, we realized that we had all already used it.
A PhD student must answer many questions about their research topic and methodology, develop the skills required for a future career, and learn to navigate the "jungle" of academia. Some have made ChatGPT their first option for solving these problems. One student used it to stand in for a temporarily unavailable supervisor, while another became so accustomed to it that they felt lost whenever they could not access it.
A personal account
Quentin Loisel is a current MSCA PhD fellow at Glasgow Caledonian University. He is part of the Health CASCADE project, which aims to make co-creation trustworthy. Working within an engineering team, he contributes to developing the technologies that will enable evidence-based co-creation. His work bridges technology and the fundamental human dimension within the values of the co-creation process. Drawing on cognitive science and several other fields, his research projects aim to define needs, organize a taxonomy, develop technological solutions, forecast the impact of future developments, and address ethics. Given the growing influence of technology, his long-term objective is to enable collaboration between society's actors so as to make the best of future technology. He also serves as regional lead for Scotland on behalf of the MCAA UK Chapter.
However, ChatGPT polarized opinions. Some are excited, suggesting that this is just the beginning and that further improvements will revolutionize our work, making it faster and easier and opening new research perspectives. Others are more skeptical: they have often heard similar claims of imminent technological revolutions, and they notice the limits: difficulty producing innovative outputs, basic mistakes, and unclear data handling. Nonetheless, there is one point of agreement in the middle: it allows us to do more.
Already a research tool?
Indeed, beyond excitement and doubt, a strong force pushes students to use this technology: it is helpful. To explore their research question, students must discover and integrate a wide range of information. They need to be versatile and productive. This is precisely what ChatGPT offers: a fast, accessible, efficient, and versatile tool that increases productivity.
Of course, some limits remain, and essential questions must be addressed. However, if future improvements in large language model (LLM) technologies overcome some of these limits, we can expect ChatGPT to become an established tool of research practice. At that point, it could profoundly change our way of studying, researching, and creating knowledge.
Nevertheless, this spontaneous adoption has happened without a defined place in the PhD curriculum. Indeed, there are immediate risk factors: recognizing errors, taking responsibility for how outputs are used, intellectual property, data leakage, scientific integrity and ethics, and so on. Guidelines and training are needed to ensure the best and most appropriate use of this technology. For example, learning how to ask the right questions in a dialogue can dramatically increase the quality of the output.
A revolution for the better?
Beyond the usual dichotomy of "[use/ban] it or perish!", let's highlight some practical implications we need to consider if PhD students adopt it broadly.
As suggested, researchers have much to gain from efficient and reliable LLM technology. However, the force driving its adoption is environmental pressure. Academia is a very competitive environment. Chatbots are likely to raise the level of output expected of researchers by minimizing repetitive tasks and improving flexibility, yet they cannot reduce the overall workload. They might even increase it. Instead of "Publish or perish," we might evolve toward "Publish more or perish."
It will allow us to do more with less… but also differently. Indeed, asking a chatbot to do a task for us will likely prevent us from developing the skill to do it ourselves. The question this raises is: what skills and knowledge does a PhD student need to acquire to become an accomplished researcher? Answering it may lead us to keep dedicating time and effort to developing a skill even when we know that technology can do it better and faster.
Finally, we may legitimately wonder whether technology could one day develop knowledge faster and better than humans, and whether the researcher will simply become a technician. In that case, it will be necessary to recontextualize the crucial role of the researcher. With a growing part of society losing trust in the scientific method and nourishing fears about artificial intelligence, we will need researchers who contextualize and ensure the quality of the knowledge created. To do so, PhD students must develop the full set of skills and the understanding of these new tools that are becoming ever more prevalent in their practice.
Some PhD students have already adopted ChatGPT, and the reasons are primarily practical in a competitive environment: doing more with less. If LLM technology overcomes its limits, it will likely be widely adopted and become a must-have. This raises the need to conceptualize the place of this technology alongside the researcher and to create appropriate guidelines for its most efficient and proper use. With its strong research community, the MCAA could address this question in a forum.