Editorial - June 2023

Newsletter

Dear MCAA members,

“In recent years, large language models (LLMs) have emerged as a significant breakthrough in the field of artificial intelligence. These models, such as GPT-3, can process vast amounts of text data and generate human-like responses to a wide range of natural language queries.

The development of LLMs has enormous potential for a range of applications, from research and healthcare to education, entertainment, but also customer services. Their ability to understand and respond to natural language queries in a way that is both accurate and engaging opens up new possibilities for improving the way we interact with technology.

However, the development of large language models also raises important ethical and social considerations. One of the most significant concerns is the potential for these models to perpetuate biases and discrimination if they are trained on biased data.”

The three paragraphs above are ChatGPT’s answer when prompted, “Write an editorial about Large Language Models.”

In recent months, we have seen increasing interest in LLMs, and for this reason, we have tried to bring some clarity to this topic. We wished to better understand how LLMs are used in research (if they are used at all), what their ethical implications are, and what the future looks like. We were thus motivated to decipher such a complicated topic, and what better way to do so than to invite our members to share their knowledge, experience, and thoughts on the benefits and risks of these technologies, and on how to adopt them with caution in our research activities.

Image by Midjourney, prompt by Fabrizio Martina

The input from the authors who responded positively to our call for articles led to this issue, which covers a wide variety of topics. You will read about the history of LLMs, as well as the present and future implications of their development, such as data reuse. From a researcher's perspective, aspects such as co-authorship, systematic reviews, and how PhD students make use of LLMs' capabilities are highlighted. The authors also write about collective intelligence, breaking language barriers, and building a more inclusive society, and they discuss the ethical concerns around LLMs in depth. You will also get a glimpse of further applications and how LLMs are already being used in healthcare, digital humanities, and project management, to name a few.

Moreover, as usual, this issue includes an update on the Association's recent activities, underlining, for example, the role of the MCAA in the EU Science Diplomacy Alliance, as well as explaining the importance of continuous learning and how the MCAA is supporting career development through its learning program.

In summary, this collection of articles untangles some of the most important research topics at the intersection of artificial intelligence and humanity, and explores how AI can shape our thoughts and decisions as researchers. Many questions about LLMs are yet to be answered, but we hope that, after reading this issue, you will have gained some clarity on the topic.

As ChatGPT writes when prompted, “Write a 4-line poem on Large Language Models:”

“Machine learning at its best,
Language models ace every test.
Text generation, translation too,
Endless possibilities, all thanks to you”

We believe that the “human side” and our input will always be needed. To use these tools correctly, we will need to understand their full potential and limitations, and to address the wide variety of ethical concerns around them.

Nicoleta Spînu
Guest-editor
Vice-chair of the Communication Working Group
mcaa.cwg.vice@gmail.com
Twitter @nicospinu

Ruben Riosa
Guest-editor
Chair of the Communication Working Group
communication@mariecuriealumni.eu
Twitter @Ruben_Riosa
Follow the CommWG on Twitter @MCAA_CommWG