As scientists, we are constantly exploring new tools to advance our research. One tool that has become very popular in recent years is artificial intelligence (AI). AI can be very helpful: it has the power to improve communication, accelerate discovery and enhance education. But it can also cause problems if we are not aware of its potential negative effects.
There are four things that scientists worry about when using AI:
• The data that AI uses might be biased and not fair to everyone;
• People might start relying too much on AI and forget to think for themselves;
• AI might make mistakes when it tries to understand data;
• Using AI might be unfair to some people and raise ethical concerns.
Bias in the training data
When AI is trained on data that is unfair or inaccurate, its responses to scientists’ questions will be biased or inaccurate as well. This can cause problems and have a negative impact on new discoveries.
For example, AI might not be able to recognize some people’s faces if the training data for facial recognition technology was limited to images of white individuals. This can have serious consequences, because it can lead to the wrongful arrest of members of certain populations (1).
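One way to catch this kind of problem is a simple bias audit: measure a model's accuracy separately for each demographic group instead of as a single overall number. Below is a minimal sketch in Python; the predictions, labels and group names are hypothetical placeholders, not output from a real facial recognition system.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute classification accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data for a model trained mostly on group "A"
predictions = ["match", "match", "no", "match", "no", "no"]
labels      = ["match", "match", "no", "no",    "match", "no"]
groups      = ["A", "A", "A", "B", "B", "B"]

print(accuracy_by_group(predictions, labels, groups))
# Group "A" scores 3/3 here while group "B" scores only 1/3 --
# exactly the kind of gap a bias audit should flag before deployment.
```

An overall accuracy of 4/6 would hide this disparity entirely, which is why the per-group breakdown matters.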
A personal account
Maria Patron is a scientist with a degree in biotechnology, a PhD in neurobiology and postdoctoral training in intracellular calcium homeostasis. Maria became interested in the use of AI in academia after completing a course offered by the Max Planck Society. Since then, she has been sharing her knowledge on how to effectively utilize AI to benefit scientists.
Relying too much on AI
Another concern is that scientists might start to think that AI knows everything and can replace human researchers. While AI can assist researchers in understanding large amounts of data, it cannot replace the creativity, intuition and critical thinking that are essential in scientific research. Relying too much on AI can lead to a lack of diversity in research perspectives and limit our own scientific discoveries.
For example, AI algorithms can analyze vast amounts of chemical data to identify potential drug candidates, but only human intuition and creativity can weigh other important factors, such as unforeseen side effects that cannot be predicted by the algorithm (2).
Misinterpretation of data
Sometimes AI makes mistakes when it tries to interpret scientific data. It may not grasp the context and nuances of scientific language, leading to inaccurate responses, and it may fail to tell apart two things that look very similar but are actually different. This becomes a problem if scientists make important decisions based on that wrong information.
For example, AI algorithms are used to analyze large amounts of genetic data to identify patterns and associations that may be difficult for humans to detect. However, these algorithms may not take into consideration the biological function of the gene, making it difficult to determine whether the genetic variant is actually causally related to the disease or whether it is simply a bystander. This can lead to potentially harmful interventions (3).
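The bystander problem can be illustrated with a toy simulation (all numbers below are invented for illustration): one variant truly raises disease risk, while a second variant is merely correlated with the first, yet a purely statistical screen flags both.

```python
import random

random.seed(0)

# Toy simulation: variant_a causally raises disease risk; variant_b is a
# "bystander" that merely co-occurs with variant_a (e.g. genetic linkage).
n = 10_000
people = []
for _ in range(n):
    a = random.random() < 0.3
    b = a if random.random() < 0.9 else (random.random() < 0.3)  # linked to a
    disease = random.random() < (0.4 if a else 0.05)  # only a is causal
    people.append((a, b, disease))

def disease_rate(variant_index):
    """Disease frequency among carriers of the variant at the given index."""
    carriers = [p for p in people if p[variant_index]]
    return sum(p[2] for p in carriers) / len(carriers)

print(f"disease rate among variant_a carriers: {disease_rate(0):.2f}")
print(f"disease rate among variant_b carriers: {disease_rate(1):.2f}")
# Both rates come out well above the ~5% baseline, so a purely
# statistical screen would flag variant_b too, even though it
# plays no causal role in the disease.
```

Distinguishing the two requires biological knowledge that the association statistics alone do not contain, which is exactly the gap described above.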
Ethical concerns
Using AI in research raises ethical concerns around data privacy, data security, transparency and ownership. AI algorithms require large amounts of data to function, and it is important to ensure that the information scientists use is obtained ethically and with the proper consent of the individuals involved. Additionally, the use of AI in research may lead to the commodification of data, where individuals’ personal information is bought and sold without their knowledge or consent.
As a practical example, a researcher may collect personal data from individuals without their informed consent, or may use data that has been obtained unethically, such as through hacking or unauthorized access. This can result in harm to individuals, such as identity theft or financial fraud (4).
It is important for us as scientists to approach the use of AI with caution and thoughtfulness. While AI can certainly enhance our research, we must ensure that it is not used to replace human researchers or perpetuate bias and discrimination. We must also be mindful of the ethical implications of using AI in research and take steps to protect the privacy and ownership of data.
To mitigate the potential negative impact of AI on scientific research, we suggest the following:
• Ensure that the data used to train AI algorithms is diverse, unbiased, and obtained ethically;
• Use AI as a tool to complement human researchers, rather than as a replacement;
• Implement regular bias checks on AI algorithms to ensure they are not perpetuating bias;
• Validate the results obtained through AI analysis using independent methods;
• Establish clear guidelines and protocols for the ethical use of AI in research, including data privacy and ownership.
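As one illustration of the validation point above, AI-derived results can be cross-checked against an independent method and any disagreements flagged for human review. A minimal sketch follows; the sample ids, values and tolerance are hypothetical.

```python
def flag_for_review(ai_results, independent_results, tolerance=0.05):
    """Compare AI-derived values with those from an independent method;
    return the sample ids whose results disagree beyond the tolerance
    (or that the independent method did not cover at all)."""
    flagged = []
    for sample_id, ai_value in ai_results.items():
        check_value = independent_results.get(sample_id)
        if check_value is None or abs(ai_value - check_value) > tolerance:
            flagged.append(sample_id)
    return flagged

# Hypothetical measurements (e.g. AI-predicted vs. lab-measured values)
ai_results          = {"s1": 0.81, "s2": 0.40, "s3": 0.95}
independent_results = {"s1": 0.79, "s2": 0.62, "s3": 0.94}

print(flag_for_review(ai_results, independent_results))  # → ['s2']
```

Only the flagged samples need a researcher's attention, so this kind of routine cross-check keeps the human in the loop without re-examining every result by hand.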
In conclusion, while AI has the potential to enhance our scientific research, we must proceed with caution and consider its potential negative impact. By being mindful of these concerns and taking steps to mitigate them, we can ensure that AI is used in a responsible and ethical manner that benefits scientific research and society as a whole.
1. SITNFlash. (2020, October 26). Racial discrimination in face recognition technology. Science in the News. https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/
2. Heaven, W. D. (2023, March 9). AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work. MIT Technology Review. https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/
3. Dias, R., & Torkamani, A. (2019). Artificial intelligence in clinical and genomic diagnostics. Genome Medicine, 11(1). https://doi.org/10.1186/s13073-019-0689-8
4. Raimundo, R., & Rosário, A. T. (2021). The impact of artificial intelligence on data system security: A literature review. Sensors, 21(21), 7029. https://doi.org/10.3390/s21217029