Newsletter

Strengthening institutions, safeguarding integrity: Ethics in academia and AI
How can clearly defined institutional policies and strong ethical governance protect researchers and foster responsible innovation? Discussions from the PAOLA project and the AI ethics session at the MCAA Annual Conference 2025 point the way forward.
During the MCAA Annual Conference 2025 in Kraków, two insightful sessions highlighted how strong institutions are essential for safeguarding integrity and promoting responsible innovation. The PAndora bOx of whistLeblowing in Academia (PAOLA) project’s final conference and the session AI Ethics and Integrity: Balancing Innovation with Responsibility in Research underlined the critical need for clear ethical guidelines and transparent governance in academia.
Institutional Clarity in Whistleblowing: Insights from PAOLA
Whistleblowing is a crucial mechanism to uphold accountability, yet it remains highly sensitive and under-supported in academia. A recent survey conducted under the PAOLA project revealed that over 90% of respondents had witnessed unethical conduct, and nearly 80% had faced it themselves. However, many lacked confidence in current systems: 70% questioned their confidentiality, and 87% found them difficult to navigate. These findings highlight a persistent disconnect between institutional policies and researchers’ everyday realities.
The satellite event hosted by the PAOLA project explored how institutions can better protect integrity by providing robust, transparent, and supportive whistleblowing channels that researchers can trust. The session featured two panels. The first presented key findings from the PAOLA project, while the second engaged in a forward-looking dialogue on persistent challenges and future directions. Together, they highlighted the need for stronger institutional frameworks to foster a culture of integrity and empower researchers.
A central takeaway from both panels was the importance of institutions addressing three fundamental questions:

• What constitutes misconduct? Institutions need to clearly define reportable offences, such as data falsification, plagiarism, harassment, ethical breaches, or misuse of funds. Without precise, shared definitions, whistleblowers may hesitate to report the misconduct they encounter.
• How can concerns be raised safely? Effective policies must guarantee confidentiality and establish secure reporting mechanisms. Best practices include secure digital reporting platforms, anonymous channels, timely responses, and safeguards against retaliation.
• Who is responsible for managing reports? Clearly designated roles, such as ethics officers, ombudspersons, or dedicated committees, must be communicated to ensure consistency and fairness in the process.

MCAA Satellite Event – Final Conference of the Pandora Box of Whistleblowing in Academia (PAOLA) Project, held on March 20, 2025. On the podium (left to right): Gianluigi Maria Riva (Legal Insights), Francisco Valente Gonçalves (Principal Investigator), Szidonia Rusu (Observatory Presentation), and Susana Sousa Lourenço (Data Insights).
Speakers emphasised that policy alone is not enough. Institutions must also actively train staff, raise awareness, and demonstrate visible leadership in ethical governance. A supportive culture is essential, where researchers view whistleblowing as a shared responsibility rather than a personal risk. When researchers trust their institutions, they are more likely to speak up with confidence that they will be heard and protected.
The PAOLA session made a compelling case for institutional clarity, both in written rules and in practice. By embedding ethical conduct into their everyday operations and governance structures, academic institutions can respond to misconduct more effectively and foster long-term trust among researchers.
As part of its practical contributions, the session also helped raise awareness about the Observatory. This digital platform provides toolkits, legal resources, case studies, and e-learning materials to support whistleblowers and institutions. It connects users with advocacy networks and pro bono legal aid, helping translate ethical principles into practice.

The MCAA Annual Conference session on AI Ethics & Integrity for Responsible Research & Innovation, held on March 21, 2025. On the podium (left to right): Liviu Știrbăț (Head, AI in Science, European Commission), Ornela Bardhi (MCAA Board member), and Theodota Lagouri (Chair, MCAA Policy Working Group). Mihalis Kritikos (Secretary, European Group on Ethics in Science and New Technologies) and Gábor Szüdi (Centre for Social Innovation) joined virtually.
Institutional Responsibility in AI Ethics
The AI Ethics session built upon these institutional themes, emphasising ethical governance as vital to responsible AI research. Experts from the European Commission and policy advisors highlighted institutional responsibilities related to privacy, bias, transparency, and sustainability in AI research and policy.
Key ethical considerations discussed included:
• Privacy and Data Protection: Given AI’s reliance on vast, sensitive datasets, institutions must rigorously enforce privacy protection frameworks to ensure ethical compliance.
• Bias and Fairness: Institutions must actively identify and correct biases in AI systems to avoid discrimination in critical areas such as healthcare, employment, and criminal justice.
• Transparency and Explainability: Transparency in AI decision-making was a recurring theme. Institutions should advocate for explainable AI systems, enabling users and stakeholders to understand and challenge AI-driven decisions.
• Environmental Sustainability: With AI’s growing ecological footprint, institutions must lead sustainability efforts, balancing technological innovation with environmental responsibility.
Speakers provided a comparative analysis of international AI governance, contrasting the EU’s human-rights-oriented framework with China’s state-centric approach. Despite differences, shared global ethical standards were highlighted as potential common ground.
Initiatives such as the EU’s Choose Europe, which aims to attract global research talent by fostering trust and ethical clarity in AI research, were also discussed. Speakers underscored the importance of ethics by design – embedding ethical principles throughout the AI development process. Despite regional differences, there is growing momentum for global cooperation. Researchers are strongly encouraged to engage with ethical frameworks to ensure innovation aligns with fairness, sustainability, and the protection of rights. Institutional support was described as crucial in enabling researchers to pursue responsible AI practices without concerns over data misuse or intellectual property violations.
Taken together, the sessions emphasised that ethical integrity must be embedded not only in individual practices but also within the broader frameworks of institutional governance.
Institutional Ethics: The Way Forward
Both sessions underscored the critical role of ethically robust institutions as foundations for trustworthy and innovative research environments. Strong institutional frameworks – built on clarity, transparency, and proactive oversight – are essential. They not only protect researchers but also enable responsible and sustainable innovation.
Key takeaways included:
• The need for clear definitions and procedures to guide ethical responses to misconduct and AI innovation.
• The importance of visible leadership in creating trust and accountability within research institutions.
• The value of cross-border cooperation in setting and upholding ethical standards.
• The necessity of integrating ethics into the design stage of policies and technologies.
This emphasis echoes broader academic insights. For instance, Acemoglu and Robinson (2012) argue that effective institutions are fundamental to societal and economic progress. In the context of research, this underscores the need for clearly articulated policies, ethical oversight, and accountability mechanisms to address current challenges and anticipate future ethical complexities in both whistleblowing and AI research.
Ultimately, embedding ethics throughout institutions—from whistleblower protection to AI governance—fosters a culture of integrity and accountability, ensuring research advances responsibly and serves both the scientific community and the public good.
The Policy Working Group
X | LinkedIn
policy@mariecuriealumni.eu

Srishti Goyal
ORCID | X | LinkedIn
MCAA Newsletter Editorial Board Member
srishti.goyal1808@gmail.com

Theodota Lagouri
ORCID | X | LinkedIn
MCAA Policy Working Group Chair
Yale University & CERN
theodota.lagouri@cern.ch
References
Acemoglu, D., & Robinson, J. A. (2012). Why nations fail: The origins of power, prosperity, and poverty. Crown Business.