
In my PhD course on Ethical Problems in Communication at the University of Sharjah (UAE), I asked students to engage in a deliberately modest task: take one concrete AI application in communication and reflect on it using Luciano Floridi’s framework as developed in Chapters 1 and 2 of Ethics, Governance, and Policies in Artificial Intelligence. No pressure to produce a fully referenced article, just structured thinking. As part of the exercise, I also asked them to translate their reflections into a short podcast, to test whether ethical reasoning can move beyond academic writing.
What emerged is more revealing than many polished papers.
Across the reflections, AI is consistently approached not as a neutral tool, but as an infrastructure that reorganizes communication itself, shaping visibility, participation, and decision-making.
Eman Al Suwaidi’s analysis of recommender systems frames the issue as a tension between algorithmic visibility and cultural sovereignty. She shows how Emirati content is not excluded outright, but gradually rendered invisible through engagement logics. The ethical problem, then, is not access, but the conditions under which visibility is distributed. Listen to the full podcast on Spotify.
Manar Daher focuses instead on AI-generated influencers and shows how cultural identity can be performed without communities, producing a form of extraction where representation is detached from those who embody it. Here, the tension between beneficence and justice becomes structural rather than incidental. Listen to the full podcast on Spotify.
Layal Ayoub’s reflection on targeted advertising pushes the argument further by locating the problem in the exploitation of vulnerability, where systems do not simply persuade but identify and act upon moments of weakness. The issue is not only harm, but the erosion of meaningful autonomy. Listen to the full podcast on Spotify.
Salem Al Hakm Al Shaebi, working on AI-generated content and deepfakes, identifies a different dynamic: the gradual erosion of trust in communication itself, where the distinction between authentic and synthetic content becomes unstable. The ethical issue here is not just misinformation, but the destabilization of epistemic reliability. Listen to the full podcast on Spotify.
Mariam’s reflection on digital religion introduces a less discussed dimension: what happens when AI enters domains grounded in intentionality and meaning, raising the question of whether automated practices can coexist with forms of belief traditionally tied to human presence. Listen to the full podcast on Spotify.
Yousra’s work frames algorithmic curation of mental health content as a form of epistemic injustice, where entire knowledge systems are displaced. Importantly, she also turns the critique inward, noting the tension of using a Western ethical framework to analyze Western bias, an observation that exposes the limits of the framework itself. Listen to the full podcast on Spotify.
These are not isolated observations. Together, they echo Floridi’s argument in Chapter 1: AI is embedded in an informational environment that structures how reality is known and experienced. Ethical analysis must therefore move from outputs to infrastructures.
Chapter 2 becomes particularly useful at this point. Faced with the proliferation of competing ethical guidelines, Floridi proposes a unified framework of five principles (beneficence, non-maleficence, autonomy, justice, and explicability), not as a checklist, but as a way to structure ethical reasoning.
What the students show, often implicitly, is that these principles are not stable. They collide.
Systems that maximize beneficence (efficiency, engagement) often undermine justice. Personalization that appears to support autonomy can quietly restrict it. Fairness becomes difficult to assess when systems are opaque.
This is where explicability becomes central. As Floridi argues, it is not simply another principle, but a second-order condition that connects intelligibility with accountability. Students repeatedly arrive at this point: without understanding how systems work, ethical evaluation collapses into speculation.
Yet their reflections also expose a limit in Floridi’s framework. Even when explicability is achieved, many of the problems they identify (cultural marginalization, epistemic dominance, economic extraction) are not merely technical or informational. They are structural and political.
This aligns with another key issue raised in Chapter 2: the difficulty of operationalizing ethical principles. Agreement at the level of values does not translate easily into practice. Students repeatedly encounter this when proposing solutions: transparency conflicts with business models, fairness with engagement metrics, cultural sensitivity with scalability.
What emerges from these reflections, and from the accompanying podcasts, is not a set of answers, but a shared orientation. Students move away from asking whether AI is “good” or “bad,” and instead examine how it structures communication, knowledge, and power.
If anything, this exercise confirms something simple but often overlooked: teaching AI ethics at the PhD level is not about producing solutions. It is about training the ability to identify tensions, situate responsibility, and remain intellectually honest about what cannot be resolved.
And that, perhaps, is exactly where ethical inquiry should begin.