Algorithms that discriminate in hiring processes[1]. Microsoft’s Twitter bot quickly turning racist by mimicking the content it was fed[2]. These are just two examples that have contributed to raising concerns about the use of AI and have made ethics a priority in this booming field. Not only can AI be discriminatory and biased, but the World Economic Forum has identified a series of fundamental ethical concerns that arise with the use of AI, such as its impact on the job market, the distribution of wealth, or its effects on our behaviour, to name just a few[3].

The field that studies these issues is the ethics of AI, which is concerned both with the behaviour of the humans who design AI systems and with machine ethics, which refers to the behaviour of the machines themselves[4]. In this regard, there have been numerous attempts to create guidelines for trustworthy AI[5]. From the nonprofit sector, we find the Data Ethics Principles[6] or the Future of Life Institute’s Asilomar AI Principles[7], for instance. International organisations such as UNESCO[8] or the OECD[9] have also released recommendations. Governmental bodies have issued their share as well, both in individual countries (China[10], Germany, India, Japan, France[11]…) and in transnational bodies such as the G7, which released the Charlevoix principles in 2018[12], or the EU, with the Ethics Guidelines for Trustworthy AI in 2019[13]. Even the private sector has jumped in with codes of conduct like the Partnership on AI, which includes Apple, Amazon, Google and Microsoft, among others[14]. Some companies, like Google, have their own ethics teams, though these are not exempt from controversy[15].

The number of ethics documents is so large that some researchers have analyzed and compared the entire corpus of principles and guidelines.

The five ethical principles found to be most commonly cited were transparency, justice and fairness, non-maleficence, responsibility and privacy[16]. Let’s dive into these.

First and foremost, transparency refers to explainability, interpretability and other acts of communication and disclosure. Secondly, justice and fairness entails the prevention, monitoring or mitigation of unwanted bias and discrimination, and can also include equality, inclusion and mechanisms for appealing algorithmic decisions. Thirdly, non-maleficence accounts for issues of safety and security, including the idea that AI should never cause foreseeable or unintentional harm, and even explicit prohibitions of uses such as cyberwarfare or malicious hacking. Fourthly, responsibility is a blurrier principle and rarely defined; it can include references to AI integrity as well as the need to attribute responsibility and legal liability to specific persons. Finally, privacy often refers to data protection and data security, framed as both values and rights.

The studies also found that almost none of the guidelines examined discussed AI in terms of care, nurture, help, welfare, social responsibility or ecological networks[17], nor dignity or solidarity[18]. Moreover, the documents mostly came from economically developed countries, while Central and South America, Africa and Central Asia were severely underrepresented and thus excluded from the ethics discourse[19].

Another challenge identified was the broadness of the concept of AI, which covers a wide variety of technologies with different requirements and processes; a catch-all ethical solution is therefore of little use. Hagendorff suggests focusing on “microethics” instead, by which he means concentrating on specific AI applications and systems and on more substantive work, so that the research can be implemented concretely[20].

In conclusion, recommendations and guidelines are a good step towards mitigating the harmful effects of AI, but they are not enough. They are fragmented and constitute soft law, which is non-binding. This means there are no strict implementation requirements, no accountability, and no mechanisms for raising cases of deviance. Admittedly, this is the very reason they are so appealing and widespread[21]. However, recognising rights specific to the AI context, adopting mechanisms for complaints and remedies, and instituting independent audits could help turn these principles into practice. In this sense, AI4DA is glad to be a member of the European AI Alliance, a forum engaged in a broad and open discussion of all aspects of AI development and its impacts[22], in order to help shape regulatory and policy approaches to the challenges posed by AI in an ethical way. For more information on the relationship between ethics and AI, sign up for our upcoming BrAInstorm Talk, which will explore innovative ways in which rules can be applied to ensure ethical behaviour in the context of Artificial Intelligence. Don’t miss out!


[1] Barnes, Patricia (2019). “Artificial Intelligence Poses New Threat to Equal Employment Opportunity”. Forbes. Retrieved 16 December 2020.

[2] Hunt, Elle (2016). “Tay, Microsoft’s chatbot, gets a crash course in racism from Twitter”. The Guardian. Retrieved 16 December 2020.

[3] Bossman, Julia (2016). “Top 9 ethical issues in artificial intelligence”. World Economic Forum. Retrieved 16 December 2020.

[4] Müller, Vincent C. (30 April 2020). “Ethics of Artificial Intelligence and Robotics”. Stanford Encyclopedia of Philosophy. Retrieved 16 December 2020.

[5] “AI Ethics Guidelines Global Inventory”. AlgorithmWatch. Last updated April 2020. Accessed 16 December 2020.

[6] Tranberg, Pernille; Hasselbalch, Gry; Olsen, Brigitte K.; Byrne, Catrine Søndergaard (2018). “Data Ethics Principles”. Accessed 16 December 2020.

[7] “Asilomar AI Principles”. Future of Life Institute. Accessed 16 December 2020.

[8] UNESCO (2017). “Report of COMEST on Robotics Ethics”. Retrieved 16 December 2020.

[9] OECD (2019). “Recommendation of the Council on Artificial Intelligence”. OECD/LEGAL/0449. Retrieved 16 December 2020.

[10] National New Generation Artificial Intelligence Governance Professional Committee (2019). “New Generation Artificial Intelligence Governance Principles: Development of Responsible Artificial Intelligence”. Beijing. Retrieved 16 December 2020.

[11] Villani, Cédric (2019). “For a Meaningful Artificial Intelligence: Towards a French and European Strategy”. Paris. Retrieved 16 December 2020.

[12] G7 (2018). “Charlevoix Common Vision for the Future of Artificial Intelligence”. Retrieved 16 December 2020.

[13] High-Level Expert Group on AI (2019). “Ethics Guidelines for Trustworthy AI”. Brussels. Retrieved 16 December 2020.

[14] Partnership on AI (2016–2020). “Tenets”. Retrieved 16 December 2020.

[15] Piper, Kelsey (2019). “Exclusive: Google cancels AI ethics board in response to outcry”. Vox. Retrieved 16 December 2020; and

Hao, Karen (2020). “We read the paper that forced Timnit Gebru out of Google. Here’s what it says”. MIT Technology Review. Retrieved 16 December 2020.

[16] Jobin, Anna; Ienca, Marcello; Vayena, Effy (2019). “The global landscape of AI ethics guidelines”. Nature Machine Intelligence 1, 389–399.

[17] Hagendorff, Thilo (2020). “The Ethics of AI Ethics: An Evaluation of Guidelines”. Minds and Machines: 1–22.

[18] Jobin, Anna; Ienca, Marcello; Vayena, Effy (2019). “The global landscape of AI ethics guidelines”. Nature Machine Intelligence 1, 389–399.

[19] Ibid.

[20] Hagendorff (2020).

[21] Hagendorff (2020).

[22] “The European AI Alliance”. European Commission. Retrieved 16 December 2020.