Artificial Intelligence (AI) is influencing every child in many different ways, from how they are conceived and born, to the services they can access and how they learn, to the jobs they will train for. While the AI age offers many opportunities for children, the risks are substantial. Striking the right balance between child protection and technological advancement is crucial for building AI-powered solutions that help realize and uphold child rights. Further, the 2030 Agenda for Sustainable Development places the protection of children at the heart of the policy actions of every nation, with 8 of the 17 SDGs directly linked to this topic.
Traditionally, three major forces shaped the development of children: the family, the school and the street. This traditional structure has been changing rapidly in recent years, however, as the internet, and by extension technology more broadly, increasingly exerts influence on children’s behaviour and well-being.
While the school and the street are environments that parents can more readily choose and supervise, technology is much harder to monitor. And the risks of misused technology can run far deeper and wider than we might think: pornography, sexting, fabricated identities and identity theft, harmful content and more.
Advances in AI provide us with an unprecedented opportunity to address child abuse and build child-safeguarding solutions that run on-device. In particular, image classification algorithms based on deep learning have continually improved in accuracy. Mobile hardware has been evolving in parallel with neural networks, and many mobile devices are now fitted with dedicated, powerful AI chips. By utilising these resources, it should be feasible to perform inference when images are loaded for rendering, thus blocking harmful images before they are displayed. The implementation approach can be similar to on-access virus scanning, where the scanner continually monitors the device and automatically activates each time an image is accessed by a program. An alternative implementation can work through certification, where applications certified as “child safe” must use a set of APIs provided by the operating system. Existing parental control mechanisms in iOS, Android and Windows can then be extended to allow parents to control on-device filtering.
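The on-access idea above can be sketched in a few lines. The classifier below is a hypothetical stand-in (a real deployment would run a quantised deep-learning model on the device's AI chip); the point is the control flow: every image is scanned at load time and only passed on for rendering if it falls below a harm threshold.

```python
# Sketch of on-access image filtering, analogous to on-access virus scanning.
# classify() is a hypothetical stub standing in for an on-device deep-learning
# image classifier; the names and threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ScanResult:
    safe: bool
    score: float  # estimated probability that the image is harmful, in [0, 1]

def classify(image_bytes: bytes) -> ScanResult:
    """Hypothetical classifier stub: flags images carrying a marker prefix
    so the scan-then-render pipeline can be demonstrated end to end."""
    score = 0.9 if image_bytes.startswith(b"HARMFUL") else 0.05
    return ScanResult(safe=score < 0.5, score=score)

def render_image(image_bytes: bytes, draw: Callable[[bytes], None],
                 threshold: float = 0.5) -> bool:
    """Scan an image at load time; only invoke the renderer if it is safe.

    Returns True if the image was rendered, False if it was blocked
    before being displayed.
    """
    result = classify(image_bytes)
    if result.score >= threshold:
        return False  # blocked before it is ever drawn on screen
    draw(image_bytes)
    return True
```

In the certification variant described above, `render_image` would be the OS-provided API that certified “child safe” applications are required to call, with the threshold governed by the parental-control settings rather than the app.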
Additionally, adaptable AI has unlocked opportunities for personalized learning tools, AI-driven bots that act as virtual therapists for emotional support, and “intelligent systems” that can enable accessibility for children who are differently abled. In the UK, for instance, computer-generated analysis – machine learning that produces predictive analytics – can help social workers assess the probability of a child coming on to the at-risk register, and can help show how that outcome might be prevented.
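To make the predictive-analytics idea concrete, here is a minimal sketch of the kind of model such a system might use: a logistic regression turning case features into a probability. The features, weights and bias below are entirely hypothetical assumptions for illustration; real systems are trained on case records and must be audited for bias before informing any social-work decision.

```python
# Illustrative sketch only: a logistic model scoring the probability that a
# case warrants review for the at-risk register. All feature names and
# weights are hypothetical assumptions, not taken from any real system.

import math

WEIGHTS = {
    "missed_school_days": 0.08,
    "prior_referrals": 0.6,
    "household_instability": 0.9,
}
BIAS = -3.0  # keeps the baseline probability low when all features are zero

def risk_probability(features: dict) -> float:
    """Logistic regression: the sigmoid of a weighted sum of case features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Such a score is only a decision aid: the output is a probability, not a verdict, and the same transparency about weights that makes this sketch readable is what allows a social worker to see which factors drive a given score.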
In a nutshell, AI is being used around the world to help protect children, but it raises ethical questions and brings significant risks.
The Netflix documentary The Social Dilemma alerts us to the dangerous human impact of social networking. The film shines a light in particular on the cognitive manipulation of children and adolescents. This AIDANote serves as a starting point for discussion and collaboration on AI’s implications for children.
 According to UNICEF, “…child protection refers to preventing and responding to violence, exploitation and abuse against children including commercial sexual exploitation, trafficking, child labor and harmful traditional practices, such as female genital mutilation/cutting and child marriage.”
Statistics show that many children aged 11–16 in the UK have seen explicit sexual material online.