Children are already interacting with AI technologies in many different ways: these systems are embedded in toys, virtual assistants, video games, and adaptive learning software. Their impact on children's lives is profound, yet UNICEF found that, when it comes to AI policies and practices, children's rights are an afterthought, at best.
In response, the UN children’s agency has developed draft Policy Guidance on AI for Children to promote children’s rights, and raise awareness of how AI systems can uphold or undermine these rights.
Conor Lennon from UN News asked Jasmina Byrne, Policy Chief at the UNICEF Global Insights team, and Steven Vosloo, a UNICEF data, research and policy specialist, about the importance of putting children at the centre of AI-related policies.
AI technology will fundamentally change society
Steven Vosloo At UNICEF we saw that AI was a very hot topic, and something that would fundamentally change society and the economy, particularly for the coming generations. But when we looked at national AI strategies, and corporate policies and guidelines, we realized that not enough attention was being paid to children, and to how AI impacts them.
So, we began an extensive consultation process, speaking to experts around the world, and almost 250 children, in five countries. That process led to our draft guidance document and, after we released it, we invited governments, organizations and companies to pilot it. We’re developing case studies around the guidance, so that we can share the lessons learned.
Jasmina Byrne AI has been in development for many decades. It is neither harmful nor benevolent on its own. It’s the application of these technologies that makes them either beneficial or harmful.
There are many positive applications of AI: it can be used in education for personalized learning, in healthcare, and in language simulation and processing, and it is being used to support children with disabilities.
And we use it at UNICEF. For example, it helps us to predict the spread of disease, and improve poverty estimations. But there are also many risks that are associated with the use of AI technologies.
Children interact with digital technologies all the time, but they are not aware, and many adults are not aware, that many of the toys or platforms they use are powered by artificial intelligence. That's why we felt that special consideration has to be given to children, because of their particular vulnerabilities.
Privacy and the profit motive
Steven Vosloo Take an AI-powered toy: it could be using natural language processing to understand words and instructions, so it is collecting a lot of data from that child, including intimate conversations, and that data is being stored in the cloud, often on commercial servers. So, there are privacy concerns.
We also know of instances where these types of toys were hacked, and they were banned in Germany because they were not considered safe enough.
Around a third of all online users are children. We often find that younger children are using social media platforms or video sharing platforms that weren’t designed with them in mind.
They are often designed for maximum engagement, and are built on a certain level of profiling based on data sets that may not represent children.
Predictive analytics and profiling are particularly relevant when dealing with children: AI may profile children in a way that puts them in a certain bucket, and this may determine what kind of educational opportunities they have in the future, or what benefits parents can access for children. So, the AI is not just impacting them today, but it could set their whole life course on a different direction.
Jasmina Byrne Last year this was big news in the UK. The Government used an algorithm to predict the final grades of high schoolers. Because the data fed into the algorithm was skewed towards children from private schools, the results were appalling, and they discriminated against many children from minority communities. So, the Government had to abandon that system.
That's just one example of how, if algorithms are based on data that is biased, they can have really negative consequences for children.
‘It’s a digital life now’
Steven Vosloo We really hope that our recommendations will filter down to the people who are actually writing the code. The policy guidance is aimed at a broad audience, from the governments and policymakers who are increasingly setting strategies and beginning to think about regulating AI, to the private sector, which often develops these AI systems.
We do see competing interests: decisions around AI systems often have to balance a profit incentive against an ethical one. What we advocate for is a commitment to responsible AI that comes from the top: not just at the level of the data scientist or software developer, but from top management and senior government ministers.
Jasmina Byrne The data footprint that children leave by using digital technology is commercialized and used by third parties for their own profit and gain. Children are often targeted by ads that are not appropriate for them. This is something that we have been closely following and monitoring.
However, I would say that there is now more political appetite to address these issues, and we are working to get them on the agenda of policymakers.
Governments need to put children at the centre of all their policy-making around frontier digital technologies. If we don't think about them and their needs, then we are missing great opportunities.
Steven Vosloo The Scottish Government released their AI strategy in March, and they officially adopted the UNICEF policy guidance on AI for children. Part of that was because the government as a whole has adopted the Convention on the Rights of the Child into law. Children's lives are not really online or offline anymore: it's a digital life now.
This conversation has been edited for length and clarity. You can listen to the interview here.