Dozens of leading apps accused of putting children in danger

Dozens of tech companies, including the most popular social media, gaming and dating apps, are systematically endangering children online and breaching the UK’s new Children’s Code, an investigation has alleged.

The research, conducted by the children’s digital rights charity 5Rights, was submitted to the Information Commissioner’s Office (ICO) on Friday as part of a complaint written by Baroness Beeban Kidron, the charity’s chair and the member of the House of Lords who originally proposed the Code.

The companies that 5Rights analysed range from the well-known, such as TikTok, Snap, Twitter and Instagram, to lesser-known platforms such as Omegle, Monkey and Kik. The violations it alleged include design tricks and nudges that encourage children to share their locations or receive personalised advertising; data-driven features that serve harmful material, including on eating disorders, self-harm and suicide; and insufficient assurance of a child’s age before allowing inappropriate actions such as video-chatting with strangers.

Other alleged infringements included the wholesale sharing of children’s data with a plethora of unrelated third parties: gaming companies, for instance, shared data with advertising companies such as Google, food-delivery companies such as Grubhub and Uber, and social-media platforms ranging from Pinterest to Facebook.

The UK’s Age Appropriate Design Code (or Children’s Code), which came into force in August after a year’s grace period for companies to come into compliance, has been touted as a pioneering piece of regulation.

Breaches of the Code carry the same potential penalties as the EU’s General Data Protection Regulation (GDPR), including a fine of up to 4 per cent of global turnover for companies that do not comply.

Members of the US Congress have called on major US tech and gaming companies to voluntarily adopt the ICO’s Code for American children.

The Code has already led the largest social-media platforms, including YouTube, Instagram and TikTok, to introduce design changes to their services, but problems remain. Kidron’s complaint detailed a swath of alleged breaches across 102 platforms to bring attention to the systemic and prolific nature of the infringements.

“The Code is not a pick ‘n’ mix, it is interconnected and cumulative. Instead of going after particular companies or specific breaches, our point is this is so widespread,” Kidron said.

“The ICO guidance says it is unlikely the commercial interest of a company would ever outweigh the best interest of a child. We believe every single thing we put forward here, all the pages of this report, contravene the first standard of the code, which is: it is not in the best interests of the child.”

She hopes that the regulator will take the view that the behaviour of companies is “unacceptable” and use the complaint to “raise the bar” and investigate egregious breaches.

The ICO said: “Online companies have had a year to prepare for the Children’s Code, and should have now redesigned their services in children’s best interests, changing the way children’s data is used.”

It added: “We will be considering the 5Rights research as part of our ongoing operation to assess online services that pose the highest risk to children . . . and will not hesitate to take action where we find evidence of breaches of the law.”

Through its investigation, the team alleged a range of transgressions, the most serious of which are outlined below.

Insufficient age restrictions

For the purposes of the study, the researchers registered Android and iPhone devices as belonging to children aged eight, 13 and 15.

Using an iCloud account registered to a 15-year-old, they were able to download 16 dating apps with a minimum age rating of 18, including Tinder, Happn, Find Me A Freak and Bumble, from the Apple App Store, “simply by tapping ‘OK’ to confirm we were over the required age”.

The complaint added: “Apple does not prevent underage users from downloading adult-only apps from the App Store.”

Apple did not immediately respond to a request for comment.

Meanwhile, dozens of apps, such as Omegle, a chat site that pairs users with strangers to talk via text or video, ask for a date of birth to confirm users are at least 13 and say parental consent is required for anyone under 18. “But there is no mechanism to prove the parent has reviewed it, it is a tick-box declaration that you just review and accept terms,” said Izzy Wick, director of UK policy at 5Rights.

“Omegle does not use any form of age verification. One of the teenagers we interviewed as part of a previous study stated that, as a child, he spent a lot of time on Omegle and encountered sexual content on the service, often engaging with adults,” the complaint said.

Leif K-Brooks, the site’s founder, said: “Omegle does require age verification and parental approval. Each user is obligated to provide that through self-declaration.”

Data-driven harms

The team found that algorithmic recommendation systems were serving harmful material to children or endangering their safety by, for instance, connecting them with adult strangers.

In one example, the complaint said that searching ‘self-harm’ from an avatar account registered as a child showed profiles of Twitter users sharing images and, in some cases, videos of them cutting themselves.

“When searching terms ‘#shtwt’ (self-harm tweet), ‘#ouchietwt’, and ‘#sliceytweet’ we also found examples of Twitter users telling others which razors they should use for self-harming and where to buy them, as well as users who shared extreme diet tips for achieving clinically underweight ‘goal’ weights. All of this content directly contravenes Twitter’s own content policy,” the researchers wrote.

Wick added: “This is a failure of these platforms to enforce their own community rules. The safeguards they put in place are not sufficient. Having a link to Samaritans below pictures of someone with their wrists slashed open is not enough.”

Twitter said: “It is against the Twitter rules to promote, glorify or encourage suicide and self-harm. Our number-one priority is the safety of the people who use our service. If tweets are in violation of our rules on suicide and self-harm and glorification of violence, we take decisive and appropriate enforcement action.”

It added that #shtwt, #ouchietwt and #sliceytweet have been blocked from appearing in any future trends on the app.

Design tricks

Subtle ways of nudging children to give up their privacy contravene the Children’s Code. 5Rights’ investigation found that Monkey, a video-chat social-media app, uses pop-up memes to encourage users to give the app access to their location, which is then used to match them with others in their area, including unknown adults. Monkey did not respond to a request for comment.

Meanwhile, the Snap app anonymises its location feature, Snap Maps, by default for under-16s, but nudges users to enable their location, which can then be used for advertising, the researchers said.

Snap said that location-specific protections were in place for younger users on Snapchat, such as not using their precise location data for targeted advertising purposes.


