On Tuesday, the Ministry of Electronics and IT (MeitY) shot off a letter to Facebook India's Managing Director Ajit Mohan seeking information on the processes the social media company follows to moderate content on its platform and the methods it employs to prevent harm to online users, the sources said. The missive follows recent revelations by Facebook whistleblower Frances Haugen that have "alarmed" the government, specifically with regard to the so-called India experiment, in which a dummy user's feed was filled with fake news and hate speech within three weeks of the account being opened, they added.
Haugen's revelations have also flagged the promotion of violent and provocative posts, especially anti-Muslim content, on Facebook's India platform.
"The government has asked for information about the algorithms that Facebook is using for content moderation and how they are preventing online harms, which are being caused by this kind of content," said one person cited above.
“They (Facebook) should prevent harmful content from showing on anyone’s feed or wall,” said the person, adding that based on the company’s response, the government will “further investigate”. “The government has to probe how their (Facebook’s) systems currently work and how they plan to reform or change it,” sources said.
Facebook declined to comment on the development.
ET had on Monday reported that privacy experts and civil society are calling on the Indian government to seek more algorithmic accountability from Facebook in the light of the recent revelations.
The government can demand such information by exercising India's sovereign power and under the legal framework of the IT Rules and the IT Act, which prescribe due diligence, those in the know told ET.
India's newly notified IT Rules under the IT Act prescribe "due diligence" for platforms with regard to content that is "grossly harmful…hateful, or racially, ethnically objectionable…or otherwise unlawful in any manner whatsoever" under Rule 3.
“The government has also questioned Facebook on the due diligence that is prescribed under the IT Rules and how they prevent harm…,” people aware of the issue said.
US lawmakers investigating how Facebook Inc. and other online platforms shape users’ world views are considering new rules for the artificial intelligence programs blamed for spreading malicious content, Bloomberg reported on Tuesday.
Haugen, a former data scientist at Facebook, alleged earlier this month, citing internal company documents, that the social media giant allocates only 13% of its budget for curbing misinformation to markets outside the US, including India, where it has its largest user base.
“Facebook has admitted, after the documents were leaked, that platforms are not working as per the way they are supposed to, so the Indian government is asking them what they are doing to prevent harm,” sources in the know of the matter told ET.
Profits over Safety
Facebook, which owns the largest instant messaging platform WhatsApp and the popular photo- and video-sharing app Instagram, has been under fire after the whistleblower made public a series of documents now dubbed the Facebook Papers. The social media network has been accused of putting profit ahead of user safety, including that of children, and of fuelling fake news and hate speech through its platform. Haugen has submitted the papers to the US SEC and has also deposed before the US Senate and the UK Parliament.