During November, over 16.2 million pieces of content were proactively "actioned" on Facebook across 13 violation categories in India, according to social media company Meta. As per statistics released in the company's compliance report, its photo-sharing platform Instagram took proactive action against over 3.2 million pieces across 12 categories during the same period.
Under the new IT rules that came into effect earlier this year, large digital platforms (those with more than 5 million users) are required to publish monthly compliance reports detailing the complaints received and the action taken on them.
The reports also include details of content removed or blocked as a result of proactive monitoring by automated tools. In October, Facebook had proactively "actioned" over 18.8 million pieces of content across 13 categories, while Instagram had proactively "actioned" over 3 million pieces across 12 categories. According to Meta's latest report, Facebook received 519 user reports through its Indian grievance mechanism between November 1 and November 30.
These, the report stated, include pre-established channels for reporting content for specific violations, self-remediation flows where users can download their data, and avenues to address hacked-account issues, among others. Instagram received 424 reports through the Indian grievance mechanism between November 1 and 30. Meta is the new name recently adopted by Facebook's parent company.
"Of these incoming reports, we provided tools for users to resolve their issues in 461 cases," the report says.
A total of 163,200 content items were actioned in the Child Endangerment – Nudity and Physical Abuse category, 700,300 in the Child Endangerment – Sexual Exploitation category, and 190,500 in the Violence and Incitement category. "Actioned" content refers to the number of pieces of content (such as posts, photos, videos, or comments) against which action was taken for violating standards. Taking action could include removing a piece of content from Facebook or Instagram, or warning users about photos or videos that may be disturbing to them.
In most of these cases, the proactive rate, which indicates the percentage of all content or accounts acted on that Facebook found and flagged using technology before users reported them, ranged from 60.5 to 99.9 percent. The proactive rate for removing content related to bullying and harassment was lower, at 40.7 percent, because such content is contextual and highly personal in nature.