Meta, the parent company of Facebook, Instagram, and WhatsApp, regularly conducts risk assessments before releasing new technologies and services, and it maintains a dedicated team of security experts for this purpose. Now the company is moving toward using artificial intelligence (AI) instead of humans to assess those risks. Under the new plan, 90 percent of Meta’s ‘privacy and integrity review’ process will be handled by AI, according to a recent NPR report based on Meta’s internal documents.
According to NPR, Meta currently conducts these risk reviews with human staff before updating its algorithms or introducing new security features. In this process, experts analyze a technology’s potential social, ethical, and informational risks. The new plan would sharply reduce human involvement in those decisions.
In April, Meta’s Oversight Board, while upholding the company’s position on “controversial” speech, raised concerns about weaknesses in Meta’s content moderation policies and practices. The board said that as these changes take effect globally, Meta should assess their human rights impacts, warning that overreliance on automated content identification systems could produce uneven responses across the world.
Also in April, Meta discontinued its fact-checking efforts and launched a crowd-sourced verification system called ‘Community Notes’. Meta acknowledged that it would use AI to assess the risks of its technologies and services, but said AI would initially be applied only to launching low-risk technologies and features.