As the conflict between the US and Iran intensifies, AI-generated photos and videos related to the fighting are circulating freely on social media. Advisors to Meta, one of the tech giants, say the company should do more to curb the spread of fake AI-generated content on its platforms. The 21-member Oversight Board raised these concerns while rebuking the company over an AI-generated video, published online, that purported to show extensive damage inflicted on Haifa, Israel, by Iranian forces.
The BBC reports that the board has called on the company to review its artificial intelligence rules, warning that a surge of fake AI videos about global military conflicts has left audiences struggling to distinguish fact from fabrication, undermining the credibility of information on the platforms.
Currently, Meta relies heavily on users to disclose on their own when content they post was generated by an artificial intelligence tool. The Oversight Board, by contrast, stressed that Meta should label fake AI content "much more frequently." The board also said that Meta's current methods are "neither robust nor comprehensive enough to cope with the scale and speed of AI-generated content, particularly during a crisis or conflict where there is increased engagement on the platform."
"A very high standard is needed for labeling content generated by artificial intelligence, especially when the topic is armed conflict," the board said on Tuesday. Meta established the Oversight Board in 2020 as a semi-independent group that reviews content moderation decisions across its platforms, which include Facebook, Instagram and WhatsApp.

