Facebook to ban users from live streaming for violating norms

New Delhi: Social media giant Facebook will bar users from live streaming for a set period of time if they violate its norms for using the feature, a move that comes two months after the Christchurch mosque attacks.

The company – which has 2.38 billion monthly active users globally – said it is also investing USD 7.5 million in new research partnerships with academics from three universities to collaborate on improving image and video analysis technology.

Facebook Vice President (Integrity) Guy Rosen, in a blogpost, said the company has been reviewing how to limit the use of its services for causing harm or spreading hate.

“Today we are tightening the rules that apply specifically to live. We will now apply a ‘one strike’ policy to ‘live’ in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense,” he said.

Citing an example, Rosen said someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.

“We plan on extending these restrictions to other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook… Our goal is to minimise risk of abuse on live while enabling people to use live in a positive way every day,” he said.
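The mechanics of such a rule are simple to picture. The sketch below is purely illustrative – the policy category names, the data structures and the 30-day figure are assumptions for demonstration, not Facebook's actual implementation, which it has not published:

```python
from datetime import datetime, timedelta

# Illustrative placeholders: Facebook has not published its internal policy
# taxonomy; these category names are assumptions for this sketch.
SERIOUS_POLICIES = {"dangerous_organisations", "terror_propaganda"}
RESTRICTION_WINDOW = timedelta(days=30)  # "for example 30 days"


class LiveAccessControl:
    """One-strike rule: the first violation of a serious policy
    immediately restricts the user from going Live for a set period."""

    def __init__(self) -> None:
        self._restricted_until: dict[str, datetime] = {}

    def record_violation(self, user_id: str, policy: str) -> None:
        # A single serious violation is enough; no prior strikes needed.
        if policy in SERIOUS_POLICIES:
            self._restricted_until[user_id] = datetime.utcnow() + RESTRICTION_WINDOW

    def can_go_live(self, user_id: str) -> bool:
        until = self._restricted_until.get(user_id)
        return until is None or datetime.utcnow() >= until


# Example: sharing a terrorist group's statement with no context – the case
# Rosen cites – triggers the restriction on the first offence.
acl = LiveAccessControl()
acl.record_violation("user_42", "terror_propaganda")
assert not acl.can_go_live("user_42")
```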

In March, over 50 people were gunned down at two Christchurch mosques by a self-described white supremacist, who broadcast live footage of the attack on Facebook. The first user report on the original video came 29 minutes after the video started, and 12 minutes after the live broadcast ended.

Facebook has, in a previous post, said the video was viewed fewer than 200 times during the live broadcast. Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook.

Rosen – in the latest blogpost – explained that one of the challenges Facebook faced in the days after the Christchurch attack was the proliferation of variants of the attack video.

People shared edited versions of the video that were hard for Facebook's systems to detect, even though the company deployed a number of techniques, including video and audio matching technology.
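Matching systems of this kind typically work by computing a compact perceptual fingerprint of each frame and comparing it against fingerprints taken from the banned video: light re-encoding barely changes the fingerprint, while heavier edits can push it out of range. The sketch below uses a simple average-hash scheme to illustrate the idea; it is not the technology Facebook actually uses, which the company has not detailed:

```python
import numpy as np


def average_hash(frame: np.ndarray, size: int = 8) -> int:
    """Tiny perceptual hash of a grayscale frame: downsample to a
    size x size grid, threshold each cell against the mean, and pack
    the resulting bits into an integer."""
    h, w = frame.shape
    cropped = frame[:h - h % size, :w - w % size]
    grid = cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (grid > grid.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_known_video(frame_hash: int, known_hashes: set[int],
                        max_distance: int = 10) -> bool:
    """A frame 'matches' if its hash lies within a small Hamming distance
    of any hash taken from the banned video."""
    return any(hamming(frame_hash, kh) <= max_distance for kh in known_hashes)
```

Edits that crop, overlay text, or re-film the screen shift the fingerprint beyond the distance threshold, which is the evasion problem the company describes: each edited variant effectively becomes a new video its systems must learn to recognise.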

“That’s why we’re partnering with The University of Maryland, Cornell University and The University of California, Berkeley to research new techniques to detect manipulated media across images, video and audio; and distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs,” he said.

Previously, if someone posted content that violated Facebook’s Community Standards (on Live or otherwise), the company took down the post. If the user kept posting violating content, Facebook blocked them from using the platform for a certain period of time.

Rosen said continued efforts would be critical to tackling ‘manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred)’.

PTI
