BBC blocks OpenAI data scraping, aims to harness Generative AI



London: The BBC, which along with other top media organisations like CNN blocked OpenAI’s data scraping, has laid out three principles that will shape its approach to working with Generative AI.

In a blog post, the BBC's Director of Nations, Rhodri Talfan Davies, said Generative AI provides opportunities to deliver "more value to our audiences and society".

“We believe Gen AI could provide a significant opportunity for the BBC to deepen and amplify our mission, enabling us to deliver more value to our audiences and to society,” he added.

“It also has the potential to help our teams to work more effectively and efficiently across a broad range of areas including production workflows and our back-office,” the BBC executive noted.

In August, several top news publications, including The New York Times, CNN and the Australian Broadcasting Corporation (ABC), blocked Microsoft-backed OpenAI from accessing their content to train its AI models.

The NYT blocked OpenAI’s web crawler, meaning that the Sam Altman-run company can’t use content from the publication to train its AI models.

OpenAI’s web crawler, called GPTBot, scans web pages to help improve the company's AI models.

Davies said in the blog post that Gen AI is likely to introduce new and significant risks if not harnessed properly.

“These include ethical issues, legal and copyright challenges, and significant risks around misinformation and bias. These risks are real and cannot be underestimated,” he emphasised.

Among the three principles the UK’s top media outlet outlined, it will explore how “we can harness Generative AI to strengthen our public mission and deliver greater value to audiences”.

“We will always prioritise talent and creativity and be open and transparent,” the broadcaster said.

IANS
