The UK government has unveiled a tool it says can accurately detect jihadist content and block it from being viewed.
Home Secretary Amber Rudd told the BBC she would not rule out forcing technology companies to use it by law.
Ms Rudd is visiting the US to meet tech companies to discuss the idea, as well as other efforts to tackle extremism.
Thousands of hours of content posted by the Islamic State group were fed to the tool to "train" it to automatically spot extremist material.
The government provided £600,000 of public funds towards the creation of the tool by an artificial intelligence company based in London.
ASI Data Science said the software can be configured to detect 94% of IS video uploads.
Anything the software identifies as potential IS material would be flagged up for a human decision to be taken.
The company said it typically flagged 0.005% of non-IS video uploads. On a site with five million daily uploads, it would flag 250 non-IS videos for review.
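The figures quoted above can be sanity-checked with a quick calculation. The sketch below is illustrative only: the function name is mine, and the rates are simply ASI's claimed numbers from the article.

```python
def expected_flags(daily_uploads: int, false_positive_rate: float) -> float:
    """Expected number of non-IS uploads flagged for human review per day,
    given a site's upload volume and the classifier's false-positive rate."""
    return daily_uploads * false_positive_rate

# ASI's stated false-positive rate: 0.005%, i.e. 0.00005 as a fraction.
# On a site with five million daily uploads, that works out to about
# 250 non-IS videos flagged for review, matching the article's figure.
print(expected_flags(5_000_000, 0.00005))
```

The point of the low rate is that the human-review workload stays manageable even at large upload volumes, which is what makes a flag-then-review workflow practical for small platforms.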
It is intended to lighten the moderation burden faced by small companies that may not have the resources to effectively tackle extremist material being posted on their sites.
Similar tools have in the past been heavily criticised by advocates of an "open" internet, who say such efforts can produce false positives – meaning content that is not particularly problematic ends up being taken down or blocked.
In London, reporters were given an off-the-record briefing detailing how ASI’s software worked, but were asked not to share its precise methodology. However, in simple terms, it is an algorithm that draws on characteristics typical of IS and its online activity.
In Silicon Valley, the home secretary told the BBC the tool was made as a way to demonstrate that the government’s demand for a clampdown on extremist activity was not unreasonable.
“It’s a very convincing example of the fact that you can have the information you need to make sure this material doesn’t go online in the first place,” she said.
“The technology is there. There are tools out there that can do exactly what we’re asking for. For smaller companies, this could be ideal.”
Silicon Valley giants such as Facebook and Google are pouring their own resources into solving this problem. This tool, however, is initially intended for small companies – which may one day be required by law to use it.
“We’re not going to rule out taking legislative action if we need to do it,” the home secretary said.
“But I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got.”
The Global Internet Forum to Counter Terrorism, launched last year, brings together several governments including the US and UK, and major internet firms like Facebook, Google, Twitter and others.
However, the bigger challenge is predicting which parts of the internet jihadists will use next.
The Home Office estimates that between July and the end of 2017, extremist material appeared on almost 150 web services that had not previously been used for such propaganda.