Prime Minister Theresa May has been warned that her promise to tighten regulation on tech firms after the London attacks will not work.
Mrs May said areas of the internet must be closed because tech giants provided a “safe space” for terrorist ideology.
Twitter, Facebook and Google said they were investing heavily in the area.
An internet advocacy group said social media was not the problem, while an expert in radicalisation branded Mrs May’s criticism “intellectually lazy”.
Google, which owns YouTube, along with Facebook, which owns WhatsApp, and Twitter were among the tech companies already facing pressure to tackle extremist content, a pressure that intensified on Sunday.
Mrs May said: “We cannot allow this ideology the safe space it needs to breed.
“Yet that is precisely what the internet, and the big companies… provide.”
Culture Secretary Karen Bradley said social media companies had successfully taken action against indecent images of children.
“We now need to see the same response in terms of extremism and radicalisation. We know it can be done and we know the internet companies want to do it,” she told the BBC.
Home Secretary Amber Rudd said on Sunday that tech firms needed to take down extremist content and limit the amount of end-to-end encryption that terrorists can use.
End-to-end encryption renders messages unreadable to anyone who intercepts them, whether criminals or law enforcement, because only the sender's and recipient's devices hold the keys needed to decrypt them.
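To see why an intercepted message is useless without the endpoints' keys, here is a minimal toy sketch of the idea in Python. This is not real cryptography and not how WhatsApp or any production messenger works (those use vetted protocols such as the Signal protocol): the prime is tiny and the "cipher" is a hash-based keystream, chosen only so the example runs with the standard library. The names (`keypair`, `shared_key`, `xor_stream`) are illustrative inventions.

```python
# Toy illustration of end-to-end encryption: two parties derive a shared
# secret from public values, so an interceptor sees only ciphertext.
# NOT real cryptography -- demonstration only.
import hashlib
import secrets

P = 0xFFFFFFFB  # a small public prime (real Diffie-Hellman uses huge primes)
G = 5           # public generator

def keypair():
    # Private key stays on the device; only the public key is sent.
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

def shared_key(my_priv, their_pub):
    # Both ends compute the same secret without it ever crossing the network.
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_stream(key, data):
    # Symmetric toy cipher: the same call encrypts and decrypts.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Only alice_pub, bob_pub and the ciphertext are ever transmitted.
ciphertext = xor_stream(shared_key(alice_priv, bob_pub), b"meet at noon")
plaintext = xor_stream(shared_key(bob_priv, alice_pub), ciphertext)
assert plaintext == b"meet at noon"
```

The point of the sketch is the policy debate's crux: nothing transmitted lets a third party recover the message, so "limiting" this property means weakening it for everyone, not just for suspects.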
‘No place on our platform’
However, the major social media firms said they were working hard to rid their networks of terrorist activity and support.
Google said it had spent hundreds of millions of pounds to fight abuse on its platforms and was already working on an “international forum to accelerate and strengthen our existing work in this area”.
The firm added that it shared “the government’s commitment to ensuring terrorists do not have a voice online”.
Facebook said: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it – and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”
Meanwhile, Twitter said “terrorist content has no place on” its platform.
“We continue to expand the use of technology as part of a systematic approach to removing this type of content,” the firm added.
Analysis – Dave Lee, BBC North America technology reporter
Silicon Valley is both on the offensive and defensive.
Defensive in that they are protecting their reputations as companies that put in a lot of work to stamp out extremist content online, but offensive in making it clear they do not feel “kneejerk” regulation is the way to solve the issue.
The tech industry is mostly in agreement on this. They believe that end-to-end encryption, while perhaps frustrating to police, is a technology that means everyone’s communications are far more secure.
The logic put forward by experts is that if there is a way to break into a terrorist's smartphone without their permission, then there is a way to break into your smartphone too.
On Monday, Apple will be holding its annual developers’ conference in San Jose. I’m not expecting chief executive Tim Cook to talk about the issue – he won’t want to willingly draw his company into the debate – but you can fully expect Apple to put its weight behind any movement that seeks to increase security.
And the company will speak out, as it often has, against any attempts from authorities to compel tech firms to give them a so-called “back door” into their systems.
The Open Rights Group, which campaigns for privacy and free speech online, warned that with more regulation politicians risked pushing terrorists' "vile networks" into the "darker corners of the web".
“The internet and companies like Facebook are not the cause of hate and violence, but tools that can be abused.
“While governments and companies should take sensible measures to stop abuse, attempts to control the internet are not the simple solution that Theresa May is claiming,” Open Rights said.
Professor Peter Neumann, director of the International Centre For The Study Of Radicalisation at King’s College London, was also critical of Mrs May.
He wrote on Twitter: “Big social media platforms have cracked down on jihadist accounts, with result that most jihadists are now using end-to-end encrypted messenger platforms e.g. Telegram.
“This has not solved problem, just made it different… moreover, few people (are) radicalised exclusively online. Blaming social media platforms is politically convenient but intellectually lazy.”
‘Tool for extremists’
However, Dr Julia Rushchenko, a London-based research fellow at the Henry Jackson Centre for Radicalisation and Terrorism, told the BBC that Mrs May was right, and that more could be done by tech giants to root out such content.
She felt that the companies erred on the side of privacy, not security. “We all know that social media companies have been a very helpful tool for hate preachers and for extremists,” Dr Rushchenko said.
Investors suggested that tech firms would be more willing to take further action against extremist content if shareholders and advertisers pressured them to do so.
Jessica Ground, a UK fund manager at Schroders, told the BBC: “It’s going to be an interesting debate how you put the pressure points. It could be the money rather than the governments.”
Simon Howard, chief executive of UKSIF, the UK Sustainable Investment and Finance Association, said: “We’ll need all the technology companies to do a bit more and we’ll have to decide what the UK legal framework in which they do that is.”