The attacker in Germany, who livestreamed his rampage on Amazon's gaming subsidiary Twitch, shot dead two people after failing to gain entry to the synagogue on the holiest day of the Jewish year.
The nearly 36-minute-long video closely resembled footage livestreamed in March in Christchurch, where the gunman also wore a camera to capture a first-person perspective as he killed 51 people at two mosques.
As with Christchurch, full copies and portions of the German video quickly began appearing elsewhere online, shared both by supporters of the gunman's anti-Semitic ideology and critics condemning his actions.
Reuters viewed copies and links to the footage posted on Twitter, 4chan and message boards focused on trolling and harassment, as well as multiple white supremacist channels on messaging app Telegram.
The Global Internet Forum to Counter Terrorism, whose members include Facebook, Google, Microsoft and Twitter, said its members were collaborating to take down the videos using "hashing" technology, which converts content into a digital fingerprint so that copies can be spotted and removed automatically.
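For readers unfamiliar with the technique, the sketch below illustrates the basic idea of hash-based matching in Python. It is a simplified assumption of how such a system might work, not the GIFCT's actual implementation: production systems rely on shared hash databases and perceptual hashes of video frames that survive re-encoding, while the file names and blocklist values here are hypothetical.

```python
# Illustrative sketch only (not the GIFCT's actual system): match uploaded
# files against a shared database of fingerprints of known violating content.
# A plain cryptographic hash such as SHA-256 only catches exact byte-for-byte
# copies; real deployments use perceptual hashing to catch altered re-uploads.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical shared blocklist of fingerprints contributed by member companies.
known_hashes = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_violating(path: str) -> bool:
    """Flag an upload for removal if its fingerprint matches the blocklist."""
    return fingerprint(path) in known_hashes
```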
"We are in close contact with each other and remain committed to disrupting the online spread of violent and extremist content," the group said in a statement.
In statements posted to its official Twitter account, Twitch said the footage was viewed live by five people and then seen by 2,200 people before the company took it down 30 minutes later.
It said the suspect's account had attempted to stream only once before and its investigation suggested that "people were coordinating and sharing the video via other online messaging services," but did not elaborate.
Facebook said it did not yet have details of how many times the video had been posted on its platforms or how many users saw it, while Twitter referred Reuters to the Forum's statement.
Google and Telegram did not respond to requests for comment.
Silicon Valley tech giants have endorsed New Zealand Prime Minister Jacinda Ardern's "Christchurch Call," which aims to establish ethical standards for tech companies and media outlets to avoid amplifying violent extremist content online.
The companies, which face intense scrutiny over hate speech and are trying to avert more stringent action by regulators, pledged to tighten rules and share more information around abusive content.
The call came after years of spotty enforcement of the companies' policies on hateful and violent content, which has often been reposted millions of times across fringe and mainstream sites.
Attackers began accompanying assaults with highly orchestrated digital announcements, spurring followers to capture the content and post it to different platforms before it could be taken down.
In 2018, a gunman who killed 11 worshippers at a Pittsburgh synagogue posted his manifesto on social network Gab, saying a non-profit that helped refugees relocate to the United States was hurting "my people."
Online message board 8chan was used by mass shooters to announce attacks three times in 2019, including the posting of a four-page statement by the gunman behind the attack at a Walmart store in El Paso, Texas.
Oren Segal, who heads the Center on Extremism at the Anti-Defamation League, said violent imagery spreads across the internet and cannot be stopped by individual firms.

"A couple of months ago, the conversation was about 8chan. Now it's about Twitch and Telegram. The names will change, but the threat remains the same and is one that affects the entire online ecosystem," he said.