Swift, operator of the world’s largest money-transfer system, said it has hired a pair of information-security firms to help it scrutinize customers’ use of its systems and detect attempted hacks, following a series of breaches at user sites in recent months.
The Brussels company, whose full name is the Society for Worldwide Interbank Financial Telecommunication, has been battered by a series of cyberthefts that have hit banks in Ecuador, Vietnam, Bangladesh and Ukraine in the past 18 months.
Swift has repeatedly said that the core of its network remains uncompromised and that responsibility for maintaining the integrity of connected systems lies with its users. But it has also faced concerns about its inability to ensure the security of its user interface and the authenticity of its message traffic.
The perpetrators, who haven’t been identified, stole the banks’ Swift credentials and fraudulently sent payment instructions over the Swift network.
YouTube and Facebook have stepped up their fight against extremist content, using the same technology used to remove videos with copyrighted material.
Silicon Valley has long struggled with how to police inappropriate or even criminal content. Earlier this year, Microsoft, Facebook, YouTube, and Twitter agreed to work with the European Union to identify and combat hate speech online. These companies often rely on users to submit and flag material; if they start taking down users' posts themselves, they risk being seen as self-censoring. Now, though, at least two tech companies have turned to automation to remove extremist content from their platforms.
YouTube and Facebook are among a group of tech giants that have quietly begun using automation to remove videos featuring violent extremism from their websites, Reuters reports. Two sources tell the news outlet that the companies are using the same technology that automatically identifies and deletes copyright-protected content, though it's unclear how much of the process is automated. (Google, Facebook, and others already use automation to eliminate child pornography on their platforms.) The companies' goal is not to identify new extremist videos posted to their platforms, but to prevent re-posted material that has already been deemed inappropriate, including Islamic State videos, from spreading. Neither YouTube's parent company Google nor Facebook would confirm the reports or discuss the use of such automation publicly, Reuters' sources say, partly out of concern that terror groups would learn to circumvent the technology.
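The re-upload detection described above can be sketched as fingerprint matching against a blocklist of previously removed content. The sketch below is a deliberately simplified illustration, not the companies' actual method: production systems such as YouTube's Content ID use perceptual fingerprints that survive re-encoding and cropping, whereas this example uses an exact cryptographic hash, and all function names here are invented for illustration.

```python
import hashlib

# Hypothetical blocklist of digests of files already removed as
# extremist content. A real system would store perceptual/fuzzy
# fingerprints rather than exact SHA-256 digests, which a single
# re-encode would defeat.
BLOCKED_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Return a digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

def register_removed(data: bytes) -> None:
    """Record a takedown so identical re-uploads are caught."""
    BLOCKED_HASHES.add(fingerprint(data))

def is_reupload(data: bytes) -> bool:
    """True if an upload matches previously removed material."""
    return fingerprint(data) in BLOCKED_HASHES

register_removed(b"bytes of a previously removed video")
print(is_reupload(b"bytes of a previously removed video"))  # True
print(is_reupload(b"a never-before-seen upload"))           # False
```

This also reflects the stated goal in the article: the approach cannot flag new extremist videos, only stop the spread of copies of material that human reviewers have already taken down.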