The AI arms race is on, and it’s a cat-and-mouse game we see every day in our threat intelligence work. As new technology evolves, our lives become more convenient, but cybercriminals see new opportunities to attack users. Whether they’re circumventing antivirus software, installing malware or ransomware on a user’s machine, abusing hacked devices to build a botnet, or taking down websites and critical server infrastructure, getting ahead of the bad guys is the priority for security providers. AI has increased the sophistication of attacks, making them more unpredictable and harder to mitigate.
Increased Systematic Attacks
AI has reduced the manpower needed to carry out a cyber-attack. Instead of developing malware code manually, attackers can automate the process, reducing the time, effort and expense that goes into these attacks. The result: attacks become increasingly systematic and can be carried out on a far larger scale.
Societal Change and New Norms
Along with cloud computing services, the growth of AI has brought many tech advancements, but unless carefully regulated it risks changing certain aspects of society. A prime example of this is the use of facial recognition technology by the police and local government authorities. San Francisco hit the headlines this year when it became the first US city to ban the technology.
This was seen as a huge victory – the technology carried far more risks than benefits, and serious questions had been raised about its inaccuracy and racial bias. AI technology is not perfect and is only as reliable and accurate as the data that feeds it. As we head into a new decade, technology companies and lawmakers need to work together to ensure these developments are suitably regulated and used responsibly.
Changing the Way We Look at Information
We’re now in the era of fake news, misinformation and deep fakes. AI has made it even easier to create and spread misleading and fake information. This problem is exacerbated by the fact that we increasingly consume information in digital echo chambers, making it harder to access unbiased information.
While responsibility lies with the tech companies that host and share this content, education in data literacy will become more important in 2020 and beyond. An increasing focus on teaching the public how to scrutinise information and data will be vital.
More Partnerships to Combat Adversarial AI
In order to combat the threat from adversarial AI, we hope to see even greater partnerships between technology companies and academic institutions. This is precisely why Avast has partnered with The Czech Technical University in Prague to advance research in the field of artificial intelligence.
Avast’s rich threat data from over 400 million devices globally has been combined with the CTU’s study of complex and evasive threats in order to pre-empt and inhibit attacks from cybercriminals. The goals of the laboratory include publishing breakthrough research in this field and enhancing Avast’s malware detection engine, including its AI-based detection algorithms.
As we head into a new decade, AI will continue to shape technology and society around us, especially with the rise of smart home devices. However, despite the negative associations, there’s far more good to be gained from artificial intelligence than bad.
Tools are only as helpful as those who wield them. The biggest priority in the years ahead will be cross-industry and government collaboration to use AI for good and to stop those who attempt to abuse it.