Understanding the new risks and threats posed by increased use of artificial intelligence.

AI and Cybersecurity

So much of the discussion about cybersecurity’s relationship with artificial intelligence and machine learning (AI/ML) revolves around how AI and ML can improve security product functionality. However, that is only one dimension of a much broader collision between cybersecurity and AI.

As applied AI/ML advances and spreads across a plethora of business and technology use cases, security experts will need to help their business colleagues address new risks, new threat models, new domains of expertise, and, yes, sometimes new security solutions.
Heading into 2020, business and technology analysts expect to see solid applications of AI and ML accelerate. This means that CISOs and security professionals will need to quickly get up to speed on AI-driven enterprise risks. Here are some thoughts from security veterans on what to expect from AI and cybersecurity in 2020.

AI/ML Data Poisoning and Sabotage

The security industry will need to keep tabs on emerging cases of attackers poisoning AI/ML training data in business applications to disrupt decision-making and operations. Imagine, for example, a company that depends on AI to automate supply chain decisions: a sabotaged data set could result in a drastic under- or oversupply of product.

“Expect to see attempts to poison the algorithm with specious data samples specifically designed to throw off the learning process of a machine learning algorithm,” says Haiyan Song, senior vice president and general manager of security markets for Splunk. “It’s not just about duping smart technology, but making it so that the algorithm appears to work fine – while producing the wrong results.”
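The mechanics Song describes can be sketched in a few lines of code. The toy data set, class names, and 1-nearest-neighbour model below are all invented for illustration (they are not from any specific product or incident); real attacks target far larger training pipelines, but the principle is the same: the poisoned model still appears to work while producing the wrong results.

```python
# A minimal sketch of training-data poisoning via a single injected,
# deliberately mislabeled ("specious") sample.

def predict(samples, point):
    """Classify `point` by the label of its nearest training sample
    (1-nearest-neighbour, squared Euclidean distance)."""
    nearest = min(samples,
                  key=lambda s: (s[0][0] - point[0]) ** 2 +
                                (s[0][1] - point[1]) ** 2)
    return nearest[1]

# Hypothetical supply-chain demand signals: "normal" demand clusters
# near (1, 1); "surge" demand clusters near (9, 9).
clean = [((1, 1), "normal"), ((2, 1), "normal"), ((1, 2), "normal"),
         ((9, 9), "surge"), ((8, 9), "surge"), ((9, 8), "surge")]

# Poisoned training set: one mislabeled sample, placed exactly where the
# attacker wants the model to fail.
poisoned = clean + [((8, 8), "normal")]

query = (8, 8)  # a clear surge-demand signal
print(predict(clean, query))     # -> surge
print(predict(poisoned, query))  # -> normal (model "works"; result is wrong)
```

The model trains and runs without errors on both data sets, which is precisely why this class of attack is hard to spot: only the answers change.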

Deepfake Audio Takes BEC Attacks into a New Arena

Business email compromise (BEC) has cost organizations billions of dollars as attackers pose as CEOs and other senior managers to trick the people in charge of bank accounts into making fraudulent transfers under the guise of closing a deal or otherwise getting business done. Now attackers are taking BEC attacks into a new arena with the help of AI technology: the telephone. This year saw one of the first reports of an incident in which an attacker used deepfake audio to impersonate a company CEO over the phone, tricking someone at a British energy firm into wiring $240,000 to a fraudulent bank account. Experts believe we will see increased use of AI-powered deepfake audio of CEOs to carry out BEC-style attacks in 2020.

“Even though many organizations have educated employees on how to spot potential phishing emails, many aren’t ready for voice to do the same as they’re very believable and there really aren’t many effective, mainstream ways of detecting them,” says PJ Kirner, CTO and founder of Illumio. “And while these types of ‘voishing’ attacks aren’t new, we’ll see more malicious actors leveraging influential voices to execute attacks next year.”

Article Credit: DR
