Harnessing Artificial Intelligence to combat cyber threats
As cyber-attacks become more sophisticated, traditional defences are becoming inadequate to combat them.
A team from UNSW Canberra have investigated how advances in Artificial Intelligence (AI) show promise in enabling cybersecurity experts to counter the ever-evolving threat posed by adversaries.
Dr Erwin Adi from UNSW Canberra Cyber said one of the strengths of using AI in this way is its ability to ingest large amounts of data, analyse it and derive patterns from it, so that future threats can be detected in real time.
“Traditionally, detecting cyber threats has employed a subset of AI, namely machine learning techniques, which separate malicious cyber traffic from legitimate network traffic. However, the current cyber-attack landscape extends beyond intruding into networks to include attacks on humans.”
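As a rough illustration of the machine-learning approach Dr Adi describes, the sketch below trains a classifier on labelled network-flow features to separate malicious from legitimate traffic. The feature names and data are hypothetical, synthetic placeholders, not material from the study.

```python
# Minimal sketch: a classifier trained on labelled network-flow features
# to separate malicious from legitimate traffic. All features and data
# below are invented placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes sent, duration (s), packets per second]
legit = rng.normal(loc=[500, 2.0, 10], scale=[100, 0.5, 3], size=(200, 3))
malicious = rng.normal(loc=[5000, 0.2, 200], scale=[800, 0.1, 40], size=(200, 3))

X = np.vstack([legit, malicious])
y = np.array([0] * 200 + [1] * 200)  # 0 = legitimate, 1 = malicious

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new, unseen flow in (near) real time.
new_flow = np.array([[4200, 0.3, 150]])
print("malicious" if clf.predict(new_flow)[0] else "legitimate")
```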
“Cyber issues faced in today’s society include phishing, fake news, online grooming and cyber bullying. Thus, other AI techniques such as natural language processing and propositional logic have found their place in current cybersecurity applications,” he said.
“They are able to systematise human language, drawing meaning from human communications so it can be used to detect hate speech, online predators and bomb threats.”
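A toy sketch of the natural language processing side might look like the following: a bag-of-words text classifier that flags abusive messages. The training phrases and labels are invented for illustration and do not come from the study.

```python
# Toy sketch: a bag-of-words classifier that flags abusive or threatening
# messages. The training texts below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "have a great day",
    "thanks for your help",
    "you are worthless and everyone hates you",
    "I will hurt you if you show up",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive/threatening

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["everyone hates you"]))  # expected: [1]
```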
Common techniques used to combat cyber threats revolve around a single threat-detection method which, according to Dr Adi, is signature-based.
“This means, for example, that a certain virus is detected through a specific piece of code that forms its core. The problem with this approach is that it does not adapt as the number of attacks increases,” he said.
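A minimal sketch of signature-based detection, as Dr Adi describes it, is shown below; the signature database is entirely made up. Note how a variant whose core bytes differ even slightly slips past the scanner, which is the lack of adaptivity he points to.

```python
# Minimal sketch: scan input for exact byte patterns known to identify
# specific malware. The signatures here are hypothetical placeholders.
KNOWN_SIGNATURES = {
    b"\x4d\x5a\x90\x00\xde\xad": "ExampleVirus.A",    # hypothetical
    b"eval(base64_decode(":      "ExampleDropper.B",  # hypothetical
}

def scan(payload: bytes) -> str | None:
    """Return the matching signature name, or None if nothing matches."""
    for signature, name in KNOWN_SIGNATURES.items():
        if signature in payload:
            return name
    return None

print(scan(b"...eval(base64_decode('aGk=')..."))  # ExampleDropper.B
# A variant with a single extra space is missed entirely:
print(scan(b"...eval( base64_decode(..."))        # None
```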
Another common approach is statistical, such as fitting past attack patterns to a mathematical model. Dr Adi said this approach suffers from overfitting: the model can fit the past data too tightly, leaving it unable to tolerate variation. In contrast, AI does not depend on signatures and can adaptively remodel the data.
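The overfitting problem can be illustrated with a small sketch, assuming synthetic “past attack” measurements: a model fitted too tightly to historical data typically tolerates less variation in a new observation than a smoother one.

```python
# Small sketch of overfitting: a high-degree polynomial interpolates the
# noise in past measurements, while a lower-degree fit generalises better.
# All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
past_x = np.linspace(0, 1, 8)
past_y = np.sin(2 * np.pi * past_x) + rng.normal(0, 0.1, 8)  # noisy history

tight = np.polyfit(past_x, past_y, deg=7)  # degree 7: chases the noise
loose = np.polyfit(past_x, past_y, deg=3)  # degree 3: smoother fit

new_x, new_y = 0.52, np.sin(2 * np.pi * 0.52)  # a slight variation
print("tight model error:", abs(np.polyval(tight, new_x) - new_y))
print("loose model error:", abs(np.polyval(loose, new_x) - new_y))
# The tightly fitted model typically shows the larger error on unseen input.
```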
The study allows researchers to see how a range of AI techniques can be applied to their cybersecurity problems.
“While each technique has its own advantages and disadvantages, this study elucidates the cybersecurity applications, and the current research, where each technique is relevant,” Dr Adi said.
Dr Adi said the next step for this research will be to look at how scarce resources (electricity, for example) can be shared between intelligent computers and humans.
“The research showed that AI advances in cybersecurity are bolstered by the ambitions of white hat (defender) and black hat (offender) hackers. Those with access to the technical knowledge and state-of-the-art hardware may look like the winners, only to be overtaken by their opponents’ advances in creating intelligent machines.”
“In the midst of this race, there is another stakeholder: humans. The problem is that when intelligent machines start to consume scarce resources, ethical issues regarding the use of AI will arise.”
“Once AI solutions are deemed essential, AI research will expand to consider how machines can have rights,” Dr Adi said.