Artificial Intelligence and Machine Learning in Cybersecurity (CyAIM)

Dr. Vladimer Svanadze

Group Chair


Team members:
  • Neli Odishvili (Group Co-Chair)
  • Mamuka Kirkitadze
Current Situation

The use of artificial intelligence and machine learning to solve cybersecurity tasks is becoming increasingly relevant, and both the private and public sectors pay great attention to this process.

According to existing research and various estimates, artificial intelligence is undoubtedly becoming a crucial part of the cybersecurity sphere, and the supply market is growing accordingly: in 2016 it was worth around $1 billion, while by 2025 it is projected to approach $350 billion.

The capabilities of artificial intelligence and its relevance at the national level are highlighted in cybersecurity and defense strategies, as the rapid development of information and Internet technologies has driven growing demand for corresponding security measures.

At the same time, global initiatives are multiplying to set standards and certification procedures that raise consumer awareness of, and trust in, artificial intelligence.

The World Economic Forum's 2019 Global Risk Report cites cyberattacks as one of the top five global risks, driven by a sharp increase in their incidence. In particular, in the first half of 2018, cyberattacks compromised about 3.3 billion records, almost 70% more than in the same period of 2017 (2.7 billion). In addition, in reaching their targets, cyberattacks have become increasingly rapid, sophisticated, and focused on modern technological solutions. A Microsoft study showed that 60% of attacks in 2018 lasted more than an hour and relied on the latest malware solutions, some of which incorporated elements of artificial intelligence.

Artificial intelligence can reduce all of the above-mentioned numbers and, at the same time, the related costs, including financial and human resources.

To evaluate the robustness of artificial intelligence systems, new standards and certification procedures are appearing globally. The International Organization for Standardization (ISO) has set up a committee (ISO/IEC JTC 1/SC 42) to work specifically on artificial intelligence standards. One such standard (ISO/IEC NP TR 24029-1) relates to the assessment of the robustness of neural networks.

In 2019, the Defense Advanced Research Projects Agency (DARPA) started a new research program to support the development of artificial intelligence. In the same year, the US Department of Commerce's National Institute of Standards and Technology (NIST) began working on standards for artificial intelligence systems, with the aim of ensuring their reliability.

However, the use of artificial intelligence (both machine learning and neural networks) in cybersecurity tasks has both advantages and disadvantages: it can substantially improve cybersecurity practices, but it can also enable new forms of attack that pose serious threats.

At this stage, studies show and experts agree that the active, uncontrolled use of artificial intelligence while the technology is still maturing is unacceptable: the degree of trust in it remains low and, in fact, unjustified. To reduce security risks, some form of control is essential to ensure the reliability of artificial intelligence in the effective provision of cybersecurity.

Global processes have had a major impact on the emergence of new trends in cyberspace and the rapid development of technology. Technologies based on artificial intelligence and machine learning are evolving quickly, and cybersecurity is no exception. Artificial intelligence and machine learning are becoming increasingly relevant in our lives; recently, artificial intelligence has penetrated virtually every area, be it banking, finance, law, or others. The use of artificial intelligence and machine learning technologies in the digital world will give even greater stimulus to the creation of new cybersecurity systems in the coming years. In addition, artificial intelligence technologies themselves will be used in the field of cybersecurity.

Aims

The purpose of creating the group is to explore the possibilities of artificial intelligence and machine learning in maintaining the sustainability, unity, security, and stability of cyberspace; to promote the development of this field globally, regionally, and nationally; to develop international cooperation and thereby facilitate the exchange and analysis of information; to raise awareness in this area; and to prepare and issue relevant recommendations.

Directions

Based on the goals, the group carries out its activities in the following directions:

  1. Study, research and analysis
    • Searching for the best solutions for the use of artificial intelligence in cybersecurity;
    • Studying best practices;
    • Studying the legal aspects of the use of artificial intelligence in cybersecurity;
    • Preparing relevant reports, manuals, and recommendations;
    • Reviewing existing standards, certification procedures, and methods;
  2. Consulting - Provide consulting services to all interested parties;
  3. Education & Awareness - Conduct seminars, workshops, and training sessions that facilitate education and awareness-raising in this area;
  4. International Cooperation - Establish international contacts and develop cooperation with entities operating in this field, in order to exchange and analyze information;
  5. Project Management - Participate in international projects, initiate projects in the above-mentioned directions, and participate in grant processes.
SIG in Artificial Intelligence and Machine Learning in Cybersecurity (CyAIM)

Reports, Papers, Briefings

  • Artificial Intelligence in Cybersecurity
    This opinion paper argues that the active use of artificial intelligence without significant control measures in place, especially in its early stages, produces levels of trust too low to be justified. To minimize security risks, it is imperative to have some form of control to ensure the reliability of artificial intelligence as a complement to cybersecurity; this requires continuous monitoring and evaluation of artificial intelligence systems. Setting artificial intelligence standards in cybersecurity with the aim of gaining trust, and on that basis abandoning the monitoring and control of AI, is risky.