
Kaspersky AI Technology Research Center: who we are and what we do

Our developments, products, research, patents, and expert teams, all harnessed for AI.


For nearly two decades, Kaspersky has been at the forefront of integrating artificial intelligence (AI), particularly machine learning (ML), into its products and services. Our deep expertise and experience in applying these technologies to cybersecurity, coupled with our unique datasets, efficient methods, and advanced model-training infrastructure, form the bedrock of our approach to solving complex ML challenges. Our Kaspersky AI Technology Research Center brings together data scientists, ML engineers, threat experts, and infrastructure specialists to tackle the most challenging tasks at the intersection of AI/ML and cybersecurity. This includes not only the development of applied technologies, but also research into the security of AI algorithms, exploration of promising approaches such as neuromorphic ML, raising awareness of AI risks, and much more.

Our technologies and products

At Kaspersky we’ve developed a wide range of AI/ML-powered threat detection technologies, primarily for identifying malware. These include a deep neural network for detecting malicious executable files based on static features, decision-tree ML technology for automated creation of detection rules that work on user devices, and neural networks for detecting malicious program behavior during execution. We also use a system for identifying malicious online resources based on anonymous telemetry received from solutions installed on customer devices and other sources. You can read more about these in our white paper Machine Learning for Malware Detection. Other models, such as the ML model for detecting fake websites and DeepQuarantine for quarantining suspected spam emails, protect users from phishing and spam threats. The cloud infrastructure of Kaspersky Security Network (KSN) makes our AI developments available almost instantly to both home and enterprise users.
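To give a rough sense of how static-feature detection works, here is a minimal, purely illustrative sketch. It is not Kaspersky’s production model: the features (file size, section entropy, import count, packer flag) and the data are made up, and a generic gradient-boosted decision-tree classifier stands in for the real detection pipeline.

```python
# Toy illustration (not Kaspersky's production model): classify executables
# as malicious or benign from hypothetical static features using a
# gradient-boosted decision-tree ensemble.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for static features: size (KB), section entropy,
# number of imported functions, and a "packed" flag.
benign = np.column_stack([
    rng.normal(800, 300, n // 2),
    rng.normal(5.5, 0.8, n // 2),
    rng.normal(120, 40, n // 2),
    rng.binomial(1, 0.05, n // 2),
])
malicious = np.column_stack([
    rng.normal(400, 200, n // 2),
    rng.normal(7.2, 0.5, n // 2),   # packed/encrypted payloads push entropy up
    rng.normal(30, 20, n // 2),
    rng.binomial(1, 0.6, n // 2),
])
X = np.vstack([benign, malicious])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice, of course, the feature set, the training data, and the model architecture are far richer than in this sketch, and the resulting verdicts are combined with other detection layers.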

Guided by the promise of generative AI, particularly large language models (LLMs), we’ve built an infrastructure to explore their capabilities and rapidly prototype new solutions. This infrastructure, which deploys LLM tools akin to ChatGPT, is not only accessible to employees across all departments for everyday tasks, but also serves as a basis for new solutions. For example, our Kaspersky Threat Intelligence Portal will soon have a new LLM-based OSINT capability that quickly delivers threat report summaries for specific IoCs.

To enhance the security of our customers’ infrastructures, we’re actively developing AI technologies tailored to our flagship corporate products and services. For several years now, the AI Analyst in Kaspersky Managed Detection and Response has been helping to reduce the workload of SOC teams by automatically filtering out false positives. Last year alone, this technology closed over 100,000 alerts without human intervention, allowing SOC experts to respond to real threats faster and devote more time to investigating complex cases and proactively hunting for threats. Another of our solutions, AI-based host risk scoring in Kaspersky SIEM (Kaspersky Unified Monitoring and Analysis Platform) and Kaspersky XDR, uses ML algorithms to spot suspicious host behavior without the need to transfer data outside the company.
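As a rough illustration of the idea behind ML-based host risk scoring, the sketch below (our simplified assumption, not the actual algorithm in Kaspersky SIEM or XDR) trains an unsupervised Isolation Forest on hypothetical per-host behavior features entirely on local data, and assigns higher risk scores to hosts that deviate from the rest of the fleet.

```python
# Minimal sketch of unsupervised host risk scoring (illustrative assumption,
# not the product's algorithm): hosts that deviate from fleet-wide behavior
# get higher anomaly-based risk scores.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-host features: processes launched per hour, outbound
# connections per hour, and distinct destination ports contacted.
fleet = np.column_stack([
    rng.poisson(40, 500),
    rng.poisson(60, 500),
    rng.poisson(12, 500),
])
suspicious_host = np.array([[300, 900, 180]])  # bursty, scanning-like behavior

model = IsolationForest(random_state=1).fit(fleet)
# score_samples: lower values mean more anomalous, so invert for a risk score.
risk = -model.score_samples(np.vstack([fleet[:3], suspicious_host]))
print(np.round(risk, 3))  # the last host should stand out with a higher score
```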

Another key area of Kaspersky’s development is the use of AI/ML in industrial environments. This includes Kaspersky MLAD (Machine Learning for Anomaly Detection), a predictive analytics solution that automatically recognizes early (hidden) signs of impending equipment failure, process disruption, human error, or cyberattack in telemetry signals. By continuously training its neural network, MLAD analyzes the stream of “atomic” events from the monitored facility, structures them into patterns, and identifies abnormal behavior. Another of our projects is Kaspersky Neuromorphic Platform (KNP), a research project and software platform for AI solutions based on spiking neural networks and AltAI, the energy-efficient neuromorphic processor developed by Russia-based Motive Neuromorphic Technologies (Motive NT) in collaboration with Kaspersky.
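The general principle of spotting anomalies in telemetry can be shown with a toy example (a deliberate simplification, not MLAD’s actual architecture): train a small neural network to predict the next sensor reading from a sliding window of recent readings, then flag readings whose prediction error far exceeds what was observed during training.

```python
# Simplified illustration of telemetry anomaly detection via prediction error
# (not MLAD's architecture): a small neural net forecasts the next reading;
# large forecast errors are treated as anomalies.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic sensor telemetry: a periodic signal with noise, plus an injected
# anomaly (e.g., a sticking valve) near the end.
t = np.arange(1500)
signal = np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.05, t.size)
signal[1400:1420] += 1.5  # abnormal excursion

window = 25
X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]

split = 1000  # train on the first part of the stream only
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=2)
model.fit(X[:split], y[:split])

residual = np.abs(model.predict(X) - y)
threshold = residual[:split].mean() + 4 * residual[:split].std()
anomalies = np.where(residual[split:] > threshold)[0] + split + window
print(anomalies[:5])  # indices near 1400 should be flagged
```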

The widespread adoption of AI technologies requires security control, which is why we’ve also established an AI security team. It offers a range of services aimed at ensuring reliable protection of AI systems and thwarting potential threats to data, business processes and AI infrastructure.

Our people

In the past, ML-based tasks were performed by the departments directly involved in detecting specific threats. However, with the growing number of tasks and the increasing importance of ML technologies, we decided to consolidate our expertise in AI-based systems into a separate Expertise Center: Kaspersky AI Technology Research. This resulted in the creation of three main teams that drive the use of AI at Kaspersky:

  1. The Detection Methods Analysis Group develops ML algorithms for malware detection in collaboration with the Global Research and Analysis Team (GReAT) and the Threat Research Center. Their AI systems for both static and behavior-based malware detection directly contribute to the security of our users.
  2. Technology Research, under the Future Technologies Department, specializes in: researching promising AI technologies; developing Kaspersky MLAD and KNP; developing the next-generation AltAI neuromorphic processor in collaboration with Motive NT; and providing AIST services for AI security.
  3. The MLTech team is responsible for developing the corporate ML infrastructure for training ML models, creating content threat detection models (phishing and spam), and implementing AI technologies, including LLM-based ones, in our advanced corporate services and solutions, such as MDR, Kaspersky SIEM (Unified Monitoring and Analysis Platform), and Kaspersky XDR.

This doesn’t mean that our AI expertise is limited to the above teams. The field of AI is currently so complex and multifaceted that it’s impossible to concentrate all the know-how in a few research groups. Other teams also make significant contributions to the Expertise Center’s work, and apply ML in many tasks: machine vision technologies in the Antidrone team; research into AI coding assistants in the CoreTech and KasperskyOS departments; APT search in GReAT; and AI legislation study in the Government Relations team.

Our research and patents

The uniqueness of our AI technologies is underscored by the dozens of patents we’ve obtained worldwide. First and foremost, these are patents for detection technologies: ML-aided detection of malware based on program behavior logs, of malicious servers in telemetry data, of fake websites, and of spam. But the Kaspersky portfolio covers a much wider range of tasks: technologies for improving datasets for ML, anomaly detection, and even searching for suspicious contacts of kids in parental control systems. And, of course, we are actively patenting our AI technologies for industrial systems and unique neural network approaches to processing event streams.

In addition, Kaspersky actively shares its AI expertise with the community. Some studies, such as those on monotonic ML algorithms or the application of neural networks for spam detection, are published as academic papers at leading ML conferences. Others are published on specialized portals and at information security conferences. For example, we publish research on the security of our own AI algorithms, in particular attacks on spam detection and malware detection algorithms. We study the application of neural networks for time series analysis and explore the use of neuromorphic networks in industry-relevant tasks. Our Kaspersky Neuromorphic Platform (KNP) is open-source software that will be available for use and development by the entire ML community.

The topic of secure AI development and application is of fundamental importance to us, as we need to be able to trust our algorithms and be confident in their reliability. Other topics we cover include our participation in cybersecurity challenges that simulate attacks on ML systems and the use of advanced technologies such as LLMs to detect threats in system logs and phishing links. We also talk about threats to generative AI, including from a privacy standpoint, attacks on various LLM-based systems, the use of AI by attackers, and the application of our technologies in SOCs. Sometimes we open the door and reveal our inner workings, talking about the process of training our models and even the intricacies of assessing their quality.

 

Raising awareness

Finally, the most important function of the Kaspersky AI Technology Research Center is to raise awareness among our customers and the general public about the pros and cons of AI technologies and the threats they pose. Our experts at the Expertise Center demonstrate the dangers of deepfake videos. We talk about the finer points of AI usage (for example, how ChatGPT affects the process of hiring developers) and share our experiences through webinars and roundtable discussions.

The Technology Research team in the Future Technologies Department organizes conferences on neuromorphic technologies with a separate track devoted to AI security issues, including systems based on the neuromorphic approach. Together with our partner, the Institute for System Programming of the Russian Academy of Sciences (ISP RAS), we’re researching various attack vectors on neural networks in the areas of computer vision, LLMs, and time series analysis, as well as ways to protect against them. As part of Kaspersky’s industrial partnership with ISP RAS, the team is testing samples of trusted ML frameworks.

We’re also involved in the development of educational courses, including a module on the use of AI in cybersecurity at Bauman Moscow State Technical University. Another example is our module on the safe use of AI in Kaspersky ASAP, our solution for raising employee awareness of cyberthreats. Finally, we’re contributing to the creation of a set of international standards for the use of AI. In 2023, we presented the first principles for the ethical use of AI systems in cybersecurity at the Internet Governance Forum.

 

To sum up, the main tasks of the Kaspersky AI Technology Research Center are developing AI technologies, ensuring their safe application in cybersecurity, monitoring threats involving improper or malicious use of AI, and forecasting trends. All these tasks serve a single purpose: to ensure the highest level of security for our customers.
