For a long time, companies and the IT security industry have responded to the growing number of cyber attacks with more tools and more IT security experts. But more and more companies can no longer keep up. Thanks to the cloud, their IT infrastructure has become increasingly distributed and complex. More nodes in a network, however, mean more potential entry points for attackers and more opportunities to exploit them. The situation is becoming especially hard to oversee for the teams in Security Operations Centers (SOCs): the sheer volume and variety of alerts is nearly impossible for them to process.
Many are therefore switching to SOC-as-a-Service, which numerous providers now offer. But there are drawbacks here too: pure outsourcing yields hardly any cost advantage, while standardized services require that the company's own infrastructure match what the provider offers. This overwhelms many companies, or leaves them dissatisfied because the specifics of their own business operations are not taken into account. And detection alone is not enough: the necessary countermeasures must then be initiated, either by the service provider's staff or by the company's own employees.
Of course, IT security products already use methods from artificial intelligence and machine learning. “AI support” is also regularly advertised for data-stream monitoring and security analysis tools. Often this means prioritizing security alerts with machine learning methods and analyzing and processing them at least partly automatically.
The goal is to relieve cybersecurity specialists of these routine activities and give them more time to analyze and defend against the cases that matter. Ultimately, many of these systems aim to handle tasks previously performed by humans with machine intelligence modeled on human intelligence.
The tools must first detect anomalies in the data flow. The quality of any AI algorithm, however, depends largely on how it was trained and on the datasets available to it. Poor data quality leads to poor AI, to detection errors, and ultimately to poor security combined with a false sense of it.
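The basic idea of flagging anomalies in a data flow can be sketched with a minimal, illustrative example: a z-score check on per-window event counts. The function name and threshold are assumptions for illustration, not WithSecure's actual detection logic, and real systems use far richer models.

```python
from statistics import mean, stdev

def detect_anomalies(event_counts, threshold=2.0):
    """Flag time windows whose event count deviates strongly from the
    historical mean (a simple z-score heuristic; illustrative only)."""
    mu = mean(event_counts)
    sigma = stdev(event_counts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [i for i, c in enumerate(event_counts)
            if abs(c - mu) / sigma > threshold]

# A mostly stable event series with one spike at index 5:
counts = [100, 102, 98, 101, 99, 500, 103, 97]
print(detect_anomalies(counts))  # → [5]
```

The example also shows why training data matters: if the "historical" series already contained attack traffic, the mean and deviation would shift and the spike might no longer stand out.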
Today’s IT security systems essentially all work on the same principle: they collect data from different sources – for example from end devices, network devices, firewalls, servers and applications – and bring it together at a central point. There they try to recognize threats with filters of varying number and kind, and sometimes with the help of AI technologies. The long-standing assumption has been: the more data is collected, the more meaningful and better the results.
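This centralized collect-and-filter principle can be sketched as follows. The source names, event types and the single filter rule are all hypothetical stand-ins; real deployments chain many filters and normalize heterogeneous event formats first.

```python
def collect_events(sources):
    """Pull events from all sources into one central stream,
    tagging each event with where it came from."""
    central_stream = []
    for name, events in sources.items():
        for event in events:
            central_stream.append({"source": name, **event})
    return central_stream

def filter_alerts(stream, suspicious_types=frozenset({"failed_login", "port_scan"})):
    """Apply one simple central filter; real systems chain many of these."""
    return [e for e in stream if e["type"] in suspicious_types]

# Hypothetical event feeds from three source categories:
sources = {
    "firewall": [{"type": "port_scan"}, {"type": "allow"}],
    "server":   [{"type": "failed_login"}, {"type": "login"}],
    "endpoint": [{"type": "process_start"}],
}
alerts = filter_alerts(collect_events(sources))
print(sorted(a["source"] for a in alerts))  # → ['firewall', 'server']
```

Note that every event crosses the network before any decision is made – exactly the cost and scaling issue the article goes on to discuss.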
In view of increasing complexity, however, the shortcomings of this approach are becoming apparent. If attackers do not go after their target directly but work their way across multiple systems and multiple user accounts, the attack can be very difficult to detect from any individual dataset. And with large amounts of data, it is often hard to judge whether an observed pattern reflects a causal relationship or merely a random correlation.
Collecting as much data as possible is also undesirable for cost reasons, because the volumes quickly become very large. In addition, sufficient historical data must be available for meaningful analyses. The result is considerable administration and analysis effort, often in a cloud. And in the end, the (alleged) alerts are returned to a SOC, where qualified personnel must process them.
As part of its approach to results-based security, WithSecure is investigating in its long-term research initiative “Project Blackfin” what an alternative to this current model might look like. The starting point is the idea of distributed analysis of events (“Distributed Breach Detection”): instead of analyzing everything at a central point, the individual nodes in the network attempt to recognize the events relevant to them.
For example, it can already be decided on the end device which data is relevant for monitoring processes, users or resource activity. Within Project Blackfin, it is still being determined which data can be processed on site and which it makes sense to forward to a central or at least higher-level instance. The question is at which point in the system there is already enough information to draw valid conclusions.
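The split between "decide locally" and "forward upward" can be illustrated with a minimal triage sketch. The rule table and event types are invented for illustration; Project Blackfin's actual on-device logic is not public in this form.

```python
def triage_on_device(events, local_rules):
    """Handle events the device has enough information to decide on;
    forward only the uncertain remainder to a higher-level instance."""
    handled, forwarded = [], []
    for event in events:
        verdict = local_rules.get(event["type"])
        if verdict is not None:
            handled.append((event, verdict))   # decided on site
        else:
            forwarded.append(event)            # escalate for central analysis
    return handled, forwarded

# Hypothetical local knowledge and incoming events:
local_rules = {"known_benign": "ignore", "known_malware": "block"}
events = [{"type": "known_benign"},
          {"type": "unusual_lateral_move"},   # no local verdict possible
          {"type": "known_malware"}]
handled, forwarded = triage_on_device(events, local_rules)
print(len(handled), len(forwarded))  # → 2 1
```

Only one of three events leaves the device here – which is the point: the central instance sees less data, but the data it does see is already pre-qualified.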
The technical term for this is “Federated Learning”. It can be easily explained using the text-suggestion function of a smartphone as an example. These suggestions come from a variety of sources: on the one hand, the manufacturer provides certain suggestions based on language settings or location data; on the other hand, personal usage is taken into account, such as first names of relatives or frequently mentioned places. This does not always work perfectly and sometimes produces amusing results. The system as a whole, however, gets better with every correction.
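The core mechanism behind federated learning can be shown in a deliberately tiny sketch: each device updates a shared model against its own data, and only the updated model parameters – never the raw data – are averaged on the server. The one-dimensional "model" and the learning rate below are illustrative assumptions, not any vendor's implementation.

```python
def local_update(weights, data, lr=0.1):
    """One local training step: nudge a 1-D model toward the
    device's own data mean (stand-in for real gradient steps)."""
    grad = weights - sum(data) / len(data)
    return weights - lr * grad

def federated_round(global_w, device_datasets):
    """Each device trains locally; the server only averages
    the returned weights, never sees the raw data."""
    local_ws = [local_update(global_w, d) for d in device_datasets]
    return sum(local_ws) / len(local_ws)

# Each list stays on its device; only weights travel:
datasets = [[1.0, 2.0], [3.0], [2.0, 2.0]]
w = 0.0
for _ in range(200):
    w = federated_round(w, datasets)
print(round(w, 2))  # → 2.17 (converges to the average of the device means)
```

The same shape scales up: replace the scalar with neural-network weights and the averaging step with federated averaging over many clients.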
In IT security, of course, a system that works as poorly as the text-suggestion feature on a factory-fresh mobile phone would never make it to market. This is one reason why considerable basic research and work are still needed. The effort is rewarded, however, by the fact that each component in such a system can be very economical with data: in addition to a shared base model, it only has to account for the specific characteristics of the user in question. The concept can be extended to industries or other subgroups and applied in various combinations.
Being able to make decisions locally has several IT security benefits. First, threats can be countered faster. Second, less central coordination is required. Third, fewer people need to intervene. Fourth, data protection and privacy are easier to implement when only a small amount of data is sent to central points for analysis. Last but not least, this can significantly reduce costs: processing the sensor data of thousands of end devices in the cloud or at a central point requires considerable computing power and storage space, whereas the endpoints themselves can often perform this task “on the side”.
Since the end of 2019, WithSecure has been testing the first elements of this concept as part of its Rapid Detection & Response Service. The experience so far is encouraging, although work remains before this kind of collective intelligence is broadly applicable and widely available. More about Project Blackfin and its future path can be found in this white paper.