Security monitoring is a difficult task. People, procedures, technology, and data all play a role in building a security monitoring infrastructure, and it takes numerous iterative phases to reach maturity. Security data can come from a variety of places, but the most common approach at the time of writing is to collect it by consuming log files from every possible asset and sending them to a SIEM or log management system such as Splunk, SumoLogic, or Elastic.
That’s before you even take the digital supply chain into account. Over the last decade, you’ve moved away from on-premises software and toward cloud-based services and SaaS apps. The numbers add up quickly: a typical firm can use anywhere from 100 to 300 SaaS apps. Consider the exponential growth of the SaaS market landscape for just one business category over the past decade. This growth shows no sign of slowing down, so we must monitor numerous cloud services and SaaS applications for security. This change in the digital supply chain necessitates a shift in security monitoring procedures. When software was installed locally, you could nearly always rely on being able to read its log files and upload them to your SIEM or log management system.
Cloud and SaaS Make Security Monitoring Difficult
With cloud services and SaaS apps, you will not have direct access to log information, and many services do not allow indirect access either. Even when access is available, it is difficult to obtain and often comes at an additional cost. So, instead of reviewing on-premises logs, you’ll need to use APIs to collect security-related events from cloud services, if such APIs exist.
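To make that concrete, here is a minimal sketch of pulling events from a vendor’s audit-log API, assuming the vendor exposes one. The endpoint URL, parameter names, and response fields below are hypothetical placeholders rather than any real product’s API; the pattern of authenticated, paginated polling is what matters.

```python
# Minimal sketch of pulling security events from a cloud provider's audit API.
# The endpoint, parameters, and response fields are hypothetical placeholders;
# substitute the real audit-log API your SaaS or cloud vendor exposes (if any).
import time
import requests

AUDIT_API = "https://api.example-saas.com/v1/audit/events"  # hypothetical endpoint
API_TOKEN = "..."  # provisioned in the vendor's admin console

def fetch_events(since_epoch: int) -> list[dict]:
    """Fetch audit events newer than `since_epoch`, following pagination."""
    events, cursor = [], None
    while True:
        params = {"since": since_epoch}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            AUDIT_API,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            params=params,
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        events.extend(page.get("events", []))
        cursor = page.get("next_cursor")
        if not cursor:
            return events

if __name__ == "__main__":
    # Poll every few minutes and hand the events to your SIEM or log pipeline.
    last_poll = int(time.time()) - 300
    for event in fetch_events(last_poll):
        print(event)  # replace with a forwarder to your collector
```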
Whether you’re operating software on-premises, in the cloud, or in a hybrid of the two, identifying all of your software assets is a significant challenge. Your asset inventory tells you about the attack surface you’re trying to monitor and defend. Given the dynamic way assets are instantiated in cloud and similar environments, tooling that enables continual discovery of new assets becomes a must once you know your asset portfolio. With the assets in scope identified, the instrumentation phase begins, including adding logging capabilities to your most significant investments: your applications.
Application Event Logging Is Crucial
It’s vital to realize that only a subset of the events collected across different assets, whether from log files or event APIs, is significant for security monitoring. Custom application events are what I’m referring to; they provide far more insight than infrastructure logging alone. Unfortunately, application event logging is frequently absent, disabled, or improperly configured, leaving security teams with a blind spot exactly where they need the most visibility.
The value of application logs lies in their ability to:
- Recognize security incidents.
- Monitor for policy violations.
- Support non-repudiation controls by providing assurance of the origin and integrity of security event data, and help establish baselines of “typical” behavior.
- Report attacks, security breaches, and other unusual conditions.
- Provide application-specific data for incident investigation that other log sources cannot supply.
- Identify potential security flaws.
- Help detect and defend against the exploitation of vulnerabilities through attack detection.
This OWASP article goes over how to set up application event logging.
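As a concrete illustration of what such application event logging can look like, here is a minimal sketch of emitting structured security events as JSON lines from a Python application. The field names are illustrative, not a formal schema taken from the OWASP guidance; adapt them to your own logging standard.

```python
# A minimal sketch of structured application security-event logging.
# Field names are illustrative, not a formal schema.
import json
import logging
from datetime import datetime, timezone

security_log = logging.getLogger("app.security")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_security_event(action: str, outcome: str, user: str, source_ip: str, **extra):
    """Emit one security event as a single JSON line, easy to ship to a SIEM."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "security",
        "action": action,        # e.g. "login", "password_change", "privilege_grant"
        "outcome": outcome,      # e.g. "success" or "failure"
        "user": user,
        "source_ip": source_ip,
        **extra,
    }
    security_log.info(json.dumps(event))

# Example: record a failed login attempt
log_security_event("login", "failure", user="alice", source_ip="203.0.113.7",
                   reason="invalid_password")
```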
Once you’ve started collecting log files, capturing security events via APIs, and instrumenting your applications with security events, the real work begins: mining for potential security events and anomalies to inform the alerting and incident response process.
Note from the Sponsor
InfluxData, the company behind the open-source time-series platform InfluxDB, gives developers the tools to create disruptive monitoring, analytics, and IoT applications faster and at scale. IoT devices, apps, networks, and containers generate vast amounts of time-stamped data, which the platform manages.
Time-Series Databases Enhance Security Monitoring
Transform all of your log data and security events into collections of time series, and a time-series database becomes a natural, even necessary, solution. Stored this way, events can easily be correlated across dependent or connected assets, indicators can be articulated, and the compromise vector can be tracked. As a result, faster incident detection, response, remediation, and forensics workflows are possible.
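As a sketch of what that transformation can look like, the snippet below writes one security event as a time-series point using the open-source InfluxDB Python client (`influxdb-client`). The URL, token, org, and bucket are placeholders for your own deployment, and the tag/field layout is one reasonable choice rather than a prescribed schema.

```python
# A sketch of normalizing a security event into a time-series point, using the
# open-source InfluxDB Python client (pip install influxdb-client).
# URL, token, org, and bucket values are placeholders for your own deployment.
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One failed-login event becomes a point: tags are indexed dimensions you can
# correlate across assets (host, user, outcome); fields carry the values.
point = (
    Point("auth_events")
    .tag("host", "web-01")
    .tag("user", "alice")
    .tag("outcome", "failure")
    .tag("source_ip", "203.0.113.7")
    .field("count", 1)
    .time(datetime.now(timezone.utc), WritePrecision.NS)
)
write_api.write(bucket="security", record=point)
```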
Traditional log-management-driven security monitoring requires huge storage subsystems filled mostly with useless data, and it cannot respond at the pace and scale required for effective monitoring and response. Time-series databases, on the other hand, normalize security event data into an efficient, standardized format on ingest. This allows you to store security data cost-effectively and index it on multiple characteristics for quick searches; some time-series databases, for example, report query response times in the tens of milliseconds. With this efficient data structure, you can store more events for less money.
Time-series databases are also ideal for measuring security metrics. Numerous metrics can be tracked (a sketch of computing one appears after this list), including the number of:
- Authentication attempts over time
- Failed authentication attempts over time
- Successful authentication attempts over time
- Unique accounts over time
- IP addresses per account over time
- Accounts per IP address over time
- Privileged operations over time
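Here is a minimal sketch of deriving one of these metrics, failed authentication attempts per hour, from a stream of event records like those logged earlier. The event field names are assumptions carried over from the earlier example, not a standard format.

```python
# A minimal sketch of computing failed authentication attempts per hour from
# event dictionaries like those logged earlier. Field names are illustrative.
from collections import Counter
from datetime import datetime

def failed_auth_per_hour(events: list[dict]) -> Counter:
    """Count failure-outcome login events, bucketed by hour."""
    buckets = Counter()
    for e in events:
        if e.get("action") == "login" and e.get("outcome") == "failure":
            ts = datetime.fromisoformat(e["timestamp"])
            buckets[ts.replace(minute=0, second=0, microsecond=0)] += 1
    return buckets

events = [
    {"timestamp": "2024-11-18T10:05:00+00:00", "action": "login", "outcome": "failure"},
    {"timestamp": "2024-11-18T10:40:00+00:00", "action": "login", "outcome": "failure"},
    {"timestamp": "2024-11-18T11:02:00+00:00", "action": "login", "outcome": "success"},
]
for hour, count in sorted(failed_auth_per_hour(events).items()):
    print(hour.isoformat(), count)
```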
You can use machine learning to build a behavioral model of typical usage and then apply advanced anomaly detection techniques to real-time events in the time-series dataset. For example, you can employ Median Absolute Deviation, Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), and Naive Bayes classifiers.
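As a small illustration of one of the techniques named above, the sketch below applies Median Absolute Deviation to a series of per-hour failed-login counts. The 3.5 threshold is a common rule of thumb for modified z-scores, not a value taken from this article.

```python
# A sketch of Median Absolute Deviation (MAD) anomaly detection applied to
# per-interval failed-login counts.
import statistics

def mad_anomalies(values: list[float], threshold: float = 3.5) -> list[int]:
    """Return indices whose modified z-score exceeds the threshold."""
    median = statistics.median(values)
    abs_dev = [abs(v - median) for v in values]
    mad = statistics.median(abs_dev)
    if mad == 0:
        return []
    # 0.6745 scales MAD to be comparable to a standard deviation for normal data.
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - median) / mad) > threshold]

failed_logins_per_hour = [3, 4, 2, 5, 3, 4, 97, 3, 2]
print(mad_anomalies(failed_logins_per_hour))  # -> [6], the spike of 97
```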
Despite these advantages, security monitoring programs rarely use time-series databases today, although the security community is beginning to take notice, and I believe adopting them is a good idea. To be clear, time-series databases should supplement rather than replace SIEM and other log-based security monitoring systems. I expect SIEM vendors will eventually include time-series databases as an integrated component of their solutions.
A community template recently contributed by our partner Bonito is an excellent example of a lightweight security monitoring application built on a time-series platform that you can start using right now. If your application needs port 22 open for SSH access, this tool monitors for abusive IP addresses and temporarily bans them. We are putting this idea into practice in our own organization and have started building security monitoring applications on our time-series technology, although we’re still in the early stages.
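The template itself is the real example, but the core idea can be sketched in a few lines: count failed SSH logins per source IP over a window and flag offenders for a temporary ban. The log-line pattern below is the common OpenSSH “Failed password” format; the actual community template integrates with the time-series platform rather than parsing logs this way.

```python
# A rough sketch of the idea behind the SSH-monitoring template described above:
# count failed SSH logins per source IP and flag candidates for a temporary ban.
import re
from collections import Counter

FAILED_SSH = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)"
)

def abusive_ips(log_lines: list[str], max_failures: int = 5) -> list[str]:
    """Return source IPs with more failed SSH logins than `max_failures`."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_SSH.search(line)
        if match:
            counts[match.group(1)] += 1
    return [ip for ip, n in counts.items() if n > max_failures]

# In practice you would feed this a sliding window of /var/log/auth.log entries
# and hand the offending IPs to a firewall rule or fail2ban-style action.
sample = [
    "Nov 18 10:00:01 host sshd[123]: Failed password for root "
    "from 198.51.100.9 port 4242 ssh2",
] * 6
print(abusive_ips(sample))  # -> ['198.51.100.9']
```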
About Enteros
IT organizations routinely spend days and weeks troubleshooting production database performance issues across multitudes of critical business systems. By resolving database performance problems quickly and reliably, Enteros enables businesses to generate and save millions in direct revenue, minimize lost employee productivity, reduce the number of licenses, servers, and cloud resources, and maximize the productivity of application, database, and IT operations teams.