As Threat Actors have become more sophisticated and developed better evasion techniques, Auditing/Monitoring has grown into a large and complex topic area. So, we break this operational need out into more specific capability areas, each discussed in a separate article.
Having selected, and hopefully deployed, an initial set of Sensors and Sensor Tuning capabilities, Security Operations must now focus on remote Monitoring of those sensors, including the methodologies and capabilities for data Collection and Aggregation. All this monitoring data needs to be centralized somewhere (possibly in a tiered approach for very large organizations) and prepped for detection analysis. We address this as a separate Capability Area because it is a non-trivial challenge.
Early remote monitoring protocols and standards such as syslog and SNMP are still heavily relied upon in this area. Since the 1990s, SIEMs have attempted to address some of this collection and aggregation challenge with proprietary approaches. More recently, the vulnerability auditing and management market has seen some convergence on NIST's Security Content Automation Protocol (SCAP) standard. Microsoft's proprietary Windows Event Forwarding also enables secure consolidation of monitoring data. And in environments that demand deep monitoring to support digital forensics, there is a periodic need to capture and record even more detailed network connection data, such as NetFlow/IPFIX records, SuperFlows (a group of unidirectional flows from one host to many hosts), and full packet capture (PCAP) data.
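To make the collection side concrete, here is a minimal sketch (in Python, using only the standard library) of how a single host might forward security events to a central syslog collector. The collector address and event text are placeholder assumptions; in practice this job is usually handled by a hardened agent or the platform's own syslog daemon rather than application code.

```python
import logging
import logging.handlers

# Placeholder collector address; a real deployment would point at the
# central (or tier-local) aggregation point, ideally on an out-of-band network.
COLLECTOR_HOST = "127.0.0.1"
COLLECTOR_PORT = 514  # standard syslog UDP port

handler = logging.handlers.SysLogHandler(
    address=(COLLECTOR_HOST, COLLECTOR_PORT),
    facility=logging.handlers.SysLogHandler.LOG_AUTH,
)
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))

logger = logging.getLogger("edge-sensor")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Each call emits one syslog datagram toward the central collector.
logger.warning("repeated failed ssh logins from 203.0.113.45")
```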
Many organizations begin with a “Collect Everything” approach, grabbing and storing as much data as their sensors can provide and their (hopefully out-of-band) network bandwidth can support. This is not surprising, given that these organizations are not yet certain what they are looking for. However, more experienced Security Operations teams would caution: “Just because you can, doesn’t mean you should.”
In even a modest-sized organization of 100s to 1000s of systems and devices, the simplistic “collect everything, continuously” approach does not scale. Instead, it typically results in enormous volumes of data being centrally collected (e.g., 10s of millions of events per day): security “Big Data” that often proves to be little more than distracting noise, consuming huge amounts of network bandwidth and storage space and far exceeding the Tier 1 & 2 Analysts’ ability to even review it. This eventually drives demands for sophisticated, real-time data pipeline/streaming Extract-Transform-Load (ETL) technologies, and for advanced Detection Analytics (discussed in a follow-on article) that use automated analysis approaches such as artificial intelligence (AI) and machine learning (ML) to take the humans “out of the loop”.
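As a rough illustration of the filtering and aggregation such a pipeline stage performs, the sketch below drops low-severity noise and collapses duplicate events into counts before anything is forwarded centrally. The event fields, severity scale, and threshold are illustrative assumptions, not a standard schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    source: str      # e.g., hostname of the reporting sensor
    category: str    # e.g., "auth_failure", "port_scan"
    severity: int    # assumed scale: 0 (info) .. 10 (critical)

def reduce_batch(events, min_severity=3):
    """Drop low-severity noise and collapse duplicates into counts."""
    kept = [e for e in events if e.severity >= min_severity]
    return Counter(kept)  # one aggregate record per distinct event

raw = [
    Event("web01", "auth_failure", 5),
    Event("web01", "auth_failure", 5),
    Event("web01", "heartbeat", 0),   # routine noise, filtered out
    Event("db02", "port_scan", 7),
]

for event, count in reduce_batch(raw).items():
    print(f"{event.source} {event.category} sev={event.severity} x{count}")
```

Where such a stage sits in the pipeline, and what it is allowed to discard, are exactly the kinds of decisions a CTI-informed strategy (below) can help make.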
Where resources permit, Cyber Threat Intelligence (CTI) capabilities can significantly inform a more conservative, more effective monitoring strategy. But as more and more data is continuously collected, Security Operations teams will need to invest in capabilities that can store-and-forward, stream/ETL, aggregate, filter, and fuse this data into actionable information that proves useful, rather than overwhelming, to Tier 1 and 2 Analysts.
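As one simple illustration of how CTI can inform that filtering and fusion, the sketch below tags collected events that match a (hypothetical) set of known-bad indicators so the most relevant events surface to analysts first. Real CTI feeds and matching logic are considerably richer than an IP watchlist.

```python
# Hypothetical CTI indicator set; in practice this would be refreshed
# from one or more threat intelligence feeds.
CTI_BAD_IPS = {"203.0.113.45", "198.51.100.7"}

def triage(event):
    """Tag an event as high priority if it matches a CTI indicator."""
    if event.get("src_ip") in CTI_BAD_IPS:
        return {**event, "priority": "high", "reason": "matches CTI indicator"}
    return {**event, "priority": "routine"}

events = [
    {"src_ip": "203.0.113.45", "category": "auth_failure"},
    {"src_ip": "192.0.2.10", "category": "auth_failure"},
]

for e in events:
    print(triage(e))
```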
Ultimately, organizations will also need to establish a Retention Policy to govern how long they keep all this data available online, balancing the desire for an infinite historical perspective against the practical constraints of budget and search/query performance.
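A retention policy can start as something as simple as a tiered set of age thresholds. The sketch below is one illustrative way to express such a policy in code; the tier names and durations are assumptions, not recommendations.

```python
from datetime import datetime, timedelta, timezone

# Illustrative tiers and durations only; actual values depend on budget,
# regulatory requirements, and query-performance constraints.
RETENTION_POLICY = {
    "hot":     timedelta(days=30),    # online, fast to search
    "archive": timedelta(days=365),   # cheaper storage, slower queries
}

def disposition(record_time, now=None):
    """Decide where a record belongs under the retention policy."""
    now = now or datetime.now(timezone.utc)
    age = now - record_time
    if age <= RETENTION_POLICY["hot"]:
        return "hot"
    if age <= RETENTION_POLICY["archive"]:
        return "archive"
    return "delete"

print(disposition(datetime.now(timezone.utc) - timedelta(days=5)))    # "hot"
print(disposition(datetime.now(timezone.utc) - timedelta(days=400)))  # "delete"
```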