How Effective is Your Threat Detection and Response Management Program?

By Will Armijo on April 11, 2018 (Last Updated on June 25, 2021)


It’s not uncommon for me to be asked how often an organization should review its own InfoSec alerting framework and library. My answer usually goes as follows: Like so much of security, nothing is straightforward, but there are some fundamental best practices that provide critically helpful guidance. The bottom line is that Threat Detection and Response (TD&R) management is really a lifecycle operation.

The Threat Detection and Response effectiveness review

To truly maintain TD&R effectiveness, you must start by reviewing your current TD&R processes and tooling. This includes:

  • An audit of all log sources and agent configurations
  • An audit of any log collection and forwarding solutions you use, such as a centralized log management or Security Information and Event Management (SIEM) solution
  • Verification that your alert framework and threat library are up to date with the latest threat detection information
  • An audit of your threat intelligence feeds to ensure they apply to your environment and provide enough information to make data-driven decisions

A comprehensive, current alert library is the underpinning of a solid threat management program. In addition, proper auditing of the threat management framework provides further assurance of good, clean log collection and processing for every source enforcing technical controls in the environment.
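Before walking through each step, it helps to have a concrete picture of what an auditable inventory might look like. Below is a minimal sketch in Python of a log-source inventory record; the field names and values are my own illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LogSource:
    """One entry in a log-source inventory used throughout the audit.
    All field names here are illustrative, not a prescribed schema."""
    hostname: str
    role: str                                     # e.g. "domain-controller", "firewall"
    agent_version: str                            # forwarding agent installed on the host
    channels: list = field(default_factory=list)  # event logs being forwarded
    destination: str = ""                         # collector/SIEM endpoint
    last_reviewed: date = None

# Example entry for the Domain Controller discussed below.
dc01 = LogSource(
    hostname="dc01.example.local",
    role="domain-controller",
    agent_version="7.4.2",
    channels=["Application", "Security", "System"],
    destination="collector01.example.local:514",
    last_reviewed=date(2018, 4, 1),
)
print(dc01)
```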


1. Source Logs and Agent Configurations

The sources in an environment act as sensors, responsible for sending their event data to a centralized intelligence system such as a SIEM for analysis of system and user activities. The care and feeding of your source systems through asset management is critical to ensuring security teams can fully and accurately distinguish malicious from benign events. When the systems responsible for enforcing technical controls are visible through their log data, an analyst gains increased North/South and, more importantly, East/West traffic visibility, which is where the majority of attacks are missed.

By starting the audit at the source systems, you should be able to quickly identify which devices are sending their log data and determine the type of event data they produce. For example, a source such as a Domain Controller should at the very least be sending its Application, Security, and System log events to a centralized location for additional analysis. This is typically achieved with an installed agent running as a service that reads, filters, and forwards log event data to an authorized destination where the intelligence is applied.
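As a quick illustration, here is a sketch of how you might confirm from the Domain Controller itself that the expected event channels exist before checking the agent's forwarding configuration. It uses the built-in Windows wevtutil utility; the required-channel set is an assumption for the example.

```python
import subprocess

# Channels this host is expected to forward (assumption for the example).
REQUIRED_CHANNELS = {"Application", "Security", "System"}

# "wevtutil el" enumerates the event log channels on a Windows host.
result = subprocess.run(["wevtutil", "el"], capture_output=True, text=True, check=True)
present = {line.strip() for line in result.stdout.splitlines() if line.strip()}

missing = REQUIRED_CHANNELS - present
if missing:
    print(f"WARNING: expected channels not found: {', '.join(sorted(missing))}")
else:
    print("All required channels exist; next, confirm the agent forwards each one.")
```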

By auditing the configuration of the installed agent, you should be able to determine whether you are collecting all of the relevant security information you need. This basic gap analysis should be performed at least twice a year, or whenever devices change in the environment. Comparing asset detection scans against your known log sources will help identify device and environment changes, as shown in the sketch below.
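A minimal sketch of that gap analysis, assuming you can export your asset-detection scan results and your SIEM's known log sources to CSV (the file names and column headings here are hypothetical):

```python
import csv

def hostnames(path, column):
    """Read one column of hostnames from a CSV export, normalized for comparison."""
    with open(path, newline="") as fh:
        return {row[column].strip().lower() for row in csv.DictReader(fh)}

discovered = hostnames("asset_scan.csv", "hostname")    # from the asset-detection scan
reporting  = hostnames("siem_sources.csv", "hostname")  # sources known to the SIEM

# Devices on the network that are not sending logs: the visibility gap.
for host in sorted(discovered - reporting):
    print(f"GAP: {host} was discovered but is not a known log source")

# Stale entries: sources the SIEM knows about that no longer appear on the network.
for host in sorted(reporting - discovered):
    print(f"STALE: {host} is a known log source but was not discovered")
```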

2. Log Collectors

Log collector configurations define which source systems are authorized to send their data and how log event data is normalized before it is sent to the correlation engine or alert library. Auditing the log collectors means confirming that all of the source systems you just reviewed are sending their event data and that it is being normalized correctly.
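Here is a hedged sketch of the normalization half of that audit: feed a known sample event through your parsing logic and assert that the fields you rely on actually come out. The regex and field names model a generic syslog-style message and are assumptions for the example; substitute your collector's actual rules.

```python
import re

# A generic syslog-style pattern; substitute your collector's actual parsing rules.
PATTERN = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s"
    r"(?P<process>[\w\-/]+)(\[\d+\])?:\s(?P<message>.*)$"
)

REQUIRED_FIELDS = {"timestamp", "host", "process", "message"}

sample = "Apr 11 09:14:02 dc01 sshd[2112]: Failed password for root from 203.0.113.7 port 4242 ssh2"

match = PATTERN.match(sample)
assert match, "sample event failed to parse at all"
normalized = match.groupdict()

missing = REQUIRED_FIELDS - {k for k, v in normalized.items() if v}
assert not missing, f"normalization dropped fields: {missing}"
print(normalized)
```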

After the source systems and collectors have been reviewed for accuracy, it's time to move on to the alert framework and threat library within the central intelligence system, the next component of the TD&R management lifecycle audit.

3. Alert Framework and Threat Library

The alert framework and threat library are how your event data becomes actionable; therefore, it is important that they are up to date with the latest threat detection information used during attack-pattern matching against your log data. These libraries should also include indicators of compromise (IoCs) from “known good” threat intelligence feeds such as AlienVault’s OTX feed. IoCs capture both attacker and victim information from known attacks on other organizations. They are useful for quickly matching attack information across your own multiple log sources in order to identify attacks ranging from basic to complex. In my opinion, a good example of a threat library is the one that ships with OSSEC.
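As an illustration, the sketch below pulls indicators from AlienVault OTX's subscribed-pulses API and checks them against a set of IPs seen in your own logs. It assumes you have registered for an OTX API key and that the observed log IPs have already been extracted elsewhere; treat the endpoint and response handling as a sketch and check OTX's current API documentation before relying on it.

```python
import requests

OTX_URL = "https://otx.alienvault.com/api/v1/pulses/subscribed"
API_KEY = "YOUR-OTX-API-KEY"  # assumption: you have registered for an OTX key

resp = requests.get(OTX_URL, headers={"X-OTX-API-KEY": API_KEY}, timeout=30)
resp.raise_for_status()

# Collect IPv4 indicators from the subscribed pulses.
feed_ips = {
    ind["indicator"]
    for pulse in resp.json().get("results", [])
    for ind in pulse.get("indicators", [])
    if ind.get("type") == "IPv4"
}

# IPs observed in your own log data (hypothetical; extracted elsewhere).
observed_ips = {"203.0.113.7", "198.51.100.23"}

for hit in sorted(observed_ips & feed_ips):
    print(f"IoC match: {hit} appears in both your logs and the OTX feed")
```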

If the library is out of date or not working properly, identification of new attacks will fail. Attackers’ methodologies are always evolving; the alert framework and threat libraries must evolve with them to keep the TD&R management lifecycle effective. Ask yourself: has my risk profile changed? The answer should be yes, because the threat landscape extends beyond your perimeter. Good, authoritative threat intelligence feeds are key to maintaining updated and accurate alert frameworks and threat libraries.

4. Threat Intelligence

Maintaining the accuracy of your threat intelligence feeds ensures they are still applicable to your environment and give you enough contextual information for better data-driven decisions. Ensure your threat intelligence data is actually being consumed and applied by your TD&R workflows in real time or near real time. Finally, make sure the feeds actually apply to your business type (financial, retail, etc.), systems, and environments. Using the Domain Controller example from earlier, make sure your intelligence feeds include IoC threat information related to the Windows Server OS and any applications running on that system. If you are swapping one technology out for another, make sure the project includes verifying your threat feeds and adding IoCs for the new systems.
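A simple sketch of that applicability check: given indicators annotated with platform tags (a hypothetical shape; real feeds annotate indicators differently), keep only those relevant to the platforms you actually run.

```python
# Platforms actually present in the environment (from the asset inventory).
MY_PLATFORMS = {"windows-server", "iis", "active-directory"}

# Hypothetical feed entries; real feeds annotate indicators differently.
feed = [
    {"indicator": "198.51.100.99", "type": "IPv4", "platforms": {"windows-server"}},
    {"indicator": "evil.example.net", "type": "domain", "platforms": {"linux", "apache"}},
    {"indicator": "badmacro.docm", "type": "filename", "platforms": {"windows-server", "office"}},
]

# Keep an indicator only if it overlaps with a platform we run.
applicable = [ioc for ioc in feed if ioc["platforms"] & MY_PLATFORMS]
for ioc in applicable:
    print(f"keep: {ioc['indicator']} ({ioc['type']})")
```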

The Future of Threat Detection

I have a few thoughts on this, but machine learning with advanced algorithms for quickly and accurately triaging events is one of my favorites. This is where SIEM vendors, MSSPs, and SOCs are moving…and if they aren’t, they should be. Moving from static to more dynamic alert frameworks and threat libraries reduces the time it takes to audit those frameworks and libraries, and it enhances your ability to move toward predictive behavioral analytics.
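As one hedged illustration of that direction, the sketch below trains scikit-learn's IsolationForest on a few simple per-event features and flags outliers for analyst triage. The features and data are invented for the example; a production model would need far richer inputs and ongoing retraining.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented per-event features: [events/minute from host, distinct dest ports, KB sent out]
baseline = np.array([
    [12, 2, 40], [15, 3, 55], [10, 2, 35], [14, 2, 60],
    [11, 3, 45], [13, 2, 50], [16, 3, 65], [12, 2, 42],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

new_events = np.array([
    [13, 2, 48],     # looks like the baseline
    [140, 45, 900],  # burst of traffic to many ports: likely worth triage
])

# predict() returns 1 for inliers and -1 for outliers.
for features, label in zip(new_events, model.predict(new_events)):
    verdict = "anomalous -> triage" if label == -1 else "normal"
    print(features.tolist(), verdict)
```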

Conclusion

Assessing each component of your Threat Detection and Response Management Program ensures better accuracy and quicker alerting by decreasing the time it takes to analyze anomalies for malicious behavior. Monthly vulnerability scanning and annual penetration testing are great ways to both create and test the baseline for your Threat Detection and Response Management Program.

The next blog in this series will look at auditing the heart of your Threat Detection and Response Program: the alert framework and threat libraries. We’ll walk through a few examples of different library inputs and how to properly manage and test them. I’ll also cover how to apply threat intelligence data appropriately to enhance dynamic detection processes while reducing false positives. And finally, we’ll look at how to apply internal technical policies to the library for baselining user behavior.
