My team and I have delivered or participated in several hundred, if not more, PCI assessments over the past ten years. I find that the PCI DSS has matured nicely; the PCI Council has done a great job of walking the fine line between keeping the standard relevant and keeping it reasonable. The standard continues to evolve to address the latest threats and issues, and overall it is one of the most detailed, prescriptive, and well-defined security frameworks in existence. That said, there are a few potential loophole areas that I consider opportunities for improvement. This blog post is based on dozens of discussions with my peers (hey, when you're on the road a lot, sometimes dinner conversations turn to information security or PCI!). I wouldn't be surprised to see some variations of these loopholes addressed in future versions of the PCI DSS.
Unauthenticated vulnerability scans (requirement 11.2): We've worked with many clients who perform only unauthenticated vulnerability scans against their internal networks, as the PCI DSS does not specifically require authenticated scans. The issue with unauthenticated scans is that they do not necessarily paint an accurate picture of a system's patch level or security posture, since they don't allow a vulnerability scanner to fully and accurately enumerate the target system. An additional challenge we've encountered involves SSO environments and their associated SAML assertions, where valid authentication credentials can't be passed to the scanning tool(s) for authentication against the target system.
Recommendation: Require authenticated scans for internal vulnerability scanning, as this provides greater accuracy, though proceed cautiously to ensure that the scans do not degrade network or system performance while running. If authenticated scans are not feasible, additional rigor may be required to ensure that potential target-system vulnerabilities have been correctly identified. This could include limited penetration testing and other methods of enumerating and tracking the "true" state of the system component.
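To illustrate the accuracy gap, here is a minimal Python sketch (all versions and the package-release comparison are made up for illustration) of why a banner-based, unauthenticated check can flag a system with a vendor-backported patch as vulnerable, while a credentialed check of the actual installed package would not:

```python
# Hypothetical illustration: an unauthenticated scan infers patch level
# from a service banner, while an authenticated scan can read the package
# database, which reflects vendor backports the banner never shows.

def parse_version(text):
    """Turn '2.4.6' into a comparable tuple, e.g. (2, 4, 6)."""
    return tuple(int(p) for p in text.split("."))

def unauthenticated_view(banner, fixed_in):
    # Only the advertised version is visible over the network.
    advertised = banner.rsplit("/", 1)[-1]
    return parse_version(advertised) < parse_version(fixed_in)

def authenticated_view(package_release, fixed_release):
    # A credentialed scan can compare the vendor build number (e.g. the
    # '-97' release suffix that carries a backported security fix).
    return package_release < fixed_release

# A RHEL-style host: the banner says Apache 2.4.6 (looks vulnerable),
# but package release 97 already contains the backported fix.
banner_flags = unauthenticated_view("Apache/2.4.6", fixed_in="2.4.39")
authed_flags = authenticated_view(package_release=97, fixed_release=90)
print(banner_flags, authed_flags)  # → True False (the banner check false-positives)
```

This is exactly the kind of false positive that extra rigor, such as limited penetration testing, would be needed to rule out when only unauthenticated scans are available.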
Process monitoring: Attackers have become increasingly sophisticated in their efforts to bypass traditional preventative controls on their way to taking control of (or modifying) systems and applications. It is not uncommon for attackers to build and test their malware in lab environments stocked with fully patched operating systems, anti-malware solutions, and monitoring systems, refining the malware until it bypasses them all. While file integrity monitoring solutions may detect unauthorized changes to static files, it is very difficult to configure them to be completely foolproof.
Recommendation: It may be high time to monitor running processes/applications, or even open ports, so that any unauthorized or unexpected process or port generates an alert. Ultimately, preventative measures will move from signature-based controls to behavioral heuristics. Many of our clients are beginning to pour their efforts into baselining their intrusion prevention systems (IPS) in preparation for ultimately flipping the bit to "blocking" mode.
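The core of such process allow-listing is simple to sketch. Here is a minimal Python example (the baseline names and the snapshot are invented; in practice the running set would come from the OS, e.g. by parsing `ps` output):

```python
# Hypothetical process allow-listing sketch: alert on anything running
# that was not in the approved baseline for this system.

APPROVED = {"sshd", "java", "postgres", "rsyslogd"}  # illustrative baseline

def unexpected(running, approved=APPROVED):
    """Return process names that should raise an alert."""
    return sorted(set(running) - set(approved))

# A hand-fed snapshot stands in for a live query of running processes.
snapshot = ["sshd", "java", "cryptominer", "postgres"]
print(unexpected(snapshot))  # → ['cryptominer']
```

The same set-difference idea applies to open ports: baseline the expected listeners, then alert on any socket that appears outside the baseline.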
Strong administrative-level authentication controls: Attackers are often successful because they find or create ways to gain administrative credentials. Most of the breaches over the past couple of years share a common success factor (for the attackers): obtaining administrative credentials.
Recommendation: There are several means of creating stronger controls to protect administrative access, including separate credentials for administrative tasks (instead of administrators using the same credentials for both administration and everyday "mortal" use) or requiring multi-factor authentication for administrative access.
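As a concrete example of the second factor, here is a minimal RFC 6238 TOTP implementation in Python using only the standard library; this is the mechanism behind most authenticator apps, shown here as a sketch rather than production code (the shared secret below is the published RFC test key):

```python
# Minimal RFC 6238 TOTP sketch: derives a 6-digit one-time code from a
# shared secret and the current 30-second time step.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test key "12345678901234567890" at t=59 seconds; the 8-digit
# reference value is 94287082, so the 6-digit code is its last 6 digits.
print(totp(base64.b32encode(b"12345678901234567890"), for_time=59))  # → 287082
```

Gating administrative logins on a code like this, in addition to a password, means a stolen credential alone is no longer enough to reach an administrative session.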
What about key management for data in transit? The PCI DSS is quite prescriptive in defining mechanisms for protecting keys used to encrypt data at rest, but it doesn't really touch on how best to protect keys used for data in transit. We've seen several instances where SSH keys or PGP private keys are not as well protected as they ought to be.
Recommendation: Even though the PCI DSS does not explicitly describe controls for protecting SSH or PGP private keys, organizations should adhere to industry best practices to ensure that these keys are reasonably protected.
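One of the simplest gaps we see is filesystem permissions on private keys. The following Python sketch (a hypothetical audit helper, not a PCI-mandated control) flags key files whose POSIX mode grants any group or other access:

```python
# Illustrative audit sketch: flag private key files readable by anyone
# beyond the owner (e.g. mode 0644 instead of 0600).
import os
import stat
import tempfile

def loose_keys(paths):
    """Return (path, mode) pairs for files with group/other permission bits set."""
    flagged = []
    for path in paths:
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & 0o077:  # any group/other bit set is too permissive for a key
            flagged.append((path, oct(mode)))
    return flagged

# Demo with a throwaway file chmod'ed world-readable:
with tempfile.NamedTemporaryFile(delete=False) as f:
    os.chmod(f.name, 0o644)
print(loose_keys([f.name]))  # reports the file with mode 0o644
os.unlink(f.name)
```

In practice you would point this at the directories where SSH and PGP key material lives, and treat any hit as a finding.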
Logging: The PCI DSS calls for logs to be captured and retained in accordance with the requirements set out in 10.2 and 10.3, but these requirements don't necessarily cover all of the information that should be logged in case a forensic analysis must be performed. For example, there is no requirement to log successful firewall "allows," yet this information may prove very useful if an attacker succeeds in exfiltrating your data to their landing zone.
Recommendation: Yes, we all know that logging is boring and mundane. But during or after a breach, nothing is more helpful than being armed with as many log details as possible to conduct a proper forensic analysis. While there is no silver bullet to address this shortcoming, it is important to maintain whatever logs you think you would need to conduct forensics.
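To make the "allows" point concrete: if accepted connections were logged, even a trivial pass over the records could surface a large outbound transfer to an unfamiliar host. The record format below is invented for illustration; real firewall or flow logs vary by vendor:

```python
# Hypothetical forensic sketch over invented firewall "allow" records:
# flag outbound transfers above a byte threshold.
import csv
import io

ALLOW_LOG = """\
ts,action,src,dst,bytes_out
2015-06-01T02:10:11,ALLOW,10.1.2.15,203.0.113.9,48210993
2015-06-01T02:11:40,ALLOW,10.1.2.15,198.51.100.7,1204
2015-06-01T02:12:02,ALLOW,10.1.2.16,203.0.113.9,39100422
"""

def large_outbound(log_text, threshold=10_000_000):
    """Return (src, dst, bytes) for allowed flows at or above the threshold."""
    suspicious = []
    for row in csv.DictReader(io.StringIO(log_text)):
        if row["action"] == "ALLOW" and int(row["bytes_out"]) >= threshold:
            suspicious.append((row["src"], row["dst"], int(row["bytes_out"])))
    return suspicious

print(large_outbound(ALLOW_LOG))  # flags the two multi-megabyte flows to 203.0.113.9
```

Without the "allow" records in the first place, this kind of after-the-fact analysis is simply impossible, which is the whole point of the recommendation.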
Disaster Recovery and Business Continuity: A longstanding chuckle in the PCI world is that PCI cares about a lot of things, but it doesn't necessarily care about availability: if you are not connected, you can't easily lose cardholder data. This is something all organizations should address both strategically and tactically, although it should continue to remain outside the scope of PCI compliance.
Whether or not some of these elements find their way into the next version of the PCI DSS (due in fall of 2016), they are indeed things you should consider in your never-ending quest to improve your security posture since, at the end of the day, PCI compliance is a good by-product of a strong security program.