By now, most of the world has heard about the zombie alert issued in Lake Worth, Florida. Do we think that zombies were getting their day in the sun, or could it possibly be that whoever was responsible for writing the power alert application (or for testing it) was in some sort of zombie state at the time?
Neighbors, babysitters, handymen, even family members; your backyard, upstairs deck, even your own front door…
Statistics show the vast majority of burglaries and theft, especially identity theft, are perpetrated by a household acquaintance or family member. The US Department of Justice says that "Offenders were known to their victims in 65% of violent burglaries; offenders were strangers in 28%." Similarly, a quick Google search yields countless articles listing front doors, backyards, and ground-floor windows as the most common points of entry for burglars.
With RSA Conference 2018 behind us, we asked Onliner Derek Nwamadi to share some of his observations from the conference floor.
This year’s RSA Conference did not disappoint. Once again, attendees were treated to a great lineup of presentations, events, and an always impressive expo floor. Looking back at my week at RSA, one word kept coming to mind: trust. We seem to be living in an age where we are technology- and data-rich but trust-poor.
As always, some technologies and buzzwords become industry hot buttons and drive the conversation for the rest of the year (more on that later).
In light of Mark Zuckerberg’s appearance in front of Congress last week, Business Insider reported that Zuckerberg’s personal notes had the following reminder:
It’s not uncommon for me to be asked how often an organization should review its own InfoSec alerting framework and library. My answer usually goes as follows: Like so much of security, nothing is straightforward, but there are some fundamental best practices that provide critically helpful guidance. The bottom line is that Threat Detection and Response (TD&R) management is really a lifecycle operation.
Over the past decade, the level of attacks, breaches, and potential dangers to vital data has escalated to the point where organizations in every industry need to take measures to ensure their assets and technical infrastructure are safeguarded. A key part of that protection is continuous knowledge of where your environment is vulnerable and the types of risks that may threaten it. The approach you take to continuously monitoring for threats and vulnerabilities can vary based on a number of factors, such as existing technology, staffing, internal processes, and the financial impact relative to your organization’s resources and maturity. This leaves your organization with several critical decisions to consider when determining whether to take an in-house or outsourced approach to security operations.
In Part One of my blog series aimed at breaking down each section of Online’s security policy, we looked at some general best practices surrounding the development of a security policy. This included answering the question of “why develop a security policy?” and went into detail about developing the scope of content contained within. Part Two analyzed the organizational roles and responsibilities needed to implement an effective security policy. Now let’s take a look at how Electronic Communication plays into an effective policy.
As you may have heard in the news, computer researchers have recently discovered design flaws that result in security vulnerabilities in the CPU chips that power nearly all the world’s computers, including PCs, smartphones, and data center servers. These hardware bugs allow malicious programs to steal data that is being processed in computer memory. The vulnerabilities have been named ‘Meltdown’ (primarily affecting Intel chips) and ‘Spectre’ (affecting Intel, AMD, and ARM chips). The first reports were published on January 2, 2018, prior to a coordinated disclosure scheduled for the week of January 8. There is no evidence of exploitation at this time, but the publicly disclosed proof-of-concept (PoC) exploit code could result in the vulnerabilities being weaponized for malware delivery.
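For readers on Linux who want to see whether their kernel has mitigations in place, kernels 4.15 and later report the status of Meltdown, Spectre, and related hardware vulnerabilities through sysfs. Below is a minimal Python sketch (not part of any vendor tooling) that reads those files and falls back gracefully on systems that don’t expose them:

```python
from pathlib import Path

# On Linux kernels 4.15+, each file in this directory is named after a
# hardware vulnerability (e.g. "meltdown", "spectre_v2") and contains a
# one-line status such as "Mitigation: PTI" or "Not affected".
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status():
    """Return a dict mapping vulnerability name -> kernel-reported status."""
    if not VULN_DIR.is_dir():
        # Non-Linux system, or a kernel too old to report this information.
        return {}
    return {f.name: f.read_text().strip() for f in VULN_DIR.iterdir()}

if __name__ == "__main__":
    status = mitigation_status()
    if not status:
        print("This system does not expose sysfs vulnerability reporting.")
    for name, state in sorted(status.items()):
        print(f"{name}: {state}")
```

Running the script on a patched machine should show "Mitigation: ..." entries rather than "Vulnerable"; an empty result simply means the kernel predates the reporting interface or the OS is not Linux.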
As 2017 comes to an end, the latest PCI DSS 3.2 requirements (coming into effect on Feb. 1, 2018) are on the minds of many Service Providers and Merchants. A natural question stemming from these changes is "What do I do if I fail my PCI assessment?" Let's take a look back to a blog written earlier this year that answers just that.
Remember when PCI DSS version 3.2 was released way back in April 2016? We counted our blessings that the new requirements truly raised the bar - especially for Service Providers - and gave us a considerable grace period to implement them. And as it goes, 2018 seemed so far away, and implementing these changes didn’t feel urgent. Well, fast forward to today, and all of a sudden January 31, 2018 doesn’t seem so distant anymore.