Nigel Thorpe at SecureAge takes a look at the traditional approach to cyber security and suggests that we need to focus more on the data itself rather than simply trying to prevent access to it.
The spate of recent attacks, like those before it, shows that even the best defences cannot always stop motivated and skilled attackers. Whether the breach is an inside job, social engineering, a compromised user account, phishing, or a break-in through infrastructure components, what these attacks demonstrate is the need for a more data-centric approach to cyber security.
The perimeter defences of firewalls and border anti-virus have long since been augmented with much more granular controls, so that we now have anti-malware on each endpoint, together with network segmentation and the concept of ‘zero trust’. The idea here is that organisations should not automatically trust anything, whether outside or inside their networks. Access is only granted once a user is authenticated and has the authority to do what they are requesting.
But a zero-trust approach that incorporates micro-network segmentation, plus service and application-level authentication, still leaves a vulnerability. Once a user or process has passed all the checks, they have free rein over all the data in that security silo. Granted, access to other security silos will require re-authentication, but again the data within them is free for the taking.
The problem, then, is only partly to do with granting access to information. After all, it is the data that is important, not how or where it is stored. Therefore, we must also consider what the user, once authenticated, is going to do with that data.
They may be a member of staff who is motivated to sell corporate secrets; a user’s account could have been compromised, possibly via social engineering; data-stealing malware could have been released by an unwary user; or perhaps this is a disgruntled or just careless administrator. In any case, existing controls have correctly given access to data, but have no ability to prevent its theft.
The logical conclusion is that security – both authentication and protection – must be built into the data itself so that, when stolen, information remains protected. Surely this would be the ultimate in zero trust? We have long had the technology to achieve this through encryption. So why is data left unencrypted?
The first response to this is that “we already encrypt all our data with full disk encryption”. Well yes, and no. This technology is great if you leave your laptop on the train, but as soon as the system is running, any process – legitimate or malicious – is given any data requested, all silently decrypted with no questions asked. In essence, full disk encryption is ineffective as a security measure on a live system.
The next comment is that “we have special encrypted folders for the most sensitive data”. There are two major security holes in this approach. Firstly, you have to rely on individuals to identify the ‘most sensitive data’. And secondly, you must trust them to store this important information in the encrypted folder. In real life, people don’t always do what they are expected to do – in which case it’s highly likely that some sensitive data will remain unprotected with this approach.
Then there’s “we use data loss prevention technology to spot information theft”. These are all capable systems, but you need to configure them to identify data theft based on your own experience. What if the cyber-criminal uses an approach that you didn’t anticipate? Can you be sure that you’ve set the system up to identify every piece of information that would cause loss, embarrassment or legal action if stolen? And what if the hacker encrypted the files with their own encryption keys before exfiltrating them?
Another common remark is “our data classification system encrypts the most important information”. Great, but are you sure that 100% of your most sensitive data is truly strongly protected? Can users choose a lower level of classification where data is not encrypted? Are your classification filters going to match all sensitive data? Are you sure that data that is less highly classified and therefore not encrypted could not be used by a cyber-criminal for nefarious purposes? Could they build an identity with this data, for example?
Finally, “file encryption is too difficult and slow”. Well, if you use manual encryption tools then that would be a fair comment. However, an encryption system that functions with the transparency of full disk encryption, but which is implemented so that encryption is persistent on running systems, and where authentication is built into each file, would resolve these final difficulties.
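To make the idea of authentication travelling with each file concrete, here is a minimal, pedagogical sketch in Python using only the standard library. The function names and the hand-rolled construction are illustrative assumptions, not SecureAge's implementation, and real systems should use a vetted authenticated cipher such as AES-GCM rather than this SHA-256-based keystream:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a keystream generator.
    # Illustration only: production code should use AES-GCM or ChaCha20-Poly1305.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_file_blob(key: bytes, plaintext: bytes) -> bytes:
    """Return nonce + tag + ciphertext: the check travels with the data."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    # HMAC tag stored alongside the ciphertext; decryption fails without it.
    # (A real design would derive separate keys for encryption and authentication.)
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + tag + ct

def decrypt_file_blob(key: bytes, blob: bytes) -> bytes:
    """Authenticate first, then decrypt; a stolen blob is useless without the key."""
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: wrong key or tampered data")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

The point of the sketch is the shape of the blob: because the authentication tag is stored with the ciphertext itself, an exfiltrated file refuses to yield its contents to anyone without the right key, wherever it ends up.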
Organisations need to realise that it is impossible to keep cyber-criminals out of their networks and endpoints 100% of the time. After all, the cyber-criminal may be a member of staff with legitimate access to data.
Given that sometime, someone bad is going to gain access to your information, isn’t it time to realise that, for security, it’s all about the data?
Nigel Thorpe is technical director at SecureAge, a data security company providing the total protection of data when it is stored, in transit and in use, through advanced encryption technologies.