In 2024, Australia experienced a significant transformation in its digital landscape. Technological advancements, particularly in artificial intelligence, cloud computing, and IoT, have revolutionised daily life and work. However, this progress has also brought a rise in cybersecurity threats, underscoring the urgent need for robust application security measures. We have seen a remarkable surge in security incidents in the first half of the year. This pivotal moment demands a proactive approach to safeguarding our digital infrastructure.
The first half of 2024 has seen a disturbing surge in cyberattacks targeting Australian businesses, government agencies, and critical infrastructure. These incidents have ranged from ransomware attacks on healthcare providers, which crippled essential services, to data breaches at major financial institutions, exposing sensitive personal information of millions of Australians.
In March, a significant cybersecurity breach at a prominent Australian financial institution brought attention to critical vulnerabilities in application security. The attackers exploited an insecure API, leading to unauthorised access to sensitive customer data, including financial records and personal identification details. This breach sent shockwaves through the financial sector, prompting a re-evaluation of the adequacy of current application security measures across various industries.
Furthermore, the continued rise of ransomware attacks has been particularly troubling. In June 2024, a ransomware attack on a major Australian healthcare network disrupted services across multiple hospitals, delaying critical medical procedures and compromising patient care. The attackers exploited a vulnerability in a third-party application for scheduling and communication, highlighting the risks posed by insecure applications within critical systems.
These events serve as a stark reminder that as applications become more integral to our daily lives, the need for rigorous security measures becomes paramount.
In the current landscape, application security stands as a fundamental component of any cybersecurity strategy. It encompasses the measures and best practices that fortify applications against malicious attacks and ensure they operate as intended, free of exploitable vulnerabilities. This entails adherence to secure coding practices, routine vulnerability assessments, and the implementation of robust security protocols across the entire software development lifecycle.
Secure Coding Practices:
Developers, listen up! Secure coding is the bedrock of application security. It’s crucial to equip yourselves with the skills to craft code that stands strong against prevalent attack vectors like SQL injection, cross-site scripting (XSS), and buffer overflows. By adhering to coding standards and guidelines and harnessing automated tools for static code analysis, we can markedly diminish the likelihood of introducing vulnerabilities during the development phase.
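To make this concrete, here is a minimal Python sketch (the table, field names, and inputs are hypothetical) contrasting a query built by string concatenation with a parameterised one, plus output encoding as a basic XSS defence; static analysis tools will typically flag the first pattern.

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, display_name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "Alice"), ("bob", "Bob <admin>")])

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input change the query's structure.
unsafe = f"SELECT display_name FROM users WHERE username = '{user_input}'"
print("Unsafe:", conn.execute(unsafe).fetchall())   # returns every row

# Safer: a parameterised query treats the input strictly as data.
safe = "SELECT display_name FROM users WHERE username = ?"
print("Safe:", conn.execute(safe, (user_input,)).fetchall())  # returns nothing

# Basic XSS defence: encode untrusted data before rendering it in HTML.
print(html.escape("Bob <admin>"))  # Bob &lt;admin&gt;
```

The point is simply that the parameterised version keeps attacker-supplied input as data, so it cannot rewrite the query, and encoding output keeps untrusted data from being interpreted as markup.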
Regular Vulnerability Assessments and Penetration Testing:
Regular vulnerability assessments and penetration testing are imperative in identifying and mitigating security flaws before attackers can exploit them. These tests must be conducted routinely and following any significant changes to the application or its environment. In 2024, Australian businesses have increasingly acknowledged the critical nature of these practices, incorporating them as a standard part of their security protocols.
Secure Software Development Lifecycle (SDLC):
It is absolutely crucial to integrate security into every phase of the software development lifecycle. This requires including security requirements from the very beginning, conducting thorough threat modelling, and consistently performing rigorous security testing. The adoption of DevSecOps practices, where security is seamlessly integrated into the development process rather than treated as an afterthought, has been a prominent trend in 2024.
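As one illustration of what "security integrated into the process" can look like, here is a rough sketch of a pipeline gate in Python. It assumes the open-source scanners bandit (static analysis) and pip-audit (dependency audit) are installed and that the code lives under a src/ directory; substitute whatever tools and layout your pipeline actually uses.

```python
"""Minimal CI security-gate sketch: run static analysis and a dependency
audit, and fail the build if either check reports problems."""
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src"],  # static analysis of our own code (hypothetical src/ layout)
    ["pip-audit"],            # audit installed dependencies for known vulnerabilities
]

failed = False
for cmd in CHECKS:
    print(f"Running: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:  # treat any non-zero exit as a failed gate
        failed = True

sys.exit(1 if failed else 0)
```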
Third-Party Risk Management:
The events of 2024 have underscored the critical need for organisations to conduct thorough assessments of third-party vendors’ security posture and enforce stringent controls to mitigate the risks associated with external applications and APIs.
Education and Awareness:
Finally, education and awareness are vital components of application security. In 2024, Australian organisations have increasingly invested in training programs to ensure that developers, IT professionals, and end-users understand the importance of security and are equipped to recognise and respond to potential threats.
Recognising the growing cyber threat landscape, the Australian government has taken proactive steps to bolster national cybersecurity. The revised Australian Cybersecurity Strategy 2024 emphasises the need for robust application security and promotes collaboration between government, industry, and academia to develop and implement good practices.
The strategy includes initiatives such as the establishment of a national application security framework, which provides guidelines for secure application development and encourages the adoption of security standards across all sectors. Additionally, the government has introduced incentives for businesses that prioritise application security, including tax breaks and grants for organisations that invest in secure software development practices.
Industry collaboration has also been a key focus, with organisations across various sectors coming together to share threat intelligence and best practices. The formation of sector-specific cybersecurity task forces, such as those in finance, healthcare, and critical infrastructure, has facilitated the development of tailored application security measures that address the unique challenges faced by different industries.
As Australia undergoes digital transformation, the importance of application security cannot be overstated. The events of 2024 have highlighted vulnerabilities in our digital ecosystem. Prioritising application security can protect sensitive data, maintain public trust, and ensure the resilience of critical systems. This collective commitment is essential for building a secure and resilient digital landscape for all Australians.
I completed the FAIR Analysis Fundamentals course a few years ago, and here are my thoughts on it.
FAIR stands for Factor Analysis of Information Risk, and it is the only international standard quantitative model for information security and operational risk (https://www.fairinstitute.org/).
My interest in learning more about FAIR came from two observations.
The first was that we had many definitions of what constitutes a risk. We referred to “script-kiddies” as risks. Not having a security control was referred to as a risk. SQL injection was a risk. We also said things like “How much risk is there with this risk?”
The other observation concerned our approach to quantifying risk. We derived the level of risk from likelihood and impact, and it was sometimes hard to get agreement on those values.
Having completed the course, one of the things I like about FAIR is its definitions: what a risk is, and what a risk statement must include. It should include an asset, a threat, and an effect, with an optional method. An example of a risk is the probability of malicious internal users impacting the availability of our customer booking system via denial of service.
It uses future loss as the unit of measurement rather than a rating of critical, high, medium and low. The value of future loss is expressed as a range with a most likely value, along with the confidence level of that most likely value. As such, it focuses on accuracy rather than precision. I quite like that, as it makes risk easier to understand and compare. Reporting that a risk has a 1-in-2-year probability of happening, with a loss between $20K and $50K but most likely around $30K, is a lengthy statement. However, it is more tangible and makes more sense than reporting that the risk is a High Risk.
Now it sounds like I’m all for FAIR, but I have some reservations. The main one is that there isn’t always data available to determine such an empirical result. Risk, according to FAIR, is calculated by multiplying loss frequency (the number of times a loss event will occur in a year) by loss magnitude (the dollar range of loss across productivity, replacement, response, compliance and reputation). It’ll be hard to come up with a loss frequency value when there is no past data to base it on; I’ll be guessing the value, not estimating it. FAIR suggests estimating a subgroup when there isn’t enough reliable data available, but I see the same problem there. The subgroup for loss frequency is the number of times threat actors attempt to affect the asset multiplied by the percentage of attempts that succeed. Unless you have that data, it is no easier to determine.
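To illustrate how FAIR’s range-plus-most-likely inputs become an annualised loss figure, here is a rough Monte Carlo sketch in Python. The numbers are invented for the example above, and I am using simple triangular distributions as a stand-in for the calibrated (PERT-style) distributions that FAIR tooling normally uses; real FAIR analyses also simulate event counts and per-event losses rather than multiplying two point draws.

```python
import random

SIMULATIONS = 100_000

# Illustrative calibrated estimates: loss event frequency (events per year)
# and loss magnitude (dollars per event), each as min / most likely / max.
FREQUENCY = {"min": 0.1, "likely": 0.5, "max": 1.0}   # roughly a 1-in-2-year event
MAGNITUDE = {"min": 20_000, "likely": 30_000, "max": 50_000}

def draw(estimate):
    # random.triangular takes (low, high, mode)
    return random.triangular(estimate["min"], estimate["max"], estimate["likely"])

annual_losses = sorted(draw(FREQUENCY) * draw(MAGNITUDE) for _ in range(SIMULATIONS))
average = sum(annual_losses) / SIMULATIONS
p10 = annual_losses[int(SIMULATIONS * 0.10)]
p90 = annual_losses[int(SIMULATIONS * 0.90)]

print(f"Simulated annualised loss: about ${average:,.0f} on average, "
      f"with 80% of outcomes between ${p10:,.0f} and ${p90:,.0f}")
```

Even a crude simulation like this produces a dollar range that can be compared across risks, which is exactly what a High/Medium/Low label cannot do.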
Overall it still feels like a much better way of quantifying risk. I’ll end with a quote from the instructor. “Risk statements should be of probability, not of predictions or what’s possible.” It resonated with me as it is something I too often forget.
I am writing this post in a week when we saw the most significant IT outage ever. A content update to the CrowdStrike Falcon sensor caused a blue screen of death (BSOD) on Microsoft Windows systems. The outage resulted in a large-scale disruption of everything from airline travel and financial institutions to hospitals and online businesses.
At the beginning of the week, I delved into the transformation in software developers’ mindsets over the last few decades. However, as the root cause of this incident came to light, the article transitioned from analysing the perpetual clash between practice domains to advocating for best practices to enhance software quality and security.
During the first decade of the millennium, developers and security teams were often seen as being at odds over security practices. This was not because developers did not want to do the right thing, but because of a lack of a collaborative mindset between security practitioners and developers. Even though we have seen a massive shift with the adoption of DevSecOps, there are still gaps in the mature integration of the software development lifecycle, cybersecurity, and IT operations.
The CrowdStrike incident offers several valuable lessons for software developers, particularly in strengthening software development cybersecurity programs. Here are some key takeaways:
Security by Design: Security needs to be integrated into every phase of the SDLC, from design to deployment. Developers must embrace secure coding practices, conduct regular code reviews, and use automated quality and security testing tools.
Threat Modelling: Consistently engaging in threat modelling exercises is crucial for uncovering potential vulnerabilities and attack paths, ultimately enabling developers to design more secure systems.
DevSecOps: Incorporating security into the DevOps process to ensure continuous security checks and balances throughout the software development lifecycle.
Cross-Functional Teams: Encouraging collaboration among development, security, and operations teams (DevSecOps) is crucial for enhancing security practices and achieving swift incident response times.
Clear Communication Channels: Establishing clear reporting and communication channels can help ensure a coordinated and efficient response.
Security Training and Awareness: Regular training sessions on the latest security trends, threats, and best practices are vital, and developers should recognise the need for ongoing education to keep pace with the evolving security landscape.
Balancing Security and Agility: Developers value security measures that are seamlessly integrated into the development cycle. This allows for efficient development without compromising on speed or agility. Implement security processes that strike a balance between robust protection and minimal disruption to the development workflow.
Early Involvement: It is crucial to incorporate security considerations from the outset of the development process to minimise extensive rework and delays in the future.
Preparedness for Security Incidents: Developers should recognise the need for a robust incident response plan to quickly and effectively address security breaches. They should also ensure that their applications and systems can log security events and generate alerts for suspicious activities; a minimal sketch of this idea follows this list.
Swift Incident Response: With a well-defined incident response plan in place, developers should be well-versed in the steps to take when a security breach is detected, including containment, eradication, and recovery procedures.
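The sketch below (Python, with hypothetical event names and a made-up alert threshold) shows the logging-and-alerting idea from the list: application code emits structured security events, and a simple check raises an alert when repeated failed logins are seen from one address. A real system would forward these events to a SIEM rather than keep counters in memory.

```python
import json
import logging
from collections import Counter
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
security_log = logging.getLogger("security")

failed_logins = Counter()
ALERT_THRESHOLD = 3  # hypothetical threshold for demonstration

def log_security_event(event_type, **details):
    """Emit a structured, machine-readable security event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        **details,
    }
    security_log.info(json.dumps(record))

def on_failed_login(username, source_ip):
    log_security_event("login_failed", user=username, source_ip=source_ip)
    failed_logins[source_ip] += 1
    if failed_logins[source_ip] >= ALERT_THRESHOLD:
        # In practice this would page on-call staff or raise a SIEM alert.
        log_security_event("alert_brute_force_suspected", source_ip=source_ip,
                           attempts=failed_logins[source_ip])

for _ in range(3):
    on_failed_login("alice", "203.0.113.7")
```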
Supply Chain Security and Patch Management:
Third-Party Risks and Software Integrity: Developers must diligently vet and update third-party components. Robust measures must be implemented to verify the integrity of software and its updates and to prevent the introduction of malicious code, including mandating cryptographic signing for all software releases and updates; a small verification sketch follows this list.
Timely and Bug-Free Updates: It is essential to ensure that all software components, including third-party libraries, are promptly updated with the latest security patches. Developers must establish a robust process to track, test, and apply these updates without delay.
Automated Patch Deployment: Automating the patch management process can reduce the risk of human error and ensure that updates are applied consistently across all systems.
Regular Security Audits: Regular security audits and assessments effectively identify and address vulnerabilities before they can be exploited.
Feedback Loops: Integrating feedback loops to analyse past incidents and strengthen security practices can significantly elevate the overall security posture over time.
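To make the software-integrity point above concrete, here is a minimal Python sketch that verifies a downloaded artefact against a published SHA-256 checksum and an Ed25519 signature. It uses the third-party cryptography package; the file names, keys, and helper function are hypothetical, and a real release pipeline would more likely rely on an established signing tool such as Sigstore or GPG.

```python
import hashlib
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_release(artifact_path, expected_sha256, signature, publisher_key_bytes):
    """Return True only if the artefact matches its checksum and signature."""
    data = Path(artifact_path).read_bytes()

    # 1. Integrity: does the artefact match the published checksum?
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        return False

    # 2. Authenticity: was the artefact signed by the publisher's key?
    public_key = Ed25519PublicKey.from_public_bytes(publisher_key_bytes)
    try:
        public_key.verify(signature, data)
    except InvalidSignature:
        return False
    return True
```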
In conclusion, the recent IT outage resulting from the CrowdStrike incident emphasises the critical need for robust cybersecurity in software development. Implementing secure coding practices, fostering collaboration between development, security, and operations teams, and prioritising proactive incident response and patch management can markedly improve system security. Regular security audits and continuous improvement are imperative to stay ahead in the ever-evolving digital landscape. Looking ahead, the insights drawn from this incident should galvanise a unified effort to integrate security seamlessly into the software development lifecycle, ensuring the resilience and reliability of digital systems against emerging threats.
Since I started in cybersecurity, I have observed that the field has become dominated by insincere vendors and practitioners driven solely by profit.
As a cybersecurity leader, I have also noticed the increasing commoditisation of cybersecurity over the years, and it’s important for us to address this issue.
Going deeper into the motives, I realised the significance of my blog’s title. Initially, I chose the name to reflect the current state of cybersecurity, where my work felt repetitive and inconsequential, like an assembly line. However, regardless of the organisation, the same recurring non-technical issues persisted.
The primary challenge lies in the way security practitioners interact with the people they are meant to protect within these organisations. Those outside the security team are often victim-shamed and blamed for their perceived ignorance as if they are at fault for not prioritising cybersecurity in their daily work.
Security organisations sometimes oppress the very people they are meant to serve. This behaviour struck me as counterproductive and degrading, particularly as I familiarised myself with nonviolent communication and other conflict-resolution techniques. I wondered whether adopting peacebuilding methods could foster collaboration and alignment among stakeholders. While only a few security practitioners initially showed interest, the transition to DevOps and shift-left strategies, which emphasise these attributes, attracted like-minded individuals.
Additionally, the way users are treated resembles the punitive and shaming approach commonly seen in the criminal justice system. However, that approach has not reduced crime or supported victims. On the other hand, restorative justice, which focuses on repairing the harm caused by crime and restoring the community while respecting the dignity of all involved parties, has shown promise.
Evidence suggests that traditional fear-based and shaming tactics have not effectively promoted user compliance in cybersecurity. Instead, creating a supportive workplace environment has been identified as a more practical approach to encouraging voluntary security behaviours.
In the rapidly evolving landscape of cybersecurity, organisations must adapt their approaches. Rather than relying on fear, uncertainty, and doubt (FUD)-based strategies, we must acknowledge the value of users as allies and reposition cybersecurity as a collaborative effort. This involves a fundamental shift in the industry, prioritising collaboration, understanding, and support to foster a culture of proactive cybersecurity measures. By moving away from fear-based tactics and embracing a more cooperative approach, organisations will be better equipped to mitigate threats and safeguard information.