My recent conference presentation on open-source security revealed a common theme: audience members didn’t realise how pervasive open-source software is. Everyone in the audience knew that their organisation uses a fair number of open-source components, but they assumed it made up only a small percentage of their applications, around 30% or less.
The truth is that open-source makes up the bulk of your applications. Industry reports have estimated that 85% of modern applications are built from open-source components. The percentage is higher for modern JavaScript web applications, with 97% of the code coming from open-source components. My analysis has found those numbers to be low estimates, with the percentage for Java applications at around 98%. What was surprising was that around three quarters of those open-source components were not explicitly incorporated into the applications; they were transitive dependencies. And with organisations embracing generative AI for software development, the remaining 2% of custom code might not even be written by their developers.
Our use of open-source software is growing exponentially, with the number of download requests exceeding 4 trillion last year, almost double the figure from two years ago. But a critical caveat exists: not all open-source offerings are created equal. Around 500 billion of last year’s download requests were for components with known risk; that is roughly 1 in 8 downloads being for components with one or more identified security vulnerabilities. Log4j is one such component. It had a critical vulnerability that was disclosed in December 2021 and resulted in most organisations enacting their incident response plans. Today, around 35% of download requests for log4j are for vulnerable versions. That’s 1 in 3 downloads. Why are we still downloading open-source components with known risk, especially components like log4j? I believe most organisations are simply unaware of their open-source consumption, especially their transitive dependencies.
Do you know your organisation’s open-source consumption? Do you have a software bill of materials? If you don’t then you’re probably using more open source than you realise.
By taking proactive steps to illuminate and manage open-source usage, organisations can harness the power of open source while mitigating associated security risks.
It was just over thirty years ago that Tim Berners-Lee’s research at CERN in Switzerland resulted in the World Wide Web, the face of what we now simply call the Internet. Who would have thought, including Tim himself, that the Internet would become what it is today? This network of networks impacts every aspect of life on Earth and beyond. People are more connected than ever before. The Internet has given rise to new business models and helped traditional businesses find new and innovative ways to market their products.
Unfortunately, like everywhere else, there are malicious actors on the Internet trying to exploit vulnerabilities in these technologies for their own ends. As first-generation users of the Internet, everything was new to us. Whether it was online entertainment or online shopping, we were the first to use it; we grew up with the Internet. Most of us have been victims of cybercrime at some point in our lives. This created a whole new industry, now called “cybersecurity”, which is seen as the protector against cybercrime. However, it has always been a challenge to pin down who is responsible for security: the business or the cybersecurity team.
Why do we need to fix responsibility?
Globalisation and, more recently, the pandemic have increased the number of people working remotely, which has become an ever-growing headache for companies. As a result, the number of security incidents has increased manifold, and the cost per incident is rising year on year.
According to IBM’s Cost of a Data Breach 2021 report, the average security breach costs businesses upward of $4.2 million.
Governments mandate cybersecurity compliance requirements, and non-compliance attracts massive penalties in some jurisdictions. For example, non-compliance with Europe’s General Data Protection Regulation (GDPR) can see companies fined up to €20 million or 4 per cent of their annual global turnover.
Companies that traditionally viewed security as a cost centre are now viewing it differently because of the losses they incur from breaches and penalties. Today, these organisations see security as everyone’s responsibility instead of just an IT problem.
Cyber hygiene: challenges and the repercussions of poor practice
Cyber hygiene, like personal hygiene, is the set of practices an organisation deploys to ensure the security of its data and networks. Maintaining basic cyber hygiene can be the difference between suffering a damaging breach and recovering quickly from one without a massive impact on the business.
Cyber hygiene raises the cost of an attack for cybercriminals by reducing the vulnerabilities in the environment. By practising cyber hygiene, organisations improve their security posture and become better able to defend themselves against persistent, devastating cyberattacks. Good cyber hygiene is already incentivised: it reduces the likelihood of being hacked and of being penalised through fines, legal costs, and reduced customer confidence.
The biggest challenge in implementing good cyber hygiene is knowing what you need to protect. A good asset inventory is the first place to start. In a hybrid working environment, clear visibility of your assets is essential; you can’t protect something you don’t know you have. It is therefore imperative to know where your information assets are located on your network and who is using them. It is equally important to know where your data is located and who can access it.
Another significant challenge is maintaining discipline and continuity over a long period. Scanning your network occasionally will not stop unrelenting cyberattacks. Automated monitoring must therefore be implemented to continuously detect and remediate threats, which requires investment in technical resources that many businesses don’t have.
Because of these challenges, we often see poor cyber hygiene resulting in security vulnerabilities and potential attack vectors. The following are some of the vulnerabilities that arise from poor hygiene:
Unclassified Data: Inadequate data classification results in data being misplaced and stored in places that may not be adequately protected.
Data Loss: Poor data classification can also lead to data loss through a lack of adequate protection controls. Data that is not regularly backed up, and whose backups are not tested for corruption, may be unrecoverable after a data breach, hardware failure, or improper data handling.
Software vulnerabilities: All software contains vulnerabilities, and developers release patches regularly to fix them. A missing or poor patch management process leaves software vulnerable, which hackers can exploit to gain access to the network and data.
Poor endpoint protection: The AV-TEST Institute registers over 450,000 new malicious applications (malware) and potentially unwanted applications in the wild every day. With inadequate endpoint protection practices, including missing malware protection tools, hackers can use a wide range of tools and techniques to get inside your network, breach the company’s environment, and steal data.
Inadequate vendor risk management: With supply chain attacks ever increasing, comprehensive vendor risk management must be implemented to address the security risks posed by third-party vendors and service providers, especially those with access to sensitive data or who process it. Failure to implement such a process further exposes the organisation to service disruptions and security breaches.
Poor compliance: Poor cyber hygiene often results in non-compliance with legal and regulatory requirements.
Building Accountability within your cybersecurity organisation
With breaches and their impacts ever increasing, we as an industry and a society should start motivating organisations to make cybersecurity a way of life. Cyber hygiene must be demanded from the organisations that hold, process, and use your data.
Now that we understand the challenges of maintaining good cyber hygiene, we should also look at what we have been doing to solve these issues. So far, we have tried many approaches: some companies have developed controls internally, while others have relied on externally mandated rules and regulations. However, we have failed to address the question of responsibility and accountability, and we have failed to balance business requirements against the rigour that cybersecurity demands. For example, governments have made laws and regulations with punitive repercussions without considering how a small organisation will be able to implement the controls needed to comply with them.
There are no simple solutions to this complex problem. Laws and regulations certainly raise the bar for organisations to maintain a good cybersecurity posture, but they will not keep hackers out forever. Organisations need to be more proactive in introducing accountability within their security organisation. Cybersecurity professionals need to take responsibility and accountability for preventing and thwarting cyberattacks. At the same time, business leaders need to understand the problem, bring in the right people for the job to start with, and develop and implement a cybersecurity framework that aligns with the business’s risks. Making cybersecurity one of the strategic pillars of the business strategy will engrain it in the organisation’s DNA.
There are many ways to start this journey. To begin with, organisations need a glue: a cybersecurity framework. Embracing a framework like the National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF) is a great way to start baselining your cybersecurity functions.
The NIST CSF provides a structured roadmap and guidelines for achieving good cyber hygiene. In addition, it offers guidance on things like patching, identity and access management, and least-privilege principles, which can help protect your organisation. Once you get the basics right, along with automation, your organisation will have more time to focus on critical functions. Setting up these basic hygiene processes also improves the user experience, makes network behaviour more predictable, and therefore results in fewer service tickets.
Research has shown that the best security outcomes are closely tied to employee engagement. Organisations may identify “Security Champions” within the business who can evangelise security practices in their respective teams. The security champions can act as a force multiplier while setting up accountabilities. They can act as your change agents by identifying issues quickly and driving the implementation of solutions.
Conclusion
There is never a perfect time to start. However, the sooner you start addressing and optimising your approach to cyber hygiene and cybersecurity, the sooner you will gain assurance against cyberattacks. This brings peace of mind from knowing the controls are in place and doing what they are supposed to do. You will not be scrambling for solutions during a breach, but ready to respond to any eventuality.
If, despite poor cyber hygiene, your organisation has managed to avoid a serious breach, it is only a matter of time before your luck runs out.
I’m certain you’ve heard many concerns from CISOs who are struggling to gain visibility into cloud environments despite their considerable efforts and resources.
Many professionals hit this common issue around the six-to-eight-month mark of their organisation’s cloud transition journey. Despite significant investments of time and resources, they begin to view the move to the cloud as a costly misstep, largely because of the range of security challenges they confront.
These challenges often stem from a lack of understanding or misconceptions within the company about the nuances of cloud security.
I am consistently astonished by the widespread nature of these challenges!
Here are some of the common misunderstandings regarding security in the cloud.
“When in the cloud, it is absolutely secure.”
Since the transition to the cloud became popular, senior leaders have often claimed that “the cloud is much more secure than on-premises infrastructure.” There is some truth to this perception: they are often presented with a slick PowerPoint presentation highlighting the security benefits of the cloud and the significant investments cloud providers make to secure the environment.
It’s a common misconception that when organizations migrate to the cloud, they can relinquish responsibility for security to the cloud provider, like AWS, Google, or Microsoft. This mistaken belief leads them to think that simply moving their workloads to the cloud is sufficient. However, this oversight represents a critical security mistake because organisations still need to actively manage and maintain security measures in the cloud environment to ensure the protection of their data and infrastructure.
The operational model of cloud computing is based on a shared responsibility framework wherein the cloud provider assumes a significant portion of the responsibility for maintaining the infrastructure, ensuring physical security, and managing the underlying hardware and software. However, it’s important to understand that as a cloud user, you also bear the responsibility for configuring and securing your applications and data within the cloud environment.
This shared responsibility model is analogous to living in a rented property. The property owner is responsible for ensuring that the building is structurally sound and maintaining common areas, but as a tenant, you are responsible for securing your individual living space by locking doors and windows.
In the context of cloud computing, when you launch a server or deploy resources on the cloud, the cloud service provider does not automatically take over the task of securing your specific configuration and applications. Therefore, it’s essential to recognise that you must proactively implement robust security measures to protect your cloud assets.
Understanding this shared responsibility model is crucial before embarking on your cloud security program to ensure that you have a clear understanding of your role in maintaining a secure cloud environment.
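To make the customer’s side of the model concrete, here is a minimal sketch using boto3 (the AWS SDK for Python, an assumed toolchain) that blocks public access and enforces default encryption on a hypothetical S3 bucket. The provider secures the storage service itself; settings like these remain the customer’s responsibility.

```python
# A minimal sketch of the customer's side of the shared responsibility model:
# the provider secures the storage service, but locking down *your* bucket is
# your job. Assumes AWS credentials are configured; the bucket name is a
# hypothetical placeholder.
import boto3

s3 = boto3.client("s3")
bucket = "example-customer-data"  # hypothetical bucket name

# Block all forms of public access to the bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Enforce server-side encryption by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```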
The cloud is superior to on-premises
Teams new to the cloud often hold biased views. Some assume that the cloud is inherently more secure than on-premises, while others believe it to be insecure and implement excessive controls. Making either mistake can lead to complacency and potential breaches, or make the work of cloud teams more difficult. This typically occurs when there’s a lack of investment in training the cybersecurity team in cloud security.
They often struggle to fully harness cloud technology’s potential and grapple with understanding its unique operational dynamics, leading to mounting frustration. It’s crucial to recognise that both cloud-based and on-premises infrastructures entail their own set of inherent risks. The key consideration lies not in the physical location of the infrastructure but rather in how it is effectively managed and safeguarded. Prioritising the upskilling of your cybersecurity team in cloud security before embarking on the migration process is essential, as this proactive approach ensures that your organisation is equipped to address potential security challenges from the outset.
We have carefully chosen a particular on-premises solution, and I am confident that it will seamlessly transition to the cloud.
It’s crucial to bear in mind that directly transferring on-premises solutions to the cloud and assuming identical outcomes is a grave error. Cloud environments possess unique attributes and necessitate specific configurations. Just because a solution functions effectively on-premises does not ensure that it will perform similarly in the cloud.
Migrating without essential adaptations can leave you vulnerable to unforeseen risks. Whenever feasible, it’s advisable to utilise native cloud solutions or opt for a cloud-based version of your on-premises tools, rather than expecting seamless universal compatibility.
Treating the Cloud Like a Project and Not an Environment
The Cloud is a different paradigm and a completely different approach to operations. It is not a one-time solution; you can’t just set it up and forget about it. Treating it like a project you complete and then hand over is a guaranteed way to invite a data breach.
When it comes to your IT infrastructure, it’s crucial to recognise that the cloud is an independent and vital environment that requires an equivalent level of governance compared to your on-premises setup. Many organisations make the mistake of treating cloud management as a secondary or tangential responsibility while focusing primarily on their on-premises systems. However, this approach underestimates the complexities and unique challenges of managing cloud-based resources.
It’s important to dedicate the necessary attention and resources to effectively govern and manage your cloud infrastructure in order to mitigate risks and ensure seamless operations.
It’s not my Role or Responsibility.
Assuming that the responsibilities for managing your on-premises environment will seamlessly transition to the cloud is a risky assumption to make. Many organisations overlook the critical task of clearly defining who will be responsible for implementing security controls, patching, monitoring, and other essential tasks in the cloud. This lack of clarity can lead to potentially disastrous consequences, leaving the organisation vulnerable to security breaches and operational inefficiencies.
It is important to establish a formally approved organisational chart that comprehensively outlines and assigns responsibilities for cloud security within your organisation. This ensures that all stakeholders understand their roles and accountabilities in safeguarding the organisation’s cloud infrastructure.
Furthermore, if your organisation intends to outsource a significant portion of its cloud-related activities, it is imperative to ensure that your organisational chart accurately reflects this strategic decision. This will help to align internal resources and clarify the division of responsibilities between the organisation and its external cloud service providers.
Conclusion
In the realm of cloud security, it’s crucial to address common misconceptions to ensure a robust and effective security posture. Organisations must understand that cloud security is a shared responsibility, requiring active management and maintenance of security measures within the cloud environment. Additionally, it’s important to recognise that both cloud-based and on-premises infrastructures entail inherent risks; what ultimately matters is how each is managed and safeguarded.
Globally, businesses have transformed by adopting new ways of working and delivering their products to market quickly and efficiently, and digital transformation has made a distinctive contribution to this effort. Organisations use modern, efficient applications to deliver these business outcomes. Behind the scenes, APIs are the most critical components helping web and mobile applications deliver innovative products and services.
An API is a piece of software that has direct access to upstream or downstream applications and, in some cases, directly to the data. The picture below depicts a typical scenario where a web application calls an API, which in turn calls downstream resources and data. Unfortunately, this direct access to data introduces a new attack surface, and API breaches are continuously on the rise, resulting in impersonation, data theft, and financial fraud.
Gartner predicted that by 2022, API breaches would move from an infrequent occurrence to the most frequent attack vector, resulting in regular data losses for enterprise web applications. This changing trend has brought the realisation that something needs to be done to protect data at the level of APIs and digital interfaces.
Example of an API Breach
There are many factors behind API breaches, and one of the most common is overly broad permissions within an application ecosystem intended for non-human interaction. Strict user access controls ensure that only authorised users can access the application, but once a user is authenticated, the APIs can frequently access any data. The problem starts when a bad actor bypasses the user authentication and accesses the data from the downstream systems directly.
The picture above shows a user accessing a mobile application through an API, while desktop users reach the same backend through a web interface calling the same API. Mobile API calls include URIs, methods, headers, and other parameters, just like a basic web request, so they are exposed to similar web attacks, such as injection, credential brute force, parameter tampering, and session snooping. Hackers can employ the same tactics that work against a traditional web application to breach the mobile API. Mobile applications are inherently difficult to secure, so attackers continuously decompile and reverse engineer them in search of vulnerabilities, hardcoded credentials, or weak access control methods.
API Security Threats
In 2019, OWASP released a list of the top 10 API security threats, advising on strategies and solutions to mitigate the unique security challenges and risks of APIs. The ten API security threats are as follows.
API1:2019: Broken Object-Level Authorisation
APIs often expose endpoints that handle object identifiers. This creates a wide attack surface where any function that uses a client-supplied identifier to access a data source becomes an object-level access control issue. It is therefore recommended to carry out object-level authorisation checks in every function that accesses data on behalf of a user.
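As an illustration only, the following minimal sketch (Python with Flask, an assumed stack) shows what such a check can look like; the in-memory invoice store and token lookup are hypothetical stand-ins for a real data store and authentication layer.

```python
# A minimal sketch of an object-level authorisation check using Flask.
# The in-memory data and token-to-user lookup are hypothetical placeholders.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical stores: who owns which invoice, and which token maps to which user.
INVOICES = {1: {"id": 1, "owner": "alice", "amount": 120.0}}
TOKENS = {"token-alice": "alice", "token-bob": "bob"}

@app.route("/api/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    user = TOKENS.get(request.headers.get("Authorization", ""))
    if user is None:
        abort(401)  # not authenticated at all
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # The object-level check: a valid login is not enough; the caller
    # must be authorised for this specific object.
    if invoice["owner"] != user:
        abort(403)
    return jsonify(invoice)
```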
API2:2019: Broken User Authentication
User authentication is the first control gate for accessing the application, and attackers often take advantage of incorrectly implemented authentication mechanisms. Attackers may compromise an authentication token or exploit implementation flaws to assume another user’s identity, temporarily or permanently. Once authentication is compromised, the overall security of the APIs is compromised as well.
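As a sketch of one mitigation, the snippet below validates a token server-side using the PyJWT library (an assumed choice; any vetted JWT library follows the same pattern). The shared secret is a hypothetical placeholder.

```python
# A minimal sketch of server-side token validation with PyJWT.
import jwt  # pip install PyJWT

SECRET = "replace-with-a-strong-secret-from-a-vault"  # hypothetical secret

def authenticate(token: str):
    """Return the token claims if the token is valid, otherwise None."""
    try:
        # Always verify the signature, pin the algorithm, and let the
        # library reject expired tokens.
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.ExpiredSignatureError:
        return None  # token has expired; force re-authentication
    except jwt.InvalidTokenError:
        return None  # signature or structure is invalid
```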
API3:2019: Excessive Data Exposure
In the name of simplicity, developers often expose more data than required, relying on the client side to filter the data before displaying it to the user. This is a serious data exposure issue; it is therefore recommended to filter data on the server side and expose only the data relevant to the user.
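The following minimal Python sketch illustrates that server-side filtering: only an explicit allow-list of fields ever leaves the API, regardless of what the client might filter out. The user record and field names are hypothetical.

```python
# A minimal sketch of server-side filtering against excessive data exposure.
PUBLIC_FIELDS = {"id", "display_name", "avatar_url"}

def to_public_view(user_record: dict) -> dict:
    """Strip everything except explicitly allowed fields before responding."""
    return {k: v for k, v in user_record.items() if k in PUBLIC_FIELDS}

user = {
    "id": 42,
    "display_name": "Alice",
    "avatar_url": "https://example.com/a.png",
    "email": "alice@example.com",        # must not be exposed
    "password_hash": "not-for-clients",  # must not be exposed
}
print(to_public_view(user))  # only id, display_name and avatar_url remain
```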
API4:2019: Lack of Resources and Rate Limiting
System resources are not infinite, yet poorly designed APIs often don’t restrict the number or size of resources a client can request. Excessive use of resources can degrade performance and at times cause denial of service (DoS), including malicious DoS attacks. Without rate limiting, APIs are also exposed to authentication attacks such as brute force.
API5:2019: Broken Function-Level Authorisation
Authorisation flaws are far more common than they should be and are often the result of overly complex access control policies. When privileged access controls are poorly defined and the boundaries between regular and administrative functions are unclear, unintended functionality is exposed. Attackers can exploit these flaws to gain access to a resource or to invoke privileged administrative functions.
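A minimal Python sketch of a function-level authorisation check is shown below; administrative functions explicitly require an admin role rather than relying on the endpoint being “hidden”. The role model used here is hypothetical.

```python
# A minimal sketch of function-level authorisation: deny by default and
# require an explicit role for administrative functions.
from functools import wraps

def require_role(role):
    def decorator(func):
        @wraps(func)
        def wrapper(current_user, *args, **kwargs):
            # The caller must hold the required role, regardless of which
            # endpoint or URL they happen to know about.
            if role not in current_user.get("roles", []):
                raise PermissionError(f"'{role}' role required")
            return func(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(current_user, target_user_id):
    print(f"{current_user['name']} deleted user {target_user_id}")

# Usage: an admin succeeds; a regular user raises PermissionError.
delete_user({"name": "root", "roles": ["admin"]}, 7)
```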
API6:2019: Mass Assignment
In a typical web application, users update data bound to a data model, which often also contains fields users should not be able to change. An API endpoint is vulnerable if it automatically converts client parameters into internal object properties without considering the sensitivity and exposure level of those properties. This can allow an attacker to update object properties they should not have access to.
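The following minimal Python sketch shows one common defence against mass assignment: client-supplied parameters are copied onto the internal object only through an explicit allow-list, so sensitive properties such as “role” cannot be set from the outside. The account record is hypothetical.

```python
# A minimal sketch of an allow-list defence against mass assignment.
UPDATABLE_FIELDS = {"display_name", "email", "phone"}

def apply_update(account: dict, client_payload: dict) -> dict:
    """Apply only the fields a client is allowed to change."""
    for field, value in client_payload.items():
        if field in UPDATABLE_FIELDS:
            account[field] = value
        # Anything else ("role", "is_admin", "balance", ...) is ignored.
    return account

account = {"id": 1, "display_name": "Alice", "role": "user", "balance": 100}
payload = {"display_name": "Alice B.", "role": "admin"}  # attempted escalation
print(apply_update(account, payload))  # role remains "user"
```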
API7:2019: Security Misconfiguration
Security misconfiguration is a very common and pervasive security problem. Insecure defaults, ad-hoc or incomplete configurations, misconfigured HTTP headers or inappropriate HTTP methods, insufficiently restrictive Cross-Origin Resource Sharing (CORS), open cloud storage, and error messages that contain sensitive information all leave systems vulnerable and exposed to data theft and financial loss.
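As an illustration, the minimal Flask sketch below applies two of the hardening steps named above: CORS pinned to a known origin rather than a wildcard, and generic error responses so internal details never reach the client. The allowed origin is a hypothetical value.

```python
# A minimal sketch of two hardening steps: restrictive CORS and generic errors.
from flask import Flask, jsonify

app = Flask(__name__)
ALLOWED_ORIGIN = "https://app.example.com"  # hypothetical front-end origin

@app.after_request
def set_security_headers(response):
    # Restrictive CORS: only the known front-end, never a wildcard.
    response.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
    response.headers["X-Content-Type-Options"] = "nosniff"
    return response

@app.errorhandler(Exception)
def handle_error(exc):
    # Log the detail server-side; send only a generic message to the caller.
    app.logger.exception(exc)
    return jsonify({"error": "internal error"}), 500
```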
API8:2019: Injection
Injection flaws (including SQL injection, NoSQL injection, and command injection) occur when untrusted data is sent to an interpreter as part of a command or query. Attackers can send malicious data that tricks the interpreter into executing commands, allowing them to access data without proper authorisation.
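The minimal sketch below contrasts a string-built SQL query with a parameterised one, using Python’s built-in sqlite3 module for illustration; the same principle applies to any database driver or interpreter.

```python
# A minimal sketch contrasting vulnerable string-built SQL with a
# parameterised query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row.
vulnerable = conn.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver passes the value as data, not as SQL, so nothing matches.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('s3cret',)] -- injection succeeded
print(safe)        # []           -- injection neutralised
```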
API9:2019: Improper Asset Management
Because of the nature of APIs, modern applications may expose many endpoints. A lack of up-to-date documentation can lead to older API versions remaining in use, increasing the attack surface. It is recommended to maintain an up-to-date inventory of API hosts and deployed API versions.
API10:2019: Insufficient Logging and Monitoring
Attackers can take advantage of insufficient logging and monitoring, coupled with ineffective or missing incident response integration, to persist in a system and extract or destroy data without being detected.
Common Attacks Against APIs
APIs, like traditional applications, are exposed to many of the same types of attacks we have long defended networks and web applications against. The following are some of the attacks that can easily be used against APIs.
How to Secure APIs
In addition to the mitigation strategies mentioned in the table above, the following broader controls can help organisations protect their publicly shared APIs.
“Security first”: prioritise security.
Security is often an afterthought and seen as someone else’s problem, and API security is no different. API security must not be an afterthought; organisations stand to lose a lot through unsecured APIs. Ensure that security remains a priority for the organisation and is built into your software development lifecycle.
Maintain a comprehensive inventory of APIs.
It is very common for several APIs to be publicly shared without the organisation being aware of all of them. To secure these APIs, the organisation must first know they exist. Regular scans should discover and inventory APIs. Consider implementing an API gateway for strict governance and management of the APIs.
Use a strong authentication and authorisation solution.
Poor or non-existent authentication and authorisation controls are major issues with publicly shared APIs. APIs are often designed not to enforce authentication, which is frequently the case with private APIs intended only for internal use. Since APIs provide access to an organisation’s databases, the organisation must apply strict access controls to them.
Least privilege.
The foundational security principle of least privilege holds good for API security: all users, processes, programs, systems, and devices must be granted only the minimum access needed to complete a stated function.
Encrypt traffic in transit.
API payload data must be encrypted whenever APIs exchange sensitive data such as login credentials, credit card numbers, social security numbers, banking information, or health information. TLS encryption should therefore be considered a must.
Limit data exposure.
Ensure that development data such as development keys, passwords, and other sensitive information is removed before APIs are made publicly available. Organisations should use scanning tools in their DevSecOps processes to limit the accidental exposure of credentials.
Input validation.
Input validation must be implemented so that untrusted or malformed data is never passed through the API to the endpoint.
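As an illustration, the minimal Python sketch below validates a hypothetical payload at the API boundary, rejecting malformed values and unexpected fields before anything is passed downstream.

```python
# A minimal sketch of input validation at the API boundary.
import re

USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,30}$")

def validate_transfer_request(payload: dict) -> list:
    """Return a list of validation errors; an empty list means it is safe to process."""
    errors = []
    to_account = payload.get("to_account", "")
    if not isinstance(to_account, str) or not USERNAME_RE.match(to_account):
        errors.append("to_account must be 3-30 alphanumeric characters")
    amount = payload.get("amount")
    if not isinstance(amount, (int, float)) or not (0 < amount <= 10_000):
        errors.append("amount must be a number between 0 and 10,000")
    # Reject unexpected fields outright rather than silently passing them on.
    unexpected = set(payload) - {"to_account", "amount"}
    if unexpected:
        errors.append(f"unexpected fields: {sorted(unexpected)}")
    return errors

print(validate_transfer_request({"to_account": "alice_01", "amount": 250}))      # []
print(validate_transfer_request({"to_account": "x; DROP TABLE", "amount": -5}))  # two errors
```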
Use rate limiting.
Setting a threshold above which subsequent requests will be rejected (for example, 10,000 requests per day per account) can prevent denial-of-service attacks.
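As an illustration of that example, the minimal Python sketch below implements a fixed-window limit of 10,000 requests per day per account; in production this state would normally live in a shared store such as Redis rather than process memory.

```python
# A minimal sketch of a fixed-window rate limiter (10,000 requests/day/account).
import time
from collections import defaultdict

WINDOW_SECONDS = 24 * 60 * 60   # one day
LIMIT = 10_000                  # requests per account per window

_counters = defaultdict(lambda: [0, 0.0])  # account -> [count, window_start]

def allow_request(account_id: str) -> bool:
    """Return True if the request is within the account's daily quota."""
    now = time.time()
    count, window_start = _counters[account_id]
    if now - window_start >= WINDOW_SECONDS:
        # Start a new window for this account.
        _counters[account_id] = [1, now]
        return True
    if count >= LIMIT:
        return False  # reject: quota exhausted, e.g. respond with HTTP 429
    _counters[account_id][0] += 1
    return True
```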
Deploy a web application firewall.
Deploy a web application firewall and configure it to ensure that it can understand API payloads.
Conclusion
APIs are the most efficient method of accessing data for modern applications. Mobile applications and Internet of Things (IoT) devices leverage this efficiency to launch innovative products and services. With such dependency on APIs, some organisations may not have realised the API-specific risks. However, most organisations already have controls to combat well-known attacks, such as cross-site scripting, injection, and distributed denial-of-service, that can also target APIs. The best practices mentioned above are likely already practised in these organisations as well. If you are struggling to start or don’t know where to start, the best approach is to start from the top and work your way down the stack. It doesn’t matter how many APIs your organisation chooses to share publicly; the ultimate goal should be to establish principle-based, comprehensive, and efficient API security policies and to manage them proactively over time.