I did my FAIR analysis fundamentals course a few years ago and here are my thoughts on it.
FAIR stands for Factor Analysis of Information Risk and is the only international standard quantitative model for information security and operational risk (https://www.fairinstitute.org/).
My interest in learning more about FAIR came from two observations.
The first was that we had many definitions of what constitutes risk. We refer to “script-kiddies” as risks. Not having a security control is referred to as a risk. SQL injection is a risk. We even said things like “How much risk is there with this risk?”
The other observation concerned our approach to quantifying risk. We derived the level of risk from likelihood and impact, and it was sometimes hard to get agreement on those values.
Having completed the course, one of the things I like about FAIR is its definitions: what a risk is, and what a risk statement must include. It should include an asset, a threat and an effect, with an optional method. An example of a risk is the probability of malicious internal users impacting the availability of our customer booking system via denial of service.
It uses future loss as the unit of measurement rather than a rating of critical, high, medium or low. The value of future loss is expressed as a range with a most likely value, along with the confidence level of that most likely value. As such, it focuses on accuracy rather than precision. I quite like that, as it makes risk easier to understand and compare. Reporting that a risk has a 1-in-2-year probability of happening, with a loss between $20K and $50K but most likely $30K, is a lengthy statement. However, it is more tangible and makes more sense than reporting that the risk is a High Risk.
Now it sounds like I’m all for FAIR, but I have some reservations. The main one is that there isn’t always data available to produce such an empirical result. Risk, according to FAIR, is calculated by multiplying loss event frequency (the number of times a loss event will occur in a year) by loss magnitude (the dollar range of loss from productivity, replacement, response, compliance and reputation). It is hard to come up with a loss frequency value when there is no past data to base it on; I’d be guessing the value, not estimating it. FAIR suggests estimating a subgroup if there isn’t enough reliable data available, but I see the same problem there. The subgroup for loss frequency is the number of times threat actors attempt to affect the asset, multiplied by the percentage of attempts that succeed. Unless you have that data, this is no easier to determine.
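To make the estimation idea concrete, here is a minimal sketch of how a FAIR-style annualised loss estimate can be simulated once you have calibrated ranges. All numbers are hypothetical, and the triangular distribution is a simple stand-in for the calibrated PERT estimates FAIR practitioners typically use:

```python
import random

def simulate_annual_loss(freq_min, freq_likely, freq_max,
                         mag_min, mag_likely, mag_max,
                         trials=100_000):
    """Monte Carlo sketch of a FAIR-style annualised loss estimate.

    Frequency and magnitude are given as (min, most likely, max)
    ranges and sampled from triangular distributions.
    """
    losses = []
    for _ in range(trials):
        # Sample how many loss events occur this year...
        events = round(random.triangular(freq_min, freq_max, freq_likely))
        # ...and sum a sampled magnitude for each event.
        total = sum(random.triangular(mag_min, mag_max, mag_likely)
                    for _ in range(events))
        losses.append(total)
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "p10": losses[int(0.10 * trials)],
        "p90": losses[int(0.90 * trials)],
    }

# Hypothetical risk: roughly a 1-in-2-year event with a $20K-$50K
# loss, most likely $30K (the example from the text).
result = simulate_annual_loss(0, 0.5, 1.5, 20_000, 30_000, 50_000)
print(result)
```

The output is a range with percentiles rather than a single number, which matches FAIR’s preference for accuracy over precision.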
Overall, it still feels like a much better way of quantifying risk. I’ll end with a quote from the instructor: “Risk statements should be of probability, not of predictions or what’s possible.” It resonated with me, as it is something I too often forget.
The interconnected world of billions of IoT (Internet of Things) devices has revolutionised digitalisation, creating enormous opportunities for humanity. In this post, I will focus on the uniqueness of the security challenges presented by these connected IoT devices and how we can respond.
We are more connected than at any point in the history of humanity. Every time we wear or connect a device to the Internet, we extend this connectivity, increasing our ability to solve problems and be more efficient and productive. This connected world of billions of IoT (Internet of Things) devices has revolutionised digitalisation. However, such massive use of connected devices has presented a different cybersecurity challenge, one that has compelled the C-suite to develop and implement separate cybersecurity programs to respond to two competing security objectives. Where IT (Information Technology) focuses on managing confidentiality and integrity more than the system’s availability, OT (Operational Technology) focuses on the availability and integrity of the industrial control system. The convergence of the two disparate technology environments does improve efficiency and performance, but it also increases the threat surface.
What makes OT different from IT?
IT predominantly deals with data as a product that requires protection. In OT, on the other hand, data is a means to run and control a physical machine or process. The convergence of the two environments has revolutionised our critical infrastructure, where the free exchange of data has increased efficiency and productivity. Less physical presence is required at remote sites to initiate manual changes; you can now make changes and control machines remotely. With Industry 4.0, we are witnessing the next wave of the industrial revolution: real-time interaction between the machines in a factory (OT) and external third parties such as suppliers, customers and logistics providers. Real-time exchange of information from the OT environment is required for safety and process effectiveness.
Unfortunately, the convergence of IT into OT environments has exposed the OT ecosystem to more risk than ever before by extending the attack surface from IT. The primary security objective in IT is to protect the confidentiality and integrity of data while ensuring the data is available as and when required. In OT, however, the safety of people and the integrity of the industrial process are of utmost importance. The following diagrams show a typical industrial process and the underlying devices in an OT environment.
As you can see, the underlying technology components are very similar to IT, and organisations can therefore adopt IT security principles within OT environments. OT environments are made up of programmable logic controllers (PLCs) and computing devices such as Windows and Linux computers. Deployment of such devices exposes the OT environment to threats similar to those in the IT world. Organisations therefore need to appreciate the subtle similarities and differences between the two environments when applying cybersecurity principles to improve the security and safety of the converged whole.
Risk profile of the converged environment
The interconnected OT and IT environments create an extended attack surface where threats can move laterally between the two environments. However, it was not until 2010, with the appearance of Stuxnet, that the industry realised the danger. Stuxnet was the first attack on operational systems: roughly 1,000 centrifuges were destroyed at an Iranian nuclear plant to reduce its uranium enrichment capability. This incident brought OT cybersecurity threats to the forefront.
The following are some of the key threats and risks inherent in a poorly managed converged environment.
* Ransomware, extortion and other financial attacks
* Targeted and persistent attacks by nation-states
* Unauthorised changes to the control system that may result in harm, including loss of life
* Disruption of services due to delayed or flawed information relayed to the OT environment, leading to malfunctioning control systems
* Legacy devices incapable of implementing contemporary security controls being used to launch a cyberattack
* Unauthorised interference with communication systems that directs operators to take inappropriate actions, with unintended consequences
Cybersecurity Behaviours and Practices
As noted earlier in this post, the converged IT & OT environments can take inspiration from IT security, adopting tools, techniques and procedures to reduce cyberattack opportunities. The following are some strategies to help organisations set up a cybersecurity program for an interconnected environment.
Reduce complexity and attack opportunities
* Reduce the complexity of networks, applications, and operating systems to reduce the “attack surface” available to an attacker.
Better perimeter and service knowledge
* Map the interdependencies between networks, applications, and operating systems.
* Identify assets that deal with sensitive data.
Strengthen internal collaboration
* Avoid conflicts between business units (business owners, information technology, security departments, etc.) and improve internal communication and collaboration.
Strengthen external collaboration
* Improve and strengthen collaboration with external entities such as government agencies, vendors and customers, sharing threat intelligence to improve incident response.
Know your insider threats
* Identify and assess insider threats.
* Regularly monitor such threats, including changes in employees’ social behaviours.
Increase awareness and training
* Invest in targeted employee security awareness and training to improve behaviours and attitudes towards security.
Strengthen integration by data/traffic analysis
* Improve network traffic data collection and analysis processes to improve security intelligence, enabling informed and targeted incident response.
Build in-house security capabilities
* Build in-house security competencies, including skilled resources for continuity and enhanced incident response.
Limit BYOD (bring your own device)
* A clear BYOD policy must be defined and implemented within the IT & OT environments.
* Only approved devices should be allowed to connect to the environment, with strict authorisation and authentication controls in place.
* Monitor all user activity whilst devices are connected to the network.
Align cyber program with industry standards
* Align your cybersecurity program with well-established security standards to structure the program.
* Some of the industry standards include ISO 27001–27002, RFC 6272, IEC 61850/62351, ISA-95, ISA-99, ISO/IEC 15408, ITIL, COBIT etc.
Segment the IT & OT environments
* Ensure clear demarcation of the IT & OT environments to limit the attack surface.
* Apply virtual segmentation with zero trust, completely isolating control and automation environments from the supervisory layer.
* Implement tools and techniques to facilitate incident detection and response.
* Implement a zero trust model for endpoints.
Implement threat hunting
* Implement threat hunting capabilities for the converged environment focused on early detection and response.
Conclusion
The last fifteen years or so have shown us how vulnerable our technology environments are. Protecting these environments requires a multi-pronged and integrated strategy, one that considers not only external risks but also insider threats. Prioritising and mitigating these risks requires a holistic approach that includes people, processes and technology. Benchmarking exercises can also help organisations identify the “state of play” among similar-sized entities. We are certainly seeing consistent investment in security across the board, but we still have to work hard to respond continuously to ever-changing threat scenarios.
It was just over thirty years ago that Tim Berners-Lee’s research at CERN in Switzerland resulted in the World Wide Web, which we colloquially call the Internet today. Who would have thought, Tim included, that the Internet would become what it is today? This network of networks impacts every aspect of life on Earth and beyond. People are more connected than ever before. The Internet has given rise to new business models and helped traditional businesses find new and innovative ways to market their products.
Unfortunately, like everywhere else, there are malicious actors on the Internet trying to take advantage of vulnerabilities in these technologies for their own interests. As first-generation users of the Internet, everything was new to us. Whether it was online entertainment or online shopping, we were the first to use it. We grew up with the Internet, and many of us have been victims of Internet or cybercrime at some point in our lives. This created a whole new industry, now called “cybersecurity”, seen as the protector against cybercrime. However, it has always been a big challenge to fix who is responsible for security: the business, or the cybersecurity teams.
What is the need to fix responsibility?
Globalisation, and more recently the pandemic, has increased the number of people working remotely, which has become an ever-increasing headache for companies. The number of security incidents has increased manifold, and the cost per incident is rising year on year.
According to IBM’s Cost of a Data Breach 2021 report, the average cost of a data breach is upward of $4.2 million.
Governments mandate cybersecurity compliance requirements, non-compliance with which attracts massive penalties in some jurisdictions. For example, non-compliance with Europe’s General Data Protection Regulation (GDPR) may see companies fined up to €20 million or 4 per cent of their annual global turnover.
Companies that traditionally viewed security as a cost centre now view it differently due to the losses they incur from breaches and penalties. Today, companies see security as everyone’s responsibility instead of an IT problem.
Cyber hygiene: challenges and repercussions of poor practice
Cyber hygiene, like personal hygiene, is the set of practices organisations deploy to ensure the security of their data and networks. Maintaining basic cyber hygiene can be the difference between suffering a damaging breach and recovering quickly from one without a massive impact on the business.
Cyber hygiene raises the cost of an attack for cybercriminals by reducing vulnerabilities in the environment. By practising cyber hygiene, organisations improve their security posture and become more effective at defending themselves against persistent, devastating cyberattacks. Good cyber hygiene is its own incentive: it reduces the likelihood of being hacked and of incurring fines, legal costs and reduced customer confidence.
The biggest challenge in implementing good cyber hygiene is knowing what we need to protect; a good asset inventory is the place to start. In a hybrid working environment, clear visibility of your assets is important: you can’t protect what you don’t know about. It is therefore imperative to know where your information assets are located on your network, who is using them, where the data resides and who can access it.
Another significant challenge is maintaining discipline and continuity over a long period. Scanning your network occasionally will not stop unrelenting cyberattacks. Automated monitoring must therefore be implemented to continuously detect and remediate threats, which requires investment in technical resources that many businesses don’t have.
Due to the above challenges, we often see poor cyber hygiene resulting in security vulnerabilities and potential attack vectors. The following are some of the vulnerabilities caused by poor hygiene:
Unclassified data: Inadequate data classification results in misplaced data stored in places that may not be adequately protected.
Data loss: Poor and inadequate data classification may result in data loss due to a lack of adequate protection controls. If data is not regularly backed up, with backups tested for corruption, it may not be recoverable after a breach, hardware failure or improper data handling.
Software vulnerabilities: All software contains vulnerabilities, and developers release patches regularly to fix them. A missing or poor patch management process leaves software vulnerable, which hackers can exploit to gain access to the network and data.
Poor endpoint protection: According to the AV-TEST Institute, over 450,000 new malicious applications (malware) and potentially unwanted applications are registered in the wild every day. With inadequate endpoint protection practices, including malware protection tools, hackers can use a wide range of tools and techniques to get inside your network, breach the company’s environment and steal data.
Inadequate vendor risk management: With supply chain attacks ever increasing, comprehensive vendor risk management must be implemented to consider the potential security risks posed by third-party vendors and service providers, especially those with access to sensitive data or that process it. Failure to implement such a process further exposes the organisation to service disruptions and security breaches.
Poor compliance: Poor cyber hygiene often results in non-compliance with legal and regulatory requirements.
Building accountability within your cybersecurity organisation
With breaches and their impacts ever increasing, we as an industry and society should start considering how to motivate organisations to make cybersecurity a way of life. Cyber hygiene must be demanded of the organisations that hold, process and use your data.
Now that we understand the challenges of maintaining good cyber hygiene, we must also understand what we have been doing to solve these issues. So far, we have tried many ways: some companies have developed controls internally, while others have had rules and regulations mandated externally. However, we have failed to address the responsibility and accountability issue, and we have failed to balance business requirements against the rigour cybersecurity demands. For example, governments have made laws and regulations with punitive repercussions without considering how a small organisation would be able to implement controls to comply with them.
There are no simple solutions to this complex problem. Laws and regulations definitely raise the bar for organisations to maintain a good cybersecurity posture, but they will not keep hackers out forever. Organisations need to be proactive in introducing more accountability within their security organisation. Cybersecurity professionals need to take responsibility and accountability for preventing and thwarting cyberattacks. At the same time, business leaders need to understand the problem, bring in the right people for the job, and develop and implement a cybersecurity framework that aligns with the business’s risks. Making cybersecurity one of the strategic pillars of the business strategy will engrain it in the organisation’s DNA.
There are many ways to start this journey. To begin with, organisations need a glue: a cybersecurity framework, such as the National Institute of Standards and Technology (NIST) Cyber Security Framework (CSF).
The NIST CSF is a great way to start baselining your cybersecurity functions. It provides a structured roadmap and guidelines for achieving good cyber hygiene. In addition, the CSF provides guidance on things like patching, identity & access management and least-privilege principles, which can help protect your organisation. Once you get the basics right, along with automation, your organisation will have more time to focus on critical functions. Setting up basic hygiene processes also improves user experience and makes network behaviour more predictable, resulting in fewer service tickets.
Research has shown that the best security outcomes are directly proportional to employee engagement. Organisations may identify “Security Champions” within the business who can evangelise security practices in their respective teams. The security champions can act as a force multiplier while setting up accountabilities. They can act as your change agents by identifying issues quickly and driving the implementation of the solutions.
Conclusion
There is no perfect time to start. The sooner you start addressing and optimising your approach to cyber hygiene and cybersecurity, the faster you will gain assurance against cyberattacks. This brings peace of mind, knowing the controls are working and doing what they are supposed to. You will not be scrambling for solutions during a breach, but ready to respond to any eventuality.
If, despite poor cyber hygiene, your organisation has managed to avoid any serious breach, it is just a matter of time before your luck runs out.
Globally, businesses have reaped the benefits of transformation by adopting new ways of doing business and delivering their products to market quickly and efficiently. Digital transformation has made a distinctive contribution to this effort, with organisations using modern, efficient applications to deliver these business outcomes. Behind the scenes, APIs are the most critical components helping web and mobile applications deliver innovative products and services.
An API is a piece of software that has direct access to upstream or downstream applications and, in some cases, directly to the data. The following picture depicts a typical scenario where a web application calls an API, which in turn calls downstream resources and data. Unfortunately, this direct access to data introduces a new attack surface, and API breaches are continuously on the rise, resulting in impersonation, data theft and financial fraud.
According to Gartner, by 2022 APIs will change from an infrequent to the most-frequent attack vector, resulting in regular data losses for enterprise web applications. This changing trend has brought the realisation that something needs to be done to protect data at the API and digital interface level.
Example of an API Breach
There are many causes of API breaches, and one of the most common is overly broad permissions within an application ecosystem intended for non-human interaction. Stricter user access controls ensure only authorised users can access the application, but once a user is authenticated, the APIs can frequently access any data. This is all well and good until a bad actor bypasses the user authentication and accesses data from the downstream systems directly.
The above picture shows a user accessing a mobile application through an API while desktop users connect to the same API and backend through a web interface. Mobile API calls include URIs, methods, headers and other parameters, just like a basic web request, so they are exposed to similar attacks: injection, credential brute force, parameter tampering and session snooping. Hackers can employ the same tactics that work against a traditional web application to breach the mobile API. Mobile applications are also inherently difficult to secure, so attackers continuously decompile and reverse engineer them in pursuit of vulnerabilities such as hardcoded credentials or weak access control methods.
API Security Threats
In 2019, OWASP released a list of the top 10 API security threats, advising on strategies and solutions to mitigate the unique security challenges and risks of APIs. The following are the ten API security threats.
API1:2019: Broken Object-Level Authorisation
APIs often expose endpoints that handle object identifiers, creating a wide attack surface: any function that uses a client-supplied identifier to access a data source can introduce an object-level access control issue. It is therefore recommended to carry out object-level authorisation checks in every such function that accesses data.
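The recommendation above can be illustrated with a small, framework-free sketch. The `Booking` model, in-memory store and handler below are invented for illustration; the point is the ownership check after the lookup:

```python
from dataclasses import dataclass

@dataclass
class Booking:
    id: int
    owner_id: int

# Stand-in data store for the example.
BOOKINGS = {42: Booking(id=42, owner_id=7)}

class Forbidden(Exception):
    """Raised when an authenticated user requests someone else's object."""

def get_booking(current_user_id: int, booking_id: int) -> Booking:
    booking = BOOKINGS.get(booking_id)
    if booking is None:
        raise KeyError("no such booking")
    # Object-level check: is *this* user allowed to see *this* object?
    # Authentication alone is not enough.
    if booking.owner_id != current_user_id:
        raise Forbidden("user may not access this booking")
    return booking
```

Without the ownership comparison, any logged-in user who guesses or enumerates `booking_id` values can read other users’ records.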
API2:2019: Broken User Authentication
User authentication is the first control gate for accessing the application, and attackers often take advantage of incorrectly implemented authentication mechanisms. Attackers may compromise an authentication token or exploit implementation flaws to assume other users’ identities, temporarily or permanently. Once authentication is compromised, the overall security of the API is compromised too.
API3:2019: Excessive Data Exposure
In the name of simplicity, developers often expose more data than required, relying on client-side capabilities to filter the data before displaying it to the user. This is a serious data exposure issue; it is recommended to filter data on the server side, exposing only the relevant data to the user.
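A minimal way to apply that server-side filtering is an explicit per-endpoint allow-list of fields, so the full record never reaches the client. The record and field names below are hypothetical:

```python
# Hypothetical full record as stored server-side.
USER_RECORD = {
    "id": 1,
    "name": "Alice",
    "email": "alice@example.com",
    "password_hash": "hash-value",  # must never leave the server
    "is_admin": False,
}

# Explicit allow-list of what this endpoint may return.
PUBLIC_FIELDS = {"id", "name"}

def to_public(record: dict) -> dict:
    """Serialise only allow-listed fields, dropping everything else."""
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}
```

The allow-list makes omission the default: a new sensitive column added to the record later stays hidden unless someone deliberately exposes it.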
API4:2019: Lack of Resources and Rate Limiting
System resources are not infinite, yet poorly designed APIs often don’t restrict the number or size of requested resources. Excessive use of resources may result in performance degradation and, at times, Denial of Service (DoS), including malicious DoS attacks. Without rate limiting applied, APIs are also exposed to authentication attacks such as brute force.
API5:2019: Broken Function-Level Authorisation
Authorisation flaws are common and are often the result of overly complex access control policies. Poorly defined privileged access controls and unclear boundaries between regular and administrative functions expose unintended functionality. Attackers can exploit these flaws to gain access to a resource or invoke privileged administrative functions.
API6:2019: Mass Assignment
In a typical web application, client-supplied data is bound to a data model that often also contains properties users should not be able to set. An API endpoint is vulnerable if it automatically converts client parameters into internal object properties without considering the sensitivity and exposure level of those properties. This could allow an attacker to update object properties they should not have access to.
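A common defence against mass assignment is to allow-list the fields a client may set instead of binding the whole payload. The field names here are invented for illustration:

```python
# Fields the client is permitted to update on its own profile.
UPDATABLE = {"display_name", "bio"}

def apply_update(profile: dict, payload: dict) -> dict:
    """Copy only allow-listed keys from the payload onto the profile."""
    updated = dict(profile)
    for key, value in payload.items():
        if key in UPDATABLE:
            updated[key] = value
        # Anything else (e.g. "role") is silently dropped.
    return updated

profile = {"display_name": "a", "bio": "", "role": "user"}
# The payload smuggles in a privilege escalation attempt.
updated = apply_update(profile, {"display_name": "b", "role": "admin"})
```

Here `updated["role"]` remains `"user"`: the injected property never reaches the model.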
API7:2019: Security Misconfiguration
Security misconfiguration is a very common and pervasive security problem. Insecure defaults, ad-hoc or incomplete configurations, misconfigured HTTP headers or inappropriate HTTP methods, insufficiently restrictive Cross-Origin Resource Sharing (CORS), open cloud storage and error messages containing sensitive information all leave systems vulnerable, exposing them to data theft and financial loss.
API8:2019: Injection
Injection flaws (including SQL injection, NoSQL injection and command injection) occur when untrusted data is sent to an interpreter as part of a command or query. Attackers can send malicious data to trick the interpreter into executing commands or accessing data without proper authorisation.
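The standard mitigation is parameterised queries, where the database driver binds untrusted input as data rather than as SQL. A small sketch using Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern (do NOT do this): string interpolation lets the
# payload rewrite the query itself.
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe pattern: the ? placeholder binds the value as data, never SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
# The payload matches no real user, so no rows come back.
```

The same principle applies to NoSQL queries and shell commands: never build the command string from untrusted input.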
API9:2019: Improper Asset Management
Due to the nature of APIs, modern applications may expose many endpoints. A lack of up-to-date documentation may lead to the use of older APIs, increasing the attack surface. It is recommended to maintain a proper inventory of hosts and deployed API versions.
API10:2019: Insufficient Logging and Monitoring
Attackers can take advantage of insufficient logging and monitoring, coupled with ineffective or missing incident response integration, to persist in a system and extract or destroy data without being detected.
Common Attacks Against APIs
APIs, like traditional applications, are exposed to many of the same types of attacks we have long defended against to protect networks and web applications. The following are some of the attacks that can easily be used against APIs.
How to Secure APIs
In addition to the mitigation strategies mentioned in the table above, there are broader controls that organisations may implement to protect their publicly shared APIs.
“Security First”, prioritise security.
Security is often an afterthought, seen as someone else’s problem, and API security is no different. API security must not be an afterthought: organisations stand to lose a lot through unsecured APIs. Ensure security remains a priority for the organisation and is built into your software development lifecycle.
Maintain a comprehensive inventory of APIs.
It is common for several APIs to be publicly shared without the organisation being aware of all of them. To secure these APIs, the organisation must first know about them: scan regularly to discover and inventory APIs, and consider implementing an API gateway for strict governance and management of the APIs.
Use a strong authentication and authorisation solution.
Poor or non-existent authentication and authorisation controls are major issues with publicly shared APIs. Some APIs are designed not to enforce authentication, which is often the case with private APIs intended only for internal use. Since APIs provide access to organisations’ databases, organisations must apply strict access controls to them.
Least privileges.
The foundational security principle of least privilege holds true for API security: all users, processes, programs, systems and devices must only be granted the minimum access needed to complete a stated function.
Encrypt traffic in transit.
API payload data must be encrypted when APIs exchange sensitive data such as login credentials, credit card numbers, social security numbers, banking information or health information. TLS encryption should therefore be considered a must.
Limit data exposure.
Ensure development artefacts such as keys, passwords and other sensitive information are removed before APIs are made publicly available. Organisations should use scanning tools in their DevSecOps processes to limit accidental exposure of credentials.
Input validation.
Input validation must be implemented so that untrusted input is never passed through the API to the endpoint unchecked.
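As a sketch of what validation at the API boundary can look like, the function below checks a request payload before anything downstream sees it. The field names, pattern and limits are invented for illustration:

```python
import re

# Hypothetical rules: usernames are 3-32 word characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_booking_request(payload: dict) -> list:
    """Return a list of validation errors; empty means the input is OK."""
    errors = []
    username = payload.get("username", "")
    if not isinstance(username, str) or not USERNAME_RE.fullmatch(username):
        errors.append("username must be 3-32 word characters")
    seats = payload.get("seats")
    # Type check first, then range check: reject anything unexpected.
    if not isinstance(seats, int) or not 1 <= seats <= 10:
        errors.append("seats must be an integer between 1 and 10")
    return errors
```

A request that fails validation should be rejected with HTTP 400 at the edge, so injection payloads and malformed values never reach the endpoint.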
Use rate limiting.
Setting a threshold above which subsequent requests will be rejected (for example, 10,000 requests per day per account) can prevent denial-of-service attacks.
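A common way to implement such a threshold is a token bucket. Below is a minimal single-process sketch (the rate and capacity values are arbitrary); a production deployment would typically enforce this per account at the gateway, with shared state:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for a single client/account."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reject: the API should respond with HTTP 429

bucket = TokenBucket(rate=0.5, capacity=3)
results = [bucket.allow() for _ in range(5)]
# The initial burst of 3 requests passes; the rest are rejected until
# tokens refill at the configured rate.
```

The capacity allows short bursts while the rate caps sustained throughput, which blunts both DoS floods and credential brute-force attempts.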
Deploy web application firewall.
Deploy a web application firewall and configure it to ensure that it can understand API payloads.
Conclusion
APIs are the most efficient method of accessing data for modern applications. Mobile applications and Internet of Things (IoT) devices leverage this efficiency to launch innovative products and services. With such a dependency on APIs, some organisations may not have realised the API-specific risks. However, most organisations already have controls to combat well-known attacks like cross-site scripting, injection and distributed denial-of-service that can also target APIs, and the best practices mentioned above are likely already in use. If you are struggling to start or don’t know where to start, the best approach could be to start from the top and work your way down the stack. It doesn’t matter how many APIs your organisation chooses to share publicly; the ultimate goal should be to establish principle-based, comprehensive and efficient API security policies and manage them proactively over time.