
Offensive
Introduction
Eight vulnerabilities per engagement
Triskele Labs have the privilege of working with clients through a multitude of sectors, including Government & Defense, Financial Services, Industrial, Healthcare & Life Sciences, Retail & E-commerce, Information Technology & Telecommunications, Energy & Utilities, and Education & Research.
During the last financial year, as global cyber threats intensified, our team conducted over 500 engagements. Although the number and severity of findings varied considerably across these assessments, we identified a total of 3,887 vulnerabilities: an average of nearly eight per engagement, or roughly fifteen new vulnerabilities each working day.
Upon further analysis, several notable trends emerged. The incidence of critical vulnerabilities has declined, while the number of longstanding clients engaged in continuous testing has grown. This relationship highlights the direct impact of regular penetration testing and the implementation of its recommendations on reducing overall risk within client environments.
Severity analysis
All identified issues are assigned a risk rating to allow for prioritisation of remediation efforts. Triskele Labs utilises a custom risk rating framework, built from tried-and-trusted processes and experience, to determine the risk level. The Triskele Labs framework is detailed below.
These ratings are derived from the following principles:
- Risk: Likelihood × Impact.
- Likelihood: Vulnerability + Threat.
- Vulnerability: An error or weakness in the design or implementation of a system.
- Threat: A person (internal or external) or something else that can cause damage to an asset.
- Impact: Loss of confidentiality, integrity, availability, or accountability. This can include how easily the vulnerability can be detected and exploited, or the extent of any information leakage.
Likelihood definitions
- Unlikely: Although a flaw or weakness exists, it is either exceedingly difficult to exploit, requires a highly skilled attacker, must be combined with other vulnerabilities for exploitation to occur, or is otherwise affected by factors that reduce the likelihood, such as timing or conditions.
- Possible: An attacker with a moderate level of skill and knowledge may be able to exploit this flaw or weakness; however, the flaw is not widely exploited.
- Likely: The flaw is well-known and exploitable by a novice or inexperienced attacker.
Consequence definitions
- Minor: The result of exploiting the flaw or weakness is a small gain for the attacker. This could include access to information that is not highly sensitive but may assist other attacks. The business should experience little impact as a result of exploitation of this flaw.
- Moderate: Exploitation of the flaw would result in the attacker gaining a low level of access to a resource, or access to sensitive information, although with limitations and restrictions on that access.
- Major: Exploitation of the flaw results in the attacker gaining a high level of access, or in a major change impacting the business. This is a position from which significant damage can be caused, and includes access to highly sensitive information.
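As a rough illustration of how likelihood and consequence ratings combine into an overall risk rating, the following sketch uses a hypothetical lookup matrix; the actual Triskele Labs matrix may assign the combinations differently.

```python
# Illustrative risk-matrix lookup. The matrix values below are an
# assumed example mapping, not the exact Triskele Labs framework.

LIKELIHOOD = ["Unlikely", "Possible", "Likely"]
CONSEQUENCE = ["Minor", "Moderate", "Major"]

# Rows: likelihood (low to high); columns: consequence (low to high).
MATRIX = [
    ["Low",    "Low",    "Medium"],    # Unlikely
    ["Low",    "Medium", "High"],      # Possible
    ["Medium", "High",   "Critical"],  # Likely
]

def risk_rating(likelihood: str, consequence: str) -> str:
    """Return the overall risk rating for a likelihood/consequence pair."""
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]
```

A lookup such as `risk_rating("Likely", "Major")` then yields the highest rating, while `risk_rating("Unlikely", "Minor")` yields the lowest.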
 
Vulnerability analysis
The following provides a broad overview of trends and assessments conducted by Triskele Labs.
Key takeaways:
- Overall volume and density are almost identical: roughly 7.5 to 7.76 vulnerabilities per engagement in both timeframes.
- Critical issues are proportionally rarer post-2024 (0.3% vs 0.8%), and fewer engagements identified any Critical findings.
- High severity issues remained at ~8% of all vulnerabilities for both periods; however, a higher number of engagements post-2024 (138 vs 82) saw at least one High finding, reflecting the larger sample.
- Medium and Informational vulnerabilities reduced slightly in share, while Low risk findings rose from ~51% pre-2024 to ~54% post-2024.
 
Most Common Test Types
Consistent with previous years, the three most common types of penetration testing are the following:
Web Applications and Application Programming Interfaces (APIs)
Designed to identify and exploit vulnerabilities in web applications and their supporting APIs, from small, single-page, brochure-style web applications to large, complex Customer Relationship Management (CRM) software. The primary goal is to evaluate the application's resilience against common threats. This testing involves a combination of automated scanning and manual techniques to assess business logic, input validation, access controls, and more. Inclusions typically cover OWASP Top 10 vulnerabilities.
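As a small illustration of the automated side of such testing, the sketch below flags HTTP responses that are missing common security headers. The header baseline here is an assumed minimal set for illustration, not a complete assessment checklist.

```python
# Illustrative check of the kind run during automated web application
# scanning: flag responses missing common security headers. The header
# list is a hypothetical minimal baseline, not an exhaustive one.

EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def missing_security_headers(response_headers: dict) -> list:
    """Return the expected security headers absent from a response.

    Header names are compared case-insensitively, as HTTP requires.
    """
    present = {name.lower() for name in response_headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]
```

Checks like this are cheap to run across every page of an application; the manual effort is then reserved for business logic and access control testing, which automation cannot judge.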
External Networks
Focuses on assessing the security posture of an organization’s internet-facing assets, such as firewalls, VPN gateways, email servers, and public IP ranges. The primary goal is to identify and exploit vulnerabilities that an external attacker could leverage to gain unauthorized access to the internal network or sensitive systems.
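A minimal sketch of the TCP reachability check that underpins the discovery phase of external network testing is shown below. Real engagements use dedicated tooling (such as Nmap) under explicit authorisation; the host and ports here are examples only.

```python
# Minimal TCP connect-scan sketch. Only scan hosts you are authorised
# to test; this is an illustration, not an engagement tool.
import socket

def scan_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A full connect scan is noisy and easily logged, which is often the point: external testing should also confirm that the client's monitoring detects the probing.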
Internal Networks
A controlled assessment that simulates an attacker who has already gained access to an organization’s internal environment, whether through physical access, social engineering, or as a rogue employee, in what is known as an “Assumed Breach” scenario. The primary goal is to evaluate the effectiveness of internal security controls in preventing privilege escalation, lateral movement, and unauthorized access to critical systems and sensitive data.
OWASP Top 10
Using the OWASP Top 10 is perhaps the most effective first step towards changing the software development culture within organizations into one that produces more secure outcomes.
Our 2024 analysis highlights five security gaps as repeat offenders:
- Broken Object Level Authorisation has risen to the top, as attackers still exploit simple ID manipulation in APIs and web applications to harvest or tamper with data, placing customer records at constant risk.
- Broken Authentication continues to enable credential stuffing and session hijacking, turning weak login flows and long-lived service tokens into sources of fraud and compliance penalties.
- Server-Side Request Forgery remains critical because cloud workloads that fetch URLs on behalf of users allow adversaries to reach internal metadata services and steal keys that can compromise entire environments.
- Security Misconfiguration still plagues mature teams, with default credentials, open buckets, and verbose error pages giving intruders easy footholds that quickly escalate to full outages.
- Improper Assets Management exposes untracked or outdated hosts, leaving known vulnerabilities unpatched and hidden data flows unmonitored; the lack of a real-time asset inventory can undermine every other control.
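To illustrate the first of these, Broken Object Level Authorisation can be probed by requesting object IDs that belong to another user and checking whether the API serves them anyway. The sketch below assumes a hypothetical `/api/orders/{id}` endpoint and takes the HTTP client as a parameter, so it reflects the technique rather than any specific application.

```python
# Minimal BOLA (IDOR) probe sketch. The endpoint path is hypothetical;
# `http_get` is an injected callable, path -> (status_code, body),
# already authenticated as the testing user.

def check_bola(http_get, foreign_ids) -> list:
    """Request objects belonging to another user with our own session.

    Returns the IDs served despite belonging to a different user,
    i.e. potential Broken Object Level Authorisation findings.
    """
    exposed = []
    for oid in foreign_ids:
        status, _body = http_get(f"/api/orders/{oid}")
        # A well-behaved API should answer 403 or 404 for foreign objects.
        if status == 200:
            exposed.append(oid)
    return exposed
```

Injecting the client also makes the check easy to rehearse offline against a stub before it is ever pointed at a live, authorised target.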
 
Emerging Sectors
Two key emerging fields are experiencing the highest rate of testing growth. The first is environments that contain Operational Technology (OT).
Transport, healthcare, and infrastructure are growing at a rapid pace, and are naturally more prominent targets for cyber security threats. As these sectors routinely utilise unique hardware for highly specialised tasks and capabilities, conducting frequent and thorough testing has become paramount to ensuring operational stability and the protection of customer information.
While Operational Technology holds significant importance, not all specialised tools are “air-gapped” or sufficiently segmented. This has a direct relationship with the second emerging field: red team, or “adversarial simulation”, style testing. This testing verifies not only that internal network, Active Directory, and network segmentation best practices are well designed, architected, and adhered to, but also that Security Operations Centre teams operate with full visibility of, and responsiveness to, stealthier and more targeted threats.
These tests have highlighted the importance of adherence to baseline principles such as credential management and Multi-Factor Authentication (MFA); compromised credentials remain the most common access vector in the majority of real-world attacks and penetration testing exercises.
The Emergence of AI In Cyber Security
An ever-increasing aspect of cyber security is the implementation of Artificial Intelligence, Machine Learning, and Large Language Models across the security landscape. This pertains not only to the creation of applications via LLM-assisted development, but also to the defensive and offensive tools used for automated testing and discovery. These tools should be treated with caution: they are disruptive and may introduce a multitude of vulnerabilities, as their emphasis is on producing a “working baseline” rather than on security best practices.
About
The Offensive team at Triskele Labs is recognised as an industry leader in identifying and sharing critical vulnerability insights on both local and global scales. Supported by highly qualified specialists, we conduct in-depth trend analyses to ensure comprehensive coverage of both established and emerging techniques within the continually evolving threat landscape.
Our expert penetration testers work diligently to ensure the efficacy of security implementations of clients’ networks and environments, both physical and virtual. The main priority of penetration testing is to reveal potential entry points before malicious actors can exploit them and provide our clients with clear, prioritized, and actionable recommendations for remediation.