In part one of this series, we discussed the evolving landscape of cybersecurity and the roles artificial intelligence (AI) and machine learning (ML) play in the security space today. Here in part two, we discuss the advancements that have been made in AI and ML that strengthen cybersecurity and the challenges that come with implementing this evolving technology.

Advancements in AI and ML for Cybersecurity

Cybersecurity solutions are making use of advancements in AI and ML to overcome the limitations of traditional, signature-based detection methods. These new systems analyze vast amounts of data in order to learn patterns and behaviors and make informed decisions based on what they’ve learned.

Among the most impressive advancements in ML are deep learning systems. These systems use artificial neural networks that mimic the functions and structure of the human brain. In practice, AI/ML-driven solutions that leverage this technology can analyze enormous amounts of data to learn user behavior throughout an organization, detect when deviations occur, and take appropriate action. Among many other implementations, these solutions are capable of preventing new malware from entering a system as well as identifying insider threats and compromised accounts.

Looking toward the future, AI/ML-driven cybersecurity solutions will have more sophisticated threat detection and response capabilities and will most likely perform their tasks faster and more efficiently. They could also evolve into completely autonomous security systems that can operate without any human intervention.1 As AI/ML-driven cybersecurity solutions edge closer to these realities, security professionals need to keep a few concerns top of mind.

Challenges and Limitations of AI and ML in Cybersecurity

Despite the exciting contributions of AI and ML in cybersecurity, their challenges and limitations cannot be ignored. One major concern lies with algorithmic bias, which can occur if an AI/ML-driven solution is trained with biased data. If the data is biased, then the solution is likely to perpetuate and even magnify those biases.

For instance, an AI/ML-driven solution trained on historical data that is biased toward specific types of threats may disregard other attacks that behave outside of those previously learned parameters. Cybersecurity professionals thus recommend training AI/ML-driven solutions with diverse data sets and performing regular audits to identify and fix possible biases.

Furthermore, AI/ML-based solutions still struggle to understand intent and context. This limitation can lead to false positives (normal behavior misidentified as malicious) or false negatives (malicious activity missed). False reporting is one of the many reasons why AI/ML-driven solutions cannot yet be completely autonomous, as human intervention is still sometimes needed to interpret AI/ML-generated results.

Preparing for an AI-Driven Cybersecurity Future

Your organization’s unique needs will dictate how you implement AI and ML, but there are a few must-dos to keep in mind when onboarding these technologies.

Be sure to extensively inspect and test any ML training models on a secure virtual machine before you fully deploy them. Some training data might be deliberately “poisoned” as a form of cyberattack to force your AI to learn incorrectly and malfunction. It's also possible for a model to be tampered with by an inside adversary.

Provide your system with updated data as often as possible. Your AI/ML-driven solution is only as good as the data it can learn from—making it imperative for your system to continuously learn from diverse data sets to adapt to the evolving threat landscape as well as to changes in your organization. Also, remember that just because your solution can learn doesn’t mean it is autonomous. Human oversight and intervention will be necessary to keep your system properly trained and to reinforce desired behaviors while discouraging undesirable ones, such as false reporting.

AI and ML: The Next Frontier for Cybersecurity

AI and ML stand to change the game for security professionals in their never-ending quest to get ahead of the bad guys. While these technologies come with some limitations, they are worth adopting to better defend against threat actors who now leverage similar technology in an attempt to outgun defenders. Don’t fall behind in this escalating arms race—check out CASM® today.

1. https://www.analyticsvidhya.com/blog/2023/02/future-of-ai-and-machine-learning-in-cybersecurity/

With the help of artificial intelligence (AI) and machine learning (ML), cybercriminals are creating novel, sophisticated threats more frequently and with fewer resources than ever before. These threats are increasingly difficult to detect using signature-based analysis methods and continue to wreak havoc across the digital business landscape. In 2023, the global average cost of a single data breach was $4.45 million1—a 15% increase over the past three years.

But AI and ML are equally as powerful for cybersecurity professionals in their efforts to defend against advanced threat tactics. In this blog—and in Part 2—we dive into the roles of AI and ML in cybersecurity and explore how these emerging technologies both create opportunities for stronger security and complicate the threat landscape.

The Evolving Landscape of Cybersecurity

Today’s cybersecurity professionals find themselves in a never-ending arms race with threat actors who increasingly use AI and ML to one-up security technologies. Generative AI streamlines software creation and scamming campaigns, allowing cybercriminals to craft extensive attacks with fewer people and less expertise. AI combined with ML enables threat actors to create more successful phishing and social engineering attacks as they can now leverage these technologies to manufacture dangerously realistic deepfakes.

Consider one such incident where cybercriminals tricked an employee at the company Retool into revealing their multi-factor authentication (MFA) code—ultimately exposing 27 cloud customer accounts.2 The attacker penetrated the employee’s account through a spear phishing attack. They then navigated multiple layers of security, called the employee using an AI-powered voice clone of an IT staff member, and asked for the MFA code. This attack is a prime example of how today’s organizations are underprepared and need to advance their threat detection methods by fighting fire with fire.

The Role of AI in Cybersecurity

AI plays an essential role in cybersecurity and threat detection through automated data analysis. Incident response and forensics teams need to analyze enormous amounts of data, including logs, network traffic, and user behavior, in order to identify threats and indicators of compromise (IOCs). Using AI for these jobs not only speeds up the process but also helps detect patterns that manual analysis might miss.
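As a minimal illustration of this kind of automated log analysis, the sketch below scans log lines against a small set of hypothetical IOC patterns. The IP range, hash format, and encoded-PowerShell signature are illustrative placeholders, not indicators from any real threat feed:

```python
import re

# Hypothetical indicators of compromise (IOCs) for illustration only.
IOC_PATTERNS = [
    re.compile(r"\b203\.0\.113\.\d{1,3}\b"),            # example known-bad IP range
    re.compile(r"\b[a-f0-9]{64}\b"),                    # SHA-256 hashes appearing in logs
    re.compile(r"powershell\s+-enc\b", re.IGNORECASE),  # encoded PowerShell invocation
]

def scan_log_lines(lines):
    """Return (line_number, line) pairs that match any IOC pattern."""
    hits = []
    for n, line in enumerate(lines, start=1):
        if any(p.search(line) for p in IOC_PATTERNS):
            hits.append((n, line))
    return hits

sample = [
    "2024-01-05 10:02:11 accepted connection from 192.168.1.10",
    "2024-01-05 10:02:14 outbound connection to 203.0.113.55",
    "2024-01-05 10:02:20 cmd: powershell -enc SQBFAFgA...",
]
for n, line in scan_log_lines(sample):
    print(f"IOC hit on line {n}: {line}")
```

A production system would pull its indicator list from a threat-intelligence feed and stream logs continuously rather than scanning a fixed list.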

AI can also be used to analyze data from previous encounters and threat actors to identify trends that may be signals of an attack—allowing security teams to anticipate and proactively mitigate risk. In Cyber Magazine,3 Hitesh Bansal, Wipro's Country Head (UK & Ireland) – Cybersecurity & Risk Services explains, "Advanced AI now leverages existing protection technologies to build a logical layer within models to proactively protect data. For example, this can take the form of blocking traffic at the firewall level, before the threats compromise the boundaries of an organisation.”

Machine Learning’s Contribution to Threat Detection

When combined with ML, AI can do much more than just analyze and report data—it can learn and make informed decisions. For instance, an ML-capable cybersecurity solution learns the patterns of normal behavior for an organization and its users. Empowered with this knowledge, AI can detect when an anomalous activity happens. It then enters the threat-hunting process and closely examines the inconsistency. Depending on the nature of the anomaly, AI takes action, such as creating an exception, shutting down the activity completely, or deferring to a human to make the choice.
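The behavioral-baseline idea described above can be sketched in a few lines. This toy example learns a “normal” activity level from hypothetical training data and flags values that deviate sharply; a real solution would use far richer features and models:

```python
from statistics import mean, stdev

# Hypothetical training data: a user's normal logins-per-hour observed
# during a learning period.
baseline = [4, 5, 3, 4, 6, 5, 4, 5, 3, 4]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag activity more than `threshold` standard deviations from the learned norm."""
    return abs(value - mu) / sigma > threshold

print(is_anomalous(5))   # a typical hour -> False
print(is_anomalous(40))  # e.g. a credential-stuffing burst -> True
```

The same pattern scales up: learn a baseline per user and per metric, then route anomalies into the threat-hunting workflow described above.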

Traditional detection methods need human input and rely on known malware signatures—meaning someone somewhere had to be infected first. While this method accurately detects certain threats, it struggles to detect zero-day (never-before-recorded) malware. SynerComm integrates this traditional vulnerability scanning with ML and automatic discovery in its continuous attack surface management (CASM®) and continuous penetration testing (CPT) solutions. This combination enables you to proactively identify IT infrastructure vulnerabilities and improves your threat detection capabilities. Learn more about CASM® and be prepared for the evolving cyberthreat landscape.

Sources:

1. https://www.ibm.com/reports/data-breach
2. https://securityaffairs.com/150981/hacking/retool-smishing-attack.html
3. https://cybermagazine.com/articles/the-role-of-generative-ai-in-tackling-cyber-threats

The recent 20th-anniversary IT Summit was an eye-opener for tech enthusiasts, security professionals, and business leaders alike. This annual two-day event brings together IT leaders from across the country to learn about the latest strategies and challenges in the infrastructure, data center, and InfoSec communities.

This year’s discussions revolved around the evolving landscape of business applications and data center access. This evolution is driven by the need to adapt to a rapidly changing digital world, characterized by increasing cyber threats and the demand for enhanced security. A few key themes were hot topics this year, including identity-based access, leveraging the zero-trust model, the use of xDR API-enabled security ecosystems, and the integration of automation and AIOps for self-healing security, network, data center, and public cloud infrastructure.

Here are our top takeaways from our industry visionaries on each topic:

Identity-Based Access

Traditionally, security systems have focused on network-based or perimeter-based defenses. As remote work and cloud services have become the norm, identity-based access is gaining importance. This approach ensures that only authorized users can access critical systems and data, regardless of their location.

Leveraging the Zero-Trust Model

In a zero-trust environment, trust is never assumed, and verification is a constant process. This model provides a higher level of security by continuously verifying the identity and security posture of every user and device attempting to access resources. By adopting the zero-trust model, organizations can enhance their security and protect against both external and internal threats.

xDR API-Enabled Security Ecosystem

This approach emphasizes data context sharing and enrichment, allowing security solutions to work in synergy. By integrating various security tools through APIs, organizations can enhance threat detection and response capabilities. This holistic approach to security is vital in a world where cyber threats are constantly evolving.

Automation and AIOps

Instead of relying on static security infrastructure responses, organizations are moving toward dynamic, self-healing responses. AIOps (Artificial Intelligence for IT Operations) allows for real-time threat detection and response, reducing the human intervention required for security operations. Automation ensures that security systems can adapt to emerging threats with agility and precision.

Intent and Narrow Focus

To achieve success in network and security infrastructure automation and AIOps initiatives, it's crucial to have a clear intent and a "narrow focus". In other words, organizations need to set specific goals and identify the data points that provide the necessary visibility. This requires upgrading infrastructure to collect and correlate these data points. High-fidelity input and data points are essential for effective security.

Quantifying Cyber Investments

InfoSec programs have evolved over the years, starting from technical controls investments to compliance and risk-based controls investments. The current focus is on data-driven investments based on financial exposure and annual expected losses, ensuring that investments align with risk tolerance and financial objectives.

How SynerComm Can Help

SynerComm's One Strategic Security Plan (OneSSP™) offers a comprehensive range of services to support organizations on their security maturity journey. We collaborate with IT teams to identify their security needs, develop a unique path forward, and provide both the necessary solutions and expertise.

INSIGHTS Express and Enterprise offer tailored assessments, risk analysis, and financial impact evaluations to help organizations understand their current security posture and plan for improvements with a clear return on investment.

Our team also offers application assessments, penetration testing, adversary simulations, and continuous penetration testing to test and fine-tune security controls. Our technology sourcing expertise optimizes network and security infrastructure design, deployment, and ongoing operations, ensuring cost-effective and efficient solutions.

The 2023 IT Summit shed light on the critical shifts in business application and data center access, driven by identity-based access, xDR API-enabled security ecosystems, and more. As the digital landscape continues to evolve, organizations must adapt and prioritize security to protect assets and stay competitive. Our range of services and expertise can assist you in navigating this evolving landscape and enhancing your cybersecurity defenses. Connect with our team to get started today.

In today’s business world, most companies are fully reliant on technology to maintain their daily operations. Data has become valuable currency and as much as technology creates convenience and efficiency, the sheer volume of connected devices and systems has increased risk and vulnerability. Attacks on systems are becoming more prolific and companies need to constantly evaluate if they have done enough to protect themselves or their customers.  

In a recent IT Trendsetters webinar with Rapid7, an MDR service provider, we discussed how cybersecurity is evolving and what the trends are for 2023, specifically the common mistakes that make companies an easy target. You’ll want to avoid these pitfalls: 

Thinking that nobody cares about our company or data 

There is a perception that cybercriminals only target major multinational companies with large customer databases of sensitive information worth exploiting. This may have been the case several years back, but no longer. These companies have invested heavily in security, making it harder for criminals to break into their systems, so attackers are turning to easier targets: small and medium-sized businesses.

Typically, smaller companies don’t invest as heavily in security or monitor their systems as diligently, yet most are connected to the internet in some way. This makes them relatively easy targets. Even smaller businesses have to protect their reputation and their customer data, and criminals know and exploit this. Unfortunately, without adequate protections in place, most small to medium-sized businesses don’t survive a targeted cyberattack. Security should be viewed as a necessity for business continuity rather than an additional expense. If company systems are exposed to the internet, they’re vulnerable, and it takes a strategic effort and investment to make them more secure.

Not utilizing Multi-Factor Authentication (MFA) 

A major trend emerging from 2022 was that almost 40% of high-severity breaches were the result of not implementing MFA on public-facing surfaces. Attackers got into systems with relative ease and were able to do a fair amount of damage in a short period of time. While many employees may feel that MFA is an annoyance, in business terms it has become essential. It’s a simple, no-cost way of making it harder for attackers to access and navigate through systems, and its value cannot be overstated. In fact, most insurance companies now include MFA as a requirement for obtaining coverage.

Not securing connected devices 

Exchange servers, gateways, firewalls, and any endpoint that touches the internet could become an access point for an attacker if it is not properly secured. These are some of the areas that threat actors commonly go after to get into company systems and account for approximately 25% of attacks. Companies need to be diligent in keeping these access points patched and monitoring them for any unusual activity.  

Compromised identities 

Another major trend is attackers using stolen credentials to gain access to a company system. These are often obtained through phishing emails or by compromising an employee’s social media account. In addition, many brokers on the dark web do a brisk business selling compromised but authenticated identities. These are often the identities of past employees, and without robust authentication and monitoring services in place, they can go undetected. The risk of compromised identities is another reason to implement MFA. If an identity is compromised but MFA is in place, it is much harder for attackers to use that identity to progress within company systems.

Inadequate defense mechanisms 

As much as companies are proactive about security, the reality is that attack methods are constantly evolving and it’s not always possible to keep ahead of and block every vulnerability. This is why it’s critical when a threat is identified, to have partners, systems, and policies in place to be able to isolate and quickly shut down the attack to minimize the damage.  

The challenge is that this is a complex task requiring specific expertise and the capacity to work with great urgency. Where the attack originated, how attackers gained access, what they did, and how it impacted the business all form part of how the threat is resolved. Most small to medium-sized businesses can’t afford to employ this level of expertise full-time, especially as threats become increasingly complex. This is why it often makes sense to partner with Endpoint Detection and Response (EDR) and security specialists as part of a managed solution. Because they work with a number of clients, these specialists have greater insight into how best to counter attacks and can often move more swiftly to mitigate the damage.

But even in that, there is a challenge. There are so many different security services available and it can be difficult to identify which ones are applicable to a specific business. There is no one-size-fits-all solution. When investigating options, it’s important to understand where the services start and end. For example, a managed detection and response service likely won’t be running system and patch updates, but they would be able to identify and work to resolve threats.  

Because of these complexities, another emerging trend is that many insurance companies are recommending companies outsource their security to partners who are specialists. Their collective exposure to threats makes them better positioned to be able to identify possible threats and remediate them. They can also then use this information to identify what gaps exist in terms of threats and what steps need to be taken to put the right security in place to reduce the risks.  

Cybersecurity constantly evolves, as these trends indicate, and requires an agile approach. Companies should continue to be proactive about security, partnering with industry specialists and keeping abreast of threats and vulnerabilities.  


Are you concerned about keeping your online account, personal information, and business accounts secure? Check out this infographic on password security. Our team of experts has shared a visual guide that provides valuable tips and tricks on how to create strong and unique passwords, and how to store and manage them securely. With cyber attacks becoming more sophisticated each day, it's crucial to take proactive measures to protect yourself and your sensitive information. Let our team of experts help secure your business today with a password assessment!

There are few things more frustrating in business than systems that don’t work as efficiently as they should. With the complexity of modern IT infrastructure, which includes a hybrid workplace, identifying whether the problem lies with software or hardware such as network switches, servers or data centers can be hard without a structured approach.  

This is why a regular infrastructure refresh makes sense. It gives businesses an opportunity to conduct an audit and identify whether the current infrastructure is performing well and will continue to do so as the business scales. Security is another reason. Often, it’s vulnerabilities in outdated software or hardware that threat actors exploit to gain access to data centers. Ensuring systems are up to date goes a long way toward improving a company’s security posture.


Benefits of an Infrastructure Refresh

It’s true that cybercrime is on the increase and threat actors are becoming increasingly creative in finding ways around security efforts. While keeping ahead of security threats is a good enough reason to regularly refresh infrastructure, the benefits extend beyond that. Having the flexibility to easily expand infrastructure when needed is a major business advantage. It means that companies can take advantage of a growth opportunity with less concern about whether their systems will be up to the task of scaling when new customers come on board. 

There is a wide range of technologies, including AI applications, geared toward helping organizations become more productive. Ensuring employees have the technologies they need and that systems are properly integrated is vital if the benefits of these productivity tools are to be realized. Conducting a refresh can surface conflicts between new technologies and older systems and help identify where bottlenecks occur or where employees have trouble accessing necessary data.

Productivity is often linked to cost-saving targets and by doing an infrastructure refresh it may be possible to improve both. While in the short term, upgrading systems may be an expense, the longer-term cost savings gained by increased efficiency, improved security, and the ability to scale make the investment worthwhile.  

Apps have become integral to modern-day business and rely on efficient data centers to run smoothly. This is especially true for collaboration and time-sensitive apps when it comes to maintaining the level of productivity they’re designed for. A refresh can align the capacity of a data center to ensure it’s able to support the functions of collaboration apps and maximize efficiency.

What’s Involved in an Infrastructure Refresh? 

SynerComm’s approach is a three-step process done in partnership with Juniper Networks. As an industry leader, Juniper Networks has a broad range of products and services focused on helping organizations improve efficiency and security, while also achieving measurable cost savings. This makes them an ideal partner for SynerComm when conducting an infrastructure refresh.

Day Zero – Design and Planning

Understanding an organization’s infrastructure needs requires network engineers to consult with relevant company stakeholders. The purpose is primarily to gain user feedback on what’s working or where inefficiencies are experienced. This is followed by a thorough analysis of all hardware and software within the network to establish where shortcomings might exist. The focus is not only ensuring that the network infrastructure meets the company’s current needs but also that it has the flexibility built in to scale or adapt to changing business needs. Careful planning helps to eliminate disruptions that could result from discovering that modifications needed can’t be made to existing systems.

Day 1 – Hardware and Software Implementation 

A benefit of working with SynerComm is that if the audit identifies that new hardware or software is required, there is a broad range of products and services available that can be implemented, including cloud-based services. Working with a team of experienced network engineers ensures that the infrastructure will support greater levels of productivity and security. An asset management portal such as SynerComm’s AssetIT portal can also be leveraged to assist with asset management and ongoing maintenance.

Day 2 – Support and Maintenance 

Following implementation, an expert team remains available to help troubleshoot any problems that may arise and to make sure the new infrastructure is performing according to expectations.

Infrastructure as the Backbone of Business 

Technology continues to evolve at a rapid pace. Ensuring that data systems and infrastructure are up to the task of supporting business functions requires a critical look at performance regularly. This includes keeping an eye out for alternative options that could provide more reliable infrastructure. Adding new technologies and apps without upgrading infrastructure can negatively impact efficiency and user experiences, both from an employee and customer perspective.  

Conducting an infrastructure refresh can help identify which hardware and software are outdated and therefore impacting workflows and security. While audits should be conducted on whole networks, typically the primary areas of focus are storage systems, servers, networking equipment, and cybersecurity systems and solutions as these have the biggest impact on daily business operations.  

Most importantly, refreshing infrastructure regularly helps improve security by proactively identifying vulnerabilities typically found in aging hardware and software. Up-to-date infrastructure helps companies retain a stronger security posture while supporting daily operations more efficiently. It’s a worthwhile exercise, best supported by a team of expert network engineers.

For companies that want to keep ahead of IT issues to ensure that they remain agile, efficient, and more secure, regularly implementing an infrastructure refresh is an important consideration. The added benefit of this approach is that when new technologies or apps are to be implemented, the company already has up-to-date knowledge of the existing infrastructure. If it has been designed with the ability to scale, it becomes significantly easier to implement the new apps or technologies. This enables companies to move faster and reap the benefits of the new technologies ahead of competitors who may need to establish compatibility with existing infrastructure.  

Ready to upgrade your infrastructure? Contact SynerComm today to refresh your systems and improve your business's efficiency, security, and scalability. Don't let outdated infrastructure hold your business back any longer - take the first step towards a more productive and secure future by scheduling your infrastructure refresh today.

Many companies host their systems and services in the cloud believing it’s more efficient to build and operate at scale. And while this may be true, the primary concern of security teams is whether applications are being built and systems managed with security in mind.

The cloud does easily enable the use of new technologies and services, as it is programmable and API-driven. But it differs from a traditional data center in both size and complexity, and it uses entirely different technologies. This is why Cloud Security Posture Management should be a priority for any business operating primarily in the cloud.

Common Cloud Security Mistakes 

There are several aspects of cloud security that are often overlooked and that can lead to vulnerabilities.

How does CSPM help to improve security? 

Cloud Security Posture Management (CSPM) analyzes the cloud infrastructure, including configurations, management, and workloads, and monitors for potential issues with script configurations, build processes, and overall cloud management. Specifically, it helps address the following security issues: 

1. Identify Misconfigurations 

CSPM helps to identify misconfigurations that go against compliance. For example: If the company has a policy that says you shouldn’t have an open S3 bucket, but an administrator configures an S3 bucket without the correct security in place, CSPM can identify and alert that this vulnerability exists.  
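A minimal sketch of such a check might look like the following. The configuration dict mirrors the shape of the four flags AWS returns for an S3 public-access block; in a real CSPM tool it would be fetched through the cloud provider's API rather than supplied inline:

```python
# Policy: no open buckets. All four public-access-block flags must be enabled.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def find_violations(bucket_name, public_access_block):
    """Return the policy flags that are missing or disabled for a bucket."""
    return [flag for flag in REQUIRED_FLAGS
            if not public_access_block.get(flag, False)]

# Illustrative configuration for a hypothetical bucket.
config = {"BlockPublicAcls": True, "IgnorePublicAcls": False,
          "BlockPublicPolicy": True, "RestrictPublicBuckets": False}
violations = find_violations("example-bucket", config)
if violations:
    print(f"ALERT: example-bucket violates policy: {violations}")
```

A monitoring CSPM would run checks like this continuously across every account and alert (or remediate) as soon as a violation appears.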

2. Remediate Violations 

If the CSPM is set up to monitor and protect, it can not only identify misconfigurations but also roll them back to shut down the vulnerability. In the process, it creates an active log showing the root cause of the non-compliance and how it was remediated.

3. Compare to Industry Standards 

Knowing what’s happening in the broader industry helps to identify vulnerabilities and alert on changes that need to be made. This helps with compliance and also ensures that security teams don’t overlook vulnerabilities because they aren’t aware of them.  

4. Continuous Monitoring 

Conducting scans and audits to ensure compliance are good practices, but the reality is that security in the cloud is constantly evolving. No company can ever be sure that they’re 100% safe from a breach just because they’ve completed an audit. Continuous monitoring is necessary to try to keep ahead of threats and ensure that you’re able to quickly identify any vulnerabilities.  

CSPM at Work

One of the common uses of CSPM is to be able to identify a lack of encryption at rest or in transit. Often HTTP is set as a default and this doesn’t get updated when it should. If this isn’t identified it can create a major problem further down the line.  

In the cloud, improper key management can create vulnerabilities. One way to mitigate this is to rotate keys regularly so that a leaked key has limited value; CSPM tools can also automatically take compromised keys out of rotation.
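A toy sketch of age-based key rotation, with illustrative key IDs and dates, might look like this:

```python
from datetime import date, timedelta

# Hypothetical policy: retire any key older than 90 days so a leaked key
# has a limited window of usefulness.
MAX_KEY_AGE = timedelta(days=90)

# Illustrative inventory: key ID -> date it was issued.
keys = {
    "key-2023-old": date(2023, 1, 15),
    "key-current": date(2024, 5, 1),
}

def keys_to_retire(keys, today):
    """Return the IDs of keys that have exceeded the allowed age."""
    return [kid for kid, issued in keys.items() if today - issued > MAX_KEY_AGE]

print(keys_to_retire(keys, today=date(2024, 6, 1)))
```

In practice, the key inventory would come from the cloud provider's key-management service, and retirement would trigger re-issuance rather than just a report.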

Companies frequently ask for an audit of all account permissions and this often identifies that some users have permissions and access that they shouldn’t. This can be an oversight when roles are assigned or for example when a developer asks for access to a specific project but those permissions are never pulled back once the project has been completed.  

Ensuring that MFA is activated on critical accounts is important and CSPM can run an audit to ensure that security protocols such as MFA are being implemented. The same applies to misconfigurations and data storage that is exposed to the internet. Having a way to continually monitor and dig into what is happening in cloud systems and alert on non-compliance can significantly improve a company’s security posture.  
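Such an MFA audit can be reduced to a simple filter over account records; in a real tool the records would come from the identity provider's API, and the field names here are illustrative:

```python
# Hypothetical account records for an MFA compliance audit.
users = [
    {"name": "alice", "mfa_enabled": True,  "critical": True},
    {"name": "bob",   "mfa_enabled": False, "critical": True},
    {"name": "carol", "mfa_enabled": False, "critical": False},
]

def mfa_violations(users, critical_only=True):
    """Return the names of accounts missing MFA, optionally only critical accounts."""
    return [u["name"] for u in users
            if not u["mfa_enabled"] and (u["critical"] or not critical_only)]

print(mfa_violations(users))                       # critical accounts without MFA
print(mfa_violations(users, critical_only=False))  # all accounts without MFA
```

The same filter pattern extends naturally to the other checks described above, such as flagging storage exposed to the internet.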

Advanced CSPM tools go beyond this by showing how an incident was detected, where it was identified, and how to fix it, along with an explanation of why it should be fixed.

There are multiple vendors offering a range of services, and it’s wise not to tie all your systems to a single vendor. If that vendor has unknown vulnerabilities, they can impact your company's security. With multiple vendors monitoring, such issues are more likely to be caught, which reduces your risk exposure.

To hear a more detailed discussion on the topic of CSPM, tune into the podcast with Aaron Howell, a managing consultant of the MAI team with over 15 years of IT security focus. Link: https://www.youtube.com/watch?v=9XNdB4zDMjg 

The most recent quarterly threat report, issued by Expel at the end of 2022, revealed some interesting trends in cyberattacks. It highlights how attack methodologies are constantly changing and is a reminder never to be complacent.

Security efforts require more than putting policies, systems, and software in place. As detection and defence capabilities ramp up against one form of attack, cyber criminals divert to other attack paths, and defence efforts need to adapt. When they don't, it becomes all too easy for attackers to find and exploit vulnerabilities.

The Expel Threat Report indicates that attackers have shifted away from targeting endpoints with malware. Instead, they’re focusing on identities and using phishing emails and other methods to compromise employee credentials.  

Currently, targeting identities accounts for 60% of attacks. Once attackers have a compromised identity, they use it to break into other company systems, such as payroll, with the ultimate goal of getting onto that payroll and extracting money from the company.

Getting onto company systems via a compromised email account is proving to be a very viable attack path. With a compromised identity, attackers will often sit and observe what access a user has to company systems and how they might exploit it. They're patient and determined.

Can MFA help or do vulnerabilities remain? 

One of the ways in which companies try to improve email security is to implement a multi-factor authentication (MFA) policy. This can be very effective in reducing the risk of attack. However, recent trends indicate that attackers are now leveraging MFA fatigue to gain access to emails and employee identities. They do this by relentlessly and repeatedly sending push notifications requesting authorisation, until the employee is so fatigued that they grant the request.

Once the attacker has access, they can easily navigate various company systems because their identity is seen as valid and verified, having passed MFA. This type of attack can be discovered by monitoring for high volumes of push notifications. Some MFA providers have a way to report this and are working on solutions to address it, but in the meantime employees need to be trained to be wary of multiple requests and to report them rather than assume it's a system error and simply approve them. When an MFA bypass is discovered, the remedy is to shut that identity down, isolate the account, and then investigate further what access may have been gained and what vulnerabilities exist as a result.
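The "high volume of push notifications" signal can be sketched as a sliding-window counter per user. The threshold, window, and event format below are assumptions for the example, not any MFA vendor's API.

```python
# Sketch of detecting a possible MFA fatigue attack by counting push
# requests per user inside a short sliding window. The threshold,
# window, and event format are assumptions, not any vendor's API.
from collections import defaultdict
from datetime import datetime, timedelta

def suspected_mfa_fatigue(events, window=timedelta(minutes=10), threshold=5):
    """events: (user, timestamp) push requests, sorted by timestamp."""
    recent = defaultdict(list)
    flagged = set()
    for user, ts in events:
        times = recent[user]
        times.append(ts)
        while ts - times[0] > window:  # drop events outside the window
            times.pop(0)
        if len(times) >= threshold:
            flagged.add(user)
    return flagged

base = datetime(2023, 1, 1, 9, 0)
events = sorted(
    [("alice", base + timedelta(minutes=i)) for i in range(6)] + [("bob", base)],
    key=lambda e: e[1],
)
print(suspected_mfa_fatigue(events))  # only alice exceeds the threshold
```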

Does monitoring IP addresses help?  

A past approach to monitoring for atypical authentication activity was to consider what IP address the request originated from. It used to be a relatively easy security approach to flag, and even block, activity from IP addresses originating in certain countries known for cybercriminal activity. It was a good approach in theory, but like MFA it's become very easy to bypass using legitimate tools such as a VPN. A VPN can present a legitimate US-based IP address that won't get flagged as one to watch. This highlights how conditional access based on something like the geolocation of an IP address isn't enough.

There's a whole underground of brokers eager to sell off compromised credentials and identities. Combined with a local IP address, this makes it easier for attackers to bypass basic alerts. This is why security remains a complex task that requires a multi-pronged defence approach.

What’s the minimum needed for better security? 

Cybersecurity insurance guidelines are often used to identify the minimum requirements for security systems and policies. Currently this includes recommendations such as Endpoint Detection and Response (EDR) or Managed Detection and Response (MDR), combined with MFA policies and regular employee security training. Ultimately the goal is for whatever tools and systems are in place to be generating value for the company.

Being able to monitor systems, check activity logs, gain visibility into endpoints, and check accessibility (that is, knowing what's happening in terms of authentication) is really valuable. For companies that have cloud-based systems, it's important to be able to see cloud trails and activity surrounding new email accounts or API requests.
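Watching an audit trail for sensitive events is, at its core, a filter over log records. A hypothetical sketch, with event names loosely modelled on CloudTrail-style records but assumed for this example:

```python
# Hypothetical sketch of watching an audit log for sensitive events such
# as new access keys or new user accounts. The event names are loosely
# modelled on CloudTrail-style records but are assumptions here.

WATCHED_EVENTS = {"CreateAccessKey", "CreateUser", "CreateLoginProfile"}

def sensitive_events(log):
    """Return log entries worth alerting on."""
    return [e for e in log if e.get("eventName") in WATCHED_EVENTS]

log = [
    {"eventName": "CreateAccessKey", "user": "svc-backup"},
    {"eventName": "ListBuckets", "user": "analyst"},
]
print(sensitive_events(log))
```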

There is no one-size-fits-all security solution, and most companies will continue to make use of multiple security tools, products, and services. They have to, because regardless of whether they operate in the cloud or with more traditional servers, attackers are continually adapting, looking for ways to get around whatever security a company has in place.

A newer approach is to have a managed vulnerability service that can alert companies to the changing attack paths being used to gain authentication. This can help companies identify where they may be vulnerable and what they can do to beef up security in that area of the business.

Ultimately, it's about closing the window of opportunity for attackers and making it harder for them to access systems or gain authentication. It requires agility and constant learning, keeping up to date with what could be exploited as a vulnerability.

If you’d like to hear more on this topic, you can listen to a recent IT Trendsetters podcast: https://www.youtube.com/watch?v=1QXk_zcSfuc which discusses the different approaches to flagging atypical authentication requests and how to deal with them.  

Active Directory Certificate Services (AD CS) is a key element of encryption services on Windows domains. It enables file, email, and network traffic encryption and provides organizations with the ability to build a public key infrastructure (PKI) to support digital signatures and certificates. Unfortunately, as much as AD CS is designed to improve security, it is all too easy to circumvent, resulting in vulnerabilities that cyber criminals readily exploit.

A SpecterOps research report released in April 2021 identified that this could take place through eight common misconfigurations, leading to privilege escalation on an active domain. Abuse of privileges was another common path. Further research resulted in a CVE (CVE-2022-26923, dubbed Certifried) which highlighted how an affected default template created by Microsoft could lead to privilege elevation. As this is a relatively new attack surface, research is still ongoing.

Common categories of attack on AD CS

To date, three main categories have been identified as common paths of attack.

The first of these is misconfigured templates, which then become exploitable. A misconfigured template can create a direct path to privilege elevation due to insecure settings. Through user or group membership, it may also be possible to escalate privileges, which would be viewed as abuse of privileges. Changes made to a default template can also result in it being less secure.

The second common category of attack relates to web enrolment and attack chains. Most often these are implemented using NTLM relays, which enable attackers to position themselves between clients and servers to relay authentication requests. Once these requests are validated, the attacker gains access to network services. An attack method that highlighted this vulnerability was PetitPotam, which results in the server believing the attacker is a legitimate, authenticated user because they have been able to hijack the certification process.

A third way attackers gain access to a server is through exploitable default templates. An example of this is Certifried (CVE-2022-26923), which led to further research into a template that was inherently vulnerable. In this case, domain computers can enrol in a machine template. Because any domain user can create a domain computer, any domain user can then elevate privileges.

What forms of remediation are possible?

Misconfiguration usually relates to SSL certificates valid for more than one host, system, or domain. This happens when a template allows an enrolee to supply a Subject Alternative Name (SAN) for any machine or user. Therefore, if you see a SAN warning pop up, think twice before enabling it. If certificates are also being used for authentication, this can create a vulnerability that gives an attacker validated access to multiple systems or domains.
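The dangerous combination here is a template that both lets the enrolee supply a SAN and can be used for authentication. An illustrative check, with a hypothetical template representation (in a real domain you would read these flags from the template objects in Active Directory):

```python
# Illustrative check for the misconfiguration described above: a
# template that both lets the enrolee supply a SAN and can be used
# for client authentication. The template representation is hypothetical.

def risky_templates(templates):
    return [
        t["name"] for t in templates
        if t.get("enrollee_supplies_san") and t.get("client_auth")
    ]

templates = [
    {"name": "WebServer", "enrollee_supplies_san": True, "client_auth": False},
    {"name": "VPNUser", "enrollee_supplies_san": True, "client_auth": True},
]
print(risky_templates(templates))  # only VPNUser combines both risky settings
```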

Often the first step toward remediation is a careful review of privileges and settings. For example, to reduce the risk of attacks through web enrolment, it's possible to turn web enrolment off entirely. If you still need web enrolment, you can enable Extended Protection for Authentication (EPA) on web enrolment, which blocks NTLM relay attacks on Integrated Windows Authentication (IWA). As part of this process, be sure to disable HTTP on web enrolment and instead use an IIS configuration that enforces HTTPS (TLS).

For coercion vulnerabilities, it's best to install available patches. As these threats evolve, new patches will become available, so it's important to keep up to date.

Certificate attacks use an entirely different attack vector and are often the result of administrator error, meaning that vulnerabilities are created when default templates are changed. Sometimes administrators are simply not aware of the security risks associated with the changes they make, but often it's a mistake made during a test deployment, or a custom template left enabled and forgotten after testing.

In an Escalation 4 (ESC4) type of attack, write privileges are obtained and then exploited to create vulnerabilities in templates. This can be done by disabling manager approvals or by increasing the validity period of a certificate, and it can even lead to persistent elevated access after a manager changes their password. If this is found to be the case during an incident response, the remediation is to revoke the certificate entirely. Other forms of remediation are to conduct a security audit and to implement the principle of least privilege. It's common for an AD CS administrator to have write privileges, but no one else should. Such privileges may have been activated during testing and never removed, but any other user or group that has write privileges should be fully investigated.
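The least-privilege audit described above amounts to comparing a template's access list against a short approved list. A sketch, with a simplified, assumed ACL representation (real template ACLs live in Active Directory security descriptors):

```python
# Sketch of the least-privilege audit described above: list principals
# with write rights on a certificate template who are not on an approved
# list. The ACL representation is a simplified assumption.

APPROVED_WRITERS = {"ADCS-Admins"}

def unexpected_writers(template_acl):
    """Return principals whose write access should be investigated."""
    return [
        entry["principal"] for entry in template_acl
        if "write" in entry.get("rights", ()) and entry["principal"] not in APPROVED_WRITERS
    ]

acl = [
    {"principal": "ADCS-Admins", "rights": ["write", "enroll"]},
    {"principal": "Domain Users", "rights": ["write"]},
]
print(unexpected_writers(acl))
```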

As attack methods continue to evolve so will the means to investigate and remediate for them. Becoming more familiar with how to secure PKI and what common vulnerabilities are exploited can help you know what to look out for when setting up and maintaining user privileges.

It's estimated that in 2022 there were more than 23 billion connected devices around the world. In the next two years this number is likely to reach 50 billion, which is cause for concern. With so many devices linking systems, there will be more vulnerabilities and more risks for businesses.

There's absolutely no doubt that cybersecurity is essential for every business, and most are confident they have adequate defences in place. But with ever-changing threats, how do you know if they're sufficient? Especially with the increase in the number of connections and the very real risk that many assets aren't known or visible.

Why visibility matters

Cybersecurity is about protecting business assets to maintain the ability to operate effectively. But without knowing what technology assets a business has, how they're connected, and what their purpose is, it's difficult to manage and secure them. More critically, it's impossible to make good decisions about cybersecurity or business operations.

When talking about assets, this goes beyond computers or network routers in an office. It could be a sensor on a solar array linked to an inverter that powers a commercial building. In the medical field, it could be a scanner or an infusion pump in a hospital. Understanding what version of operating system (OS) a medical device runs is as important as knowing what software the accounting system runs on. A very old version may no longer be supported, and given how connected systems are, this could lead to vulnerabilities.

As an example: a medical infusion device was hooked up to a patient in a hospital. In the middle of the treatment, it was observed that the device had malware on it. Normally the response would be to shut an infected device down and quarantine it, but in this particular medical context that wasn't possible because it could have affected the well-being of the patient. Instead, it required a different approach. Nursing staff were sent to sit with the patient and monitor them to make sure the malware didn't affect the treatment they were receiving. Plans were then put in place to isolate the device as soon as the treatment was completed and send it in for remediation. This highlights why context is so important.

Understanding what assets form part of a business also requires understanding their context at a deeper level. Where are the assets located? What role do they perform? How critical are they to business operations and continuity? What's the risk if they become compromised? And how do you remediate any vulnerabilities that are found?

Has work-from-home increased system risks?

At the start of the pandemic, the priority for many businesses was continuity: finding ways to enable employees to work from home and stay connected to all the systems they needed. It fundamentally changed the way of working, especially as many businesses continue to embrace work-from-home and hybrid flexible working models. Employees have access to databases and SaaS systems, and they're interacting with colleagues in locations across the globe. It's all been made possible by the ability to connect anywhere in the world, but it's not without risks. Now, post-pandemic, many of the vulnerabilities are starting to come to the fore, and businesses aren't always sure how to manage them.

In terms of assets, this has accelerated the erosion of the network perimeter, because it has allowed other assets to be connected to the same networks that have access to corporate systems. By creating an access point for users, it has opened up connectivity to supposedly secure business operating systems through other devices that have been plugged in. Worse, most businesses don't have any visibility into what those connected devices are. Without a way to scan an entire system to see what's connected, where it's connected, and why it's connected, a business is left vulnerable. These vulnerabilities are likely to increase as more and more devices become connected in the global workplace.

What are the critical considerations for business enterprises moving forward?

Currently there is too much noise on systems, and this is only going to get worse as connectivity increases. Businesses need to find ways to correlate and rationalize the data they're working with to make it more workable and actionable. This will help provide context and allow businesses to focus on the things that make the most impact, such as continuity of business operations and resilience.

An example is being able to examine many different factors about an asset to generate a risk score for that particular asset. This includes non-IT assets that typically aren't scanned because there isn't an awareness that they exist. The ability to passively scan for vulnerabilities across all assets enables businesses to know what they're working with. It gives teams the opportunity to focus on the critical areas of the business and its supporting assets, both primary and secondary. Having the right context enables people to make better decisions on where to prioritize their efforts and resources. This ability to focus is going to become even more critical as the volume of assets and connections increases globally, and the risks and vulnerabilities alongside them.
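A simple way to think about such a risk score is as a weighted sum over risk factors present on the asset. The factor names and weights below are invented for illustration only; real products use far richer models.

```python
# Hedged sketch of combining asset context into a simple risk score.
# The factor names and weights are invented for illustration only.

WEIGHTS = {
    "unsupported_os": 3,    # OS no longer receives patches
    "internet_exposed": 2,  # reachable from outside the network
    "business_critical": 2, # operations stop if it fails
    "unpatched": 3,         # known vulnerabilities outstanding
}

def risk_score(asset):
    """Sum the weights of every risk factor present on the asset."""
    return sum(w for factor, w in WEIGHTS.items() if asset.get(factor))

asset = {"name": "infusion-pump-7", "unsupported_os": True, "business_critical": True}
print(risk_score(asset))  # 3 + 2 = 5
```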

To learn more about getting a handle on business deployments, listen to a recent SynerComm IT Trendsetters podcast with Armis, in which they discuss the topic in more detail. Alternatively, you can reach out to SynerComm.
