Many companies host their systems and services in the cloud, believing it’s more efficient to build and operate at scale. And while this may be true, the primary concern for security teams is whether applications are being built and systems managed with security in mind.

The cloud does make it easy to adopt new technologies and services because it is programmable and API driven. But it differs from a traditional data center in both size and complexity, and it relies on entirely different technologies. This is why Cloud Security Posture Management should be a priority for any business operating primarily in the cloud.

Common Cloud Security Mistakes 

There are several aspects of cloud security that are often overlooked and that can lead to vulnerabilities.

How does CSPM help to improve security? 

Cloud Security Posture Management (CSPM) analyzes the cloud infrastructure, including configurations, management and workloads, and monitors for potential issues with the configuration of scripts, build processes and overall cloud management. Specifically, it helps address the following security issues:

  1. Identify Misconfigurations 

CSPM helps to identify misconfigurations that violate compliance requirements. For example: if the company has a policy that says you shouldn’t have an open S3 bucket, but an administrator configures an S3 bucket without the correct security in place, CSPM can identify the vulnerability and alert on it (see the sketch after this list).

  2. Remediate violations

If the CSPM is set up to monitor and protect, it can not only identify misconfigurations but also roll them back to close the vulnerability. In the process it creates a log showing the root cause of the non-compliance and how it was remediated.

  3. Compare to industry standards

Knowing what’s happening in the broader industry helps to identify vulnerabilities and alert on changes that need to be made. This helps with compliance and also ensures that security teams don’t overlook vulnerabilities simply because they aren’t aware of them.

  4. Continuous Monitoring

Conducting scans and audits to ensure compliance is good practice, but the reality is that security in the cloud is constantly evolving. No company can ever be sure that it’s 100% safe from a breach just because it has completed an audit. Continuous monitoring is necessary to stay ahead of threats and ensure that you’re able to quickly identify any vulnerabilities.
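As a simple illustration of the kind of check a CSPM tool automates, here is a minimal sketch (assuming the AWS SDK for Python, boto3, and suitably scoped read-only credentials) that flags S3 buckets whose public access block is missing or incomplete. It is not a full CSPM policy engine, just the shape of the open-bucket check described above.

```python
# Minimal sketch: flag S3 buckets whose public access block is missing or incomplete.
# Assumes boto3 is installed and AWS credentials with read-only S3 permissions are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_with_possible_public_access():
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                findings.append(name)  # at least one public-access setting is disabled
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append(name)  # no public access block configured at all
            else:
                raise
    return findings

if __name__ == "__main__":
    for name in buckets_with_possible_public_access():
        print(f"Review bucket: {name}")
```

In practice, a CSPM platform runs many checks like this continuously and routes the findings into alerting and remediation workflows.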

CSPM at work 

One of the common uses of CSPM is identifying a lack of encryption at rest or in transit. Often HTTP is set as the default and never gets updated when it should. If this isn’t identified, it can create major problems further down the line.

In the cloud, improper key management can create vulnerabilities. One way to mitigate this is to rotate keys regularly so that a leaked key has limited value; CSPM also provides the capability to automatically take compromised keys out of rotation.
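Along the same lines, a posture check might verify that automatic rotation is actually enabled for customer-managed KMS keys. This is a hedged sketch assuming boto3 and read-only KMS permissions; key types that don’t support rotation are simply skipped.

```python
# Minimal sketch: report customer-managed KMS keys without automatic rotation enabled.
# Assumes boto3 is installed and credentials with kms:ListKeys, kms:DescribeKey and
# kms:GetKeyRotationStatus permissions are configured.
import boto3
from botocore.exceptions import ClientError

kms = boto3.client("kms")

def keys_without_rotation():
    findings = []
    for page in kms.get_paginator("list_keys").paginate():
        for key in page["Keys"]:
            key_id = key["KeyId"]
            meta = kms.describe_key(KeyId=key_id)["KeyMetadata"]
            if meta.get("KeyManager") != "CUSTOMER":
                continue  # AWS-managed keys rotate on AWS's own schedule
            try:
                if not kms.get_key_rotation_status(KeyId=key_id).get("KeyRotationEnabled"):
                    findings.append(key_id)
            except ClientError:
                continue  # some key types (e.g. asymmetric keys) don't support rotation
    return findings
```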

Companies frequently ask for an audit of all account permissions and this often identifies that some users have permissions and access that they shouldn’t. This can be an oversight when roles are assigned or for example when a developer asks for access for a specific project but those permissions are never pulled back once the project has been completed.  

Ensuring that MFA is activated on critical accounts is important and CSPM can run an audit to ensure that security protocols such as MFA are being implemented. The same applies to misconfigurations and data storage that is exposed to the internet. Having a way to continually monitor and dig into what is happening in cloud systems and alert on non-compliance can significantly improve a company’s security posture.  
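A sketch of the MFA audit described above, assuming boto3 and read-only IAM permissions: it lists IAM users that have a console password but no MFA device registered. It’s illustrative only; a full audit would also cover the root account and federated identities.

```python
# Minimal sketch: list IAM users that can sign in to the console but have no MFA device.
# Assumes boto3 is installed and credentials with iam:ListUsers, iam:GetLoginProfile and
# iam:ListMFADevices permissions are configured.
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

def console_users_without_mfa():
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            try:
                iam.get_login_profile(UserName=name)  # raises if the user has no console password
            except ClientError as err:
                if err.response["Error"]["Code"] == "NoSuchEntity":
                    continue
                raise
            if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
                findings.append(name)
    return findings
```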

Advanced CSPM tools go beyond this by showing how an incident was detected, where it was identified, and how to fix it, along with an explanation of why it should be fixed.

There are multiple vendors offering a range of services, and it’s good to avoid tying all systems to a single vendor; if that vendor has unknown vulnerabilities, they can impact your company’s security. With multiple vendors monitoring, such issues are more likely to be picked up, which reduces risk exposure.

To hear a more detailed discussion on the topic of CSPM, tune into the podcast with Aaron Howell, a managing consultant of the MAI team with over 15 years of IT security focus. Link: https://www.youtube.com/watch?v=9XNdB4zDMjg 

The most recent quarterly threat report issued by Expel at the end of 2022 revealed some interesting trends in cyberattacks. It highlights how attack methodologies are constantly changing and is a reminder to never be complacent.

Security efforts require more than putting policies, systems and software in place. As detection and defense capabilities ramp up against one form of attack, cybercriminals divert to other attack paths and defense efforts need to adapt. When they don’t, it becomes all too easy for attackers to find and exploit vulnerabilities.

The Expel Threat Report indicates that attackers have shifted away from targeting endpoints with malware. Instead, they’re focusing on identities and using phishing emails and other methods to compromise employee credentials.  

Currently, targeting identities accounts for 60% of attacks. Once attackers have a compromised identity, they use it to break into other company systems, such as payroll, with the ultimate goal of getting onto that payroll and extracting money from the company.

Being able to get onto company systems via a compromised email account is proving to be a very viable attack path. With a compromised identity, attackers will often sit and observe what access a user has into company systems and how they might exploit this. They’re patient and determined. 

Can MFA help or do vulnerabilities remain? 

One of the ways in which companies try to improve email security is to implement a multi-factor authentication (MFA) policy. This can be very effective in reducing the risk of attack. However, recent trends indicate that attackers are now leveraging MFA fatigue to gain access to emails and employee identities. They do this by relentlessly and repeatedly sending push notifications requesting authorization, until the user is so fatigued that they grant the request.

Once the attacker has access, they can easily navigate various company systems because their identity is seen as valid and verified, having passed MFA. This can be discovered by monitoring for high volumes of push notifications. Some MFA providers have a way to report this and are working on solutions to address this type of attack. But in the meantime, employees need to be trained to be wary of multiple requests and to report them rather than assume it’s a system error and simply approve them. When an MFA bypass is discovered, the remedy is to shut that identity down, isolate the account, and then investigate further what access may have been gained and what vulnerabilities exist as a result.
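One way to monitor for the high volumes of push notifications mentioned above is a simple sliding-window count per user. The sketch below is illustrative only: the event format and the threshold are assumptions, and in practice the events would come from your MFA provider’s logs or API.

```python
# Minimal sketch: flag users who receive an unusually high number of MFA push prompts
# in a short window, a common signal of MFA-fatigue attacks. The event format
# (user, time, result) and the threshold are illustrative assumptions.
from collections import deque
from datetime import timedelta

WINDOW = timedelta(minutes=10)
THRESHOLD = 5  # prompts within the window before we alert

def detect_push_fatigue(events):
    """events: iterable of dicts like {"user": "alice", "time": datetime, "result": "denied"}."""
    recent = {}      # user -> deque of recent prompt times
    alerts = set()
    for event in sorted(events, key=lambda e: e["time"]):
        window = recent.setdefault(event["user"], deque())
        window.append(event["time"])
        while window and event["time"] - window[0] > WINDOW:
            window.popleft()
        if len(window) >= THRESHOLD:
            alerts.add(event["user"])
    return alerts
```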

Does monitoring IP addresses help?  

A past approach to monitoring for atypical authentication activity was to take into consideration what IP address the request originated from. It used to be a relatively easy security measure to flag, and even block, activity from IP addresses originating in certain countries known for cybercriminal activity. It was a good approach in theory, but like MFA it’s become very easy to bypass using legitimate tools such as a VPN. A VPN will show a legitimate US-based IP address which won’t get flagged as one to watch. This highlights how conditional access based on something like the geolocation of an IP address isn’t enough.
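For context, the geolocation rule described here can be expressed in a few lines. The sketch assumes the geoip2 package and a local MaxMind GeoLite2 country database (the file path and allow-list are placeholders), and, as noted above, a VPN exit node in an allowed country will pass a check like this without raising a flag.

```python
# Minimal sketch of the geolocation rule described above: flag sign-ins from countries
# outside an allow-list. Assumes the geoip2 package and a local GeoLite2 country
# database; the database path and allow-list below are placeholders.
import geoip2.database

ALLOWED_COUNTRIES = {"US", "CA"}  # illustrative allow-list

def is_suspicious_sign_in(ip_address: str, db_path: str = "GeoLite2-Country.mmdb") -> bool:
    with geoip2.database.Reader(db_path) as reader:
        country = reader.country(ip_address).country.iso_code
    return country not in ALLOWED_COUNTRIES
```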

There’s a whole underground of brokers eager to sell off compromised credentials and identities. Combined with a local IP address, this makes it easier for attackers to bypass basic alerts. This is why security remains a complex task that requires a multi-pronged defense approach.

What’s the minimum needed for better security? 

Cybersecurity insurance guidelines are often used to identify the minimum requirements for security systems and policies. Currently this includes recommendations such as Endpoint Detection and Response (EDR) or Managed Detection and Response (MDR), combined with MFA policies and regular employee security training. Ultimately, the goal is for whatever tools and systems are in place to generate value for the company.

Being able to monitor systems, check activity logs, gain visibility into endpoints and check accessibility, i.e. knowing what’s happening in terms of authentication, is really valuable. For companies with cloud-based systems it’s important to be able to see cloud audit trails and activity surrounding new email accounts or API requests.

There is no one-size-fits-all security solution, and most companies will continue to make use of multiple security tools, products and services. They have to, because regardless of whether they operate in the cloud or with more traditional servers, attackers are continually adapting, looking for ways to get around whatever security a company has in place.

A new approach is to have a managed vulnerability service that can alert companies to the changing attack paths being used to gain authentication. This can help companies identify where they may be vulnerable and what they can do to beef up security in that area of the business.

Ultimately, it’s about closing the window of opportunity for attackers and making it harder for them to access systems or gain authentication. It requires agility and constant learning, keeping up to date with what could be exploited as a vulnerability.

If you’d like to hear more on this topic, you can listen to a recent IT Trendsetters podcast: https://www.youtube.com/watch?v=1QXk_zcSfuc which discusses the different approaches to flagging atypical authentication requests and how to deal with them.  

Active Directory Certificate Services (AD CS) is a key element of encryption services on Windows domains. It enables file, email and network traffic encryption and provides organizations with the ability to build a public key infrastructure (PKI) to support digital signatures and certificates. Unfortunately, as much as AD CS is designed to improve security, it is all too easy to circumvent, resulting in vulnerabilities that cybercriminals readily exploit.

A SpecterOps research report released in April 2021 identified that this could take place through eight common misconfigurations, leading to privilege escalation on an Active Directory domain. Abuse of privileges was another common path. Further research resulted in a CVE (CVE-2022-26923, dubbed Certifried) which highlighted how an affected default template created by Microsoft could lead to privilege elevation. As this is a relatively new attack surface, research is still ongoing.

Common categories of attack on AD CS

To date, three main categories have been identified as common paths of attack.

The first of these is misconfigured templates, which then become exploitable. A misconfigured template can create a direct path for privilege elevation due to insecure settings. Through user or group membership it may also be possible to escalate privileges, which would be viewed as abuse of privileges. Changes made to a default template could also result in it being less secure.

The second common category of attack relates to web enrollment and attack chains. Most often these are implemented using NTLM relays, which enable attackers to position themselves between clients and servers and relay authentication requests. Once these requests are validated, the attacker gains access to network services. An attack method that highlighted this vulnerability was PetitPotam, which results in the server believing the attacker is a legitimate, authenticated user because they’ve been able to hijack the certification process.

A third way attackers gain access to a server is through exploitable default templates. An example of this is Certifried (CVE-2022-26923), which led to further research into a template that was inherently vulnerable. In this case it’s possible for domain computers to enroll in a machine template. Because domain users can, by default, create a domain computer account, any domain user can then elevate privileges.

What forms of remediation are possible?

Misconfiguration usually relates to SSL certificates valid for more than one host, system or domain. This happens when a template allows an enrollee to supply a Subject Alternative Name (SAN) for any machine or user. Therefore, if you see a SAN warning pop up, think twice before enabling it. If certificates are also being used for authentication, this can create a vulnerability that gives an attacker validated access to multiple systems or domains.
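To make the misconfiguration concrete, the sketch below queries Active Directory for certificate templates where the enrollee can supply the subject or SAN, which is the setting described above (the 0x1 bit of msPKI-Certificate-Name-Flag). It assumes the ldap3 package; the server, credentials and configuration naming context are placeholders, and it should only be run against a domain you are authorized to audit.

```python
# Minimal sketch: find certificate templates where the enrollee can supply the subject/SAN.
# Server, credentials and the configuration naming context are placeholders.
from ldap3 import Server, Connection, NTLM, SUBTREE

CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT = 0x1  # bit in msPKI-Certificate-Name-Flag

def templates_allowing_supplied_san(dc_host, user, password, config_nc):
    base = f"CN=Certificate Templates,CN=Public Key Services,CN=Services,{config_nc}"
    conn = Connection(Server(dc_host, use_ssl=True), user=user, password=password,
                      authentication=NTLM, auto_bind=True)
    conn.search(base, "(objectClass=pKICertificateTemplate)", SUBTREE,
                attributes=["cn", "msPKI-Certificate-Name-Flag"])
    risky = []
    for entry in conn.entries:
        flags = int(entry["msPKI-Certificate-Name-Flag"].value or 0)
        if flags & CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT:
            risky.append(entry["cn"].value)
    conn.unbind()
    return risky

# Example call (placeholders):
# templates_allowing_supplied_san("dc01.example.com", "EXAMPLE\\auditor", "password",
#                                 "CN=Configuration,DC=example,DC=com")
```

Templates that set this flag and are also valid for client authentication are the ones that warrant the closest review.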

Often the first steps to remediation are a careful review of privileges and settings. For example: to reduce the risk of attacks through web enrollment, it’s possible to turn web enrollment off entirely. If you do still need web enrollment, it’s possible to enable Extended Protection for Authentication (EPA) on web enrollment, which blocks NTLM relay attacks on Integrated Windows Authentication (IWA). As part of this process, be sure to disable HTTP on web enrollment and instead use an IIS configuration that enforces HTTPS (TLS).

For coercion vulnerabilities it’s best to install available patches. As these threats evolve, new patches will become available, so it’s important to keep up to date.

Certificate attacks use an entirely different attack vector and are often the result of administrator error, meaning that vulnerabilities are created when default templates are changed. Sometimes administrators are simply not aware of the security risks associated with the changes that they make. But often it’s a mistake made during a test deployment, or a custom template created for testing is left enabled and forgotten.

In an Escalation 4 (ESC4) type of attack, write privileges are obtained. These privileges are then exploited to create vulnerabilities in templates, for example by disabling manager approvals or increasing the validity period of a certificate. It can even lead to persistent elevated access that survives a password change. If this is found to be the case during an incident response, the remediation is to revoke the certificate entirely. Other forms of remediation are to conduct a security audit and to implement the principle of least privilege. It’s common for an AD CS administrator to have write privileges, but not for others. They may have been granted during testing and never removed, but any other user or group that has write privileges should be fully investigated.

As attack methods continue to evolve so will the means to investigate and remediate for them. Becoming more familiar with how to secure PKI and what common vulnerabilities are exploited can help you know what to look out for when setting up and maintaining user privileges.

It’s estimated that in 2022 there were more than 23 billion connected devices around the world. In the next two years this number is likely to reach 50 billion, which is cause for concern. With so many devices linking systems, there will be more vulnerabilities and more risks for businesses.

There’s absolutely no doubt that cybersecurity is essential for every business. Most are confident that they have adequate defenses in place. But with ever-changing threats, how do you know if they’re sufficient, especially with the increase in the number of connections and the very real risk that many assets aren’t known or visible?

Why visibility matters

Cybersecurity is about protecting business assets to maintain the ability to operate effectively. But without knowing what technology assets a business has, how they’re connected and what their purpose is, it’s difficult to manage and secure them. More critically, it’s impossible to make good decisions about cybersecurity or business operations.

When talking about assets, this goes beyond computers or network routers in an office. It could be a sensor on a solar array linked to an inverter that powers a commercial building. In the medical field it could be a scanner or an infusion pump in a hospital. Understanding what version of operating system (OS) a medical device runs is as important as knowing what software the accounting system runs on. A very old version may no longer be supported, and this could lead to vulnerabilities, given how connected systems are.

As an example: a medical infusion device was hooked up to a patient in a hospital. In the middle of the treatment it was observed that the device had malware on it. Normally the response would be to shut an infected device down and quarantine it. But in this particular medical context that wasn’t possible, because it could have affected the well-being of the patient. Instead, it required a different approach. Nursing staff were sent to sit with the patient and monitor them to make sure the malware didn’t affect the treatment they were receiving. Then plans were set in place to isolate the device as soon as the treatment was completed and send it in for remediation. This highlights why context is so important.

Understanding what assets form part of the business also requires understanding their context at a deeper level. Where are the assets located? What role do they perform? How critical are those assets to business operations and continuity? What’s the risk if they become compromised? And how do you remediate any vulnerabilities that are found?

Has work-from-home increased system risks?

At the start of the pandemic the priority for many businesses was continuity, i.e. finding ways to enable employees to work from home and keep them connected to all the systems they needed. It fundamentally changed the way of working, especially as many businesses continue to embrace work-from-home and hybrid flexible working models. Employees have access to databases and SaaS systems, and they’re interacting with colleagues in locations across the globe. It’s all been made possible by the ability to connect anywhere in the world, but it’s not without risks. Now, post-pandemic, many of the vulnerabilities are starting to come to the fore and businesses aren’t always sure how to manage them.

In terms of assets, it’s resulted in an acceleration of a porous perimeter because it’s allowed other assets to be connected to the same networks that have access to corporate systems. By creating an access point for users, it has opened up connectivity to supposedly secured business operating systems through other devices that have been plugged in. Worse is that most businesses don’t have any visibility as to what those connected devices are. Without a way to scan an entire system to see what’s connected, where it’s connected and why it’s connected, it leaves a business vulnerable.  These vulnerabilities are likely to increase in the future as more and more devices become connected in the global workplace.

What are the critical considerations for business enterprises moving forward?

Currently there is too much noise on systems, and this is only going to get worse as connectivity increases. Businesses need to find ways to correlate and rationalize the data they’re working with to make it more workable and actionable. This will help to provide context and allow businesses to focus on the things that make the most impact for the business, such as continuity of business operations and resilience.

An example is being able to examine many different factors about an asset to generate a risk score about that particular asset. This includes non-IT assets that typically aren’t scanned because there isn’t an awareness that they exist. The ability to passively scan for vulnerabilities across all assets enables businesses to know what they’re working with. It gives teams the opportunity to focus on the critical areas of business and supporting assets - both primary and secondary. Just having the right context enables people to make better decisions on where to prioritize their efforts and resources. This ability to focus is going to become even more critical as the volume of assets and connections increase globally and the risks and vulnerabilities alongside them.
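As a purely illustrative example of turning that context into something actionable, the sketch below combines a handful of asset attributes into a simple risk score. The attributes, weights and thresholds are assumptions for illustration, not any vendor’s actual scoring model.

```python
# Purely illustrative sketch of combining asset context into a simple risk score.
# Attributes and weights are assumptions; a real platform would derive them from
# discovery data, threat intelligence and business-impact ratings.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_exposed: bool
    os_supported: bool         # vendor still ships security patches
    business_critical: bool
    known_vulnerabilities: int

def risk_score(asset: Asset) -> int:
    score = min(asset.known_vulnerabilities, 10) * 3
    if asset.internet_exposed:
        score += 25
    if not asset.os_supported:
        score += 25
    if asset.business_critical:
        score += 20
    return min(score, 100)

print(risk_score(Asset("infusion-pump-12", internet_exposed=False,
                       os_supported=False, business_critical=True,
                       known_vulnerabilities=2)))  # prints 51
```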

To learn more about getting a handle on business deployments, listen to a recent SynerComm IT Trendsetters podcast with Armis in which they discuss the topic in more detail. Alternatively, you can also reach out to SynerComm.

The use of QR codes has grown exponentially in the last few years. So much so that the software for reading QR codes now comes as a default in the camera settings on most mobile devices. By just taking a photograph of a QR code the camera automatically brings up an option to open a link to access information.

The problem currently is that there is no way to verify whether the link will take you where it says it will, especially as most of the URLs shown by QR readers display as short links. Humans can’t read the code itself, and there’s no way to manually identify what information is contained in a QR code or where it will lead. For individuals and businesses this poses a security risk.

Consider how many QR codes exist in public places and how broadly they’re used in marketing. From parking garage tickets to restaurant menus, promotions and competitions in-store. Now consider that QR codes can easily be created by anyone with access to a QR creator app. Which means they can also be misused by anyone. It’s really not hard for someone to create and print a QR code to divert users to an alternate URL and place it over a genuine one on a restaurant menu.

What led to the rise in adoption of QR codes

QR codes were created in the mid-1990s by Denso Wave, a subsidiary within the Toyota group. The purpose of the QR code was to track car parts through manufacturing and assembly. However, the developers created it as an open code with the intention that it could be freely used by as many people as possible. Marketers saw the opportunity in the convenience it offered, and soon it became a popular way to distribute coupons and other promotions.

When the pandemic hit and social distancing became a requirement, QR codes were seen as the ideal solution for many different applications. Instead of having to hand over cash or a credit card, a QR code could be scanned for payment. Instead of handing out menus, restaurants started offering access to them through QR codes. In many ways the pandemic was largely responsible for the acceleration of QR code adoption; QR codes were seen as a “safer” no-contact solution. But in making things easier and more convenient for consumers, it’s created a minefield when it comes to security.

How do QR codes create vulnerabilities compared to email?

Over the years people have learnt not to click on just any link that comes through their email account. There are a few basic checks that can be done. These include independently verifying where the email came from, whether the person or company is a known entity, and checking the destination URL of the link.

The problem with QR codes is that none of this information is available just by looking at one. It’s just a pattern of black and white blocks. Even when the link comes up, it’s usually a short link, so it’s not even possible to validate the URL. For email there are a number of security options available, including firewalls, anti-phishing and anti-virus software that can scan incoming messages and issue alerts. But nothing like this exists for QR codes.

Currently there is no software or system capable of scanning and automatically authenticating a QR code in the same way as an anti-virus would do for email. Without technology available to help with security, reducing vulnerabilities is reliant on education.

Best practices to reduce vulnerabilities:

As most QR codes are scanned with a mobile device, and most employees also access company emails and apps from their phones, there needs to be greater awareness of the risks that exist. Criminals are increasingly targeting mobile phones and individual identities in order to gain access to business systems. If an employee inadvertently clicks on a link from a QR code that is from a malicious source, it could set off a chain reaction. With access to the phone, it may also be possible to gain access to all the apps and systems on that phone – including company data.

From a user perspective, the key thing to know is that gaining access through a QR code requires manual input. The camera on a mobile phone may automatically scan a QR code when it sees it, but it still requires the user to manually click on the URL for anything to happen. That is the best opportunity to stop any vulnerability. Dismiss the link and there’s no risk. The QR code can’t automatically run a script or access the device if the link is ignored.

From a business perspective, if you’re using QR codes and want people to click on them you need to find ways to increase transparency and show where the link is sending them. The best way to do this is to avoid the use of short links. Show the actual URL, provide a way to validate that it’s a genuine promotion or link to your website.
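For teams that want to vet QR codes before publishing or approving them, a small script can decode the image and follow any redirects so the final destination is visible before anyone taps the link. This sketch assumes the pyzbar, Pillow and requests packages are installed; the image filename is a placeholder.

```python
# Minimal sketch: decode a QR code image and expand any redirects so the final
# destination can be inspected before anyone taps the link.
import requests
from PIL import Image
from pyzbar.pyzbar import decode

def reveal_qr_destination(image_path: str, timeout: float = 5.0):
    results = []
    for symbol in decode(Image.open(image_path)):
        url = symbol.data.decode("utf-8")
        try:
            # Follow redirects without downloading the page body.
            final = requests.head(url, allow_redirects=True, timeout=timeout).url
        except requests.RequestException:
            final = "unreachable"
        results.append((url, final))
    return results

for encoded, final in reveal_qr_destination("menu_qr.png"):  # placeholder filename
    print(f"QR encodes {encoded} -> resolves to {final}")
```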

QR code takeaway:

QR codes are in such broad circulation already, they’re not going away. But it’s a personal choice whether or not to use them. There’s nothing more personal in terms of technology than a mobile phone. If people want to improve their identity security there has to be a greater awareness of where the risks lie. Protecting devices, personal information and even access into company systems starts with a more discerning approach to QR codes.

Because the technology doesn’t currently exist to validate or authenticate QR codes, we need to learn how to use them in a safe way. We had to learn (often the hard way) not to insert just any memory card into a computer or open emails without scanning and validating them. Similarly, there needs to be a greater awareness to not scan just any QR code that’s presented.

To hear a more detailed discussion of QR codes and the security risks they pose, watch Episode 25 of IT Trendsetters Interview Series.

In boxing, the attributes that make up a champion are universally understood. A swift jab, a bone-crunching cross, and agile footwork are all important. However, true champions like Robinson, Ali, and Leonard knew how to defend against incoming attacks and avoid unnecessary punishment at all costs.

The same is true for a champion DNS security solution––it should help you avoid unnecessary breaches, protect against incoming attacks, and keep your data and systems safe at all costs. Your solution must not only be able to deliver content quickly and efficiently, but also protect against all types of incoming threats. It must be able to defend against DDoS attacks, data leaks, and new malicious offensive strategies.

Attributes of a Champion DNS Security Solution

When a boxing champion is able to avoid getting hit and can develop a shield against blows, they stay in the ring longer and are more likely to succeed. Again, the same is true for your business when it comes to developing a champion DNS security solution. Here are some of the features and attributes your DNS security solution should have:

Improved Security ROI

A champion solution will increase the return on investment from your other security investments. It will also, without requiring additional effort from you or a third party, secure every connection over physical, virtual, and cloud infrastructure.

Comprehensive Defense

A champion solution provides comprehensive defense using the existing infrastructure that runs your business—including DNS and other core network services.  BloxOne Threat Defense, for example, maximizes brand protection by securing your existing networks and digital imperatives like SD-WAN, IoT, and the cloud.

Powers & Optimizes SOAR Solutions

SOAR (security orchestration, automation, and response) solutions help you work smarter by automating the investigation of and response to security events. A champion DNS solution will integrate with your SOAR platform to help improve efficiency and effectiveness.

Scalability and Seamless Integration

A champion solution will integrate easily into your current environment and scale seamlessly as your business grows. It should require no new hardware and should not degrade the performance of any existing network services.

Most importantly, a champion DNS security solution must be able to defend against any potential incoming threats, including DDoS attacks, data leaks, and other malicious activity.

BloxOne Threat Defense

BloxOne Threat Defense is a comprehensive, cloud-native security solution that meets all of the criteria, and more. It offers industry-leading features such as DNS firewalling, DDoS protection, and data leak prevention. BloxOne Threat Defense is easily scalable, integrates seamlessly with existing SOAR solutions, and maximizes ROI from your other security investments.

Don't leave your business unprotected against the ever-evolving landscape of DNS threats. A good offensive recovery is useful, but an adaptable defensive strategy is what separates a true DNS security champion from the rest.

To learn more about how BloxOne Threat Defense can help you defend against incoming threats, contact us and book a free trial today!

Consolidating data centers, increased business agility and reduced IT system costs are a few of the benefits associated with migrating to the cloud. Add to these improved security and it makes a compelling case for cloud migration. As part of the digital transformation process, companies may implement what they consider the best tools, and have the right people and policies in place to secure their working environment. But is it enough?

Technology is continually evolving and so are the ways in which cybercriminals attack. Which means that no system is entirely secure. Every small change or upgrade has the potential to create a vulnerability. In that way, operating in the cloud is not all that different to having on-site systems that need to be tested and defended.

Understanding the most common mistakes made in cloud security can help companies become more aware of where vulnerabilities exist. We highlight the top five we often come across when testing:

Unhardened systems

This is one of the most common issues that comes up as a vulnerability in cloud systems. Normally as part of any on-site data center change or upgrade, there would be a process of removing the unneeded services and applications, then checking and patching the system to ensure it’s the latest version to reduce the number of vulnerabilities. But when new systems are set up in the cloud, often some of these steps are skipped. It could simply be a case of them being exposed to other networks or the internet before they’re hardened. More often though, they’re simply overlooked, and this creates vulnerabilities.

Excessively exposed services

Frequently, vulnerabilities occur through remote desktop protocols, SSH, open file shares, database listeners, and missing ACLs or firewalls. Usually these points of access would be shielded by a VPN, but now they’re being exposed to the internet. An example of how this could happen is through default accounts and passwords. If these defaults weren’t removed or secured during setup, and SSH or databases are inadvertently exposed to the internet, it opens up a pathway for an attacker to access the system through the default logins and passwords.
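A quick way to spot this kind of exposure is to check whether commonly targeted services answer from the internet. The sketch below uses only the Python standard library; the port list is illustrative, and it should only be run against hosts you own or are explicitly authorized to test.

```python
# Minimal sketch: check whether commonly exposed services answer on a host.
# Run only against hosts you own or are explicitly authorized to test;
# the port list is illustrative, not exhaustive.
import socket

COMMON_PORTS = {22: "SSH", 445: "SMB", 1433: "MSSQL", 3306: "MySQL",
                3389: "RDP", 5432: "PostgreSQL", 6379: "Redis", 9200: "Elasticsearch"}

def exposed_services(host: str, timeout: float = 2.0):
    findings = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                findings.append((port, service))
    return findings

print(exposed_services("203.0.113.10"))  # documentation/example address
```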

Insecure APIs

While this is often seen in on-site systems, it is more prevalent in cloud systems, perhaps because there is less vigilance when migrating to the cloud. Weak authentication is a concern, as are easy authentication bypasses where an attacker is able to skip authentication altogether and start initiating queries to find vulnerabilities within a system.

Missing critical controls

Basic system controls such as firewalls, VPNs, and two-factor authentication need to be in place as a first line of defense. Many cloud servers have their own firewalls which are more than adequate, but they need to be activated and visible. Another common vulnerability can exist in a hybrid on-site/cloud system connected by a site-to-site (S2S) VPN: a vulnerability in the cloud system could give an attacker access to the on-site system through that supposedly secure link.

Insufficient logging and lack of monitoring

When a cloud server has been compromised, the first thing asked of the affected company is for the logs showing access, firewall activity and possible threats to the different systems hosted within the cloud. If these logs don’t exist or haven’t been properly set up, it becomes almost impossible to monitor and identify where the attacks originated or how they progressed through the system.
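A basic sanity check, assuming boto3 and read-only CloudTrail permissions, is to confirm that trails exist and are actively logging before an incident ever happens:

```python
# Minimal sketch: verify that CloudTrail trails exist and are actively logging,
# so the access logs described above are available when an investigation needs them.
# Assumes boto3 is installed and credentials with cloudtrail:DescribeTrails and
# cloudtrail:GetTrailStatus permissions are configured.
import boto3

cloudtrail = boto3.client("cloudtrail")

def trails_not_logging():
    trails = cloudtrail.describe_trails()["trailList"]
    if not trails:
        return ["<no trails configured>"]
    findings = []
    for trail in trails:
        status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
        if not status.get("IsLogging"):
            findings.append(trail.get("Name", trail["TrailARN"]))
    return findings

print(trails_not_logging())
```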

Identifying cloud vulnerabilities through penetration testing

While there is a big movement towards cloud servers, many companies don’t give the same level of consideration to securing their systems in the cloud as they have for years on their on-site servers.  This is where penetration testing is hugely valuable in that it can identify and report on vulnerabilities and give companies an opportunity to reduce their risk.

The approach to penetration testing on cloud servers is no different from on-site servers, because from an attacker’s point of view, they’re interested in what they can access. Where that information is located makes no difference. They’re looking for vulnerabilities to exploit. There are some areas in the cloud where new vulnerabilities have been identified, such as sub-domain takeovers, open AWS S3 buckets, or even open file shares that give internet access to private networks. Authentication systems are also common targets. Penetration testing aims to make vulnerabilities known so that they can be corrected to reduce the risk a company is exposed to.

For companies that want to ensure they’re staying ahead of vulnerabilities, adversary simulations provide an opportunity to collaborate with penetration testers and validate their controls. The simulation process demonstrates likely or common attacks and gives defenders an opportunity to test their ability to identify and respond to the threats as they occur. This experience helps train responders and improve system controls. A huge benefit of this collaborative testing approach is sharing of information such as logs and alerts. The penetration tester can see what alerts are being triggered by their actions, while the defenders can see how attacks can evolve. If alerts aren’t being triggered, this identifies that logs aren’t being initiated which can then be corrected and retested.

SynerComm can help

As companies advance in their digital transformation and migrate more systems to the cloud, there needs to be an awareness that risks and vulnerabilities remain. The same level of vigilance taken with on-site systems needs to be applied to cloud migrations, and then the systems need to be tested. If not, attackers will gladly find and exploit vulnerabilities, and this is not the type of risk companies want to be exposed to.

To learn about Cloud Penetration Testing and Cloud Adversary Simulation services, reach out to SynerComm.

Having access to data on a network, whether it’s moving or static, is the key to operational efficiency and network security. This may seem obvious, yet the way many tech stacks are set up is to primarily support specific business processes. Network visibility only gets considered much later when there’s a problem.

For example: When there is a performance issue on a network, an application error or even a cybersecurity threat, getting access to data quickly is essential. But if visibility hasn’t been built into the design of the network, finding the right data becomes very difficult.

In a small organization, getting a crash cart usually means someone going to the tech stack and starting to run traces to find out where the issue originated. It’s a challenging task and takes time. Imagine the same scenario but in an enterprise with thousands of users. Without visibility into the network, how do you know where to start troubleshooting? If the network and systems have been built without visibility, it becomes very difficult to access the needed data quickly.

How to build visibility into the design process?

There is a certain amount of consideration that needs to be given to system architecture to gain visibility to data and have monitoring systems in place that can provide early detection – whether it’s for a cybersecurity threat or network performance. This may include physical probes on a data center, virtual probes on a cloud network, changes to user agents or a combination of all of these.

Practically, to gain visibility into a data center, you may decide to install taps at the top of the rack, along with aggregation devices that give you access to the north/south traffic on that rack. The curious thing is that most cyberattacks actually happen in east/west traffic. This means that monitoring only the top of the rack won’t provide visibility or early detection of those threats. As a result, you may need to plan for additional virtual taps running in your Linux or VMware environment, which will provide a much broader level of monitoring of the infrastructure.
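To illustrate the distinction, a very rough way to classify flows is to treat traffic between two private (RFC 1918) addresses as east/west and everything else as north/south. This is a simplification for illustration only; real deployments classify flows against their own address plan and tap placement.

```python
# Minimal sketch of the east/west vs. north/south distinction: a flow between two
# private (RFC 1918) addresses is treated as east/west, anything else as north/south.
import ipaddress

def flow_direction(src_ip: str, dst_ip: str) -> str:
    internal = (ipaddress.ip_address(src_ip).is_private
                and ipaddress.ip_address(dst_ip).is_private)
    return "east/west" if internal else "north/south"

print(flow_direction("10.0.1.5", "10.0.8.20"))      # east/west
print(flow_direction("10.0.1.5", "93.184.216.34"))  # north/south
```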

Most companies also have cloud deployments, and these could go back 15 years, using any number of cloud systems for different workflows. The question to ask is: does the company have the same level of data governance over the data centers it no longer owns and only accesses through an application as it had over its own data center? Most times the company won’t have access to that infrastructure. This means that a more measured approach is needed to determine how monitoring of all infrastructure can be achieved. Without a level of visibility it becomes very difficult to identify vulnerabilities and resolve them.

Lessons on network visibility highlighted by remote working and cloud deployments

More than two years after pivoting infrastructure to enable employees to work from home, many issues relating to data governance and compliance are now showing. These further highlight the challenges that occur when visibility isn’t built into infrastructure design. In reality, the pivot had to happen rapidly to ensure business continuity. At the time, access was the priority and given the urgency it wasn’t possible to build in the required levels of security and visibility.

With hybrid working becoming the norm for many companies, the shift in infrastructure is no longer considered temporary. Companies have systems that span data centers, remote workers and the cloud and there are gaps when it comes to data governance and compliance.  IT and cybersecurity teams are now testing levels of system performance and working to identify possible vulnerabilities to make networks and systems more secure.

There is an added challenge in that tech stacks have become highly complex with so many systems performing different functions in the company.  This is especially true when you consider multilayered approaches to cybersecurity and how much infrastructure is cloud based. Previously, when companies owned all the systems in their data centers, there were a handful of ways to manage visibility and gain access to data. Today, with ownership diversified in the different systems, it’s very difficult to have the same level of data visibility.

What’s the best approach given this complexity?

As system engineers develop and implement more tools to improve application and network performance, the vision may be to be able to manage everything in one place and have access to all the data you need. But even with SD LAN, technology is not yet at a point where one system or tool can do everything.

For now the best approach is to look at all the different locations and get a baseline for performance. Then go back 30 or 60 days and see if that performance was better or worse. When new technology is implemented it becomes easier to identify where improvements have taken place and where vulnerabilities still exist.

Even with AI/ML applications, it comes back to data visibility. AI may have the capacity to generate actionable insights, but it still requires training and vast volumes of data to do so. Companies need to be able to find and access the right data within their highly complex systems to be able to run AI applications effectively.

Traditionally, and especially with cloud applications, the focus is usually on building first and securing systems later. But an approach that considers how the company will access critical data as part of the design helps build more robust systems and infrastructure. Data visibility is very domain specific, and companies that want to stay ahead in terms of system performance and security are being more proactive about incorporating data visibility into systems design.

There’s no doubt that this complex topic will continue to evolve along with systems and applications. To hear a more in-depth discussion on the topic of data visibility, watch the recent IT Trendsetters podcast with SynerComm and Keysight.

When US based companies are expanding and setting up offices in foreign countries, or they’re already established but need to do a systems upgrade, there are two primary options available. The company might look to procure equipment locally in-country. Or they might approach their US supplier and then plan to export the equipment.

At first it may appear to be a simple matter of efficiency and cost. How to get the equipment on the ground at the best price and as quickly as possible. But both options are mired in complexity with hidden costs and risks that aren’t immediately obvious. It’s when companies make assumptions that they run into trouble. Even a small mistake can be very costly, impacting the business reputation and bottom line.

Being aware of common pitfalls when looking to deploy IT systems internationally can help decision makers reduce risk and go about deployment in a way that benefits the company in both the short and long term. To highlight what factors need to be taken into consideration, we discuss some common assumptions and oversights that can land companies in trouble.

  1. Total cost of local procurement

Initially when getting a quote from a local reseller, it may appear to be more cost effective compared to international shipping and customs clearance. However, it’s important to know if the purchase is subject to local direct or indirect taxes and if they have been included in the quote. For example: In some European countries VAT (Value Added Tax) is charged at 21% on all purchases. If this is not included and specified on the quote, companies could inadvertently find themselves paying 21% more than budgeted.

  2. Maintenance and asset management

IT systems may require procurement from multiple local vendors and this can be a challenge when it comes to managing warranties, maintenance contracts and the assets. Even if the vendors are able to provide a maintenance service, the responsibility still rests with the company to ensure the assets are accurately tagged and added to the database. When breakdowns occur or equipment needs to be replaced, the company will need to have the information on hand to know how to go about that, and if they don’t, it can be problematic.  With a central point of procurement, asset management can be much easier.

  3. Unknown and unforeseen factors

Operationally, dealing with suppliers and vendors in a foreign destination can be challenging. Without local knowledge and understanding of local cultures and how business operates within that culture, it’s easy to make mistakes. And those mistakes can be costly. For example: local vendors may have to bring in stock and this could result in delays. It may be difficult to hold the vendor to account, especially if they keep promising delivery, yet delays persist. Companies could be stuck in limbo, waiting for equipment. Installation teams are delayed and operational teams become frustrated. These types of delays can end up costing the company significantly more than what was originally budgeted for the deployment.

  4. Export/import regulations

Some companies may decide to stick with who they know and buy from their usual US supplier with the view to ship the equipment to the destination using a courier or freight forwarder. The challenge comes in understanding international import and export regulations. Too often companies will simply tick the boxes that enable the goods to be shipped, even if it isn’t entirely accurate. There are many ways to ship goods that might get them to a destination, but only one correct way that ensures the shipment is compliant. Knowing and understanding import regulations, taxes and duties, including how they differ between countries is the only way to reduce risk and avoid penalties.

  5. Multiple risks and impacts

Even within regions, countries have different trade and tax regulations regarding how imports are categorized and processed through customs. On major IT deployments with many different equipment components, this can become highly complex. The logistics of managing everything is equally complex, and any mistakes have knock-on effects. Keep in mind that the company usually has to work with what is deployed for a number of years. This makes the cost of getting a deployment wrong a major risk. Non-compliance with trade and tax regulations can result in stiff penalties that set a company back financially. If logistics and installation go awry, it can result in company downtime, which has operational implications.

Understanding the risks and challenges, what’s the solution?

There’s value in having centralized management of international IT deployment. Especially when that centralized management incorporates overseeing trade and taxation compliance, procurement and asset management, as well as logistics and delivery. If at any stage of the deployment there are queries or concerns, there’s a central contact to hold accountable and get answers.

Initially the costs of managing deployment centrally may appear to be higher, but the value comes in with removing risk of non-compliance and reducing risk of delays and operational downtime. Plus having an up to date asset database makes it significantly easier to manage maintenance, warranties and breakdowns going forward.

Companies debating which deployment route is most efficient, need to consider the possible pitfalls and their abilities to manage them independently. If there’s any doubt, then a centrally managed solution should be a serious consideration.

To hear a more detailed discussion on these and other common deployment pitfalls listen to our recent podcast on IT Trendsetters.  The podcast contains some valuable and enlightening discussion points that you may find helpful.


The Benefits of a Partnership Built on Trust – 4 Unique Case Studies

Get the Full Whitepaper for Free

What's Inside? 


  • Why having a trusted advisor and partnership is an invaluable asset to your  business 
  • How trust and communication can maximize efficiency, minimize costs, and get the job done correctly the first time 
  • How to overcome a "one-size-fits-all" mindset and choose a unique and customized solution for your organization 

How SynerComm Builds Trusting Relationships   

Every project begins by carefully listening to our customers’ problems and asks. Through this process, we learn about their underlying priorities, objectives, and unique project requirements.

Instead of spending our time “selling” to our customers and prospects, we focus our energies on investing in the right solutions for our customers and letting our expertise, industry reputation, and excellent work speak for itself.  

No matter how large or small your deployment is, you need to know that you can trust your logistics and IT partner to provide you with tailored solutions, sound advice, and trustworthy white-glove service. This whitepaper was created to help you learn more about the importance of trust in your IT partnerships. 

Learn more about ImplementIt and how we can make your next project stress-free by clicking here.
