SynerComm's marketing team likes to ask our pentesters for help with their messaging. Like all good hackers, we adapt, automate, and improve. So, when we asked our team for some helpful marketing messages, Dylan R. responded within seconds with this:

Password security assessments are important because they help ensure that the passwords being used to protect sensitive information and systems are strong and secure.

In today's digital world, passwords are the primary means of authentication for many online accounts and systems. If a password is weak or easy to guess, it can be vulnerable to attack, allowing unauthorized access to sensitive information or systems. This can have serious consequences, such as data breaches, identity theft, and financial loss.

By conducting a password security assessment, organizations can identify and address any weaknesses in their password policies and practices. This includes identifying and correcting any weak or easily guessable passwords, enforcing password length and complexity requirements, and implementing password expiration and rotation policies.

Overall, password security assessments are a crucial component of an organization's overall security strategy, helping to ensure the confidentiality, integrity, and availability of sensitive information and systems.

ChatGPT, December 2022

Well stated, Dylan (or should I say ChatGPT)!

AI is a powerful tool with endless potential uses. See how SynerComm is using AI to enhance our penetration testing and continuous attack surface management (CASM) solutions.

In boxing, the attributes that make up a champion are universally understood. A swift jab, a bone-crunching cross, and agile footwork are all important. However, true champions like Robinson, Ali, and Leonard knew how to defend against incoming attacks and avoid unnecessary punishment at all costs.

The same is true for a champion DNS security solution: it should help you avoid unnecessary breaches, protect against incoming attacks, and keep your data and systems safe at all costs. Your solution must not only deliver content quickly and efficiently, but also defend against all types of incoming threats, including DDoS attacks, data leaks, and new malicious offensive strategies.

Attributes of a Champion DNS Security Solution

When a boxing champion is able to avoid getting hit and can develop a shield against blows, they stay in the ring longer and are more likely to succeed. Again, the same is true for your business when it comes to developing a champion DNS security solution. Here are some of the features and attributes your DNS security solution should have:

Improved Security ROI

A champion solution will increase the return on your other security investments. It will also secure every connection across physical, virtual, and cloud infrastructure, without requiring additional effort from you or a third party.

Comprehensive Defense

A champion solution provides comprehensive defense using the existing infrastructure that runs your business—including DNS and other core network services. BloxOne Threat Defense, for example, maximizes brand protection by securing your existing networks and digital imperatives like SD-WAN, IoT, and the cloud.

Powers & Optimizes SOAR Solutions

SOAR (security orchestration, automation, and response) solutions help you work smarter by automating the investigation of and response to security events. A champion DNS solution will integrate with your SOAR platform to improve its efficiency and effectiveness.

Scalability and Seamless Integration

A champion solution will integrate easily into your current environment and scale seamlessly as your business grows. It should require no new hardware and should not degrade the performance of any existing network services.

Most importantly, a champion DNS security solution must be able to defend against any potential incoming threat, from DDoS attacks to data leaks and other malicious activity.

BloxOne Threat Defense

BloxOne Threat Defense is a comprehensive, cloud-native security solution that meets all of the criteria, and more. It offers industry-leading features such as DNS firewalling, DDoS protection, and data leak prevention. BloxOne Threat Defense is easily scalable, integrates seamlessly with existing SOAR solutions, and maximizes ROI from your other security investments.

Don't leave your business unprotected against the ever-evolving landscape of DNS threats. A strong offense is useful, but an adaptable defensive strategy is what separates a true DNS security champion from the rest.

To learn more about how BloxOne Threat Defense can help you defend against incoming threats, contact us and book a free trial today!

Consolidating data centers, increasing business agility, and reducing IT system costs are a few of the benefits associated with migrating to the cloud. Add improved security to the list, and the case for cloud migration becomes compelling. As part of the digital transformation process, companies may implement what they consider the best tools and have the right people and policies in place to secure their working environment. But is it enough?

Technology is continually evolving, and so are the ways in which cybercriminals attack, which means that no system is entirely secure. Every small change or upgrade has the potential to create a vulnerability. In that way, operating in the cloud is not all that different from having on-site systems that need to be tested and defended.

Understanding the most common mistakes made in cloud security can help companies become more aware of where vulnerabilities exist. We highlight the top five we often come across when testing:

Unhardened systems

This is one of the most common vulnerabilities in cloud systems. As part of any on-site data center change or upgrade, there would normally be a process of removing unneeded services and applications, then checking and patching the system to the latest version to reduce the number of vulnerabilities. But when new systems are set up in the cloud, some of these steps are often skipped. It could simply be a case of systems being exposed to other networks or the internet before they're hardened. More often, though, these steps are simply overlooked, and that creates vulnerabilities.
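To make this concrete, here's a minimal sketch (ours, not a full hardening tool) that flags listening services a freshly built host shouldn't be running. It assumes the Python psutil package, and the allowed-port list is an assumption you'd tailor to each server's role:

    # Sketch: flag unexpected listening services on a newly built host.
    # Requires the psutil package; ALLOWED_PORTS is an assumption per server role.
    import psutil

    ALLOWED_PORTS = {22, 443}  # e.g., SSH for administration, HTTPS for the app

    def unexpected_listeners():
        findings = []
        for conn in psutil.net_connections(kind="inet"):
            if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in ALLOWED_PORTS:
                name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                findings.append((conn.laddr.port, name))
        return findings

    for port, name in unexpected_listeners():
        print(f"Unexpected listener on port {port}: {name}")

Run before a system is exposed to other networks, even a check this simple can catch the forgotten database listener or debug service.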

Excessively exposed services

Vulnerabilities frequently occur through remote desktop protocols, SSH, open file shares, database listeners, and missing ACLs or firewalls. Usually these points of access would be shielded by a VPN, but now they're being exposed to the internet. Default accounts and passwords are one example of how this happens: if those defaults weren't removed or secured during setup, and SSH or a database is inadvertently exposed to the internet, an attacker has a ready-made pathway into the system.
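A quick outside-in check helps here. The sketch below (standard-library Python, with a port list that is illustrative rather than exhaustive) probes a public address for services that should only ever be reachable over a VPN:

    # Sketch: probe a host from the internet side for services that belong behind a VPN.
    import socket

    RISKY_PORTS = {22: "SSH", 445: "SMB", 1433: "MSSQL",
                   3306: "MySQL", 3389: "RDP", 5432: "PostgreSQL"}

    def exposed_services(host, timeout=2.0):
        exposed = []
        for port, name in RISKY_PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                    exposed.append(name)
        return exposed

    print(exposed_services("203.0.113.10"))  # placeholder address; use your own perimeter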

Insecure API’s

While this is often seen in on-site systems, it is more prevalent in cloud systems, perhaps because there is less vigilance when migrating to the cloud. Weak authentication is a concern, as are easy authentication bypasses, where an attacker is able to skip authentication altogether and start issuing queries to find vulnerabilities within a system.
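One of the simplest checks is also one of the most revealing: call the API with no credentials at all and see what comes back. A hedged sketch using the Python requests package follows; the base URL and endpoint paths are hypothetical:

    # Sketch: verify that API endpoints reject unauthenticated requests.
    import requests

    BASE = "https://api.example.com"  # hypothetical API
    ENDPOINTS = ["/v1/users", "/v1/orders", "/v1/admin/config"]

    for path in ENDPOINTS:
        resp = requests.get(BASE + path, timeout=5)  # deliberately no auth header
        if resp.status_code not in (401, 403):
            print(f"{path}: expected 401/403, got {resp.status_code} - possible auth bypass")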

Missing critical controls

Basic system controls such as firewalls, VPNs, and two-factor authentication need to be in place as a first line of defense. Many cloud servers have their own firewalls, which are more than adequate, but they need to be activated and visible. Another common vulnerability can exist in a hybrid on-site/cloud system connected by a site-to-site (S2S) VPN: a vulnerability in the cloud system could give an attacker access to the on-site system through that supposedly secure link.
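Cloud providers make these controls easy to audit programmatically. As one example (AWS purely for illustration; the boto3 package and configured credentials are assumed), a few lines can find security groups left open to the whole internet:

    # Sketch: find AWS security group rules open to 0.0.0.0/0.
    import boto3

    ec2 = boto3.client("ec2")
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg["IpPermissions"]:
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    print(f"{sg['GroupId']} ({sg['GroupName']}): ports "
                          f"{rule.get('FromPort')}-{rule.get('ToPort')} open to the internet")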

Insufficient logging and lack of monitoring

When a cloud server has been compromised, the first thing asked of the affected company is for the logs showing access, firewall activity, and possible threats to the different systems hosted within the cloud. If these logs don't exist or haven't been properly set up, it becomes almost impossible to identify where the attacks originated or how they progressed through the system.
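This is a control that can be verified before an incident rather than after. A minimal sketch, again using AWS CloudTrail as the example (boto3 and credentials assumed; other clouds have equivalent checks):

    # Sketch: confirm audit logging is configured and actually running.
    import boto3

    ct = boto3.client("cloudtrail")
    trails = ct.describe_trails()["trailList"]
    if not trails:
        print("No CloudTrail trails configured - there will be no logs to investigate")
    for trail in trails:
        status = ct.get_trail_status(Name=trail["TrailARN"])
        if not status["IsLogging"]:
            print(f"Trail {trail['Name']} exists but is not logging")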

Identifying cloud vulnerabilities through penetration testing

While there is a big movement toward cloud servers, many companies don't give the same level of consideration to securing their systems in the cloud as they have for years given their on-site servers. This is where penetration testing is hugely valuable: it can identify and report on vulnerabilities and give companies an opportunity to reduce their risk.

The approach to penetration testing on cloud servers is no different from on-site servers, because from an attacker's point of view, what matters is what they can access; where that information is located makes no difference. They're looking for vulnerabilities to exploit. There are some areas in the cloud where new vulnerabilities have been identified, such as subdomain takeovers, open AWS S3 buckets, or even open file shares that give internet access to private networks. Authentication systems are also common targets. Penetration testing aims to make vulnerabilities known so that they can be corrected, reducing the risk a company is exposed to.

For companies that want to ensure they're staying ahead of vulnerabilities, adversary simulations provide an opportunity to collaborate with penetration testers and validate their controls. The simulation process demonstrates likely or common attacks and gives defenders an opportunity to test their ability to identify and respond to the threats as they occur. This experience helps train responders and improve system controls. A huge benefit of this collaborative testing approach is the sharing of information such as logs and alerts: the penetration tester can see what alerts their actions are triggering, while the defenders can see how attacks evolve. If expected alerts aren't firing, that reveals gaps in logging that can then be corrected and retested.

SynerComm can help

As companies advance in their digital transformation and migrate more systems to the cloud, there needs to be an awareness that risks and vulnerabilities remain. The same level of vigilance taken with on-site systems needs to be applied to cloud migrations, and then the systems need to be tested. If not, attackers will gladly find and exploit the vulnerabilities, and that is not the type of risk companies want to be exposed to.

To learn about our Cloud Penetration Testing and Cloud Adversary Simulation services, reach out to SynerComm.

Having access to data on a network, whether it’s moving or static, is the key to operational efficiency and network security. This may seem obvious, yet the way many tech stacks are set up is to primarily support specific business processes. Network visibility only gets considered much later when there’s a problem.

For example, when there is a performance issue on a network, an application error, or even a cybersecurity threat, getting access to data quickly is essential. But if visibility hasn't been built into the design of the network, finding the right data becomes very difficult.

In a small organization, troubleshooting usually means grabbing a crash cart, going to the tech stack, and running traces to find out where the issue originated. It's a challenging task and takes time. Now imagine the same scenario in an enterprise with thousands of users. Without visibility into the network, where do you even start troubleshooting? If the network and systems have been built without visibility, it becomes very difficult to access the needed data quickly.

How do you build visibility into the design process?

A certain amount of consideration needs to be given to system architecture to gain visibility into data and to have monitoring systems in place that can provide early detection, whether for a cybersecurity threat or a network performance issue. This may include physical probes in a data center, virtual probes on a cloud network, changes to user agents, or a combination of all of these.

Practically, to gain visibility into a data center, you may decide to install taps at the top of each rack, along with aggregation devices that give you access to the north/south traffic on that rack. The catch is that most cyberattacks actually happen in east/west traffic, which means monitoring only at the top of the rack can't provide visibility or early detection for those threats. As a result, you may need to plan for additional virtual taps running in your Linux or VMware environment, which will provide a much broader level of monitoring of the infrastructure.
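To illustrate the distinction, here is a rough sketch (using the Python scapy package, and RFC 1918 private addressing as a crude stand-in for "internal") that classifies traffic seen by a virtual tap as north/south or east/west:

    # Sketch: classify tapped traffic as north/south or east/west.
    from ipaddress import ip_address
    from scapy.all import sniff, IP

    def is_internal(addr):
        return ip_address(addr).is_private  # RFC 1918 as a rough proxy for "internal"

    def classify(pkt):
        if IP in pkt:
            east_west = is_internal(pkt[IP].src) and is_internal(pkt[IP].dst)
            print(f"{pkt[IP].src} -> {pkt[IP].dst}: "
                  f"{'east/west' if east_west else 'north/south'}")

    sniff(prn=classify, store=False, count=100)  # capture requires elevated privileges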

Most companies also have cloud deployments, some going back 15 years, using any number of cloud systems for different workflows. The question to ask is: does the company apply the same level of data governance to data centers it no longer owns, and accesses only through an application, as it once applied to its own? Most times the company won't have access to that infrastructure. This means a more measured approach is needed to determine how monitoring of all infrastructure can be achieved. Without that level of visibility, it becomes very difficult to identify vulnerabilities and resolve them.

Lessons on network visibility highlighted by remote working and cloud deployments

More than two years after pivoting infrastructure to enable employees to work from home, many issues relating to data governance and compliance are now surfacing. These further highlight the challenges that arise when visibility isn't built into infrastructure design. In reality, the pivot had to happen rapidly to ensure business continuity; access was the priority, and given the urgency it wasn't possible to build in the required levels of security and visibility.

With hybrid working becoming the norm for many companies, the shift in infrastructure is no longer considered temporary. Companies have systems that span data centers, remote workers and the cloud and there are gaps when it comes to data governance and compliance.  IT and cybersecurity teams are now testing levels of system performance and working to identify possible vulnerabilities to make networks and systems more secure.

There is an added challenge in that tech stacks have become highly complex with so many systems performing different functions in the company.  This is especially true when you consider multilayered approaches to cybersecurity and how much infrastructure is cloud based. Previously, when companies owned all the systems in their data centers, there were a handful of ways to manage visibility and gain access to data. Today, with ownership diversified in the different systems, it’s very difficult to have the same level of data visibility.

What’s the best approach given this complexity?

As system engineers develop and implement more tools to improve application and network performance, the vision may be to manage everything in one place and have access to all the data you need. But even with software-defined networking, technology is not yet at a point where one system or tool can do everything.

For now, the best approach is to look at all the different locations and establish a performance baseline, then go back 30 or 60 days and see whether performance was better or worse. When new technology is implemented, it becomes easier to identify where improvements have taken place and where vulnerabilities still exist.
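The comparison itself can be very simple. Here's a sketch of the idea in Python, with placeholder numbers standing in for whatever metric you choose to baseline:

    # Sketch: compare recent measurements against a 30-day baseline.
    from statistics import mean

    baseline_ms = [42, 44, 41, 45, 43]  # e.g., daily p95 latency over the baseline window
    current_ms = [51, 49, 53]           # measurements since the change

    delta = mean(current_ms) - mean(baseline_ms)
    pct = 100 * delta / mean(baseline_ms)
    print(f"Latency changed {delta:.1f} ms ({pct:+.1f}%) vs. the 30-day baseline")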

Even with AI/ML applications, it comes back to data visibility. AI may have the capacity to generate actionable insights, but it still requires training and vast volumes of data to do so. Companies need to be able to find and access the right data within their highly complex systems to run AI applications effectively.

Traditionally, and especially with cloud applications, the focus is usually on building first and securing systems later. But an approach that considers how the company will access critical data as part of the design helps build more robust systems and infrastructure. Data visibility is very domain-specific, and companies that want to stay ahead in terms of system performance and security are being more proactive about incorporating data visibility into systems design.

There’s no doubt that this complex topic will continue to evolve along with systems and applications. To hear a more in-depth discussion on the topic of data visibility, watch the recent IT Trendsetters podcast with Synercomm and Keysight.

When US-based companies are expanding and setting up offices in foreign countries, or are already established but need to do a systems upgrade, there are two primary options available. The company might procure equipment locally, in-country, or it might approach its US supplier and plan to export the equipment.

At first it may appear to be a simple matter of efficiency and cost: how to get the equipment on the ground at the best price and as quickly as possible. But both options are mired in complexity, with hidden costs and risks that aren't immediately obvious. It's when companies make assumptions that they run into trouble, and even a small mistake can be very costly, impacting the business's reputation and bottom line.

Being aware of common pitfalls when deploying IT systems internationally can help decision makers reduce risk and approach deployment in a way that benefits the company in both the short and long term. To highlight the factors that need to be taken into consideration, we discuss some common assumptions and oversights that can land companies in trouble.

  1. Total cost of local procurement

Initially, a quote from a local reseller may appear more cost effective than international shipping and customs clearance. However, it's important to know whether the purchase is subject to local direct or indirect taxes and whether they have been included in the quote. For example, in some European countries VAT (Value Added Tax) is charged at 21% on all purchases. If this is not included and specified on the quote, companies could inadvertently find themselves paying 21% more than budgeted: on a €100,000 quote, that's an unplanned €21,000.

  2. Maintenance and asset management

IT systems may require procurement from multiple local vendors, and this can be a challenge when it comes to managing warranties, maintenance contracts, and the assets themselves. Even if the vendors are able to provide a maintenance service, the responsibility still rests with the company to ensure the assets are accurately tagged and added to the database. When breakdowns occur or equipment needs to be replaced, the company will need that information on hand, and not having it can be problematic. With a central point of procurement, asset management is much easier.

  3. Unknown and unforeseen factors

Operationally, dealing with suppliers and vendors in a foreign destination can be challenging. Without knowledge of local cultures and how business operates within them, it's easy to make mistakes, and those mistakes can be costly. For example, local vendors may have to bring in stock, and this can result in delays. It may be difficult to hold a vendor to account, especially if they keep promising delivery yet delays persist. Companies can be stuck in limbo waiting for equipment, installation teams are delayed, and operational teams become frustrated. These delays can end up costing the company significantly more than what was originally budgeted for the deployment.

  4. Export/import regulations

Some companies may decide to stick with who they know and buy from their usual US supplier with the view to ship the equipment to the destination using a courier or freight forwarder. The challenge comes in understanding international import and export regulations. Too often companies will simply tick the boxes that enable the goods to be shipped, even if it isn’t entirely accurate. There are many ways to ship goods that might get them to a destination, but only one correct way that ensures the shipment is compliant. Knowing and understanding import regulations, taxes and duties, including how they differ between countries is the only way to reduce risk and avoid penalties.

  5. Multiple risks and impacts

Even within regions, countries have different trade and tax regulations governing how imports are categorized and processed through customs. On major IT deployments with many different equipment components, this can become highly complex. The logistics of managing everything is equally complex, and any mistakes have knock-on effects. Keep in mind that the company usually has to work with what is deployed for a number of years, which makes the cost of getting a deployment wrong a major risk. Non-compliance with trade and tax regulations can result in stiff penalties that set a company back financially, and if logistics and installation go awry, the resulting downtime has operational implications.

Given the risks and challenges, what's the solution?

There’s value in having centralized management of international IT deployment. Especially when that centralized management incorporates overseeing trade and taxation compliance, procurement and asset management, as well as logistics and delivery. If at any stage of the deployment there are queries or concerns, there’s a central contact to hold accountable and get answers.

Initially, the costs of managing deployment centrally may appear higher, but the value comes from removing the risk of non-compliance and reducing the risk of delays and operational downtime. Plus, an up-to-date asset database makes it significantly easier to manage maintenance, warranties, and breakdowns going forward.

Companies debating which deployment route is most efficient need to consider the possible pitfalls and their ability to manage them independently. If there's any doubt, a centrally managed solution should be a serious consideration.

To hear a more detailed discussion of these and other common deployment pitfalls, listen to our recent IT Trendsetters podcast; you may find the discussion points valuable.


NVIDIA RTX 4090 Unboxing

In February 2017, I co-authored a blog detailing our build of an 8-GPU password cracker. In the years since, it's had millions of views and thousands of comments. To all the concerned writers: nothing has melted down, and we continue to run two nearly identical 8-GPU crackers today. Both are currently running 8 NVIDIA GTX 1080Ti cards.

We also got our hands on one of NVIDIA’s latest cards! We missed the October 12th launch by 9 days but finally found an overclocked Gigabyte GeForce RTX 4090 GAMING OC 24G for sale locally.

Our Goals:

Stay tuned for a future article on our monster RTX 4090 Kracken4 build!!

For our unscientific analysis, we used Hashcat's NTLM (-m 1000) benchmark (-b) to test our two current cards against the new RTX 4090: the NVIDIA GTX 1080Ti, NVIDIA RTX 3090, and NVIDIA RTX 4090.

Device (as seen by Hashcat)                      | Hashcat NTLM Benchmark Speed
GeForce GTX 1080 Ti, 11039/11178 MB, 28MCU       | 66.76 GH/s (28.03ms)
NVIDIA GeForce RTX 3090, 23680/24575 MB, 82MCU   | 121.2 GH/s (22.55ms)
NVIDIA GeForce RTX 4090, 23010/24563 MB, 128MCU  | 252.0 GH/s (16.74ms)
All benchmark tests were performed with Hashcat 6.2.6 using the -b and -m 1000 options (-O is applied automatically).

When attempting to crack a single NTLM hash with an 8-character brute-force attack, the actual average performance was closer to 225 GH/s. Without any tuning, and using the latest NVIDIA driver for Windows, the RTX 4090 could brute-force any 8-character password in approximately 8 hours!

[Screenshot: a single NTLM hash crack job running an 8-character brute-force attack.]
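For readers who want to check our math, a few lines of Python (our arithmetic, not Hashcat output) reproduce the 8-hour figure and the comparisons drawn below:

    # Sketch: the arithmetic behind the ~8-hour, 8-character brute-force claim.
    charset = 95                   # printable ASCII characters per position
    keyspace = charset ** 8        # every possible 8-character password
    rate = 225e9                   # observed real-world NTLM speed, in hashes/second
    print(keyspace / rate / 3600)  # ~8.2 hours

    # The benchmark table backs the comparison claims:
    print(252.0 / 66.76)           # ~3.8x a single GTX 1080Ti
    print(252.0 / (8 * 66.76))     # ~47% of an 8x GTX 1080Ti rig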

Our early testing shows that the NVIDIA RTX 4090 is a strong contender for high-performance password hash cracking. Despite running the RTX 4090 right out of the box, without any tuning, on a Windows 11 desktop computer, the cracking performance is amazing. Compared to our current cracking rigs with 8 GTX 1080Ti cards, a single RTX 4090 is roughly 48% as powerful. That makes the RTX 4090 almost 4x faster than the GTX 1080Ti. Stay tuned as we figure out how many 4090s we can get our hands on and combine into a single cracking rig.

Interested in learning more about SynerComm's password cracking services? Check out this page!


The Benefits of a Partnership Built on Trust – 4 Unique Case Studies

Get the Full Whitepaper for Free

What's Inside?

  • Why having a trusted advisor and partnership is an invaluable asset to your business
  • How trust and communication can maximize efficiency, minimize costs, and get the job done correctly the first time
  • How to overcome a "one-size-fits-all" mindset and choose a unique and customized solution for your organization

To read the FULL whitepaper for free, fill out the form above and click download!

How SynerComm Builds Trusting Relationships   

Every project begins by carefully listening to our customers' problems and requests. Through this process, we learn about their underlying priorities, objectives, and unique project requirements.

Instead of spending our time “selling” to our customers and prospects, we focus our energies on investing in the right solutions for our customers and letting our expertise, industry reputation, and excellent work speak for themselves.

No matter how large or small your deployment is, you need to know that you can trust your logistics and IT partner to provide you with tailored solutions, sound advice, and trustworthy white-glove service. This whitepaper was created to help you learn more about the importance of trust in your IT partnerships. 

Learn more about ImplementIT and how we can make your next project stress-free by clicking here.

The Benefits of IT-Focused Logistics:
How to Empower Your Project Manager

While project managers possess a wide variety of skills, few have the extensive experience needed to handle the logistics of large, complex deployments, and most could benefit from expert advice and assistance.

In this whitepaper, we reveal how to empower your project managers to handle the logistics of large, complex deployments by adopting an IT-focused approach. 

Get the Whitepaper for Free

What's Inside?

  • How to empower your project managers and save your organization time and money 
  • The key to keeping any project on track and progressing smoothly 
  • COVID-19's effect on IT and logistics 
  • Why risk and issue management is vital to project success 

“Many companies claim to offer “white-glove service,” offering a rigid set of one-size-fits-all processes and procedures that allow them to check off items on a checklist. Our approach is different. We value our customer relationships immensely and think of our customers as part of the team. We are driven by a deeply seated and pervasive culture that drives us to always do right by each and every customer. White-glove is more than a checklist; it’s a way of conducting business that governs every aspect of our company.”

Are you interested in how you can empower your project manager to tackle large-scale deployments with confidence? Fill out the form above to gain access to our full whitepaper.

Visit our website to learn more about ImplementIT.

SynerComm's Continuous Attack Surface Management (CASM Engine®) and Continuous Penetration Testing solution was named a top-five finalist in the Best Vulnerability Management Solution category for the 2022 SC Awards.

SynerComm's CASM Engine® has been recognized as a Trust Award finalist in the Best Vulnerability Management Solution category for the 2022 SC Awards. Now in its 25th year, the SC Awards are cybersecurity's most prestigious and competitive program. Finalists are recognized for outstanding solutions, organizations, and people driving innovation and success in information security.

"We created the CASM Engine® with the goal of being completely hands-off, while still providing multiple user-types with the most detailed and accurate attack surface reporting available today. A single solution that solves your asset inventory and monitoring, vulnerability management, attack surface management, and reporting needs," said Kirk Hanratty, Vice President/CTO and Co-Founder of SynerComm. "We are proud that our solution has been recognized as a Trust Award finalist this year."

The 2022 SC Awards were the most competitive to date, with a record 800 entries received across 38 categories, a 21% increase over 2021. This year, the SC Awards expanded the recognition program to include several new award categories that reflect shifting dynamics and emerging industry trends. The new Trust Award categories recognize solutions in cloud, data security, managed detection, and more.

"SynerComm and other Trust award finalists reflect astonishing levels of innovation across the information security industry, and underscore vendor resilience and responsiveness to a rapidly evolving threat landscape," said Jill Aitoro, Senior Vice President of Content Strategy at CyberRisk Alliance. "We are so proud to recognize leading products, people and companies through a trusted program that continues to attract both new entrants and industry mainstays that come back year after year."

Entries for the SC Awards were judged by a world-class panel of industry leaders from sectors including healthcare, financial services, manufacturing, consulting, and education, among others.

I can remember it like it was yesterday... Casey, Hans, Jason, Scott, Sam, Bill, and I were slowly destroying my hotel suite at Circle City Con while trying to win the 2015 CTF. (We took 2nd place and never got our GoPro prize... still sour, can you tell?) Among all the teams' brilliant ideas that evening was that we really needed a blog. A few hours later, #_shellntel was born.

Our intent was (and still is) to focus on pentesting, hacking, and offensive security; we feared that some articles might be too edgy for some corporate/professional readers, so we separated our #_shellntel articles from other SynerComm blogs. Over the past 7 years, things have changed, and today everyone loves pentester articles.

We are grateful for the loyal support of our #_shellntel readers throughout the years. Please continue to read about the latest IT news, tech trends, and cybersecurity threats on our new blog at www.synercomm.com/blog, or link directly to our #_shellntel articles at www.synercomm.com/blog/tag/shellntel/. All of our existing content has been moved, and all new articles will be published there going forward.

Thank you always,

Brian Judd, VP Information Assurance

SynerComm, Inc.

