Is Zero Trust segmentation the answer to mitigating ransomware threats?


With the decline of the traditional enterprise network perimeter, more and more organizations are turning to a zero trust approach to securing their systems.

This not only reduces the attack surface but also ensures that, if an attack does succeed, it's much less likely to spread laterally within the network. We talked to Tim Silverline, VP of security at network automation specialist Gluware, to find out more about what implementing zero trust means.


BN: Why is it important to check identity on every network request?

TS: First, I would point out that not every network request needs to have identity checked. Obvious examples are publicly accessible servers and DNS requests. It isn't reasonable to constantly check identity on these sorts of requests, since organizations typically don't have access to the identities of customers using their public resources. In addition, verifying identity on each DNS request would slow performance greatly while mitigating little to no security risk. Where it is important to check identity continuously is on network traffic associated with critical services, applications, assets, or privileged data. This matters because identity is the basis for the least-privileged policies the zero-trust system enforces. Policies that cannot leverage identity become too broad, which increases the overall attack surface that zero trust network access solutions aim to minimize.

BN: Where do you start when planning to implement zero trust?

TS: The first place to start is defining the scope you are looking to protect with the zero-trust solution, sometimes referred to as the protect surface. This is distinct from the attack surface, which is much larger and harder, if not impossible, to fully apply zero trust principles to. Organizations must consider which sensitive applications, data, assets, and services they are looking to protect. They then must map the network flows that apply to these items and document the enforcement points within those flows where policies could be applied. Some systems may have client-side software that can put enforcement points as close to workloads and data as possible. Other systems, such as IoT and SCADA devices, may be restricted in their ability to control access at the endpoint level, meaning the enforcement point to limit access will have to be built into the network using a firewall. In all, a strategy must be devised to protect the important systems and data in a cohesive way that is manageable for the size of the organization. For some organizations this might mean looking at commercially available software solutions, and for others it may involve building their own software to automate their existing technology stack. Often it ends up being some combination of the two to cover everything important with a zero-trust methodology.
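The mapping exercise Silverline describes can be sketched as a simple inventory. This is an illustrative data model only, not any particular product's API; the asset names, ports, and enforcement-point labels are hypothetical. It shows how each protect-surface item is tied to its network flows and to the point where policy can realistically be enforced (a host agent where client software is possible, a network firewall for SCADA-style devices that cannot run one):

```python
from dataclasses import dataclass, field

@dataclass
class Flow:
    source: str
    destination: str
    port: int
    enforcement_point: str  # where a zero-trust policy can actually be applied

@dataclass
class ProtectSurfaceItem:
    name: str
    category: str  # "data", "application", "asset", or "service"
    flows: list = field(default_factory=list)

# Hypothetical inventory illustrating the mapping exercise
payroll = ProtectSurfaceItem("payroll-db", "data")
# Servers that can run client-side software enforce as close to the workload as possible
payroll.flows.append(Flow("hr-app", "payroll-db", 5432, "host-agent"))

plc = ProtectSurfaceItem("factory-plc", "asset")
# A SCADA device cannot run agent software, so enforcement moves into the network
plc.flows.append(Flow("scada-hmi", "factory-plc", 502, "network-firewall"))

for item in (payroll, plc):
    for f in item.flows:
        print(f"{item.name}: {f.source} -> {f.destination}:{f.port} "
              f"enforced at {f.enforcement_point}")
```

The output of this walk-through is effectively the document Silverline recommends producing: every sensitive item, its flows, and the enforcement point for each flow.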

BN: Does least privilege look different depending on the type of organization?

TS: The concept of least privilege doesn't change with the type of organization, but the number of user types, the number of distinct privileges, and the protect surface can be substantially greater in larger organizations with more overall assets, making least-privileged policies much more challenging to build. Ultimately, though, least privilege should mean the same thing to every organization: default denial of access for every system and user, from a data, system, and networking perspective, unless that access has a justifiable and documented business requirement.
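The default-deny rule described above reduces to a very small check. The sketch below is a minimal illustration, not a real policy engine; the identities, resources, and ticket reference are invented for the example. Access is granted only when an explicit entry with a documented business justification exists, and everything else is denied:

```python
# Allow-list keyed by (identity, resource); the value records the documented
# business requirement that justifies the access (hypothetical example entry).
ALLOWED = {
    ("svc-payroll", "payroll-db"): "Payroll service reads salary records (ticket BIZ-1042)",
}

def is_allowed(identity: str, resource: str) -> bool:
    """Deny by default; allow only pairs with a recorded justification."""
    return (identity, resource) in ALLOWED

print(is_allowed("svc-payroll", "payroll-db"))   # True
print(is_allowed("intern-laptop", "payroll-db")) # False, no documented requirement
```

In a large organization the challenge is not this check but populating and maintaining the justification table across thousands of identities and resources, which is exactly why the protect surface should be kept small at first.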

BN: What are some of the problems organizations encounter when moving to zero trust?

TS: One of the problems organizations encounter is trying to apply zero trust to everything at once. This is why the first step of defining the protect surface is so important, and why it is ideal to start with a smaller surface when possible. After implementing a successful zero-trust strategy that covers a smaller portion of the organization's assets, it is often easier to expand the surface to cover more of the environment using lessons learned from the initial rollout. Another challenge companies run into is disrupting user workflows while implementing the technology and having to roll the solution back due to user complaints or business disruption. Thorough testing should be performed prior to deployment to mitigate this risk.

BN: Are we going to see a zero-trust approach become universal in the next few years?

TS: I don't think we will ever see any single approach to cybersecurity become universal. Too many companies still run with the mindset that they don't need to invest in cybersecurity until after they are breached. However, I do believe adoption will improve as the technological approaches to implementing zero trust mature and become more automated from an administrative perspective. Also, now that we have standards and frameworks not just from vendors but from organizations like Forrester, NIST, and CISA discussing guidelines and methodological approaches to implementing zero trust, there are many more resources for companies to leverage in determining the best overall solution for themselves, and this should help increase adoption rates as well. Lastly, cyber insurance providers are starting to become a huge force in pushing companies to adopt specific strategies. I think the biggest momentum shift towards widespread adoption will happen when we finally see requirements around zero trust coming from the insurance industry.

Image credit: Olivier26/depositphotos.com
