We’re roughly six months into the world’s sudden, unplanned leap into a work-from-home (WFH) lifestyle. And most of the IT policies thrown together to handle a sprint are showing clear gaps now that we’re running a marathon. To see what we’ve learned so far and how organizations should be adapting, we talked with Pratum CEO Dave Nelson and PC Matic Federal President Terry McGraw.
Here we share the first of two blogs featuring an edited transcript of the conversation with these cybersecurity leaders. You can watch the full video below.
What are the key threats at this point?
Dave: An uptick in social engineering attacks. With this shift to remote work, a lot of informal approvals in the office went away. Now you can’t just check in with your boss down the hall about a transaction they want you to make. There’s a lot of confusion, and processes weren’t solidified during the WFH transition. Attackers are capitalizing on the chaos with spearphishing, pretexting, and other attacks.
Terry: This scenario has accelerated and exacerbated parts of the cyberthreat landscape that have been there for a while but had a limited vector.
The measures we took to ensure the mobile workforce was secure now have to apply to your general organization, and I don’t think our architectures were well equipped to do this at scale.
Terry McGraw, President - PC Matic Federal
Social engineering and deepfakes still work because people lack two-party check systems. If I get an e-mail or a phone call that seems a little suspect, I should have a two-party check to verify it.
How have the threats changed?
Terry: The barrier to entry to being an e-criminal now is just a desire to commit crime. Five, eight years ago, people needed to know how to craft and employ these tools. Now you can lease the infrastructure to create an attack. How quickly tradecraft becomes commoditized and then reused in the e-crime environment is one of the biggest upticks we’ve seen.
Dave: When you think of the physical tools you need to carry out a war, the U.S. was well-equipped with the infrastructure to build the tools for that. But in cyberspace, a small organization that’s not even backed by a nation state but wants to rain down terror can lease resources and target and overwhelm someone in a very short period of time.
How should we adjust IT architecture for this environment?
Dave: We have to move to an environment where I don’t care what device you’re accessing data from or what location you’re accessing it from. I need to protect data because that’s what moves around in a vendor environment or a client environment.
In WFH, we sent a lot of people home without a laptop. They went home, and we turned on VPNs we didn’t have turned on before. We allowed the use of a personal computer that’s shared by everyone in the home and that probably has viruses running all over it. We allowed that to connect into the corporate network or the corporate cloud. Now we have all these unknown devices and unknown threats sitting there unmanaged.
We figured we could do that for 90 days. But now we’re in September, and we’re thinking it may be next June before people go back, if we’re lucky. So we have to reevaluate those risks we took early on.
We have to think about moving to a data-centric model. If you haven’t even begun, you’re behind the eight ball already.
Dave Nelson, CEO - Pratum
Terry: I’m a big fan of zero trust architecture, which, at its core, is being as granular as you can be in user object permission schema and validating that the data and the user are scoped to the exact access they need and validated every time.
I can’t tell you how many times I’ve walked into an organization that swore they had multifactor authentication (MFA), and it’s nowhere in sight. Sadly, the percentage barely moves year over year.
It’s primarily an architectural problem, but it’s exacerbated by the fact that we don’t have basic blocking and tackling in place. We don’t have MFA involved. We don’t have a good handle on our data. We don’t have full asset enumeration. Those were all problems we could gloss over because we had a somewhat contained office environment. But now you’ve broadened the aperture. You have to just assume everything is dirty. You have to look at containerization and segmentation and MFA.
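Terry’s “granular user object permission schema, validated every time” can be sketched in a few lines. This is a hypothetical illustration of the zero trust idea, not any vendor’s implementation; the names (`Request`, `authorize`, the sample grants) are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical zero-trust check: every request is evaluated on its own,
# with no implicit trust carried over from network location or past sessions.
@dataclass(frozen=True)
class Request:
    user: str
    mfa_verified: bool      # fresh MFA for this session
    device_managed: bool    # device posture attestation
    resource: str
    action: str             # "read" or "write"

# Permissions scoped per (user, resource, action) -- deny by default.
GRANTS = {
    ("dave", "payroll-db", "read"),
    ("dave", "payroll-db", "write"),
    ("terry", "payroll-db", "read"),
}

def authorize(req: Request) -> bool:
    """Re-validate identity, device posture, and exact scope on every call."""
    if not (req.mfa_verified and req.device_managed):
        return False
    return (req.user, req.resource, req.action) in GRANTS
```

In this sketch, `authorize(Request("terry", True, True, "payroll-db", "write"))` returns False: Terry is scoped to read only, so even a fully authenticated session can’t exceed the exact access granted.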
What are the first steps for tackling these challenges?
Terry: I like to start with a macro model. There are lots of frameworks that deal with pieces of the problem. But if you raise it up one level, I need three major things to reduce my business risk in a cyber environment:
- I need to have sensing technologies that determine adversary access to my environment.
- I need a view of myself. I need to understand the limits of my environment and have good eyes on things accessing my data.
- I need to have a good handle on all the things that are mission-critical in my environment and those I do business with.
The one thing I would do today if I hadn’t already done it is implement MFA. I need to make sure everyone touching my environment is authenticated from the system they’re working on.
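As one concrete flavor of MFA, a time-based one-time password (TOTP, per RFC 6238) can be generated with nothing but the Python standard library. This is a minimal sketch of the algorithm to show what the second factor actually computes, not a substitute for a vetted MFA product.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second window."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Both the server and the user’s authenticator app derive the same code from a shared secret and the clock, so a stolen password alone no longer grants access.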
Dave: If you did a risk assessment before, the environment has changed. So ask four main questions:
- What data do I have? Assess your risk based on the confidentiality, integrity and availability of that data in the whole life cycle.
- Where does it come from?
- What do I do with it while I have it? Does it go outside the organization?
- What happens when I’m done with it? Does it need to be saved somewhere? Destroyed?
In each piece of the life cycle, your risk changes because different people and systems have different access. Assessing risk continually is really critical.
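Dave’s four questions can be turned into a simple inventory record that forces an answer for each stage of the life cycle. This is a hypothetical sketch; the field names and the crude scoring are illustrative, not a formal risk methodology.

```python
from dataclasses import dataclass, field

# Hypothetical data-inventory record answering the four lifecycle questions,
# plus confidentiality/integrity/availability impact ratings (1 = low, 3 = high).
@dataclass
class DataAsset:
    name: str
    source: str                  # where does it come from?
    uses: list = field(default_factory=list)  # what do we do with it? does it leave the org?
    disposition: str = ""        # saved, archived, or destroyed when done?
    confidentiality: int = 1
    integrity: int = 1
    availability: int = 1

    def risk_score(self) -> int:
        # Crude ranking to prioritize reassessment as the environment changes.
        return self.confidentiality + self.integrity + self.availability

payroll = DataAsset(
    name="payroll records",
    source="HR onboarding system",
    uses=["monthly payroll run", "shared with external benefits vendor"],
    disposition="destroyed seven years after termination",
    confidentiality=3, integrity=3, availability=2,
)
```

Re-running the scoring whenever access patterns change (new vendor, new WFH device class) is one lightweight way to make the reassessment continual rather than annual.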
How much can we realistically expect end users to maintain home routers and handle other IT tasks?
Dave: I don’t think it’s realistic to expect anything out of them. So it goes back to zero trust architecture. Say, “I don’t trust you or the devices you’re coming from, even if it’s a device I manage.” If I assume that I’ve been breached, then I don’t care anymore about the workstation. I care about the user and what they can do. So the key is really restricting user access.
Let’s say I click a link and get ransomware. Anything I have access to is subject to being encrypted. If we restrict Dave’s access to only what he absolutely needs to do his job, then we can restrict the depth to which ransomware gets into our organization and starts encrypting files, which reduces the cost.
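Dave’s blast-radius point can be made concrete with a toy model: ransomware running as a compromised user can only encrypt what that user can write. The file names and ACLs below are invented for illustration.

```python
# Toy model of least privilege limiting ransomware reach.
FILES = ["hr/payroll.xlsx", "eng/design.doc", "finance/ledger.csv", "it/backups.img"]

# Broad access: every user can write everything.
BROAD_ACL = {"dave": set(FILES)}

# Least privilege: Dave gets only what his job requires.
LEAST_PRIVILEGE_ACL = {"dave": {"eng/design.doc"}}

def blast_radius(acl: dict, compromised_user: str) -> set:
    """Files a ransomware process running as this user could encrypt."""
    return acl.get(compromised_user, set())
```

Under the broad ACL, a single phished account exposes all four files; under least privilege, the same compromise touches one, which directly limits recovery cost.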
Terry: Even in traditional networks, limiting lateral scope is important. Microsegmentation has been growing for a while, but it’s been cost-prohibitive. Now, with more cloud data environments and necessity being the mother of invention, I think we’ll see more microsegmentation solutions hitting the market soon.
You should also validate that what you think about your environment is true. That probably means having a third-party organization doing a pen test.