My previous blog entry (“Exactly Where Are All Your Field Assets, Anyway?”) seems to have garnered the attention of my respected industry colleague, Eric Byres at Tofino. He seems to like the way I pointed out some particularly worrisome aspects of recent vulnerability disclosures, but took issue when I wandered a little too carelessly into his strike zone. Specifically, he thinks I made a “serious technical error” when I said, “Put all the deep packet inspection on it you want – you won’t find a signature.”
Now, to be clear, I like Eric. I consider him a good friend, I have a lot of fun discussing industry challenges with him, and I genuinely look forward to seeing him at the next S4 conference in January. But, honestly, I think he reads a little more into my casual statement than is really there, and may have even made a technical error of his own in the process. Specifically, I made the original statement in an effort to illustrate the shortfalls of today’s tools in dealing with this issue; and in his reply, I think Eric makes a statement that is a similarly overly broad generalization. Even more importantly, however, I think Eric has missed a key point at the heart of the matter.
So, by way of extending our little dialog, I offer up the following response:
Thanks for taking the time to read my thoughts on the impact of the Crain/Sistrunk vulnerabilities, and sharing with me some thoughts of your own. I agree that the key issue is not so much the vulnerabilities themselves, but the broader implications for all ICS protocols and how they are implemented.
I would also like to apologize for being a bit too casual in my reference to Deep Packet Inspection. I realize that not all DPI firewalls depend solely on signature databases, although I think your statement that “DPI firewalls don’t use signatures” may also be a bit too broad to hold up under scrutiny. It’s not too hard to find spec sheets that brag about DPI and signature databases all in the same breath. But let’s not get hung up on semantics. The better DPI firewalls – as you mention – do in fact go the extra mile and implement packet validation. However, the ones I have seen so far still fall short in one critical area: they are looking for bad packets instead of good ones.
In other words, DPI firewall vendors are looking for packets that are malformed in the same ways that Adam and Chris have found (along with any other vulnerabilities that have been published). I’m sure they can also add an endless string of rules looking for every way that an exploit can be crafted, so long as they know about the vulnerability upon which it is based. And that’s the key. They are looking for bad packets as defined by the vulnerabilities that have been found and published so far, not good packets as defined by the specification.
The difference in looking for bad packets instead of good ones boils down to the classic blacklist vs. whitelist approach. You may not call it a signature, but if you are looking for malformed packets as defined by published vulnerabilities – even if they have been generalized, you are still looking for a pre-configured and pre-determined attack pattern. This leaves us in a constant game of catch-up, and does nothing for vulnerabilities found by people that don’t publish.
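The distinction is easy to sketch in a few lines of Python. Everything below is illustrative only – a hypothetical message format and made-up byte patterns, not any real firewall’s rule set or any actual exploit signature:

```python
# Illustrative only: a toy message format with a type byte and a length byte.
# A blacklist rejects known-bad patterns; a whitelist accepts only what the
# specification explicitly allows.

KNOWN_BAD = [b"\xff\xff\xc0\xde"]  # hypothetical published attack patterns


def blacklist_ok(packet: bytes) -> bool:
    """Pass anything that doesn't match a published attack pattern."""
    return not any(sig in packet for sig in KNOWN_BAD)


def whitelist_ok(packet: bytes) -> bool:
    """Pass only packets that conform to the (hypothetical) spec:
    a type byte in 0x00-0x0f and a length byte that matches the payload."""
    if len(packet) < 3:
        return False
    msg_type, length = packet[0], packet[1]
    return msg_type <= 0x0F and length == len(packet) - 2


# A never-before-published malformed packet sails past the blacklist
# but is stopped cold by the whitelist:
unknown_attack = bytes([0x7A, 0x02, 0x00])   # type 0x7a is outside the spec
print(blacklist_ok(unknown_attack))          # True  -- blacklist misses it
print(whitelist_ok(unknown_attack))          # False -- whitelist rejects it
```

The asymmetry is the whole point: the blacklist can only ever be as good as the list of vulnerabilities we already know about, while the whitelist rejects the unknown ones for free.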
So, where we part company is in our assessment of the effectiveness of today’s DPI. I would like my firewall to know exactly what constitutes a valid DNP3 packet, no more and no less. And if you build one, I will be most interested to see how it performs. DPI may one day be a valid defense just as you say, but it is not there now – at least not for DNP3.
Please keep me posted on developments. I like what I have seen from Tofino so far, and would love it if you can implement DPI the way it should be.
In closing, I should note that my first preference (even above firewall capabilities) would be to have packet validation built into the application as a separate step before any part of the packet is parsed. This may seem a bit far-fetched, but I will be talking about this approach and showing how it can be done along with Meredith Patterson and Sergey Bratus at S4 in January. Hope you can join us for the fun.
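For the curious, the shape of that approach looks something like the sketch below. The checks shown are deliberately simplified – they cover only a couple of DNP3 link-layer framing basics (the 0x05 0x64 start octets, the 10-octet header, the minimum LENGTH value) and omit CRC verification and the transport and application layers entirely, which a real validator would have to handle against the full grammar:

```python
def looks_like_dnp3_frame(frame: bytes) -> bool:
    """Structural validation run BEFORE any field is handed to the parser.
    Simplified sketch: a real validator would also verify the CRC blocks
    and validate the transport/application layers against the full spec."""
    if len(frame) < 10:              # link-layer header is 10 octets
        return False
    if frame[0:2] != b"\x05\x64":    # DNP3 start octets
        return False
    if frame[2] < 5:                 # LENGTH covers control..data, minimum 5
        return False
    return True


def handle_frame(frame: bytes) -> None:
    """Validation as a separate, explicit step ahead of parsing."""
    if not looks_like_dnp3_frame(frame):
        raise ValueError("frame rejected before parsing")
    # ...only now hand the frame to the actual protocol parser...
```

The design point is that the application refuses the input as a whole before any parsing code ever touches it, rather than discovering mid-parse that a field is nonsense.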
Recently, we have seen a blog post and a few articles in the news that should have raised the hairs on the back of the neck of any utility engineer with a pulse. Specifically, a couple of researchers started finding vulnerabilities in products from every manufacturer of DNP3 equipment that they tested.
Implementations of the single most common field device communication protocol deployed in the utility sector are littered with vulnerabilities.
It gets better.
Vulnerabilities have been found in remote devices (servers) as well as master stations (clients).
Still not good enough?
The vulnerabilities apply whether we are talking about serial communications or TCP/IP.
Pause and think about that for a minute. We now know about vulnerabilities that apply to both serial and IP for the most common field communications protocol in the industry, in both master stations and field equipment from every manufacturer tested so far.
About this time, you should be wondering if the implementation of DNP3 in your control center is vulnerable. You should also be starting to think about all the different places you have field equipment deployed that is connected back to your SCADA master using DNP3.
The first place that most people have started talking about these devices is a substation. Too many engineers are searching for ways to make themselves feel better because there is a fence and/or a locked building keeping the bad guys out. Maybe even a camera, too.
Unfortunately, those defenses don’t do much to slow down someone who has access to a key that has been duplicated hundreds of times over the last 10 years or so. Or someone with halfway decent lock-picking skills. They might even have a uniform so they don’t have to bother with blinding the camera.
And honestly, no halfway informed attacker is going to mess with a substation when they have much easier access to many more pad-mount and pole-mount devices in more remote and less noticeable locations. With no cameras.
A few people have even become distracted by the discussion of whether such an attack could get past a firewall. Most firewalls will let a packet right through if the source and destination IP address, port numbers, and protocol headers all look correct. Put all the deep packet inspection on it you want – you won’t find a signature. And if that’s not enough to convince you, how many utilities even have a firewall between their SCADA master and their field devices?
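To make that concrete, here is a toy model (hypothetical addresses; 20000 is a commonly used DNP3-over-TCP port) of the header-only check a conventional firewall performs. As long as the 5-tuple matches an allow rule, the payload rides through untouched:

```python
# Toy model of a conventional (non-DPI) firewall rule: it inspects the
# source/destination addresses, port, and protocol -- and never looks
# at the payload at all.
ALLOW = {("10.0.0.5", "10.0.1.20", 20000, "tcp")}  # hypothetical SCADA rule


def firewall_permits(src: str, dst: str, dport: int,
                     proto: str, payload: bytes) -> bool:
    """The payload parameter is accepted but never examined."""
    return (src, dst, dport, proto) in ALLOW


# A malformed DNP3 payload from a "trusted" master address still passes:
print(firewall_permits("10.0.0.5", "10.0.1.20", 20000, "tcp",
                       b"\x05\x64\xff\xff"))   # True
```

The malformed bytes could be anything at all; the rule never reads them. That is exactly why header filtering alone buys nothing against these vulnerabilities.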
Scenario: Late on a Friday night, you lose communications to a cap bank controller in the remotest corner of your service territory for about a minute and a half. The device comes back on line and everything looks normal. How long would it be before you had a lineman go physically inspect the device? If everything seemed to be working fine, would it be inspected at all? Would a lineman be sufficiently trained to be able to spot a very unsuspicious, factory-looking device that had been inserted between the controller and the comms? What if the attacker’s device was small enough to be obscured from ready view? Would the lineman pull the assembly apart looking for a cause?
There’s a children’s story about an emperor and his sense of fashion waiting for us at the end of that road.
The best cybersecurity question ever asked surely must be “should we use IPSec or TLS?” The question itself seems straightforward enough: these are the two most common technological implementations of communications encryption, and there are some interesting differences in the implementation implications between the two. So what is it that makes this question so special?
Often this question is asked by a well-meaning, conscientious technologist looking to “build security in” to their product or solution. The technologist is not a security expert, and will readily admit as much; but they have done enough research to know that there are key differences here (no pun intended) that will have a substantial impact on how their product is built. So they have taken the initiative to seek out the counsel of someone who really understands security before finalizing their design and going to market with an insecure product.
While there are legitimate contexts in which the question of IPSec or TLS is worthy of discussion at face value, more frequently this question is a flashing, neon warning sign to the experienced security practitioner that portends much sighing and gnashing of teeth ahead for at least one of, if not both, the interrogator and interrogatee. The technologist asking the question knows they will need to tell their security friend about their application so that they can make an informed decision and provide sound advice. What the technologist does not know is exactly how much information that will be, or – more critically – just how many other security considerations are about to be uncovered.
Security is about more than just encryption. As a matter of fact, sometimes security is about everything but encryption. Because at the end of the day, security is about making sure the technology does what it’s supposed to do, and doesn’t do what it’s not supposed to do – all from the perspective of a specific stakeholder. Availability… Integrity… Confidentiality… each has a myriad of answers and solutions for a virtually endless supply of problem sets. NONE of them are always right for every situation.
So the next time you find yourself looking for some help in determining which kind of encryption library you need to include to make your product secure, be forewarned that the right answer will include more questions than you ever imagined would be relevant, and is likely to uncover the fact that at this point in the product cycle, you are most likely not “building security in.”
How do we determine what to spend on security? How do we evaluate the cost of security failure? What basis do we use to frame our risk models?
Executives and Boards of Directors must value their company according to traditional accounting practices. If the company were to be sold tomorrow, what would the price tag be? Obviously there is considerable wiggle room here, which is why we have negotiations. And while speculative new tech stocks might jump in an IPO, it would be rare for a T&D utility’s valuation to be off by anything even approaching 20%; a more realistic margin of variability is less than 5%.
But what if the price tag was off by an order of magnitude – that is, selling for $50 when it really should be $500? Or how about two orders of magnitude? Wouldn’t we quickly have pandemonium in the market (and probably some very scared accountants)? Yet that very level of discrepancy is what seems to go completely unrecognized in our discussions about utility security.
On one hand, you have security practitioners scratching their heads and asking why we aren’t spending more on security. They look at the state of our legacy field-deployed systems, the frustrations of obtaining even a modicum of security in new field-deployed products, things like Stuxnet and SHODAN, and the results of the Northeast Blackout of 2003 (causing numerous deaths and costing billions of dollars), and ask “how will we defend our choice to skimp on security in the wake of a cyber-triggered catastrophe?” Electricity is fundamental to modern society. We must protect our nation’s critical infrastructure.
On the other hand, you have utility management looking at the numbers and saying, “we can’t spend more on security than the business function we’re protecting is worth.” Security is ultimately a quality modifier. For a security measure to make business sense, its cost must be a relatively small fraction of the business function it protects. So, no matter how politically correct it might sound to put yourself out of business because you’re doing the right thing, it’s not a sustainable model regardless of who steps in to fill the utility’s shoes.
I’m tempted to ask how we reconcile these perspectives on utility security spending, but I’m not sure most people in the chatter even realize they are having two different conversations. We can’t ask utilities to put themselves out of business by arbitrarily spending countless dollars on security. Nor can we effectively protect our critical infrastructure on a budget derived from traditional corporate accounting. Meanwhile, the only tool we’ve tried so far – regulation – is a coarse, inflexible, and slow means of forcing the issue in an environment that requires precision and rapid adaptability – all while doing nothing to resolve the disparate points of reference.
We need to agree on a cost basis if we are ever going to converge on the definition of a reasonable solution. Otherwise, our contextual disconnect is likely to turn into a physical one.
Welcome to the UtiliSec blog. Please feel free to provide us feedback by contacting us directly.