MaS is about computer security, malware and spam issues in general.

2008/09/27

We may be losing


Yesterday (26 Sept. 2008), I was on a panel at the Polytechnic Institute of NYU to discuss targeted malware. I chose the title above for my introductory talk to be provocative, but, in truth, the security industry is in reactive mode and is slow even at that. I wanted to outline, in a nutshell, what I think the problems are and what the intended interdisciplinary audience could be thinking of doing about them. Because I was trying to stick to my allotted 5 minutes, I didn't get through all the material, so here is what I was trying to communicate:

Problems

The tendency to overcomplicate design

We overcomplicate systems in two ways. (1) In engineering we seem to want to use the hammer we know and treat every problem as a nail. We are also very much caught up in a legacy mode of thinking: OSes like OS/390, Windows, and Linux are examples, and the same goes for application frameworks. We try to shovel every problem into these without asking whether it makes sense, which results in overcomplicated designs that are impossible to understand and audit. (2) The systems are also very hard for the user to actually understand. We design the UI in a way that hides the internals and is alien to how people think the system goes about its business. The fact that a spammer can abuse the email sender field is a total HCI failure.
Furthermore, we never seem to consider security and privacy in the initial design. “Let’s get this out first, and see if people like it.” We lived with macro viruses for such a long time because Microsoft thought that macros in documents might be a good idea. Only when they disabled running macros in documents by default did the problem largely disappear - and no one thought it was a big loss.
User education continues to fail, as we can barely explain things to each other, let alone to non-technical users. The landscape also continues to change too quickly. For the longest time, I would say that documents could not be infected, hoping it would stay true even though we were aware of the possibility. And then WM/Concept hit the scene in 1995...
We are fighting an ecology of cyber criminals

This is not about ‘one criminal’ or ‘one gang’. There is a vast network of service providers that the principal perpetrator uses for nearly every aspect of the crime; it is a surprise they have not adopted an SOA architecture yet. For law enforcement it is extremely difficult to find every party involved, and even then many of the actors turn out to be outside their jurisdiction, while the principal perpetrator is heavily shielded.
The 80/20 rule seems to apply: criminals can extract most of their profit from perhaps 20% of the potential targets - the people who have not updated their systems and kept them secure enough. I would like to say these victims must be gullible, but the attacks have become very sophisticated, and not enough has been done to make the deception easy to spot. There are also attacks against high-value targets, but these are rare for a variety of reasons: the economies of scale are unfavorable, and the methods used make the perpetrators easier to trace. Such attacks do exist, though, and are probably being used in IP theft and in patriotically motivated citizen cyber warfare (e.g., Estonia and Georgia).
Security vendors unfortunately have the same problem: they have to go after the 20% of potential vulnerabilities that they feel will be responsible for 80% of the attacks. Covering all bases is impossible, and the landscape is constantly changing anyway. I know this because I typically look further down the pipeline than most.
The trick is to make the attack not worth the criminal’s while. If a bank can limit the potential payout, make the attack inconvenient enough, or build a mandatory delay into processing to give law enforcement an edge, the criminals will move elsewhere. You still have to stay vigilant, as you don’t want to become the low-hanging fruit.
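The economics here can be made concrete with a back-of-the-envelope expected-value model. This is a minimal sketch with made-up numbers of my own (the function, its parameters, and all figures are illustrative assumptions, not data from the talk):

```python
# Illustrative sketch: when does a mass attack stop being worth the criminal's while?
# All figures below are invented for demonstration purposes only.

def expected_profit(targets, hit_rate, payout_cap, cost_per_target, recovery_rate):
    """Expected net profit of a mass attack campaign.

    targets         -- number of potential victims contacted
    hit_rate        -- fraction of targets successfully defrauded
    payout_cap      -- maximum amount extractable per victim (bank-imposed limit)
    cost_per_target -- attacker's marginal cost per target (spam, hosting, mules)
    recovery_rate   -- fraction of payouts clawed back, e.g. because a mandatory
                      processing delay gave law enforcement time to freeze transfers
    """
    gross = targets * hit_rate * payout_cap * (1.0 - recovery_rate)
    costs = targets * cost_per_target
    return gross - costs

# Without countermeasures: the campaign is profitable.
print(expected_profit(100_000, 0.002, 500.0, 0.05, 0.1))   # ~ 85000

# Cap the payout and add a delay that lets 80% be recovered: a net loss.
print(expected_profit(100_000, 0.002, 100.0, 0.05, 0.8))   # ~ -1000
```

The point of the sketch is that the defender never needs to stop every attack; pushing the expected profit below zero (or below what a softer target offers) is enough to move the criminals elsewhere.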

Dealing with the problem

System and service providers must assume the risk

EULAs and Terms of Use typically release the system provider from any responsibility for producing secure software. This is a mistake and probably shouldn’t be legal. A system of insurance should be instituted to manage the security risks in software; the insurance industry has a lot of experience with risk mitigation and will find a good balance.
There is a desperate need to over-engineer software, especially with respect to security. The problem is that current economics don’t encourage this practice. It is hard to become a car manufacturer because building a car that withstands all the stresses placed on it takes long experience. And failure to do so is no longer tolerated: there are regulations and litigation pressures to do one’s best. Interestingly, the regulation doesn’t specify methods, but outcomes. While I don’t like the car analogy too much, I think similar ideas can come into play in the software industry.
Systems must be less brittle

Brittle in this context means that system interaction breaks down into an inhuman experience when things go wrong. A system shouldn’t dissemble when confronted with unusual input; its failure mode should be rooted in common sense. System design must be rooted in human expectation, not just machine feasibility.

Reactive Security

Lastly, security today is merely reactive - and it is not even reactive enough. Diagnosis needs to occur much closer to the user so that timeliness and context are not lost. At the moment, malware is collected in an ad hoc fashion, then signatures are created and deployed; the time between a threat appearing and being detected is far too long. We tried to fix this with the Digital Immune System, but tragically it never was deployed as expected and is now dead with no apparent replacement. Perhaps better solutions are still being shunned by customers because they are often heuristic-based, while traditional solutions are preferred as they are perceived to have a more deterministic outcome. This is faulty thinking, however. The variety of solutions has to work in concert, but for that to happen, they must understand each other.

This is pretty dense, and even though I compressed it further when speaking, I think I may have gone overtime. Of course, so did everyone else, so there was not really enough time left for a proper discussion. But during the breaks and over lunch and dinner these topics came up again and again.
It was a great workshop and fun to meet old friends and make new ones. Thanks to the organizers who put it together. It would be interesting to attend the next time, too!
For students, there is a similar workshop held nearly every year and organized by IFIP: the IFIP Summer School on Security, Privacy and Society organized by IFIP WG 9.2, 9.6/11.7 and 11.6. The last one was in the Czech Republic. As a student, you can get credit for participating.
