When criminals and spies create and use zero days (program code that exploits unknown mistakes in software) to steal our money and our information, we unanimously disapprove. But what if our own spies and military personnel create and use them? Do we disapprove of that too? After all, they are 'the good guys'. Shouldn't they simply report the mistakes to the software makers, so that the problem can be corrected? The fewer mistakes in software, the safer we all are.
All software is full of mistakes, and most of them are not serious. But some are so serious that anyone making clever use of them can break into someone else's computer unnoticed. Criminals and spies exploit these bugs to steal money and information via the internet, including from the Dutch government, from Dutch companies and from Dutch citizens. Fewer mistakes in software would make our systems more secure. The more dependent our society becomes on ICT, the more urgent this becomes: it no longer concerns just money and information, but human lives.
Many attacks exploit mistakes that were fixed long ago. This is possible because software users often have not installed the most recent updates. The chances of success are of course greater if you exploit a mistake that no one else is aware of, and for which there is as yet no solution. In the jargon, the program code used to exploit such an unknown mistake is called a 'zero day'.
Any consideration of whether the use of zero days is ethically justifiable – and under which conditions – in my view should be based on at least the following two factors: the specificity of the zero day and the nature of the threat against which the zero day is deployed, in particular the urgency and potential impact.
A zero day called 'EternalBlue' was published on the internet earlier this year by a hacker group. It is generally believed that this zero day was stolen from the American NSA. EternalBlue was not specific and could be used against multiple versions of Windows, from Windows XP to Windows Server 2016, and therefore against the majority of companies, governments and private internet users all over the world. A solution for this bug would make hundreds of millions of systems more secure.
We will never know whether EternalBlue was ever deployed against an imminent threat with great impact. But we do know with reasonable certainty that the NSA had EternalBlue at its disposal for at least six years. An imminent threat with great impact does not last six years.
There has very likely been a long period in which there was an ethical choice: allow Microsoft to correct the bug, or keep control over the exploit. Making millions of systems more secure is usually preferable to keeping the exploit under wraps in case it should prove useful against an imminent threat of great impact, or to using it regularly against diverse threats. Don't forget that our opponents have also had 15 years to find the bug and use it before it became public. We will probably never know whether that actually happened.
My second example is short and theoretical: imagine that there is a threat in the Netherlands, and as the security service you know that a specific zero day could mean a breakthrough in your investigation. I believe that using such a zero day is, in certain cases, ethically justifiable: cases in which the threat is imminent and the impact substantial, concerning human lives, a matter of life and death. And there are conditions attached to this use: as soon as the threat has been defused, the bug must be reported to the software makers so that they can correct the problem and thereby increase the resilience of others. This condition becomes more important the more generic the zero day is.
In practice, I have no direct influence on the way in which security services deal with zero days. But I do have influence on the way in which my employer deals with them.
Mistakes in software must be reported to the makers so that they can correct them. Fewer mistakes in software will mean fewer problems: fewer data leaks, less identity fraud, less espionage and less theft. And eventually, as our society becomes ever more dependent on ICT, fewer deaths.
In my view, deviating from this is justifiable only in very specific cases. And for a commercial entity, cooperating in such a deviation is justifiable only in very specific cases and under specific conditions. That is the advice I give my employer, Fox-IT, in my role as a member of the ethics committee, and in line with our current CSR policy.