possible.
There had always been problems with operationalizing high security. The keys to the Exchange were information and transaction speed. During the crash of 1929, the ticker tapes that recorded trades and were the lifeblood of traders had run hours behind events. The growing lag had spread panic and, it was believed, intensified the financial disaster. Traders had speculated in the dark, acting on rumors, many of which later proved unfounded. Reforms, including faster ticker machines and new regulations concerning trades, had improved transactions and renewed traders’ faith in the Exchange but never eliminated a lingering unease.
NYSE Euronext traded equities, derivatives, futures, and options of nearly every sort. It listed nearly ten thousand individual items from more than sixty countries. The Exchange’s markets represented a quarter of all worldwide equities trading and provided the most liquidity of any global exchange group, meaning it was almost always possible to actually make a trade. It was rapidly working to become the only exchange any trader would ever need for every kind of financial trading transaction.
As a consequence, NYSE Euronext had embarked on the greatest expansion in its history. When the expansion was completed, nearly all the world’s trades would, at some point, pass through the Exchange’s computers. The envisioned future was breathtaking in its audacity.
Nothing so innocuous as a bit of untargeted malware was going to bring the integrity of NYSE operations into question. The implications of broad distrust in its security were simply unimaginable, not just to the Exchange, but also to the interconnected world financial system. It was a system that operated largely on faith. Break that faith, and a financial catastrophe of epic proportions loomed.
As the pair had expected, NYSE system security was first-rate. But once past the initial layer of defense, Jeff discovered the same erratic patching he had seen time and again with companies that asked the public to trust them with their private information. Some of this exposure had to do with time, as a certain delay was inherent in how patching was actually performed. First the vulnerability had to be detected, which usually happened only after an exploit that took advantage of it was released. It then took the software vendor, security research firms, or in-house shops anywhere from two to four weeks to develop mitigating configurations and a corrective patch, which would then be rolled out. The patching itself was time-consuming and often failed to receive the immediate IT attention it deserved, resulting in another delay before the patch was finally applied to the company’s software, though too often even that never took place.
Part of the reason for delays and failures was simply human error and sloppiness. But there was more than just negligence involved. Every business had to assess the consequences that might arise from installing a patch. Updates were not always smooth and could create any number of unintended problems. Businesses, therefore, tended to err on the side of assuming the patch might compromise their software or interfere with something that interacted with it. In many cases, security risks were weighed against the risks to business processes, and a period of deliberation followed. Sometimes, after that deliberation, the patch was intentionally never installed.
But whether holes were left unpatched as a result of a conscious decision or from plain ineptitude, they remained open doors for aggressors who might come later. Banks with household names too frequently had tin-box defenses within their outer walls, even though they usually adhered to industry-approved responses and followed cybersecurity best practices.
In the case at hand, an unpatched vulnerability in Payment Dynamo, a popular business application, was the missing brick in