Back when I was a young graduate student our system administrator was a bit of a gamer. We used UNIX: a Digital Equipment VAX running some BSD version, and later Sun workstations – and I pause for a moment in memory of those worthy but now defunct corporations. UNIX at the time came with a bunch of standard command-line games (and graphical ones later, on the Suns) – which of course a sysadmin was free to delete, but ours didn’t. He even installed a new game – Empire (a multi-player “Civilization”-like game) – and started a few games hosted on our computers, soliciting players from around the internet.
For a few months Empire, rather than physics, became my passion. It ran on a schedule: every 4 or 6 hours the clock ticked, and you could make more moves of your military units, move commodities from one city to another, or make new plans for your cities. And of course all your opponents did the same. Being there right at the clock tick allowed you to attack first, if that was in the cards, or to prepare defenses for an expected attack. And missing a clock tick (for something as useless as sleep, for instance) meant losing tempo in the game: your military units might just sit there rather than move, one of your cities might start to starve, or food and other commodities might be wasted because there was no room to store more.
Realizing this wasn’t personally sustainable, I delved into the C programming language, which seemed to be the standard for UNIX but which I’d hardly used up to then (I’d done some Fortran and assembly programming before). After a few days’ work I had an automated player program that I could schedule to run shortly after each clock tick to take care of the basics – moving commodities around and moving some of my units along pre-arranged paths that I could update once or twice a day.
This gave me a slight advantage over those players who weren’t waking up every 4 or 6 hours at night to update their games, and my game started to do quite well. But not well enough for me; I started to notice some anomalies in the way certain things behaved in the game. If I used ground transport to move a fighter plane from one city to another, the mobility level in the city I moved it from dropped far more than I expected. And if I moved two aircraft from two different cities, both dropped to the same level. There was some bug in the game software, and I needed to track it down.
So I started reading through the source code of the game. This really got me up to speed on programming with the C language – the code had extensive use of pointers and there were arrays of pointers to functions and multiple layers of indirection that had to be traced to figure things out. When I finally got down to the code regarding moving aircraft, I discovered what was going on. The bug was that it was using the mobility of the central capital city as the starting value before subtracting the mobility cost of moving the plane, rather than the mobility of the actual source city. I quickly realized I could exploit the bug – if I kept my capital city mobility high, I could make use of the bug to quickly raise the mobility available in any city by bringing in a fighter plane and moving it around. This gave a huge advantage in the game – mobility was the key factor that limited how much you could do with each tick of the clock.
While perusing the source code I found some other things that looked like bugs too, and verified them in the game. One of the issues was the handling of negative numbers. If you loaded a negative quantity of a commodity onto a ship in a harbor city, the code treated that the same as unloading a positive quantity from the ship to the city. However, while for positive loading the code checked that the city had a sufficient quantity of the commodity, for negative loading (unloading) it never checked that the ship actually held that much. Loading a large negative quantity of gold onto a ship thus gave you a way to create unlimited gold (or any other commodity the same way).
Finding these bugs that could give such a huge advantage in the game gave me some moral qualms, and I consulted our sysadmin, who was running the game, about what I should do. He asserted that my only responsibility was to file bug reports and suggested fixes with the game developers, and that he’d update the game software when they fixed the problems. As long as the bugs were reported, it was perfectly legal (according to standard game rules) to exploit them… So I did…
My obvious and mysterious advantages in the game didn’t sit well with the other players, a few of whom knew who I was. I soon found my nation under attack from a united alliance of all the others. With my bag of tricks I was still able to largely prevail, until the nuclear weapons came out…
Not long after this (November 1988) I was working on one of our Sun machines when suddenly everything mysteriously slowed down – the computers were being attacked by the first “internet worm”. It turned out I was very close to the epicenter of this event: one of my colleagues was a good friend of Robert Morris, the student who launched the attack, which exploited vulnerabilities in some standard UNIX system services. The era of computer viruses and worms was upon us. Morris was taking advantage of bugs in major computer systems just as I had exploited bugs in the Empire software to gain advantage in that game.
Bugs with destructive power in themselves, or available for exploitation by the unscrupulous, are almost inevitable consequences of our efforts at automation and at removing humans from low-level oversight and decision-making in any system. Even in systems where humans ostensibly make the decisions, if human actions are governed by rigid rules (whether or not those rules function well under ordinary circumstances) or are taken with incomplete understanding of what they are doing, the system becomes a “machine” with predictable responses – and that predictability almost inevitably invites a quest for “bugs” to exploit for personal advantage. Infamous hacker Kevin Mitnick found social engineering (tricking people into giving him their passwords) at least as effective as anything else in breaking into computers.
The problem extends far beyond the domain of computer systems. Economic, media, legal and political systems have become highly complex “machines” in modern times, governed by rigid rules and understood by few of those who depend on them. Vital decisions are often made by poorly paid bureaucrats (on regulation enforcement, say) or low-status workers (those mortgage “robo-signers”, for instance). The process can be mystifying to the outsider, but to somebody who works to understand it, “bugs” in the system open up enormous (what most would regard as immoral, but often perfectly legal) opportunities for great riches or power.
The 2007-2008 financial crisis is very much a case in point, at least as I understand it from my recent reading of Michael Lewis’ account, “The Big Short: Inside the Doomsday Machine”. A number of people became enormously wealthy while bankrupting their own companies, their customers, or large swathes of the general public. They managed this through the exploitation of a handful of real “bugs” in US and international systems of finance. Some of these bugs have been addressed; some I’m less confident will be – itself evidence of further bugs in our political and media systems.
Perhaps the most important “bugs” were within the securities rating agencies: Moody’s, Standard & Poor’s and lesser players. When mortgage securitization began, with banks turning collections of mortgages into series of “asset-backed” bonds, these companies developed formulas to determine whether such bonds were at low risk of default and deserved the highest rating levels (AAA, for example) or were of higher risk and should be given lower ratings. One of the key criteria in the bond-rating formulas was the average FICO (credit rating) score of the borrowers behind the mortgages in the pool. If the average FICO score was above a certain level, the bonds would get a higher rating. That seems like a sensible rule – but relying only on the average introduces a couple of very serious bugs that can be exploited:
- Banks could combine mortgages sold to borrowers with very low scores with a similar number of mortgages sold to those with very high scores to get a suitable average, but with a much higher actual risk of default due to the many low-score borrowers.
- The rating companies failed to distinguish scores based on “long” records from those based on “short” records. Somebody with almost no credit history could have a very high FICO score, with the mortgage loan in question being their first major financial transaction. They were therefore at much higher risk of default on that mortgage than their short-history FICO score would indicate.
A third bug associated with the bond rating agencies came in their analysis of correlation between different groups of mortgage bonds. From recent past behavior they believed that housing declines would be geographically limited. So by combining low-rated securitized bonds with geographic diversity (California, Michigan, and Florida, say) the banks could create new derivative securities some of which again received triple-A ratings indicating very low risk, bringing in yet more capital to the business.
If the rating companies had actually put human beings on the job of looking at the individual mortgages behind the securities they were rating, rather than relying on these formulas, many of these securitized mortgage bonds would have been rated as far riskier, and investment money would not have been so readily available to make those loans in the first place. By 2007 or 2008 the bond rating agencies finally recognized the problem – but the crash in housing and finance was already under way.
Mortgage securitization in itself seems like a good idea in providing new ways for capital to flow into home ownership. But the rating agency “bugs” allowed extravagance beyond all reason, with banks and others along the way earning enormous profits through fees for creating the loans to whoever they could find. The securitization bugs created a huge demand for borrowers and homes to make all those loans, and you ended up with people with $20,000 annual incomes owning half a dozen homes and owing millions before the crash.
Securitization itself had a downside, a bug in itself, closely linked to these issues with ratings. In the old days, when a bank or savings and loan company loaned somebody the money for a house, the loan stayed on the books of the bank. The borrower paid the bank directly, and the bank felt it right away if somebody was having trouble paying their mortgage. The incentives of the system were such that a bank had to ensure borrowers would usually be able to pay up. With securitization, the loan was created by a bank or other agent and then immediately sold, with the originator retaining little or none of the risk associated with the mortgage. The incentive to ensure from the start that the borrower would be likely to pay back was replaced by the need merely to meet the requirements of the bond rating agencies – bugs and all.
Former Federal Reserve Chair Alan Greenspan recognized this problem in late 2008:
“As much as I would prefer it otherwise, in this financial environment I see no choice but to require that all securitizers retain a meaningful part of the securities they issue,” Greenspan said. That would give the companies an incentive to ensure the assets are properly priced for their risk, advocates say.
Stock and commodity markets have evolved over time to handle “bugs” of this sort: first by the natural mechanism of a market in matching buyers to sellers at a mutually agreed price, and second by allowing people who spot problems to bet against those who might try to exploit them. If a stock is temporarily over-priced, smart investors can “short” the stock, a transaction that pays off if the price falls. As long as there’s a reasonably large and liquid market for any given stock, any public information affecting the price of that stock will be almost immediately reflected in its price, thanks to the natural market effect, strengthened by this ongoing battle between “shorts” and “longs”. People can still manipulate prices by manipulating the information about a stock, but there are some strong legal sanctions on fraud and insider trading that limit that. And of course people with enough money can always bounce a less liquid stock around and try to profit from that – but there’s a risk on both sides of that sort of trick too so it’s not the sort of thing individuals can make billions on these days.
From Lewis’ account, in the early days of mortgage securitization there was much more limited market liquidity, making it difficult to establish good prices for the bonds. Each mortgage-backed bond was based on a real collection of actual mortgages, and so was unique and not a commodity that could just be exchanged with another similar one. Prices were nominal and essentially based on the ratings from the bond rating agencies. There was no direct way to pursue a “short” strategy for investors who thought a particular bond was priced too high, because its underlying mortgages had a higher than usual likelihood of default. A number of the more persistent ones finally found a way around this, purchasing insurance against default on the bonds – the “Credit Default Swap” (CDS).
But, again following Lewis’ storylines, it took remarkable levels of persistence to be able to play this game; it was certainly not for the average investor. The big investment banks, particularly in the US and Germany, placed themselves at the center of these markets, and CDSs and the “synthetic” CDOs built from them again received apparently artificial and hard-to-justify prices, largely set by those banks. The nominal justification again was the underlying ratings on the original mortgage securities, together with the ratings the derivatives themselves received.
Normally in an illiquid market prices can rise and drop violently depending on small changes in demand or supply. The lack of a proper market for these mortgage derivatives, with prices set by the investment banks, produced the opposite: an artificially low volatility. And here enters one more bug: derivatives pricing models assert that risk depends almost exclusively on the volatility of the underlying securities. The artificially low volatility meant the risk associated with these investments was believed to be very low; meanwhile those intrepid investors trying to “short” the bonds kept paying their “insurance” premiums, giving the other side a pretty high rate of return – and this combination of apparently low risk and high return hugely increased demand. CDSs and CDOs grew explosively through 2006, 2007 and into 2008, even while problems were starting to become apparent.
The need for consideration of consequences over the long term must be an essential part of human motivation to produce good, cooperative outcomes. If incentives are only short-term or one-time, and not long-term, we end up with destructive, competitive, antagonistic cycles that do little good for society as a whole. Ponzi schemes are examples of completely short-term thinking: they fundamentally have no long-term plan to restore funds to investors. But similar issues arise with the incentives for executives of more normal companies, relative to those who invest in them.
A trader who personally makes $20 million in a year has no more need for any loyalty to his company than lottery winners have to wherever they work. I believe Alan Greenspan finally realized this in the end as well, though I can’t find the quote from him on the subject. Contracts between corporations are made on the basis of an assumption that the individuals signing off on them are acting in the interests of those corporations. If they are acting only in their own short-term interest and care little for or have little personal stake in the long-term health of their place of work, well, that’s a huge bug just waiting for the unscrupulous to exploit for their own gain.
Note that the official Financial Crisis Inquiry Commission concluded the crisis of 2008 was avoidable, citing widespread failures in financial regulation and corporate governance, and systemic breaches in accountability and ethics at all levels. The fundamental problem was that it was in almost no individual’s personal interest to fix the bugs in the system – rather, almost everybody tried to take advantage of them before it was too late and the bubble burst. As it has to eventually, with any Ponzi-like scheme that creates money out of nothing, however complexified this one was.
At least this game didn’t end in nuclear devastation… yet. I just hope we will now watch a little more carefully for these bugs. Perhaps this plea is an indication of some momentum in the right direction:
Let’s save the world by keeping our engineers out of finance. We need them to, instead, develop new types of medical devices, renewable energy sources, and ways for sustaining the environment and purifying water, and to start companies that help America keep its innovative edge.
Creating money out of nothing is neither moral nor sustainable, no matter how wealthy you can become while the good times last.