SBA

Information | Process | Technology


Does Software Cheat?

Software has been in the news again recently, for all the wrong reasons. Volkswagen have been caught, like a naughty schoolboy, emitting noxious gases. The US Environmental Protection Agency (EPA) have accused the world’s largest auto manufacturer of using a “Defeat Device” to “cheat” in emissions tests. The VW CEO has admitted failure by the company and resigned, one of many heads apparently likely to roll out of the firm’s Wolfsburg HQ.


The term “Defeat Device” sounds like something tangible, a mechanical doodah we can touch, but in this case it describes part of the software in the Bosch engine management system used by some VW diesel engines. The software has been configured to recognise certain conditions and to alter its behaviour in controlling the engine accordingly - responsive behaviour which is common in many computer systems. For instance, in a web shop the automatic provision of free shipping on orders over £100 would be responsive behaviour - the web shop is intentionally inconsistent in its treatment of customers. The same applies to the decision as to whether or not to charge VAT, based upon the customer’s country of residence or zero-rated status. Software is routinely programmed to recognise circumstances and change behaviour accordingly; as I have written previously, my hearing aids automatically change behaviour to try to cope with quiet speech, noisy rooms, music and so on.
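The web-shop example can be sketched in a few lines of Python. This is purely illustrative: the £100 threshold comes from the text above, while the shipping charge, the 20% VAT rate and the function name are invented for the sketch.

```python
# Hypothetical web-shop pricing: software that intentionally behaves
# differently depending on the circumstances it detects.
FREE_SHIPPING_THRESHOLD = 100.00  # pounds; figure from the text
VAT_RATE = 0.20                   # assumed UK standard rate

def checkout_total(order_value, country, zero_rated=False):
    """Return (shipping, vat, total) for an order - a sketch, not a real API."""
    shipping = 0.00 if order_value >= FREE_SHIPPING_THRESHOLD else 4.99
    # Charge VAT only to UK customers who are not zero-rated
    vat = order_value * VAT_RATE if (country == "UK" and not zero_rated) else 0.00
    return shipping, vat, order_value + shipping + vat

print(checkout_total(120.00, "UK"))  # free shipping, VAT charged
print(checkout_total(50.00, "US"))   # shipping charged, no VAT
```

The point is simply that the system treats two customers differently on purpose, and nobody would call it cheating.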

 

In a car the engine management software may change the fuel-air mixture depending on temperature, advance or retard the ignition based upon engine speed and load, and adopt strategies for giving the engine an appropriate balance of performance vs economy vs emissions depending on driving style and circumstances - under heavy acceleration the engine will focus on power output, whilst when gently cruising it will tend towards economy. In some engines the stop-start behaviour of urban traffic is assumed to indicate city driving and the engine is automatically “tuned” to minimise emissions. These three factors - power, economy and emissions - make up the envelope within which the software programmers attempt to shape the performance of the engine. In cars with automatic gearboxes the selection of gear ratio may also be controlled by software, pushing the engine speed towards the more efficient zones in its power output profile. Clearly the engine management software in most modern cars recognises features of the environment and of driver behaviour to determine how the engine should behave.
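The strategy selection described above can be caricatured in a few lines. Every threshold here is invented for illustration, and a real engine controller is vastly more sophisticated, but the shape of the logic - conditions in, tuning bias out - is the same.

```python
# Toy sketch of an engine controller choosing a tuning bias from
# driving conditions. All thresholds are invented for illustration.
def select_strategy(throttle, speed_kmh, recent_stops):
    """Pick a bias within the power / economy / emissions envelope."""
    if throttle > 0.8:
        return "power"      # heavy acceleration: favour output
    if recent_stops >= 3 and speed_kmh < 50:
        return "emissions"  # stop-start pattern suggests city driving
    return "economy"        # gentle cruising: favour fuel economy

print(select_strategy(throttle=0.9, speed_kmh=100, recent_stops=0))
print(select_strategy(throttle=0.2, speed_kmh=30, recent_stops=4))
```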

 

None of the above is “cheating”; it is optimising the performance of the product. It is only to be expected that if the regulatory authorities tell auto manufacturers that tests will be performed under a specific set of scenarios, the manufacturers will respond by tuning their engines to optimise performance for those circumstances - the regulators have expressed a specific opinion as to how engines should behave. As it happens, the regulatory authorities in the USA and the EU have constructed their tests in a very artificial environment: emissions are measured in a warm shed on a rolling road, with no hills and no wind resistance, instead of in harder-to-construct but more indicative simulations of real-world conditions. The result is that many diesel cars emit four to five times the pollution levels claimed during testing, without any “cheating”, because in the real world they must operate under sub-optimal conditions which demand higher power output, at the cost of higher fuel consumption and more polluting emissions. It would perhaps be better if regulatory authorities tested in real-world conditions, causing manufacturers to optimise performance for the way we actually use our vehicles.

 

The difference in the Volkswagen case seems to be that they have gone beyond optimising emissions performance for the operating scenarios described by the regulators, and have written software which detects and specifically responds to test conditions. It is a small but significant jump in performance optimisation: small in that it is very easy to do, significant in that it means the regulatory testers were no longer testing an engine designed to drive around - they were unknowingly testing an engine designed to pass static tests.
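To see why this is “a small but significant jump”, consider a sketch. The clue used here - the wheels turning while the steering wheel never moves, as on a rolling road - is one publicly reported kind of indicator; the function names, thresholds and mode labels are entirely hypothetical and are not taken from any real engine code.

```python
# Hypothetical sketch: a handful of extra conditions layered on ordinary
# responsive logic turns optimisation into a "defeat device".
def looks_like_test_cycle(speed_kmh, steering_angle_deg, minutes_elapsed):
    """Wheels turning but steering dead still for a bounded time: a
    rolling-road signature (an assumed, simplified indicator)."""
    return speed_kmh > 0 and abs(steering_angle_deg) < 1.0 and minutes_elapsed < 30

def emissions_mode(speed_kmh, steering_angle_deg, minutes_elapsed):
    if looks_like_test_cycle(speed_kmh, steering_angle_deg, minutes_elapsed):
        return "full-treatment"   # clean calibration, only while under test
    return "road-calibration"     # normal calibration everywhere else
```

Technically trivial, which is precisely the governance problem: the line between tuning for the test scenarios and detecting the test itself is a few conditionals.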

 

Of course the regulator is only one stakeholder, albeit the one with the big stick. The auto company’s customers, the people with the cheque books, want cars which perform well, are economical to run, and don’t pollute. Maximising fuel economy is not the same as minimising emissions, although the two are linked. Depending on what is being tested for, the best strategy to pass regulatory emissions tests may be to reduce fuel economy, thereby increasing cost to the customer, increasing fossil fuel usage, and inevitably increasing pollution - just not the specific pollution measured in the emissions test. As is so often the case across regulated industries, applying regulation in one dimension simply moves problems to another, and the probability is that we customers have been “cheated” out of more economical vehicles for many years by one-dimensional pollution testing. Under the current system in both the USA and Europe the auto makers can’t win: they cannot satisfy their customers to the best of their capabilities because the guys with the big stick, the regulators, won’t allow them to.

 

Which all adds up to a rather tasty corporate governance problem, one which takes in the full spectrum of corporate governance and the interests of all stakeholders, rather than merely the parochial concerns of accountants, lawyers and regulators. There has undoubtedly been a major corporate governance failure at VW, because the software made what is technically a small step over the line of acceptable behaviour, but it is easy to see how it might have happened. In attempting to broker the compromise between customer and regulatory interests VW went too far, rather like a teacher coaching students to pass an exam rather than educating them to understand the underlying principles of the subject. Technically a small step for the programmers, but in governance terms a major breach, because it defied the intent of the EPA testing regime, however flawed and disadvantageous to customers that regime may be.

 

This is a recognised and often debated aspect of programming computer systems which software engineers commonly face in real life. Determining where the boundaries of acceptable behaviour lie for computer systems is difficult, particularly because it is difficult to anticipate every context in which a system may operate. The first of the Three Laws of Robotics (conceived in 1942 by Isaac Asimov) states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm”. But what if harming one human is the only way of saving many? How would a robot policeman behave if confronted by a gun-wielding madman massacring students? Harm the madman to save the rest? More prosaically, how do our business computer systems behave? Are they programmed to maximise creditor days, thereby increasing the working capital of the enterprise? Or to minimise interest payment liabilities by rounding down instead of up? Are these behaviours ethical? Software developers regularly face such ethical and moral challenges, and their judgements influence the behaviour of the computer systems on which our businesses depend. People have died as a result of decisions made by software in aircraft and autos, and stock markets have crashed - the 2010 “Flash Crash” knocked around 9% off the value of the Dow Jones index in a few minutes. Conditional decisions made by software developers can have a major impact, and many of our business systems include such automated decision-making. It is a scary thought for company directors, but there are probably many more Volkswagen-type misjudgements hiding away in our computer systems, and some of them will, over time, come to light.
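The rounding question can be made concrete with Python’s standard decimal module: the same interest calculation yields a different amount depending on the rounding mode the developer chooses. The rate and balance below are invented for the example.

```python
# One developer decision - the rounding mode - changes who gets the
# fractional penny. Across millions of transactions, that adds up.
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

interest = Decimal("0.005") * Decimal("123.45")  # 0.61725: a fractional penny

pay_down = interest.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
pay_up = interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(pay_down)  # 0.61 - the institution keeps the fraction
print(pay_up)    # 0.62 - the customer gets it
```

Neither mode is wrong arithmetically; which one is right is an ethical and commercial judgement, made by a programmer, repeated automatically ever after.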

 

BCS, The Chartered Institute for IT, requires, like other professional bodies, that its members conform to a professional code of conduct. As you might expect it says many “good things”, including:

 

  • You shall have due regard for public health, privacy, security and wellbeing of others and the environment.

 

  • You shall ensure that you have the knowledge and understanding of Legislation and that you comply with such Legislation, in carrying out your professional responsibilities.

 

  • You shall avoid injuring others, their property, reputation, or employment by false or malicious or negligent action or inaction.

 

All of which are brought into sharp relief by the recent VW scandal. Much of our business activity today is controlled or performed by business systems. Somebody programmed them; in the light of the VW scandal, company directors should pause to consider whether the computer systems their businesses operate, and the people or companies that created them, actually reflect the culture, ethics, morals and values of their organisations. With the VW scandal, Cyber Risk has just embraced a new dimension.

 
