Governments around the world are increasingly debating how to regulate artificial intelligence. Among the most ambitious of the proposed regulations is the Artificial Intelligence Act currently working its way through the European Union's legislative sausage-making. In the U.S., the Federal Trade Commission has issued a number of warnings about the controls a company should have in place if it is using algorithms to make decisions, and the agency has said it plans to begin rulemaking on the technology. But it is one thing to make new laws. It is another to be able to enforce them.
Bryce Elder, a journalist with The Financial Times, makes this point in a well-argued opinion piece in the newspaper's "Alphaville" section this week. Elder points out that the industry that is in many ways the furthest along in deploying autonomous systems is finance, where firms have embraced algorithmic trading for more than two decades and are now increasingly replacing static, hard-coded algorithms with those created through machine learning. Algorithms account for as much as 75% of U.S. equities trading volumes, and 90% of foreign-exchange volumes, according to a 2018 SelectUSA study.
There are stringent rules on the books in most jurisdictions governing these algorithms: European Union law requires that they be thoroughly tested before being set loose, with firms asked to certify that their trading bots won't cause market disorder and that they will continue to operate correctly even "in stressed market conditions." The law also specifies that humans at the trading firms using the algorithms bear ultimate responsibility should the software run amok. Trading venues, in turn, are held responsible for ensuring that market participants have tested their algorithms to this standard.