Or is it just another tool that will help financial professionals make better decisions more efficiently? Here, experts from across the field identify five AI trends for the coming year.
1. Identifying best execution
Most buy-side professionals want to know how AI can assist with alpha generation, according to Matthew Sargaison, co-chief executive officer of Man AHL. But Sargaison believes AI could make a bigger impact much sooner on the execution side. Not only can AI deliver better modeling of the order book to improve execution, it can also use reinforcement learning to optimize order routing.
“Rather than classic A/B testing, AI can perform tests that are much more dynamic,” he says. “Every time they trade, AI tools learn more about how to execute trades for a given model. Over time, they figure out the best distribution. This delivers optimal results and frees up humans to do more productive work than making these calculations.”
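The "dynamic testing" Sargaison describes can be pictured with a classic reinforcement-learning device, the multi-armed bandit: instead of splitting order flow 50/50 as in an A/B test, the router keeps shifting flow toward whichever venue has executed best so far while still occasionally sampling the others. The sketch below is purely illustrative; the venue names and fill-quality numbers are invented, and nothing here reflects Man AHL's actual systems.

```python
import random

random.seed(0)  # deterministic run for the illustration

class VenueRouter:
    """Epsilon-greedy bandit over execution venues (hypothetical)."""

    def __init__(self, venues, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in venues}
        self.value = {v: 0.0 for v in venues}  # running avg execution quality

    def choose(self):
        if random.random() < self.epsilon:           # explore: try any venue
            return random.choice(list(self.counts))
        return max(self.value, key=self.value.get)   # exploit: best so far

    def update(self, venue, reward):
        # incremental mean: value += (reward - value) / n
        self.counts[venue] += 1
        self.value[venue] += (reward - self.value[venue]) / self.counts[venue]

router = VenueRouter(["venue_a", "venue_b", "venue_c"])
for _ in range(1000):
    v = router.choose()
    # made-up execution quality: venue_b fills slightly better on average
    reward = random.gauss({"venue_a": 0.5, "venue_b": 0.7, "venue_c": 0.4}[v], 0.1)
    router.update(v, reward)
```

Over many orders the router concentrates flow on the stronger venue without ever freezing out the alternatives, which is the "figure out the best distribution over time" behavior Sargaison contrasts with static A/B tests.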
2. Predicting trade failure
The predictive power of AI isn’t just hype, according to Thomas Durif, global head of middle office and data products for BNP Paribas Securities Services. Right now, tools are emerging that can predict whether a trade will fail within a three-day window after it is made.
Trades fail for a variety of reasons that can be difficult to predict. New AI tools analyze historical data to identify patterns of trades that have previously failed and alert asset managers and brokers when similar conditions occur, so they can take action sooner.
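The pattern-matching Durif alludes to can be sketched as a simple binary classifier trained on features of past trades. Everything below is hypothetical: the features (amendment flag, counterparty fail rate, order size versus average daily volume), the training data, and the hand-rolled logistic regression are illustrative inventions, not BNP Paribas's tooling.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(trades, labels, lr=0.5, epochs=500):
    """Fit a tiny logistic-regression model with stochastic gradient descent."""
    w = [0.0] * len(trades[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(trades, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Invented history: [same-day amendment?, counterparty past-fail rate,
#                    size vs. avg daily volume] -> did the trade fail?
history = [
    ([1, 0.30, 0.8], 1), ([0, 0.05, 0.1], 0),
    ([1, 0.25, 0.6], 1), ([0, 0.02, 0.2], 0),
    ([0, 0.40, 0.9], 1), ([0, 0.03, 0.1], 0),
]
w, b = train([x for x, _ in history], [y for _, y in history])

# Score a new trade that resembles the historical failures
risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [1, 0.35, 0.7])) + b)
```

A trade whose features resemble past failures gets a high risk score, giving the middle office its early warning; the real systems would of course use far richer features and data.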
“With AI, we can see problems with trades before they happen,” he says. “This application is very much in line with the broader regulatory effort to impose fines for trade failures.”
3. Advanced decision support
Jeremy Waite, chief strategy officer for IBM Watson Customer Engagement in Europe, thinks a forthcoming AI system capable of debating humans on complicated topics could revolutionize decision support in finance and other industries.
“It can handle arguments without binary answers,” he says. “As a result, it’s capable of helping people make insightful decisions that have huge implications.”
The reason this is important for financial professionals, Waite notes, is the constant need to make sense of escalating volumes of data. Out of all the data that exists at this moment, 90 percent was created in the last 12 months and 80 percent is unstructured, including social media data, voice data and data from connected devices in the Internet of Things.
“Humans can’t keep up,” Waite says. “Only about a third of that data is useful. AI can help you find the data that is actually of value and gain insights from it.”
4. Automated portfolio management
One of the biggest concerns about AI is that it could replace humans in the workplace. Sargaison, however, maintains that increased automation has actually given asset managers the freedom to hire larger teams focused on creative research that only humans can perform.
Marco Fasoli, co-chief executive officer and co-founder of A.I. Machines, however, offers a much more provocative point of view. He describes AI tools already in use today that replicate the entire investment process — including data processing and analysis, trading idea generation, risk management and the determination of optimal portfolio weights — and that can be applied to any portfolio management system as long as the underlying assets are liquid.
“This industry will be completely transformed within five years whether we like it or not,” Fasoli says. “There are already 100 percent AI-powered products that can give you a slight predictive edge on both risk and price. It’s not a silver bullet, of course. But that edge, embedded in the right software framework, can deliver substantially improved investment outcomes.”
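To make "determination of optimal portfolio weights" concrete, here is one of the simplest systematic weighting schemes, inverse-volatility weighting, in which lower-volatility assets receive larger allocations. The return series are invented and the scheme is a generic textbook example, not a description of A.I. Machines' products.

```python
import statistics

def inverse_vol_weights(return_series):
    """Weight each asset in proportion to the inverse of its return volatility."""
    vols = {asset: statistics.stdev(r) for asset, r in return_series.items()}
    inv = {asset: 1.0 / v for asset, v in vols.items()}
    total = sum(inv.values())
    return {asset: x / total for asset, x in inv.items()}

# Hypothetical recent returns per asset class
returns = {
    "equities":    [0.02, -0.03, 0.04, -0.02, 0.03],
    "bonds":       [0.004, -0.002, 0.003, 0.001, -0.001],
    "commodities": [0.05, -0.06, 0.04, -0.03, 0.02],
}
weights = inverse_vol_weights(returns)  # calmer assets get larger weights
```

A fully automated pipeline of the kind Fasoli describes would chain steps like this one behind data ingestion, signal generation and risk controls; this snippet shows only the final weighting step in its most basic form.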
5. Integrating ethics
Can algorithms learn to be ethical? This is just one question that concerns Catalina Butnaru, an ambassador for City AI and Women in AI in London, whose work focuses on integrating ethical thinking into product design.
In finance, integrating ethics means ensuring that the personal perspectives of data scientists and other experts do not shape how AI algorithms are trained, developed or used. Data scientists, understandably, tend to optimize for error rate, prediction speed and other measurable KPIs rather than for more complicated, non-measurable concerns, such as the well-being of humanity.
“There could be a risk of having one expert influence what you measure, and that person could be intrinsically biased,” she says. “If you only measure performance KPIs, you may become over-optimized. Is that the only thing you should focus on? And is that the right thing to do?”
Butnaru believes ethical alignment should happen at the product design level, guided by an “ethics team” made up of, for example, a data scientist, a project manager and someone who brings a perspective from outside the business.
“You can’t solve the problem of ethical misalignment by developing better algos,” she says. “It’s a mathematical sequence. It can’t be more or less ethical. You have to prioritize ethics as much as you prioritize other performance indicators. That’s important for reducing reputational risk.”