Embracing the future

What will be the consequences of technological advances upon the research industry?

Key points 

• Data analysis techniques will become an integral part of investment research
• The challenge is trying to keep pace with the complex sets of data in the market
• Valuable sources of data include corporate record-keeping, supermarket scanner or supply chain data and credit cards
• Making sense of data is a greater challenge than collecting it

There is a lot of buzz about artificial intelligence (AI) changing the investment research world. But for now, natural language processing (NLP), which involves teaching machines to understand the nuances of human language, is where progress is being made. As with any new technology, it is only a matter of time before AI and other data analysis techniques become integral. 

“In terms of AI, what we are seeing is that the sell-side is focusing their efforts on the trading side with the use of algo wheels, where they allocate flow based on past performance,” says Valerie Bogard, equity analyst at Tabb Group. “We see greater use of alternative data on the buy-side but I think this will change as sell-side firms figure out what clients want and value in an unbundled world. Once this happens, we will see them making greater investment in these technologies.”

A report published last year, Quantifying the Future, co-authored by Rebecca Healey, head of Emea market structure and strategy at Liquidnet, notes that sell-side firms need to become more innovative. They should create new data products or partnerships with other firms that can help develop products or distribute and evaluate data to stay ahead of competitors. 

However, investing in the infrastructure, systems and processes is only part of the equation. As Healey and co-author Niki Beattie, founder and chief executive of consultancy Market Structure Partners, put it: “The more data is disseminated digitally the more people can be involved and the more readily its use can be tracked, evaluated and audited. However, the culture has to be in place to make this effective in order to improve profit margins and gain a competitive edge”.

There are plenty of role models that can point the way. Amazon, Google and Facebook have been using predictive data for years to identify what customers want to buy, according to Martin Lueck, co-founder of Aspect Capital. “At the moment, there is a lot of hype. I would not say that this is a watershed moment in financial services but a continuum of a trend that was started 30 years ago. The difference today is that processing power is much greater and there are many more data sources.”

Marko Kolanovic and Rajesh Krishnamachari of JP Morgan’s quantitative and derivative strategy team echo these sentiments in their recent report Big Data and AI Strategies: Machine Learning and Alternative Data Approach to Investing. They point to Sam Walton, founder of Walmart, who in the 1950s used airplanes to fly over and count cars in parking lots to assess real estate investments.

Data deluge

Fast forward to today and the challenge is trying to keep pace with the multitude of complex sets of data in the market. The JP Morgan report found that 90% of current data has been created in the past two years alone, and just 0.5% of it is being analysed. 

For sell-side firms, the most useful alternative data can be broken down into three types. The first is imagery, such as satellite images tracking cars, foot traffic and ship locations. In other words, there is no longer a need to fly over car parks. 

Patterns of behaviour can also be gleaned from individual data feeds, including Google searches, social media websites such as Twitter, Facebook and LinkedIn, those containing product reviews such as Yelp and Amazon, and mobile app analytics companies such as App Annie. This not only highlights consumer preferences; local farmers’ tweets, for example, can offer insights into crop yields that could affect agricultural prices.  

The last and most valuable alternative sources of data, according to Kolanovic and Krishnamachari, can be found in business processes: data exhaust covering corporate record-keeping, like banking records, supermarket scanner or supply chain data, and credit card transactions. These can gauge consumer consumption and spending habits. 

To date, most investment banks and brokerage houses (although they did not comment) are using NLP tools to analyse news feeds and tweets, process earnings statements, scrape websites and, in some cases, deploy algorithms to trade on the information instantaneously. However, some are more innovative. For example, JP Morgan is not only employing NLP to improve the discoverability and tagging of its research, but also applying Natural Language Generation (NLG) technologies to augment, supplement and automate data-centric research. These systems are designed to ingest large amounts of data, determine the interesting and relevant points, and then produce a textual report summarising the main findings, which is useful in the time-consuming process of publishing quarterly or annual earnings recap reports. 
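The banks do not disclose how their summarisation tools work, but the basic idea of extracting the relevant points from a longer text can be sketched in a few lines. The toy function below (all names and the sample report are invented for illustration) ranks sentences by how frequent their words are across the whole document and keeps the top-scoring ones:

```python
import re
from collections import Counter

def summarise(text, k=2):
    """Crude extractive summary: rank sentences by the total frequency
    of the words they contain, then return the top-k in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:k]
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))

report = ("Revenue grew 12% on strong cloud demand. "
          "The weather was mild this quarter. "
          "Cloud revenue growth offset weaker hardware revenue. "
          "Management expects revenue growth to continue.")
print(summarise(report))
```

Because "revenue" and "growth" dominate the word counts, the off-topic weather sentence is dropped; production systems replace the frequency heuristic with trained language models, but the ingest-score-extract shape is the same.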

State Street, on the other hand, in September unveiled its Quantextual Idea Lab, which uses technology and human curation to synthesise hundreds of research reports from the sell-side, buy-side and academia. Machine learning algorithms consume complex research reports, tag them by investment theme and asset, and then suggest new, relevant materials based on the user’s specific needs, preferences and observed reading behaviour.
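State Street has not published the mechanics of this pipeline, but the tag-then-recommend pattern it describes can be illustrated with a deliberately simple sketch. Everything below (the theme keyword sets, report titles and texts) is invented; a real system would learn themes and preferences rather than hard-code them:

```python
# Invented keyword sets standing in for learned theme classifiers.
THEMES = {
    "esg": {"emissions", "carbon", "governance", "sustainability"},
    "ai": {"machine", "learning", "nlp", "automation"},
    "rates": {"yield", "duration", "inflation", "fed"},
}

def tag(report_text):
    """Label a report with every theme whose keywords it mentions."""
    words = set(report_text.lower().split())
    return {theme for theme, kws in THEMES.items() if words & kws}

def suggest(library, reading_history):
    """Rank unread reports by tag overlap with what the user has read."""
    read_tags = set()
    for title in reading_history:
        read_tags |= tag(library[title])
    unread = [t for t in library if t not in reading_history]
    return sorted(unread, key=lambda t: -len(tag(library[t]) & read_tags))

library = {
    "Duration risk": "yield curves, duration and inflation expectations",
    "Fed outlook": "fed policy and the inflation path",
    "Carbon pricing": "carbon emissions policy and governance trends",
}
recommended = suggest(library, reading_history=["Duration risk"])
print(recommended)  # the rates-themed report ranks first
```

A reader who has only opened rates research is steered to the other rates report before the ESG one; the observed-reading-behaviour signal the article mentions is what updates `read_tags` over time.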

On a sector level, UBS’ global chemical industry equity research team was reported to have used satellite, unmanned aerial and other imagery and sensors to assess China’s air quality in 2,186 locations to determine the winners and losers in the petrochemical industry, given the Chinese government’s new air pollution campaigns. The analysts identified Arkema, BASF, Covestro and Evonik among the potential beneficiaries in the European chemical sector, because they were “exposed to more constrained commodity chemicals production and their non-China operations could significantly benefit from higher global industry pricing”.

There is also talk of investment banks trying to create so-called virtual agents that imitate the quality of an analyst. Last year, Morningstar introduced a quantitative equity rating framework generated by a machine learning statistical model that tries to replicate human results. It does not replace the individual, but complements the work by collating data and providing a daily history to track changes. 

For now, machine learning cannot replace human intuition, or understand complex, long-term investment trends. “The technology such as NLP helps aggregate structured and unstructured information sets and identifies anomalies but does not necessarily transform raw data into actionable insights,” says Chirag Patel, managing director, head of innovation and advisory solutions, EMEA, at State Street. “You still need experienced human beings to analyse the outputs and generate investment ideas.”

Huw Roberts, macro-fixed-income specialist at Quant Insight, agrees: “It is not just about having the data; the real challenge is making sense of it.”

Overfitting is a common problem in any machine learning approach: a model highlights patterns that do not exist because it has been fooled into seeing a signal in noise. Such a model may perform well on the data on which it was built but have little or no predictive power on new data in the future. However, technologists are busy working on solutions, such as dynamic models that change and adapt as they obtain new information. 
