Comment & analysis
Is there a philosopher in the house?
Martina Garcia, the outgoing Director of the CSFI at LIBF, reflects on generative AI and worries about how it might transform financial services

Photoshop for writers has finally arrived. Will I manage to write this article without succumbing to the temptation of pushing it through the AI churn and receiving a beautiful, smooth, crisp, flawless, irresistible script, requiring just a couple of tweaks here and there to avoid lying too much?
And it thinks too! Well, it structures, and it turns out that structuring and thinking are next to the same thing. Who knew humans were so simple? By predicting the most probable next word/token in the sequence, it ends up generating what often sounds like a sensible response. It is a pattern-recognition machine. And what is received wisdom if not the most likely pattern of tokens?
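For the technically curious, the principle can be reduced to a few lines of code. The sketch below, in Python, is my own toy construction and not how any real model is built: a simple word-frequency table stands in for the neural networks that actual generative models train on vast text collections, and the corpus, the predict_next function and the seed word are all invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "most likely next token" generator: count which word follows
# which in a tiny, invented corpus, then always emit the most
# frequent follower. Real models learn these patterns with neural
# networks over vast corpora; this is the same idea in miniature.
corpus = (
    "the bank raised rates and the bank cut costs "
    "and the market welcomed the news"
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # Return the token most often observed after `word`.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

# String together the "most likely" continuation from a seed word.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # prints: the bank raised rates and the
```

Scale that counting up by billions of parameters and trillions of tokens and, the claim goes, you get something that often sounds like a sensible response.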
The growing skills graveyard
After calculators, spell-checkers and internet browsers, good education became about learning to structure information and facts into a coherent narrative – hence the proliferation of essay writing as an assessment method. Well, there is no longer much point in acquiring that skill. Does it matter? I certainly don’t know how to start a fire in the wild, pluck a chicken or mend a sock.
And, although it was more predictable that machines would learn to code, it’s amazing just how much schadenfreude I feel about it. Can anyone teach common sense? The key question for educators has to be: ‘What do you need to know to exercise good judgment?’
AI has no intentionality (it is a machine, duh!) and it does not distinguish fact from what can be strung together into a sentence. Does that mean that only two types of jobs (AI trainers and non-automatable manual work) will remain? Will there be fact checkers and decision-makers, the former increasingly badly paid under zero-hours contracts, scattered around the world to provide a seamless 24-hour service, and the latter in their ivory glass towers in the metropolis? I am not so sure.
Who needs facts?
We have all heard about AI hallucinations and the poor US lawyer who unwittingly constructed his case on invented precedents. But did the false cases produce a fair illustration of what was happening in the real ones, or were they misleading? It matters in law that they were not actual precedents, but it is rare that we need exact facts in financial services, economic policy or business in general. Good estimates are key; spurious exactitude is a problem. So, are the hallucinations good enough estimates of real facts or wildly off the mark? We might continue to need armies of underpaid AI trainers, but maybe not as many fact checkers as we think. This brings us back to: ‘What do you need to know to exercise good judgment?’
How many decisions are codifiable into a set of pre-determined parameters? Obviously, if you are a gatekeeper already following a script – triaging at an A&E centre, for example – you might be advised to look for another job pretty fast. In retail financial services, at first sight, many decisions have already been automated – I doubt there are any humans deciding the size of my free overdraft – but what about wholesale markets, still characterised by throngs of highly paid professionals selling strangely similar products to each other? Some question whether we really need humans to decide where it would be best to invest, whether to take part in an IPO, join a syndicated loan, buy a hedge or price a reinsurance deal.
Tech innovation in finance has been going on for a while, but the capacity to analyse vast volumes of unstructured data changes the game significantly. Until now, the business model of financial data companies lay in structuring the data and making it digestible, and therefore valuable. Raw data had relatively low value – much of it is public or provided free by users of financial services. But now that structuring is becoming a public good, how are these companies going to continue monetising free data? Watch for the coming scramble to control raw data. Data regulation is not even out of the starting blocks.
Feeding the monster
The problem now is that you cannot extract from the open system without feeding it, which is upsetting for people working with confidential information. So they are buying their own systems, segregating data pools, and consequently reducing the value of the generated intelligence. They have good reason for this. Any data in the model can be ‘resurfaced’ by the right questions.
Whether the predominant model is one of fully segregated data pools or one of asymmetric ones, economics will tell you that the quality of the public data pool is likely to decline precipitously. And who wants a predictive model based on the information contained in one company? Imagine going from obtaining, at least in principle, the received wisdom of the world – the internet, the web pages open to web crawlers – with a click, to paying for the purest form of groupthink.
In a way, talking about the tragedy of the commons when it comes to AI doesn’t make much sense. AI is a game for the big players. The data models involved are enormous and the resources required to build and manage them very significant. It’s estimated that it cost OpenAI $40m to process the prompts fed into ChatGPT in January 2023 alone. That is beyond the pockets of many companies and of quite a few countries.
Big brother
Who is the biggest collector of financial data, with full property rights and no need to share the monster to maintain quality? Regulatory, monetary and tax authorities. What have they been doing with their vast data harvests? Not much. Between ex-post collection methods, legal constraints, incompatible databases and lack of capacity (computing and human), real-time diagnostics and systemic analysis have not been a realistic proposition – but they might soon become one.
If I were a data scientist, I would join one of them – that’s where the most interesting work is likely to take place in the next five to ten years. If I worked in compliance at a bank, I would plan for early retirement.
Back to first principles
Paradoxically, I feel my main gap in understanding generative AI and its implications is not computer or data science, but philosophy. I want to understand better what thinking is, how creativity works, how to apply ethics, what intentionality is, how we decide and what a fact is. Will I get much joy on that from ChatGPT…?
Martina Garcia
Martina Garcia is the former Director of the Centre for the Study of Financial Innovation (CSFI) at LIBF. Her career spans more than 20 years and includes senior roles at the London Stock Exchange Group, the Treasury and the OECD.