BigTech and the powers of persuasion
As Google and other technology giants move into financial services, how can regulators rein in algorithms that prey on our data? It won’t be easy, says Adriana Hamacher
Finance aspires to be a precise discipline, yet human behaviour often defies the neat equations of statistical models. Take ‘action bias’, the compulsion to act even when inaction is statistically more advantageous; ‘dark patterns’, website interfaces that subtly nudge users towards add-on purchases; or ‘sludge’, the subtle frictions designed to dissuade users from completing what they set out to do.
Our subconscious is responsible for up to 95% of decisions, according to Harvard Business School research. Google, Amazon and other big technology companies have become adept at exploiting this to convert clicks into cash. Simply keeping users engaged for longer makes them more susceptible to suggestion and manipulation from increasingly sophisticated algorithms trained on billions of gigabytes of data.
As these technology companies venture into financial services, regulators are taking note. In July 2023, the UK’s Financial Conduct Authority (FCA) announced in a report on ‘The Potential Competition Impacts of Big Tech Entry and Expansion in Retail Financial Services’ that it was investigating the extent and nature of “sludge, dark patterns and gamification of financial services” so that it can protect consumers better.
Meanwhile, the EU is introducing stringent regulations. But it remains to be seen whether these frameworks can evolve quickly enough to keep pace with technological innovation.

Exploiting user information

Since its initial public offering in 2004, Google has acquired more than 250 companies, including YouTube, Nest, Fitbit and Waze, each contributing to an ever-expanding reservoir of user data. It also collects a wealth of user information through Maps, Android, Chrome, Google Play, Gmail and Google Home, giving the company a wide-angle view of what people buy, wear, read and watch, as well as where they go and who they are with.
As Google and other technology giants branch out into credit, insurance and wealth management, this data can be used to improve product offerings and influence decisions. It is also monetised in aggregate form and used to create targeted advertising with predictive capabilities. “The cost of gathering it is low, and individuals have little or no say in how it is used or aggregated,” says Juliette Powell, a data scientist and co-author of ‘The AI Dilemma: seven principles for responsible technology’. “Moreover, much of the data is probably out of date but still circulates in aggregate form.”
If an algorithm is trained on flawed, skewed or incorrect data, the resulting model will inevitably be biased. A case in point comes from the US, where an algorithm designed to predict healthcare needs for a population of 200m people ended up reinforcing racial bias. The algorithm used healthcare costs as a stand-in for actual health needs, but because less had historically been spent on the care of black patients, it concluded that they were healthier than they actually were. As a result, they received less medical care than white patients with similar health conditions.
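To see why the choice of proxy matters, consider a minimal, hypothetical sketch in Python (invented numbers, not the model examined in the US study): two groups have identical health needs, but one has historically received less spending, so a model that ranks patients by expected cost refers almost no one from the under-served group.

```python
# Toy illustration of proxy-label bias: the model's target is past *cost*,
# not actual *need*. All figures are invented for illustration.
import random

random.seed(0)

def patient(group):
    illness = random.uniform(0, 10)                # true health need, same distribution for both groups
    spend = 1000 if group == "A" else 700          # group B has historically received less care per unit of illness
    cost = illness * spend + random.gauss(0, 500)  # observed spending: the proxy the model learns
    return {"group": group, "illness": illness, "cost": cost}

population = [patient(g) for g in ("A", "B") for _ in range(5000)]

# A cost-trained model's risk score is, in effect, expected spending.
# Refer the top 10% by that proxy score for extra care.
threshold = sorted(p["cost"] for p in population)[int(0.9 * len(population))]
referred = [p for p in population if p["cost"] >= threshold]

for g in ("A", "B"):
    need = sum(p["illness"] for p in population if p["group"] == g) / 5000
    share = sum(p["group"] == g for p in referred) / len(referred)
    print(f"group {g}: average true need {need:.1f}, share of referrals {share:.0%}")
# Both groups have the same average need, yet group B receives almost no referrals,
# because the proxy (cost) systematically understates its need.
```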

Personalisation and manipulation

Traditional banks face competition from tech giants providing innovative, highly personalised services that offer consumers more choice, lower prices and an enhanced experience. But this personalisation can also lead to manipulation and misuse of data. Algorithms can push credit card offers to consumers already in debt or promote high-risk investments to inexperienced investors, while targeted marketing can ‘crowd out’ certain categories of consumer to optimise the reach of a product.
The processes are subtle, says Alain Samson, Chief Science Officer at Syntoniq, a behavioural science research and consultancy firm. “A ‘nudge’ refers to any subtle adjustment in the way choices are presented that can influence people’s behaviour,” he says. “For instance, making certain options more salient, labelling them as ‘most popular’, or changing defaults.”
To preserve users’ autonomy, choice architects should make it easy to opt out of nudges or other types of manipulation, suggests Samson. They “should not only be sensitive to the nudge target’s needs and preferences, but also think about and weigh up the costs and benefits of their interventions”.
Widely accepted marketing practices can often be manipulative. According to Matej Sucha, Chief Executive of Mindworx Behavioral Consulting, manipulation arises when a service provider is less than fully transparent, and when a product is specifically engineered to exploit consumer biases. For example, offering a credit card with a deceptively low introductory rate capitalises on consumers’ tendency to prioritise immediate costs over future ones, a bias known as hyperbolic discounting. Sucha, who assists clients in crafting ethical and personalised user experiences, emphasises the need for standards to govern these practices.
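As a rough illustration of that bias, here is a hypothetical sketch with invented rates and a simple hyperbolic weighting (not figures from the article): a card that hides its charges behind a six-month teaser period costs more in total than a flat-rate card, yet feels cheaper to a present-biased consumer who heavily discounts later months.

```python
# Toy comparison of a teaser-rate card with a flat-rate card, as perceived by a
# hyperbolic discounter. Balance, rates and the discount parameter are invented.

def monthly_interest(month, teaser):
    """Interest charged in a given month (1-24) on a constant £1,000 balance."""
    if teaser:
        return 0.0 if month <= 6 else 25.0  # 0% for six months, then 2.5% a month
    return 15.0                             # flat 1.5% a month throughout

def total_cost(teaser):
    return sum(monthly_interest(t, teaser) for t in range(1, 25))

def perceived_cost(teaser, k=0.3):
    """Cost as a hyperbolic discounter feels it: month t is weighted by 1 / (1 + k*t)."""
    return sum(monthly_interest(t, teaser) / (1 + k * t) for t in range(1, 25))

for label, teaser in (("teaser card", True), ("flat-rate card", False)):
    print(f"{label}: total interest £{total_cost(teaser):.0f}, "
          f"perceived cost £{perceived_cost(teaser):.0f}")
# The teaser card costs more in total (£450 vs £360) yet feels cheaper
# (about £87 vs £99) because its charges fall in heavily discounted later months.
```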
And those standards are needed across the industry, as it’s not just the big techs that use data-enabled personalisation. “There’s a big trend to move towards contextual banking and contextual services,” Sucha says, adding that more advanced banks and financial services providers already use transactional data to time the offer of a credit card, for instance. They can even mine a customer’s communication style to determine a psychological profile and adjust their marketing efforts to fit.
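What such contextual timing might look like can be sketched very simply; the categories, thresholds and field names below are invented, and real providers’ rules are proprietary and far more elaborate.

```python
# Hypothetical sketch of timing a credit-card offer from transaction data:
# surface the prompt when recent spending suggests a costly, salient moment.
from datetime import date, timedelta

def should_offer_credit(transactions, today):
    """transactions: list of (date, category, amount) tuples for one customer."""
    recent = [t for t in transactions if today - t[0] <= timedelta(days=30)]
    big_ticket_spend = sum(amt for _d, cat, amt in recent if cat in ("airline", "hotel"))
    overdraft_events = sum(1 for _d, cat, _amt in recent if cat == "overdraft_fee")
    # Exactly the kind of targeting the article says needs common standards.
    return big_ticket_spend > 800 or overdraft_events >= 2

txns = [
    (date(2023, 9, 1), "airline", 650.0),
    (date(2023, 9, 3), "hotel", 420.0),
    (date(2023, 9, 10), "groceries", 85.0),
]
print(should_offer_credit(txns, today=date(2023, 9, 15)))  # True: £1,070 of travel spend in 30 days
```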

Responsibility and regulation

Simply put, the regulatory environment around big techs and consumer risk is a patchwork. Regulatory approaches differ across jurisdictions, and there is no global, cross-sector response to shield consumers from the heightened risks of scams and manipulation.
Take Section 230 of the US Communications Decency Act, part of the Telecommunications Act 1996, which protects online platforms from liability for user-generated content. Because it is a US law, it applies to some of the world’s biggest tech companies, and other jurisdictions have taken a similar approach. In the EU, for example, social media firms are deemed to be ‘mere conduits’ for user-generated content. Since Section 230 was enacted, its opponents have argued for removing or amending it to make social media sites more accountable for the content they host, which includes deceptive financial advertising.
Earlier this year, Barclays called for greater accountability from big techs after the bank revealed that four in every five scams it encounters originate from social media, online marketplaces or dating apps.
But even critics admit that changing the law is not straightforward, since, without some kind of immunity, platforms are likely to over-moderate to avoid legal risk.
In the UK, the new Digital Markets, Competition and Consumers Bill proposes to give the Competition and Markets Authority new powers to take on tech platforms with ‘pro-competition rules’.
And in financial services in particular, the Consumer Duty requires firms to design products and services that secure demonstrably good consumer outcomes, a requirement likely to catch digital manipulation in its net.
In the EU, the Digital Services Act regulates social media, introducing content-moderation rules and transparency requirements that US companies are already following. Meanwhile, the Digital Markets Act aims to stop big tech firms designated as ‘gatekeepers’ of core platform services from forcing people to use only their platforms and products, and to make it harder for them to track users online. Six gatekeepers have been designated: Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft.
Accusations against major digital platforms range from anti-competitive practices to the mass harvesting of user data and failure to address illegal or harmful content. Regulators are aware of the problems, but they move much more slowly than technology does. Arguably, fresh advances in artificial intelligence can only add to the challenges regulators face, with the prospect of ever more advanced systems that predict user behaviour in increasingly opaque ways.
Adriana Hamacher
Adriana Hamacher is an independent researcher in human-robot interaction and an award-winning writer specialising in emerging technologies. Her work has been featured by the BBC, Wired, Mashable and other media outlets.