Where’s the AI-based research in thought leadership today?

James Watson

You can’t open LinkedIn these days without coming across a new study about AI, with fresh ones seemingly launched on an almost daily basis. Far less obvious, though, is the number of studies where AI provides the primary research input.

To be clear, I’m not talking about market research aimed at helping a company make business decisions, where the use of AI is increasingly widespread. Rather, this is about new, published research or thought leadership that aims to provide a meaningful update on a topical business issue. Here, nearly all reports rely on the go-to method for B2B thought leadership: a survey. 

Take these five new studies profiled by the Global Thought Leadership Institute (GTLI) in the past week as an example: 

No sign of AI in any of that, despite plenty of findings about AI. There are some examples of AI-based research around. Most notably, this recent and fascinating study on HBR uses AI to analyse 1,388,711 job posts from an online freelancing platform to explore how demand for certain kinds of jobs changed between July 2021 and July 2023. It highlights how freelance work for writing and coding, among other things, dropped fairly sharply following the introduction of ChatGPT (yes, many people should be worried). It’s an excellent example of what a great AI-based B2B thought leadership piece could look like. But in this instance it’s an academic study, rather than a corporate research piece.
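
As a rough illustration of the shape such an analysis takes, the sketch below buckets job posts into categories and compares posting volumes before and after ChatGPT’s launch. The file name, column names and keyword rules are hypothetical, and the actual study used AI-based classification rather than simple keyword matching; treat this as a toy version, not a reconstruction of its method.

```python
# Toy sketch: compare freelance job-post volumes by category before and after
# ChatGPT's launch (30 Nov 2022). The file, columns and keyword rules are
# illustrative assumptions, not the HBR study's actual approach.
import pandas as pd

CATEGORIES = {
    "writing": ["copywriter", "blog", "article", "proofread"],
    "coding": ["developer", "python", "javascript", "api"],
}

def categorise(title: str) -> str:
    """Assign a post to the first category whose keywords appear in its title."""
    title = title.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in title for keyword in keywords):
            return category
    return "other"

# Hypothetical export of job posts with a posting date and a title column.
posts = pd.read_csv("job_posts.csv", parse_dates=["posted_at"])
posts["category"] = posts["title"].map(categorise)
posts["period"] = (posts["posted_at"] >= "2022-11-30").map(
    {False: "pre-ChatGPT", True: "post-ChatGPT"}
)

# Total post counts per category in each period, then the percentage change.
volumes = posts.groupby(["period", "category"]).size().unstack("period")
volumes["pct_change"] = (
    (volumes["post-ChatGPT"] - volumes["pre-ChatGPT"]) / volumes["pre-ChatGPT"] * 100
)
print(volumes.sort_values("pct_change"))
```

Even in this toy form, the hard parts sit outside the snippet: getting access to a credible data source in the first place, and classifying well over a million posts reliably.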

This isn’t a comprehensive review, but B2B examples of using AI within research are generally few and far between. This Counterparty Radar piece used machine learning to scrape and aggregate trading data across some 2,000 funds. Microsoft’s 2023 Will AI Fix Work? study analyses aggregate user behaviour across its 365 office suite, but largely as a side note to a wider survey. This earlier report from PwC scanned some 2.2m companies across the UK to determine their usage of AR/VR technology. All very interesting, but the absence of AI as a research input is more notable than anything else. 

All this is in stark contrast to the scientific community, where numerous papers are underpinned by AI-driven research. These include the whimsical, such as this example, which used AI to uncover novel links and relationships across 1,000 different scientific papers, surfacing structural parallels between biological materials and Beethoven’s 9th Symphony, among other things (who knew?). Far more usefully, this paper shows how AI can help detect and diagnose dementia, and do so better than human-only diagnosis. Most notably, Sir Demis Hassabis and John Jumper of Google DeepMind were two of the three joint recipients of last year’s Nobel Prize in chemistry, thanks to their work in predicting the structure of every known protein using the company’s AlphaFold AI tool.

Naturally, plenty of GenAI is used around the edges of B2B thought leadership, much of which is not publicised, for obvious reasons: drafting social media posts, generating ideas for survey questions, suggesting options for titles, and so on. But the core research inputs for thought leadership reports remain much the same as before: more often than not, they’re based on surveys. (There are of course many excellent examples of non-survey-based research, such as econometric indexes.)

This isn’t a new phenomenon when a new technology arrives. A decade or so ago, there was a proliferation of reports about the rise of big data and how it was going to change businesses. At the same time, it was nigh-on impossible to find reports actually drawing on analysis of big data to derive their findings.

So why might this be? 

First and most obviously, surveys work really well for B2B thought leadership. Many studies are intended to be inherently forward-looking, such as the WEF example mentioned above, which looks at the future of jobs. Polling companies to ask how their hiring plans are changing is a clear and relevant way to work this out, especially when paired with backwards-looking real-world jobs data.

Second, it’s rare that original data or research actually exists on some of the issues under discussion. For example, the Payhawk report mentioned above explores the technology gap within companies’ finance technology stacks. It’s exceedingly unlikely that data already exists on a topic like this, so a survey, paired with a number of one-to-one interviews, is a solid way to explore it.

Third, it’s very difficult to get AI-based research right. Designing a credible study based on non-survey data is a non-trivial challenge, and many organisations lack the skills and experience to pull it off. ChatGPT and other GenAI tools can deliver impressive insights, but they need to be very carefully considered and tightly prompted. And delivering something that’s actually newsworthy is an even higher-order challenge. The HBR example above, for instance, draws on expertise in identifying and accessing relevant data sources, extracting and structuring the data, analysing large data sets, and so on.
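
To make the “tightly prompted” point concrete, here is a minimal sketch of using a GenAI model as a constrained classifier on research text, assuming the OpenAI Python SDK and an API key in the environment. The model choice, label set and prompt wording are illustrative assumptions, not drawn from any of the studies above.

```python
# Minimal sketch: a GenAI model used as a tightly constrained classifier for
# research text (e.g. free-text survey responses or job descriptions). The
# model name, labels and prompt are illustrative, not from any cited study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["writing", "coding", "design", "admin", "other"]

SYSTEM_PROMPT = (
    "You classify freelance job descriptions. "
    f"Respond with exactly one label from this list: {', '.join(LABELS)}. "
    "If none clearly applies, respond with 'other'. Output the label only."
)

def classify(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,        # deterministic output for reproducibility
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "other"  # guard against off-list answers

print(classify("Need a Python developer to build a REST API for our CRM."))
```

Forcing the model to pick from a fixed label set, pinning the temperature to zero and guarding against off-list answers are the sort of controls tight prompting implies, and they still leave the harder work of sourcing, structuring and validating the underlying data.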

For now, though, expect more studies about AI, but relatively few relying on AI as the source of their insight.


Article first published here on FT Longitude



