Will agent-based research disrupt thought leadership surveys?

James Watson

A major topic in the global thought leadership industry right now is how AI will change, or even disrupt, the go-to input for nearly all B2B research: the survey.

A recent piece from Francis Hintermann, Accenture Research’s Global MD, argues that so-called ‘agent-based research’ (ABR) is going to disrupt surveying as we know it. Using ABR, you’d replace actual survey respondents with “digital proxies using publicly available data”. (This is an emerging field, so terms are not yet all nailed down, but it somewhat overlaps with synthetic data and agent-based modelling).

The benefits of ABR are obvious: it’s significantly faster and cheaper, because you don’t actually have to go through the hassle of convincing real people to answer a bunch of questions.

It’s true that the survey industry is ripe for disruption (and ABR seems to show promise in other domains), but it’s much less clear that this approach will displace B2B surveys for thought leadership studies anytime soon.

The launch day PR sanity check

One key reason is best explained by a simple sanity check that helps gauge the potential approach and impact of a planned study. Bear with me.

In this test, you visualise announcing your new study's findings to an audience of journalists. Focus on messaging in particular: what kind of headlines is this study likely to produce? Are they interesting and newsworthy? Do they seem relevant and credible? Can you describe the people you've surveyed, and does that group make sense? It's a simple and useful way to shape your thinking at the design phase.

Now use this method to announce your new agent-based research: "We've got some exciting findings from surveying hundreds of virtual personas or digital proxies, each of which was told to pretend to be a Fortune 500 CEO". It's highly likely that hands would be raised and questions asked: "So did you survey any real people, or are these all made up?", "How do you know that these responses are anything remotely like what real Fortune 500 CEOs would say?", and so on. Some journalists might run with it as an interesting novelty. Others might have a field day taking down your new study. Don't underestimate the risk.

The reality is that a key merit of the studies B2B companies conduct is that they've got sufficiently deep pockets to actually poll senior executives about the issues of the day, which other sources like the media usually can't do. It's not cheap. It takes time. It's not perfect. But it gives you something original, proprietary and credible to say. In contrast, reporting synthetic data from your digital agents is a hard sell.

The challenge increases when you expand ABR beyond roles such as the CEO or CFO, where there is a lot of source data to inform the behaviours of your digital proxies. Simulating the niche audiences often used in corporate thought leadership is a much longer shot (eg, retail sector treasurers, or supply chain leaders in the automobile sector). With little to no reference data to draw on, output quality will be poor.

ABR aside, there are numerous ways in which AI is changing how we do surveys: for example, speeding up the process of formulating hypotheses and drafting questionnaires (helpful, but needs human oversight) and speeding up analysis (can be useful, but also error-prone). But when it comes to the actual fieldwork and process of data collection, the status quo remains: there’s no shortage of thought leadership about AI, but there’s surprisingly little AI within thought leadership research.

Where ABR could help

Nevertheless, this discussion does raise an interesting question: where and when could ABR be useful in corporate research? Three possible options come to mind here:

  • To verify your planned campaign approach. For campaigns where you want to kick the tyres on your planned approach, ABR could help fine-tune it. This is similar to the dummy-data function available on some survey platforms, but (hopefully) with more realistic responses. Is this going in an interesting direction? Are we asking the right questions?
  • To create scenarios or simulations, for use as a storytelling device. There's definite potential to use ABR to help bring traditional scenario planning to life. For example, using your 500 simulated CEOs to see how they'd respond to certain situations (eg, tariffs go higher, trade wars erupt, etc), or other hypothetical what-ifs. It's editorially interesting, provided it is clearly defined and communicated as a simulation. This is probably the most compelling use case, given its wide-ranging applicability.
  • To dive into the why. An alternative would be to use ABR to get a better handle on what's driving certain responses in your survey, which typically relies on closed-ended questions. These questions tell you which way the wind is blowing, but it's often less clear as to "why". While you wouldn't necessarily report this additional input, this approach could help create a more nuanced understanding of why your study findings are as they are.
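For readers curious what "digital proxies" might look like mechanically, here is a minimal, purely illustrative sketch of the persona-prompting step behind ABR. Everything in it (the `Persona` structure, the prompt wording, the idea of sending each prompt to a language model) is an assumption for illustration, not a description of any specific platform or of Accenture's approach:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A hypothetical 'digital proxy' for a survey respondent."""
    role: str
    sector: str

def build_prompt(persona: Persona, question: str) -> str:
    # Frame the survey question from the persona's point of view.
    # In a real ABR pipeline, this prompt would be sent to an LLM
    # and the replies pooled into a synthetic dataset.
    return (f"You are a {persona.role} in the {persona.sector} sector. "
            f"Answer this survey question: {question}")

personas = [Persona("CEO", "retail"), Persona("CFO", "automotive")]
question = "How concerned are you about new tariffs? (1-5)"
prompts = [build_prompt(p, question) for p in personas]
```

The sketch also hints at the article's core objection: for niche audiences (retail sector treasurers, say), the model behind each persona simply has little real-world data to draw on, however well the prompt is framed.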

A final note: while the jury remains out on ABR, surveys have their own problems (beyond cost and time), and in particular quality remains a challenge, so your mileage can vary widely. They’re likely to remain the thought leadership industry’s go-to research tool, but this doesn’t mean they’re perfect.




Image: Photo by Compare Fibre on Unsplash

