
AI is a black box system.

That is to say, how it gets to the outputs you desire is unknown.


What it also means is that it doesn’t always do the same thing in the same way. That’s by design. It’s deciding on a case-by-case basis: this – the prompt – probably means this, and I can probably interpret this – the source/fodder – like that.


That means that, with the exact same prompt and source/fodder, it might go about the task in a different way, and you might then get a different emphasis.
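If you’re working through an API rather than a chat window, you can at least pin down the settings behind that variation. Here’s a minimal sketch, assuming the OpenAI Python SDK (the model name is a placeholder, and even a fixed seed is only a best-effort measure, not a guarantee):

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",    # placeholder: pin an exact model version in real use
    temperature=0,     # reduces (but does not remove) sampling randomness
    seed=42,           # best-effort repeatability, not a guarantee
    messages=[
        {"role": "system", "content": "You summarise qualitative survey responses."},
        {"role": "user", "content": "Give me the main themes from this: <source text>"},
    ],
)

print(response.choices[0].message.content)
```

Even with identical settings, you shouldn’t expect identical outputs; you’ve just narrowed the room for variation.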


A lot of the time, that’s not too much of a problem. From the outset, I saw the potential for large language models to summarise and compress lots of detail, especially the written and qualitative. If the story and main messages are clear enough, they will come through regardless, just with a slight difference in tone or emphasis.


There are, though, two issues:


1) A smaller one. If the story and main messages are not very clear, your AI tool of choice may still deliver back themes with exactly the same confidence – but you may not know that that confidence is misplaced.


2) The bigger issue is repeatability of results. If you just need to review some source material once, barring the above caveat, you should be good. But what if you need to do it again?


What if you want to re-run, but with a different emphasis? Maybe something as simple as now wanting it to combine two topics into one?


What if you wanted to analyse a transcript the same way as you did with the one from last week?


What if you need to repeat a whole process, three months down the line?


These apparently simple tasks are now more difficult. You don’t know what’s being compared and how.


None of this is impossible to get right by yourself; you just need to understand what’s going on, and prepare accordingly.


It’s changing the process from: “Give me the main themes from this” to something far more precise: “I’d like you to identify the frequency of these themes in this. I define themes like [this]. I’d like you to also report themes you think are ambiguous, i.e. the ones that you have difficulty classifying, and other themes that occur frequently but are outside of the list I have given you.” [Note: this is a fairly crude starting point, not prêt-à-prompter.]
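One way to make that precision repeatable is to fix the instruction as a template, so the same theme definitions travel with every run. A rough sketch in Python; the themes and wording below are placeholders, not a finished prompt:

```python
# Rough sketch: a reusable prompt template so every run uses the same
# theme definitions. The themes and wording here are placeholders.
THEMES = {
    "career growth": "mentions of progression, promotion, or skills development",
    "flexibility": "mentions of working hours, location, or work-life balance",
}

def build_prompt(source_text: str) -> str:
    definitions = "\n".join(f"- {name}: {desc}" for name, desc in THEMES.items())
    return (
        "Identify the frequency of the following themes in the text below.\n"
        f"Theme definitions:\n{definitions}\n\n"
        "Also report: (a) themes you find ambiguous, i.e. difficult to classify, "
        "and (b) themes that occur frequently but are outside this list.\n\n"
        f"Text:\n{source_text}"
    )

print(build_prompt("I love the flexible hours, but promotion feels slow."))
```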


It’s going to need iteration, and it’s going to need a close and critical human eye on all of the outputs, until you are satisfied. That takes time, a bit less so with experience.


And that’s why – in spite of the potential – something like a few hundred verbatim survey responses is still best done “by hand”, perhaps with some AIssistance. Once into the 1000s, the value is in setting the AI up for repeatable success.
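At that scale, it’s worth storing the exact prompt and settings alongside every output, so the re-run three months down the line is comparable with the original. A sketch of the idea; the helper functions here are stand-ins, not a real pipeline:

```python
# Sketch: run one pinned prompt over many responses and keep the exact
# settings next to the results, so later re-runs are comparable.
import json

SETTINGS = {"model": "gpt-4o", "temperature": 0, "seed": 42}  # placeholders

def build_prompt(text: str) -> str:
    # stand-in for the template sketch above
    return f"Identify the frequency of the defined themes in:\n{text}"

def call_model(prompt: str, **settings) -> str:
    # placeholder: swap in your API call of choice
    return "themes: ..."

responses = ["verbatim response one", "verbatim response two"]
results = [
    {"input": r, "output": call_model(build_prompt(r), **SETTINGS), "settings": SETTINGS}
    for r in responses
]

with open("theme_run.json", "w") as f:  # the file name is illustrative
    json.dump(results, f, indent=2)
```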
