Matt Ince*
Artificial Intelligence (AI) provides the foundation for major innovations across the intelligence cycle. This trend will probably continue over the coming years, creating additional analytic bandwidth and reshaping how intelligence products are crafted and communicated to their intended audiences.
However, despite its potential to drive transformational change within intelligence ecosystems, AI is not a viable substitute for the human in the loop. Highly trained, digitally savvy intelligence professionals will almost certainly remain in high demand.
The benefits of at-scale AI integration
AI has so far dominated the technology discussion in 2023. Tools like OpenAI’s text generator ChatGPT and image generators such as Stable Diffusion and DALL-E have been the subject of growing intrigue and excitement. Microsoft has unveiled an AI-enhanced version of its Bing search engine. And Alphabet, the parent company of Google, has announced its own AI-powered chatbot, Bard.
Many of these technologies can be used to enable and optimise how intelligence is collected, processed, analysed, and disseminated. If used correctly, large language models (LLMs) such as ChatGPT can save valuable analyst time by completing a number of essential intelligence collection and processing tasks. Examples might include scraping the web for the latest protest incident data, conducting literature reviews on emerging issues of geostrategic importance, or creating and curating routine open-source intelligence feeds.
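As a simple illustration of this kind of task handover, the sketch below shows how an analyst might use an LLM to condense a scraped open-source article into a short feed item. It is a minimal sketch only: the source URL and prompt wording are hypothetical, and it assumes the requests, beautifulsoup4 and openai Python packages are installed with an API key set in the environment.

```python
# Minimal sketch: summarise a scraped open-source article into a feed item.
# Assumptions: the URL is a hypothetical placeholder, the `requests`,
# `beautifulsoup4` and `openai` packages are installed, and OPENAI_API_KEY
# is set in the environment.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

ARTICLE_URL = "https://example.com/reports/latest-protest-activity"  # hypothetical source

def fetch_article_text(url: str) -> str:
    """Download a page and strip it down to plain text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.get_text(separator=" ", strip=True)

def summarise_for_feed(article_text: str) -> str:
    """Ask an LLM to condense the article into a short OSINT feed entry."""
    client = OpenAI()
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You summarise open-source reporting into concise, "
                        "neutral intelligence feed items of two to three sentences."},
            {"role": "user", "content": article_text[:8000]},  # stay within context limits
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(summarise_for_feed(fetch_article_text(ARTICLE_URL)))
```

Output from a pipeline like this would still need analyst review before informing any assessment, for the accuracy reasons discussed below.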
Emerging machine learning (ML) applications, such as natural language processing (NLP), will similarly reduce the need for analysts to undertake activities like speech-to-text transcription, voice identification, and language translation. These are tasks that generally take humans far longer to complete.
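To illustrate, the sketch below uses the open-source Whisper model to transcribe a foreign-language audio clip and produce an English translation in a second pass. The file path is a placeholder, and it assumes the openai-whisper package and its ffmpeg dependency are installed; voice identification is a separate speaker-recognition problem and would require a different model.

```python
# Minimal sketch: transcribe and translate an audio clip with open-source Whisper.
# Assumptions: the `openai-whisper` package and ffmpeg are installed, and
# "source_audio.mp3" is a placeholder path to a local audio file.
import whisper

model = whisper.load_model("base")  # larger models trade speed for accuracy

# Transcribe in the original language (Whisper detects it automatically).
transcript = model.transcribe("source_audio.mp3")
print("Detected language:", transcript["language"])
print("Transcript:", transcript["text"])

# Produce an English translation of the same audio in a second pass.
translation = model.transcribe("source_audio.mp3", task="translate")
print("English translation:", translation["text"])
```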
Incorporating AI and greater levels of automation into intelligence assessment processes is likely to enhance intelligence professionals’ sensemaking capabilities, as well as their ability to surface new insights for clients. Emerging AI-powered data analytics tools will help analysts to identify otherwise difficult-to-detect warning signs which, if they go unnoticed, increase the risk of strategic surprise.
With the help of AI-powered video creation tools, such as Lumen5, intelligence providers will be able to reimagine how strategic intelligence products and services are disseminated and communicated. Future AI engagement and feedback tools could similarly leave intelligence providers even better placed to anticipate the subjects their clients are most interested in, making it easier to tailor products to specific client information needs and preferences.
Risks, limitations and concerns
Despite the significant potential for AI to reshape the world of intelligence, pure AI – as compelling as it may often sound – has significant shortcomings in intelligence work. Using conversational AI for specialised research risks introducing inaccuracies and bias into intelligence assessments. ChatGPT and other LLMs produce text that is convincing, but often wrong, so their use can skew scientific facts and amplify falsehoods.
The lack of transparency, including around how AI applications reach their conclusions, is also a concern. Intelligence assessments rely on structured processes and clear explanations and reasoning. Client confidence in intelligence providers will therefore very likely decline if analysts are unable to explain how machine-based judgements are reached.
AI will enhance human intelligence, not replace it
The tools and applications that are defining the current age of AI will benefit almost every aspect of intelligence work. But they won’t be an adequate alternative to the human analyst, whose purpose is to make sense of complex global developments, contextualise key information for decision-makers and make judgements about the most likely outcomes. ChatGPT, despite its many benefits, lacks the general intelligence, emotional intelligence and creativity that are essential for these types of intelligence activities.
Moreover, AI cannot make credible judgement calls about the implications of highly consequential global events or replicate the rigorous analytic tradecraft which allows analysts to fill critical information gaps and produce high-impact anticipatory analysis. And while AI may be able to deliver and defend an analyst’s top analytic lines to clients, it cannot be held accountable for the judgements contained within them.
Current AI technology is also a long way from being able to exceed, or even match, the understanding and learning capacity of well-trained human analysts. Experts anticipate it will be several decades before AI systems exist that are generally smarter than humans. This currently hypothetical strand of AI is often referred to as Artificial General Intelligence (AGI).
Even with the arrival of AGI, it is unlikely that such a technology would be capable of developing the personal relationships required to obtain human intelligence (HUMINT). Or, for that matter, of building the levels of trust and confidence in intelligence assessments required for intelligence products to have their desired impact on the intended decision-maker.
People will therefore continue to be vital for completing a wide range of intelligence tasks. And those working in intelligence should be reassured that the role of the intelligence analyst is not at risk of simply being reduced to checking that AI-derived insights are accurate, appropriate, and fit for purpose.
Creating more strategic bandwidth
With AI lightening their workload, intelligence analysts will be better able to prioritise tasks of higher importance and greater complexity. This could include spending more time delivering client briefings, as well as understanding clients’ specific information requirements and the decisions their intelligence products are being used to inform. Or it could mean more time spent developing all-important human sources.
At-scale adoption of AI will also create more strategic bandwidth for analysts to make sense of developing situations and think critically about what the future security and geopolitical landscape will look like. This will only become more important against a backdrop of highly consequential strategic trends, such as accelerating climate change, de-globalisation, and rising authoritarianism, which most intelligence providers under-report compared with more routine operational issues.
By enabling increased productivity, AI technologies also present openings to advance other aspects of the people agenda within intelligence settings. This includes the potential for more flexible working patterns that can help to make the intelligence community a more diverse, equitable, and inclusive working environment – all prerequisites for attracting the brightest human minds into the sector.
Building AI literacy
To remain at the top of their game, intelligence professionals will, however, need to be able to use AI to maximum effect. The next-generation analyst must be capable of understanding and explaining AI-derived findings, as well as identifying and calling out when AI technologies fall short of delivering their desired outcomes.
Intelligence communities and the teams that consume the products we produce will both almost certainly need to proactively build baseline digital and data skills. For intelligence professionals, this will ensure that we can best harness AI and analytics tools in our analysis and supporting processes. Intelligence analysts who can build their digital capabilities and deploy these skills alongside other traditional intelligence competencies will almost certainly be better placed to add value within future intelligence environments.
For consumers of intelligence products, meanwhile, the net benefit will be the ability to become even more intelligent customers. Increased AI literacy will enable corporate security teams that use intelligence to better question how their providers are using AI and how the AI end product actually works, so that when buying services they can be confident those services are reliable and deliver what is needed.
The human in the loop
AI and related emerging technologies are already reshaping how intelligence is gathered, processed, and evaluated. And they will very likely continue to help further enhance the relevance, timeliness and actionability of assessment products from providers like Dragonfly in the coming months and years.
That said, although AI – along with automation and big data – will form an ever more important part of the intelligence practitioner’s toolkit, it will not replace the toolkit (or indeed the analyst). Research shows that humans and machines working together often perform better than either does alone. And human creativity and originality will almost certainly remain essential for producing relevant and innovative intelligence outputs.
As AI adoption increases, keeping intelligence practitioners in the loop will be critical to ensuring that AI hallucinations are picked up, and that intelligence products remain accurate, auditable, and strategic. It will also ensure that intelligence providers can continue to speak “truth to power” – albeit in a potentially more tailored way.
*Matt Ince is Strategic Intelligence Manager at the geopolitical and security intelligence service Dragonfly.