What do you think has sparked the use of artificial intelligence (AI) for IROs?
Looking at it from an IR perspective, IROs are a little bit on the back foot right now. AI has been around for quite a while – maybe not as front and center as ChatGPT – but so far it has mostly been used by the buy side.
If you look at the market right now, there is quite an asymmetry when it comes to the buy side using AI: some people are using it and some are not, so those who use it have far more granular information. As an IR executive, you’re basically facing a market in which some parties know a lot more than others – and those parties are the ones using AI. I believe that’s quite a worry for IROs: what type of impact is AI going to have on a company?
What kind of questions should an IRO be asking an AI system?
The first question is: should you use it at all? In my opinion, the most pragmatic way to use AI right now is to answer investor questions quickly. For example, we just used ChatGPT to answer investor questions and help write a draft email. The time it takes to draft and send a message with AI is short.
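To make that concrete, here is a minimal sketch of the kind of workflow described, assuming the official openai Python client and an API key in the environment; the model name, prompts and investor question are illustrative, not a record of what was actually used. And, per the caveat below, nothing regulated or confidential should go into such a prompt.

```python
# Illustrative sketch: drafting a reply to an investor question with an LLM.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Hypothetical inbound question from an investor.
investor_question = "How does the company plan to fund next year's capex?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You draft polite, factual investor-relations email replies. "
                "Use only publicly disclosed information."
            ),
        },
        {"role": "user", "content": investor_question},
    ],
)

draft_email = response.choices[0].message.content
print(draft_email)  # the IRO reviews and edits the draft before sending
```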
But there are companies right now using ChatGPT to draft press releases – not a good idea. ChatGPT is a learning platform, which means anything you put into it may be retained and used for something else. It’s also a third-party platform, so your security team will have your head on a platter if you decide to draft something regulated and confidential on an external, automated platform before the final version is published.
What are the challenges facing IROs who may use AI as part of their role?
The problem we’re dealing with here is that IROs see something like ChatGPT and may assume it will make their lives easier. But they’re wrong – you really must understand that you’re putting confidential information on a third-party platform.
One of the problems I foresee with AI is that some funds or buy-side parties can already make a very good guess at the content of your next trading update before you have even written it. I think that’s quite a big challenge for IR folks, because it removes [nuance] from how they can use it – unless a company builds its own AI technology, which is something else entirely.
What other technological developments should IROs be aware of?
We have AI that can measure sentiment during AGMs and analyst calls. There is also face-recognition software that can be used during Zoom meetings to read body language. When I’m talking to you, are there other signals in the way I talk? Am I touching my hair? Looking left, up, right? Am I lying to you? Do I feel insecure about a certain question that’s been asked?
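As an illustration of the sentiment side of this, here is a minimal sketch of scoring call-transcript excerpts, assuming the transcript is already available as plain text; it uses the open-source VADER analyzer from NLTK, and the excerpts are hypothetical. Real buy-side tooling is, of course, far more sophisticated.

```python
# Illustrative sketch: sentiment scoring of earnings-call transcript excerpts.
# Assumes `pip install nltk`; the VADER lexicon is downloaded on first run.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

# Hypothetical excerpts from an analyst-call Q&A.
transcript_segments = [
    "We are confident margins will recover in the second half.",
    "Guidance is withdrawn until we have better visibility.",
]

for segment in transcript_segments:
    scores = analyzer.polarity_scores(segment)
    # 'compound' ranges from -1 (most negative) to +1 (most positive).
    print(f"{scores['compound']:+.2f}  {segment}")
```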
Should an IRO use AI as part of the role?
I think there is an ethical question here: where am I going to use this technology, and is it fair to use it? For people tuning in to analyst calls or an AGM, for example, do they know whether AI is being let loose? Parties using AI can react faster and more accurately to situations, in my opinion, so it disrupts the level playing field: if I’m a big fund with a lot of money to invest in AI, I may have an advantage over others when it comes to buying or selling stock.
For people just looking at graphs and data, there’s no way they can calculate all of that as fast as AI can, though you could also argue that if I have three Harvard graduates on the case, I also have that information. But the real question is: does AI really provide such a huge advantage that it could become dangerous? I think it could.