AI tools aren’t perfect. And many of their limitations you’ve likely encountered already. Here are some of its glaring shortcomings:
1. No competitive advantage to using commonly available tools
As a competitive intelligence pro, you’re no doubt familiar with the logical necessity that, without variation, there can be no competitive advantage.
Differentiation is the lifeblood of any competitive strategy. No matter whether it’s in how you price your products, or the problems they solve for your customers.
There might have been a time when having all of your staff use computers constituted a competitive advantage. Today, their use is so commonplace, it’d be odd if a business wasn’t buying high-end computers for all its staff.
Feasibly, the same is set to happen with AI. And AI business models focusing on accessibility and widespread adoption only make that outcome more likely.
In short, if all your competitors are using AI, you gain no competitive advantage over them by using it yourself. Only competitive parity.
This is not, of course, a strong argument for dismissing AI and its place in business going forward.
Don’t overlook AI and its burgeoning utility. Just think of what the top developer at your business can do with a computer, compared with the least technically proficient person you know.
With tools of sufficient complexity, the competitive advantage becomes wrapped up in how you use them. So, learn to use generative AI to its fullest extent. This way, you become the differentiating variable. You become the competitive advantage.

2. Hallucinations
You’re probably already familiar with the “hallucination” phenomenon. Given the reality of how GPTs work, it’s possible for the AI to, essentially, start making things up sometimes.
Hallucinations are a crucial limitation of generative AI and GPTs. Not least because they become more common when you ask the AI to think, speculate, or make predictions.
Remember that creativity, as it pertains to AI, is a function of the model’s training data, and the patterns, styles, and structures it learns from it, as well as artificially injected randomness via the temperature variable.
When you ask the AI to think for itself, you’re necessarily asking it to stray from its training data into the unknown and the unconventional. The AI can use this randomness to mimic patterns in speculative text it has been trained on.
It appears to be making things up because it is literally drawing on randomness as the best possible solution for generating speculative responses, and it struggles to differentiate between innovation and pure factual inaccuracy.
This has deep ramifications for your requests to an LLM to “predict what will happen in the next 12 months in my competitive landscape,” or for it to “perform a competitive analysis on our top five competitors and offer strategic guidance on how we can maximize our competitive advantage.”
So long as you’re asking the AI to form conclusions based only on vetted data, you can probably be confident in its conclusions. But when you ask the AI to make predictions, speculate, or otherwise move away from concrete, trusted data, you’re opening the door for random output.
According to ChatGPT itself, the following are the types of prompts most likely to lead to hallucination:
- Requests for highly specific factual claims or about future events.
- Requests about very recent events or cutting-edge developments.
- Deeply technical or specialized knowledge outside mainstream coverage.
- Hypothetical scenarios with complex variables.
- Extremely detailed creative writing requests.
- Misleading requests, or those based on misinformation.
- Ambiguous or contradictory information provided in the prompt.
“AI has its faults. For now, you still need a human in the loop to edit the content, and vet everything. When you’re talking about something like competitive intelligence, where that data can inform major business decisions, or a specific person taking on a sales opportunity, you need a human to be able to vet that AI and its output to make sure everything is okay.” – Ben Hoffman, Senior Manager of Competitive Intelligence, Adobe
3. Incorrect training data
LLMs first go through unsupervised learning. The LLM is let loose on huge swathes of data, and it processes this information to understand probabilistic relationships between units, or tokens, of that data.
What follows is supervised learning. During this stage, human beings are contracted to write varied answers to a series of questions. The LLM is then trained on these human-generated responses to learn what “good”, or “correct”, looks like for various types of queries and requests.
Crucially, where humans are involved, human error can rear its ugly head. Even something as simple as a few copy-paste errors can lead to incorrect responses to questions the LLM is then trained on.
Just as, in competitive intelligence, your conclusions are only as strong as the data they spring from, an LLM can only be as good as the data it’s trained on. When even a small percentage of an LLM’s training data is factually inaccurate, it could throw off the accuracy of its responses in critical situations. This is a classic case of increased complexity correlating with increased probability of error.
There is some debate about this, with some considering so-called “label noise” a necessary, unavoidable evil that affects even human learning. But it underscores the reasons why we shouldn’t blindly trust AI-generated answers.

4. Data privacy and security concerns
An LLM can only be as good as the data it was trained on. For this reason, data is a critical component in making AI useful.
What’s more, specialized datasets open the door for specialized responses, and can differentiate the tool’s ability in your hands versus the hands of your competitors, posing a possible avenue for competitive advantage.
So it stands to reason that you’d want to provide the LLM with data that is not available to all, either purely for context, or to fine-tune it to make it more fit for particular tasks.
This means data on your market, your business, and your customers. But there are privacy and security concerns when you choose to do this.
“I wouldn’t put any proprietary information into a tool where you haven’t done the due diligence to ensure it’s closed off. I wouldn’t trust putting my company’s information or my personal information into the free ChatGPT. If it’s more generic information, that’s different. And if you’re paying for something, and you know that there are security measures in place, I’d feel more comfortable with that.” – Ben Hoffman, Senior Manager of Competitive Intelligence, Adobe
The problem:
By default, AI tools like ChatGPT store the prompts you give them, as well as any additional data or information you upload, for use in improving their models.
It’s feasible that, if you uploaded proprietary data, that information would eventually be used to train and improve the LLM, baking that information into its knowledge base. Your competitors, or anyone else, could potentially then benefit from that knowledge.
What you can do about it:
ChatGPT has a “Chat history & training” setting you can deselect. Doing so opts you out of having anything in your chats used to improve OpenAI’s models.
Also, according to OpenAI’s own documentation, they “don’t use content from our business offerings such as ChatGPT Team, ChatGPT Enterprise, and our API Platform to train our models.”
Judging by the text on the above reference webpage, OpenAI have, for a while, been honoring requests to not use data for certain accounts to train their models. This seems promising, but whether you trust it or not is up to you.
So... should you trust them?
There’s arguably too much at stake when it comes to your obligations to safeguarding the privacy of your customers, and your own intellectual property, for you to risk uploading proprietary data to a public LLM even with this option deselected.
Using synthetic data is a possible solution. Another is to run an instance of the LLM in a sandbox environment, hosted on a private server only you have access to. This way, you can be 100% sure your data is private and secure. But these are complex tasks, best handled by a specialist.
Threats and opportunities grow in tandem
As if that weren’t enough, various LLM-based exploits have already been documented. According to the nonprofit application security organization, OWASP, “the breakneck speed at which development teams are adopting LLMs has outpaced the establishment of comprehensive security protocols, leaving many applications vulnerable to high-risk issues.”
OWASP’s Top 10 for LLMs report documents prompt injection, insecure plugin design, and excessive agency as just a few of the exploits already in play that you, and your system administrators, should be aware of before implementing a version of an LLM that has access to your business’ proprietary data.

5. The depth problem
Finally, there is the classic AI “depth problem”, where generative AI can’t go deep enough into a specific problem or task to return a genuinely useful response.
According to Allen Borts, Technical Marketing Director of ADSC, there are three possible solutions to this problem:
The first solution is the default: to use a pre-trained model like ChatGPT that has been trained on a very large set of text data. The larger the training data set, the deeper the AI should be able to go into any task, and the more useful and meaningful any generated responses should be.
However, if you don’t trust the company behind the AI with your data, then this is not sufficient.
A second solution is to use subject-matter experts trained to ask the right questions, with enough prompt training to guide the AI to a meaningful response. This should improve the output of even lesser trained models.
For the most useful responses possible, it’s probably necessary to meet both of the above conditions. But both are subject to the same data privacy concerns.
The third possible solution is to use an integration, or a purpose-built generative AI agent, to extend the capabilities of the pre-trained model. Alternatively, as discussed, you can run an instance of the LLM on a secure, private server, or use synthetic data.
These solutions give you peace of mind over the security and privacy of your intellectual property, but they are not simple, and are therefore not cheap.
For more insights into AI in competitive intelligence, download the ebook. It’ll help you optimize prompts the right way to get useful answers, save you time, and so much more.