This is a direct opinion based on what we observe: most corporate chatbots deployed at mid-sized and large companies deliver no real value. They are experiments that someone labelled 'implementation' before they were ready. And their worst damage is not financial: it is the trust they burn inside the organisation for future AI projects.
The problem is not the technology
Current language models are capable of generating real value in business contexts. The problem is not the model: it is the deployment. A chatbot that answers generic questions with no access to the company's real systems, no up-to-date data and no integration into the actual workflow is not a corporate chatbot. It is a worse version of a Google search.
How to recognise a chatbot that does not work
- No access to real company data (only to static documentation)
- Nobody measures how many questions it resolves fully vs how many it escalates to a person
- The team tried it the first week and no longer uses it
- It responds the same to questions about your company as to generic industry questions
- Not integrated into the channel the team already uses (Teams, Slack, email, ERP)
- No improvement process based on questions it failed to answer
When it does make sense
- Access to updated internal data: knowledge base, CRM, ERP, product documentation
- Solves a specific repetitive task someone used to do manually
- Resolution metrics (how many queries it completes without escalating) reviewed weekly
- Integrated into the channel where work already happens, not a new platform
- A feedback process exists to improve responses based on real failures
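The resolution metric in the list above can be made concrete with a minimal sketch. The `Interaction` record and its field names here are hypothetical, for illustration only; a real deployment would pull this from the bot's own logging system:

```python
from dataclasses import dataclass

# Illustrative sketch (assumed log shape, not a real chatbot API):
# each interaction records whether the bot escalated to a human.

@dataclass
class Interaction:
    query: str
    escalated: bool  # True if handed off to a person

def resolution_rate(interactions: list[Interaction]) -> float:
    """Share of queries the bot completed without escalating."""
    if not interactions:
        return 0.0
    resolved = sum(1 for i in interactions if not i.escalated)
    return resolved / len(interactions)

# A week of (fabricated) traffic: 3 resolved, 1 escalated.
week = [
    Interaction("Where is invoice status shown in the ERP?", escalated=False),
    Interaction("Change my contract terms", escalated=True),
    Interaction("What is our refund policy?", escalated=False),
    Interaction("Reset my ERP password", escalated=False),
]
print(f"Resolution rate: {resolution_rate(week):.0%}")  # → 75%
```

The point is not the code, which is trivial, but the habit: if this number is not computed and reviewed weekly, nobody knows whether the bot is working.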
What makes the difference
The difference between a chatbot that works and one that does not is not the AI model: it is the use case design, the quality of data it has access to and the integration into the real workflow.
| Chatbot that fails | Chatbot that works |
| --- | --- |
| Answers generic industry questions | Resolves specific questions about your operations |
| No access to real company data | Connected to your real, updated systems |
| Nobody measures how many questions it resolves | Resolution and escalation metrics reviewed weekly |
| The team uses it once and abandons it | Integrated into the channel the team already uses |
If your company is thinking about implementing a chatbot, the most useful question is not 'which model do we use?' but 'which specific process will it improve and how will we measure that?'. Without that answer, the project should wait.
We work on AI projects only when there is a concrete use case, available data and defined success metrics before we start.
See AI projects with real use cases