We Built It Because Nobody Else Would Tell the Truth
Every AI development company talks about building intelligent agents. Very few show you one running on their own website. Even fewer tell you what actually went into building it — the failures, the iterations, the decisions that seemed smart at the time and turned out to be wrong.
So here is the full story. The Entexis AI assistant — the chatbot you can open right now in the bottom-right corner of this page — took four major iterations to get to where it is today. It started as a weekend experiment and became one of the most valuable tools on our website. Not because the technology is impressive (it is, but that is not the point) — but because it taught us things about our visitors that two years of Google Analytics never did.
This is not a marketing case study. This is a build log. What we tried, what broke, what we changed, and what we learned. If you are thinking about building an AI agent for your business, this will save you months of trial and error.
The four versions, in one line each:
V1: Did nothing with the conversations.
V2: Too aggressive on leads.
V3: Started being useful.
V4: Still improving weekly.
Version 1: The "Let Us Just Try It" Phase
The first version was embarrassingly simple. We took an off-the-shelf LLM, wrote a system prompt that said "You are the Entexis assistant, answer questions about our services," and pointed it at our website content. It took about a day to set up.
And honestly? It sort of worked. If someone asked "What services does Entexis offer?", it would list our services. If someone asked about case studies, it would mention a few. It was better than having no chatbot at all.
But it had problems that became obvious within the first week:
It hallucinated confidently. Someone asked about our pricing, and it made up numbers. Specific numbers. Plausible-sounding numbers that were completely wrong. This is the nightmare scenario — your own AI telling potential clients incorrect pricing information with full confidence.
It answered anything. Someone asked it to write a poem about the moon. It did. Beautifully. On our API bill. Someone else asked it to help with their Python homework. It obliged. We were essentially running a free AI assistant for anyone who visited our website.
It did nothing with the conversations. People were asking detailed questions about pricing, timelines, and specific technical requirements — clear buying signals — and the chatbot would answer helpfully and then... nothing. No lead capture. No notification to our team. Those conversations disappeared into the void.
In the first two weeks, we spent more on API calls for off-topic requests than on actual business conversations. One user had a 45-minute conversation about machine learning theory. Another asked the bot to compare iPhone models. We were paying for these conversations. That is when we realized guardrails are not optional — they are day one infrastructure.
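A day-one guardrail can be as simple as routing each message through an on-topic check before it ever reaches the LLM. The sketch below uses a keyword heuristic for illustration; production systems typically use a cheap classifier model instead, and every name here is hypothetical rather than Entexis's actual code.

```python
# Illustrative guardrail: decide whether a message is on-topic before
# spending LLM tokens on it. Keyword heuristic; a real system would
# usually use a small classifier model.
ON_TOPIC_TERMS = {
    "service", "services", "pricing", "quote", "project", "crm",
    "saas", "development", "case study", "timeline", "entexis",
}

OFF_TOPIC_REPLY = (
    "I'm here to help with questions about Entexis and our services. "
    "Is there something about your project I can help with?"
)

def route_message(message: str) -> str:
    """Return 'llm' to forward the message, or a canned refusal."""
    text = message.lower()
    if any(term in text for term in ON_TOPIC_TERMS):
        return "llm"
    return OFF_TOPIC_REPLY

print(route_message("What does SaaS development cost?"))  # -> llm
print(route_message("Write me a poem about the moon."))   # canned refusal
```

The point is not the keyword list; it is that the check runs before the paid API call, so poems and homework never hit your bill.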
Version 2: Adding a Brain and a Purpose
Version 2 was a complete rebuild. We made three fundamental changes:
1. RAG Instead of Raw Prompting
Instead of hoping the LLM remembered our website content from its training data (it did not, reliably), we built a RAG pipeline. We crawled every page on our website — service pages, case studies, industry pages, blog posts, the about page, the process page, even the contact page. We chunked the content into manageable pieces and stored them in a knowledge base.
When a visitor asks a question, the system pulls the most relevant chunks and injects them into the conversation context. The AI generates responses grounded in our actual content, not its general training data. Hallucination dropped dramatically — not to zero, but to a level we could manage with guardrails.
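The retrieve-and-inject step can be sketched as follows. Real pipelines score chunks with vector embeddings; plain word overlap stands in here so the example stays self-contained, and the prompt wording is illustrative, not our production prompt.

```python
# Minimal RAG sketch: score stored chunks against the question, take
# the top-k, and inject them as grounding context. Word overlap is a
# stand-in for embedding similarity.
def score(question: str, chunk: str) -> int:
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    context = "\n---\n".join(retrieve(question, chunks))
    return (
        "Answer using ONLY the context below. If the answer is not in "
        f"the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

chunks = [
    "Entexis offers SaaS development, CRM development, and AI agents.",
    "Our process starts with a discovery call and a written proposal.",
    "Blog: why RAG reduces hallucination in production chatbots.",
]
print(build_prompt("What services does Entexis offer?", chunks))
```

The "ONLY the context" instruction is what pushes hallucination down: the model is told to refuse rather than improvise when retrieval comes back empty.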
2. Conversation Tracking and Lead Capture
Every conversation now gets tracked — session ID, visitor IP, page URL, message history. We added a lead capture form that pops up after a few messages, asking for name and email. We also built auto-detection: if a visitor mentions their email address in conversation, the system captures it automatically and creates a lead in our CRM.
3. A Real System Prompt
The V1 system prompt was one sentence. The V2 prompt was a full page — covering tone of voice, what to say about pricing (never quote specific numbers), how to handle project discussions (ask about their needs first, do not rush to sell), when to suggest sharing contact details, and what to do with off-topic questions.
This version was significantly better. But it had a new problem we did not anticipate.
It was too aggressive with lead capture. The form popping up after exactly 3 messages felt robotic. Someone would ask two genuine questions, and before they could ask a third — bam, "Before we continue, can I get your details?" It felt like talking to a salesperson who asks for your business card before you have finished your first sentence.
Lead capture timing is not about message count. It is about intent. Someone asking "What is your email?" on their first message is ready to connect. Someone asking "How does SaaS development work?" on their fifth message is still researching. The trigger should be contextual, not numerical.
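A contextual trigger might look like the sketch below: fire on explicit buying-intent signals, and fall back to a high message count only as a last resort. The signal phrases and threshold are illustrative assumptions, not our actual rules.

```python
# Sketch of a contextual lead-capture trigger: ask for details on
# intent signals, not on a fixed message count.
INTENT_SIGNALS = [
    "your email", "contact you", "get a quote", "how much",
    "start a project", "hire you", "talk to someone",
]

def should_capture_lead(message: str, message_count: int) -> bool:
    text = message.lower()
    if any(signal in text for signal in INTENT_SIGNALS):
        return True                # explicit intent: ask right away
    return message_count >= 8      # fallback: only after a long conversation

print(should_capture_lead("What is your email?", 1))              # True
print(should_capture_lead("How does SaaS development work?", 5))  # False
```

The researcher asking their fifth educational question never sees the form; the visitor asking how to get a quote sees it on message one.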
Version 3: Context Is Everything
Version 3 added two features that transformed the chatbot from "useful" to "genuinely valuable."
Page Awareness
The chatbot now knows which page the visitor is on. If someone is on our CRM Development page and asks "How do you approach this?", the bot does not give a generic answer about software development — it talks specifically about CRM architecture, industry-specific data models, and our CRM case studies.
If someone is on the Contact page, the bot knows they are ready to engage and adjusts its tone accordingly — more direct, more action-oriented. If they are reading a blog post, it stays educational and naturally mentions relevant services.
This single feature improved the relevance of responses more than any other change we made. The same question — "Tell me more" — generates completely different answers depending on where the visitor is. That is how human conversations work, and it is how AI conversations should work.
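Mechanically, page awareness can be as simple as prepending page-specific context to the system prompt, so the same question is answered against different framing. The page map and prompt text below are illustrative, not our production configuration.

```python
# Sketch of page-aware prompting: the visitor's URL selects extra
# context that is prepended to the system prompt.
PAGE_CONTEXT = {
    "/services/crm-development": (
        "The visitor is on the CRM Development page. Focus on CRM "
        "architecture, data models, and CRM case studies."
    ),
    "/contact": (
        "The visitor is on the Contact page and likely ready to engage. "
        "Be direct and action-oriented."
    ),
}

BASE_PROMPT = "You are the Entexis assistant. Answer from the provided context."

def system_prompt(page_url: str) -> str:
    context = PAGE_CONTEXT.get(page_url, "The visitor is browsing the site.")
    return f"{BASE_PROMPT}\n\nPage context: {context}"

print(system_prompt("/services/crm-development"))
print(system_prompt("/contact"))
```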
Links to Relevant Pages
This sounds obvious in hindsight, but Version 2 would mention our services without linking to them. "We offer SaaS Development" — but no way for the visitor to actually visit the SaaS Development page. They would have to navigate there manually.
Version 3 injects all our page URLs into the system prompt. When the bot mentions SaaS Development, it includes a clickable link. When it references a case study, it links to the case study page. When it suggests contacting us, it links to the contact page. Every mention of a service, industry, or case study becomes a navigation opportunity.
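One way to do this injection is to render a page directory into the system prompt and instruct the model to link every mention. The URLs below are placeholders, not our real routes.

```python
# Sketch of URL injection: list canonical page URLs in the system
# prompt so the model can turn every service mention into a link.
PAGES = {
    "SaaS Development": "https://entexis.example/services/saas-development",
    "CRM Development": "https://entexis.example/services/crm-development",
    "Contact": "https://entexis.example/contact",
}

def url_directory() -> str:
    lines = [f"- {name}: {url}" for name, url in PAGES.items()]
    return (
        "When you mention any of these pages, include the link as a "
        "markdown link:\n" + "\n".join(lines)
    )

print(url_directory())
```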
This turned the chatbot from a dead-end conversation into a guided tour of our website.
Version 4: The Production Agent
Version 4 is what you are talking to right now. It added the layer that separates a chatbot from a production system: guardrails, intelligence, and operational maturity.
What V4 Added
- Chat widget (Shadow DOM)
- Page URL detection
- Session persistence
- Conversation history
- Suggested questions
- 271 knowledge chunks
- Page-aware prompting
- Off-topic blocking
- Pricing protection
- URL injection
- Lead capture (contextual)
- CRM integration
- Email notifications
- Conversation logging
- Rating feedback
The Knowledge Base — The Part Nobody Talks About
Everyone focuses on the LLM. Nobody talks about the knowledge base. But the knowledge base is what makes your agent yours — without it, you just have a generic AI with a custom name tag.
Our knowledge base has 63 sources and 271 chunks. Here is what that actually means:
The auto-crawled content covers what we do — services, industries, case studies, blog posts. But the manual entries are what make the agent genuinely useful. These are the things that exist in your team's heads but not on your website:
Pricing and engagement models. Not specific numbers — but how we structure engagements. Fixed price, time and material, monthly retainer. What each model is best for. How to get a quote.
Team structure and working hours. Our timezone, communication preferences, project management approach. Things a visitor might ask that no web page covers.
Frequently asked questions that are not on the FAQ page. Minimum project size, NDA policies, post-launch support, technology stack preferences. The questions your sales team answers in every single first call — put those in the knowledge base, and your agent handles them before the call even happens.
Upwork profile and track record. Many of our enquiries come from people who found us on Upwork. The agent can discuss our Upwork history, ratings, and completed projects.
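One practical detail: manual entries can live in the same store as crawled chunks and be retrieved identically, so there is no special-casing at query time. The field names below are an illustrative assumption, not our actual schema.

```python
# Sketch of a knowledge base that mixes crawled and manual entries.
manual_entries = [
    {
        "topic": "engagement models",
        "text": (
            "Entexis structures engagements as fixed price, time and "
            "material, or monthly retainer. Quotes follow a discovery call."
        ),
        "source": "manual",
    },
    {
        "topic": "nda policy",
        "text": "Entexis signs NDAs before discussing project details.",
        "source": "manual",
    },
]

def kb_chunks(crawled: list[dict], manual: list[dict]) -> list[dict]:
    # Manual entries are retrieved exactly like crawled ones; the
    # "source" field only matters for maintenance and auditing.
    return crawled + manual

crawled = [{"topic": "services", "text": "SaaS, CRM, AI agents.", "source": "crawl"}]
print(len(kb_chunks(crawled, manual_entries)))  # 3
```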
The questions your visitors actually ask are not the questions you put on your FAQ page. Read your conversation logs for two weeks before designing your knowledge base. You will be surprised by what people want to know — and how differently they phrase it from what you expected.
What We Got Wrong — And Would Do Differently
Transparency time. Here are the lessons that cost us time and money to learn — the things we would do from day one next time:
- Define what the agent must NOT do before anything else.
- Add the manual FAQs your team answers daily, not just crawled content.
- Build links in responses and conversation history from the start.
- Wire up CRM integration and team notifications early.
- Collect rating feedback and improve constantly.
What the Conversation Logs Taught Us
This is the part I find most fascinating. After months of reading conversation logs, here is what we learned about our visitors that we could never have learned from traditional analytics:
People compare us to companies we have never heard of. Multiple visitors asked how we compare to specific agencies in Bangalore and Delhi that we did not know existed. That is competitive intelligence you cannot get from any other source.
The questions people ask are not the questions we anticipated. We expected questions about our services and pricing. We got questions about our team size, our remote work policy, whether we sign NDAs, what happens after launch, and whether we work with startups or only enterprises. We added all of these to the knowledge base.
Visitors from different pages have completely different intent. Someone on the homepage is exploring. Someone on a case study page is evaluating. Someone on the contact page is ready to act. The same chatbot needs to handle all three — and the page awareness feature makes this possible.
The first question someone asks predicts whether they will convert. Visitors who start with a specific problem — "We need a CRM for our brokerage" — convert at a dramatically higher rate than those who start with general exploration — "What do you do?" This insight now informs how we prioritize lead follow-ups.
Try It Yourself
The chatbot is live on this page. Click the icon in the bottom-right corner and test it. Ask it about our services. Ask it something off-topic and see how it handles it. Ask about pricing and watch how it redirects. Try to break it — we have, extensively, and we would love to know if you find something we missed.
If the broader question behind this build is what teams are actually doing with AI agents in 2026 — across chatbots, workflow automation, and autonomous systems — read the companion piece: AI Agents in 2026: What Businesses Are Actually Building — From Chatbots to Autonomous Workflows.
If the question is narrower and more practical — whether your business website actually needs an AI chatbot in the first place — read the companion piece: Why Every Business Website Needs an AI Chatbot in 2026.
And if the technical foundation — RAG, retrieval augmented generation, the architecture that made the Entexis agent actually accurate — is the part you want to understand, read the companion piece: What Is RAG and Why Every Business Should Care.
The AI agent is not the product. The conversations are the product. Every chat log is a window into what the market actually wants, how buyers think about your services, and what questions stand between them and becoming customers. The technology enables the conversations. The conversations generate the insights. The insights improve everything — the agent, the marketing, the sales process, the service positioning. That feedback loop is the real value of building an AI agent, and the part most teams discover only after they ship one.
At Entexis, we build AI agents, website chatbots, and lead-qualification systems for businesses across fintech, real estate, NGOs, and e-commerce — with RAG architecture, guardrails, page awareness, and conversation analytics baked in from day one. If you are scoping a production AI agent and want a team that has already made the expensive mistakes, let us run you through a no-pressure discovery session. Start the conversation with Entexis.