Ship an AI assistant for customer support
Our support team handles 2,000 tickets per week. About 40% are common questions that could be answered by an AI assistant grounded in our docs and past tickets. I'm proposing we build a chatbot that handles tier-1 support, with a seamless handoff to a human when it can't answer.
We could use GPT-4o-mini for the LLM, RAG over our knowledge base for grounding, and keep a human in the loop for anything the model flags as uncertain.
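Roughly the shape I have in mind, as a minimal sketch (the `retrieve_docs` helper and the ESCALATE convention are placeholders, not a working implementation):

```python
# Sketch of the proposed flow: RAG-grounded answer with an explicit human handoff.
# retrieve_docs() is a hypothetical KB retrieval helper; the escalation signal is illustrative.
from openai import OpenAI

client = OpenAI()

def answer_ticket(question: str, retrieve_docs) -> dict:
    docs = retrieve_docs(question, k=5)   # hypothetical: top-5 KB passages
    context = "\n\n".join(docs)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Answer using ONLY the context below. If the context does not "
                "contain the answer, reply with exactly: ESCALATE.\n\n" + context)},
            {"role": "user", "content": question},
        ],
    )
    answer = resp.choices[0].message.content.strip()
    if answer == "ESCALATE":
        return {"handoff": True, "answer": None}   # route the ticket to a human agent
    return {"handoff": False, "answer": answer}
```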
Comments (9)
Good pushback all around. I think the agent-assist approach is a smart middle ground. Could we prototype both and A/B test them, with the customer-facing version carrying a very prominent 'this is AI' disclosure and an easy path to human escalation?
Building on the agent-assist idea — what if we used AI to generate a first draft of the knowledge base article after a novel issue is resolved? That way the AI improves the docs passively rather than answering customers directly.
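Concretely, something like this could run on ticket close (model, prompt, and helper names are illustrative; the draft always lands in a human review queue):

```python
# Sketch: turn a resolved ticket thread into a draft KB article for human review.
# `ticket_thread` is the full conversation text; nothing is ever auto-published.
from openai import OpenAI

client = OpenAI()

def draft_kb_article(ticket_thread: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "Summarize this resolved support ticket as a draft knowledge-base "
                "article with a title, the symptom, the root cause, and the fix. "
                "A human editor will review it before publishing.")},
            {"role": "user", "content": ticket_thread},
        ],
    )
    return resp.choices[0].message.content   # goes to a review queue, not the docs site
```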
I'd start even simpler: use AI to categorize and route tickets to the right specialist. Right now tickets sit in a general queue and get manually triaged. Automated routing alone could cut resolution time by 30% without any customer-facing AI.
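For illustration, a minimal triage sketch (the category list is invented; any classifier with constrained output would do):

```python
# Sketch: LLM-based triage that maps an incoming ticket to one of our existing queues.
# CATEGORIES is illustrative; constraining the output keeps routing deterministic.
from openai import OpenAI

CATEGORIES = ["billing", "bug_report", "account_access", "how_to", "other"]

client = OpenAI()

def route_ticket(subject: str, body: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system", "content": (
                "Classify the support ticket into exactly one of: "
                + ", ".join(CATEGORIES) + ". Reply with the category name only.")},
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in CATEGORIES else "other"   # fall back to the general queue
```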
We need to think about this from a data privacy angle. Customer tickets contain PII, account details, sometimes financial data. Are we comfortable sending that to OpenAI's API? What does our privacy policy say about third-party AI processing?
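If we do send anything to a third-party API, at minimum we'd redact first. A crude sketch with illustrative regex patterns; a real deployment would want a proper PII/DLP service rather than hand-rolled regexes:

```python
# Sketch: crude regex redaction applied before any text leaves our infrastructure.
# These patterns are illustrative and will miss plenty; treat as a placeholder.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```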
Cost analysis: GPT-4o-mini is cheap per call, but at 2,000 tickets/week with multi-turn conversations, RAG retrieval, and embedding generation, you're looking at $3-5K/month in API costs. That's less than one support agent's salary, so the math works if it actually deflects tickets.
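Worth sanity-checking against current list prices. A back-of-envelope script where every parameter is an assumption to adjust; under these token counts the raw model calls come out well below the $3-5K figure, so most of that budget would presumably go to embedding refreshes, vector-store hosting, retries, and headroom:

```python
# Back-of-envelope API cost check. Every number below is an assumption to adjust;
# per-1M-token prices change, so verify against the current price sheet.
TICKETS_PER_MONTH   = 2000 * 52 / 12   # ~8,667 tickets/month
CALLS_PER_TICKET    = 6                # assumed multi-turn conversation length
INPUT_TOK_PER_CALL  = 5000             # assumed RAG context + chat history
OUTPUT_TOK_PER_CALL = 400              # assumed reply length
PRICE_IN, PRICE_OUT = 0.15, 0.60       # assumed USD per 1M tokens

input_cost  = TICKETS_PER_MONTH * CALLS_PER_TICKET * INPUT_TOK_PER_CALL  / 1e6 * PRICE_IN
output_cost = TICKETS_PER_MONTH * CALLS_PER_TICKET * OUTPUT_TOK_PER_CALL / 1e6 * PRICE_OUT
print(f"~${input_cost + output_cost:,.0f}/month in model calls alone")
```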
I think the framing is wrong. Instead of an outward-facing chatbot, build an AI assistant for the support agents. It suggests responses, pulls up relevant docs, and drafts replies — but a human always sends the message. Same efficiency gain, none of the risk.
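To make that invariant concrete, a minimal sketch (types and helpers are hypothetical): the model only ever produces a draft object, and the send path requires a human identity.

```python
# Sketch of the approval gate: the model produces a Draft, never a sent message.
# There is deliberately no auto-send branch; send_reply requires an agent.
from dataclasses import dataclass

@dataclass
class Draft:
    ticket_id: str
    suggested_reply: str
    source_docs: list[str]            # the KB passages the draft was grounded on
    approved_by: str | None = None    # set only when an agent signs off

def send_reply(draft: Draft, agent_id: str, send_fn) -> None:
    # a human identity is required to reach the customer
    draft.approved_by = agent_id
    send_fn(draft.ticket_id, draft.suggested_reply)
```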
The 40% number is probably optimistic. I looked at our ticket data and many of the 'common' questions have subtle variations that require context about the customer's account. A chatbot that gives a generic answer to a specific question is worse than no chatbot.
I've used AI support chatbots as a customer and they're infuriating. The moment I get a canned AI response, I immediately look for the 'talk to a human' button. We risk making our support experience worse, not better.
What happens when the AI gives a wrong answer and a customer acts on it? We sell financial software — bad advice could have real consequences. The liability question needs to be answered before we build anything.