Hugo
October 17, 2025

AI Agents and Human Support in One CX Workspace

Author: Sainna Christian

TL;DR

BPO leaders are moving to AI agents to stay competitive. The win is not replacing people; it is routing every conversation to the smartest next step. Text App puts live chat, ticketing, knowledge, and an AI agent in one place. You launch fast, keep context, and scale quality without extra tools.

About the guest

Kacper Wiacek leads Customer Success Experience at Text. He helps BPO teams blend AI agents with human support in one workspace. In this interview, he shares how to start, how to keep handoffs clean, and what to track each week.

What AI-first and human-backed means

Question

What does AI-first, human-backed support mean for a BPO client?

Answer

It means quick change, easy scale, and smart cost without hurting CSAT. Most questions go to an AI agent, so training shifts from people to content. Upload a file, point to a site, or refresh a workflow and the AI picks it up fast without extra cost.

New hires need time. An AI agent does not. It handles very high concurrency with no fixed cap, so simple cases resolve on the spot and only tricky ones go to people, who coach, improve knowledge, and work the edge cases.

As volume grows, you add people where they matter most. The AI keeps around-the-clock coverage and opens tickets with full context when someone should take over.

The first win to ship this week

Question

What is one no-regrets workflow to automate first, and why?

Answer

Start with the easy win. In week one, ship an “L0” front-door agent that clears spam and confused chats and answers the simplest questions (“Where is section X?” “How do I check the Y report?”).

It’s quick to set up and low-risk: we trained it on our website content, added a handful of FAQs, and enforced a strict “transfer to agent” rule whenever confidence was low or a policy edge was detected. That alone cut human-handled chats by about 50% on day one, and the setup took minutes.

With L0 soaking up junk and basics, we then layered in more advanced flows. Now the AI agent is present in every conversation, sometimes resolving end-to-end, sometimes handing off cleanly with full context to a human. It’s a no-regrets move because you get immediate relief and the data you need to decide what to automate next.
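The L0 front door described above boils down to a simple routing rule. A minimal sketch, assuming a platform that supplies an intent label, a confidence score, and a spam flag (the threshold and intent names here are illustrative, not Text App's actual API):

```python
# Minimal sketch of an L0 "front door" router. All names and the
# threshold are assumptions; scores would come from your AI platform.

CONFIDENCE_THRESHOLD = 0.8                   # assumed cutoff for auto-answering
POLICY_EDGE_INTENTS = {"refund", "legal", "account_closure"}

def route(intent: str, confidence: float, is_spam: bool) -> str:
    """Decide who handles the chat: drop spam, let the AI answer
    high-confidence simple questions, transfer everything else."""
    if is_spam:
        return "close"                       # clear junk before it reaches a person
    if intent in POLICY_EDGE_INTENTS:
        return "transfer_to_agent"           # strict rule for policy edges
    if confidence >= CONFIDENCE_THRESHOLD:
        return "ai_answer"                   # L0 resolves on the spot
    return "transfer_to_agent"               # low confidence -> human
```

The strict "transfer to agent" default is the important design choice: the AI only keeps a chat when both the confidence and the policy check clear, so ambiguity always falls toward a person.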

How clean handoffs work

Question

How do you connect AI agents with live agents without losing context?

Answer

Keep the whole conversation in one place. In Text App, an AI agent owns the thread from the first message and handles what it can. When confidence is low or a rule requires a person, a live agent joins the same chat. Nothing is copied or retyped. The agent lands in a thread with full history.

Because the AI sees the same context, Reply suggestions open with a draft the agent can tune and send. The AI stays on as a quiet co-pilot. It fetches details, proposes next steps based on the data you uploaded, and keeps a running summary while the human drives the resolution. The handoff is clean, and customers never repeat themselves.

The weekly CSAT and automation signals

Question

Which three metrics should a BPO leader watch each week, and why?

Answer

I track three things every week: automated vs. assisted vs. manual volume, CSAT, and missed chats. Together, they tell me if we’re scaling, how customers feel, and where money might be leaking.

I start with the automation mix. If the share of automated or AI-assisted chats is rising while quality holds, we’re unlocking capacity the right way. I look for the trend, not the snapshot. Spikes in manual work usually point to a new policy, a broken integration, or a knowledge gap the AI can learn from.

Next, I check CSAT across both automated and human-handled conversations. Stable or improving CSAT alongside more automation is the signal you want. If it dips, we pull a few low-score transcripts, fix the intent or workflow, and check the same view the following week to confirm the change landed.

Finally, I watch missed chats. With an AI front door, handoffs to humans should be intentional and answered. Any uptick is a red flag for staffing or routing, and it’s often the fastest way to recover lost revenue. Keep this as close to zero as possible and treat exceptions like incidents: find the cause, patch the workflow, move on.
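The weekly pull above can be summarized in a few lines of code. A sketch, assuming a list of chat records exported from your reporting tool (the field names `handling`, `csat`, and `missed` are assumptions, not a real export schema):

```python
# Sketch of the weekly signals described above: automation mix,
# CSAT, and missed chats. Field names are assumed, not a real schema.

def weekly_signals(chats: list[dict]) -> dict:
    """Summarize one week of chats into the three tracked metrics."""
    total = len(chats)
    mix = {"automated": 0, "assisted": 0, "manual": 0}
    csat_scores, missed = [], 0
    for c in chats:
        mix[c["handling"]] += 1              # automated / assisted / manual
        if c.get("csat") is not None:
            csat_scores.append(c["csat"])    # rated by either AI or human path
        if c.get("missed"):
            missed += 1                      # treat any uptick like an incident
    return {
        "automation_share": mix["automated"] / total if total else 0.0,
        "mix": mix,
        "avg_csat": sum(csat_scores) / len(csat_scores) if csat_scores else None,
        "missed": missed,
    }
```

Comparing this dict week over week gives the trend the answer calls for, rather than a single snapshot.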

What happier agents look like

Question

What does agent happiness look like after introducing AI, and what is one signal to track?

Answer

You see it in the rhythm of the day. In Text App, the whole conversation lives in one thread, so there is less tab hunting and tool switching. Reply suggestions open with a ready draft, which cuts repeat writing and the "where's that link" chase.

New hires ramp faster because the common language and references are already there. Senior agents spend more time coaching and working tricky cases, not firefighting. During spikes, there are fewer urgent pings and smoother handoffs. After PTO, people come back without dreading a backlog.

Over time, edits to AI-drafted replies become lighter. Agents start suggesting the next thing to automate on their own. That is the tell. The system is making the job better, not just faster.

As a simple pulse, ask one question each week: Did the tools help you finish tough chats faster?

Why APIs and MCP make it fit

Question

How do APIs and the MCP server make Text adaptable to unique client workflows?

Answer

Text is API first and ships with an embedded MCP server. APIs move data. MCP gives the AI secure access to your Text tools. The assistant calls explicit tools such as find-tickets, list-tickets, get-chat-transcript, and list-archived-chats, and access mirrors the signed-in user.

That’s why it adapts well: standardized tools, scoped access, and natural-language invocations that slot into whatever workflow the client already runs.
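Under the Model Context Protocol, a tool invocation like the ones named above travels as a JSON-RPC 2.0 `tools/call` request. A minimal sketch of what that request looks like, with the transport omitted; the argument names (`query`, `status`) are hypothetical, not Text's documented parameters:

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP 'tools/call' JSON-RPC request. In practice an MCP
    client sends this over stdio or HTTP and access is scoped to the
    signed-in user, as described above."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# One of the tools named in the interview, with assumed argument names:
request = mcp_tool_call("find-tickets", {"query": "billing", "status": "open"})
```

The point of the standard shape is exactly the adaptability the answer describes: any MCP-capable assistant can discover and call `find-tickets` or `get-chat-transcript` without bespoke integration code.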

Security that earns trust

Question

What should security mean for BPOs adopting AI agents?

Answer

For me, security starts with visibility. Our Text Trust Center shows how we protect data, what we comply with, and what changed recently.

In Text App, every AI agent action is logged: who invoked which tool, with what inputs, and what the output was. We publish evidence for reviews too, including VAPT results and PCI vulnerability assessments, so your auditors have what they need.

We keep compliance current and public. Our subprocessors are listed with their regions, so data locality is clear. And our Privacy Policy and DPA spell out what we collect, where we store it, how long we keep it, and how deletion works.

How to run a safe pilot

Question

What mistakes do teams make when they pilot AI, and how do you avoid them?

Answer

The most common mistake is trying to automate the whole workload on day one. You end up debating edge cases, wiring bespoke integrations, and shipping nothing.

Instead, pick one high-volume, low-risk intent (or a simple L0 front door), set clear success criteria (e.g., automation/assist rate up, CSAT steady, misses near zero), and time-box it to a week. Ship with a strict “transfer to agent” rule for anything ambiguous, review a handful of transcripts, and use what you learn to add the next one or two intents.

Hold off on custom integrations. Start with your site knowledge, routing, and Reply suggestions to prove value first. Treat misses as training data, not failures. Once the pilot is stable for two weeks, widen the scope.

That pace gets wins fast, keeps risk low, and builds momentum.
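The success criteria suggested for a time-boxed pilot can be made mechanical so the go/no-go call at the end of the week is not a debate. A sketch under assumed thresholds (the numbers here are placeholders; a real pilot would set its own):

```python
# Sketch of the pilot gate described above. All thresholds are
# assumptions to be tuned per client, not recommended values.

def pilot_passes(week: dict, baseline_csat: float) -> bool:
    """Check the example criteria: automation/assist rate up,
    CSAT steady, misses near zero."""
    return (
        week["automation_rate"] >= 0.3            # assumed minimum uplift
        and week["csat"] >= baseline_csat - 0.1   # "steady" = within 0.1
        and week["missed"] <= 2                   # "near zero" tolerance
    )
```

Running this after each weekly review, then widening scope only after two stable weeks, matches the pacing the answer recommends.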

Build your Dream Team

Ask about our 30-day free trial. Grow faster with Hugo!
