
Agentic Workflows

Agent plans, acts, evaluates. You control.

We build multi-step AI agents with graph-based orchestration, tool-use and secure human escalation.

>95%
End-to-end success rate
4-8
Average steps/task
<5%
Human escalation
<10s
P95 latency

From chatbot to autonomous worker

A classic chatbot answers questions — prompt in, text out. An AI agent is fundamentally different: it plans a sequence of steps, acts in real systems (APIs, databases, files), evaluates results and decides on next steps. It’s a programmable worker with a defined mandate and permissions.

Anatomy of an agentic workflow

┌──────────────────────────────────────────────────────┐
│  TRIGGER (email, ticket, API call, schedule)         │
└──────────────────────────┬───────────────────────────┘
                           │
                           ▼
┌──────────────────────────────────────────────────────┐
│  PLANNING                                            │
│  Agent analyzes the task, breaks it into steps,      │
│  identifies required tools and data                  │
└──────────────────────────┬───────────────────────────┘
                           │
                           ▼
┌──────────────────────────────────────────────────────┐
│  EXECUTION LOOP                                      │
│  ┌────────────┐   ┌────────────┐   ┌──────────────┐  │
│  │ Tool Call  │ → │ Result     │ → │ Decision     │  │
│  │ (API, DB,  │   │ evaluation │   │ (next step / │  │
│  │  file)     │   │            │   │ escalation / │  │
│  └────────────┘   └────────────┘   │ completion)  │  │
│                                    └──────────────┘  │
│  ← repeats until task is completed ───────────────→  │
└──────────────────────────┬───────────────────────────┘
                           │
                           ▼
┌──────────────────────────────────────────────────────┐
│  OUTPUT (result, report, notification, system write) │
└──────────────────────────────────────────────────────┘
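The execution loop above can be sketched in a few lines of Python. This is a minimal illustration, not our production engine; the `Step` type, the tool registry, and the stubbed `lookup` tool are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    tool: str                      # which tool to call
    args: dict = field(default_factory=dict)

def run_agent(plan, tools, max_steps=20):
    """Minimal plan-act-evaluate loop: call a tool, evaluate the
    result, and either continue, escalate, or finish."""
    history = []
    for step in plan[:max_steps]:  # hard cap against runaway loops
        result = tools[step.tool](**step.args)
        history.append((step.tool, result))
        if result.get("error"):    # evaluation: any failure escalates
            return {"status": "escalated", "history": history}
    return {"status": "completed", "history": history}

# Usage with a stubbed tool in place of a real API
tools = {"lookup": lambda **kw: {"value": kw["key"].upper()}}
out = run_agent([Step("lookup", {"key": "invoice-42"})], tools)
```

In a real workflow the decision step is itself an LLM call; here it is reduced to a single error check to keep the loop visible.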

Orchestration — how we control agents

Graph-based orchestration

We model workflows as directed acyclic graphs (DAG), where each node is an isolated step with defined:

  • Input — what the node needs (data, context, results from previous steps)
  • Output — what it produces
  • Error handling — what happens on failure (retry, fallback, escalation)
  • Timeout — maximum execution time

Benefits of DAG approach:

  • Parallel processing of independent steps
  • Deterministic behavior (same input = same graph traversal)
  • Easy debugging (you see exactly where workflow failed)
  • Testability (each node can be tested in isolation)
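A toy version of such a DAG executor, using Python's stdlib graphlib for the deterministic topological ordering. Error handling and timeouts are omitted for brevity, and the node names are made up:

```python
import graphlib  # stdlib topological sorter (Python 3.9+)

def run_dag(nodes, deps):
    """Run nodes in deterministic topological order; each node
    receives the outputs of the nodes it depends on."""
    results = {}
    for name in graphlib.TopologicalSorter(deps).static_order():
        inputs = {d: results[d] for d in deps.get(name, ())}
        results[name] = nodes[name](inputs)
    return results

# Three illustrative nodes: extract -> validate -> post
nodes = {
    "extract":  lambda _:   {"total": 100},
    "validate": lambda inp: inp["extract"]["total"] > 0,
    "post":     lambda inp: "ok" if inp["validate"] else "rejected",
}
deps = {"validate": {"extract"}, "post": {"validate"}}
results = run_dag(nodes, deps)
```

Because the traversal order comes from the graph, the same input always yields the same execution trace, which is what makes node-level testing and debugging straightforward.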

Tool-use — agent’s hands

The agent acts through tools — defined functions with a clear interface:

Tool: create_jira_ticket
Input:   { project, summary, description, priority }
Output:  { ticket_id, url }
Permissions: agent_service_account (write: project_SUPPORT)
Rate limit: 10/min

Each tool has:

  • Schema — input/output validation
  • Permissions — who can call it
  • Rate limiting — protection against runaway agents
  • Logging — full audit trail
  • Retry logic — exponential backoff, circuit breaker
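A sketch of such a tool wrapper. The validation here only checks required fields, the Jira backend is replaced by a stub, and names like `make_tool` are illustrative (the circuit breaker is left out for brevity):

```python
import time

class RateLimitExceeded(Exception):
    pass

def make_tool(fn, schema, max_calls=10, per_seconds=60, retries=3):
    """Wrap a raw function as an agent tool: required-field validation,
    a sliding-window rate limit, and exponential-backoff retries."""
    calls = []

    def tool(**kwargs):
        missing = set(schema) - set(kwargs)
        if missing:                        # schema: required input fields
            raise ValueError(f"missing fields: {missing}")
        now = time.monotonic()
        calls[:] = [t for t in calls if now - t < per_seconds]
        if len(calls) >= max_calls:        # protection against runaway agents
            raise RateLimitExceeded
        calls.append(now)
        delay = 0.1
        for attempt in range(retries):     # exponential backoff on failure
            try:
                return fn(**kwargs)
            except Exception:
                if attempt == retries - 1:
                    raise
                time.sleep(delay)
                delay *= 2
    return tool

# Stubbed backend standing in for the real Jira API
create_jira_ticket = make_tool(
    lambda **kw: {"ticket_id": "SUPPORT-1", "url": "https://jira.example/SUPPORT-1"},
    schema=["project", "summary", "description", "priority"],
)
ticket = create_jira_ticket(project="SUPPORT", summary="s",
                            description="d", priority="high")
```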

Human-in-the-loop

Not everything should be done by the agent alone. We define escalation rules:

  • Confidence threshold — if agent is uncertain (< 80% confidence), escalate
  • High-risk actions — payments, deletions, customer communication → human approval
  • Anomalies — unexpected input, edge case → escalation with context
  • Regulatory requirements — some decisions must be made by humans (AML, KYC)

Escalation isn’t failure — it’s a designed mechanism. Agent prepares materials, context, recommendations. Human decides. Agent continues.
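These rules can be expressed as a small policy function. This is a simplified sketch — the action names are illustrative, and the 80% threshold mirrors the rule above:

```python
# Illustrative high-risk actions that always require human approval
HIGH_RISK = {"payment", "data_deletion", "customer_email"}

def decide(action: str, confidence: float, threshold: float = 0.80) -> str:
    """Escalation policy: low confidence or high-risk actions go to a
    human reviewer with full context; everything else runs autonomously."""
    if action in HIGH_RISK or confidence < threshold:
        return "escalate_to_human"
    return "execute"
```

A real policy would also cover the anomaly and regulatory cases (AML, KYC), typically as declarative rules rather than code.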

Real examples

Invoice processing automation

Before: Accountant manually processes ~50 invoices/day. Reads PDF, transcribes data to ERP, checks against orders. 3-4 hours daily.

After: Agent receives invoice (email attachment, DMS upload), extracts data (OCR + LLM), validates against order in ERP, checks for duplicates, writes to accounting system. Escalates discrepancies with pre-filled data.

Results:

  • 200+ invoices/day processed automatically
  • 98.5% accuracy (validated against manual processing)
  • 85% reduction in manual work
  • Accountant handles only edge cases and exceptions

Customer support triage

Before: Support team manually reads tickets, classifies, assigns. 30% of time goes to routing, not solving.

After: Agent analyzes incoming ticket (text + metadata), classifies (category, severity, product), enriches with context (customer history, knowledge base), routes to correct team. Solves simple queries autonomously.

Results:

  • 95% classification accuracy (vs. 87% manual)
  • 65% of tickets resolved without human intervention
  • MTTR (mean time to resolve) reduced by 40%

Data enrichment pipeline

Before: Sales team manually researches companies — web, registries, LinkedIn. 2 hours per lead.

After: Agent receives company name, automatically downloads data from public registries (ARES, OR), analyzes website, extracts key information (size, industry, technologies), enriches CRM record.

Results:

  • 5 minutes per lead (vs. 2 hours)
  • More consistent data (standardized format)
  • Salespeople focus on relationships, not research

Security and control

Permission boundary

Each agent has an explicitly defined scope of permissions:

Action                   Permission         Condition
CRM read                 ✅ Allowed          —
CRM write                ⚠️ With approval    Value > 100K CZK
ERP read                 ✅ Allowed          Read-only
ERP write                ⚠️ With approval    Always
Customer email           ⚠️ With approval    Always
Internal notification    ✅ Allowed          —
Data deletion            ❌ Forbidden        Never
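A permission boundary like the one above can be encoded as data plus a default-deny check. This is an illustrative sketch; in practice the policy lives in declarative config enforced by the orchestrator, not in code:

```python
import math

# The table as data: (resource, action) -> rule; names are illustrative
POLICY = {
    ("crm", "read"):    {"allow": True},
    ("crm", "write"):   {"allow": True, "approval_over_czk": 100_000},
    ("erp", "read"):    {"allow": True},
    ("erp", "write"):   {"allow": True, "approval": "always"},
    ("email", "send"):  {"allow": True, "approval": "always"},
    ("data", "delete"): {"allow": False},
}

def check(resource: str, action: str, value_czk: int = 0) -> str:
    rule = POLICY.get((resource, action), {"allow": False})  # deny by default
    if not rule["allow"]:
        return "forbidden"
    if rule.get("approval") == "always":
        return "needs_approval"
    if value_czk > rule.get("approval_over_czk", math.inf):
        return "needs_approval"
    return "allowed"
```

Anything not listed is forbidden — the agent can never gain a permission by omission.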

Kill-switch

Immediate agent shutdown — implemented at three levels:

  1. Per-task — stops a specific running task
  2. Per-agent — stops all tasks of one agent
  3. Global — stops all agents (emergency stop)
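A minimal in-process sketch of the three levels. A real deployment would propagate the stop signal across distributed workers (e.g. via a shared flag in a datastore), which is omitted here:

```python
class KillSwitch:
    """Three stop levels: per-task, per-agent, global emergency stop."""

    def __init__(self):
        self.stopped_tasks = set()
        self.stopped_agents = set()
        self.global_stop = False

    def stop_task(self, task_id):    # level 1: one running task
        self.stopped_tasks.add(task_id)

    def stop_agent(self, agent_id):  # level 2: all tasks of one agent
        self.stopped_agents.add(agent_id)

    def stop_all(self):              # level 3: emergency stop
        self.global_stop = True

    def may_run(self, agent_id, task_id):
        """Checked by the execution loop before every tool call."""
        return not (self.global_stop
                    or agent_id in self.stopped_agents
                    or task_id in self.stopped_tasks)
```

The key design point is that the check happens before every tool call, so a stopped agent can never take one more action.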

Audit trail

Every agent action is logged:

  • Timestamp, agent ID, task ID
  • Input (what the agent received)
  • Reasoning (why it decided this way)
  • Tool call (what it called, with what parameters)
  • Output (what it got back)
  • Duration, cost (tokens, API calls)

The audit trail is immutable, archived for 12+ months, and available for compliance audits.
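One audit entry might be serialized like this. The field names mirror the list above; the record itself and the helper name are illustrative:

```python
import json
import time

def audit_record(agent_id, task_id, reasoning, tool, params, output,
                 duration_ms, cost_tokens):
    """Build one append-only audit entry per agent action, serialized
    to JSON so it can be shipped to immutable archival storage."""
    return json.dumps({
        "timestamp": time.time(),
        "agent_id": agent_id,
        "task_id": task_id,
        "reasoning": reasoning,
        "tool_call": {"tool": tool, "params": params},
        "output": output,
        "duration_ms": duration_ms,
        "cost": {"tokens": cost_tokens},
    }, sort_keys=True)

entry = audit_record("invoice-agent", "task-7", "duplicate check",
                     "erp_lookup", {"invoice": "F-123"}, {"found": False},
                     120, 850)
```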

Technology

Orchestration framework

  • LangGraph — for complex, stateful workflows with conditions and loops
  • Custom DAG engine — for high-performance, deterministic workflows
  • Event-driven architecture — for reactive, real-time processing

Deployment

  • Kubernetes — containerized agents with autoscaling
  • Serverless — for event-driven, low-frequency workflows
  • Queue-based — RabbitMQ/Kafka for high-throughput scenarios

Monitoring

  • Distributed tracing — we track every workflow step end-to-end
  • Metrics — success rate, latency, cost per task
  • Alerting — automatic alerts on anomalies, failures, degradation

Frequently asked questions

How does an AI agent differ from a chatbot?

A chatbot answers questions. An agent acts — calls APIs, reads and writes to systems, decides on next steps, escalates. A chatbot is reactive; an agent is proactive with a defined mandate.

How do you keep an agent under control?

Every agent has a permission boundary — an explicit list of allowed actions. Critical actions (write, delete, payment) require human approval. A kill-switch stops the agent immediately. An audit trail logs every step.

How many steps does a typical workflow have?

Typically 4-8 steps for standard workflows. Complex scenarios (multi-system orchestration) can have 15-20 steps. The key is that each step is atomic, logged and revertible.

Can we roll this out gradually?

Yes, we recommend an incremental approach: shadow mode (the agent runs in parallel but doesn't act), pilot (10% of cases), full rollout. At each stage we measure and compare against human performance.
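The rollout stages can be expressed as a small router. This is a sketch with made-up names; deterministic hash bucketing keeps the pilot's 10% stable per task:

```python
import hashlib

def route(task_id: str, stage: str) -> str:
    """Incremental rollout: shadow (agent result recorded, never applied),
    pilot (a stable 10% of cases), then full rollout."""
    if stage == "shadow":
        return "human"   # agent runs in parallel for comparison only
    if stage == "pilot":
        # stable bucketing: the same task always routes the same way
        bucket = int(hashlib.sha256(task_id.encode()).hexdigest(), 16) % 100
        return "agent" if bucket < 10 else "human"
    return "agent"       # full rollout
```

Stable bucketing matters for the comparison: if a task flips between agent and human across retries, the stage metrics stop being comparable.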

Do you have a project?

Let's talk about it.

Schedule a meeting