Process Integration
An AI agent without integration is just a chatbot.
We connect AI agents to real systems — ERP, CRM, ticketing, email — with retry logic, circuit breakers, and full monitoring.
Why integration is critical¶
An AI agent that isn’t connected to real systems is just a chatbot with better UX. Real value emerges when the agent:
- Reads customer history from CRM before responding to queries
- Writes extracted data from invoices directly into ERP
- Creates tickets in Jira with full context
- Sends notifications to the right people through the right channel
- Monitors systems and reacts to events in real-time
Without integration, you have a demo. With integration, you have a production AI worker.
Integration architecture¶
┌─────────────────────────────────────────────────────┐
│ AI AGENT │
│ │ │
│ ▼ │
│ INTEGRATION LAYER │
│ ┌──────────────────────────────────────────────┐ │
│ │ API Gateway (auth, rate limiting, routing) │ │
│ │ Retry Logic (exponential backoff) │ │
│ │ Circuit Breaker (failure isolation) │ │
│ │ Dead Letter Queue (failed operations) │ │
│ │ Transform Layer (schema mapping) │ │
│ │ Monitoring (per-connection metrics) │ │
│ └──────────────────────────────────────────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌─────┐ ┌─────┐ ┌──────┐ ┌────────┐ │
│ │ ERP │ │ CRM │ │ Jira │ │ Email │ │
│ │ SAP │ │ SF │ │ SNow │ │ Teams │ │
│ └─────┘ └─────┘ └──────┘ └────────┘ │
└─────────────────────────────────────────────────────┘
Typical integrations¶
ERP systems (SAP, Oracle, Microsoft Dynamics)¶
Use-cases: Reading orders/invoices, writing processed documents, checking inventory, automating accounting workflows.
Approach: REST/OData API (SAP S/4HANA), BAPI/RFC (SAP ECC), database connector (legacy). Always through a service account with minimal permissions.
Typical integration time: 2-3 weeks (modern API) / 4-6 weeks (legacy BAPI)
CRM systems (Salesforce, HubSpot, Microsoft Dynamics)¶
Use-cases: Contact enrichment, lead scoring automation, customer history for support agents, opportunity tracking.
Approach: Native REST API, bulk API for data sync, streaming API for real-time events. Webhook listeners for trigger-based workflows.
Typical integration time: 1-2 weeks
Ticketing (Jira, ServiceNow, Zendesk)¶
Use-cases: Automatic ticket classification, routing, escalation, creating tickets from agent findings, status updates.
Approach: REST API, webhook notifications, bi-directional sync.
Typical integration time: 1 week
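As an illustration, creating a ticket through a REST API starts with building the request body. The sketch below follows the field layout documented for Jira's create-issue endpoint; the project key, summary, and issue type are placeholders, not values from any real project:

```python
import json

def jira_issue_payload(project_key, summary, description, issue_type="Task"):
    """Build the JSON body for a Jira REST create-issue call.

    Field layout follows Jira's documented create-issue schema;
    all concrete values here are illustrative placeholders.
    """
    return json.dumps({
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": issue_type},
        }
    })

body = jira_issue_payload("OPS", "Invoice mismatch", "Agent flagged a total mismatch")
print(body)
```

The same pattern applies to ServiceNow or Zendesk; only the endpoint and field names change.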
Communication channels (Email, Slack, Teams)¶
Use-cases: Receiving and processing emails (invoices, queries, orders), notifications, escalations, user interactions.
Approach: IMAP/SMTP (email), Slack API (Bot + Events), Microsoft Graph API (Teams, Outlook).
Typical integration time: 1 week per channel
Document Management (SharePoint, Confluence, Google Drive)¶
Use-cases: Knowledge base for RAG, new document ingestion, versioning, metadata enrichment.
Approach: Native API, webhook for change detection, incremental sync.
Typical integration time: 1-2 weeks
Legacy systems (without API)¶
Use-cases: Any system the agent needs but that lacks a modern API.
Approach (from best to worst):
1. Database connector (direct SQL read/write) — fastest, but requires DB access
2. Screen scraping (Playwright/Puppeteer) — for web-based legacy apps
3. File-based (SFTP, shared drive, CSV/XML export/import) — for batch processing
4. RPA adapter (UiPath, Power Automate) — for desktop applications
Typical integration time: 2-4 weeks (depends on complexity)
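As a sketch of option 3 (file-based), the snippet below parses a CSV export from a legacy system into records the agent can work with. The column names (`invoice_id`, `amount`, `currency`) are hypothetical; the real schema comes from the legacy system's export format:

```python
import csv
import io

def parse_invoice_export(csv_text):
    """Parse a legacy CSV export into a list of row dicts.

    Column names are illustrative; a real adapter would validate
    against the documented export schema.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {
            "invoice_id": row["invoice_id"],
            "amount": float(row["amount"]),
            "currency": row["currency"],
        }
        for row in reader
    ]

sample = "invoice_id,amount,currency\nINV-001,1250.50,EUR\nINV-002,300.00,USD\n"
print(parse_invoice_export(sample))
```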
Integration layer robustness¶
Retry logic¶
Each integration has a defined retry strategy:
Attempt 1: immediate
Attempt 2: wait 1s
Attempt 3: wait 4s
Attempt 4: wait 16s
Attempt 5: wait 60s
→ Dead Letter Queue (manual review)
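The schedule above can be sketched as a retry wrapper. This is a minimal sketch: `operation` and `dead_letter` stand in for the real HTTP call and queue, and `sleep` is injectable so the wrapper can be tested without waiting:

```python
import time

# Backoff schedule from the text: immediate, then 1 s, 4 s, 16 s, 60 s.
DELAYS = [0, 1, 4, 16, 60]

def call_with_retry(operation, dead_letter, sleep=time.sleep):
    """Run `operation` up to five times per the backoff schedule.

    After the last failure, the error is handed to `dead_letter`
    (the DLQ) for manual review, and None is returned.
    """
    last_error = None
    for delay in DELAYS:
        sleep(delay)
        try:
            return operation()
        except Exception as exc:
            last_error = exc
    dead_letter(last_error)
    return None
```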
Idempotence: Every write operation is idempotent — repeated calls won’t perform the action twice. Implemented through idempotency keys.
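A minimal sketch of idempotency keys, assuming a dict-like store stands in for the real backing database or cache:

```python
def idempotent(store):
    """Decorator: skip a write if its idempotency key was already seen.

    `store` is any dict-like object; a production system would back
    this with a database or cache (an assumption, not a prescription).
    """
    def wrap(write):
        def inner(key, payload):
            if key in store:
                return store[key]      # replay: return the cached result
            result = write(payload)
            store[key] = result
            return result
        return inner
    return wrap

seen = {}

@idempotent(seen)
def post_invoice(payload):
    # Stand-in for the real ERP write.
    return {"status": "created", "payload": payload}
```

Retrying `post_invoice` with the same key now returns the stored result instead of creating a duplicate.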
Circuit breaker¶
If the target system repeatedly fails, the circuit breaker switches to open state:
| State | Behavior |
|---|---|
| Closed | Normal operation, requests pass through |
| Open | System failing, requests go to DLQ, periodic health check |
| Half-open | Test request, if OK → closed, if fail → open |
Eliminates cascading failures — one system outage doesn’t bring down the entire agent.
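The three states can be sketched as a small state machine. Thresholds are illustrative, and the clock is injectable for testing:

```python
import time

class CircuitBreaker:
    """Closed -> open after N consecutive failures; after a cooldown the
    state reads half-open and one test request is allowed through."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    @property
    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.reset_after:
            return "half-open"
        return "open"

    def call(self, operation, fallback):
        if self.state == "open":
            return fallback()              # e.g. push to the DLQ
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            return fallback()
        self.failures = 0
        self.opened_at = None              # success closes the breaker
        return result
```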
Dead Letter Queue¶
Failed operations (after exhausting retries) go to the DLQ:
- Logged with full context (what happened, why it failed)
- Alerting on new DLQ messages
- Manual or automatic reprocessing after system recovery
- Aging monitoring — DLQ messages older than a threshold escalate
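A sketch of DLQ entries with aging, assuming a plain list stands in for the queue and using an illustrative 4-hour escalation threshold:

```python
import time

ESCALATION_AGE = 4 * 3600  # illustrative threshold: 4 hours

def enqueue_failed(dlq, operation, error, now=time.time):
    """Record a failed operation with context for manual review."""
    dlq.append({"op": operation, "error": str(error), "failed_at": now()})

def aged_entries(dlq, now=time.time, max_age=ESCALATION_AGE):
    """Entries older than the threshold: candidates for escalation."""
    return [e for e in dlq if now() - e["failed_at"] > max_age]
```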
Per-connection monitoring¶
Each integration has its own metrics:
- Availability — is the system accessible?
- Latency — how fast does it respond?
- Error rate — how many requests fail?
- Throughput — how many requests are we processing?
- DLQ depth — how many failed operations are waiting?
Dashboard with real-time status of all integrations. Alerts on degradation.
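A rolling-window sketch of the error-rate and latency metrics per connection (window size and field names are illustrative):

```python
from collections import deque

class ConnectionMetrics:
    """Rolling error rate and average latency over the last N requests."""

    def __init__(self, window=100):
        self.samples = deque(maxlen=window)  # (latency_ms, ok) pairs

    def record(self, latency_ms, ok):
        self.samples.append((latency_ms, ok))

    @property
    def error_rate(self):
        if not self.samples:
            return 0.0
        return sum(1 for _, ok in self.samples if not ok) / len(self.samples)

    @property
    def avg_latency_ms(self):
        if not self.samples:
            return 0.0
        return sum(lat for lat, _ in self.samples) / len(self.samples)
```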
Change management¶
Why 50% of AI projects fail on adoption¶
A technically perfect AI system that nobody uses is worthless. Most common failure reasons:
- Users weren’t involved — “desk-designed” solution doesn’t match reality
- Lack of trust — users don’t trust AI and bypass it
- Poor UX — AI is more cumbersome than existing process
- Missing feedback loop — problems aren’t addressed, frustration grows
Our approach to adoption¶
Phase 1: Shadow mode (2 weeks)
- Agent runs in parallel with humans but doesn't act
- Compare agent results vs. human results
- Identify gaps and edge cases
Phase 2: Pilot (2-4 weeks)
- 10-20% of processes go through the agent
- Selected early adopters (motivated users)
- Daily feedback, rapid iterations
Phase 3: Rollout (2-4 weeks)
- Gradual expansion to more users/processes
- Training and documentation
- Support channel for questions and issues
Phase 4: Optimization (ongoing)
- Adoption measurement (what % of processes go through the agent)
- User satisfaction surveys
- Continuous improvement based on data
Measuring adoption¶
| Metric | Target | How we measure |
|---|---|---|
| Adoption rate | >80% | % of processes handled by agent |
| User satisfaction | >4/5 | Regular survey |
| Bypass rate | <10% | How many users bypass the agent |
| Support tickets | Decreasing trend | Issues with AI system |
| Time savings | >30% | Before/after comparison |
Integration process¶
Discovery (1 day)¶
- System and data flow mapping
- Integration point identification
- API capabilities and limitations analysis
- Integration priority definition
Design (1 week)¶
- Integration architecture
- Schema mapping (source → target)
- Error handling strategy
- Security review (credentials, permissions)
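The schema-mapping step above can be sketched as a simple field translation. The mapping table is hypothetical; real mappings come out of the design phase:

```python
# Hypothetical mapping from a source system's field names to the
# target schema agreed in the design phase.
FIELD_MAP = {"InvoiceNo": "invoice_id", "Amt": "amount", "Curr": "currency"}

def map_schema(source_row, field_map=FIELD_MAP):
    """Translate one record from source field names to target names."""
    return {target: source_row[source] for source, target in field_map.items()}

print(map_schema({"InvoiceNo": "INV-001", "Amt": 1250.5, "Curr": "EUR"}))
```

In practice the transform layer also handles type conversion and validation, not just renaming.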
Implementation (1-2 weeks per system)¶
- API adapter development
- Retry, circuit breaker, DLQ
- Unit + integration tests
- Security hardening
Testing (1 week)¶
- End-to-end testing on staging
- Load testing
- Failure scenario testing (what happens if a target system crashes?)
- User acceptance testing
Go-live & monitoring (ongoing)¶
- Shadow mode → pilot → production
- Per-connection monitoring
- SLA tracking
- Continuous optimization
Frequently asked questions¶
Can you integrate with legacy systems that have no API?¶
Yes. For legacy systems without REST APIs, we use screen scraping (Playwright), database connectors (direct SQL), file-based integrations (SFTP, shared drives), or RPA adapters. We always find a way, but we prefer APIs.
What happens when a target system fails?¶
Retry with exponential backoff, circuit breaker (if a system repeatedly fails, we temporarily stop calling it), dead letter queue (unsuccessful operations are stored and reprocessed after recovery), and fallback logic (an alternative path).
How many systems do you integrate in a typical project?¶
5-15 systems per project. Typically: 1-2 core systems (ERP, CRM), 2-3 communication channels (email, Slack, Teams), 1-2 knowledge sources (wiki, DMS), and 1-2 monitoring systems.
Is integration just a technical task?¶
Technical integration is half the work. The other half: stakeholder alignment, user training, gradual rollout (shadow → pilot → production), adoption measurement, and iteration based on feedback. Without adoption, even the best integration is useless.