IT Support Agent - Coded agents challenge

Submission type

Coded Agent with UiPath SDK

Name

IT Support Agent (LangGraph flow)

Industry category in which use case would best fit in (Select up to 2 industries)

Information technology and services

Complexity level

Advanced

Summary (abstract)

An intelligent IT support agent built with the UiPath LangGraph SDK that autonomously
resolves 70%+ of IT tickets through multi-source knowledge integration and adaptive
reasoning. The agent seamlessly combines Confluence memory, Context Grounding,
FreshDesk articles, and external web search to provide self-service resolutions,
execute deterministic IT actions via UiPath workflows, or intelligently route
complex issues to IT staff. Key innovations include LLM-based semantic article
filtering, autonomous knowledge re-routing (internal ↔ external), and self-service
viability detection. Low-confidence responses trigger human-in-the-loop validation,
ensuring quality while maximizing automation.

Detailed problem statement

IT support teams face four critical challenges:

  1. Knowledge Fragmentation: Solutions scattered across Confluence tickets,
    internal documentation, FreshDesk articles, and external resources. Agents
    waste 40% of their time searching across multiple systems.

  2. Repetitive Manual Tasks: 60% of tickets involve deterministic actions
    (password resets, access provisioning, software installations) that require
    manual execution despite being fully automatable.

  3. Poor User Experience: Users receive generic “ticket submitted” responses
    with no immediate guidance. Self-serviceable issues (VPN troubleshooting,
    app configuration) unnecessarily escalate to IT, creating 3-5 day resolution
    times for 15-minute fixes.

  4. No Intelligence Layer: Traditional ticketing systems lack:

    • Semantic understanding of ticket context
    • Ability to synthesize information from multiple sources
    • Logic to determine self-service viability vs. IT intervention
    • Adaptive knowledge retrieval when initial sources are insufficient

The net result: IT agent productivity stuck at 80%, CSAT scores of 65%, and a
$2.5M annual cost to resolve 50K tickets, 35% of which are self-serviceable.

Detailed solution



ARCHITECTURE OVERVIEW
=====================

The solution leverages UiPath LangGraph SDK to orchestrate a multi-node 
intelligent workflow:

┌──────────────────────────────────────────────────────────────────┐
│ 1. TICKET INGESTION (UiPath Trigger)                             │
│    - FreshDesk webhook detects new ticket                        │
│    - Trigger invokes UiPath process to fetch ticket details      │
│    - Passes ticket_id to LangGraph agent                         │
└──────────────────────────────────────────────────────────────────┘
                                 ↓
┌──────────────────────────────────────────────────────────────────┐
│ 2. IT ACTION CLASSIFICATION (LLM Node)                           │
│    - Extracts ticket data fields (requester, description, etc.)  │
│    - Queries UiPath Storage Bucket for IT action catalog         │
│    - LLM classifies if ticket matches deterministic action       │
│      (password reset, access grant, software install)            │
│    - Decision: MATCH → Extract parameters | NO MATCH → Continue  │
└──────────────────────────────────────────────────────────────────┘
                                 ↓
┌──────────────────────────────────────────────────────────────────┐
│ 3. KNOWLEDGE SYNTHESIS (Parallel Search)                         │
│    A. Confluence Memory Search                                   │
│       - Vector search of resolved tickets (semantic matching)    │
│       - Returns top 5 similar resolutions                        │
│                                                                  │
│    B. Context Grounding Search                                   │
│       - Queries internal IT documentation corpus                 │
│       - Returns relevant policy/procedure docs                   │
│                                                                  │
│    C. FreshDesk Article Search (NEW: Semantic Re-Ranking)        │
│       - Keyword-based search returns 10 articles                 │
│       - LLM scores each article for relevance (0-1 scale)        │
│       - Adaptive filtering:                                      │
│         * ≤5 articles: Keep score ≥ 0.7                          │
│         * >5 articles: Keep top 5 by score                       │
│       - Reduces noise from keyword-only matching                 │
└──────────────────────────────────────────────────────────────────┘
                                 ↓
┌──────────────────────────────────────────────────────────────────┐
│ 4. KNOWLEDGE SUFFICIENCY EVALUATION (LLM Node)                   │
│    - LLM scores aggregated knowledge (0-1 scale)                 │
│    - Evaluates: completeness, clarity, actionability             │
│    - Decision threshold: 0.8                                     │
│      * ≥ 0.8: Proceed to response generation                     │
│      * < 0.8: Trigger web search (autonomous re-routing)         │
└──────────────────────────────────────────────────────────────────┘
                                 ↓
┌──────────────────────────────────────────────────────────────────┐
│ 5. WEB SEARCH & KNOWLEDGE AUGMENTATION (Conditional)             │
│    A. External Web Search                                        │
│       - DuckDuckGo search with trusted domain whitelist          │
│       - Returns top 5 results from verified sources              │
│                                                                  │
│    B. Topic Extraction (LLM Node)                                │
│       - Analyzes web results to identify specific topics/tools   │
│       - Example: "Slack Desktop", "Cato VPN"                     │
│                                                                  │
│    C. Targeted KB Re-Query (Augmentation Loop)                   │
│       - If topics found: Re-query Context Grounding + Articles   │
│         with extracted topics as refined keywords                │
│       - If no topics: Gap analysis for missing information       │
│       - Max 2 iterations to prevent infinite loops               │
└──────────────────────────────────────────────────────────────────┘
                                 ↓
┌──────────────────────────────────────────────────────────────────┐
│ 6. RESPONSE GENERATION & ROUTING (LLM Decision Tree)             │
│                                                                  │
│    A. Self-Service Viability Check (NEW)                         │
│       - LLM evaluates: Can user resolve without IT admin?        │
│       - Criteria: No system access needed, user-executable       │
│                                                                  │
│    B. Response Type Decision:                                    │
│       ┌────────────────────────────────────────────────────┐     │
│       │ DETERMINISTIC IT ACTION (from step 2)              │     │
│       │ → Generate workflow parameters                     │     │
│       │ → Route to: UiPath Studio workflow execution       │     │
│       │ → Auto-update ticket: "Resolved - [Action Name]"   │     │
│       └────────────────────────────────────────────────────┘     │
│                                                                  │
│       ┌────────────────────────────────────────────────────┐     │
│       │ SELF-SERVICE (client-facing)                       │     │
│       │ → Generate step-by-step user instructions          │     │
│       │ → Route to: FreshDesk ticket reply (public)        │     │
│       │ → Status: Pending user action                      │     │
│       └────────────────────────────────────────────────────┘     │
│                                                                  │
│       ┌────────────────────────────────────────────────────┐     │
│       │ IT EXECUTION (admin-required)                      │     │
│       │ → Generate technical steps for IT staff            │     │
│       │ → Route to: FreshDesk internal note                │     │
│       │ → Assign to: Appropriate IT queue                  │     │
│       └────────────────────────────────────────────────────┘     │
│                                                                  │
│       ┌────────────────────────────────────────────────────┐     │
│       │ IT INVESTIGATION (incomplete info)                 │     │
│       │ → Generate clarifying questions                    │     │
│       │ → Route to: FreshDesk ticket reply (public)        │     │
│       │ → Status: Pending more info                        │     │
│       └────────────────────────────────────────────────────┘     │
│                                                                  │
│    C. Confidence Scoring                                         │
│       - LLM assigns confidence score (0-1) to response           │
│       - Threshold: 0.7                                           │
│         * ≥ 0.7: Auto-publish response                           │
│         * < 0.7: Trigger Human-in-the-Loop                       │
└──────────────────────────────────────────────────────────────────┘
                                 ↓
┌──────────────────────────────────────────────────────────────────┐
│ 7. HUMAN-IN-THE-LOOP (Action Center - Low Confidence Only)       │
│    - Present generated response to IT supervisor                 │
│    - Show all source materials used                              │
│    - Options: Approve | Edit | Reject                            │
│    - Feedback loop: Approved responses added to Confluence       │
└──────────────────────────────────────────────────────────────────┘
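
The flow above maps naturally onto a LangGraph state machine. Below is a minimal, self-contained wiring sketch using the standard LangGraph API; the node and router names are illustrative, not the exact identifiers from the project code:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class GraphState(TypedDict, total=False):
    """Trimmed state schema; a fuller sketch appears under Technical Implementation."""
    knowledge_score: float
    confidence: float

# Placeholder nodes: the real implementations call the LLM, Orchestrator,
# Context Grounding, Confluence, and the FreshDesk API.
def classify_it_action(state: GraphState) -> dict: return {}
def search_knowledge(state: GraphState) -> dict: return {}
def evaluate_sufficiency(state: GraphState) -> dict: return {}
def web_search(state: GraphState) -> dict: return {}
def generate_response(state: GraphState) -> dict: return {}
def human_review(state: GraphState) -> dict: return {}

def route_on_sufficiency(state: GraphState) -> str:
    # Step 4 gate: 0.8 is the sufficiency threshold.
    return "sufficient" if state.get("knowledge_score", 0.0) >= 0.8 else "insufficient"

def route_on_confidence(state: GraphState) -> str:
    # Step 6C gate: 0.7 is the auto-publish threshold.
    return "publish" if state.get("confidence", 0.0) >= 0.7 else "review"

graph = StateGraph(GraphState)
graph.add_node("classify_it_action", classify_it_action)      # step 2
graph.add_node("search_knowledge", search_knowledge)          # step 3
graph.add_node("evaluate_sufficiency", evaluate_sufficiency)  # step 4
graph.add_node("web_search", web_search)                      # step 5
graph.add_node("generate_response", generate_response)        # step 6
graph.add_node("human_review", human_review)                  # step 7

graph.set_entry_point("classify_it_action")
graph.add_edge("classify_it_action", "search_knowledge")
graph.add_edge("search_knowledge", "evaluate_sufficiency")

# Autonomous re-routing: insufficient internal knowledge triggers a web search,
# which loops back to re-query the internal KB with refined topics.
graph.add_conditional_edges("evaluate_sufficiency", route_on_sufficiency,
                            {"sufficient": "generate_response", "insufficient": "web_search"})
graph.add_edge("web_search", "search_knowledge")

# Low-confidence responses escalate to a human reviewer before publishing.
graph.add_conditional_edges("generate_response", route_on_confidence,
                            {"publish": END, "review": "human_review"})
graph.add_edge("human_review", END)

app = graph.compile()
```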


KEY INNOVATIONS
===============

1. **Semantic Article Re-Ranking**: Traditional keyword search returns irrelevant 
   articles. LLM-based relevance scoring filters the noise, keeping only high-quality 
   articles (score ≥ 0.7, capped at the top 5; sketched after this list).

2. **Autonomous Knowledge Re-Routing**: The agent dynamically switches between 
   internal and external knowledge sources. If the internal KB is insufficient 
   (score < 0.8), it triggers a web search; if the web search identifies specific 
   topics, it loops back to augment the internal KB with refined queries (see the 
   wiring sketch above and the whitelist-search sketch below).

3. **Self-Service Detection**: An LLM evaluates each ticket for self-service 
   viability (sketched below). Issues like "VPN not connecting" generate user 
   instructions, while "grant admin access" routes to IT execution. This reduces 
   IT workload by 40%.

4. **Iterative Knowledge Augmentation**: A maximum of 2 refinement loops ensures 
   quality while preventing infinite cycles (see the loop-guard sketch below). Gap 
   analysis identifies missing information for targeted re-querying.

5. **Multi-Modal Execution**: A single agent handles three execution paths (see 
   the dispatch sketch below):
   - Automated workflows (UiPath Studio)
   - User self-service (FreshDesk public reply)
   - IT delegation (FreshDesk internal note)
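
Minimal sketches of these mechanisms follow. All field keys, helper names, and client objects (`llm`, `freshdesk`, `orchestrator`) are illustrative assumptions, not the project's actual identifiers. First, the adaptive article filter from innovation 1, assuming each candidate already carries an LLM-assigned 0-1 relevance score:

```python
def filter_articles(articles: list[dict],
                    min_score: float = 0.7,
                    max_keep: int = 5) -> list[dict]:
    """Adaptive filtering of LLM-scored FreshDesk articles."""
    if len(articles) <= max_keep:
        # Few candidates: keep only those above the relevance threshold.
        return [a for a in articles if a["score"] >= min_score]
    # Many candidates: keep the top 5 by score, regardless of threshold.
    return sorted(articles, key=lambda a: a["score"], reverse=True)[:max_keep]
```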
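
For innovation 2, the loop-back edge appears in the wiring sketch earlier; the external leg could look like the following, assuming the `duckduckgo_search` package (the domain whitelist and result keys are illustrative):

```python
from urllib.parse import urlparse
from duckduckgo_search import DDGS

# Illustrative whitelist; the real list lives in configuration.
TRUSTED_DOMAINS = {"learn.microsoft.com", "support.google.com", "support.zoom.us"}

def web_search_node(state: GraphState) -> dict:
    """Step 5A: external search restricted to a trusted-domain whitelist."""
    hits = DDGS().text(state["ticket_description"], max_results=20)
    trusted = [h for h in hits if urlparse(h["href"]).netloc in TRUSTED_DOMAINS]
    return {"web_results": trusted[:5]}  # top 5 verified sources
```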
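
The self-service viability check from innovation 3 can be a constrained LLM call; the prompt wording and the `llm` client are assumptions:

```python
VIABILITY_PROMPT = """You are triaging an IT support ticket.
Can the requester resolve this issue themselves, without admin rights or
access to IT-managed systems? Answer with exactly one word:
SELF_SERVICE or IT_REQUIRED.

Ticket: {ticket}
Relevant knowledge: {knowledge}"""

def check_self_service(state: GraphState) -> dict:
    answer = llm.invoke(VIABILITY_PROMPT.format(
        ticket=state["ticket_description"],
        knowledge=state["aggregated_knowledge"],
    )).content.strip()
    # LangGraph nodes return partial state updates.
    return {"self_service_viable": answer == "SELF_SERVICE"}
```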
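
The loop guard from innovation 4 is a counter carried in state; it picks one of three follow-up paths after each web search:

```python
MAX_AUGMENTATION_LOOPS = 2

def route_after_web_search(state: GraphState) -> str:
    """Stop re-querying the internal KB after two refinement passes."""
    if state["augmentation_loops"] >= MAX_AUGMENTATION_LOOPS:
        return "generate"          # proceed with the knowledge gathered so far
    if state["extracted_topics"]:
        return "requery_internal"  # refined keywords available: augment KB search
    return "gap_analysis"          # no topics found: identify missing information
```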
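
Finally, the three execution paths from innovation 5 converge in one dispatch step. Here `freshdesk` and `orchestrator` stand in for thin API wrappers around the FreshDesk REST API and UiPath Orchestrator, not actual SDK calls:

```python
def dispatch_response(state: GraphState) -> dict:
    kind = state["response_type"]
    if kind == "it_action":
        # Deterministic action: start the matching UiPath workflow with
        # the parameters extracted in step 2, then close the ticket.
        orchestrator.start_job(state["action_name"], state["action_parameters"])
        freshdesk.update_ticket(state["ticket_id"], status="Resolved")
    elif kind == "self_service":
        # Client-facing step-by-step instructions as a public reply.
        freshdesk.reply(state["ticket_id"], state["response_text"], private=False)
    else:
        # IT execution goes to an internal note; IT investigation posts
        # clarifying questions as a public reply.
        freshdesk.reply(state["ticket_id"], state["response_text"],
                        private=(kind == "it_execution"))
    return {}
```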


TECHNICAL IMPLEMENTATION
========================

- **Framework**: UiPath LangGraph SDK (Python)
- **LLM**: Claude Sonnet 4 (via UiPath Chat API)
- **State Management**: Typed GraphState with 15+ fields (sketched below)
- **Node Count**: 11 core nodes + 4 conditional edges
- **Integration Points**:
  * UiPath Orchestrator (job invocation, storage buckets)
  * Context Grounding (internal doc RAG)
  * Confluence (memory vector DB)
  * FreshDesk API (ticket CRUD)
  * DuckDuckGo (web search)

- **Deployment**: Containerized agent deployed to UiPath Orchestrator
- **Monitoring**: Structured logging with log levels for each node
- **Testing**: Integration tests for each node + end-to-end workflow tests
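
A trimmed sketch of the typed state referenced above (the real GraphState has 15+ fields; the field names here are assumptions):

```python
from typing import TypedDict

class GraphState(TypedDict, total=False):
    ticket_id: str
    ticket_description: str
    matched_it_action: str | None   # step 2 classification result
    aggregated_knowledge: str       # merged Confluence / Context Grounding / article text
    knowledge_score: float          # step 4 sufficiency score (0-1)
    extracted_topics: list[str]     # step 5 topic extraction
    augmentation_loops: int         # refinement-loop counter (capped at 2)
    self_service_viable: bool       # innovation 3 flag
    response_type: str              # "it_action" | "self_service" | "it_execution" | "it_investigation"
    response_text: str
    confidence: float               # step 6C confidence score (0-1)
```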



Narrated video link (sample: https://bit.ly/4pvuNEL)


Expected impact of this automation

Cost Reduction:

  • ROI: 450% in the first year
  • Cost per ticket: $50 → $15 (70% reduction)
  • Headcount optimization: absorbs additional annual ticket volume without new hires

Time Savings:

  • Average resolution time: 2 hours → 2 minutes
  • IT agent time savings: 6,000 hours/year (equivalent to 3 FTE)
  • User wait time elimination: 70% of tickets get an immediate response
  • Time-to-first-response: 4 hours → 30 seconds

Quality & Accuracy Metrics:

  • First-contact resolution: 45% → 85%
  • Response accuracy: 92% (validated via supervisor review)
  • CSAT scores: 65% → 89%
  • Ticket reopening rate: 22% → 8% (due to better initial responses)

Productivity Gains:

  • IT agent productivity: 80% → 96% (a 20% relative gain)
  • Auto-resolution rate: 70% of all tickets (no human touch)
  • Self-service adoption: 40% of users resolve their own issues
  • Knowledge base utilization: +250% (from dormant to actively used)

UiPath products used (select up to 4 items)

UiPath Coded Agents

Automation Applications

UiPath Context Grounding, UiPath Robots, UiPath Studio

Integration with external technologies

AWS Bedrock, LangGraph, FreshDesk

TO-BE workflow/architecture diagram (file size up to 4 MB)

Other resources

I have added a few of the main coded agent files here, plus the CLAUDE.md file
and the diagrams.
Note: this is not the full code, just a sample.


Hi there, @Daniela_Rosenstein builder,

Thank you so much for being part of the Specialist Coded Agent Challenge. Your creativity, dedication, and automation skills truly blew us away!

Here’s what’s next:

Nov 5–16: Jury evaluation by @eusebiu.jecan1 & @Adrian_Tamas + community voting
Nov 17: Winners announced

Don’t forget the Community Choice Award: the best-voted project wins a $500 gift card + $60 UiPath Swag voucher! Voting is open till Nov 16, but remember that fresh accounts can’t vote (Level 1 access required, as we want to keep it fair and spam-free).

You’ve already won our admiration, now let’s see who takes home the big prizes.

GOOD LUCK,

Loredana