Service Request Automation

We have a service request tool from which requests come in at a rate of 5-10 at a time. The SLA for each request to be entered into the system is 15 minutes, so we are clearly talking about parallel processing here. If we have 3 or 4 robots dedicated to this process, is there a mechanism in Orchestrator that can see which robot is free and assign the service request to that machine? Each request arrives as an XML via email or is available at a location.

Queues in Orchestrator exist for exactly this purpose. You can add your transaction items to queues and assign any number of robots to the same queue, so that all the robots share the workload automatically. Please read more about queues in the link below.

Thanks for the reply. Would high-density robots also be a solution for this one? @loginerror

Hi @rahulraj987

High density simply means that you can run multiple robots from one machine. For this situation, I would follow @Vivek_Arunagiri's suggestion of using queues.

A queue is a list of elements to be processed. It’s like a stack of bricks. You can then schedule any number of robots and each one will simply take one element from the queue and process it independently of the other robots. This way you can have a lot of robots all working on one task of processing one specific queue.

As for the content of the queue item, it could be the path to the XML file. The robot would take the path and then process the item.
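For illustration, a rough Python sketch of what adding such an item could look like through the Orchestrator REST API. The URL, token, queue name (`ServiceRequests`) and the `XmlPath` field are all placeholders for your own setup; inside a workflow you would normally just use the Add Queue Item activity instead:

```python
import requests

ORCH_URL = "https://your-orchestrator"   # placeholder Orchestrator URL
TOKEN = "YOUR_ACCESS_TOKEN"              # obtained beforehand via the authentication endpoint

# One queue item whose only content is the path to the XML request file.
payload = {
    "itemData": {
        "Name": "ServiceRequests",       # hypothetical queue name
        "Priority": "Normal",
        "SpecificContent": {
            "XmlPath": r"\\share\requests\request_001.xml"
        },
    }
}

resp = requests.post(
    f"{ORCH_URL}/odata/Queues/UiPathODataSvc.AddQueueItem",
    json=payload,
    # depending on your tenant setup, a folder/organization unit header may also be required
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
```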


@loginerror Here we are talking about live requests coming in by email, with the information as an attachment. These requests keep arriving throughout the day, and each needs to be processed within the SLA, which is very short.

What I understood about queues is that we first add everything to the stack and then the robot runs on those items. That is different from my scenario. How can I approach my scenario? Thanks in advance.

Here are a few suggestions:

For now, you would need two robots/processes (it can be one robot that first does the check and then the processing):

  • one that will be checking constantly and adding items to the queue
  • one that will be scheduled to process that queue of items as many times as you need

In a few updates, you will also be able to set a trigger in Orchestrator on new queue items, see here:

Ok. Thank you.

So what I can do is write a Windows service to continuously monitor email, download the attachments (JSON) and keep them in a folder, then extract the info from the JSON and call the AddQueueItem API to populate the queue. Correct me if I am wrong here @loginerror.
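Something like this rough sketch is what I have in mind (the queue name, Orchestrator URL and JSON field names below are just assumptions; the point is that the extracted content goes into the queue item itself):

```python
import json
from pathlib import Path

import requests

ORCH_URL = "https://your-orchestrator"      # placeholder Orchestrator URL
TOKEN = "YOUR_ACCESS_TOKEN"
QUEUE_NAME = "ServiceRequests"              # hypothetical queue name

def add_request_to_queue(json_path: Path) -> None:
    """Read one downloaded attachment and post its fields as a queue item (AddQueueItem endpoint)."""
    data = json.loads(json_path.read_text())
    payload = {
        "itemData": {
            "Name": QUEUE_NAME,
            "Priority": "Normal",
            # Field names below are made up; map them to whatever your request JSON actually contains.
            "SpecificContent": {
                "RequestId": data.get("requestId"),
                "Requester": data.get("requester"),
                "RawJson": json_path.read_text(),   # keep the full payload as a string, just in case
            },
        }
    }
    resp = requests.post(
        f"{ORCH_URL}/odata/Queues/UiPathODataSvc.AddQueueItem",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
```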

Also, while a robot is running, I hope there won't be a scenario where two robots come in at the same time, try to take the same item from the queue, and end up throwing a lock error.
Thank you

I believe you can even create a robot that will monitor a directory for new files, see here:
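Outside of that built-in option, the monitoring itself is just a simple polling loop. A rough Python sketch, assuming a hypothetical drop folder and a 30-second polling interval:

```python
import time
from pathlib import Path

WATCH_DIR = Path(r"C:\ServiceRequests\incoming")   # hypothetical drop folder
seen = set()

while True:
    for f in WATCH_DIR.glob("*.json"):
        if f.name not in seen:
            seen.add(f.name)
            print(f"New request file detected: {f}")   # here you would add the item to the queue
    time.sleep(30)   # polling every 30 seconds keeps you well inside the 15-minute SLA
```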

So yes, the idea is correct. Also, there won't be any issue with accessing the queues, because they were designed precisely to allow multiple robots to work on the same queue. This is the major principle behind splitting the workload between multiple robots (= faster processing of your data) :slight_smile:

Thank you.

Creating a new robot means another license just for this, which I would like to avoid for now.

In that case, you could indeed have one robot running two processes interchangeably:

  1. Check the dedicated folder for new files, process them, and add the items to the queue,
    and then
  2. Run the process that will work through the added queue items

Hi

Let me try that. But here the SLA comes into play as well. Each request needs to be logged in the system within 15 minutes of it hitting the mailbox.

Regards

Rahul

If the Windows service that continuously monitors the email reliably drops the files in the target folder, then the robot that adds elements to the queue will only take a few seconds to process everything. If the procedure for processing those queue items is relatively short, it shouldn't be difficult to stay within the time limit.

Here there is a risk of multiple robots trying to access the same JSON file in the target folder. There is a chance of a file not being locked by one robot while another tries to access it. We already faced this in another process.

Well, you should have only one process that handles the files in the folder. After that one robot/process adds your JSON directly as a queue item, other robots can take those queue items asynchronously and process them with no collision whatsoever :slight_smile:

That means a dedicated bot for adding items to the queue :frowning:

It could still be part of the same process. It could be scheduled to run every 5 to 10 minutes and first check the folder for files, add the items to a queue and then process those queue items if there are any.
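Roughly, as an outline only (the two helpers below are just stand-ins for the two halves of the same process; in UiPath these would be sequences in one workflow, not Python functions):

```python
# One scheduled run, every 5-10 minutes for example.

def dispatch_new_requests() -> None:
    """Check the drop folder and add one queue item per new request file."""
    ...  # folder check + AddQueueItem calls, as sketched earlier in the thread

def process_pending_queue_items() -> None:
    """Take queue items one at a time (Get Transaction Item) and log each request in the target system."""
    ...  # performer logic

def scheduled_run() -> None:
    dispatch_new_requests()           # step 1: dispatcher part
    process_pending_queue_items()     # step 2: performer part, runs until the queue is empty

if __name__ == "__main__":
    scheduled_run()
```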

Alternatively, you could make it two processes that still run on one robot interchangeably.