Add “Maximum number of pending and running jobs allowed simultaneously” Option to Event Triggers

Currently, event triggers in UiPath (Integration Service-based, e.g. when a new file is created in SharePoint or when an email is received) do not support configuring a limit on the number of pending and running jobs. This feature is available for queue triggers, and having it for event triggers would be a very welcome addition.

Adding this capability would allow better control of resource consumption and prevent job flooding in scenarios where multiple triggers are fired in a short period of time. It would help orchestrators maintain system stability and improve job scheduling efficiency in high-volume environments.

Hi @Gal_Rothschild

Welcome to the community!

:light_bulb: Feature Request: Limit for Pending and Running Jobs in Integration Service-based Event Triggers

Current Limitation
At present, event triggers (e.g., “When a new file is created in SharePoint”, “When an email is received”) in UiPath’s Integration Service do not allow configuration of a limit on the number of pending or running jobs per trigger.

In contrast, Queue Triggers offer this functionality through the “Maximum number of pending/running jobs” setting, which helps control concurrency and system load.

Why This Feature Is Needed

:counterclockwise_arrows_button: High-frequency triggers: When multiple events are fired in quick succession, they can flood the orchestrator with job requests.

:balance_scale: Resource management: Without a limit, it’s difficult to manage robot capacity and avoid resource contention.

:prohibited: Avoid overloading: Prevents unnecessary job queuing or execution when the system is already under heavy load.

:chart_increasing: Improves scalability: Helps organizations using unattended automation scale more safely and predictably.

Hope this helps, cheers!

2 Likes

@Gal_Rothschild

Welcome to the community

I partially agree with you, or maybe there is an upgraded event trigger that we might need.

Each file is meant to trigger a separate process: by design, the process should run once per file. It is not the case that, once a process starts, it handles all files or mails added inside that run. That is not how event triggers are meant to be used, ideally.

Maybe if you want to process something like that, it's better that for any event you add the item to a queue on trigger, and control the process from there.

But that said, as a design consideration this should be limited, because the current issue is that if the polling time is short, the process gets triggered multiple times for the same file or mail. A limit might help with that.
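The add-to-queue approach above can be sketched against the Orchestrator REST API. This is a minimal illustration, not UiPath's implementation: the queue name and the event fields are made up, while the `Queues/UiPathODataSvc.AddQueueItem` endpoint itself is real.

```python
# Sketch of a "dispatcher" whose only job is to enqueue the trigger event.
# Hypothetical names: QUEUE_NAME and the event dict shape are illustrative.
QUEUE_NAME = "InboundEvents"

def build_queue_item(event: dict) -> dict:
    """Build an AddQueueItem payload for one trigger event.

    Using the event's unique id as the Reference means a queue with
    "Enforce unique references" enabled will reject the duplicates that
    short polling intervals can produce for the same file or mail.
    """
    return {
        "itemData": {
            "Name": QUEUE_NAME,
            "Priority": "Normal",
            "Reference": event["id"],  # dedup key
            "SpecificContent": {
                "Source": event.get("source", ""),
                "Path": event.get("path", ""),
            },
        }
    }

# The actual call (sketch; requires a bearer token and a folder header):
# requests.post(f"{orch_url}/odata/Queues/UiPathODataSvc.AddQueueItem",
#               json=build_queue_item(event),
#               headers={"Authorization": f"Bearer {token}",
#                        "X-UIPATH-OrganizationUnitId": str(folder_id)})
```

Once every event lands in the queue this way, a queue trigger with its existing pending/running-job limit takes over the concurrency control.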

Cheers

1 Like

Thank you for the response and for the welcome!

For my specific use case, where for each file I just push it to a queue for an actual process, I believe spawning a separate process for each file, rather than one process that loops over all files, is a bit wasteful.

  • It adds a lot of tiny jobs in the Jobs tab, which can make it messy to look for a specific one (a bit petty, I know :))
  • If a higher-priority job pops up while the jobs are processing, it might add a delay, especially if the job the queue triggers is higher priority; that means the queue executor will be run for each file.
  • Say we have two robots that are able to perform that action; if they both pick up a job at the same time and process the files, they might process the same file. (Although while writing this I'm thinking there's probably a way to pass the triggering file to the job… I need to look into it.)

I agree for some use cases it might be beneficial to have a job per file/email/etc., maybe add an advanced settings section that can have that option.
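One way to close the two-robots race described above is to let the queue, rather than the robots, arbitrate: with "Enforce unique references" enabled, Orchestrator rejects a second item carrying the same Reference. A minimal sketch of how the losing robot could treat that rejection as a skip; note the 409 status code is an assumption about the rejection's shape, and `post_fn` is a hypothetical stand-in for whatever HTTP call is used.

```python
def try_enqueue(post_fn, item: dict) -> bool:
    """Attempt AddQueueItem; return False when another robot has already
    enqueued an item with the same Reference (expected and harmless on a
    unique-reference queue).

    post_fn(item) -> HTTP status code; 409-for-duplicate is an assumption.
    """
    status = post_fn(item)
    if status in (200, 201):
        return True
    if status == 409:  # duplicate Reference: safe to skip, not an error
        return False
    raise RuntimeError(f"AddQueueItem failed with HTTP {status}")
```

The design point is that deduplication becomes a property of the queue, so it works no matter how many robots or trigger firings race each other.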

2 Likes

This doesn’t make any sense to me…

If you have a job limit then events will just be missed.

Let's say you set a job limit of 5 and 10 events are picked up by the trigger: it makes jobs for the first 5, and the last 5 just… disappear? Don't get processed…?

Like how do you plan to catch the missing 5 events and process the 5 files they represent…

1 Like

@Gal_Rothschild

as said, event triggers are for each event… if you want to ignore some and consider only a few, that's not how it works, right? Say an email is received: you need to process only that email; it should not be "process all emails after that trigger". That's not an event trigger, as all emails coming after the first trigger would get ignored.

and in event triggers, ideally the trigger details are passed so you know about the event and what triggered it. If we club them together or, as said above, if you are adding to a queue, then maybe your trigger process should just get the details and add them to the queue. I don't think just adding event details to a queue would take much time, and as the item is already added to the queue, that is where the restriction would be.

cheers

I think the real root cause (and solution) here is that event triggers can only start jobs…
It's been requested before, but if an event trigger could instead create a queue item, this issue goes away: you can process the events transactionally as you please.

So I'd say we shouldn't try to make event triggers work like a queue trigger, but complement them by letting them create queue items.

2 Likes

Just noticed I didn't submit my answer to your previous comment; that doesn't matter, since what you just said is the answer. I believe that must be the best way to go about it: make the queues, which already work wonderfully, even more powerful and versatile.

I can already think of multiple places I can fit in a trigger like that.

I mostly worked on on-prem solutions and haven't had my fun fully exploring event triggers yet (so for things like reading files, it would be a schedule reading all files in the input folder). Maybe for now the solution is to stop thinking in bulk and start making more granular processes that work on each event, like @Anil_G mentioned.

Thank you both for your inputs! I highly appreciate it!

2 Likes

This is one of the biggest challenges I see for people working with event-based processing or orchestration: the switch away from a bulk-processing mindset.

Bulk processes still have a really important place, since with RPA, bulk processing means we don't need to repeatedly log in and out of an application. But I think the ‘stigma’ of having hundreds or even thousands of jobs needs to go away.
We do need better job tracking, though. I have been trying to convince the products team to let us set a ‘Reference’ on a job, like we do on queue items, to help search for them.

We have multiple processes that read the mailbox, add the mail to the queue, and then process the queue items. The connection checks every 5 minutes; when there are 10 mail messages, 10 jobs are created. The process checks whether a mail already exists in the queue: if not, it adds the mail, otherwise the robot continues. After that, the RPA process continues with processing the queue items in the ‘New’ state.
It often happens that the last pending jobs no longer have any queue items to process, because the preceding jobs have already processed them all.
Executing these pending jobs is therefore unnecessary and inefficient, as it wastes the robot's machine time.
Using a performer/dispatcher doesn't solve this problem, because it still starts the robot process multiple times.
Adding the aforementioned settings from the queue trigger would solve this problem, in our opinion.
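Until such a setting exists, one partial mitigation for those wasted pending jobs is to make the very first step of each job a cheap check for remaining ‘New’ items, and exit immediately when there are none. A sketch against the real `/odata/QueueItems` endpoint; the queue-definition id and the response handling are illustrative only.

```python
from urllib.parse import quote

def new_items_query(queue_definition_id: int) -> str:
    """Build an OData query asking for at most one 'New' item in the queue.
    An empty result means this pending job has nothing left to do."""
    flt = f"Status eq 'New' and QueueDefinitionId eq {queue_definition_id}"
    return f"/odata/QueueItems?$filter={quote(flt)}&$top=1&$select=Id"

def should_exit_early(odata_response: dict) -> bool:
    """True when the queue reports no 'New' items, so the job can end
    right away instead of spinning up the full process."""
    return len(odata_response.get("value", [])) == 0
```

This doesn't stop the redundant jobs from being created, but it shrinks their cost to one lightweight API call each.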

1 Like

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.