Bug in Queue when Unique Reference Enforced

Hello, everyone.
I posted the same issue in the Help section three days ago but did not receive any solutions, so I am reposting it here because I believe it might be a bug.

The question is related to the unique reference property of a Queue, and I believe there might be a bug or something I’m missing. Please take a look at the scenario described below:

The process is developed using the REFramework with the Dispatcher and Performer concept.

Let's say:

  1. I have created a Queue with Auto Retry set to 1 and Unique Reference enforced.
  2. I executed the dispatcher and added a few queue items, let’s say 10.
  3. Upon running the performer, it fetched a new queue item for processing. However, the processing failed on the first attempt. Since a retry is configured, a new Queue Item with the same ID (as per the REFramework Template) was added to the queue with a “New” status.
  4. Before proceeding to process “Transaction 2” or any other new queue items, I stopped the process.
  5. In the end state of the Performer, let's assume that I deleted all queue items with statuses other than “New” (as required by the process).
  6. I initiated the dispatcher once again.
  7. Now, I attempted to add the same queue item that was added in the previous run of the dispatcher. (This is the queue item that initially failed and was added as new due to the retry rule of the Queue.)
  8. Strangely, the queue item was added again, resulting in two “New” queue items with exactly the same reference. This is surprising, considering that unique references are enforced.

So, my question is: Since there is already an existing “New” queue item with the same reference, how is it possible for a new queue item with the same reference to be added? Doesn’t this situation contradict the entire rule of enforcing a unique reference?

Best regards,
Devasya Singh

Hi @devasyasingh

There will be no issue with the queue items. However many times you run the dispatcher, the items will be added to the queue, but when the process retrieves data, the first queue item added will be processed on a FIFO (First In, First Out) basis, so no bug arises for a queue with unique references.

@Dinesh_Guptil Sorry to say, but I think you did not understand my question. It concerns duplicate Queue Items when Unique Reference is enabled; ideally, this should not happen. Could you please go through the question again?

Hi @devasyasingh,

Looking at the screenshots from Unexpected Behavior with Retries in Queue when Unique Reference Enforced - Help / Orchestrator - UiPath Community Forum

I do believe this is a bug. Thank you for reporting this. We probably need to wait for the product team to take a look at this.

I suggest you add the same screenshots from your other post to this feedback post; it makes the scenario easier to understand.

Hi @jeevith ,

Thanks for looking into it. As you suggested, I am attaching the screenshots here as well.


Please refer to the images below for the example scenario.

Dear All,

Still hoping for an official clarification on the aforementioned issue…

@loginerror @Ovidiu_Constantin @Pablito @Catalina_Ianus @Alexandru_Szoke


Hi @devasyasingh

Thanks for the find and for raising the topic; we will strengthen our documentation on unique references accordingly.

Since inception, the system has behaved such that retried and deleted items do not participate in uniqueness checks. In your case, the deletion is what does the harm; you can leave the item as failed until retention cleans it up, without any side effects.
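The rule described above can be sketched as a toy model. This is pure Python, not UiPath code or the Orchestrator API; all class and field names are illustrative assumptions, chosen only to reproduce the thread's scenario under the stated rule (retry copies and deleted items are invisible to the duplicate-reference check):

```python
# Toy model of the uniqueness rule described above (illustrative, not UiPath code):
# items created by an automatic retry and items that have been deleted are
# skipped by the duplicate-reference check.

class QueueItem:
    def __init__(self, reference, from_retry=False):
        self.reference = reference
        self.status = "New"
        self.from_retry = from_retry  # True for the copy created by Auto Retry
        self.deleted = False

class Queue:
    def __init__(self, auto_retry=1):
        self.items = []
        self.auto_retry = auto_retry

    def add(self, reference):
        # Assumed rule: retry copies and deleted items are invisible here.
        for it in self.items:
            if not it.from_retry and not it.deleted and it.reference == reference:
                raise ValueError("duplicate reference: " + reference)
        item = QueueItem(reference)
        self.items.append(item)
        return item

    def fail(self, item):
        # A failed item with retries left is marked "Retried" and a fresh
        # "New" copy (the retry) is appended to the queue.
        if self.auto_retry > 0:
            item.status = "Retried"
            self.items.append(QueueItem(item.reference, from_retry=True))
        else:
            item.status = "Failed"

# Scenario from the thread:
q = Queue(auto_retry=1)
first = q.add("TX-1")        # dispatcher run 1
q.fail(first)                # performer fails -> retry copy with status "New"

for it in q.items:           # step 5: delete everything that is not "New"
    if it.status != "New":
        it.deleted = True

q.add("TX-1")                # dispatcher run 2: succeeds despite unique refs
dupes = [it for it in q.items
         if it.reference == "TX-1" and it.status == "New" and not it.deleted]
print(len(dupes))            # -> 2: two "New" items with the same reference
```

In this model, if the "Retried" original is left in place instead of being deleted, the second `add("TX-1")` raises a duplicate-reference error, which matches the "leave it as failed till retention" advice above.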

Moreover, I agree that it would have been cleaner to have retries participate in uniqueness checks, but such a change could be very invasive for existing automations. I therefore lean towards not changing this part, especially since we recently optimized some retry counters and keys where we merely aligned the behavior with the documentation and still generated side effects in existing automations.

Hope this clears things up for you.
Best regards, Alex.

Hi @Alexandru_Szoke

Thank you for the clarification. This is a very rare case. I would still prefer to avoid getting stuck in this scenario because, in my opinion, leaving items as failed until retention cleans them up will only be effective for cloud licenses. I agree that making changes at this point would be challenging for existing automations.

I hope that in the coming days, we will have a better way to manage this scenario.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.