Immediately Retry Failed QueueItems

I have a process where failed queue items should be processed immediately, rather than added to the back of the queue to be retried after all the others. I am using the ReFramework, and I noticed that it uses the Set Transaction Status activity, which puts the QueueItem in the Failed status. Is there a way to specify that the new item should be high priority, or something similar? I noticed the Output collection on the activity; is that simply for reporting? Also, if I were to set the priority of the QueueItem that is about to fail to High, would the new QueueItem then inherit that priority?

I tried researching this, and I was surprised I didn’t really come back with anything. I was wondering, before I dig too deeply into this, if anyone has come up against this and if there’s a preferred way to handle this case.

Thank you,

I have a question. Did you test this? For example, did you run a job using 1 Robot and confirm that it placed the new item at the top, then picked the first New item at the bottom? I don't think there is a resolution for this, but maybe it's an idea that can be posted to the dev team: for example, an option to raise the priority when an item enters the Retried state, to give your Queue more flexibility.

I was actually just optimizing my shell to use the RetryNumber and MaxRetryNumber alongside the Queue retry. That way, for instances where I need the robot to attempt retries itself before sending it back to the Queue, I can set that up.

It would also be nice if a retry could be attempted on a different Robot than the one it ran on previously. Let's say you run it on 2 Robots, and one Robot has a configuration issue that causes the item to fail, so it runs on the other one, which works fine. Right now, there's a chance the previously failed Robot will be chosen to process the item again.

Technically, there is a workaround. You can program your bot to manage the retries itself: if your item fails, you can delete the item, then add it back with a High priority. However, this workaround would not track the retries, I don't think, so it might not be an ideal solution.
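To illustrate the effect of that workaround, here is a minimal sketch using a hypothetical in-memory priority queue. This is only a model: a real implementation would call the Orchestrator Delete Queue Items and Add Queue Item activities (or the Orchestrator API) rather than this toy `Queue` class.

```python
# Toy model of the delete-and-re-add workaround. The Queue class below is
# an illustrative stand-in for an Orchestrator queue, not a real API.
import heapq
import itertools

PRIORITY_RANK = {"High": 0, "Normal": 1, "Low": 2}

class Queue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-breaker within a priority

    def add(self, reference, priority="Normal"):
        heapq.heappush(self._heap,
                       (PRIORITY_RANK[priority], next(self._counter), reference))

    def get_next(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = Queue()
for ref in ["item-1", "item-2", "item-3"]:
    queue.add(ref)

first = queue.get_next()           # "item-1" fails during processing...
queue.add(first, priority="High")  # ...so re-add it with High priority

print(queue.get_next())  # item-1 is picked again, ahead of item-2
```

As the sketch shows, the re-added High-priority copy jumps ahead of the remaining Normal items, which is the "retry immediately" behavior asked about above, at the cost of losing the built-in retry count.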

Hello Clayton,

I unfortunately haven’t had much time to delve into this further. I had not tried too much out before posting as I didn’t want to go too deeply before seeing whether someone else ran into the same issue since it seemed pretty basic.

I think the problem is that the REFramework is only setting the status of the transaction to Failed, and Orchestrator is taking that information, checking whether the retry count is less than the max for the queue, and automatically creating a new transaction after placing the previous one in the Retried status.

So, really, this is likely going to have to be a feature request regarding how Orchestrator queues handle retrying, with a way to let users select the order in which items are processed in this case.

I agree. It's more user-friendly if items are processed in the order they come in. But there are also advantages to placing them at the back of the queue, I suppose: all the failed items that have issues let the successful items go through first, which may increase process speed. On the other hand, it depends, because most failures are application-based and do not depend on the item being processed. And if the failure is with the item itself, then it's most likely a business rule exception and won't be retried anyway.

I kind of believe that items should be retried "locally" using a MaxRetryNumber in the code, alongside the Orchestrator Queue retries. If the item still fails, then Orchestrator will place it at the back of the line to potentially try on a different machine. Imagine only using the Queue retry feature with a retry number of > 10, and most items fail: you could end up with hundreds of items in the Retried state, and if you wanted to add a unique item back into the queue, you would need to delete every one of them, since you cannot add an item again unless it is in the Deleted state. So, if you use a combination of "local" retries and "queue" retries, it may work better, and you just need to log, for example: "[Queue Retry 1][Attempting Local Retry 1 of 10] Application Exception - Source: Activity, Message: System.Exception", where the item is on its second attempt by the Queue, another error happened, and it is doing a local retry. If all 10 local retries still fail, the queue retry count will increase to 2 the next time the item is processed.
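The local-plus-queue retry pattern above can be sketched roughly as follows. Note that `process_with_local_retries`, `process_item`, `queue_retry_count`, and `MAX_LOCAL_RETRIES` are illustrative names, not real REFramework identifiers; in the REFramework you would drive this from the Config file and the transaction's retry count.

```python
# Rough sketch of combining "local" retries with Orchestrator queue retries.
# All names here are illustrative stand-ins, not REFramework APIs.
MAX_LOCAL_RETRIES = 10

def process_with_local_retries(item, process_item, queue_retry_count, log):
    """Attempt the transaction locally before failing it back to the queue."""
    for attempt in range(1, MAX_LOCAL_RETRIES + 1):
        try:
            return process_item(item)
        except Exception as exc:
            log(f"[Queue Retry {queue_retry_count}]"
                f"[Attempting Local Retry {attempt} of {MAX_LOCAL_RETRIES}] "
                f"Application Exception - Source: Activity, Message: {exc}")
    # All local retries exhausted: set the transaction to Failed so
    # Orchestrator creates a new Retried copy at the back of the queue.
    raise RuntimeError("Local retries exhausted; set transaction status to Failed")
```

Only when the local loop is exhausted does the item fall back to the queue-level retry, which keeps the Retried count in Orchestrator low while still producing the log line format described above.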

So that’s just a few additional thoughts.


Yes I agree.

Retrying a queue item/transaction is the user's choice, so building code to retry it locally until it succeeds or crosses the MaxRetry should be incorporated in the workflow.

Build one workflow just to retrieve the transaction, and a separate module to retry it until it succeeds.

Regards :slight_smile:

PS : @ClaytonM the different robot retry is a unique idea…!!

I think this is achievable if you use a due date (SLA) instead of priorities. The new item will have the SLA cloned, so it will be processed with high priority.


Hey Mihai,

I tried your suggestion and it looks like it works. The items that I'm adding to my queue are supposed to be processed within 24 hours, but I gave myself some leeway just in case, because I didn't want them to unnecessarily time out. I used DateTime.Now.AddDays(7) within my Add Queue Item activity. I re-added some test queue items so that they would be generated with the due date, and purposefully errored out the process. Sure enough, it immediately reprocessed the items. Thank you for the suggestion.
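For anyone reasoning about why this works, the effect can be modeled as a toy earliest-deadline-first dispatcher. The field names and the ordering rule below are simplifying assumptions for illustration; Orchestrator's actual ordering also considers priority and whether items have deadlines at all. The point is that the retried copy keeps the original item's (earlier) due date, so it sorts ahead of freshly added items.

```python
# Toy model of deadline (SLA) ordering: pick the item with the earliest
# DueDate first. Field names and the rule itself are assumptions.
from datetime import datetime, timedelta

now = datetime(2020, 1, 1, 9, 0)
queue = [
    {"ref": "retried-item", "due": now + timedelta(days=1)},  # cloned, earlier deadline
    {"ref": "new-item",     "due": now + timedelta(days=7)},  # e.g. DateTime.Now.AddDays(7)
]

next_item = min(queue, key=lambda it: it["due"])
print(next_item["ref"])  # retried-item is dispatched before new-item
```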

