My dispatching bot watches a folder “…/input/” and, when a new file appears, uses the Move File activity to transfer it to the folder “…/output/”. After that the bot creates a transaction item referring to that filename. Sometimes a second machine runs the same dispatching bot at the same time and moves the same file. That produces only one file in “…/output/” but a second transaction item. I wrapped the Move File activity in a Try Catch, but the second machine does not get an error while moving the file.
How can I prevent a second machine from trying to move the file? Or how can I check whether a file is already being moved?
In my example you can see just one moved file, but two items were created by different machines:
Thanks for your help!
Hello @mm1904 ,
What about using a flag value? If the file is being moved by the first bot, it sets the value to “Process”; if the second bot then checks the value and sees “Process”, it has to wait for the status to change to “Completed”. Once it is “Completed”, bot 2 can start moving the file and set the flag value to “Process” itself.
Thanks for your answer! Where would you set the flag value? As an attribute on the file, in my bot, or somewhere else?
You shouldn’t run two Jobs of the same dispatcher.
Hi @postwick, you are right, I shouldn’t run two dispatcher jobs at the same time, but due to our working model I can’t change that. So I have to solve the problem of two bots moving the same file.
You can create an Orchestrator asset. Using the Set Asset activity you can change the asset value. Alternatively, keep the value in a normal Excel file.
What does this mean? If your working model results in two jobs trying to work on the same file, your working model is wrong.
Hi, I don’t think the working model is wrong, but I can understand that someone who doesn’t know our model might think so.
Every machine runs a dispatcher job to look for new tasks, put them into the different queues, and then search the queues, following our priority list, for the next task to do. That works very well in our main software, because the software doesn’t allow two machines to open the same task, so we won’t change our model there. The decision is based on the situation in the different queues at the moment the next job starts, not on time or anything else.
But the file system doesn’t produce an error or any other message when two bots try the same move. So I have to find a way to prevent the second bot from doing the same thing.
I hope that makes our working model and my needs a little clearer.
Again, why? Why are you running more than one dispatcher? There shouldn’t be a reason to.
And, frankly, the fact that you’re having this problem illustrates why you shouldn’t have more than one dispatcher running. The dispatcher/performer model is intended so that you have one dispatcher that creates queue items and then multiple performers, exactly to avoid this kind of problem.
OK, maybe our dispatcher does things differently, or does more things than a ‘normal’ dispatcher. The first part looks for new tasks and creates transaction items. The second part sorts the queues by our priority list and checks every queue for a transaction item, so the oldest transaction item in the queue with the highest priority is started next. We didn’t find a function in Orchestrator to sort queues by priority, so we built that function into our dispatcher.
But I will think about how to rebuild the first part so that only one machine runs it and the others don’t. Maybe by checking whether the dispatcher process is already running on another machine, so the other machines don’t look for new tasks. I don’t want to run that part exclusively on one fixed machine, because looking for new tasks would stop if that bot / machine has an error.
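The selection rule described here (oldest item in the highest-priority non-empty queue) could be sketched like this. Queue contents are plain lists for illustration; the function name and data shapes are made up, not Orchestrator API:

```python
def next_transaction(queues, priority_order):
    """Return (queue name, oldest item) from the highest-priority
    non-empty queue, or None if every queue is empty.

    `queues` maps queue name -> list of items, oldest first.
    `priority_order` lists queue names, highest priority first.
    """
    for name in priority_order:
        items = queues.get(name, [])
        if items:                 # first non-empty queue wins
            return name, items[0]  # items[0] is the oldest item
    return None                    # nothing to do anywhere
```

The key property is that the walk follows `priority_order`, so a lower-priority queue is only consulted when every higher-priority queue is empty.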
Thanks for your help!
Then you need to do proper error handling.
What does this mean? Queues have priority, deadline, and other features. A single dispatcher should be prioritizing queue items as it creates them.
Are you running these dispatchers attended or unattended?