Scheduling multiple jobs from the same process, but with different parameters

Yes, you can: just give it a different name or put it in a different folder. The two processes using the same package can even be on different versions.

I think the triggers failing to create jobs are there to prevent a huge pileup of pending jobs in case of errors. You can specify a max number of pending jobs on a trigger, I believe. Either way, it's not an issue if you follow the intended design (use queues).

If there is a setting for max number of pending jobs, please tell me how to set it. This is still a very painful issue for me. I have workarounds in place, but if somebody screws up and their job runs long, the whole house of cards collapses. This issue, combined with the fact that the Kill After feature calculates from the job's scheduled start and not the actual job start, really limits our productivity.

Sorry Roger, you can do this on Queue Triggers.
It does not appear to be a setting on time-based triggers.

As I recommended before, convert to a transactional model and use queues to solve your problem.
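To make the transactional model concrete, here is a minimal sketch of the dispatcher/worker pattern being suggested, using Python's in-memory `queue.Queue` as a stand-in for an Orchestrator queue. The extract names and parameters are invented for illustration; in a real deployment the queue would live in Orchestrator and the worker would be a queue-triggered process.

```python
# Minimal dispatcher/worker sketch. An in-memory Queue stands in for an
# Orchestrator queue; item names/params are made up for illustration.
from queue import Queue

def dispatcher(work_queue: Queue, extracts: list) -> None:
    """Dispatcher: enqueue one transaction item per extract to run."""
    for item in extracts:
        work_queue.put(item)

def worker(work_queue: Queue) -> list:
    """Worker: drain the queue, handling each transaction independently,
    so one slow or failed item never causes the others to vanish."""
    done = []
    while not work_queue.empty():
        item = work_queue.get()
        try:
            done.append(f"extracted {item['name']} with params {item['params']}")
        finally:
            work_queue.task_done()
    return done

q = Queue()
dispatcher(q, [
    {"name": "Extract1", "params": {"region": "EU"}},
    {"name": "Extract2", "params": {"region": "US"}},
])
print(worker(q))  # both extracts processed by one generic worker, in order
```

The point of the pattern: items queued late are still there when a robot frees up, instead of a missed trigger window silently dropping the job.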

Never said I wasn’t using queues… Parameters are there to let the robot know what queues/configurations/settings it should look at.

Also, this: “Yes you can, just give it a different name or put it in a different folder, the two processes using the same package can even be on different versions.” is not a solution but a workaround (and an ugly one at that).

Let’s forget for a second how one can bypass this limitation. Please explain to me, given the current configurations one can make with packages/processes/triggers, how one should effectively use parameters. To me, their only point of existence is to allow one process to be executed in different ways based on the parameters passed in.

I never said you weren’t using queues Bromanen…
The only reply I made to you was to point out how you can create more than one process from 1 package.

I really do not understand why you are unhappy with the suggestion of adding the process multiple times with different names. Surely you recognize the need for each process to have a unique name so it can be tracked. It’s not a workaround. It’s like complaining that you cannot create two files with the same name in the same directory; it’s logical that you cannot, because they need different names to distinguish them.

First, assuming queuing actually solves my problem: this limitation is just silly! Each trigger has its own unique name, and that’s the name the scheduler should be tracking. This is just half-baked!

Now back to trying to solve the problem using queuing. I’m probably missing the point. I’ve looked into queues and transactions, but it’s my understanding that I will still need a timed trigger to add each item to the queue. Again I would have a single process, driven by parameters, to add each transaction to the queue, so it seems I’m in the same boat? Even though the job to add a transaction to the queue would run super quick: say I trigger Extract1 to run at 2:00 AM, Extract2 at 2:01, and Extract3 at 2:02. If a totally unrelated job launched at 1:30 and ran long until 2:30, Extract2 and Extract3 would vaporize, never added to the queue, without even a log entry to tell me to rerun them in the morning. In reality I wouldn’t be stacking them all up so close; there would be other jobs mixed in, with prerequisite and/or subsequent jobs. Remember, in my use case I created a single process to do extracts from BI. Each nightly extract is completely unrelated to the others, so I don’t need to, and in fact prefer not to, stack them up close together.

Overhead of maintenance is what I don’t like about the approach of creating multiple packages to create multiple processes. If there were a way to create multiple processes from the same package, we would be OK.

I feel your pain!

At this point in time the only suggestion I have to resolve this is to change the architecture of the solution:

  1. Combine all of your processes for different regions into one dispatcher that populates a queue with info on the region you want to execute.
  2. Have a generalized worker that performs the work (it can be queue-triggered).
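Step 1 above could be scripted against the Orchestrator OData API. The sketch below builds the `AddQueueItem` payload, with the region in `SpecificContent` for the generalized worker to read. The endpoint path follows the documented `AddQueueItem` call, but the URL, token, and queue name are placeholders, and details may vary by Orchestrator version.

```python
# Hedged sketch: dispatcher populating an Orchestrator queue with region info.
# base_url, token, and the queue name "RegionExtracts" are placeholders.
import json
import urllib.request

def build_queue_item(queue_name: str, region: str) -> dict:
    """Build an AddQueueItem payload; SpecificContent carries the values
    the worker reads (here, just the region)."""
    return {
        "itemData": {
            "Name": queue_name,
            "Priority": "Normal",
            "Reference": f"extract-{region}",          # unique-ish tracking ref
            "SpecificContent": {"Region": region},     # read by the worker
        }
    }

def add_queue_item(base_url: str, token: str, payload: dict) -> None:
    """POST the item to Orchestrator (not executed in this sketch)."""
    req = urllib.request.Request(
        f"{base_url}/odata/Queues/UiPathODataSvc.AddQueueItem",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

payload = build_queue_item("RegionExtracts", "EMEA")
print(payload["itemData"]["SpecificContent"])  # {'Region': 'EMEA'}
```

With the regions queued this way, one queue-triggered worker process replaces the pile of near-identical time triggers.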


I don’t think you are reading my post correctly: one package, multiple processes. You simply need to give the process a different name if you want it to be in the same folder as the others. It’s not a high-maintenance solution.

I do it regularly. For example, I have a bot that works with various different customers; each customer has their own subfolder and process, made from the same package. This allows me to use different assets for each of them. If I update the package, I can update all the processes very easily.

It’s not practical to put them all in one dispatcher, as they need to run at different times. Some can run early, like 1:00 AM; others need to wait for a feed and run at 4:00 AM. Some have pre- and/or post-jobs to run. Again, if everything runs correctly, all is well. But we have no night operator watching the robot, so if an earlier job runs long, the timing is all messed up and jobs just start vaporizing without even an error in the log. When I come in in the morning there is no fault telling me to rerun them; I don’t even know one didn’t run until users start calling to complain.

Hi Jon,

This sounds like a usable workaround, but I attempted it and I must be doing something wrong. I go to Automations > Processes and hit the + to add a process,
shown in the screenshot below.
I pick the reusable package: Bi_Extract_By_Perips_to_CSV.
I give it a new name: BI Ambition Extract (Intra Day) 2.
I add my parameters and hit Create.

I get the error "Make sure that Bi_Extract_By_Perips_to_CSV is not already deployed on one of the robots from this environment (#1250)".

What am I doing wrong?

Thanks!