Orchestrator PaaS Deployment Becomes Unresponsive When Ramp-Up Begins at Day Start

How to troubleshoot an Orchestrator PaaS deployment that becomes unresponsive when ramp-up begins at the start of the day?

The Orchestrator PaaS deployment goes into an unresponsive state when ramp-up begins at the start of the day. The Orchestrator App Service recycles frequently due to Average Response Rate spikes during these events. Requests towards Orchestrator time out, causing Internal Server Errors: 500 with substatus 121 (Azure App Service timeout).

  1. Configuration problems
     - NuGet was set to use Redis coordination; FSM was disabled.
     - Processes.FilterOutDeleted was added to the Web.config.
     - Scale-Out rules were disabled until they can be evaluated correctly.
     - WebSockets were turned on. Scalability.SignalR.Transport should be set to 1 to force WebSockets and avoid spawning unhealthy Long Polling or Server-Sent Events connections.
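The configuration changes above land in the Orchestrator Web.config appSettings section. A minimal sketch of the two settings named in this article is shown below; the key names match UiPath Orchestrator settings, but the surrounding file layout is illustrative and the values should be validated against your environment:

```xml
<!-- Illustrative Web.config fragment (appSettings section only). -->
<appSettings>
  <!-- Hide deleted processes from queries to reduce load; value is an example. -->
  <add key="Processes.FilterOutDeleted" value="true" />
  <!-- Force the WebSockets transport for SignalR (1 = WebSockets),
       avoiding fallback to Long Polling or Server-Sent Events. -->
  <add key="Scalability.SignalR.Transport" value="1" />
</appSettings>
```

After editing Web.config, the App Service recycles, so apply such changes outside the ramp-up window.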
  2. Environment scaling differences, corroborated with Azure elasticity
     - Scale-Out rules will impact how this behaves. For now, 30 nodes are allocated by default to provide sufficient headroom during heavy load at ramp-up.
     - Scalability.Heartbeat.PeriodSeconds was set to 300 seconds to reduce load, and the Robot Service was restarted en masse so the new setting takes effect.
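The heartbeat change above is also a Web.config setting. A minimal sketch, assuming the same appSettings section as the other keys; the value 300 comes from this article, and the default noted in the comment should be confirmed against your Orchestrator version:

```xml
<!-- Illustrative Web.config fragment: lengthen the Robot heartbeat
     interval to 300 seconds (up from the much shorter default) so
     fewer heartbeat requests hit Orchestrator during ramp-up. -->
<appSettings>
  <add key="Scalability.Heartbeat.PeriodSeconds" value="300" />
</appSettings>
```

Connected Robots only pick up the new interval after their Robot Service is restarted, which is why a mass restart follows the change.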