ScreenPlay feature

Hi Team,

Is UiPath Screenplay similar to Autopilot? If so, what is the major difference?
Just trying to understand.

Thanks

Hi @venkatnarasimha9600

Screenplay records user actions to create automations, while Autopilot uses AI to understand user intent and assist in designing, generating, and executing automations.

Screenplay - You show the task once; UiPath records the steps and converts them into an automation.
Autopilot - You ask in natural language; AI understands your intent and helps build/execute the automation.

Hope it helps!!

Thanks & Happy Automations

UiPath Screenplay and Autopilot are not the same, even though both involve AI assistance.

Screenplay is an AI-driven automation builder inside the UiPath ecosystem.
You describe a business task in plain language (example: “Read invoices from email and update Excel”).

Copilot is a general-purpose AI assistant.
It helps with:
Writing text or emails
Explaining code
Generating scripts or suggestions

It does not build UiPath workflows or create structured automations.
Copilot mainly assists humans, not automation platforms.

@venkatnarasimha9600

Autopilot is an AI-driven chatbot you can ask for details or use to run processes, etc.

Screenplay, on the other hand, is an AI-driven agent that can interact with the UI based on the instructions provided.

Cheers

Hi @venkatnarasimha9600

They are related but not the same.

UiPath Screenplay is mainly about UI automation: it records and understands user actions on the screen and helps generate automations based on those steps.

UiPath Autopilot is broader. It uses AI/GenAI to assist across the automation lifecycle — generating workflows, answering questions, fixing errors, and guiding developers, not just capturing UI actions.

In short:

  • Screenplay → focuses on recording and interpreting UI interactions
  • Autopilot → AI assistant for building, understanding, and improving automations (can use Screenplay as one of its inputs)

@venkatnarasimha9600

The two are not the same.

Screenplay uses a Large Action Model (LAM), an advanced AI system that goes beyond text generation (like LLMs) to understand instructions and execute complex, multi-step tasks in digital environments, acting as an autonomous agent that can use software, APIs, and even robots to achieve goals and automate processes.

You just have to give a prompt to automate some UI interaction, like "click the first button on the screen" or "select the first row". The LAM handles everything else.

Autopilot uses an LLM and acts as a helper for the developer, user, tester, etc., wherever it is used.