Try UiPath ScreenPlay for free—join the preview

Hello UiPath Community!

We’re excited to announce the next stage of the UiPath ScreenPlay public preview. Starting today, it becomes available to all enterprise and community license users free of charge: no third-party licenses required!

What is ScreenPlay? It’s UiPath’s fully autonomous self-driving engine for building and executing reliable unattended UI automations at scale starting from a single prompt:

  • Build UI automations using simple prompts: Describe your task in natural language, and ScreenPlay will autonomously navigate and execute the steps in the target app.

  • Make your automations more resilient: ScreenPlay dynamically adapts to context, ensuring your automations run smoothly even as applications change.

  • Benchmark proven quality: UiPath’s Screen Agent (the LAM engine powering ScreenPlay) with GPT-5 recently ranked #2 on the OSWorld-Verified Foundation E2E GUI benchmark, with a 53.6% success rate (Accessibility tree enabled, 50 max steps).

ScreenPlay delivers its value through a few key components:

  • Deep UI grounding: Our Screen Understanding engine blends DOM extraction with Computer Vision to anchor models in real applications, eliminating hallucination and brittleness.
  • Enterprise orchestration & governance: Enables resilient, compliant automation across thousands of processes.
  • Security via the Trust Layer: Enterprise-grade safeguards for data, models, and execution.
  • Seamless integration with RPA: Deterministic bots and adaptive AI collaborate in the same workflow.
  • Model choice and flexibility: A curated dropdown of state-of-the-art models lets you balance speed, cost, and power per use case.

How can you try ScreenPlay? As part of this public preview, each account gets free access to all five available models within a free consumption package. More details are available in our user guide: Agents - Installing ScreenPlay

If you have any questions or run into any issues, please reach out here or via the Insider Portal.

We can’t wait for you to try ScreenPlay, and we look forward to your feedback!

Anastasia and the ScreenPlay team

[FAQ] Using variables in the prompt

You can use variables in the prompt like this: {{variable}}
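For example, assuming your workflow defines a String variable named customerName (a hypothetical name used only for illustration), the prompt could look like:

```
Type {{customerName}} into the "Customer" search field and click Search.
```

At runtime, the {{customerName}} placeholder is substituted with the variable's current value before the prompt is sent to the model.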

:information_source: This is a temporary solution - we’ll soon have an improved prompt editor with all the usual conveniences for working with variables. We implemented this very basic editor because the initial Expression Editor we were using left us vulnerable to prompt injection attacks - we needed to separate variables from the rest of the prompt. We’re aware this is not an ideal experience and we apologize for that - we’ll push to have the new prompt editor available as soon as possible.

24 Likes

Thanks for the amazing release @Anastasia_Yasevych.

UI Automation was the feature I liked most when I started with UiPath. Now, seeing ScreenPlay automate the UI from nothing more than written prompts is absolutely amazing.

So, from trying ScreenPlay:

  1. Is there any difference in approach between the Computer Use/Operator models and the regular GPT and Gemini models? I noticed performance variation between them, and also differences in how they handle things.
  2. Sometimes the Use Browser properties, such as the input method, are not inherited.
  3. Unable to open the expression editor for prompts.
  4. Unable to capture fields from the screen and set them to a variable or out argument.
  5. How can it learn by itself across multiple runs? Is a training or retraining feature planned, or something similar to prompt health and evaluation sets in Agent Builder?

:star_struck: Overall it looks very promising, and I’m super excited.

5 Likes

Will it consume agentic units, and how?

1 Like

@sudarshan_thite Uipath Screenplay - #3 by ashokkarale

1 Like

Hi.

  1. Yes, Anthropic Computer Use and OpenAI Operator are image-only models that can “see” exactly what is visible on the screen and nothing more. The UiPath models also have access to the page DOM, so they work with more data and don’t necessarily need to scroll, although sometimes they choose to gather visual information as well.
  2. Do you have some logs or an example workflow for us to investigate?
  3. Yes, that’s somewhat by design and we are fully aware of the horrible limitations that the current editor has, but rest assured we have a much better one coming up.
  4. What do you mean? The Screenplay activity has an output field, and you can instruct the model to output anything you want to it, including multiple fields formatted as JSON.
  5. A) Yes, we are thinking of methods to transform what is currently an agentic process into more of a codified workflow.
    B) Yes, evals that allow you to stress-test prompts, models, and targets are coming.
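To illustrate point 4, here is a sketch of a prompt that asks the model for structured output (the field names are hypothetical, chosen only for this example):

```
Create the case, then output ONLY a JSON object of the form:
{"caseId": "<the case reference ID shown on screen>", "status": "<the case status>"}
```

The activity’s output string can then be deserialized in the workflow (for example with a Deserialize JSON activity) and the individual fields assigned to variables or out arguments.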
1 Like

Screenplay will have an out-of-the-box generous quota included, and anything above that will consume agentic units.

Makes sense, thanks :+1:

Here it is: Main.zip (4.5 KB)

Cool :slight_smile:

In the same workflow above, I have an out argument (I also tried a variable) where I want to save the created case reference ID, but it isn’t being saved. I can see the case ID captured in the logs, but it never gets written to the variable or out argument.

Thanks, I’m excited for it :rocket:

Just tried out ScreenPlay — really impressed with how it handles UI changes and still completes the task smoothly. I tested it with a few different models, even on the UiBank demo portal, and noticed a small delay between steps across all of them. Is that expected behavior during the preview, or does it depend on the model being used?

@Anastasia_Yasevych, Got a chance to explore UiPath ScreenPlay recently and honestly it’s a great experience. The way it understands plain language and turns it into actual actions feels smooth and practical. It really makes automation easier to build and more natural to work with.

1 Like

:clapper_board: I just tested UiPath ScreenPlay and recorded a video showing how it works in practice!

:television: Watch here: https://youtu.be/MOxXxupg4TA

3 Likes

Amazing, happy to read about your ScreenPlay experience!

1 Like

Hi everyone,

I’m facing an issue while using the Screenplay activity in UiPath. I’m working on this website
https://www.menswearhouse.com/c/mens-clothing-sale/30-percent-off-suits
I’m passing this prompt to the Screenplay activity:
On the current webpage, click once on the “Sort” button to open the sorting menu.
Wait until all sorting options are fully visible.
Locate the sorting option labeled “Best Sellers.”
The radio button for this option is to the right of the label.
Click directly on the radio button to the right of “Best Sellers,” not on the text or any other nearby option.
After selecting it, click once on the “Apply” button to confirm the selection.
Wait until the product list updates or the page reloads before proceeding.
Do not click again once the selection is made.

Issue:
When I run the automation, the ScreenPlay activity always clicks slightly below the target radio button instead of clicking on it directly. For example, when I pass the value “Best Sellers” in the prompt, it always clicks the “Price: Low-High” radio button, and similarly in all the other cases.

Has anyone faced a similar issue?
Any suggestions on how to precisely click the radio button (which is located on the right side of the label) would be really helpful.

Thanks in advance!

1 Like

Excited to see what ScreenPlay will evolve into.
I myself will be testing it out thoroughly as preparation for an upcoming presentation :star_struck:

1 Like

Try with different models and tweak your prompt.

It’s not working with other prompts either.

Try setting the “Use DOM when available” property to False as well - in some situations, the DOM information is not accurate in terms of UI element size and location. Setting this property to False forces image-only targeting, which might solve your issue.

Also, it seems there’s something wrong with the link you provided; I couldn’t replicate your scenario.

One of the major drawbacks of Computer Use today is that almost all the underlying LAMs (Large Action Models) powering it are in their early stages - think GPT-2.5 level of quality.

One mental image you can form of them is this guy, whom we use in all our presentations when describing the state of the current models:

This is why we recommend limiting your prompts to microtasks: very short sequences of 1 to 5 steps, ideally around 2 - a couple of clicks, or a few type actions.
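As a sketch of that guidance, the sorting scenario discussed earlier in this thread could be split into separate ScreenPlay calls, each carrying one microtask prompt (the exact wording below is only an illustration):

```
Call 1: Click the "Sort" button to open the sorting menu.
Call 2: Click the radio button to the right of the "Best Sellers" label.
Call 3: Click the "Apply" button, then wait for the product list to update.
```

Each call stays within the recommended 1-5 step range, which gives the model far less room to drift.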

That being said, we expect models to evolve quite rapidly - Gemini 2.5, for example, is a decently “not slow” (I can’t say “fast” yet :stuck_out_tongue:) alternative you can use now.

1 Like

Hi,
I’m getting errors while running ScreenPlay using the desktop version:
Error: “An item with the same key has been added. Key: orgaudit”