UiPath Community 2022.4 Stable Release - Test Suite

This topic goes in-depth about the improvements in Test Suite. To read about other products, please navigate to the main topic here.

Visualization of data variations

In 21.10 we introduced Test Explorer - a neat new panel that gives you a view of all your quality assets and test results at a glance.
For data-driven Test Cases, every variation is displayed separately, including its argument values.
To clean up this view, we have changed the visualization slightly:

  • The name of the variation is changed to:
    (1) First_name: Vincent…
    (2) First_name: John…
    That is, we show the index first, followed by the first argument and its value, followed by an ellipsis.
  • On hover, a tooltip is shown with the following format:
    argument1: value
    argument2: value
    argument3: value
    That is, we show a list view with all arguments and their values (see the sketch after this list).
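For illustration only, here is a small Python sketch of how the label and tooltip described above could be derived from a variation's argument values; it is not UiPath's implementation, and the Last_name argument is a hypothetical example.

```python
def variation_label(index: int, args: dict) -> str:
    # "(1) First_name: Vincent…": index, then the first argument and value.
    first_key, first_val = next(iter(args.items()))
    return f"({index}) {first_key}: {first_val}…"

def variation_tooltip(args: dict) -> str:
    # One "argument: value" line per argument.
    return "\n".join(f"{key}: {value}" for key, value in args.items())

args = {"First_name": "Vincent", "Last_name": "Vega"}
print(variation_label(1, args))   # -> (1) First_name: Vincent…
print(variation_tooltip(args))
```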

Shortcuts for Test Explorer Tabs

We have added shortcuts for common tasks within Test Explorer to improve the efficiency of your day-to-day work.

  • Ctrl + Alt + T - Open the Test Explorer panel. You can use this anywhere within the Designer panel.
  • Alt + T - Open the Test Results sub-panel in Test Explorer. You can use this only in the Test Explorer panel.
  • Alt + A - Open the Activity Coverage sub-panel in Test Explorer. You can use this only in the Test Explorer panel.

Refresh of Test Explorer

So far, a general refresh within Studio did not necessarily also refresh the content of Test Explorer. This could lead to situations where artifacts newly created in the project view were not immediately visible in Test Explorer.
With this version, every task that triggers a general refresh in Studio also refreshes Test Explorer.

Execution Duration

When you run Test Cases from Test Explorer, the result now also includes the execution duration.

Object Repository in Test Automation projects

Maintenance is particularly challenging in Test Automation, where you may have to deal with thousands of Test Cases at a time. Object Repository provides the capabilities needed to reduce maintenance significantly.
In 21.10, Object Repository was not available within Test Automation projects - this is now fixed.

Mock activity children can’t be deleted when the designer is zoomed in

When the designer was zoomed in (that is, the root activity differed from the original one, the ActivityBuilder), activities on the right-hand side of a Mock activity could not be deleted. This bug has been fixed in this release.

UI Freeze when updating test data

When updating a test data source used by multiple test cases, the UI froze for a few seconds while the Test Explorer panel was updating. We have fixed this issue.

Extended context menu

Within Test Explorer, the following tasks are now available in the context menu of Test Cases:

  • Add test data
  • Remove test data
  • Update test data
  • Link to Test Manager

Extended Filter Option

We have extended the existing filter options within the toolbar to also incorporate the following:

  • Display Activity Coverage
  • Display Execution Time

Additionally, we merged the coverage options into the filter button and added the following options:

  • Show covered activities
  • Show uncovered activities

General Test Result Summary

Within the Test Results tab of Test Explorer, we now offer an overview of all test results, including execution state and time.

Collapse All/Expand All

image029

We tried to keep the structure in Test Explorer as flat as possible due to the limited space available, which is why we do not show the folder structures from the project panel. Still, some structure remains: RPA Tests appear as children of RPA workflows, and data variations as children of data-driven Test Cases. To help you keep an overview within Test Explorer, we have added expand/collapse buttons to the toolbar.

Display Execution Time

A nice little metric we have added to your test results is the actual execution time.

Automatic RPA Test Data Generation

Why did we create this feature?

A while ago, we conducted a survey among you because we wanted to know more about your RPA quality practices. Based on the results, we drew the following conclusions:

  • RPA developers acknowledge the value of proper testing to keep bots up and running
  • RPA developers are cautious about the effort required to create automated tests
  • Most testing is done manually right now and only during development
  • The perfect testing solution for RPA takes care of testing fully autonomously

Based on these findings, we started a research project together with the University of Bucharest to create a solution capable of automatically generating test cases for your RPA workflows.

This release includes the first version of this new capability.

What does it do?

It uses a symbolic execution algorithm, which analyzes your automated workflow to determine which inputs cause each branch of the workflow to execute. An interpreter follows the program, assuming symbolic values for inputs rather than obtaining actual inputs as normal execution would. It thus arrives at expressions in terms of those symbols for the variables in the program, and at constraints in terms of those symbols for the possible outcomes of each conditional branch. Finally, the inputs that trigger each branch can be determined by solving those constraints.
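To make this concrete, here is a minimal sketch of the general technique in Python, using the Z3 constraint solver (pip install z3-solver). The workflow condition and the argument name in_Amount are hypothetical illustrations, not UiPath's actual implementation.

```python
# For each branch of a (hypothetical) workflow that checks
# "in_Amount > 10000", ask the solver for an input that reaches it.
from z3 import Int, Solver, Not, sat

in_Amount = Int("in_Amount")  # symbolic stand-in for the input argument

branches = [
    (in_Amount > 10000, "approval"),
    (Not(in_Amount > 10000), "auto-process"),
]
for constraint, name in branches:
    solver = Solver()
    solver.add(constraint)          # path constraint for this branch
    if solver.check() == sat:       # branch is reachable
        model = solver.model()      # concrete input satisfying the constraint
        print(f"'{name}' branch reachable with in_Amount = {model[in_Amount]}")
```

Each satisfying model becomes one row of generated test data, which is how solving the path constraints yields one data variation per branch.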

How does it work?

For this feature to work best, expose all variables used in conditions within your workflow as arguments, since those arguments are the basis on which the algorithm generates test data inputs.

First, create a new Test Case on your workflow:

image022

Next, select Auto-generate as your test data source:

image023

This will trigger the automatic data generation:

image025

After a few seconds, you will see a table with your generated data. The first row of this table contains your default argument values. The additional rows contain the data required to fulfill the conditions set throughout your workflow. Our goal here is to reach an activity coverage of over 90% of your workflow.

If the data generated does not serve your needs, you can of course edit it afterwards.

Now you have a new data-driven RPA Test Case that, when executed, runs once for each data variation and, in the best case, covers all paths through your workflow.
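As a rough illustration of what "one run per variation" means, here is a toy sketch in Python; the workflow logic, the argument name, and the data rows are hypothetical, not UiPath code.

```python
# Toy stand-in for a workflow under test with a single conditional branch.
def workflow(in_Amount: int) -> str:
    return "approval" if in_Amount > 10000 else "auto-process"

variations = [
    {"in_Amount": 0},      # first row: default argument values
    {"in_Amount": 10001},  # generated row satisfying the branch condition
]

# The test case runs once per variation; together the runs should
# cover every branch of the workflow.
covered = {workflow(**v) for v in variations}
print(f"Branches covered: {sorted(covered)}")
```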

Data Driven Testing with UiPath Data Service

Why did we create this feature?

Data-driven testing is the most appreciated feature within Test Suite due to how simple it makes handling test data. However, the downside of this approach is that test data is baked into the project at design time, and changing the data requires several steps. The number one ask from our customers was a solution that lets them define test data dynamically and consume it at runtime rather than at design time.

What does it do?

We decided to leverage existing dynamic data sources such as Data Service and enable them for data-driven testing.

When you use Data Service as your test data source, Studio imports the definition of your data object, which can then be referenced within your test cases, while the actual data is retrieved dynamically at runtime.

How does it work?

First, create a new Test Case within Studio.

Then, select Data Service as your test data source:

image028

Now you can choose entities from your connected Data Service as the data source:

image030

You can also filter your entities by specific criteria to make sure that only records that serve your specific testing needs are used.

Your entity will now be available as an argument, and its fields can be accessed via dot syntax.

As a result, you get a data-driven Test Case that consumes test data dynamically at runtime. If triggered from Orchestrator, it runs once for each data variation available in Data Service.
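To illustrate the design-time/runtime split and the dot syntax described above, here is a conceptual sketch in Python; BankCustomer, fetch_bank_customers, and the field names are hypothetical and do not reflect the actual UiPath Data Service client API.

```python
from dataclasses import dataclass

# The entity *definition* is what Studio imports at design time.
@dataclass
class BankCustomer:
    FirstName: str
    CreditScore: int

def fetch_bank_customers() -> list[BankCustomer]:
    # Stand-in for a runtime call that retrieves live Data Service records.
    return [BankCustomer("Vincent", 710), BankCustomer("John", 640)]

# The actual records are fetched only when the test runs, and the test
# executes once per record, with fields accessed via dot syntax.
for customer in fetch_bank_customers():
    print(customer.FirstName, customer.CreditScore)
```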
