Data Scraping is not working inside the sequence, but it works when I just test it on an already opened web page. Can someone please help me with this issue?
“Data Scraping not working in the sequence”
Do you get any error, or does the Datatable simply have no data?
I’m not getting any error; the Datatable has no data after execution. @vvaidya
But when I perform the same action in a test project on the same page, it fills the datatable with exactly the data I need (for this, the webpage is already kept open before extraction).
I also tried debugging and found that the original workflow takes less than a second to extract data from the page, while the test project workflow takes noticeably longer to extract data from the webpage.
Can you check the ExtractMetadata property of the data scraping activity in both workflows — are they the same or different?
They are the same. I even tried copy-pasting the data scraping section from my original project into the test project, and it works fine as desired. @ddpadil
Could you please share the workflow? It will be easier to identify the issue if we have at least a screenshot.
Are you doing the extraction inside an Open Browser activity, or at least inside an Attach Browser activity, so that UiPath can identify the browser you are working with?
I tried working on it, but it didn’t help. Even in the test project, the Data Scraping activity extracts the data differently each time it runs.
For example, if I am extracting file names and their URLs from a table containing many file links, the Data Scraping activity extracts a different number of filenames/URLs on each run.
So I tried extracting the data by another method. The webpage offers an option to download that table as a CSV file, and I now extract the filenames and URLs from that CSV file. This worked around my problem (the original issue isn’t actually solved, but I opted for a different approach).
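For reference, the CSV workaround amounts to: download the table as a CSV file, then read the filename and URL columns out of it. A minimal Python sketch, assuming hypothetical column headers "FileName" and "Url" (the real headers depend on the webpage):

```python
import csv
import io

# Sample CSV content standing in for the file downloaded from the webpage.
# The column names "FileName" and "Url" are assumptions for illustration.
sample = """FileName,Url
report.pdf,https://example.com/files/report.pdf
data.xlsx,https://example.com/files/data.xlsx
"""

def extract_file_links(csv_text):
    """Return a list of (filename, url) pairs from the downloaded CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["FileName"], row["Url"]) for row in reader]

links = extract_file_links(sample)
for name, url in links:
    print(name, url)
```

Unlike the flaky scrape, this reads a static file, so the row count is stable from run to run.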
Thank you @ovi for editing the topic and making it visible to other people in the forum.
Would you please check whether the scrape runs only after the page is loaded? Perhaps, and more likely, it is doing the scrape before the load is complete.
I have had a very similar problem but documented the circumstances more completely; I just didn’t spam-tag people, no one responded, and so I didn’t feel obliged to post my solution: Replacing ExtractDataTable every browser session. If this sounds similar to your problem, then I may have a solution:
Occasionally UiPath will truncate CSS selectors after a certain number of characters and replace the remainder with a wildcard ‘*’, rendering them invalid, since there are often other child elements. This shows up when looking through UiExplorer. I can enter a complete path into UiExplorer; in my case that path looks something like:
…;div>div>div>table’ tag=‘TABLE’ />
Entering this directly into UiExplorer and clicking the Validate button tells us that the selector is valid, but UiExplorer immediately tries to shorten it to
…div>div>di*’ tag=‘TABLE’ />, which of course is invalid.
Data Extraction makes workarounds particularly tricky, since it is nearly impossible to select the table element that the extraction requires. However, you can preserve the full string by running the Data Extraction wizard and copying the selector from the Target section of the Properties panel; before running anything, this selector should be complete. In the automation, I then check after each data scrape (with a callout in a Retry Scope) whether the resulting table is empty, and if it is, I rerun the extraction but pass in what I know is the full selector instead of the one generated by the wizard, which somehow corrupts over time/sessions of use.
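This workaround is a UiPath workflow, not code, but its control flow can be sketched in Python. Everything here is a stand-in: `scrape()` plays the role of the Extract Data activity, and both selector strings are hypothetical placeholders:

```python
# Sketch (in Python, not UiPath) of the retry-with-known-good-selector
# pattern described above. scrape() stands in for the Extract Data
# activity; both selectors below are hypothetical placeholders.
FULL_SELECTOR = "div>div>div>table"   # copied from the wizard's Target
WIZARD_SELECTOR = "div>div>di*"       # truncated/corrupted by UiPath

def scrape(selector):
    """Pretend scrape: the corrupted selector yields an empty table."""
    if selector == FULL_SELECTOR:
        return [("file1.pdf", "https://example.com/file1.pdf")]
    return []  # invalid selector -> empty table, but no error raised

def scrape_with_fallback(max_retries=3):
    for _ in range(max_retries):
        table = scrape(WIZARD_SELECTOR)
        if table:            # Retry Scope condition: table is not empty
            return table
    # All retries produced an empty table: rerun with the full selector.
    return scrape(FULL_SELECTOR)

result = scrape_with_fallback()
```

The Retry Scope’s condition corresponds to the `if table:` check: keep retrying while the resulting table is empty, and fall back to the known-good selector once retries are exhausted.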
Additionally, make sure that you have the appropriate extensions installed, including Java and Silverlight, and that your .NET Framework is at least version 4.
Setting the wait-for-element option to ‘complete’ should take care of this, but I can verify that the problem persists in both Firefox and Chrome regardless of which option is selected. That said, you are right that in this circumstance it is probably best to make sure it is set to ‘complete’.
I met the same problem. After adding an ‘Open Browser’ activity, I used data scraping to get the data from the webpage and then wrote the data into an Excel file. However, the newly generated Excel file was blank, though the robot threw no error. When I opened the webpage manually and then ran the data scraping, it worked fine. Not sure why… Does anybody know? Thanks a lot!
@yueliwh Did you get a solution for this? I am having the same issue.
I was facing the same issue. I resolved it by updating the selectors.
Step 1: Look at the selector of the ExtractData activity when you place it inside the sequence.
Step 2: Compare it with the selector of the standalone ExtractData activity.
When you place ExtractData inside a sequence, UiPath alters the selector.
For example, UiPath adds an id to the selector when it is placed inside a sequence.
Solution: Remove the extra selector attribute UiPath has added. (Screenshot attached.)
Also, have a look at the Attach Browser selector. There is no need to have two Attach Browser containers; copy-paste the ExtractData activity into the existing Attach Browser container.
Hello, folks. I was having a similar problem and solved it by embedding the Data Extraction activity within a Retry Scope loop (see this topic for the credits, thanks to tjsauter). In my case the selector provided by the Data Extraction wizard was fine, i.e. with no ‘*’ wildcards, but for some reason the extraction didn’t work properly when running the automation (sometimes even in debugging mode!); after retrying the extraction a couple of times, it worked.
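In Retry Scope terms, the pattern is simply: rerun the extraction until the resulting table has rows or the retry count is exhausted. A rough Python analogue (not UiPath; `scrape_page` is a deterministic stand-in that fails twice and then succeeds, mimicking the flaky behavior described in this thread):

```python
# Stand-in for the Extract Data activity: empty on the first two
# attempts, then a populated table, as reported in the thread.
attempts = iter([[], [], [["row1"], ["row2"]]])

def scrape_page():
    return next(attempts)

def retry_scope(action, max_retries=3):
    """Rough analogue of UiPath's Retry Scope: repeat the action until
    the condition (result has rows) holds or retries run out."""
    table = []
    for _ in range(max_retries):
        table = action()
        if len(table) > 0:   # the 'Condition': the table has rows
            break
    return table

rows = retry_scope(scrape_page)
print(len(rows))  # 2: the third attempt succeeded
```

In the actual workflow, the Retry Scope’s Condition part would hold the equivalent emptiness check on the extracted DataTable (e.g. its row count being greater than zero).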
Great that you have fixed this! When doing it, what did you put in the ‘Condition’ part of the Retry Scope?