Here is what my bot does:
Read 10 URLs from Excel (Read Range) → For Each URL → open the website (Use Application/Browser) → Data Scraping → Write Range → Close Browser.
I set MaxNumberOfResults to 100. Each page has 10 results, so the bot needs to scrape 10 pages.
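Roughly, the per-URL paging I expect looks like the sketch below (plain Python with simulated data, not actual UiPath code; `scrape_url` and the constants are just placeholders mirroring the numbers above): the page that is already open gets scraped before Next is clicked, and paging stops once 100 results (10 pages) have been collected.

```python
# Sketch of the intended per-URL paging loop (hypothetical, simulated data).
MAX_RESULTS = 100       # MaxNumberOfResults
RESULTS_PER_PAGE = 10   # so 100 results = 10 pages per URL

def scrape_url(total_pages=15):
    collected = []
    page = 1                                   # page 1 is already open
    while len(collected) < MAX_RESULTS and page <= total_pages:
        # scrape the current page first ...
        collected += [f"p{page}-r{i}" for i in range(RESULTS_PER_PAGE)]
        # ... then move to the next one
        page += 1
    return collected

print(len(scrape_url()))   # 100 -> exactly 10 pages, starting with page 1
```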
However, there are 2 problems:
On the first URL, the bot didn’t scrape page 1. It opened the URL, then immediately clicked the Next button without scraping page 1.
After scraping the first URL, on the second URL MaxNumberOfResults doesn’t work; the bot scrapes data without any limit.
How can I solve those problems?
Thanks for reading.
My version is 2021.10.0-beta.5978
Did you check the preview or the output data that is written to Excel? Confirm that the scraping element pattern is the same on every page.
If you are paging with the Next button, the bot will keep clicking through all the pages, but only 100 records are saved in your scraped data; check that. If you want to stop immediately after 100 records or a certain number of pages, handle it by initializing and incrementing a Count variable, as in the sketch below.
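A minimal sketch of that counter idea (plain Python with dummy page data, not UiPath activities; `scrape_with_limit` is a hypothetical helper): stop paging as soon as the collected record count reaches the cap, instead of letting the bot click through every remaining page.

```python
MAX_RECORDS = 100

def scrape_with_limit(pages, max_records=MAX_RECORDS):
    """pages is any iterable of lists of rows (one list per result page)."""
    collected = []
    page_count = 0
    for rows in pages:
        page_count += 1
        collected.extend(rows)
        # Stop clicking Next as soon as enough records are collected.
        if len(collected) >= max_records:
            break
    return collected[:max_records], page_count

# Example: 15 pages of 10 dummy rows each -> stops after 10 pages / 100 rows.
rows, pages_visited = scrape_with_limit(
    [[f"row{p}-{i}" for i in range(10)] for p in range(15)]
)
print(len(rows), pages_visited)  # 100 10
```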
Like @rahulsharma said, the selector wasn’t valid for all pages. That’s why the bot seemed to scrape nonstop: in fact it didn’t get any data at all. The number of results stayed at 0, so the bot just kept clicking the Next button, which made it look like it was scraping endlessly.
However, there is still something that puzzles me. If I don’t clear the data table, then after the first URL the data table already holds 100 results, so on the second URL the Data Scraping activity shouldn’t need to do anything, since it has already reached MaxNumberOfResults (100).
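For what it’s worth, here is a plain-Python stand-in for what I mean by reusing one data table variable across URLs (`scrape` and the list are placeholders, not UiPath APIs): clearing the table at the start of each iteration is what keeps the row count, and any limit check based on it, correct for the next URL.

```python
def scrape(url):
    # Dummy extraction: pretend each URL yields 100 rows.
    return [f"{url}-row{i}" for i in range(100)]

reused_table = []          # stands in for one DataTable reused per URL

for url in ["url1", "url2"]:
    reused_table.clear()   # equivalent of a Clear Data Table step per iteration
    reused_table.extend(scrape(url))
    print(url, len(reused_table))   # 100 each time, instead of 100 then 200
```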
Anyway, I’ve solved the problem. Thank you guys for your help.
Regarding the size of the data table → you can increase the number of rows to be extracted in the ‘Extract Data Table’ activity to the desired number, or keep it at 0 for ‘all’ values to be extracted.
If you are using a data table and want the maximum number of rows extracted, set the limit value to -1 in Build Data Table and set it to 0 in Extract Data Table.
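If it helps, the 0-means-all behaviour described above can be pictured with this tiny Python sketch (`apply_max_results` is just an illustrative stand-in, not the activity’s real code):

```python
def apply_max_results(rows, max_results):
    # 0 means "extract everything"; any positive value caps the rows.
    return list(rows) if max_results == 0 else list(rows)[:max_results]

all_rows = [f"row{i}" for i in range(250)]
print(len(apply_max_results(all_rows, 0)))    # 250 -> everything
print(len(apply_max_results(all_rows, 100)))  # 100 -> capped
```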