We are trying to build an RPA that extracts over 60 PDFs from a secure website; the issue is that the PDFs reside on different web pages.
Is there a way to extract these PDFs easily using an activity, or will I need to have the robot manually download the files one by one?
We are using Studio version 2020.
If it's not possible to do manually, then of course it may not be possible in the automation either.
But you can write some common code that fetches all the PDFs one by one.
Thanks so much for your quick response, I truly appreciate it!
So in our automation, we COULD have the bot do it manually, but it's going to be a long line of activities and I'm not sure if that's good business practice.
Essentially the bot opens each web page, clicks an "Export" button, changes the dropdown to "Export as PDF", and clicks Export. This downloads the PDF, but we'd need to do this for all 60 PDFs.
Unfortunately the site is badly designed. The task was originally done manually, which is why it was selected for automation; it takes at least 7 hours to complete by hand.
I can understand the need for automation here.
So, as discussed, structure the automation so that you write the logic for one PDF and then run it in a loop for all the PDFs.
Hope that helps.
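To illustrate the "write for one, run for many" idea: in UiPath this would be a single-PDF workflow invoked inside a For Each, but the same shape can be sketched in Python. The page URLs and the `download_one` routine here are placeholders, not anything from the actual site:

```python
from typing import Callable, Iterable, List

def download_all(page_urls: Iterable[str],
                 download_one: Callable[[str], str]) -> List[str]:
    """Run the single-PDF routine once per page and collect saved file paths.

    `download_one` is the "write it for one PDF" part: it opens a page,
    clicks Export -> PDF, and returns the path of the downloaded file.
    In UiPath this maps to an invoked workflow inside a For Each activity.
    """
    saved = []
    for url in page_urls:
        try:
            saved.append(download_one(url))
        except Exception as exc:
            # Log and keep going so one broken page doesn't stop all 60.
            print(f"Failed for {url}: {exc}")
    return saved
```

The key design point is that the loop body stays identical for every document, so adding or removing pages only changes the input list, not the workflow itself.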
So are you already downloading the files using bots and browser UI activities?
1) This is the normal way to do it: open the browser, navigate to the URL, perform the filtering clicks and any other steps, and finally click the download button.
2) Analyze how the file download happens on the browser side. Open the Chrome developer tools and check which network requests are sent. Try to find a pattern in the URL that returns the final PDF; you can then type that URL directly into the browser and check whether the download works. From there, adjust the query parameters in the URL to fetch the other files, without any complicated UI activities.
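If a URL pattern does turn up in the Network tab, approach 2 can be sketched like this. The endpoint and query parameters below are purely hypothetical stand-ins for whatever the site actually uses, and a real secure site would also need the auth cookies or headers copied from the logged-in session:

```python
import urllib.parse
import urllib.request

# Hypothetical export endpoint discovered in the DevTools Network tab.
BASE = "https://example.com/reports/export"

def export_url(doc_id: str) -> str:
    """Build the direct-download URL for one document, assuming the
    observed pattern is ...?id=<doc>&format=pdf (an assumption here)."""
    query = urllib.parse.urlencode({"id": doc_id, "format": "pdf"})
    return f"{BASE}?{query}"

def fetch_pdf(doc_id: str, dest_path: str) -> None:
    """Download one PDF directly, bypassing the UI entirely.
    On a secure site, add the session cookie from the browser, e.g.:
    req.add_header("Cookie", "session=<copied-from-devtools>")."""
    req = urllib.request.Request(export_url(doc_id))
    with urllib.request.urlopen(req) as resp, open(dest_path, "wb") as out:
        out.write(resp.read())
```

With a working pattern, the whole 60-file job collapses to a loop over document IDs, which is usually far more robust than 60 sequences of clicks.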
Thank you, I figured this might be my only way.