TestAutomasi Blog – https://testautomasi.com/blog – A blog on test automation case studies & automation tools

Case Study: How we migrated Java Selenium tests to Python Robot Framework
(https://testautomasi.com/blog/2023/11/03/case-study-how-we-migrated-java-selenium-tests-to-python-robot-framework/, Fri, 03 Nov 2023)

Background: We recently worked on a project to migrate around 810+ test cases from a Java + TestNG + Maven + Jenkins stack to a Python + Robot Framework + GitLab stack.

Stakeholders wanted to switch to Python and Robot Framework for the following reasons:

  • Robot Framework is keyword-driven, so even manual QAs with no coding background can write tests
  • Since Python is the development language, automation code can sit alongside development code, making pipeline integration easier
  • Because Robot already provides keyword wrappers, writing new tests takes less time and less code once you are proficient
  • Robot supports Gherkin and plain-English syntax, so with auto-suggest enabled QAs can write tests without much help from others
  • Developers can write backend tests themselves

Process:

When I looked into the Java code, I could see we were testing many things the wrong way. After analyzing the whole codebase, I found the following areas that needed improvement:

1. Low API/Backend Coverage (solution: dedicated API coverage): In Java, all the coverage was on the GUI. Even simple calculation changes, such as pricing calculations for a user's product, were checked via frontend assertions, and if the frontend did not use an API (useful for async actions or cron calls), those APIs were never tested.

This is the first thing I addressed. In the first 3 months, I focused on rigorous API coverage: testing all the endpoints, all the payloads, and all the flows possible through the API. We wrote 180+ tests, and since API results were faster and more stable, a lot of the GUI tests automatically became irrelevant.

Testing APIs is pretty simple in Robot Framework: you can assert an API call in a single line if the response does not have a big schema, since the RequestsLibrary keywords have a built-in status-code assertion:

    ${response}    Post On Session    product_session    ${end_point}    headers=${headers}    json=${payload}

2. Duplicate Code (solution: enums, optional arguments, default arguments): One of the biggest issues I saw was heavy code duplication. Whenever the user journey required going from section "A" to section "D", we repeated the entire flow step by step.

Rather than repeating the same code again and again, modelling the steps as enums lets a single generic method reach wherever needed, saving tens of lines of code.

Sample -> ProceedTill(fromStep=A, toStep=D)

This one method covers all combinations such as A-B, B-D, B-C, A-C, and C-D, saving hundreds of lines across multiple test cases.
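The ProceedTill idea can be sketched in Python with an enum. This is a minimal sketch: the step names and the "navigate" action are hypothetical placeholders, not the project's real code.

```python
from enum import IntEnum

class Step(IntEnum):
    """Hypothetical user-journey steps; names are placeholders."""
    A = 1
    B = 2
    C = 3
    D = 4

def proceed_till(from_step: Step, to_step: Step) -> list:
    """Walk the journey from from_step to to_step, returning the actions
    performed. In a real suite each action would drive the UI."""
    if to_step <= from_step:
        raise ValueError("to_step must come after from_step")
    actions = []
    for step in range(from_step + 1, to_step + 1):
        # a real implementation would click/fill its way to the next section
        actions.append(f"navigate to {Step(step).name}")
    return actions
```

Because the steps are ordered enum members, one loop covers every from/to pair (A-B, B-D, A-D, ...) instead of one hand-written flow per pair.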

3. No Data-Driven Testing (solution: Test Template): Another big issue was that the Java code had no data-driven testing, so we ended up creating multiple tests with hundreds of lines, repeating 90% of the code just to vary 10% of the flow.

For example, the BuilderAI Studio product has six kinds of build card types (code words: store, pro, pp, tm, sp, fm) and supports 20 different currencies for payment. A test matrix for those flows produces around 120 cases.

That means creating a lot of duplicate code in Java, with a good chance of missing some flows too. With the "Test Template" feature in Robot, data-driven testing is real fun.

You can create a single keyword with two arguments, ${card_type} and ${currency_code}, and then pass different values to create each test case in a single line, reusing all the code except the two branches where you say:

    if ${card_type} == "ss": perform the ss-related action, and so on

    if ${currency_code} == ${currency_code.INR}: select INR as the currency

That's it: you end up creating all 120 tests from the same code, one line per test:

Test Template    Verify Build Card Creation

Tests:

    Verify SS Card Creation INR    ${card_types.ss}    ${currency_code.INR}
    Verify SP Card Creation INR    ${card_types.sp}    ${currency_code.INR}
    Verify SS Card Creation USD    ${card_types.ss}    ${currency_code.USD}
    Verify SP Card Creation USD    ${card_types.sp}    ${currency_code.USD}

4. No Component Testing (solution: deep links, atomic tests): Looking at the code, it was clear that test-pyramid concepts were missing. If an application has 10+ pages, where page 10 is the last and page 1 the first, we were walking through pages 1-10 in sequence just to verify some components of the 10th page. This caused a lot of flakiness and consumed a lot of execution time and code.

To solve this, I introduced component testing: use API filters to fetch data from a backend run, then use that data on the frontend combined with deep links.

For example, if the URL of the 10th page looks like this:

https://app.com/10/{uniqueID}

From the backend run we had tons of unique IDs in the DB. I used API filters to fetch the latest data (created in the last 2 hours) and a simple format function to substitute it into the deep-link URL. After login (done through cookie injection as well), we land on page 10 directly, without having to visit the other 9 pages, saving tons of code and execution time. Brilliant, right? That's the power of deep links and component testing combined with API filters.

sample code->

    @keyword
    def get_ongoing_unique_id(self, username):
        unique_id = "no id found"
        payload = {"email": username, "password": os.environ.get("app.password")}
        response = requests.post(os.environ.get("app.api.loginapi"), data=payload)
        header = {"token": response.json()["token"], "content-type": "application/json"}
        response = requests.get(os.environ.get("app.api.ongoing"), headers=header)
        assert response.status_code == 200
        json_array = response.json()["unique_ids"]
        for item in json_array:
            if item["card_type"] == "ss":
                unique_id = item["unique_id"]
                break
        return unique_id

This approach made sure our tests were atomic and we were testing what we wanted to test rather than unnecessary flows.
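The deep-link assembly itself is just string formatting. A minimal sketch, assuming the hypothetical page-10 URL pattern shown earlier:

```python
def build_deep_link(url_pattern: str, unique_id: str) -> str:
    """Substitute a backend-fetched unique id into a page's deep-link pattern."""
    return url_pattern.format(uniqueID=unique_id)

# combining a fetched id with the (hypothetical) page-10 URL pattern
link = build_deep_link("https://app.com/10/{uniqueID}", "abc-123")
```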

5. Bad Code Structure & Readability (solution: atomic tests with Gherkin support): The Java code followed test scripts using page objects and page actions, without a proper atomic test flow or user-journey verification.

A code sample from the Java suite that downloads a PDF file from the card menu:

fromPageFactory().login()
fromPageFactory().goToHome()
fromPageFactory().clickGoButton()
fromPageFactory().clickBuildNowBtn()
fromPageFactory().clickNext()
fromPageFactory().typeCardDetails()
fromPageFactory().clickOnMenuOption()
fromPageFactory().clickOnPdfDownload()

while similar code written in Robot with built-in Gherkin support looks like this:

Author: Chandan | Scenario: Verify that PDF download works correctly on the build card page
    [Tags]    component    high    pdf    buildcard    regression
    Given User is on buildcardpage
    When User clicks on menu option    ${build_card_menu_options.DownloadPDF}
    Then PDF should be generated successfully

Much clearer and more readable, right?

6. No Metrics & Analysis of Previous Results (solution: Django dashboard, GitLab pipeline reports, Report Portal, Grafana dashboard, real-time alerts, Slack hooks): Making decisions based on a test run is the most important part. Without easy debugging, clear reports, and data-driven decisions, we could not tell which tests to concentrate on, and we spent days fixing or analyzing issues.

We started dumping our results in a database to understand behavior. This also helped us migrate the flakiest Java tests first, so we did not disturb or miss automation sign-off while development on the Robot/Python project was in progress.

Clear reports and clear errors helped us focus on failures and fix and report them quickly, compared to Java, where analysis took a lot of time since we depended only on the current day's HTML result files.

A combined view of multiple products also helped us follow up with the concerned team members, which was a difficult task in TestNG (merging multiple HTML files over email).

A sample of some of the reporting changes done by us is attached as an image.

The results of all this:

  • We executed 301k test runs compared to 71k in Java (a more than 4x increase)
  • We removed 800+ Java cases and moved them to Robot (60 pending ones were obsolete)
  • We migrated 4 years of Java work to Robot in under 6 months, which is no joke, right :D
  • We added 1800+ new tests in Robot, not just moving the Java ones but also adding missing tests with the help of the data-driven approach
  • We raised around 153 issues compared to 65 in Java
  • Flakiness in working tests was reduced from 15% to 4.5% per product (thanks to atomic tests and component tests)
  • Test execution time was reduced from 3 hours to under 30 minutes per product (it can be reduced further if platform performance improves; we just need to increase the thread count :D)

How to run Selenium 3 tests on Zalenium Docker containers
(https://testautomasi.com/blog/2023/03/31/how-to-run-selenium-3-tests-on-zalenium-docker-containers/, Fri, 31 Mar 2023)

By using Zalenium Docker containers to run browser drivers and a Selenium Grid, you can run your Selenium tests without any local dependency such as Chrome, ChromeDriver, or WebDriverManager.

With Zalenium you can watch tests execute live through the built-in VNC viewer, view test results in a dashboard with video and ChromeDriver logs, and control Chrome instances easily through Docker containers. Let's get started. Since Zalenium depends on Docker, first install Docker and pull the Zalenium images on your machine by running the commands below:

# update Linux system
sudo apt-get update

# install docker
sudo apt install docker.io

# check docker version
sudo docker -v

# pull the selenium and zalenium docker images
sudo docker pull elgalu/selenium
sudo docker pull dosel/zalenium
sudo docker images

# start zalenium; here we start 2 containers with 2 chrome instances each to run 4 tests in parallel

sudo docker run --rm -dti --name zalenium -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/videos:/home/seluser/videos --privileged dosel/zalenium start --desiredContainers 2 --maxTestSessions 2

# in case something goes wrong, you can stop all containers with the command below
sudo docker stop $(sudo docker ps -a -q)

# see running containers info
sudo docker ps

# see logs of a particular container
sudo docker logs containerid

# see performance stats of containers
sudo docker stats

Check the Zalenium live tests panel (make sure port 4444 is accessible):

http://yourserverip:4444/grid/admin/live

Check the Zalenium dashboard:

http://yourip:4444/dashboard/#

Then run your Selenium tests by pointing them at the remote WebDriver URL of the Zalenium hub:


package simplilearn.appiummarch2023;

import static org.testng.Assert.assertEquals;

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.BrowserType;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.Test;

public class ZaleniumTests {

	@Test
	public void test1() {

		WebDriver driver = null;

		DesiredCapabilities caps = new DesiredCapabilities();
		caps.setCapability(CapabilityType.BROWSER_NAME, BrowserType.CHROME);

		caps.setCapability("zal:name", "testchandan");
		caps.setCapability("zal:tz", "Europe/Berlin");
		caps.setCapability("zal:recordVideo", "true");
		caps.setCapability("zal:screenResolution", "1920x1058");
		ChromeOptions options = new ChromeOptions();
		options.addArguments("disable-infobars"); // disabling infobars
		options.addArguments("--disable-extensions"); // disabling extensions
		options.addArguments("--disable-gpu"); // applicable to windows os only
		options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
		options.addArguments("--no-sandbox"); // Bypass OS security model
		options.addArguments("--headless"); // run Chrome without a visible UI
		caps.setCapability(ChromeOptions.CAPABILITY, options);

		try {
			driver = new RemoteWebDriver(new URL("http://yourip:4444/wd/hub"), caps);

		} catch (Exception e) {
			e.printStackTrace();
		}

		driver.get("https://testautomasi.com");
		try {
			Thread.sleep(30000); // demo-only pause so the run stays visible in the live panel
		} catch (InterruptedException e) {
			e.printStackTrace();
		}
		assertEquals(driver.getTitle(), "Home - Welcome to automasi solutions private limited");

		driver.quit();

	}

}

Live Screenshots:



Have any feedback? Please don’t hesitate to leave it in the comments section.

Case Study: How I Reduced Appium Test Execution Time By More Than 50%
(https://testautomasi.com/blog/2020/08/14/case-study-how-i-reduced-appium-test-execution-time-by-more-than-50/, Fri, 14 Aug 2020)

Recently, I spent time improving test execution time for our Kredivo mobile application's automated tests. Through refactoring and a few targeted strategies, I was able to decrease our test suite's execution time by more than 50%.

Before making improvements:

Total test count: 287. Total execution time: 240.532 minutes.

After making improvements:

Total test count: 228. Total execution time: 113.339 minutes.

You may ask why the test count decreased. You will get the answer by the end of this article.

Before I discuss the improvements, let me describe our setup:

Platform: iOS and Android (using Appium for both)

Devices: We use BrowserStack's cloud device service to run tests on real devices; a random device is allocated automatically during execution based on device parameters. We currently have a 5-device plan, so we run 5 tests in parallel.

Programming language: Java, with shell scripting for clean-up activities.

The improvements I made to achieve faster test execution are listed below:

1. Minimize XPath Usage: Search for elements by ID (Android) or accessibility ID (iOS). This seems like a small issue, as the difference is a few milliseconds to a few seconds per lookup, but when you are dealing with more than 1000 elements across all your tests it becomes vital, especially on iOS, where accessibility ID is much faster than XPath. No excuses here: work with your devs to add IDs and accessibility IDs to the application. I asked my iOS dev to add accessibility IDs even though they were not present earlier.

2. Optimize Retry Failures: This was the second biggest culprit. We were using a retry listener where failed tests were retried once before being marked as failures, to absorb flakiness (we captured intermediate failures in the report too). The problem with that approach: when there is a genuine failure, we lose time, because retries are not needed for genuine failures.

To optimize this, I analyzed test execution patterns for the last 3 months (we were dumping our test results in a DB) and found which tests were flakier than others. I then customized the retry listener: the default retry count is 0, but for the tests identified as flaky it is 1. The list of flaky tests is built from the DB at run time with a dynamic SQL query over the last 3 months of data, so I don't have to repeat this analysis manually.


3. Say a Hard No to Thread.sleep: This was the third biggest culprit. My team members had used a lot of sleep waits to achieve test stability. Tests do become stable that way, but at a big cost in execution time: each sleep lost us 5 to 30 seconds depending on the test. To fix this, I created a custom await-style wait that polls the DB or external system continuously, covering checks that explicit waits can't handle.

 sample-
 await().atMost(5, SECONDS).until(statusIsUpdatedInDB());
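A comparable await-style poller can be sketched in plain Python (the sample above uses Java's Awaitility library; this sketch is mine, and the condition callback stands in for whatever DB/external check the test needs):

```python
import time

def await_until(condition, timeout=5.0, poll_interval=0.2):
    """Poll `condition` until it returns True or `timeout` seconds elapse.
    Unlike a fixed sleep, this returns as soon as the condition holds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

Usage mirrors the Java sample: `await_until(lambda: status_is_updated_in_db(), timeout=5)`.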

4. Use Multiple Types of Explicit Wait: For most Appium element tasks we used explicit waits, but with only a single explicit wait, whenever an element is absent or a test fails with an element-not-found error, it consumes the maximum time declared. To fix this, I created multiple explicit wait tiers and refactored tests to use them as appropriate:

Long wait (30 seconds): Used only while waiting for the first element on a new page, since page loads take time.

Short wait (10 seconds): Used while waiting for elements that depend on an API call after the page has loaded; any API taking more than 10 seconds was reported as a bug.

Minimal wait (2 seconds): Used for elements that depend on app logic rather than an API (something hidden that appears after a user action). Also useful for optional elements (ones that may or may not appear).

No wait (0 seconds): Used for elements that load as soon as the page does, such as buttons and labels. The first element on a page uses the long wait; after that, all elements on that page use no wait unless an app or API step is involved.
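The wait tiers above can be captured as named constants so tests don't hard-code magic numbers. A sketch (the tier values are the ones described above; the helper name is mine):

```python
from enum import Enum

class Wait(Enum):
    """Explicit-wait tiers, in seconds, matching the tiers described above."""
    LONG = 30     # first element on a freshly loaded page
    SHORT = 10    # elements that depend on an API call after page load
    MINIMAL = 2   # app-logic toggles and optional elements
    NONE = 0      # elements rendered together with the page

def timeout_for(tier: Wait) -> int:
    """Return the timeout in seconds for a given wait tier."""
    return tier.value
```

A Selenium/Appium test would then build its waits as, for example, `WebDriverWait(driver, timeout_for(Wait.SHORT))`, keeping the tier policy in one place.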

5. Focus on Test Coverage Rather Than Test Count: This is where execution time improved the most. There were many duplicate test cases in our suite. For example, login was part of multiple tests, yet we still ran a separate login test. Why test something again and again? If login fails, you will know from the other login-dependent tests anyway.

Similarly, we had tests for social media pages (YouTube, Twitter, Facebook, and Instagram) as 4 individual tests, but these can be covered in a single test: the checks are static, and since we are already in the social media section, we can verify one page, use the device back button, and move to the next until all the social media checks are done.

Due to this, the test count dropped, but we saved a lot of time on Appium driver setup, app installation, app invocation, and duplicate test steps. It was completely worth it.

This activity saved the most time, around 45-50 minutes, which is why you see a reduced test count in the numbers at the start of this article.

6. Every Command to the Appium Server Is Costly, So Think Before Using Each One: One of the prime reasons Appium is slower than XCUITest and Android Espresso is its client-server approach. Code reusability is there, but each Appium command costs time, so be careful with every command and prefer other routes where possible. Use API/DB calls to verify things that were already verified through the mobile UI in other tests; your focus should be to test each UI component only once, and after that rely on API/DB calls.


7. Use In-Memory Caching Wherever Possible: This relates to point #6. Since each Appium command has a time cost, on pages with many elements, especially TextView components where you need to extract and verify text (our payment tests verify multiple prices, tenures, and item lists after payment), make a single find-elements call to fetch all the TextView components at once and then verify their text by index, instead of one find-element call per component. This is only worthwhile on screens with many elements; with few elements the difference is negligible.
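To make the round-trip saving concrete, here is an illustrative Python sketch with a counting stub in place of the real Appium driver. The stub's method names are invented for the illustration (the real client exposes `find_element`/`find_elements`); only the call-count difference matters:

```python
class FakeDriver:
    """Illustrative stub standing in for an Appium driver; it only counts
    client-server round-trips, it does not talk to any real server."""
    def __init__(self, texts):
        self._texts = texts
        self.round_trips = 0

    def find_element_text(self, index):
        self.round_trips += 1        # one round-trip per element
        return self._texts[index]

    def find_elements_texts(self):
        self.round_trips += 1        # a single round-trip for the whole list
        return list(self._texts)

def verify_naive(driver, expected):
    """One find-element call per TextView: N round-trips."""
    return all(driver.find_element_text(i) == t for i, t in enumerate(expected))

def verify_cached(driver, expected):
    """One find-elements call, then verify locally by index: 1 round-trip."""
    texts = driver.find_elements_texts()
    return all(texts[i] == t for i, t in enumerate(expected))
```

With 3 expected texts, the naive version costs 3 round-trips and the cached version costs 1; on a screen with dozens of TextViews the gap grows accordingly.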

8. Use Deep Links: Deep links are generally used in marketing campaigns: clicking the link takes the user straight to a particular page if the app is installed on the device. In test automation we can use the same mechanism to skip the navigation steps to a page. For example, after login I go directly to the deals section of the app using the "appname://deals" deep link to run deals-related tests, without wasting time clicking menu -> deals and so on.

9. Other Small Improvements: Apart from the above, I made several minor improvements: reducing the timeout value on BrowserStack, adding the waitForQuiescence=false capability, using app switching when element rendering takes a long time, using setValue instead of sendKeys wherever possible, and using wait intervals in milliseconds rather than seconds between element clicks when needed (sometimes execution was too fast, causing race conditions).

These did not have a big impact individually, but together a difference of 3-5 minutes was visible.

After applying all these improvements, we saved more than 50% of our test execution time, and used the recovered time to run two test suite iterations during the day, which was not possible earlier when all tests ran only in nightly builds due to the high execution time.

Did you like these improvements? Leave your feedback in the comments.

Creating an SMS OTP Reader to Run SMS-Based Tests on Cloud Devices
(https://testautomasi.com/blog/2020/06/14/creating-a-sms-otp-reader-to-run-sms-based-tests-on-cloud-devices/, Sun, 14 Jun 2020)

Problem Statement

Our Kredivo mobile application integrates with multiple eCommerce providers such as Tokopedia and Shopee. Connecting app to app with these providers is quite difficult, as the merchants require SMS OTP verification before you can perform any action in their app.

In a local environment, we can use a real device with a real SIM card to receive SMS and read it from the notification panel, but when we run tests on cloud devices using services such as BrowserStack and Sauce Labs, we face two problems:

1. These devices don't have SIM cards, so we can't receive OTP SMS.
2. The devices are not dedicated: every run gets a random device from the pool, so we can't set everything up once on a device and reuse that setup later.

Solution-

So how can we solve this? At a high level it was clear we couldn't ignore these tests: they are among the most critical cases in our app, and we can't test them manually in our local environment for every new change in the merchant code.

Requirements for achieving this-

1. A cloud-based virtual SIM card service that can provide incoming SMS data via API (preferred) or as HTML/text (secondary option).

2. The service should support Indonesian numbers, as we need to log in to the merchant apps with an Indonesian number.

3. It should provide at least 20 incoming SMS, enough to run our automated tests at least twice per regression run.

4. It should be within our budget (not more than 200k IDR, roughly $15, or less).


How did I achieve this?

I started looking for an API/service that could give me a virtual Indonesian phone number with incoming SMS data in JSON/HTML/text format, fetchable through an API/web-service call, since fetching data from a UI would be time-consuming and might require IP whitelisting from the SMS providers.

I looked at a number of APIs such as Twilio and Plivo; I had used them at Drishtisoft to simulate outgoing and incoming calls in my WebRTC automation suite.

But the problems were:

(i) They did not offer incoming SMS functionality on virtual phone numbers.
(ii) They did not offer Indonesian numbers.
(iii) If they satisfied the above two conditions, the price was really high (for example, Telkomsel).

Finally, after digging through various darker corners of the internet and trying out more than 10 services (many of them fake), I found the desired service: GoSMSGateway.

Their API used form params, with PHP as the server-side language. My task was:

(i) Log in to the service using my API credentials.
(ii) Extract the first unread SMS from the inbox list.
(iii) Archive all the SMS in the inbox so the state is clean when the next test tries to read from it.

I logged in using their API, got the content as an HTML response, parsed it with jsoup, and then extracted the OTP using a regex.

Code snippets are present below-

Response response1 = given().spec(gosmsspec).when().formParam("username", "youruser")
				.formParam("password", "yourpassword").post("/loginapi");
inboxHtml = response1.getBody().asString();
Document doc = Jsoup.parse(inboxHtml);
otp = doc.getElementsByClass("msg from").get(0).toString().replaceAll("[^0-9]", "").substring(0, 4);
// [^0-9] removes all non-numeric characters, and substring(0, 4) takes the first 4 digits
// (the OTP is the first numeric word in the SMS, but the text may sometimes contain a date too).

Using the session ID obtained from response1 and .delete("api/sms/inbox/archieve"), I deleted the inbox SMS list.
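A word-boundary regex is a more robust way to pull the OTP than stripping non-digits and taking a substring, since it won't splice digits out of a date or amount. A Python sketch, assuming a 4-digit OTP:

```python
import re

def extract_otp(sms_text: str, length: int = 4):
    """Return the first standalone run of `length` digits in the SMS body,
    or None if nothing OTP-like is found. Word boundaries prevent a longer
    number (a date, an amount) from being truncated into a false match."""
    match = re.search(r"\b\d{%d}\b" % length, sms_text)
    return match.group(0) if match else None
```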

And that's how our SMS-OTP-based tests run daily, making sure nothing is broken on the merchant end whenever the merchant code changes.

Things to remember while doing this:

1. Do not run merchant login tests too many times, or your number may be blocked (run them only once or twice a day).

2. If your tests have a retry mechanism, customize it for the OTP-based tests.

3. Receiving the OTP can sometimes take a while, so use retry with an await-style wait to make sure test performance is not impacted.

Have questions? Leave them in the comments.

Creating a Live Email Alert Bot for Foreign Exchange Rates
(https://testautomasi.com/blog/2020/05/02/automation-in-real-life-creating-a-live-email-alert-bot-for-foreign-exchange-rates/, Sat, 02 May 2020)

Real-Life Problem Statement

I have to send money to my parents in India every month, since I live and work in Indonesia. I receive my salary in IDR and then send it in INR through exchange services in Indonesia such as Topremit.

Before COVID, currency fluctuation was not that volatile, so I could send money whenever I wanted. But with COVID, the entire market crashed and IDR-INR rates swung wildly, around 20-30% above the average. I kept checking the exchange rate daily on Google, but when the rate dipped I sometimes failed to notice and missed the opportunity, and by the time I looked again the rate was back up.

Solution

Then one day I thought: let's automate this. Get a currency-rate alert by email when a particular condition is met, so I can send money to India whenever an alert arrives.

Requirements for achieving this-

1. A currency converter API that can report rates between INR and IDR.
2. A server or online/cloud service (like Jenkins) that can run our cron every 30 minutes or so and send an email to a particular address.
3. For easier maintenance, no git repo or code-management overhead; everything should happen through the service in step 2.

How did I achieve this?

I started looking for an API that could give me IDR-INR results free of cost (at least 100 free API calls, based on my requirement).

I looked at a number of APIs, but all of them were either paid or did not support INR as a base currency; most returned results against base currencies like USD or GBP.

Finally, I found one, https://free.currconv.com/api/v7/, which gave me INR-to-IDR results. All I had to do was generate an access token and hit the API with the to and from currencies as params.

Next, I created a Jenkins job with the Email Extension plugin to send me emails, and in a build step I used a shell command with a simple curl call. To include only the price-related part of the API response in the email, I bracketed it with start and end markers in the Jenkins build log using "${BUILD_LOG_EXCERPT, start="\b(start-here)\b", end="\b(end-here)\b"}" and added that code to the Email Extension body content.

# curl command
response=$(curl -s -H "Accept: application/json" "https://free.currconv.com/api/v7/convert?q=INR_IDR&compact=ultra&apiKey=yourkey")
echo "${response}"
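The alerting condition itself is a one-liner once the response is parsed. A Python sketch, assuming the compact=ultra response shape is {"INR_IDR": &lt;rate&gt;} and using the &lt;195 threshold discussed in the Q&amp;A below:

```python
import json

def should_alert(api_response: str, threshold: float = 195.0) -> bool:
    """Parse the converter's compact JSON payload (assumed shape:
    '{"INR_IDR": <rate>}') and alert when the rate drops below the
    threshold, i.e. when 1 INR costs fewer IDR than usual."""
    rate = json.loads(api_response)["INR_IDR"]
    return rate < threshold
```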

And boom, I started receiving pricing alerts by email and sent money to India when the price was right. Keeping the price history in emails also let me spot the trend of which time of day the price tends to be lowest.

Build trigger handling: in the cron expression of the build-trigger section, I set the job to run every 30 minutes.

That's how you can create a simple currency exchange alert bot for your money-transfer problem and save 5-10% every month.

User question: Why store the data in emails and not in an Excel file or DB tables?

Answer: The problem with Excel and a DB would have been viewing the data on an iPad or phone. With email I see alerts whenever I want, and the data accumulates for analysis too. Once I have enough data, I can easily extract it from the emails to CSV using UiPath or a Java/Python mail program.

User question: Why not restrict the emails with some condition, so the server doesn't send one every 30 minutes?

Answer: 48 emails a day is not much load on the server, and they all go to a brand-new email account I created just for this purpose. Filters and conditions will be applied once I have enough data to visualize. After seeing the pattern for a couple of months, I could say: send alerts when the rate is <195. There is no point applying filters now, as the rate may never reach that level; but from the data I can find the lowest rates and how often they occur, say between the 20th and 30th, and then choose a reasonable threshold for the email alerts.


