TestAutomasi Blog - A Blog on Test Automation Case Studies & Automation Tools
https://testautomasi.com/blog

Case Study: How We Migrated Java Selenium Tests to Python Robot Framework
https://testautomasi.com/blog/2023/11/03/case-study-how-we-migrated-java-selenium-tests-to-python-robot-framework/
Fri, 03 Nov 2023

Background: We recently worked on a project to migrate around 810+ test cases from a Java + TestNG + Maven + Jenkins stack to a Python + Robot Framework + GitLab stack.

Stakeholders wanted to switch to Python and Robot Framework for the following reasons:

  • Robot Framework is keyword-driven, so even manual QAs with no coding background can write tests easily
  • With Python as the development language, automation code can sit alongside development code, making pipeline integration easier
  • Since Robot Framework already provides a keyword wrapper, writing new tests takes less time and less code once you are proficient
  • Robot Framework supports Gherkin and plain-English keywords, so with auto-suggest enabled QAs can write tests without much help from others
  • Developers can write backend tests themselves

Process:

When I looked into the Java code, I could see we were testing many things the wrong way. After analyzing the whole codebase, I found the following areas that needed improvement:

1. Low API/Backend Coverage (solution: API tests): In Java, all the coverage was on the GUI. Even simple calculation changes, such as calculating product pricing for a user, were checked via frontend assertions, and if the frontend did not use an API (as is common for async actions or cron calls), those APIs were never tested.

Tackling this was the first thing I did: in the first 3 months, I focused on regression API coverage, testing all the endpoints, all the payloads, and all the flows possible through the API. We wrote 180+ tests, and since API tests were faster and more stable, a lot of the GUI tests automatically became irrelevant.

Testing APIs is pretty simple in Robot Framework: if your assertion does not involve a big response schema, you can test an endpoint in a single line, since the session keywords have a built-in status code assertion:

    ${response}    Post On Session    product_session    ${end_point}    headers=${headers}    json=${payload}

2. Duplicate Code (solution: enums, optional arguments, default arguments): One of the biggest issues I saw was heavy code duplication. Whenever a user journey needed to go from section "A" to section "D", we were coding the entire flow step by step every time.

Rather than repeating the same code again and again, a generic method that uses enums for the different steps can reach wherever it needs in a single call, saving tens of lines of code.

Sample: ProceedTill(fromStep=A, toStep=D)

This one method can cover all combinations such as A-B, B-D, B-C, A-C, and C-D, saving hundreds of lines of code across multiple test cases.
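Here is a minimal Python sketch of that idea; the Step names and page actions are hypothetical placeholders, not the project's actual code:

from enum import IntEnum

class Step(IntEnum):
    A = 1
    B = 2
    C = 3
    D = 4

def proceed_till(from_step: Step, to_step: Step) -> None:
    # Each step maps to the page action that moves the user one step forward.
    actions = {
        Step.A: lambda: print("complete section A"),
        Step.B: lambda: print("complete section B"),
        Step.C: lambda: print("complete section C"),
    }
    # Run only the actions between the two steps, skipping everything else.
    for step in Step:
        if from_step <= step < to_step:
            actions[step]()

proceed_till(Step.B, Step.D)  # runs only sections B and C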

3. No Data-Driven Testing (solution: Test Template): Another big issue was that the Java code had no data-driven testing, so we ended up creating multiple tests with hundreds of lines, repeating 90% of the code just to write the 10% of the flow that differed.

For example, the BuilderAI Studio product has six kinds of build card types (coded as store, pro, pp, tm, sp, fm) and supports 20 different currencies for payment; a full test matrix for those flows would produce around 120 cases.

That means a lot of duplicate code, and chances are some flows would be missed too. But with the "Test Template" feature in Robot Framework, data-driven testing is real fun.

You can create a single keyword with two arguments, ${card_type} and ${currency_code}, then pass different values to create each test case in a single line, reusing all the code except the two conditional branches:

if ${card_type} == "ss":
    perform the ss-related actions, and so on

if ${currency_code} == ${currency_code.INR}:
    select INR as the currency

That's it: you end up creating all 120 tests with the same code, one line per test, like this:

*** Settings ***
Test Template    Verify Build Card Creation

*** Test Cases ***
Verify SS Card Creation With INR    ${card_types.ss}    ${currency_code.INR}
Verify SP Card Creation With INR    ${card_types.sp}    ${currency_code.INR}
Verify SS Card Creation With USD    ${card_types.ss}    ${currency_code.USD}
Verify SP Card Creation With USD    ${card_types.sp}    ${currency_code.USD}

4. No Component Testing (solution: deep links, atomic tests): Looking at the code, it was clear that test pyramid concepts were missing. If an application has 10+ pages, where page 10 is the last and page 1 the first, we were walking through pages 1-10 in sequence just to verify some components on page 10. This caused a lot of flakiness and consumed a lot of time and code on the execution side.

To solve this I introduced component testing: use API filters to fetch data created by backend runs, then use that data on the frontend in combination with deep links.

For example, if the URL of the 10th page looks like this:

https://app.com/10/{uniqueID}

Backend runs had already left tons of unique IDs in the DB. I used API filters to fetch the latest data (created within the last 2 hours) and a simple format function to substitute the ID into the deep-link URL. This way, after login (done through cookie injection as well), we land on page 10 directly, without having to walk through the other 9 pages, saving tons of lines of code and execution time. Brilliant, right? That's the power of deep links and component testing mixed with API filters.

Sample code:

import os

import requests
from robot.api.deco import keyword


class CardKeywords:  # illustrative library class name; the keyword lives in a Robot Framework library

    @keyword
    def get_ongoing_unique_id(self, username):
        unique_id = "no id found"
        # Log in through the API to obtain an auth token
        payload = {"email": username, "password": os.environ.get("app.password")}
        response = requests.post(os.environ.get("app.api.loginapi"), data=payload)
        header = {"token": response.json()["token"], "content-type": "application/json"}
        # Fetch ongoing records and pick the first "ss" card's unique ID
        response = requests.get(os.environ.get("app.api.ongoing"), headers=header)
        assert response.status_code == 200
        json_array = response.json()["unique_ids"]
        for item in json_array:
            if item["card_type"] == "ss":
                unique_id = item["unique_id"]
                break
        return unique_id
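To show how that ID feeds the deep link, here is a rough sketch (not the project's actual code) combining cookie-injection login with the formatted URL; the cookie name and URLs are assumptions:

from selenium import webdriver

def open_page_ten_directly(driver: webdriver.Chrome, unique_id: str, auth_token: str) -> None:
    # The domain must be open before a cookie can be attached to it.
    driver.get("https://app.com")
    # Cookie-injection login: skip the UI login form entirely (cookie name assumed).
    driver.add_cookie({"name": "token", "value": auth_token})
    # Format the unique ID into the deep link and land on page 10 directly.
    driver.get("https://app.com/10/{}".format(unique_id))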

This approach made sure our tests were atomic and we were testing what we wanted to test rather than unnecessary flows.

5. Bad Code Structure & Readability (solution: atomic tests with Gherkin support): The Java code followed a test-script style using page objects and page actions, without proper atomic test flows or user-journey-level verifications.

A code sample from the Java project to download a PDF file from the card menu:

fromPageFactory().login();

fromPageFactory().goToHome();

fromPageFactory().clickGoButton();

fromPageFactory().clickBuildNowBtn();

fromPageFactory().clickNext();

fromPageFactory().typeCardDetails();

fromPageFactory().clickOnMenuOption();

fromPageFactory().clickOnPdfDownload();

while similar code written in Robot Framework with built-in Gherkin support looks like this:

Verify that pdf download is working correctly on build card page
    [Documentation]    Author: Chandan
    [Tags]    component    high    pdf    buildcard    regression
    Given User is on buildcardpage
    When User clicks on menu option    ${build_card_menu_options.DownloadPDF}
    Then PDF should be generated successfully

Much clearer and more readable, right?

6. No Metrics & Analysis of Previous Results (solution: Django dashboard, GitLab pipeline reports, Report Portal, Grafana dashboard, real-time alerts, Slack hooks): Making decisions based on test runs is the most important part. Without easy debugging, clear reports, and data-driven decisions, we could not tell which tests to concentrate on, and we were spending days fixing or analyzing issues.

We started dumping our results into a database to understand test behavior. This also helped us migrate the flakiest Java tests first, so automation sign-off was not disturbed or missed while development on the Robot/Python project was in progress.
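As a flavor of what "dumping results" can look like, here is a minimal sketch using SQLite as a stand-in; the actual project used its own database, so the table and column names here are assumptions:

import sqlite3

def save_result(run_id: str, test_name: str, status: str, duration_s: float) -> None:
    # Persist one test result so later runs can be compared and flaky tests spotted.
    con = sqlite3.connect("results.db")
    con.execute(
        "CREATE TABLE IF NOT EXISTS results "
        "(run_id TEXT, test_name TEXT, status TEXT, duration_s REAL)"
    )
    con.execute(
        "INSERT INTO results VALUES (?, ?, ?, ?)",
        (run_id, test_name, status, duration_s),
    )
    con.commit()
    con.close()

save_result("run-42", "Verify SS Card Creation With INR", "FAIL", 12.7)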

Having clear reports and clear errors helped us focus on failures and fix and report them quickly, compared to Java, where analysis took a lot of time since we depended only on the current day's HTML result files.

Having a combined view across multiple products also helped us follow up with the concerned team members, which was a difficult task with TestNG (merging multiple HTML files over email).
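The Slack hooks and real-time alerts mentioned above can be as simple as posting to an incoming webhook; a minimal sketch, with a placeholder webhook URL:

import requests

def notify_slack(message: str) -> None:
    # Placeholder URL: real incoming-webhook URLs come from your Slack app settings.
    webhook_url = "https://hooks.slack.com/services/T000/B000/XXXX"
    requests.post(webhook_url, json={"text": message})

notify_slack("Regression run finished: see the dashboard for failure details")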

A sample of some of the reporting changes done by us is attached as an image.

The results of all this were:

  • We were able to execute 301k test runs, compared to 71k in Java (a more than 4x increase)
  • We removed 800+ Java cases and moved them to Robot Framework (the 60 pending ones were obsolete)
  • We migrated 4 years of Java work to Robot Framework in under 6 months; that's no joke, right :D
  • We added 1800+ new tests in Robot Framework, not just moving the Java ones but adding missing tests too with the help of the data-driven approach
  • We were able to raise around 153 issues, compared to 65 in Java
  • Flakiness in working tests was reduced from 15% to 4.5% per product (thanks to atomic tests and component tests)
  • Test execution time was reduced from 3 hours to under 30 minutes per product (it can be reduced further if platform performance improves; we just need to increase the thread count :D)

How to run Selenium 3 tests on Zalenium Docker containers
https://testautomasi.com/blog/2023/03/31/how-to-run-selenium-3-tests-on-zalenium-docker-containers/
Fri, 31 Mar 2023

By using Zalenium Docker containers to run browser drivers and a Selenium Grid, you can run your Selenium tests without any local dependency such as Chrome, ChromeDriver, or WebDriverManager.

When you use Zalenium, you can watch tests execute live through the built-in VNC viewer, view test results in the dashboard with video and ChromeDriver logs, and control Chrome instances easily through Docker containers. So let's get started: since Zalenium depends on Docker, first install Docker and pull the Zalenium images on your machine by running the commands below.

-update Linux system
sudo apt-get update

-install docker
sudo apt install docker.io

-check docker version
sudo docker -v

-pull docker selenium and zalenium images

sudo docker pull elgalu/selenium
sudo docker pull dosel/zalenium
sudo docker images

-start zalenium; here we start 2 containers with 2 test sessions each, to run 4 tests in parallel

sudo docker run --rm -dti --name zalenium -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/videos:/home/seluser/videos --privileged dosel/zalenium start --desiredContainers 2 --maxTestSessions 2

-in case something goes wrong, you can stop all containers with the below command

sudo docker stop $(sudo docker ps -a -q)

-see running containers info-

sudo docker ps

-see logs of the particular container

sudo docker logs containerid

-see performance stats of containers

sudo docker stats

-check the zalenium live tests panel; make sure port 4444 is accessible

http://yourserverip:4444/grid/admin/live

-check zalenium dashboard

http://yourip:4444/dashboard/#

-run selenium tests in your test code using the Remote WebDriver URL of the zalenium hub:


package simplilearn.appiummarch2023;

import static org.testng.Assert.assertEquals;

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.BrowserType;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.Test;

public class ZaleniumTests {

	@Test
	public void test1() {

		WebDriver driver = null;

		DesiredCapabilities caps = new DesiredCapabilities();
		caps.setCapability(CapabilityType.BROWSER_NAME, BrowserType.CHROME);

		caps.setCapability("zal:name", "testchandan");
		caps.setCapability("zal:tz", "Europe/Berlin");
		caps.setCapability("zal:recordVideo", "true");
		caps.setCapability("zal:screenResolution", "1920x1058");
		ChromeOptions options = new ChromeOptions();
		options.addArguments("disable-infobars"); // disabling infobars
		options.addArguments("--disable-extensions"); // disabling extensions
		options.addArguments("--disable-gpu"); // applicable to windows os only
		options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
		options.addArguments("--no-sandbox"); // Bypass OS security model
		options.addArguments("--headless"); // Bypass OS security model
		caps.setCapability(ChromeOptions.CAPABILITY, options);

		try {
			driver = new RemoteWebDriver(new URL("http://yourip:4444/wd/hub"), caps); // pass caps so the zal: capabilities apply

			driver.get("https://testautomasi.com");
			Thread.sleep(30000);
			assertEquals(driver.getTitle(), "Home - Welcome to automasi solutions private limited");

		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			if (driver != null) { // avoid a NullPointerException if the session never started
				driver.quit();
			}
		}

	}

}

Live Screenshots:



Have any feedback? Please don’t hesitate to leave it in the comments section.

Getting Started With Locust: A Python-Based Performance Testing Tool
https://testautomasi.com/blog/2021/06/13/getting-started-with-locust-a-python-based-performance-testing-tool/
Sun, 13 Jun 2021

Last year, in one of my consulting assignments, I was asked to performance-test an application using Locust; the client wanted to integrate load testing within their backend code, which was written in Python.

Initially, I suggested JMeter, but the client made a valid point: they wanted something that could easily be maintained by developers later on without much training.

I understood their point and started looking for a Python-based load testing tool, and within seconds I stumbled upon Locust. While usage of this tool is still quite low compared to other popular tools in this domain, such as JMeter, LoadRunner, and Gatling, in terms of features it is just as good.

Of course, being a newer open-source tool, it does have limitations, but the good part is that since it is a pure Python tool, whatever is missing can easily be achieved through custom code and third-party Python libraries.

So in this Locust series, we will discuss various ways to perform different types of performance testing, along with some common solutions to problems I faced during my assignment.

Let’s start with a quick HTTP request test using the below steps…

1. Installation

Installation is pretty easy; you just need to run one command. Please note that the latest Locust version requires Python 3.6 or above.

pip3 install locust

2. Writing a quick load test of the testautomasi home page in a Python file:

from locust import HttpUser, task, between

class QuickLoadTest(HttpUser):
    wait_time = between(1, 10)

    @task
    def test(self):
        self.client.get("/")

Explanation of the code: the class QuickLoadTest extends Locust's HttpUser class, which has all the methods related to HTTP request testing, as it is built on top of the python-requests module.

wait_time works like a uniform random timer in JMeter: it pauses each simulated user for a random interval, between 1 and 10 seconds in this case.

The @task decorator is like an HTTP request sampler in JMeter; under it you can write code to test one request or an application flow with custom logic. Here we are hitting the homepage (the base URL is given at runtime) with the GET method.
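As an aside, a user class can also hold several weighted tasks; this small sketch (the paths are just illustrative) shows how Locust picks tasks in proportion to their weights:

from locust import HttpUser, task, between

class BlogUser(HttpUser):
    wait_time = between(1, 10)

    @task(3)  # weight 3: picked about three times as often as read_blog
    def read_home(self):
        self.client.get("/")

    @task(1)
    def read_blog(self):
        self.client.get("/blog")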

3. Running the test

Running this test is very straightforward: save the step 2 code as a Python file, then open a terminal/command prompt and run the below command:

 locust -f ".\locationoffile.py"

After that, browse to the Locust web interface (by default at http://localhost:8089) to provide the base URL and thread count.

Thread count is equivalent to the number of users in Locust, and spawn rate is equivalent to the ramp-up period in JMeter: a spawn rate of 10 means 10 threads become active per second until the total thread count reaches 100.

Host means the base URL/application URL; in this case we are using testautomasi.com for demo purposes.

4. Viewing Results/Metrics:

Once you start the tests, results can be viewed in real time on the Locust web interface in the form of statistics, charts, failures, exceptions, and CSV data, as shown below:

If you look at the above image closely, you will see that in Locust you can change the thread count during runtime as well, using the edit link just below the status section, which isn't possible in many other performance testing tools.

The CSV report produces data like the summary report listener in JMeter; you can download and view the result metrics, or send them to management/developers to take action.

So that's how you can use Locust for your performance testing tasks, quite easy, right? Let's discuss more cool stuff about Locust in upcoming articles, and please leave your feedback about Locust in the comments section.

Top 20 Best Practices For Writing Better Automation Test Code
https://testautomasi.com/blog/2021/06/06/top-20-best-practices-for-writing-better-automation-test-code/
Sun, 06 Jun 2021

Programming Language Wise:

  • Follow programming language naming guidelines (method name: getPayload(), class name: APIEndpoints, package name: com.testingsumo.backend.tests, variable name: authToken)
  • Follow OOP concepts wherever possible: abstraction (base classes), inheritance (multiple implementations of the same thing), polymorphism (many forms with different behavior), data hiding (hide unnecessary/sensitive info), encapsulation (bind small entities into a single larger entity)
  • Reduce code duplication (think before writing new code: can I reuse or change existing code instead?)
  • Increase code reusability
  • Make your code generic wherever possible
  • Leave no hardcoded data in source code
  • Keep your static data outside the source code
  • Keep your dynamic data dynamic in test code (fetch it from DB queries or scripts); see the sketch after this list
  • Test your code properly; use IDE options such as call hierarchy or show usage to verify your changes end to end
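To make the data-related points above concrete, here is a minimal sketch of one way to keep static data outside the source and inject dynamic data at runtime; the file and variable names are illustrative assumptions:

import json
import os

# Static data lives in a versioned config file outside the source code.
with open("test_data/qa_config.json") as f:
    config = json.load(f)

base_url = config["base_url"]          # static: read from the config file
auth_token = os.environ["AUTH_TOKEN"]  # dynamic: injected by the pipeline at runtime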

Framework and Debugging Wise:

  • Use extensive logging: everything the framework does should be analyzable from logs without reading the source code (a small setup sketch follows this list)
  • Generate and save failure proofs outside the source code: videos/data/screenshots/logs
  • Focus on making your code scalable and faster without compromising code quality
  • Your code should be platform and system independent
  • Use as many assertions as possible; focus on automated testing rather than just automation
  • Leave no hardcoded data in source code
  • Always think of the future; separate out tech dependencies so that migrating to new tech is easy if needed
  • Keep your tests independent for better results in multithreading, unless they are genuinely related (for example, publisher-subscriber tests)
  • Use proper documentation
  • Create code that can be easily read and modified by others
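For the logging point above, a minimal Python setup might look like this; the file name, format, and message are illustrative:

import logging

logging.basicConfig(
    filename="test_run.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("framework")

# Log every framework action, not just failures, so a run can be
# reconstructed from the log file alone.
log.info("Clicking 'Build Now' button")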

If you want to understand these best practices in more depth, with detailed examples, please view the video tutorials below.


If you have any best practices to suggest or have any feedback for us, please comment below.

How to run Selenium 4 grid tests on Docker containers
https://testautomasi.com/blog/2021/03/28/how-to-run-selenium-tests-on-docker-containers/
Sun, 28 Mar 2021

By using Docker containers to run browser drivers and a Selenium Grid, you can run your Selenium tests without any local dependency such as Chrome, ChromeDriver, or WebDriverManager.

Using a remote Selenium Grid also makes local debugging easier: you can point your tests at the remote URL and see why they fail remotely, without needing to access the Linux server and run debugging commands on the remote machine. Below is step-by-step guidance on setting up Docker containers to run Selenium tests.

-update Linux system
sudo apt-get update

-install docker
sudo apt install docker.io

-check docker version
sudo docker -v

-create docker-compose-v3.yml for the selenium grid and chrome containers

To execute this docker-compose yml file, use: docker-compose -f docker-compose-v3.yml up
Add the -d flag at the end for detached execution.
To stop the execution, hit Ctrl+C, then run: docker-compose -f docker-compose-v3.yml down

version: "3"
services:
  chrome:
    image: selenium/node-chrome:4.8.3-20230328
    shm_size: 2gb
    depends_on:
      - selenium-hub
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443

  selenium-hub:
    image: selenium/hub:4.8.3-20230328
    container_name: selenium-hub
    ports:
      - "4442:4442"
      - "4443:4443"
      - "4444:4444"

yml reference- https://github.com/SeleniumHQ/docker-selenium/blob/trunk/docker-compose-v3.yml

-install docker-compose

sudo apt install docker-compose

-run containers using the compose command (add -d for detached mode)

sudo docker-compose -f docker-compose-v3.yml up

-see running containers info-

sudo docker ps

-see logs of the particular container

sudo docker logs containerid

-see performance stats of containers

sudo docker stats

-check grid is running

http://yourip:4444

-run selenium tests in your test code using the Remote WebDriver URL of the selenium hub:


@Test
	public void test() {
		WebDriver driver = null;

		ChromeOptions options = new ChromeOptions();
		options.addArguments("start-maximized"); // open Browser in maximized mode
		options.addArguments("disable-infobars"); // disabling infobars
		options.addArguments("--disable-extensions"); // disabling extensions
		options.addArguments("--disable-gpu"); // applicable to windows os only
		options.addArguments("--disable-dev-shm-usage"); // overcome limited resource problems
		options.addArguments("--no-sandbox"); // Bypass OS security model
		options.addArguments("window-size=1200,1100");// set display size of window
		try {
			driver = new RemoteWebDriver(new URL("http://ip:4444/wd/hub"), options);
			driver.get("https://testautomasi.com");
			Thread.sleep(20000);
			assertEquals(driver.getTitle(), "Home - Welcome to automasi solutions private limited");
		} catch (MalformedURLException | InterruptedException e) {
			e.printStackTrace();
		} finally {
			if (driver != null) { // avoid a NullPointerException if the session never started
				driver.quit();
			}
		}

	}


Have any feedback? Please don’t hesitate to leave it in the comments section.

How to create a compressed tar file with a relative file path in Python
https://testautomasi.com/blog/2021/03/06/how-to-create-a-compressed-tar-file-with-the-relative-file-path-in-python/
Sat, 06 Mar 2021

Whether you are working on file-heavy or log-heavy applications, compressing files to save storage is very useful for efficient file management as well as disk storage management.

Recently I worked on a project that required compressing a set of files and folders, and below are some useful things I noticed while creating a compressed tar file in Python.

Create a simple tar file from a given path:

import os
import tarfile

os.chdir(tar_location)
# 'w:gz' writes a gzip-compressed archive (valid write modes are 'w', 'w:gz', 'w:bz2', 'w:xz')
with tarfile.open(tar_location + '.tar.gz', mode='w:gz') as browser_tar:
    browser_tar.add(tar_location)

So here we first move to the input directory and then, using tarfile, add all the contents of the folder to the tar file. Simple enough, right?

But there is a problem with this method: it creates a tar with absolute paths, i.e., when you untar the file, the folder structure starts from the input location, something like c://users/chandan/tarfilelocation.

Ideally, when you want to send this tar as email content (as with build logs) or send it to an application to start processing files, the folder structure inside the tar should start from the relative folder location, i.e., ./tarfilelocation in this case.

To see the full paths stored in your tar during debugging, you can use the getmembers() method, which returns info about every entry in the tar file, as shown below:

browser_tar.getmembers()
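For example, you can print each stored entry name while the archive is still open, reusing the variables from the earlier snippet:

with tarfile.open(tar_location + '.tar.gz', mode='w:gz') as browser_tar:
    browser_tar.add(tar_location)
    for member in browser_tar.getmembers():
        print(member.name)  # prints the stored path of each entry, absolute in this case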

Now, to solve this problem, the tarfile module provides an extra argument called "arcname"; using it, you can easily tar files with relative paths, without worrying about user directories, as shown below:

browser_tar.add(tar_location, arcname=".")

And not only that: by using arcname you can also provide a specific name, which is very useful when you are juggling multiple tar files and need to append something like a unique timestamp to give each tar file a unique name, as shown below:

browser_tar.add(tar_location, arcname=unique_tar_name)
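Building that unique name with a timestamp might look like the sketch below, assuming tar_location is defined as in the earlier snippets:

import tarfile
import time

unique_tar_name = "logs_{}".format(int(time.time()))  # e.g. logs_1614954000
with tarfile.open(unique_tar_name + ".tar.gz", mode="w:gz") as browser_tar:
    browser_tar.add(tar_location, arcname=unique_tar_name)  # arcname renames the root folder inside the archive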


In this way, you can easily create compressed tar files and manage your disk space and file sets efficiently. Do you have something to add? Then do leave a comment in the comments section.

Python Pytest Cheatsheet https://testautomasi.com/blog/2021/02/14/python-pytest-cheatsheet/ Sun, 14 Feb 2021 13:42:50 +0000 https://testautomasi.com/blog/?p=145 #installation pip install pytest #using in py files import pytest #run all tests with pytest pytest tests/ --where tests is a folder contains all test classes with starts or ends with test in name #grouping in tests --mark it in tests @pytest.mark.<markername> --then run using -m flag pytest tests/ -m "sanitytests" # calling pytests through python python -m pytest -q test_sample.py #run tests with specific keyword in test method name pytest tests/ -k "metric" #stop test run on first failure or max failure pytest tests/ -x pytest tests/ --maxfail=2 #retry failures pip install pytest-rerunfailures pytest tests/ --reruns 5 #disable test @pytest.mark.skip(reason="no way of currently testing this") #multithreading pip install pytest-xdist pytest tests/ -n 3 #ordering @pytest.mark.run(order=17) #using cmd parameters in pytest --supply param from cmd pytest tests\ --env qa --read from cmd in conftest file def pytest_addoption(parser): parser.addoption("--env", action="store", default="environment value is not provided") def pytest_sessionstart(session): env_param = session.config.getoption("--env") --set and get param using os.getenv os.environ['env'] = env_param os.getenv['env'] #fixtures/beforetest --create fixture in conftest @pytest.fixture def input_value(): input = 10 return input --use it in tests def test_divisible_by_3(input_value): assert input_value % 3 == 0 # data parameterization @pytest.mark.parametrize("num, output",[(1,11),(2,22),(3,35),(4,44)]) def test_multiplication_11(num, output): assert 11*num == output #reporting in pytest --pytest default html --pytest-html-reporter --junit pytest tests --junitxml="result.xml" --allure pytest tests --alluredir=/allure allure serve allure

Top 5 programming languages to learn as an SDET in 2021
https://testautomasi.com/blog/2021/01/02/top-5-programming-languages-to-learn-as-sdet-in-2021/
Sat, 02 Jan 2021

1. Java: Opens doors to Selenium, Appium, WinAppDriver, REST Assured, Karate, Katalon, Cucumber, TestNG, and Serenity. Easy to learn, with a lot of community support and wrappers available; solutions to complex problems are easily found on Google/StackOverflow, and jobs are easily found, especially in the Indian subcontinent and Southeast Asia.

2. Python: Opens doors to Selenium, Appium, WinAppDriver, Requests, Robot Framework, pytest, behave, and tkinter. Python is also very helpful for scripting, data analysis, and infrastructure automation tasks. Easier and faster to learn, with learning resources readily available; jobs are fewer compared to Java, but so is the competition.

3. JavaScript: Opens doors to multiple testing tools such as Cypress, Nightwatch, Puppeteer, WebdriverIO, Playwright, Jest, SuperTest, and Postman. Slightly more difficult to learn compared to the others; learning resources are easily available, but solutions to complex problems can be harder to find. Companies in Europe and the US prefer this over other languages.

4. C#: Opens doors to almost all the same things as Java and is very useful for desktop utility development. Resources are available on Microsoft's websites. Job-wise, this language is quite popular in Australia and New Zealand.

5. Groovy: Opens doors to JMeter, Gradle, SoapUI, Katalon, Jira/Bitbucket, and Jenkins. Very similar to Java and easy to learn if you know Java; very useful for Jira workflow automation, Jenkins scripted pipelines, and JMeter scripts. Jobs are easier to find for performance testers with Groovy knowledge.

Javascript Tutorials For Beginners
https://testautomasi.com/blog/2020/12/26/javascript-tutorials-for-beginners/
Sat, 26 Dec 2020

In this video, you will learn JavaScript in 30 minutes, covering IDE setup, variables, logic, loops, lists, debugging, functions, and JSON.

How to make an impact on build quality as an SDET/automation engineer
https://testautomasi.com/blog/2020/12/12/how-to-make-an-impact-on-build-quality-as-a-sdet-automation-engineer/
Sat, 12 Dec 2020

As an SDET/automation engineer, if you only get involved after the release is deployed to QA, it is already too late to make an impact on build quality. You need to work with developers and help them automate their unit and integration testing cycles, whether by running the automation suite or by providing utilities/test scripts that help them deliver better-quality builds to QA. Spend time with your developers to accelerate and enhance their testing activities, and you will automatically see an increase in build quality and a decrease in build release timelines.
