
Utilization of Advanced Pytest Features

In a world of rapid technology advancements, every product, system, or platform is completely different from the next. Hence, the testing needs vary drastically too.
To test a web application, the system needs are limited to a web browser client and Selenium for test automation, whereas an IoT product needs the end device, cloud, web application, APIs, and mobile applications (Android and iOS) to be tested thoroughly end to end.
Similarly, there can be completely different types of products or systems, so one size fits all is never an option when it comes to testing. Testing different types of systems demands versatility and flexibility, and the same goes for test automation: each system needs a different kind of automation framework to accommodate the complexity, scalability, and components of the product under test. If there is a single test framework versatile and flexible enough to achieve all of the above, it is Pytest.

Pytest provides numerous advantages that can assist companies in optimizing their software testing procedures and enhancing the overall quality of their products. One of the key benefits of Pytest is its seamless integration with continuous integration and delivery (CI/CD) pipelines, which allows for automated testing of code modifications in real time. This results in improved efficiency and faster bug resolution time, ensuring that the product meets the desired level of quality and reliability. Pytest offers comprehensive reporting and analysis functionalities, which can help developers and testers promptly identify and resolve issues.

Pytest offers a large number of features in the form of functions, markers, hooks, objects, configurations, and more, making framework development extremely flexible and giving the test framework architect the freedom to implement the structure, flow, and outcome that best fit the product requirements.

Knowing how to make use of these features, and what to use when, is a major challenge. This blog explains the features in each category that are used to achieve complex automation.

Hooks:
Hooks are a key part of Pytest’s plugin system and are used by plugins and by Pytest itself to extend its functionality. Hooks allow plugins to register custom code to be run at specific points during the execution of Pytest, such as before and after tests are run, or when exceptions are raised. They provide a flexible way to customize the behavior of Pytest. Hooks are categorized by the stage of the test run at which they are called.
Some widely used hooks in each category are summarized below:

Bootstrapping: At the very beginning and end of the test run (a short sketch follows the list).

  • pytest_load_initial_conftests – Load initial plugins and modules that are needed to configure and setup the test run ahead of the command-line argument parsing.
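
A minimal sketch of such a bootstrap hook, assuming it lives in the root-level conftest.py or a plugin; the option injected here is only illustrative:

# conftest.py (project root) -- runs before command-line argument parsing
def pytest_load_initial_conftests(early_config, parser, args):
    # Illustrative: disable the cache plugin unless the user already passed -p
    if "-p" not in args:
        args[:] = ["-p", "no:cacheprovider"] + args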

Initialization: After bootstrapping, to initialize the resources needed for the test run (see the sketch after this list).

  • pytest_addoption – add additional command line options to the test runner.
  • pytest_configure – Perform additional setup/configuration after command line options are parsed.
  • pytest_sessionstart – Perform setup steps after all configurations are completed and before the test session starts.
  • pytest_sessionfinish – Teardown steps or generate reports after all tests are run.
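
A minimal conftest.py sketch combining these initialization hooks; the --env option and the print statements are only illustrative:

# conftest.py
def pytest_addoption(parser):
    # Register a custom command-line option on the test runner
    parser.addoption("--env", action="store", default="staging",
                     help="Target environment for the test run")

def pytest_configure(config):
    # Runs after command-line options are parsed
    config.target_env = config.getoption("--env")

def pytest_sessionstart(session):
    # Runs once before the first test of the session
    print(f"Starting test session against {session.config.target_env}")

def pytest_sessionfinish(session, exitstatus):
    # Runs once after all tests finish, e.g. to generate reports
    print(f"Session finished with exit status {exitstatus}")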

Collection: During the test collection process, used to create custom test suites and collect test items (see the sketch after this list).

  • pytest_collectstart – Perform steps/action before the collection starts.
  • pytest_ignore_collect – Ignore the collection for specified path.
  • pytest_generate_tests – Generate multiple parametrized calls to a test function based on its parameters.
  • pytest_collection_modifyitems – Modify the collected test items list as needed. Filter, Re-order according to markers, or other criteria.
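
A minimal sketch of these collection hooks; the browser parameter, the slow marker, and the --skip-slow option are only illustrative:

# conftest.py
import pytest

def pytest_generate_tests(metafunc):
    # Parametrize every test that declares a "browser" argument
    if "browser" in metafunc.fixturenames:
        metafunc.parametrize("browser", ["chrome", "firefox"])

def pytest_collection_modifyitems(config, items):
    # Re-order so that tests marked "slow" run last
    items.sort(key=lambda item: bool(item.get_closest_marker("slow")))
    # Optionally skip them when --skip-slow is given
    if config.getoption("--skip-slow", default=False):
        skip_slow = pytest.mark.skip(reason="--skip-slow given")
        for item in items:
            if item.get_closest_marker("slow"):
                item.add_marker(skip_slow)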

Runtest: Control and customize an individual test run (see the sketch after this list).

  • pytest_runtest_setup / pytest_runtest_teardown – Setup/teardown steps around a test run.
  • pytest_runtest_call – Customize how the test is actually called/executed.
  • pytest_runtest_logreport – Access the test result and modify/format it before it is logged.
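
A minimal sketch of the runtest hooks; the linux_only marker and the logging format are only illustrative:

# conftest.py
import sys
import pytest

def pytest_runtest_setup(item):
    # Skip tests marked linux_only when not running on Linux
    if item.get_closest_marker("linux_only") and not sys.platform.startswith("linux"):
        pytest.skip("runs only on Linux")

def pytest_runtest_logreport(report):
    # Inspect each phase result (setup/call/teardown) as it is logged
    if report.when == "call" and report.failed:
        print(f"FAILED: {report.nodeid} after {report.duration:.2f}s")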

Reporting: Report the status of the test run and customize reporting of test results (see the sketch after this list).

  • pytest_report_header – Add additional information to the header of the test report.
  • pytest_terminal_summary – Modify/Add details to terminal summary of the test results.
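
A minimal sketch of the reporting hooks; the header lines and the custom summary section are only illustrative:

# conftest.py
def pytest_report_header(config):
    # Extra lines shown at the top of the terminal report
    return ["project: MyProduct", f"rootdir in use: {config.rootpath}"]

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Append a custom section to the summary printed at the end of the run
    passed = len(terminalreporter.stats.get("passed", []))
    failed = len(terminalreporter.stats.get("failed", []))
    terminalreporter.section("custom summary")
    terminalreporter.write_line(f"passed={passed} failed={failed} exit status={exitstatus}")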

Debugging/Interaction: Interact with a test run that is in progress and debug issues (see the sketch after this list).

  • pytest_keyboard_interrupt – Perform some action on keyboard interrupt.
  • pytest_exception_interact – Called when an exception is raised that can be handled interactively.
  • pytest_enter_pdb / pytest_leave_pdb – Action to perform when the Python debugger enters/leaves interactive mode.
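
A minimal sketch of the debugging/interaction hooks; the cleanup and logging actions are only illustrative:

# conftest.py
def pytest_keyboard_interrupt(excinfo):
    # Called when the run is interrupted with Ctrl-C, e.g. to release devices or connections
    print("Test run interrupted -- cleaning up resources")

def pytest_exception_interact(node, call, report):
    # Called when an exception could be handled interactively (e.g. with --pdb)
    print(f"Exception in {node.nodeid}: {call.excinfo.value!r}")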

Functions:
As the name suggests, these are independent pytest functions to perform a specific operation/task.
Pytest functions are called directly like a regular Python function call, i.e. pytest.<function_name>().
There are a number of Pytest functions available to perform different operations.
Below listed are widely used Pytest functions and their uses.

approx: Assert that two numbers are equal to each other within some tolerance.
Example:
import pytest

assert 2.2 == pytest.approx(2.3)       # fails, because 2.2 is not close enough to 2.3
assert 2.2 == pytest.approx(2.3, 0.1)  # passes with a relative tolerance of 0.1

skip: Skip an executing test with a given reason message. Used to skip a test when a certain condition is encountered.
Example:
import pytest

if condition_is_encountered:
    pytest.skip("Skip integration test")

fail: Explicitly fail an executing test with a given message. Usually used to explicitly fail a test while handling an exception.
Example:
import pytest

a = [1, 2, 3]
try:
    invalid_index = a[3]
except Exception as e:
    pytest.fail(f"Failing test due to exception: {e}")

xfail: Imperatively mark an executing test as expected to fail. Used for known bugs. Alternatively, it is often preferable to use the pytest.mark.xfail marker.
Example:
pytest.xfail("This is an existing bug")

raises: Validate that the expected exception is raised by a particular block of code under the context manager.

Example:
with pytest.raises(ZeroDivisionError):
    1 / 0

importorskip: Import a module, or skip the test if the module import fails.
Example:
pytest.importorskip("graphviz")

Marks:
Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins. They are commonly used for test parametrization, test filtering, skipping, and adding other metadata. Marks are applied as decorators.

@pytest.mark.parametrize: Parametrize the arguments of a test function. Collection will generate one instance of the test function per parameter set.
Example:
@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

@pytest.mark.usefixtures: A very useful marker to define which fixture or fixtures to use for the underlying test function. The fixture names are specified as a comma-separated list of strings.
Example:
@pytest.mark.usefixtures("fixture_one_name", "fixture_two_name")
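
End to end, this might look like the following sketch; the fixture and test names are only illustrative:

import pytest

@pytest.fixture
def db_connection():
    # Illustrative setup/teardown for a resource the test needs but never touches directly
    conn = {"connected": True}
    yield conn
    conn["connected"] = False

@pytest.mark.usefixtures("db_connection")
def test_report_generation():
    # db_connection is set up and torn down for this test without being an argument
    assert True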

@pytest.mark.<custom_marker>: These are markers created dynamically, which the user can name as per the requirement. Custom markers are mainly used for test filtering and for categorizing different sets/types of tests.
Example:
@pytest.mark.timeout(10, method="thread")  # provided by the pytest-timeout plugin
@pytest.mark.slow
def test_function():
    ...
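
Custom markers are usually registered so that Pytest (especially with --strict-markers) recognizes them, and are then used to filter tests from the command line. A minimal sketch, assuming a "slow" marker:

# conftest.py
def pytest_configure(config):
    # Register the custom marker so --strict-markers does not reject it
    config.addinivalue_line("markers", "slow: marks tests as slow to run")

The marked subset can then be selected or excluded on the command line, e.g. pytest -m slow or pytest -m "not slow".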

Conclusion:
Pytest goes well beyond the concepts listed above. This post compiles the most useful ones, which are otherwise difficult to find in one place. They help a framework developer plan the architecture of a test automation framework and make the most of Pytest to build an efficient, flexible, robust, and scalable test automation framework.

MosChip offers top-notch Quality Engineering Services for software and embedded devices, enabling businesses to develop high-quality solutions that are well-suited for the competitive marketplace. Testing of embedded software and devices, DevOps and test automation, as well as machine learning application and platform testing, are all part of our Quality Engineering services. Our team of experts has experience working on automation frameworks and tools like Jenkins, Python, Robot, Selenium, JIRA, TestRail, JMeter, Git and more, and ensures compliance with industrial standards such as FuSa – ISO 26262, MISRA C, and AUTOSAR.

To streamline the testing process, we have developed STAF, our in-house test automation framework that facilitates end-to-end product/solution testing with greater efficiency and faster time-to-market. MosChip's comprehensive Quality Engineering services have a proven track record of success, as evident in our numerous success stories across different domains.

About MosChip:

MosChip has 20+ years of experience in Semiconductor, Product Engineering Services & Software, and security, with a strength of 1300+ engineers.
Established in 1999, MosChip has development centers in Hyderabad, Bangalore, Pune, and Ahmedabad (India) and a branch office in Santa Clara, USA. Our software expertise involves platform enablement (FPGA/ ASIC/ SoC/ processors), firmware and driver development, systems security, BSP and board bring-up, OS porting, middleware integration, product re-engineering and sustenance, device and embedded testing, test automation, IoT, AIML solution design and more. Our semiconductor offerings involve silicon design, verification, validation, and turnkey ASIC services. We are also a TSMC DCA (Design Center Alliance) Partner.

Stay current with the latest MosChip updates via LinkedIn, Twitter, Facebook, Instagram, and YouTube.
