Automated Testing with Cypress: 5 Best Practices
This article will delve into the best practices for automated testing using Cypress, a popular front-end testing tool. It will provide insights into how these practices can be effectively implemented in real-life scenarios, thereby enhancing the efficiency and reliability of software testing processes.

Automated testing has significantly improved efficiency, reduced human error, and accelerated time-to-market for many software companies. This is particularly evident in visual regression testing, which aims to identify unintended visual changes, a critical factor for maintaining a good user experience. However, simply setting up automation is not always enough; it's just as important to implement it efficiently.
A popular tool in the realm of visual testing is Cypress, which lets developers write automated tests directly in JavaScript. This guide gives you a comprehensive overview of the best practices to consider when automating your visual testing with Cypress tests, as well as how they should be implemented.
Why Choose Cypress Tests for Automated Testing?
The JavaScript-based nature of Cypress testing allows for frictionless integration with modern stacks, which are often based on some kind of JavaScript framework like React or Angular.
Cypress’ intuitive interface and real-time feedback mechanisms significantly ease the process of writing and debugging tests. This will not only help streamline the testing process but also make it more accessible for both developers and QA engineers, reducing the learning curve typically associated with most automated testing tools.
Reducing Flakiness
One of the most critical aspects of any testing tool is its reliability. Cypress addresses this by offering built-in retry-ability features that aim to reduce test flakiness. By automatically retrying failed assertions, Cypress testing ensures more reliable test outcomes, thereby increasing the credibility of your testing suite. That being said, while retries are often a great way of reducing a testing tool's flake rate, they don't eliminate flakiness entirely.
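As a brief sketch of how retry-ability works in practice (the selectors below are hypothetical):

```javascript
// Cypress automatically retries both the query and the assertions
// until they pass or the command times out (default: 4 seconds),
// so no manual polling or sleep calls are needed.
cy.get('[data-testid="status-banner"]') // hypothetical selector
  .should('be.visible')
  .and('contain.text', 'Loaded')

// The timeout can be extended for slower operations:
cy.get('[data-testid="report"]', { timeout: 10000 }).should('exist')
```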
Comparison Table: Cypress vs. Other Tools
To fully grasp the features of Cypress and how it compares, it’s important to do a quick assessment of how it compares to other tools. The main Cypress competitors are:
- Meticulous: Known for its focus on visual regression testing and minimal maintenance requirements.
- Test IO: Utilizes a crowdtesting model, providing a diverse range of real-world testing scenarios.
- Sauce Labs: Offers emulation-based device testing along with real-time feedback mechanisms.
- Browserstack: Specializes in real-world device testing, offering a comprehensive solution for cross-platform compatibility.
| Criteria | Cypress | Meticulous | Test IO | Sauce Labs | Browserstack |
|---|---|---|---|---|---|
| Test Automation | Yes (JavaScript) | Yes (Visual Regression) | No (Crowdtesting) | Yes (Emulation-based) | Yes (Real-world devices) |
| Language Support | JavaScript | Multiple | Multiple | Multiple | Multiple |
| Ease of Use* | High | Moderate | Low | Moderate | High |
| Flake Rate* | Moderate | Low | High | Moderate | Low |

*Subjective assessment based on reviews
Cypress Testing Best Practices
Following best practices isn't just a matter of pleasing managers; it has direct implications for the efficiency and reliability of automated testing processes. Over time, this will help you reduce maintenance time, which is crucial in agile development environments where changes are frequent. Also, poorly designed tests can lead to misleading results, which typically manifest in two common situations:
- Letting a bug slip through the cracks and into production
- Spending time debugging a false positive, wasting engineering hours on something that wasn’t faulty in the first place
Moreover, the adoption of best practices enhances team collaboration, making it easier to understand, modify, or extend test cases, in turn fostering a more cohesive and productive development environment.
When it comes to best practices for writing tests in Cypress, the official Cypress documentation already covers the topic fairly comprehensively. As such, this blog post will focus more on how you can best utilize the different features offered by the tool.
Utilizing the Cypress Test Runner, Command Log, and Debugging Features
Cypress offers a robust Test Runner that serves as the core of its testing experience allowing for real-time execution of tests, providing immediate feedback via a test report, and accelerating the debugging process. One of its most powerful features is the Command Log, which provides a detailed, chronological record of all test actions, assertions, and network requests.
This allows developers to see the Application or Component Under Test (AUT/CUT) and explore its DOM in real time. It’s not just a display, but a fully interactive application with developer tools to inspect elements just like you would in a normal browser. This interactive element is often very helpful for debugging and understanding the behavior of the application under different test conditions.
Command Log: Your Debugging Companion
The Command Log is displayed on the left-hand side of the Cypress Test Runner, serving as a visual representation of your test suite. Clicking on any test—which are neatly nested appropriately—reveals every Cypress command executed within that test's block. This includes commands executed in relevant `before`, `beforeEach`, `afterEach`, and `after` hooks. The Command Log also offers a feature known as "Time Traveling," which allows developers to hover over each command to see the exact state of the DOM at that moment.
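As an illustrative sketch of how those hooks appear in a spec file (the test names, routes, and selectors here are hypothetical):

```javascript
describe('checkout flow', () => {
  before(() => {
    // Runs once before all tests in this block;
    // appears in the Command Log under the first test
    cy.visit('/checkout')
  })

  beforeEach(() => {
    // Runs before every test in this block
    cy.get('[data-testid="cart"]').should('be.visible')
  })

  it('shows the order summary', () => {
    cy.get('[data-testid="order-summary"]').should('be.visible')
  })

  afterEach(() => {
    // Runs after every test, e.g. for per-test cleanup
  })

  after(() => {
    // Runs once after all tests in this block
  })
})
```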
Best Practices for Utilizing Test Runner and Command Log
- Real-time Feedback: Make the most of the real-time execution feature for quicker identification and resolution of issues.
- Command Log Interpretation: Learn to read the Command Log effectively. It provides a wealth of information that can help you debug tests faster.
- Time Travel Feature: Use the Time Travel feature judiciously to understand the state changes in your application, aiding in more effective debugging.
Scenarios Benefiting from Real-Time Execution and Command Log
The real-time execution and Command Log features are particularly useful in debugging complex user flows that involve multiple steps or conditions. They are also invaluable in dynamic web applications where elements are frequently updated or changed. The detailed logging and real-time feedback simplify the process of updating and maintaining test cases, making these features especially beneficial in agile development environments.
Understanding and effectively utilizing Cypress' Test Runner and Command Log can significantly enhance your debugging capabilities. These features not only provide real-time feedback but also offer detailed insights into the execution flow and state changes in your application, making them indispensable tools in your testing arsenal.
Leveraging Cypress's Automatic Waiting Feature
While it is possible to use a traditional `.wait()` statement in Cypress, there are a number of ways to avoid it. One way is to use the `.should()` statement, which allows you to pass a callback function that will run after a given command. For instance, take a look at this example usage pulled from Cypress' official documentation:
```javascript
cy.get('p').should(($p) => {
  // should have found 3 elements
  expect($p).to.have.length(3)

  // make sure the first contains some text content
  expect($p.first()).to.contain('Hello World')

  // use jquery's map to grab all of their classes
  // jquery's map returns a new jquery object
  const classes = $p.map((i, el) => {
    return Cypress.$(el).attr('class')
  })

  // call classes.get() to make this a plain array
  expect(classes.get()).to.deep.eq([
    'text-primary',
    'text-danger',
    'text-default',
  ])
})
```
In this example, Cypress makes sure to only execute the code within the `.should()` callback function after the `.get('p')` command has successfully executed and returned a value. Another example from Cypress' documentation on the `.wait()` function shows that it's not always necessary to pass a specific time interval to wait; instead, you can tell Cypress tests to wait for a given alias to respond:
```javascript
// Wait for the alias 'getAccount' to respond
// without changing or stubbing its response
cy.intercept('/accounts/*').as('getAccount')
cy.visit('/accounts/123')
cy.wait('@getAccount').then((interception) => {
  // we can now access the low level interception
  // that contains the request body,
  // response body, status, etc
})
```
Although there are some great ways of performing automatic waits within Cypress, it's important to still balance time spent on optimizations against time spent on getting things done. If a simple `.wait(2000)` will increase the total run time of your tests by one to two seconds but works perfectly fine in 100% of cases, then you should likely just use the `.wait(2000)` statement. Automatic waits are great when they're either easy to implement or when the required waiting time varies enough that a static `.wait()` statement would cause flakiness.
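As a sketch of that trade-off (the route and selectors below are hypothetical):

```javascript
// Option 1: a static wait. Simple, but adds fixed time to every run
// and flakes if the operation ever takes longer than 2 seconds.
cy.wait(2000)
cy.get('[data-testid="results"]').should('be.visible')

// Option 2: an automatic wait. Finishes as soon as the request
// resolves, however long that takes (up to the timeout).
cy.intercept('GET', '/api/search*').as('search')
cy.get('[data-testid="search-button"]').click()
cy.wait('@search')
cy.get('[data-testid="results"]').should('be.visible')
```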
Using Spies, Stubs, and Clocks for Controlling Behavior
Like many other testing tools, Cypress supports spies, stubs, and clocks as a popular and powerful way of controlling application behavior during testing. With one simple line you can define the behavior of a function to be exactly what you'd like it to be:

```javascript
// force obj.method() to return "foo"
cy.stub(obj, 'method').returns('foo')

// force obj.method() when called with "bar" argument to return "foo"
cy.stub(obj, 'method').withArgs('bar').returns('foo')
```
This is often useful when the part of the web application you’re testing relies on a function/method that lies out-of-scope of your test suite, and so it needs to be mocked. However, it’s important to exercise caution when using these tools, as manual customization of application behavior can lead to a number of unwanted results like:
- Increased development time due to maintenance, as mocks often have to be updated in order to reflect new behavior in the function being mocked
- Incorrect test results due to misconfiguration of mocks
- Unrealistic test results due to manual configuration like the timing of a UI element loading
All in all, spies, stubs, and clocks are powerful ways of getting your tests to behave as you want them to, but it’s crucial to research and understand whether you can utilize real-world traffic instead.
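Clocks, mentioned above but not shown, deserve a quick sketch as well: `cy.clock()` and `cy.tick()` let a test control time-dependent behavior deterministically instead of actually waiting (the route and selector below are hypothetical):

```javascript
// Freeze the browser's clock before the app loads
cy.clock()
cy.visit('/dashboard')

// Fast-forward 60 seconds instantly, e.g. to trigger a polling
// interval or a session-timeout banner, without real waiting
cy.tick(60000)
cy.get('[data-testid="refresh-indicator"]').should('be.visible') // hypothetical selector
```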
Implementing Network Traffic Control for Edge Case Testing
Edge cases refer to testing scenarios that occur at the extreme ends of input boundaries, which may not always be present in real-world traffic captured from production. This is a great example of a scenario where the use of stubs may in fact be necessary. While edge cases can occur anywhere in an application, they are especially common when testing interactions between multiple APIs, in which case it's often necessary to mock the edge-case scenarios.
While it is possible to use the `.stub()` method introduced in the previous section, Cypress testing does offer the `.intercept()` function as a dedicated solution for handling network requests:
```javascript
cy.intercept(
  {
    method: 'GET', // Route all GET requests
    url: '/users/*', // that have a URL that matches '/users/*'
  },
  [] // and force the response to be: []
).as('getUsers') // and assign an alias
```
That being said, like with the `.wait()` function, this should be treated as an acceptable but non-preferred solution for specific use cases, like edge-case testing. As stated in the previous section, adding manual mocks of any kind will inevitably increase maintenance and reduce the reliability of testing suites, which is why you should always try to rely on real-world data where possible.
Ensuring Consistent Results with Cypress's Unique Architecture
One of the most unique features of Cypress is its JavaScript-based architecture, which is also one of its most “dangerous”. The possibility of writing tests in the same language you’re writing your application allows for a lot of freedom, but without caution this can quickly become a drawback.
For instance, a popular feature of JavaScript is `async/await`, which lets you run multiple asynchronous operations concurrently within the same runtime. But this can also easily result in a number of race conditions. Normally, this isn't too much of a worry, as there are a number of ways to work around it, but when it comes to testing, especially visual testing, this is an easy way to introduce flakiness into your test suite.
All in all, Cypress' unique architecture allows for amazing flexibility and freedom when creating test suites; however, it's crucial to still keep your test suites as simple as possible in order to increase reliability and reduce maintenance.
Enter Meticulous: Reducing Flakiness in Visual Regression Testing
While Cypress' JavaScript-based approach will appeal to a large group of developers, it's important to remember that there are important trade-offs to balance. For instance, the concern of flaky tests has been raised multiple times throughout this post, as flakiness is often the cause of unreliable tests and false positives.
Another up-and-coming tool on the market has gone in the opposite direction of Cypress, opting for a solution relying on no code at all. To use Meticulous, you simply install a recorder script onto your website, which will then capture all user interactions. Then, once you've developed a new visual feature, fixed a bug, or for some other reason made changes to the front-end, Meticulous will reenact a user's behavior and capture screenshots of your application's interface.
These screenshots will be compared to the original ones recorded during normal user interactions, and by comparing each pair of screenshots pixel by pixel, Meticulous can accurately determine and report any visual differences. Additionally, the recorded user traffic will also contain the responses of any third-party service being used (like a backend API), which will then automatically be mocked, essentially presenting you with a no-code, no-configuration, no-maintenance solution that almost eliminates the possibility of flakes.
Final Thoughts
From its unique JavaScript-based architecture that ensures consistent and reliable test results to its robust features like real-time execution and debugging, automatic waiting, and network traffic control, Cypress stands out as a tool designed to enhance the efficiency and reliability of software testing processes.
However, it's crucial not to become too reliant on the efficacy of these features, as there are still a number of pitfalls you can run into, resulting in unreliable and/or flaky tests. Remember, any test you write yourself is subject to human error. By extension, the most reliable tests will come from utilizing native features and automation as much as possible.
While this is true for all code, it's especially dangerous when it comes to testing, as your tests are supposed to catch human errors.
Meticulous
Meticulous is a tool for software engineers to catch visual regressions in web applications without writing or maintaining UI tests.
Inject the Meticulous snippet onto production or staging and dev environments. This snippet records user sessions by collecting clickstream and network data. When you post a pull request, Meticulous selects a subset of recorded sessions which are relevant and simulates these against the frontend of your application. Meticulous takes screenshots at key points and detects any visual differences. It posts those diffs in a comment for you to inspect in a few seconds. Meticulous automatically updates the baseline images after you merge your PR. This eliminates the setup and maintenance burden of UI testing.
Meticulous isolates the frontend code by mocking out all network calls, using the previously recorded network responses. This means Meticulous never causes side effects and you don’t need a staging environment.
Learn more here.
Authored by Kasper Siig

Kasper is a seasoned technical professional with experience ranging from DevOps engineering to specialized content creation, currently leveraging his deep understanding of technology to lead Siig Marketing, a firm dedicated to crafting high-converting, developer-focused marketing content.