- The Hidden Dangers of Untested Queues | Whitepaper
Prevent costly failures in queues and event-driven systems with HyperTest. Download now.
- Scaling with Microservices: MAANG's Experience | Whitepaper
This guide delves into MAANG's transition journey from monoliths to microservices, covering the approaches they used to successfully run more than 1,000 microservices today. Download now.
- Simplify Your Code: A Guide to Mocking for Developers
07 Min. Read | 8 April 2024
Simplify Your Code: A Guide to Mocking for Developers
Shailendra Singh, Vaishali Rastogi

You want to test your code but avoid its dependencies? The answer is "mocking". Mocking comes in handy whenever you want to test something that has a dependency. Let's first talk about mocking in a little more detail.

What's mocking, anyway?

The internet is loaded with questions on mocking, asking for frameworks, workarounds and many more "how-to-mock" questions. But in reality, when discussing testing, many people are unfamiliar with the purpose of mocking. Let me explain with an example:

💡 Consider a scenario where you have a function that calculates taxes based on a person's salary, with details like salary and tax rates fetched from a database. Testing against a real database can make the tests flaky because of database unavailability, connection issues, or changes in contents affecting test outcomes. A developer would therefore simply mock the database response, i.e. the income and tax rates for the dummy data the unit tests run on. By mocking database interactions, results become deterministic, which is exactly what developers want.
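To make that example concrete, here is a minimal sketch using Python's built-in unittest.mock. The names (fetch_tax_details, calculate_tax) are hypothetical, invented purely for illustration:

```python
import unittest
from unittest.mock import patch

def fetch_tax_details(user_id):
    # In real code this would query the database; the test below mocks
    # it out, so it is deliberately left unimplemented here.
    raise NotImplementedError("requires a live database")

def calculate_tax(user_id):
    details = fetch_tax_details(user_id)
    return details["salary"] * details["tax_rate"]

class TestCalculateTax(unittest.TestCase):
    @patch("__main__.fetch_tax_details")
    def test_tax_is_computed_from_mocked_data(self, mock_fetch):
        # Mock the database response: deterministic dummy salary and rate,
        # so the test never depends on database availability or contents.
        mock_fetch.return_value = {"salary": 50000, "tax_rate": 0.2}
        self.assertEqual(calculate_tax(1), 10000.0)

if __name__ == "__main__":
    unittest.main()
```

The test passes whether or not a database exists, which is the whole point of the mock.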
Hope the concept is clear now; but if everything seems good with mocking, what's the purpose of this article? Continue reading to find out.

All seems good with mocking; what's the problem then?

API mocking is typically used during development and testing, since it allows you to build your app without worrying about third-party APIs or sandboxes breaking. But evidently, people still have issues with mocking! "Mocking too much" is a hot topic of discussion among tech peers, but why do they hold this opinion in the first place? This article is about bringing out the real concerns people have with mocking, and presenting a way that takes away the mocking-related pain.

1️⃣ State Management Complexity

Application flows are fundamentally stateless, but the database brings state into a flow because it makes the flow contextual to a user journey. Imagine testing checkout: the application must be in a state where a valid user has added a valid SKU with the required inventory. This means that before running a test we need to fill the database with the required data, execute the test, and then clean out the database once the test is over. This process is repetitive, time-consuming, and offers diminishing returns. Now consider the complexity of handling numerous user scenarios: we would have to prepare and load hundreds, maybe thousands, of different user data setups into the database for each test scenario.

2️⃣ False Positives/Negatives

False positives in tests occur when a test incorrectly passes, suggesting code correctness despite existing flaws. This often results from mocks that don't accurately mimic real dependencies, leading to misplaced confidence. Conversely, false negatives happen when tests fail, indicating a problem where none exists, typically caused by overly strict or incorrect mock setups. Both undermine test reliability: false positives mask bugs, while false negatives waste time on non-issues. Addressing these involves accurate mock behavior, minimal mocking, and supplementing with integration tests to ensure tests reflect true system behavior and promote software stability.

3️⃣ Maintenance Overhead

Assume UserRepository is updated to throw a UserNotFound exception instead of returning None when a user is not found. You then have to update every test that uses the mock to reflect this new behavior:

```python
# New behavior in UserRepository
def find_by_id(user_id):
    # Throws UserNotFound if the user does not exist
    raise UserNotFound("User not found")

# Updating the mock to reflect the new behavior
mock_repository.find_by_id.side_effect = UserNotFound("User not found")
```

Keeping mocks aligned with their real implementations requires continuous maintenance, especially as the system grows and evolves.

HyperTest's way of solving these problems

We have this guide on the why and how of HyperTest; go through it once and then hop back here. To give you a brief:

💡 HyperTest makes integration testing easy for developers. What's special is its ability to mock all third-party dependencies, including your databases, message queues, sockets and, of course, dependent services. This behavior of auto-generating mocks that simulate dependencies not only streamlines test creation but also lets you meet your development goals faster.

The newer approach towards mocking

Let's understand the HyperTest approach through an example scenario. Imagine we have a shopping app and we need to write integration tests for it.

💡 The Scenario

Imagine we have a ShoppingCartService class that relies on a ProductInventory service to check if products are available before adding them to the cart. The ProductInventory service has state that changes over time; for example, a product might be available at one moment and out of stock the next.

```python
class ShoppingCartService:
    def __init__(self, inventory_service):
        self.inventory_service = inventory_service
        self.cart = {}

    def add_to_cart(self, product_id, quantity):
        if self.inventory_service.check_availability(product_id, quantity):
            if product_id in self.cart:
                self.cart[product_id] += quantity
            else:
                self.cart[product_id] = quantity
            return True
        return False
```

💡 The Challenge

To test ShoppingCartService's add_to_cart method, we need to mock ProductInventory's check_availability method. However, the availability of products can change, which means our mock must dynamically adjust its behavior based on the test scenario.

💡 Implementing Stateful Behavior in Mocks

To accurately test these scenarios, our mock needs to manage state. HyperTest's ability to intelligently generate and refresh mocks gives it the capability to test the application exactly in the state it needs to be in. To illustrate this, let's consider the shopping scenario again. Three possible scenarios can occur:

- The product is available, and adding it to the cart succeeds.
- The product is not available, preventing it from being added to the cart.
- The product becomes unavailable after being available earlier, simulating a change in inventory state.

The HyperTest SDK records all of these flows from traffic, i.e. when the product is available, when the product is not available, and when there is a change in the inventory state. In test mode, when HyperTest runs all three scenarios, it has the recorded response from the database for each, testing them in the right state and reporting a regression if any of the behaviors regresses.
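For comparison, here is what managing that state by hand looks like with a plain mock; a minimal sketch using unittest.mock and the ShoppingCartService class above, where side_effect simulates the inventory changing between calls:

```python
from unittest.mock import Mock

# Hand-rolled stateful mock: check_availability returns a different
# answer on each successive call, simulating inventory changing over time.
inventory_service = Mock()
inventory_service.check_availability.side_effect = [True, False, True, False]

cart_service = ShoppingCartService(inventory_service)

assert cart_service.add_to_cart("sku-1", 2) is True    # available
assert cart_service.add_to_cart("sku-2", 1) is False   # out of stock
assert cart_service.add_to_cart("sku-1", 1) is True    # available again
assert cart_service.add_to_cart("sku-1", 1) is False   # became unavailable
```

Every time the real inventory behavior changes, sequences like this must be updated by hand; that maintenance burden is exactly what HyperTest's traffic-recorded mocks aim to remove.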
I'll now delve into how HyperTest's capability to auto-generate mocks can speed up the work and eliminate the mocking problems we discussed earlier.

1. Isolation of Services for Testing

Isolating services for testing ensures that the functionality of each service can be verified independently of the others. This is crucial for identifying the source of any issue without the noise of unrelated service interactions.

HyperTest's Role: By mocking out third-party dependencies, HyperTest allows each service to be tested in isolation, even in complex environments where services are highly interdependent. This means tests can focus on the functionality of the service itself rather than dealing with the unpredictability of external dependencies.

2. Stability in Test Environments

Stability in test environments is essential for consistent and reliable testing outcomes. Fluctuations in external services (like downtime or rate limiting) can lead to inconsistent test results.

HyperTest's Role: Mocking external dependencies with HyperTest removes the variability associated with real third-party services, ensuring a stable and controlled test environment. This stability is particularly important for continuous integration and deployment pipelines, where tests need to run reliably at any time.

3. Speed and Efficiency in Testing

Speed and efficiency are key in modern software development practices, enabling rapid iterations and deployments.

HyperTest's Role: By eliminating the need to interact with actual third-party services, which can be slow or rate-limited, HyperTest significantly speeds up the testing process. Tests can run as quickly as the local environment allows, without being throttled by external factors.

4. Focused Testing and Simplification

Focusing on the functionality being tested simplifies the testing process, making it easier to understand and manage.

HyperTest's Role: Mocking out dependencies allows testers to focus on the specific behaviors and outputs of the service under test, without being distracted by the complexities of interacting with real external systems. This focused approach simplifies test case creation and analysis.

Let's conclude for now

HyperTest's capability to mock all third-party dependencies provides a streamlined, stable, and efficient approach to testing highly interdependent services within a microservices architecture. It facilitates focused, isolated testing of each service, free from the unpredictability and inefficiencies of dealing with external dependencies, thus enhancing the overall quality and reliability of microservices applications.
- Mitigate API Breakage: Insights from the 2023 Regression Report
05 Min. Read | 9 July 2024
Mitigate API Breakage: Insights from the 2023 Regression Report

APIs are the backbone of modern digital ecosystems, carrying up to 70% of an application's business logic. They enable different software systems to communicate and share data seamlessly. As businesses increasingly rely on APIs to deliver services, the need for robust API testing has never been more critical. Since APIs play such a crucial role in an app, keeping them sane and tested at all times is key to its smooth functioning. Testing not only helps identify issues early in the development process, but also prevents them from escalating into major problems that can disrupt business operations.

The Danger of Regressions

Regressions are changes that unintentionally break or degrade the functionality of an API. If not addressed promptly, regressions turn into bugs that affect the user experience and lead to significant business losses. Common regressions include:

💡 Key Removals: critical data keys being removed.
💡 Status Code Changes: unexpected changes in response codes.
💡 Value Modifications: alterations in expected data values.
💡 Data Type Changes: shifts in data formats that cause errors.

The Study: How We Did It

To understand the current landscape of API regression trends, we drew insights from our own product analytics for the entire year 2023, which revealed a staggering 8.6 million regressions across various sectors. Our report compiles data from multiple industries, including eCommerce/Retail, SaaS, Financial Services, and Technology Platforms.

Methodology

Our analysis involved:
- Data Collection: gathering regression data from diverse API testing scenarios.
- Sectoral Analysis: evaluating the impact of regressions on different industries.
- Root Cause Investigation: identifying the common causes of API regressions.
- Strategic Recommendations: providing actionable insights to mitigate regressions.

Key Findings

⏩ API Regression Trends: A Snapshot

Our study revealed that the sectors most affected by API regressions in 2023 were:
- eCommerce/Retail: 63.4%
- SaaS: 20.7%
- Financial Services: 8.3%
- Technology Platforms: 6.2%

⏩ Common Types of Regressions
- Key Removed: 26.8%
- Status Code Changed: 25.5%
- Value Modified: 17.7%
- Data Type Changed: 11.9%
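To make these four regression categories concrete, here is a minimal sketch of how one might flag them by diffing a recorded API response against a fresh one. This is an illustration only, not HyperTest's implementation:

```python
def find_regressions(recorded, current):
    """Compare a recorded API response against a current one and report
    the four common regression types. Both arguments are dicts of the
    form {"status": int, "body": dict}."""
    issues = []
    # Status Code Changed
    if recorded["status"] != current["status"]:
        issues.append(f"status code changed: {recorded['status']} -> {current['status']}")
    for key, old_value in recorded["body"].items():
        if key not in current["body"]:
            issues.append(f"key removed: {key}")        # Key Removed
            continue
        new_value = current["body"][key]
        if type(new_value) is not type(old_value):
            issues.append(f"data type changed: {key}")  # Data Type Changed
        elif new_value != old_value:
            issues.append(f"value modified: {key}")     # Value Modified
    return issues

# Example: the id changes type, the price key disappears, and the status changes
recorded = {"status": 200, "body": {"id": 1, "price": 9.99}}
current = {"status": 500, "body": {"id": "1"}}
print(find_regressions(recorded, current))
# ['status code changed: 200 -> 500', 'data type changed: id', 'key removed: price']
```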
⏩ Sectoral Metrics: Regressions & Test Runs Analysis
- Total Regressions: Financial Services led (28.9%), followed by Technology Platforms (22.2%).
- Total Test Runs: highest in the SaaS and Financial Services sectors, indicating the critical need for robust testing practices.

⏩ Root Cause Analysis

Our investigation identified the following common causes of API regressions:
- Rapid API Changes: frequent updates leading to instability.
- Server-side Limitations or Network Issues: affecting API performance.
- Bad Data Inputs: incorrect data leading to failures.
- Schema or Contract Breaches: violations of predefined API structures.

Strategic Recommendations

To address these issues, we recommend:
- Building Robust Automation Testing Suites: invest in agile testing tools that integrate well with microservices architectures.
- Testing Real-World Scenarios: simulate actual usage conditions to uncover potential vulnerabilities.
- Adopting a Shift-Left Approach: integrate testing early in the development lifecycle to anticipate and address potential regressions.
- Establishing Real-Time Monitoring: quickly identify and address issues, especially in user-intensive sectors like eCommerce and financial services.

Conclusion

The 2023 State of API Testing Report highlights the critical role of effective regression testing in ensuring robust, reliable APIs. By addressing common causes of regressions and implementing the strategic recommendations above, organizations can significantly reduce the risk of API failures and enhance their development processes. For a deeper dive into the data, trends, and insights, we invite you to download the full report: visit HyperTest's official website to access the complete "State of API Testing Report: Regression Trends 2023". Stay tuned for more insights and updates on the latest trends in API testing. Happy testing!
- 3 reasons why Unit Tests aren't enough
07 Min. Read | 8 March 2024
3 reasons why Unit Tests aren't enough
Shailendra Singh

In the fast-paced world of software development, ensuring code quality and functionality is paramount. Unit testing plays a crucial role in achieving this by verifying individual units of code. However, while unit tests are essential, they have limitations, particularly when it comes to testing the interactions and communication between different services. This is where integration testing steps in. This article explores three key reasons why unit tests alone fall short and why integration testing deserves a prominent place in your development arsenal.

1. Unit Tests Live in Isolation

By design, unit tests focus on individual units of code in isolation. They mock external dependencies like databases or APIs, allowing for focused testing of logic without external influences. While this fosters granular control, it creates a blind spot: the interactions between services. In modern, microservices-based architectures, service communication is the lifeblood of functionality. Unit tests fail to capture these interactions, leaving potential integration issues hidden until later stages of development or, even worse, in production.

Imagine this scenario: your unit tests meticulously validate a service's ability to process user data, but they don't test how the service interacts with the authentication service to validate user credentials. In this case, even a service that functions perfectly in isolation could cause a system-wide failure if it can't communicate with other services properly.

Integration testing bridges this gap: by simulating real-world service interactions, it uncovers issues related to data exchange, dependency management, and communication protocols. Early detection of these integration problems translates to faster fixes, fewer regressions, and ultimately a more robust and reliable system.

Solved Problem with HyperTest:
➡️ HyperTest simulates the responses of outbound calls made by the service under test to its dependent services, including third-party APIs, databases, and message queues.
➡️ Furthermore, it rigorously tests and compares all outbound call requests against a pre-recorded stable version. This comparison not only checks for deviations in request parameters up to the API layer but also extends scrutiny down to the data layer.

2. Mocking limitations can mask integration problems

Unit testing relies heavily on mocking external dependencies. While mocking provides control and simplifies testing logic, it doesn't always accurately represent real-world behavior. Mocks can't perfectly replicate the complexity and potential edge cases of real services.

Here's an example: you mock a database dependency in your unit test for a service that writes data. The mock might return predictable results, but it can't simulate potential database errors or network issues. These real-world scenarios could cause integration issues that would never surface through unit tests alone; see the sketch below.
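As an illustration of that blind spot, here is a minimal sketch using Python's unittest.mock, with all names hypothetical. The happy-path mock always passes, while the variant simulating a database error exposes behavior the first test never exercises:

```python
from unittest.mock import Mock

class OperationalError(Exception):
    """Stand-in for a database driver's error type (illustrative only)."""

def save_user(db, user):
    # Hypothetical service logic: returns True on success.
    # Note: no handling at all for database errors.
    db.insert("users", user)
    return True

# Happy-path mock: the test passes, and the missing error handling stays hidden.
db = Mock()
assert save_user(db, {"name": "Ada"}) is True

# A mock that simulates a real-world failure reveals the gap:
failing_db = Mock()
failing_db.insert.side_effect = OperationalError("connection lost")
try:
    save_user(failing_db, {"name": "Ada"})
except OperationalError:
    print("unhandled database error leaks to the caller")
```

Even this only covers the failure modes a developer thinks to simulate; real dependencies fail in ways hand-written mocks rarely anticipate.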
Integration testing brings real dependencies into play: by interacting with actual services or realistic simulations, it reveals how your code behaves in a more holistic environment. This allows developers to uncover issues that mocking can't capture, leading to a more comprehensive understanding of the system's behavior.

Solved Problem with HyperTest: HyperTest's AI-driven methodology for generating mocks sets it apart. It synchronizes test data with actual transactions and continually updates mocks for external systems. This approach notably improves testing for intricately interlinked services in microservices architectures.
➡️ Isolation of Services for Testing
➡️ Consistency in Test Environments
➡️ Acceleration and Efficiency in Testing
➡️ Streamlined Testing: Focus and Simplification

3. Unit tests miss how errors cascade across your system

Unit tests excel at isolating and verifying individual components, but they can miss the domino effect of failures across services. In a complex system, a seemingly minor issue in one service can trigger a chain reaction of errors in other services that depend on it.

For instance: a unit test might verify that a service successfully retrieves data from a database. However, it wouldn't reveal how a bug in that service's data processing might corrupt data further down the line, impacting other services' functionality.

Integration testing creates a more holistic test environment: by simulating real-world service interactions, it allows developers to observe and troubleshoot cascading failures that wouldn't be evident in isolated unit tests. This proactive approach helps identify and fix issues early in the development lifecycle, preventing them from propagating and causing larger disruptions later.

Solved Problem with HyperTest: HyperTest autonomously identifies relationships between different services and catches integration issues before they hit production.
- Thorough Interaction Testing: HyperTest rigorously tests all service interactions, simulating diverse scenarios and data flows to uncover potential failure points and understand cascading effects on other services.
- Enhanced Root Cause Analysis: HyperTest traces service interactions to pinpoint the root cause of failures, facilitating swift troubleshooting and resolution by identifying the responsible component or service. Through a comprehensive dependency graph, teams can effortlessly collaborate on one-to-one or one-to-many consumer-provider relationships.

Conclusion

Unit testing remains a cornerstone of modern development, providing invaluable insights into code logic, but it's crucial to recognize its limitations. By incorporating integration testing into your development process, you can bridge the gap between unit tests and real-world scenarios. Integration testing with HyperTest fosters a more comprehensive understanding of how your services interact, leading to robust, reliable, and ultimately production-ready software.
- Are we close to having a fully automated software engineer?
05 Min. Read | 12 July 2024
Are we close to having a fully automated software engineer?

Introduction

In the fast-paced world of software development, engineering leaders constantly seek innovative solutions to enhance productivity, reduce time-to-market, and ensure high-quality code. Using language model (LM) agents in software engineering workflows promises to revolutionise how teams approach coding, testing, and maintenance tasks. However, the potential of these agents is often limited by their ability to effectively interact with complex development environments.

To address this challenge, researchers at Princeton published a paper on SWE-agent, an advanced system that maximises the output of LM agents in software engineering tasks using an agent-computer interface (ACI) that can navigate code repositories, perform precise code edits, and execute rigorous testing protocols. We will discuss key motivations and findings from this research that can help engineering leaders prepare for the future GenAI promises to create, one we cannot afford to ignore.

What is the need for this?

Traditional methods of coding, testing, and maintenance are time-consuming and prone to human error. LM agents have the capability to automate these tasks, but their effectiveness is limited by the challenges they face in interacting with development environments. If LM agents can be made more effective at executing software engineering work, they can help engineering managers reduce the workload on human developers, accelerate development cycles, and improve overall software reliability.

What was their approach?

SWE-agent is a system that enables LM agents to autonomously use computers to solve software engineering tasks. Its custom agent-computer interface (ACI) significantly enhances an agent's ability to create and edit code files, navigate entire repositories, and execute tests and other programs. SWE-agent is an LM interacting with a computer through an ACI, which comprises the commands the agent uses and the format of the feedback from the computer. So far, LM agents had mostly been used for code generation with moderation and feedback; applying agents to more complex code tasks like software engineering remained unexplored.

LM agents are typically designed to use existing applications, such as the Linux shell or Python interpreter. However, to perform more complex programming tasks such as software engineering, human engineers benefit from sophisticated applications like VSCode with powerful tools and extensions. Inspired by human-computer interaction research, LM agents can be seen as a new category of end user, with their own needs and abilities. Just as specialised applications like IDEs (e.g., VSCode, PyCharm) make scientists and software engineers more efficient at computer tasks, ACI design aims to create an interface that makes LM agents more effective at digital work such as software engineering. The researchers assumed a fixed LM and focused on designing the ACI to improve its performance; this meant shaping the agent's actions, their documentation, and the environment's feedback to complement an LM's limitations and abilities.
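To give a flavour of what an agent-computer interface means in practice, here is a toy sketch of the interaction loop, with all names invented for illustration; the paper's actual ACI defines far richer commands (file search, viewing, editing) and carefully formatted feedback:

```python
import subprocess

def run_command(command: str, max_output_lines: int = 20) -> str:
    """Execute one agent-issued shell command and return concise feedback.
    Truncating long output mirrors a core ACI design concern: feedback
    must fit the LM's context window and stay easy for it to parse."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    lines = (result.stdout + result.stderr).splitlines()
    if not lines:
        return "(command ran successfully, no output)"
    if len(lines) > max_output_lines:
        lines = lines[:max_output_lines] + [f"... ({len(lines) - max_output_lines} more lines)"]
    return "\n".join(lines)

def agent_loop(issue: str, query_language_model, max_turns: int = 10):
    # The loop: the LM proposes a command, the ACI executes it and feeds
    # the observation back, until the LM decides to submit its patch.
    # query_language_model is a hypothetical stand-in for an LM API call.
    history = [f"ISSUE: {issue}"]
    for _ in range(max_turns):
        command = query_language_model(history)  # e.g. "grep -rn 'bug' src/"
        if command == "submit":
            break
        history += [f"ACTION: {command}", f"OBSERVATION: {run_command(command)}"]
    return history
```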
Experimental Set-up

Datasets: Evaluation is primarily on the SWE-bench dataset, which includes 2,294 task instances from 12 repositories of popular Python packages. The main agent results are reported on the full SWE-bench test set, with ablations and analysis on the SWE-bench Lite test set; SWE-bench Lite is a canonical subset of 300 instances from SWE-bench that focus on evaluating self-contained functional bug fixes. SWE-agent's basic code editing abilities are also tested with HumanEvalFix, a short-form code debugging benchmark.

Models: All results, ablations, and analyses are based on two leading LMs, GPT-4 Turbo (gpt-4-1106-preview) and Claude 3 Opus (claude-3-opus-20240229). The researchers experimented with a number of additional closed and open source models, including Llama 3 and DeepSeek Coder, but found their performance in the agent setting to be subpar. GPT-4 Turbo and Claude 3 Opus have 128k and 200k token context windows respectively, which provides sufficient room for the LM to interact for several turns after being fed the system prompt, issue description, and optionally, a demonstration.

Baselines: SWE-agent is compared to two baselines. The first is a non-interactive, retrieval-augmented generation (RAG) baseline: a retrieval system retrieves the most relevant codebase files using the issue as the query, and given these files, the model is asked to directly generate a patch file that resolves the issue. The second, called Shell-only, is adapted from the interactive coding framework introduced in Yang et al. Following the InterCode environment, this baseline asks the LM to resolve the issue by interacting with a Linux shell process. As with SWE-agent, the model's prediction is generated automatically based on the final state of the codebase after interaction.

Metrics: The main metric is % Resolved (pass@1), the proportion of instances for which all tests pass successfully after the model-generated patch is applied to the repository.

Results

The results demonstrated that SWE-agent, working with its custom agent-computer interface, resolved 7 times more software tasks that pass the test bench than the RAG baseline using the same underlying models (GPT-4 Turbo and Claude 3 Opus), and performed 64% better than Shell-only. This research ably demonstrates the direction agentic architectures are taking (with the right supporting tools) toward making a fully functional software engineer a distant but possible eventuality.

Read the complete paper here and let us know if you believe this is a step in the right direction.
- How to generate mocks for your test without needing mockito?
07 Min. Read | 26 April 2024
How to generate mocks for your test without needing mockito?
Shailendra Singh

📖 Scope of mocking in unit tests

When writing unit tests for code that interacts with external dependencies (like APIs, databases, file systems, or other external services), a developer needs to ensure these dependencies are properly isolated to make tests predictable, fast, and reliable. The key strategies involve mocking, stubbing, and using test doubles (mocks, stubs, and fakes). Developers achieve this using any of the many mocking frameworks out there (Mockito, for example), but HyperTest can auto-generate such mocks or stubs without needing any manual intervention or set-up. This solves several problems:

1. Saves time: developers' valuable time is freed from writing and maintaining mocks and stubs.
2. Maintenance of mocks: hand-written mocks and stubs become stale, i.e. the behavior of the mocked systems can change, which requires rewriting these artefacts. By contrast, HyperTest-generated mocks are updated automatically, keeping them always in sync with the current behaviour of dependencies.
3. Incorrect mocking: when stubbing or mocking with frameworks, developers rely at best on their own understanding of how external systems respond. Incorrect stubbing means testing against unreal behavior and leaking errors. HyperTest, on the other hand, builds stubs and mocks from real interactions between components. This not only ensures they are created with the right contracts, but also that they are updated when the behaviour of dependencies changes, keeping mocks accurate and up-to-date.

So let's discuss the different cases where developers need mocks, how they build them using a framework like Mockito, and how HyperTest automates mocking, removing the need for Mockito.

1️⃣ Mocking External Services: Downstream calls, 3rd party APIs

Mocking involves creating objects that simulate the behavior of real services. A mock object returns predefined responses to function calls during tests. This is particularly useful for external APIs or any service that returns dynamic data.

Example: Suppose a developer of a service named AccountService in a bank depends on the TransactionService to fetch updates on account balances when a user performs a credit or debit transaction on their account. The Transaction API would look something like this:

```bash
curl -X POST 'https://api.yourbank.com/transactions/updateBalance' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {access_token}' \
  -d '{
    "customerId": 3,
    "transactionAmount": 500
  }'
```

The developer needs to test the AccountService class without actually calling the API. To do so with Mockito, they would do the following:

- Mock Creation: mock the TransactionAPI using mock(TransactionAPI.class).
- Method Stubbing: configure the updateBalance method of the mock to return a new BalanceUpdateResponse with the specified old and new balances when called with specific arguments.
- Service Testing: test that AccountService properly delegates to TransactionAPI and returns the expected result.
- Assertions and Verifications: after calling performTransaction, use assertions to check that the returned balances are correct, and verify to ensure that TransactionAPI.updateBalance was called with the correct parameters.
This is how the Mockito implementation would look:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

public class AccountServiceTest {

    @Test
    public void testPerformTransaction() {
        // Create a mock TransactionAPI
        TransactionAPI mockTransactionAPI = mock(TransactionAPI.class);

        // Set up the mock to return a specific response
        when(mockTransactionAPI.updateBalance(3, 500))
            .thenReturn(new BalanceUpdateResponse(10000, 10500));

        // Create an instance of AccountService with the mocked API
        AccountService accountService = new AccountService(mockTransactionAPI);

        // Perform the transaction
        BalanceUpdateResponse response = accountService.performTransaction(3, 500);

        // Assert the responses
        assertEquals(10000, response.getOldBalance());
        assertEquals(10500, response.getNewBalance());

        // Verify that the mock was called with the correct parameters
        verify(mockTransactionAPI).updateBalance(3, 500);
    }
}
```

Mocking external services without needing Mockito

HyperTest eliminates all this effort in a jiffy! Service-to-service interactions, called contracts, are built automatically by HyperTest by monitoring actual interactions, in this case between the AccountService and the TransactionService. How does this happen?

- The HyperTest SDK is set up on both AccountService and TransactionService.
- It monitors all incoming and outgoing calls for both services. In this case, the request-response pair, i.e. the contract between AccountService and TransactionService, is captured by HyperTest.
- This contract is used to mock the TransactionService when testing AccountService, and vice versa.
- Now, when the developer wants to test the AccountService class, the HyperTest CLI builds the AccountService app locally or on the CI server, replays the request, and supplies the mocked response from TransactionService.

The HyperTest SDK, which tests TransactionService and AccountService separately, automatically asserts two things:
- The TransactionAPI was called with the correct parameters by the AccountService.
- The response of the TransactionService, i.e. whether the new balance is 10500 or not; if not, it reports an error.

🚨 TAKEAWAY: HyperTest mocks upstream and downstream calls automatically, something the 20 lines of Mockito code above did by hand. Best of all, it refreshes these mocks as the behavior of the AccountService (requests) or TransactionService (responses) changes.
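The underlying record-and-replay idea can be illustrated with a short, language-agnostic sketch (shown in Python for brevity, with all names hypothetical; HyperTest's actual SDK works by intercepting live traffic rather than through explicit calls like these):

```python
# Record phase: while real traffic flows, capture each outbound
# request together with the response the dependency actually returned.
recorded_contracts = {}

def record(request, real_response):
    recorded_contracts[request] = real_response

# Replay phase: during tests, the dependency is never called;
# the captured response is served as the mock instead.
def replay(request):
    return recorded_contracts[request]

# Capture one real AccountService -> TransactionService interaction
record(("POST /transactions/updateBalance", 3, 500),
       {"oldBalance": 10000, "newBalance": 10500})

# Later, in test mode, the same request gets the recorded response
response = replay(("POST /transactions/updateBalance", 3, 500))
assert response["newBalance"] == 10500
```

Because the contract is captured from real traffic rather than written by hand, it is regenerated whenever the real interaction changes, which is what keeps the mocks from going stale.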
2️⃣ Database Mocking: Stubbing for Database Interactions

Stubbing is similar to mocking but is typically focused on simulating responses to method calls. This is especially useful for database interactions, where you can stub repository or DAO methods.

Example: Now the developer of the TransactionService wants to mock the database layer, for which they create stubs using Mockito. This service retrieves and updates the account balance from the database. The db interface would look like this:

```java
public interface AccountRepository {
    int getBalance(int customerId);
    void updateBalance(int customerId, int newBalance);
}
```

TransactionService:

```java
public class TransactionService {
    private AccountRepository accountRepository;

    public TransactionService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    public BalanceUpdateResponse performTransaction(int customerId, int transactionAmount) {
        int oldBalance = accountRepository.getBalance(customerId);
        int newBalance = oldBalance + transactionAmount;
        accountRepository.updateBalance(customerId, newBalance);
        return new BalanceUpdateResponse(oldBalance, newBalance);
    }
}
```

Unit test with Mockito:
- Mock the database layer AccountRepository.
- Create an instance of TransactionService with the mocked repository.
- Perform the transaction.
- Assert the response.

```java
import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class TransactionServiceTest {

    @Test
    public void testPerformTransaction() {
        // Mock the AccountRepository
        AccountRepository mockRepository = mock(AccountRepository.class);

        // Set up stubs for the repository methods
        when(mockRepository.getBalance(3)).thenReturn(10000);
        doNothing().when(mockRepository).updateBalance(3, 10500);

        // Create an instance of TransactionService with the mocked repository
        TransactionService service = new TransactionService(mockRepository);

        // Perform the transaction
        BalanceUpdateResponse response = service.performTransaction(3, 500);

        // Assert the responses
        assertEquals(10000, response.getOldBalance());
        assertEquals(10500, response.getNewBalance());

        // Verify that the repository methods were called correctly
        verify(mockRepository).getBalance(3);
        verify(mockRepository).updateBalance(3, 10500);
    }
}
```

Mocking the database layer without needing Mockito

The HyperTest SDK, which sits on the TransactionService, can mock the database layer automatically, without needing to stub db responses as explained above with Mockito. The SDK works the same way for database interactions as it does when intercepting outbound HTTP (GraphQL / gRPC) calls to external services. In this example, the TransactionService asks the database two things:

- Query 1: for a given customerId, return the oldBalance (current balance).
- Query 2: update the oldBalance to newBalance for the same customerId.

HyperTest mocks both of these operations for the TransactionService. The outputs are captured as mocks as the SDK observes the actual query in traffic; these are then used when TransactionService is tested. This is what HyperTest does:

- Performs the transaction, i.e. replays the request but uses the captured output as the mock.
- Compares the response and the database query across the RECORD and REPLAY stages, asserting newBalance in both the service response and the db query.

🚨 TAKEAWAY: HyperTest mocks the database layer just like external services, again replacing the 25 lines of Mockito code above and removing the need to stub db responses.

3️⃣ Testing message queues or event streams

Mocking a message queue or event stream is particularly useful in scenarios where you need to test your code's interaction with messaging systems like RabbitMQ, Kafka, or AWS SQS without actually sending or receiving messages from the real system.

Example: Let's say you have a class MessagePublisher that sends messages to a RabbitMQ queue. You can mock this interaction to verify that messages are sent correctly without needing a running RabbitMQ instance.
Java class to test:

```java
public class MessagePublisher {
    private final Channel channel;

    public MessagePublisher(Channel channel) {
        this.channel = channel;
    }

    public void publishMessage(String queueName, String message) throws IOException {
        channel.basicPublish("", queueName, null, message.getBytes());
    }
}
```

This MessagePublisher class uses an instance of Channel from the RabbitMQ client library to publish messages.

Unit test with Mockito:

```java
import com.rabbitmq.client.Channel;
import org.junit.jupiter.api.Test;
import org.mockito.Mockito;
import java.io.IOException;
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;

public class MessagePublisherTest {

    @Test
    public void testPublishMessage() throws IOException {
        // Create a mock Channel
        Channel mockChannel = Mockito.mock(Channel.class);

        // Create an instance of the class under test, passing the mock Channel
        MessagePublisher publisher = new MessagePublisher(mockChannel);

        // Define the queue name and the message to be sent
        String queueName = "testQueue";
        String message = "Hello, world!";

        // Execute the publishMessage method, which we are testing
        assertDoesNotThrow(() -> publisher.publishMessage(queueName, message));

        // Verify that the channel's basicPublish method was called with the correct parameters
        Mockito.verify(mockChannel).basicPublish("", queueName, null, message.getBytes());
    }
}
```

Explanation:
- Mock Creation: the Channel class is mocked. This is the class provided by the RabbitMQ client library for interacting with the queue.
- Method Testing: the publishMessage method of the MessagePublisher class is tested to ensure it calls the basicPublish method on the Channel object with the correct parameters.
- Verification: the test verifies that basicPublish was called exactly once with the specified arguments, ensuring that the message is formatted and sent as expected.

Mocking Queue Producers & Consumers without needing Mockito

HyperTest uses the same technique to capture interactions between a queue producer and consumer. This gives it the ability to verify that the producer is sending the right messages to the broker, and that the consumer performs the right operations after receiving those messages. When testing producers, HyperTest asserts:

- Schema: the data structure of the message, i.e. string, number, etc.
- Data: the exact values of the message parameters.

Here is how HyperTest mocks the broker for the producer: it captures the outbound calls (the expected downstream call) and asserts the outbound messages by comparing the actual message (the real downstream call) with the captured one. In the same way, the HyperTest SDK on the respective consumer tests all side effects by asserting the consumer's operations.

🚨 TAKEAWAY: HyperTest mocks the broker for both the queue producer and the consumer in an actual end-to-end event flow. It can then test the producer and consumer separately. This automates mocking and testing an async event flow without the need for mocking frameworks like Mockito.