A significant part of Trembit's work is related to Video Streaming projects: a set of small to large projects that we work on simultaneously. The projects vary in purpose and scope; a customer may require either a Proof of Concept (PoC) or a complete application in production. It is also worth mentioning that the budget of a Streaming project is usually strictly limited.
Working on Video Streaming projects is quite challenging for QAs. First of all, you need to keep testing two or more projects at a time. Another point is that deadlines are usually quite strict. Generally, such projects should support not only desktop browsers but tablet and mobile devices as well. Under such circumstances, it is difficult to stay focused and be sure nothing is missed while testing. The dynamics of a project are often hard to predict: a quite common situation is when a project is paused and then resumed.
All of the above puts a QA in a position where they have to remember all the ins and outs of the projects. Comprehensive QA documentation might help with this. However, as mentioned above, our customers are mostly startup owners, which implies a limited budget. That is why we can't create, and, what is even more expensive, maintain detailed and solid QA documentation. To solve this problem, we use a lean QA documentation approach: a very limited number of test documents together with strictly defined testing flows. We also pay attention to other QA process details, such as the bug flow and communication within the team. This approach allows us to keep delivering high-quality products.
Test Plan – all-in-one test documentation
On Video Streaming projects we use a Test Plan document, which is not a classic Test Plan but rather an aggregating document for:
- “what to test”
- “how to test”
- “where to test”
- “what is the current testing status”.
It is created per project and is the core document for QAs. Having the Test Plan, any QA who has general knowledge about the project may start testing it.
Management and product owners may also find it useful, as the document provides testing status information.
To keep things as simple as possible, we use Google Sheets, so the doc can be easily shared and accessed from anywhere.
We have a Test Plan template containing some common basic info which can be customized according to the project's needs, some scripts for operating with data within the TP, and a manual on how to use the Test Plan:
Using this template, a dedicated Test Plan is created for every project.
Let’s look closely at a Test Plan sample and what information it contains.
Here you may find the Test Plan for a Dummy project so you can try and test it while reading: https://docs.google.com/spreadsheets/d/1Mk4vd8251NBAQnbFKYAhBhKvHTL0OU21X8–m0sEwx4/edit?usp=sharing
As mentioned above, the document aggregates four main blocks of information:
1. What to test?
Here we have a combination of a checklist and test cases organized into Test Suites. The point is to cover 100% of the functionality with minimum wording, so that a QA is prevented from missing or omitting something during regression. To keep the document lean, we don't describe any common or obvious details such as login/logout, file upload, main user flows, etc.; in other words, things that anyone who works with the project knows. What must be stated here are boundary values, corner cases, limitations, and everything that is not obvious, common, or clear even to a person who is familiar with the product. For instance, if we have file upload functionality, it will be described in the TP in the following way.
It is very important to have all the functionality listed, so nothing is missed during testing. However, it is possible to leave the details column/cell empty for self-explanatory or commonly used functionality.
2. How to test?
This part is crucial for QAs but might also be useful for any team member or company management.
In column A of the document we keep all the input data required for testing: URLs of the different environments, credentials, links to Postman collections, SQL queries useful for modeling preconditions, and contacts of people who can help with specific questions, such as granting permissions. Simply put, everything you may need during testing should be stored here. There is no defined structure for this section; however, it should be clear what is stated (please refer to the example below). You shouldn't rely on the info being self-explanatory.
It is worth mentioning that, due to the sensitive nature of the information in column A, security concerns might arise. This question should be discussed with the customer before the start of the project. Normally, any Trembit employee has access to Test Plans. In case of any concerns, it should be clearly defined how the risks will be managed: whether a VPN will be used, the info will be stored in some dedicated place, or the list of people having access to the Test Plan will be strictly limited.
3. Where to test?
This is the smallest but very important part of the Test Plan.
The table shows which browsers/devices we are testing on (in grey). It should be read as: we support and test only those browsers/devices whose cells are filled with grey here. If customers find issues on devices/browsers which are not 'grey' here, we will reject the ticket or change it to a 'feature request'. It should also be taken into account that every added grey cell increases testing effort linearly.
Considering that, the management and QA team should pay close attention to this table during negotiations with the customer and during test planning.
I also want to go into a bit more detail about how we cover the supported environments during testing. A newly implemented feature is tested on every supported environment explicitly. During regression, however, we cover the environments randomly, running some cases on one device/browser and other cases on another.
Usually we have very tight deadlines for regression testing, so we don't track environment coverage per test run. However, this might be discussed additionally with the customer.
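As an illustration, the idea of shifting case-to-environment pairings between regressions can be sketched in a few lines of Python. This is a hypothetical sketch, not a script we ship with the Test Plan, and the case and environment names are made up:

```python
def assign_environments(test_cases, environments, run_index):
    """Pair each test case with a supported environment, shifting the
    pairing by run_index so that over consecutive regression runs every
    case eventually lands on every environment."""
    assignments = {}
    for i, case in enumerate(test_cases):
        # The offset changes per run, so the same case is not always
        # checked on the same device/browser.
        assignments[case] = environments[(i + run_index) % len(environments)]
    return assignments

cases = ["login", "file upload", "video call", "chat"]
envs = ["Chrome/desktop", "Safari/iPad", "Chrome/Android"]
print(assign_environments(cases, envs, run_index=0))
print(assign_environments(cases, envs, run_index=1))
```

Printing two consecutive runs shows each case paired with a different environment than in the previous run, which is exactly the rotating coverage described above.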
4. What is the current testing status?
The purpose of the test run history is to provide an overview of how the project's quality changes throughout development. There is no aim to duplicate Jira here. The test runs should give a feeling of where the project is on the 'good-bad' scale: the red-to-green ratio conveys the project's health status.
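For illustration, the red-to-green reading can be expressed as a tiny Python helper. The status strings match the cell values described in this section, but the helper itself is a sketch and not part of our template:

```python
def run_health(results):
    """Rough red-to-green ratio for one test run.

    'not tested' cells are ignored; returns None for a run where
    nothing could be executed (e.g. a fully blocked run)."""
    executed = [r for r in results if r in ("passed", "failed")]
    if not executed:
        return None
    return sum(r == "passed" for r in executed) / len(executed)

# Two of the three executed cases passed, so the ratio is about 0.67.
print(run_health(["passed", "failed", "not tested", "passed"]))
```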
QAs should use this part of the TP to decide what should be run and what might be omitted for now. As I mentioned above, we often work within a tight time frame delivering a release and can't perform a complete regression. Hence, we run the smoke tests plus some other cases. Looking at the screenshot below, you will get an idea of which test cases should be chosen for the nearest run (I have highlighted them in yellow).
The 'not tested' values should form a checkerboard pattern throughout the test run history.
Naturally, some tests are easier to run, while others require preconditions or additional steps, which makes them 'outsiders' during test run selection. This is a human factor that is difficult to manage. But when the picture shows that a case hasn't been tested for several regressions in a row, you can't ignore it.
We adopted this approach after a single but very painful mistake. There was a case requiring hardware preconditions that were very time-consuming to set up. So the test runs looked like this.
Eventually, the customer found a major bug in production that was explicitly described by that test case; because of the human factor, however, it hadn't been tested. The QA team looked very bad in that situation. That's why the lesson hit close to home: your Test Plan cannot have long 'not tested' lines, let alone 'not tested' lines running through the whole test run history.
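To catch such gaps early, a check like the following could be run over the test run history. This is a hypothetical sketch under the assumption that each row maps a case name to its column values ('passed', 'failed', or 'not tested'); we do this by eye in the sheet rather than with a script:

```python
def stale_cases(history, max_skipped=3):
    """Return cases whose last `max_skipped` runs are all 'not tested',
    i.e. the long grey lines that must not appear in a Test Plan."""
    flagged = []
    for case, results in history.items():
        recent = results[-max_skipped:]
        if len(recent) == max_skipped and all(r == "not tested" for r in recent):
            flagged.append(case)
    return flagged

history = {
    "file upload limit": ["passed", "not tested", "not tested", "not tested"],
    "login":             ["passed", "passed", "not tested", "passed"],
}
print(stale_cases(history))  # ['file upload limit']
```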
To make this approach work, QAs should diligently mark test cases as tested or not tested. From my own experience, I know the common habit of testing a Test Suite from memory and then just marking all tests passed. This is wrong. You should read what each test is about and leave it 'not tested' if you didn't check exactly that case.
Other rules of test runs are:
– Failed tests must have the 'comments' cell filled. Usually the comment is a Jira ticket number, but a plain-text description may be used.
– Comments or Jira ticket numbers may also be set for passed test cases. That means the attached Jira ticket is a low-priority issue, a clarification, or something else that doesn't fail the test.
– Some test run information should be stated in the header of a test run. Usually it is the date, the test run purpose, and the environment.
– A test run should be created for every regression and for the initial testing of every big new feature.
– A test run might also be created when a solid batch of bug fixes has been delivered.
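The first of these rules lends itself to an automated check. Here is a minimal sketch, assuming each run is a mapping of case name to a (status, comment) pair; the data shape and ticket number are illustrative, not our actual tooling:

```python
def lint_test_run(results):
    """Check the 'failed tests must have a comment' rule for one run.

    `results` maps case name -> (status, comment); returns a list of
    rule violations, empty if the run is clean."""
    problems = []
    for case, (status, comment) in results.items():
        if status == "failed" and not comment:
            problems.append(f"{case}: failed without a comment/ticket")
    return problems

run = {
    "video call":  ("failed", "PROJ-123"),  # hypothetical ticket number
    "file upload": ("failed", ""),
    "login":       ("passed", ""),
}
print(lint_test_run(run))  # ['file upload: failed without a comment/ticket']
```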
As mentioned above, the test run history part of the Test Plan should not duplicate Jira but provide an understanding of 'where the project is'. Hence, QAs may use any custom approach to populating the test runs, as long as it is handy and easy for the team and management to use.
It is also not good if there are no red ('failed') cells anywhere in the test run history. That means QA creates runs only for regressions and only after all the found bugs have been fixed. There is nothing critical for the project in that kind of approach, but the test run creation turns out to be a waste of time, providing no information in the end.
There was a case like that on one of our projects. From the project chat, I knew there was a total disaster due to some technical issue: QAs were regularly blocked by major bugs. Sometimes those bugs were not even raised as separate Jira tickets and were fixed within the scope of other developers' tasks. However, when I opened the Test Plan, I saw a peaceful green picture. That was because the QAs couldn't run any tests until the blocker was fixed, so nothing was populated in the test run. Later, when they were unblocked, major issues didn't happen anymore until the next blocker.
This is a case where QA should think about how to show the real situation in the Test Plan. For instance, it is possible to mark the whole test run column red and add a comment in the test run header. Then, when the blocker is fixed, a new, green test run may be added. Having those totally red columns each time a major blocker happens would provide a true picture of the current product quality.
Bugs and clarifications
Bugs are the way QAs and developers communicate about a feature. It is vital for the development process to ensure that this communication is smooth. In the worst case, the team can spend hours clarifying details and making and reverting code changes just because of miscommunication. That is why I believe there should be a strictly defined bug template. Also, a detailed step-by-step description should always be provided. Some introductory steps may look redundant when you are deep in feature testing. However, they will help a lot when the bug is taken into work a couple of months after it was raised.
It is also important to pay attention to accuracy, as an innocent grammar mistake may change the whole context of the description. Moreover, accuracy in the bug's description means a lot when the Project Manager (PM), Business Analyst (BA), or Technical Writer (TW) creates the release notes. Having the issues described correctly, they can simply take the list of delivered tickets from Jira, which saves a lot of time.
So, the following bug is considered to be written well:
Quite a common situation is when a QA sees incorrect behavior, but it is not clear what behavior is expected. Of course, such questions should be clarified with the PM or BA before raising a Jira ticket. However, as we switch between projects quite often, the point may get lost if you can't get the answer right away. To prevent this, we have 'clarification' Jira tickets. They have a defined template that is quite similar to the bug template. Clarifications are always assigned to the PM or BA. The assignee edits the ticket, providing the expected result, updates the title, and assigns it to the right team member.
Here is an example of a clarification ticket.
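For teams that raise clarifications through the Jira REST API rather than the UI, the ticket body could be assembled along these lines. This is a sketch only: the project key, issue type name, and assignee id are placeholders, and the exact assignee field shape depends on your Jira version:

```python
def clarification_payload(summary, steps, actual, assignee_account_id):
    """Build a Jira REST API v2 issue payload for a clarification ticket.

    The expected result is deliberately left for the PM/BA to fill in,
    mirroring the clarification flow described above."""
    description = (
        "Steps to reproduce:\n"
        + "\n".join(f"# {step}" for step in steps)  # '#' is Jira's numbered-list markup
        + f"\n\nActual result: {actual}"
        + "\n\nExpected result: (to be filled in by PM/BA)"
    )
    return {
        "fields": {
            "project": {"key": "PROJ"},        # placeholder project key
            "summary": f"[Clarification] {summary}",
            "description": description,
            "issuetype": {"name": "Task"},     # or a custom 'Clarification' type
            "assignee": {"accountId": assignee_account_id},  # Jira Cloud shape
        }
    }

payload = clarification_payload(
    "Upload button disabled for .mov files",
    ["Open the upload page", "Select a .mov file"],
    "The Upload button stays disabled",
    "pm-account-id",  # placeholder
)
# The payload would be POSTed to <jira-base-url>/rest/api/2/issue
print(payload["fields"]["summary"])
```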
Devices for testing
The majority of our projects support mobile and tablet devices. To perform cross-device testing, we use the https://browserstack.com service. However, it doesn't cover all our needs: the core of all our projects is video and audio chats, which require access to a device's cameras and microphones. Hence, we keep a set of physical devices in the office. All information about the devices, their availability, and test accounts is tracked in a single Google document, which is updated whenever something changes in the lab.
Communications within the team
Our QA team has two kinds of meetings that are not tied to any project: weekly one-to-one meetings with the team lead and QA talks.
Weekly one-to-one meetings are needed because we don't interact much during the day, as everyone is focused on different projects. A meeting takes between 15 and 30 minutes. It is not much, but it contributes a lot to the team's health.
QA talks are meetings for knowledge sharing and discussions. They are obligatory for the QA team, but managers from other teams are free to join. We also use QA talks to cover requests from the business about new testing techniques or approaches we don't practice yet. In that case, a volunteer investigates the topic and prepares a lecture or a series of lectures for QA talks. The point of QA talks is to end up with handy, easy-to-use instructions on the covered topic. Usually, those instructions take the form of Test Suites and are posted to the Test Plan template. Later, anyone can take those Test Suites, modify them according to their project's needs, and add them to the project's Test Plan. We have been conducting QA talks for almost a year already, and the feedback from the team and management is quite positive.
While maintaining our current QA process, we are constantly working on improvements, and they are not limited to polishing minor issues. We are now in the process of strengthening our audio/video quality testing capabilities. We are sure our customers will benefit a lot from the planned improvements.