Before you start your class, make your Ed-tech app pass!

The present Ed-tech landscape is full of ingenious solutions that positively impact a learner’s growth while paving the way for path-breaking innovations at a remarkable pace. 

Ed-tech now fulfills a promise that’s far-reaching, going beyond bridging the connectivity divide. Schools and teachers have already turned to AI solutions, digitizing the classroom experience for sustained learning. Startups are connecting professors from around the world with learners who want to upgrade their skills using a number of collaboration tools. Assessment software is leveraging AI to power personalized learning and offering critical insights into students’ progress based on data analytics. 

Then, there is also social learning that encourages collaboration & networking between learners to solve problems together.  

From online learning platforms using gamified learning to enhance productivity, to behavioural-science-driven chatbots that take a layered approach to student communication, to schools & universities using data analytics for a broader glimpse into students’ learning curves, the now of Ed-tech reflects collaboration, inclusivity, accessibility and, most important of all, uninterrupted learning. It’s a mammoth of a promise. How do you sustain it? And raise the bar simultaneously?

Key challenges that our clients face

  1. Poor performance when the traffic is at its peak

    The performance of your software serves a plurality of purposes for different students, and many factors need to be considered to ensure that it fulfills those purposes for its intended audience. Visually rich, interactive content often takes longer than usual to load, especially under stress. Availability of content in different media formats further adds to the load. 

    What happens when students remain logged in for extended hours during long study sessions? How does your app handle the stress of thousands of students logging in simultaneously before an exam or when they take their assessment tests? Everyday scenarios like these give your app a chance to shine or sink. 
  2. Pre-recorded or live, performance issues make it a lost cause for the teacher and the child

    Teachers often face audio issues while capturing their pre-recorded lessons, with no mechanism to know whether the students have watched them. Even during live streaming, technical glitches keep teachers from closing distracting tabs. Harnessing the potential of interactive audio-visual teaching becomes an attempt in vain due to substandard delivery and inefficient resources.
  3. (Un) availability of learning content

    Students log in from different time zones, networks, bandwidths, browsers, operating systems and devices. From the most sophisticated feature to the simplest detail, the app should perform as expected across all of these variables, any day, any time. However, much of the time, the content isn’t seamlessly accessible.
  4. Designing for accessibility but not implementing flawlessly

    When you make your app accessible for all learners, ensuring that you solve their challenges is crucial for an impactful learning experience. Apps frequently report video captions that are out of sync or lines that are missed entirely. Offering content in different formats for dyslexic students, such as text, audio and graphics, only to have that content fail to load properly, is another hurdle. 
  5. Content workflow (mis) management 

    Content workflow management is essential for successful interaction between the learner and the educator. Every day, diverse interactions happen between the two parties wherein they share different forms of media back and forth. The greater the amount and richness of interactions, the greater the chances of errors. When a student uploads their homework, is it readily accessible to the teacher? When the teacher resumes a lesson, does it start from the last reading point? These minute yet significant details make or break the user experience. 
  6. Data Security

    The risk of an information breach still looms and needs to be tackled efficiently. In 2020, a major online learning platform’s user data was breached. Is your threat intelligence smart enough to identify the current vulnerabilities that need immediate attention and prioritize tackling security gaps one by one?

Whether your software runs into these challenges, and whether they end up disrupting its performance, comes down to TESTING.

How does Testing help pave the way?

The purpose of your app is to motivate learners to learn. Poor performance does the opposite. Performance and load testing sets up a list of KPIs (key performance indicators) based on the learning environment, the nature of the learner, the organization and the learning arena. It tests the app’s functionality under normal load as well as heavy stress conditions. 
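As a rough illustration of what a load test measures, the sketch below fires many simulated student requests concurrently and reports a 95th-percentile latency KPI. It is a minimal sketch under assumptions: `fetch_lesson` is a hypothetical stand-in for a real HTTP call to your app, and the student counts and timings are invented for illustration.

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

def fetch_lesson(student_id):
    """Hypothetical stand-in for a real HTTP call to the learning app."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated server work
    return time.perf_counter() - start

def run_load_test(concurrent_students=50, requests_per_student=4):
    """Fire many simulated logins at once and collect latencies."""
    with ThreadPoolExecutor(max_workers=concurrent_students) as pool:
        latencies = list(pool.map(
            fetch_lesson, range(concurrent_students * requests_per_student)))
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95) - 1]  # 95th-percentile latency
    return {"requests": len(latencies), "p95_seconds": p95}

result = run_load_test()
print(result["requests"])  # 200
```

In a real load test the stub would be replaced by actual requests against a staging environment, and the KPI thresholds would come from the learning context described above.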

Similarly, the user experience hinges on compatibility testing. It is the key to ensuring that your live-streaming video keeps working when a student moves from high to low network connectivity, and that layout and styling remain consistent across different networks, browsers, devices and bandwidths, amongst other cases. 


Automated software testing is your answer to ensure effectiveness of learning methods while frequently testing new features and shortening your launch cycles.

From lesson conceptualization to lesson delivery, success of each step and performance for each stakeholder is dependent on quality testing. Your learners deserve an uninterrupted experience for happy learning. Build it for them. Test for the best!

Why is API Testing indispensable for the success of your product?

Introduction to API Testing

This article provides detailed information on API Testing. An Application Programming Interface (API) is a set of routines and instructions that allows interaction between two components of a software application, or between entirely separate software systems. It consists of a set of routines, protocols, and tools for building software applications.

For example, when you use a mobile application to check the weather, the application connects to the internet and sends a request to a defined server. The server receives the request, interprets it, collects the necessary information, and sends it back to your phone. The application grabs the response and presents the information to you in a readable format. The complete transaction happens via an API.
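The transaction above can be sketched in a few lines of Python. Everything here is hypothetical: `weather_server` stands in for the remote server (in reality the request would travel over HTTP), and the endpoint, fields and values are invented for illustration.

```python
import json

def weather_server(request):
    """Stand-in for the remote server: interprets the request and
    returns a JSON body, as a real weather API would."""
    city = request["params"]["city"]
    data = {"city": city, "temp_c": 21, "condition": "Cloudy"}
    return {"status": 200, "body": json.dumps(data)}

def check_weather(city):
    """The mobile app's side of the transaction: build a request,
    send it through the API, and present the response readably."""
    request = {"method": "GET", "path": "/v1/weather", "params": {"city": city}}
    response = weather_server(request)   # travels over the network in reality
    payload = json.loads(response["body"])
    return f"{payload['city']}: {payload['temp_c']}°C, {payload['condition']}"

print(check_weather("Delhi"))  # Delhi: 21°C, Cloudy
```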

What is API Testing?

API testing is a type of software testing that involves testing APIs directly, and as part of integration testing, to check whether the API meets expectations in terms of functionality, reliability, performance, and security. In API testing the main focus is on the business logic layer of the software architecture. It can be performed on any software system that contains multiple APIs, and it does not concentrate on the look and feel of the application; API testing is entirely different from GUI testing.

Testing APIs directly and as a part of integration testing to check whether the API meets expectations in terms of functionality, reliability, performance, and security of an application.

Different types of APIs

Web APIs

Can be accessed using the HTTP protocol. The API defines endpoints, and valid request and response formats. Web APIs include the APIs used to communicate with the browser. They may be services such as web notifications and web storage. Different web APIs feature varying levels of security and privacy, including open, internal and partner APIs. Multiple web APIs can be combined into a composite API – a collection of data or service APIs.

Open/Public/External APIs

Open APIs, also known as External or Public APIs, are available to developers and other users with minimal restrictions. They may require registration, and use of an API key, or may be completely open. They are intended for external users (developers at other companies, for example) to access data or services.

Internal APIs

In contrast to open APIs, internal APIs are designed to be hidden from external users. They are used within a company to share resources. They allow different teams or sections of the business to consume each other’s tools, data and programs. Using internal APIs has several advantages over conventional integration techniques, including security and access control, an audit trail of system access, and a standard interface for connecting multiple services.

Partner APIs

Partner APIs are exposed by/to strategic business partners. They are not available publicly and need specific entitlement to access them. A partner API, available only to specifically selected and authorized outside developers or API consumers, is a means to facilitate business-to-business activities.

Composite APIs

Composite APIs allow developers to access several endpoints in one call. These could be different endpoints of a single API, or they could be multiple services or data sources. Composite APIs are especially useful in microservice architectures, where a user may need information from several services to perform a single task. Using composite APIs can reduce server load and improve application performance, as one call can return all the data a user needs.

Need of API Testing

More IT companies are moving towards microservices. Microservices let each section of an application have its own datastore and its own commands for operating on it. Companies prefer microservices because they allow quick deployment, which in turn makes the development process smoother. APIs play an important role here: each section of the application receives its commands through the API alone. Hence, API testing is a must, because it helps identify errors and bugs at a very early stage of development. Through API testing, we also learn whether the API is effectively interacting with all the sections of the code; here, testers validate the response of the API.

Types of API Testing

Functional Testing

Functional testing covers selected functions of the application based on the codebase. The API functions are tested with specified parameters to ensure that they work well within the application once it reaches the target audience’s hands.

User Interface Testing

User Interface testing examines how easily users can access the application. This test focuses on the interface that connects to the API. Moreover, it gives a verdict on the usability, health, accessibility, and efficiency of the application.

Security Testing

Security testing is essential within API testing practices, ensuring that the app is safe from external threats. Some of the aspects checked within security testing are encryption validation, API design for access control, user rights management, and others.

Load Testing

Load testing is performed to make sure that the entire codebase can withstand heavy load. All theoretical assumptions about the load-bearing capacity of the application are also verified. Hence, load testing checks the performance of the application under both normal and peak conditions.

Runtime & Error Detection

Runtime and error detection measures how the API actually behaves while running. This testing technique monitors app performance and identifies errors, resource leaks, and other such issues. The detected errors are rectified and fixed to ensure that there will be no runtime breakdown.

Validation Testing

Validation testing comes in the final steps of the development process. It is carried out to verify the product’s behavior, efficiency, and other such aspects. Hence, this testing assures that the application is correctly developed.

Fuzz Testing

Fuzz testing supports the security audit by feeding the API unexpected or malformed inputs to provoke negative behaviors or forced crash situations. This test ensures that the API’s limits are adequate for tackling worst-case scenarios.

Penetration Testing

Penetration testing is a type of in-depth testing used to find vulnerabilities within an application and save it from potential attackers.

Benefits of API Testing

  1. Access to application without user interface:
    The core advantage of API testing is that it provides access to the application without users actually having to interact with a potentially disparate system. This helps the tester detect and recognize errors early, instead of them becoming larger issues during GUI testing.
  2. Protection from malicious code and breakage:
    API tests use extraordinary conditions and inputs, which protects the application from malicious code and breakage. Basically, API tests push software to its connective limits, which helps remove vulnerabilities.
  3. Time efficiency vs functional and validation testing:
    API testing is far less time-consuming than functional and validation testing. 10,000 automated API tests save 3 hours of time on average vs. functional and validation testing.
  4. Reduces Testing Cost:
    API test automation requires less code than GUI automated tests thus providing faster test results and better test coverage. The end result of faster testing is a reduction in overall testing costs. Testing the API level functionality of the application provides an early evaluation of its overall build strength before running GUI tests. Early detection of errors reduces the manual testing cost. API test automation increases the depth and scope of the tests.
  5. Technologically Independent:
    In an API test, data is interchanged using XML or JSON and composed of HTTP requests and responses, all of which are technology-independent. Thus an API test allows you to select any core language when using automated API testing services for your application.

What exactly needs to be verified in API testing? 

Basically, in API testing, we send a request to the API with known data and analyze the response, verifying:

  • Data accuracy
  • HTTP status codes
  • Response time
  • Error codes in case API returns any errors
  • Authorization checks
  • Non-functional testing such as performance and security testing
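A sketch of how several of those checks might be automated, assuming a response has already been captured as a plain dictionary. The field names, time limit, and canned responses below are all hypothetical:

```python
def verify_response(response, required_fields, max_ms=500):
    """Run basic checks against one (canned) API response.
    Returns a list of failed checks; an empty list means it passed."""
    failures = []
    if response["status"] != 200:                      # HTTP status code
        failures.append(f"unexpected HTTP status {response['status']}")
    if response["elapsed_ms"] > max_ms:                # response time
        failures.append(f"slow response: {response['elapsed_ms']} ms")
    for field in required_fields:                      # data accuracy
        if field not in response["body"]:
            failures.append(f"missing field: {field}")
    return failures

# Canned responses standing in for real API calls:
ok = {"status": 200, "elapsed_ms": 120, "body": {"id": 7, "name": "Asha"}}
bad = {"status": 500, "elapsed_ms": 900, "body": {}}

print(verify_response(ok, ["id", "name"]))   # []
print(len(verify_response(bad, ["id"])))     # 3
```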

Challenges in API Testing

  • Selecting proper parameters and their combinations.
  • Categorizing the parameters properly.
  • Sequencing calls properly, since improper sequencing leads to inadequate coverage in testing.
  • Verifying and validating the output.
  • Providing input values, which is quite difficult in the absence of a GUI.

Types of bugs while performing API Testing

  • Stress, performance, and security issues
  • Duplicate or missing functionality
  • Reliability issues
  • Improper messaging
  • Incompatible error handling mechanism
  • Multi-threaded issues
  • Improper errors

API Testing best practices

  • Test for the expected results
  • Add stress to the system by sending a series of API load tests
  • Group API test cases by test category
  • Create test cases with all possible input combinations for complete test coverage
  • Prioritize API function calls to make it easy to test
  • Create tests to handle unforeseen problems
  • Automate API testing wherever it is possible

API Testing Tools

Popular tools which can ease the API testing process are:

  • Postman
  • SoapUI
  • Katalon Studio
  • Apigee
  • Tricentis Tosca
  • JMeter
  • Rest-Assured

Conclusion

API consists of a set of classes/functions/procedures which represent the business logic layer. If the API is not tested properly, it may cause problems not only in the API application but also in the calling application. It is an indispensable test in software engineering.


Compatibility Testing today, for saving your costs tomorrow

Introduction to Compatibility Testing

Compatibility testing is a non-functional testing technique, generally performed to validate and verify the compatibility of a developed software product or website with various other objects, such as web browsers, hardware platforms, operating systems, mobile devices, network environments, etc. It is performed during the early stages of quality assurance. Compatibility testing enables the team to ensure that the compatibility requirements requested by the client are fulfilled and built into the end product.

Compatibility testing enables the team to deliver a software product that works seamlessly across various configurations of the software’s computing environments and offers consistent experience and performance across all platforms.

Need of Compatibility Testing

In today’s market, expectations of software applications are high when it comes to quality standards and compatibility with the complete ecosystem of devices, browsers and operating systems. This is achieved by opting for a compatibility testing service, which detects errors before the product is delivered to the end user. Testing confirms that the product meets all the end-user requirements.

A quality product in turn improves the reputation of the firm and propels the company to success. It also boosts the sales and marketing efforts that bring delight to the customer. In addition, compatibility testing confirms the workability and stability of the software, which is of much importance before its release.

Testing confirms that the product meets all the end-user requirements. In addition, compatibility testing also confirms the workability and stability of the software.

Categories of Compatibility Testing

Hardware

It checks if software is compatible with different hardware configurations.

Operating Systems

It checks if software is compatible with different operating systems such as Windows, Unix, Linux, Mac OS, etc.

Software

It checks if developed software is compatible with other software. For example, MS Word application is compatible with other software like MS Outlook, MS Excel, etc. 

Network

It checks the performance of a software in a network with varying parameters such as Bandwidth, Operating Speed, Capacity, etc.

Browser

It checks the compatibility of a website with different browsers like Firefox, Google Chrome, Internet Explorer, Safari, etc. 

Devices

It checks the compatibility of the software with different devices like USB port devices, printers & scanners, other media devices, and Bluetooth.

Mobile

It checks if software is compatible with mobile platforms like Android, iOS, etc.

Versions

It verifies the compatibility across various versions of OS across devices.

Types of Compatibility Testing

Backward Compatibility Testing

It is a technique to verify the behavior and compatibility of developed hardware or software with older versions of that hardware or software. Backward compatibility testing is more predictable, as all the changes from the previous versions are known.

Forward Compatibility Testing

It is a process to verify the behavior and compatibility of the developed hardware or software with the newer versions of the hardware or software. Forward compatibility testing is a bit hard to predict as the changes that will be made in the newer versions are not known.
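One small, hypothetical example of designing for both directions: a settings reader that tolerates files written by older and newer versions of an app. The keys and defaults are invented for illustration; the point is the pattern a compatibility test would exercise.

```python
def load_settings(raw, defaults):
    """Read settings written by any version of the app:
    - keys missing from an older file fall back to defaults (backward compatible)
    - unknown keys written by a newer version are ignored, not fatal (forward compatible)
    """
    settings = dict(defaults)
    for key, value in raw.items():
        if key in defaults:          # only accept keys this version understands
            settings[key] = value
    return settings

DEFAULTS = {"theme": "light", "autosave": True}

old_file = {"theme": "dark"}                     # written by v1, no autosave key
new_file = {"theme": "dark", "ai_hints": True}   # written by v3, unknown key

print(load_settings(old_file, DEFAULTS))  # {'theme': 'dark', 'autosave': True}
print(load_settings(new_file, DEFAULTS))  # {'theme': 'dark', 'autosave': True}
```

A backward compatibility test would feed the reader old files; a forward compatibility test, as the text notes, is harder because future keys like `ai_hints` can only be guessed at.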

Compatibility Testing Process

  1. Design Test Cases & Configuration:

    During this stage of the process, the team designs the different test cases and configurations.
  2. Establish Test Cases & Environment:

    Team establishes the environment for testing, wherein the compatibility of software will be tested and verified.
  3. Result Analysis & Reporting:

    Any defects, issues, bugs or discrepancies noticed by the team during this phase are recorded and reported.
  4. Rectification & Retesting:

    The responsible team rectifies and resolves the issue and retests the software, to validate the accuracy of the process.

Common Defects 

  • Modifications or changes in the UI.
  • Any changes in the font size.
  • Issues related to alignment can hamper the effectiveness as well as the compatibility of the software.
  • Changes in the CSS Style and color.
  • Any broken or incomplete tables or frames in the software.
  • Defects or issues related to scrollbars.

Benefits of Compatibility Testing

  • It helps to detect errors in the software product before it is delivered to the end users.
  • Improves the process of software development, as it tackles all compatibility related issues.
  • Team can validate that the software meets the business and user requirements and is optimized for quality.
  • It reduces the future help desk cost, mainly incurred for customer support for various compatibility issues.
  • It helps to test the product’s scalability, stability, and usability.
  • It ensures there is no loss of business when a potential customer reaches the organization from any platform.

Compatibility in IoT

IoT is growing in many different directions, with many different technologies competing to become the standard. It is important to check hardware-software compatibility in an IoT system, as there are lots of devices that can be connected through it.

These devices have varied software and hardware configuration, protocol, product versions and OS. Therefore, the possible combinations are huge.
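To get a feel for how quickly those combinations explode, here is a tiny sketch that enumerates a hypothetical IoT test matrix (the devices, operating systems, protocols and versions are made up):

```python
from itertools import product

# Hypothetical test matrix for one small IoT deployment
devices   = ["thermostat", "camera", "door-lock", "hub"]
os_list   = ["Android", "iOS", "embedded-Linux"]
protocols = ["MQTT", "HTTP", "BLE"]
versions  = ["v1.2", "v2.0"]

combinations = list(product(devices, os_list, protocols, versions))
print(len(combinations))  # 4 * 3 * 3 * 2 = 72
```

Even this toy matrix yields 72 configurations; real deployments multiply further with hardware revisions and firmware builds, which is why teams usually prioritize a representative subset rather than testing every combination.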

Automation Tools

Popular tools which can ease the compatibility testing process are:

  1. CrossBrowserTesting.com
  2. LambdaTest
  3. Ranorex Studio
  4. Browsershots
  5. TestComplete
  6. Turbo Browser Sandbox
  7. Browsera
  8. pCloudy
  9. Selenium

pCloudy is our own compatibility testing platform that helps users perform testing on more than 5000 device-browser combinations. (For more details refer https://www.pcloudy.com)

Selenium Grid helps run automated scripts in parallel across a grid of desktop browsers and real mobile devices.

Conclusion

The main intention behind compatibility testing is to make sure that the software works well across different kinds of platforms, software, configurations, browsers and hardware.

Performing compatibility testing reduces the gross error of the software. Thus, this comparatively inexpensive process is a boon to ensure that your product is a success.

Crestech Logo Launch

Logo Image

We have a new identity! Our new logo smiles at you in all its rich purple glory. Now we unpack the context of how it came to be. 

Back in 2005, we began Crestech with a clear vision of bringing quality into software releases. 16 years in business, 1600 successful projects in our kitty, 300+ customers who trust us and 200+ team members with whom we built this dream. Big numbers and a long list. Long enough for us to ask the question: what’s more? 

We knew we didn’t need to look any further than inwards, baring our raison d’être and dissecting it to its simplest detail. To look inwards is to ask questions like: what do we mean to our clients? How does our team feel working with us? We do what we do, but why? When we get past the run-of-the-mill details, we tend to see the bigger picture. The perspective of our clients is rooted deeply in the quality of our services. To be Crestech for them is to be agile, dependable, value-oriented, all ears and open-eyed. For our teammates, who spend a third of their daily lives working with us, we strive to create a safe space of belonging, in voice and in action. And yet, when we think of what keeps us inspired every day, what emerges most strongly is the satisfaction we drive: with the people we work, for the people we work for. It’s as simple as that. The happiness that we deliver and the faces we make smile. 

“With every smile generated or delivered to a client, or a coworker, or maybe a business companion, we believe we are one step closer towards our goal”.

When we began to give form and shape to our new identity, there was so much to say and it took a series of conversations to put it all together.

From laying down our branding blocks (what do we envision ourselves to be?) to spelling out our brand substance (our strategy and expression), we moved from a successful discovery workshop to the next phase of the work in progress.


Over 10 hours of meetings, 15 hours of brainstorming, 5 hours of disagreements and one sweet shared moment of agreement led to giving the final keywords to infuse into different brand expressions. What dictated this all along? A deeper dig into our ideology.

“Simplicity is better than elaborate embellishment. We have ‘less is more’ ideology as the foundational pillar of anything that comes in or goes out of Crestech”

There, we knew what’s more. For us, it’s less.

Hello world, say hi to our new crest. It’s purple. We have embraced a colour palette of soothing colours, our typeface is more flexible, we are bolder and sharper. Oh, there’s one more- we serve with a smile. Always.
We are

Logo

5 Key Elements of Scaled Agile Framework

The Scaled Agile Framework® (SAFe) is an online knowledge base of proven principles for applying Lean-Agile practices (continuous delivery and improvement) at the enterprise level. It provides a simple and lightweight experience for the software development team.

SAFe is most popular among enterprise organizations because many of its facets focus on eliminating the common challenges teams face when scaling agile. It was developed in 2011 to help software development teams bring better-quality products to market at a faster pace, and was originally called the “Agile Enterprise Big Picture” by software-industry veteran Dean Leffingwell, who published the bestselling book Agile Software Requirements. Before SAFe, when we built large and complex systems using Agile methodology, the results were delayed delivery and quality that was not that great; as a result, the customer experience suffered too. SAFe tries to address these issues, and software testing companies who have adopted the framework have shown amazing results.

When to Use Scaled Agile Framework

SAFe is used to fix the following inefficiencies:

  • Difficulty in coordinating multiple teams working on a large-scale project
  • Coping with longer planning horizons
  • Increased effort in keeping track of multiple sources of requirements
  • Un-mapped dependencies creating unexpected issues and obstacles

SAFe Core values

1. Alignment: It is necessary to keep up with rapid change. More importance should be given to enterprise business objectives than to team goals.

2. Built-in quality: Ensures every element and increment that’s being built is of the same standard of quality.

3. Transparency: To achieve the best results, transparency within the organization is really important. Transparency and trust ensure that the business and development can confidently rely on one another, particularly in times of difficulty.

4. Program execution: Leaders participate as Business Owners in Program Increment (PI) planning and execution, while aggressively removing impediments.

SAFe Principles:

  • Take an economic view
  • Apply systems thinking
  • Assume variability; preserve options
  • Build incrementally with fast, integrated learning cycles
  • Base milestones on objective evaluation of working systems
  • Visualize and limit WIP, reduce batch sizes, and manage queue lengths
  • Apply cadence, synchronize with cross-domain planning
  • Unlock the intrinsic motivation of knowledge workers
  • Decentralize decision-making
  • Organize around value

Highlights of SAFe

  • Agile Release Train: Is a long lived team of Agile teams, which, along with other stakeholders, incrementally develops one or more Solutions in a value stream.
  • Continuous Delivery Pipeline: Describes the workflows, activities, and automation needed to provide a constant release of value to the end user.
  • Customer Centricity: Is a mindset that focuses on creating positive experiences, such as the customer journey, which takes buyers through the full set of products and services that the enterprise offers.
  • Program Increment (PI): Is a time box in which an ART delivers incremental value. PIs are typically 8 – 12 weeks long, and the most common pattern for a PI is four development Iterations followed by one Innovation and Planning (IP) iteration.
  • Innovation and Planning (IP) Iteration: Provides the teams with an opportunity for exploration and innovation, dedicated time for planning, and learning through informal and formal channels.
  • ScrumXP: ScrumXP uses the Scrum framework for managing the team and their work as well as XP derived quality practices.
  • Team Kanban: Is a method that helps teams facilitate the flow of value by visualizing workflow, establishing Work in Process (WIP) limits.
  • Built-In Quality: Ensures every solution increment is high in quality and can readily adapt to change.

Challenges with SAFe:

As explained above, SAFe is meant to overcome Agile’s pitfalls at scale; however, every model has some challenges and so does SAFe. A few of them are as follows:

  • Primarily Top-Down Decision Making: Due to this, it possesses similarities to the waterfall model.
  • Terminology Heavy: There are 4 levels in SAFe. Coupled with its use of Lean, Agile, and systems thinking, it ends up with a significant amount of terminology and body of knowledge.

In short, SAFe is a framework which gives us alignment not only at the team (lower) level and program (middle) level, but also aligns us with organization strategy (top level), showing how a team’s work adds value for customers right from the top. It is available in different configurations, and companies can take advantage of it.

SAFe comes in various configurations, depending on the specific needs of an organization. These configurations include Essential SAFe, Large Solution SAFe, Portfolio SAFe, and Full SAFe, each offering different levels of guidance and complexity to address different organizational contexts.

It’s important to note that while SAFe is widely adopted in many enterprises, it’s not the only approach to scaling Agile practices. Organizations should carefully assess their own context, needs, and culture before deciding on the best approach to scale Agile within their organization.

5 Metrics to a clearer view of your Project’s Health and Quality

Introduction

Metrics are used to measure various characteristics of a project; they describe an attribute as a measurable unit. From a software point of view, they can be classified into product quality metrics or project quality metrics. Product metrics focus on product quality by describing its attributes and features, whereas project metrics focus on improving the project quality. There’s another category, process metrics, which we leave for another post.

Why Quality Metrics?

Quality metrics are measured against quality standards to determine whether the product works to the client’s expectations and if the project is in good health. By good health, it is meant that the development of the software (product) is on track with minimal or negligible problems. Problems that might end up hampering the whole development process, hence resulting in delayed results.

One must understand that metrics aren’t just limited to finding defects; they are about getting insights to optimize the development process. They also cover qualities like reliability, consistency, and so on. Both product and project metrics should be measured and monitored with equal importance.

Generally, you might find a huge number of quality metrics to measure. Let’s focus on the ones which help us analyze a project’s health by providing insights that really matter.

Following are the Metrics

 Let’s look into some project metrics:-

1. Finance

Some people may not consider cost a quality metric, but in reality it definitely is. Without laying down a budget, monitoring expenditure, and going through the finance books, you cannot deliver something of top quality, as you might run out of resources to maintain it. This eventually ends up affecting the project’s health. Costs should be watched with the utmost care to sustain good quality and a healthy project. Some metrics to use are:-

  • Cost Variance: Difference between the actual cost and planned cost.
  • Cost per Problem Fixed: Amount spent on an engineer/developer to get a problem fixed.
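As a quick illustration, the two cost metrics above can be computed directly. The figures below are made up for the example.

```python
# Sketch of the two finance metrics above; all figures are illustrative.

def cost_variance(planned_cost, actual_cost):
    """Difference between actual and planned cost; positive means over budget."""
    return actual_cost - planned_cost

def cost_per_problem_fixed(total_fix_cost, problems_fixed):
    """Average spend on engineering effort per resolved problem."""
    return total_fix_cost / problems_fixed

print(cost_variance(planned_cost=100_000, actual_cost=112_500))          # 12500
print(cost_per_problem_fixed(total_fix_cost=18_000, problems_fixed=45))  # 400.0
```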

2. Defect Quantification

To make the project free of bugs and errors, defects need to be quantified and fixed. The fewer the defects, the better the project’s health. Defects can be dealt with in many ways; all we need to ensure is that we make the best of the defect resolution process and hence increase productivity. Some of the metrics are:-

  • Defect Density = Total Number of Defects / Total Number of Modules
  • Defect Gap Analysis (also called Defect Removal Efficiency) % = (Total number of fixed defects / Total number of valid defects reported) * 100
  • Defect Age = Average time taken in finding a defect and resolving it.
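These three formulas translate into a few lines of code. The numbers fed in below are invented purely to show the calculations; Defect Removal Efficiency is expressed as a percentage of reported valid defects that were fixed.

```python
# Defect metrics from the list above, with illustrative inputs.

def defect_density(total_defects, total_modules):
    return total_defects / total_modules

def defect_removal_efficiency(fixed_defects, valid_defects_reported):
    # Percentage of the valid reported defects that were actually fixed.
    return (fixed_defects / valid_defects_reported) * 100

def average_defect_age(ages_in_days):
    # Defect age = time from finding a defect to resolving it.
    return sum(ages_in_days) / len(ages_in_days)

print(defect_density(30, 12))             # 2.5 defects per module
print(defect_removal_efficiency(45, 50))  # 90.0
print(average_defect_age([1, 3, 5, 7]))   # 4.0 days
```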

3. Scheduling

Scheduling metrics help to analyse the progress made towards completion of a project. Staying on schedule should be a top priority, as at the end of the day you don’t want to disappoint your stakeholders with a delayed result. Stick to the planned schedule and measure Schedule Variance:

  • Schedule Variance: Difference between the actual completion of a task and its scheduled completion, i.e. SV = Actual Time Taken – Time Scheduled
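Following the article’s convention (actual minus scheduled), a positive variance means the task ran late. The day counts below are illustrative.

```python
# Schedule Variance per the formula above: positive = behind schedule.

def schedule_variance(actual_time_taken, time_scheduled):
    return actual_time_taken - time_scheduled

print(schedule_variance(actual_time_taken=12, time_scheduled=10))  # 2 (days late)
```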

Every project eventually becomes a product made available in the market. Following are the product metrics that one should always measure:-

4. Performance of the Project

Every software product is designed to accomplish specific tasks and deliver results. Performance metrics measure whether the product can deliver as per the client’s requirements by analysing the time taken and the resources used. One way of measuring performance is to set small goals, work towards them, and study the process once they are accomplished. This approach yields exceptional insights into the project’s health.

  • ROI – Return on Investment: Comparison of the earned benefits against the actual cost
  • Resource Utilization: Measures how the individual team member’s time is spent.
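A minimal sketch of both metrics, with invented inputs: ROI here is expressed as a percentage of cost, and utilization as the share of a team member’s available hours spent on project work.

```python
# ROI and resource utilization, both as percentages; figures are illustrative.

def roi_percent(benefit, cost):
    """Return on Investment: net benefit relative to cost."""
    return (benefit - cost) / cost * 100

def resource_utilization(billed_hours, available_hours):
    """Share of an individual's available time spent on project work."""
    return billed_hours / available_hours * 100

print(roi_percent(benefit=150_000, cost=100_000))                  # 50.0
print(resource_utilization(billed_hours=30, available_hours=40))   # 75.0
```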

5. Usability

A program should always be user-friendly, as eventually it has to be used by an end user. One way of measuring this is by analysing the project from a user’s perspective after almost every step of the development process. This helps fix errors and bugs on the go, so you don’t have to revisit steps you took weeks ago just to fix a recently discovered bug, which can be really frustrating. Measuring usability metrics provides insights to improve effectiveness, bring about efficiency, and thus achieve customer satisfaction. Some metrics to measure are:-

  • Task Completion Rate (used to measure effectiveness): Effectiveness = (Number of Completed Tasks / Number of Tasks Undertaken) * 100
  • Task Completion Time = Task End Time – Task Start Time
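Both usability formulas are straightforward to compute. In the sketch below, times are plain seconds for simplicity; in a real study you would record timestamps per participant.

```python
# Usability metrics from the list above; inputs are illustrative.

def task_completion_rate(completed_tasks, tasks_undertaken):
    """Effectiveness, as the percentage of undertaken tasks completed."""
    return completed_tasks / tasks_undertaken * 100

def task_completion_time(start_seconds, end_seconds):
    """Elapsed time for one task, here in seconds."""
    return end_seconds - start_seconds

print(task_completion_rate(completed_tasks=18, tasks_undertaken=20))  # 90.0
print(task_completion_time(start_seconds=5, end_seconds=65))          # 60
```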

Conclusion

Summing up, now is the time to move beyond traditional practices and add measuring metrics to your work approach. Find the weak points, prioritize opportunities, and experiment to learn what works and what doesn’t. If you want a robust project that is healthy and guarantees customer satisfaction, measuring quality metrics is the answer you’re looking for.

If you’re looking for more information, please contact us; we will be happy to help.

Uncover the hidden bugs with Non Functional Testing.

Even when you think you have got it right, Non Functional Testing can expose the hidden flaws

This is your big idea. Maybe not yours exactly; it’s your client’s. But you have spent months mulling over the concept and assembling the best team of developers, and you are ready to go. Your end goal is to solve problems and make life easier for the end user, right? Well, achieving client satisfaction and maintaining a positive end-user experience hinges on one important factor: Testing.

Quality Assurance (QA) is a pivotal part of your mobile/web application development lifecycle. Whether it be a pre-installed, installed, or browser-based app, rigorous testing of functionality, compatibility, and usability, among others, must be done every step of the way.

Functional Testing

Functional testing is an important and popular step in the app development process, primarily because focusing on the ability of an AUT (application under test) to perform as required is second nature to QA practice. However, non-functional testing is equally important, because it greatly affects client satisfaction and the whole user experience. In this article, I will explain what non-functional testing is, differentiate between functional and non-functional testing, and highlight the importance of non-functional testing.

Non-functional Testing

It is a type of software test for assessing the non-functional aspects (e.g. performance, usability, reliability, etc.) of a software application. It is aimed at testing the abilities of a system against non-functional parameters that functional testing usually does not cover. In other words, this testing handles the aspects of a software application that are not connected to a defined user action or function.

TYPES OF NON-FUNCTIONAL TESTING

Security Testing:

This checks how well a system is safeguarded against intentional or accidental attacks from known or unknown sources. Also known as VAPT (Vulnerability Assessment and Penetration Testing), it detects loopholes within the system and measures how vulnerable an AUT is to being hacked.

Both Manual and Automated assessment of vulnerabilities through active and passive scans are part of this testing.

Performance Testing:

Performance testing encompasses a number of parameters. 

  • Load Testing: Load testing checks the ability of a system/AUT to deal with different numbers of users within a given performance range.
  • Stress Testing: Stress testing assesses the tenacity of an AUT, measuring what happens to the system when it is put under load in excess of its originally designed capacity. For instance, how many users working on a particular app at the same time will cause it to crash?
  • Endurance Testing: This test is essential to know the stability of the system over a period of time and to see if small errors that are accumulated over the said period can affect the efficacy and integrity of the system.
  • Recovery Testing: This checks that the software system continues to perform to the required standards and recovers completely in the unfortunate case of a system failure.
  • Reliability Testing: This is done to check the extent to which any software system repeatedly performs a given function without failure. 
  • Scalability Testing: The scalability test is essential for commercialization of a product. It measures the extent to which a software application can expand its processing capacity to meet an increase in demand. 
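The load and stress tests above boil down to firing many concurrent requests and watching latencies. A minimal harness sketch follows; `fake_request` is a stand-in for a real HTTP call (e.g. via `requests.get` against your AUT), so the example stays self-contained and runnable.

```python
# Minimal load-test harness sketch. `fake_request` simulates a ~10 ms request;
# swap it for a real call to your application under test.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id):
    """Pretend to serve one user; return the observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for network + server time
    return time.perf_counter() - start

def run_load_test(concurrent_users):
    """Fire one request per simulated user, all at once, and summarise latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(fake_request, range(concurrent_users)))
    return max(latencies), sum(latencies) / len(latencies)

worst, average = run_load_test(concurrent_users=50)
print(f"worst={worst:.3f}s average={average:.3f}s")
```

For a stress test you would raise `concurrent_users` step by step past the designed capacity and watch for errors or runaway latencies; for an endurance test you would loop the harness over hours instead of one burst.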

Portability Testing:

Portability testing checks the ease with which software can be moved or transferred from its current environment (hardware/software) to another.

Usability Testing: 

The ease with which any user can learn, operate, and interact with a system is measured by the usability test.

Other tests performed during the non-functional testing phase include Failover Testing, Compatibility Testing, Accessibility Testing, Maintainability Testing, Volume Testing, Disaster Recovery Testing, Compliance Testing, Documentation Testing, Internationalization and Localization Testing etc.

Ultimately, the goal is to test all characteristics of an application that help produce a product meeting the expectations of the user. It also improves the developer’s knowledge of product behaviour and the latest trends in technology, and supports research and development.

Functional Testing and Non Functional Testing: Two Different Concepts

The major difference between the two types of testing is this: Functional testing ensures that your product meets customer and business requirements and doesn’t have any major bugs. Non-functional testing verifies that the product meets the end user’s expectations. 

Functional Testing:

Functional testing is a type of software testing that evaluates the system against the functional requirements. It focuses on verifying that the software/application performs its intended functions correctly. The objective is to ensure that the system meets the specified functional requirements and operates as expected.

Non-Functional Testing:

Non-functional testing, also known as quality attributes testing, focuses on evaluating the performance, reliability, usability, and other non-functional aspects of a software/application. It aims to assess the system’s behavior under different conditions, rather than its specific functionalities.

Functional and Non-Functional tests are technically differentiated from each other based on their objective, focus area, functionality, ease of use, and execution.

Objective: 

Functional testing assesses the behaviour of the AUT, such as the login function and valid/invalid inputs, whereas non-functional testing deals with the performance or usability of the software.

Focus area:

Functional testing focuses on customer requirements, while Non-functional testing focuses on user expectations.

Functionality: 

Functional testing checks that the system works as expected; non-functional testing checks how well the system works.

Ease of use: 

Functional testing is easy to execute manually, like black-box testing, but non-functional testing is hard to execute manually; it is more feasible to use automated tools.

Execution:

Functional testing is generally performed before non-functional testing, i.e. while the code is being built out, whereas non-functional testing is mostly performed once the code is complete.

Now, imagine finalizing the masterpiece you have created, and testing its functional requirements fully, leaving out its non-functional requirements.

Would you like to predict what would happen when the application is subjected to a massive load when it goes live? Would you be confident of its stress capabilities?

Would you want to imagine how slow it may become? What if it crashes on product launch day? Or an unauthorised party completely takes over the system? These scenarios make for unpleasant viewing. I wouldn’t want to touch such a product with a ten-foot pole, or be associated with it in any way.

Though testing over the years has traditionally been limited to functional requirements, non-functional testing has gradually become an integral part of software delivery, without which consumer expectations may not be fully met. When a product fails to meet these expectations, it affects the reputation of the developers and the company, and even overall product sales. This is why non-functional testing cannot be ignored.

Both functional and non-functional testing are crucial for ensuring the overall quality, reliability, and user satisfaction of a software/application. They complement each other by validating different aspects of the system’s performance and behavior.

Non-functional testing is primarily focused on evaluating the performance, reliability, security, and usability aspects of a software system. While it may not directly target detecting hidden bugs, it can indirectly help uncover certain types of bugs or issues that may not be apparent during functional testing.

While non-functional testing techniques can help uncover hidden bugs indirectly, it’s important to note that functional testing, which tests against the expected behavior and requirements, remains essential for detecting most bugs and ensuring the software meets its intended purpose.

When you think you have got it right, non-functional testing will expose the hidden flaws!

Is your QA practice ‘Future-Ready’?

COVID-19 has changed the world. It has changed mine. I no longer have the luxury of breathing in the unfiltered atmospheric air, where I get to smell the delicious aroma of food from wayside vendors; there’s always a mask on my face. COVID-19 has also affected the way organisations, businesses, and QA practices are run.

Arguably, though, Quality Assurance practice has not been so heavily affected by the pandemic, save a few structural and behavioural changes. Of course, there may be unprecedented time-to-market pressure or extreme cost pressure but, by and large, relying on the age-old test efficiency rule book will steer software teams out of harm’s way.

With regard to COVID-19, there seems to be no end in sight, and as such we must actively seek new ways of dealing with the new normal. This is essential to sustaining the QA practice while maintaining the same level of work efficiency and quality of service. I call this “the future-ready QA practice”.

First, we must come to terms with the new normal and remote work. QA teams that used to huddle around in small spaces, writing and executing software test plans may not be able to do so anymore. Employees are increasingly being distributed across space and time zones and QA teams must adapt to the new system without compromising on providing the highest quality of digital experiences for the end user. 

Let’s look at the pros of the new setup.

  • One advantage is that work can be done anywhere, or anytime depending on contractual terms. This helps in easier time management and results in higher productivity.
  • This setup could potentially improve employees’ work-life balance, and spillover into positive attitudes towards work.
  • It eliminates travelling time and cost, and the day-to-day cost of spending a day at the office, and hence helps save some crucial time and money.

All the above being true, the setup does come with its own challenges. Employees may not have an official setup (office desk, space, etc.) or a fast internet connection, depending on which part of the globe they’re practicing from, which could cause release-cycle delays and disruptions. Employers must therefore provide the requisite tools needed for a smooth practice at home. This could mean accelerating the adoption of cloud computing services and SaaS, or helping employees set up adequate home networks for efficiency’s sake.

Cloud to the rescue

Accessing the test environment presents another challenge for remote QA practice. The test environment could be accessed remotely, either through an on-premises server or a cloud-based service. This further underscores the need to move towards a cloud-based development and test environment. While at it, automated tests must be meticulously written, they must follow the branch of code they test, be peer-reviewed, and merged into the regression set. There should be proper documentation as well, so team members at different geographical areas can troubleshoot a test as easily as the originator.

New Engagement Models

Organisations must also consider new delivery models on important factors such as data security and privacy, risk, and compliance audits. Though remote work is convenient, it poses an increased risk for internet fraud, data loss, or system compromise. While you work hard to meet your client’s expectations, hackers are equally working hard to find vulnerabilities to exploit. It is essential to obtain original software licenses and keep an inventory of all open source usage across development teams. Maybe you could add a VPN to your network, have stricter password policies and more importantly, create backups. I cannot overemphasize the Backup.

New ways to supervise and communicate

Supervision. Effective supervision is the difference between a good product and a great product. Nancy Kline, founder and President of Time to Think, described supervision as an opportunity to bring someone back to their own minds to show them how good they can be. Every employee, no matter how skilled, needs a mentor, a supervisor or just somebody to run things by. Supervisors must set achievable goals with reasonable timelines. Employees must endeavor to meet those timelines while delivering on quality. It is also important to reward hard work. Honorary mentions can be made on the organization’s internal social media groups when an outstanding achievement is made by an employee. This can motivate them to do better and remind others that they’re still being watched though they’re at home.

Adaptive and Agile Workforce

Continuous Professional Development for employees is required to maintain a competitive practice within the industry. Technology is changing. There is always something new to learn, or another skill to garner. More so, the job market is now open to anyone around the world with the required skills who demonstrates aptitude for the task at hand. Therefore, the need to constantly improve skills is now more important than ever. Digital learning, however, makes it easier to acquire skills without necessarily taking time off the job. Admittedly, it will take some effort on the employees’ part and encouragement on the employer’s part to keep up with lessons, but it is far from impossible. Ultimately, it becomes a win-win situation for both employer ( who has the most skillful testers) and employee ( who has developed himself into a more valuable asset).

Keeping the human element alive

Finally, working from home or remote work gives employees a level of isolation. Everybody loves a happy and healthy work environment, surrounded by work buddies who would give you a brief pat on the shoulder for a job well done, or rub your back while you’re battling major bugs. But remote work takes the human element away. This means that communication must be of good quality, proactive (on the part of employees), brief (nobody wants a nagging boss on the phone for hours), and frequent. This is where tools like Microsoft Teams, Zoom and Google Meet come in handy. The good old telephone call works fine as well. Weekly check-in calls with all employees, seeking suggestions and opinions on what could be improved, are admirable. Again, everybody loves a great party. Who says you cannot organise a bring-your-own-bottle party on Zoom? The downside of all this is that, when all’s said and done, employers may have a hard time bringing employees back into the office space. But that is the inevitable future, and the faster the acceptance, the better.

While the uncertainty of living in the  Covid-19 era continues to affect organizations  all around the world, only the most agile, dynamic and resilient teams will come out stronger and unscathed. Is your team future-ready?

In summary, being future-ready in QA practice means embracing emerging technologies, methodologies, and trends to ensure high-quality software products that meet the demands of the ever-evolving digital landscape. By staying ahead of the curve, QA teams can contribute significantly to the success of software development initiatives and help in delivering quality assurance and testing services.

5 things you should know about Digital Analytics

A large portion of the world we now live in happens online. We wake up in the morning not to an alarm clock, but to our wearable devices connected to smartphones. We research things, watch videos, catch up with friends on social networks. We even get directions and book our vacations online. And everything we do leaves a trail of data behind it.

As a consumer, you might not know this; as a marketer, however, you’re using all this consumer data to make better decisions: thinking about how to spend your marketing dollars and how to improve your websites and mobile apps to optimize the customer experience. All of the above is digital analytics.

By definition, digital analytics is the process of analysing digital data from various sources such as websites and mobile applications. It is a tool organizations use for collecting, measuring, and analysing qualitative and quantitative data. This data consists of information on what your visitors/users are doing, where they come from, what content they like, and a lot more.

Type of data that can be analysed:

Structured data

  • Sales Record
  • Payment or expense details
  • Payroll Details
  • Inventory details
  • Financial details

Unstructured Data:

  • Email and instant message
  • Payment text description
  • Social media activity
  • Corporate document repository
  • News feed.

Business value of digital analytics:

  • Identifying unknown risks.
  • Deeper insight into business to predict customer trends.
  • Act with confidence, based on numbers.
  • Targeted approach based on your actual user base.
  • Deep Analytics and comparisons into different behaviour of your user base.
  • Interactive visualizations of trends
  • Ability to curate projects and then share with non-analysts, making analytics more approachable than ever.

Digital Analytics Use Cases:

A modern analytics framework empowers everyday business users by bringing advanced analytics tools to their desktops. In retail it helps predict sales outcomes for the immediate future, and in healthcare it predicts risks to patients’ wellbeing. Financial and risk management uses big data, along with predictive analytics, to forecast demand. The consumers and practitioners of digital analytics can range from a CXO to a Product Owner.

Stages Involved in Digital Analytics:

  1. Curate: Transforming data in a standard structure to be usable.
  2. Profile: Validating data at a macro level.
  3. Analyse: Examining data to discover essential features.
  4. Investigate: Observing the data in detail.
  5. Reporting: Documenting and reporting in granular form as per the requirements.
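The five stages above can be sketched end to end over a handful of records. This is a toy walk-through: the field names (`page`, `ms`) and values are illustrative, not from any real analytics feed.

```python
# Toy walk through the five stages above over a handful of raw page hits.
raw_hits = [
    {"page": "/home", "ms": "120"},
    {"page": "/pricing", "ms": "340"},
    {"page": "/home", "ms": None},  # incomplete record, dropped during curation
]

# 1. Curate: transform into a standard, usable structure.
curated = [{"page": h["page"], "ms": int(h["ms"])} for h in raw_hits if h["ms"]]

# 2. Profile: validate at a macro level.
assert all(h["ms"] > 0 for h in curated)

# 3. Analyse: group to expose an essential feature (load time per page).
by_page = {}
for h in curated:
    by_page.setdefault(h["page"], []).append(h["ms"])

# 4. Investigate: look at the detail behind the worst observation.
slowest = max(curated, key=lambda h: h["ms"])

# 5. Report: document at the granularity required.
report = {page: sum(ms) / len(ms) for page, ms in by_page.items()}
print(report)   # average load time per page
print(slowest)  # the single slowest hit
```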

Every organization, regardless of size, requires analytics tools to understand the performance of its website/app and the satisfaction of its consumers, and to gain key context on business rivals. The most common subset of digital analytics is the analysis of website data, called web analytics; let’s look at how it is implemented.

Web Analytics Tools:

These tools help us go way beyond counting hits and page views. They help us make decisions and find answers to questions. Different people and different roles in your organization will need different sets of data and different levels of granularity.

For example, a company head will be interested in the trends of yearly revenues, while a marketing manager might want to drill deeper and understand which marketing channels are driving those revenues. Using the data generated by an e-commerce site, these tools can tell us which products are selling well and which ones aren’t. This can help with inventory management, sales forecasting, and even manufacturing or procurement decisions. We can even deep-dive and see which products are selling well in which geographic region.

Following are some trending web analytics tools:

  • Google Analytics
  • Adobe analytics
  • ClickMeter
  • Crazyegg
  • Clicky

Key Concepts of the tools:  

Events: Events are user interactions with content that can be measured independently from a web page or a screen load. Downloads, clicks, Flash elements, and video plays are all examples of actions you might want to measure as Events.

Dimension and Metrics: Every report in Analytics is made up of dimension and metrics. Metrics are the quantitative numbers that are measuring data in counts, ratios, percentages. Whereas dimensions are the qualitative categories that describe the data in segments or breakouts.

Page View: A page view is counted every time a visitor loads that page.

Referrers: Indicates where the users came from, and are separated into four main types: Search Engines, Other Websites, Campaigns and Direct Entry.

Visitor: The user who made the visit. Visitors are divided into new visitors and returning visitors, which leads us to loyalty indexes. A visitor record may also contain a large amount of technical information about their computer, browser, operating system, screen size, plugins, location, etc.

Segmentation: Segmentation isolates your data into subsets for deeper analysis and problem-solving. You can segment your data by date and time, device, marketing channel, geography, and more (dozens of options).
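Segmentation is just grouping records by a chosen dimension. A minimal sketch, using hypothetical hit records (the `device` and `page` fields are made up for the example):

```python
# Segment hypothetical hit records by any dimension, here "device".
hits = [
    {"device": "mobile", "page": "/home"},
    {"device": "desktop", "page": "/home"},
    {"device": "mobile", "page": "/pricing"},
]

def segment(records, dimension):
    """Group records into subsets keyed by the value of one dimension."""
    segments = {}
    for record in records:
        segments.setdefault(record[dimension], []).append(record)
    return segments

by_device = segment(hits, "device")
print({device: len(subset) for device, subset in by_device.items()})
# {'mobile': 2, 'desktop': 1}
```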

Significance of Web Analytics Testing

Web analytics testing services are important in helping you see how your users are connecting to your sites. To increase conversion rates, you should use different testing methods, including web analytics A/B testing and WAAT with Selenium.

Web analytics A/B testing: This testing helps us compare the outcomes of two or more versions of an application or web page, and tells you how the clickable components of your page perform. You pit two versions of your asset against one another to see which comes out on top, which supports continuous improvement of the site.

Web analytics automation testing framework:

WAAT (Web Analytics Automation Testing framework) is an open-source framework that provides a way to automate the verification of the name-value pair properties/tags being reported to a web analytics system.
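The core of such tag verification can be sketched in a few lines: capture the tracking request your page fires, then check that the expected name-value pairs are present. The URL and parameter names below are illustrative, not a real beacon format from any specific analytics vendor.

```python
# WAAT-style check (sketch): verify name-value tags in a captured tracking URL.
# The beacon URL and parameter names are hypothetical.
from urllib.parse import urlparse, parse_qs

captured = "https://analytics.example.com/collect?page=%2Fcheckout&event=purchase&value=49"

def verify_tags(url, expected):
    """Return, per expected tag, whether the reported value matches."""
    params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    return {name: params.get(name) == value for name, value in expected.items()}

result = verify_tags(captured, {"page": "/checkout", "event": "purchase", "value": "49"})
print(result)  # all values True if every tag matched
```

In practice a tool like WAAT would capture the beacon automatically (e.g. via a proxy or Selenium hooks) rather than from a hard-coded string.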

Typical Business Dashboards for a web application:

  • Top visited pages or journeys that are most valuable in terms of customer traffic
  • Revenue (by marketing channel or program)
  • Opportunities and prospects
  • Conversion rates, Geographic data

Conclusion: In simple words, digital analytics is a way of collecting and analysing what’s happening in your application, i.e. what your visitors/users are doing, which is great for businesses that want to develop and evolve without taking huge risks.

Non Functional Governance

One of the key factors determining your product’s success is the end user’s experience of using it, and you would agree that this goes way beyond the functional correctness of your product. A whole lot of factors, like usability, performance and security, determine how the end user feels about your product. Unfortunately, performance, security and usability testing are often deferred to the end of the development lifecycle.

How Crestech helps govern your non functional requirements

Through our Non-Functional Governance solution, Crestech helps enterprises set up and manage non-functional governance centers within their development teams, so that non-functional requirements like performance, security, usability, and content are tested throughout the SDLC and not just towards the end. This includes:

  • Defining all the non functional parameters that impact product usage experience
  • Validating product requirements for completeness of Non Functional parameters
  • Setting up development best practices around non functional aspects of product
  • Setting up periodic code and architecture reviews to flush out usability, performance and security flaws early in lifecycle
  • Testing the code for performance, usability and security right from unit level to integration and system level
  • Building dashboards to reflect and quantify Non functional quality index of application