In the world of software development, the term agile typically refers to any approach to project management that strives to unite teams around the principles of collaboration, flexibility, simplicity, transparency, and responsiveness to feedback throughout the entire process of developing a new program or product. And agile testing generally means the practice of testing software for bugs or performance issues within the context of an agile workflow.
Agile testing techniques are, in most cases, similar to their traditional counterparts. However, agile methodology may require some special considerations because of differences in techniques, jargon, and documentation.
Acceptance Criteria, Adequate Coverage, and Other Information for Testing
Requirements in agile projects flow out as user stories, and the Product Owner, along with the team, is responsible for writing them. User stories follow a prescribed format written from the point of view of the end user's value. All user stories are stacked in the product backlog, which the Product Owner re-prioritizes frequently according to business needs. Each user story contains acceptance criteria that briefly explain the requirements of the story. The acceptance criteria may also cover non-functional requirements, boundary conditions, usability and performance conditions, and dependencies on other user stories.
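To make that structure concrete, a prioritized backlog of user stories with acceptance criteria might be modelled as in the following minimal Python sketch. All class and field names here are illustrative assumptions, not part of any prescribed agile tooling:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    # Prescribed format: "As a <persona>, I want <goal>, so that <benefit>"
    persona: str
    goal: str
    benefit: str
    acceptance_criteria: list = field(default_factory=list)
    priority: int = 0  # set by the Product Owner when ordering the backlog

backlog = [
    UserStory(
        persona="registered user",
        goal="reset my password by email",
        benefit="I can regain access to my account",
        acceptance_criteria=[
            "Reset link is emailed within 60 seconds",  # performance condition
            "Link expires after 24 hours",              # boundary condition
        ],
        priority=1,
    ),
]

# The Product Owner re-prioritizes the backlog as business needs change:
backlog.sort(key=lambda story: story.priority)
```

The point of the sketch is that each story carries its own acceptance criteria, including non-functional ones, so the tester has a testable outcome attached to every backlog item.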
User stories should be tied to a testable outcome, and they pave the way for incremental agile architectures. A few examples of the information testers can draw on are outlined below:
• Expert judgment and previous learnings
• Existing features, functionality and the technical characteristics of the architecture.
• User Personas (context, system configurations, and user behaviour).
• Code quality and tools deployed
• Wireframes that serve as mock-ups
• Defect density from current and previous iterations
• Any regulatory standards, if applicable (e.g., DO-178B for avionics software, IEEE, SOX, FDA)
• System quality and risks (see Assess quality risks in agile projects)
In every iteration, the development team implements the features prescribed in the user stories, with verifiable quality built in, code checked in, and the work validated by acceptance testing. The acceptance criteria should address the following traits:
• Functional behaviour: The functionality of the product increment should be reflected as working software in the form "when X is given as input, we see Y as output under conditions Z."
• Quality characteristics: The performance of the system against specific quality attributes, such as reliability and usability, that form the patterns of system behaviour.
• Scenarios (use cases): A set of actions, in a specific order, between a user persona and the system to fulfil a business goal.
• Business rules: Collections of activities that operate on the system under given conditions, subject to various external procedures and constraints (example: the call-routing mechanism when a cell phone network initiates a call).
• External interfaces: The connections, both between systems and within a system, that link the product under development to the external world. These may be categorized into various types (user interface, interfaces to other systems, etc.).
• Constraints: Design and implementation decisions may impose constraints that narrow the options available to the development team. For example, embedded software devices often have constraints related to size, weight, dependencies, and interfaces.
• Data definitions: The customer-described format of data types, default values, and character lengths for data items, used to frame complex business rules and data structures (example: the ZIP code in U.S. snail mail).
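As a concrete illustration of the data-definitions trait, acceptance criteria of the "when X is given as input, we see Y as output" form can be expressed as executable checks. The following Python sketch is hypothetical (the helper name and the exact validation rules are my assumptions, not taken from any customer specification), using the U.S. ZIP code example:

```python
import re

def is_valid_us_zip(code: str) -> bool:
    """Accept a 5-digit ZIP code, optionally with a ZIP+4 extension."""
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None

# Acceptance-style checks in "given input X, expect outcome Y" form:
assert is_valid_us_zip("12345")         # plain five-digit code -> accepted
assert is_valid_us_zip("12345-6789")    # ZIP+4 format -> accepted
assert not is_valid_us_zip("1234")      # too short -> rejected
assert not is_valid_us_zip("12345-67")  # malformed extension -> rejected
```

Writing the criteria this way gives the tester both the data definition (format, length) and its boundary conditions in one executable artefact.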
Apart from the above considerations, the tester may also find it useful to gather information about:
• System usage, its interfaces, and operational modes
• The importance of the current tool kit and its scalability
• The tester's competency with the current testing tasks
There are many activities that go on in a software testing environment. Consider a situation where you want a group of people to evaluate a set of options and choose the single best one. For example, suppose you are creating a web-based tool for use on your company's intranet. You have designed four quite different user interface prototypes, and you ask a group of people to evaluate each prototype and rank their preferences from best to worst. How can you make sense of this data? There is a large body of knowledge on techniques for analysing group evaluations of a set of alternatives in order to select the best option. However, I've noticed that these techniques are almost totally unknown to most software testers. There are many fascinating effects in group analyses. Suppose you have four options (A, B, C, D) and 14 evaluators, and some data like this:
A > B > C > D according to 8 people
B > C > D > A according to 4 people
D > C > A > B according to 2 people
Suppose you decide to give 3 points for a top ranking, 2 points for a second place, 1 point for a third, and 0 points for a fourth place. Then option A has 26 points, option B has 28 points, option C has 20 points, and option D has 10 points. So option B is best. But then suppose someone observes that option D is such a loser it shouldn’t have even been there in the first place. You agree and toss out option D and now the data looks like:
A > B > C according to 8 people
B > C > A according to 4 people
C > A > B according to 2 people
But now if you re-compute, you'll see that option A has 18 points, B has 16, and C has 8, so option A becomes the winner instead of option B.
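The scoring scheme above is a Borda count, and the winner flipping when the "loser" option D is removed is a classic ranked-voting paradox. The whole exercise can be reproduced with a short Python sketch (the function and variable names are my own, not from any testing tool):

```python
def borda_scores(ballots, options):
    """Borda count: with k options, first place earns k-1 points,
    second place k-2, ..., last place 0, weighted by voter count."""
    scores = {opt: 0 for opt in options}
    for ranking, count in ballots:
        # Keep only the options still under consideration,
        # preserving each ballot's original best-to-worst order.
        active = [opt for opt in ranking if opt in scores]
        for place, opt in enumerate(active):
            scores[opt] += (len(active) - 1 - place) * count
    return scores

# The 14 evaluators' ballots: (ranking best-to-worst, number of people)
ballots = [("ABCD", 8), ("BCDA", 4), ("DCAB", 2)]

print(borda_scores(ballots, "ABCD"))
# With all four options: A=26, B=28, C=20, D=10 -> B wins
print(borda_scores(ballots, "ABC"))
# With option D discarded: A=18, B=16, C=8 -> A wins
```

Nothing about the voters' relative preferences among A, B, and C changed; only the set of candidates did, yet the winner changed. This is why the choice of aggregation technique matters when testers pool group evaluations.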
Smart Bear's software quality tools are used by development teams to help them build and deliver the world's best applications. Delivering those applications on time and with as few bugs as possible means finding issues earlier in the development lifecycle. Smart Bear's Collaborator and AQtime Pro tools help ensure that issues are found before being shipped downstream.
Collaborator helps enterprise development, testing, and management teams work together to produce high-quality code. Collaborator is a tool for reviewing code, user stories, and test plans in a transparent, collaborative framework, instantly keeping the entire team up to speed on changes made to the code. By enabling team members to review their work together throughout the development process, Collaborator can help you catch bugs and defects before your software hits the market, and before your customers do!
• Collaboration among developers and sharing of best practices
• Changes can quickly be reviewed by anyone available
• Quickly provide audit reports for regulatory and compliance standards
• Greater knowledge of the overall code base among developers
When it comes to uncovering software performance problems, AQtime Pro is the ideal exploration tool. This powerful, fully featured performance profiler makes it easy to track down memory leaks, CPU bottlenecks, and I/O bottlenecks, perform comprehensive code coverage analysis, and carry out fault simulation.
• Resolve performance issues within minutes. Analyse hot execution paths down to specific slow lines of code.
• Find memory and resource leaks in native and .NET applications. Optimize memory usage to improve the application performance.
• Profile your application directly from Microsoft Visual Studio and Embarcadero RAD Studio as you write the code. No context switching needed!