
Managing Non-Functional Requirements in SAFe

Managing non-functional requirements (NFRs) in software development has always been a challenge. These "system capabilities", such as 'how fast a page loads', 'how many concurrent users the system can sustain' or 'how vulnerable to denial-of-service attacks we can be', have traditionally been assigned to quadrant four of Brian Marick's agile testing quadrants. That is, these are tests that are technology-facing and which critique the product. That said, it has never been clear *why* this should be so, as this information can be critical for the business to clearly understand.

In the Scaled Agile Framework (SAFe), NFRs are represented as a symbol bolted to the bottom of the various backlogs in the system. This indicates that they apply to all of the other stories in the backlog. One of the challenges of managing them lies in at least one aspect of our testing strategies: when do we accept them if they represent a "constant" or "persistent" constraint on all the rest of the requirements?

This paper advances an approach to handling NFRs in SAFe which promotes the concept that NFRs are more valuable when treated as first-class objects in our business-facing testing and dialogs. It suggests that the business would be highly interested in knowing, for example, how many concurrent users the system can sustain online. If you're not sure about this, just ask the business people around the healthcare.gov project! One outcome of this approach is that a process emerges that reduces our need to treat them as a special class of requirements at all.

If we expose the NFRs to the business, in a language and manner that creates shared understanding of them, we can avoid surprises while solving a major challenge.

Please consider the following Gherkin example:

Feature: Online performance
  In order to ensure a positive customer experience while on our website
  I'd like acceptable performance and reliability
  So that the site visitor will not lose interest or valuable time

  Scenario: Maximum concurrent signed-in user page response times
    Given there are 1,000 people logged on
    When they navigate to random pages on the site
    Then no response should take longer than 4 seconds

  Scenario: Maximum concurrent signed-in user error responses
    Given there are 1,000 people logged on
    When they navigate to random pages on the site for 15 minutes
    Then all pages are viewed without any errors

These are pretty straightforward and easy-to-understand test scenarios. If they were managed like any other feature in the system, their creation, elaboration and implementation would serve as a 'forcing function', where derived value in the form of shared understanding between the business and the development team would be gained. As well, these directly executable specifications could be automated so that they run against every build of the software. This fast feedback is very important to development flow. If we check in a change, perhaps a configuration parameter or a new library, that broke any NFR, we'd know immediately what changed (and where to go look!). Something that is also very valuable (and often overlooked!) is that each build serves as a critical ongoing baseline for comparison of performance and other system capabilities.
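
To make the wiring concrete, here is a minimal Cucumber-JVM step-definition sketch for the first scenario above. Only the Cucumber annotations are standard; the LoadTestHarness class and its methods are hypothetical placeholders for whatever load-generation tool (JMeter, Gatling, a cloud service) you actually drive from these steps.

// Hypothetical step definitions for the "Maximum concurrent signed-in user
// page response times" scenario. LoadTestHarness is a placeholder for a real
// load-generation integration.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import java.time.Duration;

public class OnlinePerformanceSteps {

    private final LoadTestHarness harness = new LoadTestHarness();

    @Given("there are 1,000 people logged on")
    public void thereAreConcurrentUsersLoggedOn() {
        harness.signInConcurrentUsers(1_000);
    }

    @When("they navigate to random pages on the site")
    public void theyNavigateToRandomPages() {
        harness.browseRandomPages();
    }

    @Then("no response should take longer than 4 seconds")
    public void noResponseTakesLongerThanFourSeconds() {
        Duration slowest = harness.slowestResponse();
        if (slowest.compareTo(Duration.ofSeconds(4)) > 0) {
            throw new AssertionError("Slowest page response was " + slowest);
        }
    }

    // Placeholder: in a real project these methods would drive your load tool
    // and read its metrics. They are assumptions, not part of any library.
    static class LoadTestHarness {
        void signInConcurrentUsers(int users) { /* start the load profile */ }
        void browseRandomPages() { /* execute the random-page workload */ }
        Duration slowestResponse() { return Duration.ZERO; /* read tool metrics */ }
    }
}

A SpecFlow or Fit/FitNesse equivalent would follow the same pattern: the natural-language scenario stays business-readable while the glue code delegates to the performance tooling.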

Any NFR expressed in this fashion becomes a form of negotiation. It makes visible economic trade-off possibilities that might not otherwise be well understood by the business. For example, if push came to shove, would there still be business value if, under sustained load, page responses were occasionally allowed to stretch to 5 seconds?

Another benefit of writing the test first is that it increases the dialog about *how* we will implement the NFR scenario, which helps to ensure, by definition, that a "testable design" emerges.

This approach to requirements/test management is known as "Behavior Driven Development" (BDD) and "Specification by Example". The question of how and when to implement these stories in the flow sequence remains a challenge, and the remainder of this article addresses it directly. I'll describe one solution in SAFe.

The recommendation is to implement the NFR as an executable requirement using natural-language tools like Cucumber, SpecFlow (which supports Gherkin) or Fit/FitNesse (which uses natural language and tables) as soon as they are accepted as NFRs in an iteration as part of the architectural flow. Create a Feature in the Program backlog that describes implementation of the actual NFR (load, capacity, security, etc.) and treat it like any other feature at that point. Have the system team discuss, describe and build the architectural runway to drive the construction of the systems that will support the testing of them. Use the stories as acceptance against the architectural runway, if that is appropriate. If you do not implement the actual test itself right away (not recommended), at least wire it up to result in a "Pending" test (also not really recommended, but I'll describe that more in a moment). When the scenarios are running in your continuous integration (CI) environment, the story can be accepted. With regard to your CI, keep in mind that some of these tests, with large data sets or with long up-time requirements, will take a while to complete, so it is very important to separate them from your fast-failing unit tests.
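
One way to achieve that separation, assuming Cucumber-JVM with the JUnit 4 runner, is to tag the NFR scenarios (for example, @nfr above the Scenario line) and drive them from their own runner class in a separate CI job. Exact package names and options vary by Cucumber version, so treat this as a sketch rather than a recipe.

// Sketch: a separate JUnit runner for scenarios tagged @nfr, so the slow
// load/capacity suites run in their own CI job, apart from the fast unit tests.
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // assumed location of the .feature files
        tags = "@nfr",                              // run only the scenarios tagged @nfr
        plugin = {"pretty", "html:target/nfr-report.html"}
)
public class NfrScenarioRunner {
    // Intentionally empty: the annotations tell Cucumber what to run.
}

For the "Pending" option, an unimplemented step definition can simply throw Cucumber's PendingException, so the scenario reports as pending instead of silently passing.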

The next important step is to make these tests visible to the business and to the development team. One way to make them visible to the business, along with your other customer-facing acceptance tests, is to use a tool like Relish that can publish them, along with markup and images as well as navigation and search.

Another recommendation in this approach would be to build a “quality” dashboard using the testing quadrants as described earlier. That is, each quadrant would report a pass/fail/pending status that could be used for governance and management of the system. When all quadrants are green, you can release. You can get quite creative with this approach and use external data sources, such as Sonar and Cast (coverage and code quality tools, respectively) and even integrate with Q3 exploratory testing results, for example. There is work to be done in this area. Hopefully someone will write a Jenkins plugin or add this to a process management tool.
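
As noted, no such plugin exists yet; purely to illustrate the idea, a toy roll-up of quadrant statuses might look something like the following, where the quadrant names and the way results reach the dashboard are assumptions.

// Toy sketch of the quadrant dashboard idea: roll up a pass/fail/pending
// status per testing quadrant and report whether the build is releasable.
// The data sources (CI results, Sonar/Cast exports, exploratory-test notes)
// are assumed and would be wired in by a real implementation.
import java.util.EnumMap;
import java.util.Map;

public class QuadrantDashboard {

    enum Quadrant { Q1_UNIT, Q2_FUNCTIONAL, Q3_EXPLORATORY, Q4_NFR }
    enum Status { PASS, FAIL, PENDING }

    private final Map<Quadrant, Status> statuses = new EnumMap<>(Quadrant.class);

    void report(Quadrant quadrant, Status status) {
        statuses.put(quadrant, status);
    }

    // Releasable only when every quadrant has reported and all are green.
    boolean releasable() {
        return statuses.size() == Quadrant.values().length
                && statuses.values().stream().allMatch(s -> s == Status.PASS);
    }
}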

Using this approach you will always know the status of your NFRs and get the information you need in a timely fashion, when there is still time to react. This approach would help to eliminate surprises and remove the need for a major (unknown cost) effort at the end of your development cycle. In the case above, even if these tests had been marked "Pending", you would at least know that the status of these NFRs was unknown, which would increase trust and share the responsibility across the entire value stream.


What does it mean for the enterprise to be agile?

Closely allied to establishing the business objectives for adopting agile practices is a clear, shared understanding of what it means for an enterprise to be agile. This post summarizes what enterprise agility means from the perspective of senior executives and stakeholders.

“Agile” is a set of behaviors that help a business achieve its objectives. The most prevalent agile practice, Scrum, defines a set of project management-based behaviors that help practitioners (especially software practitioners) achieve those objectives. However, little is said in Scrum about how to be agile outside of the immediate environment of the Scrum teams. Team agility does not automatically engender enterprise agility.

Where a so-called value chain starts and ends will vary considerably from enterprise to enterprise, according to factors such as size, business area, degree of specialization, and the vendors and suppliers that form part of the larger value chain (or even 'ecosystem'). However, this is a bit like a "5 Whys" analysis: you have to recognize where it makes sense to stop. Mostly, a company's corporate boundary makes a natural place to stop (though ideally the whole external supply chain would be synchronized and agile). However, this may be too great a challenge for many organisations to begin with, so smaller organizational units and business units within the enterprise may have to suffice for the initial vision and implementation.

As a reference point, for a hardware-based product company the groups that might be considered for inclusion in the scope of enterprise agility could include: Sales, Marketing, HR, Executive Management, Software Engineering, Hardware Engineering, Product Definition, Product Releasing, Product Testing, Technical Documentation, Project Management, Programme Management, and Quality Assurance. Where any of these groups are excluded, there will probably be a detrimental reduction in overall agility.

Here are some of the major characteristics that an agile enterprise will typically exhibit, at the ‘manager’ and ‘senior executive’ levels (some apply more to some groups than others):

  • Commitment through close involvement and engagement with agile teams
  • Removal of organisational impediments and issues
  • Flexibly determining release content and being responsive to change: based on sustainable organisational capacity and economic value (including cost of delay); taking into account (test) results and feedback
  • Being servant leaders: inspiring, motivating and leading by example, including allowing teams to self-organise - "Self-organisation does not mean that workers instead of managers engineer an organisation design. It does not mean letting people do whatever they want to do. It means that management commits to guiding the evolution of behaviours that emerge from the interaction of independent agents instead of specifying in advance what effective behaviour is." – Philip Anderson, The Biology of Business
  • Demonstrating trust, especially in avoiding delving into (and controlling) the detail: but note also that trust is engendered by successful delivery
  • Focusing on throughput of (valuable) work rather than on 100% Resource Utilization
  • Recognizing the differences between repeatable and highly variable knowledge work (avoid purely “widget engineering”)
  • Evolving legacy practices into new ones (e.g. by evaluating and challenging old ways of working): powerful corporate forces can be afoot, so this is not easy.

Leffingwell's Scaled Agile Framework provides a suitable structure for scaling Scrum to enterprise levels and fills in many of the executive roles and functions required for success with agile at the enterprise level.

Some useful links: