Wednesday, December 8, 2021

What of Quality?

In our industry, we have allowed the word ‘quality’ to mean whatever the Quality Organization does; in other words, we have defined quality as product defect tracking, measurement, and prediction based upon testing and validation of functionality. This has been the norm for over 70 years, ever since the term ‘bug’ entered our software jargon. Now is a good time to step back and ask: what are all the attributes that determine product quality? Is product quality more than defect rates over time?

From a business perspective, customers consume our software product, it creates value for them, and that value delivers revenue to us. The value delivered to the customer is what enables us to continue creating new capabilities for them. What if we assign our definition of quality to the value that we continuously improve and deliver to our customers? If you agree, then we can ask: what aspects of our process help us to improve value, and hence quality? Remember, we make business decisions to fund developers to write code that is delivered in releases, validated, and consumed by customers. If you tend to agree with this, we could inspect quality by inspecting the aspects that create value: Business Decisions, Development, Code, Releases, Validation, and Customer Usage. Let’s call these six aspects Quality Contributors.

Can we compare Quality Contributors between organizations that develop and operate their businesses in a Traditional/Sequential way and those that are Scrum-based? We could deploy hordes of analysts to gather and process all kinds of numbers, and still not be sure we had addressed the apples-to-oranges comparison problem.

Maybe we can use relative comparisons based upon the fundamental behaviors, or trends, in how Traditional/Sequential and Scrum-based organizations create customer value. What if the relative comparison shows that one approach is better in four of the six Quality Contributors? Would we consider that quality is higher in that approach than in the other? Maybe. Let’s try this thinking and hope that we side-step a three-to-three tie.

Before we start, allow me to specify that in this comparison we are talking about world-class, mature Traditional/Sequential and Scrum organizations that have mastered their respective methods. There are examples of both types of organizations that have mastered what is presented here. There are no magic-happens-here gaps in what is compared (albeit there are gaps in how any one organization may practice Traditional/Sequential or Scrum methods today). This posting does not contemplate mixed modes like Kanban, Scrum-Fall, SAFe variations, Spiral, etc.

The Business Decisions Quality Contributor is defined as ‘the time from business commitment to customer availability, and the cost of changes during that period’. For example, a business will study the market, determine a list of required features, develop a comprehensive plan, and at some point early in development make a business commitment to the program. This starts the clock ticking. After this point, the cost of change means the cost of disruption and adjustment to the plan. The cost of change can be large depending upon how far along the program is and how impactful the change is.

The Development Quality Contributor needs a bit of an introduction before we define it. A key question is, ‘When is quality created during development?’ Some argue that quality is tested in by QA. I argue quality is validated by QA. QA doesn’t create or test in quality. Rather, quality is created in the mind of the developer once they comprehend the customer need and before they finish the last line of code. 

There is a time lag between when the developer finishes writing that last line of code and when they know the code has been validated as ‘known-good’. Before the code is known-good, if a defect is found, the developer must recall precisely the customer need and their code to correctly fix the defect. This period is the ‘Risk of Recall’. We know that developers who are distracted, even for short periods of time, must expend effort to re-engage the creative process. We know that the longer the time between creation and recall, the more intense the effort to recall the details and fidelity of the work. Erroneous recall can create follow-on defects that are unintentionally added with the fixes. At the point of GA, the developer knows that their code is known-good. At this point, they have the ‘Freedom to Forget’.

For this posting, the Development Quality Contributor is ‘the magnitude of Risk of Recall over time until the developer reaches the Freedom to Forget point’.

Let’s go to the next Quality Contributor, Code. All code has defects and defect densities. Let’s break the defects into two groups. Code quality and defect density studies of the 1970s and 1980s found that, based upon the original code’s design and implementation, there is a ‘basal’ rate of defect discovery that is relatively unchanged over time. Even with intensive testing and defect-fixing efforts, basal defect rates remain constant. The only method that fundamentally changes a component’s basal defect rate is to re-design and re-implement (from scratch) the component. What those studies don’t guarantee is that, after the re-design/re-implementation, the basal defect rate will be lower; in fact, it could be higher. The basal defect rate is the first component of the Code Quality Contributor. There are also ‘development’ defects, added during development and discovered during validation before the code is known-good. The rate of discovery of development defects is the second component of the Code Quality Contributor.

The Code Quality Contributor is ‘the development defect rate plus the basal defect rate over time’.

The Release Quality Contributor is how much work accumulates in a partially done state that must be completed before the release is shipped, or work in process (WIP) over time. Let’s look at manufacturing for a comparison. Assume a business assembles the same product at two factories; each factory has four manufacturing steps and a capacity of 1,000 products produced per month.

Factory A first assembles the month’s 1,000 subassemblies through Step 1. Factory A then takes the 1,000 subassemblies through Step 2 and then through Step 3. Hopefully, by the end of the month, Factory A finishes the 1,000 products by completing Step 4. Factory A’s queue depth is 1,000 at each step in the process. Factory B, however, takes one subassembly through Step 1 and moves it to Step 2 before starting another at Step 1. Factory B continues this process as the subassembly moves to Step 3 and Step 4. Factory B’s queue depth at each step is one. Hopefully, Factory B ships roughly 250 products each week to reach 1,000 products shipped by the end of the month. Factory A has an accumulating amount of work-in-process (WIP), peaking at 1,000 subassemblies during the month. Factory B has at most 4 WIP subassemblies at any moment in time. While both Factory A and Factory B ship 1,000 products per month, lower WIP is considered consistent with higher quality. If a process or material defect is discovered, say during Step 4, Factory B, with its lower WIP, will discover the defect sooner, total rework is lower, and waste is minimized.
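
To make the arithmetic above concrete, here is a minimal sketch in Python using only the numbers from the example (four steps, 1,000 units per month); it is illustrative, not a model of any real factory.

```python
# Illustrative sketch (numbers from the example above, not real factory data):
# peak WIP and rework exposure for the two factories.

STEPS = 4
MONTHLY_CAPACITY = 1_000

def factory_a():
    # Factory A runs the whole month's batch through one step at a time,
    # so the entire batch sits in process until the final step completes.
    peak_wip = MONTHLY_CAPACITY
    # A defect introduced at Step 1 but only detectable at Step 4 is caught
    # after every unit has passed Step 1, so all 1,000 units need rework.
    rework_exposure = MONTHLY_CAPACITY
    return peak_wip, rework_exposure

def factory_b():
    # Factory B is one-piece flow: each subassembly moves through all four
    # steps before a queue can build up behind it.
    peak_wip = STEPS  # at most one unit per step
    # The same defect is caught the first time a unit reaches Step 4, while
    # only the handful of units currently in process carry it.
    rework_exposure = STEPS
    return peak_wip, rework_exposure

if __name__ == "__main__":
    for name, factory in (("Factory A", factory_a), ("Factory B", factory_b)):
        wip, rework = factory()
        print(f"{name}: peak WIP = {wip}, units exposed to rework = {rework}")
```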

As in manufacturing, product development cycles also carry WIP. WIP is the partial work accumulated until the product is ready for shipment to customers. WIP in development, as in a factory, is an indicator of quality and represents risk due to the unknown work remaining. In development, we don’t have discrete steps as easily discernible as manufacturing steps, nor are we able to inspect software WIP as easily as subassembly WIP. To ensure that we have finished all WIP, we run validation tests to confirm that no WIP remains before the product goes to market. We are able to discern the effort placed in development (WIP building) and the amount of validation completed (WIP decreasing) until the release is back at known-good.

We will use the definition ‘development effort less validation effort until known-good is reached over time’ as our Release Quality Contributor.

Validation is the process by which we come to know that the product is known-good and ready for customer consumption. Validation proves that all WIP has been completed, otherwise a flag (a development defect) is raised for development to ensure that the work gets completed before shipment. Let’s assume that Traditional/Sequential and Scrum-based validation processes are rigorous and equally funded. We will inspect the utilization of validation resources and frequency of validation stalls. A validation stall is when the validation process is stopped and reverted to a previous state due to a failure in the product under test. Validation stalls create inefficiency of testing and, potentially, schedule impacts.  Resiliency of the validation process can minimize validation stalls, and our assumption is that the validation process is world class. However, defects will periodically halt validation. 

The Validation Quality Contributor is ‘the percentage of validation resource utilized and rate of validation stalls over time’.

‘Code Currency’ is a major industry topic. Code Currency is defined as the percentage of customers who operate ‘current’, or GA(n-1), code. (If Finance was hoping that currency was about making more money with the code, they were disappointed.) There are two key quality reasons why customers should always be using the latest code: the most recent release has the benefit of the most mature version of the validation process, and it has the most fixes that address the basal defects. The customer, even without using a single new feature, has a better-quality product. Of course, the new features are an added benefit. Any delay in using the newest release is a needless lowering of a customer’s perception of a product’s quality.

The last Quality Contributor is Customer Usage based upon Code Currency, where we measure the rates of customers’ adoption or usage of the most recent release.

One final point: I use GA(n) to refer to the current release under development, GA(n-1) for the previous release, and GA(n+1) for the next release.

For a Traditional/Sequential business, I’m going to fix the period between GA releases at 3 quarters, with commitment for GA(n+1) happening a quarter before GA(n) ships. The planning horizon is approximately a year. The Traditional/Sequential business’s development processes are all mature and best practice.

For the Scrum business, I’m going to fix the sprint at a two-week duration, with GA(n) delivered at the end of the sprint. GA(n+1) happens two weeks later. The Scrum business is mature in correct grooming, modern code validation, and state-of-the-art deployment techniques for either as-a-service delivery or enterprise software (yes, there is enterprise software being updated every two weeks... just watch your laptop do its Windows thing or watch Amazon deploy their AWS services in their data centers).

Ready to start comparing Traditional/Sequential and Scrum-based organizations? At the end of each comparison, I’ll declare whether the Traditional/Sequential or the Scrum-based business wins and why, or call it a draw.

Business Decision Quality Contributor

A Traditional/Sequential business commits to the GA(n+1) release one quarter before GA(n). The period for potential re-planning is 4 quarters, and the cost of a re-plan increases as WIP builds because the plan is already in execution and the code is partially implemented. Changing plans means going back and rooting out partial work while adding in new work. The cost of a re-plan after Functional Complete is lower because there is no time left for new development, so rational options are limited. Another cost of change arises when a narrow, isolated change impacts the GA(n+1) date: all other work is delayed simply because the release is delayed. If a competitor does something near mid-cycle of GA(n+1), the business faces either the most costly change to GA(n+1) or waiting to respond in the next release, in this case 1.5 years out in time.

A Scrum business spends extensive resources grooming work in such a fashion that all Scrum teams can complete the customer increment for GA(n), however small, within a two-week sprint. Changes in plans impact the previous grooming and, depending on the degree of the change, impact GA(n+1) and later releases. An isolated change in one team does not impact the other teams’ delivery of GA(n) or the subsequent GA(n+1).

If a competitor does something mid-cycle of a sprint, the team can take on grooming and tradeoffs to decide when to phase it into the teams’ work plans. While that is happening, teams continue to complete customer value increments in GA(n).

Comparing the two, the Traditional/Sequential business has a longer period of re-planning and cost impacts on a release simply because of the longer release cycle. The intensity and cost of grooming is higher in Scrum, and with a change some or all of that grooming can become waste. One could argue that these are roughly equal. The difference that surfaces is how much of the existing work, or customer value, goes to market and how quickly the future work can be redirected and brought to market. In this comparison, the Scrum business has the advantage, with teams creating small increments and all increments going to market at the end of the sprint, GA(n). The increments reflecting the changes will get to market quicker, in the GA(n+1) and GA(n+2) sprints.

Development Quality Contributor

Traditional/Sequential development starts doing work as early as, if not before, the commitment agreement, a year before GA(n+1) releases. The developer’s Risk of Recall starts slowly and rises continuously until GA(n+1) happens. All developers hit their peak Risk of Recall just before release because validation could uncover a development defect at any time. Even at GA(n+1), they have already started development on GA(n+2), so they never truly hit a Freedom to Forget point at GA. While they will eventually reach a Freedom to Forget point for a specific piece of work when it releases, given that there is always WIP, they never have a point in time during the release where they can completely forget.

Scrum teams start doing work right after the sprint review meeting. They must return their code to GA quality (known-good) with each completed user story; this can happen multiple times within a sprint. Their Risk of Recall rises and falls with each user story implemented and completed.

The Development Quality Contributor is significantly better with Scrum as developers can be focused on one customer increment until done at known-good. Once done, they can freshly take on the next piece of work with the freedom to forget the previous story’s work.

Code Quality Contributor

For simplicity’s sake, let’s assume post-GA code for both Traditional/Sequential and Scrum businesses has roughly the same basal defect rates. I’m happy to reconsider this assumption if there’s research showing one to be materially different from the other.

Focusing on development defect rates: in a Traditional/Sequential business, the large-batch, large-WIP nature of the work and validation happening late in the release plan mean an ongoing buildup of development defects in the code. These defects are uncovered when QA fires up validation. As validation progresses, the development defect rate spikes. Developers focus on fixes. The development defect rate drops until it is reduced to zero for the final validation.

A Scrum business has a near-constant development defect rate. The reason is that test pressure is constantly applied to the code and developers must continuously return the code to known-good. There is no ongoing buildup of development defects. The feedback cycles to developers are much quicker. Any newly introduced defects are readily surfaced and fixed.

Some will give me eye-rolls when I say that while Traditional/Sequential development has spiky defect rates, the net number of bugs found and fixed will be similar to Scrum development over the same period of time. It looks like we’re heading toward a tie on the Code Quality Contributor by admitting this, right? But wait. While the number of development defects under both methods may be the same over time, the faster time to detection and fix means that, with Scrum, a higher-quality fix is available sooner and with fewer propagating impacts to other teams.
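
To illustrate why equal defect counts need not mean equal quality, here is a toy model in Python. Every number in it (300 defects, a 36-week release window, a 9-week back-end validation phase) is an assumption chosen purely for illustration, not data from this post; the point is only that the average time a defect lives before it is fixed differs by an order of magnitude.

```python
# Purely illustrative model with assumed numbers (not measurements): both
# approaches introduce the same 300 development defects over a 36-week release,
# but differ in how long a defect lives before it is detected and fixed.

TOTAL_DEFECTS = 300
RELEASE_WEEKS = 36
VALIDATION_WEEKS = 9   # assumed back-end validation phase for Sequential
SPRINT_WEEKS = 2

# Defects are introduced at a steady rate across the release window.
intro_weeks = [RELEASE_WEEKS * (i + 0.5) / TOTAL_DEFECTS for i in range(TOTAL_DEFECTS)]

# Traditional/Sequential: a defect surfaces only once validation starts, so it
# waits from its introduction until the validation phase (plus ~a week to fix).
validation_start = RELEASE_WEEKS - VALIDATION_WEEKS
sequential_latency = sum(max(validation_start - w, 0) + 1 for w in intro_weeks) / TOTAL_DEFECTS

# Scrum: the code returns to known-good within the sprint in which the defect
# was introduced, so a defect lives roughly half a sprint on average.
scrum_latency = SPRINT_WEEKS / 2

print(f"Average weeks a development defect lives (Sequential): {sequential_latency:.1f}")
print(f"Average weeks a development defect lives (Scrum):      {scrum_latency:.1f}")
```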

Additionally, over the course of a Traditional/Sequential development effort, there will be scope change; the team may spend time finishing WIP and fixing defects that are no longer important to the release. With Scrum, the scope change simply pushes items down the product backlog, so the team never builds the now-deprioritized feature and therefore never creates its defects. The Code Quality Contributor is better in Scrum than in Traditional/Sequential.

Release Quality Contributor

A Traditional/Sequential business builds WIP from commitment until Functional Complete and starts to burn down WIP as validation takes hold. The WIP buildup lasts multiple quarters, with a quarter or so at the back end to burn WIP back down as the product returns to known-good at GA(n). In other words, once development starts on the release, the code is in a known-bad state until GA(n), where it momentarily returns to known-good.

In Scrum, the WIP is extremely small and contained within each Scrum team. Teams return their code to known-good multiple times within a sprint and always before a story is ‘done’. In other words, the product is always expected to be known-good, with short, frequent windows where a team’s code is known-bad.

The Release Quality Contributor is better in Scrum due to lower WIP and more time the product spends in a known-good state.

Validation Quality Contributor

In a Traditional/Sequential business, given the size and duration of WIP and the development defect rates, the impact on validation stalls is significant. Even with state-of-the-art validation and resources, the code is untestable during development and early validation. Validation resources must wait until the code reaches Functional Complete, and even then there are validation stalls due to the spike in development defect rates. In a Scrum business, the size and duration of WIP and the development defect rates have an equal but opposite effect: they impact validation stalls positively. Because the code is always kept in a near known-good state, validation can run non-stop. Even the most minor stalls are noticed immediately and have major impacts, so teams are constantly developing ways to keep the system operational even when their updated code fails. (Read up on continuous deployment with blue/green operations as examples.)
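
For readers unfamiliar with the blue/green operations mentioned above, here is a minimal conceptual sketch; the type and function names are hypothetical and not tied to any particular platform. The idea is simply that the new build is deployed to the idle environment and customer traffic only switches over if checks pass, so a failed update never takes the running system down.

```python
# Conceptual blue/green sketch (hypothetical names, no specific platform):
# deploy the new build to the idle environment, verify it, and only then
# switch traffic; otherwise the current environment keeps serving unchanged.

from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    build: str

def run_health_checks(env: Environment) -> bool:
    # Stand-in for the real smoke tests / validation suite run against the
    # idle environment before any customer traffic is moved to it.
    return env.build != ""  # trivially "passes" if something is deployed

def blue_green_release(live: Environment, idle: Environment, new_build: str) -> Environment:
    idle.build = new_build          # deploy the new build to the idle environment
    if run_health_checks(idle):
        return idle                 # switch traffic: the idle environment becomes live
    return live                     # failed update: the live environment keeps serving

if __name__ == "__main__":
    blue = Environment("blue", build="GA(n-1)")
    green = Environment("green", build="")
    live = blue_green_release(live=blue, idle=green, new_build="GA(n)")
    print("Serving traffic from:", live.name, "running", live.build)
```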

Validation Quality Contributor is better in Scrum.

Customer Usage Quality Contributor

According to some Traditional/Sequential business Quality organizations, after the GA(n-1) release is declared ‘target code’, the reasonable adoption rate for that type of code is approximately 15% per quarter. It takes the time needed to declare the GA release ‘target code’ plus up to 6 quarters to hit 100% adoption; approximately 2 years. Why so slow? I would argue it is because Traditional/Sequential businesses have conditioned customers to expect that releases won’t work until a few remaining bugs are fixed post-GA or post-‘target code’, and because these organizations often make upgrades visible, opt-in, carefully planned, and resource-intensive events.

A Scrum business pushes adoption to 100% within a sprint post-GA. That’s a two-week period. If you want to account for blue/green and phased automated pushes, 100% adoption is achieved within two sprints, or four weeks post-GA. Why so fast? I would argue it is because the code is always kept in a known-good state, always under continuously improving validation pressure, and near-100% adoption is achieved with automated updates, rollback, and phased usage, where customers never see the impacts of the failures.
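
Putting the two adoption claims side by side, here is a short back-of-the-envelope calculation; the flat 15%-per-quarter rate, the 13-week quarter, and the two-sprint rollout are simplifying assumptions drawn from the figures above.

```python
# Back-of-the-envelope comparison using the figures quoted above: roughly 15%
# of customers adopt per quarter once 'target code' is declared, versus an
# automated Scrum-style rollout reaching everyone within about two sprints.

ADOPTION_PER_QUARTER = 0.15
QUARTER_WEEKS = 13
SPRINT_WEEKS = 2

adopted, quarter = 0.0, 0
while adopted < 0.999:                     # run until essentially full adoption
    quarter += 1
    adopted = min(1.0, adopted + ADOPTION_PER_QUARTER)
    print(f"Quarter {quarter}: ~{adopted:.0%} of customers on the new release")

print(f"Traditional/Sequential: ~{quarter * QUARTER_WEEKS} weeks after 'target code' "
      "(add the time to declare target code and you land near two years)")
print(f"Scrum with automated, phased rollout: ~{2 * SPRINT_WEEKS} weeks after GA")
```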

Customer Usage Quality Contributor is better in Scrum.

My tally shows 6 to 0 in favor of the Scrum business. If you are wondering whether I stacked the deck or faked the comparisons, the answer is absolutely no.

What has happened is that Scrum and other innovations have created a self-reinforcing positive cycle where improvement in one Quality Contributor reinforces improvement in another. While any one Quality Contributor may be only incrementally better, the combination of all six creates a powerful new dynamic in quality. Scrum businesses deliver demonstrably higher-quality customer value every time.


Thursday, September 9, 2021

Trusting In Pairs

I was hired into a company to run an enterprise product engineering organization. My job, beyond feature development and release delivery, was correcting a failing two-year Scrum transition.
After two years, the organization was delivering higher productivity per engineer, improved feature quality, and increased development velocity. The Scrum development process was corrected and maturing. The time came for restructuring.

Customers were using more Cloud offerings and were requesting a Cloud offering from the company. Since fewer development engineers were needed on the enterprise product team, as part of the restructuring I spun off a team of 30 highly skilled engineers and a manager to do strategy and early offering development for Cloud users. I took leadership of the team due to my Cloud experience.

I needed the new team to increase their learning speed by an order of magnitude, adopt new architectures, use new languages, and embrace DevOps. While none of this was in dispute with executive management or the team, adopting pairwise development (programming) would be the key enabler. I had discussed pairwise development earlier with executive management. Pairwise development was considered too radical due to the potential for talent loss and confusion with HR’s focus on individual performance.

Getting a credible strategy and initial offering into the executives’ hands was critical.  Other alternatives such as outsourcing, using a different internal team, hiring a new technical leader and hiring a new team were considered and ruled out because of timing, lack of skilled talent and lack of executive management support.  Most of the company considered Cloud offerings to be more fad than reality.  Using the existing team members meant that we had to increase their learning speed, knowledge and new value creation.

While there are many blogs and articles describing the benefits and pitfalls of pairwise programming, I found no definitive measures of before and after improvements, no assessment of team characteristics that would indicate success, and no clear business analysis of economic pros and cons.  I decided to build an argument based upon knowledge, earned trust and focused experimentation.

Pivotal Labs was founded on the principles of Agile, customer value first, and pairwise programming with their hands-on Dojo Labs. I asked the team to join me for an afternoon visiting the Cambridge Dojo Lab. After the visit, I held a group meeting where I talked about our journey together over the past two years, our skills gaps, our need for quick customer value development, and what lies ahead for our customers who will adopt Cloud offerings. I asked the team to take the next step to pairwise programming for the next 90 days as we focused on our first offering. I offered that the risk of failure would be owned by me and that the first pairings would be a starting point to be adjusted as needed. After 90 days we would assess their experiences and choose to adopt or adjust together as a team. I left the meeting to allow team discussion.

The team agreed with the reasoning and the approach.  They took the weekend to consider who each would choose as a partner.  Fortunately, the pairing requests were reasonable, and the manager was able to handle the conflicts.  While awkward, they started to work in pairs on the sprint objectives.  Slowly, most pairs took on their own singular identity and worked closely together on assignments.  For the pairs who were struggling, the manager worked with them and restructured a few pairs to increase the chances for functional pairings.

As soon as the team agreed to pairing, I engaged executive management across the division. I informed them of the reasoning and the team’s agreement. Since the risk was limited, the need was clear, and the potential upside was explained, management signed off on us continuing with pairwise development. I engaged HR to ensure that we would abide by their focus on individual performance.

Immediately, the rate of learning accelerated. Pairs were more willing to take up things that they didn’t know. They helped each other understand and apply the new technologies. While individuals had historically wanted months to read, learn, and apply, pairs took days to dig into new topics and quickly showed results. Pairs instructed the team at large on their findings and demonstrated the newfound value.

Pairs showed an increase in their willingness to take risks.  The ‘I have got your back’ reality of a pair’s partnership helped a pair member share concerns and find solutions.  This allowed the pair to take on more risk since another person was actively engaged to identify and address risks immediately.

After the 90-day period, the team agreed to keep pairwise development. Only one senior developer decided that he could not work in this model and left the project. The team created a go-forward strategy and delivered the Cloud offering as a viability proof. Subsequently, the team joined another SaaS effort and released this new SaaS offering into production. Two-plus years later, the team continued to use pairwise development. When asked if they would go back, they could not imagine doing development any other way.

I should have done a few things differently. I should have dug deeper into documented cases where pairwise programming improved productivity, especially increased learning/application speed and risk taking. I should have bootstrapped a smaller team earlier; for example, the initial team of 30 might have gone pairwise up to 6 months sooner. I should have spent more time with management early on, especially on risk mitigations, and leveraged the company culture of investing in employee skills more.

Looking back, I believe that highly skilled engineers want to continue to improve their professional skills.  Showing how new approaches and technology helps their productivity and professional standing was a powerful force to motivate learning and facilitate application.

Thursday, February 18, 2021

Why (Batch) Size Matters

At one company, we worked to bring the development batch size down from multiple quarters across the whole organization to a few weeks of each team’s effort. To enable this endeavor, I defined a batch as development work completed to an organization-wide, agreed-upon definition of done and customer outcomes. The batch size was calculated from the time and effort required, from start to finish, by the people or teams involved to deliver the agreed-upon customer outcomes.


Even with this simple definition of a batch, a number of the company’s leaders engaged in passive-aggressive behaviors to undermine the movement to smaller batches. The executive sponsors focused on surfacing the passive aggressives’ reasoning and systematically addressed their concerns over time. In doing so, the development batch size was reduced, code quality increased, and predictability improved.


I found the passive aggressives’ reactions and thinking informative of the challenge facing organizations when moving from traditional, large batch development to agile, small batch development.  Allow me to represent their thinking here.


There are two vastly different principle sets at work behind traditional, large batch development and agile, small batch development.  The traditional development principles are based upon the efficiency gained in development by aligning partial work outcomes across all teams.  For example, aligning key requirements by a requirements completion date, aligning designs by a design completion date, aligning implementation by a functional completion date, aligning testing by a test completion date, and aligning system delivery by a system test completion date.  


The agile development principles are based upon the efficiency gained in development by always returning each small batch to a high quality of completeness and delivering the batch to final system testing or to the customer many times within a sprint.  


Both are valid claims to efficiency yet diametrically opposed on how they gain efficiencies of development.


Changing an organization from traditional, large batch development to agile, small batch development requires a different level of thinking across roles/teams, processes, architecture, requirements, validation, and delivery. At the root of the pushback was the change in development principle sets.


Claim: That’s not how we do work.  Response: Correct; small development batches cannot be achieved by doing what we did with large development batches. In Scrum and Scrum@Scale, the organization structures and individual/team roles are very different. We need to reset expectations of people’s roles and their metrics, including the leaders’ roles.


Claim: We need to make design changes as we implement and test features.  Response: We need to spend more time ensuring that APIs are correctly designed, validated, and backward/forward compatible so incremental changes can be made with each small batch. We have to design in perceived and conceptual integrity.


Claim: We lack the build and test speeds.  Response: Let’s prioritize time to speed up builds and test automation so we can reduce the development batch size over time. Otherwise, we’ll be wasting team members’ time waiting. The payback in increased velocity will offset the upfront costs.


Claim: Features take a lot of cross-team communication.  Response: The increased cross-team communication is a potential sign that our APIs are poorly designed, documented, and/or validated. Let’s spend time incrementally improving the places where cross-team communication is costly and can be significantly reduced by an improved API.


Claim: Features are bigger than a single sprint.  Response: Yes, features are bigger than a sprint and, more importantly, bigger than a batch when the batch size is smaller than a sprint. This means that our grooming has to take into account time for understanding overall customer outcomes, needed validation investments, good APIs, and breaking up customer outcomes so they can be incrementally delivered.


Claim: We have too many bugs to fix prior to release, so we’ll defer them and fix them in the next batch.  Response: Let’s step back; why are we deferring defects? They are deferred because we poorly estimate how much work can be completed in a batch. The larger the development batch, the less confident we are in our estimate. By shifting to small development batches, better APIs, better grooming, and a consistent definition of done, we increase our confidence and decrease our error margins. We won’t have to defer defect fixes. Equally, a batch is not finished until it meets the definition of done and is accepted by the product owner per the agreed-upon outcome.


Claim: Customers will never accept an increased frequency of product delivery.  Response: Why won’t they? Is it because our infrequent, complex releases, labor/resource-intensive updates, and deferred defects create poor product-update experiences? We’ll address these by prioritizing non-disruptive update capabilities, using the improved update capabilities to speed up our validation automation, never deferring defects, improving design integrity, and incrementally showing improvement in customer update experiences.


As we worked through these claims, responses, and changes, the passive aggressives started to sign on to small batches, albeit slowly. The best part was that customers noticed the improvements as the new releases became available.


Footnote for executives: note how investments in people, process, tools, methods, and architecture are key to moving to an agile development principle set. These can be done incrementally; however, they must be done with forethought and purpose. As noted in the responses, these investments are codependent with grooming the backlog, inspection of outcomes, and resolution of impediments. Transparency and inclusion of these investments helped this organization address the concerns and move forward together.

Monday, January 4, 2021

Constraints: The Good, The Bad, and The Ugly

A frequent conversation with executives entwined in an agile transition is about the often-stated claim that an executive cannot tell the agile development team what to do, how to do it, or when it must be done. I ask the executive where the teams picked up these claims. Her response is that she is unsure; she says it may have been stated during the Scrum team’s training, and that their Scrum Master claims it to be true. I ask how, given this situation, she communicates status to her stakeholders and boss. After a pained expression, she says, ‘that’s why I’m talking to you.’

Talk about constraints!  My executive friend seems to be the one constrained by her teams and team leaders. Constraints are an “applicable restriction or limitation, either external or internal to a project, which will affect the performance of a project or process”, as defined by CrossLead, an executive coaching firm. For me, teams forcing constraints on executives seems a bit odd and is likely the ugly side of constraints.


So, are constraints good?  Let’s use CrossLead’s definition to rephrase this question: are ‘applicable restrictions and limitations, either external or internal to a project, which will affect the performance of a project or process’ good? Let’s try this example. Say that we have an agreement with the company’s investors for a specific level of funding for a specified period of time to deliver a capability. Is this a good constraint? I would argue that, if reasonable and real, knowing this constraint is helpful during teams’ and leaders’ decision making. Yes, it’s a good constraint, especially when compared to not having investors or funding.


Other examples of good constraints are: the requirement that one collaboration and work tracking tool is used consistently across all teams, the definition of done for completeness of work, the timing of Sprint standups, planning, review and retrospective meetings, the structure and priorities of various customer and business outcomes in the backlog, the definition of teams/roles across the organization, the offering architecture and design, and the build/devops methods.


These good constraints provide the correct context for teams to efficiently and effectively operate together in creation of high-integrity customer outcomes.  In other words, these restrictions and limitations help teams in delivering the desired outcomes together.  They act as a binding function bringing teams together.  You might consider these good constraints akin to a collaboration framework.


So, what type of constraints are bad?  Clearly, those that inhibit teams from delivering their outcomes. Many times, these constraints are undocumented, unspoken, and/or out-of-date decisions that have become ingrained in an organization’s thinking.


When I hear comments such as ‘that’s not how we work’, ‘that’s against policy’, or ‘that’s not what our customer is now demanding before they buy’, I immediately question where these constraints come from and how visible/valid they are today.

  

My first investigative approach with constraints like these is to ask, ‘Where are these constraints documented?’  More often than not, the answer is that the constraints aren’t documented. I guide the teams forward by asking them to develop the right constraints (if any) to take their place, document all of the constraints in force, and widely communicate them. If the constraints are documented, then I approach the leadership team to discuss whether they are aware of the constraints and whether the constraints remain in force. If the constraints are no longer in force, we actively communicate the situation, why the constraint is no longer in force, and what has taken its place. If the constraints are in force, we add them to our documented constraints and ensure that every team and leader understands each constraint and its reason for being.


Back to the ugly constraints.  These are constraints that are half-truths or based in ignorance, accepted by the organization and unquestioned by leadership. For example, take the three listed by my executive friend: an executive cannot tell the agile development team what to do, how to do it, or when it must be done. While it is partially true that an executive cannot tell the agile development team what to do, every team has stakeholders and has agreed to a prioritized backlog with customer outcomes and a common definition of done. This is how an executive works with teams to reach agreement on what will be done. The executive has to know how the teams decide what to do and has to participate in these methods for everyone to be aligned on what will be done.


The claim that an executive cannot tell the team how to do it is a half-truth because it doesn’t inform the executive that, as a stakeholder, she is able to inspect the outcomes at every sprint review meeting and to communicate potential changes in what’s desired for the upcoming sprints. If there’s a gap, backlog prioritization and grooming will help bring the stakeholders and the team together.


Lastly, the claim that executives cannot say when the work must be done is a confusion of constraints. Mature agile teams are able to communicate the rate of completion of customer outcomes, or the velocity of their work, and are able to project potential future outcomes based upon their properly groomed backlog. Mature agile executives should know how to use this information to correctly communicate to their stakeholders. If there are any concerns, the executives and teams have the grooming mechanism to identify and address them together, while maintaining excellent agile methods.


Notice the pattern here: the executives are active in the identification, disposition, communication, and articulation of constraints. By doing so, the executives positively affect the performance of their teams.