Friday, April 26, 2024

Salty Story Points

While salt adds flavor to an entree, too much salt makes it inedible. Similarly, relying only on ‘salty’ story point estimates for a release’s schedule projection may make the release untenable.

“How long will it take you?” is a question managers often ask developers. Industries that use highly repetitive steps or processes can measure the time taken to perform a task across various employees and create estimation standards for the effort per step or process. Unfortunately, in software development, the procedure to hammer out the desired customer outcomes, the design, the code changes, validation and deployment can vary greatly from one customer outcome to another, and from one developer or team to another.

In traditional development, we attempt to reach agreement on all functionality to be implemented in a large batch of work, often embodied in a release. We do sufficient design to understand the complexity and scope of the changes before starting implementation. After spending about 30 to 40% of the estimated effort on the project, we believe that we’ve done enough work to estimate the size of the remaining work and project a ‘commitment’ date for completion. These estimates, from my recent experiences, are often off by 25% to more than 250%. When the reality finally surfaces, the release is subject to requirement changes and functionality reductions.

Enter agile development. Here we reduce the batch size down into small increments that each must meet a known, high-quality definition of done. We track the rate of accumulated small increments as we build up sufficient value to release.

A way of reducing the batch size is to break up a large requirement into small user outcomes called user stories, and to estimate the work for each user story using story points. Story point estimation uses a Fibonacci sequence of sizes; after a discussion, the team agrees on which size to assign to the user story. If the size is too large, the team, with the product owner, often breaks the story apart into multiple, smaller outcomes, each with a smaller batch size and a smaller story point estimate.

Because story points are team specific, the team records its accumulated story point velocity for each sprint as it progresses. To estimate completion of a ‘release’, or an accumulation of stories, the team charts out the remaining unfinished stories with their story points, the team’s velocity and a confidence factor. This projection is a burndown rate over time, thus keeping the team-specific story points within the team. For those who have taken Scrum Master training, this approach should be fairly standard.
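The projection above can be sketched as a small calculation. All the numbers and the function name here are hypothetical, and real burndown charts track this sprint by sprint rather than in one shot:

```python
from datetime import date, timedelta

def project_completion(remaining_points, velocity_per_sprint,
                       confidence, sprint_length_days, start):
    """Project a completion date from remaining story points.

    confidence (0 < confidence <= 1.0) discounts the observed velocity.
    """
    effective_velocity = velocity_per_sprint * confidence
    # Ceiling division: a partially used sprint still costs a whole sprint.
    sprints_needed = -(-remaining_points // effective_velocity)
    return start + timedelta(days=int(sprints_needed) * sprint_length_days)

# A hypothetical team: 120 points remaining, averaging 20 points/sprint,
# 80% confidence, two-week sprints -> 120 / 16 = 7.5, rounded up to 8 sprints.
eta = project_completion(120, 20, 0.8, 14, date(2024, 4, 29))
print(eta)  # 2024-08-19
```

Note that the story points never leave the team: only the projected date does, which is the point of keeping team-specific estimates within the team.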

In a few organizations, the engineering managers ask me why they couldn’t just use time or effort estimates per story. To get a schedule estimate, the managers divide the team’s summation of effort estimates by the available team engineering size. The managers believe that everyone understands effort estimation; since each story is small and all work is broken down into small increments, the effort estimates should be reliable. If there’s slippage, the engineering manager can put pressure on the team or the individual to make good on their effort estimation commitment.

This way of thinking is intoxicating to engineering managers and leaders: just demand that each story point be an engineering day of effort. Export all of the stories with their story points from the team planning tool, like Jira, into a spreadsheet. Add up the story points. Add up the available team member days. Divide to get days remaining. Project out over a calendar (don’t forget holidays and vacation!). Now, we have a schedule commitment.
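For illustration, here is that ‘salt-only’ spreadsheet math sketched with made-up numbers. This is the oversimplification being critiqued, not a recommended method:

```python
# The 'salt-only' schedule math: treat one story point as one engineering
# day and ignore every other consideration that real estimation needs.
story_points = [3, 5, 8, 2, 13, 5, 8]   # exported from the planning tool
team_size = 4                           # available engineers

total_engineering_days = sum(story_points)                # 44
working_days_needed = total_engineering_days / team_size  # 11.0
# Project 11 working days over a calendar (minus holidays and vacation)
# and call it a 'schedule commitment' -- with no accounting for
# complexity, uncertainty, risk, migrations, or team dynamics.
```

The arithmetic is trivially easy, which is exactly why it is so tempting and so misleading.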

I liken this intoxicating simplicity to a chef who only cooks with the most common seasoning, salt. While every recipe will likely include salt, no recipe includes only salt as a seasoning. Imagine if a chef seasoned every dish with nothing but salt. This is akin to an engineering manager boiling down story points to engineering days. What’s missing from this dish are the other seasonings in the spice rack. For example, relating seasonings to things teams should consider when story pointing: cumin to complexity, garlic to familiarity and/or uncertainty, peppercorn to validation changes, cardamom to clarity of outcome, turmeric to architecture impacts, chili pepper to user impacts, basil to migration realities, thyme to risks and cinnamon to team dynamics.

Each seasoning adds something to the entree. Consideration of the overall complexity of the outcome, relative to how the product is currently structured, indicates how much change may be needed. Familiarity and uncertainty from previous team experiences with similar outcomes can add or remove story points because of shared understanding. Having to completely restructure how the product is validated, or needing no validation changes at all, adds or takes away story points. Outcome clarity, and how well the outcome does or does not fit within the current product architecture, may increase the team’s coordination with those who make architectural decisions. If users have to learn, unlearn and/or relearn something about the product, that deserves consideration in the story pointing due to user interface design and validation. If the internal mechanisms or data representations have to change, and therefore a partial or complete migration has to be planned, that will definitely add work to be done. If there’s an external risk, such as another team working in shared code so that the teams need to coordinate their changes, the estimation should be adjusted accordingly. Lastly, internal to the team, there may be a need for critical resources, skills and/or knowledge that are committed elsewhere (like to a family’s vacation); those commitments should be reflected in the story point estimation.

Since the team has likely dealt with these considerations story after story, sprint after sprint and release after release, they are best situated to enter into the deep, experience-based collaboration needed for efficient story point estimation without being forced into the overly simplified short-hand of engineering days.  A team acting as a chef looking at a user story as an entree will want to consider all of their seasonings, including salt, for their recipe.  The mixture of all required seasonings may need to be used to make the entree deliciously flavored for consumption. 

The key ingredient necessary for being able to estimate time is a clear and mature ‘definition of done’.  Regardless of the entrees needing just salt or a large mixture of seasonings, the rate that the team is able to complete small increments of ‘done’ work establishes the consistent rate of work and allows the team to project a credible schedule.  

So, if you’re an engineering leader who is demanding that every dish on the menu must only be prepared with salt as a seasoning, you’re going to end up with simply too salty story points and inedible releases.


Monday, March 11, 2024

It Depends

 As part of an agile transformation, I was guiding an organization towards smaller batches enabling more frequent inspections of the organization’s progress against a ‘known good’ definition-of-done.  For complex agile projects, this approach validates that the correct collective progress is being made, and ensures issues are addressed as they are uncovered.

The organization was insistent on retaining their traditional development mindset and keeping their dependencies documented using ‘depends on’ links, effectively generating a dynamic Gantt chart. Whenever a team issued a new dependency link, the dependency got an immediate high priority from leadership until the two teams agreed upon a path forward together. A new dependency would disrupt the receiving team, who had other critical work to complete. Over time, the teams reverted to ongoing partial work with ever-increasing dependencies on other teams. The onslaught of dependencies disrupted teams from reaching their sprint goals, lowered their definition-of-done to maintain velocity, and undermined their predictability.

How one handles dependencies is completely different in traditional development compared to agile development. I have struggled to help leaders understand that there are two ways of delivering complex programs, that these two ways are based upon different principle sets, and that they are executed with different expectations and rituals.

Traditional development follows a series of milestones of progressive decision making over time, for example, serial agreements to business case, product requirements, architecture & design, implementation, validation, market readiness and finally product launch decisions.  A traditional development Gantt chart maps out the program plan with dependencies within the phases showing where one team’s outcome enables another team’s work.  This is consistent with traditional development’s making a plan and executing the plan.  The use of the phases helps provide early detection that something’s amiss.  For example, if a team cannot complete all of their architecture & design decisions by the design complete date, executives know that they have an issue early in the program.

Agile development is different, first and foremost, because the product is in a continuous state of being ‘ready to ship’, or ‘done’. Doing this means that the product backlog and architecture have been constructed such that the teams can work independently and incrementally in design, development and validation. At any point in time, all teams can operate and inspect the current ‘production’ system. When everyone agrees that sufficient customer value has been achieved, the system is immediately released to customers. Equally important, no one team is allowed to cause builds of the whole system to fail, validation to become stalled, or deployments to the staging environment to pause. Even during early development, any of the aforementioned events is equal to a production outage, and teams do everything to return the system to ‘production’ quality.

So, how are the interim agile development dependencies handled across teams? Think of a traditional program’s Gantt chart as a horizontal relationship map over time. Now rotate the Gantt chart 90 degrees, so that an incremental agile program outcome is at the top and all necessary work cascades downward to stories that teams complete during the same sprint (or a few sprints). This is called a hierarchically structured backlog: stories are completed to enable an epic, epics are completed to deliver a theme, and each issue type in the hierarchy is ‘done’ and inspected. Each and all teams’ progress can be inspected and tracked.
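A minimal sketch of that hierarchy follows. The item names and the single generic node type are illustrative, not any particular planning tool’s data model:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """One node of the backlog hierarchy: theme -> epics -> stories."""
    name: str
    done: bool = False
    children: list = field(default_factory=list)

    def is_done(self):
        # A parent is 'done' only when every child beneath it is done.
        if self.children:
            return all(child.is_done() for child in self.children)
        return self.done

theme = Item("Incremental outcome", children=[
    Item("Epic A", children=[Item("Story A1", done=True),
                             Item("Story A2", done=True)]),
    Item("Epic B", children=[Item("Story B1")]),
])
# Story B1 blocks Epic B, which blocks the theme -- inspection shows
# exactly where the cross-team outcome is stalled.
```

The design choice to make ‘done’ roll up the hierarchy is what makes every level uniformly inspectable.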

By using vertical dependencies, or a cross-organization hierarchically structured backlog, planning is focused on a series of inspectable, incremental, cross-team outcomes that keep the system operational even when feature-poor. This allows surfacing unknown risks early and uniform inspection of progress across the whole organization. When there is a failure, the identification of the root cause and corrective action can happen quickly to enable the current incremental outcome and adjust the future outcomes.

Agile development’s known-good, production-quality-first approach, versus traditional development’s feature-design-and-coding-first approach, allows leadership to have a common standard to inspect and understand progress. Traditional development’s strict phase-driven decision making allows leadership to have a different standard to inspect and understand progress. However, a mixture of low-quality implementation and loose phase-driven decision making means pure development chaos.

Friday, March 1, 2024

Transforming Architecture

Architecture, an architect’s role and their relationship to agile principles are seldom defined when I start an agile transformation. There are those who believe architecture is a lofty set of future technical needs or desires; for example, that architects provide solutions to address past technical debt that will never be implemented. There are those who believe that architects live in an ivory tower and profess designs that have no possibility of being implemented within the constraints of the -ilities (affordability, scalability, securability, sustainability, supportability, etc). There are those who believe that agile means doing whatever, whenever and however they want until there is something new and urgent that needs to be done instead. When all of these beliefs are held in an organization, architecture is dismissed.

Given this baggage, I start an agile transformation by asking: how do you define architecture and the role of architects? As expected, I get a mixture of responses across the organization depending upon who’s answering. Engineering managers tend to define the role as subservient to their role as managers, as in technical designs that fit into their envisioned schedule and feature set. Product managers define the role as subservient to their role of balancing stakeholders’ and customers’ needs, as in technical designs that support the whims of sales or customers’ desires. Support or quality leads define the role as ensuring supportability or delivering reliability. Even the technical leads will define their own roles as being subservient to everyone else.

When I ask, if a technical infeasibility surfaces, who raises it and resolves it? I’m surprised by their willingness to immediately take on technical debt and paper over the infeasibility. Or, worse yet, toss the technical issue to another team or organization and move forward with an unrealistic, low-quality solution.

To reset the discussion, I define architecture as ‘technical decisions that we hold ourselves accountable to’.  This means that the decisions (or agreements) are technical in nature and made by engineers, technical leads and/or architects.  Architecture decisions relate to but are not management, product, quality or support decisions.  Everyone is held accountable to these decisions which means that the decisions have to be written down and tested/checked against.   If there’s a discrepancy, we either change the implementation to be consistent with the technical decision(s), or we change the technical decision(s) for everyone.  Architects, or technical leads, are responsible for cultivating, documenting and, when necessary, making these technical decisions.  Architecture and architects stand as a separate task and job role.  They are part of the collaboration between Product Management, Engineering Management and the team.

This definition avoids the pitfalls of the above baggage by centering architecture in the space of documenting and brokering technical decisions that are written down and used.  Normally, engineering managers, product managers, support, and quality don’t want to cultivate or maintain a set of technical documentation.  Equally important, most everyone will agree that given the pragmatic nature of the definition, it avoids the ivory tower and irrelevance concerns.

I can hear the screams of the agile purists, ‘working software over comprehensive documentation’!  Most agree that ‘over’ does not mean ‘instead of’.  Both working software and documentation are highly valued and important.

Let’s step back and use Mary Poppendieck’s ‘Build Integrity In’ tools: Perceived Integrity and Conceptual Integrity. Perceived Integrity is the consistency in how we present abstractions and interactions to our users. This means that our abstractions, or user design decisions, are expressed in both internal and user documentation, and are carefully cultivated and validated with our users. Equally, this means that we have explicitly decided and documented who our users, or user personas, are. Therefore, I place the decisions of user design and user personas in the realm of architects’ technical domain. Any change in user personas has a massive impact on every aspect of the product and needs to be carefully considered and controlled. Architects tend to understand these impacts on Perceived Integrity better than the other roles.

Turning to Conceptual Integrity, which comes more naturally to technical leaders: Conceptual Integrity deals with the speeds, feeds, interfaces, APIs and functionality of the code itself. As we’ve learned, APIs are contracts between components. Neither component can change the contract without the agreement of all parties involved and a phased transition to the new agreement. A lot of modern language and API styles have helped to ease these transitions, but regardless, the contract has to be kept to keep the system operating. When these contracts are documented, maintained and tested against, the resulting system tends towards resilience.
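A phased transition of that kind can be sketched as follows. The function and parameter names are hypothetical; the point is that the old contract keeps working while callers migrate to the new one:

```python
import warnings

# Hypothetical API: the new contract takes report_id, while the
# deprecated legacy_id keeps working during the phased transition so
# that neither party breaks the agreement unilaterally.
def fetch_report(report_id=None, *, legacy_id=None):
    if report_id is None and legacy_id is not None:
        warnings.warn("legacy_id is deprecated; use report_id",
                      DeprecationWarning, stacklevel=2)
        report_id = legacy_id
    if report_id is None:
        raise ValueError("report_id is required")
    return f"report:{report_id}"

# Old callers still work (with a deprecation warning); new callers use
# report_id. Once all parties have migrated, legacy_id can be removed
# under a new, agreed contract.
```

Tests against both forms of the contract are what keep the agreement honest until the transition completes.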

Back to the ‘working software over comprehensive documentation’ concern. Recent innovations in validation, APIs, languages, CI/CD, tooling and UI development have moved us toward a reality where comprehensive documentation is also working software. As we maintain software, we can maintain architecture and our technical agreements. With each software change, we know if we’re staying aligned with those agreements.

Another benefit of this approach: by clearly defining and using user personas, all teams can now align their user stories and outcomes to exactly the user personas defined for Perceived Integrity. So as Product Managers, Engineering Managers and teams discuss potential user value, they have a common understanding of who they are discussing, targeting precisely the intended outcome in a way that is consistent with past delivered value.

During an agile transformation, it takes a while for the organization to grow accustomed to this clearly defined architecture role, responsibility and accountability.  Some organizations have created a ‘triad’ collaboration where Engineering Leadership, Architecture and Product Owners engage the team as a single voice during the team’s refinement, planning and execution.  Teams benefit because they know what’s expected of them so they can focus on creating the highest value for the customer.

Saturday, February 17, 2024

TLDR

If you have been reading my postings, you have noticed that I write detailed, complex and long compositions to explain my thoughts on agile principles and transformations.  I also do this in the normal course of planning and guiding agile transformations for organizations.  I like to share the ‘conscious’ side of ‘competence’ so others can explain the ‘why’ behind the ‘how’.

Needless to say, I often get TLDR in response, as in ‘Too Long, Didn’t Read’.  I have been asked to summarize my thoughts in a summary paragraph, a short presentation, and yes, once by a boss who told me to put the important parts in the email subject line.  Somehow, there’s an expectation that if I could only shorten the concepts to bullets, the ‘aha’ would happen across the organization.  Adoption of the concepts would be self-motivated and immediate.

This leads me to ponder how a medical doctor becomes one. Consider how the best and brightest high school students are guided to a strong ‘core’ set of biology and chemistry while getting their bachelor's degree. In medical school, they spend their first year learning how a healthy body works, in great detail and with hands-on experience (I won’t go into detail of the hands-on experience). Their second year is spent learning pathology, or why and how things go wrong in an unhealthy body. They finish out their last two years of medical school rotating between various medical disciplines, understanding the basic practices, procedures and realities while continuing to deepen their basic knowledge. They spend years preparing for the medical exam that will determine the medical specialty in which they will spend the next two to seven years as an intern after medical school. They have board certifications to master before they are allowed to freely practice as an attending physician.

Why do they spend this much effort to learn the complexity of the human condition?  One reason is that the downside of making a mistake is so high that it can cause undue pain and suffering, even death, as well as waste time and resources on missed diagnoses.

Let’s ponder a TLDR version of medical education. Let’s assume that we have the most excellent medical snippets from X (formerly known as Twitter), TED talks, YouTube, LinkedIn and TikTok. Let’s assume that a really smart designer using AI figured out how to place the right information in the right order in front of our medical students, of course monitoring dwell times, actions, and answers. Once the medical students have been exposed to the right materials long enough with sufficiently successful metrics, they are free to practice medicine.

Would you be willing to go to such a TLDR medical doctor?  

They might be able to diagnose simple cases.  They might be able to perform simple procedures.  They might be able to, when presented with x causes y causes z, reason with the patient that x causes y causes z.  They would likely be good at a particular ‘how’.

They would likely be unable to reason the complex ‘why’.  When they are presented with unseen representations or complex, multi-causal symptoms, they will not have the context to reason possible diagnoses. 

Why?  A simple diagnosis or procedure taught without context means that that same simple procedure may not apply in all contexts.  In fact, that same simple procedure may cause harm in many other contexts.  The ‘why’ explains context and the ‘how’ explains the procedure for the right diagnosis. 

Well, what does a medical education have to do with hi-tech? There are no life-threatening development teams out there. Right? While this may be true, technology development does have customers, investors, stakeholders, and co-workers who are depending on leadership knowing the ‘why’ and ‘how’ of complex situations, organizations, projects and products. Making a mistake does cause harm to these dependents. Why would anyone simply trust a TLDR-trained technology leader? Why would anyone trust an organization that demands TLDR communications or processes?

Allow me to redefine TLDR as Tough Learning Different Reasoning.  To build the necessary context takes time and exposure.  The context explains the solutions, procedure and rituals.  This is done by humans in taking time to learn, understand and experiment with new concepts.  Watching a lot of motivational TED talks won’t suffice.  Reading a ton of summaries or email subject lines won’t help. 

Tough Learning means that one has to spend time and effort to learn and think.  Simple impressions of Learning are insufficient to master the full context.  Different Reasoning means that the underlying principles and methods are unfamiliar.  To learn them requires practice and experimentation.  Few are able to read about the new rituals and discern the new principles.  Study, effort and practice are required for mastery.  

You should ask me, then: what test can you take to determine which TLDR you have been exposed to in your past? Here is the test: read the Poppendiecks’ book, Lean Software Development: An Agile Toolkit. It takes between 2 and 4 hours for a seasoned technical leader to read the whole book.

If your response is that you don’t have the 2 to 4 hours to read the book, then you know which TLDR that you’ve been exposed to.

If you read the book AND you can explain to yourself ‘why’ and ‘how’ ALL of the 22 Tools apply to Scrum, S@S and/or SAFe v6 (or later), you have been exposed to the second definition of TLDR and are a well trained agilist.

If you read the book AND you cannot explain to yourself ‘why’ and ‘how’ ANY of the 22 Tools apply to Scrum, S@S and/or SAFe v6 (or later), you have fallen victim to the first definition of TLDR. You should consider investing time to learn the foundations and principles of agile development.

If you read the book AND you can only explain to yourself ‘why’ and ‘how’ SOME of the 22 Tools apply, you have been exposed to a mixture of the TLDRs. You have more to learn. Focus on one of the tools and immerse yourself in building your understanding of that tool’s context. Move to the next tool until you can explain the ‘why’ and ‘how’ for all 22 Tools.

We expect our medical doctors to be consciously competent. Shouldn’t we expect that much of ourselves when doing agile development?

Tuesday, February 13, 2024

The Most Critical Step in Agile Transformations

The most enjoyable part of every agile transformation is the time spent with people during one-on-ones. They bring their realities and difficulties to the conversation. We sort out what’s going on. They allow me to share insights and provide alternatives for their consideration. They leave the discussions appreciative with ideas to consider for next steps. The best compliment that I can receive is when they thank me for allowing them time to think.

A question often asked after we’ve wrapped up a discussion is: from my perspective, what’s the most critical step in an agile transformation? They are always surprised that my answer doesn’t appear to conform to the Agile Manifesto. They expect something like: establish Scrum rituals. Or, do agile training. Or, define individual and team roles and responsibilities. Or, define the SAFe or S@S hierarchy. Or, establish a maturity model with a clear definition of done. And to be fair, these are important items to establish during an agile transformation, but they aren’t the most critical.

My answer is simply to establish a Learning Organization. That is, an organization that values curiosity, change, experimentation, inspections, introspection, education, teaching moments, trends, root cause analysis, improvement, and innovation.

I have found that many organizations are in an unconscious competence state, where they do what they do because they have documented processes for how they do everything but have forgotten why they do these things. Or they are in a telling leadership style, where they expect to ‘just tell the organization what and how to do everything’.

The urgency of needing to fix an agile transformation gone wrong, to move quickly to mature agile development, and/or the onslaught of business conditions reinforces the leadership’s expectation to do what they do faster, or to just be told what to do now. There’s no time to learn. Absolutely no time for experimentation. If there’s any failure, there’s only time to punish.

At the core of agile thinking are empirical inspections, or the Plan, Do, Check, Act (PDCA) cycle. This core is reinforced by Sprint Review meetings, retrospectives, burndowns, velocity and root cause analysis. How can any of these be done without a Learning Organization? Sadly, none can be done correctly. Organizations that perform these rituals without reasoning, or do Scrum in name only, aren’t improving, growing or learning.

When staging an agile transformation, my first act is to set up as many cross-functional, multilevel one-on-ones as possible. The one-on-one frequency varies based upon the particular needs of each individual. They are always confidential and designed as a safe place for exploration and learning. I always focus on root cause analysis in my questioning to help raise curiosity as to ‘why’. I have never been disappointed by the 5 ‘whys’ questioning method, which quickly guides our one-on-one to a deeper understanding of the situation and presents alternatives to consider. I explain how PDCA, retrospectives, etc. are aspects of learning and experimentation, and how those must be done to achieve improvements, albeit initially on a small scale.

What I’ve noticed is that, after a number of these meetings, those involved in my one-on-ones start to set expectations with their leaders and teams for root causes, deeper reflections, incremental improvements and seeking information on concepts they don’t understand. Learning is valued. Time is set aside for discussions. Experimentation is planned. Results are inspected. Adjustments are made.

As leaders mimic, teams start to learn and pick up on the value of Sprint Reviews and Retrospectives.  Teams become curious about ‘why’ the various rituals, metrics and roles are defined.  They become open to the role of the Scrum Master guiding the team to maturity.  This opens the door to more learning and improvements.  

A Learning Organization that remains within a single function, like engineering, can make good progress on an agile transformation, but one that spans multiple functions can make amazing progress on the agile transformation and the velocity of business value delivered.

The difficult step in a Learning Organization is to learn together across functional groups and across teams. This means understanding that the role of leadership in an agile organization is different from what they may have grown accustomed to. Functional leaders and their functions play important roles in agile organizations: setting up and enabling an architecture role that empowers teams to act and gain velocity; building a value chain from specification to builds to validation to deployment to enablement to sales to support; and defining a product backlog that is built on continuous increments of capabilities that are continuously deployed.

I liken creating a Learning Organization across functional groups to that of nurturing a human being.  When the human is a child, we nurture and educate them on the basics with an expectation that they will eventually grow and mature.  When they reach their teens, we help them understand the deeper, abstract nature of the world around them.  We can demand more from them, but they remain immature in other ways.  When they reach their twenties, we help them understand the interconnectedness and interdependence of complex systems.  We demand professionalism and perfection.  

I have found that a functional organization is well aware of its own shortcomings; however, it expects and demands perfection from all of the other functions. I have to point out that while the individuals and organizations may be highly experienced and successful, they too are in the midst of an agile transformation, both within their organization and in their relationships with the other organizations. We need to realize that together we operate more like a child that needs nurturing and education. This is far better than demanding that a child do something they are incapable of doing and expecting adult results.

As we grow being a Learning Organization, we as a whole organization will become teens and adults.  As mature adults, we now understand the complexities in delivering value and continuous improvement.  We innovate. We teach.  We are consciously competent together.