Saturday, January 31, 2026

Total Failure

I was in the restaurant waiting for my friend to discuss his current gig at a large SaaS company.  He had been hired to lead the effort to fix the company’s failed agile transformation, which had struggled for the past three years.  He wanted to share his current findings and brainstorm next steps.  He was running late, as his various overlapping meetings generated a non-stop flow of issues and corrective actions.  He finally poured himself into the chair next to me and paused to breathe.

He described how the company had restructured multiple times with frequent changes in leadership.  The new leadership team had aggressively backed the ‘agile’ brand, embraced every agile method, instituted CI/CD changes, and published multiple detailed customer roadmaps.  The CEO felt they had sufficient engineering staff to meet all these new demands.  The VPs were operating in silos with each wanting their own ‘agile’ tools and methods.  The Product Management organization operated in one tool while Engineering operated in another tool.  The major product was an amalgam from various technology suppliers and acquisitions.  Beyond a basic stitching of the pieces together, the product had never been refactored to improve user experience or operational efficiency.   They had weak architecture and poor operational control.  Meanwhile, the company was growing fast and expanding into new markets.  

Leadership published a long list of product ‘new ventures’: categories of work in which the organization had to deliver something ‘new’.  Meanwhile, they published customer commitment roadmaps that mixed sales commitments made to sign new customers with exciting new enhancements for the annual customer event.  Leadership demanded that Product Management put the litany of commitments and new ventures into the work tracking tools.  Since the organization was large with multiple Engineering organizations, they felt a flat list of outcomes assigned to teams made the most sense.  Teams duplicated the commitments as they saw fit and broke the work into their own incremental deliverables.  They were able to categorize the work by new venture and by desired release for reporting purposes.

The key coordination mechanism between teams was meetings, endless meetings.  When leaders were confronted with a new escalation, they held a meeting.  When a product build failed, they held a meeting.  For efficiency, they held meetings via chat sessions.  Leaders, middle managers and teams were flooded daily with new requests, new dependencies, and new requirements.  Product Managers would work out multi-sprint outcomes with their engineering teams, only to have these undone by new daily requests from other teams.  The changes in plans led to more meetings.

Seeing the increasing time spent in meetings and the slowing of work, Leadership rushed to the root cause of this situation: meeting efficiency!  Leadership rushed out new training for managers and teams on how to conduct meetings, whom to invite, and, if invited, how to decline the invitation.  They seemed to believe that everyone in the company had somehow forgotten this basic professional behavior in the din of ongoing work pressure.

Upon entry into the organization, my friend presented Leadership with his diagnosis: the organization lacked prioritization of defined outcomes, a prioritization that would lift the organization out of its thrashing and into a series of sensible, coordinated, successful deliveries.  His focus was on how to bring about collaboration and focus, rather than the ever-increasing ‘do everything’ driven by their belief that they had more than enough resources to do everything.

The shocking part was the hostility the recommendation received from Leadership.  The hostile responses were that his observations were just wrong, that the product was necessarily complex, that executives were too busy running the business, and that there were plenty of resources to do the work.  For raising his observations, his boss started to explain why Leadership was right and why my friend shouldn’t be so blunt in his assessment.  His boss needed my friend to be a better team player, provide supportive answers as needed, and not attempt to instruct Leadership on reality.

He paused just long enough to order his lunch.  Turning to me, he asked what I thought of his situation.

I shared the summary of Martin Seligman’s book, Learned Optimism: How to Change Your Mind and Your Life.  Seligman’s work builds upon learned helplessness, where an individual shuts down from having no control during adverse situations.  While his work focuses on individuals, I have seen the same behavior in organizations that disempower their employees and managers through relentless demands, constant changes, disconnected decision making and scapegoating.  Since there is no constructive structure, each manager and team controls what they can control: the work they choose to do for the day (or sprint), completing it to their group-specific definition of done while unwittingly accumulating organization-wide technical debt.

Sadly, I told him that I have only seen one thing that forces Leadership to change direction when an organization hits the learned helplessness state.  That one thing is total failure.  Total failure is when the organization fundamentally fails to deliver what’s promised to customers whereby those customers reject the product and likely switch vendors.  Total failure cannot be masked over by Leadership and forces the organization’s acceptance that fundamental change is required.  Most times, the fundamental change starts with changes in Leadership.  Sadly, total failure can lead to business failure too.

“Is everything lost?”, he asked.  

Here's where Martin’s book can help after total failure occurs and Leadership decides to sponsor meaningful change.  Martin talks about how an individual can learn optimism, hence overcome pessimism and learned helplessness, by thinking about one positive thought every day.  Once Leadership realizes that they must change and fix what’s broken, the organization can step back and make incremental changes to correct what’s been broken.  The analogy of one positive thought every day for the organization is the retrospective, action, and learning that an organization can take with each sprint, product increment and release. These small corrections will snowball into measurable improvements that can win back those customers.

However, until Leadership sets its sights on learning and improving over demanding and thrashing, a learned-helplessness organization will continue towards total failure.  Nothing my friend could do would stop this organization from reaching total failure.

Our lunch arrived.  

My friend sat there staring at his lunch.

“Should I start looking for another job?”, he sighed. 

“I would look for new opportunities, just in case you need them,” I replied.

Friday, April 26, 2024

Salty Story Points

While salt adds flavor to an entree, too much salt makes it inedible.  Similarly, relying only on ‘salty’ story point estimates for a release’s schedule projection may make the release untenable.

“How long will it take you?” is a question managers often ask developers.  Industries that use highly repetitive steps or processes can measure the time taken to perform a task across various employees and create estimation standards for the effort per step or process.  Unfortunately, in software development, the procedure to hammer out the desired customer outcomes, the design, the code changes, validation and deployment can vary greatly from one customer outcome to another, and from one developer or team to another.

In traditional development, we attempt to reach agreement on all functionality to be implemented in a large batch of work, often embodied in a release.  We do sufficient design to understand the complexity and scope of the changes before starting implementation.  At about 30 to 40% estimated effort spent on the project, we believe that we’ve done enough work to be able to estimate the size of the remaining work and project a ‘commitment’ date for completion.  These estimates, from my recent experiences, are often off by 25% to more than 250%.  When the reality finally surfaces, the release is subject to requirement changes and functionality reductions.

Enter agile development.  Here we reduce the batch size down into small increments that must each meet a known, high-quality definition of done.  We track the rate of accumulated small increments as we build up sufficient value to release.

A way of reducing the batch size is to break a large requirement into small user outcomes called user stories, and to estimate the work for each user story using story points.  Story point estimation uses a Fibonacci sequence of sizes; after a discussion, the team agrees on the size to assign to the user story.  If the size is too large, the team, with the product owner, breaks the story apart into multiple smaller outcomes, each with a smaller batch size and a smaller story point estimate.

Since story points are team specific, the team records its accumulated story point velocity for each sprint as it progresses.  To estimate completion of a ‘release’, or an accumulation of stories, the team charts the remaining unfinished stories with their story points, the team’s velocity and a confidence factor.  This projection is a burndown rate over time, thus keeping the team-specific story points within the team.  For those who have taken Scrum Master training, this approach should be fairly standard.
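As a rough sketch of the projection arithmetic, not drawn from any particular team’s tooling, the remaining story points, velocity and confidence factor might be combined like this (all numbers are hypothetical):

```python
import math

def sprints_to_finish(remaining_points, velocity_per_sprint, confidence=0.8):
    """Project sprints remaining by discounting velocity with a confidence factor."""
    effective_velocity = velocity_per_sprint * confidence
    return math.ceil(remaining_points / effective_velocity)

# Hypothetical team: 120 points of unfinished stories, averaging
# 20 points per sprint, at 80% confidence -> 120 / 16 = 7.5, so 8 sprints.
print(sprints_to_finish(120, 20, 0.8))
```

Note that the story points never leave the team; only the projected sprint count does.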

In a few organizations, engineering managers ask me why they couldn’t just use time or effort estimates per story.  To get a schedule estimate, the managers divide the sum of the team’s effort estimates by the available engineering headcount.  The managers believe that everyone understands effort estimation, and that since each story is broken down into small increments, the effort estimates should be reliable.  If there’s slippage, the engineering manager can pressure the team or the individual to make good on their effort estimation commitment.

This way of thinking is intoxicating to engineering managers and leaders: just demand that each story point be an engineering day of effort.  Export all of the stories with their story points from the team planning tool, like Jira, into a spreadsheet.  Add up the story points.  Add up the available team members’ engineering days.  Divide to get days remaining.  Project out over a calendar (don’t forget holidays and vacation!).  Now we have a schedule commitment.
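To show just how seductively simple this ‘salt-only’ arithmetic is, here is a sketch of it in code; the point totals, team size and dates are all hypothetical, and the one-point-equals-one-day assumption is exactly the flaw the rest of this post is about:

```python
from datetime import date, timedelta

def naive_commitment(total_points, team_size, start, holidays=()):
    """Walk forward over weekdays, spending team_size 'engineering days' per day."""
    remaining = total_points  # assumes 1 story point == 1 engineering day (the flaw)
    day = start
    while remaining > 0:
        if day.weekday() < 5 and day not in holidays:  # skip weekends and holidays
            remaining -= team_size
        day += timedelta(days=1)
    return day - timedelta(days=1)  # the last working day consumed

# 100 points, 5 engineers, starting Monday 2024-04-01: 20 working days.
print(naive_commitment(100, 5, date(2024, 4, 1)))  # 2024-04-26
```

The spreadsheet version is the same division and calendar walk; neither accounts for anything besides salt.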

I liken this intoxicating simplicity to a chef who only cooks with the most common seasoning, salt.  While every recipe will likely include salt, no recipe includes only salt as a seasoning.  Imagine if a chef seasoned every dish with nothing but salt.  This is akin to an engineering manager boiling down story points to engineering days.  What’s missing from this dish are the other seasonings in the spice rack.  For example, relating seasonings to things teams should consider when story pointing: cumin to complexity, garlic to familiarity and/or uncertainty, peppercorn to validation changes, cardamom to clarity of outcome, turmeric to architecture impacts, chili pepper to user impacts, basil to migration realities, thyme to risks and cinnamon to team dynamics.

Each seasoning adds something to the entree.  Considering the overall complexity of the outcome, relative to how the product is currently structured, captures how much change may be needed.  Familiarity and uncertainty from the team’s previous experience with similar outcomes can add or remove story points because of shared understanding.  Having to completely restructure how the product is validated adds story points; no change to validation takes them away.  Outcome clarity, and how well the outcome fits within the current product architecture, may increase the team’s coordination with those who make architectural decisions.  If users have to learn, unlearn and/or relearn something about the product, that deserves consideration in the story pointing due to user interface design and validation.  If internal mechanisms or data representations have to change, and a partial or complete migration therefore has to be planned, that will definitely add work.  If there’s an external risk, such as another team working in shared code so that the teams must coordinate their changes, the estimate should be adjusted accordingly.  Lastly, internal to the team, critical resources, skills and/or knowledge may be committed elsewhere (like to a family vacation); those commitments should be reflected in the story point estimate.

Since the team has likely dealt with these considerations story after story, sprint after sprint and release after release, they are best situated to enter into the deep, experience-based collaboration needed for efficient story point estimation without being forced into the overly simplified shorthand of engineering days.  A team acting as a chef, looking at a user story as an entree, will want to consider all of its seasonings, including salt, for the recipe.  The full mixture of required seasonings may be needed to make the entree deliciously flavored for consumption.

The key ingredient necessary for being able to estimate time is a clear and mature ‘definition of done’.  Regardless of the entrees needing just salt or a large mixture of seasonings, the rate that the team is able to complete small increments of ‘done’ work establishes the consistent rate of work and allows the team to project a credible schedule.  

So, if you’re an engineering leader who is demanding that every dish on the menu must only be prepared with salt as a seasoning, you’re going to end up with simply too salty story points and inedible releases.


Monday, March 11, 2024

It Depends

As part of an agile transformation, I was guiding an organization towards smaller batches, enabling more frequent inspections of the organization’s progress against a ‘known good’ definition-of-done.  For complex agile projects, this approach validates that the correct collective progress is being made, and ensures issues are addressed as they are uncovered.

The organization was insistent on retaining their traditional development mindset and keeping their dependencies documented using ‘depends on’ links, effectively generating a dynamic Gantt chart.  Whenever a team issued a new dependency link, the dependency got immediate high priority from leadership until the two teams agreed upon a path forward together.  Each new dependency disrupted the receiving team, which had other critical work to complete.  Over time, the teams reverted to ongoing partial work with ever-increasing dependencies on other teams.  The onslaught of dependencies kept teams from reaching their sprint goals, lowered their definition-of-done to maintain velocity, and undermined their predictability.

How one handles dependencies is completely different in traditional development compared to agile development.  I have struggled to help leaders understand that there are two ways of delivering complex programs, that these two ways are based upon different principle sets, and that they are executed with different expectations and rituals.

Traditional development follows a series of milestones of progressive decision making over time, for example, serial agreements to business case, product requirements, architecture & design, implementation, validation, market readiness and finally product launch decisions.  A traditional development Gantt chart maps out the program plan with dependencies within the phases showing where one team’s outcome enables another team’s work.  This is consistent with traditional development’s making a plan and executing the plan.  The use of the phases helps provide early detection that something’s amiss.  For example, if a team cannot complete all of their architecture & design decisions by the design complete date, executives know that they have an issue early in the program.

Agile development is different, first and foremost, because the product is in a continuous state of being ‘ready to ship’, or ‘done’.  Doing this means that the product backlog and architecture have been constructed such that the teams can work independently and incrementally in design, development and validation.  At any point in time, all teams can operate and inspect the current ‘production’ system.  When everyone agrees that sufficient customer value has been achieved, the system is immediately released to customers.  Equally important, no team is allowed to cause builds of the whole system to fail, validation to stall, or deployments to the staging environment to pause.  Even during early development, any of these events is equal to a production outage, and teams do everything to return the system to ‘production’ quality.

So, how are the interim agile development dependencies handled across teams?  Think of a traditional program’s Gantt chart as a horizontal relationship map over time.  Now rotate the Gantt chart 90 degrees, so that an incremental agile program outcome is at the top and all necessary work cascades downward to stories that teams complete during the same sprint (or a few sprints).  This is called a hierarchically structured backlog: stories are completed to enable an epic, and epics are completed to deliver a theme, where each issue type in the hierarchy is ‘done’ and inspected.  Each and every team’s progress can be inspected and tracked.
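The rotated structure can be sketched as a simple tree where ‘done’ rolls up from stories to epics to themes; this is an illustrative model with made-up item names, not any particular tracking tool’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """One node in a hierarchically structured backlog (theme, epic, or story)."""
    title: str
    done: bool = False                      # leaf stories set this directly
    children: list = field(default_factory=list)

    def is_done(self):
        # A parent is inspectable as 'done' only when every child is done.
        if self.children:
            return all(child.is_done() for child in self.children)
        return self.done

theme = BacklogItem("Incremental program outcome", children=[
    BacklogItem("Epic: billing", children=[
        BacklogItem("Story: invoice API", done=True),
        BacklogItem("Story: invoice UI", done=False),
    ]),
])

print(theme.is_done())  # False: the theme is not done until every story is
```

The inspection point at every level is what replaces the horizontal ‘depends on’ link: leadership inspects whether each node is ‘done’, not whether a date was hit.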

By using vertical dependencies, or a cross-organization hierarchically structured backlog, planning is focused on a series of inspectable, incremental, cross-team outcomes that keep the system operational even when feature-poor.  This surfaces unknown risks early and allows uniform inspection of progress across the whole organization.  When there is a failure, root cause identification and corrective action can happen quickly to enable the current incremental outcome and adjust the future outcomes.

Agile development’s known-good, production-quality-first approach gives leadership a common standard by which to inspect and understand progress, just as traditional development’s strict phase-driven decision making gives leadership a different standard.  However, a mixture of low-quality implementation and loose phase-driven decision making means pure development chaos.

Friday, March 1, 2024

Transforming Architecture

Architecture, an architect’s role and their relationship to agile principles are seldom defined when I start an agile transformation.  There are those who believe architecture is a lofty set of future technical needs or desires; for example, that architects provide solutions to address past technical debt that will never be implemented.  There are those who believe that architects live in an ivory tower and profess designs that have no possibility of being implemented within the constraints of the -ilities (affordability, scalability, securability, sustainability, supportability, etc.).  There are those who know that agile means do whatever, whenever and however they want, until there is something new and urgent that needs to be done instead.  When all of these beliefs are held in an organization, architecture is dismissed.

Given this baggage, I start an agile transformation by asking: how do you define architecture and the role of architects?  As expected, I get a mixture of responses across the organization depending upon who’s answering.  Engineering managers tend to define the role as subservient to their own, as in technical designs that fit into their envisioned schedule and feature set.  Product managers define the role as subservient to their balancing of stakeholders’ and customers’ needs, as in technical designs that support the whims of sales or customers’ desires.  Support or quality leads define the role as ensuring supportability or delivering reliability.  Even the technical leads will define their own roles as being subservient to everyone else.

When I ask who raises and resolves a technical infeasibility when one surfaces, I’m surprised by their willingness to immediately take on technical debt and paper over the infeasibility.  Or, worse yet, to toss the technical issue onto another team or organization and move forward with an unrealistic, low-quality solution.

To reset the discussion, I define architecture as ‘technical decisions that we hold ourselves accountable to’.  This means that the decisions (or agreements) are technical in nature and made by engineers, technical leads and/or architects.  Architecture decisions relate to but are not management, product, quality or support decisions.  Everyone is held accountable to these decisions which means that the decisions have to be written down and tested/checked against.   If there’s a discrepancy, we either change the implementation to be consistent with the technical decision(s), or we change the technical decision(s) for everyone.  Architects, or technical leads, are responsible for cultivating, documenting and, when necessary, making these technical decisions.  Architecture and architects stand as a separate task and job role.  They are part of the collaboration between Product Management, Engineering Management and the team.
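What ‘written down and tested/checked against’ can look like in practice is an automated conformance check that fails the build when code drifts from an agreed decision.  The sketch below is one hypothetical way to check a layering decision in a Python codebase; the layer names and the rule itself are illustrative, not from any real organization:

```python
import ast
from pathlib import Path

# The written-down technical decision, expressed as data:
# the 'ui' layer must not import the 'database' layer.
FORBIDDEN = {"ui": {"database"}}

def imports_of(path):
    """Yield the top-level module names imported by a Python source file."""
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            yield from (alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module.split(".")[0]

def violations(src_root):
    """Return (file, banned-imports) pairs that break a documented decision."""
    found = []
    for layer, banned in FORBIDDEN.items():
        for path in Path(src_root, layer).rglob("*.py"):
            bad = banned.intersection(imports_of(path))
            if bad:
                found.append((str(path), sorted(bad)))
    return found
```

Run in CI, a check like this makes the discrepancy visible immediately, so the team must either change the implementation or change the written decision for everyone.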

This definition avoids the pitfalls of the above baggage by centering architecture in the space of documenting and brokering technical decisions that are written down and used.  Normally, engineering managers, product managers, support, and quality don’t want to cultivate or maintain a set of technical documentation.  Equally important, most everyone will agree that given the pragmatic nature of the definition, it avoids the ivory tower and irrelevance concerns.

I can hear the screams of the agile purists, ‘working software over comprehensive documentation’!  Most agree that ‘over’ does not mean ‘instead of’.  Both working software and documentation are highly valued and important.

Let’s step back and use Mary Poppendieck’s ‘Build Integrity In’ tools: Perceived Integrity and Conceptual Integrity.  Perceived Integrity is the consistency in how we present abstractions and interactions to our users.  This means that our abstractions, or user design decisions, are expressed in both internal and user documentation, and are carefully cultivated and validated with our users.  Equally, this means that we have explicitly decided and documented who our users, or user personas, are.  Therefore, I place the decisions of user design and user personas in the realm of architects’ technical domain.  Any change in user personas has a massive impact on every aspect of the product and needs to be carefully considered and controlled.  Architects tend to understand these impacts on Perceived Integrity better than the other roles.

Now let’s turn to Conceptual Integrity, which comes more naturally to technical leaders.  Conceptual Integrity deals with the speeds, feeds, interfaces, APIs and functionality of the code itself.  As we’ve learned, APIs are contracts between components.  Neither component can change the contract without the agreement of all parties involved and a phased transition to the new agreement.  Many modern languages and API styles have helped to ease these transitions, but regardless, the contract has to be kept to keep the system operating.  When these contracts are documented, maintained and tested against, the resulting system tends towards resilience.
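A phased transition of an API contract can be as simple as serving both the old and new agreement until every consumer has moved.  This sketch uses made-up field names purely to illustrate the pattern:

```python
def invoice_payload(amount_cents):
    """Serve the new contract while still honoring the old one."""
    return {
        "amount_cents": amount_cents,   # new agreed field (integer cents)
        "amount": amount_cents / 100,   # deprecated field, kept until every
                                        # consuming component has migrated
        "schema_version": 2,
    }

payload = invoice_payload(1250)
assert payload["amount"] == 12.50       # old consumers keep working
assert payload["amount_cents"] == 1250  # new consumers use the new field
```

Only once all parties agree that no one reads the deprecated field does the contract change for everyone, and the documentation and tests change with it.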

Back to the ‘working software over comprehensive documentation’ concern.  Recent innovations in validation, APIs, languages, CI/CD, tooling and UI development have moved us towards a reality where comprehensive documentation is also working software.  As we maintain software, we can maintain architecture and our technical agreements.  With each software change, we know whether we’re staying aligned with those agreements.

Another benefit of this approach: by clearly defining and using user personas, all teams can align their user stories and outcomes to exactly the user personas defined for Perceived Integrity.  So as Product Managers, Engineering Managers and teams discuss potential user value, they have a common understanding of whom they are discussing, for precisely the intended outcome, consistent with past delivered value.

During an agile transformation, it takes a while for the organization to grow accustomed to this clearly defined architecture role, responsibility and accountability.  Some organizations have created a ‘triad’ collaboration where Engineering Leadership, Architecture and Product Owners engage the team as a single voice during the team’s refinement, planning and execution.  Teams benefit because they know what’s expected of them so they can focus on creating the highest value for the customer.

Saturday, February 17, 2024

TLDR

If you have been reading my postings, you have noticed that I write detailed, complex and long compositions to explain my thoughts on agile principles and transformations.  I also do this in the normal course of planning and guiding agile transformations for organizations.  I like to share the ‘conscious’ side of ‘competence’ so others can explain the ‘why’ behind the ‘how’.

Needless to say, I often get TLDR in response, as in ‘Too Long, Didn’t Read’.  I have been asked to summarize my thoughts in a summary paragraph, a short presentation, and yes, once by a boss who told me to put the important parts in the email subject line.  Somehow, there’s an expectation that if I could only shorten the concepts to bullets, the ‘aha’ would happen across the organization.  Adoption of the concepts would be self-motivated and immediate.

This leads me to ponder how a medical doctor becomes one.  Consider how the best and brightest high school students are guided to a strong ‘core’ of biology and chemistry while getting their bachelor’s degree.  In medical school, they spend their first year learning how a healthy body works, in great detail and with hands-on experience (I won’t go into detail on the hands-on experience).  Their second year is spent learning pathology, or why and how things go wrong in an unhealthy body.  They finish out their last two years of medical school rotating between various medical disciplines, understanding the basic practices, procedures and realities while continuing to deepen their basic knowledge.  They spend years preparing for the medical exams that determine the specialty in which they will spend the next two to seven years as an intern after medical school.  They have board certifications to master before they are allowed to freely practice as an attending physician.

Why do they spend this much effort to learn the complexity of the human condition?  One reason is that the downside of making a mistake is so high that it can cause undue pain and suffering, even death, as well as waste time and resources on missed diagnoses.

Let’s ponder a TLDR version of medical education.  Let’s assume that we have the most excellent medical snippets from X (formerly known as Twitter), TED talks, YouTube, LinkedIn and TikTok.  Let’s assume that a really smart designer using AI figured out how to place the right information in the right order in front of our medical students, of course monitoring dwell times, actions, and answers.  Once the medical students have been exposed to the right materials long enough, with sufficiently successful metrics, they are free to practice medicine.

Would you be willing to go to such a TLDR medical doctor?  

They might be able to diagnose simple cases.  They might be able to perform simple procedures.  They might be able to, when presented with x causes y causes z, reason with the patient that x causes y causes z.  They would likely be good at a particular ‘how’.

They would likely be unable to reason the complex ‘why’.  When they are presented with unseen representations or complex, multi-causal symptoms, they will not have the context to reason possible diagnoses. 

Why?  A simple diagnosis or procedure taught without context may not apply in all contexts.  In fact, that same simple procedure may cause harm in many other contexts.  The ‘why’ explains the context, and the ‘how’ explains the procedure for the right diagnosis.

Well, what does a medical education have to do with hi-tech?  There are no life-threatening development teams out there.  Right?  While this may be true, technology development does have customers, investors, stakeholders, and co-workers who depend on leadership knowing the ‘why’ and ‘how’ of complex situations, organizations, projects and products.  Making a mistake does cause harm to these dependents.  Why would anyone simply trust a TLDR-trained technology leader?  Why would anyone trust an organization that demands TLDR communications or processes?

Allow me to redefine TLDR as Tough Learning Different Reasoning.  To build the necessary context takes time and exposure.  The context explains the solutions, procedure and rituals.  This is done by humans in taking time to learn, understand and experiment with new concepts.  Watching a lot of motivational TED talks won’t suffice.  Reading a ton of summaries or email subject lines won’t help. 

Tough Learning means that one has to spend time and effort to learn and think.  Simple impressions of Learning are insufficient to master the full context.  Different Reasoning means that the underlying principles and methods are unfamiliar.  To learn them requires practice and experimentation.  Few are able to read about the new rituals and discern the new principles.  Study, effort and practice are required for mastery.  

You should then ask me: what test can you take to determine which TLDR you have been exposed to in your past?  Here is the test: read the Poppendiecks’ book, Lean Software Development: An Agile Toolkit.  It takes between two and four hours for a seasoned technical leader to read the whole book.

If your response is that you don’t have the two to four hours to read the book, then you know which TLDR you’ve been exposed to.

If you read the book AND you can explain to yourself ‘why’ and ‘how’ ALL of the 22 Tools apply to Scrum, S@S and/or SAFe v6 (or later), you have been exposed to the second definition of TLDR and are a well-trained agilist.

If you read the book AND you cannot explain to yourself ‘why’ and ‘how’ ANY of the 22 Tools apply to Scrum, S@S and/or SAFe v6 (or later), you have fallen victim to the first definition of TLDR.  You should consider investing time to learn the foundations and principles of agile development.

If you read the book AND you can explain ‘why’ and ‘how’ for only SOME of the 22 Tools, you have been exposed to a mixture of the TLDRs.  You have more to learn.  Focus on one of the tools and immerse yourself in building your understanding of that tool’s context.  Move to the next tool until you can explain the ‘why’ and ‘how’ for all 22 Tools.

We expect our medical doctors to be consciously competent.  Shouldn’t we expect as much of ourselves when doing agile development?

Tuesday, February 13, 2024

The Most Critical Step in Agile Transformations

The most enjoyable part of every agile transformation is the time spent with people during one-on-ones. They bring their realities and difficulties to the conversation. We sort out what’s going on. They allow me to share insights and provide alternatives for their consideration. They leave the discussions appreciative with ideas to consider for next steps. The best compliment that I can receive is when they thank me for allowing them time to think.

An often-asked question after we’ve wrapped up a discussion is: from my perspective, what’s the most critical step in an agile transformation?  They are always surprised that my answer doesn’t appear to conform to the Agile Manifesto.  They expect something like: establish Scrum rituals.  Or do agile training.  Or define individual and team roles and responsibilities.  Or define the SAFe or S@S hierarchy.  Or establish a maturity model with a clear definition of done.  And to be fair, these are important items to establish during an agile transformation, but they aren’t the most critical.

My answer is simply to establish a Learning Organization.  That is, an organization that values curiosity, change, experimentation, inspection, introspection, education, teaching moments, trends, root cause analysis, improvement, and innovation.

I have found that many organizations are in a state of unconscious competence: they do what they do because they have documented processes for everything, but they have forgotten why they do these things.  Or they operate in a ‘telling’ leadership style, where leaders expect to just tell the organization what to do and how to do it.

The urgency of fixing an agile transformation gone wrong, of moving quickly to mature agile development, and/or the onslaught of business conditions reinforces leadership’s expectation to do what they already do, only faster, or to just tell people what to do now.  There’s no time to learn.  Absolutely no time for experimentation.  If there’s any failure, there’s only time to punish.

At the core of agile thinking is empirical inspection, or the Plan, Do, Check, Act (PDCA) cycle.  That core is reinforced by Sprint Review meetings, retrospectives, burndowns, velocity, and root cause analysis.  How can any of these be done without a Learning Organization?  Sadly, none can be done correctly.  Organizations that perform these rituals without reasoning, or that do Scrum in name only, aren’t improving, growing, or learning.

When staging an agile transformation, my first act is to set up as many cross-functional, multilevel one-on-ones as possible.  The one-on-one frequency varies based upon the particular needs of the individual.  They are always confidential and designed as a safe place for exploration and learning.  I always focus my questioning on root cause analysis to help raise curiosity about the ‘why’.  I have never been disappointed by the 5 Whys questioning method, which quickly guides our one-on-one to a deeper understanding of the situation and surfaces alternatives to consider.  I explain how PDCA, retrospectives, etc. are aspects of learning and experimentation, and how they must be done to achieve improvements, albeit initially on a small scale.

What I’ve noticed is that, after a number of these meetings, those involved in my one-on-ones start to set expectations with their leaders and teams for root causes, deeper reflection, incremental improvement, and seeking out information on concepts they don’t understand.  Learning is valued.  Time is set aside for discussions.  Experiments are planned.  Results are inspected.  Adjustments are made.

As leaders model these behaviors, teams start to learn and pick up on the value of Sprint Reviews and Retrospectives.  Teams become curious about why the various rituals, metrics, and roles are defined.  They become open to the Scrum Master’s role of guiding the team to maturity.  This opens the door to more learning and improvement.

A Learning Organization that remains within a single function, like engineering, can make good progress on an agile transformation, but one that spans multiple functions can make amazing progress on both the transformation and the velocity of business value delivered.

The difficult step for a Learning Organization is learning together across functional groups and across teams.  This means understanding that the role of leadership in an agile organization is different from what leaders may have grown accustomed to.  Functional leaders and their functions play important roles in agile organizations: setting up and enabling an architecture practice that empowers teams to act and gain velocity; building a value chain from specification to builds to validation to deployment to enablement to sales to support; and defining a product backlog built on continuous increments of capabilities that are continuously deployed.

I liken creating a Learning Organization across functional groups to that of nurturing a human being.  When the human is a child, we nurture and educate them on the basics with an expectation that they will eventually grow and mature.  When they reach their teens, we help them understand the deeper, abstract nature of the world around them.  We can demand more from them, but they remain immature in other ways.  When they reach their twenties, we help them understand the interconnectedness and interdependence of complex systems.  We demand professionalism and perfection.  

I have found that a functional organization is well aware of its own shortcomings; however, it expects and demands perfection from all of the other functions.  I have to point out that while the individuals and organizations may be highly experienced and successful, they too are in the midst of an agile transformation, both within their own organization and in their relationships with the other organizations.  We need to realize that, together, we operate more like a child that needs nurturing and education.  That is far better than demanding that a child do something it is incapable of doing and expecting adult results.

As we grow as a Learning Organization, we as a whole will become teens and then adults.  As mature adults, we understand the complexities of delivering value and continuous improvement.  We innovate. We teach.  We are consciously competent together.

Monday, September 19, 2022

The Wonderment of All Innovation

I was talking with an Engineering Manager who said that the Product Owner (PO) and PO’s boss were concerned that the team was spending too much time reducing technical debt, improving quality and gaining velocity.  While the PO agreed on the priorities, the PO wanted more innovation delivered to customers.  On the other hand, the team was feeling that these critically important debt-reduction, quality and velocity improvements were not valuable to the business, that they were not doing anything innovative, and that management didn’t acknowledge their progress.  Net-net, no innovation means no customer features, no business value and no rewards.

Reaching for my trusty keyboard and browser, I searched for the definition of innovation and found this on Wikipedia.org: “Innovation is the practical implementation of ideas that result in the introduction of new goods or services or improvement in offering goods or services”.  I pointed out that innovation also includes practical improvements, confirmed by Wikipedia.org no less.

While customers may not be blown away by some long-overdue restructuring of code to improve reliability or performance, they would call support less often to complain.  While customers may not see whizzy new UX features, they would appreciate being able to drop well-known, frequent UX workarounds because the bugs were finally fixed.  While customers cannot see that the team is becoming more efficient, they will appreciate that, over time, the team delivers more new features every quarter.  The Engineering Manager welcomed this framing and headed back to engage the PO and team with a new perspective on innovation.

I started to realize that high tech lauds innovation in new products, new services, and new business models.  Employee performance evaluations often have a section on the employee’s innovation during the past year, implying that the more impressive the innovations, the higher the performance rating.  Gaining access to the high rungs of the technical ladder requires patent grants, new products, and new features among the candidate’s recent accomplishments.  Executives sponsor innovation days that allow teams and individuals to work on something innovative and new, and to break their daily monotonous routine.

The notion of ‘1% inspiration and 99% perspiration bring new products to market’ floats into my head.  Does that mean that we personally spend only 1% of our lives being innovative?  That only 1% of individuals spend most of their time being innovative?  That only 1% of teams do?  That only 1% of organizations do?  What if I’m in the 99% personally, individually, team-wise, and organization-wise?  Am I relegated to an existence without innovation?  Looking back at my career, I am clearly in the 99%, and yet I know that I, my teams, and my organizations have been highly innovative.

How does that notion mesh with my experience with agile transformations, where teams, leaders, and organizations literally rethink and refactor almost everything they do?  They refactor how they communicate strategy and set goals.  They refactor how they architect, build, validate, and deliver their value.  They change their expectations of teams, individuals, management, product managers, and architects.  They rethink their relationships with internal partners and external customers.  In short, they implement a pragmatic new development and delivery system that improves quality and speeds new value delivery to customers.  By the definition above, those who improve both how and what they deliver to customers are worthy of being called innovative.

So, why do high-tech companies seem to value new breakthroughs and major innovations more than on-going, constant improvement, implying that one is innovation and the other isn’t?

Maybe because it’s easier to recognize and reward the infrequent yet highly valuable breakout features and products.  We cheer major accomplishments more than the on-going, constant flow of incremental improvements.  A patent is tangible, demonstrable, and rare; a patent wall to celebrate those achievements makes good sense.  A well-received product launch, where the press and analysts’ sound bites reflect the innovative new features, is easy to reference.  Audiences like sound bites.

Whereas a constant sprint-over-sprint improvement of 1% in team velocity is abstract, gradual, and too small to be noticed or recognized.  Beyond the team, who cheers this accomplishment? The same is true for all the numerous on-going improvements that a Scrum team makes from retrospective to retrospective.  These small improvements are critical, and yet too small to be noted.
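That abstract 1% is worth making concrete, because it compounds. A minimal sketch of the arithmetic (the starting velocity of 30 points, the steady 1% rate, and 26 two-week sprints per year are all assumed figures for illustration, not data from any real team):

```python
# Illustrative only: how a steady 1% sprint-over-sprint velocity
# improvement compounds over time. All inputs are hypothetical.

def compounded_velocity(start: float, rate: float, sprints: int) -> float:
    """Velocity after `sprints` sprints of a steady `rate` improvement."""
    return start * (1 + rate) ** sprints

start = 30.0  # hypothetical starting velocity (story points per sprint)

# With 26 two-week sprints in a year, 1% per sprint is ~30% in a year
# and ~68% over two years.
print(f"After one year:  {compounded_velocity(start, 0.01, 26):.1f} points/sprint")
print(f"After two years: {compounded_velocity(start, 0.01, 52):.1f} points/sprint")
```

No single sprint's gain is worth cheering on its own, but the compounded trend very much is, which is exactly why the trend, not the increment, is what deserves the recognition.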

Maybe we are missing the notion of wonderment, or the state of awed admiration or respect.   I am fascinated by a child’s wonderment at the small things they encounter, the recognition of a familiar face, their first mobility, their first verbal communication that yields a result, their first success in their classroom, their first friend, and their first financial transaction (You can get candy if you give them this round shiny thing?!?!  Who knew?). 

Equally, I’m fascinated by an organization’s wonderment at small things they encounter on their agile transformation, their first meaningful standup, their first successful sprint goal being fully met, their first planning or review meeting conducted solely by the team with everyone in the correct roles, and their first incremental delivery to a high-quality definition of done.  

What if we expressed our wonderment at every improvement?  What if wonderment encourages more innovation, in both small and big outcomes?  Imagine if leadership would spend time sharing their wonderment at all the innovations their organizations deliver, including the small, incremental improvements.

Maybe it’s too time-consuming to express wonderment at all the small improvements; however, it’s easy to see the resulting trend improvements (better velocity, quality, and meeting effectiveness, reduced technical debt, and greater efficiency) when using Scrum.  Maybe expressing wonderment at the positive trends would suffice to encourage teams to continue making both the small and big innovations.