
Click here to read part 1 of the series 

Click here to read part 2 of the series 

Click here to read part 3 of the series 

You’ve heard me talk a lot about business agility in webinars, blog posts, and presentations. It’s even part of the HCLSoftware DevOps tagline – Secure, Data-Driven Business Agility. But what does business agility actually mean, and how does it fit into a data-driven DevOps strategy? 

Business Agility refers to the capability of a business or its components to rapidly respond to change by adapting to maintain stability. This concept is by no means new. In fact, it’s where we get the core ideas for Agile development. The ability to adapt to change is a cornerstone of Agile project management and one of the key advantages of the Agile methodology. When development teams put their time to good use, they can deliver what stakeholders want in a timely manner. If the stakeholders’ needs change, the team’s actions can change right alongside them. 

In 2020, businesses have faced unprecedented circumstances due to the Coronavirus pandemic. These circumstances have significantly disrupted how companies are accustomed to doing their day-to-day activities, especially when it comes to interacting with their clientele. The companies with the best chance of surviving these rough times will no doubt need to possess data-driven business agility.  

Data-Driven DevOps organizations are giving themselves an edge over their competition by being able to quickly identify what is being disrupted, so they can have fast, proactive conversations about course correction. But course correction is not a flawless exercise: many individuals and business units will come up with solutions to problems, not every new idea is going to be great, and many of these ideas will present new challenges. How do we vet them all? How do we recognize when we have over- or under-corrected for a problem? 

Without data, this process is a guessing game, with lots of time wasted testing a new process that is bound to fail. With the right data, we can fail faster so we can get to success sooner. Successful organizations embrace the idea of failing fast, with a culture that accepts that we need to fail to be great. The faster we fail and recognize that failure, the sooner it becomes a learning opportunity. Henry Ford said, “Failure is simply an opportunity to begin again, only this time more intelligently.” That is precisely the goal of Data-Driven DevOps. 

In part two of this series, we discussed how data produced by individual contributors, originating from source code management as well as work item management technologies, can help improve overall culture, as well as an organization’s ability to accurately forecast deliverables to stakeholders. For the topic of business agility, we are going to focus on data coming from a wide variety of DevOps technologies, ranging from continuous integration and deployment solutions to test automation and security scan results. Organizations that are able to capture, visualize, and process this data – a short sketch of what that might look like follows the list – will find that they can: 

  • Benchmark their current end-to-end software delivery process, from idea to customer 
  • Identify existing or newly created bottlenecks 
  • Understand the impact of new process changes and ideas so that they can fail fast and avoid disruption 
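
To make that capture concrete, here is a minimal sketch in Python of what normalizing events from different tools into one shape might look like. Everything here – the event fields, stage names, and work item IDs – is an illustrative assumption, not a reference to any specific product’s API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PipelineEvent:
    """One normalized record per value-stream event, whatever tool
    (CI server, test runner, security scanner, deploy engine) produced it."""
    work_item: str      # e.g., a story or defect ID (illustrative)
    stage: str          # "build", "test", "security-scan", "deploy", ...
    started: datetime
    finished: datetime

    @property
    def duration_hours(self) -> float:
        return (self.finished - self.started).total_seconds() / 3600

def benchmark_by_stage(events: list[PipelineEvent]) -> dict[str, float]:
    """Average hours spent in each stage -- a simple end-to-end baseline."""
    by_stage: dict[str, list[float]] = {}
    for e in events:
        by_stage.setdefault(e.stage, []).append(e.duration_hours)
    return {stage: sum(h) / len(h) for stage, h in by_stage.items()}

# Two made-up events for one story: the build is quick, the test stage is not.
events = [
    PipelineEvent("STORY-42", "build", datetime(2020, 9, 1, 9), datetime(2020, 9, 1, 10)),
    PipelineEvent("STORY-42", "test",  datetime(2020, 9, 1, 10), datetime(2020, 9, 2, 8)),
]
print(benchmark_by_stage(events))  # {'build': 1.0, 'test': 22.0}
```

Once every tool reports into a common record like this, benchmarking and bottleneck hunting become straightforward aggregation problems rather than tool-by-tool archaeology.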

 

The Ultimate Feedback Loop and Continuous Improvement 

By visualizing and analyzing the data coming from software delivery pipelines, organizations can take the DevOps approach of implementing a valuable “fast feedback” loop. For many companies, the chance to overhaul how they perform day-to-day operations comes around only once or twice a year. That simply is not fast enough. Culturally, we have to embrace getting better every single day. To do that, organizations first must establish a baseline of how they operate today – in other words, a benchmark of their current performance. Once we have that benchmark, the live representation of the data becomes a real-time value stream, allowing organizations to focus on where they have room to improve.   
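
As a hedged illustration of what establishing that baseline can mean in practice, the sketch below computes idea-to-customer lead time from a handful of made-up work item records; the IDs, dates, and fields are all assumptions.

```python
import statistics
from datetime import datetime

# Made-up (work item, idea logged, delivered to customer) records, as
# they might be pulled from a work-item tracker.
history = [
    ("STORY-17", datetime(2020, 8, 3),  datetime(2020, 8, 14)),
    ("STORY-21", datetime(2020, 8, 5),  datetime(2020, 8, 28)),
    ("STORY-30", datetime(2020, 8, 10), datetime(2020, 9, 4)),
    ("STORY-33", datetime(2020, 8, 17), datetime(2020, 9, 1)),
]

lead_times = [(done - idea).days for _, idea, done in history]

# The baseline: how long idea-to-customer takes today. Improvement
# efforts are then measured against these numbers, not against gut feel.
print(f"median lead time: {statistics.median(lead_times)} days")
print(f"worst lead time:  {max(lead_times)} days")
```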

A real-time value stream comes with a number of outstanding benefits. One of the most obvious is being able to track specific units of work and associate them with sprints, releases, teams, and individual contributors. This work can then be tied to specific stages of your value stream, making it easy to discover where work is stuck.  
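
One simple way to surface stuck work from that stage data is an age check against a team-chosen threshold, sketched below; the stages, threshold, and work item IDs are invented for illustration.

```python
from datetime import datetime

# Illustrative snapshot: which stage each in-flight work item is in,
# and when it entered that stage.
in_progress = {
    "STORY-101": ("code-review", datetime(2020, 9, 1)),
    "STORY-102": ("code-review", datetime(2020, 9, 3)),
    "STORY-103": ("deploy",      datetime(2020, 9, 7)),
}

STUCK_AFTER_DAYS = 3  # a team-chosen threshold, not a universal rule
now = datetime(2020, 9, 8)

for item, (stage, entered) in in_progress.items():
    age = (now - entered).days
    if age >= STUCK_AFTER_DAYS:
        print(f"{item} has been in '{stage}' for {age} days -- investigate")
```

Run against this snapshot, both items parked in code review are flagged, which is exactly the kind of pattern the next example describes.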

One of my favorite examples of these hidden bottlenecks came from one of our own development teams at HCLSoftware. This team was struggling with code reviews: they always seemed to have a large backlog of reviews that needed to be done before the code could be merged. By visualizing the data in their value stream, they were able to see precisely the stage at which work became stuck, creating a bottleneck in the delivery process. It turned out they were great at doing code reviews – they simply didn’t have enough people with permission to approve the reviews in the system. It may seem like an insignificant gain, but there are dozens of pockets of waste like this littered throughout our software development processes. The more of them we remove, the better chance we have of clearing the path for that transition from horse to unicorn. 

Process Changes and Disruption 

As teams start to evaluate their current value streams, they will come up with many creative ways to streamline their processes. Some of those ideas might be good and some might be bad, but what is important is that both the team and the business can track the results of the new process changes to determine whether there is an uptick in throughput. This allows us to know if the new processes actually result in delivering better-quality software, faster. 

Not everything has to be an opinion when you have such a large amount of data. Data removes “I think” from the conversation and changes it to “we know”. Instead of individuals stating, “I think we are doing better since we moved from two-week sprints to one-week sprints,” we can examine the data and know for sure. This harkens back to our dialogue on culture – if you trust a team to define their own processes, you need to supply them with the tools to help them learn from their own mistakes. This protects engineering teams from going too fast and potentially letting quality suffer, while at the same time making sure that the full end-to-end process is as robust and efficient as possible.  
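
As a small sketch of turning “I think” into “we know,” the snippet below compares throughput before and after a hypothetical move from two-week to one-week sprints, normalized to items per week; all of the numbers are made up.

```python
import statistics

# Completed work items per sprint, before and after a hypothetical move
# from two-week to one-week sprints. Normalize both samples to items
# per week so they are directly comparable.
before = [n / 2 for n in [12, 14, 11, 15]]   # two-week sprints
after  = [7, 8, 6, 8, 7, 7]                  # one-week sprints

print(f"before: {statistics.mean(before):.1f} items/week")  # 6.5
print(f"after:  {statistics.mean(after):.1f} items/week")   # 7.2
# With real data you would also look at variance, sample size, and
# quality metrics before declaring the new process a win.
```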

Thank you for going on this Data-Driven DevOps journey with us. So far, we have looked at how data improves culture, makes tracking and planning of software delivery more efficient, and provides organizations with business agility to allow them to react swiftly to potential disruptions. Please stay tuned for our next post when we talk about bringing all the data together to provide unprecedented visibility into business alignment. We will also discuss how data can reduce risk and get your organization on the path to automating governance once and for all.  

Click here to read part 5 of the series

Get everything and more in the Data-Driven DevOps eBook
