Strong analytics should be a core component of your business automation initiative

Business automation initiatives require more than just good automation tech. In this post, I explain how strong analytics can help organizations create much more value with automation — and what those analytics should look like.

Automation can generate far more value than many realize. Capitalizing on this potential requires strong analytics to drive organizational transformation

The source of automation’s true potential lies in the economics principle of comparative advantage: freeing human workers to focus on activities at which they excel. This shift in focus creates significant value for the organization — not only in the form of higher worker productivity, but also greater job satisfaction, improved customer experience, and increased capacity that allows the organization to grow faster and capture larger market share. Those who view automation simply as a means of hiring fewer workers frequently miss the far more valuable opportunities automation creates: successful automation initiatives can drive both bottom-line and top-line growth.

Capitalizing on this potential requires follow-through: freed-up resources must be productively redeployed. Even at its best, automation affects only a subset of the many inputs into an organization’s broader production function. The ultimate success of automation is determined by how all of those inputs combine together in new and more productive ways that are enabled by automation — but not completed by automation alone. This reorientation of work does not happen automatically. No hidden forces or natural tendencies will lead an organization into its new optimal shape. The organization’s leaders must drive the required transformation, and they need strong analytics to do it.

Where automation vendors and their customers use analytics at all, they tend to focus on mapping out the legacy processes to be automated (process mining or task mining) and on automation speed (the number of widgets per hour a bot produces). This is unsurprising: we tend to prioritize discovery over implementation, and pre-packaged software cannot be aware of specific organizational contexts. But such analytics are insufficient for driving organizational transformation.

Organizational transformation requires analytics that measure the benefits of automation accruing to the organization, identify the actions needed to increase them, assess whether those actions are yielding the expected results, and help leaders recalibrate as needed.

For concreteness, consider introducing a bot capable of replicating the work of two humans into a business unit of 10 workers. On paper, the bot frees up 80 hours per week, which comes out to 96 minutes per person per day. Could you tell how those 96 minutes are being redeployed, and how much of a productivity bump results? Real-world automation initiatives play out across dozens if not hundreds of employees, amid fluctuating product demand, customer mixes, wholesale prices, supply chain issues, vacation and training schedules, shifting workflows, and other noise. All of that makes tracking freed-up capacity and managing it toward its most productive use very difficult to do by anecdotal observation alone. Without appropriate analytics, the broader organization’s productivity will almost surely lag far behind its potential.
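
As a rough illustration of the arithmetic and of the kind of capacity tracking involved, here is a minimal sketch. The 40-hour week, the activity categories, and the log structure are illustrative assumptions, not a prescribed model.

```python
# Minimal sketch of the freed-capacity arithmetic and of tracking how that
# capacity is redeployed. The 40-hour week, 5-day week, and activity
# categories are assumptions for illustration only.

WORK_HOURS_PER_WEEK = 40
WORK_DAYS_PER_WEEK = 5

def freed_capacity(workers_replicated: float, team_size: int) -> dict:
    """Hours per week freed by the bot and the per-person daily equivalent."""
    hours_per_week = workers_replicated * WORK_HOURS_PER_WEEK                         # 2 * 40 = 80
    minutes_per_person_day = hours_per_week * 60 / (team_size * WORK_DAYS_PER_WEEK)   # 96
    return {"hours_per_week": hours_per_week,
            "minutes_per_person_per_day": minutes_per_person_day}

def redeployment_rate(freed_hours_per_week: float, activity_log: dict) -> float:
    """Share of freed hours observed in higher-value activities (hypothetical categories)."""
    higher_value = {"customer_outreach", "exception_review", "training"}
    redeployed = sum(hours for activity, hours in activity_log.items() if activity in higher_value)
    return redeployed / freed_hours_per_week

capacity = freed_capacity(workers_replicated=2, team_size=10)
print(capacity)  # {'hours_per_week': 80, 'minutes_per_person_per_day': 96.0}

# Hypothetical weekly log of where the freed time actually went
weekly_log = {"customer_outreach": 30, "exception_review": 18, "training": 8, "unaccounted": 24}
print(f"Redeployed: {redeployment_rate(capacity['hours_per_week'], weekly_log):.0%}")  # Redeployed: 70%
```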

We have deep experience helping leaders use data to measure the productivity of their businesses, develop a quantitative understanding of its drivers, and see automation initiatives all the way through to capture value far beyond the direct cost of labor hours saved.

Intelligent automation creates many new paths for technology to drive business value. Managing intelligent automation requires strong analytics

While rapid uptake of RPA continues, bold leaders are increasingly looking to machine learning-enabled automation to help them achieve their Big Hairy Audacious Goals. Intelligent automation and hyperautomation give rise to many new use cases involving humans teaming with technology to unleash potentially enormous value. These new possibilities only underscore the need for the strong organization-level analytics discussed above to support the required follow-through. In addition, organizations implementing this exciting technology should be equipped with the analytics required to manage it.

Why is a special set of analytics needed to manage intelligent automation? First, machine learning requires training data. Sometimes training data is scarce, but the opposite can also be true: organizations are often replete with data that could potentially be used to train bots to perform specific sets of tasks. Alternatively, bots powered by machine learning might come pre-trained with external data. But which is the right data to use for a given application or context? The answer is rarely obvious, and strong analytics are needed to help guide this decision.

To illustrate, consider a bot used to help human reviewers spot discrepancies between data manually entered into a form and troves of source material. Such a bot could be very useful in the context of reviewing health insurance claims for reimbursement, improving accuracy and freeing human reviewers to spend more time on nonroutine tasks like digging into particularly frequent or costly errors. But insurance claims for dental imaging look very different from claims for psychiatric treatment; a claim from a large national dental group likely looks very different from one submitted by a local solo practitioner; and a claim stemming from treatment of a patient with comprehensive coverage will need to contain different data than a claim from the same treatment, by the same doctor, of a patient with more limited coverage.
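
As a purely illustrative sketch of the discrepancy-checking idea (the field names, tolerance, and claim structure are hypothetical and do not reflect any particular product):

```python
# Minimal sketch of flagging discrepancies between manually entered claim
# fields and values extracted from source documents. Field names, the
# tolerance, and the data shapes are illustrative assumptions.

from typing import Any

def flag_discrepancies(entered: dict[str, Any],
                       extracted: dict[str, Any],
                       amount_tolerance: float = 0.01) -> list[str]:
    """Return human-readable flags for fields that disagree between sources."""
    flags = []
    for field, entered_value in entered.items():
        source_value = extracted.get(field)
        if source_value is None:
            flags.append(f"{field}: no supporting value found in source material")
        elif field.endswith("_amount"):
            if abs(float(entered_value) - float(source_value)) > amount_tolerance:
                flags.append(f"{field}: entered {entered_value}, source shows {source_value}")
        elif str(entered_value).strip().lower() != str(source_value).strip().lower():
            flags.append(f"{field}: entered '{entered_value}', source shows '{source_value}'")
    return flags

# Hypothetical claim: the human reviewer only sees fields the bot could not reconcile.
entered_form = {"procedure_code": "D0210", "billed_amount": 185.00, "provider_npi": "1234567890"}
source_docs = {"procedure_code": "D0220", "billed_amount": 185.00, "provider_npi": "1234567890"}
for flag in flag_discrepancies(entered_form, source_docs):
    print(flag)  # procedure_code: entered 'D0210', source shows 'D0220'
```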

Which is the right training data to use in each case? The value of each potential training dataset differs across applications and sub-applications. Some observers would advocate for a “kitchen sink” approach — just use all available data to train and let the training process itself determine the appropriate weights to put on various segments of the data. In practice, this is rarely the answer: an unguided, “kitchen sink” approach to algorithmic training frequently bogs down performance and leads to costly missed opportunities as automation projects take too long to ramp up, lose steam, or are prematurely axed.

Architects of commercial AI applications must be deliberate about selecting training data to maximize accuracy (and their chances of success) out of the gate and avoid building unwanted biases into a bot’s work. And their customers — leaders of organizations implementing these technologies — require visibility into the training process to understand what data they need to provide and to validate performance of the tech.

For greater success, purpose-built analytics should be built around a process for identifying, testing, and refining potential training data, giving intelligent automation architects and their customers appropriate “under the hood” visibility. The analytics should provide insight into how well a given set of training data aligns with the application at hand, which features of the data matter most to that application, and how well the tech is performing on it.
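
One way to picture this kind of “under the hood” analysis is sketched below: candidate training datasets are scored by how a model trained on each performs on a holdout drawn from the target application, alongside the features that matter most. The segments, the classifier, and the synthetic data are stand-ins for illustration only.

```python
# Minimal sketch: compare candidate training datasets (e.g., claim segments)
# by how well a model trained on each performs on a holdout drawn from the
# target application, and surface which features matter most.
# The segments, classifier, and synthetic data are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data: in practice these would be labeled examples from each claim segment.
X_all, y_all = make_classification(n_samples=3000, n_features=8, n_informative=5, random_state=0)
X_pool, X_holdout, y_pool, y_holdout = train_test_split(X_all, y_all, test_size=0.3, random_state=0)

candidates = {
    "segment_a_only": (X_pool[:700], y_pool[:700]),           # e.g., dental imaging claims
    "segment_b_only": (X_pool[700:1400], y_pool[700:1400]),   # e.g., psychiatric claims
    "kitchen_sink": (X_pool, y_pool),                         # all available data
}
feature_names = [f"feature_{i}" for i in range(X_all.shape[1])]

for name, (X_train, y_train) in candidates.items():
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_holdout, model.predict(X_holdout))
    top = sorted(zip(feature_names, model.feature_importances_), key=lambda p: p[1], reverse=True)[:3]
    print(f"{name:>15}: holdout accuracy {acc:.3f}; top features {[f for f, _ in top]}")
```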

Moreover, even with the best training data strategy, no AI application is optimized in the lab — automation needs to be further tuned once in production as it is scaled up and applied to different contexts. In the example given above, the mix of insurance claims will likely evolve over time, perhaps with different trends across processing locations. Continuous monitoring and refinement are needed to prevent the technology from becoming outdated. Knowing what areas will need to be refined next — and the levers to pull to achieve specific performance improvements — again requires a different breed of analytics.
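
A minimal sketch of the mix-shift monitoring this implies appears below; the claim records, field names, and alert threshold are illustrative assumptions.

```python
# Minimal sketch of monitoring how the claim mix shifts over time by location,
# so refinement work can be targeted before performance degrades.
# The claim records, threshold, and field names are illustrative assumptions.

from collections import Counter

def mix_by_location(claims: list[dict]) -> dict[str, Counter]:
    """Share of each claim type processed at each location."""
    counts: dict[str, Counter] = {}
    for claim in claims:
        counts.setdefault(claim["location"], Counter())[claim["claim_type"]] += 1
    return {loc: Counter({t: n / sum(c.values()) for t, n in c.items()}) for loc, c in counts.items()}

def flag_mix_shifts(baseline: dict[str, Counter], current: dict[str, Counter], threshold: float = 0.10):
    """Flag locations where any claim type's share moved more than `threshold`."""
    flags = []
    for loc, current_mix in current.items():
        base_mix = baseline.get(loc, Counter())
        for claim_type in set(current_mix) | set(base_mix):
            shift = current_mix.get(claim_type, 0) - base_mix.get(claim_type, 0)
            if abs(shift) > threshold:
                flags.append(f"{loc}: '{claim_type}' share shifted by {shift:+.0%}")
    return flags

# Hypothetical quarterly snapshots
last_quarter = [{"location": "east", "claim_type": "dental_imaging"}] * 70 + \
               [{"location": "east", "claim_type": "psychiatric"}] * 30
this_quarter = [{"location": "east", "claim_type": "dental_imaging"}] * 50 + \
               [{"location": "east", "claim_type": "psychiatric"}] * 50
for flag in flag_mix_shifts(mix_by_location(last_quarter), mix_by_location(this_quarter)):
    print(flag)  # east: 'dental_imaging' share shifted by -20%, 'psychiatric' by +20%
```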

Horizon works in partnership with intelligent automation vendors and their customers, providing them with analytics to optimize the initial training process and manage the technology moving forward.

What should analytics look like in this context?

For automation to truly succeed, a robust set of analytics should focus on key organizational goals and on the inputs needed to manage intelligent automation as it scales and evolves over time. These analytics align with the two key objectives discussed above:

  1. Gaining visibility and actionable insights into automation’s impact on the organization

    • For example, a common goal of automation is to increase the organization’s productivity. Evaluating progress toward this goal requires comparing metrics of organization-wide productivity before vs. after automation goes into production, and monitoring those metrics moving forward (see the sketch after this list). Other required KPIs may include measures of customer churn and employee turnover.

    • What about actionable insights? Take productivity: this core metric of automation’s benefit is unlikely to increase all on its own — again, that’s because broader organizational reconfiguration is usually required to capture the benefit of automation. The required reconfiguration — involving both processes and human resources — should be mapped out ex ante and then evaluated along the way. How much human time is the automation freeing up, and to what extent is that freed-up time being redeployed to the right alternative activities? Where are the new bottlenecks in the system? With information at this level of detail, management is much better equipped to mold the organization into its new shape and capitalize on the potential benefits of automation.

  2. Identifying the right training data for each application and managing intelligent automation once in production

    • Intuitively, the best training data will be closely aligned with the context of the application. The challenge is how to define that context. Define it too narrowly and the training process will overfit, resulting in poorer performance. Define it too broadly and you risk the pitfalls of a “kitchen sink” approach — delays, undue expense, and underperformance.

    • Analytics are needed to glean insights and aggregate them into a process for scaling up the tech and adapting it alongside evolving real-world contexts. Potential training data should be analyzed to evaluate different approaches to segmenting the data and understand which of its contours are most meaningful with regard to accuracy and other automation performance criteria.

    • Once armed with an understanding of the linkage between training data and performance, organizations can optimize initial training. Perhaps more importantly, they can also determine the right process for making refinements moving forward: which aspects of the context to monitor (e.g., the mix of dental imaging claims vs. psychiatric claims over time by location), what steps to take when a given trend is identified, and what performance to expect.
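
As referenced under the first objective above, here is a minimal sketch of a before/after productivity comparison; the KPI (output per labor hour), the weekly data shape, and the go-live date are illustrative assumptions rather than a prescribed measurement framework.

```python
# Minimal sketch of a before/after productivity comparison, assuming a weekly
# log of units of output and labor hours. The KPI (output per labor hour),
# the data shape, and the go-live date are illustrative assumptions.

from datetime import date

def output_per_labor_hour(weeks: list[dict]) -> float:
    """Aggregate productivity KPI over a set of weekly records."""
    total_output = sum(w["units_completed"] for w in weeks)
    total_hours = sum(w["labor_hours"] for w in weeks)
    return total_output / total_hours

def productivity_lift(weekly_log: list[dict], go_live: date) -> float:
    """Relative change in output per labor hour after automation goes live."""
    before = [w for w in weekly_log if w["week_start"] < go_live]
    after = [w for w in weekly_log if w["week_start"] >= go_live]
    baseline, current = output_per_labor_hour(before), output_per_labor_hour(after)
    return (current - baseline) / baseline

# Hypothetical weekly records straddling a go-live date
log = [
    {"week_start": date(2023, 5, 1), "units_completed": 900, "labor_hours": 400},
    {"week_start": date(2023, 5, 8), "units_completed": 920, "labor_hours": 400},
    {"week_start": date(2023, 6, 5), "units_completed": 980, "labor_hours": 360},
    {"week_start": date(2023, 6, 12), "units_completed": 1000, "labor_hours": 360},
]
print(f"Productivity lift: {productivity_lift(log, go_live=date(2023, 6, 1)):+.1%}")  # +20.9%
```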

Your organization’s automation initiative should include an analytics strategy designed and implemented alongside the automation tech itself. Partnering with Horizon can help protect your investment in automation and ensure it has the best chance of meeting performance goals — positioning your organization to manage and scale the technology once in production, follow through on making best use of freed-up resources, and thereby capitalize on automation’s true potential.