
Metrics That Drive Action: Story 1 — Creating a Better Dashboard by Turning "Missing Data" into Actionable Data

First of two blog posts examining Metrics That Drive Action

Howie Fenton | March 09, 2026

For several years, everyone has been talking about dashboards in print production—real-time KPI screens, color-coded gauges, "single pane of glass" visibility. Still, most dashboards don't improve anything because they're built on weak information. If the underlying data is late, inconsistent, or missing the job-floor events that explain where time and capacity are being lost, the dashboard becomes a prettier version of the same problem: a monthly report that tells you what happened after it's too late to fix it. In print, the metrics that actually drive productivity—touches, queue time, rework causes, and constraint losses—can't be guessed from invoices, page counts, or end-of-shift notes; they require disciplined, trackable workflow events.

The story below is a composite drawn from multiple print operations—a realistic stand-in used to illustrate common KPI and data problems and their fixes.

Julie oversees a printing business whose first KPI dashboard failed for the most common reason dashboards fail in print: no actionable data. Julie could see monthly totals—volume, revenue, a nice on-time gauge—but she couldn't answer the questions that actually drive improvement: Where did this job wait, and is that wait a bottleneck? Who touched it? What triggered the rework? Most of the "data" lived in emails, sticky notes, or someone's memory, so root-cause conversations turned into opinion, and the dashboard became a scoreboard, not a control system.


Julie's first move wasn't a better dashboard. It was implementing RSA's WebCRD as the workflow front door, so every job started with structured intake, approvals, and a trackable job record. The practical recommendation is to make intake "hard to do wrong": require the fields that prevent downstream churn (specs, quantities, stock, finishing, due date, proof requirements, mailing rules if applicable), and use drop-downs instead of free text wherever possible so the data stays consistent enough to analyze. Julie also standardized three service classes (rush/standard/economy) with defined promise rules—because "on-time" is meaningless if the promise is negotiated differently for every CSR.
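Making intake "hard to do wrong" amounts to two rules: required fields and controlled vocabularies instead of free text. A minimal sketch of that idea in Python—the field names and allowed values here are illustrative assumptions, not WebCRD's actual configuration:

```python
# Illustrative intake validation: required fields plus controlled vocabularies.
# Field names and allowed values are hypothetical, not WebCRD's schema.
REQUIRED = ["specs", "quantity", "stock", "finishing", "due_date", "proofing"]
ALLOWED = {
    "stock": {"60# text", "80# cover", "100# gloss"},
    "finishing": {"none", "staple", "coil", "perfect bind"},
    "service_class": {"rush", "standard", "economy"},  # Julie's three classes
}

def validate_intake(ticket):
    """Return a list of problems; an empty list means the ticket can be released."""
    problems = [f"missing: {f}" for f in REQUIRED if not ticket.get(f)]
    for field, allowed in ALLOWED.items():
        value = ticket.get(field)
        if value is not None and value not in allowed:
            problems.append(f"{field}: '{value}' not in controlled list")
    return problems

ticket = {"specs": "8.5x11", "quantity": 500, "stock": "80# cover",
          "finishing": "staple", "due_date": "2026-03-12",
          "proofing": "soft proof", "service_class": "rush"}
print(validate_intake(ticket))  # [] -> clean intake, consistent enough to analyze
```

Because every value comes from a fixed list, the data stays consistent enough to group and count later—the whole point of structured intake.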

WebCRD's job tracking and alerts (with email notifications) made status visible without supervisors chasing updates, but Julie added a key operating practice: assign one owner per job stage (CSR before release, prepress through proof approval, production from print start to ship). That reduced the extra touches created by "handoff ping-pong." Multiple approval levels reduced the "mystery hold" problem, and Julie made it measurable by requiring hold reasons (missing files, waiting on customer, stock unavailable, scheduling conflict) and setting approval SLAs (e.g., internal approvals within 4 business hours; customer approvals within 24 hours) with an escalation rule: missed SLA → job flips to "at risk" and becomes a huddle topic.
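The escalation rule above is mechanical enough to automate. A sketch of the check, assuming hypothetical field names and simplifying "business hours" to clock hours:

```python
from datetime import datetime, timedelta

# SLAs from the story; simplification: clock hours rather than business hours.
APPROVAL_SLA = {
    "internal": timedelta(hours=4),
    "customer": timedelta(hours=24),
}

def flag_at_risk(jobs, now):
    """Return job IDs whose pending approval has exceeded its SLA.

    Each job is a dict with 'job_id', 'approval_type', and
    'approval_requested_at' (a datetime). Field names are hypothetical.
    """
    at_risk = []
    for job in jobs:
        sla = APPROVAL_SLA[job["approval_type"]]
        if now - job["approval_requested_at"] > sla:
            at_risk.append(job["job_id"])  # flips to "at risk"; huddle topic
    return at_risk

jobs = [
    {"job_id": "J-101", "approval_type": "internal",
     "approval_requested_at": datetime(2026, 3, 9, 8, 0)},
    {"job_id": "J-102", "approval_type": "customer",
     "approval_requested_at": datetime(2026, 3, 9, 9, 0)},
]
print(flag_at_risk(jobs, now=datetime(2026, 3, 9, 14, 30)))  # ['J-101']
```

The value of the rule is that "at risk" stops being a judgment call: any job the check flags goes on the huddle agenda automatically.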

Then Julie fixed the data gap that kills most KPI programs: she required barcode-based production tracking at the handoff points that create real lead time—release, prepress complete, print start/finish, finishing start/finish, QA/pack, ship. The practical recommendation here is to start with 6–8 scan points max (too many will fail), and place scanners where work naturally changes hands. Julie made scanning stick by tying it to real behaviors: you can't pull the next traveler until you scan the current operation complete; you can't print a packing slip until QA/pack is checked; and rework can't be processed until a cause code is selected.

Each scan became a timestamped event the team could trust—but Julie also did the two governance moves most shops skip. First, she published a one-page scan dictionary defining what each scan means (e.g., "Print Start" = first sheet off press; "Print Complete" = last sheet plus ticket), what counts as a "touch," and which holds are excluded from production cycle time (customer holds) vs included (internal holds). Second, she limited reason codes to a small, usable set (10–15) and banned "Other" unless a supervisor approved it at the end of the shift.

Suddenly, "touches per job" wasn't a guess; it was a count. "Time per touch" wasn't a debate; it was elapsed time between scans. Queue time wasn't a complaint; it was measurable waiting between steps. Julie then converted metrics into daily management: every morning, the huddle reviewed (1) the top three jobs at risk, (2) yesterday's top two queue points by minutes, and (3) yesterday's top two rework causes—each with an owner and a same-day corrective action (tighten intake fields, adjust release rules, resequence work, stage materials earlier, or eliminate a recurring file error with a template).
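The arithmetic behind those metrics is simple once timestamped scan events exist. A minimal sketch for one job—the step names follow the story's scan points, but the event schema itself is an assumption:

```python
from datetime import datetime

# Timestamped scan events for one job, in scan order (hypothetical schema).
events = [
    ("release",         datetime(2026, 3, 9, 8, 0)),
    ("prepress_done",   datetime(2026, 3, 9, 9, 30)),
    ("print_start",     datetime(2026, 3, 9, 11, 0)),
    ("print_done",      datetime(2026, 3, 9, 12, 0)),
    ("finishing_start", datetime(2026, 3, 9, 13, 0)),
    ("finishing_done",  datetime(2026, 3, 9, 13, 30)),
]

# "Touches per job" is just a count of scans.
touches = len(events)

# "Time per touch" is elapsed minutes between consecutive scans.
gaps = [
    (later_step, (later_ts - earlier_ts).total_seconds() / 60)
    for (earlier_step, earlier_ts), (later_step, later_ts) in zip(events, events[1:])
]

# Queue time is the waiting before a step begins, e.g. prepress done -> print start.
queue = {step: minutes for step, minutes in gaps
         if step in ("print_start", "finishing_start")}

print(touches)                     # 6
print(max(queue, key=queue.get))   # biggest queue point for the morning huddle
```

With this in place, "yesterday's top two queue points by minutes" is a sort over `queue` across all jobs, not a debate.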


About the Author

Howie Fenton

Howie Fenton is an independent consultant and trusted advisor to in-plant printers. He recommends equipment, best practices, and workflow automation tools to streamline operations. To learn more about measuring performance, benchmarking to leaders, and improving your value, e-mail Howie@howiefentonconsulting.com.


Copyright © 2026 Rochester Software Associates, Inc. All Rights Reserved.
