By Al Cormier, Director of Corrections Thought Leadership
After 30 years in corrections, I can tell you exactly what kills a technology rollout. It's not the technology; it's treating it like a technology problem.
I just got back from ACA's Winter Conference in Long Beach, and the conversation has shifted. Fewer people are debating whether AI belongs in corrections. The real question, the one most agencies aren't ready to answer, is: who's in charge of it once it's running?
The Corrections Staffing Crisis Makes Technology Governance Urgent
Right now, there are facilities with vacancy rates above 50%. Staff are stretched past the point of doing their jobs safely, let alone strategically. AI tools that flag suicide risk, automate intake workflows, or surface security patterns could provide real relief, and some already are.
But the part that keeps getting skipped: who makes the call when the algorithm says an inmate is high-risk and the officer on the floor disagrees? Where does that decision get documented? What happens when the AI is wrong and nobody built a process for catching it?
These aren't hypothetical questions. They're operational ones, and in my experience the agencies that stumble aren't the ones with bad technology. They're the ones that never decided how those decisions would actually get made once the new technology went live.
AI in Corrections Without Governance Is Faster Chaos
At ACA, a session on AI-driven suicide risk prediction brought this into sharp focus. The clinical potential is real. AI can process behavioral indicators across a population faster than any human team. But if a mental health team doesn't trust the tool, they'll ignore it. If they trust it too much, they'll stop doing their own assessments. Neither outcome is acceptable when someone's life is on the line.
What was missing from that conversation, and from most conference sessions I attended, was any serious discussion of the governance layer. Who validates the model's accuracy over time? Who retrains it when the population changes? Who answers for it when it misses something? Without those answers, even the best predictive tools become a liability.
Lessons from a Corrections Technology Rollout: What Made It Stick
What I've seen work, and this goes back to well before anyone was talking about AI, is when agencies treat new tools as a people change, not a systems change. The new technology is the easy part.
When we rolled out inmate tablets across six facilities in Vermont over a decade ago, the technology performed as expected. What nearly derailed us was officer trust. Frontline staff needed to understand what the tablets did, how they fit into their existing workflow, and what they were supposed to do when the technology and their judgment didn't align. We had to invest as much time in those conversations as we did in the implementation itself. That's what made it stick. Whether an agency is implementing a full offender management system or simply introducing a single AI solution, those conversations matter.
The hard part is always getting veteran staff to trust a dashboard or getting leadership to fund the governance structure that makes the dashboard trustworthy in the first place.
Integrated Corrections Systems Need Aligned Teams
There was plenty of talk at the conference about integrated architecture, where data flows across intake, case management, investigations, and security instead of sitting in silos. It's the right idea, but I've watched agencies pour money into integration projects that went nowhere because they wired the data together without ever aligning the people who use it. This is especially true in jail and prison management where all aspects of operations must flow together.
What does alignment look like? It's case managers, security staff, and clinical teams in the same room during implementation, agreeing on shared definitions, escalation paths, and who owns what data. When that happens, integration works. When it doesn't, a connected system with disconnected staff is just a more expensive mess.
Corrections Technology Governance: The Framework That Matters Most
The agencies that lead over the next five years won't be the ones with the most advanced AI. They'll be the ones that did the unglamorous work of building governance first. That means:
- Clear data ownership and quality standards
- Defined escalation paths when human judgment and algorithmic output conflict
- Human oversight built into the workflow, not bolted on as an afterthought
- Documented decision frameworks that hold up under audit
- Regular review cycles to validate that the tools are performing as intended
That's not a technology strategy. That's a leadership strategy. And right now, it's the decision that separates agencies moving forward from the ones buying software and hoping for the best.
If your agency is evaluating new configurable technology right now, ask yourself this: do you have a governance framework ready, or are you planning to figure it out after go-live? The answer will tell you more about your chances of success than any product demo ever will.
Want to talk about governance planning for your next rollout? Contact me at al.cormier@mi-case.com