How to Tackle the Key Risks of Introducing AI Capabilities in ITSM

December 22 | 9 min read

There’s no doubt that artificial intelligence (AI) and the capabilities it brings to IT service management (ITSM) tools have changed, and will continue to change, IT service delivery and support. Importantly, the use of AI to improve IT services, operations, and experiences is no longer a "future thing." Many AI-enabled capabilities are already available in ITSM tools, and customer organizations are adopting them to better serve their employees and customers. AI adoption carries risks, though, and these need to be considered and mitigated on the path to success. This blog examines the primary concerns IT organizations have about AI adoption, based on a ManageEngine survey of 300 IT professionals (and the associated "The advent of AI agents in ITSM: Perception and future impact" report, which is available here), and provides practical guidance on how to address the key risks of introducing AI capabilities in ITSM.

What are the top AI-related concerns in 2025?

When asked "Do you have any worries, if at all, about deploying AI agents for your everyday IT service management operations?", the top three concerns were (survey respondents could select all the responses that applied):

  • AI governance, data security, and privacy concerns (45%)
  • Reliability of AI agents (39%)
  • Implementation complexity (34%).

Practical guidance on these top three concerns is shared below. Only 8% of survey respondents had no concerns about the implementation of AI agents.

The top concern matches other AI-in-ITSM-related surveys conducted in 2025. For example, the ITSM.tools poll for the most wanted ITSM content in 2025 placed governance (including AI governance) first.

It’s also important to appreciate that the top concerns differed little by organizational size:

  • 100–249 employees — "AI governance, data security, and privacy concerns," "Reliability of AI agents," and "Implementation complexity" (matching the aggregated results)
  • 250–500 employees — "AI governance, data security, and privacy concerns," "Reliability of AI agents," and "Implementation complexity" and "Unproven technology" in joint third (again matching the aggregated results)
  • More than 500 employees — "AI governance, data security, and privacy concerns," "Reliability of AI agents," and "Unproven technology" (matching the first, second, and fourth places of the aggregated results).

Organizations are embracing AI

Despite these and other concerns, organizations are still open to adopting AI agents, where an AI agent was defined as "...an intelligent model that can detect user intent from a ticket, email, or through conversations and autonomously gather contextual data, make decisions, and perform tasks. AI agents can be deployed for service desk tasks such as incident management or service request fulfilment."

The ManageEngine survey found that 93% of respondents stated their organizations would be open to using AI agents in ITSM. Only 4% indicated that their organization wasn’t open to using AI agents. The appetite and ambition for AI ITSM capabilities are there, and in some cases are already being acted upon. However, as with any new technology adoption and associated organizational change, many risks can adversely affect the success of change initiatives. Practical guidance to address the top three challenges and risks identified by the survey follows.

AI governance, data security, and privacy issues

  • Create an AI governance framework — this should define AI policies and acceptable use standards.
  • Facilitate transparency by:
    1. Logging AI decisions
    2. Maintaining audit trails
    3. Explaining model behavior (what’s often called "explainability").
  • Regularly validate and test AI models for "drift," bias, and unintended outcomes.
  • Implement strong data security controls around AI use — this should include:
    1. Encrypting data at rest and in transit
    2. Using least-privilege access
    3. Vetting third-party models and APIs for supply chain risk.
  • Align AI use with relevant frameworks, such as GDPR and HIPAA, as well as internal privacy standards.
  • Train staff on the key AI risks and ethics, including data handling best practices and awareness of AI bias, hallucination, and explainability.
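The "logging AI decisions" and "audit trails" points above can be sketched in a few lines of Python. This is a minimal illustration, not a feature of any particular ITSM tool; the agent ID, ticket ID, and record fields are hypothetical, and in practice the records would be shipped to your log store or SIEM.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit-trail sketch for AI-agent decisions. All identifiers
# (agent IDs, ticket IDs, field names) are hypothetical illustrations.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_ai_decision(agent_id: str, ticket_id: str, action: str,
                    confidence: float, reasoning: str) -> dict:
    """Record one AI decision, with its reasoning, for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "ticket_id": ticket_id,
        "action": action,
        "confidence": confidence,
        "reasoning": reasoning,  # supports explainability reviews
    }
    audit_log.info(json.dumps(record))  # forward to a durable log store in practice
    return record

# Example: an agent auto-categorizes an incident ticket.
log_ai_decision("triage-agent-01", "INC-1042", "categorize:network", 0.87,
                "Keywords 'VPN' and 'timeout' matched the network-incident category")
```

Capturing the model's stated reasoning and confidence alongside each action is what makes the later validation and bias checks, and any compliance conversations, tractable.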

The reliability of AI agents

  • Define clear AI agent boundaries and guardrails by setting task-level permissions that restrict what AI agents can initiate. This should also define "no-go" zones — the systems, commands, or datasets that AI agents must not access.
  • Implement continuous testing and validation by simulating and regularly testing AI agent behavior in sandbox environments. Also include fail-safes and rollbacks in automation flows.
  • Use explainable and auditable models — this includes explainable AI (XAI) techniques for classification or decision logic, logging every AI agent decision with reasoning (and context), and maintaining audit trails for compliance and troubleshooting purposes.
  • Monitor and measure AI agent reliability metrics, including success and failure rates of actions, escalation frequency, and feedback ratings. Use these metrics to adjust confidence thresholds or retrain the AI models.
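The guardrail and confidence-threshold ideas above can be sketched as a simple authorization check. This is an illustrative sketch under assumed policy data — the agent names, action lists, "no-go" systems, and threshold value are all hypothetical:

```python
# Hypothetical guardrail policy: task-level permissions per agent,
# "no-go" systems, and a confidence threshold below which the agent
# must escalate to a human instead of acting autonomously.
ALLOWED_ACTIONS = {
    "triage-agent": {"categorize_ticket", "suggest_kb_article"},
    "fulfilment-agent": {"reset_password", "provision_software"},
}
NO_GO_SYSTEMS = {"payroll-db", "prod-firewall"}  # agents must never touch these
CONFIDENCE_THRESHOLD = 0.80                      # tune from reliability metrics

def authorize(agent: str, action: str, target_system: str, confidence: float) -> str:
    """Return 'execute', 'escalate', or 'deny' for a proposed agent action."""
    if target_system in NO_GO_SYSTEMS:
        return "deny"      # hard guardrail: no-go zone
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        return "deny"      # outside this agent's task-level permissions
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate"  # fail-safe: route low-confidence actions to a human
    return "execute"
```

The measured escalation frequency and success/failure rates then feed back into `CONFIDENCE_THRESHOLD`: if escalated actions are almost always approved unchanged, the threshold can be lowered; if executed actions are frequently rolled back, it should be raised.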

Implementation complexity

  • It’s best to start with narrow, well-defined use cases; ideally, these are high-impact (in terms of business value), low-complexity use cases.
  • Build on existing ITSM and automation infrastructure, as "reinventing the wheel" increases complexity and cost.
  • Utilize agile, iterative implementation methods that break down deployment into milestones with short sprints, utilizing feedback loops from users and models to facilitate adaptation.
  • Select AI platforms with native ITSM integrations and low-code/no-code AI support. Importantly, avoid building "in-house" unless your organization has deep AI engineering capabilities.
  • Invest in AI literacy and skills development — staff must understand how your organization’s AI capabilities work, what they can and cannot do, and how to effectively supervise them.
  • Establish technical standards and ensure reusability — this will help prevent inconsistent implementations from causing duplication and maintenance debt.

Hopefully, this blog has been helpful. If you want to download the full "The advent of AI agents in ITSM: Perception and future impact" report, it’s available here.
