# Ollama Capabilities with OpManager

AI can significantly accelerate IT operations by reducing resolution times and delivering immediate insights. However, organizations in regulated industries or with stringent data governance policies often cannot route sensitive data through external, cloud-based AI services. Locally hosted large language models (LLMs) provide a secure alternative.

OpManager’s integration with Ollama combines AI-driven monitoring with fully on-premises data processing, ensuring that sensitive network data never leaves your infrastructure while still enabling advanced analytics.

Ollama is an open-source runtime for deploying LLMs within your environment. Once configured, OpManager communicates directly with a locally hosted Ollama instance, so all device data, alerts, and performance metrics are processed internally. This enables context-aware summaries of device health and alarm conditions, including status overviews, diagnostic insights, and recommended actions, without any dependency on external AI services. Additionally, users can define custom monitors using natural language prompts, with OpManager leveraging Ollama to generate the required monitoring scripts automatically.

![Ollama Integration with OpManager](https://cdn.manageengine.com/manageengine/network-monitoring/images/ollama-integration-page.png)

## AI Summarization Agents for Deeper Network Visibility

With Ollama integrated into OpManager, administrators can generate instant, on-demand summaries of monitored devices at the click of a button. Once set up, the AI summarization agent is triggered via the **Summarize** button, producing real-time, meaningful overviews of monitored metrics, with all underlying data remaining securely within your environment.
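Under the hood, a locally hosted Ollama instance exposes a simple HTTP API (by default on `localhost:11434`, with generation served at `/api/generate`). The sketch below illustrates the kind of non-streaming request an integration of this sort could assemble; the helper name, prompt wording, and device context are hypothetical examples, not OpManager code.

```python
import json

# Ollama's documented local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_summary_request(model: str, device_context: str) -> dict:
    """Build a non-streaming generation request for a local Ollama server.

    Illustrative only: the prompt template below is an assumption, not the
    actual prompt OpManager sends.
    """
    return {
        "model": model,  # any model already pulled locally, e.g. "llama3"
        "prompt": f"Summarize the health of this device:\n{device_context}",
        "stream": False,  # ask for one complete JSON response
    }

payload = build_summary_request("llama3", "CPU: 92%, Memory: 78%, Status: Up")
body = json.dumps(payload).encode()  # this body would be POSTed to OLLAMA_URL
print(payload["model"], payload["stream"])
```

Because the model runs locally, the request and the device context it carries never traverse the network boundary, which is the core of the privacy guarantee described above.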
The following summary types are available:

### Device Summary

For individual managed devices, OpManager produces AI-generated health and performance overviews covering the full range of critical indicators, including CPU utilization, memory usage, bandwidth consumption, device availability, uptime history, configuration parameters, performance irregularities, and early-stage failure warnings. Administrators get an immediate picture of any device's operational state without having to navigate through fragmented dashboards.

To generate a summary, go to **Inventory**, select a device, open the **snapshot page**, and click the **Summarize** option.

### Device Group Summary

When assessing entire logical groups of devices, the AI agent performs collective metric analysis to deliver a unified performance overview across the segment. This makes it considerably easier to pinpoint bottlenecks, surface recurring fault patterns, and benchmark performance across different network environments at scale. While single-device summaries provide granular per-endpoint diagnostics, group-level summaries are better suited for macro-level analysis: identifying cross-device trends and infrastructure-wide issues in a single view.

Navigate to **Settings → Configurations → Groups** (or via Inventory directly), select a **device group**, and click the **Summarize** tab.

### Alarm Summary

For individual alarm records, the Ollama integration generates detailed incident summaries that go beyond standard alert descriptions. Each summary covers the technical nature of the fault, a probable root cause assessment, severity classification, estimated network impact, and a structured troubleshooting workflow to guide faster resolution.

Navigate to **Alarms**, select an alarm, and click the **Summarize** button.

### All Alarms Summary

Instead of manually reviewing each alert in isolation, OpManager can process all active alarms collectively to produce a single, prioritized incident synopsis.
This consolidated summary enables faster macro-level incident assessment, identification of recurring failure signatures, and detection of broader systemic issues, supporting more proactive monitoring. For NOC teams handling high-volume alert environments, this provides immediate situational awareness while ensuring that all sensitive alarm data remains within your secure on-premises boundary.

To generate it, go to **Alarms** and click the **Summarize** button in the top-right corner.

## Natural Language-Driven Script Generation for Custom Monitoring

The Ollama integration also transforms how custom monitoring scripts are built inside OpManager. The platform's script monitoring engine supports execution of custom scripts in PowerShell, Linux shell, VBScript, Perl, and Python, enabling operators to define monitoring logic and surface non-standard metrics. Traditionally, building these scripts required scripting expertise and considerable time investment. With Ollama, that process is democratized: users simply describe their monitoring requirement as a natural language prompt, and OpManager uses Ollama to produce a fully formed, ready-to-deploy script template.

To do so, go to **Settings → Monitoring → Script Templates**, click **Add**, and enter the relevant monitoring details. Under **Script Details**, click **Generate with AI**. Enter your prompt in the dialog that appears and click **Generate**. Ollama will produce a script that can be adopted as a monitoring template.

Each generated script is fully reviewable and editable before deployment, ensuring that engineering teams retain complete control over the final output. This significantly reduces the time and specialization needed to configure new monitoring workflows, while ensuring that no proprietary script logic or sensitive system details are ever forwarded to an external cloud service.
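To make the review step concrete, here is a minimal sketch of the kind of script such a prompt might yield, written in Python, one of the languages the script monitoring engine supports. The prompt ("alert when the root partition exceeds 90% usage"), the threshold, and the printed output format are all illustrative assumptions; the exact output contract your OpManager script template expects may differ, so always review and adapt generated scripts before deployment.

```python
import shutil

# Hypothetical example of a generated monitor for the prompt:
# "alert when the root partition exceeds 90% usage".
THRESHOLD_PERCENT = 90

def disk_used_percent(path: str = "/") -> float:
    """Return the percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    return usage.used / usage.total * 100

percent = disk_used_percent()
# Emit one metric line plus a status line; a script monitor would parse
# these (the exact expected format is an assumption here).
print(f"disk_used_percent {percent:.1f}")
exit_code = 1 if percent > THRESHOLD_PERCENT else 0
print(f"status {'CRITICAL' if exit_code else 'OK'}")
```

Treating output like this as a starting point, rather than a finished monitor, keeps engineers in control of thresholds, metric names, and error handling while still skipping the boilerplate.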
[Click here](https://www.manageengine.com/network-monitoring/help/integrate-opmanager-with-ollama.html?ollama_opm_integration) for the steps to configure Ollama integration.