
Log360 capacity planning guide for scalable deployments

On this page:

  • Introduction
  • The scalable architecture of Log360
  • Components to consider for capacity planning
  • Network requirements
  • Factors affecting deployment size
  • Hardware guidelines
  • Capacity planning scenario
  • Next steps

Introduction

Capacity planning ensures that a Log360 deployment can handle current and future workloads without performance issues. It involves estimating the computing, storage, and network resources needed for log ingestion, analysis, and retention. A properly sized system delivers stable performance, faster searches, and uninterrupted log collection.

This document explains:

  • The scalable architecture of Log360
  • Components to consider for capacity planning
  • Factors affecting resource requirements
  • Hardware guidelines for each role
  • How to determine the number of processors
  • Practical example scenarios
Note:

Please go through Log360's scalable architecture guide to understand the components and how they function in depth, and When to scale Log360 to know when the architecture needs to be scaled.

The scalable architecture of Log360

Log360 is designed to process large log volumes using a modular, distributed architecture. All log processor nodes are hosted at a central location, while remote sites use lightweight agents to collect logs and send them to the processors.

Processor roles explained

Default roles:

  • Processing Engine: Enriches parsed logs and by default handles log forwarding, alerts, and archiving.
  • Log Queue Engine: Manages event flow between components and prevents data loss.
  • Correlation Engine: Evaluates events against security rules.
  • Search Engine: Indexes data, stores it in Elasticsearch, and handles user queries.

Optional specialized roles / Custom roles: These functions can be decoupled from the Processing Engine and run on dedicated nodes for greater flexibility and performance:

  • Alerts: Generates notifications from general alerts and correlation alerts.
  • Log Forwarding: Sends logs to external tools or destinations for analysis or storage.
  • Log Archive: Stores logs based on retention policies.

Other components

In multi-site deployments, the Access Gateway Cluster acts as a reverse proxy in the DMZ, receiving logs from agents and routing them securely to the processors. This ensures secure transmission and load balancing.

Components to consider for capacity planning

1. Processors

  • Located at a central location
  • Can run one or more roles
  • Processing, queuing, search, and correlation roles are mandatory

Supported operating systems:

  • Windows Server 2016, 2019, 2022
  • Red Hat Enterprise Linux 8.x, 9.x
  • Ubuntu 20.04 LTS, 22.04 LTS

2. Agents

  • Installed at remote sites
  • Parse, compress, and send logs to HQ processors

Base hardware for agents

Component Requirement
CPU 12 cores
RAM 6 GB
Storage Minimal local storage (buffer only)

3. Indexer cluster (Elasticsearch)

  • Stores recent logs for fast searching.
  • Uses high-performance SSDs.
  • Storage size is based on retention policy and daily volume.

4. Database

The common database stores product configurations and metadata.

Supported external databases:

  • Microsoft SQL Server 2016, 2019
  • PostgreSQL 12.x, 13.x, 14.x

5. Shared storage

  • The shared storage can be NFS or SMB.
  • Holds archives, Elasticsearch backups, and inter-processor communication files.

Network requirements

Communication between Log360 components requires specific ports and recommended bandwidth. Minimum bandwidth: 1 Gbps for log transmission. A quick port reachability check is sketched after the note below.

Service Protocol Port Purpose
Web server HTTP 8095 (configurable) UI access and agent communication
Elasticsearch TCP 9300–9400 (configurable) Internal communication for search engine management and clustering
Internal communication UDP 5000 (configurable) Internal agent-to-server communication
Database TCP 33335 Connectivity to the external PostgreSQL/MySQL database

Log type Protocol Port Service
Windows logs TCP 135, 139, 445 WMI, NetBIOS, SMB for agentless collection
Windows logs TCP 49152–65535 Dynamic RPC range used by Windows
Syslogs TCP/UDP 513, 514 (configurable) Standard ports for syslog reception
Note:
  • Use TCP for critical logs to ensure guaranteed delivery.
  • Use UDP for high-volume logs where speed is more important than delivery guarantees.
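Before rolling out agents, you may want to confirm that the listed TCP ports are open between sites. The following is a minimal sketch, not part of Log360: the hostname is a placeholder, and the port list assumes the defaults from the table above (adjust it to whatever you have actually configured). UDP 5000 cannot be verified with a simple connection test.

import socket

# Default TCP ports from the table above (placeholders; adjust to your configuration).
REQUIRED_TCP_PORTS = {
    "Web server (UI and agent communication)": 8095,
    "Elasticsearch transport": 9300,
    "External database": 33335,
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    processor_host = "log360-processor.example.com"  # placeholder hostname
    for service, port in REQUIRED_TCP_PORTS.items():
        state = "reachable" if is_reachable(processor_host, port) else "blocked or unreachable"
        print(f"{service} on {processor_host}:{port} -> {state}")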

Factors affecting deployment size

The following table maps the factors that affect Log360's scalable deployment to their impact. Consider them carefully when assessing your enterprise requirements.

Factor Impact
Event rate (EPS) and daily volume Main driver of CPU, RAM, and storage needs.
Roles enabled Full SIEM (all roles enabled on a processor) requires resources to scale accordingly.
Retention period Directly impacts hot and archive storage size.
User concurrency and search complexity Increases CPU and RAM load on search engines.
Remote log collection Requires a secure gateway server.
High availability Critical roles are deployed on at least two processors.

Hardware guidelines

Processor nodes

Role configuration CPU RAM
Processing engine + queue engine 12–16 cores 24 GB
With indexing and search 12–16 cores 32–48 GB
With correlation and alerts 12–16 cores >32 GB
Full SIEM (all roles along with log forwarding) 12–16 cores 48–64 GB
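For sizing scripts, the guideline table above can also be expressed as data. The sketch below is illustrative only: the role-set labels are informal names, not Log360 configuration keys, and None stands for an open-ended upper bound (e.g. ">32 GB").

# Processor sizing guidelines from the table above, expressed as a lookup.
# Ranges are (minimum, maximum); None means no stated upper bound.
PROCESSOR_SIZING = {
    "processing_and_queue":        {"cpu_cores": (12, 16), "ram_gb": (24, 24)},
    "with_indexing_and_search":    {"cpu_cores": (12, 16), "ram_gb": (32, 48)},
    "with_correlation_and_alerts": {"cpu_cores": (12, 16), "ram_gb": (32, None)},
    "full_siem":                   {"cpu_cores": (12, 16), "ram_gb": (48, 64)},
}

# Example: look up the recommendation for a node running all roles.
print(PROCESSOR_SIZING["full_siem"])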

Indexer cluster

Storage formula:

Total storage = (Daily volume in GB) × (Retention days) × (1 + Replication factor) × 1.2

Replication factor determines how many copies of indexed data are maintained across nodes.

Supported values: 0 or 1

  • 0 → Only a single copy of the data is stored.
  • 1 → A duplicate copy of the data is stored on another node for redundancy (primary + one replica).

Example:

If your deployment has 10 Log Processors and the replication factor is set to 1:

  • When one node goes down, all indexed data remains accessible, since every data block is replicated on another node.
  • If more than one node fails simultaneously, partial data availability may occur, depending on which nodes are affected.
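The storage formula above can be applied directly. Below is a minimal sketch of the calculation; the function name is ours, the 1.2 multiplier is the overhead headroom used in this guide, and the replication factor must be 0 or 1 as described above.

def hot_storage_gb(daily_volume_gb: float, retention_days: int, replication_factor: int = 0) -> float:
    """Total indexer storage = daily volume x retention days x (1 + replication factor) x 1.2."""
    if replication_factor not in (0, 1):
        raise ValueError("Log360 supports a replication factor of 0 or 1")
    return daily_volume_gb * retention_days * (1 + replication_factor) * 1.2

# Example: 563 GB/day retained for 45 days with one replica (Scenario 2 below).
print(f"{hot_storage_gb(563, 45, 1) / 1024:.1f} TB")  # ~59.4 TB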

Database

  • 100–200 GB storage
  • Low latency and high IOPS required

Shared storage

  • High throughput for archive operations
  • Scalable for long-term retention

Capacity planning scenario

The following scenarios illustrate the step-by-step approach to capacity planning. Work through these steps to size your deployment:

  • Estimate your daily log volume: EPS × 86,400 (seconds in a day) × average log size (see the sketch after this list).
  • Identify roles to be enabled.
  • Determine hardware per role using the guidelines above.
  • Apply redundancy for high availability.
  • Add 20% capacity for growth.
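As a rough illustration of the first and last steps, the sketch below estimates daily volume from EPS and average log size, then adds 20% growth headroom. The function names are ours; volume is expressed in decimal GB, and the inputs are Scenario 1's figures.

SECONDS_PER_DAY = 86_400

def daily_volume_gb(eps: int, avg_log_size_bytes: int) -> float:
    """Estimated raw log volume per day, in decimal GB."""
    return eps * SECONDS_PER_DAY * avg_log_size_bytes / 1e9

def with_growth(value_gb: float, headroom: float = 0.20) -> float:
    """Add capacity headroom for future growth (20% by default)."""
    return value_gb * (1 + headroom)

volume = daily_volume_gb(15_000, 500)  # ~648 GB/day raw
print(f"{volume:.0f} GB/day now, ~{with_growth(volume):.0f} GB/day with 20% growth headroom")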

Example deployment: Scenario 1

  • 1 HQ, 2 remote sites
  • Peak rate: 15,000 EPS
  • Average log size: 500 bytes
  • Retention: 30 days hot storage
  • Features: full SIEM

Daily log volume

15,000 × 86,400 × 500 bytes ≈ 650 GB/day (approximately)

Processor cluster at HQ

Node count Roles Hardware
12 All roles (log handling, queue, search, correlation, alerting) 12–16 cores, 48 GB RAM

Hot storage requirement

(650 × 30 × 1.2) ≈ 23 TB SSD for Elasticsearch (replication factor 0).

Archive storage

Sized according to retention beyond 30 days, on shared NFS/S3.

Data flow

  • Remote agents parse, compress, and send logs to HQ over TCP/UDP.
  • Processors enrich logs, store in queue topics, and forward to Elasticsearch for indexing.
  • Correlation engine processes real-time events and generates alerts.
  • Older logs are moved from Elasticsearch hot storage to shared archive.

Example deployment: Scenario 2 (Search‑heavy investigations)

  • 1 HQ, 4 remote sites
  • Peak rate: 10,000 EPS
  • Average log size: 700 bytes
  • Retention (hot): 45 days
  • Features: Search‑intensive (frequent complex queries by 25 analysts), correlation moderate

Daily log volume

10,000 × 86,400 × 700 bytes ≈ 563 GB/day (approximately)

Processor cluster at HQ

Node count Roles Hardware
12 Processing Engine + Queue Engine + Search Engine 12–16 cores, 24–32 GB RAM
1 Correlation + Alerts 12–16 cores, >32 GB RAM

Hot storage requirement

(563 × 45 × 2 × 1.2) ≈ 59 TB SSD for Elasticsearch (primary + 1 replica).

Archive storage

Beyond 45 days on shared NFS/S3 as per retention policy.

Data flow

Remote agents > Access Gateway Cluster (DMZ) > Processing and Queue > Search nodes for indexing > Analysts run concurrent, wide‑time‑range queries > Correlation generates alerts in real time.

Note:
  • Prioritize CPU/RAM on search nodes to handle query concurrency and aggregations.

Example deployment: Scenario 3 (Log Forwarding)

  • 1 HQ, 5 remote sites
  • Peak rate: 20,000 EPS
  • Average log size: 450 bytes
  • Retention (hot): None (indexer/search engine not deployed)
  • Features: Only Log Forwarding to external destinations (e.g., a cloud data lake or third‑party SIEM). Correlation/search disabled in Log360

Daily log volume

20,000 × 86,400 × 450 bytes ≈ 724 GB/day (approx.)

Processor cluster at HQ

Node count Roles Hardware
2 Processing Engine + Queue Engine + Log Forwarding 12–16 cores, 24–32 GB RAM

Optional archive (for audit/troubleshooting)

If you retain raw logs for 7 days: 724 × 7 ≈ 4.95 TB (pre‑compression) on shared storage.

Data flow

Remote agents > Access Gateway Cluster > Processing/Queue > Forward over TCP to external tools. The queue is required to handle peak log flow and to buffer logs during network issues.

Next steps

To understand how deployment works, please go through the How to plan a Log360 deployment guide.

Want to know more about the threat detection, investigation, and response capabilities of Log360? Explore the 30-day free trial with technical assistance.