Linux logging best practices

In this page

  • What is Linux logging?
  • Log storage and common files
  • Best practices for Linux log management

Effective logging is critical for maintaining the health, security, and performance of your Linux systems. Sound logging practices help administrators quickly detect issues, troubleshoot problems, ensure compliance, and protect sensitive data.

However, without a consistent approach, logs can become overwhelming and inconsistent. This guide outlines the best practices for Linux logging to maximize the value of your logs while minimizing common issues such as excessive logging, poor formatting, and data exposure.

What is Linux logging?

Linux logging is a structured system for recording events, messages, and activities generated by the operating system, kernel, services, and applications. It plays a critical role in system monitoring, troubleshooting, security auditing, and performance analysis.


Log storage and common files

Linux stores logs mainly in the /var/log directory. Key log files include:

  • /var/log/syslog (Debian/Ubuntu) or /var/log/messages (Red Hat/CentOS): General system activity and service messages.
  • /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (Red Hat/CentOS): Authentication events like logins, sudo usage, and PAM outputs.
  • /var/log/kern.log: Kernel messages including errors and warnings.
  • /var/log/wtmp, /var/log/lastlog, and more: User login history.
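A quick way to get oriented on a host is to peek at whichever of these files exist. The sketch below checks each candidate path first, since the exact files vary by distribution; the `peek` helper name is an assumption of this example.

```shell
#!/bin/sh
# Show the last few lines of whichever common log files exist and are
# readable on this host; paths differ between Debian- and Red Hat-based
# distributions, so each candidate is checked first.
peek() {
    for f in "$@"; do
        if [ -r "$f" ]; then
            echo "== $f =="
            tail -n 3 "$f"
        fi
    done
}
peek /var/log/syslog /var/log/messages \
     /var/log/auth.log /var/log/secure /var/log/kern.log
# /var/log/wtmp and /var/log/lastlog are binary records; read them with
# the `last` and `lastlog` commands instead of tail.
```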

Best practices for Linux log management

Best practices for Linux log management include centralizing logs for easier analysis, rotating logs regularly to prevent disk space issues, securing logs with proper permissions and encryption, using consistent structured formats, and assigning log levels to distinguish message severity.

The vital role of log levels to highlight severity

Assigning appropriate log levels to messages helps prioritize and filter log data effectively. Common log levels include:

  • Trace: The most detailed logging level, used for tracing the execution flow in the application. It provides extensive information about the internal state and is typically used for deep debugging.
  • Debug: This level is used for diagnostic purposes, providing detailed insights into the application's behavior. It includes information useful for troubleshooting and is often enabled in development environments.
  • Info: General information about the application's operational status. It logs significant events that are informational in nature, such as application startup or successful task completion.
  • Warn: Indicates potential issues or unexpected behavior that does not halt the application. It serves as a warning that something may need attention but is not critical.
  • Error: Logs errors that occur during execution, indicating that a specific functionality has failed but the application can continue running.
  • Critical/Fatal: Represents severe errors that lead to application failure or critical issues affecting key functionalities, requiring immediate attention.

Using these levels consistently allows you to focus on critical events and avoid noise from less important logs.
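Consistent levels pay off immediately in filtering. As a sketch, a plain-text log whose third whitespace-separated field is the level can be reduced to warnings and above with awk; the sample line format here is an assumption of this example.

```shell
#!/bin/sh
# Create a small sample log whose third field is the severity level.
cat > /tmp/sample.log <<'EOF'
2024-05-01T10:00:00Z host1 INFO service started
2024-05-01T10:00:05Z host1 WARN disk usage at 85%
2024-05-01T10:00:09Z host1 ERROR backup job failed
EOF
# Keep only WARN-or-worse entries by matching on the level field.
awk '$3 == "WARN" || $3 == "ERROR" || $3 == "CRITICAL"' /tmp/sample.log
```

The same one-liner works against a live log file once every producer agrees on where the level appears in the line.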

Use structured logging

Logs should follow a structured and consistent format to facilitate parsing, searching, and human readability. Avoid manual print statements that produce inconsistent or unstructured logs. Instead, use logging libraries or frameworks that support structured formats such as JSON or standardized text formats.

Benefits of consistent formatting include:

  • Easy integration with log management and analysis tools.
  • Simplified troubleshooting due to predictable log structure.
  • Enhanced ability to automate log parsing and alerting.

Standardizing date/time formats, including contextual information (e.g., hostname, service name), and using proper log levels are essential components of this practice.
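One lightweight way to get all three elements (standard timestamps, contextual fields, and levels) from shell scripts is to emit one JSON object per line. A minimal sketch, which assumes messages contain no double quotes (a real script should escape them or use a tool like jq):

```shell
#!/bin/sh
# Emit one structured JSON log line with a UTC timestamp, the hostname,
# a severity level, and the message text.
log_json() {
    level=$1; shift
    printf '{"ts":"%s","host":"%s","level":"%s","msg":"%s"}\n' \
        "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$(hostname)" "$level" "$*"
}
log_json INFO "backup completed"
log_json ERROR "backup failed for /data"
```

Lines in this shape can be ingested directly by most log management tools without custom parsing rules.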

Avoid logging sensitive data

Logs may inadvertently contain sensitive data such as passwords, personal information, or security tokens. To protect privacy and comply with regulations:

  • Sanitize logs to remove or mask sensitive data before writing.
  • Use log scanning tools to detect accidental exposure.
  • Restrict file permissions so only authorized users can access logs.
  • Encrypt logs, especially when transmitting them to remote servers.
  • Avoid storing sensitive logs on the local host; use secure centralized storage if needed.
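Sanitization can often happen before a line is ever stored. The sketch below masks common secret-bearing key=value pairs with GNU sed; the pattern list is a small illustrative set, not a complete one, and the demo file path is an assumption.

```shell
#!/bin/sh
# Mask the values of common secret-bearing key=value pairs in a log file.
# The keyword list is illustrative only; extend it for your environment.
mask_secrets() {
    sed -E 's/((password|passwd|secret|token|api_key)=)[^[:space:]]*/\1****/gI' "$1"
}
# Demo on a scratch file
printf 'login ok user=bob password=hunter2\n' > /tmp/demo.log
mask_secrets /tmp/demo.log
```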

Rotate logs periodically

Log files can grow rapidly and consume excessive disk space, potentially causing system issues. Implement log rotation to:

  • Archive old logs automatically.
  • Compress rotated logs to save space.
  • Limit the number of retained logs based on retention policies.

Use tools like logrotate to configure rotation frequency (e.g., daily, weekly, monthly), compression, and retention counts. Proper log rotation prevents disk exhaustion and maintains system stability.

Here is an example of a logrotate configuration snippet for /etc/logrotate.d/syslog:

/var/log/syslog {
    weekly
    rotate 6
    compress
    missingok
    postrotate
        systemctl reload rsyslog
    endscript
}

Note that the stanza names the log file to rotate (/var/log/syslog), not the configuration file itself. You can validate the configuration without rotating anything by running logrotate -d /etc/logrotate.d/syslog.

Centralized log collection

For environments with multiple servers or services, centralizing logs into a single location simplifies monitoring and analysis. Centralized logging enables:

  • Correlation of events across systems.
  • Easier search and alerting.
  • Improved security through controlled access.
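With plain rsyslog, centralization typically starts with a forwarding rule on each client. A minimal sketch, where the collector hostname, port, and config file path are assumptions for this example:

```
# /etc/rsyslog.d/50-forward.conf (example path)
# Forward all facilities and severities to the central collector.
# A single @ would forward over UDP; @@ uses TCP for reliable delivery.
*.* @@logserver.example.com:514
```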

Use a log management tool like ManageEngine EventLog Analyzer to collect and aggregate logs effectively from various sources, including firewalls, IDS/IPS, servers, routers, switches, database applications, web servers, proxy servers, and more.

Secure your logs

Logs often contain sensitive operational details and must be protected to prevent unauthorized access or tampering. Best practices include:

  • Setting strict file permissions.
  • Using role-based access controls.
  • Encrypting logs at rest and in transit.
  • Isolating logs on secure servers or cloud storage.
  • Regularly auditing access to log files.
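Strict file permissions can be applied with a short helper. The sketch below gives the owner full access, the group read-only access, and others nothing; it demonstrates on a scratch directory, since a real target such as /var/log/<app> would also need chown root:adm (or similar) run as root.

```shell
#!/bin/sh
# Tighten permissions on a log directory: full access for the owner,
# read-only for the group, no access for others.
harden_log_dir() {
    chmod 750 "$1"                                         # drwxr-x---
    find "$1" -type f -name '*.log' -exec chmod 640 {} +   # -rw-r-----
}
# Demo on a scratch directory; in production, target /var/log/<app>
# and also run: chown -R root:adm <dir>   (requires root)
mkdir -p /tmp/myapp-logs && touch /tmp/myapp-logs/app.log
harden_log_dir /tmp/myapp-logs
```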

Monitor log size and growth

Proactively monitor log files to detect abnormal growth in size, which can indicate excessive error logging. Use system tools like du and df to check disk usage, and set up alerts for unusual increases in log size. This helps prevent outages caused by full disks and enables timely troubleshooting.
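A du-based check like the one above can be wrapped in a small script and run from cron. In this sketch, the monitored directory and the size budget are assumptions to tune for your system:

```shell
#!/bin/sh
# Warn when a log directory exceeds a size budget, in kilobytes.
check_log_size() {
    dir=$1; limit_kb=$2
    used_kb=$(du -sk "$dir" 2>/dev/null | awk '{print $1}')
    if [ "${used_kb:-0}" -gt "$limit_kb" ]; then
        echo "WARNING: $dir uses ${used_kb} KB (limit ${limit_kb} KB)"
    fi
}
check_log_size /var/log 1048576   # warn above 1 GiB
```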

Set up alerts for critical system events

Configure alerts for critical events such as permission overrides, unauthorized changes to sensitive files, and users being added to critical security groups, so that incidents can be assigned to SOC analysts and addressed immediately.

EventLog Analyzer helps you set up alerts for incidents easily and notifies you through email or SMS. The solution also lets you integrate ticketing tools to seamlessly assign incidents to the right administrator.

Review and analyze logs for anomalous events

Regularly review logs for unexpected network activity, which can indicate ransomware attacks, data exfiltration attempts, brute-force attacks, and more.
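As a concrete example of this kind of review, failed SSH logins can be counted per source address; a sudden spike from one IP is a common brute-force signature. The log path and the sshd message format ("Failed password for <user> from <ip> ...") are assumptions of this sketch.

```shell
#!/bin/sh
# Count failed SSH logins per source IP, most frequent first.
top_failed_ips() {
    grep 'Failed password' "$1" 2>/dev/null \
        | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)}' \
        | sort | uniq -c | sort -rn | head
}
top_failed_ips /var/log/auth.log
```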

EventLog Analyzer's anomaly detection mechanisms help you identify and flag suspicious events. You can automate responses such as blocking malicious IP addresses and other preventive measures to keep potential threats at bay.

Document everything

Document all logging-related activity in your network, including baseline configurations, logging policies, user activities, and compliance with regulations.

EventLog Analyzer helps you comply with popular standards such as PCI DSS, HIPAA, SOC 2, and the GDPR, keeping you audit-ready with reports available for compliance purposes.

What's next?

Identify, analyze, and remediate security incidents in Linux environments faster with EventLog Analyzer.