
Should You Externalise Your Docker Container Logs?

Sriram Kumar Mannava
2 min read · Nov 29, 2023


It’s generally a best practice to monitor all application activity so that you can quickly understand what is going wrong.

But containers aren’t conventional, server-like environments where logs can be stored and read reliably.

We can't store logs inside a container: containers are so ephemeral that when something goes wrong, the container is simply destroyed, and that same container can never be brought back.

So what's the idea? Let the container stream all its activity to an external integration — an external log provider that receives all the logs the containers generate and stores them for us to analyze.

This is useful because, even if we scale our containers to tens or hundreds of instances, we don't need to worry much: all the logs end up in one common place.

If something fails in one of the containers, we can identify it using the container identifier and fix it. And because the log provider sits outside the container stack, even if a container goes down, its logs are still there for later analysis or reporting.

That's a long enough explanation - what options do we have to implement such a thing?

It turns out there are many such options that are easy to integrate.

To configure Docker containers to write logs to such a provider, we can use the logging drivers that Docker offers. These are the mechanisms through which containers push their logs to these providers.
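For instance, the default logging driver can be set globally in the Docker daemon's configuration file, or overridden per container at run time. Here's a minimal sketch using the syslog driver (the endpoint address is a placeholder):

# /etc/docker/daemon.json — sets the default driver for all containers
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.168.0.42:514"
  }
}

# Or per container, overriding the daemon default:
docker run --log-driver syslog \
  --log-opt syslog-address=udp://192.168.0.42:514 \
  my-app-image

Note that changes to daemon.json require a daemon restart and only apply to containers created afterwards.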

We can use drivers such as syslog, journald, gelf (for Graylog or Logstash), fluentd, splunk and etwlogs, or even connect to cloud providers through awslogs (AWS CloudWatch) and gcplogs (Google Cloud Logging).
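As a concrete illustration, here's what wiring a container to Fluentd or to AWS CloudWatch might look like (the collector address, region and log group name are placeholders):

# Stream a container's logs to a Fluentd collector:
docker run --log-driver fluentd \
  --log-opt fluentd-address=fluentd-host:24224 \
  --log-opt tag=my-app \
  my-app-image

# Or ship them to AWS CloudWatch Logs via the awslogs driver:
docker run --log-driver awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-app-logs \
  my-app-image

The tag option accepts templates such as {{.Name}} and {{.ID}}, which is what lets us trace a log line back to the exact container that produced it.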

https://docs.docker.com/config/containers/logging/configure/


Written by Sriram Kumar Mannava

I make Full Stack Development Easy for You | Full Stack .NET Dev | 3× AWS Certified | Blogger