Lessons Learned from Container Performance Tuning
When I started using containers to deploy software, I did what everyone else was doing: I launched a container running Alpine Linux, logged into a Bash shell running inside the container, and said, “That was easy!”
Of course, that first container experience did not do much to solve any real-world problems. It was not until I took the next step—migrating an existing application that I was working on into a container—that I got a taste of the challenges that arise when using containers.
This article highlights some of the monitoring and troubleshooting challenges you’re likely to face when you use containers, based on my experience containerizing a Spring Boot app at my company and deploying it with Docker.
Getting started with the Spring Boot app was easy. I was able to pull a template for the Dockerfile I needed from some blog posts, then tweak it to get things just right before launching my container.
When I started the container, all went smoothly. It fired up without issue, and I was able to connect to the app from my test client.
So far, so good. Now, let’s take a look at the log files.
Pulling the logs from the container runtime is easy enough. You first list the running containers, then fetch the logs for the appropriate one, like so:
[root@origin helloworld-springboot]# docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
6523e9917c11        springboot/helloworld   "java -Djava.security"   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp   romantic_jepsen
[root@origin helloworld-springboot]# docker logs --tail 5 6523e9917c11
2018-01-21 03:47:05.458 INFO 1 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080 (http)
2018-01-21 03:47:05.466 INFO 1 --- [ main] c.e.j.g.HelloworldApplication : Started HelloworldApplication in 4.379 seconds (JVM running for 5.618)
2018-01-21 03:47:12.134 INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet'
2018-01-21 03:47:12.134 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started
2018-01-21 03:47:12.156 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 22 ms
That was easy enough…until I realized that the output above was incomplete. It didn’t include all of the logs I was expecting, and the log data I actually wanted was nowhere to be found.
Why? After some research and digging around, I realized that the default logging configuration for a container only captures STDOUT and STDERR; anything the app writes to its own log files inside the container is invisible to the runtime. Once I figured that out, I had a few options, and I have chosen differently depending on the situation.
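One quick way to confirm what `docker logs` can and cannot see is to check which logging driver the container is using (the container ID here is the one from the `docker ps` output above):

```shell
# Show the logging driver for the container. "json-file" is the default,
# and it only captures the process's STDOUT and STDERR streams.
docker inspect --format '{{.HostConfig.LogConfig.Type}}' 6523e9917c11

# Anything the app writes to files inside the container (for example a
# rolling log file under the app's own logs directory) never reaches
# `docker logs`, regardless of the driver.
```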
The easiest option is to mount a volume in the container that all the custom logs are written to, so they are available outside the container. This also makes it easy to work with third-party log aggregation services that may already be in place on the host.
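A minimal sketch of that first option, assuming the app writes its custom logs to `/var/log/helloworld` inside the container (the paths are illustrative; the image name comes from the `docker ps` output above):

```shell
# Create a host directory and bind-mount it over the container's log
# directory, so log files written by the app land on the host.
mkdir -p /var/log/helloworld

docker run -d \
  -p 8080:8080 \
  -v /var/log/helloworld:/var/log/helloworld \
  springboot/helloworld

# The log files are now visible to host-side tooling: tail, logrotate,
# or a log-shipping agent that is already watching the host.
tail -f /var/log/helloworld/application.log
```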
The second and more complicated option is to retrofit the logging in the app to send directly to an external service. This could be syslog, or any other option that you might be comfortable with, but it requires changes to the app. For this reason, this is not always the easiest option for quickly building and deploying a containerized app, although it is arguably better over the long term and at scale.
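For a Spring Boot app, which uses Logback by default, the retrofit can be as small as adding a syslog appender to the logging configuration. This is a sketch only; the hostname, port, and facility below are placeholders for whatever aggregation endpoint you actually run:

```xml
<!-- logback-spring.xml: send application logs straight to syslog -->
<configuration>
  <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <syslogHost>logs.example.com</syslogHost> <!-- placeholder endpoint -->
    <port>514</port>
    <facility>LOCAL0</facility>
    <suffixPattern>helloworld: [%thread] %logger %msg</suffixPattern>
  </appender>
  <root level="INFO">
    <appender-ref ref="SYSLOG" />
  </root>
</configuration>
```

The trade-off is exactly the one described above: the logs flow to a central service without any volume plumbing, but every app now carries logging configuration that points at external infrastructure.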
Performance Tuning and Diagnostics
Running diagnostics and performance tuning on my containerized app turned out to be challenging, too.
The challenges here are similar to those involved in logging. Because everything is wrapped nicely in a container, getting at the performance metrics you need to help with performance tuning and problem diagnostics is difficult.
That is particularly true, I realized, when running Java applications. Tunables like maximum and minimum pool sizes, total memory footprint (heap size), and the number of running threads are vital pieces of information, but they are not readily available through the Docker CLI.
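`docker stats` gives you the container-level view (CPU, memory, network and block I/O), but for the JVM internals you have to reach into the container yourself, for example with `jcmd`. This sketch assumes the image ships a full JDK rather than a bare JRE, and uses the container ID from earlier:

```shell
# Container-level resource usage, refreshed live:
docker stats 6523e9917c11

# JVM-level details require exec'ing into the container. The Spring Boot
# process runs as PID 1 inside this container, so:
docker exec 6523e9917c11 jcmd 1 GC.heap_info   # heap sizes and usage
docker exec 6523e9917c11 jcmd 1 Thread.print   # thread dump
docker exec 6523e9917c11 jcmd 1 VM.flags       # effective JVM tunables
```

These are point-in-time snapshots with no history attached, which is exactly the gap that pushed me toward a proper APM solution.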
In the face of this obstacle, the only option that could meet my needs with any kind of historical record was a proper APM (Application Performance Management) solution that has automated discovery and mapping. Specifically, I was looking for a tool that provided low-overhead agentless monitoring, as well as deeper instrumentation and tracing when needed.
The lesson I learned from my initial experiments deploying a Spring Boot app in a container is that, when working with containers, traditional infrastructure monitoring doesn’t have much value beyond ensuring the host is healthy.
While some monitoring tools can bridge traditional and containerized environments, containers are where infrastructure-centric monitoring ceases to be the best option. Sure, you can get the data you want if you try hard enough. But the manual effort required to access custom log data and performance metrics via the Docker CLI is not feasible in a large-scale container deployment.
For more on container monitoring, download the free Container Monitoring and Management eBook from The New Stack or visit The Essentials of Container Monitoring Hub.
Blog by Vince Power
Vince Power is a Solution Architect who has a focus on cloud adoption and technology implementations using open source-based technologies. He has extensive experience with core computing and networking (IaaS), identity and access management (IAM), application platforms (PaaS), and continuous delivery.