Optimizing Docker Images
Optimizes Docker images for size and security by implementing multi-stage builds, dependency minimization, and image scanning best practices
The Anatomy of a Production-Ready Dockerfile
We've all opened a repository and found the same thing: a Dockerfile that looks like a to-do list written by someone who's never shipped to production. `FROM node:20`. `COPY . .`. `RUN npm install`. `CMD node index.js`.
It works on your laptop. It fails everywhere else.
The image is huge because you're copying source code, build tools, and dependency caches into the final runtime layer. It runs as root because you never added a USER instruction. It has no health check, so your orchestrator assumes it's alive even when it's dead. And critically, it has no security scanning integrated into the build pipeline.
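A minimal sketch of a runtime stage that closes those gaps: non-root user, health check, and only built artifacts in the final layer. The base image, paths, port, and `/healthz` endpoint are assumptions for illustration, not a prescription.

```dockerfile
# Hardened runtime stage (illustrative names and paths).
FROM node:20-alpine

# Create and switch to a non-privileged user instead of running as root.
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app

# Copy only the dependency manifest first so this layer caches well,
# then install production dependencies only.
COPY --chown=app:app package*.json ./
RUN npm ci --omit=dev

# Copy built output, not the whole source tree or build caches.
COPY --chown=app:app dist/ ./dist/

USER app

# Let the orchestrator detect a dead process instead of assuming liveness.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/healthz || exit 1

CMD ["node", "dist/index.js"]
```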
We built this skill because we're tired of seeing engineers copy-paste tutorials that prioritize convenience over security and efficiency. If you're working with Docker, you should already be familiar with the broader workflows in our Docker Mastery Pack, but this skill focuses on the specific, painful details of optimization and hardening that most teams skip until an incident forces their hand.
The problem isn't just size. It's the attack surface. Every layer, every package, and every running process is a potential vector. When you combine a 1GB image with a root user and no scanning, you're not just wasting storage; you're inviting supply chain attacks. OWASP's Docker Security Cheat Sheet [1] outlines the common errors we see daily, and the list is short: running as root, missing health checks, and using base images with known vulnerabilities.
The Cost of Ignoring Image Hygiene
Ignoring image hygiene costs you in three ways: security, velocity, and dollars.
Security: A bloated image contains more packages, more libraries, and more potential CVEs. If you're not scanning, you're blind. Image scanning is the process of analyzing the contents and build process of a container image to detect security issues [3]. Without it, you might push a critical vulnerability to production and not know it for weeks. OWASP DevSecOps guidelines emphasize that scanning must be integrated early, not bolted on after the fact [2].

Velocity: Large images slow down CI/CD. Pushing a 2GB image to a registry takes time. Pulling it onto a fresh node takes time. Scaling up from zero takes time. Every minute your pipeline waits on a build or pull is money burned. If you're deploying to Kubernetes, you're also paying for idle resources while the image pulls.

Dollars: Registry storage costs are rising, and cloud egress costs are real. But the biggest cost is engineering time. When an image is slow or broken, engineers spend hours debugging instead of shipping features.

Secure configuration best practices include validation of settings, access control via non-privileged users, and monitoring [5]. If your images don't follow these, you're violating the baseline security posture of your infrastructure.
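A scanning gate of the kind described above can be sketched as a single Trivy invocation in CI. The image name is illustrative; the point is that `--exit-code 1` makes the pipeline fail on findings, so the scan cannot be silently skipped.

```shell
# CI gate sketch: fail the build on HIGH/CRITICAL vulnerabilities.
# --ignore-unfixed suppresses findings with no available patch, which
# keeps the gate actionable rather than noisy.
trivy image \
  --severity HIGH,CRITICAL \
  --exit-code 1 \
  --ignore-unfixed \
  registry.example.com/app:1.2.3
```

Running this as a required step after `docker build` and before `docker push` is the simplest way to make scanning impossible to bypass.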
We've seen teams bypass scanning because it's slow or noisy. That's a mistake. If you need a deeper dive into scanning workflows, check out our Container Image Scanning skill, but this skill integrates scanning directly into the build process so it's impossible to skip.
How a Java Team Shrank Their Image by 75%
Imagine a backend team shipping a Java Spring Boot application. They use a "fat JAR" approach, bundling all dependencies into a single executable. Their Dockerfile looks like this:
FROM openjdk:17-jdk-slim
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
The resulting image is 850MB. It includes the full JDK, build tools, and unnecessary libraries. When they run a Trivy scan [8], it flags dozens of vulnerabilities in the base image and the JDK. The CI pipeline fails, but the team is under pressure to release, so they bypass the scan. Two weeks later, a critical CVE is disclosed. They're scrambling to patch.
This is a hypothetical illustration, but it's the exact pattern we see in real-world audits. The team doesn't need the JDK at runtime. They don't need build tools. They need a JRE and their application code.
By switching to a multi-stage build, they can separate the build environment from the runtime environment. They use eclipse-temurin for the build stage to compile the JAR, then copy only the JAR into a minimal JRE stage. The final image drops to 200MB. The attack surface shrinks. The vulnerabilities drop. The build speed increases because the cache is more granular.
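The pattern described above can be sketched as a two-stage Dockerfile. Image tags, the Maven wrapper layout, and the artifact name `app.jar` are assumptions for illustration; the structure is what matters: JDK and build tools stay in the first stage, only the JAR reaches the JRE stage.

```dockerfile
# Build stage: full JDK plus build tooling, discarded after compilation.
FROM eclipse-temurin:17-jdk AS build
WORKDIR /build
# Copy the build descriptor first so dependency resolution caches well.
COPY mvnw pom.xml ./
COPY .mvn .mvn
RUN ./mvnw -q dependency:go-offline
COPY src src
RUN ./mvnw -q package -DskipTests

# Runtime stage: JRE only. No JDK, no build tools, no source code.
FROM eclipse-temurin:17-jre
RUN useradd --system --uid 10001 appuser
WORKDIR /app
COPY --from=build /build/target/app.jar app.jar
USER appuser
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because the dependency layer only changes when `pom.xml` changes, routine source edits rebuild in seconds instead of re-downloading the world.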
This is what optimization looks like. It's not magic. It's discipline.
What Changes When Optimization Is Automated
When you install this skill, you're not just getting a few templates. You're getting a system that enforces best practices across your entire codebase.
Multi-Stage Builds: The skill provides production-grade templates for Node.js, Java Spring Boot, and Go.

- Node.js: Uses node:24-alpine for the build and nginx:alpine for runtime. It implements layer caching by copying package*.json first, then installing dependencies, then copying source code. It runs as a non-root user.
- Java Spring Boot: Uses eclipse-temurin for the build and a minimal JRE for runtime. It extracts the JAR and runs as a non-root user.
- Go: Uses golang for the build and scratch for runtime. It statically links the binary, resulting in an image with zero dependencies and a minimal attack surface.
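The Go-to-scratch approach can be sketched as follows. The module path, binary name, and Go version are illustrative assumptions; the key is `CGO_ENABLED=0`, which produces a statically linked binary that needs no libc in the final image.

```dockerfile
# Build stage: compiles a fully static binary.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# -s -w strips symbol tables and DWARF debug info to shrink the binary.
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /out/server ./cmd/server

# scratch contains nothing: no shell, no package manager, no libraries.
FROM scratch
COPY --from=build /out/server /server
# A numeric UID works even though scratch has no /etc/passwd.
USER 65534:65534
ENTRYPOINT ["/server"]
```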
Linting: A Spectral ruleset enforces multi-stage builds, USER instructions, and HEALTHCHECK, and blocks dangerous instructions. This ensures compliance before the image is even built.
Analysis: The skill includes a script to analyze image size and layer history. It identifies large layers and suggests optimizations based on best practices.
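The core of that kind of analysis can be sketched with plain `docker history`, which lists each layer alongside the instruction that created it. The image name is illustrative; large layers in the output point at COPY or RUN steps worth splitting or pruning.

```shell
# Show per-layer size and the Dockerfile instruction that produced it.
docker history --no-trunc \
  --format 'table {{.Size}}\t{{.CreatedBy}}' \
  registry.example.com/app:1.2.3
```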
If you're deploying to Kubernetes, we recommend storing approved images in a private registry and only pushing approved images to reduce risk [6]. This skill helps you generate those approved images.
DockSec, an OWASP Incubator Project, combines traditional scanners with AI to provide context-aware recommendations [7]. This skill brings that level of rigor to your builds, automated and integrated.
What's in the Optimizing Docker Images Pack
This is a multi-file deliverable. Every file is designed to be used by an AI agent or a human engineer to enforce optimization and security.
- skill.md — Orchestrator skill file defining the philosophy, workflow, and references for optimizing Docker images. Guides the agent to use templates, scripts, and validators.
- templates/multi-stage-nodejs.dockerfile — Production-grade Node.js multi-stage Dockerfile using node:24-alpine for build and nginx:alpine for runtime. Implements layer caching and non-root user.
- templates/multi-stage-java-spring.dockerfile — Production-grade Java Spring Boot multi-stage Dockerfile using eclipse-temurin. Implements layer extraction, non-root user, and JRE runtime.
- templates/multi-stage-go.dockerfile — Production-grade Go multi-stage Dockerfile using golang for build and scratch for runtime. Implements static linking and minimal attack surface.
- scripts/scan.sh — Executable script to scan Docker images using Trivy. Exits non-zero on critical/high vulnerabilities. Generates SARIF and JSON reports.
- scripts/analyze.sh — Executable script to analyze Docker image size and layer history. Identifies large layers and suggests optimizations based on best practices.
- validators/spectral.yaml — Spectral ruleset for linting Dockerfiles. Enforces multi-stage builds, USER instruction, HEALTHCHECK, and no dangerous instructions.
- validators/test-lint.sh — Script to run Spectral against a Dockerfile. Exits non-zero if linting rules fail, ensuring compliance before build.
- references/dockerfile-optimization.md — Canonical knowledge on Dockerfile optimization: multi-stage builds, layer caching, .dockerignore, and security hardening techniques.
- references/trivy-security.md — Canonical knowledge on Trivy vulnerability scanning: severity levels, remediation strategies, and integration with CI/CD pipelines.
- examples/worked-example-nodejs.yaml — Worked example for a Node.js application, showing project structure, Dockerfile usage, and build commands.
- examples/worked-example-java.yaml — Worked example for a Java Spring Boot application, showing project structure, Dockerfile usage, and build commands.
Upgrade to Pro and Ship Smarter
Stop shipping bloated, vulnerable images. Stop wasting engineering time on manual optimization. Stop hoping your security team catches what you missed.
Upgrade to Pro to install this skill. Let the agent handle the templates, the scanning, and the linting. You focus on shipping features.
Install the skill and start building smaller, faster, and safer images today.
References
1. Docker Security Cheat Sheet — cheatsheetseries.owasp.org
2. OWASP DevSecOps Guideline (v0.2) — owasp.org
3. Infrastructure as Code Security Cheat Sheet — cheatsheetseries.owasp.org
4. CI/CD Security Cheat Sheet — cheatsheetseries.owasp.org
5. Security of Containers — owasp.org
6. Kubernetes Security Cheat Sheet — cheatsheetseries.owasp.org
7. DockSec (OWASP Incubator Project) — owasp.org
8. Kubernetes Security — owasp.org
Frequently Asked Questions
How do I install Optimizing Docker Images?
Run `npx quanta-skills install optimizing-docker-images` in your terminal. The skill will be installed to ~/.claude/skills/optimizing-docker-images/ and automatically available in Claude Code, Cursor, Copilot, and other AI coding agents.
Is Optimizing Docker Images free?
Optimizing Docker Images is a Pro skill — $29/mo Pro plan. You need a Pro subscription to access this skill. Browse 37,000+ free skills at quantaintelligence.ai/skills.
What AI coding agents work with Optimizing Docker Images?
Optimizing Docker Images works with Claude Code, Cursor, GitHub Copilot, Gemini CLI, Windsurf, Warp, and any AI coding agent that reads skill files. Once installed, the agent automatically gains the expertise defined in the skill.