Containerised AcelleMail trades a small ergonomic cost (you have to think about volumes, image rebuilds, and patch flows) for two big wins: parity between dev and prod environments, and clean rollback by image tag rather than by manual file restoration. This guide ships a production-ready docker-compose.yml plus the operational recipes — patch upgrade, queue scaling, log shipping — that aren't obvious from the file alone.
The fundamental constraint is that AcelleMail wasn't built container-first. The patch-upgrade flow assumes file mutations on a live filesystem (the `/upgrade/run-file` API replaces files in `/var/www/acellemail` in place). In Docker, this means the AcelleMail code has to live in a named volume rather than be baked into the image — otherwise every patch would be wiped the next time the container is recreated from the image.
The compose file#
Save as `/srv/acellemail/docker-compose.yml`:
```yaml
name: acellemail

services:
  app:
    image: php:8.3-fpm
    restart: unless-stopped
    working_dir: /var/www/acellemail
    volumes:
      - acellemail_code:/var/www/acellemail
      - ./php/php.ini:/usr/local/etc/php/conf.d/zz-acellemail.ini:ro
    depends_on: [mysql, redis]
    networks: [internal]

  nginx:
    image: nginx:1.25-alpine
    restart: unless-stopped
    ports: ["80:80", "443:443"]
    volumes:
      - acellemail_code:/var/www/acellemail:ro
      - ./nginx/acellemail.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro
    depends_on: [app]
    networks: [internal, edge]

  mysql:
    image: mysql:8.0
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: acellemail
      MYSQL_USER: acellemail
      MYSQL_PASSWORD_FILE: /run/secrets/mysql_pw
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/mysql_root_pw
    volumes:
      - mysql_data:/var/lib/mysql
    secrets: [mysql_pw, mysql_root_pw]
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    networks: [internal]

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis_data:/data
    networks: [internal]

  worker:
    image: php:8.3-cli
    restart: unless-stopped
    working_dir: /var/www/acellemail
    command: ["php", "artisan", "queue:work", "--sleep=3", "--tries=3", "--max-time=3600"]
    volumes:
      - acellemail_code:/var/www/acellemail
      - ./php/php.ini:/usr/local/etc/php/conf.d/zz-acellemail.ini:ro
    depends_on: [mysql, redis]
    deploy: { replicas: 2 }
    networks: [internal]

  scheduler:
    image: php:8.3-cli
    restart: unless-stopped
    working_dir: /var/www/acellemail
    entrypoint: ["sh", "-c"]
    command: ["while :; do php artisan schedule:run; sleep 60; done"]
    volumes:
      - acellemail_code:/var/www/acellemail
    depends_on: [mysql, redis]
    networks: [internal]

volumes: { acellemail_code: {}, mysql_data: {}, redis_data: {} }

networks: { internal: {}, edge: {} }

secrets:
  mysql_pw: { file: ./secrets/mysql_pw.txt }
  mysql_root_pw: { file: ./secrets/mysql_root_pw.txt }
```
Key design decisions, each one a battle scar:

- **Code lives in a named volume.** `acellemail_code` is mounted read-write into `app` and `worker`, read-only into `nginx`. The first `docker compose up` populates the volume from the unzipped install bundle (next section); patches mutate it in place.
- **Workers and scheduler run as separate services.** Mixing the queue worker into the FPM container is tempting and broken — when FPM restarts (image upgrade, OOM, supervisord glitch) the worker dies with it, in-flight queue jobs are orphaned, and the scheduler stops firing. Separate services restart independently.
- **Two worker replicas.** Same Small-tier sizing as bare metal. Bump to 4 at the Medium tier.
- **MySQL secrets via files, not env vars.** Compose has supported file-backed secrets for years; with them the password never shows up in `docker inspect` output or the container environment.
- **Nginx mounts the code read-only.** If PHP-FPM is compromised it can write to the volume, but the public-facing nginx cannot.
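One refinement worth considering that is *not* in the file above (so treat it as an optional sketch): give mysql and redis healthchecks and gate the PHP services on them, so a cold `docker compose up` doesn't race MySQL's slow first-boot initialization. The probe commands are the conventional ones for the official images; note `mysqladmin ping` reports the server alive even without credentials.

```yaml
services:
  mysql:
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "--silent"]
      interval: 10s
      timeout: 5s
      retries: 10
  redis:
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5
  app:
    depends_on:
      mysql: { condition: service_healthy }
      redis: { condition: service_healthy }
```

Apply the same `depends_on` map to `worker` and `scheduler` if you add this.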
Initial install (one-time)#
```shell
mkdir -p /srv/acellemail/{php,nginx,certbot/conf,certbot/www,secrets}
cd /srv/acellemail

# Generate secrets
openssl rand -base64 32 > secrets/mysql_root_pw.txt
openssl rand -base64 32 > secrets/mysql_pw.txt
chmod 600 secrets/*

# php.ini overrides
cat > php/php.ini <<'INI'
memory_limit = 512M
upload_max_filesize = 300M
post_max_size = 300M
max_execution_time = 300
INI

# nginx vhost (HTTP only; certbot adds HTTPS later)
cat > nginx/acellemail.conf <<'NGINX'
server {
    listen 80;
    server_name mail.example.com;
    root /var/www/acellemail/public;
    index index.php index.html;
    client_max_body_size 300M;

    location ^~ /.well-known/acme-challenge/ { root /var/www/certbot; }
    location / { try_files $uri $uri/ /index.php?$query_string; }

    location ~ \.php$ {
        fastcgi_pass app:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_read_timeout 300;
    }
}
NGINX

# Create the named volume and populate it from the install bundle
docker volume create acellemail_code
docker run --rm -v acellemail_code:/dst -v "$PWD":/src alpine \
  sh -c 'cd /dst && unzip -q /src/acellemail-latest.zip && chown -R 33:33 .'

docker compose up -d
```
The `chown -R 33:33` is the critical one-time fix. The `php:8.3-fpm` image runs PHP as UID 33 (`www-data`); without that ownership, AcelleMail can't write logs or cached views and the web installer loops forever.
Add HTTPS#
Run certbot in a one-shot container against the running nginx:
```shell
docker run --rm -it \
  -v /srv/acellemail/certbot/conf:/etc/letsencrypt \
  -v /srv/acellemail/certbot/www:/var/www/certbot \
  certbot/certbot certonly --webroot -w /var/www/certbot \
  -d mail.example.com --email you@example.com --agree-tos --non-interactive
```
Then add the HTTPS server block to `nginx/acellemail.conf` (mirror the HTTP block, listening on 443 with `ssl_certificate` paths under `/etc/letsencrypt/live/...`) and reload: `docker compose exec nginx nginx -s reload`.
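That mirrored block looks roughly like this — a sketch, assuming the `mail.example.com` certificate from the certbot run above; add your preferred `ssl_protocols`/`ssl_ciphers` hardening on top:

```nginx
server {
    listen 443 ssl;
    server_name mail.example.com;
    root /var/www/acellemail/public;
    index index.php index.html;
    client_max_body_size 300M;

    ssl_certificate     /etc/letsencrypt/live/mail.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mail.example.com/privkey.pem;

    location / { try_files $uri $uri/ /index.php?$query_string; }

    location ~ \.php$ {
        fastcgi_pass app:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_read_timeout 300;
    }
}
```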
Weekly renewal cron on the host:
```shell
0 3 * * 0 docker run --rm -v /srv/acellemail/certbot/conf:/etc/letsencrypt -v /srv/acellemail/certbot/www:/var/www/certbot certbot/certbot renew --quiet && docker compose -f /srv/acellemail/docker-compose.yml exec -T nginx nginx -s reload
```

(Note the `-T` on `exec` — cron has no TTY.)
Patch upgrade workflow (the docker quirk)#
API mode is the cleaner choice with Docker — it writes to the named volume and the change is live without a container rebuild. From a host with curl:
```shell
TOKEN="..."
HOST="https://mail.example.com"

curl --max-time 900 -X POST "$HOST/api/v1/upgrade/run-file" \
  -H "Authorization: Bearer $TOKEN" \
  -F "patch=@/path/to/patch-latest.bin"

# Run the finalize steps from inside the app container so caches + migrations refresh:
docker compose exec app php artisan migrate --force
docker compose exec app php artisan view:clear
docker compose exec app php artisan config:clear

docker compose restart app worker scheduler
```
The Acelle support handbook documents this restart requirement explicitly: opcache and Laravel's cached config will keep serving the old code path until the PHP processes restart. Always restart `app`, `worker`, and `scheduler` together after any patch — partial restarts cause hard-to-debug version-mismatch behavior.
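For context, these are the opcache knobs in play — many production `php.ini` files pin them for performance, which is exactly what makes the restart mandatory. A sketch of standard PHP opcache directives you could append to `php/php.ini`:

```ini
; Skip file mtime checks entirely: faster, but patched files stay invisible
; until PHP-FPM restarts — hence the restart-everything rule above.
opcache.validate_timestamps = 0

; Alternative: revalidate every 2s so patches load without a restart
; (Laravel's config cache still needs config:clear either way).
; opcache.validate_timestamps = 1
; opcache.revalidate_freq = 2
```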
Logging#
Container stdout/stderr is captured by Docker's logging driver. For production, configure the `local` driver with rotation at a minimum (the stanza goes under each service):

```yaml
logging:
  driver: local
  options: { max-size: "20m", max-file: "5" }
```

For real centralized logging, switch the driver to `loki` or `gelf`, or ship to a Vector/Fluent Bit sidecar. The KB's log-aggregation guide covers the patterns.
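Rather than repeating the logging stanza five times, Compose's YAML anchors and `x-` extension fields let you define it once — a sketch:

```yaml
x-logging: &default-logging
  driver: local
  options: { max-size: "20m", max-file: "5" }

services:
  app:
    logging: *default-logging
  worker:
    logging: *default-logging
  # ...same for nginx, mysql, redis, scheduler
```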
Backups#
```shell
# Daily MySQL dump (host cron; credentials come from the mounted secret —
# crontab entries cannot span lines, so each job is one long line):
0 2 * * * docker compose -f /srv/acellemail/docker-compose.yml exec -T mysql sh -c 'exec mysqldump -uroot -p"$(cat /run/secrets/mysql_root_pw)" --single-transaction --routines acellemail' > /srv/backups/acellemail-$(date +\%F).sql

# Weekly snapshot of the code volume:
0 3 * * 0 docker run --rm -v acellemail_code:/data -v /srv/backups:/backup alpine tar czf /backup/acellemail_code-$(date +\%F).tar.gz -C /data .
```
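Backups nobody verifies are theater. Below is a small hypothetical helper (`check_latest_dump` is my name, not anything AcelleMail ships) you could put in a host script that cron runs after the dump, to alert when the newest dump is missing or truncated:

```shell
# check_latest_dump DIR MIN_BYTES — succeed only if the newest
# acellemail-*.sql file in DIR is at least MIN_BYTES bytes.
check_latest_dump() {
  dir="$1"; min="$2"
  latest=$(ls -t "$dir"/acellemail-*.sql 2>/dev/null | head -n 1)
  [ -n "$latest" ] && [ "$(wc -c < "$latest")" -ge "$min" ]
}

# Usage from a wrapper script cron runs at, say, 02:30 (after the dump):
#   check_latest_dump /srv/backups 1024 || echo "acellemail backup missing" \
#     | mail -s "backup alert" you@example.com
```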
FAQ#
Can I bake the AcelleMail code into a custom image?#
Technically yes, but you lose the API-driven patch upgrade flow — every patch becomes a rebuild + redeploy. The volume-based pattern in this guide keeps the upgrade flow simple.
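For completeness, the baked-image variant would look roughly like this — a sketch, assuming the install bundle is unzipped into `acellemail/` next to the Dockerfile, and not the pattern this guide recommends:

```dockerfile
FROM php:8.3-fpm
COPY --chown=www-data:www-data acellemail/ /var/www/acellemail/
WORKDIR /var/www/acellemail
# Every AcelleMail patch now means rebuilding and re-tagging this image
# instead of one call to /api/v1/upgrade/run-file.
```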
What about Kubernetes?#
Same architecture, more YAML. Use a PersistentVolumeClaim for the code volume, separate Deployment resources for app + worker + scheduler, and an Ingress for nginx. Avoid running multiple app replicas against the same code volume — Laravel's session storage assumes a single writer unless you configure Redis-backed sessions.
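AcelleMail is Laravel-based (hence the `artisan` commands throughout), so Redis-backed sessions come down to standard Laravel `.env` keys — a sketch, assuming the stock key names apply to your AcelleMail version:

```ini
# .env inside the code volume
SESSION_DRIVER=redis
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=redis   # the compose/k8s service name
REDIS_PORT=6379
```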
Why two worker replicas?#
Throughput. One worker handles one queue job at a time. AcelleMail's send-campaign jobs spawn child jobs per chunk; two workers process them concurrently without overwhelming the upstream sending API.
Does this work on Docker Desktop?#
For development, yes. For production, no — Docker Desktop's networking and volume performance are not production-grade. Use Linux with Docker Engine, or a managed container service such as ECS or GKE.
Network policy — restricting east-west traffic#
The compose above puts app, worker, scheduler, mysql, redis on a shared internal network and nginx on both internal and edge. That's the minimum useful isolation. To go further, split into per-service networks:
```yaml
networks:
  app-db: {}   # app, worker, scheduler <-> mysql, redis
  edge: {}     # nginx <-> app
```

Then mysql attaches only to `app-db`, nginx only to `edge`, and app to both. A compromised nginx cannot query MySQL directly — it has to go through PHP-FPM. This is the "least privilege at the network layer" pattern; it's worth doing on multi-tenant hosts.
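A sketch of the full attachment matrix under that split (service bodies elided — only the `networks:` keys change from the compose file above):

```yaml
services:
  nginx:
    networks: [edge]
  app:
    networks: [edge, app-db]   # reachable from nginx, can reach the databases
  worker:
    networks: [app-db]
  scheduler:
    networks: [app-db]
  mysql:
    networks: [app-db]
  redis:
    networks: [app-db]

networks:
  app-db: {}
  edge: {}
```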
Secret rotation#
The compose secrets are file-backed, but note that the mysql image only reads `MYSQL_PASSWORD_FILE` on first initialization — rotating means changing the password everywhere it lives:

```shell
NEW_PW=$(openssl rand -base64 32)

# 1. Apply it inside MySQL while the old credentials still work
echo "ALTER USER 'acellemail'@'%' IDENTIFIED BY '$NEW_PW';" | \
  docker compose exec -T mysql sh -c 'exec mysql -uroot -p"$(cat /run/secrets/mysql_root_pw)"'

# 2. Update the secret file to match
printf '%s\n' "$NEW_PW" > secrets/mysql_pw.txt
chmod 600 secrets/mysql_pw.txt
```

The app, worker, and scheduler don't mount the secret — AcelleMail reads its DB password from `.env` in the code volume — so update `DB_PASSWORD` there as well, then `docker compose restart app worker scheduler`.

Quarterly is a reasonable cadence. If your compliance regime demands shorter, integrate HashiCorp Vault or AWS Secrets Manager via a sidecar.