Frequently Asked Questions

Can’t find your answer here? Check the relevant service page in the sidebar or open the docs search with Ctrl+K / ⌘K.

General

The Coderz Stack is a fully self-hosted, production-ready DevOps infrastructure platform running on a single server (109.199.120.120). It bundles monitoring, logging, orchestration, API management, database, load testing, and Kubernetes — all managed via Docker Compose.
All stack files live under /opt/coderz/:
  • docker-compose.yml — main service definitions
  • configs/ — per-service configuration files
  • /opt/mintlify-docs/ — this documentation site
```bash
cd /opt/coderz

# Start all services
docker compose up -d

# Stop all services
docker compose down

# Restart a single service (e.g. grafana)
docker compose restart grafana
```
```bash
cd /opt/coderz
docker compose ps
```
All containers should show running or healthy. If a container shows restarting, check its logs:
```bash
docker compose logs -f <service-name>
```
```bash
cd /opt/coderz
docker compose pull <service-name>
docker compose up -d <service-name>
```

Access & Credentials

Change the default passwords in a production environment.
```bash
ssh root@109.199.120.120
```
Ensure your SSH key is authorized or use password authentication.
| Field | Value |
|---|---|
| Host | 109.199.120.120 |
| Port | 5433 |
| Database | coderapi |
| Username | coderapi |
| Password | coderapi_2024 |
Connect via pgAdmin at http://109.199.120.120:5080 — both databases are pre-loaded.
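For programmatic access, the parameters above combine into a standard libpq connection string. A minimal Python sketch (the use of psycopg2 is an assumption; any PostgreSQL driver that accepts a libpq DSN works the same way):

```python
# Connection parameters from the table above.
params = {
    "host": "109.199.120.120",
    "port": 5433,
    "dbname": "coderapi",
    "user": "coderapi",
    "password": "coderapi_2024",
}

# Assemble a libpq-style DSN: "host=... port=... dbname=... user=... password=..."
dsn = " ".join(f"{key}={value}" for key, value in params.items())
print(dsn)

# With a driver installed (e.g. psycopg2), connecting would look like:
#   import psycopg2
#   conn = psycopg2.connect(dsn)
```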

Monitoring & Alerting

  • Coderz Stack Home — all-in-one overview (CPU, RAM, Disk, Docker top containers)
  • Server Overview — Node Exporter metrics (CPU, RAM, Disk, Network, Load Average)
  • Docker Containers — cAdvisor per-container metrics
  • .NET API Full Stack — request metrics + PostgreSQL
  • Prefect Flows Overview — flow completions, failures, and live logs
  • Container Logs — Loki log viewer
  • Alerts Status — active alert state panels
  • Kubernetes (k3s) Overview — pod/node/deployment status
| Alert | Threshold | Evaluation Window |
|---|---|---|
| High CPU Usage | > 80% | 5 minutes |
| High RAM Usage | > 85% | 5 minutes |
| High Disk Usage | > 90% | 5 minutes |
| Low Container Count | < 6 containers running | 2 minutes |
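As a rough sketch, the CPU row of that table corresponds to a Prometheus alerting rule like the following. This is illustrative only (rule and group names are made up; the real rules live under /opt/coderz/configs/prometheus/), but the expression is the standard node_exporter idiom for CPU busy percentage:

```yaml
groups:
  - name: coderz-alerts            # illustrative group name
    rules:
      - alert: HighCPUUsage
        # CPU busy % = 100 minus the idle rate, averaged per instance
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m                    # the 5-minute evaluation window
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 80% for 5 minutes on {{ $labels.instance }}"
```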
Alerts are sent by email to aboodm7med1995@gmail.com via Postfix/Gmail relay.
```bash
echo "Test alert email" | mail -s "Coderz Alert Test" aboodm7med1995@gmail.com
```
Check your inbox. If it doesn’t arrive, verify Postfix status:
```bash
systemctl status postfix
journalctl -u postfix -n 50
```
Edit /opt/coderz/configs/prometheus/prometheus.yml, then restart Prometheus:

```bash
docker compose restart prometheus
```
Prometheus scrapes: Node Exporter, cAdvisor, kube-state-metrics, k3s API server, and both APIs.
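A new target is added as another entry under `scrape_configs` in that file. A hedged sketch (the job name and target address are placeholders, not real services in this stack):

```yaml
scrape_configs:
  - job_name: "my-new-service"          # placeholder name
    scrape_interval: 15s
    static_configs:
      - targets: ["my-new-service:8080"]  # host:port exposing /metrics
```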

Logging

The stack runs two parallel log pipelines:
  • Loki + Promtail — lightweight, label-based log querying via Grafana. Best for live tailing and quick searches.
  • Elasticsearch + Logstash + Kibana (ELK) — full-text search, structured analytics, and saved searches. Best for deep investigation.
  1. Open http://109.199.120.120:5601
  2. Go to Discover
  3. Select a data view: filebeat-* (system/Docker logs) or logstash-* (processed logs)
  4. Use KQL to filter — for example: `container.name: "coderz-dotnet-api" and log.level: "error"`
Pre-built saved searches are available: Error & Critical Logs, Docker Container Logs, Warning Logs, and more.
  1. Open Grafana → Explore
  2. Select Loki as the datasource
  3. Use LogQL — for example:
```logql
{container_name="coderz-dotnet-api"} |= "error"
```
The Container Logs dashboard is pre-built for visual log browsing.
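A few more LogQL patterns that work against the same Loki datasource (the container name is reused from the example above; the filters themselves are standard LogQL):

```logql
# Case-insensitive matching via a regex filter
{container_name="coderz-dotnet-api"} |~ "(?i)error|exception"

# Match "error" but exclude health-check noise
{container_name="coderz-dotnet-api"} |= "error" != "/health"

# Count error lines per minute (useful for graph panels)
count_over_time({container_name="coderz-dotnet-api"} |= "error" [1m])
```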
```bash
docker compose logs -f logstash
```
Common causes:
  • Filebeat not shipping to port 5044 — check Filebeat config
  • Elasticsearch not healthy — docker compose ps elasticsearch
  • Pipeline config error — check /opt/coderz/configs/logstash/

APIs

| API | Port | Language | Description |
|---|---|---|---|
| Web API | 8888 | Python | General-purpose REST API |
| .NET API | 5050 | C# / .NET | Full CRUD API with PostgreSQL backend |
Both APIs are fully monitored via Prometheus and logged via Loki and Elasticsearch.
```bash
# Health check
curl http://109.199.120.120:5050/health

# List items
curl http://109.199.120.120:5050/api/items

# Create an item
curl -X POST http://109.199.120.120:5050/api/items \
  -H "Content-Type: application/json" \
  -d '{"name": "test-item", "description": "hello"}'
```
The Nginx-based API Gateway (port 80) handles:
  • Rate limiting — protects backend APIs from abuse
  • Redis caching — caches GET responses to reduce load
  • Routing — forwards requests to Web API and .NET API
  • SSL termination (if configured)
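A simplified sketch of how the first three of those duties are typically wired together in Nginx. This is not the stack's actual config (which lives under /opt/coderz/configs/); upstream names, rate limits, and paths here are illustrative:

```nginx
# Rate limiting: 10 req/s per client IP, 10 MB of state
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

upstream dotnet_api { server coderz-dotnet-api:5050; }  # illustrative upstream

server {
    listen 80;

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;  # rate limiting
        proxy_pass http://dotnet_api;               # routing to the backend
        proxy_set_header Host $host;
    }
}
```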

Database

Use pgAdmin at http://109.199.120.120:5080 — both databases are pre-configured. Or connect via the CLI inside the container:

```bash
docker exec -it coderz-db psql -U coderapi -d coderapi
```

Or from the host:

```bash
psql -h 109.199.120.120 -p 5433 -U coderapi -d coderapi
```
```bash
docker exec coderz-db pg_dump -U coderapi coderapi > /opt/coderz/backups/coderapi_$(date +%F).sql
```
Schedule this via a Prefect flow or a cron job for automated daily backups.
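For the cron route, an entry like the following in root's crontab would run the dump nightly at 02:30 (assuming /opt/coderz/backups/ already exists; note that `%` must be escaped as `\%` inside crontab):

```cron
# m  h  dom mon dow  command
30   2  *   *   *    docker exec coderz-db pg_dump -U coderapi coderapi > /opt/coderz/backups/coderapi_$(date +\%F).sql
```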
pgAdmin takes up to 3 minutes to initialize on first start. Wait and then refresh. If it still fails:

```bash
docker compose logs -f pgadmin
docker compose restart pgadmin
```

Orchestration (Prefect)

| Flow | Schedule | Purpose |
|---|---|---|
| system-health-check | Every 5 min | CPU, RAM, Disk health |
| services-health-check | Every 10 min | Docker service health + email on failure |
| daily-summary-report | Daily at 06:00 UTC | Full stack summary email |
| threshold-alert-check | Every 15 min | Email if CPU/RAM/Disk exceeded |
| weekly-cleanup-report | Sunday at 02:00 UTC | Cleanup + weekly report email |
| k8s-health-check | Every 15 min | k3s pod/node health + email on failure |
| docker-restart-monitor | Every 10 min | Detect container restarts/unhealthy |
  1. Open Prefect UI at http://109.199.120.120:4200
  2. Go to Deployments
  3. Click the deployment → Quick Run
Or via CLI:
```bash
docker exec -it prefect-worker prefect deployment run <deployment-name>
```
All flow definitions live in `/opt/coderz/configs/prefect/flows/sample_flows.py`; the deployment config is `/opt/coderz/configs/prefect/flows/prefect.yaml`.
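The threshold logic behind a flow like threshold-alert-check can be sketched with the standard library alone. This is not the stack's actual flow code: the Prefect decorators and the email step are omitted, and only disk usage is measurable without extra packages (CPU/RAM readings would need something like psutil in the real flow). The thresholds mirror the alert table above:

```python
import shutil

# Percent-used thresholds mirroring the stack's alert rules.
THRESHOLDS = {"cpu": 80.0, "ram": 85.0, "disk": 90.0}

def disk_usage_percent(path="/"):
    """Percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_thresholds(metrics, thresholds=THRESHOLDS):
    """Return the subset of metrics that exceed their threshold."""
    return {
        name: value
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    }

# In the real flow, any breach here would trigger the email notification step.
breaches = check_thresholds({"disk": disk_usage_percent()})
```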

Kubernetes (k3s)

```bash
kubectl get nodes
kubectl get pods --all-namespaces
kubectl get deployments -n coderz
```
| Namespace | Workload | Replicas |
|---|---|---|
| coderz | coderz-web (nginx sample app) | 2 |
| kube-system | kube-state-metrics | 1 |
Prometheus scrapes the cluster via the k3s API server and kube-state-metrics.
```bash
kubectl rollout restart deployment/coderz-web -n coderz
```
```bash
/usr/local/bin/k3s-uninstall.sh
```
This removes the entire k3s cluster and all workloads. This action is not reversible.

Load Testing

Open the k6 Runner UI at http://109.199.120.120:9000, select a scenario, and click Run. Available scenarios:
  • constant — steady fixed load
  • rampup — gradually increasing users
  • spike — sudden traffic burst
  • stress — high load for extended period
  • .NET API specific — dotnet-items, dotnet-crud, dotnet-mixed, dotnet-stress
Results appear live in the k6 Runner UI. You can also view request metrics in the Grafana .NET API Full Stack dashboard in real time during a test run.
The k6 Runner app lives at `/opt/coderz/configs/k6-runner/app.py`.

Troubleshooting

```bash
# Check which containers are unhealthy
docker compose ps

# View recent logs
docker compose logs --tail=100 <service-name>

# Restart the service
docker compose restart <service-name>
```
If the issue persists, check disk space (df -h) and memory (free -h), as resource exhaustion is a common cause.
  1. Verify Prometheus is running: docker compose ps prometheus
  2. Check Prometheus targets: http://109.199.120.120:9090/targets — all should be UP
  3. Verify the Grafana datasource: Grafana → Connections → Data Sources → test each
  4. Check Prometheus logs: docker compose logs prometheus
```bash
# Remove unused Docker images, containers, networks
docker system prune -f

# Remove unused volumes (careful — this deletes data)
docker volume prune -f

# Check disk usage by directory
du -sh /opt/coderz/*
df -h
```
The Mintlify docs run as a systemd service:
```bash
systemctl restart mintlify
systemctl status mintlify

# View live logs
journalctl -u mintlify -f
```
Docs files are at /opt/mintlify-docs/.
Yes. The grafana/loki:latest image has no shell, so CMD-SHELL healthchecks do not work. The Loki healthcheck has been removed from the Compose file. Verify Loki manually from the host:
```bash
curl http://109.199.120.120:3100/ready
```
Should return ready.