Part 4: CI/CD Deployment to Pi2 with Jenkins

🧭 Series Navigation:

⬅️ Introduce AdventureTube Microservice Hub
⬅️ Part 1: Architecture Overview & Design Patterns
⬅️ Part 2: Development Environment & Debugging
⬅️ Part 3: Testing Strategies
Part 4: CI/CD Deployment with Jenkins (Current)


Production Deployment Pipeline: Git to Docker with Jenkins Automation

From code commit to production in minutes – here’s how enterprise teams deploy

We’ve covered architecture design, efficient development workflows, and comprehensive testing. Now comes the final piece: getting your microservices from development to production safely and automatically. Today I’ll show you my complete CI/CD pipeline that takes AdventureTube from Git commit to running containers on Raspberry Pi infrastructure.

Why Automated Deployment Matters

Manual deployment is the enemy of modern development because:

  • Human Error: Manual steps inevitably lead to mistakes
  • Inconsistency: Different environments, different results
  • Slow Feedback: Issues discovered too late in the process
  • Fear of Deployment: Complex processes discourage frequent releases

My solution? A fully automated pipeline that builds confidence and enables rapid iteration.

Deployment Architecture Overview

Here’s the complete flow from development to production:

Pipeline Visualization

Local Development (Mac) 
    ↓ git push
Git Repository (GitHub)
    ↓ webhook trigger
Jenkins (Pi2) 
    ↓ build & test
Docker Images
    ↓ deploy
Container Deployment (Pi2)
    ↓ register
Eureka Service Discovery

Environment Strategy

I use a three-environment approach that maximizes efficiency while minimizing costs:

  • Development: Mac local services + Pi2 infrastructure layer
  • QA: Pi2 containerized services + infrastructure
  • Production: Pi1 (planned) – full containerized deployment

This setup gives me production-like testing without cloud costs during development.

Git Workflow & Branch Strategy

Effective CI/CD starts with a solid Git workflow. Here’s my branching strategy:

Branch Management

  • main: Production-ready code (auto-deploy to production)
  • develop: Integration branch (auto-deploy to QA)
  • feature/*: Development branches (manual testing)
  • hotfix/*: Production fixes (fast-track to main)

Typical Development Workflow

# Start new feature
git checkout -b feature/user-profile-api

# Develop and test locally using Part 2 setup
# Make changes, test with local development environment

# Commit and push
git add .
git commit -m "Add user profile management endpoints"
git push origin feature/user-profile-api

# Create pull request to develop
# After review, merge triggers Jenkins build

# Merge to develop → Jenkins builds and deploys to Pi2 QA
# After QA testing, merge to main → Jenkins deploys to production

Webhook Configuration

GitHub webhooks automatically trigger Jenkins builds:

# GitHub webhook configuration
Payload URL: http://192.168.1.105:8080/github-webhook/
Content type: application/json
Events: push, pull_request
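
If you prefer to script this step, the same webhook can be registered through the GitHub REST API. This is a minimal sketch, assuming a personal access token in GITHUB_TOKEN and placeholder owner/repo values; note that GitHub must be able to reach the Jenkins URL (for example via port forwarding or a tunnel) for the hook to fire.

# Register the webhook via the GitHub REST API (sketch; replace <owner>/<repo> and the token)
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/<owner>/<repo>/hooks \
  -d '{
    "name": "web",
    "active": true,
    "events": ["push", "pull_request"],
    "config": {
      "url": "http://192.168.1.105:8080/github-webhook/",
      "content_type": "json"
    }
  }'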

Jenkins Pipeline Configuration

Jenkins orchestrates the entire deployment process. Here’s my complete pipeline setup:

Jenkinsfile Structure

pipeline {
    agent any
    
    environment {
        DOCKER_REGISTRY = 'localhost:5000'
        SERVICE_NAME = 'auth-service'
        COMPOSE_FILE = 'docker-compose-adventuretubes.yml'
    }
    
    stages {
        stage('Checkout') {
            steps {
                checkout scm
                script {
                    env.BUILD_VERSION = sh(
                        script: "echo ${env.BUILD_NUMBER}-${env.GIT_COMMIT.take(7)}",
                        returnStdout: true
                    ).trim()
                }
            }
        }
        
        stage('Build') {
            steps {
                sh '''
                    echo "Building with Maven..."
                    mvn clean compile -DskipTests
                    mvn package -DskipTests
                '''
            }
        }
        
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    steps {
                        sh 'mvn test'
                    }
                    post {
                        always {
                            junit 'target/surefire-reports/**/*.xml'
                        }
                    }
                }
                stage('Integration Tests') {
                    steps {
                        sh 'mvn verify -Pintegration'
                    }
                    post {
                        always {
                            junit 'target/failsafe-reports/**/*.xml'
                        }
                    }
                }
            }
        }
        
        stage('Docker Build') {
            steps {
                script {
                    def image = docker.build("${DOCKER_REGISTRY}/${SERVICE_NAME}:${BUILD_VERSION}")
                    image.push()
                    image.push("latest")
                }
            }
        }
        
        stage('Deploy to QA') {
            when {
                branch 'develop'
            }
            steps {
                sh '''
                    echo "Deploying to QA environment..."
                    export IMAGE_TAG=${BUILD_VERSION}
                    docker-compose -f ${COMPOSE_FILE} up -d ${SERVICE_NAME}
                '''
            }
        }
        
        stage('Production Deployment') {
            when {
                branch 'main'
            }
            steps {
                script {
                    timeout(time: 5, unit: 'MINUTES') {
                        input message: 'Deploy to production?', 
                              ok: 'Deploy',
                              submitterParameter: 'DEPLOYER'
                    }
                }
                sh '''
                    echo "Deploying to production..."
                    export IMAGE_TAG=${BUILD_VERSION}
                    docker-compose -f ${COMPOSE_FILE} up -d ${SERVICE_NAME}
                '''
            }
        }
        
        stage('Health Check') {
            steps {
                script {
                    def maxRetries = 30
                    def retryCount = 0
                    def healthCheckPassed = false
                    
                    while (retryCount < maxRetries && !healthCheckPassed) {
                        try {
                            sh '''
                                curl -f http://192.168.1.112:8010/actuator/health
                                curl -f http://192.168.1.105:8761/eureka/apps/AUTH-SERVICE
                            '''
                            healthCheckPassed = true
                        } catch (Exception e) {
                            retryCount++
                            sleep(10)
                        }
                    }
                    
                    if (!healthCheckPassed) {
                        error("Health check failed after ${maxRetries} attempts")
                    }
                }
            }
        }
    }
    
    post {
        always {
            cleanWs()
        }
        success {
            echo "Deployment successful! Service is healthy and registered with Eureka."
        }
        failure {
            echo "Deployment failed. Check logs for details."
            // Add notification to Slack or email
        }
    }
}

Pipeline Stage Breakdown

1. Checkout: Clone repository and determine build version
2. Build: Compile Java code and create JAR files
3. Test: Run unit and integration tests in parallel
4. Docker Build: Create container images and push to registry
5. Deploy: Update containers based on branch strategy
6. Health Check: Verify service health and Eureka registration

Docker Image Building Process

Efficient Docker images are crucial for fast deployments. Here’s my multi-stage Dockerfile:

Optimized Multi-Stage Dockerfile

# Build stage
FROM maven:3.8-openjdk-17-slim AS build
WORKDIR /app

# Copy pom.xml first and pre-fetch dependencies (for layer caching)
COPY pom.xml .
RUN mvn dependency:go-offline -B

# Copy sources and build the application
COPY src ./src
RUN mvn clean package -DskipTests

# Runtime stage
FROM openjdk:17-jdk-alpine
VOLUME /tmp

# Install curl for the health check below (not included in the Alpine base image)
RUN apk add --no-cache curl

# Create non-root user for security
RUN addgroup -g 1001 -S appuser && \
    adduser -u 1001 -S appuser -G appuser

# Copy JAR from build stage
COPY --from=build /app/target/*.jar app.jar

# Switch to non-root user
USER appuser

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:8010/actuator/health || exit 1

EXPOSE 8010
ENTRYPOINT ["java", "-jar", "/app.jar"]

Build Optimization Strategies

  • Layer Caching: Copy dependencies before source code
  • Multi-stage Builds: Separate build and runtime environments
  • Minimal Base Images: Alpine Linux for smaller images
  • Security: Non-root user for container execution
  • Health Checks: Built-in container health monitoring

Registry Management

I run a local Docker registry on Pi2 for fast image storage:

# Start local registry
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Build and push images
docker build -t localhost:5000/auth-service:latest .
docker push localhost:5000/auth-service:latest
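
To confirm images actually landed in the registry, the registry's HTTP API can be queried directly. A quick sanity check (the sample responses show the expected shape, not real output):

# List repositories and tags stored in the local registry
curl http://localhost:5000/v2/_catalog
# e.g. {"repositories":["auth-service"]}

curl http://localhost:5000/v2/auth-service/tags/list
# e.g. {"name":"auth-service","tags":["latest","42-a1b2c3d"]}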

Deployment to Pi2 Environment

The deployment process updates running containers with zero downtime:

Container Orchestration Strategy

# docker-compose-adventuretubes.yml
version: '3.8'

services:
  auth-service:
    image: localhost:5000/auth-service:${IMAGE_TAG:-latest}
    container_name: adventuretube-auth
    ports:
      - "8010:8010"
    environment:
      - SPRING_PROFILES_ACTIVE=pi2
      - CONFIG_SERVER_URL=http://192.168.1.105:9297
      - EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://192.168.1.105:8761/eureka
    # postgres, eureka-server and config-server are defined in the same compose file (omitted here for brevity)
    depends_on:
      - postgres
      - eureka-server
      - config-server
    networks:
      - adventuretube-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8010/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

networks:
  adventuretube-network:
    external: true
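
Because the network is declared external, Compose expects it to already exist on Pi2. A one-time setup and a typical deploy command, using a hypothetical build tag, look like this:

# One-time setup on Pi2: create the external network before the first deploy
docker network create adventuretube-network

# Deploy (or update) the service with a specific image tag
export IMAGE_TAG=42-a1b2c3d
docker-compose -f docker-compose-adventuretubes.yml up -d auth-service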

Zero-Downtime Deployment

My deployment strategy is designed to keep the service available while it updates; the steps are below, followed by a minimal script sketch:

  1. Health Check: Verify current service is healthy
  2. Image Pull: Download new container image
  3. Rolling Update: Start new container, stop old one
  4. Verification: Confirm new service is healthy
  5. Cleanup: Remove old container and images
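
Here is a minimal sketch of those five steps as a script (a hypothetical rolling-update.sh, assuming IMAGE_TAG is already exported). One caveat: with a single host port mapping, docker-compose up -d recreates the container in place, so there is a brief cutover while the new container starts; true zero-downtime needs a second instance behind a load balancer.

#!/bin/bash
# rolling-update.sh (sketch): implements the five steps above

# 1. Verify the current service is healthy before touching it
curl -f http://localhost:8010/actuator/health || { echo "Current service unhealthy, aborting"; exit 1; }

# 2. Pull the new image so the switch-over window stays short
docker-compose -f docker-compose-adventuretubes.yml pull auth-service

# 3. Recreate the container with the new image
docker-compose -f docker-compose-adventuretubes.yml up -d auth-service

# 4. Wait until the new container reports healthy
until curl -sf http://localhost:8010/actuator/health > /dev/null; do
    echo "Waiting for auth-service to become healthy..."
    sleep 5
done

# 5. Clean up dangling images left behind by the update
docker image prune -f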

Environment Configuration

Pi2-specific configuration through environment variables:

# Pi2 environment variables
SPRING_PROFILES_ACTIVE=pi2
CONFIG_SERVER_URL=http://192.168.1.105:9297
POSTGRES_HOST=//adventuretube.net:5432/adventuretube
EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=http://192.168.1.105:8761/eureka
LOGGING_LEVEL_COM_ADVENTURETUBE=DEBUG
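
A quick way to confirm these values actually reached the running container:

# Inspect the Pi2-specific settings inside the container
docker exec adventuretube-auth env | grep -E 'SPRING_PROFILES_ACTIVE|CONFIG_SERVER_URL|EUREKA|POSTGRES'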

Monitoring & Verification

After deployment, comprehensive monitoring ensures everything works correctly:

Post-Deployment Verification Steps

  1. Container Status: Verify container is running and healthy
  2. Service Registration: Check Eureka service registry
  3. API Endpoints: Test critical endpoints
  4. Database Connectivity: Verify database connections
  5. Inter-service Communication: Test service-to-service calls

Monitoring Tools

Portainer Dashboard: Container management and monitoring

# Access Portainer at http://192.168.1.105:9000
# Monitor:
# - Container status and resource usage
# - Log aggregation and analysis
# - Network connectivity
# - Volume management

Eureka Service Registry: Service discovery monitoring

# Access Eureka at http://192.168.1.105:8761
# Verify:
# - Service registration status
# - Instance health
# - Load balancing configuration
# - Service dependencies

Application Health Endpoints: Spring Boot Actuator monitoring

# Health check endpoints
curl http://192.168.1.112:8010/actuator/health
curl http://192.168.1.112:8010/actuator/metrics
curl http://192.168.1.112:8010/actuator/info

Automated Verification Script

#!/bin/bash
# verify-deployment.sh

SERVICE_NAME="auth-service"
SERVICE_PORT="8010"
SERVICE_HOST="192.168.1.112"

echo "Verifying $SERVICE_NAME deployment..."

# Check container status
if docker ps | grep -q $SERVICE_NAME; then
    echo "✅ Container is running"
else
    echo "❌ Container is not running"
    exit 1
fi

# Check health endpoint
if curl -f http://$SERVICE_HOST:$SERVICE_PORT/actuator/health &>/dev/null; then
    echo "✅ Health endpoint responding"
else
    echo "❌ Health endpoint not responding"
    exit 1
fi

# Check Eureka registration
if curl -f http://192.168.1.105:8761/eureka/apps/AUTH-SERVICE &>/dev/null; then
    echo "✅ Service registered with Eureka"
else
    echo "❌ Service not registered with Eureka"
    exit 1
fi

# Test API endpoint
if curl -f http://$SERVICE_HOST:$SERVICE_PORT/api/auth/health &>/dev/null; then
    echo "✅ API endpoints responding"
else
    echo "❌ API endpoints not responding"
    exit 1
fi

echo "🎉 Deployment verification completed successfully!"

Rollback & Recovery Strategies

When deployments go wrong, quick recovery is essential:

Automated Rollback Triggers

  • Health Check Failures: Service doesn’t respond within timeout
  • Eureka Registration Failure: Service doesn’t register
  • Database Connection Issues: Can’t connect to PostgreSQL
  • Critical API Failures: Essential endpoints return errors

Rollback Procedure

#!/bin/bash
# Automated rollback script
PREVIOUS_IMAGE_TAG=$1

if [ -z "$PREVIOUS_IMAGE_TAG" ]; then
    echo "Usage: $0 <previous-image-tag>"
    exit 1
fi

echo "Rolling back to previous version: $PREVIOUS_IMAGE_TAG"

# Stop current container
docker-compose -f docker-compose-adventuretubes.yml stop auth-service

# Update image tag to previous version
export IMAGE_TAG=$PREVIOUS_IMAGE_TAG

# Start with previous image
docker-compose -f docker-compose-adventuretubes.yml up -d auth-service

# Verify rollback success
./verify-deployment.sh

echo "Rollback completed successfully"

Data Consistency Management

  • Database Migrations: Backward-compatible schema changes
  • Configuration Versioning: Config server maintains version history
  • Service Compatibility: API versioning for breaking changes

Production Readiness Considerations

Preparing for production scale requires additional considerations:

Scalability Preparation

  • Horizontal Scaling: Multiple container instances behind load balancer
  • Database Connection Pooling: HikariCP configuration optimization
  • Caching Strategies: Redis for session and data caching
  • Resource Limits: Container CPU and memory constraints
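
For the resource-limit item above, limits can be applied to a running container as a quick experiment (illustrative values); for a setting that survives container recreation, the same limits belong in the Compose file:

# Cap the auth-service container's CPU and memory (illustrative values)
docker update --cpus="1.0" --memory="512m" --memory-swap="512m" adventuretube-auth

# Confirm the limits took effect
docker inspect --format 'CPUs={{.HostConfig.NanoCpus}} Memory={{.HostConfig.Memory}}' adventuretube-auth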

Security Implementation

  • Container Security: Non-root users and minimal base images
  • Network Isolation: Service-specific Docker networks
  • Secret Management: External secret stores for sensitive data
  • SSL/TLS: HTTPS termination at load balancer

Monitoring & Alerting

  • Log Aggregation: Centralized logging with ELK stack
  • Metrics Collection: Prometheus and Grafana dashboards
  • Alert Configuration: Slack/email notifications for failures
  • Performance Monitoring: APM tools for request tracing
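
To make the alert-configuration item above concrete, the Jenkinsfile's post { failure { ... } } block can post to a Slack incoming webhook. A minimal sketch, assuming the webhook URL is available to the job as SLACK_WEBHOOK_URL:

# Notify Slack when a deployment fails (run from the Jenkinsfile failure block)
curl -X POST -H 'Content-type: application/json' \
  --data "{\"text\":\"Deployment of ${SERVICE_NAME} build ${BUILD_VERSION} failed; check Jenkins logs\"}" \
  "$SLACK_WEBHOOK_URL"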

Complete Journey Recap

Congratulations! You’ve now seen the complete AdventureTube microservice development lifecycle:

What We’ve Accomplished

  • Part 1: Designed a scalable three-layer microservice architecture
  • Part 2: Created an efficient hybrid development environment
  • Part 3: Implemented comprehensive testing strategies
  • Part 4: Built a complete CI/CD pipeline with automated deployment

Key Achievements

  • 🏗️ Enterprise Architecture: Production-ready microservice design
  • ⚡ Development Efficiency: Fast iteration without Docker rebuilds
  • 🧪 Quality Assurance: Multi-layered testing approach
  • 🚀 Automated Deployment: Git-to-production pipeline
  • 💰 Cost Optimization: Raspberry Pi infrastructure for development

Real-World Application

This isn’t just a learning exercise. The AdventureTube architecture and workflows I’ve shared are:

  • Production-Tested: Running real applications with real users
  • Scalable: Ready for horizontal scaling and increased load
  • Maintainable: Clean separation of concerns and automated processes
  • Cost-Effective: Minimal infrastructure costs during development

Next Steps & Advanced Topics

While this series covers the fundamentals, there are advanced topics worth exploring:

  • Service Mesh: Istio for advanced traffic management
  • Event Sourcing: Event-driven architecture patterns
  • Distributed Tracing: Request tracking across services
  • Chaos Engineering: Testing system resilience
  • Multi-Region Deployment: Geographic distribution strategies

Your Journey Continues

The microservice landscape is constantly evolving. Here’s how to keep learning:

  • Experiment: Try implementing these patterns in your own projects
  • Adapt: Modify the architecture to fit your specific needs
  • Share: Document your experiences and lessons learned
  • Connect: Join the community discussions in the comments

🧭 Series Navigation:

⬅️ Introduce AdventureTube Microservice Hub
⬅️ Part 1: Architecture Overview & Design Patterns
⬅️ Part 2: Development Environment & Debugging
⬅️ Part 3: Testing Strategies
Part 4: CI/CD Deployment with Jenkins (Current)

Thank You!

Thank you for joining me on this comprehensive journey through modern microservice development. I hope these insights help you build better, more efficient systems.

I’d love to hear about your experiences implementing these patterns! Please share your questions, challenges, and successes in the comments below.

Happy coding, and may your deployments always be successful! 🎉


Tags: #CI/CD #Jenkins #Docker #Microservices #DevOps #Deployment

Categories: BACKEND(spring-microservice), DevOps
