Note: The Spotguides feature mentioned in this post is outdated and no longer available. If you are interested in a similar feature, contact us for details.

This post showcases how to enable a simple Spring Boot application for the Banzai Cloud CI/CD flow, build and save the necessary artifacts, and deploy the application to a Kubernetes cluster. We have already posted about our CI/CD flow several times and have set up a few example projects to illustrate how it works; this time we'll show you how to use it with an arbitrary Spring Boot application.
To do that we've chosen this Spring Boot example project.
Note: in order to follow along, you'll need a Pipeline Control Plane running on a cloud provider. Check this post and this post for information on how to launch a control plane on one of the supported providers, such as AWS, Google Cloud or Azure, or BYOC. You'll also need a dedicated S3 bucket to store the application's artifacts (the Spring Boot application archive).
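If you don't have such a bucket yet, one way to create it is with the AWS CLI; the bucket name below is just a placeholder, and the region matches the one used in the pipeline descriptor later in this post:

```bash
# Create a dedicated bucket for the build artifacts (placeholder name)
aws s3 mb s3://my-spring-artifacts --region eu-west-1
```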
CI/CD series:

- CI/CD flow for Zeppelin notebooks
- CI/CD for Kubernetes, through a Spring Boot example
- Deploy Node.js applications to Kubernetes
The desired Spring Boot deployment looks like this:

First, check out the example project:

```bash
git clone git@github.com:spring-guides/gs-spring-boot.git
```
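Before adding the CI/CD descriptor, you can optionally sanity-check the project locally; the commands below mirror what the `remote_build` step will run, and the jar path comes from the pipeline descriptor shown later:

```bash
# Build the example exactly as the remote_build step does
mvn -f complete/pom.xml -DskipTests clean package

# Run it locally to verify it starts (Ctrl+C to stop)
java -jar complete/target/gs-spring-boot-0.1.0.jar
```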
Create the flow descriptor file in the root folder of the freshly checked out project:
Note: the name of the file must be .pipeline.yml
```bash
cat << EOF > .pipeline.yml
pipeline:
  create_cluster:
    image: banzaicloud/plugin-pipeline-client:0.3.0
    cluster_name: "[[cluster-name]]"
    cluster_provider: "gcloud"
    google_project: "[[google-project-id]]"
    secrets: [plugin_endpoint, plugin_token]

  remote_checkout:
    image: banzaicloud/plugin-k8s-proxy:latest
    original_image: plugins/git

  remote_build:
    image: banzaicloud/plugin-k8s-proxy:latest
    original_image: maven:3.5-jdk-8
    original_commands:
      - mvn -f complete/pom.xml -DskipTests clean package

  remote_publish_s3:
    image: banzaicloud/plugin-k8s-proxy:latest
    original_image: plugins/s3
    bucket: [[s3-bucket]]
    source: complete/target/gs-spring-boot-0.1.0.jar
    strip_prefix: complete/target
    region: eu-west-1
    acl: public-read
    secrets: [plugin_access_key, plugin_secret_key]

  delete_app:
    image: banzaicloud/plugin-pipeline-client:0.3.0
    deployment_name: "banzaicloud-stable/springboot"
    deployment_release_name: "springboot"
    deployment_state: "deleted"
    secrets: [plugin_endpoint, plugin_token]

  deploy_app:
    image: banzaicloud/plugin-pipeline-client:0.3.0
    deployment_name: "banzaicloud-stable/springboot"
    deployment_release_name: "springboot"
    deployment_values:
      artifactUrl: "https://s3-eu-west-1.amazonaws.com/[[s3-bucket]]/gs-spring-boot-0.1.0.jar"
      # env:
      #   # Java options
      #   - name: JAVA_OPTS
      #     value: "-Dserver.port=8080"
      #   # Application arguments
      #   - name: ARGS
      #     value: ""
    secrets: [plugin_endpoint, plugin_token]
EOF
```
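With the descriptor in place, commit and push it to the repository that is hooked up to the CI/CD flow; the flow is triggered by the push. A minimal sketch (the commit message is just an example):

```bash
git add .pipeline.yml
git commit -m "Add Banzai Cloud CI/CD descriptor"
git push
```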
The .pipeline.yml CI/CD descriptor explained

The CI/CD descriptor, `.pipeline.yml`, lists the steps that drive the process, from building the source to deploying the application to a Kubernetes cluster.
Every step runs in a separate container (those prefixed with `remote_` run in the Kubernetes cluster). Subsequent containers share a persistent volume, which is created for every iteration/build.
The name of each step should be self-explanatory; steps can be named to most accurately describe what they do:

- `create_cluster`: creates the Kubernetes cluster the application will be deployed to. To have the cluster deleted at the end of the flow, add a `cluster_state: deleted` line to the block (see the snippet after this list).
- `remote_checkout`: checks out the source code from the git repository.
- `remote_build`: builds the project with Maven, inside the Kubernetes cluster.
- `remote_publish_s3`: uploads the built artifact to the dedicated S3 bucket with a `public-read` ACL. This is important, since the archive will be downloaded into the Kubernetes cluster when the application is deployed. It can use custom/restricted ACLs as well.
- `delete_app`: deletes the `springboot` release if a previous deployment left one behind.
- `deploy_app`: deploys the application to the cluster, using the `banzaicloud-stable/springboot` Helm chart.
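For example, to have the cluster removed at the end of the flow, the `create_cluster` block could be extended like this (a sketch based on the option above; the rest of the block is unchanged):

```yaml
create_cluster:
  image: banzaicloud/plugin-pipeline-client:0.3.0
  cluster_name: "[[cluster-name]]"
  cluster_provider: "gcloud"
  google_project: "[[google-project-id]]"
  cluster_state: deleted  # the cluster is deleted instead of being kept around
  secrets: [plugin_endpoint, plugin_token]
```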
The progress can be followed on the user interface available on the Control Plane. Once the application has been deployed, its endpoint can be retrieved through the Pipeline API:
```bash
curl --request GET \
  --url 'http://{{CP-ip}}/pipeline/api/v1/clusters/{{cluster_id}}/endpoints' \
  --header 'Authorization: Bearer {{token}}' \
  --header 'Content-Type: application/x-www-form-urlencoded'
```
We've created a Postman collection with many useful Pipeline API calls; check it out to find more ways of managing the clusters and deployments handled by the Pipeline instance on the Control Plane.
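For instance, listing the deployments of a cluster follows the same URL scheme; the exact path below is an assumption modeled on the endpoints call above, so verify it against the Postman collection:

```bash
# Assumed endpoint, modeled on the /endpoints call above
curl --request GET \
  --url 'http://{{CP-ip}}/pipeline/api/v1/clusters/{{cluster_id}}/deployments' \
  --header 'Authorization: Bearer {{token}}'
```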
The Spring Boot example application uses an embedded Tomcat server by default. For simplicity's sake we haven't changed that.
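The commented-out `env` section of the `deploy_app` block shows how the server port could be overridden through `JAVA_OPTS` at deployment time; the same override, run locally, would look like this:

```bash
# Run the jar with the embedded Tomcat listening on port 8080
java -Dserver.port=8080 -jar complete/target/gs-spring-boot-0.1.0.jar
```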
The chart sets up the necessary components in the Kubernetes cluster.
JVM monitoring comes out-of-the-box: if you are deploying a Spring Boot application using our CI/CD pipeline, API or spotguides, you're already the beneficiary of it. We have a collector that configurably scrapes and exposes the mBeans of a JMX target. It runs as a Java agent, exposing an HTTP server and serving metrics of the local JVM. It can also be run as an independent HTTP server and scrape remote JMX targets, but this has various disadvantages, such as making it harder to configure and rendering it unable to expose process metrics (e.g., memory and CPU usage). Running the exporter as a Java agent is thus strongly encouraged.
We have forked this exporter, and enhanced it a bit with a Dockerfile, which adds support for all of the options above.
For the agent version, you'll have three configuration options:

- the host to bind the exporter's HTTP(S) interface to (optional)
- the port for the HTTP(S) interface, where the metrics will be available to be scraped, already in a Prometheus-friendly format
- the path of the exporter's configuration file

An example looks like this:
```bash
-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent-0.3.1-SNAPSHOT.jar=9020:/etc/jmx-exporter/config.yaml
```
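Once the agent is attached to a running JVM, the metrics can be spot-checked with a plain HTTP request; `/metrics` is the exporter's usual endpoint, and the port is the one passed to the agent above:

```bash
# Fetch the local JVM's metrics in Prometheus format
curl http://localhost:9020/metrics
```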