Learning CI/CD

This repository contains notes, configuration files and scripts created while learning Docker, Kubernetes, Jenkins, GitLab, Ansible, Terraform and KVM.

Overview

The proposed solution contains the key technologies used to build CI/CD in a home environment. Details of my solution are shown in the overview diagram below. It was created using PlantUML, but there are also alternatives among other diagram-as-code tools.

(solution_overview diagram)

Prepare VM for CI/CD learning

To prepare a VM for learning, I downloaded the Debian non-free netinst image and, after creating the VM in VirtualBox and installing Debian, added the IP address of the machine on the host and copied SSH keys to enable passwordless access:

grep devops /etc/hosts
192.168.0.18  	devops

ssh-copy-id devops 

On the VM, add the user to the sudo group and allow sudo without a password:

sudo adduser seba sudo

sudo visudo
seba   ALL=(ALL) NOPASSWD:ALL

After the basic configuration, use the playbooks to automatically provision the VM:

cd playbooks
./cicd.sh

Docker

While learning Docker some time ago, I created a gist with many examples of useful commands.

To install Docker I used a great tutorial, which I modified to use Docker on Debian. Besides Docker, I installed Docker Compose and Ctop.

Besides creating single images for containers, there is a very useful pattern in a development environment - multi-stage builds, which allow artifacts built in one container to be used in another one.
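
A minimal sketch of a multi-stage build for a Java service like the ones used later in this README (the Maven image tag, paths and artifact name are assumptions, not taken from the actual Dockerfiles):

cat > Dockerfile <<'EOF'
# stage 1: build the artifact with a full JDK and Maven
FROM maven:3-openjdk-11 AS build
COPY . /src
WORKDIR /src
RUN mvn -q package

# stage 2: copy only the built JAR into a slim runtime image
FROM openjdk:11-jre-slim
COPY --from=build /src/target/*.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
EOF
docker build -t multi-stage-demo .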

While containerizing an app, an important topic is improving performance, e.g. for Spring.

Docker Registry

To store Docker images you can use Docker Hub or deploy a registry server. After starting it, use the following commands to push images to the new registry:

docker image tag sebastian-czech/simple-rest-api-python-flask  192.168.0.18:5000/python-api
docker push 192.168.0.18:5000/python-api

docker image tag sebastian-czech/simple-rest-api-java-spring  192.168.0.18:5000/java-api
docker push 192.168.0.18:5000/java-api

At first I was using an insecure registry, and then one with a self-signed certificate.
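
For reference, a plain local registry can be started with the official registry image (a minimal sketch; the flags needed for the insecure or self-signed-certificate variants are omitted here):

docker run -d -p 5000:5000 --restart=always --name registry registry:2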

To display all images and their tags you can use these URLs:

http://192.168.0.18:5000/v2/_catalog
http://192.168.0.18:5000/v2/api-java/tags/list

Docker Compose

To start Docker Compose from a pipeline I used the Docker Compose Build Step Plugin.

From the CLI, to start and stop the services defined in the compose file, we use the following commands:

docker-compose up -d
docker-compose start    
docker-compose stop
docker-compose start webapp    
docker-compose stop webapp
docker-compose down
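
The compose file itself is not included in this snippet; a minimal sketch defining the webapp service referenced above might look like this (image and port mapping are assumptions):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  webapp:
    image: 192.168.0.18:5000/java-api   # assumed image from the local registry
    ports:
      - "48080:48080"                   # assumed port mapping
EOF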

Docker Swarm

To start with Docker Swarm there is a tutorial about creating a swarm and deploying a service. In my solution I used the following commands:

docker info 
docker swarm init --advertise-addr 192.168.0.27

docker swarm join-token worker
docker swarm join --token SWMTKN-1-3hnvuy1bwvcrq398b616t1waaapzh0vgwvaxt048nktjb98470-3x2ejgu5jqjbtojib8t1i702y 192.168.0.27:2377

docker node ls

docker service create --replicas 1 --name helloworld alpine ping docker.com
docker service ls
docker service inspect --pretty helloworld
docker service ps helloworld
docker service scale helloworld=2
docker service rm helloworld

docker service create \
  --name api-java \
  --publish published=36080,target=48080 \
  --replicas 2 \
  192.168.0.27/api-java:cicd
docker service rm api-java

While creating a pipeline to deploy on Docker Swarm using Ansible, I used the docker_swarm_service module.
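
A minimal playbook sketch using docker_swarm_service with the same service parameters as the docker service create command above (the inventory group name and connection details are assumptions):

cat > swarm-service.yml <<'EOF'
- hosts: swarm_manager        # hypothetical inventory group
  tasks:
    - name: Deploy api-java as a swarm service
      docker_swarm_service:
        name: api-java
        image: 192.168.0.27/api-java:cicd
        replicas: 2
        publish:
          - published_port: 36080
            target_port: 48080
EOF
ansible-playbook swarm-service.yml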

Kubernetes

While learning Kubernetes some time ago, I created a gist with many examples of useful commands.

In another of my repositories, DevOps-Engineer, I have included many commands for K8s.

For learning there is a great lightweight Kubernetes distribution - K3s. To use kubectl I configured it with the following commands:

# mkdir /home/seba/.kube
# cp /etc/rancher/k3s/k3s.yaml /home/seba/.kube/config
# chown -R seba:seba /home/seba/.kube
$ export KUBECONFIG=/home/seba/.kube/config

Using the following commands we can check the default configuration:

kubectl api-resources

kubectl get pods --all-namespaces 
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   helm-install-traefik-r46s6               0/1     Completed   0          11d
kube-system   metrics-server-7566d596c8-mx6bk          1/1     Running     22         11d
kube-system   local-path-provisioner-6d59f47c7-t8266   1/1     Running     42         11d
kube-system   svclb-traefik-vtcb6                      2/2     Running     44         11d
kube-system   coredns-8655855d6-nppnc                  1/1     Running     24         11d
kube-system   traefik-758cd5fc85-vhgzb                 1/1     Running     33         11d

kubectl cluster-info 
Kubernetes master is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION    CONTAINER-RUNTIME
devops   Ready    master   11d   v1.18.6+k3s1   192.168.0.18   <none>        Debian GNU/Linux 10 (buster)   4.19.0-10-amd64   containerd://1.3.3-k3s2

kubectl get namespaces   
NAME              STATUS   AGE
default           Active   11d
kube-system       Active   11d
kube-public       Active   11d
kube-node-lease   Active   11d

kubectl get all --all-namespaces 
NAMESPACE     NAME                                         READY   STATUS      RESTARTS   AGE
kube-system   pod/helm-install-traefik-r46s6               0/1     Completed   0          11d
kube-system   pod/metrics-server-7566d596c8-mx6bk          1/1     Running     22         11d
kube-system   pod/local-path-provisioner-6d59f47c7-t8266   1/1     Running     42         11d
kube-system   pod/svclb-traefik-vtcb6                      2/2     Running     44         11d
kube-system   pod/coredns-8655855d6-nppnc                  1/1     Running     24         11d
kube-system   pod/traefik-758cd5fc85-vhgzb                 1/1     Running     33         11d

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>         443/TCP                      11d
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>         53/UDP,53/TCP,9153/TCP       11d
kube-system   service/metrics-server       ClusterIP      10.43.163.21    <none>         443/TCP                      11d
kube-system   service/traefik-prometheus   ClusterIP      10.43.177.118   <none>         9100/TCP                     11d
kube-system   service/traefik              LoadBalancer   10.43.101.73    192.168.0.18   80:31584/TCP,443:32753/TCP   11d

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         1       1            1           <none>          11d

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/metrics-server           1/1     1            1           11d
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           11d
kube-system   deployment.apps/coredns                  1/1     1            1           11d
kube-system   deployment.apps/traefik                  1/1     1            1           11d

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/metrics-server-7566d596c8          1         1         1       11d
kube-system   replicaset.apps/local-path-provisioner-6d59f47c7   1         1         1       11d
kube-system   replicaset.apps/coredns-8655855d6                  1         1         1       11d
kube-system   replicaset.apps/traefik-758cd5fc85                 1         1         1       11d

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           36s        11d

While preparing a pipeline to deploy an app in K8s, I used a blog post about CI/CD and K8s and a tutorial in which GKE was used.

To integrate Ansible with K8s I used the k8s and k8s_info modules.
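
A minimal sketch of calling those modules from a playbook (the manifest file names match the kubectl commands below; host and namespace are assumptions):

cat > k8s-deploy.yml <<'EOF'
- hosts: localhost
  tasks:
    - name: Apply the deployment manifest
      k8s:
        state: present
        src: deployment.yml
    - name: Apply the service manifest
      k8s:
        state: present
        src: service.yml
    - name: Read back deployment information
      k8s_info:
        kind: Deployment
        namespace: default
EOF
ansible-playbook k8s-deploy.yml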

To create the deployment and service from the command line we can use:

kubectl apply -f deployment.yml
kubectl apply -f service.yml
kubectl apply -f .
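
The manifests themselves live next to the application code; a minimal deployment.yml sketch consistent with the api-java-deployment name used later in this README might look like this (image, labels and port are assumptions):

cat > deployment.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-java-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-java
  template:
    metadata:
      labels:
        app: api-java
    spec:
      containers:
        - name: api-java
          image: 192.168.0.18:5000/java-api   # assumed image from the local registry
          ports:
            - containerPort: 48080            # assumed application port
EOF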

For the lab only, I configured a private registry for k3s in the file /etc/rancher/k3s/registries.yaml:

mirrors:
  docker.io:
    endpoint:
      - "http://192.168.0.18:5000"

To check a pod and delete or recreate the deployment, we can use commands from the K8s Cheat Sheet:

kubectl describe pods api-java-deployment-75bb8f97df-gfss4    

kubectl delete -f deployment.yml
kubectl delete -f service.yml 
kubectl delete -f .

While integrating with Kubernetes, a problem with managing certificates needed to be resolved.

While creating the deployment and service, I used a tutorial about exposing an external IP.

To access the IP from outside, I changed iptables following the Oracle documentation:

sudo iptables -L -v -n    
sudo iptables-legacy -L -v -n      

sudo iptables-save > /home/seba/iptables-20200904                
sudo iptables-legacy-save > /home/seba/iptables-legacy-20200904  

sudo iptables -P FORWARD ACCEPT                
# or 
sudo iptables -F                                                 
sudo iptables -X                                                 

After that I found a great article which gave me more ideas on what to do with Kubernetes and Ansible: How useful is Ansible in a Cloud-Native Kubernetes Environment?

Using the following command you can manually scale a deployment:

kubectl scale deployment --replicas=2 api-java-deployment

Using the following commands you can work with config maps:

kubectl create configmap api-java-config --from-file=application.properties
kubectl describe configmaps api-java-config 
kubectl get configmaps 
kubectl get configmaps api-java-config
kubectl get configmaps api-java-config -o yaml

Using the following commands you can work with secrets:

echo -n 'secret123' | base64  

kubectl apply -f secret.yaml  
kubectl describe secret api-java-password   
kubectl get secrets   
kubectl get secret api-java-password -o jsonpath='{.data.password}' | base64 --decode 
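
The secret.yaml applied above is not shown in this snippet; a minimal sketch matching the api-java-password name and the password key used in the commands above might be:

cat > secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: api-java-password
type: Opaque
data:
  password: c2VjcmV0MTIz   # base64 of 'secret123' from the echo above
EOF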

Using the following commands you can create a simple Ingress:

kubectl apply -f ingress.yml  
kubectl get ing --all-namespaces   
kubectl delete -f ingress.yml
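
The ingress.yml itself is kept with the application; a minimal sketch for the nip.io hostname mentioned below might look like this (apiVersion, service name and port are assumptions):

cat > ingress.yml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-java-ingress                  # hypothetical name
spec:
  rules:
    - host: api-java.192.168.0.18.nip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-java-service    # hypothetical service name
                port:
                  number: 48080           # hypothetical port
EOF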

After that the simple Java API can be accessed via http://api-java.192.168.0.18.nip.io/.

To configure Traefik with its dashboard I used Deploying Traefik as Ingress Controller for Your Kubernetes Cluster:

kubectl create -f traefik-webui-svc.yaml
kubectl describe svc traefik-web-ui --namespace=kube-system
kubectl create -f traefik-ingress.yaml
kubectl get ing --namespace=kube-system   

In another solution I used K3S: Traefik Dashboard activation and Traefik - Helm chart:

sudo vi /var/lib/rancher/k3s/server/manifests/traefik.yaml

    dashboard:
        enabled: true
        domain: "dashboard-traefik.192.168.0.18.traefik.me"

sudo kubectl apply -f /var/lib/rancher/k3s/server/manifests/traefik.yaml 

After that the Traefik dashboard can be accessed via http://dashboard-traefik.192.168.0.18.traefik.me/dashboard/.

Detailed information about Traefik can be found in Connecting Users to Applications with Kubernetes Ingress Controllers, 13 Key Considerations When Selecting an Ingress Controller for Kubernetes and Ingress Controllers.

A related topic is wildcard DNS.

Another important topic is Custom Resources, explained in Kubernetes Operators Explained.

The next topic after that is Operators.

Finally, I created a comparison between service mesh, ingress controller and API gateway.

Service Mesh vs. Ingress Controller vs. API gateway

Definition:
- Service Mesh: dedicated infrastructure layer for facilitating service-to-service communications between microservices, often using a sidecar proxy
- Ingress Controller: an API object that manages external access to the services in a cluster, typically HTTP
- API gateway: takes all API calls from clients, then routes them to the appropriate microservice with request routing, composition, and protocol translation

Example of product:
- Service Mesh: Istio
- Ingress Controller: Traefik, Envoy
- API gateway: KrakenD, Kong

Key points:
- Service Mesh: decorator, circuit breaker, traffic management, security, observability (tracing, metrics and logging)
- Ingress Controller: edge router, reverse proxy, auto service discovery, routing, load balancing, security, observability
- API gateway: business logic, monitoring, security, cache, throttling, aggregation, manipulation, proxy, filtering, QoS, decoding

Civo - k3s-powered Kubernetes service

vi ~/.kube/config
kubectx k3s_cicd 
kubectl cluster-info
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jid
kubectl top pod --all-namespaces
kubectl apply -f kubernetes/civo/ingress-jenkins.yaml
kubectl get all --all-namespaces
export JENKINS_URL=http://jenkins.e596da70-1439-44e8-8ce9-dd0076eef9e9.k8s.civo.com

civo apikey add K3S_CICD ***
civo quota

civo kubernetes config k3s_cicd -s --merge
civo kubernetes ls 
civo kubernetes show k3s_cicd
civo kubernetes applications list
civo kubernetes scale k3s_cicd --nodes=3
civo kubernetes create --remove-applications=traefik --nodes=2 --wait
civo kubernetes rename k3s_cicd --name="k3s_cicd_new"
civo kubernetes applications add Longhorn --cluster=k3s_cicd
civo kubernetes recycle k3s_cicd --node kube-node-f0de 

civo firewall list
civo firewall rule ls k3s_cicd   

export DNS="e89c398e-afac-4f2e-908b-3716147cb1c8.k8s.civo.com" # As per dashboard
export OPENFAAS_URL=http://$DNS:31112
cat /tmp/passwd | faas-cli login --username admin --password-stdin
faas-cli store list
faas-cli list --verbose

faas-cli store deploy nodeinfo
faas-cli describe nodeinfo
echo | faas-cli invoke nodeinfo
echo -n "verbose" | faas-cli invoke nodeinfo

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl top node
kubectl top pod --all-namespaces

Jenkins

There are many ways to start the journey - it's very simple to do it using Docker, for which we need the following commands:

docker network create jenkins
docker network ls

docker volume create jenkins-docker-certs
docker volume create jenkins-data
docker volume ls

docker container run --name jenkins-docker --rm --detach --privileged --network jenkins --network-alias docker --env DOCKER_TLS_CERTDIR=/certs --volume jenkins-docker-certs:/certs/client --volume jenkins-data:/var/jenkins_home --publish 2376:2376 docker:dind
docker container run --name jenkins-blueocean --rm --detach --network jenkins --env DOCKER_HOST=tcp://docker:2376 --env DOCKER_CERT_PATH=/certs/client --env DOCKER_TLS_VERIFY=1 --publish 8080:8080 --publish 50000:50000 --volume jenkins-data:/var/jenkins_home --volume jenkins-docker-certs:/certs/client:ro jenkinsci/blueocean

docker volume inspect jenkins-data 
sudo cat /var/lib/docker/volumes/jenkins-data/_data/secrets/initialAdminPassword 

In bigger environments there is a very useful pattern - a Jenkins cluster, which is great for architecting for scale. Another great tutorial covers building a master and slaves.

Other important topics:

After installing Jenkins, define a new Pipeline from SCM, e.g.:

http://192.168.0.18:9080/seba/simple-rest-api-java-spring

Then create an API token for the user in Jenkins and configure a build trigger for the pipeline in Jenkins, which is used as a webhook in GitLab:

http://admin:USER_TOKEN@192.168.0.18:8080/job/API-java/build?token=PIPELINE_TOKEN

To debug the remote trigger for the pipeline, you can use:

curl -u admin:USER_TOKEN "http://192.168.0.18:8080/job/API-java/build?token=PIPELINE_TOKEN"

If you get the error Url is blocked: Requests to the local network are not allowed, then allow such requests in the GitLab Admin Area settings:

http://192.168.0.18:9080/admin/application_settings/network

Sometimes there is no need to use Docker - global tools defined in Jenkins are enough.

While developing pipelines, I used the Jenkins Pipeline Linter Connector, for which we need the linter described in pipeline development tools.
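
The linter validates a declarative Jenkinsfile such as this minimal sketch (hypothetical stages, not the actual pipeline from this repository; the image name reuses the local registry from the Docker section):

cat > Jenkinsfile <<'EOF'
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // hypothetical build step
            }
        }
        stage('Docker image') {
            steps {
                sh 'docker build -t 192.168.0.18:5000/java-api .'
                sh 'docker push 192.168.0.18:5000/java-api'
            }
        }
    }
}
EOF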

To use the linter from the command line:

export JENKINS_URL=devops:8080                                        
curl -Lv http://$JENKINS_URL/login 2>&1 | grep -i 'x-ssh-endpoint'  
< X-SSH-Endpoint: devops:7788  
ssh -l admin -p 7788 devops help 

export JENKINS_SSHD_PORT=7788
export JENKINS_HOSTNAME=devops
export JENKINS_USER=admin
ssh -l $JENKINS_USER -p $JENKINS_SSHD_PORT $JENKINS_HOSTNAME declarative-linter < Jenkinsfile

export JENKINS_URL=http://admin:***@devops:8080/ 
JENKINS_CRUMB=`curl "$JENKINS_URL/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"`
curl -X POST -H $JENKINS_CRUMB -F "jenkinsfile=<Jenkinsfile" $JENKINS_URL/pipeline-model-converter/validate

In Visual Studio Code, besides the user and password, I configured:

  • Crumb URL: http://devops:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)
  • Linter URL: http://devops:8080/pipeline-model-converter/validate

To abort a job which cannot be stopped from the UI, we can use Manage Jenkins -> Script Console:

Jenkins.instance.getItemByFullName("CI-CD-pipeline-analyze-code")
  .getBuildByNumber(1)
  .finish(
          hudson.model.Result.ABORTED,
          new java.io.IOException("Aborting build")
  );

Jenkins and security

Articles connected with Jenkins and certificates:

Example of using keytool:

keytool -genkeypair -keyalg RSA -alias self_signed -keypass test -keystore test.keystore.p12 -storepass test
keytool -importkeystore -srckeystore test.keystore.p12 -destkeystore test2.keystore.p12 -deststoretype pkcs12
keytool -list -keystore /etc/pki/java/cacerts -storepass changeit

Jenkins and high availability

Articles about HA in Jenkins:

Jenkins and Helm

Using Jenkins Helm Chart to install Jenkins:

brew install helm
helm repo list

helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm search repo stable

helm repo add jenkins https://charts.jenkins.io
helm search repo jenkins

helm show values jenkins/jenkins
helm install jenkins/jenkins -f kubernetes/jenkins/helm-jenkins.yaml --generate-name

NAME: jenkins-1602621398
LAST DEPLOYED: Tue Oct 13 22:36:41 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace default jenkins-1602621398 -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenkins-1602621398" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080
  kubectl --namespace default port-forward $POD_NAME 28080:8080

3. Login with the password from step 1 and the username: admin

4. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/

helm list  
helm uninstall jenkins-1602621398           

Jenkins on Kubernetes

Articles about K8s and Jenkins:

Jenkins and performance optimization

GitLab

There are many ways to install GitLab, but the simplest one is using Docker. In this scenario we need the following commands:

docker volume create gitlab-data
docker volume create gitlab-config
docker volume create gitlab-logs

docker run --detach \
  --hostname devops \
  --publish 9443:443 --publish 9080:80 --publish 2022:22 \
  --name gitlab \
  --restart always \
  --volume gitlab-config:/etc/gitlab \
  --volume gitlab-logs:/var/log/gitlab \
  --volume gitlab-data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest

Other important topics:

To register a GitLab runner installed using Docker, use the following command:

docker run --rm -it -v gitlab-runner-config:/etc/gitlab-runner gitlab/gitlab-runner:latest register

To use the container registry from the command line, use the following commands:

docker login registry.gitlab.com
docker build -t registry.gitlab.com/sebastianczech/simple-rest-api-java-spring .
docker image tag 192.168.0.27/api-java:cicd registry.gitlab.com/sebastianczech/simple-rest-api-java-spring
docker push registry.gitlab.com/sebastianczech/simple-rest-api-java-spring
docker logout
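
A runner registered this way can then execute a minimal .gitlab-ci.yml such as this sketch (hypothetical stages; the real pipelines live in the application repositories):

cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - push

build-image:
  stage: build
  script:
    - docker build -t registry.gitlab.com/sebastianczech/simple-rest-api-java-spring .

push-image:
  stage: push
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker push registry.gitlab.com/sebastianczech/simple-rest-api-java-spring
EOF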

Other CI/CD

Ansible

To prepare each component of the CI/CD environment, I created many playbooks.

Besides typical playbooks there are other important topics to learn:

Robot Framework

To install Robot Framework, I used Docker. To integrate it with Jenkins, an additional plugin is needed. After tests are finished, the results should be published to Jenkins.

SonarQube

To install SonarQube, I used Docker. To integrate it with Jenkins, an additional plugin is needed. For the Sonar Quality Gate it's important to configure a webhook in the project settings, e.g. jenkins http://192.168.0.18:8080/sonarqube-webhook/.

JFrog Artifactory

To install Artifactory, I used Docker. To start working with Artifactory, it's good to read the examples in Jenkins Pipeline - Working With Artifactory and the tutorials:

Sonatype Nexus

To install Nexus, I used Docker. After that I started to integrate it with Jenkins using a tutorial about publishing Maven artifacts to Nexus. While talking about artifacts and versions, it's worth reading about Maven snapshots.

Terraform

Interesting articles to start with Terraform:

Example of using Terraform:

terraform login  

more example.tf 

terraform {
  backend "remote" {
    organization = "sebastianczech"

    workspaces {
      name = "Learning-Terraform"
    }
  }
}

terraform init 
terraform plan
terraform apply 
terraform apply -var-file="terraform.tfvars"                      

Terraform provider for libvirt

git clone https://github.com/dmacvicar/terraform-provider-libvirt
cd terraform-provider-libvirt

sudo apt install libvirt-dev 
sudo apt install genisoimage
sudo cp /usr/bin/genisoimage /usr/local/bin/mkisofs   

make 
mkdir ~/.terraform.d/plugins/
cp terraform-provider-libvirt ~/.terraform.d/plugins/ 
mkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
cp terraform-provider-libvirt ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/

mkdir -p /tmp/terraform-provider-libvirt-pool-ubuntu

sudo grep security_driver /etc/libvirt/qemu.conf
security_driver = "none"

sudo systemctl restart libvirtd.service

cd examples/v0.13/ubuntu
terraform init   
terraform plan
terraform apply -auto-approve
terraform destroy -auto-approve
cat ubuntu-example.tf
terraform {
 required_version = ">= 0.13"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.2"
    }
  }
}

# instance the provider
provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_pool" "ubuntu" {
  name = "ubuntu"
  type = "dir"
  path = "/tmp/terraform-provider-libvirt-pool-ubuntu"
}

# We fetch the latest ubuntu release image from their mirrors
resource "libvirt_volume" "ubuntu-qcow2" {
  name   = "ubuntu-qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = "https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img"
  format = "qcow2"
}

data "template_file" "user_data" {
  template = file("${path.module}/cloud_init.cfg")
}

data "template_file" "network_config" {
  template = file("${path.module}/network_config.cfg")
}

# for more info about paramater check this out
# https://github.com/dmacvicar/terraform-provider-libvirt/blob/master/website/docs/r/cloudinit.html.markdown
# Use CloudInit to add our ssh-key to the instance
# you can add also meta_data field
resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  user_data      = data.template_file.user_data.rendered
  network_config = data.template_file.network_config.rendered
  pool           = libvirt_pool.ubuntu.name
}

# Create the machine
resource "libvirt_domain" "domain-ubuntu" {
  name   = "ubuntu-terraform"
  memory = "512"
  vcpu   = 1

  cloudinit = libvirt_cloudinit_disk.commoninit.id

  network_interface {
    network_name = "default"
  }

  # IMPORTANT: this is a known bug on cloud images, since they expect a console
  # we need to pass it
  # https://bugs.launchpad.net/cloud-images/+bug/1573095
  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = libvirt_volume.ubuntu-qcow2.id
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

# IPs: use wait_for_lease true or after creation use terraform refresh and terraform show for the ips of domain

Terraform with Docker

Step-by-step instructions:

cd terraform/docker 
terraform init

terraform fmt
terraform validate

terraform plan
terraform apply
terraform apply -var "container_name=YetAnotherName"

terraform show
terraform state list
terraform output

docker ps
curl http://localhost:8000/

terraform destroy

Terraform with localstack

The following commands were prepared after reading material about Localstack with Terraform and about using Docker to run AWS locally.

cd terraform/localstack
vi main.tf

terraform init
terraform plan
terraform apply --auto-approve

aws --endpoint-url=http://localhost:4566 dynamodb list-tables

aws dynamodb scan --endpoint-url http://localhost:4566 --table-name dogs

aws --endpoint-url=http://localhost:4566 s3 mb s3://demo-bucket
aws --endpoint-url=http://localhost:4566 s3api put-bucket-acl --bucket demo-bucket --acl public-read

aws --endpoint-url=http://localhost:4566 s3 ls
aws --endpoint-url=http://localhost:4566 s3 ls s3://demo-bucket
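
The main.tf edited above is not shown here; a minimal sketch pointing the AWS provider at Localstack, with a table matching the dogs scan above, might look like this (everything besides the endpoint and table name is an assumption):

cat > main.tf <<'EOF'
# Hypothetical sketch: point the AWS provider at Localstack on port 4566
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_force_path_style         = true

  endpoints {
    s3       = "http://localhost:4566"
    dynamodb = "http://localhost:4566"
  }
}

# Table matching the "dogs" scan used above
resource "aws_dynamodb_table" "dogs" {
  name         = "dogs"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}
EOF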

Localstack

Start:

pip install --upgrade localstack
localstack start
SERVICES=s3 KINESIS_PROVIDER=kinesalite localstack --debug start

or:

docker run --rm -it -p 4566:4566 -p 4571:4571 --env SERVICES=s3 --env KINESIS_PROVIDER=kinesalite --name localstack localstack/localstack

or:

cd terraform/localstack
docker-compose up

Check status:

curl http://127.0.0.1:4566/health | jq

Localstack problem with DynamoDB

docker exec -it localstack_main bash

bash-5.0# cd /opt/code/localstack/localstack/infra/dynamodb

bash-5.0# java -Djava.library.path=./DynamoDBLocal_lib -Xmx256m -jar DynamoDBLocal.jar -port 53703 -inMemory
Initializing DynamoDB Local with the following configuration:
Port:   53703
InMemory:       true
DbPath: null
SharedDb:       false
shouldDelayTransientStatuses:   false
CorsParams:     *

Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.eclipse.jetty.http.MimeTypes.<clinit>(MimeTypes.java:191)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:836)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:167)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:119)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:167)
        at org.eclipse.jetty.server.Server.start(Server.java:418)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
        at org.eclipse.jetty.server.Server.doStart(Server.java:382)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
        at com.amazonaws.services.dynamodbv2.local.server.DynamoDBProxyServer.start(DynamoDBProxyServer.java:83)
        at com.amazonaws.services.dynamodbv2.local.main.ServerRunner.main(ServerRunner.java:76)
Caused by: java.nio.charset.IllegalCharsetNameException: l;charset=iso-8859-1
        at java.base/java.nio.charset.Charset.checkName(Unknown Source)
        at java.base/java.nio.charset.Charset.lookup2(Unknown Source)
        at java.base/java.nio.charset.Charset.lookup(Unknown Source)
        at java.base/java.nio.charset.Charset.forName(Unknown Source)
        at org.eclipse.jetty.http.MimeTypes$Type.<init>(MimeTypes.java:113)
        at org.eclipse.jetty.http.MimeTypes$Type.<clinit>(MimeTypes.java:69)
        ... 15 more

bash-5.0# mkdir JAR
bash-5.0# cd JAR/
bash-5.0# wget https://github.com/intoolswetrust/jd-cli/releases/download/jd-cli-1.2.0/jd-cli-1.2.0-dist.tar.gz
bash-5.0# java -jar jd-cli.jar ../com/amazonaws/services/dynamodbv2/local/main/ServerRunner.class 

AWS CLI

AWS CLI can be used to access Localstack:

aws configure --profile default

AWS Access Key ID [None]: test
AWS Secret Access Key [None]: test
Default region name [None]: us-east-1
Default output format [None]:

aws --endpoint-url=http://localhost:4566 kinesis list-streams
aws --endpoint-url=http://localhost:4566 lambda list-functions
aws --endpoint-url=http://localhost:4566 dynamodb list-tables

X11 forwarding

ssh -X homelab
ssh -Y homelab   
xauth list $DISPLAY
echo $DISPLAY

sudo su - 
xauth add homelab/unix:10  MIT-MAGIC-COOKIE-1  d6c4b66d7e77a9b88011ae46afdec2a8
export DISPLAY=localhost:10.0

Packer

packer validate packer.json
PACKER_LOG=1 packer build -timestamp-ui packer.json
git clone https://github.com/goffinet/packer-kvm.git
cd packer-kvm
vi packer.json

{
    "variables":
    {
      "cpu": "2",
      "ram": "2048",
      "name": "focal",
      "disk_size": "40000",
      "version": "",
      "iso_checksum_type": "sha256",
      "iso_urls": "http://releases.ubuntu.com/20.04/ubuntu-20.04.1-live-server-amd64.iso",
      "iso_checksum": "443511f6bf12402c12503733059269a2e10dec602916c0a75263e5d990f6bb93",
      "headless": "true",
      "config_file": "focal",
      "ssh_username": "ubuntu",
      "ssh_password": "ubuntu",
      "destination_server": "download.goffinet.org"
    },
  "builders": [
    {
      "name": "{{user `name`}}{{user `version`}}",
      "type": "qemu",
      "format": "qcow2",
      "accelerator": "kvm",
      "qemu_binary": "/usr/bin/qemu-system-x86_64",
      "net_device": "virtio-net",
      "disk_interface": "virtio",
      "disk_cache": "none",
      "qemuargs": [[ "-m", "{{user `ram`}}M" ],[ "-smp", "{{user `cpu`}}" ]],
      "ssh_wait_timeout": "45m",
      "ssh_timeout": "45m",
      "http_directory": ".",
      "http_port_min": 10082,
      "http_port_max": 10089,
      "ssh_host_port_min": 2222,
      "ssh_host_port_max": 2229,
      "ssh_username": "{{user `ssh_username`}}",
      "ssh_password": "{{user `ssh_password`}}",
      "ssh_handshake_attempts": 500,
      "iso_urls": "{{user `iso_urls`}}",
      "iso_checksum": "{{user `iso_checksum`}}",
      "boot_wait": "3s",
      "boot_command": [
        "<enter><enter><f6><esc><wait>",
        "<bs><bs><bs><bs>",
        "autoinstall net.ifnames=0 biosdevname=0 ip=dhcp ipv6.disable=1 ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/http/{{ user `config_file` }}/ ",
        "--- <enter>"
      ],
      "disk_size": "{{user `disk_size`}}",
      "disk_discard": "ignore",
      "disk_compression": true,
      "headless": "{{user `headless`}}",
      "shutdown_command": "echo '{{user `ssh_password`}}' | sudo -S shutdown -P now",
      "output_directory": "artifacts/qemu/{{user `name`}}{{user `version`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "{{ .Vars }} sudo -E bash '{{ .Path }}'",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get -y install software-properties-common",
        "sudo apt-add-repository --yes --update ppa:ansible/ansible",
        "sudo apt update",
        "sudo apt -y install ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "ansible/playbook.yml",
      "playbook_dir": "ansible"
    },
    {
      "type": "shell",
      "execute_command": "{{ .Vars }} sudo -E bash '{{ .Path }}'",
      "inline": [
        "sudo apt -y remove ansible",
        "sudo apt-get clean",
        "sudo apt-get -y autoremove --purge"
      ]
    }
  ],
  "post-processors": [
  ]
}
sudo virt-install \
 --name ubuntu \
 --description "Ubuntu20" \
 --os-type=linux \
 --os-variant=ubuntu18.04 \
 --ram=1024 \
 --vcpus=1 \
 --disk path=artifacts/qemu/focal/packer-focal,device=disk,bus=virtio,size=40,format=qcow2 \
 --graphics none \
 --console pty,target_type=serial \
 --network network:default \
 --graphics spice,listen=127.0.0.1 \
 --import \
 --noautoconsole

Cloud-init

# download image
wget http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

# original image is 2G, create snapshot and make it 10G
qemu-img create -b bionic-server-cloudimg-amd64.img -f qcow2 snapshot-bionic-server-cloudimg.qcow2 10G

# show snapshot info
qemu-img info snapshot-bionic-server-cloudimg.qcow2

# ssh keys
ssh-keygen -t rsa -b 4096 -f id_rsa -C test1 -N "" -q
vi cloud_init.cfg

#cloud-config
hostname: test1
fqdn: test1.example.com
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa *** test1
# only cert auth via ssh (console access can still login)
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
     ubuntu:linux
  expire: False
packages:
  - qemu-guest-agent
# written to /var/log/cloud-init-output.log
final_message: "The system is finally up, after $UPTIME seconds"
vi network_config_static.cfg

version: 2
ethernets:
  enp1s0:
    dhcp4: false
    # default libvirt network
    addresses: [ 192.168.122.158/24 ]
    gateway4: 192.168.122.1
    nameservers:
      addresses: [ 192.168.122.1,8.8.8.8 ]
sudo apt-get install -y cloud-image-utils

# insert network and cloud config into seed image
sudo cloud-localds -v --network-config=network_config_static.cfg test1-seed.qcow2 cloud_init.cfg

# show seed disk just generated
qemu-img info test1-seed.qcow2 

sudo apt install  libosinfo-bin      
osinfo-query os| grep ubuntu

sudo virt-install --name test1 \
  --virt-type kvm --memory 2048 --vcpus 2 \
  --boot hd,menu=on \
  --disk path=test1-seed.qcow2,device=cdrom \
  --disk path=snapshot-bionic-server-cloudimg.qcow2,device=disk \
  --graphics vnc \
  --os-type Linux --os-variant ubuntu18.04 \
  --network network:default \
  --console pty,target_type=serial

sudo virsh console test1

ssh ubuntu@192.168.122.158 -i id_rsa

# final cloud-init status
cat /run/cloud-init/result.json

# cloud logs
vi /var/log/cloud-init.log
vi /var/log/cloud-init-output.log

# flag that signals that cloud-init should not run
sudo touch /etc/cloud/cloud-init.disabled

# optional, remove cloud-init completely
sudo apt-get purge cloud-init

# shutdown VM so CDROM seed can be ejected
sudo shutdown -h now

# get name of target path
targetDrive=$(sudo virsh domblklist test1 | grep test1-seed | awk {' print $1 '})

# force ejection of CD
sudo virsh change-media test1 --path $targetDrive --eject --force

KVM

Install packages on Debian and check status of libvirtd

sudo apt install qemu qemu-kvm qemu-system qemu-utils
sudo apt install libvirt-clients libvirt-daemon-system virtinst virt-top

systemctl status libvirtd

List networks

virsh net-list --all

Start network

virsh net-start default
virsh net-autostart default

Prepare directories

sudo mkdir -pv /kvm/{disk,iso}

List all VMs

virsh list  --all

virsh -c qemu:///system list

sudo usermod -G libvirt -a seba        
virsh -c qemu+ssh://seba@homelab/system list

Create new VM

virt-install \
 --name debian10 \
 --description "Debian10" \
 --os-type=linux \
 --os-variant=debian10 \
 --ram=1024 \
 --vcpus=1 \
 --disk path=/kvm/disk/debian10.img,device=disk,bus=virtio,size=10,format=qcow2 \
 --graphics none \
 --console pty,target_type=serial \
 --location '/kvm/iso/debian-firmware-10.5.0-amd64-netinst.iso' \
 --extra-args 'console=ttyS0,115200n8 serial' \
 --network network:default \
 --graphics spice,listen=127.0.0.1 \
 --force --debug 

remote-viewer spice://127.0.0.1:5900
remote-viewer vnc://127.0.0.1:5900

virsh dumpxml debian10 | grep vnc
virsh vncdisplay debian10

ssh user@hostname -L 5901:127.0.0.1:5901

Edit config file

ls -l /etc/libvirt/qemu/debian10.xml
virsh edit debian10

Operations on VM - start, shutdown, suspend, resume

virsh shutdown debian10
virsh start debian10
virsh reboot debian10

virsh suspend debian10
virsh resume debian10

Info about VM

virsh dominfo debian10
virsh vncdisplay debian10
virt-top

Connect to console

virsh console debian10

Delete VM

virsh destroy debian10
virsh undefine debian10

Delete storage pool

sudo virsh pool-destroy ubuntu 
sudo virsh pool-delete ubuntu 
sudo virsh pool-undefine ubuntu 

X forwarding

ssh -X homelab    
ssh -Y homelab    

Virt builder

sudo apt install libguestfs-tools  

virt-builder --list

sudo virt-builder debian-10 \
--size=10G \
--format qcow2 -o /var/lib/libvirt/images/debian10.qcow2 \
--hostname debian10 \
--network \
--timezone Europe/Warsaw

sudo virt-install --import --name debian10 \
--ram 1024 \
--vcpu 1 \
--disk path=/var/lib/libvirt/images/debian10.qcow2,format=qcow2 \
--os-variant debian10 \
--network network:default \
--noautoconsole

virsh console debian10

dpkg-reconfigure openssh-server
useradd -r -m -d /home/seba -s /bin/bash seba
passwd seba
systemctl enable ssh

usermod -aG sudo seba
sudoedit /etc/sudoers
# ...
seba    ALL=(ALL) NOPASSWD:ALL

### [ Disable root user login when using ssh ] ###
echo 'PermitRootLogin no' >> /etc/ssh/sshd_config
systemctl restart ssh

cat /etc/network/interfaces
# ...
auto enp1s0
allow-hotplug enp1s0
iface enp1s0 inet dhcp

ip a s

sudo virsh net-dhcp-leases default

Move VM

scp /var/lib/libvirt/images/VMNAME seba@hostname:/var/lib/libvirt/images/
virsh dumpxml VMNAME > domxml.xml 
virsh net-dumpxml NETNAME > netxml.xml
scp domxml.xml seba@hostname:/home/seba/
virsh net-define netxml.xml && virsh net-start NETNAME && virsh net-autostart NETNAME
virsh define domxml.xml

Resize disk

sudo virsh shutdown debian10     
sudo virsh list --all    
sudo virsh domblklist debian10       
sudo virsh dumpxml debian10 | grep 'disk type' -A 5
sudo qemu-img info /var/lib/libvirt/images/debian10.qcow2         

sudo virsh snapshot-list debian10
sudo virsh snapshot-delete --domain debian10 --snapshotname snapshot1

sudo qemu-img resize /var/lib/libvirt/images/debian10.qcow2 +5G

sudo virsh start debian10
sudo virsh blockresize debian10 /var/lib/libvirt/images/debian10.qcow2 15G  
sudo fdisk -l /var/lib/libvirt/images/debian10.qcow2

lsblk
sudo apt -y install cloud-guest-utils
sudo growpart /dev/vda 1

# if LVM
sudo pvresize /dev/vda1
sudo pvs
sudo vgs
sudo lvextend -l +100%FREE /dev/name-of-volume-group/root
df -hT | grep mapper
## ext4
sudo resize2fs /dev/name-of-volume-group/root
## xfs
sudo xfs_growfs /

# if no LVM
## ext4
sudo resize2fs /dev/vda1
## xfs
sudo xfs_growfs /

Add RAM or CPU to VM

sudo virsh dominfo debian10  
sudo virsh edit debian10 
# ...
<memory unit='KiB'>1548576</memory>
# ...
<vcpu placement='static'>1</vcpu>

Create snapshot and restore

sudo virsh snapshot-list --domain debian10  
sudo virsh snapshot-create --domain debian10 
sudo virsh snapshot-create-as --domain debian10 \
--name "20201141651" \
--description "Snapshot before upgrading"
sudo virsh dumpxml debian10 | grep 'disk type' -A 5  
qemu-img snapshot -l /var/lib/libvirt/images/debian10.qcow2

sudo virsh shutdown debian10 
sudo virsh snapshot-revert --domain debian10 --snapshotname 20201141651 --running

virsh snapshot-delete --domain debian10 --snapshotname 20201141651

SSL/TLS

Resources about SSL/TLS and certificates:

Use the following commands to create your own certificate:

sudo vi /etc/ssl/openssl.cnf  
# in the section [ v3_ca ]
subjectAltName=IP:192.168.0.27

mkdir -p certs

openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt
# CN = 192.168.0.27

openssl x509  -noout -text -in certs/domain.crt 

OpenSSL Cookbook

Commands used while learning from the book OpenSSL Cookbook:

Getting started

openssl version
openssl version -a
openssl help
man ciphers

Trust Store

Perl:

https://hg.mozilla.org/mozilla-central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt
https://raw.github.com/bagder/curl/master/lib/mk-ca-bundle.pl

Go:

https://github.com/agl/extract-nss-root-certs
wget https://raw.github.com/agl/extract-nss-root-certs/master/convert_mozilla_certdata.go
wget https://hg.mozilla.org/mozilla-central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt --output-document certdata.txt
go run convert_mozilla_certdata.go > ca-certificates

Key and Certificate Management

  1. Generate a strong private key,
  2. Create a Certificate Signing Request (CSR) and send it to a CA,
  3. Install the CA-provided certificate in your web server.

Key generation

openssl genrsa -aes128 -out fd.key 2048
openssl rsa -text -in fd.key
openssl rsa -in fd.key -pubout -out fd-public.key

Creating Certificate Signing Requests

# openssl req -new -keyform PEM -key fd.key -outform PEM -out fd.csr -sha256 -batch -subj "..."
openssl req -new -key fd.key -out fd.csr
openssl req -text -in fd.csr -noout

Creating Certificate Signing Requests from existing certificate

openssl x509 -x509toreq -in fd.crt -out fd.csr -signkey fd.key

Unattended CSR Generation

more fd.cnf

[req]
prompt = no
distinguished_name = dn
req_extensions = ext
input_password = PASSPHRASE

[dn]
CN = www.feistyduck.com
emailAddress = webmaster@feistyduck.com
O = Feisty Duck Ltd
L = London
C = GB

[ext]
subjectAltName = DNS:www.feistyduck.com,DNS:feistyduck.com

openssl req -new -config fd.cnf -key fd.key -out fd.csr

Signing Your Own Certificates

openssl x509 -req -days 365 -in fd.csr -signkey fd.key -out fd.crt

Creating Certificates Valid for Multiple Hostnames

more fd.ext

subjectAltName = DNS:*.feistyduck.com, DNS:feistyduck.com

openssl x509 -req -days 365 \
-in fd.csr -signkey fd.key -out fd.crt \
-extfile fd.ext

Examining Certificates

openssl x509 -text -in fd.crt -noout

PEM and DER Conversion

openssl x509 -inform PEM -in fd.pem -outform DER -out fd.der
openssl x509 -inform DER -in fd.der -outform PEM -out fd.pem

PKCS#12 (PFX) Conversion

# openssl pkcs12 -export \
#    -name "My Certificate" \
#    -out fd.p12 \
#    -inkey fd.key \
#    -in fd.crt \
#    -chain
#    -caname root
#    -CAfile ca.crt

openssl pkcs12 -export \
    -name "My Certificate" \
    -out fd.p12 \
    -inkey fd.key \
    -in fd.crt \
    -certfile fd-chain.crt

openssl pkcs12 -in fd.p12 -out fd.pem -nodes

openssl pkcs12 -in fd.p12 -nocerts -out fd.key -nodes
openssl pkcs12 -in fd.p12 -nokeys -clcerts -out fd.crt
openssl pkcs12 -in fd.p12 -nokeys -cacerts -out fd-chain.crt

Obtaining the List of Supported Suites

openssl ciphers -v 'ALL:COMPLEMENTOFALL'

Performance

openssl speed rc4 aes rsa ecdh sha

Connecting to SSL Services

openssl s_client -connect www.google.com:443
openssl s_client -connect www.google.com:443 -servername www.google.com -CAfile self_signed.crt

SSL server

openssl s_server -key public.pem -cert cert.crt -accept 8025 -www

cURL

Articles:

Example of use with client certificates:

curl --cacert ca.crt \
     --key client.key \
     --cert client.crt \
     https://domain.com

OCSP (Online Certificate Status Protocol)

Resources about OCSP:

Testing OCSP with OpenSSL:

# Step 1: Get the server certificate
openssl s_client -connect www.akamai.com:443 < /dev/null 2>&1 |  sed -n '/-----BEGIN/,/-----END/p' > certificate.pem

# Step 2: Get the intermediate certificate
openssl s_client -showcerts -connect www.akamai.com:443 < /dev/null 2>&1 |  sed -n '/-----BEGIN/,/-----END/p'
openssl s_client -showcerts -connect www.akamai.com:443 < /dev/null 2>&1 |  sed -n '/-----BEGIN/,/-----END/p' > chain.pem        

# Step 3: Get the OCSP responder for server certificate
openssl x509 -noout -ocsp_uri -in certificate.pem 
openssl x509 -text -noout -in certificate.pem 

# Step 4: Make the OCSP request
openssl ocsp -issuer chain.pem -cert certificate.pem -text -url http://ocsp.digicert.com
openssl ocsp -issuer chain.pem -cert certificate.pem -text -url http://ocsp2.globalsign.com/cloudsslsha2g3 -header "HOST" "ocsp2.globalsign.com"

Security scanning tools

OSS Index is a free catalogue of open source components. Using its public REST API it's possible to scan your dependencies. Using Docker we can easily check our packages.
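
A hedged example of calling the public REST API directly (the component-report endpoint comes from the OSS Index v3 API documentation; the package coordinate is chosen arbitrarily):

curl -s -X POST https://ossindex.sonatype.org/api/v3/component-report \
  -H "Content-Type: application/json" \
  -d '{"coordinates": ["pkg:pypi/requests@2.20.0"]}' | jq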

R

oysteR

cd containers/security-r
docker build --rm -t centos-r-image .
docker run --name centos-r-container --rm -it centos-r-image

Python

ossaudit

cd containers/security-python
docker build --rm -t centos-python-image .
docker run --name centos-python-container --rm -it centos-python-image

Summary

After finishing work we can stop all containers using the command:

docker stop $(docker ps -a -q)
