
Deploy VM: Hosted Repo Method

Provisioner CLI Commands for Single and Multi-node VM

NOTE: We recommend against using the auto_deploy_vm CLI command for deployments, as it is no longer actively maintained.

Before You Start

Checklist:

  • Create VM(s) with at least 4 CPUs and 8 GB of RAM.
  • For single-node VM deployment, ensure the VM is created with 2 or more attached disks.
    OR
  • For multi-node VM deployment, ensure the VMs are created with 8 or more attached disks.
  • Are the attached devices visible when you run lsblk?
  • Do the systems in your setup have valid hostnames, and are those hostnames reachable via ping?
  • Are IPs assigned to all NICs (eth0, eth1, and eth2)?
  • Identify the primary node and run the commands below on it. A quick verification sketch follows this checklist.
    NOTE: For a single-node VM, the VM itself is treated as the primary node.
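
A minimal verification sketch for the checklist above (assumes the interface names eth0, eth1, and eth2, and the sample hostname srvnode-2.localdomain used later on this page; adjust to your setup):

    # Confirm CPU and memory
    nproc
    free -h

    # Confirm the expected number of attached disks besides the OS disk
    lsblk -d -o NAME,SIZE,TYPE

    # Confirm IPs are assigned to the expected NICs
    ip -br addr show

    # Confirm peer hostnames resolve and are reachable (replace with your node's hostname)
    ping -c 2 srvnode-2.localdomain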

VM Preparation for Deployment

  1. Set root user password on all nodes:

    sudo passwd root
    
  2. Install the Provisioner API (NOTE: to be run on all nodes)

    Production Environment

    1. Set repository URL

      export CORTX_RELEASE_REPO="<URL to Cortx R2 stack release repo>"
      
    2. Install Provisioner API and requisite packages

      yum install -y yum-utils
      yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/3rd_party/"
      yum-config-manager --add-repo "${CORTX_RELEASE_REPO}/cortx_iso/"
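
      Optionally, confirm the repos were added (a minimal check; the generated repo IDs are derived from the repo URLs):

      yum repolist enabled | grep -iE '3rd_party|cortx_iso'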
      
    3. Run the following command (a single command, from cat through EOF) to create /etc/pip.conf

      cat <<EOF >/etc/pip.conf
      [global]
      timeout: 60
      index-url: $CORTX_RELEASE_REPO/python_deps/
      trusted-host: $(echo $CORTX_RELEASE_REPO | awk -F '/' '{print $3}')
      EOF
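
      For reference, here is a sketch of the resulting /etc/pip.conf, assuming a hypothetical release repo URL of http://cortx-storage.example.com/releases/cortx/centos-7.8.2003/last_successful (the trusted-host value is the host portion extracted by the awk expression):

      [global]
      timeout: 60
      index-url: http://cortx-storage.example.com/releases/cortx/centos-7.8.2003/last_successful/python_deps/
      trusted-host: cortx-storage.example.com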
      
    4. Cortx Prerequisites

      yum install --nogpgcheck -y java-1.8.0-openjdk-headless
      yum install --nogpgcheck -y python3 cortx-prereq sshpass
      
    5. Prerequisites for Provisioner

      yum install --nogpgcheck -y python36-m2crypto salt salt-master salt-minion
      
    6. Provisioner API

      yum install --nogpgcheck -y python36-cortx-prvsnr
      
    7. Clean up temporary repos

      rm -rf /etc/yum.repos.d/*3rd_party*.repo
      rm -rf /etc/yum.repos.d/*cortx_iso*.repo
      yum clean all
      rm -rf /var/cache/yum/
      rm -rf /etc/pip.conf
      
  3. Verify the Provisioner version (should be 0.36.0 or above)

    provisioner --version
    
  4. Create the config.ini file at a location of your choice:
    IMPORTANT NOTE: Please verify that every detail in this file is correct for your node.
    In particular, confirm the interface names match those on your node (a quick way to list them is sketched below the editor command).

    Update the required details in ~/config.ini, using the sample config.ini below:

    vi ~/config.ini
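
    A minimal sketch for checking the interface names and attached devices before filling in config.ini (assumes standard iproute2 and util-linux tools on the VM):

    # List network interfaces and their addresses
    ip -br link show
    ip -br addr show

    # List attached disks to map into the storage.cvg.0.* entries
    lsblk -nd -o NAME,SIZE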
    

    Sample config.ini for single node VM

    [cluster]
    mgmt_vip=
    
    [srvnode_default]
    network.data.private_interfaces=eth3,eth4
    network.data.public_interfaces=eth1,eth2
    network.mgmt.interfaces=eth0
    bmc.user=None
    bmc.secret=None
    storage.cvg.0.data_devices=/dev/sdc
    storage.cvg.0.metadata_devices=/dev/sdb
    network.data.private_ip=None
    storage.durability.sns.data=1
    storage.durability.sns.parity=0
    storage.durability.sns.spare=0
    
    [srvnode-1]
    hostname=srvnode-1.localdomain
    roles=primary,openldap_server,kafka_server
    
    [enclosure_default]
    type=virtual
    controller.type=virtual
    
    [enclosure-1]
    

    Sample config.ini for a 3-node VM (config.ini needs to be present only on the primary node)

    Note: Run the commands below on each node separately to find its devices, and fill the values into the corresponding config.ini sections.
    Complete list of attached devices (excluding the OS disk sda):
    device_list=$(lsblk -nd -o NAME -e 11 | grep -v sda | sed 's|sd|/dev/sd|g' | paste -s -d, -)
    Value for storage.cvg.0.metadata_devices (the first device in the list):
    echo ${device_list%%,*}
    Values for storage.cvg.0.data_devices (the remaining devices):
    echo ${device_list#*,}
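
    For example, assuming a hypothetical node where lsblk reports sdb, sdc, and sdd in addition to the OS disk sda, the expansions above would yield:

    device_list=/dev/sdb,/dev/sdc,/dev/sdd
    # metadata_devices: everything before the first comma
    echo ${device_list%%,*}     # -> /dev/sdb
    # data_devices: everything after the first comma
    echo ${device_list#*,}      # -> /dev/sdc,/dev/sdd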

    [cluster]
    mgmt_vip=
    
    [srvnode_default]
    network.data.private_interfaces=eth3,eth4
    network.data.public_interfaces=eth1,eth2
    network.mgmt.interfaces=eth0
    bmc.user=None
    bmc.secret=None
    network.data.private_ip=None
    
    [srvnode-1]
    hostname=srvnode-1.localdomain
    roles=primary,openldap_server,kafka_server
    storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
    storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>
    
    [srvnode-2]
    hostname=srvnode-2.localdomain
    roles=secondary,openldap_server,kafka_server
    storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
    storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>
    
    [srvnode-3]
    hostname=srvnode-3.localdomain
    roles=secondary,openldap_server,kafka_server
    storage.cvg.0.data_devices=<Values for storage.cvg.0.data_devices from command executed earlier>
    storage.cvg.0.metadata_devices=<Values for storage.cvg.0.metadata_devices from command executed earlier>
    
    [enclosure_default]
    type=virtual
    
    [enclosure-1]
    
    [enclosure-2]
    
    [enclosure-3]
    

    NOTE:

    1. network.data.private_ip, bmc.secret, and bmc.user should be None for VMs.
    2. mgmt_vip must be provided for 3-node deployments.

Deploy VM Manually:

Manual deployment of a VM consists of the following steps from Auto-Deploy, which can be executed individually.
NOTE: Ensure the VM Preparation for Deployment steps have completed successfully before proceeding.

Bootstrap VM(s): Run the setup_provisioner provisioner CLI command:

Single Node VM: Bootstrap

If using remote hosted repos:

provisioner setup_provisioner srvnode-1:$(hostname -f) \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--dist-type bundle --target-build ${CORTX_RELEASE_REPO}

Multi Node VM: Bootstrap (run this command only on the primary node)

If using remote hosted repos:

provisioner setup_provisioner --console-formatter full --logfile \
    --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm \
    --config-path ~/config.ini --ha \
    --dist-type bundle \
    --target-build ${CORTX_RELEASE_REPO} \
    srvnode-1:<fqdn:primary_hostname> \
    srvnode-2:<fqdn:secondary_hostname> \
    srvnode-3:<fqdn:secondary_hostname>

Example:

provisioner setup_provisioner \
--logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
--ha --dist-type bundle --target-build ${CORTX_RELEASE_REPO} \
srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain

Prepare Pillar Data (run these commands only on the primary node)

Update the data from config.ini into the Salt pillar, then export the pillar data to provisioner_cluster.json:

provisioner configure_setup ./config.ini <number of nodes in cluster>
salt-call state.apply components.system.config.pillar_encrypt
provisioner confstore_export
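
To confirm the export succeeded, you can locate the generated JSON (the export location may vary by release, so search for it rather than assuming a path):

find /opt /etc -name provisioner_cluster.json 2>/dev/null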

NOTE:

  1. --target-build should be a link to the base URL for the hosted 3rd_party and cortx_iso repos.

  2. For --target-build, use builds from the URL below based on the OS:
    centos-7.8.2003: <build_url>/centos-7.8.2003/ OR contact the Cortx RE team for the latest URL.

  3. This command asks for each node's root password during initial cluster setup.
    This is a one-time activity required to set up passwordless SSH across the nodes.

  4. To set up a cluster of more than 3 nodes, append --name <setup_profile_name> to the command's input parameters.

Bootstrap Validation

Once the deployment is bootstrapped (the auto_deploy or setup_provisioner command has completed successfully), verify the Salt master setup on all nodes (setup verification checklist):

salt '*' test.ping  
salt "*" service.stop puppet
salt "*" service.disable puppet
salt '*' pillar.get release  
salt '*' grains.get node_id  
salt '*' grains.get cluster_id  
salt '*' grains.get roles  

Deployment Based On Component Groups:

Once the provisioner setup is complete, you can deploy in stages based on component groups.

NOTE: At any stage, if there is a failure, it is advised to run destroy for that particular group.
For help on destroy commands, refer to https://github.com/Seagate/cortx-prvsnr/wiki/Teardown-Node(s)#targeted-teardown

Non-Cortx Group: System & 3rd-Party Software

  1. System component group
    Single Node
    provisioner deploy --setup-type single --states system
    Multi Node
    provisioner deploy --setup-type 3_node --states system
  2. Prereq component group
    Single Node
    provisioner deploy --setup-type single --states prereq
    Multi Node
    provisioner deploy --setup-type 3_node --states prereq

Cortx Group: Utils, IO Path & Control Path

  1. Utils component group
    Single Node

    provisioner deploy --setup-type single --states utils

    Multi Node

    provisioner deploy --setup-type 3_node --states utils
    
  2. IO path component group
    Single Node

    provisioner deploy --setup-type single --states iopath

    Multi Node

    provisioner deploy --setup-type 3_node --states iopath
  3. Control path component group
    Single Node

    provisioner deploy --setup-type single --states controlpath

    Multi Node

    provisioner deploy --setup-type 3_node --states controlpath

Cortx Group: HA

  1. HA component group
    Single Node
    provisioner deploy --setup-type single --states ha
    Multi Node
    provisioner deploy --setup-type 3_node --states ha

Start cluster (irrespective of number of nodes):

  1. Execute the following command on the primary node to start the cluster:

    cortx cluster start
  2. Verify Cortx cluster status:

    hctl status

Auto Deploy VM (One Click Deployment - Provided Component Mini-Provisioners Comply)

  1. Run the auto_deploy_vm provisioner CLI command:

    Single Node VM

    If using remote hosted repos:

    provisioner auto_deploy_vm srvnode-1:$(hostname -f) \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
    --dist-type bundle --target-build ${CORTX_RELEASE_REPO}

    Multi Node VM

    If using remote hosted repos:

    provisioner auto_deploy_vm --console-formatter full --logfile \
        --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm \
        --config-path ~/config.ini --ha \
        --dist-type bundle \
        --target-build '<path to base url for hosted repo>' \
        srvnode-1:<fqdn:primary_hostname> \
        srvnode-2:<fqdn:secondary_hostname> \
        srvnode-3:<fqdn:secondary_hostname>

    Example:

    provisioner auto_deploy_vm \
    --logfile --logfile-filename /var/log/seagate/provisioner/setup.log --source rpm --config-path ~/config.ini \
    --ha --dist-type bundle --target-build ${CORTX_RELEASE_REPO} \
    srvnode-1:host1.localdomain srvnode-2:host2.localdomain srvnode-3:host3.localdomain
  2. Start cluster (irrespective of number of nodes):
    NOTE: Execute this command only on the primary node (srvnode-1).

    cortx cluster start
  3. Check if the cluster is running:

    hctl status

NOTE:

  1. --target-build should be a link to the base URL for the hosted 3rd_party and cortx_iso repos.

  2. For --target-build, use builds from the URL below based on the OS:
    centos-7.8.2003: <build_url>/centos-7.8.2003/ OR contact the Cortx RE team for the latest URL.

  3. This command asks for each node's root password during initial cluster setup.
    This is a one-time activity required to set up password-less SSH across the nodes.

  4. To set up a cluster of more than 3 nodes, append --name <setup_profile_name> to the auto_deploy_vm command's input parameters.

Known issues:

  1. Known Issue 19: LVM issue - auto-deploy fails during provisioning of storage component (EOS-12289)