Percas Node Update Guide

Overview

This document outlines the process for updating Percas nodes to new specifications with NVMe SSD storage.

Step 1: Create New Nodegroup

  • The new nodegroup name should be scopedb-percas.
  • On AWS, use the i8g.2xlarge instance type; on other clouds, pick a node with similar specs (CPU, memory, and local NVMe SSD).
  • On AWS, make sure to enable SSH access to the nodes and select the keypair shown. The private key for this keypair lives on the bastion host, access to which will be granted to you by the Guance SRE.
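On AWS, one possible way to create such a nodegroup is with an eksctl config. The sketch below is an assumption-laden illustration, not the confirmed provisioning method: the cluster name, region, and keypair name are placeholders you must replace, and the label/taint pair mirrors the nodeSelector and toleration used in the values file later in this guide.

```yaml
# Sketch only: adjust cluster name, region, capacity, and keypair to your environment.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <your-eks-cluster>   # assumption: your existing cluster name
  region: <your-region>
managedNodeGroups:
  - name: scopedb-percas
    instanceType: i8g.2xlarge
    desiredCapacity: 1
    ssh:
      allow: true
      publicKeyName: <keypair-shown-in-console>   # key whose private half lives on the bastion
    labels:
      nodepool: scopedb-percas
    taints:
      - key: nodepool
        value: scopedb-percas
        effect: NoSchedule
```

If you go this route, apply it with `eksctl create nodegroup -f <config-file>.yaml`.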

Step 2: Disk Partitioning and Mounting for New Nodes

On AWS, contact the Guance SRE and ask for bastion host access. From the bastion host we SSH into the percas nodes in the EKS cluster.

When you SSH into the new spec nodes, follow these steps to initialize the NVMe SSD:

  1. Identify the new NVMe SSD disk:

    bash
    lsblk
    # Look for unmounted NVMe devices
    # Common names: /dev/nvme0n1, /dev/nvme1n1, /dev/sdb, /dev/xvdf, etc.
    # Device names vary across different cloud providers
    # IDENTIFY YOUR ACTUAL DEVICE NAME before proceeding

Identify the actual device name and substitute it for <DEVICE> in every command below; do not run the commands with the placeholder as-is.

  2. Partition the disk:

    bash
    sudo fdisk /dev/<DEVICE>
    # Example: sudo fdisk /dev/nvme0n1 or sudo fdisk /dev/sdb
    # Interactive commands:
    # n (new partition)
    # p (primary partition)
    # 1 (partition number)
    # Enter (default first sector)
    # Enter (default last sector - uses entire disk)
    # w (write changes)
  3. Update the partition table (install parted if needed):

    bash
    sudo partprobe /dev/<DEVICE>
    # If partprobe command not found:
    # sudo yum install -y parted  # For RHEL/CentOS
    # sudo apt install -y parted  # For Ubuntu/Debian
  4. Format the partition (note: targeting partition 1, not the whole disk):

    bash
    # CRITICAL: Format the PARTITION (/dev/<DEVICE>p1 for NVMe, /dev/<DEVICE>1 for sd*/xvd*), not the disk (/dev/<DEVICE>)
    sudo mkfs.ext4 /dev/<DEVICE>p1
    # Example: sudo mkfs.ext4 /dev/nvme0n1p1 or sudo mkfs.ext4 /dev/sdb1
  5. Create the mount point and mount:

    bash
    sudo mkdir -p /data
    sudo mount /dev/<DEVICE>p1 /data
    sudo chown root:root /data
    sudo chmod 755 /data
  6. Configure /etc/fstab for persistent mounting:

    bash
    # Get the UUID of the partition
    sudo blkid /dev/<DEVICE>p1
    
    # Add to /etc/fstab (replace UUID with actual value from blkid output)
    echo "UUID=<ACTUAL-UUID-FROM-BLKID> /data ext4 defaults 0 2" | sudo tee -a /etc/fstab
  7. Verify the fstab configuration works:

    bash
    # a. Unmount the partition to test auto-mount
    sudo umount /dev/<DEVICE>p1
    
    # b. Reload all fstab entries (this will re-mount /data automatically)
    sudo mount -a
    
    # c. Verify the filesystem is mounted correctly
    mount | grep /data
    # Should show: /dev/<DEVICE>p1 on /data type ext4 (rw,relatime)

⚠️ CRITICAL REMINDERS:

  • DO NOT copy-paste commands directly - replace <DEVICE> with your actual device name
  • Always format the PARTITION (/dev/<DEVICE>p1) not the DISK (/dev/<DEVICE>)
  • Device names vary by platform (AWS Nitro instances such as i8g: nvme*, older AWS Xen instances: xvd*, Azure: sd*, GCP: sd*)
  • Use lsblk to identify your specific device before proceeding
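The reminders above hinge on the partition-suffix difference between device families. As a hypothetical helper (not part of the runbook), the rule can be sketched in shell; the `first_partition` name and the example `DISK` value are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical helper: NVMe disks take a "p" separator before the partition
# number (/dev/nvme0n1 -> /dev/nvme0n1p1); sd*/xvd* disks do not (/dev/sdb -> /dev/sdb1).
first_partition() {
  case "$1" in
    *nvme*) echo "${1}p1" ;;
    *)      echo "${1}1" ;;
  esac
}

DISK=/dev/nvme0n1   # assumption: replace with the device you found via lsblk
first_partition "$DISK"   # prints /dev/nvme0n1p1
```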

Step 3: Cordon Old Percas Nodes

bash
kubectl cordon <old-percas-node-name>

Step 4: Delete Percas StatefulSet and PVCs

bash
# Delete the StatefulSet
kubectl delete statefulset <percas-statefulset-name> -n guancedb

# Record the PVC and PV names for verification later
kubectl get pvc -n guancedb | grep percas
kubectl get pv | grep percas

# Delete the associated PVCs
kubectl delete pvc <percas-pvc-name> -n guancedb

# Verify the PVs recorded above are actually gone
kubectl get pv | grep <pv-name>

# Delete a PV manually if it was not deleted automatically
kubectl delete pv <pv-name>

Step 5: Update Values Configuration

Update the percas section in your values file to use local storage:

yaml
  percas:
    enabled: true
    dataVolume:
      sizeGi: 1400 ## match this to the actual capacity of the disk
      local:
        enabled: true
        createStorageClass: true
        path: /data
        nodeAffinity:
          required:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                    - scopedb-percas-nodename ## this needs to be the actual node name!
    resources:
      limits:
        cpu: '7'
        memory: 55Gi
    nodeSelector:
      nodepool: scopedb-percas
    tolerations:
      - effect: NoSchedule
        key: nodepool
        operator: Equal
        value: scopedb-percas

This configuration will make percas use the local NVMe storage you mounted at /data instead of network-attached storage.
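To pick a sizeGi value that matches the disk, one way (a sketch, assuming the standard `blockdev` utility is available on the node) is to read the partition's byte size and convert it to GiB; the byte count below is a hypothetical example, not a measured value:

```shell
#!/bin/sh
# On the node (needs root): BYTES=$(blockdev --getsize64 /dev/<DEVICE>p1)
# A hypothetical byte count is used here for illustration:
BYTES=1503238553600
SIZE_GI=$((BYTES / 1024 / 1024 / 1024))
echo "$SIZE_GI"   # use this (or slightly less) as dataVolume.sizeGi
```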

Step 6: Helm Upgrade

bash
helm upgrade <release-name> <chart> -n guancedb -f <values-file>
# Fill in your actual release name, chart reference, and values file

Step 7: Verify PV is Successfully Mounted

Exec into the percas pod and check the mounted filesystem:

bash
kubectl exec -it <percas-pod-name> -n guancedb -- df -h
## you should see the volume backed by the node's /data mount