# Percas Node Update Guide

## Overview
This document describes the process for updating Percas nodes to new instance specifications backed by local NVMe SSD storage.
## Step 1: Create a New Nodegroup
- The new nodegroup name should be `scopedb-percas`.
- On AWS, use the `i8g.2xlarge` instance type; on other clouds, use nodes with similar specs (a CLI sketch follows below).
- On AWS, make sure to enable "Allow SSH access to nodes" and select the keypair shown. The private key of the keypair lives on the bastion host, to which the Guance SRE will grant you access.
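If you create the nodegroup from the CLI rather than the console, a minimal sketch using eksctl follows; whether you use eksctl at all depends on your setup, and the cluster name, region, keypair name, and node count are placeholders:

```bash
# Hypothetical eksctl invocation; adjust cluster, region, keypair, and count.
eksctl create nodegroup \
  --cluster <cluster-name> \
  --region <region> \
  --name scopedb-percas \
  --node-type i8g.2xlarge \
  --nodes 1 \
  --node-labels nodepool=scopedb-percas \
  --ssh-access \
  --ssh-public-key <keypair-name>

# The values file in Step 5 expects a nodepool=scopedb-percas:NoSchedule
# taint on these nodes; it can be applied once a node has joined:
kubectl taint nodes <new-node-name> nodepool=scopedb-percas:NoSchedule
```

The `nodepool=scopedb-percas` label matches the `nodeSelector` used in Step 5.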
## Step 2: Disk Partitioning and Mounting for New Nodes
On AWS, contact the Guance SRE to request bastion host access. From the bastion host, SSH into the Percas nodes in the EKS cluster.

Once you are on a new node, follow these steps to initialize the NVMe SSD:
Identify the new NVMe SSD disk:

```bash
lsblk
# Look for unmounted NVMe devices
# Common names: /dev/nvme0n1, /dev/nvme1n1, /dev/sdb, /dev/xvdf, etc.
# Device names vary across cloud providers
# IDENTIFY YOUR ACTUAL DEVICE NAME before proceeding
```
Fill in your actual device name wherever `<DEVICE>` appears below; every remaining command uses this value.
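One way to reduce copy-paste mistakes is to capture the device name in a shell variable once and reuse it; a minimal sketch (`nvme1n1` is only an example, substitute your own):

```bash
# Set once, then write /dev/$DEVICE wherever /dev/<DEVICE> appears below
DEVICE=nvme1n1
lsblk /dev/"$DEVICE"   # sanity check: should be the new, unmounted disk
```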
Partition the disk:

```bash
sudo fdisk /dev/<DEVICE>
# Example: sudo fdisk /dev/nvme0n1 or sudo fdisk /dev/sdb
# Interactive commands:
#   n     (new partition)
#   p     (primary partition)
#   1     (partition number)
#   Enter (default first sector)
#   Enter (default last sector - uses entire disk)
#   w     (write changes)
```
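If you prefer a non-interactive alternative to the fdisk dialog, parted can create the same single full-disk partition in one command; a sketch, assuming the disk is blank (this writes a GPT label rather than fdisk's default DOS label, which is fine for a data disk):

```bash
# GPT label plus one partition spanning the whole disk
sudo parted -s /dev/<DEVICE> mklabel gpt mkpart primary ext4 0% 100%
```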
Update the partition table (install parted if needed):

```bash
sudo partprobe /dev/<DEVICE>
# If the partprobe command is not found:
#   sudo yum install -y parted   # RHEL/CentOS
#   sudo apt install -y parted   # Ubuntu/Debian
```

Format the partition (note: targeting partition 1, not the disk):
```bash
# CRITICAL: Format the PARTITION (/dev/<DEVICE>p1), not the disk (/dev/<DEVICE>)
# Note: NVMe partitions carry a "p" suffix (nvme0n1p1); sd*/xvd* disks do not (sdb1)
sudo mkfs.ext4 /dev/<DEVICE>p1
# Example: sudo mkfs.ext4 /dev/nvme0n1p1 or sudo mkfs.ext4 /dev/sdb1
```

Create the mount point and mount:
```bash
sudo mkdir -p /data
sudo mount /dev/<DEVICE>p1 /data
sudo chown root:root /data
sudo chmod 755 /data
```

Configure /etc/fstab for persistent mounting:
```bash
# Get the UUID of the partition
sudo blkid /dev/<DEVICE>p1
# Add to /etc/fstab (replace UUID with the actual value from the blkid output)
echo "UUID=<ACTUAL-UUID-FROM-BLKID> /data ext4 defaults 0 2" | sudo tee -a /etc/fstab
```
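To avoid hand-transcribing the UUID, the two commands above can be combined; a small sketch:

```bash
# Capture the UUID and append the fstab entry in one step
UUID=$(sudo blkid -s UUID -o value /dev/<DEVICE>p1)
echo "UUID=${UUID} /data ext4 defaults 0 2" | sudo tee -a /etc/fstab
```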
Verify the fstab configuration is working:

```bash
# a. Unmount the partition to test auto-mount
sudo umount /dev/<DEVICE>p1
# b. Reload all fstab entries (this will re-mount /data automatically)
sudo mount -a
# c. Verify the filesystem is mounted correctly
mount | grep /data
# Should show: /dev/<DEVICE>p1 on /data type ext4 (rw,relatime)
```
⚠️ CRITICAL REMINDERS:

- DO NOT copy-paste commands directly: replace `<DEVICE>` with your actual device name
- Always format the PARTITION (`/dev/<DEVICE>p1`), not the DISK (`/dev/<DEVICE>`)
- Device names vary by cloud provider (AWS: xvd*, Azure: sd*, GCP: sd*, bare metal: nvme*)
- Use `lsblk` to identify your specific device before proceeding
## Step 3: Cordon Old Percas Nodes

```bash
kubectl cordon <old-percas-node-name>
```
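If you are unsure which nodes to cordon, the nodes currently hosting Percas pods can be listed first; a quick check (namespace taken from Step 4):

```bash
# The NODE column shows where each Percas pod is scheduled
kubectl get pods -n guancedb -o wide | grep percas
```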
## Step 4: Delete Percas StatefulSet and PVCs

```bash
# Delete the StatefulSet
kubectl delete statefulset <percas-statefulset-name>

# Record the PVC and PV names for later verification
kubectl get pvc -n guancedb | grep percas
kubectl get pv | grep percas

# Delete the associated PVCs
kubectl delete pvc <percas-pvc-name>

# Verify that the recorded PVs (and their storage) were actually deleted
kubectl get pv | grep <pv-name>

# Delete a PV manually if it wasn't deleted automatically
kubectl delete pv <pv-name>
```
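If a PV lingers in `Terminating` even after the manual delete, a common last-resort approach is to clear its finalizers; use this with care and only after confirming the backing storage is truly no longer needed:

```bash
# Removes the finalizers that block deletion of a stuck PV
kubectl patch pv <pv-name> -p '{"metadata":{"finalizers":null}}'
```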
## Step 5: Update Values Configuration

Update the `percas` section in your values file to use local storage:
```yaml
percas:
  enabled: true
  dataVolume:
    sizeGi: 1400 ## match this to the actual capacity of the disk
    local:
      enabled: true
      createStorageClass: true
      path: /data
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - scopedb-percas-nodename ## this needs to be the actual node name!
  resources:
    limits:
      cpu: '7'
      memory: 55Gi
  nodeSelector:
    nodepool: scopedb-percas
  tolerations:
    - effect: NoSchedule
      key: nodepool
      operator: Equal
      value: scopedb-percas
```

This configuration makes Percas use the local NVMe storage you mounted at `/data` instead of network-attached storage.
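The actual node name for the `nodeAffinity` values can be read from the cluster; assuming the new nodegroup's nodes carry the `nodepool=scopedb-percas` label referenced by the `nodeSelector` above:

```bash
# Use the NAME shown here in place of scopedb-percas-nodename
kubectl get nodes -l nodepool=scopedb-percas
```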
## Step 6: Helm Upgrade

Run `helm upgrade` with the updated values file.
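A typical invocation looks like the following; the release name, chart reference, and values file path are placeholders for your deployment:

```bash
helm upgrade <release-name> <chart> -n guancedb -f values.yaml
```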
## Step 7: Verify PV is Successfully Mounted

Exec into the Percas pod and run:

```bash
df -h
## you should see `/data`
```
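Equivalently, a one-shot check from outside the pod (the pod name is a placeholder):

```bash
kubectl exec -n guancedb <percas-pod-name> -- df -h | grep /data
```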