# VM Migration Guide
This guide explains how to migrate VMs between providers using VirtRigaud's VM migration feature.
## Overview

VirtRigaud supports migrating VMs across different providers (Libvirt, Proxmox, vSphere) without data loss. The migration uses a cold migration approach:

1. **Snapshot**: Create a snapshot of the source VM (optional)
2. **Export**: Export the VM disk from the source provider
3. **Transfer**: Upload the disk to intermediate storage (S3/HTTP/NFS)
4. **Import**: Download and import the disk to the target provider
5. **Create**: Create the target VM with the migrated disk
6. **Validate**: Verify the migration succeeded
## Prerequisites

### Storage Backend
You must configure intermediate storage for disk transfer. Supported options:
- S3-compatible storage (AWS S3, MinIO)
- HTTP/HTTPS file server
- NFS shared storage
### Provider Requirements
- Source and target providers must be running and healthy
- Source VM must be provisioned and accessible
- Target provider must have sufficient storage space
- Network connectivity between providers and storage
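Before creating a migration, it is worth confirming that both providers report healthy:

```bash
# List all providers across namespaces and check their status
kubectl get provider -A

# Inspect a specific provider, e.g. the migration target from this guide
kubectl describe provider proxmox-provider -n virtrigaud-system
```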
## Basic Migration

### Step 1: Create Storage Configuration

If using S3:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: migration-storage-credentials
  namespace: virtrigaud-system
type: Opaque
stringData:
  accessKey: "YOUR_ACCESS_KEY"
  secretKey: "YOUR_SECRET_KEY"
```
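Apply the secret (the file name is illustrative):

```bash
kubectl apply -f migration-storage-credentials.yaml
```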
### Step 2: Create VMMigration Resource
```yaml
apiVersion: infra.virtrigaud.io/v1beta1
kind: VMMigration
metadata:
  name: migrate-vm-to-proxmox
  namespace: default
spec:
  # Source VM configuration
  source:
    vmRef:
      name: my-libvirt-vm
    createSnapshot: true  # Create snapshot before migration
  # Target configuration
  target:
    name: my-vm-on-proxmox
    providerRef:
      name: proxmox-provider
      namespace: virtrigaud-system
    classRef:
      name: medium-vm
    networks:
      - name: default
  # Storage configuration
  storage:
    type: s3
    bucket: vm-migrations
    region: us-east-1
    endpoint: https://s3.amazonaws.com
    credentialsSecretRef:
      name: migration-storage-credentials
  # Migration options
  options:
    diskFormat: qcow2
    compress: true
    verifyChecksums: true
    timeout: 4h
    retryPolicy:
      maxRetries: 3
      retryDelay: 5m
      backoffMultiplier: 2
```
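Apply the manifest (file name illustrative):

```bash
kubectl apply -f migrate-vm-to-proxmox.yaml
```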
### Step 3: Monitor Migration Progress

```bash
# Watch migration status
kubectl get vmmigration migrate-vm-to-proxmox -w

# View detailed status
kubectl describe vmmigration migrate-vm-to-proxmox

# Check migration logs
kubectl logs -n virtrigaud-system deployment/virtrigaud-manager -f | grep migrate-vm-to-proxmox
```
### Step 4: Verify Migration

Once the migration reaches the `Ready` phase:

```bash
# Check that the target VM was created
kubectl get virtualmachine my-vm-on-proxmox

# Verify the VM is running
kubectl describe virtualmachine my-vm-on-proxmox
```
## Advanced Features

### Using Existing Snapshot

To migrate from a specific snapshot:

```yaml
spec:
  source:
    vmRef:
      name: my-vm
    snapshotRef:
      name: pre-migration-snapshot
    createSnapshot: false  # Use the existing snapshot
```
### Delete Source VM After Migration

```yaml
spec:
  source:
    vmRef:
      name: my-vm
    deleteAfterMigration: true  # Delete source VM after successful migration
```
### Cross-Namespace Migration

```yaml
spec:
  source:
    vmRef:
      name: vm-in-prod
      namespace: production
    providerRef:
      name: vsphere-provider
      namespace: infrastructure
  target:
    name: vm-in-staging
    namespace: staging
    providerRef:
      name: proxmox-provider
      namespace: infrastructure
```
### NFS Storage Backend
For on-premises deployments:
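The exact NFS field names are not documented here; a plausible sketch, assuming the storage block takes a shared `path` for `type: nfs`, is:

```yaml
spec:
  storage:
    type: nfs
    # Hypothetical field -- check the VMMigration CRD reference for exact names
    path: /mnt/vm-migrations  # Shared path available to all provider pods
```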
The NFS share must be mounted on all provider pods at the same path.
### HTTP Storage Backend
For simple file server storage:
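As with NFS, the exact fields should be confirmed against the CRD reference; a plausible sketch, assuming an `endpoint` field and optional credentials, is:

```yaml
spec:
  storage:
    type: http
    # Hypothetical fields -- check the VMMigration CRD reference for exact names
    endpoint: https://files.example.internal/vm-migrations
    credentialsSecretRef:
      name: migration-storage-credentials
```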
## Migration Phases

The migration goes through the following phases:

| Phase | Description |
|---|---|
| `Pending` | Migration created, awaiting processing |
| `Validating` | Validating source VM, target provider, storage |
| `Snapshotting` | Creating snapshot of source VM |
| `Exporting` | Exporting disk from source provider |
| `Importing` | Importing disk to target provider |
| `Creating` | Creating target VM |
| `ValidatingTarget` | Verifying target VM is ready |
| `Ready` | Migration completed successfully |
| `Failed` | Migration failed (will retry if configured) |
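To read the current phase directly (assuming it is surfaced at `status.phase`):

```bash
kubectl get vmmigration migrate-vm-to-proxmox -o jsonpath='{.status.phase}'
```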
## Monitoring

### Status Conditions

Key conditions:

- `Ready`: Migration completed
- `Validating`: Validation in progress
- `Exporting`: Disk export in progress
- `Importing`: Disk import in progress
- `Failed`: Migration failed
### Progress Tracking

The migration status shows:

- Bytes transferred
- Overall percentage
- Current phase
- Estimated time remaining
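A quick way to pull these fields, assuming they live under `status.progress`:

```bash
kubectl get vmmigration migrate-vm-to-proxmox -o jsonpath='{.status.progress}'
```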
## Troubleshooting

### Migration Stuck in Exporting Phase

**Problem**: The export task is not completing.

**Solutions**:

1. Check that the source provider is healthy: `kubectl get provider <name>`
2. Verify the storage credentials are correct
3. Check network connectivity to the storage backend
4. Review the provider logs for errors
### Migration Failed with Storage Error

**Problem**: The intermediate storage cannot be accessed.

**Solutions**:

1. Verify the storage configuration is correct
2. Check that the credentials secret exists and is valid
3. Test connectivity: `aws s3 ls s3://<bucket>` for S3, or `curl` for HTTP
4. Ensure the storage has sufficient space
### Target VM Creation Failed

**Problem**: VM creation fails on the target provider.

**Solutions**:

1. Check that the target provider has sufficient resources
2. Verify the VMClass and network references exist
3. Review the target provider logs
4. Check that the imported disk is valid
### Migration Keeps Retrying

**Problem**: The migration retries but keeps failing.

**Solutions**:

1. Check the error message in `status.message`
2. Review the retry count: `kubectl get vmmigration -o jsonpath='{.status.retryCount}'`
3. Once the maximum retry count is reached, the migration stays in the `Failed` phase
4. Delete and recreate the migration after fixing the root cause
## Cleanup

### VM Lifecycle and Deletion Behavior

**Important**: VirtRigaud follows a separation-of-concerns pattern for VM lifecycle management:

- **Successful migrations**: The target `VirtualMachine` is completely independent of the `VMMigration` resource
- **Failed migrations**: Partially created VMs are automatically cleaned up
- **Migration deletion**: Deleting a `VMMigration` resource never deletes successfully migrated VMs

This ensures that:

- ✅ Production VMs persist independently after migration completes
- ✅ You can safely delete migration resources to clean up artifacts
- ✅ Failed/partial VMs are automatically cleaned up
- ✅ A clear audit trail remains via VM annotations
### Manual Cleanup
Delete the migration resource after completion:
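```bash
# Uses the example migration name from this guide
kubectl delete vmmigration migrate-vm-to-proxmox
```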
This automatically cleans up:

- ✅ Intermediate storage artifacts (PVCs, uploaded disks)
- ✅ Migration-created snapshots
- ✅ Partially created target VMs (only if the migration failed)
- ❌ Never deletes successfully migrated VMs
The migrated VM remains and must be deleted separately if needed:
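```bash
# Only if you actually want to remove the migrated VM
kubectl delete virtualmachine my-vm-on-proxmox
```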
### Migration Completion Marker

When a migration completes successfully, the target VM receives a special annotation:

```yaml
metadata:
  annotations:
    virtrigaud.io/migration-completed: "true"
    virtrigaud.io/migration-completed-at: "2025-11-08T15:30:00Z"
```
This marker:

- Protects the VM from deletion when the migration is removed
- Provides an audit trail of when the migration completed
- Indicates the VM is production-ready and independent
### Cleanup Configuration

Control cleanup behavior for source resources:

```yaml
spec:
  source:
    deleteAfterMigration: true  # Delete source VM after success
  options:
    cleanupPolicy:
      keepSnapshots: false            # Delete migration snapshots
      keepIntermediateStorage: false  # Delete storage artifacts
```
## Best Practices

### 1. Pre-Migration Checklist
- Verify source VM is in a consistent state
- Create a manual snapshot for rollback
- Test storage connectivity (a quick check is sketched after this list)
- Verify sufficient storage space (2x VM disk size)
- Plan for downtime if source VM must be powered off
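A quick connectivity check for the S3 backend from the example above (the `aws` CLI is assumed to be configured with the same credentials; the HTTP URL is illustrative):

```bash
# Confirm the migration bucket is reachable with the configured credentials
aws s3 ls s3://vm-migrations

# For an HTTP backend, a HEAD request is enough to verify reachability
curl -I https://files.example.internal/vm-migrations/
```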
### 2. Storage Selection

- **S3**: Best for cloud deployments, multi-site migrations
- **NFS**: Best for on-premises, high-speed local network
- **HTTP**: Simple setups, temporary migrations
### 3. Network Considerations

- Migration time depends on network bandwidth; a rough estimate is worked after this list
- 100GB VM over 1Gbps network: ~15 minutes
- 1TB VM over 100Mbps network: ~24 hours
- Use compression for slow networks
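The timing estimates follow from a simple formula: transfer time ≈ (size in GB × 8) / (bandwidth in Gbps) seconds, before compression and protocol overhead:

```bash
# 100 GB over 1 Gbps
echo $(( 100 * 8 / 1 ))    # 800 s, about 13 min -> "~15 minutes" with overhead

# 1 TB (1000 GB) over 100 Mbps (0.1 Gbps)
echo $(( 1000 * 8 * 10 ))  # 80000 s, about 22 h -> "~24 hours" with overhead
```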
### 4. Validation

After migration:

- Test that the VM boots successfully
- Verify applications work correctly
- Check network connectivity
- Validate disk integrity
### 5. Rollback Plan

Before migration:

1. Document the source VM configuration
2. Create a manual snapshot
3. Test the restore procedure
4. Keep the source VM until validation is complete
## Performance Tuning

### Compression
Enable compression for slower networks:
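The `compress` option from the migration spec handles this:

```yaml
spec:
  options:
    compress: true  # Trades CPU time for less data on the wire
```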
### Parallel Migrations
Run multiple migrations concurrently:
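This guide does not define a dedicated concurrency setting; each `VMMigration` is reconciled independently, so the straightforward approach is to create several resources at once (file names are illustrative):

```bash
# Each VMMigration runs on its own, so applying several manifests
# at the same time migrates the VMs concurrently
kubectl apply -f migrate-web.yaml -f migrate-db.yaml
```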
### Timeout Configuration
Adjust timeout for large VMs:
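The `timeout` option from the migration spec controls this (the Step 2 example uses `4h`):

```yaml
spec:
  options:
    timeout: 12h  # Illustrative value for a large VM over a slow link
```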
## Security Considerations

### Storage Credentials

- Store credentials in Kubernetes Secrets
- Use RBAC to restrict access (a minimal Role is sketched after this list)
- Rotate credentials regularly
- Use encrypted storage (S3 SSE, HTTPS)
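A minimal sketch of such a Role, scoped to the credentials secret used in this guide (names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: migration-secret-reader
  namespace: virtrigaud-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["migration-storage-credentials"]
    verbs: ["get"]
```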
### Network Security
- Use HTTPS/TLS for all transfers
- Verify checksums are enabled
- Use VPN for cross-site migrations
- Isolate migration traffic if possible
## Examples

See `docs/examples/` for complete examples:

- `vmmigration-basic.yaml`: Simple migration
- `vmmigration-s3.yaml`: Migration with S3 storage
- `vmmigration-nfs.yaml`: Migration with NFS storage
- `vmmigration-advanced.yaml`: Advanced configuration
- `vmmigration-cross-namespace.yaml`: Cross-namespace migration
## How Migrated VMs Are Created

### ImportedDisk Feature

Migrated VMs use the `importedDisk` field instead of `imageRef` in the `VirtualMachine` spec. This provides the following benefits:

- Full audit trail via `migrationRef`
- Disk metadata (format, source, size)
- Type-safe validation
- Clear separation between template-based and disk-based VMs
Example: when a migration completes, the target VM is automatically created with:

```yaml
apiVersion: infra.virtrigaud.io/v1beta1
kind: VirtualMachine
metadata:
  name: my-vm-migrated
  namespace: default
  annotations:
    virtrigaud.io/migrated-from: "default/my-source-vm"
    virtrigaud.io/migration: "default/my-migration"
    virtrigaud.io/imported-disk-id: "my-vm-migrated-disk"
spec:
  providerRef:
    name: target-provider
  classRef:
    name: medium-vm
  # importedDisk is set instead of imageRef
  importedDisk:
    diskID: my-vm-migrated-disk
    format: qcow2
    source: migration
    migrationRef:
      name: my-migration
  networks:
    - name: default
```
Key points:

- `importedDisk` and `imageRef` are mutually exclusive
- `importedDisk.diskID` references the disk imported during migration
- `migrationRef` provides traceability back to the migration resource
- Providers automatically resolve disk paths based on `diskID`
### Manual Disk Import (Advanced)

You can also create VMs from manually imported disks:

```yaml
apiVersion: infra.virtrigaud.io/v1beta1
kind: VirtualMachine
metadata:
  name: manual-disk-vm
spec:
  providerRef:
    name: libvirt-provider
  classRef:
    name: medium-vm
  importedDisk:
    diskID: my-manual-disk
    path: /var/lib/libvirt/images/my-manual-disk.qcow2  # Optional: explicit path
    format: qcow2
    source: manual  # Options: migration, clone, import, snapshot, manual
    sizeGiB: 40     # Optional: for capacity planning
```
This is useful for:

- Importing VMs from external sources
- Using cloned disks
- Working with snapshot-based disks
- Migrating VMs manually
## API Reference

For detailed API documentation, see:

- VMMigration CRD Reference
- Migration Status Fields
- Storage Configuration
- ImportedDiskRef API