Migration is possible only in manual mode. To perform it, the VMs must be prepared: switch the guest OS to UEFI boot mode, export the VMs to OVA/OVF, and send the resulting OVA/OVF files to us so that we can import them. A tool that lets users perform this task themselves will be available a little later.
In the public cloud, as in vCloud, only the virtual layer is visible. In your installation, you can also see the physical layer, just like in vSphere.
First, a node is added to the cluster. Then VMs are migrated to pools re-created on the new disks. Version 2.1 will introduce a mechanism for increasing space without VM migration, which will make it possible to expand a pool by a single disk.
You can add resources to existing hosts. A more involved option is to add hosts to the cluster.
The vDC quotas are managed by the system owner, and requests for resources are handled by external processes.
The cost per GB of SSD is higher than that of HDD, which negates the economic effect when calculating the topology. At the moment we offer vStack on SSDs, as we believe HDDs no longer match modern realities. However, if necessary, we are ready to add HDD support promptly.
In each node, the minimum number of disks equals the number of nodes. This follows mathematically: otherwise the principles of hyperconvergence and failover are undermined, and the failure of even a single disk would be fatal.
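The rule above reduces to a trivial calculation. The sketch below only illustrates the stated postulate; it is not vStack's actual placement logic:

```python
# Illustrative sketch of the postulate above, not vStack's actual
# placement algorithm: each node must contribute at least as many
# disks as there are nodes in the cluster.
def min_disks_per_node(nodes: int) -> int:
    """Minimum number of disks each node must provide."""
    return nodes

def min_disks_in_cluster(nodes: int) -> int:
    """Minimum total disk count for a cluster of `nodes` nodes."""
    return nodes * min_disks_per_node(nodes)

print(min_disks_in_cluster(4))  # a 4-node cluster needs at least 16 disks
```

Note how the total disk count grows quadratically with cluster size, which is one reason disk-count planning matters when sizing a cluster.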
Presenting the VM to an external datastore is a convergent approach. In this case, the principles of hyperconvergence are violated.
Backups can be performed by means of the guest operating system. To do this, install the backup agent for your chosen backup solution, connect it to the backup server, and select the backup/retention policies and schedule.
Yes: record which server slots the disks occupy, move the disks to the new server in the same order, and then cable the new server to the network hardware. All of this is an advantage of hyperconvergence.
There is currently no option to add nodes to an existing cluster, but this is planned for the near future. For now, the capability is available, without a management interface, as part of the Managed vStack service.
We issue a cluster resource utilization report on a monthly basis. If necessary, we can provide these reports with granularity down to individual VMs.
We recommend 10GbE. If I/O performance is important, 25GbE should be the priority.
Yes, it is possible, but preferably with at least 8 cores. Note that, as a rule, some PCIe slots will then be unavailable on modern motherboards.
There is no relationship between the number of nodes and the number of network ports. Our recommended minimum is 4 ports per node, spread across two different NICs.
Any modern x86 servers with Intel processors, in quantities of 4 or more, can be used as vStack cluster nodes. Example node configuration:
- 2U Intel server platform
- 2 x 16-core Xeon 6226R (2.90 GHz)
- 8 x 32GB Dual Rank RDIMM 3200MHz Kit
- 4 x 960GB SSD SAS Read Intensive 12Gbps 512e 2.5in Drive PM5-R
- 2 x Intel 10/25GbE dual-port SFP+ PCIe
- 2 x Power Supply (1 PSU) 750W Hot Plug
Overloading is already a significant distinction. Layer synchronization semantics should also be noted: with a discrete API we would have to implement it "manually", and consequently build mechanisms for such synchronization in the user's layer.
Please fill out the registration form. We will contact you shortly to answer your questions.