We have successfully tested a variety of equipment to get vStack solutions into your business as quickly as possible

Check your hardware compatibility with vStack using our matrix

Switching and server equipment

Switching equipment

As a switching solution, any modern enterprise-class switch can be used to interconnect all resources, for example:

  • Cisco Nexus 3524X
  • Extreme Networks X590-24x-1q-2c
  • Huawei CloudEngine S6730-H24X6C
  • Juniper EX4600-40F-AFI
  • Eltex MES5448

Network structure

At least two switches supporting tagged VLANs and LACP/vPC; total number of switch ports = number of nodes in the cluster x 4
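The port-count rule above can be sketched as a small helper (the function name is ours, for illustration only):

```python
def switch_ports_required(nodes: int) -> int:
    """Total switch ports needed for a vStack cluster:
    each node consumes 4 ports, spread across the two
    LACP/vPC switches for redundancy."""
    return nodes * 4

# A minimal 4-node cluster needs 16 ports in total,
# i.e. 8 ports on each of the two switches.
print(switch_ports_required(4))
```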

Server hardware

vStack cluster nodes can be any modern x86 servers with Intel processors (four cores or more).

Example configuration of a single node:

  • 1 x 2U Intel server platform
  • 2 x 16-core Xeon 6226R (2.90 GHz)
  • 8 x 32GB Dual Rank RDIMM 3200MHz Kit
  • 4 x 960GB SSD SAS Read Intensive 12Gbps 512e 2.5in Drive PM5-R
  • 2 x Intel 10/25GbE dual-port SFP+ PCIe adapter
  • 2 x 750W Hot Plug Power Supply

or:

  • 1 x 2U Intel server platform
  • 2 x 24-core Xeon Gold 6258R (2.70 GHz)
  • 8 x 64GB Dual Rank RDIMM 3200MHz Kit
  • 12 x 8TB NL-SAS HDD 3.5in Drive
  • 2 x Intel 10/25GbE dual-port SFP+ PCIe adapter
  • 2 x 750W Hot Plug Power Supply

Possible cluster configurations

The minimum possible cluster topology is 3+1 (4 nodes; n+1).

Host configuration (minimum recommended):

  • CPU: Intel® Xeon® v5 or later with at least 8 cores (for example, Intel® Xeon® Processor E5-2448L)
  • RAM: 256GB
  • Disks: at least 960GB each; x+2 drives per node, where x = number of nodes in the cluster
  • HBA: 1 x LSI SAS 3216 or higher
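The drive-count rule (x+2 drives per node, where x is the number of nodes) can be sketched as follows; the function names are illustrative, not part of vStack:

```python
def drives_per_node(nodes: int) -> int:
    # x + 2 drives per node, where x is the number of nodes in the cluster
    return nodes + 2

def drives_in_cluster(nodes: int) -> int:
    # every node carries the same drive count
    return nodes * drives_per_node(nodes)

# Minimum 4-node cluster: 6 drives per node, 24 in total
print(drives_per_node(4), drives_in_cluster(4))
# Recommended 9-node cluster: 11 drives per node, 99 in total
print(drives_in_cluster(9))
```

Note that the 9-node result (99 drives) matches the 99 x 1.92TB SSDs listed in the recommended cluster configuration below.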

Minimum cluster configuration 3+1 (4 nodes; n+1):

  • 4 x 2U Intel server platform
  • 8 x 8-core Intel® Xeon® Processor E5-2448L or higher
  • 32 x 32GB Dual Rank RDIMM 3200MHz Kit
  • 16 x 1TB SSD Intel DC P4510 Series, PCIe 3.1 x4, 3D2 TLC, 2.5"
  • 8 x 240GB SSD Samsung SM883
  • 8 x 750W Hot Plug Power Supply
  • 8 x Intel 10/25GbE dual-port SFP+ PCIe adapter

Recommended cluster configuration 7+2 (9 nodes; n+2):

  • 9 x 2U Intel server platform
  • 18 x 24-core Intel® Xeon® Gold 6258R
  • 72 x 64GB Dual Rank RDIMM 3200MHz Kit
  • 99 x 1.92TB SSD Samsung PM1643a, V-NAND, SAS, 2.5"
  • 18 x 750W Hot Plug Power Supply
  • 18 x Intel 10/25GbE dual-port SFP+ PCIe adapter

Maximum cluster configuration of 24 nodes:

  • 24 x 2U Intel server platform
  • 48 x 24-core Intel® Xeon® Gold 6258R
  • 192 x 64GB Dual Rank RDIMM 3200MHz Kit
  • 576 x 1.92TB SSD Samsung PM1643a, V-NAND, SAS, 2.5"
  • 48 x 750W Hot Plug Power Supply
  • 48 x Intel 10/25GbE dual-port SFP+ PCIe adapter

Successfully tested equipment with vStack

Intel, SuperMicro (SSG-2029P-ACR24L, SSG-6049P-E1CR24L, X11DPi-N(T), SYS-6029P-WTRT), AIC (HP201-AD), Huawei (FusionServer 2488H V5), YADRO (VEGMAN S220 Server), Lenovo (ThinkSystem SR650)

Tested components by category:

  • System Disks: Intel, Samsung, Seagate, Western Digital, Micron, Patriot, Lenovo, Kingston
  • System boards: X11DPH-T, AIC AIDOS, BC62MBHA, MBDX86781001A4, 7X06CTO1WW, X11DPi-N, X11DDW-NT
  • CPU: Intel Xeon Silver, Intel Xeon Gold
  • Shared Disks: KINGSTON SEDC500 J2.7, INTEL SSDSC2KB96 0132, WUS4BB019D7P3E1 NVMe, SAMSUNG MZ7L3960, SEAGATE ST2400MM0129, INTEL SSDSC2KB01 0110, SAMSUNG MZ7KH1T9, MTFDDAK960TDC-1A MG39
  • NIC: Intel XXV710 25GbE, Intel X710 10GbE, Intel 82599ES 10GbE, Mellanox MT27800 Family [ConnectX-5], Mellanox MT27710 Family [ConnectX-4 Lx], Emulex OneConnect NIC (Skyhawk)
Do you still have questions?

Contact us and our managers will advise you.
