What to do with old hardware and storage when migrating to hyperconverged infrastructure


When migrating from classic infrastructure to hyperconverged infrastructure, some of the hardware from the old converged infrastructure, such as storage arrays, goes unclaimed. This is what happens when companies adopt vStack, a hyperconverged platform developed at ITGLOBAL.COM. If a company had 24 Dell servers and three storage systems, on switching to vStack it assembles two clusters of 12 nodes each. The servers keep working, but the storage systems fall out of the new infrastructure.

This raises the question of how to find a new use for them while optimizing costs. Together with vStack, we've looked at a few options and found out how you can put old hardware to effective use.

Ensure that the test environments work

Developing any complex software product quickly slows down if you do not have a test environment for the whole solution. A test environment requires a commensurate amount of resources: you do not have to allocate as much as in production, but a comparable resource base makes testing more predictable and eliminates the problems that come with an unequal resource base.

But over time, the number of environments may approach the number of teams. Sooner or later, the manager or CIO responsible for the budget discovers the following picture: each team has "carved out" its own test environment.

Even if initially there are fewer environments than teams, over time team leads come to the manager and say: "It's not quite fair: some teams have their own environment and we don't. How are we worse? In the last sprint, the interface team rolled out an update and broke the whole setup. They ended up having a successful sprint, and we lost two weeks of work."

Evgeny Gavrilov, vStack CEO

If there are 12 teams on the product, then there are 12 environments. The total budget for test environments ends up several times higher than for the production environment, and, of course, any manager wants to reduce it.

There are two ways to do this. The first option is to use old hardware that has been removed from vendor support. The second is virtualization, which allows resource oversubscription.

In the second case, no environment needs dedicated hardware. Even 12 test environments are not an issue, because they do not consume 100% of the shared resources 24/7.

For example, the interface team deploys on Tuesdays and runs load tests on Thursdays; another team deploys on Wednesdays and load-tests on Fridays. One team's testing takes an hour, another's an hour and a half, and a third's only 20-30 minutes. The teams use the same hardware efficiently and do not compete for its resources.
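To see why oversubscription works here, you can compute the peak concurrent demand across the teams' testing windows. The following sketch uses entirely hypothetical team names, schedules, and vCPU figures; it simply shows that staggered windows never add up to the sum of nominal allocations.

```python
# Hypothetical schedule: (team, weekday, start hour, duration in hours, vCPUs used)
windows = [
    ("interface", "Tue", 10, 1.0, 64),
    ("backend",   "Wed", 10, 1.5, 64),
    ("billing",   "Thu", 14, 0.5, 64),
]

def peak_concurrent_vcpus(windows):
    """Return the largest number of vCPUs in use at any single moment."""
    events = []
    for _, day, start, dur, vcpus in windows:
        events.append((day, start, vcpus))         # window opens
        events.append((day, start + dur, -vcpus))  # window closes
    order = {"Mon": 0, "Tue": 1, "Wed": 2, "Thu": 3, "Fri": 4}
    peak = current = 0
    # Sort by (day, time); a close (-vcpus) sorts before an open at the same instant
    for _, _, delta in sorted(events, key=lambda e: (order[e[0]], e[1], e[2])):
        current += delta
        peak = max(peak, current)
    return peak

print(peak_concurrent_vcpus(windows))  # 64: the windows never overlap
```

With these numbers, the three environments nominally need 192 vCPUs, but peak concurrent demand is 64, so a single shared host with 3x oversubscription covers all of them.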

It is enough to place the environments in a single virtual environment, which does not require expensive hardware. Such a virtual environment can even be deployed on consumer-grade hardware, saving a decent amount of money.

Keep redundancy resources running

Simple applications that don't require storage work well in public clouds, but as soon as something more complex comes along, other needs arise: for example, synchronizing data between active and standby servers, or taking regular backups.

Let's take a large company, a well-known EDI provider and operator. Its application has never worked in public clouds and will not: it uses huge databases and requires its own infrastructure and environments with characteristics not available in public clouds.

As with any infrastructure with large databases, you need to maintain some number of replicas of those databases. A replica ensures that the data doesn't vanish in an instant. A company may keep 2-3 production database replicas on different hardware, including old storage systems.

Many companies still run outdated storage systems that no longer deliver adequate performance and serve a single function: storing data. They stay in that role until the equipment stops working altogether.

Allocate it to information systems with high RTO and RPO

Let's imagine a company that uses a large number of massive information systems: ERP, CRM, BI, ABS, and others. Some of them are super-valuable: their availability directly affects key business metrics, so their RTO (recovery time objective) and RPO (recovery point objective) targets should be minimal, close to zero.

However, there are less important systems, for which an hour of downtime will not affect the company's performance. For these information systems, you can use old storage systems. There is no universal answer as to which systems deserve the unclaimed hardware: the company should decide based on each system's RTO/RPO and its degree of impact on the business.
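The decision rule above can be sketched as a simple tiering policy. The thresholds, system names, and the `business_critical` flag below are illustrative assumptions, not a prescription; each company would tune them to its own RTO/RPO targets.

```python
from dataclasses import dataclass

@dataclass
class InfoSystem:
    name: str
    rto_minutes: int       # max tolerable downtime
    rpo_minutes: int       # max tolerable data loss
    business_critical: bool

def storage_tier(s: InfoSystem) -> str:
    """Hypothetical policy: only non-critical systems that tolerate an hour
    or more of downtime and data loss are placed on old storage."""
    if s.business_critical or s.rto_minutes < 60 or s.rpo_minutes < 60:
        return "primary storage"
    return "old storage"

systems = [
    InfoSystem("ERP", rto_minutes=5, rpo_minutes=1, business_critical=True),
    InfoSystem("BI reporting", rto_minutes=240, rpo_minutes=1440, business_critical=False),
]
for s in systems:
    print(s.name, "->", storage_tier(s))
# ERP -> primary storage
# BI reporting -> old storage
```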

What is the "old hardware problem"?

The migration of complex information systems or infrastructures always happens "on the side," on "other" hardware. This is because migration itself is a very long process and may turn out to be unsuccessful; meanwhile, the existing equipment or infrastructure keeps operating.

It is therefore incorrect to argue that the "old hardware problem" arises exclusively in the transition to hyperconverged solutions. It is just as present when complex information systems migrate from one storage system to another, from one SAN to another, and so on. The problem of old hardware is not new, and neither is its solution.
