
The Risks and Rewards of Virtualization

Virtualization is more than just an industry buzzword or IT trend. This technology enables multiple instances of an operating environment to run on a single piece of hardware. These virtual machines (VMs) run applications and services just like any physical server would, eliminating the costs of purchasing and supporting additional servers.
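To make the model concrete, here is a minimal sketch of what "many VMs on one host" looks like in practice. It assumes the libvirt-python bindings and a local QEMU/KVM hypervisor – tools this post doesn't prescribe – and simply lists the virtual machines sharing one physical server:

```python
# Minimal sketch: enumerate the VMs consolidated on a single physical host.
# Assumes the libvirt-python package and a local QEMU/KVM hypervisor;
# any libvirt-managed hypervisor URI would work the same way.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "stopped"
        print(f"{dom.name()}: {status}")
finally:
    conn.close()
```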

Virtualization delivers other benefits, too, such as the faster provisioning of applications and resources. Additionally, it can increase IT productivity, efficiency, agility, and responsiveness, freeing IT resources to focus on other tasks and initiatives.

How did virtualization evolve?

To best understand the business case for virtualization – as well as potential virtualization risks – we need to look back to the time when mainframes ruled the computing world.

Mainframes were used by large organizations to manage their most critical applications and systems. Yet they could also act as servers, offering the ability to host multiple instances of operating systems at the same time. In doing so, they pioneered the concept of virtualization.

Many organizations were quick to see the potential. They began carving up workloads for different departments or users to give them dedicated compute resources for more capacity and better performance. This was the very beginning of the client-server model.

In most cases, one application ran on one server, which was accessed by many different PCs. Other advancements, such as the emergence of Intel’s x86 architecture, helped make client-server computing faster, cheaper, and more effective.

It all worked well, until its popularity caught up with it. Eventually, it seemed like everyone in the company wanted a server to host their application. This resulted in too many servers – “server sprawl” – that quickly filled even the largest data centers.

Space wasn’t the only concern. All these servers were expensive and required extensive services to support and maintain them. Overall IT costs surged, and many companies began looking for a new approach.

One solution: a virtualized approach for servers using x86 technology. With virtualization, one physical server could now host many VMs while providing the full isolation and resources each application required.

A new approach leads to new concerns

All of this worked well, except for a new concern: the virtualization layer – the hypervisor – could fail. Worse, a single failure in the virtualized environment could trigger a domino effect in which every virtualized application would also fail, creating an unacceptable downtime risk. To prevent this scenario, many companies chose to virtualize only their non-production systems. That way, if a failure did occur, critical systems wouldn’t go down.

As the technology improved, organizations realized that hypervisors could deliver the performance and stability they required, and they started virtualizing all their applications, even production workloads.

On one hand, the effort wasn’t difficult and seemed to pave the way for significant benefits. On the other, it presented new risks related to hardware and availability. For example, consider a company running 20 business-critical VMs on a single server, only to have that server fail.

How long would it take to resolve the problem? How much would this downtime cost? What long-term implications would it have for customers, prospects, and the company’s reputation? All of these are reasonable questions, but they often don’t have satisfactory answers.
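To see why these questions matter, here is a rough back-of-the-envelope sketch; every number in it is an illustrative assumption, not a figure from this post:

```python
# Back-of-the-envelope downtime cost for the scenario above: one failed
# host takes down all 20 business-critical VMs at once.
# The per-hour loss and recovery time are assumptions for illustration.
vms_on_host = 20            # business-critical VMs on the failed server
loss_per_vm_hour = 10_000   # assumed cost of one VM-hour of downtime ($)
hours_to_recover = 4        # assumed time to repair or replace the host

outage_cost = vms_on_host * loss_per_vm_hour * hours_to_recover
print(f"Estimated cost of the outage: ${outage_cost:,}")  # $800,000
```

Even with modest assumptions, consolidating many critical workloads onto one host turns a single hardware failure into a very expensive event.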

This scenario points to the need for the right hardware infrastructure and always-available systems as part of any successful virtualization strategy. We’ll cover these topics – and clear up some common misconceptions – in our next article. Stay tuned.


Questions for IIoT Success

The industrial internet of things (IIoT) is sweeping across industries from food and beverage to manufacturing, and with its rise comes the possibility of new efficiencies and more optimized operations – leading, in turn, to new opportunities to control risk and decrease costs.

The real tipping point in the transition to IIoT, though, comes when you start deriving meaningful ROI – which is not always easy. While the road to IIoT may be marked with twists and turns, it does not have to be fraught with uncertainty. There is an essential balancing act between managing existing systems and processes and introducing new technologies. Combine this with the need to remain up and running with zero downtime, and the task can feel impossible.

To ensure success when undergoing an IIoT project, start by asking yourself these four questions (as explained in more detail on IoT Agenda):

1. How can we encourage synergies across teams?
2. Are applications in the right place?
3. Are you set up to scale the edge effectively?
4. What’s the best way to secure this new connected edge?

For most, the path to IIoT will be an evolutionary journey. Before you can start to tap the potential of next-generation, big data-driven, intelligent automation, you must modernize the foundation on which it is built. And that means taking a hard look at your existing operational technology.

Modernizing your infrastructure will deliver significant benefits in reliability and manageability, creating a future-proof platform on which to build your organization’s IIoT strategy.

Want to hear more on common questions surrounding IIoT?

Check out our short video with Jason Andersen, Vice President of Business Line Management, as he provides insight on and addresses common questions in industrial IoT.


What’s the Real Cost of Integrating High Availability Software?

If you were trick-or-treating in October of 1999, chances are your bag of treats held considerably fewer Hershey products. In September of that same year, the company admitted to having issues with its newly implemented order-taking and distribution system.

The company had spent approximately $112 million on a combination of ERP, CRM, and supply chain management software. Failures in integrating this new software with Hershey’s existing systems prevented the company from fulfilling $100 million worth of customer orders. Not a good Halloween for Hershey.

Hershey’s had the financial reserves to weather the costly implementation setback but the failure of the new software to seamlessly integrate with their existing systems more than doubled the cost of their upgrade in the end. Preventing downtime is critical in all businesses, but is especially high on the list in manufacturing company’s like Hershey.

For example, when implementing a manufacturing execution system (MES) application, the risk is considerably higher due to the complex nature of production. Critical Manufacturing’s article “10 Reasons Why So Many MES Projects Fail” explains that there is typically “a complex web of components which broadly classified are- material, man and machine.”

The article goes on to say that, “even though there might be no other MES earlier installed, except in the case of a completely new factory, it is very unlikely that there are no other applications on the shop-floor. An MES is supposed to integrate the operation along with the existing IT infrastructure, so the application would be a failure if separate systems exist and users need to now work with both these and the MES separately. MES application needs to be a single comprehensive platform for optimum.”

Stratus’s Downtime Prevention Buyer’s Guide, talks about the six questions you should be asking to prevent downtime. Stratus suggests before agreeing to high availability software integration like MES, Supervisory Control and Data Acquisition (SCADA) or Historian systems that you ask, “Can your solution integrate seamlessly into existing computing environments with no application changes required?”.

“Some availability solutions integrate more easily into existing computing environments than others. Certain solutions may require that you make changes to your existing applications — a process that is time-consuming and typically requires specialized IT expertise.”

The guide goes on to give an example of a potential issue: “high availability clusters may need cluster-specific APIs to ensure proper fail-over. If ease of deployment and management are top priorities for your organization, you may want to consider a fault-tolerant solution that allows your existing applications to run without the risk and expense associated with modifications, special programming, and complex scripting.”
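To illustrate what that warning means in practice, here is a minimal, hypothetical sketch of the kind of custom health-check and failover scripting a cluster can push onto the integrator. The node names are invented, the ping heartbeat is a stand-in for a real cluster protocol, and nothing here comes from an actual cluster API:

```python
# Hypothetical sketch of cluster-style failover scripting (Linux ping flags).
# A real cluster would replace the ping with its own heartbeat mechanism and
# the print with cluster-specific API calls; that custom integration work is
# exactly what the buyer's guide warns about.
import subprocess
import time

PRIMARY = "mes-node-a"   # invented hostnames, for illustration only
STANDBY = "mes-node-b"

def node_is_healthy(host: str) -> bool:
    """One ping as a stand-in for a real cluster heartbeat."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

for _ in range(3):  # a few heartbeat cycles, shortened for the demo
    if not node_is_healthy(PRIMARY):
        print(f"{PRIMARY} unreachable; promoting {STANDBY} to primary")
        break
    time.sleep(5)
```

A fault-tolerant platform absorbs this logic into the infrastructure itself, which is why it can run existing applications unmodified.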


Gartner Research Emphasizes the Importance of Edge Computing

The term “edge computing” may seem like another technical buzzword, but respected research firm Gartner believes that edge computing is fast becoming an industry standard. The world is getting faster and our need for real-time data processing is picking up as well.

So, what exactly is the edge? Edge computing refers to solutions that facilitate data processing at or near the source of data generation. For example, in the context of the Internet of Things (IoT), the sources of data generation are usually things with sensors or embedded devices. Edge computing serves as a decentralized extension of campus networks, cellular networks, data center networks, or the cloud.
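As a simple illustration of processing at the source, here is a minimal sketch; the sensor values, threshold, and function names are all invented for the example:

```python
# Minimal edge-processing sketch: summarize raw sensor readings locally and
# forward only the small summary, instead of shipping every raw reading to a
# distant cloud. All names and thresholds here are illustrative assumptions.
from statistics import mean

ALERT_THRESHOLD_C = 85.0  # assumed temperature alert level

def process_at_edge(readings: list[float]) -> dict:
    """Reduce raw readings to the few numbers worth sending upstream."""
    peak = max(readings)
    return {
        "count": len(readings),
        "mean_c": round(mean(readings), 2),
        "max_c": peak,
        "alert": peak > ALERT_THRESHOLD_C,
    }

# Only this summary leaves the device; the raw stream stays at the edge.
print(process_at_edge([71.2, 73.0, 88.4, 70.9]))
```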

In the newsletter, we share Gartner research that boldly states that “the edge will eat the cloud” and that “the architecture of IT will flip upside down, as data and content move from centralized cloud and data centers to the edge, pulling compute and storage with it.” Gartner predicts that as the demand for greater immersion and responsiveness grows, so will edge computing. “Edge computing provides processing, storage and services for things and people far away from centralized cores, and physically close to things and people.”

The offline-first functionality that the edge provides also addresses issues like latency, bandwidth, autonomy, and security. For example, when a question is posed to a device like Alexa or Google Home, there is an almost imperceptible lag while the data is retrieved from the cloud and relayed to the user. That lag becomes dangerous when applied to other emerging technologies.

Gartner breaks it down: “For a self-driving car traveling 70 miles per hour, 100 ms equals 10 feet. But if we have two self-driving cars, or two dozen all traveling toward the same location, 100 ms is an eternity. A lot can happen in a few milliseconds – lives could be at risk.” The cloud simply can’t keep up.
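Gartner’s figure is easy to verify; a few lines of arithmetic using only the quoted speed and latency confirm it:

```python
# Check the quoted figure: distance covered at 70 mph during a 100 ms
# round trip to the cloud (1 mile = 5,280 feet).
speed_mph = 70
feet_per_second = speed_mph * 5280 / 3600  # ~102.7 ft/s
distance_ft = feet_per_second * 0.100      # distance covered in 100 ms
print(f"{distance_ft:.1f} feet")           # ~10.3 ft, roughly the 10 feet quoted
```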

The Gartner research presented also discusses the importance of edge technology as IoT continues to explode. “More and more physical objects are becoming networked and contain embedded technology to communicate and sense or interact with their internal states or the external environment. By 2020, 20 billion ‘things’ will be connected to the internet.” Gartner states, “A more interactive, immersive human-machine interface will force data and computing to move closer physically, and to live in the world with people.”