
The 5 Benefits of Virtualization at the Edge

The concept of virtualization was a breakthrough in computer technology when it was developed 40 years ago to enable shared use of computing resources, increasing efficiency. Virtualization was first adopted by IT as part of select technology applications. But today, with the advent of Edge Computing and a variety of real-world benefits, it has moved into complex plant control systems and other automation scenarios to enable digital transformation. Let’s get to the bottom of why virtualization should be a core strategy for Edge Computing right now.

Virtualization allows the capabilities of a physical machine to be distributed across multiple environments and takes several forms, including desktop, server, or operating system virtualization.

  • Desktop virtualization creates a single simulated environment that can be shared by multiple physical machines at the same time.
  • Server virtualization allows a server to be partitioned so that multiple functions can run simultaneously.
  • Operating system virtualization lets one physical machine run multiple operating systems side by side.

Whichever type of virtualization you choose, the benefits of virtualization at the edge are the same:

1. Reduced Engineering Hours and Greatly Improved Productivity

Instead of performing a single task multiple times on multiple physical machines, the task is only performed once. Depending on the task, engineering hours can be decreased by up to 75%.

2. Improved Speed of Time to Market

Virtualization provides a single pane of glass view, allowing companies to quickly access information and make changes to respond to customer needs.

3. Multiple Revenue Streams for your Organization – Especially System Integrators

Virtualization allows servers to be fully optimized. By partitioning the server, multiple clients running different programs can all use the same server, which allows for multiple sources of revenue.

4. Stronger Competitive Advantage

Moving from physical machines to virtual machines provides a competitive advantage. Virtualization protects data analytics and systems in a simple, secure environment that is easy to deploy. It helps reduce the number of PCs and software licenses needed, while also protecting data by offering high availability and software fault tolerance.

5. Reduced Ongoing Support Burden

In the same way that reducing repetitive tasks saves time, having fewer physical machines reduces the time IT staff spends troubleshooting hardware problems, managing upgrades and patches, and performing backups.

In a recent trend report, industry analyst firm Gartner stated that “Edge Computing will become a dominant factor across virtually all industries and use cases,” naming it one of the top 10 strategic technology trends for 2020, and another firm, IDC, identified edge computing as one of the top 10 key drivers for IT over the next five years.

Virtualization is an essential component of Edge Computing, allowing administrators to quickly and easily manage workloads and shift them between servers. Virtualization plays a critical role in many edge scenarios, including gateways or micro data centers that process data produced by sensors at the edge, and apps running in containers hosted on virtual machines.

Sources: https://blog.stratus.com/5-benefits-virtualization-at-the-edge/


12 Gartner Edge Computing Use Cases We Believe Can Help You Win at Edge Computing

Interest in Edge Computing has grown rapidly over the past several years, and Gartner believes “…by year-end 2023, more than 50% of large enterprises will deploy at least six edge computing use cases deployed for IoT or immersive experiences, versus less than 1% in 2019.” As companies automate and digitally transform their core business operations, Edge Computing will be key to providing the real-time data processing and analysis required to create business intelligence and increase value.

The Cultural Shift

In a recent report titled “Exploring the Edge: 12 Frontiers of Edge Computing,” Gartner details how Edge Computing increases the possibilities, business promise and use cases of IoT significantly. We believe this is important because culturally we are shifting from simple connections powered by technology to more immersive, interactive, and natural connections. The benefits of Edge Computing, like decreased latency, better bandwidth management, and zero-touch operations are key to supporting these new expectations of how people, businesses, and things interact.

Business, Things, And People

Gartner has identified 12 Edge Computing use case categories, divided into three distinct interaction types centered around Business, Things, and People. In our opinion, these interaction types drive activities like industrial automation, streaming video, financial transactions, and smart meters. This framework can be helpful to organizations looking to build a strategy that supports their digital transformation vision and includes multiple Edge Computing use cases, rather than treating each use case as a single deployment.

Creating an Edge Computing Strategy

We believe each organization will need to decide which use case categories are most relevant to helping them reach their Edge Computing and Digital Transformation goals. IT and OT infrastructure managers should work with business leaders to identify opportunities for business value enabled by edge computing deployments as part of the overall digital business strategy. Building out a long-term, multi-year plan for edge computing use case deployments, as well as developing guidelines and standards, will help organizations choose the right vendors to achieve success.

Sources: https://blog.stratus.com/12-gartner-edge-computing-use-cases-win-edge-computing/


3 Developments Influencing Edge Computing in 2020

While Edge Computing will continue to spread across industries in 2020, there will still be growing pains as organizations learn what types of implementations achieve the best results and how to use data to power digital transformation. I see three areas where we will gain increased clarity and confidence in Edge Computing.

Security at the Edge

One concern people have had for a while has been security at the edge. At Stratus we’ve seen that you can’t take the same security technology you’re using in the data center and apply it at the edge. The primary difference is the sheer number of connected devices, and each one represents a potential vulnerability point.

In 2020, security requirements at the edge will become more defined, either through work by industry consortiums, or by end users establishing specific requirements. Security criteria are likely to vary greatly by industry, with financial services companies having different Edge Computing needs and objectives than wastewater treatment facilities, for example. Enterprises should create security controls based on what data is collected, where it is used, and who needs access to it. For example, a device at the edge may not need to be connected to the cloud at all times, and can be configured to only initiate a connection when specific data needs to be transferred.
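The connect-only-when-needed pattern described above can be sketched as a simple local buffering policy. This is a minimal illustration, not any specific product's behavior; the thresholds, class name, and transfer logic are all assumptions for the sake of the example.

```python
import json

# Hypothetical policy: buffer readings locally and only open an uplink
# when enough data has accumulated or a reading crosses an alarm level.
BATCH_SIZE = 10        # assumed batch threshold
ALARM_LEVEL = 90.0     # assumed out-of-range trigger

class EdgeBuffer:
    def __init__(self):
        self.pending = []       # readings not yet sent
        self.sent_batches = []  # stand-in for completed cloud transfers

    def record(self, reading):
        """Store a reading and decide locally whether a transfer is needed."""
        self.pending.append(reading)
        if len(self.pending) >= BATCH_SIZE or reading >= ALARM_LEVEL:
            self._transfer()

    def _transfer(self):
        # A real device would open an authenticated connection to the
        # cloud here; we just record the batch to show the control flow.
        self.sent_batches.append(json.dumps(self.pending))
        self.pending = []

buf = EdgeBuffer()
for value in [20.5, 21.0, 95.2, 20.8]:
    buf.record(value)
# 95.2 exceeds the alarm level, so one transfer of the first three
# readings occurs; 20.8 stays buffered until the next trigger.
```

The point is that the device, not the cloud, decides when a connection is needed, shrinking the window in which the link is exposed.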

IT and OT

Another ongoing edge discussion focuses on how IT and OT teams interact with each other and who is responsible for various aspects of edge implementations. I believe that in 2020, IT and OT will begin to collaborate more effectively as they gain a better understanding of each team member’s role and clearer swim lanes for Edge Computing. As responsibilities become clearer, the organization as a whole will adapt, both in structure and through budget support. There are many benefits to this approach, including delivering a better customer experience through predictive analytics.

OEMs

Finally, I believe that OEM builders will bake more intelligence into their machines in recognition of the shortage of well-qualified technical staff in the field. It’s amazing to see the level of interest coming from people who are making very smart machines. They will add features like predictive maintenance, fault-tolerance and increased autonomy.

Machines will also make better use of the data they share through software techniques like complex event processing. This will reduce the need for supervisory intervention and represent the first steps toward more adaptive machine processes. It also brings us full circle to the IT/OT collaboration mentioned earlier: by incorporating technology into a single machine, these smart machines reduce complexity while producing data that drives business results.

So as we enter 2020, I think we will see enterprises recognizing more use cases for Edge Computing and reacting accordingly by changing internal staffing structures, defining the data they need and how to best manage it, and being more specific about requirements from vendors and really optimizing Edge Computing as they move forward in their digital transformation initiatives.

Sources: https://blog.stratus.com/3-developments-influencing-edge-computing-in-2020/


Industrial Automation Growth Lags, but IIoT Presents an Opportunity for Early Adopters of Edge Computing

Recent results from industrial automation companies have been uneven, as forecasts in late 2019 for investment in U.S. manufacturing declined for the first time in 10 years. Part of this can be traced to investor uncertainty due to tariffs, the U.S./China relationship and the recently passed USMCA. This mirrors a pattern seen in the EU, UK, and Japan.

Factors Influencing IIoT Growth

But for the Industrial Internet of Things (IIoT), the outlook is more positive. Rather than declining, the IIoT market has been projected to grow between 29% and 40% between 2019 and 2023 depending on which analyst you talk to, with a general consensus in the range of a 33% CAGR. This growth in adoption of IIoT will be driven by developments like the rollout of 5G, the increased adoption of wearable technology, continued development of smart operations and connected assets and interest in developing smart buildings and smart cities.
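To put the consensus figure in perspective, a quick sketch of how a 33% CAGR compounds over the 2019–2023 window (the rate is the consensus estimate cited above; the arithmetic is just illustrative):

```python
# A 33% CAGR compounded over the four years from 2019 to 2023
# roughly triples the market size relative to its 2019 baseline.
cagr = 0.33
years = 4
growth_factor = (1 + cagr) ** years
print(round(growth_factor, 2))  # 3.13 -- about 3.13x the 2019 market size
```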

For manufacturing organizations, the benefits of IIoT deployments are many, including increased efficiency, increased productivity, decreased maintenance costs, and supply chain optimization. These deployments also open new revenue opportunities for suppliers as they work to better serve their customer base and deliver a higher degree of customer satisfaction.

Digital Transformation is Key to Success

To achieve these benefits, manufacturers are focusing on Digital Transformation. A Deloitte Industry 4.0 survey of 361 executives across 11 countries shows that 94% report digital transformation as their organization’s top strategic initiative. Increasingly, Edge Computing is powering these digital business interactions. Wherever real-time processing is critical, when large quantities of data are being produced and when minimizing downtime is imperative, Edge Computing is key. Gartner believes that it’s the interactions between people, businesses, and things that will define Edge Computing use cases.

From our research, we know that more than 50% of enterprises are already implementing or testing Edge Computing use cases. The most popular use cases include device failure detection, advanced process control, asset performance, and SCADA/HMI. Gartner believes that by year-end 2023, more than 50% of large enterprises will deploy at least six edge computing use cases for IoT or immersive experiences[1].

Smart enterprises looking to digitally transform and disrupt the status quo will reap the benefits of making capital expenditures now and be ready for the increased demand and opportunity when the market rebounds. More conservative organizations may get left behind, lose market share and may not be able to take full advantage of the benefits of Edge Computing when it really matters.

[1] Gartner Exploring the Edge: 12 Frontiers of Edge Computing, 6 May 2019, Thomas Bittman

Sources: https://blog.stratus.com/industrial-automation-growth-lags-iiot-presents-opportunity-early-edge-computing-adopters/


The Risks and Rewards of Virtualization

Virtualization is more than just an industry buzzword or IT trend. This technology enables multiple instances of an operating environment to run on a single piece of hardware. These virtual machines (VMs) then run applications and services just like any other physical server and eliminate the costs related to purchasing and supporting additional servers.

Virtualization delivers other benefits, too, such as the faster provisioning of applications and resources. Additionally, it can increase IT productivity, efficiency, agility, and responsiveness, freeing IT resources to focus on other tasks and initiatives.

How did virtualization evolve?

To best understand the business case for virtualization – as well as potential virtualization risks – we need to look back to the time when mainframes ruled the computing world.

Mainframes were used by large organizations to manage their most critical applications and systems. Yet they could also act as servers, offering the ability to host multiple instances of operating systems at the same time. In doing so, they pioneered the concept of virtualization.

Many organizations were quick to see the potential. They began carving up workloads for different departments or users to give them dedicated compute resources for more capacity and better performance. This was the very beginning of the client-server model.

In most cases, one application ran on one server, which was accessed by many different PCs. Other advancements, such as the emergence of Intel’s x86 technology, helped make client-server computing faster, cheaper, and more effective.

It all worked great, until its popularity caught up with it. Eventually, it seemed like everyone in the company wanted a server to host their application. This resulted in too many servers – “server sprawl” – that quickly filled up even the largest data centers.

Space wasn’t the only concern. All these servers were expensive and required extensive services to support and maintain them. Overall IT costs surged, and many companies began looking for a new approach.

One solution: A virtualized approach for any servers using x86 technology. With virtualization, one physical server could now host many VMs and could provide the full isolation and resources each application required.

A new approach leads to new concerns

All of this worked well, except for the new concern that the virtualization layer – the hypervisor – could fail. Worse, a single failure in the virtualized environment would trigger a domino effect where all virtualized applications would also fail, leading to unacceptable downtime risk. To prevent this scenario, many companies chose to virtualize their non-production systems. This way, if any failure did occur, critical systems wouldn’t go down.

As technology improved, organizations realized that hypervisors could deliver the performance and stability they required, and they started virtualizing all their applications, even production workloads.

On one hand, the effort wasn’t difficult, and seemed to pave the way for many significant benefits. Yet on the other, it did present new risks related to hardware and availability. For example, consider the case where one company might have 20 business-critical VMs on one server, only to have it fail.

How long would it take to resolve the problem? How much would this downtime cost? What long-term implications would it have on customers, prospects, and the company’s reputation? All of these are reasonable questions, but often, don’t have satisfactory answers.
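Those questions can be made concrete with a back-of-the-envelope calculation. The cost and recovery figures below are purely illustrative assumptions, not industry benchmarks:

```python
# Hypothetical downtime cost for the scenario above: 20 business-critical
# VMs on a single failed host. Both figures are assumptions chosen only
# to illustrate the arithmetic.
vms_down = 20
cost_per_vm_hour = 5_000   # assumed revenue/productivity loss per VM-hour
hours_to_recover = 4       # assumed time to restore the host and its VMs

total_cost = vms_down * cost_per_vm_hour * hours_to_recover
print(f"${total_cost:,}")  # $400,000 for a single four-hour outage
```

Even with modest per-VM figures, consolidating many critical workloads onto one host concentrates the cost of a single failure.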

This scenario points to the need for the right hardware infrastructure and always-available systems as part of any successful virtualization strategy. We’ll cover these topics – while covering some common misconceptions – in our next article. Stay tuned.


Questions for IIoT Success

The industrial internet of things is sweeping across industries from food and beverage to manufacturing, and with the rise of IIoT comes the possibility of new efficiencies and more optimized operations, leading to new opportunities to control risk and decrease costs.

The real tipping point in the transition to IIoT, however, is deriving meaningful ROI, which is not always easy. While the road to IIoT may be marked with twists and turns, it does not have to be fraught with so much uncertainty. There is an essential balancing act between managing existing systems and processes and introducing new technologies. Combine this with the need to remain up and running with zero downtime, and the task might feel impossible.

To ensure success when undergoing an IIoT project, start by asking yourself these four questions (as explained in more detail on IoT Agenda):

1. How can we encourage synergies across teams?
2. Are applications in the right place?
3. Are you set up to scale the edge effectively?
4. What’s the best way to secure this new connected edge?

For most, the path to IIoT will be an evolutionary journey. Before you can start to tap the potential of next-generation, big data-driven, intelligent automation, you must modernize the foundation on which it is built. And that means taking a hard look at existing operational technology.

Modernizing your infrastructure will deliver incredible benefits in terms of reliability and manageability, creating a future-proof platform on which to build your organization’s IIoT strategy.

Want to hear more on common questions surrounding IIoT?

Check out our short video with Jason Andersen, Vice President of Business Line Management, as he addresses common questions about Industrial IoT.


What’s the Real Cost of Integrating High Availability Software?

If you were trick-or-treating in October of 1999, chances are your bag of treats held considerably fewer Hershey’s products. In September of that same year, the company admitted to having issues with its newly implemented order-taking and distribution system.

The company had spent approximately $112 million on a combination of ERP, SRM, and supply chain management software. Hershey’s merger of this new software with its existing systems experienced failures that prevented the company from fulfilling $100 million worth of customer orders. Not a good Halloween for Hershey.

Hershey had the financial reserves to weather the costly implementation setback, but the failure of the new software to integrate seamlessly with its existing systems more than doubled the cost of the upgrade in the end. Preventing downtime is critical in all businesses, but it is especially high on the list in manufacturing companies like Hershey.

For example, when implementing a manufacturing execution system (MES) application, the risk is considerably higher due to the complex nature of production. Critical Manufacturing’s article “10 Reasons Why So Many MES Projects Fail,” explains that there is typically, “a complex web of components which broadly classified are- material, man and machine.”

The article goes on to say that, “even though there might be no other MES earlier installed, except in the case of a completely new factory, it is very unlikely that there are no other applications on the shop-floor. An MES is supposed to integrate the operation along with the existing IT infrastructure, so the application would be a failure if separate systems exist and users need to now work with both these and the MES separately. MES application needs to be a single comprehensive platform for optimum.”

Stratus’s Downtime Prevention Buyer’s Guide covers the six questions you should be asking to prevent downtime. Stratus suggests that before agreeing to integrate high availability software such as MES, Supervisory Control and Data Acquisition (SCADA), or Historian systems, you ask: “Can your solution integrate seamlessly into existing computing environments with no application changes required?”

“Some availability solutions integrate more easily into existing computing environments than others. Certain solutions may require that you make changes to your existing applications — a process that is time-consuming and typically requires specialized IT expertise.”

The guide goes on to give an example of a potential issue, “high availability clusters may need cluster-specific APIs to ensure proper fail-over. If ease of deployment and management are top priorities for your organization, you may want to consider a fault-tolerant solution that allows your existing applications to run without the risk and expense associated with modifications, special programming, and complex scripting.”


Gartner Research Emphasizes the Importance of Edge Computing

The term “edge computing” may seem like another technical buzzword, but respected research firm Gartner believes that edge computing is fast becoming an industry standard. The world is getting faster and our need for real-time data processing is picking up as well.

So, what exactly is the edge? Edge computing refers to solutions that facilitate data processing at or near the source of data generation. For example, in the context of the Internet of Things (IoT), the sources of data generation are usually things with sensors or embedded devices. Edge computing serves as the decentralized extension of campus networks, cellular networks, data center networks, or the cloud.

In the newsletter, we share Gartner research that boldly states that “the edge will eat the cloud” and that, “the architecture of IT will flip upside down, as data and content move from centralized cloud and data centers to the edge, pulling compute and storage with it.” Gartner predicts that as the demand for greater immersion and responsiveness grows, so will edge computing. “Edge computing provides processing, storage and services for things and people far away from centralized cores, and physically close to things and people.”

The offline-first functionality that the edge provides also addresses issues like latency, bandwidth, autonomy, and security. For example, when a question is posed to a device like Alexa or Google Home, there is an almost imperceptible lag while the data is retrieved from the cloud and relayed to the user, a lag that becomes dangerous when applied to other emerging technologies.

Gartner breaks it down, “For a self-driving car traveling 70 miles per hour, 100 ms equals 10 feet. But if we have two self-driving cars, or two dozen all traveling toward the same location, 100 ms is an eternity. A lot can happen in a few milliseconds – lives could be at risk.” The cloud simply can’t keep up.
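Gartner’s figure checks out with a simple unit conversion, sketched here for readers who want the arithmetic:

```python
# Distance covered in 100 ms at 70 mph: convert mph to feet per second,
# then multiply by the 0.1-second latency window.
speed_mph = 70
feet_per_mile = 5280
seconds_per_hour = 3600

speed_fps = speed_mph * feet_per_mile / seconds_per_hour  # ~102.7 ft/s
distance_100ms = speed_fps * 0.100                        # ~10.3 ft
print(round(distance_100ms, 1))  # 10.3 -- roughly the "10 feet" Gartner cites
```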

The Gartner research presented also discusses the importance of edge technology as IoT continues to explode. “More and more physical objects are becoming networked and contain embedded technology to communicate and sense or interact with their internal states or the external environment. By 2020, 20 billion “things” will be connected to the internet.” Gartner states, “A more interactive, immersive human-machine interface will force data and computing to move closer physically, and to live in the world with people.”