stratus-blog2

What’s the Real Cost of Integrating High Availability Software?

If you were trick-or-treating in October of 1999, chances are your bag of treats held considerably fewer Hershey products. In September of that same year, the company admitted to having issues with their newly implemented order-taking and distribution system.

The company had spent approximately $112 million on a combination of ERP, SRM and supply chain management software. Hershey's attempt to merge the new software with their existing systems ran into failures that prevented the company from fulfilling $100 million worth of customer orders. Not a good Halloween for Hershey.

Hershey's had the financial reserves to weather the costly implementation setback, but the failure of the new software to integrate seamlessly with their existing systems more than doubled the cost of the upgrade in the end. Preventing downtime is critical in all businesses, but it is especially high on the list in manufacturing companies like Hershey.

For example, when implementing a manufacturing execution system (MES) application, the risk is considerably higher due to the complex nature of production. Critical Manufacturing's article "10 Reasons Why So Many MES Projects Fail" explains that there is typically "a complex web of components which broadly classified are material, man and machine."

The article goes on to say that, “even though there might be no other MES earlier installed, except in the case of a completely new factory, it is very unlikely that there are no other applications on the shop-floor. An MES is supposed to integrate the operation along with the existing IT infrastructure, so the application would be a failure if separate systems exist and users need to now work with both these and the MES separately. MES application needs to be a single comprehensive platform for optimum.”

Stratus's Downtime Prevention Buyer's Guide talks about the six questions you should be asking to prevent downtime. Stratus suggests that before agreeing to integrate high availability software such as MES, Supervisory Control and Data Acquisition (SCADA), or Historian systems, you ask, "Can your solution integrate seamlessly into existing computing environments with no application changes required?"

“Some availability solutions integrate more easily into existing computing environments than others. Certain solutions may require that you make changes to your existing applications — a process that is time-consuming and typically requires specialized IT expertise.”

The guide goes on to give an example of a potential issue, “high availability clusters may need cluster-specific APIs to ensure proper fail-over. If ease of deployment and management are top priorities for your organization, you may want to consider a fault-tolerant solution that allows your existing applications to run without the risk and expense associated with modifications, special programming, and complex scripting.”

netapp-blog2

NetApp Leads Market for the Next Generation of Persistent Memory

Real Time Analytics Requires Real Time Data Processing

Real time data requirements are increasing and becoming the norm whether you’re driving a Tesla with the latest version of software or tracking cyber threats. New and existing enterprise workloads are requiring higher application-level performance than current flash technologies can often deliver. Response times and low latency are key attributes for an agile enterprise, and the volume of data collected by organizations is growing exponentially, thanks to key trends that are driven by the following use cases:

• AI (artificial intelligence), ML (machine learning), and DL (deep learning)
• Real-time analytics
• IoT (Internet of Things, requiring orders of magnitude more data)
• Video, social, mobile, and blockchain

Next Generation Storage Class Memory

These use cases require memory-hungry applications with massive data sets to be analyzed and shared in real time, driving the need for new approaches to data management that deliver radically lower latency. Next generation memory technologies like Persistent Memory (PMEM) are designed for this purpose. What is PMEM? Think of it this way: persistent memory is accessed like volatile memory (RAM) by the CPU and application, achieving orders of magnitude lower latency than direct-attached storage, but it retains its contents after a power loss like a regular storage device.
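
To make this concrete, here is a minimal sketch of the idea, assuming a Linux server with a DAX-capable file system mounted at /mnt/pmem (a hypothetical path). The application maps a persistent-memory-backed file and then works on it with plain byte operations, yet the contents survive a restart:

    # Minimal sketch: byte-addressable access to a persistent-memory-backed file.
    # Assumes a DAX-capable file system is mounted at /mnt/pmem (hypothetical path),
    # where mmap gives the application direct load/store access to the media.
    import mmap
    import os

    PATH = "/mnt/pmem/demo.bin"   # hypothetical file on a PMEM-backed mount
    SIZE = 4096

    fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, SIZE)

    with mmap.mmap(fd, SIZE) as buf:
        buf[0:11] = b"hello pmem!"   # an ordinary byte-level store, no read()/write() calls
        buf.flush()                  # make sure the stores are durable before exiting

    os.close(fd)
    # Reopening and mapping the same file after a reboot shows the bytes still there:
    # memory-style access on the way in, storage-style durability on the way out.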

Real Time Use Cases

As mentioned above, performance-sensitive databases (relational and NoSQL), real-time applications, and in-memory databases are the primary workloads. The most relevant use cases would be fraud and cyber threat detection, financial trading and real-time market analytics, health care diagnostics, forecasting, social media environments leveraging personalization algorithms, Internet of Things (IoT) workloads, or AI/ML-driven inference models. Any environment where customers need ultra-low latency as measured at the application level, need to support huge data sets, or want to enable enterprise-class data services for these demanding tier 0 applications will benefit from PMEM.

Challenges and Solutions

Regular memory, or random-access memory (RAM), is directly addressed by the CPU. PMEM is likewise byte addressable, so an application written for conventional block-based storage I/O needs to be modified before it can access and use data in PMEM directly. NetApp MAX Data software simplifies the adoption of PMEM by enabling applications like Oracle to access data on PMEM without any changes. This means customers looking to adopt this next generation storage can do so immediately. No waiting for vendors to modify their code for PMEM.
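
To illustrate what that modification involves, the sketch below contrasts the buffered read()/write() pattern most applications use today with the direct, in-place byte access that PMEM enables. The paths are hypothetical and this is not NetApp or Oracle code; MAX Data's claim is precisely that it preserves the first pattern for unmodified applications while still placing their data on PMEM.

    # Contrast sketch: conventional file I/O versus byte-addressable PMEM access.
    # Paths are hypothetical; this is illustrative only.
    import mmap
    import os

    # 1) Conventional pattern: copy a whole block through the kernel, then pick
    #    out the few bytes that were actually needed.
    with open("/data/records.bin", "rb") as f:        # ordinary block storage
        f.seek(4096)
        block = f.read(4096)
        field = block[16:24]

    # 2) Byte-addressable pattern: map the region and load only the bytes needed.
    fd = os.open("/mnt/pmem/records.bin", os.O_RDWR)  # file on a PMEM-backed DAX mount
    with mmap.mmap(fd, 0) as buf:
        field = bytes(buf[4112:4120])                 # direct load of the same 8 bytes
    os.close(fd)

    # Moving an application from pattern 1 to pattern 2 is the code change described
    # above; a file-system layer such as MAX Data aims to make it unnecessary.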

What is MAX Data?

MAX Data (Memory Accelerated Data) software runs on the application server and provides a file system that spans PMEM and a storage tier. Applications whose data is stored on this file system get instant access to that data for both reads and writes (which means MAX Data is not caching software, for those unclear on the concept who shall go nameless). With MAX Data, you get vastly improved application performance with high throughput and ultra-low latency. However, performance is just the beginning…
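
The "not caching" point is easiest to see with a toy model. The sketch below is purely illustrative and is not MAX Data's implementation: in a tier, each item lives in exactly one place and is moved between the fast and slow layers, rather than duplicated the way a cache duplicates it.

    # Toy illustration of tiering versus caching; not MAX Data's code.
    # A cache keeps hot copies of data that also lives on the slower layer;
    # a tier keeps each item in exactly one place and moves it as needed.

    class TwoTierStore:
        def __init__(self, hot_capacity):
            self.hot = {}                  # stands in for the PMEM tier
            self.cold = {}                 # stands in for the capacity storage tier
            self.hot_capacity = hot_capacity

        def write(self, key, value):
            self.hot[key] = value          # new and active data lands on the fast tier
            self._demote_if_needed()

        def read(self, key):
            if key in self.hot:
                return self.hot[key]       # served at memory-like latency
            value = self.cold.pop(key)     # promote: move the item, do not copy it
            self.hot[key] = value
            self._demote_if_needed()
            return value

        def _demote_if_needed(self):
            while len(self.hot) > self.hot_capacity:
                oldest = next(iter(self.hot))
                self.cold[oldest] = self.hot.pop(oldest)   # demote the oldest entry

    store = TwoTierStore(hot_capacity=2)
    store.write("a", 1); store.write("b", 2); store.write("c", 3)
    print(store.read("a"), store.hot, store.cold)   # "a" now lives only on the fast tier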

Advanced Data Services

In the past, real-time applications were restricted to volatile memory. While PMEM brings write support to real-time workloads, it also brings the need for enterprise data services. MAX Data includes the ability to mirror and protect persistent memory within a server, as well as snapshots for fast data recovery. With NetApp, you can tier data to an ONTAP-based AFF all-flash system and leverage all the data management capabilities in ONTAP, including high availability, cloning, backup, and disaster recovery.

Next Generation PMEM Eco-system is Getting Ready for MAX Data

We are very excited about MAX Data, and not just because of the capabilities the product brings to our customers around performance, data protection, and efficiency. MAX Data positions NetApp as a leader in the next generation flash/PMEM ecosystem, and it is already under consideration by large, global enterprises for its real-time data analytics capabilities. We're also excited to partner with Intel, Cisco, and Lenovo as part of the broader ecosystem of vendors looking to enable the adoption of PMEM in the server.

Don’t Look Back

The NetApp Data Fabric is built for the future, supporting both traditional and emerging applications, such as NoSQL databases and artificial intelligence. It offers the industry’s only unified data management platform that supports SAN and NAS, all-flash storage, software-defined storage, hybrid cloud, and cloud. You can scale up and out dynamically in seconds or in minutes, instead of taking hours or days. And you can allocate applications to where they run best, whether it’s on the premises or in the cloud. With MAX Data, you can extend the data fabric capability all the way into your servers with your applications and data that are critical to your business.

avigin-blog2

The Demand for AI and Video Analytics in an Increasingly Connected World

Through advanced AI technology, video analytics, and our cloud platform, Avigilon is changing the way our customers interact with their surveillance systems. Read our blog post and the full article, originally featured on SourceSecurity.com.

Today’s security industry has reached a critical mass in the volume of collected data and the limits of human attention to effectively search through that data. As such, the demand for video analytics is increasing globally and we believe that most video surveillance systems will eventually feature video analytics.

Artificial Intelligence Solutions

Through the power of artificial intelligence (AI), Avigilon is developing technologies and products that dramatically increase the effectiveness of security systems by focusing human attention on what matters most. As AI is adopted, it provides scalable solutions that can be deployed across a range of verticals and applications to better address security challenges.

GPU Technology Increases in Value

As the world becomes increasingly connected, the way we think about and interact with our security systems will continue to evolve across various verticals and applications. The emergence of GPU technology, in particular, has led to a dramatic increase in performance and value. With the democratisation of video analytics, and increased use of AI and deep learning, we believe that video analytics will be inherent in digital surveillance and used in broader applications. Cybersecurity will become more important as we move toward a more connected approach to security—particularly as our collected data becomes more sophisticated and critical.

netapp-blog1

Hybrid Multi-Cloud Experience: Are You Ready for the New Reality?

Determining the right way to deliver a consumption experience that public cloud providers offer, regardless of location or infrastructure, is top-of-mind for many IT leaders today. You need to deliver the agility, scale, speed, and services on-premises that you can easily get from the public cloud.

Most enterprises can’t operate 100% in the public cloud. Between traditional applications that can’t be moved from the datacenter and regulatory compliance, security, performance, and cost concerns, it’s not realistic. But there is a way to have the best of both worlds. You can deliver an experience based on frictionless consumption, self-service, automation, programmable APIs, and infrastructure independence. And deploy hybrid cloud services between traditional and new applications, and between your datacenters and all of your public clouds. It’s possible to do cloud your way, with a hybrid multi-cloud experience.

At NetApp Insight™ 2018, we showed the world that we're at the forefront of the next wave of HCI. Although HCI typically stands for hyperconverged infrastructure, our solution is a hybrid cloud infrastructure. With our Data Fabric approach, you can build your own IT, act like a cloud, and easily connect across the biggest clouds:

Make it easier to deploy and manage services.

You can provide a frictionless, cloudlike consumption experience, simplifying how you work on-premises and with the biggest clouds.

Free yourself from infrastructure constraints.

You can automate management complexities and command performance while delivering new services.

Never sacrifice performance again.

Scale limits won’t concern you. You can use the public cloud to extend from core to cloud and back and move from idea to deployment in record time.

When you stop trying to stretch your current infrastructure beyond its capabilities to be everything to everyone and adopt a solution that was created to let you meet – and exceed – the demands of your organization, regardless of its size, you’re able to take command and deliver a seamless experience.

Command Your Multi-Cloud Like a Boss

If you’re ready to unleash agility and latent abilities in your organization, and truly thrive with data, it’s time to break free from the limits of what HCI was and adopt a solution that lets you enable what it can be.

With the NetApp hybrid multi-cloud experience, delivered by the Data Fabric and hybrid cloud infrastructure, you’ll drive business success, meeting the demands of your users and the responsibilities of your enterprise. You’ll deliver the best user experiences while increasing productivity, maintaining simplicity, and delivering more services at scale. You won’t be controlled by cloud restrictions; you’ll have your clouds at your command.

And isn’t that the way it should have always been?

Start Your Mission.

Your Clouds at Your Command with NetApp HCI.

avigin-blog-1

How Artificial Intelligence Is Changing Video Surveillance Today

Avigilon recently contributed an article to Security Informed that discusses how artificial intelligence (AI) is changing video surveillance today. The article outlines the need for AI in surveillance systems, how it can enable faster video search, and how it can help focus operators’ attention on key events and insights to reduce hours of work to minutes.

Below is the full article, modified from its original version to fit this blog post, which can also be found on SecurityInformed.com.

There’s a lot of excitement around artificial intelligence (AI) today — and rightly so. AI is shifting the modern landscape of security and surveillance and dramatically changing the way users interact with their security systems. But with all the talk of AI’s potential, you might be wondering: what problems does AI help solve today?

The Need for AI

The fact is, today there are too many cameras and too much recorded video for security operators to keep pace with. On top of that, people have short attention spans. AI is a technology that doesn’t get bored and can analyze more video data than humans ever possibly could.

It is designed to bring the most important events and insight to users’ attention, freeing them to do what they do best: make critical decisions. There are two areas where AI can have a significant impact on video surveillance today: search and focus of attention.

Faster Search

Imagine using the internet today without a search engine. You would have to search through one webpage at a time, combing through all its contents, line-by-line, to hopefully find what you’re looking for. That is what most video surveillance search is like today: security operators scan hours of video from one camera at a time in the hope that they’ll find the critical event they need to investigate further. That’s where artificial intelligence comes in.

With AI, companies such as Avigilon are developing technologies that are designed to make video search as easy as searching the internet. Tools like Avigilon Appearance Search™ technology — a sophisticated deep learning AI video search engine — help operators quickly locate a specific person or vehicle of interest across all cameras within a site.

When a security operator is provided with physical descriptions of a person involved in an event, this technology allows them to initiate a search by simply selecting certain descriptors, such as gender or clothing color. During critical investigations, such as in the case of a missing or suspicious person, this technology is particularly helpful as it can use those descriptions to search for a person and, within seconds, find them across an entire site.
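
As a rough illustration of searching by descriptor instead of scrubbing through footage, here is a hypothetical sketch. The field names and the simple exact-match filter are invented for this post; Appearance Search itself is a deep learning engine, not a keyword filter.

    # Hypothetical sketch: filtering detection metadata by operator-chosen descriptors.
    # Field names and values are invented; this is not Avigilon's search engine.

    detections = [
        {"camera": "lobby-1",   "time": "14:02", "upper_color": "red",  "gender": "male"},
        {"camera": "parking-3", "time": "14:07", "upper_color": "blue", "gender": "female"},
        {"camera": "gate-2",    "time": "14:11", "upper_color": "red",  "gender": "male"},
    ]

    def search(detections, **descriptors):
        """Return detections matching every supplied descriptor, across all cameras."""
        return [d for d in detections
                if all(d.get(k) == v for k, v in descriptors.items())]

    # An operator looking for a male subject in a red top gets candidates site-wide:
    print(search(detections, gender="male", upper_color="red"))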

Focused Attention

The ability of AI to reduce hours of work to mere minutes is especially significant when we think about the gradual decline in human attention spans. Consider all the information a person is presented with on a given day. They don’t necessarily pay attention to everything because most of that information is irrelevant. Instead, they prioritise what is and is not important, often focusing only on information or events that are surprising or unusual.

Now, consider how much information a security operator who watches tens, if not hundreds or thousands, of surveillance cameras is presented with daily. After just twenty minutes, their attention span decreases significantly, meaning most of that video is never watched and critical information may go undetected. By taking over the task of “watching” security video, AI technology can help focus operators’ attention on events that may need further investigation.

For instance, technology like Avigilon Unusual Motion Detection (UMD) uses AI to continuously learn what typical activity in a scene looks like and then detect and flag unusual events, adding a new level of automation to surveillance.

This helps save time during an investigation by allowing operators to search through large amounts of recorded video faster, automatically focusing their attention on the atypical events that may need further investigation and enabling them to more effectively answer the critical questions of who, what, where and when.
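
For readers who want a feel for the "learn what is typical, flag what is not" pattern, here is a toy sketch. It is not Avigilon's UMD algorithm; it just applies a generic running-baseline check to a made-up per-frame motion measurement.

    # Toy sketch of learning a scene's typical motion level and flagging outliers.
    # This is not Avigilon's UMD algorithm; it only illustrates the general pattern.

    class UnusualMotionFlagger:
        def __init__(self, threshold=3.0, warmup=30):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0              # running sum of squared deviations (Welford)
            self.threshold = threshold
            self.warmup = warmup

        def observe(self, motion_level):
            """Feed one per-frame motion measurement; return True if it looks unusual."""
            self.n += 1
            delta = motion_level - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (motion_level - self.mean)
            if self.n < self.warmup:   # still learning what "typical" looks like
                return False
            std = (self.m2 / (self.n - 1)) ** 0.5
            return std > 0 and abs(motion_level - self.mean) > self.threshold * std

    flagger = UnusualMotionFlagger()
    for frame, level in enumerate([2, 3, 2, 4, 3, 2, 3, 2, 3, 2] * 5 + [40]):
        if flagger.observe(level):
            print(f"frame {frame}: unusual motion, flag for review")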

As AI technology evolves, the rich metadata captured in surveillance video — like clothing color, age or gender — will add even more relevance to what operators are seeing. This means that in addition to detecting unusual activities based on motion, this technology has the potential to guide operators’ attention to other “unusual” data that will help them more accurately verify and respond to a security event.

The Key to Advanced Security

There’s no denying it, the role of AI in security today is transformative. AI-powered video management software is helping to reduce the amount of time spent on surveillance, making security operators more efficient and effective at their jobs. By removing the need to constantly watch video screens and automating the “detection” function of surveillance, AI technology allows operators to focus on what they do best: verifying and acting on critical events.

This not only expedites forensic investigations but enables real-time event response, as well. When integrated throughout a security system, AI technology has the potential to dramatically change security operations. Just as high-definition imaging has become a quintessential feature of today’s surveillance cameras, the tremendous value of AI technology has positioned it as a core component of security systems today, and in the future.

stratus

Gartner Research Emphasizes the Importance of Edge Computing

The term “edge computing” may seem like another technical buzzword, but respected research firm Gartner believes that edge computing is fast becoming an industry standard. The world is getting faster and our need for real-time data processing is picking up as well.

So, what exactly is the edge? Edge computing refers to solutions that facilitate data processing at or near the source of data generation. For example, in the context of the Internet of Things (IoT), the sources of data generation are usually things with sensors or embedded devices. Edge computing serves as a decentralized extension of campus networks, cellular networks, data center networks, or the cloud.

In the newsletter, we share Gartner research that boldly states that “the edge will eat the cloud” and that “the architecture of IT will flip upside down, as data and content move from centralized cloud and data centers to the edge, pulling compute and storage with it.” Gartner predicts that as the demand for greater immersion and responsiveness grows, so will edge computing: “Edge computing provides processing, storage and services for things and people far away from centralized cores, and physically close to things and people.”

The offline-first functionality that the edge provides also addresses issues like latency, bandwidth, autonomy and security. For example, when a question is posed to a device like Alexa or Google Home, there is an almost imperceptible lag while the data is retrieved from the cloud and relayed to the user. That lag becomes dangerous when applied to other emerging technologies.

Gartner breaks it down: “For a self-driving car traveling 70 miles per hour, 100 ms equals 10 feet. But if we have two self-driving cars, or two dozen all traveling toward the same location, 100 ms is an eternity. A lot can happen in a few milliseconds – lives could be at risk.” The cloud simply can’t keep up.
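
The quoted figure checks out with a quick back-of-the-envelope calculation:

    # Quick check of Gartner's figure: distance covered in 100 ms at 70 mph.
    mph = 70
    feet_per_second = mph * 5280 / 3600        # about 102.7 ft/s
    print(round(feet_per_second * 0.1, 1))     # about 10.3 feet, roughly the 10 feet quoted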

The Gartner research presented also discusses the importance of edge technology as IoT continues to explode. “More and more physical objects are becoming networked and contain embedded technology to communicate and sense or interact with their internal states or the external environment. By 2020, 20 billion “things” will be connected to the internet.” Gartner states, “A more interactive, immersive human-machine interface will force data and computing to move closer physically, and to live in the world with people.”