stratus-blog2

What’s the Real Cost of Integrating High Availability Software?

If you were trick-or-treating in October of 1999, chances are your bag of treats held considerably fewer Hershey products. In September of that same year, the company admitted to having issues with its newly implemented order-taking and distribution system.

The company had spent approximately $112 million on a combination of ERP, SRM and supply chain management software. The integration of this new software with Hershey's existing systems failed in ways that prevented the company from fulfilling $100 million worth of customer orders. Not a good Halloween for Hershey.

Hershey had the financial reserves to weather the costly implementation setback, but the failure of the new software to integrate seamlessly with its existing systems more than doubled the cost of the upgrade in the end. Preventing downtime is critical in all businesses, but it is especially high on the list in manufacturing companies like Hershey.

For example, when implementing a manufacturing execution system (MES) application, the risk is considerably higher due to the complex nature of production. Critical Manufacturing's article "10 Reasons Why So Many MES Projects Fail" explains that there is typically "a complex web of components which broadly classified are – material, man and machine."

The article goes on to say that, “even though there might be no other MES earlier installed, except in the case of a completely new factory, it is very unlikely that there are no other applications on the shop-floor. An MES is supposed to integrate the operation along with the existing IT infrastructure, so the application would be a failure if separate systems exist and users need to now work with both these and the MES separately. MES application needs to be a single comprehensive platform for optimum.”

Stratus's Downtime Prevention Buyer's Guide discusses the six questions you should be asking to prevent downtime. Stratus suggests that before agreeing to integrate high availability software with systems like MES, Supervisory Control and Data Acquisition (SCADA) or Historians, you ask, "Can your solution integrate seamlessly into existing computing environments with no application changes required?"

“Some availability solutions integrate more easily into existing computing environments than others. Certain solutions may require that you make changes to your existing applications — a process that is time-consuming and typically requires specialized IT expertise.”

The guide goes on to give an example of a potential issue, “high availability clusters may need cluster-specific APIs to ensure proper fail-over. If ease of deployment and management are top priorities for your organization, you may want to consider a fault-tolerant solution that allows your existing applications to run without the risk and expense associated with modifications, special programming, and complex scripting.”
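To make that trade-off concrete, the cluster-specific failover scripting the guide cautions about often looks something like the sketch below. This is a hypothetical, minimal example (the node addresses, health check, and promotion command are illustrative assumptions, not taken from any particular cluster product), and it is exactly the layer that fault-tolerant solutions aim to make unnecessary.

```python
# Hypothetical failover watchdog: illustrates the kind of custom scripting
# that cluster-based HA can require. Node addresses and the promotion
# command are assumptions for illustration only.
import socket
import subprocess
import time

PRIMARY = ("primary.example.local", 5432)  # assumed service endpoint
PROMOTE_STANDBY_CMD = ["ssh", "standby.example.local", "promote-to-primary"]  # assumed
FAILURES_BEFORE_FAILOVER = 3
CHECK_INTERVAL_SECONDS = 5

def primary_is_healthy(address, timeout=2.0):
    """Consider the primary healthy if its service port accepts a TCP connection."""
    try:
        with socket.create_connection(address, timeout=timeout):
            return True
    except OSError:
        return False

def main():
    consecutive_failures = 0
    while True:
        if primary_is_healthy(PRIMARY):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            print(f"health check failed ({consecutive_failures}/{FAILURES_BEFORE_FAILOVER})")
            if consecutive_failures >= FAILURES_BEFORE_FAILOVER:
                # The application-specific part: promoting the standby, remounting
                # storage, updating DNS, and so on. This is the layer a
                # fault-tolerant platform is meant to remove.
                subprocess.run(PROMOTE_STANDBY_CMD, check=True)
                break
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```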

suse-blog3

OpenStack—The Next Generation Software-defined Infrastructure for Service Providers

Many service providers face the challenge of competing with the pace of innovation and investment made by hyperscale cloud vendors. You constantly need to enable new services (e.g., containers, platform as a service, IoT) while remaining cost competitive. The proprietary cloud platforms used in the past are expensive and struggle to keep up with emerging technologies. It's time to start planning your future with an open source solution that enables a software-defined infrastructure for rapid innovation.

A growing number of service providers have selected OpenStack due to its low cost and rapid pace of innovation. Many new technologies are introduced early in their development in OpenStack before making their way to proprietary and hyperscale cloud platforms. Well-known examples include containers, platform as a service and network function virtualization. Why not leverage the work of a growing community of thousands of open source developers to gain a competitive edge?

For those service providers unfamiliar with OpenStack, SUSE recently published a paper entitled "Service Providers: Future-Proof Your Cloud Infrastructure" to highlight some of the architectural choices you will need to make when implementing an OpenStack environment. While the concepts are not new, several decisions will need to be made up front based on the data center footprint you wish to address.

While OpenStack may seem a bit complex at first, the installation and operation of vendor-supplied distributions have greatly improved over the years. Support is available from the vendors themselves as well as from a large community of developers. Most service providers start with a relatively small cloud and build from there. Since OpenStack is widely supported by most hardware and software vendors, you can even repurpose your existing investments. The upfront cost to begin your OpenStack journey is low. When you're ready to get started, SUSE offers a free 60-day evaluation trial of our solution (www.suse.com/cloud).

Now is the time to map out the future of your software-defined infrastructure. Take advantage of the most rapidly evolving cloud platform with no vendor lock-in. Build your offering on some of the best operations automation available today. OpenStack is the best way to control your own destiny. For more information, please visit our site dedicated to cloud service providers at www.suse.com/csp.

printronix-blog1

How the Bottling and Beverage Sector Utilizes Line Matrix Printers to Meet Its Needs

Globally, the bottling and beverage sector is a large portion of one of the biggest industries in the world, and it is an extremely competitive market. If you want to succeed, your business plan needs to include a well-crafted supply chain. Every link in the chain needs to work in concert, from the production line to distribution. Facilities that bottle and produce drinks can differ in the type of bottling lines and beverages they produce, but one constant is the need to keep proper records of inventory, shipping and receipt of products. All of these applications are managed across multiple departments, so clear lines of communication and accurate reports are important to keep the supply chain moving. Whether it is in an office environment printing a report showing how many bottles, cases and products are produced and shipped each day, or on a loading dock printing a trucking report that shows the contents of each truck, reliable printing is important. Without documentation showing how much product moved, when and where, chaos could ensue.

A common application in this industry is the use of multi-part forms to produce invoices and track chain of custody on products. Invoices are printed on multi-part forms daily for each truck, and the truck driver takes the invoices for each location where product is being delivered. The drivers have their customers inspect deliveries and then sign the invoice. The delivery person gives one part of the form to the customer; the other stays with the truck driver. The driver then returns the signed invoices to the office when they have finished their route, thus completing the chain of custody and keeping solid records of product transport. This process happens thousands of times a day, and upwards of 30,000 pages need to be printed every month.

The Right Solution

With such high print volumes and the wide range of environments where printing takes place, no ordinary laser printer could handle the job that the bottling and beverage industry demands. Line matrix printers deliver what this industry needs by providing speed, reliability and ruggedness combined with a low cost of printing. With thousands of different types of crucial reports, invoices and delivery sheets being printed daily, printer downtime is unacceptable. Many reports are required to be printed on non-standard paper sizes and green bar forms. With line matrix printers, standard and custom paper stock widths from 3 to 17” can be used to meet most report size requirements, and delivery sheets can be printed quickly thanks to the high-speed performance of line matrix printers (up to 2,000 lines per minute). Because reports and invoices are often printed on the fly on dirty, busy, sometimes harsh-temperature loading docks, a printer that can withstand a harsh environment is also important. Line matrix printers are designed with this in mind; they are engineered to deliver non-stop performance and withstand humidity, temperature extremes, static electricity, dust and other airborne particles that can lead to premature failure, frequent paper jams, print quality issues and more.
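As a rough sanity check on those volumes (the working-days and lines-per-page figures below are illustrative assumptions, not printer specifications), the arithmetic looks like this:

```python
# Rough throughput estimate for a 30,000-page month on a 2,000 line-per-minute printer.
# Working days per month and lines per page are illustrative assumptions.
pages_per_month = 30_000
working_days = 22            # assumption
lines_per_page = 60          # assumption for a typical invoice/report layout
lines_per_minute = 2_000     # quoted top speed for line matrix printers

pages_per_day = pages_per_month / working_days
minutes_per_day = pages_per_day * lines_per_page / lines_per_minute
print(f"{pages_per_day:.0f} pages/day, about {minutes_per_day:.0f} minutes of continuous printing")
# -> roughly 1364 pages/day, about 41 minutes/day at full speed
```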

With the reliance on multi-part forms for delivery invoices being of the utmost importance, it's no surprise line matrix printers are the best choice for the job. For example, with a laser printer a delivery invoice would have to be printed and filled out multiple times on separate sheets of paper, then gathered for a driver to take on their route. Once the delivery is made, the driver has to get the customer to sign multiple pages for receipt and finally bring one copy back to the facility, wasting valuable time and resources. Along with the loss of time, this greatly increases the risk of misplacing copies, having incomplete information, and accidentally mixing customers' invoices. The use of multi-part forms eliminates these problems. Line matrix printers provide the most reliable means of creating multi-part forms. Laser printers cannot produce these forms in one pass. With a line matrix printer the user can print up to 6-part forms in one pass with high-quality, easy-to-read text and graphics, without compromising output speed. Print quality and style can be easily adjusted to meet the demands of any particular report. Because line matrix printers are compatible with SAP® and other enterprise host platforms, on-demand invoices can be printed right on the production or loading dock floor.

For over 40 years some of the biggest names in the bottling and beverage industry have trusted Printronix line matrix printers to take care of their printing needs. Line matrix printers will continue to be the toughest, most reliable printers on the market for customers in industries worldwide.

netapp-blog1

Hybrid Multi-Cloud Experience: Are You Ready for the New Reality?

Determining the right way to deliver a consumption experience that public cloud providers offer, regardless of location or infrastructure, is top-of-mind for many IT leaders today. You need to deliver the agility, scale, speed, and services on-premises that you can easily get from the public cloud.

Most enterprises can’t operate 100% in the public cloud. Between traditional applications that can’t be moved from the datacenter and regulatory compliance, security, performance, and cost concerns, it’s not realistic. But there is a way to have the best of both worlds. You can deliver an experience based on frictionless consumption, self-service, automation, programmable APIs, and infrastructure independence. And deploy hybrid cloud services between traditional and new applications, and between your datacenters and all of your public clouds. It’s possible to do cloud your way, with a hybrid multi-cloud experience.

At NetApp Insight™ 2018, we showed the world that we're at the forefront of the next wave of HCI. Although the acronym typically stands for hyperconverged infrastructure, our solution is a hybrid cloud infrastructure. With our Data Fabric approach, you can build your own IT, act like a cloud, and easily connect across the biggest clouds:

Make it easier to deploy and manage services.

You can provide a frictionless, cloudlike consumption experience, simplifying how you work on-premises and with the biggest clouds.

Free yourself from infrastructure constraints.

You can automate management complexities and command performance while delivering new services.

Never sacrifice performance again.

Scale limits won’t concern you. You can use the public cloud to extend from core to cloud and back and move from idea to deployment in record time.

When you stop trying to stretch your current infrastructure beyond its capabilities to be everything to everyone, and instead adopt a solution created to let you meet – and exceed – the demands of your organization, regardless of its size, you're able to take command and deliver a seamless experience.

Command Your Multi-Cloud Like a Boss

If you’re ready to unleash agility and latent abilities in your organization, and truly thrive with data, it’s time to break free from the limits of what HCI was and adopt a solution that lets you enable what it can be.

With the NetApp hybrid multi-cloud experience, delivered by the Data Fabric and hybrid cloud infrastructure, you’ll drive business success, meeting the demands of your users and the responsibilities of your enterprise. You’ll deliver the best user experiences while increasing productivity, maintaining simplicity, and delivering more services at scale. You won’t be controlled by cloud restrictions; you’ll have your clouds at your command.

And isn’t that the way it should have always been?

Start Your Mission.

Your Clouds at Your Command with NetApp HCI.

suse-blog2

Three Key Best Practices for DevOps Teams to Ensure Compliance

Driving Compliance with Greater Visibility, Monitoring and Audits

Ensuring Compliance in DevOps

DevOps has fundamentally changed the way software developers, QA, and IT operations professionals work. Businesses are increasingly adopting a DevOps approach and culture because of its power to virtually eliminate organizational silos by improving collaboration and communication. The DevOps approach establishes an environment where there is continuous integration and continuous deployment of the latest software with integrated application lifecycle management, leading to more frequent and reliable service delivery. Ultimately, adopting a DevOps model increases agility and enables the business to rapidly respond to changing customer demands and competitive pressures.

While many companies aspire to adopt DevOps, it requires an open and flexible infrastructure. However, many organizations are finding that their IT infrastructure is becoming more complex. Not only are they trying to manage their internal systems, but they are now also trying to get a handle on the use of public cloud infrastructure, creating additional layers of complexity. This complexity potentially limits the agility that organizations are attempting to achieve when adopting DevOps and significantly complicates compliance efforts.

Ensuring compliance with a complex infrastructure is a difficult endeavor. Furthermore, in today's digital enterprise, IT innovation is a growing priority. However, many IT organizations still spend a great deal of time and money merely maintaining the existing IT infrastructure. To ensure compliance and enable innovation, this trend must shift.

With a future that requires innovation and an immediate need for compliance today, the question remains: How can IT streamline infrastructure management and reduce complexity to better allocate resources and allow more time for innovation while ensuring strict compliance?

Infrastructure management tools play a vital role in priming the IT organization’s infrastructure for innovation and compliance. By automating management, streamlining operations, and improving visibility, these tools help IT reduce infrastructure complexity and ensure compliance across multiple dimensions— ultimately mitigating risk throughout the enterprise.

Adopting a Three-Dimensional Approach to Compliance

For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes-Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

However, businesses in every industry need to consider compliance, whether that means staying current with OS patch levels to avoid the latest security threats or complying with software licensing agreements to avoid contract breaches. Without compliance, the business puts itself at risk of losing customer trust, financial penalties, and even jail time for those involved.

When examining potential vulnerabilities in IT, there are three dimensions that guide an effective compliance program: security compliance, system standards, and licensing or subscription management.

Security compliance typically involves a dedicated department that performs audits to monitor and detect security vulnerabilities. Whether a threat is noted in the press or identified through network monitoring software, it must be quickly remediated. With new threats cropping up daily, protecting the business and its sensitive data is critical.

For system standards compliance, most IT departments define an optimal standard for how systems should operate (e.g., operating system level, patch level, network settings, etc.). In the normal course of business, systems often move away from this standard due to systems updates, software patches, and other changes. The IT organization must identify which systems no longer meet the defined standards and bring them back into compliance.

The third dimension of compliance involves licensing or subscription management which reduces software license compliance concerns and unexpected licensing costs. Compliance in this area involves gaining better visibility into licensing agreements to manage all subscriptions and ensure control across the enterprise.

To mitigate risk across the business in all three dimensions of compliance, the IT organization needs infrastructure management tools that offer greater visibility, automation, and monitoring. According to Gartner’s Neil MacDonald, vice president and distinguished analyst, “Information security teams and infrastructure must adapt to support emerging digital business requirements, and simultaneously deal with the increasingly advanced threat environment. Security and risk leaders need to fully engage with the latest technology trends if they are to define, achieve, and maintain effective security and risk management programs that simultaneously enable digital business opportunities and manage risk.”

Best Practice #1:

Optimize Operations and Infrastructure to Limit Shadow IT

With so many facets to an effective compliance program, the complexity of the IT infrastructure makes compliance a difficult endeavor. One of the most significant implications of a complex infrastructure is the delay and lack of agility from IT in meeting the needs of business users, ultimately driving an increase in risky shadow IT activities.

As business users feel pressure to quickly exceed customer expectations and respond to competitive pressures, they will circumvent the internal IT organization altogether to access services they need. They see that they can quickly provision an instance in the public cloud with the simple swipe of a credit card.

These activities pose a threat to the organization's security protections, wreak havoc on subscription management, and take system standard compliance out of the purview of IT.

Optimizing IT operations and reducing infrastructure complexity go a long way toward reducing this shadow IT. With an efficient server, VM, and container infrastructure, the IT organization can improve speed and agility in service delivery for its business users. An infrastructure management solution offers the tools IT needs to drive greater infrastructure simplicity. It enables IT to optimize operations with a single tool that automates and manages container images across development, test, and production environments, ensuring streamlined management across all DevOps activities. Automated server provisioning, patching, and configuration enables faster, consistent, and repeatable server deployments. In addition, an infrastructure management solution enables IT to quickly build and deliver container images based on repositories and improve configuration management with parameter-driven updates. Altogether, these activities support a continuous integration/continuous deployment model that is a hallmark of DevOps environments.

When DevOps runs like a well-oiled machine in this way, IT provisions and delivers cloud resources and services to business users with speed and agility, making business users less likely to engage in shadow IT behaviors that pose risks to the business. As a result, compliance in all three dimensions—security, licensing, and system standards—is naturally improved.
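As a loose illustration of the parameter-driven updates described above, the sketch below renders one configuration template with different parameters for development, test, and production. The environment names, parameters, and template are hypothetical and are not taken from any specific SUSE tool:

```python
# Hypothetical parameter-driven configuration: one template, per-environment
# parameters, rendered consistently for dev, test, and production.
from string import Template

APP_CONF_TEMPLATE = Template(
    "log_level=$log_level\n"
    "replicas=$replicas\n"
    "image=$registry/myapp:$image_tag\n"
)

ENVIRONMENTS = {  # illustrative values only
    "dev":  {"log_level": "debug", "replicas": 1, "registry": "registry.internal/dev",  "image_tag": "latest"},
    "test": {"log_level": "info",  "replicas": 2, "registry": "registry.internal/test", "image_tag": "rc"},
    "prod": {"log_level": "warn",  "replicas": 6, "registry": "registry.internal/prod", "image_tag": "1.4.2"},
}

for env, params in ENVIRONMENTS.items():
    rendered = APP_CONF_TEMPLATE.substitute(params)
    # In a real pipeline this would be handed to the configuration management
    # tool; here we simply write one file per environment.
    with open(f"myapp-{env}.conf", "w") as conf_file:
        conf_file.write(rendered)
    print(f"rendered config for {env}")
```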

Best Practice #2:

Closely Monitor Deployments for Internal Compliance

In addition to optimizing operations, improving compliance requires the ability to easily monitor deployments and ensure internal requirements are met. With a single infrastructure management tool, IT can easily track compliance to ensure the infrastructure complies with defined subscription and system standards.

License tracking capabilities enable IT to simplify, organize, and automate software licenses to maintain long-term compliance and enforce software usage policies that guarantee security. With global monitoring, licensing can be based on actual data usage which creates opportunities for cost improvements.

Monitoring compliance with defined system standards is also important to meeting internal requirements and mitigating risk across the business. By automating infrastructure management and improving monitoring, the IT organization can ensure system compliance through automated patch management and daily notifications of systems that are not compliant with the current patch level.
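In practice, the daily non-compliance notification described above boils down to comparing each system's reported patch level against the defined standard. A minimal sketch, assuming the inventory is exported from your management tool (the data structure and baseline below are illustrative):

```python
# Minimal sketch of a daily patch-compliance report.
# The inventory would normally come from the infrastructure management tool;
# here it is an illustrative in-memory list.
from datetime import date

REQUIRED_PATCH_LEVEL = 42  # assumed baseline defined by the IT organization

inventory = [
    {"host": "web-01", "patch_level": 42},
    {"host": "db-01",  "patch_level": 40},
    {"host": "app-03", "patch_level": 41},
]

def non_compliant(systems, required_level):
    """Return systems whose patch level is below the defined standard."""
    return [s for s in systems if s["patch_level"] < required_level]

def daily_report(systems):
    offenders = non_compliant(systems, REQUIRED_PATCH_LEVEL)
    print(f"Patch compliance report for {date.today()}")
    if not offenders:
        print("All systems meet the defined standard.")
    for s in offenders:
        print(f"  {s['host']}: patch level {s['patch_level']} < required {REQUIRED_PATCH_LEVEL}")

daily_report(inventory)
```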

Easy and efficient monitoring enables oversight into container and cloud VM compliance across DevOps environments. With greater visibility into workloads in hybrid cloud and container infrastructures, IT can ensure compliance with expanded management capabilities and internal system standards. By managing configuration changes with a single tool, the IT organization can increase control and validate compliance across the infrastructure and DevOps environments.

Best Practice #3:

Audit Deployments to Gain Visibility into Vulnerabilities

The fundamental goal of any IT compliance effort is to remedy any security vulnerabilities that pose a risk to the business. Before that can be done, however, IT must audit deployments and gain visibility into those vulnerabilities.

An infrastructure management tool offers graphical visualization of systems and their relationship to each other. This enables quick identification of systems deployed in hybrid cloud and container infrastructures that are out of compliance.

This visibility also offers detailed compliance auditing and reporting with the ability to track all hardware and software changes made to the infrastructure. In this way, IT can gain an additional understanding of infrastructure dependencies and reduce any complexities associated with those dependencies. Ultimately, IT regains control of assets by drilling down into system details to quickly identify and resolve any health or patch issues.

veritas-blog-2

The Future of Data Protection

Enterprises to spend 56% more of their IT budgets on cloud technologies by 2019.
The cloud momentum

As I meet with customers, most of whom are large global enterprises, the topic of the cloud continues to come up. Getting cloud right means new ways to stay competitive and stand out in their respective markets. For example, moving test/dev operations to the cloud has allowed many organizations to reap the benefits of increased productivity, rapid product delivery and accelerated innovation. Another benefit the cloud provides is on-demand infrastructure, which can be used as a landing zone for business operations in the event of a disaster.

No longer do IT staff have to spend countless hours installing a set of SQL, DB2 or Oracle servers to run in-house databases, CRM or analytics platforms. Databases are offered as services that are ready for the largest, most intense data warehouse needs, and the ability to add analytics capabilities on top gives organizations more opportunities to gain insights from their data. Additionally, companies have choice. Subscribing to multiple services from multiple cloud vendors simultaneously to test products or services in real time, paying only for the resources that are used or consumed, is hugely beneficial.

It's this increased agility companies are after, and it is what allows them to grow faster and better meet the needs of their customers.

Persisting concerns

But of course, there’s still quite a bit of uncertainty when it comes to cloud, which causes concern. Some of the most common concerns I hear about are related to data protection and service interruptions. There’s a fear of accidentally deleting critical data, being held hostage to ransomware, and the risk of application or resource failure. There’s also a general misunderstanding regarding how much of the responsibility for addressing these concerns sits with customers versus cloud providers.

Traditionally, the perception was that because servers and data were 'tucked away' safe and sound within the confines of the on-premises data center, those concerns were more easily addressed. But in the cloud, that's not the case. When the data center moves to the cloud, rows and rows of 42U racks filled with blades and towers transform into on-demand cloud instances that can be spun up or down at will. This creates a sense of 'losing control' for many.

Some argue that the risks actually increase when you move to the cloud and no longer own the resources, but we believe those risks can be minimized, without sacrificing the rewards.

The trick here is to keep things simple, especially for IT teams that are responsible for protecting company data – wherever that data is stored. And that's an important point, because it's not an either/or conversation. According to RightScale's 2018 State of the Cloud survey, 51% of enterprises operate with a hybrid strategy and 81% are multi-cloud. These findings further support the view that, for most large enterprise customers, clouds will exist alongside an existing on-premises data center strategy. Adding more point solutions that create silos is a losing strategy, as are platform-specific technologies that are inflexible and do not account for the persistently heterogeneous, hybrid nature of enterprise IT environments.

Veritas has you covered

In the midst of this cloud evolution, Veritas has taken its years of data management expertise and leadership and developed a data protection technology called Veritas CloudPoint that is cloud-native, lightweight and flexible, yet robust, with core enterprise-grade data protection capabilities that can be extended to protect workloads in public, private, and hybrid cloud infrastructures. Veritas CloudPoint can easily be introduced to your AWS, Google Cloud, Microsoft Azure, or data center environments. Utilizing the available cloud infrastructure APIs, CloudPoint delivers an automated and unified snapshot-based data protection experience with a simple, intuitive, and modern UI. Figure 1 below shows the basics of how it works.

Figure 1 
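For a feel of the kind of cloud infrastructure API such snapshot-based protection builds on, the sketch below uses the AWS SDK to snapshot tagged volumes. This is a generic illustration only, not CloudPoint code, and the tag-based selection convention is an assumption:

```python
# Generic illustration of snapshot-based protection via a cloud API (AWS here).
# This is not CloudPoint code; it only shows the kind of primitive such tools automate.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_tagged_volumes(tag_key="backup", tag_value="daily"):
    """Create a snapshot for every EBS volume carrying the given tag (an assumed convention)."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Volumes"]
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"automated snapshot of {vol['VolumeId']}",
        )
        print(f"started snapshot {snap['SnapshotId']} for volume {vol['VolumeId']}")

if __name__ == "__main__":
    snapshot_tagged_volumes()
```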

But that is just the tip of the iceberg…

With the recent Microsoft and Google press releases announcing version 2.0 of Veritas CloudPoint, we have expanded the reach of CloudPoint to VMware environments as well as support for high-performance, on-premises databases such as MongoDB.

We are already working on our next release of CloudPoint, targeted for availability in the coming quarters, where we plan to add cloud support for VMware Cloud on AWS and IBM. For private cloud environments, we plan to offer VM-level and application-level support for Microsoft’s private cloud platform Azure Stack. We already announced in-guest support for Azure Stack with Veritas NetBackup earlier this year.

And, staying consistent with my comment above about point solutions and platform-specific solutions being a losing strategy, we plan to integrate CloudPoint with the next release of Veritas NetBackup (see Figure 2 below). This should be welcome news for NetBackup customers in particular, as they will have an integrated way to address data protection requirements in the most optimized way possible, without adding more silos, no matter where their workloads run. But I'll save the details and specifics on that for my next blog!

Figure 2 

Be on the lookout for more news in the coming months.

[1]Forward-looking Statement: Any forward-looking indication of plans for products is preliminary and all future release dates are tentative and are subject to change at the sole discretion of Veritas. Any future release of the product or planned modifications to product capability, functionality, or feature are subject to ongoing evaluation by Veritas, may or may not be implemented, should not be considered firm commitments by Veritas, should not be relied upon in making purchasing decisions, and may not be incorporated into any contract. The information is provided without warranty of any kind, express or implied.

gemalto-blog2

Breached Records More Than Doubled in H1 2018, Reveals Breach Level Index

Breakdown of the H1 2018 Breach Level Index Stats:

• 18,525,816 records compromised every day
• 771,909 records compromised every hour
• 12,865 records compromised every minute
• 214 records compromised every second

Data breaches had a field day in 2018. According to the Breach Level Index, a database compiled by Gemalto to track publicly reported data breaches disclosed in news media reports, 2018 is one of the only years in which more than two billion records were compromised in publicly disclosed data breaches. The only other year to do so was 2013, due to the exposure of all three billion Yahoo user accounts.

Gemalto has analyzed the Breach Level Index during the first half of 2018 and the findings are truly staggering. In just six months, the system tracked more than 3.3 billion breached data files. This figure represents a 72 percent increase over the first half of 2017.
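Those headline rates are easy to sanity-check against the per-day figure above, assuming a 181-day first half of the year:

```python
# Back-of-the-envelope check of the per-day/hour/minute/second figures,
# assuming a 181-day first half of 2018.
records_per_day = 18_525_816
days_in_h1 = 181  # Jan 1 - Jun 30, 2018

total_h1 = records_per_day * days_in_h1
per_hour = records_per_day / 24
per_minute = per_hour / 60
per_second = per_minute / 60

print(f"total H1 2018: {total_h1:,.0f}")   # ~3.35 billion, matching "more than 3.3 billion"
print(f"per hour:      {per_hour:,.0f}")   # ~771,909
print(f"per minute:    {per_minute:,.0f}") # ~12,865
print(f"per second:    {per_second:,.0f}") # ~214
```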

The Breach Level Index didn't contain as many reported incidents in the first half of 2018 as it did over the same period last year, with 944 reported security events during the reporting period compared to 1,162 breaches reported in the first half of 2017.

Key Findings from the H1 2018 Breach Level Index:

• Identity theft yet again the top data breach type: Identity theft was responsible for nearly four billion records compromised in the first half of the year, which represents growth of more than a thousand percent compared to the previous year. During the same time frame, the number of incidents involving identity theft decreased by a quarter.

• Malicious outsiders and accidental loss the most prevalent sources of data breach: Events involving malicious outsiders and accidental loss accounted for 56 percent and 34 percent of all data breaches, respectively.

• Social media weathered the greatest number of compromised records: Facebook wasn’t the only social giant that suffered a data breach in the first half of 2018. Twitter also experienced a security incident where a software glitch potentially exposed the login credentials of its 330 million users. In total, data breaches compromised 2.5 billion records stored by social media giants.

• Incidents in healthcare and financial services declined: The number of compromised files and data breaches decreased for both healthcare and financial services. These declines at least in part reflected the introduction of new national regulations that help regulate health data and financial transactions.

• North America led the way in publicly disclosed data breaches: This region represented more than 97 percent of data records compromised in the first half of 2018. In total, there were 559 events in the region, a number which represented 59 percent of all data breaches globally in the first half of 2018.

New Data Privacy Regulations Take Effect:

In the wake of new data protection regulations, reporting of security incidents is on the rise. Following the passage of the Australian Privacy Amendment (Notifiable Data Breaches) Act, the Office of the Australian Information Commissioner (OAIC) received 305 data breach notifications by the end of the second quarter of 2018. This number is nearly triple the amount of the number submitted to the OAIC for the entire 2016-2017 fiscal year. Such growth in data breach reporting will likely continue through the rest of 2018 and beyond under GDPR and New York’s Cybersecurity Requirements for Financial Services Companies.

avigilon-blog-1

How Artificial Intelligence Is Changing Video Surveillance Today

Avigilon recently contributed an article to Security Informed that discusses how artificial intelligence (AI) is changing video surveillance today. The article outlines the need for AI in surveillance systems, how it can enable faster video search, and how it can help focus operators’ attention on key events and insights to reduce hours of work to minutes.

Below is the full article, modified from its original version to fit this blog post, which can also be found on SecurityInformed.com.

There’s a lot of excitement around artificial intelligence (AI) today — and rightly so. AI is shifting the modern landscape of security and surveillance and dramatically changing the way users interact with their security systems. But with all the talk of AI’s potential, you might be wondering: what problems does AI help solve today?

The Need for AI

The fact is, today there are too many cameras and too much recorded video for security operators to keep pace with. On top of that, people have short attention spans. AI is a technology that doesn’t get bored and can analyze more video data than humans ever possibly could.

It is designed to bring the most important events and insight to users’ attention, freeing them to do what they do best: make critical decisions. There are two areas where AI can have a significant impact on video surveillance today: search and focus of attention.

Faster Search

Imagine using the internet today without a search engine. You would have to search through one webpage at a time, combing through all its contents, line-by-line, to hopefully find what you’re looking for. That is what most video surveillance search is like today: security operators scan hours of video from one camera at a time in the hope that they’ll find the critical event they need to investigate further. That’s where artificial intelligence comes in.

With AI, companies such as Avigilon are developing technologies that are designed to make video search as easy as searching the internet. Tools like Avigilon Appearance Search™ technology — a sophisticated deep learning AI video search engine — help operators quickly locate a specific person or vehicle of interest across all cameras within a site.

When a security operator is provided with physical descriptions of a person involved in an event, this technology allows them to initiate a search by simply selecting certain descriptors, such as gender or clothing color. During critical investigations, such as in the case of a missing or suspicious person, this technology is particularly helpful as it can use those descriptions to search for a person and, within seconds, find them across an entire site.
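Conceptually, descriptor-based search is a filter over metadata the analytics engine has already extracted from video. The sketch below is purely illustrative; the fields and values are hypothetical and do not reflect Avigilon's actual data model:

```python
# Illustrative descriptor search over pre-extracted video metadata.
# Fields and values are hypothetical, not Avigilon's data model.
from dataclasses import dataclass

@dataclass
class Detection:
    camera: str
    timestamp: str
    person_gender: str
    clothing_color: str

detections = [
    Detection("lobby-cam-1",  "2018-10-02T09:14:03", "male",   "red"),
    Detection("garage-cam-4", "2018-10-02T09:21:47", "female", "blue"),
    Detection("lobby-cam-2",  "2018-10-02T09:25:10", "male",   "red"),
]

def search(detections, **descriptors):
    """Return detections matching every supplied descriptor (e.g. gender, clothing color)."""
    return [
        d for d in detections
        if all(getattr(d, field) == value for field, value in descriptors.items())
    ]

for hit in search(detections, person_gender="male", clothing_color="red"):
    print(f"{hit.camera} at {hit.timestamp}")
```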

Focused Attention

The ability of AI to reduce hours of work to mere minutes is especially significant when we think about the gradual decline in human attention spans. Consider all the information a person is presented with on a given day. They don’t necessarily pay attention to everything because most of that information is irrelevant. Instead, they prioritise what is and is not important, often focusing only on information or events that are surprising or unusual.

Now, consider how much information a security operator who watches tens, if not hundreds or thousands of surveillance cameras, is presented with daily. After just twenty minutes, their attention span significantly decreases, meaning most of that video is never watched and critical information may go undetected. By taking over the task of “watching” security video, AI technology can help focus operators’ attention on events that may need further investigation.

For instance, Avigilon Unusual Motion Detection (UMD) technology uses AI to continuously learn what typical activity in a scene looks like and then detect and flag unusual events, adding a new level of automation to surveillance.

This helps save time during an investigation by allowing operators to search through large amounts of recorded video faster, automatically focusing their attention on the atypical events that may need further investigation and enabling them to more effectively answer the critical questions of who, what, where and when.
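The underlying idea of learning what typical activity looks like and flagging the rest can be sketched with a simple statistical baseline. This toy example scores per-frame motion against a rolling mean and standard deviation; real products rely on far richer learned models, but the flag-the-unusual principle is the same:

```python
# Toy anomaly detector over per-frame motion scores: learn a running baseline,
# flag frames that deviate strongly from it.
from statistics import mean, stdev

def flag_unusual(motion_scores, window=30, threshold=3.0):
    """Yield (index, score) for frames whose motion deviates > threshold sigmas from the recent baseline."""
    for i in range(window, len(motion_scores)):
        baseline = motion_scores[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(motion_scores[i] - mu) > threshold * sigma:
            yield i, motion_scores[i]

# Mostly quiet scene with one burst of unusual motion.
scores = [1.0, 1.2, 0.9, 1.1, 1.0] * 10 + [9.5] + [1.0, 1.1] * 5
for frame, score in flag_unusual(scores, window=20):
    print(f"frame {frame}: unusual motion score {score}")
```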

As AI technology evolves, the rich metadata captured in surveillance video — like clothing color, age or gender — will add even more relevance to what operators are seeing. This means that in addition to detecting unusual activities based on motion, this technology has the potential to guide operators’ attention to other “unusual” data that will help them more accurately verify and respond to a security event.

The Key to Advanced Security

There’s no denying it, the role of AI in security today is transformative. AI-powered video management software is helping to reduce the amount of time spent on surveillance, making security operators more efficient and effective at their jobs. By removing the need to constantly watch video screens and automating the “detection” function of surveillance, AI technology allows operators to focus on what they do best: verifying and acting on critical events.

This not only expedites forensic investigations but enables real-time event response, as well. When integrated throughout a security system, AI technology has the potential to dramatically change security operations. Just as high-definition imaging has become a quintessential feature of today’s surveillance cameras, the tremendous value of AI technology has positioned it as a core component of security systems today, and in the future.

suse

5 Steps to Getting Started with Open Source Software Defined Storage and Why You Should Take Them

Executive Summary

Back in 2013, analyst group IDC calculated that the total amount of data created and replicated in the world had edged beyond 4.4 zettabytes – a staggering number. The statement made the headlines and was widely repeated across media websites dealing with Big Data and the related storage issues. At the time, IDC attributed the enormous growth to approximately 11bn connected devices – all generating and transmitting data, many containing sensors which also generate data.

IDC also predicted that the number of connected devices would triple to 30bn by 2020, before nearly tripling again to 80bn a few years later. If you've ever wondered what analysts mean by 'exponential' data growth, this is what they are talking about, and the growth keeps on coming. Even the forecasts for data growth are growing: three years later, in 2016, IDC revised its predictions upwards, forecasting that by 2025 the total volume of data stored globally would hit 180 zettabytes. Divide 180 by 4.4 and you have a staggering growth rate of roughly 40x in just twelve years.
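The implied growth rate is worth spelling out, taking the 2013 baseline of 4.4 ZB and the 2025 forecast of 180 ZB:

```python
# Implied data-growth figures from the 4.4 ZB (2013) and 180 ZB (2025) forecasts.
baseline_zb, baseline_year = 4.4, 2013
forecast_zb, forecast_year = 180.0, 2025

years = forecast_year - baseline_year              # 12 years
growth_factor = forecast_zb / baseline_zb          # ~40.9x
annual_growth = growth_factor ** (1 / years) - 1   # compound annual growth rate

print(f"{growth_factor:.1f}x over {years} years, roughly {annual_growth:.0%} per year")
# -> 40.9x over 12 years, roughly 36% per year
```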

Of course, not all of that data is made by enterprises, but IDC says they are responsible for 85% of it at some point in its lifecycle. So, whilst enterprises might not make all the data, and might not drive all its growth, they still have to architect and manage storage systems that can cope with the multiple challenges it brings.

OPERATIONAL CHALLENGES: VOLUME GROWTH, DIGITAL TRANSFORMATION AND ANALYTICS

Storage costs may have come down a lot in recent years, but the operational issues associated with managing storage keep piling up. Systems reach capacity and must be replaced. The surrounding architecture is shifting as organisations undergo digital transformation and migrate to hybrid and public cloud environments. Decisions must be made about what data should be kept and what should be deleted – decisions which must be kept on the right side of the law, and which revolve not only around the data itself but around the value of that data to the enterprise. That is a bigger challenge than some might think, as the financial potential in data is not always clear to the IT team, who are after all better placed to understand volume than value – a shortcoming which can lead to the enterprise equivalent of assessing the complete works of Shakespeare based on the number of pages in the book.

There are also substantial problems that come from moving large data sets over limited network links: shrinking backup windows, replication and recovery challenges that grow as disk failures become more frequent, the volume of unstructured data that comes with content like video, security and compliance challenges, making data available for analytics, and, for many, the ongoing cost of the skilled technical staff needed to manage it all.

These challenges aren't going away: like your data, they are only going to get bigger. Unsurprisingly, enterprises are turning to software defined storage as the solution. Indeed, IT Brand Pulse predicts not only that SDS will overtake traditional storage by 2020, but that 70 to 80% of storage will run on less expensive, commodity hardware managed by software in the same timeframe. If software defined storage is the answer to this challenge, why SUSE?

SEVEN REASONS WHY YOU SHOULD CHOOSE OPEN SOURCE SDS FROM SUSE

Open source software defined storage on the Ceph platform offers several key advantages:

• Cost reduction through elimination of proprietary software licensing costs
• Avoidance of proprietary vendor lock-in
• Reduction of hardware costs by moving to commodity hardware
• Support for object, block and file storage and key protocols on a single platform
• Scale out infrastructure – simply add new servers and nodes as capacity increases
• Service, support and management to mitigate risks and control operational cost
• Consistent innovation and first-to-market roadmap improvements

GETTING STARTED WITH OPEN SOURCE SOFTWARE DEFINED STORAGE

1. Start small. Storage administrators are rightly risk averse – so choose a first deployment where you can prove the value in terms of cost reduction without putting mission-critical data or processes at risk (a minimal connectivity and capacity check like the one sketched after this list is a sensible first step).

2. Find the right use cases. Good applications for Ceph Jewel include unstructured data like video footage, where the sheer volume of data presents challenges in costs, back-up and retention – simply being able to keep video files into the mid-term can be a struggle. Another good example is the cold store, where Ceph can be cheaper than services like Amazon Glacier in terms of dollars per GB, yet remains on premise and avoids hidden retrieval costs should you need your data back quickly.

3. Scale your usage with your skillset. As with any new technology, it takes time to become familiar with Ceph and build skills and confidence – both your own and your organisation's. Grow your deployment in line with your knowledge and capability.

4. Align your strategy for storage with your strategy for the data centre – it's not only storage that is moving to software defined. Consider what your infrastructure will look like in the future as enterprises move towards software defined everything. How will your data centre look in five years' time?

5. Seek expert help when and where you need it. As you move from the periphery to the centre, complexity and risk increase – manage that risk and maximise the benefits by working with skilled third parties.
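As a concrete first step for point 1, a quick way to confirm that you can reach a small test cluster and see how much capacity it exposes is via the python-rados bindings. A minimal sketch, assuming a default ceph.conf and an admin keyring are already in place:

```python
# Minimal health/capacity check against a test Ceph cluster using the
# python-rados bindings (assumes /etc/ceph/ceph.conf and an admin keyring exist).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

stats = cluster.get_cluster_stats()  # sizes reported in kilobytes
total_tb = stats["kb"] / (1024 ** 3)
used_tb = stats["kb_used"] / (1024 ** 3)

print(f"cluster id: {cluster.get_fsid()}")
print(f"pools:      {', '.join(cluster.list_pools())}")
print(f"capacity:   {used_tb:.2f} TB used of {total_tb:.2f} TB")

cluster.shutdown()
```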

Veritas-NetBackup-2

Top Reasons to Use Veritas NetBackup 8.1 Data Protection for Nutanix Workloads

The continual growth of data increases the use of virtualization and drives the need for highly scalable data protection and disaster recovery solutions. As a result, organizations are turning to hyperconverged solutions as a way to keep deployment and management of their infrastructure simple, by managing the entire stack as a single system. As more and more organizations adopt hyperconverged infrastructure, they are moving their mission-critical data and applications to it.

Read how you can protect modern workloads in hyperconverged environments with Veritas NetBackup™ 8.1, including the Parallel Streaming Framework, which simplifies modern workload backup and recovery and delivers the performance required to accelerate the transformation to the digital enterprise.

1. DATA PROTECTION FOR SIMPLE, EFFICIENT HYPERCONVERGED INFRASTRUCTURES.

According to Stratistics MRC, the Global Hyperconverged Infrastructure (HCI) Market accounted for approximately $1,460 million in 2016 and is expected to reach $17,027 million by 2023, growing at a CAGR of 42.0 percent from 2016 to 2023. Nutanix is the clear market leader in the HCI space.

Hyperconverged infrastructure is about keeping IT simple. Data protection should be too. Veritas NetBackup 8.1 with the Parallel Streaming Framework takes a multi-node infrastructure running Nutanix Acropolis and AHV and streams backups from all nodes simultaneously. This is a unique way of backing up Nutanix. In fact, we have partnered with Nutanix to certify protection of those workloads on HCI.
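Conceptually, streaming from every node at once rather than funneling the backup through a single proxy looks something like the sketch below. This is an illustration of the parallel-streaming idea only, not NetBackup's implementation; the node names and read function are hypothetical stand-ins:

```python
# Conceptual illustration of parallel streaming: read backup data from all
# cluster nodes concurrently instead of through a single proxy.
# Node names and read_node_data() are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

NODES = ["nutanix-node-1", "nutanix-node-2", "nutanix-node-3", "nutanix-node-4"]

def read_node_data(node):
    """Stand-in for reading one node's share of the backup stream."""
    # A real backup would stream changed blocks from the node here.
    return f"{node}: 250 GB streamed"

def parallel_backup(nodes):
    # One stream per node, all running at the same time.
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        for result in pool.map(read_node_data, nodes):
            print(result)

parallel_backup(NODES)
```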

2. ELIMINATE POINT PRODUCTS IN HIGHLY VIRTUALIZED NUTANIX AHV ENVIRONMENTS.

NetBackup, the market leader in enterprise backup and recovery software, delivers unified data protection for Nutanix AHV virtual environments to enterprises of any size, with proven enterprise scalability and automated VM protection and performance. Veritas and Nutanix combined deliver an integrated, hyperconverged solution that eliminates silos.

3. ON-DEMAND, AGENTLESS, DOWNLOADABLE PLUGIN ARCHITECTURE.

Commvault and Veeam require dedicated resources on a Nutanix server. NetBackup Parallel Streaming technology with scale-out, agentless workload plugins can be used to efficiently protect virtual machines in Nutanix HCI or other hyperconverged cluster environments. The backup environment can be scaled in the same fashion as the production environment it is protecting. The Nutanix plugin is available on-demand for as many backup hosts as you select. No agents, clients, or software are installed on the cluster itself.

4. REDUCED RISK WITH RECOVERY OF POINT-IN-TIME HISTORICAL DATA.

Unlike major competitive products, NetBackup 8.1 with Parallel Streaming technology enables customers to perform point-in-time backups while eliminating the need for an extra replication cluster, and at lower cost. Snapshots alone cannot provide reliable point-in-time historical data, so you need a data protection solution that helps you quickly retrieve historical data without worrying about replicating human errors. This ensures that you can consistently meet SLAs and compliance mandates.

5. CHOICE OF HARDWARE, HYPERVISORS, AND CLOUD CONNECTORS.

Veritas protects petabyte-scale workloads running on hyperconverged infrastructure and offers a choice of hardware, hypervisor or cloud vendors.

Simplify backup with our Veritas Flex appliance and create a very streamlined solution, or use the cloud as another storage tier for data. NetBackup has 40+ fully tested cloud connectors, which enable customers to leverage multi-cloud for long-term retention.