netapp-blog1

Hybrid Multi-Cloud Experience: Are You Ready for the New Reality?

Determining the right way to deliver a consumption experience that public cloud providers offer, regardless of location or infrastructure, is top-of-mind for many IT leaders today. You need to deliver the agility, scale, speed, and services on-premises that you can easily get from the public cloud.

Most enterprises can’t operate 100% in the public cloud. Between traditional applications that can’t be moved from the datacenter and regulatory compliance, security, performance, and cost concerns, it’s not realistic. But there is a way to have the best of both worlds. You can deliver an experience based on frictionless consumption, self-service, automation, programmable APIs, and infrastructure independence, and deploy hybrid cloud services between traditional and new applications, and between your datacenters and all of your public clouds. It’s possible to do cloud your way, with a hybrid multi-cloud experience.

At NetApp Insight™ 2018, we showed the world that we’re at the forefront of the next wave of HCI. Although HCI typically stands for hyperconverged infrastructure, our solution is a hybrid cloud infrastructure. With our Data Fabric approach, you can build your own IT, act like a cloud, and easily connect across the biggest clouds:

Make it easier to deploy and manage services.

You can provide a frictionless, cloudlike consumption experience, simplifying how you work on-premises and with the biggest clouds.

Free yourself from infrastructure constraints.

You can automate management complexities and command performance while delivering new services.

Never sacrifice performance again.

Scale limits won’t concern you. You can use the public cloud to extend from core to cloud and back and move from idea to deployment in record time.

When you stop trying to stretch your current infrastructure beyond its capabilities to be everything to everyone, and instead adopt a solution created to let you meet – and exceed – the demands of your organization, regardless of its size, you can take command and deliver a seamless experience.

Command Your Multi-Cloud Like a Boss

If you’re ready to unleash agility and latent abilities in your organization, and truly thrive with data, it’s time to break free from the limits of what HCI was and adopt a solution that lets you enable what it can be.

With the NetApp hybrid multi-cloud experience, delivered by the Data Fabric and hybrid cloud infrastructure, you’ll drive business success, meeting the demands of your users and the responsibilities of your enterprise. You’ll deliver the best user experiences while increasing productivity, maintaining simplicity, and delivering more services at scale. You won’t be controlled by cloud restrictions; you’ll have your clouds at your command.

And isn’t that the way it should have always been?

Start Your Mission.

Your Clouds at Your Command with NetApp HCI.

suse-blog2

Three Key Best Practices for DevOps Teams to Ensure Compliance

Driving Compliance with Greater Visibility, Monitoring and Audits

Ensuring Compliance in DevOps

DevOps has fundamentally changed the way software developers, QA, and IT operations professionals work. Businesses are increasingly adopting a DevOps approach and culture because of its power to virtually eliminate organizational silos by improving collaboration and communication. The DevOps approach establishes an environment where there is continuous integration and continuous deployment of the latest software with integrated application lifecycle management, leading to more frequent and reliable service delivery. Ultimately, adopting a DevOps model increases agility and enables the business to rapidly respond to changing customer demands and competitive pressures.

While many companies aspire to adopt DevOps, it requires an open and flexible infrastructure. However, many organizations are finding that their IT infrastructure is becoming more complex. Not only are they trying to manage their internal systems, but are now trying to get a handle on the use of public cloud infrastructure, creating additional layers of complexity. This complexity potentially limits the agility that organizations are attempting to achieve when adopting DevOps and significantly complicates compliance efforts.

Ensuring compliance with a complex infrastructure is a difficult endeavor. Furthermore, in today’s digital enterprise, IT innovation is a growing priority. However, many IT organizations still spend a great deal of time and money merely maintaining the existing IT infrastructure. To ensure compliance and enable innovation, this trend must shift.

With a future that requires innovation and an immediate need for compliance today, the question remains: How can IT streamline infrastructure management and reduce complexity to better allocate resources and allow more time for innovation while ensuring strict compliance?

Infrastructure management tools play a vital role in priming the IT organization’s infrastructure for innovation and compliance. By automating management, streamlining operations, and improving visibility, these tools help IT reduce infrastructure complexity and ensure compliance across multiple dimensions— ultimately mitigating risk throughout the enterprise.

Adopting a Three-Dimensional Approach to Compliance

For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes-Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

However, businesses in every industry need to consider compliance, whether staying current with OS patch levels to avoid the latest security threats or complying with software licensing agreements to avoid contract breaches. Without compliance, the business puts itself at risk for a loss of customer trust, financial penalties, and even jail time for those involved.

When examining potential vulnerabilities in IT, there are three dimensions that guide an effective compliance program: security compliance, system standards, and licensing or subscription management.

Security compliance typically involves a dedicated department that performs audits to monitor and detect security vulnerabilities. Whether a threat is noted in the press or identified through network monitoring software, it must be quickly remediated. With new threats cropping up daily, protecting the business and its sensitive data is critical.

For system standards compliance, most IT departments define an optimal standard for how systems should operate (e.g., operating system level, patch level, network settings, etc.). In the normal course of business, systems often move away from this standard due to systems updates, software patches, and other changes. The IT organization must identify which systems no longer meet the defined standards and bring them back into compliance.

The third dimension of compliance involves licensing or subscription management which reduces software license compliance concerns and unexpected licensing costs. Compliance in this area involves gaining better visibility into licensing agreements to manage all subscriptions and ensure control across the enterprise.

To mitigate risk across the business in all three dimensions of compliance, the IT organization needs infrastructure management tools that offer greater visibility, automation, and monitoring. According to Gartner’s Neil MacDonald, vice president and distinguished analyst, “Information security teams and infrastructure must adapt to support emerging digital business requirements, and simultaneously deal with the increasingly advanced threat environment. Security and risk leaders need to fully engage with the latest technology trends if they are to define, achieve, and maintain effective security and risk management programs that simultaneously enable digital business opportunities and manage risk.”

Best Practice #1:

Optimize Operations and Infrastructure to Limit Shadow IT

With so many facets to an effective compliance program, the complexity of the IT infrastructure makes compliance a difficult endeavor. One of the most significant implications of a complex infrastructure is the delay and lack of agility from IT in meeting the needs of business users, ultimately driving an increase in risky shadow IT activities.

As business users feel pressure to quickly exceed customer expectations and respond to competitive pressures, they will circumvent the internal IT organization altogether to access services they need. They see that they can quickly provision an instance in the public cloud with the simple swipe of a credit card.

These activities pose a threat to the organization’s security protections, wreak havoc on subscription management, and take system standards compliance out of the purview of IT.

Optimizing IT operations and reducing infrastructure complexity go a long way toward reducing this shadow IT. With an efficient server, VM, and container infrastructure, the IT organization can improve speed and agility in service delivery for its business users. An infrastructure management solution offers the tools IT needs to drive greater infrastructure simplicity. It enables IT to optimize operations with a single tool that automates and manages container images across development, test, and production environments, ensuring streamlined management across all DevOps activities. Automated server provisioning, patching, and configuration enables faster, consistent, and repeatable server deployments. In addition, an infrastructure management solution enables IT to quickly build and deliver container images based on repositories and improve configuration management with parameter-driven updates. Altogether, these activities support a continuous integration/continuous deployment model that is a hallmark of DevOps environments.
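To picture what parameter-driven, repeatable configuration looks like in practice, here is a minimal, hypothetical Python sketch: a single desired state is declared once and every host is reconciled against it, so re-running the check does nothing on hosts that already comply. The setting names and host inventory are invented for illustration and do not represent any particular management tool’s API.

```python
# Minimal sketch of parameter-driven, repeatable configuration: a desired
# state is declared once, then reconciled against each host.  The inventory
# and settings below are illustrative stand-ins, not a real management API.

DESIRED_STATE = {
    "os_patch_level": "2023-10",
    "ntp_server": "ntp.example.internal",
    "container_runtime": "podman",
}

# Simulated current state of three hosts (in practice this would be queried
# from the management tool's inventory).
hosts = {
    "web-01": {"os_patch_level": "2023-10", "ntp_server": "ntp.example.internal", "container_runtime": "podman"},
    "web-02": {"os_patch_level": "2023-07", "ntp_server": "ntp.example.internal", "container_runtime": "docker"},
    "db-01":  {"os_patch_level": "2023-09", "ntp_server": "pool.ntp.org",          "container_runtime": "podman"},
}

def plan_changes(current: dict, desired: dict) -> dict:
    """Return only the settings that drift from the desired state (idempotent)."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

for name, state in hosts.items():
    drift = plan_changes(state, DESIRED_STATE)
    if drift:
        print(f"{name}: remediate {drift}")
    else:
        print(f"{name}: compliant, nothing to do")
```

Because the plan is computed from declared parameters rather than hand-typed commands, the same deployment can be repeated consistently across development, test, and production environments.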

When DevOps runs like a well-oiled machine in this way, IT provisions and delivers cloud resources and services to business users with speed and agility, making business users less likely to engage in shadow IT behaviors that pose risks to the business. As a result, compliance in all three dimensions—security, licensing, and system standards—is naturally improved.

Best Practice #2:

Closely Monitor Deployments for Internal Compliance

In addition to optimizing operations, improving compliance requires the ability to easily monitor deployments and ensure internal requirements are met. With a single infrastructure management tool, IT can easily track compliance to ensure the infrastructure complies with defined subscription and system standards.

License tracking capabilities enable IT to simplify, organize, and automate software licenses to maintain long-term compliance and enforce software usage policies that support security. With global monitoring, licensing can be based on actual usage data, which creates opportunities for cost savings.

Monitoring compliance with defined system standards is also important to meeting internal requirements and mitigating risk across the business. By automating infrastructure management and improving monitoring, the IT organization can ensure system compliance through automated patch management and daily notifications of systems that are not compliant with the current patch level.
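As a small, hypothetical illustration of that daily-notification idea, the sketch below compares each system’s reported patch date with a required baseline and lists the stragglers; the inventory data and baseline are invented, and a real infrastructure management tool would supply this information through its own interfaces.

```python
# Sketch of a daily patch-compliance notification: compare each system's
# reported patch date against the required baseline and list the laggards.
from datetime import date

REQUIRED_PATCH_DATE = date(2023, 10, 1)  # assumed internal baseline

inventory = [
    {"host": "app-01", "last_patched": date(2023, 10, 3)},
    {"host": "app-02", "last_patched": date(2023, 8, 14)},
    {"host": "ci-runner-01", "last_patched": date(2023, 9, 28)},
]

non_compliant = [s for s in inventory if s["last_patched"] < REQUIRED_PATCH_DATE]

if non_compliant:
    report = "\n".join(
        f"- {s['host']} last patched {s['last_patched'].isoformat()}" for s in non_compliant
    )
    print(f"Daily compliance report ({date.today().isoformat()}):\n{report}")
else:
    print("All systems meet the current patch baseline.")
```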

Easy and efficient monitoring enables oversight into container and cloud VM compliance across DevOps environments. With greater visibility into workloads in hybrid cloud and container infrastructures, IT can ensure compliance with expanded management capabilities and internal system standards. By managing configuration changes with a single tool, the IT organization can increase control and validate compliance across the infrastructure and DevOps environments.

Best Practice #3:

Audit Deployments to Gain Visibility into Vulnerabilities

The fundamental goal of any IT compliance effort is to remedy any security vulnerabilities that pose a risk to the business. Before that can be done, however, IT must audit deployments and gain visibility into those vulnerabilities.

An infrastructure management tool offers graphical visualization of systems and their relationship to each other. This enables quick identification of systems deployed in hybrid cloud and container infrastructures that are out of compliance.

This visibility also offers detailed compliance auditing and reporting with the ability to track all hardware and software changes made to the infrastructure. In this way, IT can gain an additional understanding of infrastructure dependencies and reduce any complexities associated with those dependencies. Ultimately, IT regains control of assets by drilling down into system details to quickly identify and resolve any health or patch issues.
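The value of that visibility is easiest to see with a toy example: once system relationships are captured, it becomes a simple query to find which services sit on top of a non-compliant system. The systems, dependencies and audit results below are invented for illustration.

```python
# Toy dependency map used to ask "which services depend on a system that an
# audit has flagged as non-compliant?"  Real tools render this graphically.

depends_on = {
    "crm-app": ["db-01", "cache-01"],
    "reporting": ["db-01"],
    "cache-01": [],
    "db-01": [],
}

non_compliant = {"db-01"}  # e.g. flagged by a patch or configuration audit

for system, deps in depends_on.items():
    affected = [d for d in deps if d in non_compliant]
    if affected:
        print(f"{system} is exposed via non-compliant dependencies: {affected}")
```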

veritas-blog-2

The Future of Data Protection

Enterprises to spend 56% more of their IT budgets on cloud technologies by 2019.
The cloud momentum

As I meet with customers, most of whom are large global enterprises, the topic of the cloud continues to come up. Getting cloud right means new ways to stay competitive and stand out in their respective markets. For example, moving test/dev operations to the cloud has allowed many organizations to reap the benefits of increased productivity, rapid product delivery and accelerated innovation. Another benefit the cloud provides is on-demand infrastructure, which can be used as a landing zone for business operations in the event of a disaster.

No longer do IT staff have to spend countless hours installing a set of SQL, DB2 or Oracle servers to run in-house databases, CRM or analytics platforms. Databases are offered as services that are ready for the largest, most intense data warehouse needs, and the ability to add analytics capabilities on top gives organizations more opportunities to gain insights from their data. Additionally, companies have choice. Subscribing to multiple services from multiple cloud vendors simultaneously to test products or services in real time, only paying for the resources that are used or consumed, is hugely beneficial.

It’s this increased agility companies are after, and it’s what allows them to grow faster and better meet the needs of their customers.

Persisting concerns

But of course, there’s still quite a bit of uncertainty when it comes to cloud, which causes concern. Some of the most common concerns I hear about are related to data protection and service interruptions. There’s a fear of accidentally deleting critical data, being held hostage to ransomware, and the risk of application or resource failure. There’s also a general misunderstanding regarding how much of the responsibility for addressing these concerns sits with customers versus cloud providers.

Traditionally, the perception was that because servers and data were ‘tucked away’ safe and sound within the confines of the on-premises data center, those concerns were more easily addressed. But in the cloud, that’s not the case. When the data center moves to the cloud, rows and rows of 42U racks filled with blades and towers transform into on-demand cloud instances that can be spun up or down at will. This causes a sense of ‘losing control’ for many.

Some argue that the risks actually increase when you move to the cloud and no longer own the resources, but we believe those risks can be minimized, without sacrificing the rewards.

The trick here is to keep things simple, especially for IT teams that are responsible for protecting company data – wherever that data is stored. And that’s an important point, because it’s not an either/or conversation. According to RightScale’s 2018 State of the Cloud survey, 51% of enterprises operate with a hybrid strategy and 81% are multi-cloud. This further supports the view that, for most large enterprise customers, clouds exist alongside an existing on-premises data center strategy. Adding more point solutions that create silos is a losing strategy. Equally so are platform-specific technologies that are inflexible and do not account for the persistently heterogeneous, hybrid nature of enterprise IT environments.

Veritas has you covered

In the midst of this cloud evolution, Veritas has taken its years of data management expertise and leadership and developed a data protection technology called Veritas CloudPoint that is cloud-native, lightweight and flexible, yet robust, with core enterprise-grade data protection capabilities that can be extended to protect workloads in public, private, and hybrid cloud infrastructures. Veritas CloudPoint can easily be introduced to your AWS, Google Cloud, Microsoft Azure, or data center environments. Utilizing the available cloud infrastructure APIs, CloudPoint delivers an automated and unified snapshot-based data protection experience with a simple, intuitive, and modern UI. Figure 1 below shows the basics of how it works.

Figure 1 
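CloudPoint’s internals aren’t shown here, so purely as a generic illustration of the kind of cloud infrastructure API such snapshot-based protection builds on, the sketch below drives AWS EBS snapshots with boto3. The region and the backup=true tag convention are assumptions; this is not CloudPoint code.

```python
# Generic illustration of snapshot-based protection driven through a cloud
# provider's API (here AWS EBS via boto3).  The "backup=true" tag is an
# assumed convention for selecting which volumes to protect.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:backup", "Values": ["true"]}]
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Scheduled protection copy of {vol['VolumeId']}",
    )
    print(f"Started snapshot {snap['SnapshotId']} for {vol['VolumeId']}")
```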

But that is just the tip of the iceberg…

With the recent Microsoft and Google press releases announcing version 2.0 of Veritas CloudPoint, we have expanded the reach of CloudPoint to VMware environments as well as support for high-performance, on-premises databases such as MongoDB.

We are already working on our next release of CloudPoint, targeted for availability in the coming quarters, where we plan to add cloud support for VMware Cloud on AWS and IBM. For private cloud environments, we plan to offer VM-level and application-level support for Microsoft’s private cloud platform Azure Stack. We already announced in-guest support for Azure Stack with Veritas NetBackup earlier this year.

And, in staying consistent with my comment above regarding point solutions and platform specific solutions being a losing strategy, we plan to integrate CloudPoint with the next release of Veritas NetBackup, see figure 2 below. This should be welcome news for NetBackup customers in particular, as they will have an integrated way to address data protection requirements in the most optimized way possible, without adding more silos, and no matter where their workloads run. But, I’ll save the details and specifics on that for my next blog!

Figure 2 

Be on the lookout for more news in the coming months.

[1]Forward-looking Statement: Any forward-looking indication of plans for products is preliminary and all future release dates are tentative and are subject to change at the sole discretion of Veritas. Any future release of the product or planned modifications to product capability, functionality, or feature are subject to ongoing evaluation by Veritas, may or may not be implemented, should not be considered firm commitments by Veritas, should not be relied upon in making purchasing decisions, and may not be incorporated into any contract. The information is provided without warranty of any kind, express or implied.

gemalto-blog2

Breached Records More Than Doubled in H1 2018, Reveals Breach Level Index

Break Down of the 2018 Breach Level Index Stats:

• 18,525,816 records compromised every day
• 771,909 records compromised every hour
• 12,865 records compromised every minute
• 214 records compromised every second
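The hourly, per-minute and per-second figures are simply the daily total divided down, as this quick check shows:

```python
# Quick arithmetic check on the rates above.
records_per_day = 18_525_816
print(records_per_day // 24)           # 771,909 per hour
print(records_per_day // (24 * 60))    # 12,865 per minute
print(records_per_day // (24 * 3600))  # 214 per second
```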

Data breaches had a field day in 2018. According to the Breach Level Index, a database compiled by Gemalto to track publicly reported data breaches disclosed in news media reports, 2018 is one of only two years in which more than two billion records were compromised in publicly disclosed data breaches. The only other year was 2013, when all three billion Yahoo user accounts were exposed.

Gemalto has analyzed the Breach Level Index during the first half of 2018 and the findings are truly staggering. In just six months, the system tracked more than 3.3 billion breached data files. This figure represents a 72 percent increase over the first half of 2017.

The Breach Level Index didn’t contain as many reported incidents in the first half of 2018 as it did over the same period last year: 944 security events were reported during the period, compared with 1,162 breaches reported in the first half of 2017.

Key Findings from the First Half of 2018:

• Identity theft yet again the top data breach type: Identity theft was responsible for nearly four billion records compromised in the first half of the year, which represents growth of more than a thousand percent compared to the previous year. During the same time frame, the number of incidents involving identity theft decreased by a quarter.

• Malicious outsiders and accidental loss the most prevalent sources of data breach: Events involving malicious outsiders and accidental loss accounted for 56 percent and 34 percent of all data breaches, respectively.

• Social media weathered the greatest number of compromised records: Facebook wasn’t the only social giant that suffered a data breach in the first half of 2018. Twitter also experienced a security incident where a software glitch potentially exposed the login credentials of its 330 million users. In total, data breaches compromised 2.5 billion records stored by social media giants.

• Incidents in healthcare and financial services declined: The number of compromised files and data breaches decreased for both healthcare and financial services. These declines at least in part reflected the introduction of new national regulations that help regulate health data and financial transactions.

• North America led the way in publicly disclosed data breaches: This region represented more than 97 percent of data records compromised in the first half of 2018. In total, there were 559 events in the region, a number which represented 59 percent of all data breaches globally in the first half of 2018.

New Data Privacy Regulations Take Effect:

In the wake of new data protection regulations, reporting of security incidents is on the rise. Following the passage of the Australian Privacy Amendment (Notifiable Data Breaches) Act, the Office of the Australian Information Commissioner (OAIC) received 305 data breach notifications by the end of the second quarter of 2018. That is nearly triple the number submitted to the OAIC for the entire 2016-2017 fiscal year. Such growth in data breach reporting will likely continue through the rest of 2018 and beyond under GDPR and New York’s Cybersecurity Requirements for Financial Services Companies.

avigilon-blog-1

How Artificial Intelligence Is Changing Video Surveillance Today

Avigilon recently contributed an article to Security Informed that discusses how artificial intelligence (AI) is changing video surveillance today. The article outlines the need for AI in surveillance systems, how it can enable faster video search, and how it can help focus operators’ attention on key events and insights to reduce hours of work to minutes.

Below is the full article, modified from its original version to fit this blog post, which can also be found on SecurityInformed.com.

There’s a lot of excitement around artificial intelligence (AI) today — and rightly so. AI is shifting the modern landscape of security and surveillance and dramatically changing the way users interact with their security systems. But with all the talk of AI’s potential, you might be wondering: what problems does AI help solve today?

The Need for AI

The fact is, today there are too many cameras and too much recorded video for security operators to keep pace with. On top of that, people have short attention spans. AI is a technology that doesn’t get bored and can analyze more video data than humans ever possibly could.

It is designed to bring the most important events and insight to users’ attention, freeing them to do what they do best: make critical decisions. There are two areas where AI can have a significant impact on video surveillance today: search and focus of attention.

Faster Search

Imagine using the internet today without a search engine. You would have to search through one webpage at a time, combing through all its contents, line-by-line, to hopefully find what you’re looking for. That is what most video surveillance search is like today: security operators scan hours of video from one camera at a time in the hope that they’ll find the critical event they need to investigate further. That’s where artificial intelligence comes in.

With AI, companies such as Avigilon are developing technologies that are designed to make video search as easy as searching the internet. Tools like Avigilon Appearance Search™ technology — a sophisticated deep learning AI video search engine — help operators quickly locate a specific person or vehicle of interest across all cameras within a site.

When a security operator is provided with physical descriptions of a person involved in an event, this technology allows them to initiate a search by simply selecting certain descriptors, such as gender or clothing color. During critical investigations, such as in the case of a missing or suspicious person, this technology is particularly helpful as it can use those descriptions to search for a person and, within seconds, find them across an entire site.

Focused Attention

The ability of AI to reduce hours of work to mere minutes is especially significant when we think about the gradual decline in human attention spans. Consider all the information a person is presented with on a given day. They don’t necessarily pay attention to everything because most of that information is irrelevant. Instead, they prioritise what is and is not important, often focusing only on information or events that are surprising or unusual.

Now, consider how much information a security operator who watches tens, if not hundreds or thousands of surveillance cameras, is presented with daily. After just twenty minutes, their attention span significantly decreases, meaning most of that video is never watched and critical information may go undetected. By taking over the task of “watching” security video, AI technology can help focus operators’ attention on events that may need further investigation.

For instance, Avigilon Unusual Motion Detection (UMD) technology uses AI to continuously learn what typical activity in a scene looks like and then detect and flag unusual events, adding a new level of automation to surveillance.

This helps save time during an investigation by allowing operators to search through large amounts of recorded video faster, automatically focusing their attention on the atypical events that may need further investigation and enabling them to more effectively answer the critical questions of who, what, where and when.

As AI technology evolves, the rich metadata captured in surveillance video — like clothing color, age or gender — will add even more relevance to what operators are seeing. This means that in addition to detecting unusual activities based on motion, this technology has the potential to guide operators’ attention to other “unusual” data that will help them more accurately verify and respond to a security event.

The Key to Advanced Security

There’s no denying it, the role of AI in security today is transformative. AI-powered video management software is helping to reduce the amount of time spent on surveillance, making security operators more efficient and effective at their jobs. By removing the need to constantly watch video screens and automating the “detection” function of surveillance, AI technology allows operators to focus on what they do best: verifying and acting on critical events.

This not only expedites forensic investigations but enables real-time event response, as well. When integrated throughout a security system, AI technology has the potential to dramatically change security operations. Just as high-definition imaging has become a quintessential feature of today’s surveillance cameras, the tremendous value of AI technology has positioned it as a core component of security systems today, and in the future.

suse

5 Steps to Getting Started with Open Source Software Defined Storage and Why You Should Take Them

Executive Summary

Back in 2013, analyst group IDC calculated that the total amount of data created and replicated in the world had edged beyond 4.4 zettabytes – a staggering number. The statement made the headlines and was widely repeated across media websites dealing with Big Data and the related storage issues. At the time, IDC attributed the enormous growth to approximately 11bn connected devices – all generating and transmitting data, many containing sensors which also generate data.

IDC also predicted that the number of connected devices would triple to 30bn by 2020, before nearly tripling again to 80bn a few years later. If you’ve ever wondered what analysts mean by ‘exponential’ data growth, this is what they are talking about, and the growth keeps on coming. Even the forecasts for data growth are growing: three years later, in 2016, IDC revised its predictions upwards, forecasting that by 2025 the total volume of data stored globally would hit 180 zettabytes. Divide 180 by 4.4 and you have a staggering growth rate of roughly 40x in just nine years.

Of course, not all of that data is made by enterprises, but IDC says they are responsible for 85% of it at some point in its lifecycle. So, whilst enterprises might not make all the data, and might not drive all its growth, they still have to architect and manage storage systems that can cope with the multiple challenges it brings.

OPERATIONAL CHALLENGES: VOLUME GROWTH, DIGITAL TRANSFORMATION AND ANALYTICS

Storage costs may have come down a lot in recent years, but the operational issues associated with managing storage keep piling up. Systems reach capacity and must be replaced. The surrounding architecture is shifting as organisations undergo digital transformation and migrate to hybrid and public cloud environments. Decisions must be made about what data should be kept and what should be deleted – decisions which must be kept on the right side of the law, and which revolve not only around the data itself but around the value of that data to the enterprise. That is a bigger challenge than some might think, because the financial potential in data is not always clear to the IT team, who are, after all, better placed to understand volume than value – a shortcoming which can lead to the enterprise equivalent of assessing the complete works of Shakespeare by the number of pages in the book.

There are also substantial problems that come from moving large data sets over limited connections: backup routines with shrinking windows, replication and recovery challenges that grow as disk failures become more frequent at scale, the volume of unstructured data that comes with content like video, security and compliance demands, making data available for analytics, and, for many, the ongoing cost of the skilled technical staff needed to manage it all.

These challenges aren’t going away: like your data, they are only going to get bigger. Unsurprisingly, enterprises are turning to software defined storage as the solution. Indeed, IT Brand Pulse predicts not only that SDS will overtake traditional storage by 2020, but that 70 to 80% of storage will be running on less expensive commodity hardware managed by software in the same timeframe. If software defined storage is the answer to this challenge, why SUSE?

SEVEN REASONS WHY YOU SHOULD CHOOSE OPEN SOURCE SDS FROM SUSE

Open source software defined storage built on the Ceph platform offers several key advantages:

• Cost reduction through elimination of proprietary software licensing costs
• Avoidance of proprietary vendor lock-in
• Reduction of hardware costs by moving to commodity hardware
• Support for Object, Block and File and key protocols on a single platform
• Scale out infrastructure – simply add new servers and nodes as capacity increases
• Service, support and management to mitigate risks and control operational cost
• Consistent innovation and first-to-market roadmap improvements

GETTING STARTED WITH OPEN SOURCE SOFTWARE DEFINED STORAGE

1. Start small. Storage administrators are rightly risk averse – so choose your first deployment where you can prove the value in terms of cost reduction without putting mission critical data or processes at risk.

2. Find the right use cases. Good applications for Ceph Jewel include unstructured data like video footage, where the sheer volume of data presents challenges in cost, volume, back-up and retention – simply keeping video files available into the mid-term is a challenge in itself. Another good example is cold storage, where Ceph can be cheaper than services like Amazon Glacier in terms of dollars per GB, yet remains on premises and avoids hidden retrieval costs should you need your data back quickly. (A short sketch of writing video objects to Ceph’s S3-compatible gateway follows these steps.)

3. Scale your usage with your skillset. As with any new technology, it takes time to become familiar with Ceph and build skills and confidence – both your own and your organisation’s. Grow your deployment in line with your knowledge and capability.

4. Align your strategy for storage with your strategy for the data centre – it’s not only storage that is moving to software defined. Consider what your infrastructure will look like in the future as enterprises move towards software defined everything. How will your data centre look in five years’ time?

5. Seek expert help when and where you need it. As you move from the periphery to the centre, complexity and risk increase – manage that risk and maximise the benefits by working with skilled third parties.
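As mentioned in step 2, here is a brief, hypothetical sketch of what landing video footage on Ceph can look like in practice: because Ceph’s RADOS Gateway exposes an S3-compatible API, any standard S3 client (boto3 in this example) can archive objects to it. The endpoint, credentials, bucket and file paths are placeholders.

```python
# Sketch: archiving video footage to a Ceph cluster through its S3-compatible
# RADOS Gateway.  Endpoint, credentials, bucket and paths are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.internal:7480",  # RADOS Gateway endpoint (assumed)
    aws_access_key_id="CEPH_ACCESS_KEY",
    aws_secret_access_key="CEPH_SECRET_KEY",
)

bucket = "cctv-archive"
s3.create_bucket(Bucket=bucket)
s3.upload_file("/cctv/2018-06-01/cam01.mp4", bucket, "2018-06-01/cam01.mp4")
print("Archived:", [o["Key"] for o in s3.list_objects_v2(Bucket=bucket).get("Contents", [])])
```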

Veritas-NetBackup-2

Top Reasons to Use Veritas NetBackup 8.1 Data Protection for Nutanix Workloads

The continual growth of data increases the use of virtualization and drives the need for highly scalable data protection and disaster recovery solutions. As a result, organizations are turning to hyperconverged solutions as a way to keep deployment and management of their infrastructure simple, by managing the entire stack as a single system. As more and more organizations adopt hyperconverged infrastructure, they are moving their mission-critical data and applications to it.

Read how you can protect modern workloads in hyperconverged environments with Veritas NetBackup™ 8.1, including the Parallel Streaming Framework, which simplifies modern workload backup and recovery and delivers the performance required to accelerate the transformation to the digital enterprise.

1. DATA PROTECTION FOR SIMPLE, EFFICIENT HYPERCONVERGED INFRASTRUCTURES.

According to Stratistics MRC1, the Global Hyperconverged Infrastructure (HCI) Market accounted for approximately $1,460 million in 2016 and is expected to reach $17,027 million by 2023, growing at a CAGR of 42.0 percent from 2016 to 2023. Nutanix is the clear market leader in the HCI space.

Hyperconverged is about keeping IT simple. Data protection should be, too. Veritas NetBackup 8.1 with the Parallel Streaming Framework takes multi-node infrastructure running Nutanix Acropolis and AHV and streams backup data from all nodes simultaneously. This is a unique way of backing up Nutanix. In fact, we have partnered with Nutanix to certify protection of those workloads on HCI.
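The parallel-streaming idea itself is easy to picture: rather than funnelling a backup through a single proxy host, every node in the cluster streams its share of the data at the same time. The sketch below is only a conceptual illustration of that fan-out, not NetBackup’s implementation; the node names and read function are invented.

```python
# Conceptual sketch of parallel streaming: read backup data from all cluster
# nodes at once instead of funnelling it through a single proxy.
from concurrent.futures import ThreadPoolExecutor

nodes = ["nutanix-node-1", "nutanix-node-2", "nutanix-node-3", "nutanix-node-4"]

def stream_from_node(node: str) -> str:
    # In a real backup, this would pull that node's share of the snapshot data.
    return f"{node}: streamed its portion of the snapshot"

with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
    for result in pool.map(stream_from_node, nodes):
        print(result)
```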

2. ELIMINATE POINT PRODUCTS IN HIGHLY VIRTUALIZED NUTANIX AHV ENVIRONMENTS.

NetBackup, the market leader in enterprise backup and recovery software, delivers unified data protection for Nutanix AHV virtual environments to enterprises of any size, with proven enterprise scalability and automated VM protection and performance. Veritas and Nutanix combined deliver an integrated, hyperconverged solution that eliminates silos.

3. ON-DEMAND, AGENTLESS, DOWNLOADABLE PLUGIN ARCHITECTURE.

Commvault and Veeam require dedicated resources on a Nutanix server. NetBackup Parallel Streaming technology with scale-out, agentless workload plugins can be used to efficiently protect virtual machines in Nutanix HCI or other hyperconverged cluster environments. The backup environment can be scaled in the same fashion as the production environment it is protecting. The Nutanix plugin is available on-demand for as many backup hosts as you select. No agents, clients, or software are installed on the cluster itself.

4. REDUCED RISK WITH RECOVERY OF POINT-IN-TIME HISTORICAL DATA.

Unlike other major competitive products, NetBackup 8.1 with Parallel Streaming technology enables customers to perform point-in-time backups while eliminating the need for an extra replication cluster, and at lower cost. Snapshots alone cannot provide point-in-time historical data, so you need a data protection solution that helps you quickly retrieve historical data without worrying about replicating human errors – one that ensures you can consistently meet SLAs and compliance mandates.

5. CHOICE OF HARDWARE, HYPERVISORS, AND CLOUD CONNECTORS.

Veritas protects petabyte-scale workloads running on hyperconverged infrastructure and offers a choice of hardware, hypervisor or cloud vendors.

Simplify backup with our Veritas Flex appliance and create a very streamlined solution, or use the cloud as another storage tier for data. NetBackup has 40+ fully tested cloud connectors, which enable customers to leverage multiple clouds for long-term retention.

stratus

Gartner Research Emphasizes the Importance of Edge Computing

The term “edge computing” may seem like another technical buzzword, but respected research firm Gartner believes that edge computing is fast becoming an industry standard. The world is getting faster and our need for real-time data processing is picking up as well.

So, what exactly is the edge? Edge computing refers to solutions that facilitate data processing at or near the source of data generation. For example, in the context of the Internet of Things (IoT), the sources of data generation are usually things with sensors or embedded devices. Edge computing serves as the decentralized extension of campus networks, cellular networks, data center networks or the cloud.

In the newsletter, we share Gartner research that boldly states that “the edge will eat the cloud” and that, “the architecture of IT will flip upside down, as data and content move from centralized cloud and data centers to the edge, pulling compute and storage with it.” Gartner predicts that as the demand for greater immersion and responsiveness grows, so will edge computing. “Edge computing provides processing, storage and services for things and people far away from centralized cores, and physically close to things and people.”

The offline-first functionality that the edge provides also addresses issues like latency, bandwidth, autonomy and security. For example, when a question is posed to devices like Alexa or Google Home, there is an almost imperceptible lag while the data is retrieved from the cloud and relayed to the user – a scenario that becomes dangerous when applied to other emerging technologies.

Gartner breaks it down: “For a self-driving car traveling 70 miles per hour, 100 ms equals 10 feet. But if we have two self-driving cars, or two dozen all traveling toward the same location, 100 ms is an eternity. A lot can happen in a few milliseconds – lives could be at risk.” The cloud simply can’t keep up.
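Gartner’s figure checks out with a quick back-of-the-envelope calculation:

```python
# At 70 mph, how far does a car travel during 100 ms of round-trip latency?
speed_ft_per_s = 70 * 5280 / 3600   # 70 miles per hour in feet per second
distance_ft = speed_ft_per_s * 0.1  # distance covered in 100 ms
print(round(distance_ft, 1))        # ~10.3 feet, roughly the 10 feet quoted
```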

The Gartner research presented also discusses the importance of edge technology as IoT continues to explode. “More and more physical objects are becoming networked and contain embedded technology to communicate and sense or interact with their internal states or the external environment. By 2020, 20 billion “things” will be connected to the internet.” Gartner states, “A more interactive, immersive human-machine interface will force data and computing to move closer physically, and to live in the world with people.”

gemalto-cloud-security

Cloud Security: How to Secure Your Sensitive Data in the Cloud

In today’s always-connected world, an increasing number of organisations are moving their data to the cloud for operational efficiency, cost management, agility, scalability, etc.

As more data is produced, processed, and stored in the cloud – a prime target for cybercriminals who are always lurking around to lay their hands on organisations’ sensitive data – protecting the sensitive data that resides on the cloud becomes imperative.

Data Encryption Is Not Enough

While data encryption definitely acts as a strong deterrent, merely encrypting the data is not enough in today’s perilous times, when cyber attacks are getting more sophisticated with every passing day. Since the data physically resides with the cloud service provider (CSP), it is out of the direct control of the organisations that own the data.

In a scenario like this where organisations encrypt their cloud data, storing the encryption keys securely and separately from the encrypted data is of paramount importance.

Enter BYOK

To ensure optimal protection of their data in the cloud, an increasing number of organisations are adopting a Bring Your Own Key (BYOK) approach that enables them to securely create and manage their own encryption keys, separate from the CSP that hosts their sensitive data.

However, as more encryption keys are created for an increasing number of cloud environments like Microsoft Azure, Amazon Web Services (AWS) and Salesforce, efficiently managing the encryption keys of individual cloud applications and securing access to them becomes very important. This is why many organisations use External Key Management (EKM) solutions to cohesively manage all their encryption keys in a secure manner, free from any unauthorised access.

Take the example of Office 365, Microsoft’s on-demand cloud application suite that is widely used by organisations across the globe to support employee mobility by facilitating anytime, anywhere access to Microsoft’s email application, MS Outlook, and business utility applications like MS Word, Excel and PowerPoint.

Gemalto’s BYOK solutions (SafeNet ProtectApp and SafeNet KeySecure) for Office 365 not only ensure that organisations have complete control over their encrypted cloud data, but also seamlessly facilitate efficient management of the encryption keys of other cloud applications like Azure, AWS, Google Cloud and Salesforce.

Below is a quick snapshot of how SafeNet ProtectApp and SafeNet KeySecure work with Azure BYOK (an illustrative code sketch follows the steps):

1. SafeNet ProtectApp and KeySecure are used to generate an RSA Key Pair of the required Key size using the FIPS 140-2 certified RNG of KeySecure.

2. The SelfSignedCertificateUtility.jar (a Java-based application) then interacts with KeySecure using a TLS-protected NAE service to fetch the Key Pair and create a Self-signed Certificate.

3. The Key Pair and Self-signed Certificate are stored securely in a PFX or P12 container that encrypts the contents using a Password-based Encryption (PBE) Key.

4. The PFX file (an encrypted container protected by a PBE Key) is then uploaded to Azure Key Vault using the Azure Web API / REST.

5. The transmission of the PFX file to the Azure Key Vault is protected using security mechanisms implemented by Azure on their Web API (TLS / SSL, etc.).

6. Since the PFX files will be located on the same system on which the SelfSignedCertificateUtility.jar utility will be executed, industry-best security practices like ensuring pre-boot approval, enabling two-factor authentication (2FA), etc. should be followed.

7. Once the Keys are loaded into Azure Key Vault, all encryption operations happen on the Azure platform itself.
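As a hedged illustration of steps 1 through 4, and not the vendor’s actual utility or its KeySecure integration, the sketch below generates an RSA key pair and a self-signed certificate with Python’s cryptography package, wraps them in a password-protected PFX container, and imports that container into Azure Key Vault with the Azure SDK for Python. The vault URL, names and passwords are placeholders.

```python
# Hedged sketch of the BYOK flow with generic tooling: generate an RSA key
# pair, wrap it in a self-signed certificate, protect both in a password-based
# PFX container, and import it into Azure Key Vault.  This mirrors the steps
# above only conceptually; it does not use KeySecure's FIPS 140-2 RNG or the
# vendor's Java utility, and all names and secrets are placeholders.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import BestAvailableEncryption, pkcs12
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

# 1. Generate an RSA key pair (KeySecure would do this with its certified RNG).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Create a self-signed certificate for the key pair.
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"byok-demo")])
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(subject)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# 3. Store the key pair and certificate in a password-protected PFX (PKCS#12) container.
pfx_bytes = pkcs12.serialize_key_and_certificates(
    name=b"byok-demo",
    key=key,
    cert=cert,
    cas=None,
    encryption_algorithm=BestAvailableEncryption(b"pfx-password"),
)

# 4. Upload the PFX to Azure Key Vault over TLS (placeholder vault URL).
client = CertificateClient(
    vault_url="https://my-byok-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
client.import_certificate(
    certificate_name="byok-demo",
    certificate_bytes=pfx_bytes,
    password="pfx-password",
)
print("Key material imported; encryption operations now run inside Key Vault.")
```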