
Five Ways to Rethink Your Endpoint Protection Strategy

Device security is no longer about traditional antivirus versus next-generation endpoint protection. The truth is you need a layered and integrated defense that protects your entire digital terrain and all types of devices—traditional and nontraditional. ESG Senior Principal Analyst Jon Oltsik frames it this way: “… endpoint security should no longer be defined as antivirus software. No disrespect to tried-and-true AV, but endpoint security now spans a continuum that includes advanced prevention technologies, endpoint security controls, and advanced detection/response tools.”

In today’s survival-of-the-fittest landscape, here are five ways to not just survive, but thrive:

1. More tools do not make for a better defense.

Scrambling to adapt to the evolving landscape, many security teams have resorted to bolting on the latest “best-of-breed” point solutions. While each solution may bring a new capability to the table, it’s important to look at your overall ecosystem and how these different defenses work together.

There are serious shortfalls in deploying disparate, multivendor endpoint security technologies that don’t collaborate with each other. Because each point solution sees only its own slice of the environment, the burden of connecting the dots falls on you. Adversaries are quick to take advantage of the windows of opportunity these manual processes create, evading defenses or slipping through the cracks unnoticed.

2. It’s not about any one type of countermeasure.

As a never-ending array of “next-generation” solutions started to emerge and flood the marketplace, you were likely told more than once that antivirus isn’t enough and that what you need to do is switch to next-gen. In reality, it’s not about achieving a next-generation approach or finding the best use for antivirus. It’s really about implementing a holistic device security strategy that connects and coordinates an array of defenses. This includes signature-based defense (which eliminates 50% of the attack noise, allowing algorithmic approaches to run more aggressively with fewer false alarms), plus exploit protection, reputations, machine learning, ongoing behavioral analytics, and roll-back remediation to reverse the effects of ransomware and other threats.
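To make the idea of connected, layered countermeasures concrete, here is a minimal Python sketch of a verdict pipeline that runs cheap signature and reputation checks before a more aggressive machine-learning layer. The layer order mirrors the paragraph above, but the thresholds, hash set, and scores are hypothetical illustrations, not any vendor’s actual engine.

```python
# Minimal sketch of a layered verdict pipeline (illustrative thresholds only).
from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str
    reputation: float   # 0.0 = known bad ... 1.0 = known good
    ml_score: float     # model-estimated probability that the file is malicious

KNOWN_BAD_HASHES = {"<sha256-of-known-malware>"}   # signature layer (placeholder)

def verdict(sample: Sample) -> str:
    # Layer 1: signatures cheaply remove a large share of the attack noise.
    if sample.sha256 in KNOWN_BAD_HASHES:
        return "block: signature match"
    # Layer 2: reputation catches known-bad sources without heavy analysis.
    if sample.reputation < 0.2:
        return "block: poor reputation"
    # Layer 3: with the easy cases filtered out, the ML/behavioral layer can
    # run more aggressively while producing fewer false alarms overall.
    if sample.ml_score > 0.8:
        return "quarantine: ML detection (candidate for roll-back remediation)"
    return "allow"

print(verdict(Sample(sha256="abc123", reputation=0.9, ml_score=0.1)))  # -> allow
```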

Each device type has its own security needs and capabilities. You need to be able to augment built-in device security with the right combination of advanced protection technologies. The key to being resilient is to deliver inclusive, intelligently layered countermeasures. Antivirus is a tool that has its place, with benefits and limitations just like every other countermeasure, in this unified, layered approach to device security.

3. All devices are not created equal.

Today, “endpoint” has taken on a whole new meaning. The term now encompasses traditional servers, PCs, laptops, mobile devices (both BYOD and corporate-issued), cloud environments, and IoT devices like printers, scanners, point-of-sale handhelds, and even wearables.

Adversaries don’t just target one type of device—they launch organized campaigns across your entire environment to establish a foothold and then move laterally. It’s important to harness the defenses built into modern devices while extending their overall posture with advanced capabilities. Some endpoints, like Internet of Things (IoT) devices, lack built-in protection and will need a full-stack defense. Ultimately, the goal is to not duplicate anything and not leave anything exposed.

4. All you need is a single management console.

If you’ve been deploying bolted-on endpoint security technologies or several new, next-generation solutions, you may be seeing that each solution typically comes with its own management console. Learning and juggling multiple consoles can overtax your already stretched-thin security team and make them less effective, as they are unable to see your entire environment and the security posture of all your devices in one place. But it doesn’t have to be this way. Practitioners can more quickly glean the insights they need to act when they can view all the policies, alerts, and raw data from a centralized, single-pane-of-glass console.

5. Mobile devices are among the most vulnerable.

Mobile devices are an easy target for attackers and provide a doorway to corporate networks. We’re seeing more app-based attacks, targeted network-based attacks, and direct device attacks that take advantage of low-level footholds. For this reason, it’s essential to include mobile devices in your security strategy and protect them as you would any other endpoint.

 


Veeam Data Protection for SharePoint

Microsoft Office 365 adoption is bigger than ever. When Veeam introduced Veeam Backup for Microsoft Office 365 in November 2016, it was an immediate success, and Veeam has continued building on it. Version 1.5, released in 2017, added automation and scalability improvements that proved especially valuable for service providers and larger deployments. Today, Veeam is announcing v2, which takes the solution to a completely new level by adding support for Microsoft SharePoint and Microsoft OneDrive for Business. Download it right now!

Data protection for SharePoint

By adding support for SharePoint, Veeam extends the granular restore capabilities known from Veeam Explorer for Microsoft SharePoint into Office 365. This allows you to restore individual items – documents, calendars, libraries, and lists – as well as a complete SharePoint site when needed. With the new release, Veeam can also help you back up your data if you are still migrating, still running Microsoft SharePoint on premises, or operating in a hybrid scenario.

Data protection for OneDrive for Business

The most requested feature was support for OneDrive for Business, as more and more companies are using it to share files, folders, and OneNote notebooks internally. With Veeam Explorer for Microsoft OneDrive for Business, you can granularly restore any item available in your OneDrive folder (including Microsoft OneNote notebooks). You can perform an in-place restore, restore to another OneDrive user or another folder in OneDrive, or export files in their original format or as a ZIP file. And if a ransomware attack encrypts your entire OneDrive folder, Veeam can perform a full restore as well.

Enhancements

Besides the new platform support, this release also adds several enhancements.

A newly redesigned job wizard brings major ease-of-use and backup flexibility improvements, allowing easier and more flexible selection of Exchange Online, OneDrive for Business, and SharePoint Online objects. It is now easier than ever to set up, search, and maintain visibility into your Office 365 data, and to granularly search, scale, and manage backup jobs for tens of thousands of Office 365 users.

Restore data located in Microsoft Teams! You can protect Microsoft Teams when the underlying storage of the Teams data is within SharePoint Online, Exchange Online, or OneDrive for Business. While the data can be protected and restored, the Teams tabs and channels themselves cannot; after restoring an item, however, you can reattach it manually.

Compare items with Veeam Explorer for Microsoft Exchange. It is now possible to compare backed-up items with your production mailbox to see which properties are missing and restore only those, without restoring the full item.

As with the 1.5 release, everything is also available for automation, either by leveraging PowerShell or the RESTful API, which now fully supports OneDrive for Business and SharePoint.
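As a rough illustration of driving that automation from a script, the Python sketch below authenticates against the RESTful API and starts a backup job. The server address, port, endpoint paths, and payloads are assumptions made for the example; the actual routes are documented in the Veeam Backup for Microsoft Office 365 RESTful API reference, and the PowerShell cmdlets offer an equivalent path.

```python
# Hypothetical example of starting a backup job over the RESTful API.
# Endpoint paths and payloads below are illustrative assumptions only.
import requests

BASE = "https://vbo-server.example.local:4443/v2"   # server and port are placeholders

# Obtain a bearer token (authentication route assumed for illustration).
auth = requests.post(f"{BASE}/token",
                     data={"grant_type": "password",
                           "username": "administrator",
                           "password": "secret"},
                     verify=False)
headers = {"Authorization": f"Bearer {auth.json()['access_token']}"}

# List configured backup jobs and start the first one (paths assumed).
jobs = requests.get(f"{BASE}/Jobs", headers=headers, verify=False).json()
job_id = jobs[0]["id"]
requests.post(f"{BASE}/Jobs/{job_id}/Action", json={"start": None},
              headers=headers, verify=False)
print(f"Requested start of backup job {job_id}")
```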

Another enhancement is the ability to change the GUI color as you like. This option made its way into Veeam Backup for Microsoft Office 365 after being introduced in Veeam Backup & Replication.

Starting with version 2, Veeam Backup for Microsoft Office 365 is now able to automatically check for updates, so you can rest assured you are always up to date.

And finally, the log collection wizard has been updated: it now allows you to collect logs for support in case you run into an issue, as well as configure extended logging for all components.

Source: https://www.veeam.com/blog/onedrive-sharepoint-backup.html


Embedded Whitelisting Meets Demand for Cost-Effective, Low-Maintenance, and Secure Solutions

McAfee® Embedded Control frees Hitachi KE Systems’ customers to focus on production, not security
Hitachi KE Systems, a subsidiary of Hitachi Industrial Equipment Systems, part of the global Hitachi Group, develops and markets network systems, computers, consumer products, and industrial equipment for a wide variety of industries. Hitachi KE meets the needs of customers who seek high quality yet cost-effective, low-maintenance systems for their operational technology (OT) environments—they don’t want to have to think about security at all.

In addition to the custom tablet and touch panel terminals and other hardware and software Hitachi KE sells, the Narashino, Japan-based company also offers a one-stop shop for its solutions—from solution construction (hardware and software development) to operation and integration to maintenance and replacement. To provide the best solutions across this wide spectrum of offerings, the company often turns to partners to augment its technology.

“To expand our Internet of Things [IoT] solutions and operational features and functionality, we enhance our own products and systems with the latest digital and network technologies,” says Takahide Kume, an engineer in the Terminal Group at Hitachi KE. “We strive to provide the technologically optimal as well as most cost-effective solution for our customers.”

Highest Customer Concern: Production

Although the risk of a zero-day attack in their OT environments has increased dramatically as IoT has become commonplace, most of Hitachi KE’s customers do not have information security personnel on staff. For them, the only thing that counts is production. Does the technology solution enable faster, higher-quality, or more cost-effective production?

“Despite many malware-related incidents in the news, many of our customers honestly don’t care as much as they should about cybersecurity,” acknowledges Kume. “We have to educate their management that lack of security, if malware strikes, could seriously hurt production and business in general. Thankfully, making that point is becoming easier and easier with malware incidents on the rise.”

“We decided that embedded whitelisting was the best solution for reduced operating cost and high security in an OT environment… We felt McAfee offered the best long-term support and the highest quality technical support.”
—Takahide Kume, Engineer, Hitachi KE Systems

Best Solution for Minimal Overhead Yet High Security

Even before its customers began to catch on to the need for secure solutions, Hitachi KE began looking for a way to build security into its systems, which run Microsoft Windows, Linux, and Google Android operating systems, often in multiple versions within a customer’s environment. “Because our customers often lack security personnel, security must be extremely easy and basically run itself,” explains Kume. “When a system is infected in the field, the person on the front line usually can’t do anything about it.”

“We decided that embedded whitelisting was the best solution for reduced operating cost and high security in an OT environment,” adds Kume. After examining leading whitelisting solutions, Hitachi KE chose McAfee® Embedded Control software.

“We felt McAfee offered the best long-term support and the highest quality technical support along with robust security,” he continues. “With McAfee Embedded Control installed, no one has to take care of the system in the field… Industrial systems are often set and left alone for a long time—they can be overtaken by malware without anyone realizing it. For such systems, McAfee Embedded Control is the best solution.”

McAfee Embedded Control maintains the integrity of Hitachi KE systems by only allowing authorized code to run and only authorized changes to be made. It automatically creates a dynamic whitelist of the authorized code on the system on which it resides. Once the whitelist is created and enabled, the system is locked down to the “known good” baseline, thereby blocking execution of any unauthorized applications or zero-day malware attacks.
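As a rough conceptual analogy to the dynamic whitelisting described above (and not McAfee Embedded Control’s actual implementation), the Python sketch below builds a baseline of “known good” file hashes and then allows execution only for binaries on that baseline; the directory path is a placeholder.

```python
# Conceptual whitelisting toy: baseline the authorized code, then deny anything else.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_whitelist(directory: str) -> set[str]:
    """Scan the authorized code on the system and record its hashes (the baseline)."""
    return {sha256_of(p) for p in Path(directory).rglob("*") if p.is_file()}

def is_execution_allowed(binary: str, whitelist: set[str]) -> bool:
    """Once locked down, only binaries matching the 'known good' baseline may run."""
    return sha256_of(Path(binary)) in whitelist

whitelist = build_whitelist("/opt/terminal-app/bin")   # placeholder baseline directory
print(is_execution_allowed(__file__, whitelist))       # this script is not on the baseline -> False
```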

“Almost Maintenance-Free” Solution Reduces TCO

Users of Hitachi KE Systems with McAfee Embedded Control can easily configure the machines, specifying exactly which applications and actions will be allowed to run and who has the authority to make modifications in the future. The minimal impact of the McAfee software on performance also means fewer problems to troubleshoot.

“McAfee Embedded Control is an almost maintenance-free solution,” says Kume. “It is extremely easy to update when needed and doesn’t require our customers to have a security expert on staff. Minimal maintenance lowers the total cost of ownership for our customers.”

Even if security hasn’t been their top priority, Hitachi KE customers have been very pleased with the addition of McAfee Embedded Control to their solutions. “Having McAfee security built in gives our customers and end users peace of mind that they can connect our systems to the Internet,” says Kume. “McAfee has had many success stories within the Hitachi Group, and this is just one of them.”

“Having McAfee security built in gives our customers and end users peace of mind that they can connect our systems to the Internet.”
—Takahide Kume, Engineer, Hitachi KE Systems


Creating Japanese Mountain Shrine with 3ds Max

Manuel Fuentes, architect and aspiring games artist, breaks down his process for creating his Japanese Mountain Shrine. Turn up your audio and press play, we hope you enjoy this Zen and charming scene as much as we do.

Hi, my name is Manuel and I am an architect and aspiring games environment artist from Mexico. In the beginning I started working with 3ds Max doing mostly architectural visualization. Over the years, as I got more familiar with it, I’ve used it for a variety of tasks such as rapid prototyping of buildings, rendering realistic architectural scenes, and, more recently, creating game-ready environments. The scene in this article was created as my entry for the Artstation Feudal Japan Challenge in the real-time environment category.

All the architectural elements, the rocks, and the small shrubs were modelled in 3ds Max. The detail sculpting of trees and rocks was done in ZBrush, and the texturing with Substance Painter/Designer. Later, the meshes were adjusted in 3ds Max for final optimization and UV adjustments before exporting to UE4 for the final rendering of the scene.

How to build the scene

The initial blockout of the scene was done using boxes with very low subdivisions to easily adjust the proportions and properly balance the scene. After this was completed, I could use 3ds Max’s Modifier Stack to add more complexity to the models without destroying the original geometry. This allowed me to quickly adjust general proportions as the scene grew more complex by dropping down to the first levels of the Modifier Stack, then going back up to the higher levels to continue adjusting the higher-poly details.

Adding in the elements

The roof and wood details around the scene were created using a basic spline with a Sweep modifier and then some Edit Poly modifiers to create the desired final shape. Again, this non-destructive approach allowed me to duplicate an element and reuse it somewhere else in the scene. I would simply go to the lower levels of the Modifier Stack, adjust the spline to fit the new building, and then use Edit Poly to modify it and rotate it into place.

I used V-Ray to render some previews of my scene during the workflow and before exporting the elements. All the modular terrain elements were first modeled and dimensioned in 3ds Max to make sure they fit together to shape the mountain and landscape scene. They were modelled using basic boxes with Edit Poly modifiers in 3ds Max, and later the detail sculpt was done in ZBrush.

Character animation

Once the scene was complete, the final step was to do an animation with a ghost dragon flying around the scene. This was a first for me, as I had never animated a character before, but the CAT rig was very easy to understand. After applying a Skin modifier to a model I imported from ZBrush and modifying a basic motion animation using curves, I changed the default walk into something that resembled a flying motion. The model and animation were then exported as an FBX and integrated into the scene.


7 Tips to Help Choose an SMS Service Provider

You’ve done your legwork and have now decided to leverage the powerful benefits of using SMS technology to engage with your customers more effectively. The ubiquitous SMS (text) can help companies improve their communications flow, internally as well as with customers. It is one of the most cost-effective broadcast media, with one of the highest open and read rates.

So how does an organization choose the right SMS provider? A simple Google search will give you endless options. With the plethora of options in an increasingly complex market, it is a daunting task to choose the right one. There are simply too many SMS vendors in the market offering a myriad of solutions, and often they all seem to fulfill your project requirements. Apart from pricing, here are the other key factors to take into consideration in making the best choice for your business.

1. Cost: Pricing is a key consideration, especially for SMBs or for companies that need to reach out to thousands of customers regularly. Do confirm with the SMS vendor that the quotation explicitly reflects all charges for the service you need (setup fee, monthly hosting fee, per-SMS fee, and so on) and that there are no hidden costs.

2. SMS API for ease of integration: Make sure your vendor’s SMS API documents are comprehensive and uncomplicated. The API should be able to integrate easily with all your company’s existing network applications, including mobile apps, open source software, CRM systems, social messengers, and collaboration tools. TalariaX fully supports formats such as SMTP email, SNMP traps, syslog, and HTTP POST, along with all IT equipment and devices. Furthermore, sendQuick (TalariaX’s flagship mobile messaging product) integrates with any existing applications to send messages via SMS, email, social messengers (WhatsApp Business, Facebook Messenger, LINE, WeChat, Viber, Telegram) and collaboration tools (Microsoft Teams, Slack, Cisco WebEx). A minimal example of this kind of HTTP POST integration appears after this list.

3. Reliable Message Delivery: Cheap pricing does not necessarily translate into good delivery. A reliable SMS provider should deliver messages quickly and efficiently at competitive rates. They should have direct and strong partnerships with local and global aggregators and telecom network providers to ensure messages are delivered with minimal delay and few bounce-backs.

4. Support: Is there a local account manager attending to your project requirements responsibly and proactively? If so, he or she needs to listen to your project requirements and limitations, then propose the appropriate solutions or methodology to fulfill your requirements while allowing room for scalability in the future. Furthermore, he or she needs to be able to walk your team closely through the evaluation, purchasing, and post-purchase processes. Also, do check if they provide other means of support in addition to email, such as phone, web chat, or 24/7 accessibility, and anything else that is relevant for you.

5. Global reach: The SMS vendor’s network coverage and reach are important factors to consider. With globalisation and the evolution of e-commerce, more businesses are expanding their operations outside of their home country. It is important that the SMS provider has global connectivity and can send SMS texts to different countries across multiple mobile networks. TalariaX SMS gateways have been deployed across multiple industry verticals in over 50 countries across the globe.

6. Scalability and Testing: An important item on the checklist is scalability and testing of the system. Is there a proof-of-concept or trial account during the user acceptance testing (UAT) stage to confirm whether you can send and receive messages from your chosen mobile operators or mobile phone numbers through the SMS vendor? This will ensure minimal hiccups when initiating a campaign.

7. 2-way messaging: If you are looking for interactive responses to your SMS texts, you should ask the SMS gateway provider if they provide 2-way SMS messaging. Many companies are moving towards 2-way messaging as it allows them to interact with their consumers more closely, and it can be used for various job functions like job dispatch, appointment reminders, promotional messaging, security alerts, notifications, etc. sendQuick can send and receive 2-way alerts from IP-addressable infrastructure, third-party applications, and users across the enterprise.
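Tying back to point 2 above, here is a minimal Python sketch of submitting a message through a gateway’s HTTP POST interface. The URL and parameter names are hypothetical placeholders rather than sendQuick’s documented API; your gateway’s API guide defines the real interface.

```python
# Hypothetical HTTP POST to an SMS gateway; all names below are placeholders.
import requests

GATEWAY_URL = "https://sms-gateway.example.com/api/send"   # placeholder endpoint

def send_sms(mobile: str, message: str) -> bool:
    response = requests.post(GATEWAY_URL, data={
        "user": "apiuser",       # gateway account (placeholder)
        "password": "apipass",   # gateway credential (placeholder)
        "mobile": mobile,        # destination number in international format
        "message": message,      # text body; keep within SMS length limits
    }, timeout=10)
    return response.ok

if send_sms("+6591234567", "Server DB01 unreachable - please investigate"):
    print("Alert submitted to the SMS gateway")
```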


Why Artificial Intelligence Will Make Work More Human

What does the rise of artificial intelligence mean for the world of work?

First, it’s clear that it’s a huge opportunity for increased productivity. Gartner believes that this year alone, half a billion users will save two hours a day using artificial intelligence—that’s up to half a million years of improved efficiency!

McKinsey has estimated the percentage of various work tasks and sectors that could now be automated using new technology. After predictable physical work (which can increasingly be done by robots) the biggest opportunities are in mainstream business tasks such as data collection and data processing. McKinsey believes that over 60% of these tasks could now be automated.

Given these efficiencies, it’s only natural that some worry about the effects on employment. The good news is that, so far at least, these technologies have been displacing work rather than replacing workers.

In other words, machine learning excels at replacing the more boring and repetitive aspects of knowledge work—freeing workers to spend time on more rewarding and empowering tasks.

The Payoffs of Machine Learning for Workers Made Simple

An analogy can help illustrate the point. Remember when you were a child, and you had to spend months learning to do long division? After a while, to your relief, you were allowed to use a calculator. Far from slowing your ability to do mathematics, it freed you to move on to more complex and challenging tasks. Machine learning is doing the same for daily work in enterprises worldwide.

For example, machine learning has proved successful at automating repetitive finance tasks such as the automatic matching of invoices and payments, increasing match rates from 70% to 94% in just a few weeks—resulting in massive savings in time and effort.

Machine learning also helps augment human intelligence. For example, a salesperson can now receive more intelligent lists of potential prospects—based on historic patterns, algorithms can automatically provide information about which prospects are most likely to buy, what products they are most likely to purchase, how long the deal is likely to take, etc. The end result is that every salesperson gets closer to the best in the organization.
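As a toy illustration of this kind of propensity scoring (synthetic data and a generic model, not SAP’s actual algorithms), the sketch below trains on historic deals and ranks open prospects by their predicted likelihood to buy.

```python
# Toy lead-scoring example with scikit-learn on invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per prospect: [company_size, past_purchases, days_since_last_contact]
history = np.array([
    [500, 3, 10],
    [ 20, 0, 90],
    [800, 5,  5],
    [ 50, 1, 60],
    [300, 2, 20],
    [ 10, 0, 120],
])
bought = np.array([1, 0, 1, 0, 1, 0])   # historic outcome: did the deal close?

model = LogisticRegression(max_iter=1000).fit(history, bought)

open_prospects = np.array([[400, 2, 15], [30, 0, 80]])
scores = model.predict_proba(open_prospects)[:, 1]
for features, score in sorted(zip(open_prospects.tolist(), scores),
                              key=lambda pair: -pair[1]):
    print(f"prospect {features}: {score:.0%} likely to buy")
```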

Indeed, we may be at the dawn of a new golden age for knowledge workers. Just as the invention of tractors multiplied physical labor, allowing a single farmer to plough many fields in a fraction of the time, these new technologies will do the same thing for knowledge workers, allowing them to multiply efforts in ways it is hard for us to currently imagine.

A Net Increase in Jobs—for Everyone?

But what about non-knowledge workers? Since the dawn of time, new technologies have been greeted with skepticism (Socrates famously feared that books were a bad idea because students would no longer have to use their memory). But the result has consistently been richer societies. For example, thanks to mechanization, the share of US workers employed on farms has gone from 83% in 1800 to less than 2% today—but few of us would like to return to that era.

Clearly, some workers and jobs will be affected. But there’s reason to believe that we shouldn’t be too pessimistic. Generally, we’re very good at thinking about jobs that will be lost to automation, but it’s much harder for us to imagine the new jobs that will be created thanks to the new opportunities.

A study by Gartner shows that machine learning will result in a net increase in jobs from the year 2020. And in fact, it may be earlier: another study shows that in companies using AI today, 26% report job increases, compared to just 16% saying that it has reduced jobs.

There’s a tendency to think that the new jobs created will inevitably only go to the highly-skilled — for example, increased use of machine learning has led to increased demands for data scientists.

But history gives a rosier view. New technology also enables people to do jobs that they wouldn’t previously have been qualified for. For example, to work in a general store a century ago, you would have had to be able to do fast mental arithmetic, in order to calculate the amount of the bill. The advent of cash registers meant that stores could hire people for their customer service skills, rather than their mathematics prowess.

Machine learning is making computers easier to use in many different ways. For example, new enterprise digital assistants let us access the information we need to do our jobs faster and more easily than ever before (think of how Jarvis in “Iron Man” helps Tony Stark do his job faster). This will enable workers to do more with less effort and fewer resources than in the past.

This process has happened many times in the past. For example, the spinning jenny was introduced in the UK in 1760. It automated the process of spinning, drawing, and twisting cotton. But lower costs and higher demand for cloth meant that, far from reducing employment, the number of workers exploded from around 8,000 skilled artisans to more than 300,000 less-skilled workers a few decades later.

In the End…

Ultimately, the rise of artificial intelligence will raise the premium on tasks that only humans can do. Because repetitive intellectual tasks can increasingly be automated, skills like leadership, adaptability, creativity, and caring will become relatively more scarce and more important.

Instead of forcing people to spend time and effort on tasks that we find hard but computers find easy (such as mental arithmetic), we will be rewarded for doing what humans do best—and artificial intelligence will help make us all more human.

Posted By Timo Elliott, November 1, 2018

Source: https://blog-sap.com/analytics/2018/11/01/why-artificial-intelligence-will-make-work-more-human/


Why Data Encryption and Tokenization Need to be on Your Company’s Agenda

As children we all enjoyed those puzzles where words had their letters scrambled and we had to figure out the secret to make the words or sentences legible. This simple example of encryption is deployed in vastly more complex forms across many of the services we use every day, working to protect sensitive information. In recent years the financial services industry has added a new layer of protection called tokenization. This concept works by taking your real information and generating a one-time code, or token, that is transmitted across networks. The benefit is that if the communication is intercepted, your real details are not compromised.
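A minimal Python sketch of that idea follows: the real card number stays in a vault and only a random surrogate token crosses the network. This is an illustrative toy, not a production tokenization scheme.

```python
# Simplified tokenization demo: the token is meaningless without the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._vault: dict[str, str] = {}    # token -> real value, kept server-side

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_urlsafe(16)   # random surrogate with no intrinsic meaning
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]           # only the vault can reverse the mapping

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
print("sent over the network:", token)             # useless if intercepted
print("recovered by the vault:", vault.detokenize(token))
```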

According to our Breach Level Index, there were 1,765 breaches in 2017. And these breaches are getting faster and larger in scope: over two billion records were lost last year. The fallout for companies is significant, so it is in their interest to do whatever they can to protect their customers’ data.

Of course, encryption is a very complicated field of research, and one shouldn’t expect board level executives to understand how the cryptographic algorithms work. But they must understand just how vitally important it is that data is secure, whether at rest or in motion.

Those working on encryption face a challenge to ensure that access to applications, databases, and files is unimpeded by the need to encrypt and decrypt data. There is a performance issue here, so companies need to evaluate and test while deciding what data should be encrypted, and when, how, and where.

The worrying thing is that despite the clear need for such work, there is a distinct lack of cyber security professionals worldwide—and especially in encryption. Indeed, you’ll often see job postings for security positions where experience of encryption isn’t even mentioned.

As the statistics show, this is having a huge effect on companies. In 2017, less than 3% of data breaches involved encrypted data. If we accept that companies are going to get hacked it is imperative that any data that is stolen is rendered useless through encryption.

Encryption would have mitigated the damage to brand image and reputation, the financial losses, government fines, and falls in stock prices, as well as the damage to executives’ image and reputation. It is also a major disincentive to criminals, as the effort needed to crack the algorithms makes the attack entirely unprofitable while there are so many other available targets.

So if the problem is so clear, and the solution so obvious, why are companies delaying investing in encrypting data?

Well, many executives I speak to daily in Latin America tell me that the security of their Big Data is handled by their cloud service provider. And if there was a leak, it would be the supplier’s responsibility.

This completely overlooks that customers, authorities, investors and the wider public do not care about this distinction. They will all associate any breach with the company, never a supplier of services. So, while ultimately liability may fall at the feet of the cloud service provider, the immediate and potentially catastrophic impact will be felt by the breached company.

It is therefore crucial that companies start taking serious responsibility for the data of their customers. Whether internal staff or cloud provider, conversations need to be had about how data is encrypted. This includes:

• Checking that the cryptographic algorithms used are certified by international bodies.
• Checking that your cryptographic keys are stored in an environment fully segregated from where you store your encrypted information, whether held by third parties or in your own systems, files, or databases (a minimal sketch of this separation follows below).
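To illustrate the second point, here is a small Python sketch using the open-source cryptography package, in which the data store holds only ciphertext and the key is held elsewhere. In practice the key would live in an HSM or an external key management service; the local variable here is only a stand-in for that segregated environment.

```python
# Key segregation sketch: ciphertext in the data store, key with the key manager.
from cryptography.fernet import Fernet

# Key material, generated and held by a separate key-management system (placeholder).
key = Fernet.generate_key()
cipher = Fernet(key)

# The application database stores only the ciphertext; a breach here yields nothing usable.
encrypted_record = cipher.encrypt(b"customer: Maria Silva, card ending 1111")

# Decryption requires fetching the key from the segregated key manager.
print(cipher.decrypt(encrypted_record).decode())
```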

PwC suggests that a cyber-attack is one of the threats CEOs fear most. Given the severity of the threat, we must recognize that we are all responsible for promoting data security. And that means adopting best practices for data protection, deploying encryption, and optimizing management of cryptographic keys.


Creating the Mind Flayer in Stranger Things’ Season 2 Finale

We initially got a bid for just two episodes of “Stranger Things” Season 2. In the end, we worked on every single episode. The shot counts and the amount of work just grew and grew. It was exciting to have them come back to us – it went from being a fairly small project to being really large, especially for our studio at the time.

I was FX Lead on this project. As far as my experience goes, the Shadow Monster is probably the thing I’m most proud of from the show.

DESIGNING THE SHADOW MONSTER (AKA MIND FLAYER)

Before production, there were some stills and references that were drawn up for us. The client had an idea of what they wanted, but as far as the end result, they felt that they’d know it when they saw it.

There was one point when the Method VFX supervisor was on location with the client and he called me up and said, “I have to ask a favor. We’ve got to redo the look of the Shadow Monster. They want it done while I’m still here. Take these suggestions, take these notes, and redo the look – send me something as soon as you can, and we’ll try and get something approved before I leave.”

Within 24 hours, we turned around a brand-new look for the Shadow Monster in the final episode and the client loved it. It was a challenging and scary thing! You often can’t get it right on your first try, but having that ability to do a quick back-and-forth and be more creatively involved was satisfying. It was also scary!

UPPING THE FEAR FACTOR

The creators wanted the Shadow Monster to feel more solid. Kubrick was a huge influence since the primary reference for this was the wall of blood from The Shining. You get your first look at the “original” Shadow Monster in episode 3, where Will confronts him unsuccessfully. Season one of Stranger Things ends with Eleven making the Demogorgon burst into particles and disappear. That served as a springboard for the Shadow Monster.

Initially, we started with a reimagining of the Smoke Monster in Lost. We wanted something that was smoky and not quite there but still had wispy particles. We took some inspiration from Lost’s season one ending, where the Big Bad disperses into a cloud of particles that then vanish. We also liked the way the pseudopod in The Abyss reached forward, and the Symbiote from Spider-Man 3 served as an inspiration for episode 3, where you have these little arms reaching out and then pulling back in.

Our Shadow Monster started off a lot less concrete than what made the final cut; it initially wasn’t looking scary enough. In the final episode, when the arm was reaching out toward Eleven and Hopper, it needed to feel like a solid and substantial threat, nearly tactile in nature, so you felt the strength of the fight.

We created a string of static particles that weren’t built through a simulation; instead, we built up noise patterns and modeled points into a line, which we then deformed, weaving four such lines together to form a cord of particles – an arm. We then took that arm and weaved those cords, spiraling them around each other, and that’s how we achieved that twisting, reaching limb. This made it go from being a mass of smoke to something amorphous; you could see the claws coming out, it had pointy tips, it felt crunchy in the middle, yet it still had a wispy, smoky, ethereal quality to it.

The tentacle animation was done in Maya and then brought into Houdini where we created a procedural particle system. That was finally brought into Katana and it was rendered with RenderMan.

The key to maintaining that air of threat is that the bulk of the Shadow Monster is still behind the curtain of membrane that it’s reaching through.

TYING IT ALL TOGETHER

For the filming of the scene in the rift chamber at the end, Eleven, played by Millie Bobby Brown, and Hopper, played by David Harbour, were hanging from a cherry picker surrounded by only green screens. Our VFX Supervisor, Seth Hill, was telling us about how the crew would be hanging off the bottom of the cherry picker and shaking it to try and make it dynamic while the actors were trying to be serious and fight the monster.

There were lots of considerations that we needed to take into account as the VFX team. We needed to make the shots work with Eleven’s eye line. From different perspectives, it became a little challenging to match her eye line and make sure that everything felt consistent, all the while maintaining the connection between her, Hopper, and this CG thing that we were making.

BEING CREATIVE IN VFX

A big takeaway is that this wasn’t a traditional VFX relationship; our studio was allowed more responsibility with creative decisions. The production welcomed ideas and gave us a voice to share creative thoughts.

It’s gratifying to have that creative relationship and have more creative freedom than on a lot of the projects that come through here. For me, that was the most rewarding part of Stranger Things – how great that creative collaboration was between the client and us.


OpenStack—The Next Generation Software-defined Infrastructure for Service Providers

Many service providers face the challenge of competing with the pace of innovation and investment made by hyper-cloud vendors. You constantly need to enable new services (e.g., containers, platform as a service, IoT, etc.) while remaining cost competitive. The proprietary cloud platforms used in the past are expensive and struggle to keep up with emerging technologies. It’s time to start planning your future with an open source solution that enables a software-defined infrastructure for rapid innovation.

A growing number of service providers have selected OpenStack due to its low cost and its rapid pace of innovation. Many new technologies are introduced early in their development in OpenStack prior to making their way to proprietary and hyper-cloud platforms. Well known examples include containers, platform as a service and network function virtualization. Why not leverage the work of a growing community of thousands of open source developers to gain a competitive edge?

For those service providers unfamiliar with OpenStack, SUSE recently published a paper entitled “Service Providers: Future-Proof Your Cloud Infrastructure” to highlight some of the architectural choices you will need to make when implementing an OpenStack environment. While the concepts are not new, several decisions will need to be made up front based on the data center footprint you wish to address.

While OpenStack may seem a bit complex at first, the installation and operations of vendor supplied distributions have greatly improved over the years. Support is available from the vendors themselves as well as from a large community of developers. Most service providers start with a relatively small cloud and build from there. Since OpenStack is widely supported by most hardware and software vendors, you can even repurpose your existing investments. The upfront cost to begin your OpenStack journey is low. When you’re ready to get started, SUSE offers a free 60-day evaluation trial of our solution (www.suse.com/cloud).

Now is the time to map out the future of your software-defined infrastructure. Take advantage of the most rapidly evolving cloud platform with no vendor lock-in. Build your offering on some of the best operations automation available today. OpenStack is the best way to control your own destiny. For more information, please visit our site dedicated to cloud service providers at www.suse.com/csp.


Three Key Best Practices for DevOps Teams to Ensure Compliance

Driving Compliance with Greater Visibility, Monitoring and Audits

Ensuring Compliance in DevOps

DevOps has fundamentally changed the way software developers, QA, and IT operations professionals work. Businesses are increasingly adopting a DevOps approach and culture because of its power to virtually eliminate organizational silos by improving collaboration and communication. The DevOps approach establishes an environment where there is continuous integration and continuous deployment of the latest software with integrated application lifecycle management, leading to more frequent and reliable service delivery. Ultimately, adopting a DevOps model increases agility and enables the business to rapidly respond to changing customer demands and competitive pressures.

While many companies aspire to adopt DevOps, it requires an open and flexible infrastructure. However, many organizations are finding that their IT infrastructure is becoming more complex. Not only are they trying to manage their internal systems, but they are now also trying to get a handle on the use of public cloud infrastructure, which creates additional layers of complexity. This complexity potentially limits the agility that organizations are attempting to achieve when adopting DevOps and significantly complicates compliance efforts.

Ensuring compliance with a complex infrastructure is a difficult endeavor. Furthermore, in today’s digital enterprise, IT innovation is a growing priority, yet many IT organizations still spend a great deal of time and money merely maintaining the existing IT infrastructure. To ensure compliance and enable innovation, this trend must shift.

With a future that requires innovation and an immediate need for compliance today, the question remains: How can IT streamline infrastructure management and reduce complexity to better allocate resources and allow more time for innovation while ensuring strict compliance?

Infrastructure management tools play a vital role in priming the IT organization’s infrastructure for innovation and compliance. By automating management, streamlining operations, and improving visibility, these tools help IT reduce infrastructure complexity and ensure compliance across multiple dimensions— ultimately mitigating risk throughout the enterprise.

Adopting a Three-Dimensional Approach to Compliance

For most IT organizations, the need for compliance goes without saying. Internal corporate policies and external regulations like HIPAA and Sarbanes-Oxley require compliance. Businesses in heavily regulated industries like healthcare, financial services, and public service are among those with the greatest need for strong compliance programs.

However, businesses in every industry need to consider compliance, whether that means staying current with OS patch levels to avoid the impact of the latest malware or complying with software licensing agreements to avoid contract breaches. Without compliance, the business puts itself at risk of a loss of customer trust, financial penalties, and even jail time for those involved.

When examining potential vulnerabilities in IT, there are three dimensions that guide an effective compliance program: security compliance, system standards, and licensing or subscription management.

Security compliance typically involves a dedicated department that performs audits to monitor and detect security vulnerabilities. Whether a threat is noted in the press or identified through network monitoring software, it must be quickly remediated. With new threats cropping up daily, protecting the business and its sensitive data is critical.

For system standards compliance, most IT departments define an optimal standard for how systems should operate (e.g., operating system level, patch level, network settings, etc.). In the normal course of business, systems often move away from this standard due to systems updates, software patches, and other changes. The IT organization must identify which systems no longer meet the defined standards and bring them back into compliance.
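As a simple illustration of such a system-standards check, the Python sketch below compares each system’s reported facts against the defined standard and flags drift. The standard values and the inventory are invented placeholders; in practice, an infrastructure management tool gathers these facts automatically.

```python
# Toy configuration-drift check against a defined system standard.
STANDARD = {"os_version": "15-SP5", "patch_level": "2024-06", "ntp": "enabled"}

inventory = {
    "web01": {"os_version": "15-SP5", "patch_level": "2024-06", "ntp": "enabled"},
    "db02":  {"os_version": "15-SP3", "patch_level": "2023-11", "ntp": "disabled"},
}

def drift(actual: dict, standard: dict) -> dict:
    """Return the settings where a system has moved away from the standard."""
    return {key: (actual.get(key), expected)
            for key, expected in standard.items()
            if actual.get(key) != expected}

for host, facts in inventory.items():
    delta = drift(facts, STANDARD)
    if delta:
        print(f"{host} is OUT of compliance: {delta}")
    else:
        print(f"{host} meets the defined standard")
```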

The third dimension of compliance involves licensing or subscription management which reduces software license compliance concerns and unexpected licensing costs. Compliance in this area involves gaining better visibility into licensing agreements to manage all subscriptions and ensure control across the enterprise.

To mitigate risk across the business in all three dimensions of compliance, the IT organization needs infrastructure management tools that offer greater visibility, automation, and monitoring. According to Gartner’s Neil MacDonald, vice president and distinguished analyst, “Information security teams and infrastructure must adapt to support emerging digital business requirements, and simultaneously deal with the increasingly advanced threat environment. Security and risk leaders need to fully engage with the latest technology trends if they are to define, achieve, and maintain effective security and risk management programs that simultaneously enable digital business opportunities and manage risk.”

Best Practice #1:

Optimize Operations and Infrastructure to Limit Shadow IT

With so many facets to an effective compliance program, the complexity of the IT infrastructure makes compliance a difficult endeavor. One of the most significant implications of a complex infrastructure is the delay and lack of agility from IT in meeting the needs of business users, ultimately driving an increase in risky shadow IT activities.

As business users feel pressure to quickly exceed customer expectations and respond to competitive pressures, they will circumvent the internal IT organization altogether to access services they need. They see that they can quickly provision an instance in the public cloud with the simple swipe of a credit card.

These activities pose a threat to the organization’s security protections, wreak havoc on subscription management, and take system standards compliance out of the purview of IT.

Optimizing IT operations and reducing infrastructure complexity go a long way toward reducing this shadow IT. With an efficient server, VM, and container infrastructure, the IT organization can improve speed and agility in service delivery for its business users. An infrastructure management solution offers the tools IT needs to drive greater infrastructure simplicity. It enables IT to optimize operations with a single tool that automates and manages container images across development, test, and production environments, ensuring streamlined management across all DevOps activities. Automated server provisioning, patching, and configuration enables faster, consistent, and repeatable server deployments. In addition, an infrastructure management solution enables IT to quickly build and deliver container images based on repositories and improve configuration management with parameter-driven updates. Altogether, these activities support a continuous integration/continuous deployment model that is a hallmark of DevOps environments.

When DevOps runs like a well-oiled machine in this way, IT provisions and delivers cloud resources and services to business users with speed and agility, making business users less likely to engage in shadow IT behaviors that pose risks to the business. As a result, compliance in all three dimensions—security, licensing, and system standards—is naturally improved.

Best Practice #2:

Closely Monitor Deployments for Internal Compliance

In addition to optimizing operations, improving compliance requires the ability to easily monitor deployments and ensure internal requirements are met. With a single infrastructure management tool, IT can easily track compliance to ensure the infrastructure complies with defined subscription and system standards.

License tracking capabilities enable IT to simplify, organize, and automate software licenses to maintain long-term compliance and enforce software usage policies that guarantee security. With global monitoring, licensing can be based on actual usage data, which creates opportunities for cost improvements.

Monitoring compliance with defined system standards is also important to meeting internal requirements and mitigating risk across the business. By automating infrastructure management and improving monitoring, the IT organization can ensure system compliance through automated patch management and daily notifications of systems that are not compliant with the current patch level.

Easy and efficient monitoring enables oversight into container and cloud VM compliance across DevOps environments. With greater visibility into workloads in hybrid cloud and container infrastructures, IT can ensure compliance with expanded management capabilities and internal system standards. By managing configuration changes with a single tool, the IT organization can increase control and validate compliance across the infrastructure and DevOps environments.

Best Practice #3:

Audit Deployments to Gain Visibility into Security Vulnerabilities

The fundamental goal of any IT compliance effort is to remedy any security vulnerabilities that pose a risk to the business. Before that can be done, however, IT must audit deployments and gain visibility into those vulnerabilities.

An infrastructure management tool offers graphical visualization of systems and their relationship to each other. This enables quick identification of systems deployed in hybrid cloud and container infrastructures that are out of compliance.

This visibility also offers detailed compliance auditing and reporting with the ability to track all hardware and software changes made to the infrastructure. In this way, IT can gain an additional understanding of infrastructure dependencies and reduce any complexities associated with those dependencies. Ultimately, IT regains control of assets by drilling down into system details to quickly identify and resolve any health or patch issues.