The Power of Colour in Printed Media

Colour is a fascinating subject. Beyond the visual sensation of colour in our everyday world, there is far more to how we experience and perceive it. When it comes to colour printing, it’s an important facet that can totally change the impact of your design and how customers react to it. Understanding how colour is formed and, more importantly, the connections between different colours is vital to a successful design. Effectively applying colour to a design project has a lot to do with balance – and the more colours you use, the more complicated it is to achieve balance.

It is important to understand how colour is handled and reproduced when dealing with printed media. The role of colour in printed material can be particularly impactful – with strong, vibrant colours often standing out. If used haphazardly, though, those same colours may fail to get your message across, which is why it is important to understand the psychology of colour when you are designing for print.

Colour Systems

There are two primary colour systems by which colour is reproduced – additive and subtractive (also known as reflective). We use both methods in our daily lives – the screen you are viewing this on uses additive colour to generate all the colours you see, while a printed version of this article would use subtractive. In simple terms, anything that emits light (such as a screen, a projector, even the sun) uses additive colour, while everything else, which instead reflects light, uses subtractive colour.

Additive colour is based on red, green and blue (RGB) and works with anything that emits light. The mixture of different wavelengths of light creates different colours, and the more light you add, the brighter and lighter the colour becomes. In additive colour, white is the combination of all colours, while black is the absence of colour.

Subtractive colour is based on cyan, magenta and yellow. It works on the basis of reflected light, rather than pushing more light out. The way a particular pigment reflects different wavelengths of light determines its apparent colour to the human eye. In subtractive colour, white is the absence of colour, while black is the combination of colour – but it’s an imperfect system. The pigments we have available don’t fully absorb light (and so can’t completely prevent reflected colour wavelengths), so we have to add a fourth pigment to compensate for this limitation. We call this ‘key’ (hence the K in CMYK), but essentially it’s black. Without this additional pigment, the closest to black we’d be able to render in print would be a muddy brown.
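The relationship between the two systems can be sketched in a few lines of code. This is a deliberately naive conversion (real print workflows rely on colour-managed ICC profiles, not a direct formula), but it shows why the subtractive primaries are the complements of the additive ones, and why the key channel exists:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) to CMYK (0.0-1.0) conversion.

    Illustrative only: real print workflows use ICC colour profiles,
    not a direct formula like this.
    """
    if (r, g, b) == (0, 0, 0):
        # Pure black goes entirely onto the key (K) plate.
        return 0.0, 0.0, 0.0, 1.0
    # Each subtractive primary is the complement of an additive one.
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    # Pull the shared grey component out into the key channel.
    k = min(c, m, y)
    c = (c - k) / (1 - k)
    m = (m - k) / (1 - k)
    y = (y - k) / (1 - k)
    return c, m, y, k

# Pure red emits no green or blue light, so on paper it needs
# full magenta and yellow ink and no cyan.
print(rgb_to_cmyk(255, 0, 0))  # → (0.0, 1.0, 1.0, 0.0)
```

Note how pure black maps entirely to the key plate rather than to a heavy mix of cyan, magenta and yellow – exactly the limitation the K channel was introduced to work around.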

Due to these differences, designers need a way to get consistent colour results when working with both systems — for instance, if you’re designing a logo to use on your website but also want to get a business card printed. That’s where the Pantone Matching System (or PMS) can help. Colours can be matched for web and print (as well as for different types of printing surfaces) to ensure a uniform appearance. The Pantone system makes it easy for designers, clients, and printers to collaborate and ensure that the final product looks as intended.

 

Understanding Colour

Our eyes see something (like grass, for example) and the information sent from our eyes to our brain tells us it’s a certain colour (green). Objects reflect light in different combinations, and our brains translate those combinations into the phenomenon we know as ‘colour.’ For example, when you are looking for a can of Coca-Cola, your brain immediately searches out the red colour first and the logo/branding second. When it comes to printed media, people decide whether or not they like a product in 90 seconds or less, with 90% of that decision based solely on colour choice.

But how are you meant to design for everyone, when everyone responds differently to colour? The truth is, colour is too dependent on personal experiences to be universally translated to specific feelings. There are, however, broader patterns to be found in colour perceptions that can guide you. So, how do people typically respond to different colours?

Yellow = Optimistic

Let’s start with yellow. Thanks to our sun (and its sunshine) yellow is often associated with feelings of optimism, warmth, and hope. Yellow is also thought to release serotonin in the brain and speed up metabolism. Pure/bright yellow used in printed media can be very successful at grabbing attention, but can also be visually jarring or even hard to view if not used thoughtfully. Yellow works well with contrasting colours (think black, greys and navy) but can be a disaster if used with white. Be careful with desaturated and greenish yellows, as they can come across as sickly or unpleasant. Historically, yellow was used to signify a quarantined area. Yellow can quickly become overpowering if used in excess, but is effective when applied thoughtfully.

Red = Energetic

Red has many connotations. It is the colour of blood and can convey violence, but it is also the colour of the heart, bringing feelings of love and affection. Fire is also red, which brings feelings of both warmth and danger. It is an energetic and striking colour which gets the pulse racing. It is a primary colour that will dominate your design if not used sparingly. Red is used to snag attention and is arguably the most popular, and most overused, colour in branding – think Coca-Cola, Netflix, YouTube, etc. In print design, red can be a powerful accent colour. Just remember it can have an overwhelming effect, especially in its purest form.

Orange = Ambitious

Orange is a fun and exciting colour. It emits the same brightness as yellow and commands the same energy level as red, but is not as confronting. Orange is often associated with nature – think the colour of the changing seasons, earth and the fruit. Orange is the colour of creativity, change and movement. Its playfulness makes it a fun colour to use in your designs – especially when complementing it with blue/green tones. Being such a strong colour with high visibility, orange is great for promoting and highlighting certain aspects of your design.

Black = Sophisticated

Like red, there are both negative and positive connotations connected to black. Black means death, fear, mystery and the unknown. Black is technically not a colour though. In additive colour, black is the absence of light and in subtractive colour, it absorbs all the colours of the visible spectrum and reflects none of them to our eyes. Regardless – black is a very sophisticated colour when it comes to print media. Black is professional and credible, and it can be edgy as well. It is often associated with wealth and power. Its neutral tone allows it to work well with just about any other colour – an absolute joy to work with.

Blue = Calming

When people are asked what their favourite colour is, blue is a popular choice. Blue brings feelings of tranquillity, peace, and strength. Blue is the colour of the sea and the sky, which evokes a sense of trust and security – which is why it’s popular in branding and print media. However, in the English language it also signifies sadness or depression, so using it thoughtfully to convey your message is paramount.

Green = Pleasing

Green is the colour of life and nature. As humans, we are instinctively drawn to green, as it represents fertile land and is very pleasing to the senses. Green is the colour of peace, envy, wealth, luck, generosity, and fertility. It is widely used in the health sector because it is relaxing to look at and evokes a sense of calm. It is often used in print media to convey a natural and organic aesthetic.

Purple = Elegant

Purple evokes a sense of elegance and class. It is often associated with royalty, magic, mystery, and piety. It stimulates feelings of elegance but like blue, it has soothing and calming influences. It is often used in the beauty and health sectors. It is a very powerful colour to use in design, being almost universally appreciated.

Brown = Durable

Brown may not be the most glamorous of colours, but it serves a great purpose in design. It is a completely natural colour and is associated with wood, soil, human hair colour, eye colour and skin pigmentation. You need to be cautious when using brown though, as it is often associated with dirt and lack of cleanliness, poverty, faeces, and plainness. Used in design, it is commonly applied as a background colour or texture. When used intelligently, brown gives the impression of reliability, durability, and friendship.

White = Purity

White represents purity and cleanliness. It evokes feelings of innocence, divinity, and perfection. On the other hand, it can also feel sterile, clinical and empty. Like black, white is not technically a colour, but rather the combination of all the colour waves in the spectrum (sunlight). Paper is white, so you will probably work with ‘white space’ when designing for print. White works with nearly every colour, except for lighter shades of yellow and orange. When designing a successful logo, the rule of thumb is that it should always be designed in black and white first, with colour added for emphasis/branding.

 

Colour stimulates our brain and decision making, so it is paramount to be thoughtful when used in printed media. The psychology of colour is a complex subject that lands at the intersection of art and science – a dynamic that makes designing for print so interesting. The next time you are choosing colour for your printed media, keep this guide in mind – happy designing.

3D Printing Production Parts with FDM Pro

In a production environment, the need for consistent builds and mechanical properties can pose a challenge to additive manufacturing (AM). AM provides real benefits for these industries through improved manufacturing processes and supply chain flexibility, but companies continually push for further industry advancements in quality, reliability, and repeatability to meet their stringent needs.

The Solution

One answer to this challenge is FDM Pro. FDM Pro utilizes ULTEM™ 9085 CG resin to deliver mechanically enhanced ULTEM™ 9085 resin parts with the repeatability that high-requirements industries demand. The material serves the aerospace industry and beyond, ensuring that the 1,000th part will be the same as the 1st and resulting in an industry-leading coefficient of variation (<7-10%).
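The coefficient of variation quoted above is simply the standard deviation of a measured property expressed as a percentage of its mean – the lower it is, the more repeatable the process. A quick sketch, using hypothetical tensile-strength readings (illustrative numbers, not Stratasys data):

```python
import statistics

def coefficient_of_variation(samples):
    """CV = standard deviation / mean, as a percentage.

    Lower is better: part-to-part properties cluster tightly
    around the average, i.e. high build repeatability.
    """
    return statistics.stdev(samples) / statistics.mean(samples) * 100

# Hypothetical tensile-strength readings (MPa) from repeated builds.
tensile_mpa = [71.0, 69.5, 70.8, 70.2, 69.9]
print(f"CV: {coefficient_of_variation(tensile_mpa):.1f}%")
```

A production run whose CV stays inside a stated bound is what lets engineers trust that the 1,000th part behaves like the 1st.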

The ULTEM™ 9085 CG resin material has improved mechanical properties, with upwards of a 39% increase in tensile strength and upwards of a 65% increase in elongation at break in the Z orientation compared to standard ULTEM™ 9085 resin. The improved mechanical properties allow engineers and designers to expand their use of printed parts.

The FDM Pro solution builds the material in 0.010”-thick layers. Our honed internal processes, printer enhancements including new extrusion tips, and finely tuned software produce cosmetic parts with better aesthetics than typically seen in ULTEM™ resin material.

Applications

Because of these benefits, FDM Pro is a solution for a plethora of production applications. The higher strength of ULTEM™ 9085 CG resin can aid in the transition from over-engineered parts to more streamlined, lightweight plastic parts. Engineers can utilize FDM Pro to produce components such as replacement parts, functional prototypes, under-the-hood automotive applications and aircraft interiors.

FDM Pro and Aircraft Interiors

Aerospace companies are perfectly positioned to leverage the benefits of FDM Pro alongside their existing production specifications as well as expand the use of 3D printing for a broad array of components within aircraft interiors.

Aircraft interior applications include components related to:

    • Environmental control systems
    • Interior panels
    • Cosmetic bezels
    • IFEC (in-flight entertainment & communication)
    • Lighting
    • Lavatory components
    • Custom cosmetic components
3D Printed Art & Design World

IMAGINARY BEINGS, 2012

By Neri Oxman, in collaboration with STRATASYS
Produced on a Stratasys Objet500 Connex3 3D Printer

The Imaginary Beings: Mythologies of the Not Yet collection includes 18 sculptures of ‘beings’ inspired by Jorge Luis Borges’ ‘Book of Imaginary Beings,’ an encyclopaedia of imaginative zoology that contains descriptions of 120 mythical beasts from folklore and literature. Each ‘being’ in this series encapsulates the amplification and personalization of a particular human function, such as the ability to fly or the secret of becoming invisible. Ancient myths are united with their futuristic counterparts, brought to life by design fabrication and 3D printing technologies.

Photo Credit: Yoram Reshef

Kafka

Size: 500 x 250 x 200 mm

Kafka demonstrates the powerful combination of 3D printing and new design algorithms inspired by nature. Drawing inspiration from Franz Kafka’s famous novella ‘The Metamorphosis’, Oxman sets out to represent a physical, wearable metamorphosis: a material counterpart to Kafka’s chimerical writing. Kafka’s intentional use of ambiguous terms in the novella inspired an equally ambiguous use of physical properties and behaviour here, embedding several functions. The bestiary artwork is composed of several animal parts and combines a soft internal texture with stiff, armor-like material.

Photo Credit: Yoram Reshef

PNEUMA

Size: 40 x 20.1 x 25 cm

Greek for ‘air in motion’, the ancient word Pneuma is used in religious contexts to denote the spirit or the soul housed by the human ribcage. Pneuma 1 marks a series of design explorations depicting this ethereal constituent in material form, as a housing unit for the spirit from which breath emerges. Inspired by animals of the phylum Porifera, such as sponges, this soft armor is designed to protect the body while providing comfort and flexibility. Two bodies, filled with pores and channels that allow air to circulate throughout, are printed using multiple materials with varying mechanical properties, making up a stiff continuous shell and soft inner regions.

Photo Credit: Yoram Reshef

ARACHNE

Size: 44.1 x 35.2 x 74.7 cm

The imaginary being Arachne is inspired by the myth of Arachné, the mortal weaver who was transformed into a spider by the goddess Athena. The 3D printed corset is inspired by the construction of a spider’s web. The piece combines shades of blue and white in both rigid and flexible materials, providing protective armour for the rib cage, while the softer materials around the intercostal muscles enhance movement and comfort.

In more ways than one, spider spinnerets can be seen as the antecedents of multi-material printers.

Able to produce up to eight different silks during their lifetime, each spinneret gland within a spider’s abdomen produces a thread for a special purpose: sticky silk is produced for trapping prey and fine silk for enshrouding it.

7 Tips to Help Choose an SMS Service Provider

You’ve done your legwork and have now decided to leverage the powerful benefits of SMS technology to engage with your customers more effectively. The ubiquitous SMS (text) can help companies improve their communications flow, internally as well as with customers. It is one of the most cost-effective broadcasting media, with one of the highest open and read rates.

So how does an organization choose the right SMS provider? A simple Google search will give you endless options. With the plethora of options in an increasingly complex market, choosing the right one is a daunting task. There are simply too many SMS vendors in the market offering a myriad of solutions, and often they all seem to fulfill your project requirements. Beyond pricing, here are the other key factors to take into consideration in making the best choice for your business.

1. Cost: Pricing is a key consideration, especially for SMBs or for companies that need to reach out to thousands of customers regularly. Do confirm with the SMS vendor that the quotation for the SMS service explicitly reflects all charges, such as the setup fee, monthly hosting fee and per-SMS fee, and that there are no hidden costs.

2. SMS API for Ease of Integration: Make sure your vendor’s SMS API documents are comprehensive and uncomplicated. The API should easily integrate with all your company’s existing network applications, including mobile apps, open source software, CRM systems, social messengers and collaboration tools. TalariaX fully supports formats such as SMTP email, SNMP traps, Syslog and HTTP POST across all IT equipment and devices. Furthermore, sendQuick (the flagship mobile messaging product of TalariaX) integrates with any existing application to send messages via SMS, email, social messengers (WhatsApp Business, Facebook Messenger, LINE, WeChat, Viber, Telegram) and collaboration tools (Microsoft Teams, Slack, Cisco WebEx).
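Most vendors expose this kind of integration as a simple HTTP POST. The endpoint, authentication scheme and field names below are hypothetical (every provider, sendQuick included, documents its own), but the overall shape of the call is typical:

```python
import json
import urllib.request

# Hypothetical endpoint and credentials: each vendor defines its own
# URL, auth scheme and field names, so consult the API documentation.
API_URL = "https://sms.example.com/api/v1/send"
API_KEY = "your-api-key"

def build_request(to: str, message: str) -> urllib.request.Request:
    """Build a JSON POST request for a generic send-SMS endpoint."""
    payload = json.dumps({"to": to, "message": message}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )

def send_sms(to: str, message: str) -> dict:
    """Send one SMS and return the provider's JSON response."""
    with urllib.request.urlopen(build_request(to, message)) as resp:
        return json.load(resp)
```

If the vendor’s documentation is comprehensive, translating a call like this into your own stack should take minutes, not days – a useful litmus test during evaluation.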

3. Reliable Message Delivery: Cheap pricing does not necessarily guarantee good delivery. A reliable SMS provider should deliver messages quickly and efficiently at competitive rates. They should have direct, strong partnerships with local and global aggregators and telecom network providers to ensure messages are delivered with minimal delay and bounce-backs.

4. Support: Is there a local account manager attending to your project requirements responsibly and proactively? If so, he or she needs to listen to your project requirements and limitations, then propose the appropriate solutions or methodology to fulfill your requirements while allowing room for scalability in the future. Furthermore, he or she needs to be able to walk your team through the evaluation, purchasing and post-purchase processes closely. Also, do check if they provide other means of support in addition to email, such as phone, web chat or 24/7 accessibility, whatever is relevant for you.

5. Global reach: The SMS vendor’s network coverage and reach are important factors to consider. With globalisation and the evolution of e-commerce, more businesses are expanding their operations outside of their home country. It is important that the SMS provider has global connectivity and can send SMS texts to different countries across multiple mobile networks. TalariaX SMS gateways have been deployed across multiple industry verticals in over 50 countries across the globe.

6. Scalability and Testing: An important item on the checklist is scalability and testing of the system. Is there a proof-of-concept or trial account during the user acceptance testing (UAT) stage to confirm whether you can send and receive messages from your chosen mobile operators or mobile phone numbers through the SMS vendor? This will ensure minimal hiccups when initiating a campaign.

7. 2-way messaging: If you are looking for interactive responses to your SMS texts, you should ask the SMS gateway provider if they provide 2-way SMS messaging. Many companies are moving towards 2-way messaging as it allows them to interact with their consumers more closely and can be used for various job functions like job dispatch, appointment reminders, promotional messaging, security alerts, notifications, etc. sendQuick can send and receive 2-way alerts from IP-addressable infrastructure, third-party applications and users across the enterprise.

The Risks and Rewards of Virtualization

Virtualization is more than just an industry buzzword or IT trend. This technology enables multiple instances of an operating environment to run on a single piece of hardware. These virtual machines (VMs) then run applications and services just like any other physical server and eliminate the costs related to purchasing and supporting additional servers.

Virtualization delivers other benefits, too, such as the faster provisioning of applications and resources. Additionally, it can increase IT productivity, efficiency, agility, and responsiveness, freeing IT resources to focus on other tasks and initiatives.

How did virtualization evolve?

To best understand the business case for virtualization – as well as potential virtualization risks – we need to look back to the time when mainframes ruled the computing world.

Mainframes were used by large organizations to manage their most critical applications and systems. Yet they could also act as servers, offering the ability to host multiple instances of operating systems at the same time. In doing so, they pioneered the concept of virtualization.

Many organizations were quick to see the potential. They began carving up workloads for different departments or users to give them dedicated compute resources for more capacity and better performance. This was the very beginning of the client-server model.

In most cases, one application ran on one server, which was accessed by many different PCs. Other advancements, such as the emergence of Intel’s x86 technology, all helped make client-server computing faster, cheaper, and more effective.

It all worked great, until its popularity caught up with it. Eventually, it seemed like everyone in the company wanted a server to host their application. This resulted in too many servers – “server sprawl” – that quickly filled up even the largest data center.

Space wasn’t the only concern. All these servers were expensive and required extensive services to support and maintain them. Overall IT costs surged, and many companies began looking for a new approach.

One solution: A virtualized approach for any servers using x86 technology. With virtualization, one physical server could now host many VMs and could provide the full isolation and resources each application required.

A new approach leads to new concerns

All of this worked well, except for the new concern that the virtualization layer – the hypervisor – could fail. Worse, a single failure in the virtualized environment could trigger a domino effect in which all virtualized applications would also fail, leading to unacceptable downtime risk. To prevent this scenario, many companies chose to virtualize only their non-production systems. This way, if any failure did occur, critical systems wouldn’t go down.

As technology improved, organizations realized that hypervisors could deliver the performance and stability they required, and they started virtualizing all their applications, even production workloads.

On one hand, the effort wasn’t difficult, and seemed to pave the way for many significant benefits. Yet on the other, it did present new risks related to hardware and availability. For example, consider the case where one company might have 20 business-critical VMs on one server, only to have it fail.

How long would it take to resolve the problem? How much would this downtime cost? What long-term implications would it have for customers, prospects, and the company’s reputation? All of these are reasonable questions, but often they don’t have satisfactory answers.

This scenario points to the need for the right hardware infrastructure and always-available systems as part of any successful virtualization strategy. We’ll cover these topics – while covering some common misconceptions – in our next article. Stay tuned.

Questions for IIoT Success

The Industrial Internet of Things (IIoT) is sweeping across industries from food and beverage to manufacturing, and with its rise comes the possibility of new efficiencies and more optimized operations, leading in turn to new opportunities to control risk and decrease costs.

The real tipping point in the transition to IIoT, though, is deriving meaningful ROI, which is not always easy. While the road to IIoT may be marked with twists and turns, it does not have to be fraught with so much uncertainty. There is an essential balancing act between managing existing systems and processes and introducing new technologies. Combine this with the need to remain up and running with zero downtime, and the task might feel impossible.

To ensure success when undergoing an IIoT project, start by asking yourself these four questions (as explained in more detail on IoT Agenda):

1. How can we encourage synergies across teams?
2. Are applications in the right place?
3. Are you set up to scale the edge effectively?
4. What’s the best way to secure this new connected edge?

For most, the path to IIoT will be an evolutionary journey. Before you can start to tap the potential of next-generation, big-data-driven, intelligent automation, you must modernize the foundation on which it is built. And that means taking a hard look at existing operational technology.

Modernizing your infrastructure will deliver incredible benefits in terms of reliability and manageability to create a future-proof platform to build your organization’s IIoT strategy.

Want to hear more on common questions surrounding IIoT?

Check out our short video with Jason Andersen, Vice President of Business Line Management, as he provides insight into and addresses common questions in Industrial IoT.

Not Just for Servers Anymore: Virtualizing the Desktop and Beyond

We tend to think of virtualization as a solution to reduce IT’s reliance on growing data centers and server farms. This is true – virtualization can be an extremely effective way to improve agility and reduce total costs.

Yet this is not the only benefit virtualization can provide. Today, businesses are increasingly turning to virtual desktop infrastructure (VDI) to create individual desktop environments on virtualized servers running in a data center or a cloud. In this example, virtualization can reduce costs, improve productivity and security, and help IT departments regain control over the entire enterprise.

Critical components for a successful deployment

VDI provides connection brokers that act like traffic cops, directing users’ requests to the right place in the virtual infrastructure to access their personal desktops. The connection broker and other core VDI components are critical parts of the overall virtualization strategy.

They’re so critical that any organization must have a resilient availability solution as part of their virtual desktop infrastructure deployment. To understand why, consider the impact if the host servers supporting the virtual desktops fail. Many (if not all) users will be affected, and business will inevitably grind to a halt. Failure is simply not an option.

Expect the unexpected

VDI changes the way we think (and act) about hardware. It used to be that companies would own a few business-critical applications that required highly available hardware. These companies then chose to run “less important” software on commodity servers.

Yet in a virtualized world, these types of applications become business critical. Why? Because they are aggregated on physical servers dedicated to so many virtual machines, so their loss now has a much greater business impact.

Worse, any downtime could affect the management controls of a virtualized environment, preventing IT from creating and managing virtualized machines – wasting precious time, energy, and money.

All of this argues for the right availability strategy, one that properly aligns the overall infrastructure with the existing application mix. In our next article, we’ll take a closer look at the virtualization paradox: the fact that it removes hardware dependencies while also making hardware more important. We’ll also examine ways the right hardware can prevent downtime and keep businesses running.

CRG9 49” Gaming Monitor: Super Ultra-Wide Screen with Dual QHD Resolution

The CRG9 is the world’s first high-resolution super ultra-wide gaming monitor with a 32:9 aspect ratio. It offers a 120Hz refresh rate with a fast 4ms response time on a 49-inch display that minimizes image lag and motion blur to effortlessly keep up with fast-paced games. Built for a superior gaming experience, the monitor also features AMD Radeon FreeSync™ 2 HDR technology to reduce stutter, screen tearing and input latency to ensure the best possible frame rate and smoothest gaming experience.

Content truly comes alive on the CRG9, with dual QHD resolution (5120×1440) and HDR10 with a peak brightness of 1,000 nits, providing superfine detail in the brightest and the darkest parts of an image. HDR10 delivers outstanding local dimming, and high-contrast HDR offers spectacular highlights not available on non-HDR monitors. The monitor also leverages Samsung’s revolutionary Quantum dot technology for an exceptionally wide range of accurate color reproduction, and a 1,800mm screen curvature and an ultra-wide field of view for complete visibility.

Equivalent to two 27-inch QHD 16:9 monitors placed side-by-side, the 32:9 super ultra-wide screen also provides ultimate multitasking flexibility, assisted by PBP (Picture-by-Picture) functionality that allows two video sources to be viewed on the same screen. The CRG9 includes one HDMI port, two DisplayPort connections, USB 3.0 and headphone connectivity options. Samsung also designed the CRG9 with a smaller stand size for convenience and flexibility in every gamer’s space.

Space Monitor: Modern, Minimal and Flexible Design

Samsung Space Monitor leverages its sleek design and functionality to allow users to focus on what’s on the screen and not what’s around it. Its unique built-in space saving solution, a minimalist fully-integrated arm, clamps to the desk and frees up desk space for ultimate user productivity. Samsung Space Monitor is easy to set up and adjust when you aren’t using it, and simple to push back and store flat against the wall. Through easy ergonomic adjustment, port access and a discreet cable management system, Space Monitor improves the form and function of any workstation or home office.

Beyond aesthetics, Space Monitor is a feature-rich, high-performance monitor. The 27-inch model offers QHD resolution for incredibly detailed, pin-sharp images, while the 32-inch model presents content in 4K UHD.

Space Monitor gives users a unique arm stand, which can disappear into the back of the monitor’s slim-bezel. When using the stand, Space Monitor can be easily tilted or extended from the wall. It can also be lowered to the desk surface, and Samsung’s Zero Height Adjustable Stand feature provides the ultimate versatility for any type of viewing preference. The stand not only makes viewing more comfortable, but also eliminates the hassle of cable management by integrating power and HDMI cords through the arm for a clean, flexible look.

Five Things You Need to Know About Disaster Recovery Planning

It’s time to make disaster recovery a high priority

IT is integral to business. The infrastructure you support is critical to keeping your company up and running. But faced with the constant challenge to do more with less, you may have been forced to put some projects on the back burner. If disaster recovery is among them, you’re probably keen to return it to the front of the priority queue, and with good reason.

Reports of potential risk in the world—such as turbulent weather, natural disasters and man-made accidents—seem to clog the media. What would happen if the ceiling collapsed in your data centre? What would you do if an employee forgot to unplug a humidifier and the power grid feeding your servers and storage imploded?

This guide discusses five elements that should be considered as part of your disaster recovery planning:

1. Mixed-platform data centres—the challenges they pose
2. Virtualisation—how it can change everything
3. Cloud computing—on-demand resource delivery
4. Measuring the return on investment (ROI) of disaster recovery
5. Planning and testing for greater confidence

The challenges of the multi-platform data centre

Most data centres used to be based on single-vendor mainframe computers and so were fairly easy to manage. Then inexpensive servers built with cheap x86 processors came along and quickly found their way into the data centre. The move from mainframes to smaller servers made the data centre strategy simple. When you needed to run more applications, you just bought more servers—leading to server sprawl.

Virtualisation solved the problem of sprawl by enabling data centres to be consolidated to a more manageable size and footprint. But all these changes overlapped, and because a typical data centre has evolved over a number of years, it now contains a variety of platforms from various vendors—a mix of mainframe, x86 servers and virtualised resources.

For disaster recovery planning, that seems to imply maintaining multiple plans—one for each platform. Or does it?

Virtualisation: Simplified disaster recovery planning

You can use the virtual resources in your data centre for more than just virtual machine recovery. Virtual machines are flexible. They can run many different types of workloads, so you can create a virtual recovery platform that offers protection for all your workloads—physical, virtual, Windows and Linux.

Virtual recovery plans can simplify, and in some cases eliminate, many of the platform-specific headaches of recovering from a disaster.

The process for recovering a physical server typically involves a number of steps, from acquiring an equivalent or compatible physical server; through installing the operating system, applications, patches and updates; to uploading the backup data.

When you use virtual recovery, all those steps become a thing of the past. To recover a physical server you simply need to power on its virtual equivalent—an elegant approach that can save a great deal of time for users and the IT department.
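The difference between the two recovery paths can be sketched in outline. This is purely an illustration — the server names, the standby-VM inventory and the single power-on action are all hypothetical, not a real hypervisor API:

```python
# Hypothetical sketch: recovering a workload via its virtual standby,
# versus the multi-step physical rebuild it replaces.

# Assumed inventory mapping physical servers to standby virtual machines.
STANDBY_VMS = {
    "db-server-01": "vm-db-server-01",
    "web-server-01": "vm-web-server-01",
}

# The typical physical-recovery sequence described above.
PHYSICAL_RECOVERY_STEPS = [
    "acquire an equivalent or compatible physical server",
    "install the operating system",
    "install applications, patches and updates",
    "upload the backup data",
]

def recover_physical(server: str) -> list[str]:
    """Every step must complete, in order, before the workload is back."""
    return [f"{server}: {step}" for step in PHYSICAL_RECOVERY_STEPS]

def recover_virtual(server: str) -> str:
    """With a virtual standby, recovery collapses to a single action."""
    vm = STANDBY_VMS[server]
    return f"power on {vm}"

print(len(recover_physical("db-server-01")), "steps for physical recovery")
print(recover_virtual("db-server-01"))
```

The point of the sketch is the shape of the two functions: one is a sequence of manual, hardware-dependent steps, the other a single action against a pre-built virtual machine.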

Cloud computing: Disaster recovery resources on demand

However efficient your disaster recovery planning is, there are still a number of unknowns. You don’t know when you’ll need those extra resources, how many you’ll need, or for how long.

Cloud computing has a number of characteristics that make it easier and cheaper to plan for the unknown aspects of disaster recovery: rapid elasticity; on-demand, self-service resource acquisition; and per-use billing. Taken together they offer a resource-consumption model that makes cloud computing a perfect solution for disaster recovery.

But how do you get there? Virtualisation is the main technology underlying the cloud delivery model. So the first step is to virtualise as many of the elements that make up your disaster recovery plan and infrastructure as possible.

Measuring the ROI of disaster recovery

It is widely accepted that virtual machines are a cheaper way to run workloads. Not only do you dramatically lower your server hardware costs, you also reduce the cost of power, cooling and maintenance.

As part of a disaster recovery plan, one virtual machine host can take the place of 20 or more traditional standby physical servers. You can eliminate the expensive duplicate infrastructure that data centres used to need, and the burden of keeping backup versions of each make, model and vendor version of all the servers you run.

Replacing all of that with a simple pool of virtual resources immediately reduces the cost of your disaster recovery plan. But infrastructure isn’t the only thing that costs money; time does too. Virtualisation saves you time by reducing the overhead and labour associated with older backup and recovery-based protection approaches. With virtual machines, everything from day-to-day maintenance to recovery and testing can be as simple as a few clicks of a mouse.
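The hardware side of that saving is simple arithmetic. The figures below are illustrative assumptions only — not vendor pricing — but they show the shape of the calculation for a 20:1 consolidation:

```python
# Back-of-envelope ROI sketch for replacing standby physical servers
# with one virtualisation host. All figures are illustrative assumptions.

STANDBY_SERVERS = 20              # physical standbys replaced by one host
COST_PER_PHYSICAL = 4000          # assumed purchase cost per standby server
POWER_COOLING_PER_SERVER = 600    # assumed annual power/cooling per server
VIRTUAL_HOST_COST = 12000         # assumed cost of one consolidated host
HOST_POWER_COOLING = 1500         # assumed annual power/cooling for the host

physical_total = STANDBY_SERVERS * (COST_PER_PHYSICAL + POWER_COOLING_PER_SERVER)
virtual_total = VIRTUAL_HOST_COST + HOST_POWER_COOLING
saving = physical_total - virtual_total

print(f"Physical standby estate: {physical_total:,}")
print(f"Single virtual host:     {virtual_total:,}")
print(f"First-year saving:       {saving:,}")
```

Even with conservative numbers, the duplicate-infrastructure cost dominates, which is why consolidation ratios matter more to disaster recovery ROI than the price of any single server.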

Planning and testing for greater confidence

When was the last time you tested your ability to restore a workload? How many did you test? And what fraction of your total workloads was that? If organising the testing is simply too challenging, the risk is that it never gets done.

But lack of adequate testing leads to lack of confidence. How can you be sure your plans will work? How can you publish and guarantee service levels without an accurate prediction of what a worst-case scenario will look like? Ambiguous service levels, such as ‘one or two days’, are no longer acceptable.

Virtualisation makes testing much more straightforward by eliminating the issues associated with a bare-metal restore. You don’t need to match hardware or go through multiple steps to get a test server up and running. You simply select the virtual machines you want to test, create copies of them, and power them on—with no disruption to your production processes.

The ability to run frequent disaster recovery tests lets you accurately measure the time you expect to take to recover workloads when needed. Instead of hoping that you can recover from an outage in a day or two, now you can guarantee an accurate recovery time objective (RTO).
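Turning repeated test timings into a published RTO is straightforward. The sample timings and the 25% headroom factor below are made-up assumptions, just to show the calculation:

```python
# Sketch: deriving a defensible RTO from repeated recovery-test timings.
# The timings and headroom factor are illustrative assumptions.

recovery_test_minutes = [42, 38, 51, 40, 45]  # minutes per test restore

average = sum(recovery_test_minutes) / len(recovery_test_minutes)
worst_case = max(recovery_test_minutes)

# Publish an RTO with headroom over the worst observed case, rather than
# guessing 'one or two days'.
HEADROOM = 1.25
rto_minutes = int(worst_case * HEADROOM)

print(f"Average observed recovery: {average:.0f} min")
print(f"Worst observed recovery:   {worst_case} min")
print(f"Published RTO:             {rto_minutes} min")
```

The key design choice is anchoring the RTO to the worst measured case plus headroom, so the published figure is one you can actually guarantee.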

This can even be a competitive advantage for your company. People want to know that you can help them get back to routine as quickly as possible after a disaster. Your customers’ confidence in your organisation will get a huge boost when you publish service levels that you’re confident about.