Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

When Big Data Hurts

by Robin Izsak 5. December 2013 03:38

I recently spoke with Bell Mobility's Adam Moore, a member of the organization's OSS Systems Integration Team. Bell Mobility is Bell Canada's wireless division, employing a multitude of analysts who eat, sleep, and breathe Big Data. They capture metrics and run analytics on call failures, call drops, and call volume—helping the company provide better service to their customers.

Sure, we hear a lot about Big Data these days, but we need catchphrases like this to talk about abstract concepts. And Big Data is as important as it is abstract: it represents a smarter way to do business, to create value from all this data we have, and to make better decisions. Big Data enables Bell Mobility directors to pinpoint inefficiencies and see where optimization is needed to maintain optimal services to a broad customer base.

So on our call, Adam told me that things had slowed down. His users were dealing with longer and longer SQL query times, which was impacting their ability to do their jobs. Faced with significant data growth and a need for faster delivery of that data to meet SLAs with their users, Adam's team needed a solution to escalating performance problems, and they needed it right now.

In assessing their options, Adam and team conducted an evaluation of V-locity® VM™. The results? A 61% reduction in I/O to the SAN, which led to 98% faster data processing times. And backups? "They used to run at 10MB per minute and sometimes didn't complete at all. Now they run at 60-120MB per minute and complete consistently." 

Read more about the team's success with V-locity in the Bell Mobility case study.

Tags:

Big Data | Channel | Cloud | General | Hyper-V | IntelliMemory | IntelliWrite | SAN | SSD, Solid State, Flash | Success Stories | virtualization | V-Locity | VMware

The Notorious 45-Second Query

by Robin Izsak 19. November 2013 05:17

You just clicked OK. Now wait 45 seconds for your query to complete. Now do this a bunch more times until you've compiled all the data you need for a report. And right about now, frustrated, you think about getting a snack and more coffee. 

That's what was happening at SunCoke Energy, as sys admin Chris Mueller's business users started complaining of painfully slow queries and application response.

When the "45-second query" became notorious around the office, Chris and his team started a months-long troubleshooting mission, trying to tune Oracle performance and improve the speed of the business's Cognos VMs. “A bad day for my team is when our people can’t do their jobs—it’s like trying to find a needle in a haystack, troubleshooting for performance.”

After a number of attempts, including consolidating all the VMs and upgrading the SAN, the team brought in V-locity® VM™ and dramatically improved speed—overnight.

Read the SunCoke Energy case study to learn more about their immediate success with V-locity.

Tags:

Big Data | Cloud | General | IntelliMemory | IntelliWrite | SAN | Success Stories | virtualization | V-Locity | VMware

When Speed Matters Most

by Robin Izsak 15. November 2013 05:22

We talk a lot about data around here. Particularly, how to make it perform better, how to make applications respond faster, how to solve some of IT's most formidable challenges around managing increasingly complex data centers.

But nothing compares to the importance of data performance when it comes to patient records and load times in a hospital ER, where every second spent waiting might be a second too long.

I recently spoke with Ryan Barker, Technology Specialist with Hancock Regional Hospital, hoping to get real insight into why speed matters—and I got exactly that. Ryan's users—ER doctors, nurses, any and all staff who touch the hospital's MEDITECH systems—were complaining of extremely slow load times and an inability to save records, sometimes in dire situations, like a busy day in the ER.

Read the case study to learn how Hancock Regional used V-locity® VM™ to improve load times from 2 patient records every 7 seconds to 6 records in 4 seconds.

Tags:

MEDITECH

What is unnecessary I/O? Why does it exist?

by Brian Morin 5. November 2013 07:09

Modern IT infrastructures deal with enough I/O traffic as it is. The last thing they need is unnecessary I/O.

It's no surprise that IT struggles with performance problems caused by the tidal wave of data that travels back and forth across the infrastructure in the form of read and write I/O. Organizations that have virtualized find themselves pouring more and more cost into the storage backend just to keep up with I/O demand. The negative impact that virtualization has had on the storage layer is certainly felt, but it isn't well understood.

With multiple VMs accessing the same bytes of data, and with the “I/O blender effect” further randomizing the I/O streams from those VMs before they funnel down to storage, a large share of I/O cycles is completely unnecessary. In a world where organizations are already crushed under the weight of I/O demand, the last thing they need from their IT infrastructure is lost cycles spent processing unnecessary I/O.
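The blender effect is easy to see in a toy model. The sketch below is illustrative Python, not a description of how any particular hypervisor schedules I/O: each simulated VM issues perfectly sequential reads against its own virtual disk, but once the hypervisor interleaves the streams (modeled here as a random interleave), the trace arriving at shared storage is nearly devoid of sequential runs.

```python
import random

def vm_io_stream(vm_id, n_blocks):
    # One VM reading its own virtual disk: perfectly sequential blocks.
    return [(vm_id, block) for block in range(n_blocks)]

def blend(streams, seed=42):
    # The hypervisor services all VMs concurrently, so their individually
    # sequential streams arrive at shared storage interleaved -- the
    # "I/O blender effect". A seeded random interleave stands in for the
    # scheduler here.
    rng = random.Random(seed)
    pending = [list(s) for s in streams]
    blended = []
    while any(pending):
        stream = rng.choice([p for p in pending if p])
        blended.append(stream.pop(0))
    return blended

def sequential_fraction(trace):
    # Fraction of back-to-back requests that hit consecutive blocks of the
    # same VM's disk -- a rough proxy for how sequential the trace looks
    # to the storage backend.
    hits = sum(
        1 for (v1, b1), (v2, b2) in zip(trace, trace[1:])
        if v1 == v2 and b2 == b1 + 1
    )
    return hits / (len(trace) - 1)

streams = [vm_io_stream(vm, 100) for vm in range(4)]
single = streams[0]
mixed = blend(streams)
print(sequential_fraction(single))  # 1.0: one VM alone is fully sequential
print(sequential_fraction(mixed))   # far lower once four VMs are blended
```

Same total work, same blocks read, but the storage array now sees what looks like random I/O, which is exactly the kind of wasted effort that is cheaper to prevent in the VM layer than to outrun with hardware.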

Even though this random I/O chaos can be easily prevented in the virtual machine layer before it ever leaves the gates, organizations continue to invest in more hardware to battle an increasingly complex problem.

Check out this new paper from IDG, Eliminate the Unnecessary: Unnecessary I/O and its Impact on Performance. You'll understand unnecessary I/O, why it matters, and how getting rid of it will solve performance problems—overnight—without more hardware.

Tags:

Big Data | Cloud | IntelliMemory | IntelliWrite | SAN | virtualization | V-Locity | VMware

Help! I deleted a file off the network drive!!

by Robin Izsak 31. October 2013 08:01

What if the recycle bin on your clients could be expanded to include file servers? And what if you could enable your users to recover their own files with self-service recovery? You would never have to dig through backups to restore files again, or schedule incessant snapshots to protect data.

One of the most persistent—and annoying—help desk calls is helping users recover files accidentally deleted off network drives, or supporting users who ‘saved over’ a PowerPoint they need for a meeting—in 15 minutes.

Relying on backups alone leaves some pretty serious holes in data protection: First, any data created between backups might not be recoverable. Second, who wants to dig through backups anyway? Third, you’d have to schedule an insane number of snapshots to protect every version of every file. Fourth, the Windows recycle bin doesn’t catch files deleted off a network drive—which is how most of us work in the real world: networks and clouds, not local drives.

Check out our latest guide that explains the gap between backup and the Windows recycle bin, and how to bridge that gap with Undelete® to ensure continuous data protection and self-service file recovery.

Meet the recycle bin for file servers. You’re welcome.
