Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Permission to Fail

by Robin Izsak 20. December 2013 05:38

A few months back, Jerry Baldwin, our CEO, gave me a strange assignment. "We need something different, something fun," he said. "Go write an ebook."

I must have looked puzzled because he went on, "Write what you want, be creative. If I like it, we'll figure out what to do with it. If I don't, it was a nice try."

I think he was serious at the time, but he probably figured I wouldn't get around to it or other projects would take priority. But as far as I was concerned, it was a challenge. And I aimed to come back with something different, and compelling, and relevant, and cool.

This turned out to be an awesome project, not only because I could run wild, but because I had permission to fail, which made all the difference. So in an unselfconscious manner I started typing—thinking about how drastically our lives have changed because of the Internet—the most significant invention of our lifetime.

I'm proud to share the result of this unusual assignment. I hope you get a chance to rise to a similar challenge one of these days: go do something different, you have permission to fail.

Smile, share, and enjoy The Everything Age (A Pop Culture eBook for Geeks).

Tags:

Big Data | Cloud | General | SAN | virtualization | V-Locity | VMware

When Big Data Hurts

by Robin Izsak 5. December 2013 03:38

I recently spoke with Bell Mobility's Adam Moore, a member of the organization's OSS Systems Integration Team. Bell Mobility is Bell Canada's wireless division, employing a multitude of analysts who eat, sleep, and breathe Big Data. They capture metrics and run analytics on call failures, call drops, and call volume—helping the company provide better service to their customers.

Sure, we hear a lot about Big Data these days, but we need these catchphrases to talk about abstract concepts. And Big Data is as important as it is abstract: it represents a smarter way to do business, to create value from all the data we have, and to make better decisions. Big Data enables Bell Mobility directors to pinpoint inefficiencies and see where optimization is needed to maintain service levels for a broad customer base.

So on our call, Adam told me that things had slowed down. His users were dealing with longer and longer SQL query times, which was impacting their ability to do their jobs. Faced with significant data growth and a need for faster delivery of that data to meet SLAs with their users, Adam's team needed a solution to escalating performance problems, like right now. 

In assessing their options, Adam and team conducted an evaluation of V-locity® VM™. The results? A 61% reduction in I/O to the SAN, which led to 98% faster data processing times. And backups? "They used to run at 10MB per minute and sometimes didn't complete at all. Now they run at 60-120MB per minute and complete consistently." 

Read more about the team's success with V-locity in the Bell Mobility case study.

Tags:

Big Data | Channel | Cloud | General | Hyper-V | IntelliMemory | IntelliWrite | SAN | SSD, Solid State, Flash | Success Stories | virtualization | V-Locity | VMware

The Notorious 45-Second Query

by Robin Izsak 19. November 2013 05:17

You just clicked OK. Now wait 45 seconds for your query to complete. Now do this a bunch more times until you've compiled all the data you need for a report. And right about now, frustrated, you think about getting a snack and more coffee. 

That's what was happening at SunCoke Energy, as sys admin Chris Mueller's business users started complaining of painfully slow queries and application response.

When the "45-second query" became notorious around the office, Chris and his team started a months' long troubleshooting mission, trying to tune Oracle performance and improve the speed of the business's Cognos VMs. “A bad day for my team is when our people can’t do their jobs—it’s like trying to find a needle-in-a-haystack, troubleshooting for performance.”

After a number of attempts, including consolidating all the VMs and upgrading the SAN, the team brought in V-locity® VM™ and dramatically improved speed—overnight.

Read the SunCoke Energy case study to learn more about their immediate success with V-locity.

Tags:

Big Data | Cloud | General | IntelliMemory | IntelliWrite | SAN | Success Stories | virtualization | V-Locity | VMware

What is unnecessary I/O? Why does it exist?

by Brian Morin 5. November 2013 07:09

Modern IT infrastructures deal with enough I/O traffic as it is. The last thing they need is unnecessary I/O.

It's no surprise that IT struggles with performance problems caused by the tidal wave of data that travels back and forth across the infrastructure in the form of read and write I/O. Organizations that have virtualized find themselves pouring more and more money into the storage backend just to keep up with I/O demand. The negative impact virtualization has on the storage layer is certainly felt, but it isn't well understood.

With multiple VMs accessing the same bytes of data, and with the “I/O blender effect” further randomizing the I/O streams from those VMs before they funnel down to storage, a large share of I/O cycles is completely unnecessary. In a world where organizations are already crushed under the weight of I/O demand, the last thing they need from their IT infrastructure is cycles lost to processing unnecessary I/O.
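To picture the “I/O blender” in action, here is a minimal, purely illustrative Python sketch (an assumed example, not Condusiv code or a real hypervisor): a few hypothetical VMs each issue neat sequential writes, but once their requests are interleaved on the way to shared storage, the stream the array sees is effectively random.

    import random

    # Purely illustrative: each hypothetical "VM" issues tidy sequential
    # 4 KB writes within its own virtual disk (offsets are made up).
    BLOCK = 4096
    vm_streams = {
        f"vm{i}": [base + n * BLOCK for n in range(4)]
        for i, base in enumerate([0, 10_000_000, 20_000_000], start=1)
    }

    # The shared storage path services whichever guest request arrives next,
    # so the per-VM sequential patterns get interleaved ("blended") together.
    blended = [(vm, off) for vm, offsets in vm_streams.items() for off in offsets]
    random.shuffle(blended)  # crude stand-in for unpredictable arrival order

    for vm, off in blended:
        print(f"{vm}: write {BLOCK} bytes @ offset {off}")

    # Each VM's own offsets step neatly by 4 KB, but the stream the array
    # actually receives jumps across widely separated offsets - in effect,
    # random I/O, even though every guest thinks it is writing sequentially.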

Even though this random I/O chaos can be easily prevented in the virtual machine layer before it ever leaves the gates, organizations continue to invest in more hardware to battle an increasingly complex problem.

Check out this new paper from IDG, Eliminate the Unnecessary: Unnecessary I/O and its Impact on Performance. You'll learn what unnecessary I/O is, why it matters, and how getting rid of it will solve performance problems—overnight—without more hardware.

Tags:

Big Data | Cloud | IntelliMemory | IntelliWrite | SAN | virtualization | V-Locity | VMware

A Blog About Bloggers Who Blog About Us

by Jerry Baldwin 7. February 2013 09:49

On contemplating the impact of his calculating engine, the world’s first computer, Charles Babbage wrote, “In turning from the smaller instruments in frequent use to the larger and more important machines, the economy arising from the increase of velocity becomes more striking.” He said that in 1832.

I mention this because the idea holds true today—the bigness of everything, the immediacy of everything, the pace of everything—the greater the increase from one state to another, the more striking the difference. And that’s exactly why—when we put V-locity 4 trialware into the hands of virtualization wizards to test in their lairs—we want them to really, really put it through the wringer. The heavier the workload, the greater the application demand, the more striking the results.

Recently two virtualization pros got their hands on the V-locity 4 30-day trial, set up rigorous testing, and blogged the entire experience:

VMware technical architect amazed by V-locity 4 results

Another virtualization blogger amazed by V-locity 4

 

Tags:

Big Data | Hyper-V | IntelliMemory | virtualization | V-Locity | VMware
