Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Teaser: Coming Soon! Intelligent Caching and Fragmentation Prevention = IO Heaven

by Brian Morin 19. September 2016 04:53

Sometimes the performance of physical servers, PCs and laptops slows to a crawl. No matter what you do, it takes half an eternity to open some files. The cause lies in the architecture of the Windows operating system: the OS becomes progressively slower the longer it is used and the more it is burdened with added software and large volumes of data.

In the old days, the solution was easy: defragment the hard drive. However, many production servers can’t be taken offline to defragment, and many laptops now have only solid state drives (SSDs), which shouldn’t be defragmented. So is there any hope?

Condusiv has solved these dilemmas in the soon-to-be-released version of Diskeeper®. With over 100 million licenses sold, Diskeeper has been the undisputed leader for decades in keeping Windows systems fragment-free and performing well. And with Diskeeper 16 coming out soon, feedback from beta testers is that it goes way beyond a mere incremental release with a few added frills, bells and whistles. The consensus among them is that it is a “next generation” release, one that doesn’t just keep Windows systems running like new but actually makes them run faster than new.

How is this being achieved? The company has been perfecting two technologies within its portfolio and is now bringing them together: fragmentation prevention and DRAM caching.

On the one side, the idea is to prevent fragmentation before data is written to a production server. This is a lifesaver for IT administrators who need to immediately boost the performance of critical applications like MS-SQL running on physical servers. Diskeeper keeps systems running optimally with its patented fragmentation prevention engine, which ensures large, clean, contiguous writes from Windows and eliminates the flood of tiny writes that rob performance by inflating IOPS and stealing throughput, a classic “death by a thousand cuts.”

But that’s only half of it. A little-known fact about Condusiv is that it is also a world leader in caching. In addition to its work on Diskeeper, the Condusiv development team has evolved a unique DRAM caching approach that has shipped through OEM partners for several years. The technology has proven so popular that the company has sold over 5 million caching licenses tied to ultrabooks, and it is now being made available commercially.
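
To make the caching idea concrete, here is a minimal Python sketch of the least-recently-used (LRU) read cache that any DRAM caching layer is built around. It is purely illustrative, not Condusiv’s implementation; the BlockCache name and fetch_from_disk callback are hypothetical, and a production engine also has to handle writes, invalidation, and memory pressure.

    from collections import OrderedDict

    class BlockCache:
        """Minimal LRU read cache keyed by (volume, offset). Illustrative only."""

        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()  # insertion order doubles as recency order

        def read(self, key, fetch_from_disk):
            if key in self.blocks:
                self.blocks.move_to_end(key)     # cache hit: served straight from DRAM
                return self.blocks[key]
            data = fetch_from_disk(key)          # cache miss: pay the full storage I/O
            self.blocks[key] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict the least-recently-used block
            return data

Every hit served out of memory is one less request that has to travel down the stack to disk, which is where latency reductions like those listed below come from.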

The soon-to-be-released Diskeeper 16’s DRAM caching electrifies performance:

· Benchmark tests show MS-SQL workload performance boosts of up to 6X

· An average 40% latency reduction across hundreds of servers

· No hint of memory contention or resource starvation

· Fleets of laptops suddenly running like a dream

· PCMark MS Office productivity tests show an increase of 73% on Windows 10 machines

· Huge leaps in SSD write speed and extended SSD lifespan

· Solves even the worst-performing physical servers and Windows PCs, backed by a money-back guarantee

Could it be, then, that there really is hope of getting PCs and physical servers running faster than new?

You’ll have to wait until Diskeeper 16 is unveiled to hear the full story. 

VMware Advises on Defrag

by Brian Morin 27. July 2016 01:40

VMware: Defrag or Not?

Dave Lewis sent in a question, “There is such a quandary about disk fragmentation in the VMware environment. One says defrag and another says never. Who's right? This has been a hard subject to track and define.”

I’m going to debunk “defragging” in a minute, but consider VMware’s own best-practice guide on improving performance: page 17 lists “adding more memory” as the top recommendation, with “defrag all guest machines” as the second most important.

As much as VMware is aware that fragmentation impacts performance, the real question is how relevant the task of defragging is in today’s environment, with sophisticated storage services and new media like flash that should never be defragged. First of all, no storage administrator would defrag an entire “live” disk volume without the tedious task of taking it offline, due to the impact that change-block activity has on services like replication and thin provisioning, which means the problem simply goes ignored on HDD-based storage systems. Second, organizations that use flash can do nothing about the write amplification from fragmentation or the resulting slow write performance from a surplus of small, fractured writes.

The beauty of V-locity® I/O reduction software in a virtual environment is that fragmentation is never an issue, because V-locity optimizes the I/O stream at the point of origin to ensure Windows executes writes optimally. This means large, contiguous, sequential writes to the backend storage for every write and subsequent read, which boosts the performance of both HDD and SSD systems. As well as flash performs on random reads, it chokes badly on random writes: a typical SSD might spec random reads at 300,000 IOPS but drop to 23,000 IOPS on writes, due to the erase cycles and housekeeping that go into every write. This is why some organizations continue to use spindles for write-heavy apps that are sequential in nature.
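
That read/write asymmetry is easy to observe for yourself. Below is a rough Python sketch (Linux-only, since it leans on O_DIRECT to bypass the page cache) that times the same 4K writes issued sequentially and then in random order. The file path and sizes are arbitrary assumptions, and a real benchmark would use a purpose-built tool like fio with deeper queue depths.

    import mmap
    import os
    import random
    import time

    PATH = "/tmp/iotest.bin"  # hypothetical test file on the device under test
    BLOCK = 4096
    COUNT = 4096              # 16 MiB of 4K writes in total

    # O_DIRECT bypasses the OS page cache so we time the device, not DRAM.
    fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_DIRECT)
    os.ftruncate(fd, BLOCK * COUNT)

    buf = mmap.mmap(-1, BLOCK)  # page-aligned buffer, as O_DIRECT requires
    buf.write(b"\xff" * BLOCK)

    def mib_per_sec(offsets):
        start = time.perf_counter()
        for off in offsets:
            os.pwrite(fd, buf, off)
        return (BLOCK * len(offsets) / 2**20) / (time.perf_counter() - start)

    sequential = [i * BLOCK for i in range(COUNT)]
    randomized = sequential[:]
    random.shuffle(randomized)

    print("sequential 4K writes: %7.1f MiB/s" % mib_per_sec(sequential))
    print("random 4K writes:     %7.1f MiB/s" % mib_per_sec(randomized))
    os.close(fd)

On most consumer SSDs the random pass lands well below the sequential pass, for exactly the erase-cycle and housekeeping reasons described above.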

When most people think of fragmentation, they think of it as a physical-layer issue on a mechanical disk. In an enterprise environment, however, Windows is abstracted from the physical layer. The real problem is IOPS inflation: the relationship between I/O and data breaks down, leaving a surplus of tiny I/O that chews up performance no matter what storage media is used on the backend. Instead of using a single I/O to process a 64K file, Windows will break it down into smaller and smaller chunks, with each chunk requiring its own I/O operation to process.
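
Here is a hypothetical before-and-after of that inflation in Python (a POSIX sketch for brevity; on Windows the same contrast plays out through WriteFile calls). The application gets identical bytes on disk either way, but the fragmented path issues sixteen times the I/O requests.

    import os

    fd = os.open("demo.bin", os.O_RDWR | os.O_CREAT)
    data = os.urandom(64 * 1024)
    CHUNK = 4 * 1024

    # Contiguous: the whole 64K file lands in one I/O request.
    os.pwrite(fd, data, 0)

    # Fragmented: the same 64K issued as sixteen 4K requests, each one
    # carrying its own syscall, queuing, and completion overhead.
    for i in range(len(data) // CHUNK):
        os.pwrite(fd, data[i * CHUNK:(i + 1) * CHUNK], i * CHUNK)

    os.close(fd)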

This is bad enough when a single virtual server, taxed by Windows write inefficiencies, sends down twice as many I/O requests as it should to process a given workload. Now amplify that same problem across all the VMs on the same host, and the result is a tsunami of unnecessary I/O overwhelming the host and the underlying storage subsystem.

As much as virtualization has been great for server efficiency, the one downside is the complexity it adds to the data path. I/O leaves Windows much smaller, more fractured, and more random than it needs to be, and then gets further randomized at the hypervisor. The result is performance that suffers “death by a thousand cuts” from all that tiny I/O.

So instead of taking VMware’s recommendation to “defrag,” take our recommendation to never worry about the issue again and put an end to all the small, split I/Os that are hurting performance the most.

The Real Meaning of Disruption

by Jerry Baldwin 25. March 2014 07:26

Disruption is a popular word these days. Is it the replacement for innovation, which we’ve overused into pointlessness over the past ten years? Maybe, but disruption means more: it carries the weight of shaking up the status quo, of not only creating something new but creating something that shifts models and opens our eyes to possibility.

We talk about disruption like it’s a new thing, cooked up by a team of marketers in a conference room somewhere. But its roots lie in a theory introduced by an Austrian economist in 1942. Joseph Schumpeter captured the world’s attention in only six pages, where he clarified the free market’s muddled way of delivering progress. He called this theory Creative Destruction.

I don’t have a PhD in economics so I’ll dodge the minutiae, but the overall meaning of the theory is what resonates—as it should to anyone engaged in business today. “Creative destruction is the same process of industrial mutation that revolutionizes the economic structure from within, incessantly destroying the old one and incessantly creating a new one.”

Simply put, to continually move forward we must be willing to embrace the destruction of what came before.

Economists often use the Montgomery Ward example. MW started as a mail-order company in the nineteenth century. With no cars or trucks in those days, and most Americans living in small rural pockets, it was too costly to ship products to widely dispersed local stores. That model would have resulted in a high cost to the consumer, making goods too expensive for the average buyer. So instead MW turned its enormous Chicago warehouse (complete with a railroad running through it) into its hub, and used the already well-established US mail to deliver products directly to customers at a low cost.

And a successful model it was. With a high volume of sales, MW could charge lower prices. By the 1890s Montgomery Ward was the world’s largest retailer. But all that came to an end. Why? Because over time Americans were moving to urban centers and could afford a higher standard of living.

Robert Wood was an MW exec, and may well be the first adopter of Big Data on record. He analyzed data, studied the census, and saw the shift in American lifestyle. He presented his findings to leadership, suggesting that selling goods through a chain of urban department stores would replace the mail-order model as the most profitable path for retail.

So MW did what any unmovable enterprise would do. They fired him.

Meanwhile James Cash Penney recognized the same trends as Robert Wood, and it wasn’t long before J.C. Penney stores put a serious dent in MW’s profits. The mail-order giant was late to the party and couldn’t change direction fast enough. And Robert Wood? He went to work for Sears, who also took a big bite out of MW’s market share.

Remind you of anything? Netflix and Blockbuster. Blockbuster was the established enterprise, staring streaming revenue in the face yet unable to let go of profits from the rental market. Netflix was the newcomer, the creative destructor, free from the ball and chain of a dying business model and free to focus 100% on new streaming revenue. And we all know the end of that story.

We also know that business is anything but stagnant; there are waves and cycles, and the same is true of companies. It’s very difficult (impossible?) for an established enterprise to turn around, to be truly disruptive, and to compete with newcomers. But if you’re an organization that can stomach the destruction of what came before to create new growth and opportunity, you might stand a chance.

Condusiv is unique. We’re a software company with a 30-year history as an established player in file system management and caching technologies. But in the spirit of disruption, of creative destruction, we’ve shifted all our focus and resources to V-locity, our flagship product that in itself signifies disruption in how organizations run their data centers: a 100% software approach to solving today’s application performance issues at the root, in the VM or OS layer, without requiring any additional hardware.

When you embrace creative destruction you must ceaselessly devalue existing wealth in order to clear the ground for the creation of new wealth.

It’s true, our discussions may be very different from those had in other conference rooms. But that’s what disruption should be about, and it’s how Condusiv can deliver performance in a software solution at 1/10th the cost of the hardware alternative. While others go for margin erosion, trying to win business by saving customers 20 cents on every dollar spent, we help customers save 90 cents on every dollar.

As a channel partner, we allow you to fulfill your mission to your customers with added value, while also protecting your bottom line with a generous software margin that grows your profit over razor-thin commodity hardware margins. You win. Your customer wins. A new market is emerging in which hardware becomes the second option for improved application performance.

Welcome to our conference room.

Mad Men, Awesome Chairs, and a Pretty Big IT Problem

by Robin Izsak 13. March 2014 07:57

Next month Mad Men returns to AMC for its final season. I'll miss it. The show had some great character development, terrific dialogue, and cool cars. But the set design! Oh the set design. I'm a fan of mid-century modern—the sharp angles, clean lines, and wood tones mixed with bold, primary colors. Don Draper's New York apartment and the Sterling Cooper offices are the pinnacle of perfectly curated spaces, designed for maximum form and function. 

Which brings me to Creative Office Pavilion, a large Boston-based firm of workplace consultants, and the primary US dealer of Herman Miller furniture—the undisputed kings of mid-century modern.

I mention all this as an excuse for my daydreaming. During my call with Robert Del Vecchio, Creative Office Pavilion's IT Infrastructure Manager, I was imagining conference rooms filled with Eames chairs of every color, wall clocks reminiscent of Sputnik, and enormously awesome lamps casting warm, non-fluorescent light. But Robert had more important issues to discuss, and discuss we did.

Robert's team supports a large base of business users: designers, mobile CRM users, support staff, accounting, and customer service. But he's also responsible for tons of data and the heavy workloads generated by an order entry system, a CRM application, Lotus Notes, SQL Server, and hundreds of employees constantly accessing architectural drawings, massive PowerPoint presentations, and database files. That's a lot of Eames chairs, going to a lot of workplaces, including Harvard University's Innovation Lab and Boston's Spaulding Hospital.

Sluggishness and poor performance had become a constant problem for his users, and Robert's team felt the full force of it—spending more and more time troubleshooting and doing storage health checks. So Robert did what any of us would do when faced with mounting pressure and scrutiny.

He Googled. And stumbled across a software solution that seemed to be the answer to his application performance problems, without requiring any new hardware.

What happened next is more exciting than Don Draper after one too many old-fashioneds.

Read the Creative Office Pavilion case study to learn more.

V-locity version 5 – The Director’s Cut

by Brian Morin 5. March 2014 07:08

As you know, we just released V-locity version 5. Here’s the director’s cut.

We committed a slew of engineers to several months of development to build an enterprise-class management console for V-locity. In a world where a couple of developers with a few pizzas can create a robust app from scratch in six weeks, that represents a lot of apps!

Our previous management console didn’t scale beyond 500 nodes, and it didn’t play well with modern environments that span geographic locations, mix virtual and physical servers, and provision some workloads to the cloud.

That meant building a console able to auto-detect the most complex environments and batch-deploy V-locity in seconds. A management console aware of the new world order of hybrid environments (virtual, physical, cloud) that can deploy to and manage all of them from a single point.

Customers asked for flexible pricing models, whether volume perpetual licenses, site licenses, or even subscriptions, and we listened. They asked for I/O performance management that delivers insight into the anatomy of I/O behavior across all their workloads, from virtual (or physical) server to storage, to take the guesswork out of performance troubleshooting. They wanted to set up alerts based on workload thresholds. And they asked for a console that could validate V-locity’s before-and-after performance across workloads, with ongoing performance validation for continued ROI transparency.

So we built it. The whole enchilada.

Typically, when the baton is handed to marketing, the first two questions are almost always the same – “What do we call it?” and “What do we charge for it?”

When you commit engineering resources the size of a small island, the very first temptation is to productize, to monetize, to ROI-ize what you put in, because there is a cost to building products.

Then again, this wasn’t really a stand-alone product, but rather a big enhancement to an existing product.

A lot of companies charge for that enhancement. Many of you have purchased hardware or software products, only to find a separate line item and SKU for the management software needed to manage the product you just bought: the never-ending high-tech rabbit hole of monetization, where you buy the car but the battery, steering wheel, and tires are not included.

As my daughter tells me, “Dad, everyone does it.” So, in our initial brainstorming session, we kicked around the idea of doing it too. But when it came down to it, we agreed it didn’t square with the core tenet of our business model: disruption.

V-locity provides performance at 1/10th the cost of the hardware alternative. That’s disruption. And in that spirit of disruption, we decided against productizing and charging for the management console.

It’s bundled with V-locity and available for free to every V-locity customer under maintenance. No extra charge required. No extra hardware required.
