Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

It’s Diskeeper Groundhog Day

by Brian Morin 12. July 2017 06:08

Every week, I find myself sending the same email to at least one Diskeeper® customer. Almost every time, it’s a new manager/director/VP who joins the company and hears Diskeeper is running on their physical servers or clients. This is how it goes:

Hi Christopher,

I’m the SVP of WW Sales here and received news that XYZ company may not be renewing support on your Diskeeper licenses because of concerns with it on MS-SQL servers attached to SAN storage with SSDs.

Since you own these licenses, I wanted to reach out for a quick 15-min tech conversation, just to make sure you understand what you have in the latest version. Many still have legacy ideas of Diskeeper from when it was a “defrag” product and not applicable to the new world order of SSDs and modern SANs.

I guarantee the current version of our product will offload anywhere from 30-40% of your I/O traffic from your underlying storage to provide a nice performance boost and give some precious I/O headroom back to that subsystem. Many customers offload >50% of I/O by simply adding more memory, enabling them to sweat their storage assets significantly and use those IOPS for other things. It’s a free upgrade while you are active on your maintenance.

As a primer, you can read this case study we published last month with the University of Illinois in which we doubled performance of their SQL and Oracle applications sitting on all-flash arrays. And if you want the short, short summary of what the technology does now, here’s the 2-min video: https://www.youtube.com/watch?v=Ge-49YYPwBM. This is why Gartner named us Cool Vendor of the Year a couple years back.

The good news is that they have heard of Diskeeper. The bad news is that they still associate it with legacy versions that emphasized defragmentation applicable only to spinning disk. For those customers who virtualized and converted their licenses to V-locity®, we don’t run into this issue.

If you are a current Diskeeper customer and have difficulty educating new management, I suggest setting up a tech review between Condusiv and new team members. If you are running all-flash, an alternative approach others have taken is simply replacing their Diskeeper with SSDkeeper® so they don’t run into old “defrag” objections. The core features are identical and both auto-detect if the storage is HDD or SSD and apply the best optimization method.

Keep in mind, since Diskeeper now proactively prevents excessively small writes and reads in the first place, “defragmentation” is largely a dead concept except in some extenuating circumstances. An example would be a heavily fragmented volume on spinning disk that never had Diskeeper on it and could use a one-time cleanup. Case in point: a new customer last week whose full backup was taking over a day to complete! After running Diskeeper on their physical servers, the backup time was cut in half.

Tags:

Defrag | Diskeeper | SAN | SSD, Solid State, Flash

Recently Discovered SSD Vulnerabilities Could Cripple Global Markets with Data Corruption if Exploited by Attackers

by Brian Morin 15. June 2017 10:38

Multi-level cell (MLC) solid-state drive (SSD) vulnerabilities recently discovered by researchers from Carnegie Mellon University, Seagate, and the Swiss Federal Institute of Technology in Zurich reveal the first-ever security weakness of its kind against the MLC SSDs that store much of the world’s data. Two different types of malicious attacks are reported to corrupt data, leaving much of the world’s data exposed while organizations search for answers.

If security and data protection experts didn’t have enough to worry about already, the latest discovery from Carnegie Mellon University has set off brand new alarms that could be far more crippling than the recent WannaCry virus or any ransomware attack. In this case, data is not infected or held hostage, but is lost entirely; not even the host SSD hardware can be salvaged after such an attack. This is not simply alarming to organizations with the most to lose, like financial institutions; we’re talking about real lives if patient care is compromised, as we saw earlier this month at hospitals across the UK.

In a recently published report by researchers from Carnegie Mellon University, Seagate, and the Swiss Federal Institute of Technology in Zurich, there are two types of malicious attacks that can corrupt data and shorten the lifespan of MLC SSDs: a write attack (“program interference”) and a read attack (“read disturb”). Both attacks inundate the SSD with a large number of operations over a short period of time, which can corrupt data, shorten lifespan, and render the SSD unable to store data reliably in the future. However, both attacks rely on native read and write operations from the operating system to the solid-state drive, which is circumvented by Condusiv® I/O reduction software on Windows systems (V-locity®, SSDkeeper®, Diskeeper® 16).

The only reason this story has been covered lightly by the media and not sensationalized across headlines is because no one has died yet or lost a billion dollars. This is a new and very different kind of vulnerability. Protection from this kind of attack is not something that can be addressed by traditional lines of defense like anti-virus software, firmware upgrades, or OS patches. Since it is cost prohibitive for organizations to “rip-and-replace” multi-level cell (MLC) SSDs with single-level cell (SLC) SSDs, they are forced to rely on data sets that have been backed up. However, what good is restoring data to hardware that can no longer reliably store data?

By acting as the “gatekeeper” between the Windows OS and the underlying SSD device, Condusiv I/O reduction software solutions perform inline optimizations at the OS-level before data is physically written or read from the solid-state drive. As a result, Condusiv’s patented technology is the only known solution that can disrupt “program interference” write operation attacks as well as “read disturb” read operation attacks that would attempt to exploit SSD vulnerabilities and corrupt data. While most known for boosting performance of applications running on Windows systems while extending the longevity of SSDs, Condusiv solutions go a step further as the only line of defense against these malicious attacks.

Condusiv’s patented write optimization engine (IntelliWrite®) mitigates the first vulnerability, “program interference,” by disrupting the write pattern that would otherwise generate errors and corrupt data. IntelliWrite eliminates excessively small writes and subsequent reads by ensuring large, clean contiguous writes from Windows so write operations to solid-state devices are performed in the most efficient manner possible on Windows servers and PCs. An attack could only be successful in the rare instance of limited free space or zero free space on a volume that results in writes occurring natively, circumventing the benefit of IntelliWrite.

Condusiv’s second patented engine (IntelliMemory®) disrupts the second vulnerability, “read disturb,” by establishing a tier-0 caching strategy that leverages idle, available memory to serve hot reads. This renders the “read disturb” attack useless since the storage target for hot reads becomes memory instead of the SSD device. A “read disturb” attack could only be successful in the rare instance that a Windows system is memory constrained and has no idle, available memory to be leveraged for cache.
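To make the tier-0 idea concrete, here is a minimal read-through cache sketch in Python: hot blocks are answered from memory, so repeated reads of the same data never touch the SSD. It is purely illustrative; the `device_read` callback and capacity limit are assumptions, and it does not reflect how IntelliMemory is actually implemented.

```python
# Minimal tier-0 read cache sketch: hot reads are served from memory, so
# repeated reads of the same blocks never reach the SSD. Illustrative only.
from collections import OrderedDict

class Tier0ReadCache:
    def __init__(self, device_read, capacity_blocks):
        self.device_read = device_read      # fallback read from the SSD (assumed callback)
        self.capacity = capacity_blocks     # bounded by idle, available memory
        self.cache = OrderedDict()          # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:          # hot read: answered from DRAM
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.device_read(block_id)   # cold read: one trip to the SSD
        self.cache[block_id] = data
        if len(self.cache) > self.capacity: # evict the least-recently-used block
            self.cache.popitem(last=False)
        return data
```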

While organizations use Condusiv software on Windows systems to maintain peak performance and extend the longevity of their SSDs, they can trust Condusiv to protect against malicious attacks that would otherwise corrupt user data and bring great harm to their business and service to customers.

Condusiv Launches SSDkeeper Software that Guarantees “Faster than New” Performance for PCs and Physical Servers and Extends Longevity of SSDs

by Brian Morin 17. January 2017 09:30

The company that sold over 100 Million Diskeeper® licenses for hard disk drive systems, now releases SSDkeeper™ to keep solid-state drive systems running longer while performing “faster than new.”

Every Windows PC or physical server fitted with a solid-state drive (SSD) suffers from very small, fractured writes and reads, which dampen optimal SSD performance and ultimately erode the longevity of SSDs through write amplification. SSDkeeper’s patented software ensures large, clean, contiguous writes and reads for more payload with every I/O operation, reduces the Program/Erase (P/E) cycles that shorten SSD longevity, and boosts performance even further with its ability to cache hot reads within idle, available DRAM.

Solid-state drives can only handle a finite number of writes before failing. Every write kicks off P/E cycles that shorten SSD lifespan, and excess writes compound the problem through write amplification. By reducing the number of writes required for any given file or workload, SSDkeeper significantly boosts write performance while also reducing the number of P/E cycles that would have otherwise been executed. This enables individuals and organizations to reclaim the write speed of their SSD drives while ensuring the longest life possible.
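For readers unfamiliar with the term, write amplification is usually expressed as a factor: bytes physically programmed to flash divided by bytes the host actually wrote. The short sketch below uses made-up numbers purely to illustrate why fewer, larger writes mean fewer P/E cycles; these are not measurements from SSDkeeper.

```python
# Write amplification factor (WAF) = flash bytes programmed / host bytes written.
# The figures below are hypothetical, chosen only to illustrate the effect.

def write_amplification(host_bytes_written, flash_bytes_programmed):
    return flash_bytes_programmed / host_bytes_written

# Many small, scattered writes force the controller to rewrite partially filled
# pages and blocks, so more flash bytes get programmed per host byte.
fragmented = write_amplification(1_000_000, 3_200_000)   # WAF ~3.2
contiguous = write_amplification(1_000_000, 1_100_000)   # WAF ~1.1
print(f"fragmented WAF: {fragmented:.1f}, contiguous WAF: {contiguous:.1f}")
```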

Patented Write Optimization

SSDkeeper’s patented write optimization engine (IntelliWrite®) prevents excessively small, fragmented writes and reads that rob the performance and endurance of SSDs. SSDkeeper ensures large, clean contiguous writes from Windows, so maximum payload is carried with every I/O operation. By eliminating the “death by a thousand cuts” scenario of many, tiny writes and reads that slow system performance, the lifespan of an SSD is also extended due to reduction in write amplification issues that plague all SSD devices.
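As a rough sketch of the coalescing idea described above (not IntelliWrite’s actual driver logic), the example below buffers small, adjacent writes and flushes them to the device as one large contiguous write, so far fewer I/O operations are issued. The `device_write` callback and the 64 KB flush threshold are assumptions for the example.

```python
# Illustrative write coalescing: buffer small, adjacent writes and flush them
# as one large contiguous write, so fewer I/O operations reach the device.

class CoalescingWriter:
    def __init__(self, device_write, flush_threshold=64 * 1024):
        self.device_write = device_write    # assumed callable(offset, data)
        self.flush_threshold = flush_threshold
        self.buffer = bytearray()
        self.buffer_offset = None
        self.device_ops = 0                 # I/O operations actually issued

    def write(self, offset, data):
        adjacent = (self.buffer_offset is not None and
                    offset == self.buffer_offset + len(self.buffer))
        if not adjacent:                    # non-contiguous write: flush what we have
            self.flush()
            self.buffer_offset = offset
        self.buffer += data
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.device_write(self.buffer_offset, bytes(self.buffer))
            self.device_ops += 1
            self.buffer.clear()
            self.buffer_offset = None

# 1,000 sequential 4 KB writes collapse into ~63 device operations:
w = CoalescingWriter(device_write=lambda offset, data: None)
for i in range(1000):
    w.write(i * 4096, b"\x00" * 4096)
w.flush()
print(w.device_ops)
```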

Patented Read Optimization

SSDkeeper electrifies Windows system performance further with an additional patented feature: dynamic memory caching (IntelliMemory®). By automatically using idle, available DRAM to serve hot reads, data is served from memory, which is 12-15X faster than SSD, and wear on the SSD device is further reduced. The real genius in SSDkeeper’s DRAM caching engine is that nothing has to be allocated for cache. All caching occurs automatically. SSDkeeper dynamically uses only the memory that is available at any given moment and throttles according to the needs of the application, so there is never an issue of resource contention or memory starvation. If a system is ever memory constrained at any point, SSDkeeper's caching engine will back off entirely. However, systems with just 4GB of available DRAM commonly serve 50% of read traffic. It doesn't take much available memory to have a big impact on performance.
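The throttling behavior described above can be sketched as follows. The memory probe, the 1 GB headroom floor, and the 64 KB block size are all assumptions made for illustration; the point is only that the cache sizes itself from idle memory and backs off entirely when memory gets tight, rather than claiming a fixed allocation.

```python
# Sketch of "use only idle memory and back off when constrained". The probe,
# headroom floor, and block size are illustrative assumptions, not product tuning.

MIN_HEADROOM_BYTES = 1 * 1024**3    # always leave at least 1 GB for applications (assumed)
CACHE_BLOCK_BYTES  = 64 * 1024      # cache granularity (assumed)

def target_cache_blocks(get_available_bytes, current_cache_bytes):
    """Return how many blocks the cache should hold right now."""
    available = get_available_bytes()            # e.g. psutil.virtual_memory().available
    if available <= MIN_HEADROOM_BYTES:          # memory constrained: back off entirely
        return 0
    # Memory already held by the cache counts toward the budget, so the target
    # shrinks automatically as applications claim more of the system's RAM.
    budget = (available - MIN_HEADROOM_BYTES) + current_cache_bytes
    return budget // CACHE_BLOCK_BYTES
```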

Enhanced Reporting

If you ever wanted to know how much performance Windows inefficiencies were robbing from your systems, SSDkeeper tracks the time saved by eliminating small, fragmented writes and the time saved on every read request served from DRAM instead of the underlying SSD. Users can leverage SSDkeeper’s built-in dashboard to see what percentage of all write requests are reduced by sequentializing otherwise small, fractured writes and what percentage of all read requests are cached from idle, available DRAM.
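Conceptually, the two dashboard percentages are simple ratios over the I/O the driver has observed. The counter names below are hypothetical, chosen only to show the arithmetic, not SSDkeeper’s actual reporting schema.

```python
# Hypothetical counters a driver-level monitor might keep; the math is just the
# two ratios described above.

def dashboard_percentages(writes_issued_by_apps, writes_sent_to_storage,
                          reads_issued_by_apps, reads_served_from_dram):
    write_reduction_pct = 100.0 * (1 - writes_sent_to_storage / writes_issued_by_apps)
    read_cache_hit_pct = 100.0 * (reads_served_from_dram / reads_issued_by_apps)
    return write_reduction_pct, read_cache_hit_pct

# e.g. 10M app writes coalesced into 6.5M device writes; 8M reads, 4M served from DRAM
print(dashboard_percentages(10_000_000, 6_500_000, 8_000_000, 4_000_000))  # (35.0, 50.0)
```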

SSDkeeper is a lightweight file system driver that runs invisibly in the background with near-zero intrusion on system resources. All optimizations occur automatically in real-time.

While SSDkeeper provides the same core patented functionality and features as the latest Diskeeper® 16 for hard disk drives (minus the defragmentation functions that apply to hard disk drives only), the benefit to a solid-state drive is different from the benefit to a hard disk drive. Hard disk drives do not suffer from the write amplification that reduces longevity. By eliminating excessively small writes on SSDs, IntelliWrite not only improves write performance but extends endurance as well.

Available in Professional and Server Editions

SSDkeeper Professional for Windows PCs with SSD drives greatly enhances the performance of corporate laptops and desktops.

SSDkeeper Server speeds physical server system performance of the most I/O intensive applications such as MS-SQL Server by 2X to 10X depending on the amount of idle, unused memory.

Options include the Diskeeper Administrator management console to automate network deployment and management across hundreds or thousands of PCs or servers.

A free 30-day software trial download is available at http://www.condusiv.com/evaluation-software/

Now available for purchase on our online store: http://www.condusiv.com/purchase/SSDKeeper/

 

How Can I/O Reduction Software Guarantee to Solve the Toughest Performance Problems?

by Brian Morin 14. January 2017 01:00

The #1 request I’ve been getting from customers is for a whiteboard video that succinctly explains the two silent killers of VM performance and how our I/O reduction software guarantees to solve performance problems, so applications run perfectly on every Windows server.

Expensive backend storage upgrades should ONLY take place when needing more capacity – not more performance. Anytime I tell someone our I/O reduction software guarantees to solve their toughest performance problems…the very first response is invariably the same…HOW? Not only have I answered this question hundreds of times, our own customers find themselves answering this question repeatedly to other team members or new hires.

To make this easier, I’ve answered it all here in this 10-min White Board Video ->, or you can continue reading.

Most of us have been upgrading hardware to get more performance for as long as we can remember. It’s become so ingrained, it’s often the ONLY approach we think of when we need a performance upgrade.

Many organizations don’t necessarily need a performance boost on EVERY application, but they do need it on one or two I/O-intensive applications. Throwing a new all-flash array or new hybrid array at a performance problem ends up being the most expensive and disruptive way to solve it when all you have to do is what thousands of our customers have done: simply try our I/O reduction software on any Windows server and watch the application run at least 50% faster and in many cases 2X-10X faster.

Most IT professionals are unaware that, as great as virtualization has been for server efficiency, the one downside is the complexity it adds to the data path. On top of that, Windows doesn’t play well in a virtual environment (or any environment where it is abstracted from the physical layer). The result is I/O that is much smaller, more fractured, and more random than it needs to be: the perfect trifecta for bad storage performance.

This “death by a thousand cuts” scenario means systems are processing workloads about 50% slower than they should. Condusiv’s I/O reduction software solves this problem by displacing many small, fractured writes and reads with large, clean contiguous writes and reads. As huge as that patented engine is for our customers, it’s not the only thing we’re doing to make applications run smoothly. Performance is further electrified by establishing a tier-0 caching strategy: automatically using idle, available memory to serve hot reads. This is the same battle-tested technology that has been OEM’d by some of the largest players out there – Dell, Lenovo, HP, SanDisk, Western Digital, just to name a few.

Although we might be most known for our first patented engine that solves Windows write inefficiencies to HDDs or SSDs, more and more customers are discovering just how important our patented DRAM caching engine is. If any customer can maintain even just 4GB of available memory to be used for cache, they most often see cache hit rates in the range of 50%. That means serving data out of DRAM, which is 15X faster than SSD and opens up even more precious bandwidth to and from storage for everything else. Other customers who really need to crank up performance are simply provisioning more memory on those systems and seeing >90% cache hit rates.

See all this and more described in the latest Condusiv I/O Reduction White Board video that explains eeevvvveeerything you need to know about the problem, how we solve it, and the typical results that should be expected in the time it takes you to drink a cup of coffee. So go get a cup of coffee, sit back, relax, and see how we can solve your toughest performance problems – guaranteed.

 

First-ever “Time Saved” Dashboard = Holy Grail for ROI

by Brian Morin 2. November 2016 10:03

If you’ve ever wondered about the exact business value that Condusiv® I/O reduction software provides to your systems, the latest “time saved” reporting does exactly that.

Prior to V-locity® v6.2 for virtual servers and Diskeeper® 16 for physical servers and endpoints, customers would conduct extensive before/after tests to extract the intrinsic performance value, but struggled to extract the ongoing business benefit over time. This has been especially true during annual maintenance renewal cycles, when key stakeholders need to be “re-sold” to allocate budget for ongoing maintenance or to push new licenses to new servers.

The number one request from customers has been to better understand the ongoing business benefit of I/O reduction in terms that are easily relatable to senior management and make justifying the ROI painless. This “holy grail” search on the part of our engineering team has led to the industry’s first-ever “time saved” dashboard for an I/O optimization software platform.

When Condusiv software proactively eliminates the surplus of small, fractured writes and reads and ensures more “payload” with every I/O operation, the net effect is fewer write and read operations for any given workload, which saves time. When Condusiv software caches hot reads within idle, available DRAM, the net effect is fewer reads traversing the full stack down to storage and back, which saves time.

In terms of benefits, the new dashboard shows:

    1. How many write I/Os are eliminated by ensuring large, clean, contiguous writes from Windows

    2. How many read I/Os are cached from idle DRAM

    3. What percentage of write and read traffic is offloaded from underlying SSD or HDD storage

    4. Most importantly – the dashboard relates I/O reduction to the business benefit of … “time saved”

This reporting approach makes the software fully transparent about the type of benefit being delivered to any individual system or group of systems. Since the software itself sits within the Windows operating system, it is aware of latency to storage and understands just how much time is saved by serving an I/O from DRAM instead of the underlying SSD or HDD. And, most importantly, since the fastest I/O is the one you don’t have to write, Condusiv software understands how much time is saved by replacing multiple small, fractured writes with fewer, larger contiguous writes.
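As a back-of-the-envelope illustration of how a “time saved” figure can be derived: every read served from DRAM saves roughly the gap between storage latency and DRAM latency, and every eliminated write saves roughly one storage write’s worth of latency. The latencies and counts below are assumptions for the example, not numbers produced by the dashboard.

```python
# Assumed latencies for illustration only; real values depend on the storage path.
STORAGE_READ_LATENCY_S  = 200e-6    # ~200 µs per read on an assumed SSD/SAN path
DRAM_READ_LATENCY_S     = 10e-6     # ~10 µs to serve the same read from cache
STORAGE_WRITE_LATENCY_S = 250e-6    # ~250 µs per write that no longer gets issued

def time_saved_seconds(reads_served_from_dram, writes_eliminated):
    read_savings = reads_served_from_dram * (STORAGE_READ_LATENCY_S - DRAM_READ_LATENCY_S)
    write_savings = writes_eliminated * STORAGE_WRITE_LATENCY_S
    return read_savings + write_savings

# e.g. 50 million cached reads and 20 million eliminated writes over a month:
hours = time_saved_seconds(50_000_000, 20_000_000) / 3600
print(f"~{hours:.1f} hours of I/O time saved")   # ~4 hours under these assumptions
```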

Have you ever wondered how much time V-locity will save a VDI deployment? Or an application supported by all-flash? Or a Hyperconverged environment? Rather than wonder, just install a 30-day version of the software and monitor the “time saved” dashboard to find out. Benefits are fully transparent and easily quantified.

Have you ever needed to justify Diskeeper’s endpoint solution across a fleet of corporate laptops with SSDs? Now you can see the “time saved” on individual systems or all systems and quantify the cost of labor against the number of hours that Diskeeper saved in I/O time across any time period. The “no brainer” benefit will be immediately obvious.

Customers will be pleasantly surprised to find out the latest dashboard doesn’t just show granular benefits but also granular performance metrics and other important information to assist with memory tuning. See the avg., min, and max of idle memory used for cache over any time period (even by the hour) to make quick assessments on which systems could use more memory to take better advantage of the caching engine for greater application performance. Customers have found if they can maintain at least 2GB used for cache, that's where they begin to get into the sweet spot of what the product can do. If even more can be maintained to establish a tier-0 cache strategy, performance rises even further. Systems with at least 4GB idle for cache will invariably serve 60% of reads or more. 

 

 

Lou Goodreau, IT Manager, New England Fishery

“32% of my write traffic has been eliminated and 64% of my read traffic has been cached within idle memory. This saved over 20 hours in I/O time after 24 days of testing!”

David Bruce, Managing Partner, David Bruce & Associates

“Over 50% of my reads are now served from DRAM and over 30% of write traffic has been eliminated by ensuring large, contiguous writes. Now everything is more responsive!”

 
