Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

How to Improve Application Performance by Decreasing Disk Latency like an IT Engineer

by Spencer Allingham 13. June 2018 06:49

Take a moment to think about the workloads you care about most in your organisation. You might be responsible for a busy SQL server, for example, or a web server; perhaps a busy file and print server, the Finance Department's systems, documentation management, CRM, BI, or something else entirely.

Now, think about WHY these are the workloads that you care about the most.

 

Were YOU responsible for installing the application that runs the workload for your company? Is the workload business critical, or considered TOO BIG TO FAIL?

Or is it simply because users, or even worse, customers, complain about performance?

 

If the last question made you wince, because you know that YOU are responsible for some of the workloads running in your organisation that would benefit from additional performance, please read on. This article is just for you, even if you don't consider yourself a "Techie".

Before we get started, you should know that there are many variables that can affect the performance of the applications that you care about the most. The slowest, most restrictive of these is referred to as the "Bottleneck". Think of water being poured from a bottle. The water can only flow as fast as the neck of the bottle, the 'slowest' part of the bottle.

Don't worry though, in a computer the bottleneck will pretty much always fit into one of the following categories:

• CPU

• DISK

• MEMORY

• NETWORK

The good news is that if you're running Windows, it is usually very easy to find out which category the bottleneck falls into, and here is how to do it (like an IT Engineer):

• Open Resource Monitor by clicking the Start menu, typing "resource monitor", and pressing Enter. Microsoft includes this as part of the Windows operating system and it is already installed.

• Do you see the graphs in the right-hand pane? When your computer is running at peak load, or users are complaining about performance, which of the graphs are 'maxing out'?

This is a great indicator of where your workload's bottleneck is to be found.
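
If you prefer a scripted spot-check to watching the graphs, the short sketch below samples the same four categories from Python. It is only a rough illustration of the idea (not part of any Condusiv product), and it assumes the third-party psutil package is installed (pip install psutil); Resource Monitor remains the easiest view.

```python
# Minimal bottleneck spot-check: sample CPU, disk, memory and network over a
# short window and print how busy each category is. Illustrative only.
import psutil

def spot_check(seconds: int = 5) -> None:
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=seconds)          # % CPU over the window
    disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()

    mem = psutil.virtual_memory().percent               # % RAM in use right now
    disk_mb = (disk1.read_bytes + disk1.write_bytes
               - disk0.read_bytes - disk0.write_bytes) / (seconds * 1024**2)
    net_mb = (net1.bytes_sent + net1.bytes_recv
              - net0.bytes_sent - net0.bytes_recv) / (seconds * 1024**2)

    print(f"CPU:     {cpu:5.1f} %")
    print(f"MEMORY:  {mem:5.1f} % in use")
    print(f"DISK:    {disk_mb:5.1f} MB/s transferred")
    print(f"NETWORK: {net_mb:5.1f} MB/s transferred")

if __name__ == "__main__":
    spot_check()
```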

 

SO, now that you have identified the slowest part of your 'compute environment', what can you do to improve it?

The traditional approach to solving computer performance issues has been to throw hardware at the solution. This could be treating yourself to a new laptop, or putting more RAM into your workstation, or on the more extreme end, buying new servers or expensive storage solutions.

BUT, how do you know when it is appropriate to spend money on new or additional hardware, and when it isn't? Well, the answer is: it isn't appropriate when you can already get the performance that you need from the hardware infrastructure you have bought and paid for. You wouldn't replace your car just because it needed a service, would you?

Let's take disk speed as an example, and look at the Response Time column in Resource Monitor. Make sure the window is open full screen, or at least large enough to see the data, then open the Disk Activity section so you can see the Response Time column. Do it now on the computer you're using to read this. (You didn't close Resource Monitor yet, did you?) This column shows the Disk Response Time, or put another way, how long the storage is taking to read and write data. Of course, slower disk speed = slower performance, but what is considered good disk speed and what is considered bad?

To answer that question, I will refer to a great blog post by Scott Lowe, which you can read here:

https://www.techrepublic.com/blog/the-enterprise-cloud/use-resource-monitor-to-monitor-storage-performance/

In it, the author perfectly describes what to expect from faster and slower Disk Response Times:

"Response Time (ms). Disk response time in milliseconds. For this metric, a lower number is definitely better; in general, anything less than 10 ms is considered good performance. If you occasionally go beyond 10 ms, you should be okay, but if the system is consistently waiting more than 20 ms for response from the storage, then you may have a problem that needs attention, and it's likely that users will notice performance degradation. At 50 ms and greater, the problem is serious."

Hopefully, when you checked on your computer, the Disk Response Time was below 20 milliseconds. BUT what about those other workloads that you were thinking about earlier? What are the Disk Response Times on that busy SQL server, the CRM or BI platform, or those Windows servers that the users complain about?
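
If you would rather capture that number from a script than from the Resource Monitor window, here is a minimal sketch of the idea. It is not Condusiv software; it assumes a Windows machine with the standard English counter names and simply drives the built-in typeperf tool from Python to sample the same counter Resource Monitor displays.

```python
# Sample the Windows performance counter behind Resource Monitor's
# "Response Time" column and report the average in milliseconds.
import subprocess

COUNTER = r"\LogicalDisk(_Total)\Avg. Disk sec/Transfer"  # seconds per I/O

def sample_disk_response_ms(samples: int = 5, interval_s: int = 1) -> float:
    """Average disk response time in milliseconds over a few samples."""
    out = subprocess.run(
        ["typeperf", COUNTER, "-sc", str(samples), "-si", str(interval_s)],
        capture_output=True, text=True, check=True,
    ).stdout
    values = []
    for line in out.splitlines():
        parts = [p.strip('"') for p in line.split('","')]
        if len(parts) == 2:
            try:
                values.append(float(parts[1]))
            except ValueError:
                pass  # skip the CSV header row and status lines
    return 1000.0 * sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    ms = sample_disk_response_ms()
    verdict = "good" if ms < 10 else "borderline" if ms < 20 else "needs attention"
    print(f"Average disk response time: {ms:.1f} ms ({verdict})")
```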

If the Disk Response Times are often higher than 20 milliseconds, and you need to improve the performance, then it's choice time and there are basically two options:

• In my opinion as an IT Engineer, the most sensible option is to use storage workload reduction software like Diskeeper for physical Windows computers, or V-locity for virtualised Windows computers. These reduce Disk Response Times by allowing a good percentage of the data that your applications need to read to come from a RAM cache, rather than from slower disk storage. This works because RAM is much faster than the media in your disk storage. Best of all, the only thing you need to do to try it is download the free 30-day trial. You don't even have to reboot the computer; just check whether it brings the Disk Response Times down for the workloads that you care about the most.

• If you have tried the Diskeeper or V-locity software and you STILL need faster disk access, then, I'm afraid, it's time to start getting quotations for new hardware. It does make sense, though, to take a couple of minutes to install Diskeeper or V-locity first, to see if this step can be avoided. Removing storage inefficiencies in software is typically a much more cost-effective solution than having to buy hardware!

Visit www.condusiv.com/try to download Diskeeper and V-locity now, for your free trial.

 

The Revolution of Our Technology

by Rick Cadruvi, Chief Architect 18. October 2017 12:38

I chose to use the word “Revolution” instead of “Evolution” because, with all due modesty, our patented technology has been more a series of leaps to stay ahead of performance-crushing bottlenecks. After all, our company purpose as stated by our Founder, Craig Jensen, is:

“The purpose of our company is to provide computer technology that enormously increases the production and income of an area.”

We have always been about improving your production. We know your systems are not about having really cool hardware but rather about maximizing your organization’s production. Our passion has been eliminating the stops, slows and stalls in your application performance, jacking that performance up and giving you headroom for expansion. Now, most of you know us by our reputation for Diskeeper®. What you probably don’t know about us is our leadership in system performance software.

We’ve been at this for 35 years with a laser focus. As an example, for years hard drives were the common storage technology and they were slow and limited in size, so we invented numerous File System Optimization technologies such as Defragmentation, I-FAAST®1 and Directory Consolidation to remove the barriers to getting at data quickly. As drive sizes grew, we added new technologies and jettisoned those that no longer gave bang for the buck. Technologies like InvisiTasking® were invented to help maximize overall system performance, while removing bottlenecks.

As SSDs began to emerge, we worked with several OEMs to take advantage of SSDs to dramatically reduce data access times as well as reducing the time it took to boot systems and resume from hibernate. We created technologies to improve SSD longevity and even worked with manufacturers on hybrid drives, providing hinting information, so their drive performance and endurance would be world class.

As storage arrays were emerging we created technologies to allow them to better utilize storage resources and pre-stage space for future use. We also created technologies targeting performance issues related to file system inefficiencies without negatively affecting storage array technologies like snapshots.

When virtualization was emerging, we could see the coming VM resource contention issues that would materialize. We used that insight to create file system optimization technologies to deal with those issues before anyone coined the phrase “I/O Blender Effect”.

We have been doing caching for a very long time2. We have always targeted removing the I/Os that stand between your applications and their data, and satisfying reads from cache, which delivers performance improvements of 50-300% or more. Our goal was never caching your application-specific data, but rather making sure your application could access its data much faster. That’s why our unique caching technology has been used by leading OEMs.

Our RAM-based caching solutions include dynamic memory allocation schemes to use resources that would otherwise be idle to maximize overall system performance. When you need those resources, we give them back. When they are idle, we make use of them without your having to adjust anything for the best achievable performance. “Set It and Forget It®” is our trademark for good reason.

We know that staying ahead of the problems you face now, with a clear understanding of what will limit your production in 3 to 5 years, is the best way we can realize our company purpose and help you maximize your production and thus your profitability. We take seriously having a clear vision of where your problems are now and where they will be in the future. As new hardware and software technologies roll out, we will be there removing the new barriers to your performance then, just as we do now.

1. I-FAAST stands for Intelligent File Access Acceleration Sequencing Technology, a technology designed to take advantage of differently performing regions on storage so your hottest data can be retrieved in the fastest time.

2. If I can personally brag, I’ve created numerous caching solutions over a period of 40 years.

Condusiv Launches SSDkeeper Software that Guarantees “Faster than New” Performance for PCs and Physical Servers and Extends Longevity of SSDs

by Brian Morin 17. January 2017 09:30

The company that sold over 100 Million Diskeeper® licenses for hard disk drive systems, now releases SSDkeeper™ to keep solid-state drive systems running longer while performing “faster than new.”

Every Windows PC or physical server fitted with a solid-state drive (SSD) suffers from very small, fractured writes and reads, which dampen optimal SSD performance and ultimately erode the longevity of SSDs through write amplification issues. SSDkeeper’s patented software ensures large, clean, contiguous writes and reads, so more payload is carried with every I/O operation and fewer of the Program/Erase (P/E) cycles that shorten SSD longevity are executed; it boosts performance even further with its ability to cache hot reads within idle, available DRAM.

Solid-state drives can only handle a finite number of writes before failing. Every write kicks off P/E cycles that shorten SSD lifespan, and unnecessary extra writes compound that wear, a problem known as write amplification. By reducing the number of writes required for any given file or workload, SSDkeeper significantly boosts write performance while also reducing the number of P/E cycles that would otherwise have been executed. This enables individuals and organizations to reclaim the write speed of their SSD drives while ensuring the longest life possible.

Patented Write Optimization

SSDkeeper’s patented write optimization engine (IntelliWrite®) prevents excessively small, fragmented writes and reads that rob the performance and endurance of SSDs. SSDkeeper ensures large, clean contiguous writes from Windows, so maximum payload is carried with every I/O operation. By eliminating the “death by a thousand cuts” scenario of many, tiny writes and reads that slow system performance, the lifespan of an SSD is also extended due to reduction in write amplification issues that plague all SSD devices.

Patented Read Optimization

SSDkeeper electrifies Windows system performance further with an additional patented feature - dynamic memory caching (IntelliMemory®). By automatically using idle, available DRAM to serve hot reads, data is served from memory which is 12-15X faster than SSD and further reduces wear to the SSD device. The real genius in SSDkeeper’s DRAM caching engine is that nothing has to be allocated for cache. All caching occurs automatically. SSDkeeper dynamically uses only the memory that is available at any given moment and throttles according to the need of the application, so there is never an issue of resource contention or memory starvation. If a system is ever memory constrained at any point, SSDkeeper's caching engine will back off entirely. However, systems with just 4GB of available DRAM commonly serve 50% of read traffic. It doesn't take much available memory to have a big impact on performance.
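
To make the general idea of a self-throttling read cache concrete, here is a toy sketch. To be clear, this is not IntelliMemory or any Condusiv code; the 25% free-memory floor and the psutil dependency are assumptions chosen purely for illustration, and real caching engines work at the I/O layer inside the operating system rather than on whole files in a Python dictionary.

```python
# Toy read-through cache: repeat ("hot") reads come from RAM instead of
# storage, and caching backs off when the system is short on free memory.
import psutil

class ToyReadCache:
    def __init__(self, min_free_fraction=0.25):
        self.min_free_fraction = min_free_fraction
        self.cache = {}                         # path -> file contents held in RAM

    def _memory_is_plentiful(self):
        vm = psutil.virtual_memory()
        return (vm.available / vm.total) > self.min_free_fraction

    def read(self, path):
        if path in self.cache:                  # hot read: served from RAM
            return self.cache[path]
        with open(path, "rb") as f:             # cold read: goes to storage
            data = f.read()
        if self._memory_is_plentiful():         # only cache while RAM is otherwise idle
            self.cache[path] = data
        else:
            self.cache.clear()                  # back off entirely under memory pressure
        return data
```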

Enhanced Reporting

If you ever wanted to know how much performance Windows inefficiencies were robbing from your system, SSDkeeper tracks the time saved due to the elimination of small, fragmented writes and the time saved from every read request that is served from DRAM instead of the underlying SSD. Users can leverage SSDkeeper’s built-in dashboard to see what percentage of all write requests are reduced by sequentializing otherwise small, fractured writes and what percentage of all read requests are cached from idle, available DRAM.

SSDkeeper is a lightweight file system driver that runs invisibly in the background with near-zero intrusion on system resources. All optimizations occur automatically in real-time.

While SSDkeeper provides the same core patented functionality and features as the latest Diskeeper® 16 (minus the defragmentation functions, which apply to hard disk drives only), the benefit to a solid-state drive is different from the benefit to a hard disk drive. Hard disk drives do not suffer from write amplification that reduces longevity; on an SSD, by eliminating excessively small writes, IntelliWrite not only improves write performance but extends endurance as well.

Available in Professional and Server Editions

• SSDkeeper Professional for Windows PCs with SSD drives greatly enhances the performance of corporate laptops and desktops.

• SSDkeeper Server speeds physical server system performance of the most I/O intensive applications such as MS-SQL Server by 2X to 10X, depending on the amount of idle, unused memory.

• Options include the Diskeeper Administrator management console to automate network deployment and management across hundreds or thousands of PCs or servers.

• A free 30-day software trial download is available at http://www.condusiv.com/evaluation-software/

• Now available for purchase on our online store: http://www.condusiv.com/purchase/SSDKeeper/

 

How Can I/O Reduction Software Guarantee to Solve the Toughest Performance Problems?

by Brian Morin 14. January 2017 01:00

The #1 request I’ve been getting from customers is a white board video that succinctly explains the two silent killers of VM performance and how our I/O reduction guarantees to solve performance problems, so applications run perfectly on every Windows server.

Expensive backend storage upgrades should ONLY take place when you need more capacity – not more performance. Anytime I tell someone our I/O reduction software guarantees to solve their toughest performance problems…the very first response is invariably the same…HOW? Not only have I answered this question hundreds of times, our own customers find themselves answering it repeatedly for other team members or new hires.

To make this easier, I’ve answered it all in this 10-minute White Board Video, or you can continue reading.

Most of us have been upgrading hardware to get more performance ever since we can remember. It’s become so ingrained, it’s oftentimes the ONLY approach we think of when we need a performance upgrade.

Many organizations don’t necessarily need a performance boost on EVERY application, but they need it on one or two I/O-intensive applications. Throwing a new all-flash array or new hybrid array at a performance problem ends up being the most expensive and disruptive way to solve it, when all you have to do is the same thing thousands of our customers have done: simply try our I/O reduction software on any Windows server and watch the application run at least 50% faster, and in many cases 2X-10X faster.

Most IT professionals are unaware of the fact that as great as virtualization has been for server efficiency, the one downside is how it adds complexity to the data path. On top of that, Windows doesn’t play well in a virtual environment (or any environment where it is abstracted from the physical layer). This means I/O characteristics that are a lot smaller, more fractured and more random than they need to be – the perfect trifecta for bad storage performance.

This “death by a thousand cuts” scenario means systems are processing workloads about 50% slower than they should. Condusiv’s I/O reduction software solves this problem by displacing many tiny writes and reads with large, clean, contiguous writes and reads. As huge as that patented engine is for our customers, it’s not the only thing we’re doing to make applications run smoothly. Performance is further electrified by establishing a tier-0 caching strategy - automatically using idle, available memory to serve hot reads. This is the same battle-tested technology that has been OEM’d by some of the largest players out there – Dell, Lenovo, HP, SanDisk, Western Digital, just to name a few.

Although we might be most known for our first patented engine that solves Windows write inefficiencies to HDDs or SSDs, more and more customers are discovering just how important our patented DRAM caching engine is. If any customer can maintain even just 4GB of available memory to be used for cache, they most often see cache hit rates in the range of 50%. That means serving data out of DRAM, which is 15X faster than SSD and opens up even more precious bandwidth to and from storage for everything else. Other customers who really need to crank up performance are simply provisioning more memory on those systems and seeing >90% cache hit rates.
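
As a back-of-the-envelope check on those figures (my arithmetic, not a published benchmark), the effective read latency at a given cache hit rate is just a weighted average of DRAM and SSD latencies. The 15X speedup comes from the paragraph above; the nominal 1 ms SSD read is an assumption for illustration.

```python
# Effective read latency at a given cache hit rate, assuming DRAM is 15X
# faster than SSD and a nominal 1 ms SSD read. Illustrative arithmetic only.
def effective_latency_ms(ssd_ms, hit_rate, speedup=15.0):
    dram_ms = ssd_ms / speedup
    return hit_rate * dram_ms + (1.0 - hit_rate) * ssd_ms

ssd_ms = 1.0
for hit in (0.5, 0.9):
    eff = effective_latency_ms(ssd_ms, hit)
    print(f"{hit:.0%} hit rate -> {eff:.2f} ms per read "
          f"(~{ssd_ms / eff:.1f}X faster on average)")
# 50% hit rate -> ~0.53 ms (~1.9X); 90% hit rate -> ~0.16 ms (~6.3X)
```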

See all this and more described in the latest Condusiv I/O Reduction White Board video that explains eeevvvveeerything you need to know about the problem, how we solve it, and the typical results that should be expected in the time it takes you to drink a cup of coffee. So go get a cup of coffee, sit back, relax, and see how we can solve your toughest performance problems – guaranteed.

 

Everything You Need to Know about SSDs and Fragmentation in 5 Minutes

by Howard Butler 17. November 2016 05:42

When reading articles, blogs, and forums posted by well-respected (or at least well-intentioned) people on the subject of fragmentation and SSDs, many make statements about how (1) SSDs don’t fragment, or (2) there are no moving parts, so no problem, or (3) an SSD is so fast, why bother? We all know and agree SSDs shouldn’t be “defragmented” since that shortens lifespan, so is there a problem after all?

The truth of the matter is that applications running on Windows do not talk directly to the storage device. Data is referenced as an abstracted layer of logical clusters rather than physical track/sectors or specific NAND-flash memory cells. Before a storage unit (HDD or SSD) can be recognized by Windows, a file system must be prepared for the volume. This takes place when the volume is formatted and in most cases is set with a 4KB cluster size. The cluster size is the smallest unit of space that can be allocated. Too large a cluster size results in wasted space due to over-allocation for the actual data needed. Too small a cluster size causes many file extents or fragments.

After formatting is complete and when a volume is first written to, almost all of the free space is in just one or two very large sections. Over the course of time, as files of various sizes are written, modified, re-written, copied, and deleted, the size of individual sections of free space as seen from the NTFS logical file system point of view becomes smaller and smaller. I have seen both HDD and SSD storage devices with over 3 million free space extents. Since Windows lacks file size intelligence when writing a file, it never chooses the best allocation at the logical layer, only the next available – even if the next available is 4KB. That means 128K worth of data could wind up with 32 extents or fragments, each being 4KB in size. Therefore SSDs do fragment at the logical Windows NTFS file system level. This happens not as a function of the storage media, but of the design of the file system.

Let’s examine how this impacts performance. Each extent of a file requires its own separate I/O request. In the example above, that means 32 I/O operations for a file that could have taken a single I/O if Windows were smarter about managing free space and finding the best logical clusters instead of the next available. Since each I/O request takes a measurable amount of time to complete, the issue we’re talking about here with SSDs is one of I/O overhead.
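
To put rough numbers on that overhead, here is a small sketch using the article’s own example of a 128 KB file split into 32 extents of 4 KB each. The per-I/O latency is an assumption chosen for illustration, not a measurement of any particular SSD; on Windows you can inspect a real file’s extent list with the built-in command fsutil file queryextents <file> (which may require an elevated prompt).

```python
# Rough model of the I/O overhead caused by logical fragmentation: the more
# extents a file has, the more I/O requests are needed to read it.
def read_time_ms(file_kb, extent_kb, per_io_ms=0.1):
    extents = -(-file_kb // extent_kb)         # ceiling division = I/O requests
    return extents * per_io_ms

fragmented = read_time_ms(128, 4)              # 32 extents of 4 KB each
contiguous = read_time_ms(128, 128)            # one contiguous extent
print(f"fragmented: {fragmented:.1f} ms  contiguous: {contiguous:.1f} ms "
      f"({fragmented / contiguous:.0f}X more time spent on I/O overhead)")
```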

Even with no moving parts and multi-channel I/O capability, the more I/O requests needed to complete a given workload, the longer it is going to take your SSD to access the data. This performance loss occurs on initial file creation and carries forward with each subsequent read of the same data. But wait… the performance loss doesn’t stop there. Once data is written to a memory cell on an SSD and the file space is later marked for deletion, it must first be erased before new data can be written to that memory cell. This is a rather time-consuming process, and memory cells cannot be erased individually; instead, a group of adjacent memory cells (an erase block) is processed together. Unfortunately, some of those memory cells may still contain valuable data, and this information must first be copied to a different set of memory cells before the erase block can be cleared and made ready to accept the new data. This is known as Write Amplification, and it is one of the reasons why writes are so much slower than reads on an SSD.

Another problem unique to SSDs is that each memory cell can only be written a limited number of times before it is no longer usable. When too many memory cells are considered invalid, the whole unit becomes unusable. While TRIM, wear-leveling technologies, and garbage collection routines have been developed to help with this behavior, they are not able to run in real time and therefore are only playing catch-up instead of providing the kind of preventative measures that are needed the most. In fact, these advanced technologies offered by SSD manufacturers (and within Windows) do not prevent or reverse the effects of file and free space fragmentation at the NTFS file system level.

The only way to eliminate this surplus of small, tiny writes and reads that (1) chew up performance and (2) shorten lifespan from all the wear and tear is by taking a preventative approach that makes Windows “smarter” about how it writes files and manages free space, so more payload is delivered with every I/O operation. That’s exactly why more users run Condusiv’s Diskeeper® (for physical servers and workstations) or V-locity® (for virtual servers) on systems with SSD storage. For anyone who questions how much value this approach adds to their systems, the easiest way to find out is by downloading a free 30-day trial and watch the “time saved” dashboard for yourself. Since the fastest I/O is the one you don’t have to write, Condusiv software understands exactly how much time is saved by eliminating multiple, fractured writes with fewer, larger contiguous writes. It even has an additional feature to cache reads from idle, available DRAM (15X faster than SSD), which further offloads I/O bandwidth to SSD storage. Especially for businesses with many users accessing a multitude of applications across hundreds or thousands of servers, the time savings are enormous.

 

ATTO Benchmark Results with and without Diskeeper 16 running on a 120GB Samsung SSD Pro 840. The read data caching shows a 10X improvement in read performance.
