Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

Top 10 Webinar Questions – Our Experts Get Technical

by Marissa Newman 7. January 2020 12:58

As we enter the new year and reflect on the 25 live webinars that we held in 2019, we are thrilled with the level of interaction and thought we’d take a look back at some of the great questions asked during the lively Q&A sessions. Here are the top questions and the responses that our technical experts gave.

 

Q. We run a Windows VM on Microsoft Azure; is your product still applicable?

A. Yes. Whether the Windows system is physical or virtual, it still runs into the I/O tax and the I/O blender effect, both of which degrade system performance. Whether the system is on premises or in the cloud, V-locity® can optimize it and improve performance.

 

Q. If a server is dedicated to running multiple SQL jobs for different applications, would you recommend installing V-locity?

A. Yes, we would definitely recommend using V-locity. However, the software is not specific to SQL instances, as it looks to improve the I/O performance on any system. SQL just happens to be a sweet spot because of how I/O intensive it is.

 

Q. Will V-locity/Diskeeper® help with the performance of my backup jobs?

A. We have a lot of customers that buy the software to increase their backup performance because their backup windows are going past the time they have allotted to do the backup. We’ve had some great success stories of customers that have reduced their backup windows by putting our software on their system.

 

Q. Does the software work in physical environments?

A. Yes. Although we typically demonstrate the software's benefits in a virtual environment, the same performance gains can be had on physical systems. The same I/O tax and blender effect that degrade performance on virtual systems can also occur on physical systems. The I/O tax occurs on any Windows system when nice, sequential I/O is broken up into smaller, less efficient random I/O, which also applies to physical workstation environments. The blender effect we see when all of those small, random I/Os from multiple VMs have to be sorted by the hypervisor can occur in physical environments too; for example, when multiple physical systems are reading from and writing to different LUNs on the same SAN.

 

Q. What about the safety of this caching? If the system crashes, how safe is my data?

A. The software uses read-only caching, as data integrity is our #1 priority when we develop these products. With read-only caching, the data in our cache already resides on your storage. So if the system unexpectedly goes down (e.g., a power outage), that data is already on your storage and completely safe.
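
To picture why a read-only cache can't put data at risk, here is a minimal, hypothetical sketch in Python (not Condusiv's actual implementation): every write goes straight to storage, and the cache only ever holds copies of blocks that storage already has.

```python
# Minimal, hypothetical sketch of a read-only (read-through) cache.
# This is NOT Condusiv's implementation; it only illustrates why a
# read cache cannot lose data: writes always go straight to storage.

class ReadOnlyCache:
    def __init__(self, storage):
        self.storage = storage   # authoritative copy of every block
        self.cache = {}          # volatile copy held in DRAM

    def read(self, block_id):
        if block_id in self.cache:          # cache hit: serve from DRAM
            return self.cache[block_id]
        data = self.storage[block_id]       # cache miss: read from storage
        self.cache[block_id] = data         # keep a copy for next time
        return data

    def write(self, block_id, data):
        self.storage[block_id] = data       # write goes to storage first
        self.cache.pop(block_id, None)      # drop any stale cached copy

# If the machine crashes, self.cache simply vanishes; storage already
# holds every written block, so nothing is lost.
```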

 

Q. How does your read cache differ from SQL that has its own data cache?

A. SQL is not too smart or efficient with how it uses your valuable available memory. It tries to load as much of its databases as it can into the available memory, even though some of those databases, or parts of them, aren't even being accessed. Most of the time, your databases are much larger than the amount of memory you have, so it can never fit everything. Our software is smarter in that it can determine the best blocks of data to optimize in order to get the best performance gains. Additionally, the software will also be caching other noisy I/Os from the system, which can further improve performance on the SQL server.

 

Q. In a Virtual environment, does the software get installed on the Host or the VMs?

A. The software gets installed on the actual VMs that are running Windows, because that's where the applications create the I/Os and the best place to start optimizing. That doesn't necessarily mean it has to be installed on every VM on a host. You can put it just on the VMs that are getting hit the most with I/O activity, but we've seen the best performance gains when it is installed on all of the VMs on that host: if you only optimize one VM, the other VMs still cause performance degradation on that same network. By putting the software on all of them, you'll get optimal performance all around.

 

Q. Is your product needed if I have SSDs as my storage back-end?

A. Our patented I/O reduction solutions are very relevant in an SSD environment. By reducing random write I/Os to back-end SSDs, we also help mitigate and reduce write amplification issues. We keep SSDs running at “like new” performance levels. And although SSDs are much faster than HDDs, the DRAM used in the product's intelligent caching feature is 10x-15x faster than SSDs. We have many published customer use cases showing the benefits of our products on SSD-based systems. Many of our customers have experienced 50%, 100%, even 300% performance gains in an all-flash/SSD environment!

 

Q. Do we need to increase our RAM capacity to utilize your software?

A. That is one of the unique Set-It-and-Forget-It features of this product. The software will just use the available memory that’s not being used at the time and will give it back if the system or user applications need it. If there’s no available memory on the system, you just won’t be able to take advantage of the caching. So, if there’s not enough available RAM, we do recommend adding some to take advantage of the caching, but of course you’re always going to get the advantage of all the other technology if you can’t add RAM. Best practice is to reserve 4-8GB at a minimum.

 

Q. What teams can benefit most from the software? The SQL Server Team/Network Team/Applications Development Team?

A. The software can really benefit everyone. SQL Servers are usually very I/O intensive, so their performance improves because we're reducing I/O in the environment, but any I/O-intensive system or application (like a file server or Exchange server) will benefit. Overall throughput improves, and the network team benefits because the software decreases the traffic that has to go through the network to storage, which frees up bandwidth for others. Because the software also improves and reduces I/O across all Microsoft applications, it really can benefit everyone in the environment.

 

There you have it – our top 10 questions asked during our informative webinars! Have more questions? Check out our FAQs, ask us in the comments below or send an email to info@condusiv.com.

Tags:

Application Performance | SSD, Solid State, Flash

Causes and Solutions for Latency

by Kim Amezcua 19. December 2019 04:14

Sometimes the slowdown of a Windows server occurs because the device or its operating system is outdated. Other times, the slowdown is due to physical constraints on retrieving, processing, or transmitting data. There are other causes as well, which we will cover below. In any case, the delay between when a command is issued and a response is received is referred to as "latency."

Latency is a measure of time. For example, the latency of a command might be 0.02 seconds. To humans, this seems extraordinarily fast. However, computer processors can execute billions of instructions per second, which means that even a latency of a few millionths of a second can cause visible delays in the operation of a computer or server.
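
To put that in perspective, here is a quick back-of-the-envelope calculation using assumed, representative numbers (roughly 3 billion instructions per second, and the 0.02-second latency mentioned above):

```python
# Back-of-the-envelope illustration (assumed, representative numbers):
# how much work a CPU could have done while waiting on one I/O.

instructions_per_second = 3e9   # assume ~3 billion instructions per second
io_latency_seconds = 0.02       # the 20 ms example latency from the text

wasted_instructions = instructions_per_second * io_latency_seconds
print(f"Instructions the CPU could have executed while waiting: {wasted_instructions:,.0f}")
# -> 60,000,000 instructions lost to a single 20 ms wait
```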

To figure out how to improve latency, you must identify its source. There are many possible sources of latency and, for each one, there are fixes. Here are two possible causes of latency, along with a brief explanation of how to address each. In both cases, I/O latency, where the computer process sits waiting for an I/O to complete before it can process that I/O's data, is a waste of your computer's processing power.

Data Fragments

Logical data fragments occur when files are written, deleted, and rewritten to a hard drive or solid-state drive.

When files are deleted from a drive, the data actually still exists on the drive; however, the logical address for those files in the Windows file system is freed up for reuse. This means that "deleted" files remain on the logical drive until another file is written over them by reusing the address. (This also explains why it is often possible to recover lost files.)

When an address is reused, the likelihood that the new file is exactly the same length as the "deleted" file is remote. As a result, little chunks or fragments of data left over from the "deleted" file remain on the logical drive. As a logical drive fills up, new files are sometimes broken up to fit into the available segments. At its worst, a fragmented logical drive contains both old fragments left over from deleted files (free space fragments) and new fragments that were intentionally created (data file fragments).

Logical data fragments can be a significant source of latency in a computer or server. Storing to, and retrieving from, a fragmented logical drive introduces additional steps in searching for and reassembling files around the fragments. For example, rather than reading a file in one or two I/Os, fragmentation can require hundreds, even thousands of I/Os to read or write that same data.
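
A rough illustration, using assumed numbers for file size, fragment size, and maximum I/O transfer size, shows how quickly the I/O count grows:

```python
# Illustrative (assumed) numbers: I/Os needed to read one 64 MB file.

file_size_mb = 64
max_io_size_mb = 32            # assume the OS can issue up to 32 MB per I/O
fragment_size_kb = 64          # assume heavy fragmentation into 64 KB pieces

contiguous_ios = -(-file_size_mb // max_io_size_mb)          # ceiling division -> 2 I/Os
fragmented_ios = (file_size_mb * 1024) // fragment_size_kb   # one I/O per fragment -> 1,024 I/Os

print(f"Contiguous file: {contiguous_ios} I/Os")
print(f"Fragmented file: {fragmented_ios} I/Os")
```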

One way to improve latency caused by logical data fragments is to defragment the logical drive, collecting the fragments and making them contiguous. The main disadvantages of defragmenting are that it must be repeated periodically, because the logical drive will inevitably fragment again, and that defragmenting SSDs can cause them to wear out prematurely.

A better method for how to improve latency from disk fragments is to prevent the logical disk from becoming fragmented. Diskeeper® 18 manages writes so that large, contiguous segments are kept together from the very start, thereby preventing the fragments from developing in the first place.

Limited Resources

No matter how "fast" the components of a computer are, they are still finite and tasks must be scheduled and performed in order. Certain tasks must be put off while more urgent tasks are executed. Although the latency in scheduling is often so short that it is unnoticeable, there will be times when limited resources cause enough of a delay that it hampers the computer or server.

For example, two specifications that are commonly used to define the speed of a computer are processor clock speed and instructions per cycle. Although these numbers climb steadily as technology advances, there will always be situations where the processor has too many tasks to execute and must delay some of them to get them all done.

Similarly, data buses and RAM have a particular speed, which limits how quickly data can be moved to the processor. These kinds of input/output performance delays can reduce a system's capacity by more than 50%.

One way to address latency is a method used by Diskeeper® 18: idle, available DRAM is used to cache hot reads. Caching eliminates the trip all the way to the storage infrastructure to read the data; remember that DRAM can be 10x-15x faster than SSDs, and many times faster again than HDDs. This allows faster retrieval of data; in fact, Windows systems can run faster than when new.
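
As a simple illustration of why a DRAM read cache pays off even on fast SSDs, here is a back-of-the-envelope calculation with assumed latencies and an assumed cache hit rate (the real figures vary by workload):

```python
# Hypothetical numbers to show why a DRAM read cache helps even on SSDs.

ssd_read_latency_us = 100      # assume ~100 microseconds per SSD read
dram_read_latency_us = 8       # assume DRAM-served reads are ~12x faster
cache_hit_rate = 0.60          # assume 60% of hot reads served from cache

effective_latency = (cache_hit_rate * dram_read_latency_us +
                     (1 - cache_hit_rate) * ssd_read_latency_us)

print(f"Average read latency without cache: {ssd_read_latency_us} us")
print(f"Average read latency with cache:    {effective_latency:.1f} us")
# -> roughly 44.8 us, i.e. reads complete more than twice as fast on average
```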

Reducing latency is mostly a matter of identifying the source of latencies and addressing them. By being proactive and preventing fragmentation before it happens and by caching hot reads using idle & available DRAM, Diskeeper® 18 makes Windows computers faster and more reliable.

 

Condusiv’s V-locity Technology Was Recently Certified as Citrix Ready

by Dawn Richcreek 11. September 2019 09:51

 

We are proud to announce that Condusiv’s V-locity® I/O reduction software has been certified as Citrix Ready®. The Citrix Ready program helps customers identify third-party solutions that enhance virtualization, networking and cloud computing solutions from Citrix Systems, Inc. V-locity, our innovative and dynamic alternative to costly hardware overhauls, has completed a rigorous verification process to ensure compatibility with Citrix solutions, providing confidence in joint solution efficiency and value. The Citrix Ready program makes it easy for customers to identify complementary products and results-driven solutions that can enhance Citrix environments and increase productivity.


Verified Performance Improvements of 50 Percent or More

To obtain the Citrix Ready certification, we ran IOMeter benchmark tests—an industry standard tool for testing I/O performance—on a Windows 10 system powered by Citrix’s XenDesktop virtual desktop access (VDA).  

The IOMeter benchmark utility was set up to run 5 different tests with variations in the following parameters:

 •  Different read/write packet sizes (512 B to 64 KB)
 •  Different read/write ratios, e.g. 50% reads/50% writes, 75% reads/25% writes
 •  Different mixtures of random and sequential I/Os

The tests showed drastic improvements with V-locity enabled versus disabled. With V-locity enabled, performance improved by around 50% on average. In one test case, IOPS (I/Os per second) increased from 2,903 to 5,525, a performance improvement of 90%.
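
As a quick sanity check of that figure:

```python
# Quick check of the reported IOPS improvement.
baseline_iops = 2903
vlocity_iops = 5525

improvement_pct = (vlocity_iops - baseline_iops) / baseline_iops * 100
print(f"IOPS improvement: {improvement_pct:.0f}%")   # -> about 90%
```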


 This chart shows the detailed test results of the 5 test variations:  


We also compared the results shown on the V-locity Dashboard while running the same IOMeter benchmark, first with V-locity disabled and then enabled, and found some additional improvements.

With V-locity enabled, over 8 million I/Os were eliminated from having to go through the network and storage to be satisfied, which immensely increased the I/O capacity of the system. Knowing the latency times of these 'eliminated' I/Os, another improvement to highlight is that more than an hour of storage I/O time was saved.
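
As a rough illustration of how eliminated I/Os translate into saved time (the average per-I/O latency below is an assumption, not a measured value):

```python
# Rough illustration of the time saved by eliminating I/Os.
# The average latency per I/O is an assumed figure, not a measurement.

eliminated_ios = 8_000_000
assumed_avg_latency_ms = 0.5    # assume each eliminated I/O would have taken ~0.5 ms

time_saved_seconds = eliminated_ios * assumed_avg_latency_ms / 1000
print(f"Storage I/O time saved: {time_saved_seconds / 3600:.1f} hours")
# -> about 1.1 hours, consistent with "more than an hour" of I/O time
```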


Additionally, the workload (amount of data read/written) increased from 169GB to 273GB, meaning 60% more work was being done in the same amount of time.  


Customers can be confident that V-locity has successfully passed an exhaustive series of tests established by Citrix. The V-locity technology works effectively with Citrix solutions and can provide customers with 50% or more faster performance gain on their heaviest workloads. V-locity allows customers to “set it and forget it,” meaning that once it is installed, systems will instantly improve with little to no maintenance.

Our CEO, Jim D’Arezzo, noted, “We are proud to partner with Citrix Systems. It’s important to remember that most I/O performance issues are caused by the operating system, particularly in the Windows environment. When compared to a hardware upgrade, the software solutions Condusiv offers are far more effective—both in terms of cost and result—in increasing overall system performance. We offer customers intelligent solutions that now combine our V-locity with Citrix XenDesktop. We can’t wait to continue to work with the trusted partners in the Citrix Ready ecosystem.”

 

Download free 30-day trial of V-locity

 

Condusiv’s V-locity I/O reduction software has been certified as Citrix Ready

 

 

Case Study: Non-Profit Eliminates Frustrating Help Desk calls, Boosts Performance and Extends Useful Hardware Lifecycle

by Marissa Newman 9. September 2019 11:47

When PathPoint was faced with user complaints and productivity issues related to slow performance, the non-profit organization turned to Condusiv’s I/O reduction software to not only optimize their physical and virtual infrastructure but to extend their hardware lifecycles, as well. 

As technology became more relevant to PathPoint’s growing organization and mission of providing people with disabilities and young adults the skills and resources to set them up for success, the IT team had to find a solution to make the IT infrastructure as efficient as possible. That’s when the organization looked into Diskeeper® as a solution for their physical servers and desktops.

“Now when we are configuring our workstations and laptops, the first thing we do is install Diskeeper. We have several lab computers that we don’t put the software on and the difference is obvious in day-to-day functionality. Diskeeper has essentially eliminated all helpdesk calls related to sluggish performance,” reported Curt Dennett, PathPoint’s VP of Technology and Infrastructure.

Curt also found that workstations with Diskeeper installed have a 5-year lifecycle, versus the lab computers without Diskeeper that only last 3 years, and he saw similar results on his physical servers running full production workloads. Curt observed, “We don’t need to re-format machines running Diskeeper nearly as often. As a result, we gained back valuable time for other important initiatives while securing peak performance and longevity out of our physical hardware assets. With limited budgets, that has truly put us at ease.”

When PathPoint expanded into the virtual realm, Curt looked at V-locity® for their VMs and, after reviewing the benefits, brought the software into the rest of their environment. The organization found that with the powerful capabilities of Diskeeper and V-locity, they were able to offload 47% of I/O traffic from storage, resulting in a much faster experience for their users.

The use of V-locity and Diskeeper is now the standard for PathPoint. Curt concluded, “The numbers are impressive but what’s more for me, is the gut feeling and the experience of knowing that the machines are actually performing efficiently. I wouldn’t run any environment without these tools.”

 

Read the full case study

 

Try V-locity FREE for yourself – no reboot is needed

Cost-Effective Solutions for Healthcare IT Deficiencies

by Jim D’Arezzo, CEO 26. August 2019 05:22

Managing healthcare these days is as much about managing data as it is about managing patients themselves.  The tsunami of data washing over the healthcare industry is a result of technological advancements and regulatory requirements coming together in a perfect storm.  But when it comes to saving lives, the healthcare industry cannot allow IT deficiencies to become the problem rather than the solution.

The healthcare system generates about a zettabyte (a trillion gigabytes) of data each year, with sources including electronic health records (EHRs), diagnostics, genetics, wearable devices and much more. While this data can help improve our health, reduce healthcare costs and predict diseases and epidemics, the technology used to process and analyze it is a major factor in its value.

According to a recent report from International Data Corporation, the volume of data processed in the overall healthcare sector is projected to increase at a compound annual growth rate of 36 percent through 2025, significantly faster than in other data-intensive industries such as manufacturing (30 percent projected CAGR), financial services (26 percent) and media and entertainment (25 percent).

Healthcare faces many challenges, but one that cannot be ignored is information technology. Without adequate technology to handle this growing tsunami of often-complex data, medical professionals and scientists can’t do their jobs. And without that, we all pay the price.

Electronic Health Records

Over the last 30 years, healthcare organizations have moved toward digital patient records, with 96 percent of U.S. hospitals and 78 percent of physician’s offices now using EHRs, according to the National Academy of Medicine. A recent report from market research firm Kalorama Information states that the EHR market topped $31.5 billion in 2018, up 6 percent from 2017.

Ten years ago, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act and invested $40 billion in health IT implementation.

The adoption of EHRs is supposed to be a solution, but instead it is straining an overburdened healthcare IT infrastructure. This is largely because of the lack of interoperability among the more than 700 EHR providers. Healthcare organizations, primarily hospitals and physicians’ offices, end up with duplicate EHR data that requires extensive (not to mention non-productive) search and retrieval, which degrades IT system performance.

More Data, More Problems

IT departments are struggling to keep up with demand.  Like the proverbial Dutch boy with his finger in the dyke, it is difficult for IT staff to manage the sheer amount of data, much less the performance demands of users.

We can all relate to this problem.  All of us are users of massive amounts of data.  We also have little patience for slow downloads, uploads, processing or wait times for systems to refresh. IT departments are generally measured on three fundamentals: the efficacy of the applications they provide to end users, uptime of systems and speed (user experience).  The applications are getting more robust, systems are generally more reliable, but speed (performance) is a constant challenge that can get worse by the day.

From an IT investment perspective, improvements in technology have given us much faster networks, much faster processing and huge amounts of storage.  Virtualization of the traditional client-server IT model has provided massive cost savings.  And new hyperconverged systems can improve performance as well in certain instances.  Cloud computing has given us economies of scale. 

But costs will not easily be contained as the mounting waves of data continue to pound against the IT breakwaters.   

Containing IT Costs

Traditional thinking about IT investments goes like this.  We need more compute power; we buy more systems.  We need faster network speeds; we increase network bandwidth and buy the hardware that goes with it.  We need more storage; we buy more hardware.  Costs continue to rise proportionate to the demand for the three fundamentals (applications, uptime and speed).

However, there are solutions that can help contain IT costs.  Data Center Infrastructure Management (DCIM) software has become an effective tool for analyzing and then reducing the overall cost of IT.  In fact, the US government Data Center Optimization Initiative claims to have saved nearly $2 billion since 2016.

Other solutions that don’t require new hardware to improve performance and extend the life of existing systems are also available. 

What is often overlooked is that processing and analyzing data is dependent on the overall system’s input/output (I/O) performance, also known as throughput. Many large organizations performing data analytics require a computer system to access multiple and widespread databases, pulling information together through millions of I/O operations. The system’s analytic capability is dependent on the efficiency of those operations, which in turn is dependent on the efficiency of the computer’s operating environment.

In the Windows environment especially (which runs about 80% of the world’s computers), I/O performance degradation progresses over time. This degradation, which can lower the system’s overall throughput capacity by 50 percent or more, happens in any storage environment. Windows penalizes optimum performance because of inefficiencies in how the server hands off data to storage. This occurs in any data center, whether in the cloud or on premises, and it gets worse in a virtualized computing environment, where the multitude of systems all sending I/O up and down the stack to and from storage creates tiny, fractured, random I/O, resulting in a “noisy” environment that slows down application performance. Left untreated, it only worsens with time.

Even experienced IT professionals mistakenly think that new hardware will solve these problems. Since data is so essential to running organizations, they are tempted to throw money at the problem by buying expensive new hardware. While additional hardware can temporarily mask this degradation, targeted software can improve system throughput by 30 to 50 percent or more. Software like this has the advantage of being non-disruptive (no ripping and replacing hardware), and it can be transparent to end users as it is added in the background. Thus, a software solution can handle more data by eliminating overhead, increase performance at a much lower cost, and extend the life of existing systems.

With the tsunami of data threatening IT, solutions like these should be considered in order to contain healthcare IT costs.


Download V-locity - I/O Reduction Software  

Tags:

Application Performance | EHR
