Condusiv Technologies Blog

Blogging @Condusiv

The Condusiv blog shares insight into the issues surrounding system and application performance—and how I/O optimization software is breaking new ground in solving those issues.

For Larger V-locity Deployments: Getting Up and Running Using V-locity Management Console (VMC)

by Spencer Allingham 18. June 2019 05:32

V-locity® I/O Reduction software is designed to solve and prevent application performance degradation at the source, relieving the stress and frustration of user complaints and reliability issues. V-locity eliminates the two big I/O inefficiencies in a virtual environment that together generate at least 30-40% of the I/O traffic chewing up storage IOPS and slowing down your system. Installing V-locity 7 is a quick, easy and effective fix, with no reboot required. The first step to improving your server performance is downloading V-locity and starting your Proof of Concept.

 

Here are some helpful tips that will set you up to get the very best results

1. Log in to your account to access your V-locity software OR download a free 30-day trial of the software.

2. Make sure you are using Windows Server 2008 R2 or later.

3. Ensure there is at least 4 GB of Available Physical Memory on each machine when checking MSINFO32.EXE (System Information) at a peak load time; the sketch after this list shows one quick way to check. Note: It may be necessary to limit the amount of memory that certain applications, such as SQL Server, Exchange or Oracle on Windows, can take in order to leave at least 4 GB of memory free.

4. Plan to install V-locity on as many of the Windows machines that share the same storage as possible. If V-locity is installed on only a small number of machines, you will only be optimizing a small percentage of the overall storage I/O traffic, which is likely to limit the results.

5. IMPORTANT: If using V-locity in a firewalled environment, make sure the correct ports are open. The following ports need to be open for incoming and outgoing TCP packets between the V-locity Management Console and the other machines running V-locity: 135, 137, 139, 443, 445, 3102, 5985 and 13568-13572. The sketch after this list also includes a simple TCP reachability check for these ports.

6. Set aside a dedicated VM for the V-locity Management Console (VMC). This should be running at least Windows Server 2008 R2 and have Internet Explorer 9 or later, Chrome, or Firefox installed.
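If you want a quick sanity check of points 3 and 5 before deploying, the short Python sketch below reads the same Available Physical Memory figure that MSINFO32 reports and tests TCP reachability of the VMC ports. This is only a minimal sketch, assuming a Windows machine with Python installed; the host name is a placeholder, and MSINFO32 or your usual monitoring tools remain the authoritative check at peak load time.

import ctypes
import socket

VMC_HOST = "vmc-server.example.local"   # placeholder: name of the machine running the VMC
VMC_PORTS = [135, 137, 139, 443, 445, 3102, 5985] + list(range(13568, 13573))

class MEMORYSTATUSEX(ctypes.Structure):
    # Structure filled in by the Windows GlobalMemoryStatusEx API
    _fields_ = [("dwLength", ctypes.c_ulong),
                ("dwMemoryLoad", ctypes.c_ulong),
                ("ullTotalPhys", ctypes.c_ulonglong),
                ("ullAvailPhys", ctypes.c_ulonglong),
                ("ullTotalPageFile", ctypes.c_ulonglong),
                ("ullAvailPageFile", ctypes.c_ulonglong),
                ("ullTotalVirtual", ctypes.c_ulonglong),
                ("ullAvailVirtual", ctypes.c_ulonglong),
                ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
avail_gb = status.ullAvailPhys / (1024 ** 3)
print(f"Available Physical Memory: {avail_gb:.1f} GB "
      f"({'meets' if avail_gb >= 4 else 'below'} the 4 GB guideline)")

for port in VMC_PORTS:   # simple TCP connect test from this machine to the VMC
    try:
        with socket.create_connection((VMC_HOST, port), timeout=2):
            state = "open"
    except OSError:
        state = "closed or filtered"
    print(f"TCP {port}: {state}")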

Video Tutorials

This is a series of video tutorials to make sure your POC goes smoothly.

1. How to install the V-locity 7 Management Consoles: https://youtu.be/IpdI6aU_8wI

2. How to open the V-locity 7 Management Console for the first time: https://youtu.be/0_mtsZlu-YM

3. How to add a license to the V-locity 7 Management Console: https://youtu.be/OZ2sa4Bb8_w

4. How to deploy V-locity 7 from the V-locity Management Console: https://youtu.be/i6ogxKHia9o

5. How to gather V-locity 7 dashboard report data for analysis: https://youtu.be/RusU_rkNX_U

 

Once you have gathered the V-locity 7 dashboard report data, email this to your sales rep or sales@condusiv.com for additional analysis.

 

Undelete Saves Your Bacon, An In-depth Video Series

by Spencer Allingham 13. May 2019 03:43

Undelete® is a lot more than one of those simple file recovery utilities that just search through free space on Windows machines looking for recoverable data. It also protects files in network shared folders and captures versions of any number of file types.

If you've ever had to rely on restoring from a backup or a snapshot to get a deleted file back, watch now to find out how Undelete makes recovery faster and more convenient on workstations, laptops and Windows servers.

As a first line of defense in your disaster recovery strategy, Undelete, the world’s #1 file recovery software, can save your bacon!

“Undelete saved my bacon.” — Ken C, Cleveland State University

Why are some deleted files not in the Windows Recycle Bin?

Were you aware that the Windows Recycle Bin falls short of capturing all file deletions?

Whilst the Recycle Bin is very quick and convenient, it doesn’t capture:

· Files deleted from the Command Prompt

· Files deleted from within some applications

· Files deleted by network users from a Shared Folder

Undelete from Condusiv Technologies can capture ALL deletions, regardless of how they occur.

“It saved our bacon when a file on my system was accidentally deleted from another workstation. That recovery saved hours of work and sold us on the usefulness of the product.”

“Our entire commissions database was saved by the Undelete program. Very happy about that. We would have lost a week of commissions (over 2000 records easily). We were very grateful that we had your product.” — Frank B, Technical Manager, World Travel, Inc.

Watch this video for a demonstration of why the Recycle Bin falls short and how the Undelete software can pick up the slack and truly become the first line of defense in your disaster recovery strategy. 

What is Undelete File Versioning?

Have you ever accidentally overwritten a Microsoft Word document, spreadsheet or some other file?

Would it be helpful to have several versions of the same file available for recovery in the Windows Recycle Bin? Sorry, but the Recycle Bin can’t do that.

However, the Undelete Recovery Bin can!

“I'm glad I found yours -- it works very well, and the recovery really saved my bacon!” — John

Watch this video to see a demonstration of how capturing several versions of the same file when they get overwritten can really help save time as well as data.

Searching the Undelete Recovery Bin

Recover deleted files quickly and conveniently with Undelete’s easy search functions.

Even if you only know part of the file name, or aren’t sure what folder it was deleted from, see in this video how easy it is to find and recover the file that you need.

“I would recommend undelete as it has saved my bacon a couple of times when I was able to recover something that I deleted by accident.” — Joseph

Inclusion and Exclusion lists in Undelete

Find out how to use Inclusion and Exclusion Lists in the Undelete software to only capture those files that you really might want to recover and exclude all of those files that you don’t really care about.

Have you ever needed to get a file back that was deleted during a Windows Update? Probably not, so why have those files take up space in your Recovery Bin?

“It saved my bacon a few times.” — Jason

Watch this to see how configurable the Undelete Recovery Bin is.

Emergency Undelete Software

See a demonstration showing how easy it is to recover deleted files, even BEFORE you install the Undelete software from Condusiv Technologies.

Prevent that awful moment of extreme realization when you delete a file that isn’t backed up.

Oh! And if you’ve found this page because you need to recover a file right now, click here to get the free 30-day trialware of Undelete. We hope this helps you out of the jam!

“It has saved my bacon a couple of times when I was able to recover something that I deleted by accident.”

How to safely delete files before recycling your computer with Undelete

Want to get a new computer, but worry what would happen to your personal data if you recycled your old one, or sold it?

Watch now to see how to securely wipe your files from your computer’s hard drives with SecureDelete®, which is included in the Undelete software from Condusiv Technologies, before recycling your old computer, selling it, or passing it on to a friend.

We hope these videos help you navigate Undelete like a pro, and perhaps save your bacon, too!

Watch the Series - here!

Tags:

Data Protection | Data Recovery | File Protection | File Recovery | General | Undelete

SysAdmins Discover That Size Really Does Matter

by Spencer Allingham 25. April 2019 03:53

(...to storage transfer speeds...)

 

I was recently asked what could be done to maximize storage transfer speeds in physical and virtual Windows servers. Not the "sexiest" topic for a blog post, I know, but it should be interesting reading for any SysAdmin who wants to get the most performance from their IT environment, or for those IT Administrators who suffer from user or customer complaints about system performance.

 

As it happens, I had just completed some testing on this very subject and thought it would be helpful to share the results publicly in this article.

The crux of the matter comes down to storage I/O size and its effect on data transfer speeds. You can see in this set of results, using an NVMe-connected SSD (Samsung MZVKW1T0HMLH Model SM961), that the read and write transfer speeds (in other words, how much data can be transferred each second) are MUCH lower when the storage I/O sizes are below 64 KB:

 

You can see that whilst the transfer rate maxes out at around 1.5 GB per second for writes and around 3.2 GB per second for reads, when the storage I/O sizes are smaller you don’t see disk transfer speeds anywhere near that maximum rate. That’s okay if you’re only saving 4 KB or 8 KB of data, but it is definitely NOT okay if you are trying to write a larger amount of data, say 128 KB or a couple of megabytes, and the Windows OS is breaking that down into smaller I/O packets in the background, transferring to and from disk at those much slower rates. This happens far too often and means the Windows OS is dampening efficiency, transferring your data at a much slower rate than it could or should. That can have a very negative impact on the performance of your most important applications, and yes, they are probably the ones that users access the most and are most likely to complain about.
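If you would like a rough feel for this effect on your own disk without any extra tooling, the short Python sketch below writes the same total amount of data using different I/O sizes and reports the throughput of each run. It is only a minimal illustration under simple assumptions: the scratch file name, I/O sizes and 256 MB total are arbitrary, Windows file caching can flatter the smaller sizes, and a proper benchmark such as ATTO Disk Benchmark (mentioned below) remains the better measurement.

import os
import time

PATH = "iosize_test.bin"          # scratch file on the disk under test (placeholder name)
SIZES_KB = [4, 8, 64, 256, 1024]  # I/O sizes to compare
TOTAL_MB = 256                    # same total payload for every I/O size

for size_kb in SIZES_KB:
    block = os.urandom(size_kb * 1024)
    writes = (TOTAL_MB * 1024) // size_kb
    start = time.perf_counter()
    with open(PATH, "wb", buffering=0) as f:   # unbuffered, so each write() is one request
        for _ in range(writes):
            f.write(block)
        os.fsync(f.fileno())                   # force the data out to the device
    elapsed = time.perf_counter() - start
    print(f"{size_kb:>5} KB I/Os: {TOTAL_MB / elapsed:.0f} MB/s")

os.remove(PATH)                   # clean up the scratch file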

 

The good news of course, is that the V-locity® software from Condusiv® Technologies is designed to prevent these split I/O situations in Windows virtual machines, and Diskeeper® will do the same for physical Windows systems. Installing Condusiv’s software is a quick, easy and effective fix as there is no disruption, no code changes required and no reboots. Just install our software and you are done!

You can even run this test for yourself on your own machine. Download a free copy of ATTO Disk Benchmark from the web and install it. You can then click its Start button to quickly get a benchmark of YOUR system’s data transfer speeds at different I/O sizes. I bet you’ll quickly see that when it comes to data transfer speeds, size really does matter!

Out of interest, I enabled our Diskeeper software (I could have used V-locity instead) so that our RAM caching would assist the speed of the read I/O traffic, and the results were pretty amazing. Instead of the reads maxing out at around 3.2 GB per second, they were now maxing out at around a whopping 11 GB per second, more than three times faster. In fact, the ATTO Disk Benchmark software had to change the graph scale for the transfer rate (X-axis) from 4 GB/s to 20 GB/s, just to accommodate the extra GBs per second when the RAM cache was in play. Pretty cool, eh?

 

Of course, it is unrealistic to expect our software’s RAM cache to satisfy ALL of the read I/O traffic in a real, live environment as it did in this lab test, but even if you satisfied only 25% of the reads from RAM in this manner, it certainly wouldn’t hurt performance!

If you want to see this for yourself on one of your computers, download the ATTO Disk Benchmark tool from the web, if you haven’t already, and as mentioned before, run it to get a benchmark for your machine. Then download and install a free trial copy of Diskeeper for physical clients or servers, or V-locity for virtual machines, from www.condusiv.com/try and run the ATTO Disk Benchmark tool several times. It will probably take a few runs of the test, but you should easily see the point at which the telemetry in Condusiv’s software identifies the correct data to satisfy from the RAM cache, because the read transfer rates will increase dramatically. They are no longer confined to the speed of your disk storage, but instead happen at the speed of RAM. Much faster, even if that disk storage IS an NVMe-connected SSD. And yes, if you’re wondering, this does work with SAN storage and all levels of RAID too!

NOTE: Before testing, make sure you have enough “unused” RAM to cache with. A minimum of 4 GB to 6 GB of Available Physical Memory is perfect.

Whether you have spinning hard drives or SSDs in your storage array, the boost in read data transfer rates can make a real difference. Whatever storage is serving YOUR Windows computers, it just doesn’t make sense to allow the Windows operating system to continue transferring data at a slower speed than it should. Now, with easy-to-install, “Set It and Forget It®” software from Condusiv Technologies, you can be sure that you’re getting all of the speed and performance you paid for when you purchased your equipment, through larger, more sequential storage I/O and the benefit of intelligent RAM caching.

If you’re still not sure, run the tests for yourself and see.

Size DOES matter!

Thinking Outside the Box - How to Dramatically Improve SQL Performance, Part 1

by Howard Butler 3. April 2019 04:10

If you are reading this article, then most likely you are about to evaluate V-locity® or Diskeeper® on a SQL Server (or already have our software installed on a few servers) and have some questions about why it is a best practice recommendation to place a memory limit on SQL Servers in order to get the best performance from that server once you’ve installed one of our solutions.

To give our products a fair evaluation, there are certain best practices we recommend you follow.  Now, while it is true that most servers already have enough memory and need no adjustments or additions, a select group of high I/O, high performance, or high demand servers may need a little extra care to run at peak performance.

This article is specifically focused on those servers and the best-practice recommendations below for available memory. They are precisely targeted to those “work-horse” servers.  So, rest assured you don’t need to worry about adding tons of memory to your environment for all your other servers.

One best practice we won’t dive into here, which will be covered in a separate article, is the idea of deploying our software solutions to other servers that share the workload of the SQL Server, such as App Servers or Web Servers that the data flows through.  However, in this article we will shine the spotlight on best practices for SQL Server memory limits.

We’ve sold over 100 million licenses in over 30 years of providing Condusiv® Technologies’ patented software.  As a result, we take a longer-term and more global view of improving performance, especially with the IntelliMemory® caching component that is part of V-locity and Diskeeper.  We care about maximizing overall performance, knowing that it will ultimately improve application performance.  We have a significant number of different technologies that look for I/Os we can eliminate from the stream to the actual storage infrastructure.  Some of them look for inefficiencies caused at the file system level.  Others take a broader look at the disk level to optimize I/O that wouldn’t normally be visible as performance-robbing.  We use an analytical approach to look for the I/O reduction that gives the most bang for the buck.  This has evolved over the years as technology changes.  What hasn’t changed is our global and long-term view of actual application usage of the storage subsystem and maximizing performance, especially in ways that are not obvious.

Our software solutions eliminate I/Os to the storage subsystem that the database engine is not directly concerned with and as a result we can greatly improve the speed of I/Os sent to the storage infrastructure from the database engine.  Essentially, we dramatically lessen the number of competing I/Os that slow down the transaction log writes, updates, data bucket reads, etc.  If the I/Os that must go to storage anyway aren’t waiting for I/Os from other sources, they complete faster.  And, we do all of this with an exceptionally small amount of idle, free, unused resources, which would be hard pressed for anyone to even detect through our self-learning and dynamic nature of allocating and releasing resources depending on other system needs.

It’s common knowledge that SQL Server has specialized caches for the indexes, transaction logs, etc.  At a basic level the SQL Server cache does a good job, but it is also common knowledge that it’s not very efficient.  It uses up way too much system memory, is limited in scope of what it caches, and due to the incredible size of today’s data stores and indexes it is not possible to cache everything.  In fact, you’ve likely experienced that out of the box, SQL Server will grab onto practically all the available memory allocated to a system.

It is true that if SQL Server memory usage is left uncapped, there typically wouldn’t be enough memory for Condusiv’s software to create a cache with.  Hence, we recommend placing a maximum memory limit on SQL Server to leave enough memory for the IntelliMemory cache to help offload more of the I/O traffic.  For best results, you can easily cap the amount of memory that SQL Server consumes for its own form of caching or buffering.  At the end of this article I have included a link to a Microsoft document on how to set Max Server Memory for SQL, as well as a short video to walk you through the steps.

A general rule of thumb for busy SQL database servers would be to limit SQL memory usage to keep at least 16 GB of memory free.  This would allow enough room for the IntelliMemory cache to grow and really make that machine’s performance 'fly' in most cases.  If you can’t spare 16 GB, leave 8 GB.  If you can’t afford 8 GB, leave 4 GB free.  Even that is enough to make a difference.  If you are not comfortable with reducing the SQL Server memory usage, then at least place a maximum value of what it typically uses and add 4-16 GB of additional memory to the system.  
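If you prefer to script the cap rather than set it through SQL Server Management Studio, a sketch along the following lines can apply it. This assumes Python with the pyodbc package and a login with sysadmin rights; the server name, ODBC driver and 16 GB headroom figure are placeholders to adapt, and the Microsoft document linked at the end of this article remains the reference for setting Max Server Memory.

import pyodbc

HEADROOM_GB = 16   # memory to leave free for the OS and the IntelliMemory cache (adjust to 4-16 GB)
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=YOUR_SQL_SERVER;Trusted_Connection=yes;")   # placeholder connection details

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    cur = conn.cursor()
    # Total physical memory visible to SQL Server, in MB
    total_mb = cur.execute(
        "SELECT total_physical_memory_kb / 1024 FROM sys.dm_os_sys_memory"
    ).fetchone()[0]
    cap_mb = max(total_mb - HEADROOM_GB * 1024, 4096)   # never starve SQL Server itself

    # Enable advanced options, then cap 'max server memory'
    cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
    cur.execute(f"EXEC sp_configure 'max server memory (MB)', {cap_mb}; RECONFIGURE;")
    print(f"Capped SQL Server memory at {cap_mb} MB, leaving roughly {HEADROOM_GB} GB free.")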

We have intentionally designed our software so that it can’t compete for system resources with anything else that is running.  This means our software should never trigger a memory starvation situation.  IntelliMemory will only use some of the free or idle memory that isn’t being used by anything else, and will dynamically scale our cache up or down, handing memory back to Windows if other processes or applications need it.

Think of our IntelliMemory caching strategy as complementary to what SQL Server caching does, but on a much broader scale.  IntelliMemory caching is designed to eliminate the type of storage I/O traffic that tends to slow the storage down the most.  While that tends to be the smaller, more random read I/O traffic, there are often many repetitive I/Os, intermixed with larger I/Os, which wreak havoc and cause storage bandwidth issues.  Also keep in mind that I/Os satisfied from memory are 10-15 times faster than going to flash.

So, what’s the secret sauce?  We use a very lightweight storage filter driver to gather telemetry data.  This allows the software to learn useful things like:

- What are the main applications in use on a machine?
- What type of files are being accessed and what type of storage I/O streams are being generated?
- And, at what times of the day, the week, the month, the quarter? 

IntelliMemory is aware of the 'hot blocks' of data that need to be in the memory cache, and more importantly, when they need to be there.  Since we only load data we know you’ll reference in our cache, IntelliMemory is far more efficient in terms of memory usage versus I/O performance gains.  We can also use that telemetry data to figure out how best to size the storage I/O packets to give the main application the best performance.  If the way you use that machine changes over time, we automatically adapt to those changes, without you having to reconfigure or 'tweak' any settings.


Stay tuned for the next in the series: Thinking Outside the Box Part 2 – Test vs. Real World Workload Evaluation.

 

Main takeaways:

- Most of the servers in your environment already have enough free and available memory and will need no adjustments of any kind.
- Limit SQL memory so that there is a minimum of 8 GB free for any server with more than 40 GB of memory, and a minimum of 6 GB free for any server with 32 GB of memory.  If you have the room, leave 16 GB or more memory free for IntelliMemory to use for caching.
- Another best practice is to deploy our software to all Windows servers that interact with the SQL Server.  More on this in a future article.

 

 

Microsoft Document – Server Memory Server Configuration Options

https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-2017

 

Short video – Best Practices for Available Memory for V-locity or Diskeeper

https://youtu.be/vwi7BRE58Io

At around the 3:00 minute mark, capping SQL Memory is demonstrated.

SQL Server Database Performance

by Dawn Richcreek 15. March 2019 07:42

How do I get the most performance from my SQL Server?

SQL Server applications are typically the most I/O-intensive applications in any enterprise and are thus prone to performance degradation. Anything a database administrator can do to reduce the amount of I/O necessary to complete a task will improve the performance of the application.

Excess and noisy I/O has typically been found to be the root cause of numerous SQL performance problems, such as:

- SQL query timeouts
- SQL crashes
- SQL latency
- Slow data transfers
- Slow or sluggish SQL-based applications
- Reports taking too long
- Back office batch jobs bleeding over into production hours
- User complaints; users having to wait for data

 

Some of the most common actions DBAs often resort to are:

- Tuning queries to minimize the amount of data returned
- Adding extra spindles or flash storage for performance
- Adding RAM
- Performing index maintenance to improve read and/or write performance

 

Most performance degradation is a software problem that can be solved by software

None of these actions will prevent the hardware bottlenecks that occur because 30-40% of performance is being robbed by small, fractured, random I/O generated by the Windows operating system (that is, any Windows operating system, including Windows 10 or Windows Server 2019).

 

Two Server I/O Inefficiencies



As the storage layer has been logically separated from the compute layer and more systems are virtualized, Windows handles I/O logically rather than physically. This means it breaks reads and writes down to their lowest common denominator, creating tiny, fractured, random I/O and a “noisy” environment that becomes even worse in a virtual environment due to the “I/O blender effect”.



This is what a healthy I/O stream SHOULD look like in order to get optimum performance from your hardware infrastructure: a nice, healthy relationship between I/O and data, with clean contiguous writes and reads on every I/O operation.

Return Optimum Performance – Solve the Root Cause, Instantly

 

Condusiv®’s patented solutions address root cause performance issues at the point of origin where I/O is created, ensuring large, clean contiguous writes from Windows to eliminate the “death by a thousand cuts” scenario of many small writes and reads that chew up performance. Condusiv solutions electrify the performance of Windows servers even further with the addition of DRAM caching – using idle, unused DRAM to serve hot reads without creating memory contention or resource starvation. Condusiv’s “Set It and Forget It” software optimizes both writes and reads to solve your toughest application performance challenges. Video: Condusiv I/O Reduction Software Overview.

 

Lab Test Results with V-locity I/O reduction software installed


 

Best Practice Tips to Boost SQL Performance with V-locity

 

 

By following the best practices outlined here, users can achieve a 2X or faster boost in MS-SQL performance with Condusiv’s V-locity® I/O reduction software.

- Provision an additional 4-16 GB of memory to the SQL Server if you have additional memory to give.

- Cap MS-SQL memory usage, leaving the additional memory for the OS and our software. Note: Condusiv software will leverage whatever is unused by the OS.

- If there is no additional memory to add, cap SQL memory usage leaving 8 GB for the OS and our software. Note: this may not achieve 2X gains, but it will likely boost performance 30-50%, as SQL is highly inefficient with its memory usage.

- Download and install the software – condusiv.com/try. No SQL code changes needed. No reboot required. Note: allow 24 hours for the algorithms to adjust.

- After a few days in production, pull up the dashboard and look for a 50% reduction in I/O traffic to storage. Note: if offloading less than 50% of I/O traffic, consider adding more memory for the software to leverage and watch the benefit rise on read-heavy apps.
