
Do you have a large production VHS? Don’t fall into the allocation predicament!

June 28, 2016 / 0 comments / in General Support  / by Jacob Allred

If your production database is larger than 500 GB, I’d like to take a moment to talk about a potential ticking time bomb that might be present in your production environment, so that you have the tools necessary to disarm it before it detonates.

Extreme file fragmentation + Small disk allocation size = Disaster

The VHS uses ESE as its database technology, the same database engine used in Microsoft’s Exchange Server. ESE allows for massive database sizes, but NTFS has some gotchas that can cause issues when a single file grows very large and becomes heavily fragmented.

First, let’s get a primer on how files are stored within an NTFS volume. Go read these two articles from Microsoft:

https://blogs.technet.microsoft.com/askcore/2009/10/16/the-four-stages-of-ntfs-file-growth/

https://blogs.technet.microsoft.com/askcore/2015/03/12/the-four-stages-of-ntfs-file-growth-part-2/

Now that we’re acquainted with NTFS, let’s discuss this limitation and how it affects large database files such as the VHS.

  1. NTFS has a limitation on how large the “attribute list” can grow.
  2. File fragmentation accelerates the growth of the attribute list.
  3. Larger files require more attribute list entries.
  4. Once the attribute list runs out of room to grow, the file cannot be grown any further.
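The interplay between fragmentation and allocation size is easy to see with some back-of-envelope arithmetic. In the worst case, every extent maps a single cluster, so a file’s fragment count is capped at its size divided by the cluster size. The Python sketch below illustrates this with a 500 GB file (real files fragment far less than this worst case; the ~1.5 million figure is the approximate danger point discussed later in this post):

```python
# Worst case, every extent is a single cluster, so a file can have at
# most (file size / cluster size) fragments. Larger allocation units
# therefore cap how fragmented any one file can become.

DANGER_FRAGMENTS = 1_500_000  # approximate failure point (see below)

def max_fragments(file_size_bytes: int, cluster_size_bytes: int) -> int:
    """Worst-case number of extents for a file on an NTFS volume."""
    return -(-file_size_bytes // cluster_size_bytes)  # ceiling division

file_size = 500 * 1024**3  # a 500 GB history file

for cluster_kb in (4, 16, 64):
    frags = max_fragments(file_size, cluster_kb * 1024)
    ratio = frags / DANGER_FRAGMENTS
    print(f"{cluster_kb:>2} KB clusters: up to {frags:,} fragments "
          f"(~{ratio:.1f}x the danger point)")
```

Moving from 4 KB to 64 KB clusters cuts the worst-case fragment count by a factor of 16, which is why allocation unit size shows up in both the problem and the fix.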

What does this failure look like once it occurs?

  1. The VHS service will crash and report the following error:
    1. Unknown software exception 40000015
  2. Restarting the VHS service will not resolve the issue; the service will start but will be unable to grow the VHS history file and will return the following error in the log file:
    1. EXCEPTION(S): JetUpdate(nobookmark): JET_errDiskIO, Disk IO error (class CEseError) Handled: VhsHistoryReceiver.cpp

As of right now, there isn’t an SVCMON point that can give you insight into this problem. You may have to use Windows utilities to determine the root cause of the issue (such as CONTIG.exe to determine the number of fragments). Run contig with the “-a” switch to analyze the fragmentation count of the file you are investigating. If contig reports ~1.5 million fragments, then you are dangerously close to the failure condition.
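If you fold the contig result into your own monitoring, a small headroom check like the following can flag the condition before it becomes fatal. This is a Python sketch: the ~1.5 million failure point comes from the observation above, but the 80% warning margin is an assumption, and you would supply the fragment count reported by contig -a yourself:

```python
# Sketch of a headroom check for an NTFS fragment count obtained from
# "contig -a <file>". The 80% warning margin is an illustrative choice.

FAILURE_FRAGMENTS = 1_500_000  # approximate observed failure point
WARN_RATIO = 0.80              # warn once 80% of the way to failure

def fragment_headroom(fragments: int) -> str:
    """Classify a fragment count relative to the failure point."""
    if fragments >= FAILURE_FRAGMENTS:
        return "CRITICAL: at or past the observed failure point"
    if fragments >= WARN_RATIO * FAILURE_FRAGMENTS:
        return "WARNING: dangerously close to the failure point"
    return "OK"

print(fragment_headroom(250_000))    # OK
print(fragment_headroom(1_300_000))  # WARNING: ...
```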

How do we avoid hitting this issue? There are a couple options:

  1. Maintain a smaller VHS. Use thinning mechanisms, either on point update or as part of a thinning rule.
  2. Reformat your NTFS volume with a larger allocation unit (cluster) size so that each extent maps more data.

If you choose option 2 be aware that you will need to:

  1. Copy all files off of the affected drive
  2. Reformat the affected volume to increase the allocation unit (cluster) size
  3. Copy all files back onto the newly formatted volume
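Before you reformat, it’s worth confirming that everything copied off the volume intact, since the reformat destroys the originals. Here is a minimal Python sketch that compares source and destination trees by checksum (the directory layout and file names are up to you; this assumes you copied the whole tree verbatim):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large history files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(src_dir: Path, dst_dir: Path) -> list[str]:
    """Return relative paths whose copies are missing or don't match."""
    mismatches = []
    for src in src_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(src_dir)
            dst = dst_dir / rel
            if not dst.is_file() or file_digest(src) != file_digest(dst):
                mismatches.append(str(rel))
    return mismatches
```

Run it against the copy you made in step 1, and only proceed to the reformat in step 2 once it returns an empty list.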

Copying files from one volume to another also writes them contiguously, so any fragmentation the file is experiencing will be addressed as part of that operation.

We hope this helps you be aware of the issues that can arise from large databases on the NTFS file system. If you have any questions about this issue or need assistance investigating, please contact us at support@cygnet.com.

*updated to reflect additional info requested by Mike McElveen
