Monthly Archives: May 2013

Sizing The Windows Page File

A common question is how big should the Windows page file be?  In my opinion, as small as possible.

The Windows page file is analogous to the Linux swap file.  It's a file on the hard disk that serves as virtual memory.  If all the running applications won't fit inside physical RAM, Windows swaps lesser-used segments of memory out to the page file to make room for other applications.  But it's a relic from the days when computers had nowhere near the amount of RAM they have today.

In practice, you never want to use the page file.  Why?  Because paging operations (swapping memory segments between physical RAM and the page file) are expensive.  Very expensive!  Disk I/O is orders of magnitude slower than physical RAM, and when Windows is doing a lot of paging, system performance takes a big hit.  It can slow the system to the point that the screen cursor needs minutes to respond when the mouse is moved.  And then the disk activity light and the power light are indistinguishable from each other — both shining with a steady glow.  Usually the only way to recover when this happens is a hard reboot of the system.

The problem is that the default for automatically allocating the page file hasn't changed since Windows NT Server 3.5.  It creates a page file that is 1.5 times the size of physical RAM.  A good value when servers had 512 MB of RAM, but extremely wasteful on a server with 16 GB of RAM.  That's a 24 GB page file!  And did I mention that the default location for the page file is the C drive?  With everything else that is vying for space on the C drive, the last thing you need is a gigantic file you never want to use.

You can have no page file, but this isn't a good idea.  There are a few rarely used things that Windows wants to keep in the page file, but these are never that big.  A good value for the page file is between 2 – 4 GB.  Resist the temptation to make it any bigger than 4 GB, regardless of what "best practices" say.  Remember, you never want to use the page file.  And you only ever need one page file.  Having multiple page files is even more wasteful than having an extremely large page file.  Don't do it!

Use a custom size for the page file and set the initial and maximum size to the same value.  If these values are not the same, then the page file will become fragmented as Windows shrinks and expands the page file.  You want to keep disk I/O operations for the page file to an absolute minimum, so let Windows create the page file once and keep it from constantly resizing it.
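On recent versions of Windows this can also be scripted.  As a sketch (assuming the page file lives in the default C:\pagefile.sys location; a reboot is required for the change to take effect), the following commands, run from an elevated command prompt, disable automatic management and set a fixed 4 GB page file:

```bat
rem Turn off automatic page file management
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

rem Set initial and maximum size to the same value (4096 MB) to prevent fragmentation
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096
```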

Posted in Windows.

DNS: Understanding The SOA Record

In the hosting industry, the Domain Name System (DNS) is one of the most critical pieces, right behind websites themselves.  Without DNS, that website you've worked so hard on would be completely invisible.  (Although it's possible to access some sites using only the IP address of their web server, this is not the case for virtual websites, which require that their hostname be included in the HTTP request header.  Without a working DNS record, virtual websites are completely inaccessible.)  But I've found that DNS is something that is not well understood by many website operators.  The basics of creating A records (which translate a hostname to an IP address) are simple enough, but when it comes to understanding how changes are propagated in DNS, this is often something of a mystery.

There is a widely held belief that any change made to the DNS zone file of a domain is instantly seen throughout the Internet.  Yet nothing could be further from the truth.  When advising that changes be made to a zone file to fix a problem, I routinely add the following caveat:

Please allow up to 24 hours for any change to completely propagate throughout the world-wide DNS system.

Changes to a zone file are almost never instantaneous, regardless of how desperate you are for them to be.  Any change requires time before it will be seen everywhere on the Internet.  But what many don't understand is that how fast or slow these updates are propagated is actually under their direct control through the SOA record.

Let me be completely clear on this one point.  Although you have control over the speed at which updates are propagated throughout the Internet, they will never, ever, be instantaneous!  There will always be a delay.  Your only control is over how short or long this delay will be.

SOA: Start Of Authority

The SOA record is perhaps the least understood record in the entire zone file.  But it controls the speed at which any update is propagated throughout the Internet.  The SOA record serves to:

  • Identify the DNS server that is authoritative for all information within the domain.
  • List the email address of the person in charge of the domain.
  • Control how often secondary servers check for changes to the zone file.
  • Control how long secondary servers keep the zone file active when the primary server cannot be contacted.
  • Control how long a negative response is cached by a DNS resolver (but for some DNS servers, this is also how long a DNS resolver should cache any response).

Now if you control all of the authoritative DNS servers for a domain (that is, the DNS servers that actually host the zone files and can answer queries for the domain, as opposed to having to ask another DNS server), then with the exception of how long negative responses should be cached, these settings may not seem as important, since you can force the secondary servers to update whenever needed.  But if you are using third-party name servers which you do not control as your secondary name servers (such as Peer 1's SuperDNS servers), then these settings are vitally important to how fast any changes are propagated.  So let's go over each of these settings.

I will be using the official names for each of these fields as listed in RFC 1035: Domain Names — Implementation and Specification.
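To keep the field descriptions below concrete, here is what a complete SOA record might look like in BIND zone-file syntax (the server name, email address, and timer values are purely illustrative):

```
$TTL 3600   ; default TTL for records without an explicit TTL (BIND 8.2 and higher; see below)
example.com.  IN  SOA  ns1.example.com. hostmaster.example.com. (
        2013051101  ; SERIAL  (YYYYMMDDnn)
        3600        ; REFRESH (1 hour)
        900         ; RETRY   (15 minutes)
        1209600     ; EXPIRE  (2 weeks)
        3600 )      ; MINIMUM (negative caching TTL, 1 hour)
```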

MNAME:  Primary Name Server

Fully-qualified domain name of the primary or master name server for the zone file.  Within the structure of DNS, there can only be one server that holds the master, editable zone file.  (Yes, there are exceptions, but I won't cover them here.)  All secondary name servers create their zone files by transferring the contents from the primary name server.  Changes to the domain's resource records are made to the primary name server's zone file and are then propagated to the secondary name servers when they check for updates.

The domain name of the primary name server must end with a period.

RNAME:  Responsible Person

Email address of the person responsible for the domain's zone file.  Often it will be an alias or group address rather than a particular individual.  It uses a special format where the "@" character is replaced with a "." (period) character and the email address ends with a period.  So, for example, the address hostmaster@example.com would become hostmaster.example.com. (note that the ending period is part of the address).

Never use an email address which has a period before the "@" character (such as john.smith@example.com), since DNS will automatically interpret the first period as the "@" character (john.smith@example.com would become john.smith.example.com., which DNS reads as john@smith.example.com).


SERIAL:  Serial Number

Serial number of the zone file that is incremented each time a change is made.  The secondary name servers compare the serial number returned by the primary name server with the serial number in their copy of the zone file to determine if they should update their zone file.  If the serial number from the primary name server is greater than their serial number, they will do a zone update transfer.  Otherwise, no action is taken.

If you make a change to the zone file on the primary name server and forget to increment the serial number, the change will not be propagated to the secondary name servers even if you attempt to force a zone update transfer.  The primary and secondary name servers will remain out of sync until the serial number is incremented on the primary name server.  Unless you are manually editing the zone files (something that is not uncommon when using BIND), most DNS servers or frontend DNS applications will increment the serial number for you.  But if you find that updates are not being propagated to the secondary name servers, the serial number is the first thing you should check.
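One quick way to check is to query the SOA record on each name server directly and compare the serial numbers they return (the name server names here are hypothetical placeholders):

```
rem The serial is the third field of the SOA record returned by +short
dig @ns1.example.com example.com SOA +short
dig @ns2.example.com example.com SOA +short
```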

In the early days of DNS, the serial number was just that — a number that was incremented by 1 each time the zone file was changed.  So that one could have a better idea of when the zone file was actually changed, it's recommended (but not required) that you use the format YYYYMMDDnn, where YYYY is the year, MM is the month, DD is the day, and nn is the revision number (in case the zone file is changed more than once in a single day).
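If you edit zone files by hand (as is common with BIND), a date-based serial for today is easy to generate from a Unix-like shell.  A minimal sketch, assuming this is the first revision of the day:

```shell
# Build a YYYYMMDDnn serial: today's date followed by revision number 01
serial="$(date +%Y%m%d)01"
echo "$serial"
```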

Never use a decimal in the serial number, such as 20130511.01, even if it is allowed by your DNS server.  The serial number is an unsigned 32-bit number, so using a decimal in the serial number will cause it to be converted to something unexpected.

REFRESH:  Refresh Interval

Time in seconds that a secondary name server should wait between zone file update checks.  The value should not be so short that the primary name server is overwhelmed by update checks, nor so long that propagation of changes to the secondary name servers is unduly delayed.  If you control the secondary name servers and the zone file doesn't change that often, then you might want to set this to as long as a day (86400 seconds), especially if you can force an update on the secondary name servers if needed.  But if your secondary name servers are not under your control, then you'll probably want to set this to somewhere between 30 minutes (1800 seconds) and 2 hours (7200 seconds) to ensure any changes you make are propagated in a timely fashion.

Even if you configure your primary name server to send NOTIFY messages (which I will cover in a future article) to the secondary name servers whenever a change is made, you should never completely depend on this to ensure timely propagation of the changes, especially when using third-party secondary name servers. The decision to honor a NOTIFY message is entirely up to the secondary name server and some DNS servers do not support NOTIFY.

RETRY:  Retry Interval

Time in seconds that a secondary name server should wait before trying to contact the primary name server again after a failed attempt to check for a zone file update.  There are all kinds of reasons why a zone file update check could fail, and not all of them mean that there is something wrong with the primary name server.  Perhaps it was too busy handling other requests just then.  The Retry Interval simply tells the secondary name server to wait for a period of time before trying again.  A good retry value would be between 10 minutes (600 seconds) and 1 hour (3600 seconds), depending on the length of the Refresh Interval.

The retry interval should always be shorter than the refresh interval.  But don't make this value too short.  When in doubt, use a 15 minute (900 second) retry interval.

EXPIRE:  Expiry Interval

Time in seconds that a secondary name server will treat its zone file as valid when the primary name server cannot be contacted.  If your primary name server goes offline for some reason, you want the secondary name servers to keep answering DNS queries for your domain until you can get the primary back online.  Make this value too short and your domain will disappear from the Internet before you can bring the primary back online.  A good value would be something between 2 weeks (1209600 seconds) and 4 weeks (2419200 seconds).

If you stop using a domain and delete it from the configuration of the primary name server, remember to remove it from the secondary name servers as well.  This is especially important if you use third-party secondary name servers since they will continue to answer queries for the deleted domain — answers which could now be completely incorrect — until the expiry interval is reached.

MINIMUM:  Negative Caching Time To Live

This field requires special attention since how it's interpreted depends on the DNS server you are using.  There have been three possible meanings for the MINIMUM field:

  • Defines the minimum time in seconds that a resource record should be cached by any name server.  Though this was the original meaning of this field (and it still retains the name from this meaning), it was never actually used this way by most name servers.  This meaning is now officially deprecated.
  • Defines the default Time To Live (TTL) for all resource records that do not have an explicit TTL.  This only applies to the zone file on the primary name server since a zone transfer to the secondary server adds the explicit TTL to the resource record if it is missing.  Versions of BIND prior to 8.2 use the MINIMUM field as the default TTL for all resource records, as do all versions of Windows DNS Server.
  • Defines the time in seconds that any name server or resolver should cache a negative response.  This is now the official meaning of this field as set by RFC 2308.

Unlike all the other SOA fields, MINIMUM affects every name server or resolver that queries your domain.  If your DNS server is compliant with RFC 2308, then this field only applies to how long a negative response (that is, for a query where no resource record is found) is cached.  But if your DNS server uses this as the default TTL for resource records without an explicit TTL, then it controls how long any response could be cached by a name server.

If you make this too long, then name servers and resolvers will keep using their cached result even after all the secondary name servers have updated their zone files.  And there is no method available for you to force these name servers and resolvers to flush their cache.  Again, if your DNS server is compliant with RFC 2308, it only applies to negative responses.  But if not, then all resource records without an explicit TTL will use this value as the default TTL.  If you were to set this to 1 week (604800 seconds), then it could take up to a week for any change to finally be seen everywhere on the Internet.

$TTL:  Default Time To Live

This was added in RFC 2308 to define the default TTL that should be used for any resource record that does not have an explicit TTL.  But as pointed out earlier, not all DNS servers support it.  BIND 8.2 and higher use $TTL to define the default TTL in their zone files, but Windows DNS Server does not, relying on the SOA MINIMUM field instead.  So check your DNS server's manual to find out how it sets the default TTL.

Final Thoughts

There is no hard and fast rule for setting the refresh, retry, and TTL values.  For domains where changes are rarely made, longer values are usually preferred.  But if you are planning to make changes, then reducing these values beforehand, especially the default TTL, can go a long way toward ensuring your changes get propagated in a timely fashion.  But you must change these values at least as far in advance as the default TTL.  If, for example, the current default TTL is set to one week, you'll need to change the default TTL at least a week before the zone file is changed to ensure that every DNS server and resolver is using the new TTL.  Otherwise you could find that scattered sections of the Internet don't see the change until the older, cached record finally expires.
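When planning such a change, you can check the TTL currently being handed out for a record with dig (the hostname here is a hypothetical placeholder; the second column of the answer section is the remaining TTL in seconds):

```
dig +noall +answer www.example.com A
```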

Posted in DNS.

Disappearing VSS System Writer and ASP.NET

When a server backup program utilizes Volume Shadow Copy Service (VSS), sometimes you have to fix problems when one or more of the writers fails or disappears.  This can usually be repaired by running a script that re-registers the various DLLs used by VSS.  But we ran into a problem on one server that defied this traditional fix.  Even though the script was executed and the server rebooted several times, the System Writer would disappear every time the backup was attempted.  And it always failed in the same place — backing up the System State.

Since the tried and true fix of re-registering the DLLs wasn't working, we needed to look elsewhere.  And the problem turned out to be something we hadn't seen before: too many files in the Temporary ASP.NET Files directories.

The System State

According to Microsoft TechNet, the System State comprises the following:

  • Boot files, including the system files, and all files protected by Windows File Protection (WFP).
  • Active Directory (on a domain controller only).
  • Sysvol (on a domain controller only).
  • Certificate Services (on certification authority only).
  • Cluster database (on a cluster node only).
  • The registry.
  • Performance counter configuration information.
  • Component Services Class registration database.

System files include everything under the C:\Windows folder, whether or not it's actually needed to restore the System State.  (Since the majority of the files found in the C:\Windows folder are required for the operation of the system, it's much safer to simply back up everything rather than miss a critical file.)  To create a System State backup, the VSS System Writer must first enumerate all the files and folders that make up the system files, and this is where it ran afoul of the ASP.NET temporary files.

ASP.NET Temporary Files

One of the principal features of Microsoft .NET is that applications can be run on multiple operating systems without having to rebuild the program.  (The open source Mono Project uses this very feature to bring the .NET Framework to Linux, Apple's OS X, and many other non-Microsoft platforms.)  Programs are initially compiled into a machine-independent Intermediate Language (IL), and it's in this form that they are installed on the target system.  Then when the program is executed, its IL code is compiled into machine code by an operating system-specific compiler.

Having to compile the IL code is an expensive operation, so to speed up subsequent executions of the program, .NET saves the compiled machine code in a temporary directory.  When the program is run again, .NET checks to see if there is a cached copy of the compiled machine code.  If there is, it skips compiling the IL code and runs the cached machine code instead.  For ASP.NET programs, the compiled machine code is cached in a folder under C:\Windows\Microsoft.NET.

ASP.NET is different from other types of .NET programs in that each ASP.NET page is considered to be a separate program.  This means that lots of compiled machine code gets cached.  On a server that hosts hundreds of ASP.NET websites, there can be thousands, or even tens of thousands, of cached machine code files.
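To get a sense of how many cached files the System Writer must enumerate, you can count them from the command line (this is the default 64-bit .NET 2.0 location; repeat for the other Framework directories):

```bat
rem List every file recursively and count the lines of output
dir /s /b "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files" | find /c /v ""
```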

The System Writer Disappears

Because the default location of the ASP.NET temporary directories is under C:\Windows\Microsoft.NET, these files are considered to be system files, and therefore part of the System State.  So when VSS is used to create a backup of the System State, the System Writer enumerates all the system files, including the ASP.NET temporary files.  But there is a limit to the number of files that the System Writer can deal with.  When this limit is exceeded, the System Writer aborts with an error, causing the System State backup to fail.

Once the System Writer has aborted, it disappears from the list of VSS writers until the service that controls it — which is the Cryptographic Services — is restarted.  But even when the Cryptographic Services is restarted, the System Writer will simply abort again the next time it tries to enumerate all the ASP.NET temporary files.

Relocating the ASP.NET Temporary Directories

None of the ASP.NET temporary files are required to restore the System State from a backup.  If they are deleted, they are recreated by compiling the IL code the next time the ASP.NET application is executed.  But by virtue of their location under the C:\Windows\Microsoft.NET folder, they are considered to be part of the System State.  So to get the ASP.NET temporary files out of the System State, we need to move these temporary directories to a different location.

Doing this requires that you perform a number of steps, which I describe below.  The names and locations for the new folders are just my personal preference.  Change them as desired to meet the needs of your system so long as they are not located under the C:\Windows folder.

Create New ASP.NET Temporary Directories

Despite the various versions of .NET Framework, only three versions of .NET actually have ASP.NET temporary directories.  These are .NET Framework 1.1 (which is not included with Windows Server 2008 and higher and can largely be ignored), .NET Framework 2.0 (which includes .NET 3.0 and 3.5, as these are just extensions for .NET 2.0), and .NET Framework 4.0.  (.NET Framework 4.5 has just been released.  When installed, it replaces .NET Framework 4.0 if it's already installed.  I didn't include it in the example here, but relocating its ASP.NET temporary directories will be similar to .NET 4.0.)  And except for .NET Framework 1.1 (which is 32-bit only), each of these versions has a 32-bit and 64-bit temporary directory.

The first thing we need to do is to create new directories for the ASP.NET temporary files.  These commands will create new directories for the ASP.NET 2.0 and 4.0 temporary files on the D drive.

md "D:\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files"
md "D:\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files"
md "D:\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files"
md "D:\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files"

Set Folder Permissions

Next we need to set the folder permissions to match the existing default directories.  The easiest way to do this is with the ICACLS command.  The first command disables permission inheritance and replaces the inherited permissions with an explicit copy.  Thus changes to the permissions of the drive (root directory) will not affect the new temporary directories.  The remaining commands grant the required permissions.

icacls "D:\Microsoft.NET" /inheritance:d
icacls "D:\Microsoft.NET" /grant:r "BUILTIN\Administrators:(OI)(CI)(F)"
icacls "D:\Microsoft.NET" /grant:r "NT AUTHORITY\SYSTEM:(OI)(CI)(F)"
icacls "D:\Microsoft.NET" /grant:r "CREATOR OWNER:(OI)(CI)(IO)(F)"
icacls "D:\Microsoft.NET" /grant:r "BUILTIN\IIS_IUSRS:(OI)(CI)(M,WDAC,DC)"
icacls "D:\Microsoft.NET" /grant:r "BUILTIN\Users:(OI)(CI)(RX)"
icacls "D:\Microsoft.NET" /grant:r "NT SERVICE\TrustedInstaller:(CI)(F)"
icacls "D:\Microsoft.NET" /grant:r "NT SERVICE\WMSvc:(OI)(CI)(M,DC)"

Add Attribute tempDirectory To The compilation Tag In web.config

For ASP.NET to use temporary directories anywhere other than the default location, the directory must be specified using the tempDirectory attribute of the <compilation> tag in the system web.config file.  There is one file for each version of the .NET Framework.  (Again, these are the same versions that have ASP.NET temporary directories, so there is no web.config file for .NET 3.0 and 3.5.)  The tempDirectory attribute specifies the directory where the compiled machine code will be cached.  The web.config file is an XML file that can be edited with Notepad.

For ASP.NET 2.0 32-bit, we would edit the web.config file and locate this tag:

<compilation>
and change it as follows to use the new temporary directory:

<compilation tempDirectory="D:\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files">

The web.config file is located in the CONFIG folder for the .NET Framework version.  In our example, we will need to edit the following web.config files.

.NET Framework 2.0 – 32-Bit
C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\web.config

.NET Framework 2.0 – 64-Bit
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG\web.config

.NET Framework 4.0 – 32-Bit
C:\Windows\Microsoft.NET\Framework\v4.0.30319\CONFIG\web.config

.NET Framework 4.0 – 64-Bit
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\CONFIG\web.config

Restart IIS

For the changes made to the web.config files to take effect, IIS has to be restarted.  This is easily done from the command line.

iisreset
Delete Files In The Old Temporary Directories

Now we need to delete the files in the old ASP.NET temporary directories so they are no longer part of the System State.  These files are in a subfolder named root, so we'll delete this folder along with all its files and subfolders.  Again, this is easily done from the command line.

rmdir /s /q "C:\Windows\Microsoft.Net\Framework\v2.0.50727\Temporary ASP.NET Files\root"
rmdir /s /q "C:\Windows\Microsoft.Net\Framework64\v2.0.50727\Temporary ASP.NET Files\root"
rmdir /s /q "C:\Windows\Microsoft.Net\Framework\v4.0.30319\Temporary ASP.NET Files\root"
rmdir /s /q "C:\Windows\Microsoft.Net\Framework64\v4.0.30319\Temporary ASP.NET Files\root"

Restart The Cryptographic Service

To get the VSS System Writer back, we must restart the service that controls it, which, as previously mentioned, is the Cryptographic Services (cryptsvc).

net stop cryptsvc
net start cryptsvc

Verifying Everything Is Working

If you did everything correctly, you should see files created in the new ASP.NET temporary directories the next time the website is accessed.  And to verify the System Writer has returned, run the vssadmin command to list the writers.

vssadmin list writers

Now that the ASP.NET temporary files are no longer considered to be system files, the System State backup should complete without errors.

Posted in Windows.
All information in this blog is provided "AS IS" with no warranties and confers no rights.
The opinions expressed in this blog are mine alone and do not represent those of my employer.