Sunday, April 19, 2015

Whitelist Domains for Exchange 2010 Content Filter

Our standard antispam solution for clients is Symantec Mail Security. The main benefit of this software is a very low number of false positives. However, we've been having issues at a few clients where more spam gets through than they'd like. For these clients, we've added the built-in Exchange 2010 content filtering as another layer.

With the Exchange 2010 content filter, we've run into issues where some domains are not able to send PDF attachments. It seems that most of these senders are hosting their domains on Google mail, so you can't really blame the content filter for being a bit overly sensitive.

To resolve this, we add the domain to the whitelist for the content filter with the following command:
Set-ContentFilterConfig -BypassedSenderDomains "example.com","example.org"

When you use this command, it overwrites the existing list of domains. If this is a long list, rather than risk making a typo, you can use these few commands to add a new domain to the existing list:
$domains = (Get-ContentFilterConfig).BypassedSenderDomains
$domains += "example.com"
Set-ContentFilterConfig -BypassedSenderDomains $domains
To simplify this process and make it less likely that a typo wipes out your whitelist of domains, you can use the following script:
$newDom = Read-Host "Domain to add"
$domains = (Get-ContentFilterConfig).BypassedSenderDomains
$domains += $newDom
Set-ContentFilterConfig -BypassedSenderDomains $domains
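After either approach, it's worth confirming what the whitelist actually contains. This is just the read side of the same cmdlet used above:

```powershell
# List the domains currently bypassing the content filter
(Get-ContentFilterConfig).BypassedSenderDomains
```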

Friday, April 17, 2015

Critical Update for Windows Web Servers

Most of the security updates released by Microsoft fall into the "apply them soon" category. This week Microsoft released an update that falls into the "apply NOW" category!

There is a flaw in http.sys in Windows Server 2008 R2 and later that allows a malformed packet to crash your server and perhaps remotely execute code. Since the patch was released Tuesday, the details of the flaw have become widely known and are trivial to exploit. This means that anyone who can access your web server can crash it at will.

Two common scenarios I work with that are cause for concern:
  • Exchange servers. Exchange servers use the Windows web server (IIS) to provide services. This means that your Exchange servers are vulnerable.
  • Small Business Server. Organizations with SBS typically provide both remote access and Exchange web services. Both are delivered through IIS and are vulnerable to this flaw.
Best practice is to install the patch (which requires a restart) from here:
If you can't do that right away because of testing, you can disable kernel caching in IIS instead. That mitigates the flaw but also reduces performance, so it's a reasonable workaround in the short term.

To disable kernel caching in IIS:
  1. Open IIS Manager.
  2. In IIS Manager, select the server node and double-click Output Caching.
  3. On the Output Caching page, in the Actions pane, click Edit Feature Settings.
  4. In the Edit Output Cache Settings window, uncheck the Enable kernel cache check box and click OK.
  5. Close IIS Manager.
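If you'd rather script the same change across several servers, appcmd can flip the setting from a command line. This is a sketch assuming the default inetsrv path; test it on one server before rolling it out:

```powershell
# Disable kernel caching server-wide (equivalent to unchecking Enable kernel cache)
& "$env:windir\System32\inetsrv\appcmd.exe" set config /section:system.webServer/caching /enableKernelCache:false
```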

If you have a reverse proxy server in front of your web server, it may protect you from this flaw. However, you would need to test to be sure. This article provides a command line for the Curl utility to send the malformed packet:
You can download Curl here:

Saturday, April 11, 2015

Disk2VHD for Dynamic Disks

I was virtualizing an old server for a customer today and ran into an issue I've never hit before. The server has a C: drive for the operating system and a D: drive for data. As I've done before, we used Disk2VHD to perform the conversion.

After creating the virtual machine and starting it, the D: drive was showing as Dynamic and Offline. So, it appears that the D: drive was a dynamic disk rather than a basic disk. I suspect that at some point it was configured to use mirroring functionality in Windows which requires dynamic disks.

Ok, fair enough. How do we properly import this disk? According to several searches, I should be able to reactivate the disk. However, this didn't work in my case.

At this point, I'm a bit annoyed. An obvious solution is to do a simple file copy from the old D: drive in the original server, but I'm doing this conversion remotely and have already shut down the original server. I would need to abort the conversion for the weekend if I can't figure this out.

While I was searching for potential ways to repair the disk, I ran into a few articles about converting a dynamic disk to a basic disk without data loss. Normally, when you convert a dynamic disk to a basic disk in Disk Management, or by using diskpart, you lose all of the data. What wizardry is this that they're trying to sell me?

I'm not into low level disk stuff and likely never will be, but here is my overall understanding. The disk configuration for dynamic disks vs basic disks is pretty close. Close enough that you can use a hex editor and edit the disk to make it a functional basic disk without losing your data. There are a few utilities that do this for you (for $$ of course), or you can do it yourself.

Here are the basic instructions:
  1. Download a hex editor for the disk. I used this one:
  2. In the hex editor, rows are labeled as Offset and columns are labeled 0 to F. The first partition table entry on a dynamic disk has a value of 42 at Offset 00000001C0, column 2 (byte offset 0x1C2). Change this value to 07. The value 07 is used on basic disks for NTFS partitions.
  3. If your disk has multiple partitions, then you need to go down to the next row (each partition table entry is 16 bytes) and make the same change. Note that basic MBR disks can have a maximum of four primary partitions. Dynamic disks do not have this limitation.
  4. After modifying all the necessary values, save the changes.
  5. At this point, I rescanned the drives and was able to see a basic disk, but not do anything with it. So, I restarted the VM.
  6. After the restart I was able to assign a drive letter to the modified disk, but it had no data. To recover the data I needed to repair the disk. You can use chkdsk /f, but I used the disk repair option available in Windows Explorer.
  7. The repair completed very quickly and I was able to see all of the data.
  8. After a final reboot the applications using the data on D: drive started properly and appear to be functional.
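For reference, the byte edits in steps 2 and 3 can be sketched in PowerShell. This is my own illustration of the same change, not a replacement for a proper utility; it assumes a raw image or fixed-size VHD where the file begins with the MBR, and you should only ever run it against a copy of the disk file, never the disk a VM is using:

```powershell
# Patch the partition-type byte of each MBR partition table entry from
# 0x42 (dynamic disk) to 0x07 (basic disk, NTFS). The first entry's type
# byte is at offset 0x1C2; the other three follow at 16-byte intervals.
function Convert-MbrPartitionsToBasic {
    param([string]$Path)   # path to a COPY of the disk image

    # ReadAllBytes is fine for a test file; a real multi-GB VHD would need
    # a stream that reads and rewrites only the first sector.
    $bytes = [System.IO.File]::ReadAllBytes($Path)
    foreach ($offset in 0x1C2, 0x1D2, 0x1E2, 0x1F2) {
        if ($bytes[$offset] -eq 0x42) { $bytes[$offset] = 0x07 }
    }
    [System.IO.File]::WriteAllBytes($Path, $bytes)
}
```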
Some additional resources:

Tuesday, April 7, 2015

Free Online Technical Conference

On May 14-15, 2015 the MVP Virtual Conference is running free online for all who register. The sessions are open to anyone and are presented by Microsoft MVPs.

It looks pretty cool with topics such as:
  • Migrating to Office 365
  • Windows Azure
  • Ransomware - prevention and recovery

Check it out here:

Friday, April 3, 2015

IE Compatibility View Woes

Many organizations have web-based applications that require Compatibility View in Internet Explorer to work properly. In cases where there are only a few computers or users, it's quite easy to add a specific web site to the Compatibility View list from the IE user interface.

We ran into a hiccup recently where several computers were not keeping a site in the Compatibility View list. You could add the site, but when you restarted Internet Explorer, it was gone. This happened because the Delete browsing history on exit option was enabled. This is a known issue that has existed since IE 8; this client is using IE 11.

Internet Explorer - Internet Options
To accommodate having this option on, I pushed out the Compatibility View site through Group Policy, which is probably what I should have done in the first place. The Group Policy setting exists for both Computer Configuration and User Configuration in:
  • Policies\Administrative Templates\Windows Components\Internet Explorer\Compatibility View\Use Policy List of Internet Explorer 7 sites
When you enable this policy and provide a domain, all sites in that domain are rendered in IE 7 mode (Compatibility View). Note that the required format is a domain name and not a URL. So, for example, example.com works, but http://www.example.com does not.

GP Setting for List of Compatibility View Sites
After the Group Policy has been applied to the computers, you cannot verify that it took effect by looking at the list of Compatibility View sites. Even though the setting applies to the sites, you cannot see them in the IE user interface. To verify that the setting is effective, you can use the Developer Tools in IE.

To use the Developer Tools option in IE, press F12. This opens a pane at the bottom of the IE window. Select the Emulation tab and read the value listed for Document mode. If this value is 7 (Default) then it is using Compatibility View.

IE 11 - Developer Tools

Tuesday, March 17, 2015

Throttling Hyper-V Replication Traffic

We have a client with two physical locations and a 100 Mbps link between them. For disaster recovery purposes, we are using Hyper-V replication between the two sites.

Recently we made a disk change to one of the VMs that forced us to delete and recreate replication for that VM. Unfortunately, it was the VM with 650 GB of data. Replicating that amount of data over a 100 Mbps link will take about 14 hours in the best case (650 GB × 8 ≈ 5,200 Gb, and 5,200 Gb ÷ 0.1 Gbps ≈ 52,000 seconds), so we need to control the replication and prevent it from interfering with normal business.

Hyper-V does not have any built-in functionality to control bandwidth for replication traffic. Fortunately, Windows Server 2012 R2 has quality of service (QoS) functionality built into the operating system.

In my case, the receiving server uses port 443 to receive the data. On the source server, vmms.exe replicates the data to the destination server. I created a policy that limits traffic from vmms.exe to port 443 to 50 Mbps with the following command:
New-NetQosPolicy "VM Replication" -IPPortMatchCondition 443 -AppPathNameMatchCondition vmms.exe -ThrottleRateActionBitsPerSecond 50000000
Be aware that there are other options that you can use to refine the policy. For example, you can specify destination IP, source port, destination port, or protocol (TCP or UDP). There is lots of flexibility to be as precise as you need to be.

After creating a policy, you can manage it by using:
  • Set-NetQosPolicy
  • Remove-NetQosPolicy
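For example, once the initial replication completes, you can review the policies in place and remove the throttle. The policy name matches the one created above:

```powershell
# Review existing QoS policies
Get-NetQosPolicy

# Remove the throttle when it's no longer needed
Remove-NetQosPolicy -Name "VM Replication" -Confirm:$false
```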

Microsoft documentation for New-NetQosPolicy:

Monday, March 16, 2015

Exchange 2013 Is Filling My C: Drive

So, Exchange 2013 does a few things differently than previous versions of Exchange. If you are not careful about how you allocate your disk space, you'll end up constantly wondering why the C: drive is filling up. And if your monitoring isn't up to snuff, you'll only notice the C: drive is full when mail flow almost stops due to back pressure.

If you are using direct attached storage for your C: drive as Microsoft suggests, then you likely have a fairly large C: drive. Something like 300GB or more. If you have a C: drive that large you're likely OK and don't need to worry about it. On the other hand, if you have your Exchange server using a SAN and you tried to keep your C: drive to 80GB because that SAN space is expensive, you will have issues.

There are three common things that can use up space on the C: drive:
  • Internet Information Services (IIS) logs
  • Exchange Server diagnostic logs
  • Transport queues
IIS Logs
Exchange Server has a variety of web-based services such as Outlook Web App (OWA) and Exchange Web Services (EWS). IIS hosts these web-based services and generates logs for them each day. Unfortunately, IIS does not have an option to delete log files after a certain number of days. This problem existed for Exchange 2010 and Exchange 2007 as well.

In most cases, the issue isn't so much the size of the log files generated each day. Instead, it's the fact that the log files build up over time and begin to take up a significant amount of space.

You can move the IIS logs off of the C: drive by modifying the log settings for the Default Web Site. However, that still leaves you with a constantly building collection of log files. What we have started to do is delete all log files older than 14 days by using a scheduled task. This link provides instructions:
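As a sketch of what that scheduled task runs, here is a minimal cleanup function. The function name is my own, and the default path assumes the standard IIS log location; adjust it if your logs have been moved:

```powershell
# Delete IIS log files older than $Days days under $LogPath.
function Remove-OldIisLogs {
    param(
        [string]$LogPath = "C:\inetpub\logs\LogFiles",  # default IIS log location
        [int]$Days = 14
    )
    Get-ChildItem -Path $LogPath -Recurse -Filter *.log -File |
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$Days) } |
        Remove-Item -Force
}
```

Point a daily scheduled task at a script containing this and the log folder stays a fixed size.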
Exchange Server Diagnostic Logs
Exchange Server 2013 generates significantly more diagnostic logs than Exchange Server 2010. For Exchange 2010, Microsoft suggested having 1.2GB available on the D: drive. Now 30GB is recommended for the C: drive, and a big chunk of this is diagnostic logs. The logs are automatically purged after 14 days, but they still consume a significant amount of space. One of our clients with about 1700 users has 20GB of diagnostic logs.

If you didn't allocate enough space originally on your server, there is no supported method for moving the logs to an alternate drive. However, you can use a junction point in the file system (created with mklink.exe) to redirect the folder to an alternate location. Alternatively, expanding the C: drive to accommodate the diagnostic logs also works.
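A sketch of the junction approach, assuming the default install path and a hypothetical L: drive for logs (on PowerShell 5 and later, New-Item can create the junction instead of mklink /J):

```powershell
# Move the existing Logging folder, then junction the old path to the new location.
# Stop the Exchange services that write to this folder before doing this.
$src = "C:\Program Files\Microsoft\Exchange Server\V15\Logging"
$dst = "L:\ExchangeLogging"
Move-Item -Path $src -Destination $dst
New-Item -ItemType Junction -Path $src -Target $dst
```

Exchange keeps writing to the original path, but the data lands on the L: drive.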

If you can't have a large C: drive when you install your server, consider installing Exchange Server on a separate drive. The diagnostic logs are kept in the installation folder:
  • [install location]\Microsoft\Exchange Server\v15\Logging
Transport Queues
In Exchange 2013, the transport queues are significantly larger than in Exchange 2010. This is due mostly to the new Safety Net feature, which keeps a copy of mail messages for 2 days to help when recovering from disasters. As you can guess, this can add up to a significant amount of data. One of our clients had transport queues growing to about 60GB per server.

Again, you could install Exchange initially on a separate drive to allow for the size of the mail.que file. However, if Exchange is installed to the C: drive, you can do a few things to manage mail.que.

First, you can move the transport queues to an alternate location. Microsoft provides a set of directions for this:
Second, you can manage Safety Net to reduce the amount of data that is cached to less than 2 days if you deem that appropriate for your environment. You can do this from the Exchange Administrative Center (EAC) at Mail flow > Receive connectors > More options > Organization transport settings > Safety Net > Safety Net hold time. You can also use the Set-TransportConfig cmdlet with the -SafetyNetHoldTime parameter.
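For example, reducing the hold time to one day looks like this (the one-day value is just an illustration; the parameter takes a dd.hh:mm:ss timespan):

```powershell
# Reduce Safety Net retention from the 2-day default to 1 day
Set-TransportConfig -SafetyNetHoldTime 1.00:00:00

# Verify the change
Get-TransportConfig | Select-Object SafetyNetHoldTime
```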

For more information about Safety Net, see: