Thursday, August 25, 2016

Interpreting RepAdmin.exe /ReplSummary

One of the basic tasks you can do to verify Active Directory replication health is to run RepAdmin.exe /ReplSummary. The question becomes, what exactly do the results mean?
If you’re looking for a quick analysis, here it is. With no fails and all largest deltas less than 1 hour, you’re all good.
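If you want to script that quick check, repadmin can also emit its per-connection data as CSV, which PowerShell can turn into objects. The sketch below is just that, a sketch: the /showrepl CSV column names (such as 'Number of Failures') are assumptions based on repadmin's header row, so confirm them against your own output.

 # Pull per-connection replication status as objects and keep only failing connections
 # Column names (e.g. 'Number of Failures') are taken from repadmin's CSV header
 repadmin /showrepl * /csv |
      ConvertFrom-Csv |
      Where-Object { [int]$_.'Number of Failures' -gt 0 } |
      Select-Object 'Source DSA', 'Naming Context', 'Number of Failures', 'Last Failure Time'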


Now, for a more detailed look…

Total is the number of replication connections that a domain controller has with other domain controllers. The number of connections is probably higher than you expect because a separate connection is created for each Active Directory partition that’s being replicated. Most of the time you will have 5 per domain controller (domain, configuration, schema, DomainDnsZones, ForestDnsZones).

Fails is the number of connections that are not replicating properly. The number of fails should be zero.

Largest delta is where some of the confusion comes in. This is the longest period of time that a connection between two domain controllers has gone without being used. So, if a domain controller has one connection that has not replicated for 45 minutes and the others have replicated more recently, then this value is 45 minutes. That is why this value tends to be high.

Your other thought is likely: How can it take up to an hour for replication to occur?

Within an AD site, all changes are replicated within seconds. However, there are partitions that do not change often. In particular, the schema partition probably changes only every few months. However, even when there are no changes, the connection for a partition will communicate occasionally to verify that it didn’t miss a change notification (polling). The default value for polling within a site is once per hour. That’s where the up to 60 minutes comes from.

You can change the polling frequency within a site to twice per hour or four times per hour. That will reduce the largest delta value, but it won’t really make a difference in the replication of changed AD data. You’re just triggering that polling to happen more often. And, almost all of the time, polling doesn’t trigger any data replication because replication notifications within the site would have already triggered it.

If you do want to change this value, it’s done in the Schedule for the NTDS Settings object in the AD site. It can also be modified for each connection individually, but it’s preferred to do it at the site level.

Between sites, the replication interval on the site links determines how often the polling happens. Most organizations that I work with have dropped the interval down to every 15 minutes from the default of 180 minutes. If it were left at the default of 180 minutes, then the largest delta could range up to 180 minutes.
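If you're on Windows Server 2012 or later, the Active Directory module can show and change the intersite interval without opening Sites and Services. Here's a minimal sketch of that approach; the site link name is a placeholder:

 # Review current intersite replication intervals (in minutes)
 Get-ADReplicationSiteLink -Filter * |
      Select-Object Name, Cost, ReplicationFrequencyInMinutes

 # Drop a site link from the 180-minute default to 15 minutes ("HeadOffice-Branch" is a placeholder)
 Set-ADReplicationSiteLink -Identity "HeadOffice-Branch" -ReplicationFrequencyInMinutes 15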

Friday, August 12, 2016

Update Mount-ADDatabase for PowerShell v2

I'm working on some Active Directory (AD) disaster recovery projects right now and one of the recovery methods we're implementing is AD snapshots. With AD snapshots, you have a copy of your AD data to identify and recover from accidental changes.

The client I'm working with has Windows 2008 R2 with PowerShell 2.0 on their domain controllers. PowerShell is my preferred method for automating anything at this point, but AD snapshots don't have any PowerShell cmdlets.

Fortunately Ashley McGlone, a Microsoft PFE, has created some PowerShell functions that help you manage and use AD snapshots. One of the coolest things in there is a function (Repair-ADAttribute) that lets you pull attributes from the snapshot and apply them to the same object in the production AD. You can read more about these functions and download them from these two locations:
The minor issue I ran into is with the Mount-ADDatabase function. This function has a -Filter parameter which displays a list of AD snapshots and lets you choose which one to mount. In Ashley's function this is done by using Out-GridView with the -OutputMode parameter, which requires PowerShell v3. Using Out-GridView is an easy way to allow the user to select the snapshot, and I wish it worked for my servers running PowerShell v2. Here is the line from the function:

 $Choice = $snaps | Select-String -SimpleMatch '/' |  
       Select-Object -ExpandProperty Line |  
       Out-GridView -Title 'Select the snapshot to mount' -OutputMode Single  

For my project, getting all of the DCs upgraded to PowerShell v3 would take a while. I also didn't want to leave the project in a place where a whole bunch of manual steps were required to mount an AD snapshot older than the previous day. So, let's convert this to a method that works in PowerShell v2.

Now I needed a way to convert a list of snapshots into a menu. My starting point was a TechNet discussion posting from Grant Ward (Bigteddy). You can view his solution in the discussion here:
Using that example I created the following code:

 $choices = $snaps | Select-String -SimpleMatch '/' | Select-Object -ExpandProperty Line  
 $menu = @{}  
 $i=0  
 foreach ($s in $choices) {  
      $i++  
      Write-Host "$i --> $s"  
      $menu.Add($i,$s)  
 }  
   
 [int]$ans = Read-Host 'Enter selection'  
 $Choice = $menu.Item($ans)  

This code takes the list of snapshots in the variable $snaps and does two things:
  • Writes a menu to the screen
  • Adds each menu item to the hashtable $menu
After the menu is displayed on the screen and the user selects an option (the $ans variable), the option is used to place the snapshot name into the $Choice variable for further processing. Now we have a version that works in PowerShell v2.




Monday, August 8, 2016

Finding the User or Group Name from a SID

I'm working on a project where we needed to set AD security permissions in a test environment based on the permissions in production. When I generated a report of the AD permissions that had been applied, several of the entries came back with SIDs instead of user or group names. Typically this means that the user or group has been deleted, but I wanted to confirm.

I wanted to take the SID and identify the user or group account that was associated with it. After a quick search I found a few examples that looked similar to this:

 $objSID = New-Object System.Security.Principal.SecurityIdentifier("S-1-5-21-1454471165-1004335555-1606985555-5555")  
 $objUser = $objSID.Translate([System.Security.Principal.NTAccount])  
 $objUser.Value  


Above example taken from: https://technet.microsoft.com/en-us/library/ff730940.aspx

It seemed to me that there had to be an easier way using the ActiveDirectory module for PowerShell, which these examples don't use. Good news, there is!

You can't use Get-ADUser or Get-ADGroup to identify the name because you don't know whether the SID belongs to a user or a group. However, you can use Get-ADObject:

 Get-ADObject -Filter {objectSID -eq "S-1-5-21-1454471165-1004335555-1606985555-5555"}  

If the command does not return any results then there is no AD object with that SID.
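If you have a whole report of orphaned SIDs to check, you can wrap the same lookup in a loop. This is only a sketch: the SIDs below are placeholders, and the -IncludeDeletedObjects switch is an optional extra so you can confirm an account was deleted rather than never existed.

 # Placeholder SIDs pulled from a permissions report
 $sidList = "S-1-5-21-1454471165-1004335555-1606985555-5555",
            "S-1-5-21-1454471165-1004335555-1606985555-5556"

 Foreach ($sid in $sidList) {
      # -IncludeDeletedObjects also matches deleted (tombstoned) objects
      $obj = Get-ADObject -Filter {objectSID -eq $sid} -IncludeDeletedObjects
      if ($obj) {
           Write-Host "$sid --> $($obj.Name) ($($obj.ObjectClass))"
      } else {
           Write-Host "$sid --> no matching AD object"
      }
 }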

Friday, July 29, 2016

Finding PCs Infected with WORM_ZLADER.B

A virus has recently been making the rounds that propagates by renaming folders and storing an executable file in a Recycle Bin folder. Just today we saw this on some shared folders.

Here is a link with more info about the virus:

So, we can identify that someone is infected by finding the $Recycle.Bin folders in the shares. But we need to track down where this came from. To do that we want to see the owner of the $Recycle.Bin folder because that is the person that created it.

You can't get the owner of the $Recycle.Bin folder by using Windows Explorer because Explorer treats this as a special folder type and limits what you can see. However, you can find the owner by using Get-ACL in PowerShell.

On a large file server with many shares, it's useful to scan the whole server to verify the extent of the infection. The script below starts in the current directory, finds all of the $Recycle.Bin folders, and displays the full folder path and the owner of each folder.

 # Find all $Recycle.Bin folders below the current directory
 # -Force includes hidden and system folders; single quotes keep $Recycle from being expanded
 $Folders=Get-ChildItem -Force -Recurse -ErrorAction SilentlyContinue | Where {$_.psiscontainer -eq $true -and $_.Name -like '$Recycle.B*' }
 Foreach ($f in $Folders) {
   # Get-Acl exposes the owner that Explorer hides for this folder type
   $owner=($f | get-acl).owner
   $Name=$f.Fullname
   Write-Host "Folder: $Name"
   Write-Host "Owner: $owner"
   Write-Host ""
 }

Some others have written scripts to rename folders back to their original names which is useful if many folders were infected. See these links for more information:

Monday, July 25, 2016

June 2016 Security Update Breaks Group Policy

So, this was a thing back in the middle of June and I missed it at the time. Looking back there are a few articles about it, but I just ran into this with a client (happily, before the update was applied) and I think it's worth raising awareness of.

In security updates released on June 14, 2016 for Windows Vista and later, the process for downloading Group Policy objects (GPOs) was changed. To make the download process more secure, the computer account is now responsible for downloading all GPOs. In the past, the user account could also download the GPOs.

The result of this change is that any computer that is downloading GPOs needs to have Read access to the GPO. The computer accounts do not need Apply permission, only Read is required.

By default the Authenticated Users group has Read and Apply permission for all GPOs. The Authenticated Users group includes all users and all computer accounts. So, if you haven't changed this, then you'll have no issues when these security updates are applied.

Even if you don't think you've modified the permissions on your GPOs, you might have. If you use security filtering to control GPO application, that modifies the GPO permissions. When you remove Authenticated Users from security filtering, you also remove the Read permission for Authenticated Users.

The quick fix is to add Authenticated Users or Domain Computers with Read permission to all of your GPOs. This doesn't modify which users or computers apply the GPOs, but it does give the computer accounts the necessary permissions to download the GPOs. Some of the links below show how to automate this process.
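As a rough sketch of what that automation can look like (review before running it against production), a loop with the GroupPolicy module can grant Read to Authenticated Users on every GPO. On Windows Server 2012 and later the cmdlet is named Set-GPPermission, with the plural name kept as an alias.

 # Sketch: grant Authenticated Users Read (not Apply) on every GPO in the domain
 # GPOs where Authenticated Users already has a higher level are left unchanged (no -Replace)
 Import-Module GroupPolicy

 Foreach ($g in (Get-GPO -All)) {
      Set-GPPermissions -Guid $g.Id -TargetName "Authenticated Users" `
           -TargetType Group -PermissionLevel GpoRead | Out-Null
      Write-Host "Granted Read on: $($g.DisplayName)"
 }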

If you'd like to verify whether you have any GPOs without any permissions assigned to Authenticated Users, I've created a short script that looks for that and dumps the list of GPOs and their current permissions to a file.

 #Script Requires the GroupPolicy cmdlets  
 #A manual import of the GroupPolicy module is required for pre-Win2012  
   
 Import-Module GroupPolicy  
   
 #Define badGPO as an array or you get one big string  
 #that's hard to work with  
   
 $badGPO = @()  
   
 #Get a list of all Group Policy objects  
 $gpo = Get-GPO -All  
   
 #Find Group Policy objects that don't have permissions  
 #assigned to Authenticated Users  
    
 Foreach ($g in $gpo) {  
      # Out-Null keeps the permission details for healthy GPOs from cluttering the console
      Try { Get-GPPermissions -GUID $g.id -TargetName "Authenticated Users" -TargetType Group -ErrorAction Stop | Out-Null }
      Catch { $BadGPO += $g.DisplayName }  
 }  
   
 Write-Host "List of GPOs without Authenticated Users permissions"  
 $badGPO  
   
 #Create File for Report  
 "Permissions Report of GPOs without Authenticated Users" | Out-File GPO-NoAuthUser.txt  
   
 Foreach ($b in $badGPO) {  
      "GPO: $b" | Out-File GPO-NoAuthUser.txt -NoClobber -Append  
      Get-GPPermissions -Name $b -All | Out-File GPO-NoAuthUser.txt -NoClobber -Append  
 }  
   
 Write-Host "Created GPO-NoAuthUser.txt with permissions report"  
 Write-Host "Use this report to verify that computer accounts"  
 Write-Host "have read access to the GPO for application after"  
 Write-Host "applying June 2016 Security Updates. For more info"  
 Write-Host "see: https://blogs.technet.microsoft.com/askpfeplat/2016/07/05/who-broke-my-user-gpos/"  

Some links with more information:

Friday, July 15, 2016

A Reason To Use Preauthentication with Exchange

Those that are really paranoid about security (and I don't mean that in a bad way) have always liked the idea of pre-authentication in a reverse proxy for Exchange Server. When you implement pre-authentication at the reverse proxy, only requests from authenticated users ever reach the Exchange server.

This sounds great, but I think from a practical perspective, it doesn't buy you much. It also adds significant complexity. Even with vendor support, getting pre-authentication going always seems to be a hassle.

Now, if you want to implement two-factor authentication for Outlook on the Web (OWA), then pre-authentication is a good point to do that. Most vendors have support for adding two-factor authentication. For my customers, this is seldom a concern.

Recently, though, I ran into an issue with account lockouts caused by repeated password attacks on OWA. In this case, the user account is locked and the user can't work. There is no simple solution for this without pre-authentication. Disabling OWA for the user doesn't prevent the failed authentication and the account lockouts. In complex environments with directory synchronization, changing a user logon name could break all sorts of things.

Most pre-authentication solutions allow you to lock out accounts at the reverse proxy before you hit the limits for account lockout in Active Directory. The net effect is that OWA is locked but the user can continue to work internally.

Now you need to think about whether that password attack is likely to happen.

UPDATE:
As an alternative to pre-authentication, you can also look at preventing automated sign-in attempts. One way to do this is with a captcha on the OWA login page. Here are a few links on how to do this with free captcha software available from Google. This authentication is spoofable if the attacker builds their own login page to send the authentication requests, but they'd need to be pretty motivated.
Another alternative that implements a captcha and also does much more is OWA Guard from Messageware. I haven't had a chance to install the trial yet, but it looks like a good product with the ability to do geo-blocking and OWA-specific lockouts that will prevent denial of service for AD accounts when a password attack is being performed on OWA.

Sunday, July 10, 2016

Getting an Integer from Get-MailboxStatistics

I was doing some analysis of an existing Exchange 2010 organization for a migration project and wanted to calculate the average mailbox size. So, I used Get-MailboxStatistics to gather the information and export it to a CSV file. Unfortunately, Get-MailboxStatistics gives you the mailbox size in this format:

 2.169 GB (2,329,318,152 bytes)

When you're trying to use Excel to get an average, this isn't going to work. Instead you need to get that value as an integer. To do this, you need to create your own property for the object with an integer value.

The TotalItemSize property from Get-MailboxStatistics is more complex than it appears. When you use Get-Member to look at the TotalItemSize property returned by Get-MailboxStatistics, it actually contains an IsUnlimited property and a Value property. The Value property has the size of the mailbox.

If you do a Get-Member on the Value property, you can see that there are methods to convert the value to an integer. For example:

 (Get-MailboxStatistics Bob).TotalItemSize.Value.ToMB()

To create a new property with the correct value as an integer, you use the Add-Member cmdlet. A complete three-command example that gets all mailboxes and exports the size to CSV is below:

 $mbx = Get-Mailbox -Resultsize Unlimited | Get-MailboxStatistics
 $mbx | Add-Member -MemberType ScriptProperty -Name MbxSizeInMB -Value {$this.TotalItemSize.Value.ToMB()}
 $mbx | Select-Object DisplayName,MbxSizeInMB | Export-Csv MailboxSize.csv

Line 1 gets the mailboxes and their statistics. Line 2 adds the property. Line 3 exports the mailbox name and size to a CSV file.
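Since the original goal was the average mailbox size, you can also let Measure-Object do that math on the new property instead of Excel. A quick follow-on sketch:

 # Average and total mailbox size in MB, using the MbxSizeInMB property added above
 $mbx | Measure-Object -Property MbxSizeInMB -Average -Sum |
      Select-Object Count, Average, Sum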