Wednesday, September 20, 2017

ACMESharp and Visual Studio Code Error

I lost a fair bit of time troubleshooting an error that turned out to be an odd compatibility issue between the ACMESharp module and Visual Studio Code. Hopefully this saves someone else the time.

In Visual Studio Code, when running Submit-ACMECertificate, I got this error:
Submit-ACMECertificate : Error resolving type specified in JSON 'ACMESharp.PKI.CsrDetails, ACMESharp'. Path '$type', line 2, position 48.
At line:1 char:1
+ Submit-ACMECertificate nosub
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Submit-ACMECertificate], JsonSerializationException
    + FullyQualifiedErrorId : Newtonsoft.Json.JsonSerializationException,ACMESharp.POSH.SubmitCertificate



I read a bunch of posts that blamed this error on Newtonsoft.Json being installed in the Global Assembly Cache, but it wasn't in the GAC on my computer.

I tested the same script on my desktop instead of the laptop. Nope, same error.

It turned out that the command worked just fine at a normal PowerShell prompt. So, my best guess is that Visual Studio Code handles that JSON type resolution differently than a standalone PowerShell prompt does.
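If you need to kick off the request from inside Visual Studio Code anyway, one possible workaround (just a sketch I haven't verified; certAlias is a placeholder for your certificate alias) is to hand the command off to a separate Windows PowerShell process:

powershell.exe -NoProfile -Command "Import-Module ACMESharp; Submit-ACMECertificate certAlias"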

I've run into oddities in the past when using PowerShell ISE for some things. However, that was minor stuff like text colors not updating. This is the first time I've seen behavior differ this significantly depending on the tool running the script.

Thursday, September 14, 2017

Getting Detailed Error Messages for Mailbox Moves

In Office 365 or Exchange Server 2013/2016, you can use the administration console to view information about migration batches. To investigate failing moves, you can view the details of the migration batch and then view the report for individual mailboxes. When you view the report for a mailbox, a text file is downloaded for viewing.

The report provides detailed information about how much data has been moved. Also, if there are errors, they are contained in the report. Unfortunately, sometimes the errors are pretty generic. For example, one error I got recently was:
Transient error TimeoutErrorTransientException has occurred. The system will retry (200/1300).
Instructions on how to review the report:
Since the error was happening often, we needed to get more information. Fortunately, that detail is available, but not in that report. Instead, you need to use Windows PowerShell to view the move request statistics. If you are moving the mailbox to Office 365, you need to use a PowerShell prompt connected to Exchange Online.
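For reference, a common way at the time of writing to connect a Windows PowerShell session to Exchange Online is remote PowerShell, along these lines (standard connection values, nothing tenant-specific):

# Prompt for Office 365 admin credentials and open a remote session to Exchange Online
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session -DisableNameChecking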

$stats = Get-MoveRequest user@domain.com | Get-MoveRequestStatistics -IncludeReport
$moveErr = $stats.Report.Entries | Where-Object {$_.Type -eq "Error"}

This leaves you with an array named $moveErr that contains all of the errors. You can view each error by specifying an individual array item.

$moveErr[1]
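If the move has logged several errors, you can also check how many there are and scan them all at once. A quick sketch using the Type and Failure properties referenced in this post:

# How many error entries were logged
$moveErr.Count
# Show the type and failure detail for every error entry
$moveErr | Format-List Type,Failure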

In my case, I got this more detailed information in the Failure property:

TimeoutErrorTransientException: The call to 'https://hybridserver/EWS/mrsproxy.svc
hybridserver (15.1.1034.26 caps:07FD6FFFBF5FFFFFCB07FFFF)' timed out. Error
details: The request channel timed out while waiting for a reply after 00:00:30.9527011.
Increase the timeout value passed to the call to Request or increase the SendTimeout
value on the Binding. The time allotted to this operation may have been a portion of a
longer timeout. --> The request operation did not complete within the allotted timeout of
00:00:50. The time allotted to this operation may have been a portion of a longer
timeout. --> The request channel timed out while waiting for a reply after
00:00:30.9527011. Increase the timeout value passed to the call to Request or increase
the SendTimeout value on the Binding. The time allotted to this operation may have been a
portion of a longer timeout. --> The request operation did not complete within the
allotted timeout of 00:00:50. The time allotted to this operation may have been a portion
of a longer timeout.

That error gave me enough detail to start looking at timeout settings for the MRS proxy service.

Tuesday, September 12, 2017

Using Let's Encrypt Certificates for Exchange Server

Have you ever fantasized about using free SSL/TLS certificates for Exchange Server? If so, then this blog post is for you.

I’ve always hated the cost associated with SSL/TLS certificates. For what seemed like a pretty basic service some of the certificate authorities (CAs) were charging hundreds or thousands of dollars. You could always set up your own CA, but that didn’t work well with random clients on the Internet because they won’t trust certificates generated by your CA.

At the end of 2015, there was a game-changing development. Let’s Encrypt started giving away SSL/TLS certificates for free. At the time, the certificates were only for a single name. So, without SAN support, they weren't a good fit for Exchange Server. However, there is now support for SAN/UCC certificates. And, in 2018 they are planning to support wildcard certificates.

What’s the Catch?

The certificates are free. There is no catch there. But, they do have a short lifetime of 90 days. The short lifetime is to ensure that compromised certificates are not available for an extended period of time. Because of the short lifetime, it is strongly recommended that you automate certificate renewal.

Note: This blog post only shows the manual steps for obtaining a certificate. I'll put up another one showing automation.

The process for generating and renewing a certificate is a bit complex. But, once the initial process is defined, it’s pretty easy to work with.

Unlike a typical CA, Let’s Encrypt does not provide a web site to manage your certificate requests. Instead you need client software that communicates with the Let’s Encrypt servers. Since I already work with Windows PowerShell on a regular basis, I like the ACMESharp module that provides PowerShell cmdlets for working with Let’s Encrypt.

Installing the ACMESharp Module

The ACMESharp module is available in the PowerShell Gallery. To download and install modules from the PowerShell Gallery, you use the Install-Module cmdlet that is part of the PowerShellGet module. The PowerShellGet module is included as part of the Windows Management Framework 5 (part of Windows 10 and Windows Server 2016).

If you are not using Windows 10 or Windows Server 2016, you can download and install WMF 5 or a standalone MSI installer for PowerShellGet here:

After you have PowerShellGet installed, run the following command:
Install-Module ACMESharp

When you run this command, you might be prompted to install NuGet. If you are prompted, say yes to install it. NuGet provides the functionality to obtain packages from the PowerShell Gallery, and the PowerShellGet cmdlets rely on it.

You might also be prompted that the repository PSGallery is untrusted. This is the PowerShell Gallery that you want to download files from. So, say yes to trust PSGallery.
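If you would rather not answer those prompts interactively (for example, when repeating this setup on several servers), you can approve both ahead of time. A quick sketch:

# Install the NuGet package provider without prompting
Install-PackageProvider -Name NuGet -Force
# Mark the PowerShell Gallery as a trusted repository
Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
Install-Module ACMESharp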



Connecting to Let’s Encrypt


The first step after installing the ACMESharp module is creating a local data store for the ACMESharp client. The data store is used by the client for secure storage of requests and keys.
To create the local data store run the following command:
Initialize-ACMEVault

Then, to create an account with Let’s Encrypt, run the following command:
New-ACMERegistration -Contacts mailto:youremail@yourdomain.com -AcceptTos

Validating DNS Names

Let’s Encrypt requires you to verify ownership of each DNS name that you want to include in a certificate. Each DNS name is referred to as an identifier. For a SAN certificate, you will create two or more identifiers and then specify those identifiers when you create the certificate.

You can validate an identifier in three ways:
  • Dns - You need to create a TXT record in DNS that is verified by Let’s Encrypt.
  • HTTP - You need to place a file on your web server that is verified by Let’s Encrypt.
  • TLS-SNI - You need to place an SSL/TLS certificate on your web server that is verified by Let’s Encrypt.
In the projects I work on, I typically do not have access to the main company web server/site, but do have access to create DNS records. So, I use DNS validation.

To create a new identifier:
New-ACMEIdentifier -dns server.domain.com -alias idAlias
You should include an alias each time you create an identifier. If you don’t create an alias, there is no easy way to refer to the identifier in later steps. If you forget, just create another identifier with the same DNS name and include the alias.

In my example, the DNS name has four parts because I was testing by using a subdomain. In most cases, the DNS name will have only three parts.
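For an Exchange certificate, you will typically repeat this for each name that goes into the certificate. For example (the names and aliases below are just placeholders):

New-ACMEIdentifier -dns mail.yourdomain.com -alias mailId
New-ACMEIdentifier -dns autodiscover.yourdomain.com -alias autodiscoverId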


Next, you need to specify how the identifier will be verified. When you do this, the cmdlet reports back the proof you need to provide. For HTTP verification, it identifies the file name. For DNS verification, it identifies the TXT record that needs to be created.

The command to start verifying the identifier is:
Complete-ACMEChallenge idAlias -ChallengeType dns-01 -Handler manual
Use the alias of the identifier to specify which identifier is being verified.

The manual handler indicates that you will create the challenge response yourself. There are other handlers that create the response automatically for HTTP-based challenges; those handlers are specific to different web servers.

The challenge type dns-01 identifies that you will create a TXT record in DNS. Note that dns-01 must be in lowercase. 
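As an illustration, the dns-01 challenge typically asks for a TXT record along these lines (the name and token value below are made up; use the exact values reported by Complete-ACMEChallenge):

Name:  _acme-challenge.server.domain.com
Type:  TXT
Value: evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA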


After you have created the DNS record that corresponds to the challenge, you submit the challenge. When you submit the challenge, Let’s Encrypt verifies it.

To submit a challenge:
Submit-ACMEChallenge idAlias -ChallengeType dns-01

When you submit the challenge, you need to specify the alias of the identifier and the challenge type.
The validation may or may not complete immediately. You can verify the status of the validation with the following command:
(Update-ACMEIdentifier idAlias -ChallengeType dns-01).challenges
The Update-ACMEIdentifier cmdlet queries the status of the identifier from the Let’s Encrypt servers. The challenges property contains the challenges generated when the identifier was created. The status for the dns-01 challenge will change to valid when validation is complete.
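If you only want to see the dns-01 entry, you can filter the challenges. A sketch that assumes the challenge objects expose Type and Status properties:

(Update-ACMEIdentifier idAlias -ChallengeType dns-01).challenges | Where-Object {$_.Type -eq "dns-01"} | Select-Object Type,Status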



Creating a Certificate

After you have validated all of the identifiers that will be included in your certificate, you can generate the certificate request. When you generate the certificate request, you need to specify the identifiers to include and a new alias for the certificate.

To generate the certificate request:
New-ACMECertificate idAlias -generate -AlternativeIdentifierRefs idAlias1,idAlias2 -Alias certAlias
The first identifier alias that you provide becomes the subject of the certificate. This name is also added to the subject alternative names automatically, so you don’t need to repeat it as an alternative identifier.
The -AlternativeIdentifierRefs parameter identifies the additional identifiers that are included in the certificate. All identifiers used here must already be validated.
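Continuing the placeholder aliases from earlier, a request for a two-name certificate might look like this:

New-ACMECertificate mailId -generate -AlternativeIdentifierRefs autodiscoverId -Alias exchangeCert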



To submit the certificate request:
Submit-ACMECertificate certAlias
The certificate alias used here is the alias that was set when running the New-ACMECertificate cmdlet.


To export the completed certificate as a pfx file that includes the certificate and private key:
Get-ACMECertificate certAlias -ExportPkcs12 filename.pfx -CertificatePassword "password"
The -ExportPkcs12 parameter can be given a file name or a full path. If there are spaces in either one, you need to put quotation marks around it.


When I first ran the Get-ACMECertificate cmdlet to export the pfx file, I got an error:
Issuer certificate hasn’t been resolved.

This error was caused by the computer with ACMESharp not having the necessary intermediate certificate for the Let’s Encrypt CA. After I installed the X3 intermediate certificate, the export worked without issue.


If necessary, you can get the X3 intermediate certificate here:
After you have the pfx file, you can import it and assign it to Exchange Server by using the normal Exchange management cmdlets.
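As a sketch of that last step, run from the Exchange Management Shell (the file path, password prompt, and service list are placeholders; adjust them for your environment):

# Read the pfx file and import it into Exchange
$pfxBytes = [System.IO.File]::ReadAllBytes("C:\Certs\filename.pfx")
$cert = Import-ExchangeCertificate -FileData $pfxBytes -Password (Read-Host "PFX password" -AsSecureString)
# Assign the certificate to the services that should use it
Enable-ExchangeCertificate -Thumbprint $cert.Thumbprint -Services IIS,SMTP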

Friday, September 1, 2017

Remove Proxy Address from Office 365 User

I ran into an issue today where I needed to remove a proxy address from a cloud-based administrative user in Office 365 that was unlicensed. This user had a proxy address that was conflicting with a proxy address that was being synced with Azure AD Connect for another user account.

The cloud user was originally created as byron@OnPremDomain.com and renamed to be byron@CloudDom.onmicrosoft.com. When this was done, the original address (byron@OnPremDomain.com) was kept as a proxy address. You could view both addresses when using Get-MsolUser. This address caused a synchronization error for an on-premises user named byron@OnPremDomain.com.

To resolve this error, I needed to remove byron@OnPremDomain.com from the list of proxy addresses. However, you can't do this with Set-MsolUser. The mechanism for managing proxy addresses in Office 365 is Set-Mailbox. But, without a license, there is no mailbox for the user account.

The solution is to add a license temporarily:
  1. Add a license for byron@CloudDom.onmicrosoft.com, which creates a mailbox.
  2. Use Set-Mailbox -EmailAddresses to remove the incorrect proxy address (see the sketch below).
  3. Verify that Get-MsolUser shows only the correct proxy addresses.
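A sketch of step 2, using the @{remove=...} syntax of Set-Mailbox (the addresses are the placeholders from above):

Set-Mailbox -Identity byron@CloudDom.onmicrosoft.com -EmailAddresses @{remove="smtp:byron@OnPremDomain.com"}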


Monday, August 28, 2017

AD Synchronization Error When Adding Exchange 2016

When I implement hybrid mode for an organization, we typically deploy Exchange Server 2016 as the long-term hybrid server. This provides the most recent Exchange Server version for management.

Today, when I was installing Exchange Server 2016 into an Exchange 2010 organization, we started getting directory synchronization errors for four system mailboxes. This occurred after I ran setup with /PrepareAD, but before the remainder of Exchange Server 2016 was installed.

SystemMailbox{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}@XXXXX.com

Unable to update this object in Azure Active Directory, because the attribute [Username], is not valid. Update the value in your local directory services.
We only noticed these errors because Office 365 sent an Identity Synchronization Error Report that listed them.

When I looked at the details in Synchronization Service Manager for Azure AD Connect, the same errors appeared on the export to the Office 365 tenant.
I did some searching around and found a few articles that talked about manually updating attributes for these objects. However, when I looked at these objects the data didn't match what those articles were talking about.

Eventually I found a posting in an MS discussion forum that said to just wait until the install was finished. Apparently /PrepareAD creates the objects, but their configuration is not complete until the rest of Exchange Server 2016 is installed.

Sure enough, after the Exchange Server 2016 install was finished, the synchronization errors went away.

Posting in MS discussion forum for reference (see answer from Ytsejamer1):
I also refreshed the connector schema as suggested in this post. However, I did that after running /PrepareAD and before Exchange Server 2016 was installed, so I can't say for sure whether this step is required.




 

Monday, August 21, 2017

Updating SIP Addresses in Skype for Business

When you migrate to Office 365, the preferred configuration is to have user email addresses and UPNs the same. Having a single identity makes it easier for users to understand.

If you are implementing Skype for Business in Office 365, it will take the UPN of the user as the Skype address. Again, keeping a single identity is good.

However, if you have an on-premises implementation of Skype for Business, then the Skype identity is configured in the attribute msRTCSIP-PrimaryUserAddress.  This attribute contains a SIP (session initiation protocol) address that looks like an email address but with “sip:” at the start. For example: “sip:user@contoso.com”.

The SIP addresses defined in your on-premises Skype for Business may or may not match the email addresses of the users. You need to verify whether the addresses match. If the SIP address does not match the email address, it is easy to change.
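One way to find mismatches is to compare the two values in the Skype for Business Management Shell. A rough sketch, assuming the WindowsEmailAddress attribute is populated for your users:

# List users whose SIP address does not match their primary email address
Get-CsUser | Where-Object {$_.SipAddress -ne "sip:$($_.WindowsEmailAddress)"} | Select-Object DisplayName,SipAddress,WindowsEmailAddress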

On the Skype for Business server, run the following PowerShell command:
Set-CsUser -Identity userUPN -SipAddress "sip:emailaddress"
When you update the SIP address for UserA there are two considerations:
  • UserA will be signed out of Skype within about 1 minute. After UserA is signed out, they need to sign in again by using their new SIP address.
  • Any users with UserA as a contact are not updated immediately. These users need to sign out and sign in again for the contact to be updated and to properly view presence information. However, this only works if the address book has been updated.
Address book updates are the big gotcha in SIP address updates. By default, Skype for Business clients use a cached address book that only updates once per day. Even if you force address book updates on the server, the new address information is not retrieved until the next day unless you manually delete the cached address book files from the client to force a reload.

To address this problem, you can switch clients to use an online address book. Clients using the online address book query the Skype for Business server each time they search the address book. When using the online address book, changes show up within a few minutes.

Address book lookups are controlled by client policies. By default, there is one policy named Global. You can update this policy to use online address book resolution with the following command:
Set-CsClientPolicy -Identity Global -AddressBookAvailability WebSearchOnly
Alternatively, you can create a new policy and assign that to users incrementally:
New-CsClientPolicy -Identity OnlyWebAB -AddressBookAvailability WebSearchOnly
Grant-CsClientPolicy -Identity UserName -PolicyName OnlyWebAB
The new client policy takes effect the next time the user signs in. To increase the likelihood that users have the new policy, you should configure it and then wait until the next day before making large scale changes.

Some useful links:

Sunday, August 13, 2017

Scripting Complexity vs. Simplicity

Note: Most of my blog posts are technical items that relate to performing a specific task or fixing a specific error. This one is more of an opinion piece. So, if my arguments sound reasonable, take the advice. If you disagree, by all means, go your own way.

I am a big believer that I need to understand the details of any script I run in a production environment. Unless that script is from Microsoft and provided to perform a specific task (such as migrating public folders), I will go through a script line by line to verify I understand it. This even applies to my own scripts. If I haven’t used a script for several months, I’ll review it before I use it again to make sure I know exactly what it’s doing.

I expect that most system administrators operate with the same requirement to understand the scripts they are running. At least I hope they do. I don’t want anyone blindly running a script I created without understanding the script and knowing what it will do in their environment.

So, from my perspective, running a script is very different from running a program/utility. A program is distributed as an executable and you simply need to trust the developer. I don’t need to trust a script writer if I can understand what the script is doing. The writer just saved me the effort of creating the script myself. It’s much faster to review and verify an existing script than it is to create one.

Simple is good, right?

If you need to understand a script, then simple is good. However, we often want a script to be resilient to user error. That is, we want the script to do things like:
  • Not accept invalid values
  • Not make incorrect/invalid changes to objects
  • Warn when mistakes are about to be made
A more user-friendly script is good, but it becomes more complex and more difficult to interpret. So, does that mean complexity is good?

Know your audience

The balance between simple (but you need to know what you're doing) and complex (with lots of error checking) depends on your audience. If the audience needs to understand the details of your script, then simplicity is better. If the audience does not care about the details inside the script, then simplicity is less important. For example, if the help desk staff in your organization run standardized scripts to complete specific tasks, then being user friendly is more important than being simple. The help desk staff will be running the script without reviewing the contents or understanding how it works. So, if they give an invalid value, you need to account for that in your script. It’s worthwhile to make it user friendly to prevent errors in the production environment.

The customers I work with typically will look at my scripts before running them. So, these scripts need to be relatively short and understandable. The documentation that I provide for running these scripts also needs to be short and understandable. In this case simplicity is most important for the audience. For example, if there are several tasks that need to be accomplished such as updating email addresses, UPNs, etc. I will create separate scripts for each task. This makes each individual script easier to understand and document.

Striking a balance

I tend to work with customers that have varying knowledge levels and needs. Some of them would run a script without looking at it, and others want a detailed understanding of how the scripts work. I want my scripts to work in both scenarios. So, I put in some of the user-friendly bits, like prompting for values when required and some basic checking for invalid values, but I don’t try to account for every possible error. Accounting for every possible error would add too much complexity.

Because I can’t (or won’t) put in tons of error checking, one thing I often do is display a confirmation on screen before performing an action. For example, I have a script to remove email addresses for a specified domain. Before the script performs the action, it displays on screen the pattern being searched for and provides an email address from the first mailbox as an example of what is about to be removed. The script also displays the number of mailboxes that will be modified. Displaying this information allows the user to perform a final sanity check before approving removal from all mailboxes.

To make scripts more understandable, I use lots of comments that describe what each section of a script is doing.  This is useful for customers that want to review the script. It’s also useful for me when I want to review the script contents or modify the script. Embedding comment-based help in a script makes it easier to use, but detailed comments in the script are better for understanding how it works.

If you don’t get the right balance then your scripts won’t get used. That help desk person will think your script generates errors all the time and ignore it. The administrators you gave complex scripts to will build their own because they can’t understand yours.