Configure SCOM to monitor servers in the DMZ

SCOM requires mutual authentication to trust and communicate with its agents for monitoring and reporting. Initially, SCOM tries to establish Kerberos authentication with the agents; this works for all internal agents that are joined to the domain.
For workgroup machines in the DMZ network, SCOM uses certificate-based authentication to secure the communication before monitoring them.

Below are the high-level steps:

1) Configure your firewall to pass traffic from the DMZ agents (DMZ servers) to the SCOM management server's ports 5723 and 5724.
2) Request a certificate for each DMZ machine (the certificate must support both Server Authentication and Client Authentication).
3) Request a certificate for the SCOM management server (again with both Server Authentication and Client Authentication).
4) Import the certificates on the DMZ machines.
5) Import the certificates on the SCOM 2012 server.
6) Run MOMCertImport on all machines and assign the certificate.
7) Approve the DMZ agents on the SCOM server.

To publish the certificate request for SCOM, there are two options depending on the CA we have:

  1. Enterprise CA.
  2. StandAlone CA.

1) Enterprise CA

If we are going to request the certificate from an Enterprise CA, we need to publish a certificate template for SCOM through the Enterprise CA.

To perform this task through the Enterprise CA:
Open the Certification Authority console, navigate to Certificate Templates and select Manage.


Right-click the Computer certificate template and click Duplicate.


Make sure the option "Allow private key to be exported" is selected.


The most important thing to note is that the extensions must have both Server Authentication and Client Authentication enabled. This applies to both the SCOM server and the DMZ hosts throughout the configuration, regardless of whether we request the certificates from an Enterprise CA or a stand-alone CA.


Once the above is completed, we can request a certificate from this duplicated template and import it on the SCOM server.

2) Stand-alone CA:

Below are the steps to be carried out for a stand-alone CA SCOM certificate request:

On the SCOM 2012 server, connect to the computer hosting Certificate Services:

https://ca.exchangequery.com/certsrv


Click "Request a certificate" and then "Advanced certificate request".


Click "Create and submit a request to this CA".

We will then get a web access confirmation as below; click Yes.


Below is the information that needs to be filled in:

Name – name of the server requesting the certificate.

Type of Certificate – choose Other.

OID – enter 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 (this defines the Enhanced Key Usage: server and client authentication).

Key Options – select Create new key set.

CSP – select Microsoft Enhanced Cryptographic Provider v1.0.

Key Usage – select Both.

Key Size – 1024.

Select Mark keys as exportable.

Request Format – CMC.

Hash Algorithm – SHA1. Give a friendly name and submit.


Once the request has been approved on the CA, we can go ahead and import the certificate on the SCOM server.
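The same kind of request can also be generated from the command line with certreq. Below is a sketch of an INF file matching the options above; the subject name is a placeholder and should be the requesting server's FQDN:

```
[NewRequest]
; Placeholder subject - use the requesting server's FQDN
Subject = "CN=scom01.exchangequery.com"
Exportable = TRUE
KeyLength = 1024
MachineKeySet = TRUE
ProviderName = "Microsoft Enhanced Cryptographic Provider v1.0"
RequestType = CMC
HashAlgorithm = SHA1

[EnhancedKeyUsageExtension]
OID = 1.3.6.1.5.5.7.3.1 ; Server Authentication
OID = 1.3.6.1.5.5.7.3.2 ; Client Authentication
```

Generate the request with certreq -new request.inf request.req and submit the resulting .req file to the CA.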

Request certificate for DMZ Servers to be Monitored:

First and foremost, we can request the certificates from an internal domain-joined server, since most of the time the DMZ servers will not have access to the certificate web enrollment service (port 443) on the internal certificate authority.

So we generate the certificate requests from one machine in the domain network and then import the issued certificates on the DMZ servers.

Perform the same process of submitting the certificate request for all the DMZ servers.

Below is the information that needs to be filled in:

Name – name of the DMZ server that requires the certificate.

Type of Certificate – choose Other.

OID – enter 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 (this defines the Enhanced Key Usage: server and client authentication).

Key Options – select Create new key set.

CSP – select Microsoft Enhanced Cryptographic Provider v1.0.

Key Usage – select Both.

Key Size – 1024.

Select Mark keys as exportable.

Request Format – CMC.

Hash Algorithm – SHA1. Give a friendly name and submit.

Once the above is done, we need to approve the requests on the CA and then import the issued certificates on the server from which we requested them.

Now we need to export each certificate from that machine and import it on the corresponding DMZ server that needs to be monitored.

There are multiple ways of doing this. I prefer the DigiCert Windows utility tool.

Download the DigiCert Windows utility tool from the URL below on the machine where the certificates were requested:

https://www.digicert.com/util/

On opening it, we can see all the issued SSL certificates whose private keys are on that machine.

Select the certificate requested for the DMZ server and click Export.


Select the option to export the private key, and protect the export with a password.


Once the above steps are completed, we need to import these certificates into the Computer Personal store on each DMZ server.

We can use the same Certificate Import Wizard as below to import the certificate on the DMZ servers.
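On Windows Server 2012 or later, the export and import can also be scripted with the built-in PKI cmdlets. The thumbprint, paths and file names below are placeholders:

```powershell
# On the machine where the certificates were requested:
# export the issued certificate together with its private key (placeholder thumbprint/path)
$pfxPassword = Read-Host -Prompt "PFX password" -AsSecureString
Export-PfxCertificate -Cert Cert:\LocalMachine\My\<thumbprint> `
    -FilePath C:\temp\dmz01.pfx -Password $pfxPassword

# On the DMZ server: import it into the computer Personal store
Import-PfxCertificate -FilePath C:\temp\dmz01.pfx `
    -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
```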


Now the final step is to run MOMCertImport on all machines, select this certificate, and we are done.

The MOMCertImport GUI tool can be found on the SCOM 2012 installation media in the below directory:

E:\supporttools\AMD64\MOMCERTIMPORT

Make sure the same version of the tool from the setup media is copied to all machines.

Just run the tool on each machine; a pop-up window will ask us to confirm the certificate. Confirm by choosing the certificate we requested for that server.
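MOMCertImport can also be pointed at the certificate non-interactively; the subject name below is a placeholder and should match the certificate requested for that machine:

```powershell
# Select the certificate by subject name instead of picking it in the GUI
.\MOMCertImport.exe /SubjectName "dmz01.exchangequery.com"
```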

After the above is completed, wait for some time and these DMZ servers will appear under Administration – Pending Management on the SCOM server; we just need to approve them and we are done.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services

Skype for Business Persistent Chat Migration to new Pool

We might come across a scenario where we need to migrate SFB servers to a new pool.
In a few cases we need to move from old servers to new, higher-performance hardware, or the servers might need to be virtualized from physical hardware to VMs.
This article focuses only on migrating the Persistent Chat pool from the old servers to the new ones.

Below are the prerequisites to be completed before starting the Persistent Chat migration:

1) The new Persistent Chat pool should already be published in the topology.
2) The new Persistent Chat nodes should already be added to the new pool and the SFB Setup wizard completed.
3) Certificates should already be assigned to the new Persistent Chat pool.
4) Connectivity from the old PC pool to the new SQL DB is already established.
5) Connectivity from the new PC pool to the old SQL DB is already established.
6) Connectivity from the old PC hosts to the new PC hosts is established.

To Start the Migration:

Check your current Persistent Chat categories, add-ins, policies and configuration.
This can be verified through the Persistent Chat tab in the Control Panel or through the shell.

To Check Persistent Chat Category:

Get-CsPersistentChatCategory -PersistentChatPoolFqdn "Pchat.exchangequery.com"

Make a note of the current number of Persistent Chat rooms.
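One way to record the current room count for the old pool (the pool FQDN is the one from the example above):

```powershell
(Get-CsPersistentChatRoom -PersistentChatPoolFqdn "Pchat.exchangequery.com").Count
```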

To Check the rooms:

Get-CsPersistentChatRoom | select Name

To Check the Disabled rooms:

Get-CsPersistentChatRoom -Disabled:$True

After confirming that the disabled rooms will not be in use, we can remove them before migrating, since there is no point in moving these obsolete rooms to the new pool.

Get-CsPersistentChatRoom -Disabled:$True | Remove-CsPersistentChatRoom

Export the Old Pool Persistent Chat Configuration by Running the below command:

Export-CsPersistentChatData -DBInstance "SQLCL01.Exchangequery.com\SFBDB" -FileName "c:\temp\PChatBckup.zip"

The exported configuration data is XML and will look as below.


Import Persistent Chat data that we exported to new Skype for Business Pool:

Import-CsPersistentChatData -DBInstance "SQLCL02.Exchangequery.com\SkypeDB" -FileName "c:\temp\PChatBckup.zip"

We will get a confirmation as below before the import, followed by a progress bar.



Once the command completes, we can see the old PC configuration data imported into the mgc database in SQL.

After the import, the chat rooms appear duplicated, since a new instance was created in the new pool.

We can then delete the rooms on the old pool by running the below command:

Get-CsPersistentChatRoom -PersistentChatPoolFqdn "Pchat.exchangequery.com" | Remove-CsPersistentChatRoom

Then remove the Persistent Chat categories:

Get-CsPersistentChatCategory -PersistentChatPoolFqdn "Pchat.exchangequery.com" | Remove-CsPersistentChatCategory

After this is done, go ahead and log in with a Persistent Chat-enabled user and check the results.

In my case, the connections were still going to the old Persistent Chat pool.

I guess this was because the old Persistent Chat pool was listed first among the Persistent Chat pools in Topology Builder.
So I went ahead and removed the old Persistent Chat pool from the topology, published the topology, and re-ran setup on the new PChat nodes.

After this, new connections were going to the new Persistent Chat pool.
All the Persistent Chat rooms I was a member of were present as-is; the only thing was that the rooms I was following disappeared from my list.
That was a small thing, and I was able to search for those rooms and follow them again.


Active Manager operation failed attempt to copy the last logs from the sourceserver failed

During a DR failover, when the main site is completely unavailable, we need to carry out a few steps to activate Exchange services, depending on the type of DR setup we have.

Sequential steps need to be carried out: restoring the DAG, activating the databases on the DR site, and pointing the Exchange DNS records to the DR site IPs.

Failover scenarios vary according to the namespace model and the number of sites in Exchange:

Unbound namespace – a single namespace for all Exchange URLs for both the main and DR sites, which is the recommended approach.
Bound namespace – more complicated and not recommended, since we need separate URLs for the main and DR sites.

If we have a three-site setup with the FSW in the third site, or the FSW placed in Azure as a third site, then no manual activation of the database copies on the DR site is required; only the Exchange DNS changes on the DR site are needed.

For detailed information on the DAG DR setup, refer to my previous blog post:

https://exchangequery.com/2016/05/04/dag-in-exchange-2016-and-windows-server-2012-r2/

From Exchange 2013, Dynamic Quorum in the failover cluster adjusts automatically and recalculates the active nodes during a sequential shutdown in a two-site setup.

During a DR activation, when the main site is completely unavailable, we might come across the below error for some databases after rebuilding the DAG cluster on the DR site.

In my test case it was the following:

Stop-DatabaseAvailabilityGroup for the main site completed successfully with no errors.
Restore-DatabaseAvailabilityGroup completed successfully except for some warnings on one mailbox node in the DR site.
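For reference, the two commands were of the following form; the DAG name and Active Directory site names are placeholders for your own environment:

```powershell
# Mark the unreachable main-site members as stopped (run from the DR site)
Stop-DatabaseAvailabilityGroup -Identity DAG01 -ActiveDirectorySite MainSite -ConfigurationOnly

# Shrink the cluster quorum to the DR members and bring the DAG up in the DR site
Restore-DatabaseAvailabilityGroup -Identity DAG01 -ActiveDirectorySite DRSite
```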

On the server with the warning, I noticed that all the databases were in a Failed state. I tried to mount them and got the below error:

An Active Manager operation failed. Error: The database action failed. Error: The database was not mounted because it experienced data loss as a result of a switchover or failover, and the attempt to copy the last logs from the source server failed. Please check the event log for more detailed information. Specific error message: Attempt to copy remaining log files failed for database DBNAME. Error: Microsoft.Exchange.Cluster.Replay.AcllUnboundedDatalossDetectedException

Looking into the above message, it is interesting to see that the DR site databases are still trying to reach the main site copies for the last logs, even though the DAG cluster has been activated on the DR site and the PAM is in the DR site.

The below command can be used in case the DR copies are not mounted after activating the DR site DAG.

Move-ActiveMailboxDatabase “DBNAME” -ActivateOnServer DRMailboxServer -SkipHealthChecks -SkipActiveCopyChecks -SkipClientExperienceChecks -SkipLagChecks -MountDialOverride:besteffort

So we need to be very clear that this error will not occur unless there was some data loss for a database during the DAG DR activation.

Usually, when we run Restore-DatabaseAvailabilityGroup on the DR site, all the databases should mount on the DR site.

The above command should be run only if the database copies are in a Failed state after DR site activation and they are not mounting.


Troubleshooting endpoint URLs for Exchange & Skype for Business

This article outlines the client troubleshooting endpoints that can be used for Exchange and Skype for Business services.

For Exchange

To verify the Exchange Autodiscover service endpoint:
https://yourdomain.com/autodiscover/autodiscover.xml

Usage: the main purpose of Autodiscover is to discover and establish the initial connections to mailboxes. It also keeps Outlook updated on frequent mailbox changes and updates the offline address book.

To verify the Exchange Web Services endpoint:
https://yourdomain.com/ews/exchange.asmx

Usage: EWS applications communicate with the Exchange server via SOAP; it is mainly used by developers to connect their clients and add email connectivity to their applications.

To verify the Offline Address Book service endpoint:
https://yourdomain.com/oab/oab.xml

Usage: an offline address book provides a local copy of the address list to Microsoft Outlook, which can be accessed when Outlook is in a disconnected state.

To verify the ActiveSync service endpoint:
https://yourdomain.com/Microsoft-Server-ActiveSync

Usage: using the ActiveSync protocol, users can configure and sync their email on their mobile devices.

To verify the webmail service endpoint:
https://yourdomain.com/owa/owa.xml

Usage: Outlook Web App is a browser-based email client used for accessing email via a browser.

To verify the Exchange Control Panel service endpoint:
https://yourdomain.com/ecp/ecp.xml

Usage: the Exchange Control Panel is a web application that runs on the Client Access services, providing administration for the Exchange organization.

To verify the MAPI service endpoint:
https://yourdomain.com/mapi/mapi.xml

Usage: MAPI over HTTP is a new Outlook connection protocol introduced in Exchange 2013 SP1; it provides faster connections purely over TCP and eliminates legacy RPC.

To verify the RPC service endpoint:
https://yourdomain.com/rpc/rpc.xml

Usage: not used by newer versions of Exchange and close to retirement for client connections.

All the above URLs will be listening on the Exchange 2016 Mailbox server virtual directories.
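A quick way to probe all the endpoints at once is a loop like the one below; yourdomain.com is a placeholder, and a 401/403 response is still a good sign, since it proves the virtual directory is published and answering:

```powershell
$urls = @(
    "https://yourdomain.com/autodiscover/autodiscover.xml",
    "https://yourdomain.com/ews/exchange.asmx",
    "https://yourdomain.com/oab/oab.xml",
    "https://yourdomain.com/Microsoft-Server-ActiveSync"
)
foreach ($url in $urls) {
    try   { $code = (Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 10).StatusCode }
    catch { $code = $_.Exception.Response.StatusCode.value__ }   # 401/403 = published, auth required
    "{0} -> {1}" -f $url, $code
}
```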


For Skype for Business:

For the chat services provided through Skype for Business, the main URL endpoints are chat, meet, dial-in, conferencing, audio/video and lyncdiscover. We usually check these URLs in troubleshooting scenarios.

Below are additional endpoints which can be checked and kept for reference.

To test the conferencing URL:
https://meet.domain.com/meet/

Usage: meet is the base URL for all conferences in the organization.

To verify the dial-in URL:
https://dialin.domain.com/dialin/

Usage: dial-in provides access to the Dial-in Conferencing Settings web page.

To verify the Lync Control Panel:
https://sip.internaldomain.com/cscp

Usage: should only be added and accessed from the intranet; there is no need to publish it on the internet.

To verify the Autodiscover web service and retrieve the redirection information for a client:

https://poolexternaluri/autodiscover/autodiscover.svc/root
https://poolexternaluri/reach/sip.svc

Usage: these are the required service entry points for the Autodiscover service. The first returns the Lync Server web service Autodiscover response to clients, and the second is the URL for the Authentication Broker (Reach) web service.

To verify mobile client connectivity:
https://poolexternaluri/webticket/webticketservice.svc

Usage: handles the default authentication for mobile client connectivity.
It is a SOAP web service that authenticates a user via NTLM or Kerberos (if configured) and returns a SAML assertion (ticket) as part of the SOAP message response.

To check that the Mobility Service is working, use the following URL:
https://poolexternaluri/mcx/mcxservice.svc

This is the URL required for the Skype Mobility Services.

https://poolexternaluri/supportconferenceconsole

Usage: listening endpoint for the Support Conference Console (the default port is 6007). This console is used by Office 365 support personnel to troubleshoot problems with conferences and online meetings.
To verify Persistent Chat:

https://PCpoolexternaluri/persistentchat/rm/

Usage: there is a virtual directory for Persistent Chat on both the external and internal web sites, so for external testing access the URL via the published Persistent Chat FQDN.

To verify the hybrid config service:
https://poolexternaluri/hybridconfig/hybridconfigservice.svc

Usage: likely used for hybrid connectivity between Skype for Business Server and Skype for Business Online.

To check address book issues:
https://poolexternaluri/abs/handler

Usage: GAL files are downloaded from the Front End server's IIS.

To check distribution group expansion:
https://poolexternaluri/groupexpansion/service.svc

Usage: configured for Windows authentication by default.

https://poolexternaluri/certprov/certprovisioningservice.svc

Usage: this is the Certificate Provisioning web service. Its full URL can be specified explicitly instead of being derived from the web server setting when that calculation would not yield the correct URL.

This is needed when the Lync Server web services are not collocated with either the main Director or the Front End pool in a site, for example in a load balancer configuration where web traffic is balanced differently from SIP traffic, resulting in different FQDNs for the SIP and web servers.

All the above SFB URLs will be listening on the Front End server.


On accessing these URLs, if we are not prompted for a username and password, troubleshooting steps need to be performed according to the message we receive in order to identify the issue. In most cases the URLs might not be published correctly for access from remote endpoints, or there might be an issue with authentication or with the virtual directory, server or services themselves.


Event Viewer Warning 1040 – ActiveSync Direct Push technology

We might notice this warning in the Event Viewer on Exchange servers for the source MSExchange ActiveSync:


Event Type: Warning
Event Source: MSExchange ActiveSync
Event Category: Requests
Event ID: 1040
Date: 3/10/2016
Time: 12:54:22 PM
The average of the most recent [513] heartbeat intervals used by clients is less than or equal to [540].
Make sure that your firewall configuration is set to work correctly with Exchange ActiveSync and Direct Push technology. Specifically, make sure that your firewall is configured so that requests to Exchange ActiveSync do not expire before they have the opportunity to be processed.

This warning does not indicate an issue on the Exchange servers; it indicates a mismatched timeout value on the network load balancer (or firewall) that serves the clients.

ActiveSync uses Direct Push technology to retrieve email from the server. To initiate Direct Push communication between the ActiveSync client and the Exchange server, heartbeat interval values are used.

Direct Push involves two parts: a request from the ActiveSync mobile client and a response from the Exchange server. When there are changes in the user's mailbox, they are transmitted over a persistent HTTP or HTTPS connection through Direct Push.

Below is the ActiveSync request/response process:

1) The client issues an HTTP request to the Exchange server asking for any changes in the user's mailbox within a specified time; basically it queries the Inbox, Contacts, Calendar, etc.

2) After Exchange receives this request, it watches the mailbox folders for changes until the specified time limit expires. When the timeout period is reached, it issues an HTTP 200 OK response to the client with any updates about the folders.

3) The client then receives the response from Exchange, which can be any of the following:

HTTP 200 OK, no change in folders – the client will reissue the ping request after the next heartbeat interval.
HTTP 200 OK, change in folders – the client gets the updates for each folder that changed; after the sync is done it reissues the request at the next interval.
No response – the client lowers the time interval in the ping request and reissues the request at the minimum heartbeat interval to get the update.

So basically, these heartbeat interval values need to be compatible between the network load balancer/firewall and the Exchange servers.

Let's have a look at the heartbeat interval values on the Exchange servers.

Where are these values stored in Exchange 2016?

These values can be seen in the web.config file at the below location in the installation directory:

C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\Sync

There are four values, as below:

MinHeartBeatInterval – the minimum number of seconds that the client waits between issuing heartbeat commands to the server. The default value in Exchange 2016 is 60 seconds. If this value is too small, the client sends HTTP requests very often and drains the device's battery.

MaxHeartBeatInterval – the maximum number of seconds that a client waits between issuing heartbeat commands. The default value is 59 minutes on Exchange 2016.

HeartBeatSampleSize – a bucket in which the server collects the most recent heartbeat intervals received from ActiveSync clients. The server uses it to see how clients are sending their ActiveSync HTTP requests and to ensure they match the specified values. By default the server waits for 200 heartbeat intervals.

HeartBeatAlertThreshold – if the collected sample does not meet the configured minimum or maximum heartbeat values within this time interval, an event is logged in the Application log. The default value is 9 minutes.
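The four values can also be read straight out of web.config with PowerShell. This sketch assumes the settings live under appSettings with "HeartBeat" in the key name, which may differ between builds:

```powershell
# Path from above; adjust to your installation directory
$configPath = "C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\Sync\web.config"
$config = [xml](Get-Content $configPath -Raw)

# List any appSettings keys that look like heartbeat settings
$config.configuration.appSettings.add |
    Where-Object { $_.key -like "*HeartBeat*" } |
    Format-Table key, value -AutoSize
```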

If the HTTP(S) connection timeout on the firewall is not configured to be longer than 59 minutes, and its value is less than the values on the Exchange servers, then once an ActiveSync HTTP request times out on the firewall, the ActiveSync mobile client will send another HTTP request, which may cause connection overload.
To flag this, the Exchange server triggers an alert and logs the event above.

A short timeout value makes the mobile device initiate new HTTP requests more frequently, which also drains the device's battery very quickly.

The best practice is to increase the firewall timeout value for HTTP requests to the Exchange ActiveSync virtual directory, to give users a better experience. The timeout value on the firewall should be equal to or greater than the values specified on the Exchange 2016 servers.


Load balancing Edge services over the internet for Skype for Business

For users to connect externally from outside the organization's network, we need to publish the Skype for Business services. In this article we will look at the best ways to publish Skype for Business Edge servers on the internet.
By doing this, users can participate from external networks in IM, A/V and web conferencing sessions.

There is a lot of confusion about the architecture of load balancing Skype for Business Edge servers, and it should not be treated as an easy deployment. If the SFB deployment is extended to communicate with federated partners, remote users and public instant messaging users, then proper planning of the Edge server deployment is required.

If we have two or more Edge servers deployed in the DMZ, they need to be load balanced to distribute the load equally across all the Edge interfaces.
In general, Microsoft recommends DNS load balancing for Edge high availability.

Load balancing distributes the traffic among the servers in a pool so that the services are provided without any delay.

Below are three types of load balancing solutions we can use, based on our requirements:

DNS load balancing using NAT:

This is the recommended approach.
We load balance each Edge service namespace on the internet with multiple A records, NAT them through the firewall, and then route to the Edge servers.
The public IP addresses are bound to each service separately and routed to individual private IPs assigned to the external NIC.
Three private IP addresses are assigned to this network adapter, for example 131.107.155.10 for the Access Edge service, 131.107.155.20 for the Web Conferencing Edge service and 131.107.155.30 for the A/V Edge service. These private IPs listen behind individual public IPs NATed by the firewall.
These IPs do not participate in a hardware load balancer and are used only for NATing.
They sit behind a port-forwarding firewall, which is good.

Advantages:

1) We assign separate public IPs for each service and use standard ports, so remote users will not have any issues connecting from behind their own firewalls.
2) It is very good for troubleshooting: analyzing a particular service's traffic statistics and logging, and identifying issues with logs and packet captures, is easy.

Disadvantages:

1) The Edge services rely on multiple A records with the same name but different IP addresses, so the configuration is not service-aware; failure detection and routing to an available server is not possible at the DNS level.

Still, I would go with this option, considering that the failure impact is minimal in a well-planned deployment on a strong network, and it is very helpful and easy to work with in troubleshooting scenarios.

Below is an example of DNS load balancing using NAT.

Let's assume I need to load balance two Edge servers using DNS load balancing with NAT, as per the environment below.


Below is the DNS configuration


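The same DNS records can also be created from PowerShell on the DNS server hosting the public zone. The zone name, host name and the second IP below are illustrative (the first IP follows the example addresses used earlier):

```powershell
# Two A records per Edge service name = DNS load balancing across two Edge servers
Add-DnsServerResourceRecordA -ZoneName "exchangequery.com" -Name "sip" -IPv4Address "131.107.155.10"
Add-DnsServerResourceRecordA -ZoneName "exchangequery.com" -Name "sip" -IPv4Address "131.107.156.10"
```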
DNS load balancing using public IP addresses:

Here we assign the public IPs directly to the external NIC of each Edge server (one of its two NICs). Either three IP addresses are assigned to this network adapter, for example 131.107.155.10 for the Access Edge service, 131.107.155.20 for the Web Conferencing Edge service and 131.107.155.30 for the A/V Edge service, or a single public IP is used for all three services, differentiated by TCP/UDP port value.
The Access Edge service IP address is primary on the NIC, with the default gateway set to the external firewall.
The Web Conferencing Edge service and A/V Edge service IP addresses are added as additional addresses in the Advanced section of the Internet Protocol Version 4 (TCP/IPv4) properties.

Disadvantages:

Using a single public IP address for all three Edge service interfaces is not recommended.
Though it does save IP addresses, it requires a different port number for each service:

Access Edge – 5061/TCP
Web Conferencing – 444/TCP
A/V Edge – 443/TCP

This can cause issues for remote users connecting from networks where the firewall does not allow traffic on TCP port 5061.
Having three unique IP addresses instead makes packet filtering, and therefore identifying and resolving issues, much easier.

Hardware load balancing using public IP addresses:

Load balancing here is only needed for old OCS clients and XMPP, but it works fine as long as both Edge servers are up. From Lync 2010 onwards, Microsoft does not recommend hardware load balancing the Edge services from the internet.

We create a virtual IP address for each service the Edge provides (Access, Web Conferencing, A/V) on a load balancer such as F5 or KEMP.
Behind these virtual IPs we add the Edge servers associated with the services.
The main benefit is that failure detection is much quicker, since the load balancer detects failures on the server side.

Disadvantages:

1) The A/V service will not see the client's true IP (for example, in a peer-to-peer audio call from an external user to an internal user).
2) There are some challenges in configuring outbound client connections going from the Edge to the internet (routing and SNAT).


Recertify expired Notes ID

Recently, a few Lotus Notes users were getting the below message when logging in to their Notes accounts:

One or more certificates in your Notes ID have expired.
Contact your Domino administrator.


From this error, we might think it has something to do with a certificate.
It actually occurs because an expiration date is set on each user ID on the Domino server, and these messages appear after expiration.
Usually the value is set to a 10-year period, or whatever the Domino administrator configured during deployment.
This saves administrators from having to recertify the IDs frequently.

So basically, what we need to do when we come across this issue is extend the expiration date on the users' Notes IDs.
In order to extend the expiration time, we need to recertify those IDs.

The below steps can be performed to recertify a Notes ID.

Launch the Domino Administrator :

Navigate to People and Groups


Navigate to Tools, select People, and select Recertify


The next step prompts for the certifier process.

Here we have two options:

1) Supply the certifier ID and password
2) Use the CA process

It's better to use the CA process, which lets us specify a certifier of our own without needing access to the certifier ID file or its password.

After choosing the above option, we get the below screen with the new certificate expiration date. There is an option to inspect each entry before submitting a request, which is good to enable.


After successful processing, we get the below message showing the request statistics.


Click OK on this dialog box and continue. After the replication interval, the user can log in and will no longer get the certificate expiration message.


Quick Bites – Known issues with Security Update for Exchange 2016 CU2 (KB3184736)

It's been more than a week since Microsoft released the security update for Exchange 2016 CU2.

The Security update can be downloaded from the location https://support.microsoft.com/en-us/kb/3184736

Yesterday we installed KB3184736 on Exchange Server 2016 CU2 in production.

We ran into the issues below. Just posting them here so that people can look out for these after the update and rectify them if they experience the same:

1) The Microsoft Exchange Search Host Controller service was disabled – so I started the service and ran Update-MailboxDatabaseCopy -CatalogOnly to reseed the indexes, which resolved it.

2) Got an ASP.NET runtime error for ECP – strangely, of all the installed servers only 3 servers' ECP were affected and the rest were fine.
On comparing the web.config files we found that the ECP BinSearchFolders value showed %ExchangeInstallDir% instead of C:\Program Files\Microsoft\Exchange Server\V15\.
Changing the path to C:\Program Files\Microsoft\Exchange Server\V15\ solved the issue.
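For reference, the relevant appSettings entry in the ECP web.config should look roughly like this after the fix (paths assume a default installation drive; the exact set of folders in the value can vary by CU):

```xml
<appSettings>
  <!-- BinSearchFolders must contain the resolved install path,
       not the literal %ExchangeInstallDir% token -->
  <add key="BinSearchFolders"
       value="C:\Program Files\Microsoft\Exchange Server\V15\bin;C:\Program Files\Microsoft\Exchange Server\V15\bin\CmdletExtensionAgents;C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\Owa\bin" />
</appSettings>
```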

3) A few OWA users were unable to log in to OWA and got a blank white screen with the 'Bad Request' message shown below.

ev1

We ran the UpdateCas.ps1 script (located at C:\Program Files\Microsoft\Exchange Server\v15\bin\UpdateCas.ps1) on all mailbox servers, after which the issue was resolved.

ev2

 

Configure DKIM and DMARC in an on-premises Exchange Environment

Small history on DKIM:

Cisco’s Identified Internet Mail (IIM) and Yahoo’s DomainKeys were merged in 2004 to form DomainKeys Identified Mail (DKIM), an IETF standard described in RFC 6376.

IIM and DomainKeys are no longer supported by any RFC standard and are deprecated.
The two systems were combined into DKIM, which is widely used today.

By using SPF we are letting everyone know which IPs are authorized to send email for our domain.
However, some argue that SPF alone is not sufficient: an authorized server on the SPF list could be compromised and used to send spoofed messages.
DKIM is a process through which the recipient domain can validate that a message originated from the actual sending domain and was not spoofed.

How DKIM Works?

DKIM involves two processes: signing and verifying. Signing is done by the sender that has the feature enabled, typically by a module in the Mail Transfer Agent (MTA).
By default, Exchange Server cannot sign emails with DKIM.
We need an MTA agent on the Exchange server to perform this job, or – the better option for an on-premises setup – enable the feature for all outgoing email on an SMTP gateway.
Almost every SMTP gateway on the market has an option to enable DKIM and DMARC.
When the sending organization has the feature enabled for outgoing email, it inserts a DKIM-Signature header containing hashes of selected header fields and of the message body.

Verification is done by the receiving domain if DKIM is configured there. If no DKIM is configured on the receiver, no DKIM verification is performed and mail is routed to the recipient normally.
The receiving SMTP server uses the domain name and the selector from the signature to perform a DNS lookup for the public key.

If we suspect the private key has been compromised, we can rotate the keys from the SMTP gateway or whichever application performs the signing.
In that case we need to change the selector name in DNS accordingly so that DKIM reflects the new selector carrying the new private key.

This scenario is rare, but if anyone obtains a copy of the private key, they will be able to sign messages on our behalf.

The private key stays on the domain owner's MTA agent that performs the signing, and the public key is published as a DNS TXT record.
This published DNS record allows anyone to verify that the signature (hash) present in a received email is valid and that the contents of the email have not been tampered with.
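As a rough illustration of the hashing step (a sketch, not tied to any particular gateway; the "simple" body canonicalization of RFC 6376 is only approximated here), the bh= value in a DKIM-Signature header is a base64-encoded SHA-256 digest of the canonicalized message body:

```python
import base64
import hashlib

def simple_body_canonicalize(body: str) -> bytes:
    # RFC 6376 "simple" body canonicalization (approximated): normalize
    # line endings to CRLF and reduce trailing empty lines to one CRLF.
    lines = body.replace("\r\n", "\n").split("\n")
    while lines and lines[-1] == "":
        lines.pop()
    return ("\r\n".join(lines) + "\r\n").encode("utf-8")

def body_hash(body: str) -> str:
    # This base64 digest is what ends up in the bh= tag of the
    # DKIM-Signature header (for a=rsa-sha256 signatures).
    digest = hashlib.sha256(simple_body_canonicalize(body)).digest()
    return base64.b64encode(digest).decode("ascii")

# Trailing empty lines are ignored, so these two bodies hash identically:
print(body_hash("Hello world\n\n\n") == body_hash("Hello world\n"))  # → True
```

The signer then signs this digest (together with the selected header fields) with the private key, and the verifier recomputes the same digest on receipt.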

 

Below are the core components with which DKIM functions:

Selector (s) – identifies the key pair used for signing; the private key usually lives on the SMTP server.
We can have multiple selectors if we have multiple SMTP servers,
or we can use the same key pair on all SMTP servers, which is simplest because then we don't need to publish multiple DNS records for multiple selectors.

_domainkey – a fixed part of the protocol itself that can't be altered.

d (signing domain) – the domain to be verified, so it should be our domain name.

p (public-key data) – the public key of our generated key pair; it must be in base64 encoding.

Once the DKIM record name is created, we add a TXT record in DNS for the newly created subdomain containing the public key generated on the server responsible for DKIM (the selector).
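For example, with a hypothetical selector named selector1 and a truncated key, the published record looks roughly like this in a DNS zone file:

```
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."
```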

Below are the additional components which can be added if required:

v – the version.
a – the signing algorithm.
c – the canonicalization algorithm(s) for header and body.
q – the default query method.
l – the length of the canonicalized part of the body that has been signed.
t – the signature timestamp.
x – its expiry time.
h – the list of signed header fields, repeated for fields that occur multiple times.
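Put together, these tags appear in the DKIM-Signature header that the signing MTA adds to each outgoing message. A sketch with hypothetical values (b= truncated):

```
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=example.com;
        s=selector1; t=1612345678; h=from:to:subject:date;
        bh=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=; b=...
```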

Below are the overall steps:

1) Create your signing key on the agent or server responsible for this job in your environment.
2) Publish your DKIM DNS record for your domain.
3) Enable DKIM signing for all outbound emails.

Below is the standard DKIM configuration through an SMTP server MTA agent:

DKIMimage

Benefits of DKIM:

1) DKIM adds positive reputation points with anti-spam systems, improving the SCL rating of our internet email.
2) Spoofed emails sent on behalf of our domain are far less likely to be accepted when we have SPF and DKIM together.

If we have multiple SMTP gateways, do we need multiple selectors?

No – we can use the same key and domain profile on all SMTP gateways. We create a domain profile and signing key on the first gateway and publish the TXT record; it is better to have only one TXT record per domain. The keys generated on one SMTP gateway can simply be imported onto all the other gateways. This way we don't need to create multiple TXT entries for multiple selectors.

After this, export the public key and add the TXT entries on the public DNS server.
So a DKIM-enabled organization has all sent emails stamped with a signature generated with the private key by the DKIM MTA agent or SMTP gateway.
The recipient domain performs DKIM validation, if it supports it, by querying the DKIM TXT records.
The recipient domain will consider the message valid only when the signature in the email verifies against the published public key – basically a key pair.

DMARC: the Domain-based Message Authentication, Reporting and Conformance (DMARC) standard

DMARC is a mechanism for domains to get reports on DKIM and SPF results for our domain, if we have them configured. It also tells receivers what to do if SPF or DKIM fails for our domain.

A DMARC policy gives the message receiver clear instructions to follow if an email does not pass SPF or DKIM authentication – for instance, reject it or junk it – which we can configure according to our requirements.
Receivers then send reports back to the domain owner about messages that pass and/or fail DMARC evaluation.

Through DMARC, we can receive daily forensic and aggregate reports about mail sent on behalf of our domain.

We need to designate the email account(s) where we want to receive these reports; all the reports will be sent to that address.

DMARC itself is published as a TXT record (at _dmarc.&lt;our domain&gt;) in DNS, built from DMARC tags; receivers query it when verifying messages that carry our SPF and DKIM identifiers.
DMARC tags are the language of the DMARC standard.

Below are the important required tags for DMARC:

v: Version – identifies the TXT record as a DMARC record; it is the static value v=DMARC1.

p: Requested mail receiver policy.
p can be any of these three values:

p=none: No specific action is taken on emails that fail DMARC validation.
p=quarantine: We request the receiving end to place the email in the spam/junk folder and mark it as suspicious.
p=reject: The domain owner asks receivers to strictly reject all emails that fail DMARC validation.
This is the recommended setting and provides the highest level of protection.
rua: Indicates where aggregate DMARC reports should be sent to.
Senders designate the destination address in the following format: rua=mailto:domain@example.com.

fo: Dictates what type of authentication and/or alignment failures are reported back to the domain owner.
pct: Specifies the percentage of outgoing messages to which the DMARC policy is applied.
This tag is optional; it can be used to test the impact of the DMARC policy initially and later be removed or set to 100.

Below is an example of the DMARC record of how it should be created with the above required tags:

v=DMARC1; p=reject; fo=1; rua=mailto:domain@example.com; rf=afrf; pct=100

The above TXT record follows the DMARC standard.
We also need to specify the email address where the reports should be sent,
and make sure ISPs deliver the report messages to that address rather than blocking them as spam or rejecting them for any reason.
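As a quick sanity check, such a record can be parsed with a few lines of Python (a sketch; the record string below reuses the example record above):

```python
def parse_dmarc(record: str) -> dict:
    # DMARC records are semicolon-separated tag=value pairs, published
    # as a TXT record at _dmarc.<domain>.
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if part:
            tag, _, value = part.partition("=")
            tags[tag.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=reject; fo=1; rua=mailto:domain@example.com; rf=afrf; pct=100"
policy = parse_dmarc(record)
print(policy["p"], policy["rua"])  # → reject mailto:domain@example.com
```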

Important points to be considered while enabling DKIM:

1) DKIM verification is performed automatically for all messages sent over IPv6 if the recipient domain has a DKIM verifier enabled.
2) DMARC is configurable on-premises only if your SMTP gateway has this feature.
3) DKIM computes cryptographic hashes for every outbound message sent externally. This adds protocol overhead on outgoing email, and more memory and system resources are consumed to perform the operation.
4) DKIM is an IETF standard and is free of cost – there is nothing to pay your ISP, because all we need are the DKIM public-key TXT entries.
5) If the receiver domain does not have a DKIM verifier configured, emails sent with DKIM enabled are received normally and there are no issues.

Thanks & Regards
Sathish Veerapandian 
MVP – Office Servers & Services 

Configure Throttling Policy In Exchange Server 2016/13/10

The concept of a throttling policy was first introduced in Exchange 2007, by which an admin can impose policies that limit the number of remote procedure calls (RPCs) per second a user application can send.
Throttling policies are meant to enhance Exchange performance in the organization. They keep track of resource consumption by end users and also impose bandwidth limits. Continue reading