Category Archives: Monitoring Tools

Microsoft Teams – Utilize Azure Sentinel to facilitate SOC and monitor Teams critical events

A few days ago Microsoft announced a new release that gives us the opportunity to integrate MS Teams activities recorded in the audit logs into Azure Sentinel. Enabling this feature benefits organizations where a separate SOC team monitors and analyzes the security posture as an ongoing operational procedure.

We still have Microsoft Cloud App Security, which helps in creating an alerting mechanism for MS Teams related activities. But with Log Analytics and Azure Sentinel we can do a lot more than what can be done from Cloud App Security. We can further fine-tune the alerting and create workbooks and dashboards for Microsoft Teams related activities, which will be useful for Teams monitoring.

To start with this new feature, we need to enable the option to ingest Teams data into Azure Sentinel workspaces. This article can be followed to get started with connecting Office 365 to Azure Sentinel, the Microsoft cloud-native SIEM.

Navigate to Azure Sentinel Work Spaces – Select Data Connectors – Choose Office 365

Here we can see the new option for sending Teams audit logs to the Azure Sentinel workspace.

Once it is done, after a while we can see that the workspace has received the data type OfficeActivity (Teams).

Live Query Teams Monitoring:

When we navigate into the workspace we have the opportunity to fine-tune and view the events written to the Teams audit logs in a more refined way.

For instance, Team creation events alone can be filtered from the workspace. This can even be used for filtering on a specific person and creating an alert for them.

This helps the SOC team with live reactive analysis when any security incidents are reported for Teams related activities.

OfficeActivity
| where OfficeWorkload == "MicrosoftTeams"
| where Operation has "TeamCreated"
| where UserId has "sathish@exchangequery.com"
| sort by TimeGenerated
| project UserId, AddonName, TimeGenerated, RecordType, Operation, UserType, OfficeWorkload

Create Alerting Mechanism: Azure Monitor or Azure Sentinel

As a real example, we can create alerts and notify the SOC team when a bot has been added to a Team.

OfficeActivity
| where OfficeWorkload == "MicrosoftTeams"
| where Operation has "BotAddedToTeam"
| sort by TimeGenerated
| project UserId, AddonName, TimeGenerated, RecordType, Operation, UserType, OfficeWorkload

To create the alert, once the query has been written we have the New alert rule option, which gives us two methods of creating the alerting mechanism: Create Azure Monitor alert or Create Azure Sentinel alert.

To experience the behavior, the option Create Azure Monitor alert was selected, using the same query. The alert logic and the time period are set here for the demo and can be defined based on the period and frequency that best suit the monitoring.

An action group can be selected to send this notification alert to an email address.

Other notification types can also be selected; for example, ITSM can be chosen to trigger an incident for the same events.

In our case email was selected, and after a few minutes it was tested by adding a bot; the alert notification arrived at the email address.

Further information about the bots that have been added can also be seen.

Create Workbooks and Dashboards:

Here we have the possibility to create workbooks and dashboards for MS Teams related activities. There is one template present by default for Office 365, and there is a Teams workload item present here which will help in creating a workbook for Teams.

The default workbook provides decent information for monitoring Teams related activity.
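As a minimal sketch of what a custom workbook tile could run, the query below charts Teams activity volume per operation over time. It reuses the OfficeActivity fields shown earlier; the time range and bin size are illustrative assumptions and should be adjusted to your workspace.

```kusto
OfficeActivity
| where OfficeWorkload == "MicrosoftTeams"
| where TimeGenerated > ago(7d)
// count events per operation type in hourly buckets
| summarize Count = count() by Operation, bin(TimeGenerated, 1h)
| render timechart
```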

This is a good start for creating a dedicated workbook for Microsoft Teams and pinning it as a separate dashboard for Microsoft Teams related activities. I have also written a post on creating Azure Monitor workbooks, which can be referred to for creating dashboards for Teams activities.

Microsoft Teams logs in Azure Sentinel is really a welcome native cloud integration feature from which a lot of organizations can definitely benefit in terms of actively monitoring Teams activities, with no additional cost of investing in 3rd party SIEM integrations.

Regards

Sathish Veerapandian

Microsoft Teams – Utilize Power BI to get more details on the Call Quality Dashboards

With Microsoft Power BI we can gather more details from the Call Quality Dashboard. As of now, Microsoft has released 7 Power BI desktop templates to accumulate more details from the Microsoft Teams Call Quality Dashboard (CQD).

Power BI being a very capable platform for data gathering and analysis, these new templates are outstanding in terms of analyzing Microsoft Teams data.

We will go through an overview of the reports and their configuration in this post.

Firstly, the Power BI query templates for Microsoft Teams need to be downloaded.

We have the below 7 template reports:

  1. CQD Helpdesk Report.pbit
  2. CQD Location Enhanced Report.pbit
  3. CQD Mobile Device Report.pbit
  4. CQD PSTN Direct Routing Report.pbit
  5. CQD Summary Report.pbit
  6. CQD Teams Utilization Report.pbit
  7. CQD User Feedback (Rate My Call) Report.pbit

These are customizable templates which can be used to analyze data. They are PBIT files, which can be used from Power BI Desktop with the data source configured. If we need to open them directly from the Power BI portal, they need to be renamed to .pbix. If we are importing them from Power BI Desktop, the connector file MicrosoftCallQuality.pqx needs to be placed in the [Documents]\Power BI Desktop\Custom Connectors folder.

From Desktop:

The initial requirement is that the Power BI Desktop version must be installed and the data gateway already configured. The steps from Microsoft can be followed from here.

Place the .pqx file in the below location. This location is created automatically once the desktop version is installed.

Set the data source:

Option 1: Use the Microsoft Call Quality (Beta)

In order to set the data source, open Power BI Desktop – select Get Data – choose Microsoft Call Quality (Beta).

Once it has been connected we see the below disclaimer message, since the connector is in beta at the time of writing this blog.

Next we have the below option, which has all the details needed to build the query.

The moment we click on Load we are presented with the below screen. Here we need to select the DirectQuery option, since we are getting the data directly from the Call Quality Dashboard.

Once connected we have all the options to build our own custom reports by selecting the required fields, visualizations and filters from the right. This option is very beneficial when we have our office network details uploaded to the Call Quality Dashboard, allowing detailed analysis and building our own custom dashboards. Here we have selected a few fields as an example and can see they are populated on the dashboard.

Option 2: Import the Teams PowerBI Templates report and publish them from the desktop.

The second option is to import the Power BI templates and publish them from the desktop. In order to import them, navigate to File – Import – select Power BI template and import all the .pbit files. These templates have to be imported one by one.

Once imported we get all the details as per the imported template, and we have the option to customize the reports further. Click on Publish to publish the reports directly to the workspace.

Choose the destination workspace to publish to. In our case we have selected Microsoft Teams – CQD, which is the workspace created in Power BI for Teams CQD.

Once it is published, the dashboards are in the workspace and ready to share.

When clicking on Share we have the below options. Users will need a Power BI Pro license and a CQD access role to access this report.

Importing from the PowerBI Web Portal:

Importing the templates from the web portal is much easier. We need to click on Datasets – Files and select the Get option, since we need to import the downloaded files here to create the new content.

Select Files, click on Local File and choose the Power BI templates. Here we need to rename all the files to the .pbix format, since the portal will not recognize the .pbit format.

Once uploaded we can see the dashboards. The template dashboards have a lot of information, especially the user details breakdown, which is very nice. The below example is from the CQD Helpdesk Report. Here we have an option to search by user, conference or date, which is very convenient.

Further, the user activities tab gives us more reporting, as in the example below. The good thing is that we can see the device information for the endpoint.

The below example comes from the CQD Teams Utilization Report. This gives more info on how Teams is utilized by users in our organization. A few samples from the templates: the call count summary gives all the information in one view.

We get the location details as well in the overall call quality, and it gives the data for the past 180 days.

The user details are very impressive: we can see the app version and drivers, and we have filters on the right to customize the view.

The below example shows a day-by-day breakdown with further customization filters and fields to get data based on our requirements. The default report itself has a lot of the required data, which is great.

The mobile devices call quality report also has a lot of useful information with an overall summary.

We get the mobile devices call quality with rendered devices, the call quality trend and the number of conferences attended from mobile.

The desktop version is very convenient for creating customized dashboards. There are more handy reports available from these default templates which will definitely be useful; in the above examples we have gone through a few of them. These reports can be customized easily and shared with little effort, and they give a very good view with a rich data experience.

Thanks & Regards

Sathish Veerapandian

Create Azure Dashboards for workbooks created from log analytics for monitoring

In the previous post we had a look at how to group multiple Azure Log Analytics queries and display them on one screen. There are a few real challenges in displaying the queries directly from the workbook. Firstly, there is no capability to auto-refresh the live data until we reload the workbook. There is no option to fit the dashboard and customize it as per our requirements. Finally, there is no option to set the refresh rate, set the local time zone, or share the workbook with the required people with read access.

Creating dashboards is much easier, and there are multiple ways to do it. In this post we will have a look at creating one from a workbook.

In order to create a dashboard from a workbook, navigate to the Azure Log Analytics workspace – click on Workbooks – select the workbook that needs to be pinned to a dashboard.

In the below example, just for demonstration, the default agent health workbook is selected. Once selected, choose Edit and go to the pin options.

We have the below pinning options:

Pin Blade to Dashboard – pins the entire workbook.

Show pin options offers the below choices:

Pin Workbook – again pins the entire workbook, as a workbook template.

Pin All – pins all the created queries to the dashboard. This is the recommended option.

Individual Pin – can be used to choose only selected queries and pin them on the dashboard.

Once it is pinned, navigate to the Azure portal – click on Azure Dashboard, and we can see all the selected queries.

Now we need to align them by clicking Edit on the dashboard. Here we get a lot of options to add, pin, move and resize the tiles. We also have a few metrics in the tile gallery which can be added.

We have options in the tile settings. There is an option to configure the timespan and choose the time granularity as per our requirements.

Even we have an option to choose the time as per our requirement.

There is an option to name the dashboard as required.

Once customized, navigating to full screen shows the below view of the dashboard. Below is just a sample dashboard created from the Log Analytics workspace.

Furthermore, we have options to choose the refresh interval, which refreshes the data from the logs present in Log Analytics, which is in turn collected from the agents installed on the active systems.

There are also other options, such as downloading the created dashboard in JSON format. An upload option is also present, which accepts a JSON format file.

A sharing option is present where we can share this dashboard with a group of people by assigning them read-only access. Until it is shared, the dashboard is private.

After it has been shared we get the access control options.

Once we click on Manage Access we have the option to add users in role assignments. There is an option to unpublish the dashboard as well; when done, it becomes a private dashboard again.

A clone option is also present, where we can clone an existing dashboard and modify the queries behind it.

Creating Azure dashboards has made admins' lives simpler in many ways when deploying monitoring solutions for newly installed Windows, Linux, network devices and even databases through Azure Log Analytics.

Regards

Sathish Veerapandian

Visualize Microsoft Teams Room Systems health components through Azure Monitor Workbooks

In the previous post we looked at how to configure Azure Monitor alerts for critical events that occur on Microsoft Windows devices, which can be used for monitoring the Teams Room Systems. With Azure Log Analytics we can leverage a few more components that will help us visualize the status of the systems, monitored through selected event logs and performance counters.

Creating the workbooks and their visualizations depends purely on the data that has been ingested into the corresponding Log Analytics workspace. So, as a first step, it is very important that we send all the required logs and counters needed for visualizing the metrics.

Before creating the workbooks we need to devise a strategy for how to build a skeleton for the dashboard. This is very important, since there are multiple options available and we need to understand what important data should be projected on the dashboard.

We will go through a few examples of how to get started with creating the workbooks and visualizing the data.

We need to prepare the Kusto Query Language queries required for visualizing the data. Below is a small example which will visualize the count of the perf counters by object name:

Perf
| where TimeGenerated > ago(1h)
| summarize count() by ObjectName

To render them as a pie chart we can use the below query:

Perf
| where TimeGenerated > ago(1h)
| summarize count() by ObjectName
| render piechart 

The example below will project only the affected systems that have failed Windows updates or driver updates, or any devices connected to the room systems that are in a failed state.

search *
| where Type == "Event" 
| where EventLog == "System"
| where EventLevelName == "Error"
| extend Status = parse_json(RenderedDescription).Description
| where RenderedDescription has "failed"
| project TimeGenerated, Computer , RenderedDescription 

If we need to visualize them in a graphical pie chart, we can do that as well by summarizing on a string value available from the parsed JSON. For example, it can be the computer, IP address, device name or any data present in the raw event data.

search *
| where Type == "Event"
| where EventLevelName == "Error"
| extend Status = parse_json(RenderedDescription).Description
| project TimeGenerated, Computer,RenderedDescription 
| where RenderedDescription has "failed"
| summarize Count=count() by tostring(Computer) 
| render piechart

The above are just a very few examples of rendering the data and visualizing it through Kusto Query Language. There is a lot to explore, and more data can be projected based on the logs that we are adding to Azure Log Analytics.

Now that we have some idea of how to create visualizations through Kusto Query Language, there is an option to combine multiple queries and display them as a dashboard through Azure Workbooks. Earlier this option was provided by View Designer, which has now been replaced by the enhanced Azure Workbooks.

There are multiple options which can be utilized to create dashboards with Azure Workbooks, and below we will go through a few of the options which will help us create our customized workbooks.

In order to get started with a workbook, navigate to the Log Analytics workspace – choose Workbooks.

Click on New

We get the default summary of our query from our workspace with the below pie chart view.

If we want to go with our own query, we can remove the default query and select Add. Here in Add we have multiple options, like below, of which Add Group seems to be the most interesting. With Add Group we have the ability to add multiple queries and group them in a single workbook.

At the top of this group we have an option to add text which displays the workbook name and details.

After selecting the group, we now have the option to add a query to the group.

Going into the advanced settings, we have options to display chart titles specific to this query.

In the Style tab we have some options to modify the HTML settings. By default this fits one query per row, and if we need to add three queries we need to adjust the width settings. In the below case I have set the width to 50, since I am trying to add 2 queries in a row. But it is very important to note here that 3 columns displayed as a dashboard works fine only in Azure Dashboards. If we try to view them from Azure Workbooks, 3 queries in a row do not fit, and we do not have an option to modify the HTML editor at this moment.

I have added another query, which tells us the status of the systems that have reported a heartbeat in the last 30 minutes. In the below case, since I have only one system for the demo, it shows only 1 system.
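The query behind this tile is not shown above; as a minimal sketch, one way to list systems that have reported a heartbeat in the last 30 minutes (using the standard Heartbeat table of the monitoring agent) could be:

```kusto
Heartbeat
| where TimeGenerated > ago(30m)
// one row per computer, with its most recent heartbeat time
| summarize LastHeartbeat = max(TimeGenerated) by Computer
```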

The moment we group them and display them, the view appears as below. Adding multiple queries based on our requirements makes it easier to create the dashboards.

Further to this, we have a lot of options for the visualization of the display based on the metric units. We can go through a few of them.
For instance, the below options are available to set the visualization.

We can reduce the size of the visualization and we have 5 options.

Further in the chart settings we have option to define the column and the units.

In the series we have option to change the colour and add a custom display label.

To explore further, I have chosen Graph, which is very interesting. When entering the graph settings we have the below options in the node format settings. This helps us choose which fields to display on the view of these images in the dashboard.

We have further tweaking options in the layout settings. The hive clusters look really nice, like a honeycomb, in the visualization. And there is a 'group by field' category to select, based on the available fields.

Now we have the category to choose based on the coloring type. Ideally this is very good for categorizing healthy and unhealthy systems: it will group the healthy and unhealthy systems separately and finally display them on dashboards.

This blog gives an overview of how to visualize, group and create Azure Workbooks from a Log Analytics workspace. Azure Log Analytics and Azure Workbooks make it much easier to monitor modern Windows 10 and Linux devices. This facility can easily be leveraged in a direct cloud deployment model without the need to install, configure and maintain a local monitoring solution.

Thanks & Regards

Sathish Veerapandian

Use Azure Log Analytics to notify critical events occurring on Microsoft Teams Room Systems

In the previous post we had an overview of how to create Azure Log Analytics and configure it to collect data from Windows systems. Once the information is ingested into the workspace, we have the option to create alerts and notify the responsible team based on various signal logics, which will be useful for monitoring these devices.

These alerts are scoped to each Log Analytics workspace. It is a smart idea to isolate the services, group them into individual workspaces, and create separate alerts for the critical events happening on these monitored devices.

In order to create the alerts, navigate to Alerts in the same workspace – click on New Alert Rule.

Navigate to the signal logic and choose the signal. There are multiple signals available, and we should check whether any others that suit our requirements can be added here.

Now we have the required critical signals on which the alert needs to be triggered. Usually the signal type will come from the collected events and the performance counters. In our scenario we can go with some default events from the list and also a custom log search.

Device Restart Alert:

In our example, for the default one, we chose the heartbeat signal logic from the existing list (useful when the device turns off).

Select the required devices – set the operator threshold value to 0 – aggregation 5 minutes and frequency of evaluation 1 minute. (The aggregation and evaluation frequency can be chosen based on how often we want to check the heartbeat.) In normal cases it is best not to choose a small frequency range for a large volume of devices; a smaller frequency period can be selected for critical devices alone.
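The same condition can also be expressed as a custom log search. A minimal sketch (assuming the standard Heartbeat table) that lists computers with no heartbeat in the last 5 minutes would be:

```kusto
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
// flag computers whose last heartbeat is older than 5 minutes
| where LastHeartbeat < ago(5m)
```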

Disk Threshold Alert:

Similar to the device restart, we have a disk threshold alert available by default which can be configured.

It notifies us when the disk exceeds the configured space. Select the resource configured for Teams – select the condition – select the computers and the object name, with the % Free Space counter and a threshold value of 60 percent. The percentage can be altered based on our requirements.

Then we need to select the required object, instance, counter path and source system. In our case we have selected the performance counter % Free Space. This will alert us when the disk space crosses 60 percent of overall capacity.
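The counter data behind this alert can also be inspected directly in the workspace. As a hedged sketch (the LogicalDisk object and % Free Space counter names are the standard Windows perfmon names and may differ per configuration):

```kusto
Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| where InstanceName != "_Total"  // per-disk values only
| summarize AvgFreeSpace = avg(CounterValue) by Computer, InstanceName, bin(TimeGenerated, 5m)
```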

The chosen aggregation period is 5 minutes and the evaluation frequency is 1 minute. Again, we can change the frequency of evaluation for this, perhaps to two times a day: once in the morning and once in the evening.

Custom Alerts:

Custom alerts are more intriguing. With custom alerts we can cover most of our alerting needs. We have to select the Custom log search signal for custom alerts.

Event
| where EventLog == "System"
| where EventLevelName == "Error"
| where RenderedDescription !contains "updatefailed"
| where EventData !contains "DCOM"
| project TimeGenerated, Computer, RenderedDescription

The above query is an example that reports only the error events, apart from Windows Update and DCOM alerts. We can further filter with the 'not contains' operator and create custom queries based on our requirements.

When any error message apart from the excluded events comes up on the targeted devices, we will be alerted.

Note that there are multiple action types – Email/SMS/Push/Voice, ITSM and Webhook will be the most convenient for us in this case for Skype Room Systems monitoring.

Email – We can send Email/SMS/Push/Voice notifications when the alert is triggered. This is the most convenient and easiest way to start. It helps us collect all the use cases initially and see which ones are really helpful and which are not. Once we devise a strategy from the email alerts, we can then move to the other alerting mechanisms.

ITSM – We can integrate with an IT service desk management tool to create incidents when these alerts are triggered. Most IT service desk management tools are capable of API integration, especially with Azure AD, and it should be easy to fulfil this requirement.

Webhook – We can configure a push notification to Teams channels when these alerts are triggered. A dedicated Teams channel can be created for the first level of the NOC monitoring team, and the webhook can then be configured to send the critical event alerts to that channel.

Now, for the email alert – create an action group – choose the action type Email/SMS/Push/Voice.

By default there is no action group created, so an action group must be created and targeted at the NOC team email group.

Add the email address for notification. There are other options as well, like SMS and Voice, which could also be leveraged.

We do have an option to modify the email subject based on the alert details.

Finally we name the alert, mark the severity, then enable and create it.

We have the option to see all the  configured rules.

Once configured, we can see the statistical dashboards, which provide a summary of the total alerts that have been triggered and their status.

We receive email alerts when the disk space exceeds the configured level of 60 percent.

Similarly, when the device was turned off, the configured heartbeat alert triggered an email to the recipient.

Like this, we can create multiple alerts for the required critical events.

At this moment we have the option to create alerts for every action type, which can be targeted at all computers; they are charged individually at a very nominal price. So for multiple alerting types we need to create multiple action types. These alerts are based purely on the collected logs present in the Azure Log Analytics workspace. If we try to alert on details which are not present in the collected logs, we will not be able to create the alerts. The Azure alerting mechanisms provide a great way to flag critical events happening across the monitored systems.

Thanks in Advance

Sathish Veerapandian

Microsoft Teams – Configure Azure Log Analytics for Monitoring Teams Room Systems

Microsoft Teams being one of the best collaboration solutions, there are lots of smart devices equipped with Microsoft Teams, providing smart meeting room systems with modern cameras, microphones and smart display screens. The best part of the Teams application is that it can function well on a wide range of devices, with basic supported hardware, running on the Windows 10 operating system.

While there are numerous approaches to monitoring Microsoft Teams Room systems, in this article we will go through the steps to monitor them through Azure Log Analytics. Like other applications, the Microsoft Teams app running on room devices writes all its events to the event logs. The Microsoft Monitoring Agent allows these events to be collected in Azure Log Analytics.

Prerequisites:

  1. An Azure subscription to configure the Log Analytics workspace.
  2. A Teams meeting room system with internet connectivity. There are other methods to collect the logs without internet access through the Log Analytics gateway; in this approach we are going with the direct agent method.
  3. The Teams devices must be running a Windows operating system in all meeting rooms, in kiosk mode or possibly full operating system mode, based on the requirements.

Create Azure Log Analytics and integrate with Microsoft windows agent.

Log into log analytics workspace

Create a new Log Analytics workspace. We can use an existing workspace as well; it purely depends on the requirement.

Choose the required subscription.

Once the Log Analytics workspace is created, we need to download the Windows agent. The agent can be downloaded by navigating to Log Analytics Workspaces – workspace name – Advanced Settings – Connected Sources – Windows Servers – Download Windows Agent.

Install the MMA agent on the Teams/Skype room system device.

Select only the option to connect the agent to Azure Log Analytics (OMS), because in our case we are not monitoring them via the local monitoring agent SCOM.

Enter the workspace ID and the key from the Log Analytics workspace and select Azure Commercial. If the network goes through a proxy, click Advanced and provide the proxy configuration. If the device does not have a connection to the internet, the agent cannot send the logs to the Log Analytics workspace.

Once installed, we can see the Microsoft Monitoring Agent present in the Control Panel.

Once opened, we can see Azure Log Analytics (OMS) and that the status is successful.

On editing the workspace we can see the workspace ID and the Workspace Key.

Usually it takes a while for the logs to be collected by the Azure monitoring agent.

Configure the required logs to monitor:

Once the Log Analytics workspace is collecting, we need to configure the data sources so that the workspace can start collecting the required data for monitoring the Teams Room Systems.

In our case, for monitoring the Teams device, we need to collect the Teams app logs and a few hardware-related events. We will look into configuring them now.

Note: We have to be very selective here and collect only the required events, since dumping logs into Azure Log Analytics incurs cost, and it is best recommended to choose only the required events.

In order to collect the logs, navigate to Advanced Settings – choose Data Sources – select Windows Event Logs.

The primary log that needs to be collected is Skype Room System (we have to type the name completely and click Add, as this log entry will not autocomplete).

There are a few more event logs that can be added; the logs added here should help in monitoring the Teams Room devices.
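Once events start flowing, a quick check that the Skype Room System log is being ingested can be sketched like this (the EventLog value is assumed to match the log name typed above):

```kusto
Event
| where EventLog == "Skype Room System"
| sort by TimeGenerated desc
// show the most recent room system events with their severity
| project TimeGenerated, Computer, EventLevelName, RenderedDescription
```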

Having added the Windows event logs, we can navigate to the Windows performance counters; there are a few counters which can be added and are useful to notify us when the devices have any of the below issues.

Querying the logs:

Once we have configured the required log sources, it's time for us to run some queries and see if the logs are being collected. The Azure Log Analytics workspace works with the Kusto Query Language.

There are default queries like 'Computers availability today', 'List heartbeats' and 'Unavailable computers'.

Selecting the default template 'List heartbeats' and clicking Run, the below results are obtained.

To see only the Application event log entries, we can run the below query:

search *
| where Type == "Event"
| where EventLog == "Application"

To see only the Errors generated in the application event logs

search *
| where Type == "Event"
| where EventLog == "Application"
| where EventLevelName == "Error"

To drill down more and look into the perfmon logs, run the below query to check the system uptime:

Perf
| where CounterName == "System Up Time"
| summarize avg(CounterValue) by bin(TimeGenerated, 1h)

There are a lot of queries which can be built from these collected events. Having collected them, we can configure dashboards to display the data and alerting mechanisms for the critical events. In the next post we will have a look at how to configure the alerting for critical events happening on the meeting room devices.

Thanks & Regards

Sathish Veerapandian

Update – ExPerfWiz 1.4 has been released

ExPerfWiz 1.4 was released on October 25th, 2014.

Following are the recent updates in the Experfwiz 1.4

Fixed Circular Logging bug in Windows 2008+
Added ability to convert BLG to CSV for 3rd party application analysis (does not need to be run from EMS, just PowerShell 2.0+)
Updated maxsize for Exchange 2013 to default to 1024MB
Fixed filepath bug on Windows 2003
Added/Removed various counters
Fixed location of webhelp
Updated -help syntax

ExPerfWiz is a script developed by Microsoft to collect performance data on servers running Exchange 2007, 2010 and 2013.

Earlier versions had a -nofull switch that collected only the role-based counters. The current version always runs in full mode, collecting all the performance counters relevant to Exchange troubleshooting.

Below is an example that runs perfmon for a duration of 4 hours:

Set the duration to 4 hours, collect data every 5 seconds, and set the data location to D:\Logs.

.\experfwiz.ps1 -duration 04:00:00 -interval 5 -filepath D:\Logs


If it finds previous Perfwiz log data, it prompts for an option to delete the old entries, stops the existing data collector sets, creates new ones, and then starts collecting the data.

Note: The script will take the local server name and run locally on the server if no remote server parameter is specified.
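Building on the note above, a remote collection might look like the following sketch. EXCH01 is a hypothetical server name, and the -server switch should be verified against your copy of the script:

```powershell
# Collect from a remote Exchange server for 4 hours at 5-second intervals
.\experfwiz.ps1 -server EXCH01 -duration 04:00:00 -interval 5 -filepath D:\Logs
```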

More Examples can be found at – http://experfwiz.codeplex.com/

Source of Information  – https://social.technet.microsoft.com/Forums/exchange/en-US/f8aa3e90-d49f-479f-b00b-c8444afefa65/experfwiz-14-has-been-released?forum=exchangesvrgeneral

Thanks 
Sathish Veerapandian

MVP – Exchange Server 

PortQueryUI – GUI tool that can be used for troubleshooting port connectivity issues

At times we might run into scenarios where a user is unable to access Exchange, Lync, mobility, or related external user access functionality. This can happen in many situations, such as a new deployment, a firewall upgrade, a switch replacement, or a network change.

Microsoft has a graphical tool called PortQueryUI that can be used to troubleshoot these kinds of port connectivity issues.

The functionality of PortQueryUI is explained below.

Download the tool from the below link –

http://download.microsoft.com/download/3/f/4/3f4c6a54-65f0-4164-bdec-a3411ba24d3a/PortQryUI.exe

Accept the license agreement and proceed; we will then be prompted to choose a location to unzip the files.


Now we can open the PortQueryUI application. There is no need to install it; it opens the GUI directly.

It is better to run this tool from the affected machine/server and specify the destination IP of the server to which we are experiencing connectivity issues.

There are two types of query.

1) Query predefined service – offers a few predefined services such as SQL, Web Service, Exchange, etc. When we choose a predefined service, it queries all the required ports and provides the output of the result.


2) Manually input query ports – can be used to query any specific ports over UDP, TCP, or both.
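The same manual checks can be scripted with portqry.exe, the command-line tool that the GUI wraps. The commands below are a sketch; mail01.contoso.com is a hypothetical target, and -n, -p, -e and -r are the documented PortQry switches for target, protocol, single port and port range:

```
REM Check whether TCP port 25 (SMTP) is reachable on the target server
portqry -n mail01.contoso.com -p tcp -e 25

REM Query a range of ports over both TCP and UDP
portqry -n mail01.contoso.com -p both -r 80:443
```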


There is also a Predefined Services option in the Help tab that shows the list of ports queried for each service we choose.


Below is an example of the set of predefined ports that it queries for Exchange.


It has an option to save the query result. It also allows the end user to customize config.xml or provide a custom config input file that defines their own services. The config file should follow the same format as config.xml, since only XML input is accepted.


This tool can be used to query open ports in any kind of troubleshooting scenario.
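The basic reachability test behind these port queries can also be scripted directly. The sketch below uses plain Python sockets (not part of PortQueryUI) to check whether a TCP port accepts connections:

```python
import socket

def check_tcp_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection performs a full TCP handshake, so success
        # means something is actually listening on the destination port
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: a loopback port with no listener typically reports closed
print("tcp/9 open:", check_tcp_port("127.0.0.1", 9, timeout=1.0))
```

Note that a TCP connect test like this cannot probe UDP services; for those, portqry-style protocol-aware queries remain the better option.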

Also published in – http://social.technet.microsoft.com/wiki/contents/articles/27661.portqueryui-gui-tool-that-can-be-used-for-troubleshooting-port-connectivity-issues.aspx

References – http://windowsitpro.com/windows/gui-tool-displays-status-tcp-and-udp-ports

Thanks 

Sathish Veerapandian

MVP – Exchange Server

Product Review: SPAMfighter Exchange Module

Protecting the IT infrastructure from spam, malicious code, and malware is an important and challenging task that needs constant monitoring. There are different types of spam attacks through which an attacker can try to crack the perimeter network of an organization and inject malicious code or phishing emails. Email is the most widely used channel for circulating spam: unwanted messages, high volumes of spam, reverse NDR attacks, and so on, all of which adversely affect an organization's productivity.

It is always better to have two-step (or more) anti-spam filtering in any organization to ensure that spam never reaches our network, especially the messaging system.

Microsoft has built-in anti-spam features that can be enabled from Exchange 2003 onwards, and they work well and are quite accurate in filtering spam emails. It is always recommended to enable them as an additional layer of security alongside other spam configurations and settings in the environment.

However, we need to be aware of all the spam-filtering settings configured at every level of the organization, as an incorrect configuration can interrupt end users' ability to send and receive email.

I recently walked through the most recent version of SPAMfighter and was impressed with its configuration options and user-friendliness.

In this article, let's walk through the installation and a few features of the SPAMfighter Exchange Module.

What is SPAMfighter?

It is an add-on that fully integrates with Exchange Server and offers anti-spam protection. It works with Exchange 2000, 2003, 2007, 2010 and 2013.

How does it work?

SPAMfighter administration is managed through a user-friendly web interface with many options to explore.

It integrates fully with Microsoft Exchange Server, creating its own security groups and a user account in AD. This makes policy management easier and gives us separate control over SPAMfighter; we can also designate an individual to handle these tasks who has control only over this software.

Prerequisites 

There are no prerequisites required to install this software; I ran it from a member server (Windows Server 2008). The only thing I noticed was that it required the Microsoft Visual C++ runtime, which it prompted for, found, and installed on its own, making the job simple.

Installation

The product can be downloaded from here

http://www.spamfighter.com/SPAMfighter/Product_SEM.asp

It is a 30-day trial version and should be downloaded onto a Windows server.

The installation is fairly standard, and it prompted me for the latest virus definition updates, so I will not walk through the entire setup.

One interesting thing I found during the installation was that it asked for a username and password for SPAMfighter administration and automatically created the corresponding AD account to integrate with the Exchange modules.


Once the installation is done, you can open the web console via Add or Remove Programs by selecting SPAMfighter.

Enter the username and password provided during installation.


I was pleasantly surprised by the number of options available.


In addition to server-side administration, SPAMfighter has an Outlook add-in that users can install to further customize filtering on their own.


It has good policies that can filter at various levels. There are policies defined for inbound, outbound, and internal emails, and I could also see policy filter settings at the user level, which is very good.


All the users can be modified individually as well.


Finally, a statistics report can also be pulled, showing a graphical view of filtered emails.


Cost Factor

Like most apps that integrate with Exchange, SPAMfighter licenses on a per-user, per-year basis. However, the per-user cost drops considerably for organizations with more than 2500 users.

You can view the pricing list here

http://www.spamfighter.com/SPAMfighter/Payment_Choose_Product_SEM.asp

Conclusion 

Overall, SPAMfighter is user-friendly, and the latest version has effective new features that integrate with Exchange Server for better spam filtering.

Thanks 

Sathish Veerapandian 

MVP – Exchange Server

SysTools OST Recovery Software

OST files are just an image of the content from the server. When Outlook is used with Exchange Server in Cached Exchange Mode, the OST file is downloaded locally, allowing access to the entire mailbox content.

There is no built-in option in Outlook to open or import OST files without configuring Outlook profile for that associated mailbox account.

At times we might run into complex scenarios for a VIP user where we need to recover emails, are out of backup options, have no Exchange database, and the only remaining option is to recover from an old OST file.

There could be multiple reasons for converting an OST file, and multiple ways of recovering and repairing orphaned or lost .OST files. Third-party applications can convert OST to PST, repair a corrupted OST file, and filter and gather the required data from it.

If only the client PC has crashed, we can always recover the data from the Exchange server itself (the OST is just a cached copy of the mail, and the master copy resides on the server at all times).

In some circumstances, however, there may be a need to open or import an OST file:

1) A user has left the organization and the mailbox has been deleted past the retention period, but the local IT team still has the OST file from the user's PC, from which important data needs to be extracted.

2) An old OST file has become corrupted, and the user needs the data from it, taken from the old laptop, merged into the new Outlook profile on a new PC.

3) A user goes on long leave, the mailbox is disabled and then deleted, and the Outlook profile is removed, but the OST file remains on the PC and the old emails need to be recovered from it.

4) The Exchange servers have been migrated and the user's mailbox has moved to the new version; after a long leave, the user needs old emails recovered from the OST file.

5) We need to access the emails from an old OST file without configuring an Outlook profile for that account.

I recently had a look at the SysTools OST Recovery software and found it easy to use and user-friendly.

In this article we will look at how to recover data from a corrupted OST file using the SysTools OST Recovery software.

This software allows us to recover an inaccessible OST file and convert it to Outlook (PST), EML, or MSG format.

There are two versions: freeware and full. The freeware version can export only 25 items per folder, while the full version has no per-folder limit.

Download the free version from the below link

http://www.systoolsgroup.com/ost-recovery.html

Just open the setup and run through the installation wizard.


Accept the license agreement.


Choose the installation directory.


Once the setup completes just open the OST recovery software.


Browse and select the damaged OST file.


Once the OST file is selected, it starts scanning the file.


Once the scanning is completed, all the emails are displayed in a readable Outlook-style view. Since this is the demo version, it also displays a notice about its limitations.

We have an option to export emails one by one.


We have an option to export the emails in MSG format or into a PST file.


Just click Export and select the required format, MSG or PST. After that, we are done extracting the PST from the corrupted OST file.

Overall, this tool is user-friendly and can be useful for admins in critical scenarios where OST files for important mailboxes need to be recovered.

Cheers

Sathish Veerapandian

Technology  Evangelist
