Microsoft Teams – Utilize Azure Sentinel to facilitate SOC operations and monitor critical Teams events

A few days ago Microsoft announced a new release that gives us the opportunity to integrate Microsoft Teams activities recorded in the audit logs with Azure Sentinel. Enabling this feature benefits organizations where a separate SOC team monitors and analyzes the security posture as an ongoing operational procedure.

We still have the Microsoft native Cloud App Security, which helps in creating an alerting mechanism for Teams-related activities. But with Log Analytics and Azure Sentinel we can do a lot more than what can be done from Cloud App Security: we can further fine-tune the alerting and create workbooks and dashboards for Microsoft Teams activities, which are useful for Teams monitoring.

To start with this new feature, we need to enable the option to ingest Teams data into an Azure Sentinel workspace. This article can be followed to get started with connecting Office 365 to Azure Sentinel, the Microsoft cloud-native SIEM.

Navigate to Azure Sentinel workspaces – Select Data Connectors – Choose Office 365

Here we can see the new option for sending Teams audit logs to the Azure Sentinel workspace.

After a while, we can see that the workspace has received the OfficeActivity (Teams) data type.

Live Query Teams Monitoring:

When we navigate into the workspace, we have the opportunity to fine-tune and see the events written to the audit logs for Teams in a more refined way.

For instance, team creation alone can be filtered and checked from the workspace. The query can even be narrowed down to a specific person, and an alert can be created for them.

This helps the SOC team perform live reactive analysis when any security incidents are reported for Teams-related activities.

OfficeActivity
| where OfficeWorkload == "MicrosoftTeams"
| sort by TimeGenerated
| where Operation has "TeamCreated"
| where UserId has "sathish@exchangequery.com"
| project UserId,AddonName,TimeGenerated,RecordType,Operation,UserType,OfficeWorkload
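
In addition to the targeted query above, a broader baseline view can be sketched with the same OfficeActivity columns; for example, summarizing Teams operations per hour gives the SOC team a quick activity baseline:

OfficeActivity
| where OfficeWorkload == "MicrosoftTeams"
| summarize Count = count() by Operation, bin(TimeGenerated, 1h)
| sort by TimeGenerated desc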

Create Alerting Mechanism: Azure Monitor or Azure Sentinel

As a real example, we can create alerts and notify the SOC team when a bot has been added to a Team.

OfficeActivity
| where OfficeWorkload == "MicrosoftTeams"
| sort by TimeGenerated
| where Operation has "BotAddedToTeam"
| project UserId,AddonName,TimeGenerated,RecordType,Operation,UserType,OfficeWorkload

To create the alert, once after writing the query we have the New alert rule option, which gives the opportunity to create the alerting mechanism in two ways: Create Azure Monitor alert or Create Azure Sentinel alert.

To experience the behavior, I selected the option Create Azure Monitor alert and used the same query. The alert logic and the time period here are set for the demo and can be defined based on the period and frequency that suit the monitoring best.

An action group can be selected to send this notification alert to email addresses.

Other notification types can be selected as well; for example, ITSM can be chosen to trigger an incident for the same events.

In our case email was selected, and after a few minutes I tested by adding a bot and got the alert notified on the email address.

Further information about the bots that have been added can also be seen.

Create Workbooks and Dashboards:

Here we have the possibility to create workbooks and dashboards for Microsoft Teams activities. There is one template present by default for Office 365, and there is a Teams workload item present in it which helps in creating a workbook for Teams.

The default workbook provides decent information for monitoring Teams-related activity.

This is a good start for creating one dedicated workbook for Microsoft Teams and pinning it as a separate dashboard for Teams-related activities. I have also written a post on creating Azure Monitor workbooks, which can be referred to for creating dashboards for Teams activities.

Microsoft Teams logs in Azure Sentinel is really a welcome native cloud integration, from which many organizations can definitely benefit by actively monitoring Teams activities with no additional cost of investing in third-party SIEM integrations.

Regards

Sathish Veerapandian

Synology DiskStation Active Backup for Office 365

Recently I was requested to review the Synology DiskStation Active Backup for Office 365. Though Microsoft 365 provides unlimited retention periods and litigation hold for Office 365 applications, I always had one topic on my list: why there might be a reason to keep a local backup instance of Office 365 data. This made me do a bit of research on the topic, and I could see there are a few business cases and compliance/legal requirements which demand maintaining backup copies of electronic data.

Moreover, litigation hold and retention policies are not applicable to all Office 365 plans. I have seen organizations consuming a wide variety of Office 365 plans based on their business models.

On the other hand, most Office 365 backup solutions are more efficient in that users are able to restore content on their own, mostly from a self-service portal. In an ideal scenario, Office 365 user data recovery is executed with the native toolset, using content search or an eDiscovery case from the admin portal. In a real-case scenario, without an SLA for restoring data that comes in every day for a resigned or existing employee, there can be delays, since only a few admins are responsible for handling these operational tasks. With these third-party packages we can optimize the data restore processes.

Microsoft recommends, in the service availability section of the Services Agreement page, to regularly back up the data stored in the services. So I took the opportunity to set it up and experience how the backup and restore mechanism works with Office 365.

In this article we will have a look at Active Backup for Office 365 from Synology. A little while back I set up a DS920+ DiskStation with Seagate IronWolf HDDs. Seagate IronWolf drives are built for 24×7 NAS workloads with good performance, spacious capacity and fast speeds, and they come with two applications, SeaTools and DiscWizard, for monitoring the drives.

In order to set up Active Backup for Office 365, we have to log in to the DiskStation Manager (DSM). Keep in mind that Active Backup for Office 365 is supported only on 64-bit NAS models, and they must be running DSM 6.1 or later with at least 2 GB of RAM.

After logging in to DiskStation Manager, in the Package Center we can see Active Backup for Office 365 present as an add-on. Once it is installed we can open it.

In the setup screen it provides us an option to choose which Office 365 data center our tenant resides in.

After logging in with the admin credentials, we can see it requests OAuth permissions so that it can get read and write access to all users' data to perform the backup and restore operations.

Then it takes us to the redirect page, where we must confirm that we are OK with sending the Office 365 data to the local DiskStation domain.

Once that is completed, Active Backup for Office 365 opens. Navigate to the Task Creation Wizard.

Here we have 2 highlight features:

Account Discovery – When this option is turned on, every new account in Office 365 gets backup enabled automatically, which is really a good feature.

Enable the Active Backup Portal for end users – Users log in with their own credentials, see their own data and perform restores.

Here we have options to select the users we need to back up. From a business standpoint we can think of 2 cases: the first being users not having the appropriate license for litigation hold and retention policies, and the latter being VIP users or critical financial mailbox data that might require a local copy as per the business model, concerning the audit and compliance requirements.

I selected a few users for our testing; we also have the option to choose which services need to be backed up.

We have the site list where there is an option to choose the sites.

We have the backup and retention policy here to choose based on our requirement. We do have the file version retention policy as well.

Finally we choose the destination folder location in the NAS drive and the backup task creation is successful.

We can see the overall status and the backup summary in the Active Backup for Office 365 dashboard once a few backup schedules have completed successfully.

Restore Operation:

There is an option to choose which service we need to restore for the user. Here it can be OneDrive, Mail, Site, Calendar or Contacts. At the time of writing this blog, I do not see a separate option to restore Teams data.

Option 1 : Admin Restore

When an admin logs in, they have the full privilege to navigate to all employees and their data and restore them.

Option 2 : User Restore

A user logs in with their own credentials to the Office 365 Active Backup Portal and can see their own data.

Email Restore:

Logged in from the admin console, we have the option to choose the users.

Here we can see the option to select our restore point date and choose the required emails individually. In the restore we have two options: the first restores them directly to the user's mailbox, and the second exports them as individual email messages.

We have the option to search for individual items by keyword, subject, date and even within attachments, which looks like a promising feature. The ability to perform a granular, brick-level restore will minimize most of the native recovery operations tasks.

In the final screen we have the option to change the destination user. In a real-case scenario this can be useful where a current employee might require data from a resigned employee after getting prior approval for a valid business reason.

Once the restore operation is successful, we can see in the user mailbox that the items have been stored in a separate destination folder, and only the selected emails are restored.

On an export operation, the selected files are exported individually as email messages.

File Restore:

File restore is also very promising. It makes it an easy task to restore a file directly to the destination or take an export, which downloads the requested file in the same format.

On doing a direct restore, we can see that there is an option to restore the file sharing permissions, which looks great.

Below were the highlights identified from the evaluation:

1) License-free for unlimited Office 365 backups.
2) Option to monitor and manage backups, even from multiple tenants, from the same single dashboard.
3) Account discovery – when this option is turned on, every new account in Office 365 gets backup enabled automatically.
4) An advanced search engine which allows finding any files containing a keyword, including mail attachments.
5) Option to preview the content of each file before we restore it.

Of course, Microsoft does provide enough ways to protect data against corruption, deletion, ransomware and disaster scenarios with security features, retention policies and litigation hold. If that convinces us, then we are OK with the native backup mechanism. As an alternative, we can choose these packages that hold data locally, mostly for compliance/legal purposes, for volumes of users not covered by the licensing requirements to retain their data, or for an enhanced recovery mechanism based on the business requirements.

I find this software to be beneficial for organizations that might need to back up Office 365 data as part of their legal and compliance regulatory requirements.

Thanks & Regards

Sathish Veerapandian

Overview of Microsoft Teams Graph API and its benefits

The Microsoft Teams Graph API has been there for quite a long time, and it can be beneficial in various ways to perform Teams-related tasks as API operations. In this article we will go through the Teams Graph API overview and the API calls available as of now.

Using the Graph API, developers have a unified REST API to access the components underpinning Teams. For instance, using the Graph API we can post a survey notification to a channel with an option to vote in the survey. With it we can create a Team, add members and owners, configure team settings and even archive a Team.

The overview of Graph API can be explored by navigating to the below URL.

https://developer.microsoft.com/en-us/graph/graph-explorer

Before consuming the Graph API we need the required permissions on the Graph API to run the query. So we need to consent to the required permissions based on the action that will be performed from the Graph API. In the below example I have granted the below permissions to execute the Graph API operations for Teams.

Once logged in, initially we can check the basic /me option, which returns the user information as below.
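
For reference, this is the standard Graph Explorer starter query (a simple GET request):

https://graph.microsoft.com/v1.0/me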

As of writing this blog post, Teams has the below Graph options: 9 general availability and 4 beta versions. If any issues are identified on the beta endpoints while pulling from applications, Power Apps, Logic Apps or other APIs, they should be fixed sooner or later.

Now moving to the Teams part, the below query shows the teams I am a member of:

https://graph.microsoft.com/v1.0/me/joinedTeams

Now let's try a POST request to create a channel in a Team called Test Team. The prerequisite for creating a channel is to mention the Team ID under which the channel needs to be created.
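
As a sketch of what that POST looks like (the team ID is a placeholder to be taken from the joinedTeams output, and the description value is just an illustrative assumption):

POST https://graph.microsoft.com/v1.0/teams/{team-id}/channels
{
  "displayName": "architecture discussion",
  "description": "Channel for architecture discussions"
}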

Upon a successful POST, we can see that the new channel called architecture discussion has been created.

Interestingly, we have an option to send a channel message. Below is an example message.
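
A minimal sketch of that request (the IDs are placeholders from the earlier queries, and the message text is illustrative):

POST https://graph.microsoft.com/v1.0/teams/{team-id}/channels/{channel-id}/messages
{
  "body": {
    "content": "Hello World"
  }
}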

The test message we posted has been received in the Team channel.

When we attempt to get the messages present in a channel, we get the actual messages present in them.
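
The corresponding GET request follows the same pattern (the IDs again being placeholders; note that reading channel messages may require additional protected-API approval depending on the tenant):

https://graph.microsoft.com/v1.0/teams/{team-id}/channels/{channel-id}/messages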

Now that we have tested the API operations from Graph Explorer, there are multiple ways to access them from different programs. In our example we will try to access them from PowerShell and create a Team. The first prerequisite for creating a Team from the Graph API is that it needs to exist as an Office 365 Group. The Office 365 Group is created with a POST operation.

Once the Office 365 Group is created, we can use a PUT operation to enable Teams on it. While there are lots of samples available on the internet to create Teams, I found a video which is a very helpful start for Graph API operations for Microsoft Teams via PowerShell.

As an initial prerequisite we need to install the SharePointPnPPowerShellOnline module and connect with it.

Below is a sample script that can be used to create a Team via the Graph API from PowerShell.

#Connect to the Graph API via PnP PowerShell
Install-Module SharePointPnPPowerShellOnline
Connect-PnPOnline -Scopes "Group.ReadWrite.All"
$accessToken = Get-PnPGraphAccessToken

#Prepare a generic OAuth bearer token header
$headers = @{
    "Content-Type" = "application/json"
    Authorization  = "Bearer $accessToken"
}

#Create the Office 365 Group - POST request
$NewGroup = @{
    description     = "Team Fun Friday"
    displayName     = "Fun Friday"
    groupTypes      = @("Unified")
    mailEnabled     = $true
    mailNickname    = "teamfunfriday"
    securityEnabled = $false
}
$createGroupBody = ConvertTo-Json -InputObject $NewGroup

$response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/groups" -Body $createGroupBody -Method Post -Headers $headers -UseBasicParsing
$groupId = $response.id

#Give the new group a moment to provision before team-enabling it
Start-Sleep -Seconds 15

#Create the Team from the group - PUT request
$NewTeamRequest = @{
    memberSettings = @{
        allowCreateUpdateChannels = $true
    }
    messagingSettings = @{
        allowUserEditMessages   = $true
        allowUserDeleteMessages = $true
    }
    funSettings = @{
        allowGiphy         = $true
        giphyContentRating = "strict"
    }
}
#-Depth ensures the nested settings are fully serialized to JSON
$createTeamBody = ConvertTo-Json -InputObject $NewTeamRequest -Depth 4
$response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/groups/$groupId/team" -Body $createTeamBody -Method Put -Headers $headers -UseBasicParsing
Write-Host $response

Upon successful execution of the above script, we will get the below message in the PowerShell session.

On verification, we can see the associated Office 365 Group and the Team have been created.

The Team is also automatically created and visible from the Teams client.

With the Microsoft Graph API we get a unified REST API experience, and it helps us perform multiple Teams operations and automate the Team management lifecycle.

Microsoft Teams – Change the Supported Meeting Mode on existing Skype Room Systems to Teams Mode by leveraging Intune Scripts

Microsoft Teams has become a highly adopted collaboration platform in a short span of time. It has been helping a ton worldwide, and the new features that are released every now and then keep us connected and expand efficiency in every organization using it.

By default, Microsoft-certified room systems are forward compatible with the newer Skype for Business or Teams services while maintaining the same client user experience. Usually, when an organization has only Skype, these meeting rooms will have only the Skype option enabled on them.

In this article we will be looking at how to enable the existing Skype Room Systems with the capability to host Teams meetings.

Below is an example screen of a Skype Room System panel, where we have the following options for the supported meeting mode while configuring it at the initial stage.

These devices basically run in kiosk mode on recommended versions of Windows 10, the currently supported one being 1909 at the time of writing this blog.

Ideally, when a Skype Room System account has been migrated to Teams with all the prerequisites, this mode on the meeting room devices needs to be changed to support Skype for Business and Microsoft Teams.

This can be done by using the local admin credentials of the Skype Room System, logging into the system context and changing the mode to support both Teams and Skype. In a real scenario, for a small-scale deployment of fewer than 10 rooms, changing them manually through local IT support is possible. But in huge deployments, where there are 100-plus systems across the globe, changing them manually will be an uncomfortable experience.

The supported meeting mode on the Skype Room Systems is controlled via an XML file named SkypeSettings.xml present in the below location, which is standard for all the meeting rooms running in kiosk mode:

C:\Users\Skype\AppData\Local\Packages\Microsoft.SkypeRoomSystem_8wekyb3d8bbwe\LocalState

At startup, these devices look for this XML file named SkypeSettings.xml in the above location. If a device finds it, it applies the configuration settings indicated by the XML file and then deletes the XML file. The best thing is that we can mention only the changes we require in the XML file, and it will apply the delta changes and keep the other settings the same.

In order to enable Teams and Skype and have Teams as the default client, we can use the below XML:

<SkypeSettings>
    <IsTeamsDefaultClient>true</IsTeamsDefaultClient>
    <SkypeMeetingsEnabled>true</SkypeMeetingsEnabled>
    <TeamsMeetingsEnabled>true</TeamsMeetingsEnabled>
</SkypeSettings>

Now this XML can be easily pushed to all the Skype Room Systems via an Intune scripting profile.

Below are the prerequisites before performing this action:

  1. The Skype Room System accounts must have the Teams license assigned to them. This offers an easy migration path from Skype for Business to Teams by just enabling Teams on the device.
  2. The Skype Room Systems must be registered in Microsoft Intune in order to target this Intune scripting profile to them.

Log in to Microsoft Intune – Navigate to Device Configuration – Create the script as below. Ensure the script settings have all the default settings. Target it to the meeting room devices which require this change.

Copy and save the below script as a .ps1 file and use it on the script settings page.

$target = "C:\Users\Skype\AppData\Local\Packages\Microsoft.SkypeRoomSystem_8wekyb3d8bbwe\LocalState\SkypeSettings.xml"
$xml = "<SkypeSettings>
    <IsTeamsDefaultClient>True</IsTeamsDefaultClient>
    <SkypeMeetingsEnabled>false</SkypeMeetingsEnable
    <TeamsMeetingsEnabled>true</TeamsMeetingsEnabled>
</SkypeSettings>"
$xml | Out-File -FilePath $target -Force

After the next Intune policy sync is completed on the targeted devices, we can see the XML file successfully deployed in the above location.

We can also see the overview of assigned and failed devices on the Intune script profile. In our case it was successful, since it deployed to the targeted system without any issue.

Once the Skype Room Systems get this XML, the change takes effect after the next reboot; usually these systems reboot every night as a maintenance window to check for and install system updates, and during that reboot this XML will be applied. Once this change has been applied to all the systems, the Intune script profile can be removed, since it is a one-time configuration change after the user accounts have Teams enabled.

Option 2:

Create a storage container in Azure, store the XML file there, and make Intune pull the XML file from it. This option is beneficial in case we need to modify the XML file frequently for device settings.

Navigate to the Azure portal – Storage accounts – Select File shares

Create a new file share for pushing the Teams XML.

Once it is shared, we can use the appropriate URL in the script.

The below script can be used for the same:

$source = "storage file source url"
$target = "C:\Users\Skype\AppData\Local\Packages\Microsoft.SkypeRoomSystem_8wekyb3d8bbwe\LocalState\SkypeSettings.xml"
# Download the XML from the file share directly to the target location
Invoke-WebRequest $source -OutFile $target

Regards

Sathish Veerapandian

Microsoft Teams – Utilize Power BI to get more details on the Call Quality Dashboards

With Microsoft Power BI we can gather more details from the Call Quality Dashboard. As of now, Microsoft has released 7 Power BI desktop templates to accumulate more details from the Microsoft Teams Call Quality Dashboard.

Power BI being a very capable platform for data gathering and analysis, these new templates for Microsoft Teams stand out in terms of analyzing Microsoft Teams data.

We will go through an overview of the reports and their configuration in this post.

First, the Power BI query templates for Microsoft Teams need to be downloaded.

We have the below 7 template reports:

  1. CQD Helpdesk Report.pbit
  2. CQD Location Enhanced Report.pbit
  3. CQD Mobile Device Report.pbit
  4. CQD PSTN Direct Routing Report.pbit
  5. CQD Summary Report.pbit
  6. CQD Teams Utilization Report.pbit
  7. CQD User Feedback (Rate My Call) Report.pbit

These are customizable templates which can be used to analyze the data. The above are PBIT files, which can be used from Power BI Desktop with the data source configured. If we need to open them directly from the Power BI portal, they need to be renamed to .pbix. If we are importing them from Power BI Desktop, the file MicrosoftCallQuality.pqx needs to be copied to the [Documents]\Power BI Desktop\Custom Connectors folder.

From Desktop:

The initial requirement is that Power BI Desktop must be installed and the data gateway already configured. The steps from Microsoft can be followed from here.

Place the .pqx file in the below location, which is automatically created once the desktop version is installed.

Set the data source:

Option 1: Use the Microsoft Call Quality (Beta)

In order to set the data source, open Power BI Desktop – select Get data – choose Microsoft Call Quality (Beta).

Once it has been connected, we see the below disclaimer message, since the connector is in beta rollout at the time of writing this blog.

Next we have the below option, which has all the details needed to build the query.

The moment we click on Load, we are presented with the below screen. Here we need to select the DirectQuery option, since we are getting the data directly from the Microsoft Call Quality Dashboard.

Once connected, we have all the options to build our own custom reports by selecting the required fields from the right, along with visualizations and filters. This option is very beneficial when we have our office network details uploaded to the Call Quality Dashboard for detailed analysis and building our own custom dashboards. Here we have selected a few fields as an example and can see they are populated on the dashboard.

Option 2: Import the Teams Power BI template reports and publish them from the desktop.

The second option is to import the Power BI templates and publish them from the desktop. In order to import them, navigate to File – Import – select Power BI template and import all the .pbit files. These templates have to be imported one by one.

Once imported, we get all the details as per the imported template. We further have an option to customize the reports. Click on Publish to publish the reports directly to the workspace.

Choose the destination workspace to publish to. In our case we have selected Microsoft Teams – CQD, which is the workspace created in Power BI for Teams CQD.

Once it is published, we have the dashboards in the workspace, ready to share.

When we click on Share, we have the below options for sharing the report. Users will need a Power BI Pro license and the CQD access role to access this report.

Importing from the Power BI Web Portal:

Importing the templates from the web portal is much easier. We need to click on Datasets – Files and select the Get option, since we need to import the downloaded files here to create the new content.

Select Files, click on Local File and choose the Power BI templates. Here we need to rename all the files to the .pbix format, since the portal will not recognize the .pbit format.

Once uploaded, we can see the dashboards. The template dashboards have a lot of information, especially the user details breakdown, which is very nice. The below example is from the CQD Helpdesk Report. Here we have an option to search by user, conference or date, which is very convenient.

Further, the user activities tab gives us more reporting, as in the example below. The good thing is that we can see the device information for the endpoint.

The below example comes from the CQD Teams Utilization Report. This gives more info on how Teams is utilized by users in our organization. A few samples from the templates: the call count summary gives all the information in one view.

We get the location details as well in the overall call quality, and it gives the data for the past 180 days.

The user details are very impressive, where we can see the app version and drivers, and we further have filters on the right to customize the view.

The below example shows a day-by-day breakdown with further customization filters and fields to get data based on our requirements. The default report itself has lots of the required data, which is great.

The mobile devices call quality report also has a lot of useful information, with an overall summary.

We get the mobile devices' call quality with rendered devices, the call quality trend and the number of conferences attended from mobile.

The desktop version is very convenient for creating customized dashboards. There are more handy reports available from these default templates which will definitely be useful; in the above examples we have gone through a few of them. These reports can be customized easily and shared with little effort, and they give a very good view with a rich data experience.

Thanks & Regards

Sathish Veerapandian

Create Azure Dashboards for workbooks created from log analytics for monitoring

In the previous post we had a look at how to take multiple Azure Log Analytics queries, group them and display them on one screen. There are a few real challenges in displaying the queries directly from the workbook. Firstly, workbooks do not have the capability to auto-refresh live data until we reload the workbook. There is no option to fit the dashboard and customize it as per our requirements. Finally, there is no option to set the refresh rate, set the local time zone, or share them with the required persons with read access.

Creating dashboards is much easier, and there are multiple ways to do it. In this post we will have a look at creating one from a workbook.

In order to create a dashboard from a workbook, navigate to the Azure Log Analytics workspace – click on Workbooks – select the workbook that needs to become a dashboard.

In the below example, just for demonstration, the default agent health workbook is selected. Once selected, choose Edit and go to the pin options.

We have the below pinning options

Pin Blade to Dashboard – Pins the entire workbook.

Show pin options has the below choices:

Pin Workbook – It again pins the entire workbook as a workbook template

Pin All – Pins all the created queries as dashboard tiles. This is the best recommended option.

Individual Pin – Can be used to choose only selected queries and pin them on the dashboard.

Once pinned, we can navigate to the Azure portal – click on Azure Dashboard – and we can see all the selected queries.

Now we need to align them by just clicking on Edit on the dashboard. Here we get a lot of options like adding, pinning, moving and resizing the tiles. We also have a few metrics in the tile gallery which can be added.

We have options in the tile settings: there is an option to configure the timespan and choose the time granularity as per our requirement.

We even have an option to choose a custom time range as per our requirement.

There is an option to rename the dashboard as required.

Once customized, when navigated to full screen, the dashboard shows the below view. Below is just a sample dashboard created from the Log Analytics workspace.

Furthermore, we have options to choose the refresh interval rate, which refreshes the data from the logs present in Log Analytics, which is in turn collected from the agents installed on the active systems.

There are also other options, like downloading the created dashboard in JSON format. An upload option is present as well, which accepts a JSON-format file.
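
For reference, the downloaded JSON can also be pushed back programmatically; a minimal sketch using the Azure CLI portal extension (the dashboard name, resource group and file path below are illustrative assumptions):

# Requires the Azure CLI "portal" extension
az extension add --name portal
# Import a previously downloaded dashboard definition
az portal dashboard import --name "TeamsMonitoringDashboard" --resource-group "rg-monitoring" --input-path ./dashboard.json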

A sharing option is present, where we can share this dashboard with a group of people by targeting them with read-only access. Before we click on Share, the dashboard is private.

After it has been shared we will get the access control options.

Once we click on Manage access, we have the option to add users in role assignments. There is an option to unpublish the dashboard as well, and when done it is made private again.

A clone option is also present, where we can just clone an existing dashboard and modify the queries in the background.

Creating Azure dashboards has made admins' lives simpler in many ways when deploying monitoring solutions for newly installed Windows, Linux, network devices and even databases through Azure Log Analytics.

Regards

Sathish Veerapandian

Microsoft Azure – Leverage ManageEngine ADManager Plus and delegate the MFA reset action to the helpdesk team

Currently there is no option, as per this UserVoice request, to delegate the MFA reset action to the helpdesk team via an admin role. As of now, only global admins have the required privileges to perform this action from the Azure portal. In an earlier article we had a look at how to reset this option by creating an automation account and integrating it with Microsoft Flow. Though that is a good option, there is another way: this action can be delegated via ManageEngine ADManager Plus.

Many organizations have ADManager Plus and its features integrated into their on-premises environment. It can be used to execute Office 365 and Azure AD operations in a hybrid environment. In this article we will have a look at the steps to integrate ADManager Plus with Azure AD to delegate this action to the helpdesk team.

Below are the prerequisites:

  1. The ADManager Plus server should be present in the hybrid domain; not necessarily though, as it works well for cloud-only accounts too.
  2. Connectivity to the Azure IPs and URLs is required so the server can reach Azure via Connect-MsolService.
  3. The Azure AD (MSOnline) PowerShell module must be installed on the ADManager Plus server.
  4. AD delegation must already be assigned to the helpdesk team with the AD management role.
  5. A global admin account is required, specified as encrypted credentials with a key file on the ADManager Plus server. This global admin account will only be used by the ADManager Plus server in the backend and is not exposed to the helpdesk team.

Implementation Steps:

First we need to create the encrypted credentials and the key; the below commands can be used. Kindly note that if we try to execute with a plain-text password it will not work, since in our case we are invoking a session from ADManager Plus, and hence it works only with a key file.

A very important note: if there is a password policy for the global admin account, ensure to regenerate this key by re-running this script after the new password is set on the global admin account.

$KeyFile = "Z:\ManageEngine\ADManager Plus\bin\AES256.key"
$Key = New-Object Byte[] 32
# Fill the key with cryptographically random bytes before writing it to disk
[Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($Key)
$Key | Out-File $KeyFile
$credential = Get-Credential
$credential.Password | ConvertFrom-SecureString -Key $Key | Out-File "C:\ManageEngine\ADManager Plus\bin\credential.cred"

Then place the below script in the ADManager Plus bin folder as a .ps1 file (for example mfa.ps1, as referenced later).
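
The fragment below assumes $cred, $userPrincipalName and $MFAlog are already defined. A minimal sketch of that missing prelude (the admin UPN is a placeholder, and the file paths match the key and credential files created above) could look like this:

param([string]$userPrincipalName)
# Rebuild the stored credential from the AES key and the encrypted password file
$Key = Get-Content "Z:\ManageEngine\ADManager Plus\bin\AES256.key"
$securePassword = Get-Content "C:\ManageEngine\ADManager Plus\bin\credential.cred" | ConvertTo-SecureString -Key $Key
$cred = New-Object System.Management.Automation.PSCredential("globaladmin@yourdomain.com", $securePassword)
# Log file used to track the MFA reset actions performed via ADManager Plus
$MFAlog = "C:\ManageEngine\ADManager Plus\bin\MFAActions.log"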

Connect-MsolService -Credential $cred
"`nConnected to MSOL" | Out-File $MFAlog -Append
Set-MsolUser -UserPrincipalName $userPrincipalName -StrongAuthenticationMethods @()
"`nUpdated user $userPrincipalName" | Out-File $MFAlog -Append

The above script also appends to an MFAActions.log file in the bin folder, which helps us track the MFA actions performed via ADManager Plus by the helpdesk admins.

Now, having done the Azure AD part, we need to access the ManageEngine ADManager Plus admin portal and perform the below actions:

  1. Go to AD Mgmt – User Modification Templates – Click Create New Template.
  2. Leave all the fields on all the tabs as default – Navigate to Custom Attributes – Select "Run custom script on successful user modification" and in the script command add the below format to call our script via ADManager Plus: PowerShell -File mfa.ps1 %userPrincipalName%
  3. Once done, click on Save Template.
  4. Assign this template to the helpdesk team.

Once the above actions are completed, the helpdesk can perform the reset via the below method:

AD Mgmt – Modify Single User – Search for the affected user – Modify User – Change Template – Choose the MFA reset template – then click on Update User.

Now the MFA value will be cleared for the requested user.

We can also check the status from an Azure AD-connected PowerShell session:

(Get-MsolUser -UserPrincipalName user@domain.com).StrongAuthenticationMethods

The value should return null for a user where the MFA reset is successful.

This helps in achieving the delegation of MFA reset via ManageEngine. Helpdesk admins can now perform the MFA reset through the ManageEngine delegated helpdesk portal by selecting the assigned template.

Thanks & Regards
Sathish Veerapandian

Visualize Microsoft Teams Room Systems health components through Azure Monitor Workbooks

In the previous post we looked at how to configure Azure Monitor alerts for critical events that occur on Microsoft Windows devices, which can be used for monitoring the Teams Room Systems. With Azure Log Analytics we can leverage a few more components that help us visualize the status of the systems, which are monitored through selected event logs and performance counters.

Creating the workbooks and making them visual depends purely on the data that has been ingested into the corresponding Log Analytics workspace. So, at the first stage, it is very important that we send all the required logs and counters that are mandatory for visualizing the metrics.

Before creating the workbooks, we need to devise a strategy for building a skeleton for the dashboard. This is very important, since there are multiple options available and we need to understand what important data needs to be projected on the dashboard.

We will go through a few examples of how to get started with creating the workbooks and visualizing the data.

We need to prepare the Kusto Query Language (KQL) queries required for visualizing the data. Below is a small example which visualizes the count of the perf counters by object name.

Perf
| where TimeGenerated > ago(1h)
| summarize count() by ObjectName

To render them as a pie chart, we can use the below query:

Perf
| where TimeGenerated > ago(1h)
| summarize count() by ObjectName
| render piechart 

The example below will project only the affected systems which have failed Windows updates, driver updates, or any devices connected to the room systems that are in a failed state.

search *
| where Type == "Event" 
| where EventLog == "System"
| where EventLevelName == "Error"
| extend Status = parse_json(RenderedDescription).Description
| where RenderedDescription has "failed"
| project TimeGenerated, Computer , RenderedDescription 

If we need to visualize them in a graphical pie chart, we can do that as well by summarizing on a string value available from the parsed JSON. For example, it can be the computer, IP address, device name or any data present in the raw event data.

search *
| where Type == "Event"
| where EventLevelName == "Error"
| extend Status = parse_json(RenderedDescription).Description
| project TimeGenerated, Computer,RenderedDescription 
| where RenderedDescription has "failed"
| summarize Count=count() by tostring(Computer) 
| render piechart

The above are just a very few examples of rendering the data and visualizing it through the Kusto Query Language. There is a lot more to explore, and we can project more data based on the logs that we are adding to Azure Log Analytics.

Now that we have some idea of how to create visualizations through the Kusto Query Language, there is an option to combine multiple queries and display them as a dashboard through Azure Workbooks. Earlier this option was provided by the View Designer, which has now been replaced by the enhanced version called Azure Workbooks.

There are multiple options which can be utilized to create dashboards with Azure Workbooks, and below we will go through a few of the options which will help us in creating our customized workbooks.

In order to get started with Workbook – Navigate to the log analytics workspace – Choose Workbooks

Click on New

We get the default summary of our query from our workspace with the below piechart view.

If we want to go with our own query, we can remove the default query and select Add. Here in Add we have multiple options like the below, out of which Add Group seems very interesting. With Add Group we have the ability to add multiple queries and group them in a single workbook.

At the top of this group we have an option to add text which visualizes the workbook name and the details.

After selecting the group, we now have the option to add queries into the group.

Going into the advanced settings, we have options to display chart titles specific to this query.

In the Style tab we have some options to modify the HTML settings. By default, one query fits per row, and if we need to add more queries per row, we need to adjust the width settings.
In the below case I have set the width to 50, since I am trying to add 2 queries in a row. It is very important to note here that adding 3 columns and making them visible works fine only in Azure Dashboards. If we try to view them from Azure Workbooks, 3 queries in a row do not fit, and we do not have an option to modify the HTML editor at this moment.

I have added another query which lets us know the status of the systems that have reported a heartbeat in the last 30 minutes. In the below case, since I have only one system for the demo, it shows only 1 system.
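
A minimal sketch of such a heartbeat query (the Heartbeat table is populated automatically by the monitoring agent, and the 30-minute window is just an assumption for the demo):

Heartbeat
| where TimeGenerated > ago(30m)
| summarize LastHeartbeat = max(TimeGenerated) by Computer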

The moment we group them and display them, it shows the view as below. By adding multiple queries based on our requirements, it becomes easier to create the dashboards.

Further to this, we have a lot of options for the visualization of the display based on the metric units; we can go through a few of them.
For instance, the below options are available to set the visualization.

We can reduce the size of the visualization and we have 5 options.

Further, in the chart settings we have an option to define the column and the units.

In the series settings we have an option to change the colour and add a custom display label.

To interpret further, I have chosen Graph, which is very interesting. When entering the graph settings, we have the below options in the node format settings. This helps us choose which fields to display on these images in the dashboard view.

We have more tweaking options in the layout settings. The hive clusters look really nice, like a honeycomb, in the visualization. And there is a "group by field" category to select from the available fields.

Now we have the category to choose based on the coloring type. Ideally this is very good for categorizing healthy and unhealthy systems: it groups the healthy and unhealthy systems separately and finally displays them on the dashboard.

This blog gives an overview of how to visualize, group and create Azure Workbooks from a Log Analytics workspace. Azure Log Analytics and Azure Workbooks make it much easier to monitor modern Windows 10 and Linux devices. This facility can easily be leveraged in a direct cloud deployment model, without the need to install, configure and maintain a local monitoring solution.

Thanks & Regards

Sathish Veerapandian

Use Azure Log Analytics to notify critical events occurring on Microsoft Teams Room Systems

In the previous post we had an overview of how to create an Azure Log Analytics workspace and configure it to collect data from Windows systems. Once the information is ingested into the workspace, we have the choice to create alerts and notify the responsible team based on various signal logics, which is useful for monitoring these devices.

These alerts are scoped to each Log Analytics workspace. It is a smart thought to isolate the services, group them into individual workspaces and create separate alerts for critical events happening on these monitored devices.

In order to create the alerts, navigate to Alerts in the same workspace – Click on New Alert Rule.

Navigate to the signal logic and choose one. There are multiple signal logics available, and we should review whether any other interesting ones that suit our requirements can be added here.

Now we have the required critical signals based on which the alert needs to be triggered. Usually the signal type will be from the collected events and the performance counters. In our scenario we could go with some default events from the list and also a custom log search.

Device Restart Alert:

In our example, for the default one, I chose the heartbeat signal logic from the existing list (useful when the device turns off).

Select the required devices – set the operator threshold value to 0 – aggregation 5 minutes and frequency of evaluation 1 minute (the aggregation and evaluation frequency can be chosen based on how often we want to check the heartbeat). In normal cases it is best recommended not to choose a small frequency time range for a large volume of devices; a smaller frequency time period can probably be selected for critical devices alone.
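
The same condition can also be expressed as a custom log search; a minimal sketch (the 5-minute window mirrors the aggregation period chosen above):

Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(5m)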

Disk Threshold Alert:

Similar to the device restart alert, we have a disk threshold alert available by default which can be configured.

It notifies us when the configured space threshold is crossed. Select the resource configured for Teams – select the condition – select the computers – and for the % Free Space object choose the threshold value of 60 percent. The percentage can be altered based on our requirement.

Then we need to select the required object, instance, counter path and source system. In our case we have selected one performance counter, % Free Space. This will alert us when the free disk space crosses 60 percent of the overall capacity.

The chosen aggregation period is 5 minutes and the evaluation frequency is 1 minute. Again, we can change the evaluation frequency for this, probably to twice a day: once in the morning and once in the evening.
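
For reference, the collected counter can also be inspected with a quick query (the LogicalDisk object name here is an assumption based on the standard Windows counters):

Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| summarize AvgFreeSpace = avg(CounterValue) by Computer, InstanceName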

Custom Alerts:

Custom alerts are more intriguing. With custom alerts we should be able to cover most of our alerting scenarios. We have to select the custom log search signal for custom alerts.

Event
| where EventLog == "System"
| where EventLevelName == "Error"
| where RenderedDescription !contains "updatefailed"
| where EventData !has "DCOM"
| project TimeGenerated, Computer, RenderedDescription

As an example, I used the above query to report only the error events, apart from Windows Update and DCOM alerts. We can filter further with the not-contains operator and create custom queries based on our requirements.

When any error message apart from the excluded events comes up on the targeted devices, we will be alerted.

Note there are multiple action types – Email/SMS/Push/Voice, ITSM and Webhook will be the most convenient for us in this case of Skype Room Systems monitoring.

Email – We can send Email/SMS/Push/Voice notifications when the alert is triggered. This is the most convenient and easiest part to start with. It helps us collect all the use cases initially and see which ones are really helpful and which ones are not. Once we devise a strategy from the email alerts, we can probably go with the other alerting mechanisms.

ITSM – We can integrate with an IT service desk management tool to create incidents when these alerts are triggered. Most IT service desk management tools are capable of API integration, especially with Azure AD, and it should be easy to satisfy this requirement.

Webhook – We can configure a push notification to Teams channels when these alerts are triggered. Probably a dedicated Teams channel can be created for the first level of the NOC monitoring team, and then the webhook can be configured to send the critical event alerts to that Teams channel.

Now for the email alert – I created an action group and chose the action type Email/SMS/Push/Voice.

By default there are no action groups created, so an action group must be created and targeted at the NOC team's email group.

I added the email address for notification. There are other options as well, like sending SMS and voice calls, which could also be leveraged.

We do have an option to modify the email subject based on the alert details.

Finally, we name the alert, set the severity, and enable and create it.

We have the option to see all the configured rules.

After configuration, we can see the statistical dashboards, which provide us a summary of the total alerts that have been triggered and their status.

We receive the email alerts when the free disk space exceeds the configured level of 60 percent.

Similarly when the device was turned off, the configured heartbeat alert triggered an email to the recipient.

In a similar way, we can create multiple required alerts for critical events.

At this moment we have the option to create alerts for every action type, which can be targeted at all computers, and they are charged individually at a very nominal price. So for multiple alerting types we need to create multiple action types. These alerts are based purely on the collected logs present in the Azure Log Analytics workspace; if we try to alert on details which are not present in the collected logs, we will not be able to create the alerts. The Azure Log Analytics alerting mechanisms provide a great way to be alerted on the critical events happening across the monitored systems.

Thanks in Advance

Sathish Veerapandian

Microsoft Teams – Configure Azure Log Analytics for Monitoring Teams Room Systems

Microsoft Teams being a leading collaboration solution, there are lots of smart devices equipped with Microsoft Teams, providing smart meeting room systems with modern cameras, microphones and smart display screens. The best part of the Teams application is that it can function well on all ranges of devices with support for basic hardware, running on a Windows 10 operating system.

While there are numerous approaches to monitoring Microsoft Teams Room Systems, in this article we will go through the steps to monitor them through Azure Log Analytics. Like other applications, the Microsoft Teams app running on room devices writes all its events to the event logs. Through the Microsoft Monitoring Agent, these events can be collected into Azure Log Analytics.

Prerequisites:

  1. An Azure subscription to configure the Log Analytics workspace.
  2. A Teams meeting room system with internet connectivity. There are other methods to collect the logs without internet access, through the Log Analytics gateway; in this approach we are going with the direct agent method.
  3. The Teams devices must be running a Windows operating system in all meeting rooms, in kiosk mode or probably in full operating system mode, based on the requirements.

Create Azure Log Analytics and integrate it with the Microsoft Windows agent.

Log in to the Log Analytics workspace.

Create a new Log Analytics workspace. We can use an existing workspace as well; it purely depends on the requirement.

Choose the required subscription.

Once the Log Analytics workspace is created, we need to go ahead and download the Windows agent. The agent can be downloaded by navigating to Log Analytics Workspaces – Workspace name – Advanced Settings – Connected Sources – Windows Servers – Download the Windows agent.

Install the MMA agent on the Teams/Skype Room System device –

Select only the option to connect the agent to Azure Log Analytics (OMS), because in our case we are not monitoring them via the local monitoring agent, SCOM.

Enter the workspace ID and the key from the Log Analytics workspace and select Azure Commercial. If the network goes through a proxy, then click Advanced and provide the proxy configuration. If the device does not have a connection to the internet, the agent cannot send the logs to the Log Analytics workspace.

Once installed, we can see the Microsoft Monitoring Agent present in the Control Panel.

Once opened, we can see Azure Log Analytics (OMS) and check that the status is successful.

On editing the workspace we can see the workspace ID and the Workspace Key.

Usually it takes a while for the logs to be collected from the Microsoft Monitoring Agent.

Configure the required logs to monitor:

Once the agent is connected to the Log Analytics workspace, we need to configure the data sources so that the workspace can start collecting the data required for monitoring the Teams Room Systems.

In our case, for monitoring the Teams device, we need to collect the Teams app logs and a few hardware-related events. We will look into configuring them now.

Note: We have to be very selective here and collect only the required events, since dumping logs into Azure Log Analytics involves cost, and it is best recommended to choose only the required events.

In order to collect the logs, navigate to Advanced Settings – choose Data Sources – select Windows Event Logs.

The key primary log that needs to be collected is Skype Room System (we have to type the name completely and click Add, as this log entry will not autocomplete).

There are a few more event logs that can be added, but I added these logs, which should help in monitoring the Teams room devices.

Having added the Windows event logs, we can navigate to the Windows performance counters; there are a few counters which can be added and are useful for notifying us when the devices are having any of the below issues.
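
As an assumption of what suits room-system monitoring (these are standard Windows counter paths, not a list from the product documentation), a few useful counters to collect would be:

LogicalDisk(*)\% Free Space
Memory(*)\Available MBytes
Processor(_Total)\% Processor Time
System(*)\System Up Time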

Querying the logs:

Once we have configured the required log sources, it is time for us to run some queries and see if the logs are being collected. The Azure Log Analytics workspace query experience is based on the Kusto Query Language (KQL).

There are default queries like "Computers availability today", "List heartbeats" and "Unavailable computers".

Selecting the default template "List heartbeats" and clicking Run, the below results are obtained.

To see only the Application Event logs we can run the below query

search * | where Type == "Event" | where EventLog == "Application"

To see only the Errors generated in the application event logs

search * | where Type == "Event" | where EventLog == "Application" | where EventLevelName == "Error"

To drill down more and look into the perfmon logs, I ran the below query to check the system uptime.

Perf | where CounterName == "System Up Time" | summarize avg(CounterValue) by bin(TimeGenerated, 1h)
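
Similarly, the Skype Room System event log configured earlier can be queried directly (the log name matches the data source added above):

Event
| where EventLog == "Skype Room System"
| sort by TimeGenerated desc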

There are a lot of queries which can be built from these collected events. Having collected these events, we can configure them to be displayed as dashboards and build alerting mechanisms for the critical events. In the next post we will have a look at how to configure the alerting for critical events happening on the meeting room devices.

Thanks & Regards

Sathish Veerapandian
