Category Archives: Azure

Use Azure Log Analytics to notify critical events occurring on Microsoft Teams Room Systems

In the previous post we walked through creating an Azure Log Analytics workspace and configuring it to collect data from Windows systems. Once the data is ingested into the workspace, we can create alerts and notify the responsible team based on various signal logic, which is useful for monitoring these devices.

These alerts are scoped to each Log Analytics workspace. It is a good idea to isolate the services, group them into individual workspaces, and create separate alerts for critical events occurring on the monitored devices.

To create the alerts, navigate to Alerts in the same workspace and click New Alert Rule.

Navigate to the signal logic and choose one. There are several available, and it is worth reviewing whether any others suit our requirement.

Now we have the critical signals on which the alerts need to be triggered. Usually the signal type will be either collected events or performance counters. In our scenario we can use some default signals from the list as well as a custom log search.

Device Restart Alert:

For the default example we chose the existing Heartbeat signal logic, which is useful for detecting when a device turns off.

Select the required devices, set the operator threshold value to 0, the aggregation period to 5 minutes, and the frequency of evaluation to 1 minute. (The aggregation and evaluation frequency can be chosen based on how often we want to check the heartbeat.) In general it is best not to choose a small frequency for a large volume of devices; a smaller frequency can be reserved for critical devices alone.
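The same heartbeat condition can also be expressed as a custom log search if finer control is needed. A minimal sketch against the standard Log Analytics `Heartbeat` table, using the same 5-minute window chosen above:

```kusto
// Computers that have not sent a heartbeat in the last 5 minutes
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(5m)
```

The alert rule would then fire when the number of results is greater than 0.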

Disk Threshold Alert:

Like the device restart alert, a disk threshold alert is available by default and can be configured.

It notifies us when disk usage exceeds the configured level. Select the resource configured for Teams, select the condition, select the computers, and set the signal to fire whenever the % Free Space counter falls below the chosen value. In our case the alert is raised when used disk space crosses 60 percent of overall capacity, i.e. free space drops below 40 percent; the percentage can be altered based on our requirement.

Then we select the required object, instance, counter path and source system. Here we have selected one performance counter, % Free Space.

The chosen aggregation period is 5 minutes with an evaluation frequency of 1 minute. Again, we can reduce the evaluation frequency for this alert, perhaps to twice a day, once in the morning and once in the evening.
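The same disk condition can be written as a custom log search over the `Perf` table. A sketch, assuming the standard performance counter schema and the 40-percent-free threshold described above:

```kusto
// Logical disks reporting less than 40% free space (i.e. more than 60% used)
Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| where InstanceName != "_Total"
| summarize FreeSpacePct = avg(CounterValue) by Computer, InstanceName
| where FreeSpacePct < 40
```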

Custom Alerts:

Custom alerts are more interesting: with them we can cover most of our alerting requirements. For custom alerts we select the Custom log search signal.

Event
| where EventLog == "System"
| where EventLevelName == "Error"
| where RenderedDescription !contains "updatefailed"
| where Source != "DCOM"
| project TimeGenerated, Computer, RenderedDescription

The example above uses this query to report only error events, excluding Windows Update and DCOM alerts. We can filter further with operators such as !contains and build custom queries based on our requirement.

When any error message apart from the excluded events comes up on the targeted devices, we will be alerted.

Note that there are multiple action types; Email/SMS/Push/Voice, ITSM and Webhook will be the most convenient for us when monitoring Teams room systems.

Email – We can send Email/SMS/Push/Voice notifications when the alert is triggered. This is the easiest option to start with: it helps us collect all the use cases initially and see which alerts are genuinely helpful and which are not. Once we devise a strategy from the email alerts, we can move on to the other alerting mechanisms.

ITSM – We can integrate with an IT service desk management tool to create incidents when these alerts are triggered. Most service desk tools are capable of API integration, especially with Azure AD, which makes this requirement easy to satisfy.

Webhook – We can configure a push notification to a Teams channel when these alerts are triggered. A dedicated Teams channel can be created for the first-level NOC monitoring team, and the webhook configured to post critical event alerts to it.
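As a sketch of the webhook side, an incoming webhook added to the Teams channel exposes a URL that accepts a simple JSON payload. The URL below is a placeholder; use the one generated when the incoming webhook connector is added to the channel:

```powershell
# Post a simple alert message to a Teams channel via an incoming webhook.
# $webhookUrl is a placeholder - paste the URL generated by the connector.
$webhookUrl = "https://outlook.office.com/webhook/<your-webhook-id>"

$body = @{ text = "Critical alert raised on a monitored Teams room system." } | ConvertTo-Json

Invoke-RestMethod -Uri $webhookUrl -Method Post -ContentType 'application/json' -Body $body
```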

For the email alert we created an action group and chose the action type Email/SMS/Push/Voice.

By default there are no action groups created, so one must be created and targeted at the NOC team email group.

We added the email address for notification. There are other options as well, such as SMS and Voice, which could also be leveraged.

We do have an option to modify the email subject based on the alert details.

Finally we name the alert, set the severity, enable it and create it.

We have the option to see all the  configured rules.

After configuration, we can see statistical dashboards which summarize the total alerts triggered and their status.

We receive email alerts when disk usage exceeds the configured level of 60 percent.

Similarly when the device was turned off, the configured heartbeat alert triggered an email to the recipient.

In the same way we can create multiple alerts for other critical events.

At this moment we can create alerts for every action type, targeted at all computers; each alert is charged individually at a very nominal price, so for multiple alerting types we need to create multiple action types. These alerts are based purely on the logs collected in the Azure Log Analytics workspace; if the details we want to alert on are not present in the collected logs, we cannot create the alerts. Overall, the Azure Log Analytics alerting mechanism provides a great way to surface critical events occurring across the monitored systems.

Thanks in Advance

Sathish Veerapandian

Microsoft Intune – Configure customized role based access control in a redistributed IT environment.

In huge enterprise-scale deployments there will be various teams handling services with multiple administrator accounts. These administrators must be furnished with accounts that are appropriate to their boundaries. Since Microsoft Intune manages devices, apps and Office 365 administration, there is a high likelihood that it will be used across various departments, applications, devices and locations. With its broad feature set, most organizations are now moving to a managed tenant with Microsoft Intune.

For instance, multiple app protection policies, device compliance policies, app configuration policies, etc. may be created for different services: one for meeting room management, another for BYOD devices, and another for corporate Windows devices. In these situations we need to create customized role-based access control for each set of administrators.

With the default Intune admin role assignments we cannot provide custom permissions, so we need to take a slightly different approach to deploy in a decentralized environment.

We shall consider a scenario with two different cases of leveraging Microsoft Intune as a management authority: one for meeting rooms and another for managing Office 365 and line-of-business apps on BYOD devices.

Ideally in this scenario we must have two sets of policies and Intune services, with different role sets and policy visibility for the administrators.

Below policies for BYOD devices were created –

App Protection Policies

App Configuration Policies

Compliance Policies

Below Policies for Meeting rooms were created –

App Protection Policies

App Configuration Policies

Compliance Policies

Having the policies created now we need to segregate them by tagging to associated admin groups, device groups and scope tags.

Created Admin Groups –

Group 1: MRM Admins – To manage only the Meeting room  intune policies.

Group 2: Pilot Mobile Admins – To manage only the Android/iOS Intune device policies.

Created Device Groups –

Group 1: Meeting Rooms – Created to hold the meeting room devices and service accounts. This group is required so it can be scoped in the custom RBAC role we are creating for meeting room systems and their service accounts.

Group 2: IntuneMobileDevices – Created to hold the BYOD user accounts. This group is required so it can be scoped in the custom RBAC role we are creating for BYOD users.

Created Scope Tags –

Scope Tag 1: Mobile-Admin – To tag all the BYOD mobile iOS/Android policies, users and devices. We added the group created for Intune users. One important point to note is that every new user who needs to be covered by the Intune policies must be added to this group.

The policies can be tagged to their related scope tags from the properties page.

Scope Tag 2: SRS-Admin – To tag all the meeting room devices and the service accounts.

In the same way we did for BYOD devices meeting room policies were tagged to this scope tag.

Scope tags are essential: they are the basic building blocks used to segregate roles, permissions, devices and users. In this case we created two scope tags and associated them with their corresponding policies, users, devices and admins.

Created 2 Custom RBAC roles –

Role 1: Meeting Room Admin – Clone copy of policy and profile manager role scoped only to MRM admins group. Tagged this role to SRS-Admin scope tag.

Role 2: Mobile Administrators – Clone copy of policy and profile manager role scoped only to Pilot Mobile Admins  admins group. Tagged this role to Mobile-Admin Scope tag.

The default RBAC roles provide visibility into all policies, hence we need to create new roles. Here we created two clones of the default Policy and Profile Manager role. Tagging these two roles to the appropriate scope tags is very important, as scope tags are the component that separates the roles based on the policies and users defined in them.

Finally created few policies and tagged them separately for Mobile devices and meeting rooms.

Admin log in experience:

Policy visibility from a Global Admin account, where we can see all the policies in the Intune portal.

When logging in as the mobile admin, only the BYOD mobile device policies associated with him are visible.

Only BYOD device compliance policies are present.

In the same way, when logging in as the SRS admin, only the meeting room policies associated with him are visible.

Only meeting room app protection policies are found.

Caveats :

  1. A custom RBAC role requires an EMS license to be assigned to the admin accounts. I attempted this with unlicensed admin accounts and it did not work.
  2. Once the policy is applied to admin accounts, it takes almost 24 hours to come into effect.

We can utilize role-based access control in combination with scope tags to ensure that privileged administrator accounts have the correct access and visibility into the right Intune objects. Scope tags determine which objects administrators can see in their admin portal.

Thanks & Regards

Sathish Veerapandian

Microsoft Teams – Enforce Multifactor Authentication on guest accounts

Following the Ignite sessions last month on Microsoft Teams, there are security enhancements that can be enabled to add extra protection in any organization.

Inviting external guest users to Teams channels has been a welcome option for all of us, increasing communication and boosting productivity. However, a few security guidelines need to be followed to ensure that our data remains secure even when shared outside the boundary. For instance, a compromised guest account that is a member of a finance team would become a major security incident in any organization.

This article outlines the steps that can be carried over to enhance the security on Microsoft Teams guest accounts by enforcing the multi factor authentication.

Below are the steps to enforce the MFA on guest accounts:

First, create a dynamic security group that targets the guest accounts.

Log in to the Azure AD tenant with admin privileges, go to Groups, create a new group, make it a security group, and set the membership type to Dynamic User.

Now we add a dynamic query where the property is userType and the value is Guest.

Once done populate the rule syntax and save them.
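The populated rule syntax for a group that captures every guest account is a single expression:

```
(user.userType -eq "Guest")
```

Azure AD evaluates this rule continuously, so membership stays current without manual maintenance.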

After some time, the guest users in our Azure AD tenant will become members of this group. Since it is a dynamic query, all new guest accounts will be added automatically.

Create conditional access policy for guest accounts:

Now we need to create a conditional access policy for the Microsoft Teams guest accounts.

Navigate to enterprise applications – click on conditional access.

Now we need to target the dynamic group on this conditional access policy.

Under cloud apps select Microsoft Teams; it is also better to select SharePoint Online, which enforces MFA for SharePoint guest users as well.

Under conditions we select only locations; this can be adjusted further based on business prerequisites.

In the access control we are selecting only require MFA and the IT policy.

Now MFA is enforced on the guest accounts, and we can see the effect of this configuration from the invited user's side.

Experience of the guest users enforced with MFA:

To simulate this behavior, we add one guest user to a Teams channel.

After that, the invited user receives a welcome email, which is the usual behavior for any invited Azure AD guest user account.

When clicking to login the user will be prompted to register and enroll in MFA.

User will be prompted to enter the mobile number in the invited tenant for MFA and needs to complete the initial authentication process.

If we have enabled the IT policy user will be prompted to read and accept the IT policy.

Finally the user is logged in with the guest account and able to participate on the invited team through a secured way of authentication.

With these minimal conditional access steps we achieve overall better security for Microsoft Teams.

Use Azure Automation accounts, Run Books and Schedules to start stop VMs automatically running in Azure

Using this article, we can start/stop VMs during off-business hours. This greatly benefits customers in cost optimization and removes the overhead of performing this action manually. We need to make sure that the selected VMs are in the same subscription where the automation account and schedule are created, selecting only the required VMs and excluding the others.

Login to Azure portal

Go to ALL Services and Type Automation Account and Create Automation Account.

Under Process Automation, click Runbooks to open the list of runbooks.

Click on + Create a runbook button to create a new runbook

Provide a Name and set the Runbook Type to PowerShell for the new runbook, then click the Create button.

The runbook is empty; paste in the code below, then Save and Publish.

Note: You can find many PowerShell scripts for this task in the GitHub and TechNet galleries. You can use the one below, taken from GitHub.

#PowerShell Script to Start and Stop VM's in Azure on Scheduled Intervals using Azure Automation Account#
Param(
[Parameter(Mandatory=$true,HelpMessage="Enter the value for Action. Values can be either stop or start")][String]$Action,
[Parameter(Mandatory=$false,HelpMessage="Enter the value for WhatIf. Values can be either true or false")][bool]$WhatIf = $false,
[Parameter(Mandatory=$false,HelpMessage="Enter the VMs separated by comma(,)")][string]$VMList
)

function ValidateVMList ($FilterVMList)
{
[boolean] $ISexists = $false
[string[]] $invalidvm=@()
$ExAzureVMList=@()

foreach($filtervm in $FilterVMList) 
{
    $currentVM = Get-AzureRmVM | where Name -Like $filtervm.Trim()  -ErrorAction SilentlyContinue

    if ($currentVM.Count -ge 1)
    {
        $ExAzureVMList+= @{Name = $currentVM.Name; Location = $currentVM.Location; ResourceGroupName = $currentVM.ResourceGroupName; Type = "ResourceManager"}
        $ISexists = $true
    }
    elseif($ISexists -eq $false)
    {
        $invalidvm = $invalidvm+$filtervm                
    }
}

if($invalidvm -ne $null)
{
    Write-Output "Runbook Execution Stopped! Invalid VM Name(s) in the VM list: $($invalidvm) "
    Write-Warning "Runbook Execution Stopped! Invalid VM Name(s) in the VM list: $($invalidvm) "
    exit
}
else
{
    return $ExAzureVMList
}
}

function CheckExcludeVM ($FilterVMList)
{
[boolean] $ISexists = $false
[string[]] $invalidvm=@()
$ExAzureVMList=@()
foreach($filtervm in $FilterVMList) 
{
    $currentVM = Get-AzureRmVM | where Name -Like $filtervm.Trim()  -ErrorAction SilentlyContinue
    if ($currentVM.Count -ge 1)
    {
        $ExAzureVMList+=$currentVM.Name
        $ISexists = $true
    }
    elseif($ISexists -eq $false)
    {
        $invalidvm = $invalidvm+$filtervm
    }

}
if($invalidvm -ne $null)
{
    Write-Output "Runbook Execution Stopped! Invalid VM Name(s) in the exclude list: $($invalidvm) "
    Write-Warning "Runbook Execution Stopped! Invalid VM Name(s) in the exclude list: $($invalidvm) "
    exit
}
else
{
    Write-Output "Exclude VM's validation completed..."
}
}
#-----L O G I N - A U T H E N T I C A T I O N-----
$connectionName = "AzureRunAsConnection"
try
{
# Get the connection "AzureRunAsConnection "
$servicePrincipalConnection=Get-AutomationConnection -Name $connectionName
"Logging in to Azure..."
Add-AzureRmAccount `
    -ServicePrincipal `
    -TenantId $servicePrincipalConnection.TenantId `
    -ApplicationId $servicePrincipalConnection.ApplicationId `
    -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch
{
if (!$servicePrincipalConnection)
{
$ErrorMessage = "Connection $connectionName not found."
throw $ErrorMessage
} else{
Write-Error -Message $_.Exception
throw $_.Exception
}
}
#---------Read all the input variables---------------
$StartResourceGroupNames = Get-AutomationVariable -Name 'RG-Of-VMs-To-Start'
$StopResourceGroupNames = Get-AutomationVariable -Name 'RG-Of-VMs-To-Stop'
$ExcludeVMNames = Get-AutomationVariable -Name 'Excluded-List-Of-VMs'

try
{
$Action = $Action.Trim().ToLower()
    if(!($Action -eq "start" -or $Action -eq "stop"))
    {
        Write-Output "`$Action parameter value is : $($Action). Value should be either start or stop."
        Write-Output "Completed the runbook execution..."
        exit
    }            
    Write-Output "Runbook Execution Started..."
    [string[]] $VMfilterList = $ExcludeVMNames -split ","
    #If user gives the VM list comma separated....
    $AzurVMlist = Get-AutomationVariable -Name $VMList
    if(($AzurVMlist))
    {
        Write-Output "list of  all VMs = $($AzurVMlist)"
        [string[]] $AzVMList = $AzurVMlist -split ","        
    }
    if($Action -eq "stop")
    {
        Write-Output "List of  all ResourceGroups = $($StopResourceGroupNames)"
        [string[]] $VMRGList = $StopResourceGroupNames -split ","
    }

    if($Action -eq "start")
    {
        Write-Output "List of  all ResourceGroups = $($StartResourceGroupNames)"
        [string[]] $VMRGList = $StartResourceGroupNames -split ","
    }

    #Validate the Exclude List VM's and stop the execution if the list contains any invalid VM
    if (([string]::IsNullOrEmpty($ExcludeVMNames) -ne $true) -and ($ExcludeVMNames -ne "none"))
    {
        Write-Output "Values exist on the VM's Exclude list. Checking resources against this list..."
        CheckExcludeVM -FilterVMList $VMfilterList 
    } 
    $AzureVMListTemp = $null
    $AzureVMList=@()

    if ($AzVMList -ne $null)
    {
        ##Action to be taken based on VM List and not on Resource group.
        ##Validating the VM List.
        Write-Output "VM List is given to take action (Exclude list will be ignored)..."
        $AzureVMList = ValidateVMList -FilterVMList $AzVMList 
    } 
    else
    {

        ##Getting VM Details based on RG List or Subscription
        if (($VMRGList -ne $null) -and ($VMRGList -ne "*"))
        {
            foreach($Resource in $VMRGList)
            {
                Write-Output "Validating the resource group name ($($Resource))"  
                $checkRGname = Get-AzureRmResourceGroup -Name $Resource.Trim() -ev notPresent -ea 0  

                if ($checkRGname -eq $null)
                {
                    Write-Warning "$($Resource) is not a valid Resource Group Name. Please verify your input"
                }
                else
                {    
                    # Get resource manager VM resources in group and record target state for each in table
                    $taggedRMVMs =  Get-AzureRmVM | ? { $_.ResourceGroupName -eq $Resource} 
                     foreach($vmResource in $taggedRMVMs)
                    {
                        if ($vmResource.ResourceGroupName -Like $Resource)
                        {
                            $AzureVMList += @{Name = $vmResource.Name; ResourceGroupName = $vmResource.ResourceGroupName; Type = "ResourceManager"}
                        }
                    }
                }
            }
        } 
        else
        {
            Write-Output "Getting all the VM's from the subscription..."  
            $ResourceGroups = Get-AzureRmResourceGroup 

            foreach ($ResourceGroup in $ResourceGroups)
            {    

                # Get resource manager VM resources in group and record target state for each in table
                $taggedRMVMs = Get-AzureRmVM | ? { $_.ResourceGroupName -eq $ResourceGroup.ResourceGroupName}

                foreach($vmResource in $taggedRMVMs)
                {
                    Write-Output "RG : $($vmResource.ResourceGroupName) , ARM VM $($vmResource.Name)"
                    $AzureVMList += @{Name = $vmResource.Name; ResourceGroupName = $vmResource.ResourceGroupName; Type = "ResourceManager"}
                }
            }

        }
    }

    $ActualAzureVMList=@()

    if($AzVMList -ne $null)
    {
        $ActualAzureVMList = $AzureVMList
    }
    #Check if exclude vm list has wildcard       
    elseif(($VMfilterList -ne $null) -and ($VMfilterList -ne "none"))
    {
        $ExAzureVMList = ValidateVMList -FilterVMList $VMfilterList 

        foreach($VM in $AzureVMList)
        {  
            ##Checking Vm in excluded list                         
            if($ExAzureVMList.Name -notcontains ($($VM.Name)))
            {
                $ActualAzureVMList+=$VM
            }
        }
    }
    else
    {
        $ActualAzureVMList = $AzureVMList
    }

    Write-Output "The current action is $($Action)"

    $ActualVMListOutput=@()

    if($WhatIf -eq $false)
    {   
        $AzureVMListARM=@()


        # Store the ARM VM's
        $AzureVMListARM = $ActualAzureVMList | Where-Object {$_.Type -eq "ResourceManager"}


        # process the ARM VM's
        if($AzureVMListARM -ne $null)
        {
            foreach($VM in $AzureVMListARM)
            {  
                $ActualVMListOutput = $ActualVMListOutput + $VM.Name + " "
                #ScheduleSnoozeAction -VMObject $VM -Action $Action

                #------------------------
                try
                {          
                    Write-Output "VM action is : $($Action)"
                    Write-OutPut $VM.ResourceGroupName

                    $VMState = Get-AzureRmVM -ResourceGroupName $VM.ResourceGroupName -Name $VM.Name -status | Where-Object { $_.Name -eq  $VM.Name }
                    if ($Action.Trim().ToLower() -eq "stop")
                    {
                        Write-Output "Stopping the VM : $($VM.Name)"
                        Write-Output "Resource Group is : $($VM.ResourceGroupName)"


                        if($VMState.Statuses[1].DisplayStatus -ne "VM running")
                        {
                            Write-Output "VM is not in running state, No actions performed"
                        }
                        else
                        {
                            $Status = Stop-AzureRmVM -Name $VM.Name -ResourceGroupName $VM.ResourceGroupName -Force

                            if($Status -eq $null)
                            {
                                Write-Output "Error occured while stopping the Virtual Machine."
                            }
                            else
                            {
                                Write-Output "Successfully stopped the VM $($VM.Name)"
                            }            
                        }

                    }
                    elseif($Action.Trim().ToLower() -eq "start")
                    {
                        Write-Output "Starting the VM : $($VM.Name)"
                        Write-Output "Resource Group is : $($VM.ResourceGroupName)"

                        if($VMState.Statuses[1].DisplayStatus -eq "VM running")
                        {
                            Write-Output "VM already Running, No actions performed"
                        }
                        else
                        {
                            $Status = Start-AzureRmVM -Name $VM.Name -ResourceGroupName $VM.ResourceGroupName

                            if($Status -eq $null)
                            {
                                Write-Output "Error occured while starting the Virtual Machine $($VM.Name)"
                            }
                            else
                            {
                                Write-Output "Successfully started the VM $($VM.Name)"
                            }
                        }
                    }      

                }
                catch
                {
                    Write-Output "Error Occurred..."
                    Write-Output $_.Exception
                }
                #--------------------------
            }
            Write-Output "Completed the $($Action) action on the following ARM VMs: $($ActualVMListOutput)"
        }


    }
    elseif($WhatIf -eq $true)
    {
        Write-Output "WhatIf parameter is set to True..."
        Write-Output "When 'WhatIf' is set to TRUE, runbook provides a list of Azure Resources (e.g. VMs), that will be impacted if you choose to deploy this runbook."
        Write-Output "No action will be taken at this time..."
        Write-Output $($ActualAzureVMList) 
    }
    Write-Output "Runbook Execution Completed..."
}
catch
{
    $ex = $_.Exception
    Write-Output $_.Exception
}

Now, in the Automation Account under Shared Resources, click Variables and add these variables. [Note: while creating the variables you have to provide some value; you can delete the value later if required.]


Explanation of these variables-

RG-Of-VMs-To-Start: ResourceGroup that contains VMs to be started. Must be in the same subscription that the Azure Automation Run-As account has permission to manage.

RG-Of-VMs-To-Stop: ResourceGroup that contains VMs to be stopped. Must be in the same subscription that the Azure Automation Run-As account has permission to manage.

Excluded-List-Of-VMs: VM names to be excluded from being started/Stopped

Note: These Variables are defined in the PowerShell Code and should not be changed if used by default.

VMstoStartStop: Provide the list of the VMs to be started or stopped.
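The variables above can also be created from PowerShell instead of the portal. A sketch using the same AzureRM automation module the runbook relies on; the resource group and automation account names, and the example values, are placeholders:

```powershell
# Placeholders: adjust resource group, automation account names and values
$rg      = "MyResourceGroup"
$account = "MyAutomationAccount"

# Variable names must match those read by the runbook via Get-AutomationVariable
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "RG-Of-VMs-To-Start" -Value "StartRG" -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "RG-Of-VMs-To-Stop" -Value "StopRG" -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "Excluded-List-Of-VMs" -Value "none" -Encrypted $false
New-AzureRmAutomationVariable -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "VMstoStartStop" -Value "VM1,VM2" -Encrypted $false
```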

Now we will create schedules for starting and stopping the VMs. Under Shared Resources, click Schedules, then click + Add a schedule; provide a name, time and recurrence frequency, and click Create.

Similarly, create a schedule for Stop_VM.


Now we will attach these schedules with the runbook. Go to Runbook and then open the runbook, click schedules and then + Add a schedule

Link Start_VM and then provide the parameter values. ACTION can be start or stop. In VMLIST provide the variable name "VMstoStartStop", which contains the VM names. Click Create.

Similarly, attach Stop_VM with the Action value stop and the VMList value VMstoStartStop.
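Linking the schedules can also be scripted. A sketch with the AzureRM automation cmdlets; the resource group, account and runbook names are placeholders:

```powershell
# Link the Start_VM and Stop_VM schedules to the runbook with their parameters
# (resource group, account and runbook names below are placeholders)
$rg      = "MyResourceGroup"
$account = "MyAutomationAccount"
$runbook = "StartStopVM"

Register-AzureRmAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -RunbookName $runbook -ScheduleName "Start_VM" `
    -Parameters @{ Action = "start"; VMList = "VMstoStartStop" }

Register-AzureRmAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -RunbookName $runbook -ScheduleName "Stop_VM" `
    -Parameters @{ Action = "stop"; VMList = "VMstoStartStop" }
```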

So now we have two schedules, Start_VM and Stop_VM, which run at the defined times.

Now your runbook will execute the start of VM and stop of VM as per the schedule attached. The results of runbook execution can be seen under Process Automation –> Jobs.

This automation removes the hassle of maintaining and performing the task manually on the admin side.

Extend local AD extension attributes to Azure AD in a non-hybrid exchange online only environment

There might be a scenario where the environment has Azure AD synced users from the local Active Directory, while mailboxes are created directly in Exchange Online with no hybrid configured from the start, as is typical for new businesses.

Developers often consume the local AD extension attributes to customize the login experience for different business units in their applications, which works fine in fully on-premises environments.

If Exchange is installed in the environment, the Active Directory schema is extended to include the user extension attributes in the Exchange mailbox properties.

Another option is to use the Exchange Server install media to extend only the local Active Directory schema. This is usually not recommended: it adds the Exchange attributes to the local Active Directory, after which the attributes could be set and Azure AD Connect configured to sync them to Office 365. This option requires much testing, and there is always risk associated with AD schema changes.

In a hybrid setup these values are populated in Exchange Online via the Exchange hybrid configuration for all users.

In the third scenario, where we do not have an Exchange hybrid and the developer is using Azure AD via the Graph API and expects these values in Azure AD for customization, a better option is to extend these attributes through Azure AD Connect by running it again and selecting only the required AD extension attributes.

Log in to Azure AD Connect with global admin credentials and select Customize synchronization options.

Select directory extension attribute sync.

Here we have the option to choose the local Active Directory attributes. In our case we select the two attributes extensionAttribute7 and extensionAttribute8.

Once done go ahead and click on configure.

Usually these steps are sufficient, but in our case we also performed a directory schema refresh.

Selected the directory for refresh.

We then went to the local Active Directory and populated extensionAttribute8 for one user.
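Populating the attribute can also be done with the ActiveDirectory PowerShell module instead of the GUI tools. A sketch; the sAMAccountName and the value are placeholders:

```powershell
# Populate extensionAttribute8 on a single user (identity and value are placeholders)
Import-Module ActiveDirectory
Set-ADUser -Identity "testuser" -Replace @{extensionAttribute8 = "BusinessUnit-A"}

# Verify the value locally before the next sync cycle
Get-ADUser -Identity "testuser" -Properties extensionAttribute8 |
    Select-Object Name, extensionAttribute8
```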

Once the sync has completed, we can verify that the value is populated in Azure AD via Graph Explorer.

Login to the graph explorer from the below url.

https://developer.microsoft.com/en-us/graph/graph-explorer

We can log in with any valid credentials from the tenant.

We will be asked for admin consent, which should be granted based on the requirement.

Run the below query.

https://graph.microsoft.com/v1.0/me?$select=mail,jobTitle,companyName,onPremisesExtensionAttributes

This reads the on-premises attributes (mail, jobTitle, companyName and onPremisesExtensionAttributes) using the Graph API. You should see extensionAttribute8 under onPremisesExtensionAttributes.

In our case we can see the extensionAttribute8 value, which has been synced and is available in Azure AD.

Using the directory extension option in Azure AD Connect achieves this task in a much simpler way.

Thanks & Regards

Sathish Veerapandian

Configure SendGrid in Microsoft Azure for email campaigns and smtp relay

With Microsoft Azure and SendGrid, sending email campaigns for the organization becomes much simpler. The SMTP relay configuration for developers' applications is hassle-free and more secure. We can have up to two SendGrid subscriptions per Azure account. SendGrid gives a lot of flexibility: the application sending messages can use either the Web API or the regular SMTP relay configuration.

This article outlines the steps carried over to create send grid accounts in Microsoft Azure.
Log in to the Azure portal, search for SendGrid, and create a SendGrid account.

We must select the pricing tier. The good thing is that the F1 tier is free with an Azure subscription and includes 25,000 emails per month, custom API integration, and advanced tracking.

Once created, we must run through a few initial configuration steps.

Once the account is created, we need to authenticate our domain so that SendGrid can send emails on behalf of our registered domain.

We need to add the CNAME records in our DNS portal.

After entering the domain, we have options to use automated security (which rotates the DKIM keys for our domain), a custom return path, and a custom DKIM selector.

Create the associated CNAME records for SMTP and DKIM on our public DNS.

Once the records are published, the domain validation will be successful.
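For illustration, the published records typically look like the following. The exact hostnames and account IDs are generated by SendGrid for each account, so the values below are hypothetical:

```
; Hypothetical SendGrid domain-authentication records for example.com
em1234.example.com.         CNAME  u1234567.wl123.sendgrid.net.
s1._domainkey.example.com.  CNAME  s1.domainkey.u1234567.wl123.sendgrid.net.
s2._domainkey.example.com.  CNAME  s2.domainkey.u1234567.wl123.sendgrid.net.
```

The `em` record handles the custom return path for SPF alignment, while the `_domainkey` records let SendGrid sign outgoing mail with DKIM on our behalf.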

Upon successful verification, navigate to the setup option and choose whether to configure the Web API or SMTP relay. If we go with the latter option, we just need to generate an API key and use it in the PHP file or the API, depending on the workload of the website that requires this service.

Now we have two setup options: Web API or SMTP relay.

Once the integration is completed, we get the option to use the API key and regular SMTP relay in the application.
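As a minimal sketch of the SMTP relay option in Python, the snippet below relays mail through SendGrid's SMTP endpoint. The sender, recipient, and API key values are placeholders; with SendGrid's SMTP relay the login username is always the literal string `apikey` and the password is the API key generated in the portal:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Compose a plain-text email message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_sendgrid(msg, api_key):
    """Relay the message through SendGrid's SMTP endpoint.

    The SMTP username is always the literal string 'apikey';
    the password is the API key created in the SendGrid portal.
    """
    with smtplib.SMTP("smtp.sendgrid.net", 587) as smtp:
        smtp.starttls()          # upgrade to TLS before authenticating
        smtp.login("apikey", api_key)
        smtp.send_message(msg)

# Usage (placeholders):
# msg = build_message("noreply@example.com", "user@example.com",
#                     "Campaign test", "Hello from SendGrid relay")
# send_via_sendgrid(msg, "SG.xxxxxxxx")
```

Because the credentials are just a username and key, the same pattern drops into any framework's mail configuration without code changes on the application side.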

One of the best things is the option to create multiple API keys. This is ideal, as each developer can use their own API key, which is tracked and used only by them.

We also have multiple options to fine-tune the permission levels while creating the API keys. Once an API key is created, it is displayed only once in the portal and cannot be seen again. This is a security measure; the key must be copied and shared with the application developer who will use it to send emails.

Plenty of email-tracking options are available with SendGrid, such as who has opened, clicked, or unsubscribed, which emails bounced, and all of the actions shown below.

The below options are present under Suppressions.

There is a design library and template section, which is very useful for creating email campaigns.

We have decent options to create a well-drafted marketing email, with a sufficient number of modules for a polished campaign.

Before sending the email to the full audience, we have the option to send a test to a few recipients; below is the test email received from SendGrid.

There are a few templates available in the design library that can be utilized for creating new marketing templates.

We have full and partial HTML options and can choose whichever suits our experience.

The statistics overview gives detailed information on email campaign delivery and customer interest.

In the activity view we can see detailed information on the recent mass campaign sent through SendGrid.

On an attempt to send an email from an unverified SendGrid account, we get an immediate bounce-back.

Spam reports are triggered when a recipient marks the email as spam or places it in their spam folder within their email client (Gmail, Yahoo, or other providers).

With Microsoft Azure and very few clicks, we can give organizations a fully capable email campaign and modern SMTP relay solution.
This avoids the major effort of creating a dedicated server on the on-premises network: creating allow lists, configuring permissions, performing timely updates, and securing and maintaining it. We need to ensure that the SendGrid SPF and DKIM records are populated in the DNS portal to stay aligned with the email authentication policy.

Thanks & Regards

Sathish Veerapandian

Migrate onpremise SQL DB to the Azure SQL Database

The Azure data platform also provides Azure SQL Database, a relational database-as-a-service (PaaS) fully managed by Microsoft. This helps developers build their apps very quickly and removes the overhead of database administration.

There are a few methods to migrate an on-premises SQL database to Azure SQL Database; in this article we will look at two of them.

1) Using BACPAC export and import.

2) Data Migration Assistant.

Using BACPAC export and import:

With BACPAC export and import, we first need to export the SQL database from the on-premises SQL instance as a data-tier application.

To export, open SQL Server Management Studio, right-click the desired database, click Tasks, and select Export Data-tier Application.

Save the export in BACPAC format.

The BACPAC export should complete successfully.

This BACPAC file now needs to be imported into Azure SQL Database. Connect to the Azure SQL Database server from SQL Server Management Studio.

Once connected, right-click the Databases folder and select Import Data-tier Application.

Select the exported BACPAC file from the local disk and enter the new database name. Here we need to choose the Microsoft Azure edition, the size of the database, and the service objective for this database.

Having selected the required options, click Import and the import operation will start.

After a successful import, the status shows green and the result is successful.

We can now see the migrated database in Azure SQL Database. Next, we need to provide the username and connection strings to the application owner so they can access their data in Azure SQL Database.
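As an illustration of what gets handed to the application owner, the snippet below assembles an ODBC-style connection string for the migrated database. The server name, database, and credentials here are placeholders; Azure SQL listens on port 1433, uses the full `*.database.windows.net` server name, and requires an encrypted connection:

```python
def azure_sql_connection_string(server, database, user, password):
    """Build an ODBC connection string for an Azure SQL Database.

    Azure SQL requires encryption and exposes the logical server
    as <server>.database.windows.net on TCP port 1433.
    """
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
    )

# Placeholder values; the string can be passed to pyodbc.connect()
# or dropped into the application's configuration.
conn_str = azure_sql_connection_string("contoso-srv", "AppDb", "appuser", "P@ssw0rd")
```

In practice the credentials would come from a secret store rather than being embedded in code.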

Data Migration Assistant:

We can use the Data Migration Assistant with source and target endpoints to migrate the data to Azure SQL PaaS easily.

Below are the readiness prerequisites for migrating SQL data from on-premises to Azure:

  • Download and install the Data Migration Assistant.
  • Enable TCP/IP on the source.
  • Port 1433 must be accessible.
  • The SQL Browser service must be started.
  • SQL remote connections must be enabled.
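The network prerequisites in the list above can be spot-checked with a small script before running the assistant (the host name in the usage comment is a placeholder):

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unresolvable host
        return False

# e.g. port_reachable("sqlserver01.contoso.local", 1433)
```

Running this from the machine that will host the Data Migration Assistant confirms port 1433 is open on the source before starting the assessment.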

Once the Data Migration Assistant is installed, open it and click New.

Here we have two options: assessment or migration. Assessment helps us identify readiness for the migration and will let us know if any connection or prerequisite is missing. Click Assessment.

Select the authentication type and click Next.

Select the desired databases to be migrated to Azure.

Now we have the option to click Start Assessment.

 

Check the assessment report once it is completed.

To migrate, rerun the tool, choose the Migration option, and specify the source server details.

Once authenticated successfully, we have the option to choose the database to be migrated.

Now we need to specify the target Azure SQL PaaS database details and the credentials.

Once authenticated successfully, we can see the schema objects from the source database that we would like to migrate to Azure SQL Database.

We first need to deploy the schema to the target, which migrates the schema ahead of the data.

Once the schema is deployed successfully, we have the option to migrate the data. Click Migrate data.

Choose the list of tables to be migrated.

Once the table migration has been initiated, we can see the progress.

On a successful migration we get the below message.

The result of the online migration is that the data is successfully migrated to the cloud.

Thanks & Regards

Sathish Veerapandian
