Deploying Azure Application Gateway to host multiple Static Web Apps across multiple Subdomains Internally

Azure Application Gateway is a powerful tool in the overall Azure arsenal. It offers flexibility in getting started, with the ability to begin with basic configurations to get a grasp on things while offering advanced functionality from a single pane of glass.

This is a tutorial for anyone else who has stumbled over Microsoft’s documentation and is looking for a walkthrough on how to get one stood up. This article focuses only on the Azure App Gateway, but paints the overall architecture to build for testing purposes. It does not cover deploying the Virtual Networks, Private DNS, Static Website Storage Accounts, etc.

Spend consideration: This architecture runs at about $10/day, or $15/day if you don’t shut the VM off when not using it. A personal mantra of mine is if you’re going to invest in anything, invest in yourself. Ok, enough of that, let’s get started.

When planning out any cloud architecture, Azure or other, always take some time to first document clear requirements to achieve the goal.

Goals:

  • Create an Azure Application Gateway not accessible on the Public Internet
  • Use port 443 with a known domain (I’m using mclaughlin.solutions)
  • Host Static Websites on Storage Accounts
    • The Static Websites are split across multiple subdomains
    • Initially deploy through Azure Portal with transformation into IaC deployments

In order to accomplish the aforementioned goals, take some time to first think about what the final state will look like, and what resources you’ll need. For this Project, we’ll need the following:

  • Domain with pfx cert
  • Virtual Network
  • Azure Private DNS
  • Virtual Machine
  • Azure Bastion
  • Three Storage Accounts
  • Azure App Gateway
  • Patience

There are some sub-resources that will be created, but we don’t need to worry about them. These include Disks (for the VM), Network Interfaces, Private Endpoints, and Public IP Addresses (which we won’t be using but are necessary; we’ll talk more on that later). With the goals and approach thought out, I then like to try and visualize how it will work. Some quick work in Visio and we have a design architecture:

The picture we’re painting is a completely isolated, internally established Development environment. This allows for development without the risk exposure of having our in-progress, untested work be public facing. Secure development for the win!

Focusing on the Azure App Gateway, I walked through deploying the Resource and chose to customize it to my liking afterwards.

Configure the Backend Pools. For each Backend Pool, you’ll have one target that points to your Storage Account using IP or FQDN. I did FQDN for mine. This is what mine looks like:
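Since the goals above include eventually transforming this into IaC, here’s a rough sketch of how those Backend Pools could land in a Bicep definition of the Microsoft.Network/applicationGateways resource. The pool names and FQDNs below are placeholders based on my example, not required values:

```bicep
// Sketch only: one Backend Pool per Storage Account static website endpoint.
// Swap in the actual static website endpoint shown on your Storage Account.
backendAddressPools: [
  {
    name: 'pool-storageaccount1'
    properties: {
      backendAddresses: [
        { fqdn: 'storageaccount1.web.core.windows.net' }
      ]
    }
  }
  // ...repeat for storageaccount2 and storageaccount3
]
```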

Configure the Backend Settings. You’ll do this three times (once for each storage account):

  • Set the Backend protocol to HTTPS and Backend port to 443
  • Set Override with new host name to Yes
  • Set Host name override to Override with specific domain name
  • Set Host name to the Storage Account
  • Set Custom Probe to No; we’ll revisit this later.
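For the IaC-minded, the same Backend Settings can be expressed as a hedged Bicep fragment. The portal’s Override with specific domain name option corresponds to setting hostName explicitly while leaving pickHostNameFromBackendAddress off; names are placeholders:

```bicep
// Sketch only: one Backend Settings entry per Storage Account.
backendHttpSettingsCollection: [
  {
    name: 'settings-storageaccount1'
    properties: {
      protocol: 'Https'
      port: 443
      pickHostNameFromBackendAddress: false
      hostName: 'storageaccount1.web.core.windows.net'
      requestTimeout: 20
    }
  }
  // ...repeat for the other two Storage Accounts
]
```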

Mine looks like this:

Configure the Listener. Because Azure App Gateway deploys as both Public and Private, and because we want it to be Private, we need to create a Listener on the Private IP. This is what it should look like:

  • Frontend IP is Private
  • Use port 443
  • Upload your domain PFX certificate
  • Set the Listener Type to Multi site
  • Set the Host type to Multiple/Wildcard
  • Add in your hostnames/subdomains
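As a Bicep sketch, a private multi-site listener looks roughly like the fragment below. The id references, certificate, and hostnames are placeholders you would wire up to your own private frontend IP configuration, frontend port, and uploaded PFX certificate:

```bicep
// Sketch only: HTTPS multi-site listener bound to the private frontend IP.
httpListeners: [
  {
    name: 'listener-private-443'
    properties: {
      frontendIPConfiguration: { id: privateFrontendIpConfigId } // placeholder
      frontendPort: { id: port443Id }                            // placeholder
      protocol: 'Https'
      sslCertificate: { id: domainPfxCertId }                    // placeholder
      hostNames: [
        'sub1.mclaughlin.solutions' // placeholder subdomains
        'sub2.mclaughlin.solutions'
      ]
    }
  }
]
```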

Mine looks like this:

Configure the Rule:

  • Set the Priority to 1
  • Choose the Listener you created
  • The next set of options is under Backend Targets
  • Set Target Type to Backend Pool
  • Set Backend Target to one of your Backend Targets
  • Set Backend Settings to one of your Backend Settings
  • Click Add multiple targets to create a path-based rule
  • My static website structure is basic: sites 1 and 2 are on Storage Account 1, 3 and 4 are on Storage Account 2, and 5 and 6 are on Storage Account 3. They have basic index.html files that just denote which web app it is and which host it is on. Nothing crazy.
  • Set the path to /1/*
  • Set the Target name to 1
  • Set the Backend Settings to the Storage Account hosting the Static Website in question
  • Set the backend target to the corresponding Backend Pool
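The path-based rule above maps to a urlPathMap plus a requestRoutingRule in Bicep. A rough sketch, with all id references and names as placeholders:

```bicep
// Sketch only: /1/* and /2/* go to Storage Account 1's pool, and so on.
urlPathMaps: [
  {
    name: 'pathmap-static-sites'
    properties: {
      defaultBackendAddressPool: { id: pool1Id }    // placeholder
      defaultBackendHttpSettings: { id: settings1Id } // placeholder
      pathRules: [
        {
          name: 'site1'
          properties: {
            paths: [ '/1/*' ]
            backendAddressPool: { id: pool1Id }
            backendHttpSettings: { id: settings1Id }
          }
        }
        // ...path rules for /2/* through /6/*
      ]
    }
  }
]
requestRoutingRules: [
  {
    name: 'rule-private-sites'
    properties: {
      ruleType: 'PathBasedRouting'
      priority: 1
      httpListener: { id: privateListenerId } // placeholder
      urlPathMap: { id: pathMapId }           // placeholder
    }
  }
]
```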

Rinse and repeat. Yours should look something like this at the end:

Remember when we set Custom Probe to No? The default probe will fail here: querying without specifying a website path returns a 404 error, because the probe only hits the top level of the Storage Account.

So to fix this, go under Health Probes and create one per Storage Account, pointing at the path of a Static Website. Referencing my structure above, I just chose one of the Static Website paths per host:

storageaccount1.web.core.windows.net/1/index.html
storageaccount2.web.core.windows.net/3/index.html
storageaccount3.web.core.windows.net/5/index.html
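In Bicep terms, each of those probes is roughly the fragment below; each Backend Settings entry would then reference its probe by id (probe: { id: ... }). Names and paths are placeholders based on my structure:

```bicep
// Sketch only: one probe per Storage Account, pointed at a known index.html
// so the probe gets a 200 instead of the 404 returned at the account root.
probes: [
  {
    name: 'probe-storageaccount1'
    properties: {
      protocol: 'Https'
      pickHostNameFromBackendHttpSettings: true
      path: '/1/index.html'
      interval: 30
      timeout: 30
      unhealthyThreshold: 3
    }
  }
  // ...repeat for storageaccount2 (/3/index.html) and storageaccount3 (/5/index.html)
]
```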

Once you have the Health Probes configured, go back to your Backend Settings and update each one to use its matching probe, then click Backend Health and it will show Healthy:

Now to test, go into your VM which is on the same VNET as the App Gateway.

Fire up Edge, and just try to walk each website. This is what mine look like from the internal VM:

Site 1 on Storage Account 1 on Subdomain 1:

Site 2 on Storage Account 1 on Subdomain 1:

Site 3 on Storage Account 2 on Subdomain 1:

Site 4 on Storage Account 2 on Subdomain 1:

Site 5 on Storage Account 3 on Subdomain 2:

Site 6 on Storage Account 3 on Subdomain 2:

When trying from the Public Internet, you’ll see it fails:

Azure App Gateway offers far more than what’s shown here; this deployment used the Standard v2 SKU, and because it is internal only we can skip the WAF v2 SKU.

There are other considerations when using Azure App Gateway to achieve a secure solution, such as configuring Alerts and Diagnostic Settings. I also highly recommend, once you’ve configured it and ensured it’s working, going under Automation > Export Template and saving your work. This lets you delete your Azure App Gateway and its underlying Resources to save money, while leaving you with a deployable solution should you need to stand one up again quickly.

Thanks for checking out my post!

Automating Azure Policy Non Compliance on False Positive Findings using PowerShell

There’s nothing I loathe more than Microsoft’s never-ending pursuit of getting everyone to sign up for and use their most expensive licensing models, regardless of the product. If you use third parties for Identity Providers (IdP), anti-malware, vulnerability scanning, or cloud security posture management (CSPM) solutions, be prepared for your Microsoft and Azure Advisor Secure Scores to absolutely suck.

I got tired of seeing how crappy our score was. It doesn’t look great, and this is definitely by design from Microsoft. So, rather than trudge through the portal and manually make changes, I decided to automate making exemptions for findings that we have compensating controls for.

While not mandatory, it will help you immensely to first generate the CSV using the script from my other post. You can grab that here: https://mclaughlin.ai/exporting-azure-policy-assignment-resource-compliance-across-the-tenant/

Before getting started, know that we’ll be using these PowerShell commands, so if you don’t have the underlying modules on your machine, Google away, my friend. Once they’re installed, circle back here for the goods. They are:

  • Get-AzSubscription
  • Set-AzContext
  • Get-AzPolicyAssignment
  • Get-AzPolicyState
  • New-AzPolicyExemption

Some other basic PowerShell commands are in place but you should be good there. With that said, here’s the code, and we’ll break it down afterwards:


# Timestamp for logging, plus counters for subscriptions processed and exemptions created
$date = (Get-Date -Format MM-dd-yy-hh-mm-ss).ToString()
$subs = Get-AzSubscription
$i = 1
$ii = 0

$pol2rem = Read-Host -Prompt "Which policy would you like to place a Waiver for? Enter the PolicyDefinitionReferenceId attribute"
$polWaiverNote = Read-Host -Prompt "Enter a SHORT description to be placed in with the Policy Waiver, 10 chars or less (This doesn't check length so it is up to you)"

Write-Host $subs.Count "Subscriptions" -ForegroundColor Green
foreach ($sub in $subs) {

    Write-Host "Setting subscription to"$sub.Name -ForegroundColor Green
    Set-AzContext -SubscriptionId $sub.Id
    Write-Host "Set subscription to"$sub.Name -ForegroundColor Green

    $assignedPols = Get-AzPolicyAssignment
    Write-Host "Got policy assignments for"$sub.Name -ForegroundColor Green

    foreach ($pol in $assignedPols) {

        # Pull the compliance records for this assignment, then exempt each
        # NonCompliant resource matching the requested PolicyDefinitionReferenceId
        foreach ($polDef in (Get-AzPolicyState -PolicyAssignmentName $pol.Name)) {

            $polDefRefId = $polDef.PolicyDefinitionReferenceId.ToString()
            $polDefResId = $polDef.ResourceId.ToString()
            if ($polDef.ComplianceState -eq 'NonCompliant' -and $polDefRefId -eq $pol2rem.ToString()) {
                $ii += 1

                New-AzPolicyExemption -Name "PS $polWaiverNote" -PolicyDefinitionReferenceId $polDefRefId -ExemptionCategory Waiver -PolicyAssignment $pol -Scope $polDefResId

                Write-Host "Exemption created for $polDefRefId within policy assignment"$pol.Name"in sub"$sub.Name -ForegroundColor Cyan
            }
        }
    }

    Write-Host "Processed $i of"$subs.Count"subscriptions" -ForegroundColor Green
    Write-Host "Processed $ii policies" -ForegroundColor Cyan
    $i++

}

The script starts off by gathering date information. Then it collects all the Azure Subscriptions across the Tenant and sets a couple of variables we’ll use to keep track of the Subscriptions we’ve processed and the number of Exemptions put in place.

There are two inputs you’ll provide when running the script. The first prompt is the PolicyDefinitionReferenceId that you want to make exemptions for. The second prompt is a brief description for the Waiver. The exemption name this feeds into has a max character limit of 64, so we need to keep it somewhat brief; I was too lazy to add code enforcing the 10-character suggestion in the prompt, so consider this your warning. You’ll know it bombed out if it’s too long, because I didn’t put a try/catch block around the Exemption piece, so you’ll see what the actual error is.

With all that said, the first foreach block loops through the Subscriptions, setting the PowerShell boundaries to each one individually. Simple enough.

The second foreach block does what we want it to do: put in Exemption Waivers for false positives. This is done with the following command:

new-azpolicyexemption -name "PS $polWaiverNote" -policydefinitionreferenceid $polDefRefId -exemptioncategory Waiver -policyassignment $pol -scope $polDefResId

Let’s break this down:

The -name parameter is the description of why we are marking it as Exempt.

The -policydefinitionreferenceid is the name of the policy we want to be Exempted. You can find this on the earlier CSV generated.

The -exemptioncategory we set to Waiver so it falls off the report and helps clean up the numbers/scores.

The -policyassignment is the Azure Policy Assignment that contains the Azure Policy Definition being marked as a False Positive Non Compliant finding. If that sentence makes you feel like you’re in the movie Inception, you’re not alone.

The last parameter, -scope, takes the ResourceId pulled from the compliance record (not to be confused with the PolicyDefinitionReferenceId), which targets the Subscription, Resource Group, or Resource being flagged as Non Compliant.

Couple this together and you get an automated way to get rid of False Positive findings, so you can then start trudging through the True Positives. Start with the low-hanging fruit is what I say. This script has made my life immensely easier when dealing with Azure Advisor, Azure Secure Score, and Defender for Cloud findings.

Hope this helps and enjoy!

Exporting Azure Policy Assignment Resource Compliance Across the Tenant to CSV

Azure Policies are a foundational component of securing your cloud environment. One of the challenges you may run into is that navigating large Azure implementations through the Portal can bloat your tab’s memory and slow responsiveness over time. Hopping around the Azure Portal evaluating different policies for Compliance can become unmanageable, only to have the tab crash under the large amount of information it attempts to front-load in your browser. I find it silly that it doesn’t release memory when you jump to different areas of Policy, but I guess fixing that (if they are aware of it at all) is in the almighty backlog.

In short: It caused me frustration. And this frustration led me down a path of figuring out how to export all of the Policies across all Subscriptions to CSV. I ended up using PowerShell to accomplish this and am sharing if anyone else has felt the pain.

The first thing we need to become familiar with are the Azure Policy PowerShell commands. Microsoft’s documentation on a few of these is lacking, and it doesn’t fully explain in enough detail what each of the parameters does. This is something else that caused additional frustration. Anyway, this script uses:

  • Get-AzSubscription
  • Set-AzContext
  • Get-AzPolicyAssignment
  • Get-AzPolicyState

Collectively, with some other built-in PowerShell commands, we can paint a picture of Policy across the entire Azure Tenant and all of its Subscriptions through Excel. Much, much easier.

Here’s the code, and then I’ll break out how it works in detail after if you’re interested. Note: You’ll need to use Connect-AzAccount first.

# Timestamp for the export filename, plus a counter for processed subscriptions
$date = (Get-Date -Format MM-dd-yy-hh-mm-ss).ToString()
$subs = Get-AzSubscription
$i = 1

Write-Host $subs.Count "Subscriptions" -ForegroundColor Green
foreach ($sub in $subs) {

    Write-Host "Setting subscription to"$sub.Name -ForegroundColor Green
    Set-AzContext -SubscriptionId $sub.Id
    Write-Host "Set subscription to"$sub.Name -ForegroundColor Green

    $assignedPols = Get-AzPolicyAssignment
    Write-Host "Got policy assignments for"$sub.Name -ForegroundColor Green

    foreach ($pol in $assignedPols) {
        # Pull the full compliance state once, tack on the Subscription's name
        # and state, and append it all to the CSV
        Get-AzPolicyState -PolicyAssignmentName $pol.Name |
            Select-Object *, @{Name='SubscriptionName';Expression={$($sub.Name)}}, @{Name='State';Expression={$($sub.State)}} |
            Export-Csv -Path "c:\temp\polexp-$date.csv" -NoTypeInformation -Append
        Write-Host "Got policy assignment details for"$pol.Name -ForegroundColor Green
    }

    Write-Host "$i of"$subs.Count"subs processed." -ForegroundColor Cyan
    $i++

}

The first thing the script does is store a timestamp from Get-Date in a variable. The second thing it does is collect all of our Azure Subscriptions across the entire Tenant into another variable.

The first foreach block begins looping through each Subscription and uses Set-AzContext to set the PowerShell boundaries to that Subscription. Then it collects all Policy Assignments to a variable. But this alone will not give us all the information we need, and instead just provides a high level export of what Azure Policy Assignments exist against the Subscription. It is not detailed at all, and it is annoying. This is where the next foreach loop comes into play.

The nested foreach block then takes each Policy Assignment in the Subscription and does a deep dive against it. Get-AzPolicyState takes each Policy within the Assignment to truly get an understanding of what is Compliant, NonCompliant, or Exempt. It ended up working pretty well.

This is really the meat and potatoes of the script:

Get-AzPolicyState -PolicyAssignmentName $pol.Name | Select-Object *, @{Name='SubscriptionName';Expression={$($sub.Name)}}, @{Name='State';Expression={$($sub.State)}} | Export-Csv -Path "c:\temp\polexp-$date.csv" -NoTypeInformation -Append

Let’s break it down.

We get the state of all Policies within a unique Assignment by using the -PolicyAssignmentName parameter. We pull the Assignment name by querying against the $pol.Name variable attribute.

Because I would rather have all of the information and filter out what I don’t need, I use Select-Object * to return every attribute available on the Policy state. I then create two custom objects, pulling in the Subscription Name (which is much easier to understand than the Subscription ID) and the State of the Subscription, which tells me whether it’s Enabled or Disabled.

Lastly, it pumps out all of the individual Resources applicable to that Policy and exports everything into a nice, clean CSV. These are the headers you get in the report:

  • PolicySetDefinitionParameters
  • ManagementGroupIds
  • PolicyDefinitionReferenceId
  • ComplianceState
  • PolicyEvaluationDetails
  • PolicyDefinitionGroupNames
  • PolicyDefinitionVersion
  • PolicySetDefinitionVersion
  • PolicyAssignmentVersion
  • SubscriptionName
  • State

I hope this helps. Architecting secure cloud environments can be difficult, but thankfully we can automate things to make our lives a bit easier.