Mastering the AZ-104 Exam - Foundations of Azure Administration
Embarking on the journey to pass the Microsoft AZ-104 exam is a significant step for any IT professional aiming to specialize in cloud administration. This certification is the cornerstone for anyone aspiring to become a Microsoft Certified: Azure Administrator Associate. It validates the skills and knowledge required to implement, manage, and monitor an organization's Microsoft Azure environment. The exam is not just a test of theoretical knowledge but a practical assessment of your ability to handle real-world administrative tasks within the Azure ecosystem. Passing it demonstrates a clear proficiency in the core services that make up a well-governed and efficient Azure infrastructure.
The AZ-104 exam is designed for individuals who have a foundational understanding of cloud services and hands-on experience with Azure. The ideal candidate has at least six months of practical experience administering Azure resources. This includes familiarity with the Azure portal, command-line interface (CLI), and PowerShell. The certification covers a broad range of topics, ensuring that a certified administrator is well-rounded. These domains include managing Azure identities and governance, implementing and managing storage, deploying and managing Azure compute resources, configuring and managing virtual networking, and monitoring and backing up Azure resources.
Successfully preparing for this exam requires a structured approach and a commitment to understanding both the 'how' and the 'why' of Azure services. It is about more than just memorizing features; it is about understanding how different services interact to create secure, scalable, and resilient solutions. This series will break down the essential domains of the AZ-104 exam, providing a comprehensive guide to help you build the necessary skills. We will explore each key area in depth, offering insights into the concepts and practical knowledge you need to master to achieve certification and excel in your career as an Azure administrator.
Core Responsibilities of an Azure Administrator
An Azure Administrator is the central figure responsible for the day-to-day management of an organization's cloud infrastructure on the Azure platform. This role involves a wide array of tasks crucial for maintaining the health, security, and efficiency of the cloud environment. A key responsibility is resource provisioning. This means deploying and configuring virtual machines, storage accounts, and virtual networks based on the needs of the organization's applications and services. The administrator ensures that these resources are set up correctly, following best practices for performance and cost-effectiveness from the very beginning of their lifecycle.
Beyond initial deployment, an administrator is tasked with ongoing management and monitoring. This includes monitoring resource performance to identify potential bottlenecks, ensuring that services are running optimally, and reacting to alerts that indicate problems. They are also responsible for implementing and managing security controls to protect the organization's data and infrastructure from threats. This involves configuring network security groups, applying security policies, and managing user access. The administrator acts as the first line of defense, ensuring the integrity and confidentiality of the cloud environment through diligent oversight and proactive measures.
Another critical aspect of the role is cost management and optimization. Azure administrators are expected to monitor spending and find opportunities to reduce costs without compromising performance or security. This might involve resizing virtual machines, selecting the appropriate storage tiers, or leveraging Azure's cost management tools to analyze usage patterns. They also play a vital role in backup and disaster recovery planning, ensuring that critical data can be restored and services can be failed over in the event of an outage. Essentially, the Azure administrator ensures the cloud environment is secure, reliable, and cost-efficient.
Managing Azure Subscriptions and Management Groups
Understanding the hierarchical structure of Azure is fundamental for effective governance and administration, a key topic in the AZ-104 exam. At the top of this hierarchy are management groups, which are containers that help you manage access, policy, and compliance for multiple subscriptions. By organizing subscriptions into management groups, an administrator can apply governance conditions, such as Azure Policies and Role-Based Access Control (RBAC), to all subscriptions within that group. This provides a powerful way to enforce organizational standards consistently across a large number of subscriptions without having to configure each one individually.
Subscriptions are the next level down in the hierarchy and serve as a unit of management, billing, and scale. Each subscription is associated with a single Azure Active Directory (Azure AD) tenant, which provides identity and access management. Within a subscription, you create and manage resources. For an administrator, managing subscriptions involves tasks like monitoring costs, setting spending limits, and ensuring that resources are deployed in a way that aligns with the organization's billing and management strategies. It is common for organizations to use multiple subscriptions to separate environments, such as development, testing, and production, for better isolation and control.
Below subscriptions, resources are organized into resource groups. A resource group is a logical container for resources that share a common lifecycle. For example, all the resources for a specific application, like its virtual machine, database, and virtual network, could be placed in the same resource group. This makes it easier to manage and delete them together. An administrator must understand how to effectively use resource groups to organize assets, apply tags for cost tracking and management, and implement resource locks to prevent accidental deletion or modification of critical components. Mastering this hierarchy is essential for maintaining an organized and well-governed Azure environment.
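As a minimal sketch of how this hierarchy is built with the Azure CLI (the management group, subscription ID, and resource group names below are illustrative placeholders):

```bash
# Create a management group to hold related subscriptions
az account management-group create --name corp-mg --display-name "Corporate IT"

# Move an existing subscription under the management group
az account management-group subscription add --name corp-mg --subscription "<subscription-id>"

# Create a resource group to hold the resources for one application
az group create --name rg-app1 --location eastus
```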
Securing Identities with Azure Active Directory
Azure Active Directory (Azure AD) is the backbone of identity and access management in Microsoft's cloud, and a deep understanding of it is non-negotiable for the AZ-104 exam. Azure AD is a cloud-based identity service that provides single sign-on (SSO), multi-factor authentication (MFA), and conditional access to protect users and applications. As an administrator, your primary role concerning Azure AD involves managing user and group objects. This includes creating and managing user accounts for employees, as well as creating groups to simplify the process of assigning permissions to multiple users at once.
Effective identity management goes beyond creating users. It involves securing those identities. A key feature you must be proficient with is Multi-Factor Authentication (MFA). MFA adds a critical second layer of security to user sign-ins and transactions. An administrator is responsible for enabling and configuring MFA for users, ensuring that even if a password is compromised, the user's account remains secure. You will need to know how to enforce MFA registration for users and manage different authentication methods, such as the Microsoft Authenticator app, phone calls, or SMS messages.
Furthermore, you must understand how to manage different types of Azure AD identities, including cloud-only users, synchronized users from an on-premises Active Directory via Azure AD Connect, and guest users for external collaboration. Each identity type has different management considerations. The administrator's job is to ensure that the right identities have the right access to the right resources at the right time. This includes regularly reviewing user access, managing group memberships, and ensuring that the principle of least privilege is applied throughout the Azure environment to minimize the attack surface.
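A brief sketch of routine user and group management with the Azure CLI (the display names, UPN, and password below are placeholders; MFA and Conditional Access are configured separately in the portal):

```bash
# Create a cloud-only user account
az ad user create \
  --display-name "Jane Doe" \
  --user-principal-name jane.doe@contoso.onmicrosoft.com \
  --password "<initial-password>"

# Create a security group and add the new user to it
az ad group create --display-name "Helpdesk" --mail-nickname helpdesk
az ad group member add \
  --group "Helpdesk" \
  --member-id $(az ad user show --id jane.doe@contoso.onmicrosoft.com --query id -o tsv)
```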
Implementing Role-Based Access Control (RBAC)
Role-Based Access Control, or RBAC, is the primary mechanism for managing permissions in Azure, and it is a topic you must master for the AZ-104 exam. RBAC allows you to grant users, groups, or service principals the specific permissions they need to perform their jobs, adhering to the principle of least privilege. Instead of giving everyone unrestricted access to the Azure subscription, you can segregate duties and grant only the amount of access that users need. This granular control is essential for maintaining a secure and well-governed cloud environment.
The core of RBAC is the role assignment, which consists of three components: a security principal, a role definition, and a scope. The security principal represents the user, group, or application that is being granted access. The role definition is a collection of permissions, such as 'Reader', 'Contributor', or 'Owner', which specifies the actions that can be performed, like reading, creating, or deleting resources. The scope defines the set of resources to which the access applies, which can be a management group, a subscription, a resource group, or even an individual resource.
As an Azure administrator, you will be responsible for creating and managing these role assignments. A common task is assigning the 'Virtual Machine Contributor' role to a team of developers at the scope of a specific resource group, allowing them to manage their virtual machines without having access to other resources in the subscription. You must understand how permissions are inherited down the hierarchy. For instance, a role assigned at a subscription scope will apply to all resource groups and resources within that subscription. Properly implementing RBAC is critical for preventing unauthorized changes and ensuring operational stability.
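For example, the role assignment described above might look like this in the Azure CLI (the group object ID, subscription ID, and resource group name are placeholders):

```bash
# Grant a developer group Virtual Machine Contributor rights,
# scoped to a single resource group rather than the whole subscription
az role assignment create \
  --assignee "<developer-group-object-id>" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-dev-vms"
```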
Understanding Built-in and Custom RBAC Roles
While Azure provides a comprehensive set of built-in roles, there are situations where they may not be granular enough for your organization's specific needs. The AZ-104 exam expects you to be proficient in using both built-in roles and creating custom roles. Built-in roles, such as Owner, Contributor, Reader, and User Access Administrator, cover the most common management scenarios. For example, the 'Reader' role allows a user to view all resources but not make any changes, which is perfect for auditing or monitoring purposes. The 'Contributor' role allows for managing all resources but cannot grant access to others.
An administrator's first step should always be to evaluate if a built-in role meets the requirements before creating a custom one. There are over 100 built-in roles available, covering a wide range of services from virtual machines to storage and databases. You should familiarize yourself with the most common ones, such as the service-specific contributor and reader roles, like 'Virtual Machine Contributor' or 'Storage Blob Data Reader'. Using built-in roles simplifies management and ensures that the permissions are maintained and updated by Microsoft as Azure services evolve.
However, when a built-in role is too permissive or doesn't include a specific required permission, you will need to create a custom RBAC role. Custom roles are defined using a JSON template that specifies the actions, notActions, and assignable scopes. For instance, you might need a role that allows users to restart virtual machines and read network configurations but nothing else. You would define this by specifying the exact permission strings, such as Microsoft.Compute/virtualMachines/restart/action, in the 'actions' section of the JSON. Knowing how to create, update, and assign these custom roles is a key skill for an Azure administrator.
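A sketch of such a custom role definition (the role name and assignable scope are placeholders):

```json
{
  "Name": "VM Operator (custom)",
  "Description": "Can restart virtual machines and read network configuration.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Network/networkInterfaces/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```

Saved as vm-operator.json, the role is then created with az role definition create --role-definition @vm-operator.json and assigned exactly like a built-in role.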
Enforcing Compliance with Azure Policy
Azure Policy is a powerful governance tool that allows administrators to enforce organizational standards and assess compliance at scale. For the AZ-104 exam, you must understand how to use Azure Policy to ensure that resources deployed in your environment adhere to company rules. Policies are defined as rules, expressed in JSON format, that are evaluated against your resources. These rules can enforce things like only allowing certain VM SKUs to be deployed, requiring specific tags on all resources, or restricting deployments to certain geographic regions.
Azure Policy works by evaluating resources upon creation or update and can also audit existing resources for non-compliance. Policies can have different effects. An 'Append' effect can add required tags to a resource during creation, while a 'Deny' effect will block a resource deployment that violates the rule. The 'Audit' effect simply flags a resource as non-compliant without blocking it, which is useful for reporting. Understanding these different effects and when to use them is crucial for effective implementation of governance without disrupting development workflows.
To manage policies at scale, you use initiatives, which are collections of policy definitions grouped together to achieve a specific goal. For example, you could create an initiative for ISO 27001 compliance that includes multiple policies related to data encryption, network security, and access control. As an administrator, you would assign this initiative at a management group or subscription scope to ensure all resources under it are evaluated against these standards. You will be expected to know how to create and assign policies and initiatives, and how to interpret the compliance dashboard to identify and remediate non-compliant resources.
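As an illustrative sketch, a simple custom policy that denies any resource deployed without a 'CostCenter' tag could be defined and assigned like this (the file name, policy name, and scope are assumptions for the example):

```bash
# require-costcenter.json contains only the policy rule:
# {
#   "if":   { "field": "tags['CostCenter']", "exists": "false" },
#   "then": { "effect": "deny" }
# }
az policy definition create \
  --name require-costcenter-tag \
  --display-name "Require CostCenter tag" \
  --rules @require-costcenter.json \
  --mode Indexed

# Assign the policy at subscription scope so it applies to all resource groups
az policy assignment create \
  --name require-costcenter-tag \
  --policy require-costcenter-tag \
  --scope "/subscriptions/<subscription-id>"
```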
Organizing Resources with Tags and Resource Locks
Effective resource organization is a fundamental aspect of Azure administration. Tags are a simple yet powerful tool for this purpose. A tag is a key-value pair of metadata that you can apply to your Azure resources. They do not affect the resources themselves but provide a way to logically organize them for management, billing, and automation. For example, you could tag resources with the name of the department that owns them, the application they belong to, or the environment they are in, such as 'production' or 'development'.
Tags are particularly important for cost management and reporting. By consistently applying tags, you can use the Azure Cost Management service to filter and analyze spending based on different criteria. For instance, you can easily determine the monthly cost of all resources associated with a specific project or department. The AZ-104 exam will expect you to understand how to apply tags to resources and resource groups, and how to enforce a tagging strategy using Azure Policy. For example, you can create a policy that requires a 'CostCenter' tag to be present on all deployed resources.
In addition to organization, protecting critical resources from accidental changes is a top priority for any administrator. Azure provides resource locks for this purpose. There are two types of locks: 'CanNotDelete' and 'ReadOnly'. As the names suggest, a 'CanNotDelete' lock prevents anyone from deleting a resource, although they can still modify it. A 'ReadOnly' lock is more restrictive, preventing any modifications or deletions. You should know how to apply and remove these locks at different scopes, such as on a specific resource or an entire resource group, to safeguard your most important infrastructure components.
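A short sketch of applying tags and a delete lock with the Azure CLI (the resource group name and tag values are placeholders):

```bash
# Apply tags to a resource group for organization and cost reporting
az group update --name rg-app1 --tags Environment=production CostCenter=1234

# Prevent accidental deletion of everything in the resource group
az lock create \
  --name do-not-delete \
  --lock-type CanNotDelete \
  --resource-group rg-app1
```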
Introduction to Azure Storage Accounts
Azure Storage is a foundational service that provides highly scalable, secure, and durable cloud storage for a wide variety of data objects. For any Azure administrator, a deep understanding of Azure Storage Accounts is essential, as it is a core component tested on the AZ-104 exam. A storage account is a unique namespace in Azure for your data. It acts as a container that groups a set of Azure Storage services together. Every object you store in Azure Storage resides within a storage account, and access to this data is managed through this account.
When creating a storage account, you need to make several important configuration choices that will impact performance, cost, and features. You must select a storage account type, such as General-purpose v2 (GPv2), which is the standard and recommended type for most scenarios, offering access to all storage services like blobs, files, queues, and tables. You also need to choose a performance tier, either Standard for magnetic drives or Premium for solid-state drives (SSDs), which offers lower latency for high-performance workloads like virtual machine disks.
Another critical configuration is the replication strategy. Azure Storage always stores multiple copies of your data to protect it from planned and unplanned events. You must be familiar with the different replication options, such as Locally-Redundant Storage (LRS), Zone-Redundant Storage (ZRS), Geo-Redundant Storage (GRS), and Geo-Zone-Redundant Storage (GZRS). Each option provides a different level of durability and availability at varying price points. Your ability to choose the appropriate storage account type, performance tier, and replication strategy based on a given scenario is a key skill evaluated in the exam.
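For instance, creating a GPv2 account with geo-redundant replication might look like this (the account name is a placeholder and must be globally unique and lowercase):

```bash
az storage account create \
  --name stcontosodata001 \
  --resource-group rg-storage \
  --location eastus \
  --kind StorageV2 \
  --sku Standard_GRS \
  --access-tier Hot
```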
Managing Azure Blob Storage
Azure Blob Storage is Microsoft's object storage solution, designed for storing massive amounts of unstructured data, such as text, images, and videos. As an Azure administrator, you will frequently work with Blob Storage, and the AZ-104 exam covers its management in detail. Data in Blob Storage is organized into containers, which are similar to folders in a file system. Within these containers, you store individual blobs. There are three types of blobs: block blobs for text and binary data, append blobs optimized for append operations like logging, and page blobs for random read/write operations like virtual hard disk files.
A crucial aspect of managing Blob Storage is controlling access. You must understand the different methods for securing your data. This includes using storage account access keys, which provide full administrative access and should be used sparingly. A more secure method is using Shared Access Signatures (SAS), which provide delegated, time-bound access to specific resources with defined permissions. For the most secure and granular control, you should leverage Azure Active Directory (Azure AD) integration to assign RBAC roles like 'Storage Blob Data Contributor' or 'Storage Blob Data Reader' to users and applications.
You will also need to be proficient in performing common management tasks. This includes uploading and downloading blobs, managing container properties and metadata, and configuring access policies. For exam purposes, you should be comfortable performing these actions using the Azure portal, Azure PowerShell, and the Azure CLI. Understanding how to interact with Blob Storage programmatically and through different management tools is a hallmark of a competent Azure administrator and a key area of focus for the AZ-104 certification.
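A brief sketch of these everyday tasks with the Azure CLI, shown here using Azure AD sign-in rather than account keys (the account, container, and scope values are placeholders):

```bash
# Create a container and upload a blob using your Azure AD identity
az storage container create \
  --account-name stcontosodata001 \
  --name app-logs \
  --auth-mode login

az storage blob upload \
  --account-name stcontosodata001 \
  --container-name app-logs \
  --name app.log \
  --file ./app.log \
  --auth-mode login

# Grant an application read-only access to blob data via RBAC
az role assignment create \
  --assignee "<app-object-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/stcontosodata001"
```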
Configuring Blob Storage Tiers and Lifecycle Management
A significant part of managing Blob Storage effectively involves optimizing costs. Azure provides different access tiers for blob data, allowing you to store your data in the most cost-effective way based on how frequently it is accessed. The AZ-104 exam requires you to understand these tiers and how to manage the data lifecycle between them. The three main access tiers are Hot, Cool, and Archive. The Hot tier is optimized for frequently accessed data, offering the lowest access costs but the highest storage costs. The Cool tier is for infrequently accessed data that needs to be stored for at least 30 days, offering lower storage costs but higher access costs.
The Archive tier is designed for long-term data archival, for data that is rarely accessed and can tolerate retrieval latencies of several hours. It offers the lowest storage costs but the highest retrieval costs. As an administrator, you need to be able to set the access tier for a blob at the time of upload or change the tier of an existing blob. Moving a blob from the Archive tier back to an online tier is known as rehydration. Understanding the cost implications and performance characteristics of each tier is critical for both the exam and real-world cost optimization.
To automate the process of moving data between these tiers, you can use Lifecycle Management policies. These are rule-based policies that you can create within your storage account to transition blobs to a cooler storage tier or delete them at the end of their lifecycle. For example, you could create a rule that automatically moves blobs to the Cool tier after they haven't been accessed for 30 days, and then to the Archive tier after 90 days. You could also create a rule to delete old log files after a year. Mastering lifecycle policies is essential for managing large datasets cost-effectively.
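A sketch of such a rule, assuming the policy JSON below is saved as lifecycle.json (the rule name and prefix filter are illustrative):

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "logs/" ] },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```

The policy is then applied with az storage account management-policy create --account-name <account> --resource-group <rg> --policy @lifecycle.json.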
Implementing Object Replication for Blob Storage
Data protection and disaster recovery are critical responsibilities for an Azure administrator. For Blob Storage, one of the features you need to understand for the AZ-104 exam is object replication. While geo-redundant storage (GRS) automatically replicates your data to a secondary region, object replication provides a more flexible and granular way to copy blobs between storage accounts. You can configure object replication policies on a source storage account to asynchronously copy blobs to a destination account in any Azure region.
Object replication offers several advantages over traditional GRS. It allows you to replicate data between any two regions, not just the paired regions defined by Azure. This gives you more control over your data residency and disaster recovery strategy. Furthermore, you can apply replication policies at the container level, meaning you can choose to replicate only specific containers rather than the entire storage account. You can also define filters based on blob prefixes, allowing you to replicate only blobs that match a certain naming pattern within a container.
An administrator needs to know how to configure an object replication policy. This involves enabling change feed on the source account, which is a prerequisite, and then defining the replication rule. The rule specifies the source and destination containers and any prefix-based filters. You also need to monitor the replication status to ensure that blobs are being copied successfully. Understanding when to use object replication, for instance, to minimize read latency by keeping data closer to users in different regions or for meeting specific compliance requirements, is a key skill for the exam.
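A hedged sketch of this setup with the Azure CLI; the account and container names are placeholders, and the exact parameters of the replication-policy command can vary by CLI version, so treat this as an outline rather than a definitive recipe:

```bash
# Blob versioning and change feed are prerequisites on the source account
az storage account blob-service-properties update \
  --account-name stsourceacct \
  --resource-group rg-storage \
  --enable-change-feed true \
  --enable-versioning true

# Create the replication policy on the destination account,
# replicating one container with an optional prefix filter
az storage account or-policy create \
  --account-name stdestacct \
  --resource-group rg-storage \
  --source-account stsourceacct \
  --destination-account stdestacct \
  --source-container app-logs \
  --destination-container app-logs-replica \
  --prefix-match "logs/"
```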
Managing Azure Files and File Sync
Azure Files provides fully managed file shares in the cloud that are accessible via the standard Server Message Block (SMB) protocol and Network File System (NFS) protocol. This makes it easy to lift and shift applications that rely on on-premises file shares to the cloud without requiring code changes. For the AZ-104 exam, you must be able to create and manage Azure file shares, configure access permissions, and connect to them from both cloud-based virtual machines and on-premises machines.
When creating a file share, you can choose between Standard and Premium performance tiers. Standard shares use HDD-based hardware, while Premium shares use SSDs for higher performance and lower latency, making them suitable for I/O-intensive workloads. A key management task is configuring access. You can control access using the storage account key, but a more robust method is to integrate the file share with either on-premises Active Directory Domain Services (AD DS) or Azure AD Domain Services (Azure AD DS) to enforce NTFS-like permissions for users and groups.
For organizations with a hybrid environment, Azure File Sync is a powerful service that you need to be familiar with. It allows you to centralize your organization's file shares in Azure Files while keeping the performance and compatibility of an on-premises file server. It works by installing an agent on a Windows Server, which synchronizes files between the local server and your Azure file share. A key feature is cloud tiering, which archives infrequently accessed files to Azure Files, leaving only a pointer on the local server. This frees up local storage while providing seamless access for users.
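Creating the cloud-side share is a quick CLI task (the account and share names are placeholders; the File Sync server registration itself is done by installing the agent on the Windows Server):

```bash
# Create a 1 TiB SMB file share on an existing storage account
az storage share-rm create \
  --storage-account stcontosodata001 \
  --resource-group rg-storage \
  --name corp-share \
  --quota 1024
```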
Understanding and Provisioning Managed Disks
Azure Managed Disks are block-level storage volumes that are managed by Azure and used with Azure Virtual Machines. They are a fundamental component of Azure IaaS, and the AZ-104 exam thoroughly tests your knowledge of their provisioning and management. Unlike unmanaged disks, where you had to manage the underlying storage accounts, managed disks simplify disk management significantly. When you use managed disks, you just need to specify the disk type and size, and Azure handles the creation and management of the storage account for you, ensuring scalability and availability.
There are several types of managed disks, each designed for different performance needs. Ultra Disks offer the highest performance with sub-millisecond latency for I/O-intensive workloads. Premium SSDs provide high performance for production workloads. Standard SSDs are a cost-effective option for web servers and lightly used applications. Standard HDDs are the most economical choice for development, testing, and backup scenarios. As an administrator, you must be able to choose the appropriate disk type based on the performance and cost requirements of a given workload.
Your responsibilities also include managing the lifecycle of these disks. This involves tasks such as attaching a data disk to a running virtual machine, detaching disks, resizing a disk to increase its capacity or change its performance tier, and creating snapshots. Snapshots are point-in-time backups of your disks that can be used to create new managed disks, providing a simple mechanism for backup and disaster recovery. You should be comfortable performing all these operations through the Azure portal, PowerShell, or CLI, as these are common day-to-day tasks for an Azure administrator.
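A short sketch of these disk lifecycle tasks with the Azure CLI (the resource group, VM, and disk names are placeholders):

```bash
# Create a 128 GiB Premium SSD data disk
az disk create \
  --resource-group rg-app1 \
  --name datadisk01 \
  --size-gb 128 \
  --sku Premium_LRS

# Attach it to a running VM
az vm disk attach \
  --resource-group rg-app1 \
  --vm-name vm-app01 \
  --name datadisk01

# Take a point-in-time snapshot for backup purposes
az snapshot create \
  --resource-group rg-app1 \
  --name datadisk01-snap \
  --source datadisk01
```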
Securing Azure Storage
Securing the data stored in Azure is a paramount responsibility for an administrator. The AZ-104 exam places a strong emphasis on your ability to implement various security measures for Azure Storage. The first layer of defense is controlling network access. You can configure storage account firewalls and virtual network service endpoints to restrict access to your storage account from only specific public IP addresses or Azure virtual networks. For even more secure and private access, you can use Private Endpoints, which assign a private IP address from your virtual network to the storage service, ensuring traffic never leaves the Microsoft network.
Data protection extends to data in transit and at rest. All data written to Azure Storage is automatically encrypted at rest using Storage Service Encryption (SSE) with Microsoft-managed keys. For enhanced control, you can opt to use customer-managed keys stored in Azure Key Vault. Data in transit between the client and the service is protected by enabling the 'Secure transfer required' option on the storage account, which forces all connections to use HTTPS. You must understand how to configure and enforce these encryption settings.
Finally, access management is critical. As discussed earlier, you should favor using Azure AD integration and RBAC over shared keys or SAS tokens whenever possible. For scenarios where SAS tokens are necessary, you must know how to create them with the principle of least privilege in mind, by specifying limited permissions, a short expiry time, and restricting access to specific IP addresses. Regularly rotating storage account access keys is another important security practice. A holistic understanding of these security features is vital for success in the exam.
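A sketch of the network and transport hardening described above, using placeholder names (the VNet rule assumes the subnet already has the Microsoft.Storage service endpoint enabled):

```bash
# Enforce HTTPS-only traffic and a minimum TLS version
az storage account update \
  --name stcontosodata001 \
  --resource-group rg-storage \
  --https-only true \
  --min-tls-version TLS1_2

# Deny public access by default, then allow one office IP and one subnet
az storage account update \
  --name stcontosodata001 \
  --resource-group rg-storage \
  --default-action Deny

az storage account network-rule add \
  --account-name stcontosodata001 \
  --resource-group rg-storage \
  --ip-address 203.0.113.10

az storage account network-rule add \
  --account-name stcontosodata001 \
  --resource-group rg-storage \
  --vnet-name vnet-hub \
  --subnet app-subnet
```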
Deploying Azure Virtual Machines
Deploying Azure Virtual Machines (VMs) is one of the most fundamental tasks for an Azure Administrator and a major topic on the AZ-104 exam. A VM is a core Infrastructure as a Service (IaaS) offering that provides on-demand, scalable computing resources. When deploying a VM, you are required to make a series of crucial decisions that will define its characteristics, performance, and cost. This starts with selecting an image from the Azure Marketplace, which can be a standard operating system like Windows Server or Ubuntu, or a pre-configured image with specific software already installed.
After selecting an image, you must choose a VM size. Azure offers a vast array of VM sizes that are optimized for different workloads, such as general-purpose, compute-optimized, memory-optimized, and storage-optimized. Your ability to select the appropriate size based on CPU, RAM, and storage requirements for a given application is a critical skill. You will also configure networking by assigning the VM to a virtual network and subnet, and you can assign a public IP address if it needs to be accessible from the internet. You must also configure an initial administrator account with a username and password or an SSH public key for Linux VMs.
The deployment process also involves configuring storage for the VM. This includes selecting the type of OS disk (Premium SSD, Standard SSD, or Standard HDD) and attaching one or more data disks for application data. The exam will test your ability to perform these deployment steps using various tools, including the Azure portal for a guided experience, as well as Azure PowerShell, Azure CLI, and ARM templates for automation and repeatable deployments. Understanding the end-to-end process of provisioning a VM is non-negotiable for any aspiring Azure administrator.
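As a minimal sketch, a single CLI command can cover most of these choices; the resource names are placeholders, and the image alias (Ubuntu2204 here) depends on your CLI version:

```bash
# Deploy a small Linux VM with SSH key authentication into an existing subnet
az vm create \
  --resource-group rg-app1 \
  --name vm-web01 \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys \
  --vnet-name vnet-app \
  --subnet web-subnet \
  --public-ip-sku Standard
```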
Ensuring Virtual Machine Availability
Ensuring the high availability of applications running on virtual machines is a top priority for any administrator. The AZ-104 exam requires you to be proficient with the Azure features designed to protect your VMs from both planned maintenance events and unplanned hardware failures. The primary mechanism for this is the use of Availability Sets. An Availability Set is a logical grouping of VMs that allows Azure to understand how your application is built to provide redundancy and availability. When you place two or more VMs in an Availability Set, Azure distributes them across different physical hardware.
An Availability Set distributes VMs across Update Domains (UDs) and Fault Domains (FDs). Fault Domains represent groups of VMs that share a common power source and network switch, essentially a rack in the datacenter. Spreading VMs across FDs protects against localized hardware failures. Update Domains are groups of VMs that can be rebooted at the same time for planned maintenance. By distributing VMs across UDs, you ensure that only a subset of your VMs will be rebooted at any given time, keeping your application available. You should understand how to create and manage Availability Sets and the recommended number of FDs and UDs.
For workloads that require higher availability and scalability, you should use Virtual Machine Scale Sets (VMSS). A VMSS allows you to create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. VMSS not only simplifies the management of a large number of VMs but also integrates with Availability Zones for even higher fault tolerance, distributing instances across physically separate datacenters within a region. Understanding when to use an Availability Set versus a VMSS is a key exam objective.
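A sketch of creating an Availability Set and placing a VM into it at deployment time (names and image alias are placeholders):

```bash
# Create an availability set with 2 fault domains and 5 update domains
az vm availability-set create \
  --resource-group rg-app1 \
  --name avset-web \
  --platform-fault-domain-count 2 \
  --platform-update-domain-count 5

# New VMs must be added to the availability set at creation time
az vm create \
  --resource-group rg-app1 \
  --name vm-web02 \
  --image Ubuntu2204 \
  --availability-set avset-web \
  --admin-username azureuser \
  --generate-ssh-keys
```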
Managing Virtual Machine Scale Sets (VMSS)
Virtual Machine Scale Sets (VMSS) are a critical compute resource for building large-scale, resilient, and auto-scaling applications. As an administrator preparing for the AZ-104 exam, you need a solid understanding of how to deploy and manage them. A VMSS allows you to deploy a set of identical virtual machines from a single configuration. This is incredibly efficient for scenarios where you need multiple instances of a web server or a compute-intensive application. All instances in a scale set are created from the same base image and configuration, ensuring consistency across the fleet.
One of the most powerful features of VMSS is autoscaling. You can configure rules that automatically adjust the number of VM instances in the scale set based on performance metrics, such as CPU utilization, or on a fixed schedule. For example, you can set a rule to add a new instance whenever the average CPU usage across the scale set exceeds 75% for five minutes, and another rule to remove an instance when the usage drops below 25%. This elasticity allows your application to handle varying loads cost-effectively, as you only pay for the instances you need.
Managing a VMSS involves tasks like updating the base image for all instances, changing the VM size, and configuring networking. When you update the scale set model, you can choose an upgrade policy that determines how the instances are updated. A 'Manual' policy requires you to manually trigger the update for each instance. An 'Automatic' policy applies the update to all instances immediately, in no guaranteed order, which is only suitable for workloads that can tolerate brief, simultaneous downtime. A 'Rolling' upgrade provides more control, updating instances in batches and allowing you to specify a batch size and a pause between batches, which makes it well suited to stateless applications that must remain available. Understanding these management aspects is crucial for the exam.
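A sketch of the scale set and the CPU-based autoscale rules described earlier (resource names and thresholds are placeholders):

```bash
# Create a scale set of two identical Linux instances
az vmss create \
  --resource-group rg-app1 \
  --name vmss-web \
  --image Ubuntu2204 \
  --instance-count 2 \
  --vm-sku Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys

# Autoscale between 2 and 10 instances based on average CPU
az monitor autoscale create \
  --resource-group rg-app1 \
  --resource vmss-web \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name autoscale-web \
  --min-count 2 --max-count 10 --count 2

az monitor autoscale rule create \
  --resource-group rg-app1 \
  --autoscale-name autoscale-web \
  --condition "Percentage CPU > 75 avg 5m" \
  --scale out 1

az monitor autoscale rule create \
  --resource-group rg-app1 \
  --autoscale-name autoscale-web \
  --condition "Percentage CPU < 25 avg 5m" \
  --scale in 1
```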
Introduction to Azure Containers
While virtual machines are a cornerstone of IaaS, containers represent a more modern and lightweight approach to application virtualization. The AZ-104 exam expects you to have a foundational understanding of container concepts and the Azure services that support them. Containers package an application's code along with all its dependencies, such as libraries and configuration files, into a single, isolated unit. This allows the application to run consistently across different computing environments, from a developer's laptop to a production cluster in Azure.
Azure offers several services for running containers. The simplest one is Azure Container Instances (ACI). ACI provides a way to run a single container or a small group of co-located containers without having to manage any underlying virtual machine infrastructure. You can launch a container in seconds, making ACI perfect for simple applications, task automation, and build jobs. As an administrator, you should know how to deploy a container image from a container registry, like Docker Hub or Azure Container Registry, to ACI and configure its resource requirements and networking.
For managing containerized applications at scale, Azure provides Azure Kubernetes Service (AKS). AKS is a fully managed Kubernetes orchestration service that simplifies the deployment, scaling, and management of containerized applications. It offloads the operational overhead of managing a Kubernetes cluster to Azure, allowing you to focus on your applications. While the AZ-104 exam does not require you to be a Kubernetes expert, you should understand the basic concepts of AKS, such as nodes, pods, and services, and know how to deploy a simple application to an AKS cluster.
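A brief sketch of both deployment paths with the Azure CLI; the resource names and DNS label are placeholders, and the image is Microsoft's public hello-world sample:

```bash
# Run a public sample image on Azure Container Instances
az container create \
  --resource-group rg-app1 \
  --name aci-hello \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 \
  --memory 1.5 \
  --ports 80 \
  --dns-name-label aci-hello-demo

# Create a small managed Kubernetes cluster
az aks create \
  --resource-group rg-app1 \
  --name aks-demo \
  --node-count 2 \
  --generate-ssh-keys
```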
Managing Azure App Service Plans
Azure App Service is a fully managed Platform as a Service (PaaS) offering for building and hosting web apps, mobile backends, and RESTful APIs. The underlying compute resources for your App Service apps are defined by an App Service Plan. For the AZ-104 exam, you must understand how to create and manage these plans. An App Service Plan is essentially a set of compute resources, or a virtual server farm, on which your web apps run. When you create a plan, you define the operating system (Windows or Linux), the region, the number of VM instances, and the size of the instances (the pricing tier).
The pricing tier is a critical aspect of an App Service Plan as it determines the performance, features, and cost. Tiers range from 'Free' and 'Shared' for development and testing, to 'Basic', 'Standard', 'Premium', and 'Isolated' for production workloads. Higher tiers provide more CPU and memory, as well as features like deployment slots for staging, custom domains, and auto-scaling. As an administrator, you need to be able to select the appropriate tier for an application's needs and know how to scale the plan up (by changing the tier) or out (by increasing the number of instances).
Multiple web apps can run in the same App Service Plan. All apps in a plan share the compute resources defined by that plan. This can be a cost-effective way to host multiple small websites. However, for resource-intensive applications, it is best to place them in their own dedicated plans for isolation. You should be familiar with the process of creating a new web app, associating it with an App Service Plan, and monitoring the plan's resource utilization to ensure that all the apps running on it are performing well.
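As a sketch, the plan, a web app, and a scale-out operation look like this in the CLI; the names are placeholders and the runtime identifier varies by CLI version (you can list valid values with az webapp list-runtimes):

```bash
# Create a Standard-tier Linux plan and a web app inside it
az appservice plan create \
  --resource-group rg-app1 \
  --name plan-web \
  --sku S1 \
  --is-linux

az webapp create \
  --resource-group rg-app1 \
  --plan plan-web \
  --name contoso-web-demo \
  --runtime "NODE:18-lts"

# Scale the plan out to three instances
az appservice plan update \
  --resource-group rg-app1 \
  --name plan-web \
  --number-of-workers 3
```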
Automating VM Deployments with ARM Templates
Automating infrastructure deployment is a key principle of modern cloud management, and Azure Resource Manager (ARM) templates are the primary tool for achieving this in Azure. An ARM template is a JSON file that defines the infrastructure and configuration for your project. For the AZ-104 exam, you should be familiar with the structure of an ARM template and how to use it to deploy resources like virtual machines in a declarative and repeatable way. Instead of manually clicking through the portal, you define all the resources and their properties in the template and let Azure Resource Manager handle the deployment.
An ARM template consists of several sections. The 'parameters' section allows you to input values when the template is deployed, making it reusable for different environments. The 'variables' section is used to define values that are used throughout the template, simplifying management. The 'resources' section is where you define the Azure resources to be deployed, such as virtual machines, storage accounts, and network interfaces. Finally, the 'outputs' section can be used to return values from the deployed resources, like a VM's public IP address.
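A minimal skeleton showing these sections (the parameter names are illustrative and the resources array is left empty for brevity):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmName": { "type": "string" },
    "adminUsername": { "type": "string" }
  },
  "variables": {
    "nicName": "[concat(parameters('vmName'), '-nic')]"
  },
  "resources": [],
  "outputs": {
    "nicName": {
      "type": "string",
      "value": "[variables('nicName')]"
    }
  }
}
```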
As an administrator, you will use ARM templates to ensure consistent and reliable deployments. For example, you can create a template to deploy a standard three-tier application, including the web servers, application servers, and a database, all configured correctly every time. You should know how to deploy a template using the Azure portal, PowerShell, or the CLI. You should also be familiar with exporting an existing resource group as a template, which can be a good starting point for creating your own custom templates. Proficiency with ARM templates demonstrates a key DevOps skill.
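For example, deploying a template and exporting an existing resource group as a starting point can both be done from the CLI (file and resource names are placeholders):

```bash
# Deploy the template into a resource group, supplying parameter values
az deployment group create \
  --resource-group rg-app1 \
  --template-file azuredeploy.json \
  --parameters vmName=vm-web01 adminUsername=azureuser

# Export an existing resource group as a starting-point template
az group export --name rg-app1 > exported-template.json
```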
Monitoring Azure Compute Resources
Once compute resources like VMs and App Services are deployed, it is the administrator's job to monitor their health and performance. The AZ-104 exam covers Azure's native monitoring capabilities extensively. The primary service for this is Azure Monitor. Azure Monitor collects, analyzes, and acts on telemetry from your cloud and on-premises environments. For virtual machines, Azure Monitor collects platform metrics by default, such as CPU percentage, disk I/O, and network traffic. These metrics give you a high-level overview of the VM's performance.
To get deeper insights into the guest operating system and the workloads running inside the VM, you need to enable diagnostics and install the Log Analytics agent. This agent collects performance counters, event logs, and other data from within the OS and sends it to a Log Analytics workspace. In the workspace, you can use the powerful Kusto Query Language (KQL) to query this data, create visualizations, and build interactive dashboards. You should have a basic understanding of how to write simple KQL queries to retrieve information about your VMs.
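As a simple illustration, a KQL query like the one below returns average CPU utilization per VM from the Perf table; it assumes the default Windows/Linux processor counters are being collected into the workspace:

```kusto
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCPU = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
| order by TimeGenerated desc
```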
A critical aspect of monitoring is alerting. You can configure alert rules in Azure Monitor to proactively notify you when a specific condition is met, such as when a VM's CPU usage stays above 90% for 10 minutes. These alerts can trigger various actions, such as sending an email, firing a webhook to an external system, or even running an Azure Automation runbook to attempt to remediate the issue automatically. Knowing how to set up and manage these metrics, logs, and alerts is a fundamental skill for maintaining a healthy and reliable compute environment in Azure.
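A sketch of the CPU alert described above, created with the CLI (the VM resource ID and action group ID are placeholders you would substitute):

```bash
# Alert when average CPU stays above 90% over a 10-minute window
az monitor metrics alert create \
  --name vm-high-cpu \
  --resource-group rg-app1 \
  --scopes "<vm-resource-id>" \
  --condition "avg Percentage CPU > 90" \
  --window-size 10m \
  --evaluation-frequency 5m \
  --action "<action-group-resource-id>"
```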
Fundamentals of Azure Virtual Networks (VNet)
The foundation of networking in Azure is the Virtual Network, or VNet. A VNet is a logical representation of your own network in the cloud. It provides a private, isolated environment for your Azure resources, such as virtual machines, to communicate with each other, the internet, and your on-premises networks. For the AZ-104 exam, a thorough understanding of VNet concepts is absolutely essential. When you create a VNet, you define a private IP address space using CIDR notation, such as 10.0.0.0/16. This address space is then segmented into one or more subnets.
Subnets allow you to divide your VNet into smaller, manageable segments. Each subnet must have an address range that falls within the VNet's larger address space. Resources deployed in the same subnet can communicate with each other without any extra configuration, and communication between subnets is also enabled by default until you restrict it with network security groups. You might create separate subnets for different tiers of an application, like a web tier, an application tier, and a data tier. This segmentation is a key principle of network design and allows for better organization and security.
As an administrator, you will be responsible for planning the VNet address space to ensure it is large enough for your current and future needs without overlapping with your on-premises networks. You will create and manage VNets and subnets, and you will associate resources, like the network interfaces of your virtual machines, with specific subnets. Understanding how to design and implement this basic network structure is the starting point for building any solution in Azure and a core competency tested on the exam.
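A sketch of this basic layout with the Azure CLI (the names and address ranges are placeholders chosen to match the three-tier example above):

```bash
# Create a VNet with a /16 address space and an initial web subnet
az network vnet create \
  --resource-group rg-network \
  --name vnet-app \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name web-subnet \
  --subnet-prefixes 10.0.1.0/24

# Add further subnets for the application and data tiers
az network vnet subnet create \
  --resource-group rg-network \
  --vnet-name vnet-app \
  --name app-subnet \
  --address-prefixes 10.0.2.0/24

az network vnet subnet create \
  --resource-group rg-network \
  --vnet-name vnet-app \
  --name data-subnet \
  --address-prefixes 10.0.3.0/24
```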
Configuring Network Security Groups (NSGs)
Securing your virtual network is just as important as creating it. The primary tool for filtering network traffic to and from Azure resources in a VNet is the Network Security Group (NSG). An NSG contains a list of security rules that allow or deny inbound or outbound network traffic. For the AZ-104 exam, you must be proficient in creating and managing NSGs to protect your infrastructure. Each rule in an NSG specifies a source, a source port, a destination, a destination port, and a protocol. It also has a priority and an action (Allow or Deny).
NSGs can be associated with either a subnet or an individual network interface (NIC). When an NSG is associated with a subnet, its rules apply to all resources within that subnet. When it is associated with a NIC, its rules apply only to that specific NIC. It is important to understand how these rules are evaluated. For inbound traffic, the rules of the subnet's NSG are processed first, followed by the rules of the NIC's NSG. For outbound traffic, the order is reversed. This allows for a layered security approach.
A common task for an administrator is to configure NSG rules to control access to a virtual machine. For example, you might create an inbound rule to allow RDP traffic on port 3389 from your corporate office's public IP address, while denying it from all other locations. You must also be aware of the default rules that are present in every NSG, which include rules that allow traffic within a VNet and from the Azure Load Balancer, as well as a final 'DenyAllInbound' rule that has the lowest priority (the highest numeric value) and blocks any traffic not explicitly allowed by a higher-priority rule.
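A sketch of the RDP rule from the example above, applied at the subnet level (the office IP, names, and priority are placeholders):

```bash
# Create an NSG and allow RDP only from the corporate office IP
az network nsg create \
  --resource-group rg-network \
  --name nsg-web

az network nsg rule create \
  --resource-group rg-network \
  --nsg-name nsg-web \
  --name Allow-RDP-From-Office \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.10 \
  --destination-port-ranges 3389

# Associate the NSG with the web subnet so the rule applies to all its resources
az network vnet subnet update \
  --resource-group rg-network \
  --vnet-name vnet-app \
  --name web-subnet \
  --network-security-group nsg-web
```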
Implementing Azure DNS
Azure DNS is a hosting service for DNS domains that provides name resolution using Microsoft Azure infrastructure. While it does not let you buy a domain name, it allows you to host your DNS zones, like contoso.com, and manage the DNS records for that domain. The AZ-104 exam requires you to understand how to use Azure DNS for both public and private name resolution. For public DNS, you create a public DNS zone in Azure and then delegate your domain to Azure's name servers from your domain registrar.
Once the zone is created, you can manage its DNS records just as you would with any other DNS provider. You can create A records to map hostnames to IPv4 addresses, AAAA records for IPv6 addresses, CNAME records for aliases, MX records for mail servers, and so on. As an administrator, you will be responsible for creating and managing these records to ensure that your public-facing services are correctly resolved by users on the internet. You should be familiar with the different record types and their purpose.
For name resolution within a virtual network, Azure provides a default internal DNS service. However, for more control or for scenarios involving custom domain names, you can use Azure Private DNS zones. A private DNS zone is only accessible from within the virtual networks that you link it to. It allows you to use your own custom domain names rather than the Azure-provided names for your VMs. For example, you can have a VM resolve as webserver.corp.contoso.com instead of its default internal name. You must know how to create, link, and manage records in private DNS zones.
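A sketch of both scenarios with the Azure CLI (the zone names, record values, and VNet name are placeholders):

```bash
# Public zone: host contoso.com and add an A record for www
az network dns zone create \
  --resource-group rg-dns \
  --name contoso.com

az network dns record-set a add-record \
  --resource-group rg-dns \
  --zone-name contoso.com \
  --record-set-name www \
  --ipv4-address 203.0.113.20

# Private zone: custom names inside the VNet, with auto-registration for VMs
az network private-dns zone create \
  --resource-group rg-dns \
  --name corp.contoso.com

az network private-dns link vnet create \
  --resource-group rg-dns \
  --zone-name corp.contoso.com \
  --name link-vnet-app \
  --virtual-network vnet-app \
  --registration-enabled true
```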
Connecting Virtual Networks with Peering
In many scenarios, you will need to enable communication between resources in different virtual networks. The primary way to achieve this in Azure is through VNet peering. VNet peering connects two virtual networks, allowing resources in either VNet to communicate with each other as if they were in the same network. The traffic between the peered VNets uses the Microsoft backbone network, ensuring that it remains private and does not traverse the public internet. For the AZ-104 exam, you must understand how to configure and manage VNet peering.
Peering can be established between VNets in the same Azure region (VNet peering) or in different Azure regions (Global VNet peering). The configuration is straightforward but requires setting up the peering connection from both VNets. The address spaces of the two VNets must not overlap; this is a prerequisite for creating the peering. Resources in the peered networks can then resolve each other's names if you have configured DNS correctly, and traffic will flow seamlessly between them, subject to any NSG rules you have in place.
An important concept to understand with peering is gateway transit. If one of the peered VNets has a VPN or ExpressRoute gateway to an on-premises network, you can configure the peering to allow the other VNet to use that gateway to access the on-premises resources. This hub-and-spoke network topology is a common design pattern in Azure. The central 'hub' VNet contains shared services and the gateway, while multiple 'spoke' VNets are peered to the hub. This centralizes management and reduces costs. Knowing how to configure gateway transit is a key skill.
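A sketch of a hub-and-spoke peering with gateway transit enabled; the VNet names are placeholders, and --remote-vnet takes a full resource ID when the VNets live in different resource groups or subscriptions:

```bash
# Hub side: allow the spoke to use the hub's VPN gateway
az network vnet peering create \
  --resource-group rg-network \
  --vnet-name vnet-hub \
  --name hub-to-spoke \
  --remote-vnet vnet-spoke \
  --allow-vnet-access \
  --allow-gateway-transit

# Spoke side: route on-premises traffic through the hub's gateway
az network vnet peering create \
  --resource-group rg-network \
  --vnet-name vnet-spoke \
  --name spoke-to-hub \
  --remote-vnet vnet-hub \
  --allow-vnet-access \
  --use-remote-gateways
```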
Implementing VPN Gateways for Hybrid Connectivity
Connecting your on-premises network to your Azure VNet is a common requirement for creating a hybrid cloud environment. One of the ways to achieve this is by using an Azure VPN Gateway. A VPN Gateway is a specific type of virtual network gateway that sends encrypted traffic between an Azure VNet and an on-premises location over the public internet. For the AZ-104 exam, you need to understand the components involved in setting up a site-to-site (S2S) VPN and how to configure them.
The setup involves several components in Azure. First, you need to create a special subnet in your VNet called the GatewaySubnet. This subnet is reserved exclusively for the gateway. Then, you create the virtual network gateway itself, specifying its SKU, which determines its performance and capabilities. You also create a local network gateway, which is an object in Azure that represents your on-premises VPN device and its public IP address. Finally, you create a connection object that links the virtual network gateway to the local network gateway, establishing the VPN tunnel.
On the on-premises side, you need a compatible VPN device with a public-facing IP address. You will configure this device with the necessary parameters to establish the IKE/IPsec tunnel with the Azure VPN Gateway. As an administrator, you will be responsible for deploying all the Azure components and providing the necessary information to the on-premises network team. You will also need to know how to monitor the connection status and troubleshoot common issues. Understanding this end-to-end process is critical for hybrid connectivity scenarios.
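A sketch of the Azure-side components, in the order described above; all names, address ranges, and the pre-shared key are placeholders, and the gateway deployment itself can take 30 minutes or more:

```bash
# Reserved subnet for the gateway, plus a public IP for it
az network vnet subnet create \
  --resource-group rg-network \
  --vnet-name vnet-hub \
  --name GatewaySubnet \
  --address-prefixes 10.0.255.0/27

az network public-ip create \
  --resource-group rg-network \
  --name pip-vpngw \
  --sku Standard

# The virtual network gateway itself
az network vnet-gateway create \
  --resource-group rg-network \
  --name vpngw-hub \
  --vnet vnet-hub \
  --public-ip-addresses pip-vpngw \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1

# Object representing the on-premises VPN device and its address space
az network local-gateway create \
  --resource-group rg-network \
  --name lgw-onprem \
  --gateway-ip-address 198.51.100.4 \
  --local-address-prefixes 192.168.0.0/16

# The site-to-site connection that brings up the IPsec tunnel
az network vpn-connection create \
  --resource-group rg-network \
  --name cn-hub-to-onprem \
  --vnet-gateway1 vpngw-hub \
  --local-gateway2 lgw-onprem \
  --shared-key "<pre-shared-key>"
```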