November 5, 2024

Azure Data Factory: Tutorial on its key concepts

In a data-driven landscape, companies are often overwhelmed by the sheer amount of data they manage, a situation that urgently needs to change.
Raw, disorganized data usually sits in storage systems without the context or meaning needed to give analysts, data scientists, or business decision-makers useful information. In this guide, we look at the role of tools like Azure Data Factory and how to get the most out of your data.

What is Azure Data Factory?

Azure Data Factory is a cloud data integration service that allows you to ingest, prepare, and transform data at scale. It supports a wide variety of use cases, such as data engineering, migrating on-premises SSIS packages to Azure, operational data integration, analytics, and ingesting data into data warehouses.

Big data requires a service that can orchestrate and operationalize processes to refine these enormous stores of raw data into actionable business insights. This is where Azure Data Factory comes in: it is designed for complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.

The main features of this solution are:

  • Data compression: allows data to be compressed during copying before it is written to the target data source, which helps optimize bandwidth usage.
  • Support for different data sources: a wide range of data sources is supported, which is very useful when you need to read data from or write data to different systems.
  • Custom event triggers: let you automate data processing with custom event triggers, so a given action runs automatically when a specific event occurs (a sketch follows this list).
  • Data preview and validation: provides tools to preview and validate data, helping ensure it is copied and written to the target data source correctly.
  • Customizable data flows: allow you to build custom data flows and add your own actions or steps for data processing.
  • Built-in security: provides features such as Microsoft Entra ID integration and role-based access control to govern access to data flows.
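
As an illustration of the event trigger feature, here is a rough sketch of what a custom event trigger definition looks like, written as a Python dictionary that mirrors the JSON Data Factory stores. The trigger name, Event Grid topic, event types, and pipeline name are hypothetical, and the exact property names should be checked against the current trigger schema.

```python
# Hedged sketch of a custom event trigger definition (all names and values are made up).
# The trigger subscribes to an Azure Event Grid custom topic and starts a pipeline
# whenever a matching event arrives.
custom_event_trigger = {
    "name": "NewFileArrivedTrigger",                      # hypothetical trigger name
    "properties": {
        "type": "CustomEventsTrigger",
        "typeProperties": {
            # Resource ID of the Event Grid custom topic the trigger listens to.
            "scope": "/subscriptions/<sub-id>/resourceGroups/<rg>"
                     "/providers/Microsoft.EventGrid/topics/<topic-name>",
            "events": ["Contoso.Sales.FileDropped"],      # custom event types to match
            "subjectBeginsWith": "sales/"                 # optional subject filter
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "CopyFromBlobToSql",  # pipeline to start
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```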

Azure Data Factory Components

ADF has key components that work together to provide the platform on which you can compose data-driven workflows with steps to move and transform data (a sketch of how they fit together follows the list):

  • Pipelines: a data factory can have one or more pipelines, each a logical grouping of activities that performs a unit of work. This allows activities to be managed as a set instead of individually, and they can be chained to run sequentially or independently.
  • Mapping data flows: allow you to create and manage graphs of transformation logic that can be used to transform data of any size. You can build a reusable library of transformation routines and run those processes at scale from the service's pipelines in an automated fashion.
  • Activity: represents a processing step in a pipeline and three types are supported: data movement activities, data transformation activities, and control activities.
  • Datasets: represent data structures within data stores that point to or reference the data to be used in activities.
  • Linked services: define the connection information required for ADF to connect to external resources. They are often used to represent either a data store or a compute resource that can host the execution of an activity.
  • Triggers: represent the processing unit that determines when to start the execution of a process.
  • Parameters: are defined in the pipeline and passed during an execution created by a trigger or a manually run pipeline. Activities within the pipeline consume the parameter values; a dataset is itself an example of a strongly typed, reusable parameter.
  • Control flow: is an orchestration of pipeline activities that includes chaining activities in a sequence, branching, defining parameters at the pipeline level, and passing arguments while invoking the pipeline or from a trigger.
  • Variables: can be used within pipelines to store temporary values, and together with parameters they enable passing values between pipelines, data flows, and other activities.
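
To make these components more concrete, the following is a minimal sketch of how a pipeline, an activity, dataset references, and a parameter relate to each other in the JSON that Data Factory keeps behind the visual designer, written here as a Python dictionary. All names (CopyFromBlobToSql, InputBlobDataset, and so on) are illustrative rather than taken from a real factory.

```python
# Illustrative sketch of the definition behind an ADF pipeline (names are made up).
# A pipeline groups activities; activities reference datasets; datasets in turn
# reference linked services; parameters are declared on the pipeline and consumed
# by the activities inside it.
pipeline_definition = {
    "name": "CopyFromBlobToSql",                       # hypothetical pipeline name
    "properties": {
        "parameters": {
            "sourceFolder": {"type": "String", "defaultValue": "raw/"}
        },
        "activities": [
            {
                "name": "CopySalesData",               # a data movement activity
                "type": "Copy",
                "inputs":  [{"referenceName": "InputBlobDataset", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "OutputSqlDataset", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "DelimitedTextSource"},
                    "sink": {"type": "AzureSqlSink"}
                }
            }
        ]
    }
}
# The two datasets referenced above would point to linked services (for example an
# Azure Blob Storage account and an Azure SQL Database), and a trigger or a manual
# run would decide when this pipeline actually executes.
```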

How Azure Data Factory works

Data Factory contains a series of interconnected systems that together provide a complete end-to-end platform for data engineers. The workflow can be broken down into the following stages:

Connecting and collecting

The first step in building an information production system is to connect to all the necessary data and processing sources, such as software-as-a-service (SaaS) applications, databases, file shares, and FTP web services. The data is then moved, as needed, to a centralized location for further processing.

Without Data Factory, companies must create custom data movement components or write custom services to integrate these data sources and their processing. Integrating and maintaining these systems is costly and difficult, and lacks the enterprise monitoring, alerts, and controls that such a managed service can provide.

With Data Factory, you can use the Copy activity in a data pipeline to move data from on-premises and cloud source data stores to a centralized cloud store for further analysis.
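
To ground this "connect and collect" step, here is a minimal, hedged sketch of the two building blocks a Copy activity relies on: a linked service that holds the connection information and a dataset that points at the data through it. The names, container, and connection string are placeholders.

```python
# Hedged sketch of a linked service and a dataset definition (all names are illustrative).
# The linked service describes how to connect; the dataset describes which data to read.
blob_linked_service = {
    "name": "RawStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            # In practice the secret would come from Azure Key Vault, not plain text.
            "connectionString": "<storage-connection-string>"
        }
    }
}

input_blob_dataset = {
    "name": "InputBlobDataset",
    "properties": {
        "type": "DelimitedText",                       # e.g. CSV files
        "linkedServiceName": {
            "referenceName": "RawStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "raw",
                "folderPath": "sales"
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": True
        }
    }
}
```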

Transforming and enriching

Once the data is in a centralized cloud data warehouse, the collected data can be processed or transformed using ADF mapping data flows. These allow you to create and maintain data transformation graphs that run on Spark without the need to understand Spark clusters or their programming.

In addition, if you prefer to code transformations by hand, Data Factory supports external activities for running your transformations on compute services such as HDInsight Hadoop, Spark, Data Lake Analytics, and Machine Learning.
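
As a hedged example of such an external activity, the sketch below shows roughly how a pipeline activity can hand a transformation off to an Azure Databricks notebook, another compute service Data Factory can call out to; the linked service name, notebook path, and parameters are hypothetical.

```python
# Hedged sketch of an external transformation activity: the pipeline delegates the
# work to an Azure Databricks notebook through a Databricks linked service.
databricks_notebook_activity = {
    "name": "TransformSalesData",
    "type": "DatabricksNotebook",
    "linkedServiceName": {
        "referenceName": "MyDatabricksLinkedService",  # connection to the workspace
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "notebookPath": "/Shared/transform_sales",     # notebook that does the work
        "baseParameters": {                            # values handed to the notebook
            "input_path": "/mnt/raw/sales",
            "output_path": "/mnt/curated/sales"
        }
    }
}
```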

CI/CD and publication

ADF also offers CI/CD support for data pipelines through Azure DevOps and GitHub, allowing you to incrementally develop and deliver ETL processes before publishing the finished product.

Once the raw data has been refined into a business-ready format, it can be loaded into Azure Synapse Analytics, Azure SQL Database, Azure Cosmos DB, or whichever analytics engine your business users can point to from their business intelligence tools.

Monitoring

Once the data integration pipeline has been built and deployed and is delivering business value from refined data, you can monitor its activities and scheduled pipelines for success and failure rates.
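
As a sketch of what this looks like programmatically, the snippet below, which assumes the azure-identity and azure-mgmt-datafactory Python packages and uses placeholder resource names, starts a pipeline run on demand and then queries run status, the same information the monitoring views surface in the portal.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

# Placeholder identifiers: replace with your subscription, resource group, factory,
# and pipeline names.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-resource-group"
FACTORY_NAME = "my-data-factory"
PIPELINE_NAME = "CopyFromBlobToSql"

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Start a run on demand (triggers cover the scheduled and event-driven cases).
run = adf_client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME, parameters={}
)

# Check the status of that specific run.
pipeline_run = adf_client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
print(pipeline_run.status)  # e.g. "InProgress", "Succeeded", "Failed"

# List all runs from the last 24 hours to get success/failure rates.
now = datetime.now(timezone.utc)
filters = RunFilterParameters(last_updated_after=now - timedelta(days=1),
                              last_updated_before=now)
for r in adf_client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, filters).value:
    print(r.pipeline_name, r.status)
```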

Azure Data Factory vs Databricks

Azure Data Factory and Databricks are two of the most popular data services on the market, but when we analyze them we find clear differences between them:

Purpose and focus

ADF is designed for data integration and orchestration, excelling at moving data between multiple sources, transforming it, and loading it into a centralized location for further analysis. It is therefore ideal for scenarios where you need to automate and manage data workflows across multiple environments.

Databricks focuses on data processing, analytics, and ML. It is the go-to platform for companies looking to perform large-scale data analysis, develop ML models, and collaborate on data science projects.

Transformation Capabilities

ADF offers data transformation capabilities through its Data Flow feature, which allows users to perform various transformations directly within the pipeline. While powerful, these transformations are typically best suited for ETL processes and may not be as extensive or flexible as those offered by Databricks.

Databricks, by contrast, offers advanced data transformation capabilities. Users can leverage the full power of Spark to perform transformations, aggregations, and complex data processing tasks, making it a strong choice for heavy data manipulation and computation.
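
For a feel of the difference, here is a minimal PySpark sketch of the kind of transformation you would typically write in a Databricks notebook; the paths and column names are purely illustrative.

```python
# Minimal PySpark sketch of a Databricks-style transformation (illustrative paths/columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided as `spark` in a Databricks notebook

# Read raw CSV files, cast the amount column, and aggregate revenue per day and country.
sales = spark.read.option("header", True).csv("/mnt/raw/sales/*.csv")

daily_revenue = (
    sales
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("revenue"),
         F.countDistinct("order_id").alias("orders"))
)

# Persist the curated result as a Delta table for downstream analytics.
daily_revenue.write.mode("overwrite").format("delta").save("/mnt/curated/daily_revenue")
```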

Integration with other Azure services

Both integrate with other Azure services but with different approaches. ADF is designed for ETL and orchestration, making it the best tool for managing data workflows involving multiple Azure services.

Databricks, being more focused on advanced analytics and AI, integrates more deeply with technologies and services such as Delta Lake for lakehouse storage and Azure Machine Learning for model deployment.

Ease of use

Azure Data Factory’s drag-and-drop interface makes it easy to use, even for people with little technical knowledge.

However, Databricks requires a higher level of technical proficiency, making it more suitable for engineers and data scientists.

Scalability and performance

Both are highly scalable, but each excels in different areas. ADF is designed to handle large-scale data migration and integration tasks, making it perfect for orchestrating complex ETL workflows.

Databricks offers superior performance for processing and analyzing large volumes of data, making it the best choice for scenarios requiring scalability and high-performance computing.

Azure Data Factory Data Flow

Mapping data flows are visually designed data transformations in Azure Data Factory. They allow you to develop transformation logic without writing code, and they are executed as pipeline activities on horizontally scalable Apache Spark clusters.

Data flow activities can be operationalized using ADF scheduling, control flow, and monitoring capabilities.

They provide a completely visual experience that requires no programming, as ADF controls all code translation, path optimization, and execution of data flow jobs.

They are created from the Factory Resources panel, just like pipelines and datasets, and are designed on the data flow canvas.
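
As a hedged illustration of how a data flow is attached to a pipeline, this is roughly what the "Execute data flow" activity looks like; the data flow name and compute settings are placeholders and the exact properties should be verified against the current schema.

```python
# Hedged sketch of an "Execute data flow" pipeline activity (names are illustrative).
# The activity references a mapping data flow and the Spark compute it should run on.
execute_data_flow_activity = {
    "name": "RunCleanSalesDataFlow",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataFlow": {
            "referenceName": "CleanSalesDataFlow",  # the visually designed data flow
            "type": "DataFlowReference"
        },
        "compute": {
            "computeType": "General",               # type of Spark cluster
            "coreCount": 8                          # scale-out setting
        }
    }
}
```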

Azure Data Factory vs Azure Synapse

Both tools are closely related: the data integration capabilities in Azure Synapse Analytics, such as Synapse data flows and pipelines, are based on those of Azure Data Factory.

If we compare their core capabilities, the main difference is that Azure Data Factory is a standalone data integration service, while Synapse combines data integration with data warehousing and big data analytics in a single workspace.

Getting the most out of your data

Users have become accustomed to interactive, on-demand, and virtually unlimited data. This has raised expectations for the user experience, and real-time data analysis has become a key business capability, revolutionizing decision-making and dynamically shaping an organization’s strategies.

In a rapidly changing business environment, the ability to analyze data instantly has become a necessity, giving companies the ability to monitor events in real time.

This allows you to react quickly to changes and address potential problems early. At Plain Concepts we propose a data strategy that helps you extract real value and get the most out of your data.

We help you discover how to get value from your data, control and analyze all your data sources, and use data to make intelligent decisions and accelerate your business:

  • Data analytics and strategy assessment: we evaluate data technology for architecture synthesis and implementation planning.
  • Modern analytics and data warehouse assessment: we give you a clear view of the modern data warehousing model by sharing best practices on how to prepare data for analysis.
  • Exploratory data analysis evaluation: we look at the data before making assumptions so you get a better understanding of the available datasets.
  • Digital Twin Accelerator and Smart Factory: we create a framework to deliver integrated digital twin manufacturing and supply chain solutions in the cloud.

In addition, we offer a Microsoft Fabric Adoption Framework with which we evaluate technological and business solutions, draw up a clear roadmap for the data strategy, identify the use cases that make a difference in your company, take into account the sizing of teams, time, and costs, study compatibility with existing data platforms, and migrate Power BI, Synapse, and Data Warehouse solutions to Fabric.

Elena Canorea
Communications Lead