526.350 kr.
This course covers methods and practices for implementing data engineering solutions using Microsoft Fabric. Students will learn how to design and develop effective data loading patterns, data architectures, and orchestration processes. Course objectives include ingesting and transforming data, and securing, managing, and monitoring data engineering solutions.
While there are no required prerequisites for this course, it is recommended that students have foundational knowledge of core data concepts and how they are implemented using Microsoft data services. For more information, see Azure Data Fundamentals.
The following technologies and techniques will help delegates engage effectively with the course material, navigate the tools and technologies in Microsoft Fabric, and apply what they learn to real-world scenarios. They are optional but will lead to a better understanding of the course.
Target Audience
The audience for this course is data professionals with experience in data extraction, transformation, and loading. DP-700 is designed for professionals who need to create and deploy data engineering solutions using Microsoft Fabric for enterprise-scale data analytics. Learners should also have experience manipulating and transforming data with at least one of the following languages: Structured Query Language (SQL), PySpark, or Kusto Query Language (KQL).
This course, aligned with Microsoft’s DP-700 certification, provides comprehensive training on implementing data engineering solutions using Microsoft Fabric. Participants will learn to design data architectures, develop efficient data loading patterns, and orchestrate data processes. The course focuses on key areas such as data ingestion, transformation, storage, security, and monitoring, using technologies including Apache Spark, Delta Lake, Dataflows Gen2, and Fabric pipelines.
Learners will gain practical skills in real-time data processing, medallion architecture design, and querying data warehouses, along with expertise in eventstreams, KQL, and CI/CD implementation. Additionally, the course covers essential aspects of data governance, security (including row-level and column-level security), and workspace administration in Fabric environments.
Designed for experienced data professionals skilled in data extraction, transformation, and loading (ETL), the course is particularly beneficial for those familiar with SQL, PySpark, or Kusto Query Language (KQL) and cloud-based solutions like Azure. Recommended prerequisites include knowledge of relational databases, ETL workflows, data lakes, and access control mechanisms. By the end, participants will be equipped to create, deploy, and manage enterprise-scale data analytics solutions within Microsoft Fabric for real-world applications.
1 – Introduction to end-to-end analytics using Microsoft Fabric
2 – Get started with lakehouses in Microsoft Fabric
3 – Use Apache Spark in Microsoft Fabric
4 – Work with Delta Lake tables in Microsoft Fabric
5 – Ingest Data with Dataflows Gen2 in Microsoft Fabric
6 – Orchestrate processes and data movement with Microsoft Fabric
7 – Organize a Fabric lakehouse using medallion architecture design
8 – Get started with Real-Time Intelligence in Microsoft Fabric
9 – Use real-time eventstreams in Microsoft Fabric
10 – Work with real-time data in a Microsoft Fabric eventhouse
11 – Get started with data warehouses in Microsoft Fabric
12 – Load data into a Microsoft Fabric data warehouse
13 – Monitor a Microsoft Fabric data warehouse
14 – Secure a Microsoft Fabric data warehouse
15 – Implement continuous integration and continuous delivery (CI/CD) in Microsoft Fabric
16 – Monitor activities in Microsoft Fabric
17 – Secure data access in Microsoft Fabric
18 – Administer a Microsoft Fabric environment