Data Engineering with Databricks

SKU: NTV-DBDE

261.000 kr.

Data professionals from all disciplines will benefit from this comprehensive introduction to the components of the Databricks Lakehouse Platform that directly support putting ETL pipelines into production. You’ll use SQL and Python to define and schedule pipelines that incrementally process new data from a variety of sources to power analytic applications and dashboards in the lakehouse. The course offers hands-on instruction in the Databricks Data Science and Engineering Workspace, Databricks SQL, Delta Live Tables, Databricks Repos, task orchestration with Databricks Jobs, and Unity Catalog.

This course will prepare you to take the Databricks Certified Data Engineer Associate exam.

Prerequisites

Participants should have:

  • Beginner familiarity with basic cloud concepts (virtual machines, object storage, identity management).
  • Ability to perform basic code development tasks (e.g., creating compute instances, running code in notebooks, using basic notebook operations, and importing repositories from Git).
  • Intermediate familiarity with SQL, including statements such as CREATE, SELECT, INSERT, UPDATE, and DELETE, and clauses such as GROUP BY and JOIN (the short sketch after this list illustrates the expected level).
  • Intermediate experience with SQL concepts such as aggregate functions, filters, sorting, indexes, tables, and views.
  • Basic knowledge of Python programming, Jupyter Notebook interface, and PySpark fundamentals.
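
The sketch below is a rough gauge of the SQL and PySpark level assumed by the prerequisites above. The table, columns, and values are made up for illustration, and it runs on any local PySpark installation.

    # A hypothetical sales table queried at the "intermediate SQL" level
    # described above; all names and values are illustrative only.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("prereq-check").getOrCreate()

    # CREATE a small temporary view from inline VALUES.
    spark.sql(
        "CREATE OR REPLACE TEMP VIEW sales AS "
        "SELECT * FROM VALUES (1, 'east', 100.0), (2, 'west', 250.0), "
        "(3, 'east', 75.0) AS t(order_id, region, amount)"
    )

    # SELECT with a filter, an aggregate, GROUP BY, and ORDER BY.
    spark.sql("""
        SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM sales
        WHERE amount > 50
        GROUP BY region
        ORDER BY revenue DESC
    """).show()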

Target Audience

This course is designed for:

  • Data Engineers who want to enhance their knowledge of Databricks and Delta Lake.
  • Data Analysts looking to expand their expertise in data pipelines and transformation.
  • Cloud Engineers and Developers working with big data frameworks.
  • Professionals preparing for the Databricks Certified Data Engineer Associate exam.

You will learn the following

  • Leverage the Databricks Lakehouse Platform to carry out the core responsibilities of data pipeline development
  • Use SQL and Python to write production data pipelines that extract, transform, and load data into tables and views in the lakehouse
  • Simplify data ingestion and incremental change propagation using Databricks-native features and syntax, including Delta Live Tables
  • Orchestrate production pipelines to deliver fresh results for ad hoc analytics and dashboarding

Summary

Day 1

  • Delta Lake
  • Relational Entities on Databricks
  • ETL with Spark SQL
  • Just Enough Python for Spark SQL
  • Incremental Data Processing with Structured Streaming and Auto Loader (illustrated in the sketch after this list)
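
As a taste of the Day 1 material, here is a minimal sketch of incremental ingestion with Auto Loader and Structured Streaming. It assumes a Databricks cluster (the cloudFiles source is Databricks-only, and `spark` is provided by the runtime); the landing path, checkpoint path, and table name are hypothetical.

    # Incrementally pick up new JSON files and append them to a Delta table.
    (spark.readStream
          .format("cloudFiles")                                        # Auto Loader source
          .option("cloudFiles.format", "json")                         # format of arriving files
          .option("cloudFiles.schemaLocation", "/tmp/schemas/orders")  # where the inferred schema is tracked
          .load("/landing/orders/")                                    # hypothetical landing zone
          .writeStream
          .option("checkpointLocation", "/tmp/checkpoints/orders")     # exactly-once progress tracking
          .trigger(availableNow=True)                                  # process all new files, then stop
          .toTable("orders_bronze"))                                   # managed Delta table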

Day 2

  • Medallion Architecture in the Data Lakehouse
  • Delta Live Tables (see the sketch after this list)
  • Task Orchestration with Databricks Jobs
  • Databricks SQL
  • Managing Permissions in the Lakehouse
  • Productionizing Dashboards and Queries on Databricks SQL
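
To give a flavour of the Day 2 material, here is a minimal two-layer (bronze to silver) Delta Live Tables sketch in the medallion style. It runs only inside a DLT pipeline on Databricks, and the source path, table names, and data-quality expectation are hypothetical.

    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders ingested incrementally with Auto Loader.")
    def orders_bronze():
        return (spark.readStream
                     .format("cloudFiles")
                     .option("cloudFiles.format", "json")
                     .load("/landing/orders/"))    # hypothetical landing zone

    @dlt.table(comment="Validated orders for downstream analytics.")
    @dlt.expect_or_drop("valid_amount", "amount > 0")    # drop rows that fail the check
    def orders_silver():
        return (dlt.read_stream("orders_bronze")
                   .withColumn("ingested_at", F.current_timestamp()))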