Microsoft DP-700 prep: Fabric Data Engineer Associate
2025-02-24
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English (US) | Size: 5.56 GB | Duration: 14h 7m
Learn PySpark, SQL, KQL and Fabric for the DP-700 exam. Also helps with the APL-3008, 3009 and 3010 Microsoft Applied Skills.
What you'll learn
Implement and manage an analytics solution
Configure security and governance
Ingest and transform data
Monitor and optimize an analytics solution
Requirements
Before you begin this course, you should have a computer with access to the internet.
That's it! It would be good if you had an office/school email address (which you have if you have accessed other Microsoft apps). If you don't have one, I can give you some suggestions on how to get one.
No prior knowledge of PySpark, SQL or KQL is needed. However, if you do have some, that would be useful.
Description
This course covers the content required for the Microsoft DP-700 "Fabric Data Engineer Associate" certification exam, using the Study Guide for "Implementing Analytics Solutions Using Microsoft Fabric".

This course is also useful for the following Microsoft Applied Skills:

APL-3008 "Implement a Real-Time Intelligence solution with Microsoft Fabric"
APL-3009 "Implement a lakehouse in Microsoft Fabric"
APL-3010 "Implement a data warehouse in Microsoft Fabric"

Please note: this course is not affiliated with, endorsed by, or sponsored by Microsoft.

Following a quick look around Fabric, we will use Dataflow Gen2 and pipelines: ingesting and copying data, and scheduling and monitoring data pipeline runs.

Next, we'll manipulate data using PySpark and SQL in a notebook. We'll look at loading and saving data using notebooks, then manipulate dataframes by choosing which columns and rows to show, converting data types, and aggregating and sorting dataframes.

We will then transform data in a lakehouse: merging and joining data, and identifying missing data and null values. We will also improve notebook performance, automate notebooks, and create objects such as shortcuts and file partitioning.

Following this, we'll look at using a data warehouse: transforming data, creating an incremental data load, and managing and optimizing the warehouse.

We'll then create an eventhouse and find out how to transform data using KQL: we'll select, filter and aggregate data, and manipulate data using string, number, datetime and timespan functions. We'll end these sections by transforming data, merging and joining data, and more.

Finally, we will look at ingesting and transforming streaming data, including revising KQL knowledge from the DP-600 exam, workspace settings and monitoring.

No prior knowledge is assumed.
We will start from the beginning for all languages and items, although any prior knowledge of PySpark, SQL or KQL is useful.

Once you have completed the course, you will have a good knowledge of using notebooks to manipulate data using PySpark. And with some practice and knowledge of some additional topics, you could even go for the official Microsoft DP-700 certification - wouldn't the "Microsoft Certified: Fabric Data Engineer Associate" certification look good on your CV or resume?

I hope to see you in the course - why not have a look at what you could learn?
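To give a flavour of the select/filter/aggregate pattern the course teaches in SQL, KQL and PySpark, here is a minimal sketch. In Fabric you would run this as T-SQL in a warehouse or Spark SQL in a notebook; SQLite is used here only so the example runs anywhere, and the table name and columns are hypothetical.

```python
import sqlite3

# In-memory database standing in for a Fabric warehouse table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 100.0), ("North", 250.0), ("South", 75.0)],
)

# Aggregate total sales per region, keeping only regions over 100
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('North', 350.0)]
conn.close()
```

The same shape carries over to KQL (`summarize ... by region` followed by `where`) and to PySpark (`groupBy("region").agg(...)` followed by `filter`), which is why the course treats the three languages side by side.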
Who this course is for:
This course is for you if you want to implement data engineering solutions using Microsoft Fabric.
You will be able to use PySpark, SQL or KQL to query batch or streaming data.
By the end of this course, after taking the official Practice Tests, you could enter (and hopefully pass) Microsoft's official DP-700 exam.
Wouldn't the "Fabric Data Engineer Associate" certification look good on your CV or resume?