Excelgoodies | +1 650 491 3131
For Modern Analysts

Power BI with Databricks:
Bridging Data Engineering & BI

Power BI | Advanced DAX | Databricks | SQL | Lakehouse Architecture

(1.5K+ Professionals enrolled)


Program Overview

Training Schedule

Tuesday, 11 Apr

View Schedule

6 Weeks | 60 Hours

30 Sessions, 2 Hrs Each

Live Online, Instructor-Led

Certificates

5 Specialist Certificates

View Certificate Details

Course Fee

$1699

See what's included

Data Engineering & BI: Databricks


Batch starts on Tuesday, 11 Apr

For Modern Cloud BI & Data Engineers

Databricks + Delta Lake + Power BI + Lakehouse Architecture

Where Data Engineering Meets Scalable BI.

Built for professionals designing data solutions on the modern Lakehouse stack, this course helps you master Databricks integration with Power BI, optimize pipelines, and deliver analytics at cloud scale. Gain the expertise to bridge raw data and business insights — the skillset every enterprise now demands.

Tools You'll Learn

Visualization Tool

Power BI

Data Engineering Platform

Databricks

Lakehouse Architecture

Delta Lake

Data Storage

Delta Lake / Azure Data Lake

Data Modeling

DAX / Power BI Modeling

Query Language

SQL / PySpark

Cloud Platform

Microsoft Azure

Scripting Language

Python

Integration Layer

Power BI + Databricks Connector

In just 6 weeks, you'll be able to:

Engineer Scalable Data Pipelines – Automate and orchestrate workflows using Databricks Workflows.

Build a Unified Lakehouse – Manage structured and unstructured data with Delta Lake and Azure Data Lake.

Transform Data at Scale – Cleanse, process, and enrich data using SQL, PySpark, and Python.

Integrate BI Seamlessly – Connect Databricks with Power BI to deliver real-time analytics and insights.

Optimize for Performance & Governance – Caching, security, and tuning best practices.

Power Business Decisions – Deliver production-grade, cloud-scale BI that bridges data engineering and analytics.


Ideal For:


Cloud Data Engineers

Building scalable data pipelines with Databricks and Delta Lake.


BI Developers

Connecting Power BI with Databricks for cloud-scale analytics.


Data Analysts

Using Databricks SQL and Power BI for advanced reporting.


Data Architects

Designing unified, secure Lakehouse architectures.


ETL Specialists

Automating data ingestion and transformation workflows.


Tech Leads & Decision Makers

Modernizing enterprise BI with the Databricks Lakehouse.

A snapshot of what you'll be learning in 6 weeks.

Course Syllabus Overview

Power BI

Building Blocks of Power BI
  • Visualizations
  • Datasets
  • Reports
  • Dashboards
  • Tiles
Building Your First Power BI Report
  • Connect to Data Sources in Power BI Desktop
  • Clean and Transform Your Data With the Query Editor
  • Create a report in Power BI Desktop
  • Publish the report in the Power BI service
Data Modelling with Power BI
  • Fundamentals of Modelling
  • How to Manage Your Data Relationships
  • Create Calculated Columns
  • Optimizing Data Models for Better Visuals
  • Create measures and work with time-based functions
  • Create Calculated Tables
  • Explore Time-Based Data
Visualizations
  • Create and Customize Simple Visualizations
Building compelling data visualizations
  • Identify metrics and pair them with appropriate data visuals
  • Using slicers
  • Creating Map Visualizations
  • Creating Tables and Matrixes
  • Creating Waterfall and Funnel Charts
  • Using Gauges and Single Number Cards
  • Charting Options including Formatting with Colors, Shapes, Text Boxes, Images, etc.
Designing User-friendly reports
  • Customize themes
  • Create versatile layouts for your reports
  • Design principles to reduce noise and highlight data stories
Creating interactive reports for data exploration
  • Filtering & drilling for insights
  • Difference between filters & slicers
  • Filter pane for reporting needs

Power Query

Overview
  • Introduction
  • Loading & Refresh
  • Combine data from multiple data sources
Data Transformation
  • Editing Queries Created with Power Query
  • Editing Column Headers in Power Query
  • Splitting Column Data with Power Query
  • Sorting Data
  • Multi-Level Sorting
  • Filtering Data
  • Aggregate data from a column
  • Insert a custom column into a table
  • Merge columns
  • Remove columns
  • Remove rows with errors
  • Promote a row to column headers
  • Transforming Text Values
  • Replacing Data
  • Using the Fill command
  • Pivot and Unpivot Column
  • Transpose Query Data
  • Pivot Column Command in Action
  • Unpivot Columns Command
  • Grouping Data
  • Create a Duplicate Query
  • Group and Summarize Data
  • Advanced Data Grouping
  • Working with multiple sources in Power Query
  • Multiple Excel Tables
  • Expand a column containing an associated table
  • Understanding Table Relationships
  • Merging Queries
Loading Power Query Data to Destinations
  • Familiarity with the Load & Refresh Settings
  • Loading it to Workbook
  • Loading it to Data Model

DAX Functions

Overview
  • What is DAX?
  • Data Types
  • Table-Valued Functions
  • Building a Calendar Table
  • Date and Time Functions
  • Filter Functions
  • Information Functions
  • Logical Functions
  • Mathematical and Trigonometric Functions
  • Statistical Functions
  • Text Functions
  • Time Intelligence Functions
Creating Advanced DAX Measures With Advanced DAX Functions
  • Calculate()
  • All()
  • Filter()
  • IF()
  • Switch()
  • SumX()
Evaluation Context
  • Filter Context
  • Row Context
  • Using RELATED in a Row Context
  • Filters and Relationships
  • USERELATIONSHIP
Hierarchies & Relationships in DAX
  • One-to-Many Relationships
  • Many-to-Many Relationships

Advanced DAX Functions

ADVANCED CONTEXT CONCEPTS
  • Understanding and Debugging Context Transition
  • Expanded Tables and Filter Propagation
  • Using VAR for Performance and Clarity
  • Using TREATAS() to Apply Filters Between Unrelated Tables
  • Virtual Relationships using DAX
  • Shadow Filters and Filter Overriding Techniques
ADVANCED CALCULATION PATTERNS
  • Running Totals with Custom Filter Logic
  • Rolling Averages (e.g., 7-day, 12-month)
  • Year-over-Year (YoY), Quarter-over-Quarter (QoQ), MoM with Non-Standard Calendars
  • Top N Reporting with Others Grouping
  • Parent-Child Hierarchy Navigation using PATH, PATHITEM, PATHLENGTH
  • Budget vs. Actual Comparison Patterns
  • Custom Grouping (Bucketing) in DAX
ADVANCED TIME INTELLIGENCE
  • Semi-Additive Measures (e.g., Closing Balance, Opening Balance)
  • Workdays and Custom Holiday Calendars
  • Dynamic Period Selection (MTD, QTD, YTD) based on Slicers
  • Cumulative Totals Across Multiple Tables or Years
ADVANCED FILTER + CALCULATE PATTERNS
  • Multiple Filters in a Single CALCULATE
  • Using NOT, EXCEPT, INTERSECT inside CALCULATE
  • Combine CALCULATE with FILTER(), VALUES(), ALLSELECTED(), KEEPFILTERS()
RELATIONSHIP MODELING TECHNIQUES
  • Virtual Relationships using DAX (TREATAS, LOOKUPVALUE)
  • USERELATIONSHIP vs. CROSSFILTER
  • Handling Bi-Directional Relationships with Care
  • Many-to-Many Solutions using DAX with Bridge Tables
PERFORMANCE OPTIMIZATION
  • Understanding and Using DAX Studio and VertiPaq Analyzer
  • Optimizing Calculated Columns vs. Measures
  • Query Plans and Storage Engine vs. Formula Engine
  • Cardinality and its impact on performance
  • Avoiding common DAX performance anti-patterns (e.g., misuse of SUMX inside FILTER)
DEBUGGING AND TESTING
  • Using DEFINE MEASURE and EVALUATE in DAX Studio
  • Evaluating DAX logic step-by-step with VAR and RETURN
  • Tools: Performance Analyzer in Power BI
SPECIAL FUNCTIONS AND USE CASES
  • GENERATE(), GENERATEALL(), ADDCOLUMNS()
  • SUMMARIZE(), SUMMARIZECOLUMNS(), GROUPBY()
  • ISINSCOPE(), SELECTEDVALUE() vs VALUES()
  • RANKX(), TOPN(), PERCENTILEX.INC
  • CONTAINS(), LOOKUPVALUE(), RELATEDTABLE()

SQL Querying

Introduction to MS-SQL
  • Creating a Database
  • Understanding Tables and Creating Tables
  • Inserting, Updating and Deleting Data
  • Querying Data
  • Filtering Data
  • Grouping Data
  • Ordering Data
  • Column Aliases
  • Table Aliases
DDL INSIGHTS
  • CREATE TABLE
  • Dropping Objects
  • CREATE INDEX
  • TEMPORARY OBJECTS
  • Object Naming and Dependencies
SELECT STATEMENTS
  • Simple SELECTs
  • Calculated and Derived Fields
  • SELECT TOP / BOTTOM Records
  • Derived Tables
  • Joins
  • Predicates
  • Subqueries
  • Aggregate Functions
  • GROUP BY and HAVING
  • UNION
  • ORDER BY

SQL Programming

DDL INSIGHTS
  • CREATE TABLE
  • Dropping Objects
  • TEMPORARY OBJECTS
  • Object Naming and Dependencies
INTRODUCTION TO SQL PROGRAMMING (T-SQL)
  • What is T-SQL?
  • Differences between SQL Querying and SQL Programming
  • Benefits of procedural SQL
  • Use cases in reporting, automation, and ETL
VARIABLES AND CONTROL STRUCTURES
  • Declaring and Using Variables (DECLARE, SET)
  • Conditional Logic: IF…ELSE
  • Loops: WHILE, BREAK, CONTINUE
  • Error Handling: TRY…CATCH, THROW
  • GOTO statement (rare but useful for certain scenarios)
USER-DEFINED FUNCTIONS (UDFs)
  • Scalar-Valued Functions
  • Table-Valued Functions (Inline and Multi-statement)
  • Best practices for performance
  • Use in SELECT, WHERE, JOIN clauses
STORED PROCEDURES
  • Creating and Executing Stored Procedures
  • Input Parameters, Output Parameters
  • Reusability and Modularity
  • Nested Stored Procedures
  • Use in ETL and Reporting pipelines
TEMPORARY AND TABLE VARIABLES
  • Temporary Tables (#Temp, ##GlobalTemp)
  • Table Variables (DECLARE @TableVar TABLE)
  • Differences, Use Cases, and Scope
  • CTEs (Common Table Expressions)
CURSORS
  • Introduction to Cursors
  • Declaring and Using Cursors
  • Static vs Dynamic Cursors
  • Use Cases and Performance Considerations
DYNAMIC SQL
  • Constructing SQL Statements on the Fly
  • Executing with EXEC() and sp_executesql
TRANSACTIONS AND ERROR HANDLING
  • Introduction to Transactions
  • BEGIN TRAN, COMMIT, ROLLBACK
  • Nesting Transactions
  • Isolation Levels
  • Locking and Blocking
TRIGGERS
  • AFTER INSERT, AFTER UPDATE, AFTER DELETE
  • INSTEAD OF Triggers
  • Auditing Changes
  • Performance considerations
TESTING AND DEBUGGING
  • PRINT Statements
  • RAISERROR for debugging
  • SQL Server Profiler / Extended Events (if applicable)
  • Debugging in SQL Server Management Studio (SSMS)

Power BI Administration

Objective: Get hands-on experience with advanced administration settings, permissions, data refresh schedules, and more.

Overview

  • Publishing Power BI Reports
  • Creating & Managing Workspaces and Its Access
  • Creating & Managing Dashboard and Its Access
  • Installing & Configuring Data gateway
  • Scheduling and configuring data refresh
  • Managing & Reusing Datasets
  • Scheduling Report Alerts
  • Setting up Row Level Permissions
  • Managing Users & Audit Log
  • Custom Branding Power BI For your Organization
  • Adding Custom Visuals for your Organization

Python Programming

  • Variables, Data Types & Operators
  • Control Flow – if, elif, else, Nested Conditions, Loops – for, while
  • Data Structures – Strings, Lists, Tuples, Dictionaries and Sets
  • Functions – User Defined Functions, lambda functions and Built-in functions
  • File Handling
  • Error Handling
  • Introduction to Modules & Libraries
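The core topics above can be tied together in a few lines. A minimal sketch (all names and data invented for illustration) combining user-defined functions, control flow, dictionaries, and try/except error handling:

```python
# Illustrative sketch: functions, loops, dictionaries, and error handling.
# The data and function names are invented for this example.

def summarize_sales(records):
    """Aggregate a list of (region, amount) tuples into a dict of totals."""
    totals = {}
    for region, amount in records:
        totals[region] = totals.get(region, 0) + amount
    return totals

def safe_ratio(numerator, denominator):
    """Return a ratio, handling division by zero with try/except."""
    try:
        return numerator / denominator
    except ZeroDivisionError:
        return None

sales = [("West", 120.0), ("East", 80.0), ("West", 50.0)]
print(summarize_sales(sales))  # {'West': 170.0, 'East': 80.0}
print(safe_ratio(1, 0))        # None
```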

Python Data Transformation for Azure

  • Introduction to Microsoft Fabric and Python Integration
  • Data Ingestion with Python
  • Data Cleaning & Standardization
  • Data Transformation Techniques
  • Date & Time Transformation
  • Data Reshaping & Pivoting
  • Writing Transformed Data to Fabric
  • Automation & Reusability
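As a taste of the date and time transformation step: normalizing mixed-format date strings into ISO 8601 before writing to the Lakehouse. The candidate formats below are assumptions for the example; course exercises run this kind of routine inside Fabric notebooks.

```python
# Sketch: standardize mixed date formats to ISO 'YYYY-MM-DD'.
# The set of known formats is an assumption for this demo.
from datetime import datetime

KNOWN_FORMATS = ("%d/%m/%Y", "%Y-%m-%d", "%d-%b-%Y")

def to_iso_date(raw):
    """Try each known format; return an ISO date string, or None if no match."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None

rows = ["25/12/2023", "2023-12-26", "27-Dec-2023", "not a date"]
print([to_iso_date(r) for r in rows])
```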

Data Engineering: Azure Blob Storage

Introduction to Azure Blob Storage

  • Overview of Azure Storage Services
  • What is Azure Blob Storage?
  • Use Cases of Blob Storage (Data Archiving, Backup, Big Data Analytics, etc.)
  • Types of Azure Storage Accounts

Understanding Azure Blob Storage Architecture

  • Containers, Blobs, and Storage Accounts

Types of Blobs:

  • Block Blobs
  • Append Blobs
  • Page Blobs

Storage Tiers:

  • Hot
  • Cool
  • Archive

Data Replication Strategies:

  • LRS (Locally Redundant Storage)
  • ZRS (Zone-Redundant Storage)
  • GRS (Geo-Redundant Storage)
  • RA-GRS (Read Access Geo-Redundant Storage)

Setting Up and Managing Azure Blob Storage

  • Creating an Azure Storage Account
  • Creating and Configuring Containers
  • Uploading, Downloading, and Managing Blobs
  • Managing Access Control:
  • Shared Access Signatures (SAS)
  • Azure Active Directory Authentication
  • Role-Based Access Control (RBAC)
  • Storage Account Keys

Working with Azure Blob Storage using Tools

  • Using Azure Portal for Blob Storage Management
  • Working with Azure Storage Explorer
  • Managing Blob Storage with Azure CLI
  • Automating Blob Operations with PowerShell

Azure Blob Storage Integration with BI & Automation

Integrating Blob Storage with Power BI:

  • Connecting Power BI to Azure Blob Storage
  • Using Dataflows to Process Blob Data

Using Blob Storage with Azure Data Factory:

  • Copying Data from Blob Storage to Azure SQL Database
  • Data Transformation using Mapping Data Flows
  • Automating Data Processing with ADF Pipelines
  • Azure Logic Apps for Automating Blob Storage Workflows
  • Azure Synapse Analytics Integration with Blob Storage for Big Data Processing

Advanced Azure Blob Storage Features

  • Soft Delete, Versioning, and Snapshot Management
  • Data Lifecycle Management and Cost Optimization
  • Azure Storage Encryption and Security
  • Cross-Origin Resource Sharing (CORS)
  • Event Grid and Blob Storage Event Handling
  • Using Azure Data Lake Gen2 with Blob Storage

Hands-on Projects & Real-World Scenarios

  • Project 1: Setting up a Data Lake using Azure Blob Storage for Power BI Analytics
  • Project 2: Automating Data Uploads from Power Automate to Azure Blob Storage
  • Project 3: Creating an ETL Pipeline using Azure Data Factory with Blob Storage
  • Project 4: Implementing Data Retention Policies using Azure Blob Storage Lifecycle Management

Data Engineering: Azure Data Lake

Introduction to Azure Data Lake

  • What is Azure Data Lake?
  • Difference Between Azure Data Lake and Azure Blob Storage
  • Data Lake Gen1 vs Gen2
  • Key Benefits and Use Cases

Azure Data Lake Architecture

  • Understanding the Hierarchical Namespace
  • Components of Data Lake:
  • Storage Accounts
  • Containers & Folders
  • Files & Metadata
  • Security Model & Data Access
  • Data Lake File Format Best Practices (Parquet, CSV, JSON, Avro)

Setting Up Azure Data Lake Storage (ADLS Gen2)

  • Creating an Azure Storage Account
  • Configuring Azure Data Lake Gen2
  • Managing Storage Containers & Files
  • Authentication & Authorization:
  • Azure Active Directory (Azure AD)
  • Role-Based Access Control (RBAC)
  • Access Control Lists (ACLs)

Working with Azure Data Lake Using Tools

  • Using Azure Portal for Data Lake Management
  • Azure Storage Explorer for File Operations
  • Azure CLI & PowerShell for Automation
  • Accessing Data with Python & Pandas
  • Azure Synapse Studio for Data Lake Exploration

Data Ingestion into Azure Data Lake

  • Ingesting Data with Azure Data Factory (ADF)
  • Copy Data from SQL Server, Blob Storage, REST API
  • Scheduling & Monitoring Pipelines
  • Power Automate Integration
  • Streaming Data into Data Lake with Azure Event Hub

Data Processing & Transformation in Data Lake

  • Using Azure Synapse Analytics with Data Lake
  • Querying Data with Serverless SQL Pools
  • Using Spark Pools for Big Data Processing
  • Transforming Data with Dataflows in Power BI
  • ETL Pipelines using Azure Data Factory Mapping Data Flows
  • Integrating with Azure Databricks for Advanced Processing

Securing & Monitoring Azure Data Lake

  • Encryption at Rest & In-Transit
  • Azure Defender for Storage
  • Data Masking & Sensitive Data Protection
  • Auditing & Logging with Azure Monitor

Real-World Projects & Use Cases

  • Project 1: Creating a Data Lake for Power BI Reporting
  • Project 2: Building an ETL Pipeline with Azure Data Factory & Data Lake
  • Project 3: Integrating Data Lake with Azure Synapse for Advanced Analytics
  • Project 4: Automating File Processing with Power Automate & Data Lake

Data Engineering: Azure Databricks

Introduction to Azure Databricks

  • What is Azure Databricks?
  • Key Features and Benefits
  • Azure Databricks vs Azure Synapse vs HDInsight
  • Use Cases: Big Data Processing, AI/ML, Data Science, ETL Pipelines

Azure Databricks Architecture & Components

  • Understanding Databricks Workspace
  • Clusters: Types, Auto-scaling, and Configuration
  • Notebooks: Using Python, Scala, SQL & R
  • Jobs: Automating Workflows
  • Libraries: Managing Dependencies (PySpark, MLflow, Delta Lake)

Setting Up Azure Databricks

  • Creating an Azure Databricks Workspace
  • Navigating the Databricks UI
  • Configuring and Launching Clusters
  • Managing Users & Permissions (RBAC & ACLs)

Working with Databricks Notebooks

  • Writing Code in Databricks (Python, SQL, Scala)
  • Using Magic Commands (%sql, %python, %scala)
  • Importing and Exporting Data
  • Collaborative Development in Notebooks
  • Visualizing Data with Built-in Charts

Data Engineering with Databricks & Spark

  • Introduction to Apache Spark in Databricks
  • DataFrames & Spark SQL for Data Processing
  • Connecting to Azure Blob Storage & Data Lake
  • Reading and Writing Parquet, JSON, CSV Files
  • Performance Optimization with Partitions & Caching

ETL Pipelines with Azure Databricks

  • Building ETL Pipelines Using PySpark
  • Delta Lake for ACID Transactions & Schema Evolution
  • Integrating Databricks with Azure Data Factory
  • Automating Pipelines Using Databricks Jobs & Triggers
  • Error Handling & Logging in ETL Pipelines

Real-World Projects & Hands-on Labs

  • Project 1: Building a Data Pipeline Using Azure Databricks & Delta Lake
  • Project 2: ETL Pipeline with Azure Data Factory & Databricks
  • Project 3: Real-time Data Processing with Azure Event Hub & Databricks
  • Project 4: Deploying an ML Model Using Databricks & MLflow

Data Engineering: Azure SQL Database

Introduction to Azure SQL Database

  • What is Azure SQL Database?
  • Azure SQL vs SQL Server vs Synapse Analytics vs Cosmos DB
  • Key Features & Benefits
  • Common Use Cases:
  • Cloud-based Relational Database Management
  • High Availability & Scalability
  • BI & Analytics Integration

Azure SQL Database Deployment Options

  • Single Database vs Elastic Pools vs Managed Instance
  • Understanding DTUs vs vCores
  • Choosing the Right Service Tier (Basic, General Purpose, Business Critical, Hyperscale)
  • Serverless vs Provisioned Compute Model

Setting Up an Azure SQL Database

  • Creating an Azure SQL Database using Azure Portal, PowerShell, and CLI
  • Configuring Firewall & Network Security
  • Connecting to SQL Database using SSMS, Azure Data Studio, and Power BI
  • Role-Based Access Control (RBAC) & Authentication

Working with Azure SQL Database

  • Writing SQL Queries using T-SQL
  • Creating and Managing Tables, Views, and Stored Procedures
  • Understanding Indexes, Constraints & Triggers
  • Transactions & Locking Mechanisms
  • Querying with JSON & XML Data

Data Ingestion & ETL Pipelines

  • Using Azure Data Factory to Load Data into SQL Database
  • Bulk Insert, BCP & COPY Commands for Large Data Loads
  • Using Power Automate to Automate Data Entry
  • Connecting Azure SQL Database with Power BI for Analytics

Performance Optimization & Query Tuning

  • Indexing Strategies (Clustered vs Non-Clustered, Columnstore Indexes)
  • Query Store for Performance Monitoring
  • Execution Plans & Query Tuning Techniques
  • Partitioning Tables for Large Datasets
  • Caching & Optimizing Reads with Materialized Views

Integrating Azure SQL Database with Other Azure Services

  • Power BI: DirectQuery vs Import Mode for SQL Database
  • Azure Data Factory: ETL Pipeline Development
  • Azure Logic Apps & Power Automate: Automating Workflows
  • Azure Synapse Analytics: Exporting Data for Analytics
  • Azure Functions: Triggering Events from SQL Database

Real-World Projects & Hands-on Labs

  • Project 1: Migrating an On-Premises SQL Database to Azure
  • Project 2: Automating Data Entry Using Power Automate & Azure SQL
  • Project 3: Building an ETL Pipeline Using Azure Data Factory
  • Project 4: Creating a Power BI Dashboard Using Azure SQL Database

Data Engineering: Azure Analysis Services / Power BI Dataset

Module 1: Introduction to Semantic Models in BI

  • What are Semantic Models?
  • Comparison: Azure Analysis Services vs Power BI Dataset
  • Tabular Models vs Multidimensional Models
  • When to choose AAS vs Power BI Dataset
  • Licensing and Capacity Requirements (AAS, PPU, Premium)

Module 2: Data Modeling Basics (Tabular)

  • Star Schema vs Snowflake Schema
  • Tables, Relationships, and Cardinality
  • Fact vs Dimension Tables
  • Row Context vs Filter Context (Intro)

Module 3: Creating Models in Power BI Desktop

  • Loading Data into Power BI Model
  • Defining Relationships
  • Creating Calculated Columns, Measures, and Tables
  • Formatting & Sorting Data for Semantic Models

Module 4: DAX Fundamentals

  • Overview of DAX
  • Calculated Columns vs Measures
  • Common DAX Functions: SUM, CALCULATE, FILTER, ALL, RELATED
  • Time Intelligence Basics
  • DAX Best Practices

Module 5: Building Data Models for Azure Analysis Services

  • Visual Studio (SSDT) Setup
  • Creating AAS Tabular Project
  • Import Mode vs DirectQuery
  • Tabular Model Explorer: Perspectives, Roles, KPIs
  • Creating Partitions and Hierarchies

Module 6: Deployment & Management

  • Deploying Power BI Dataset to Power BI Service
  • Deploying AAS Model to Azure
  • Managing Processing Schedules
  • Incremental Refresh (in Power BI Premium and AAS)
  • Configuring Data Source Credentials and Gateways

Module 7: Security & Governance

  • Row-Level Security (RLS)
  • Object-Level Security (OLS) (Power BI Premium)
  • Setting up Security Roles in AAS and Power BI
  • Sensitivity Labels and Auditing
  • Version Control with BIM Files (for AAS)

Module 8: Using Power BI Reports on Top of Datasets

  • Live Connection vs Import Mode
  • Shared Datasets in Power BI Service
  • Creating Thin Reports
  • Reusing Enterprise Semantic Models across multiple reports

Module 9: Tools & Utilities

  • Tabular Editor 2 & 3
  • DAX Studio
  • ALM Toolkit (Schema Comparison)
  • Best Practices Analyzer

System Requirements

  • Operating System – Windows 10 or later (Mac users will need a Windows VM)
  • RAM – Minimum 8 GB (16 GB recommended for large datasets)
  • Power BI Desktop – Free version
  • Azure Subscription – Free tier available for practice
  • SQL Server Express – Free version
  • Python – Latest version
  • Power Automate & Power Apps – Requires a Microsoft 365 account

Taught by Microsoft Certified Trainers

All our classes are live, hands-on, and led by real trainers.


Hands-On Databricks & Power BI Projects

Real-time Projects (Practical Application)

Work on real-world projects using Azure Databricks and Power BI to design scalable ETL pipelines, manage Delta Lake, and deliver real-time analytics. Gain practical, job-ready skills to master the end-to-end data lifecycle — from ingestion to insight.

Power BI

SQL & PySpark

Databricks Workflows

Delta Lake & Lakehouse

Python

Enterprise Sales Performance Dashboard

Develop an interactive Power BI dashboard to analyze regional and product-wise sales performance. Use DAX measures for YoY growth, sales variance, and customer segmentation, and Power Query to clean and transform raw sales data from multiple sources.

Customer Churn Prediction

Build a churn prediction model by integrating customer interaction data, support tickets, and transactional history. Use Power Query for data transformation and DAX measures to calculate churn probability based on behavioral patterns.

HR Analytics & Workforce Planning

Create an HR dashboard to track employee retention, hiring trends, and performance metrics. Connect multiple data sources to Power BI, apply DAX formulas to calculate attrition rates, and implement role-based security for restricted views.

Supply Chain & Inventory Optimization

Design a Power BI solution to monitor inventory levels, supplier performance, and stock movement across locations. Use Power Query to merge purchase order data with warehouse stock levels and apply DAX for predictive analytics on stock replenishment.

Financial Reporting & Budget vs. Actuals Analysis

Automate financial reporting using DAX calculations for variance analysis, custom KPIs, and trend forecasting. Apply Power Query transformations to consolidate financial statements from different departments into a unified report.

Sales & Marketing Data Integration

Combine sales and marketing datasets from multiple sources using PySpark and SQL. Cleanse data, handle missing values, and create aggregated tables for downstream analytics.

Customer Segmentation & LTV

Analyze customer behavior using PySpark to calculate lifetime value, segment customers, and generate insights for targeted marketing campaigns.

Product Performance Analysis

Use PySpark to calculate product-level KPIs, perform trend analysis, and create summary tables for reporting in Power BI.

Streaming Event Log Processing

Ingest and transform application log data in Databricks using PySpark streaming. Aggregate metrics for operational monitoring and anomaly detection.

Data Validation & Quality Checks

Build SQL and PySpark scripts to validate datasets, enforce constraints, and identify inconsistencies before loading into Delta Lake.

Automated Sales ETL Pipeline

Ingest daily sales CSV/JSON files from Azure Data Lake, transform using Databricks, and load into Delta Lake tables for reporting.
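The extract-transform-load shape of this project can be sketched with the standard library alone. In the course it runs as a Databricks job against Azure Data Lake; here the same steps are shown with `csv` and `io` (file layout and columns are assumptions) so the sketch runs anywhere.

```python
# Stdlib-only ETL sketch: extract CSV rows, drop bad records, load clean output.
# Column names and the inline sample file are invented for illustration.
import csv
import io

raw = io.StringIO(
    "date,product,amount\n"
    "2024-01-01,A,100\n"
    "2024-01-01,B,bad\n"
    "2024-01-02,A,40\n"
)

def extract(fh):
    return list(csv.DictReader(fh))

def transform(rows):
    clean = []
    for r in rows:
        try:
            r["amount"] = float(r["amount"])  # reject non-numeric amounts
        except ValueError:
            continue
        clean.append(r)
    return clean

def load(rows):
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["date", "product", "amount"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

result = load(transform(extract(raw)))
print(result)
```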

Batch Data Processing Pipeline

Design a pipeline that processes historical data in batches, aggregates KPIs, and refreshes analytics tables automatically.

Real-Time Streaming Pipeline

Ingest live IoT or clickstream data into Databricks, perform transformations, and store in Delta Lake for real-time dashboards.

Data Quality & Validation Pipeline

Automate data validation steps, generate alerts for anomalies, and ensure only clean, verified data is loaded for analytics.

Pipeline Monitoring & Logging

Create dashboards and alerts to monitor ETL workflow health, runtime performance, and data freshness.

Delta Table Management

Implement ACID-compliant Delta Lake tables, perform updates, deletes, and optimize table performance.
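The update/insert behavior this project exercises is Delta Lake's `MERGE INTO`; as a plain-Python illustration of those upsert semantics (keys and schema invented for the demo, not the Delta API itself):

```python
# Plain-Python sketch of upsert (MERGE) semantics: update matching rows,
# insert new ones. Real course exercises use Delta Lake's MERGE on Databricks.

def merge_upsert(target, updates, key="id"):
    """Insert new rows and update matching ones, keyed on `key`."""
    index = {row[key]: row for row in target}
    for row in updates:
        index[row[key]] = {**index.get(row[key], {}), **row}
    return sorted(index.values(), key=lambda r: r[key])

target = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
updates = [{"id": 2, "qty": 7}, {"id": 3, "qty": 1}]
print(merge_upsert(target, updates))
# [{'id': 1, 'qty': 5}, {'id': 2, 'qty': 7}, {'id': 3, 'qty': 1}]
```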

Time Travel Analytics

Use Delta Lake time travel to query historical versions of data and generate trend analyses for business reporting.

Partitioning & Optimization

Partition large tables to improve query performance, reduce storage costs, and speed up downstream analytics.

Lakehouse Integration Project

Integrate structured and unstructured datasets into a unified Lakehouse architecture for enterprise reporting.

Data Governance & Security

Apply access controls, enforce schema validation, and implement lineage tracking in Delta Lake tables.

Data Cleansing Pipeline for Lakehouse

Use Python (via Notebooks in Fabric) to clean, deduplicate, and normalize raw CSV/parquet files before storing them in Delta format in the Fabric Lakehouse. Integrate with a Data Factory pipeline for scheduled ingestion.

Automated Data Quality Checks on Delta Tables

Build a reusable Python script to perform column-level validation (null checks, data types, range thresholds) on Delta tables in the Lakehouse. Automatically log failures to a monitoring table and trigger alerts.
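A minimal sketch of the column-level checks described above (null checks, type checks, range thresholds), run here on plain dicts; in the course the rows come from Delta tables in the Lakehouse, and the column rules below are invented for the demo:

```python
# Sketch: column-level data quality validation. RULES is an assumed schema.

RULES = {
    "customer_id": {"type": int, "required": True},
    "order_total": {"type": float, "required": True, "min": 0.0},
}

def validate_row(row):
    """Return a list of failure messages for one row (empty list = clean)."""
    failures = []
    for col, rule in RULES.items():
        value = row.get(col)
        if value is None:
            if rule.get("required"):
                failures.append(f"{col}: null")
            continue
        if not isinstance(value, rule["type"]):
            failures.append(f"{col}: expected {rule['type'].__name__}")
        elif "min" in rule and value < rule["min"]:
            failures.append(f"{col}: below {rule['min']}")
    return failures

print(validate_row({"customer_id": None, "order_total": -5.0}))
# ['customer_id: null', 'order_total: below 0.0']
```

Failures collected this way can be appended to a monitoring table and used to drive alerts.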

Generate KPI Summary Tables for Power BI

Write a Python routine to compute summary KPIs (e.g., weekly sales, customer retention, churn rates) and store results in a Gold layer table. These output tables are optimized for reporting in Power BI dashboards.
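One of those KPIs, weekly sales, can be sketched as a small aggregation; the input schema (order date plus amount) is an assumption, and in the course the result would be written to a Gold-layer table rather than printed:

```python
# Sketch: aggregate order amounts into per-ISO-week totals for a KPI table.
from collections import defaultdict
from datetime import date

def weekly_sales(orders):
    """Sum order amounts per ISO week, keyed 'YYYY-Www'."""
    totals = defaultdict(float)
    for d, amount in orders:
        year, week, _ = d.isocalendar()
        totals[f"{year}-W{week:02d}"] += amount
    return dict(totals)

orders = [(date(2024, 1, 1), 100.0), (date(2024, 1, 3), 50.0), (date(2024, 1, 9), 70.0)]
print(weekly_sales(orders))
# {'2024-W01': 150.0, '2024-W02': 70.0}
```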

Notebook-Driven ETL for Semi-Structured Data

Automate ETL for JSON or nested data (e.g., API exports, logs) using Python in a Fabric Notebook. Transform and flatten the data structure, then write the result into Delta tables for KQL/Power BI consumption.
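The flattening step can be sketched as a short recursive function; the dot-separated key convention and the sample record are assumptions for illustration:

```python
# Sketch: flatten nested JSON-style dicts into flat, dot-separated columns
# suitable for a tabular (Delta) target. Record structure is invented.

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts into dot-separated keys."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

record = {"id": 7, "customer": {"name": "Acme", "address": {"city": "Oslo"}}}
print(flatten(record))
# {'id': 7, 'customer.name': 'Acme', 'customer.address.city': 'Oslo'}
```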

Automated Archival and Partition Management

Use Python to periodically move older data partitions to cold storage and maintain optimized table size for querying. This improves performance and cost-efficiency within the Lakehouse.

Develop an interactive Power BI dashboard to analyze regional and product-wise sales performance. Utilize DAX measures for YOY growth, sales variance, and customer segmentation, while Power Query is used to clean and transform raw sales data from multiple sources.

Build a churn prediction model by integrating customer interaction data, support tickets, and transactional history. Use Power Query for data transformation and DAX measures to calculate churn probability based on behavioral patterns.

Create an HR dashboard to track employee retention, hiring trends, and performance metrics. Connect multiple data sources to Power BI, apply DAX formulas to calculate attrition rates, and implement role-based security for restricted views.

Design a Power BI solution to monitor inventory levels, supplier performance, and stock movement across locations. Use Power Query to merge purchase order data with warehouse stock levels and apply DAX for predictive analytics on stock replenishment.

Automate financial reporting using DAX calculations for variance analysis, custom KPIs, and trend forecasting. Apply Power Query transformations to consolidate financial statements from different departments into a unified report.

Combine sales and marketing datasets from multiple sources using PySpark and SQL. Cleanse data, handle missing values, and create aggregated tables for downstream analytics.

Analyze customer behavior using PySpark to calculate lifetime value, segment customers, and generate insights for targeted marketing campaigns.

Use PySpark to calculate product-level KPIs, perform trend analysis, and create summary tables for reporting in Power BI.

Ingest and transform application log data in Databricks using PySpark streaming. Aggregate metrics for operational monitoring and anomaly detection.

Build SQL and PySpark scripts to validate datasets, enforce constraints, and identify inconsistencies before loading into Delta Lake.

Ingest daily sales CSV/JSON files from Azure Data Lake, transform using Databricks, and load into Delta Lake tables for reporting.

Design a pipeline that processes historical data in batches, aggregates KPIs, and refreshes analytics tables automatically.

Ingest live IoT or clickstream data into Databricks, perform transformations, and store in Delta Lake for real-time dashboards.

Automate data validation steps, generate alerts for anomalies, and ensure only clean, verified data is loaded for analytics.

Create dashboards and alerts to monitor ETL workflow health, runtime performance, and data freshness.

Implement ACID-compliant Delta Lake tables, perform updates, deletes, and optimize table performance.

Use Delta Lake time travel to query historical versions of data and generate trend analyses for business reporting.
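A plain-Python illustration of the time-travel idea: every write creates a new table version, and older versions stay queryable. In Databricks the real query is `SELECT * FROM t VERSION AS OF n` (or `spark.read.option("versionAsOf", n)`); the toy class below only models the semantics.

```python
class VersionedTable:
    """Toy model of Delta Lake versioning, for illustration only."""
    def __init__(self):
        self._versions = []  # immutable snapshots, one per write

    def write(self, rows):
        self._versions.append(list(rows))

    def read(self, version_as_of=None):
        if version_as_of is None:
            version_as_of = len(self._versions) - 1  # latest
        return self._versions[version_as_of]

t = VersionedTable()
t.write([{"sku": "A", "stock": 10}])   # version 0
t.write([{"sku": "A", "stock": 7}])    # version 1 (current)
```

Comparing `read(0)` against the current version is exactly the kind of historical diff that drives trend reports.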

Partition large tables to improve query performance, reduce storage costs, and speed up downstream analytics.
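Why partitioning speeds up queries, sketched in plain Python: rows grouped by a partition key mean a date-filtered query only touches matching partitions instead of scanning everything. In Databricks this is `PARTITIONED BY` on the Delta table; the field names here are illustrative.

```python
from collections import defaultdict
from datetime import date

def partition_by(rows, key):
    """Group rows by partition key, mimicking on-disk partition folders."""
    parts = defaultdict(list)
    for r in rows:
        parts[r[key]].append(r)
    return parts

rows = [
    {"event_date": date(2024, 5, 1), "amount": 10.0},
    {"event_date": date(2024, 5, 1), "amount": 5.0},
    {"event_date": date(2024, 5, 2), "amount": 7.0},
]
parts = partition_by(rows, "event_date")
# A query filtered to 2024-05-01 reads only that partition's rows
may1 = parts[date(2024, 5, 1)]
```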

Integrate structured and unstructured datasets into a unified Lakehouse architecture for enterprise reporting.

Apply access controls, enforce schema validation, and implement lineage tracking in Delta Lake tables.

Use Python (via Databricks notebooks) to clean, deduplicate, and normalize raw CSV/Parquet files before storing them in Delta format in the Lakehouse. Integrate with an Azure Data Factory pipeline for scheduled ingestion.
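The dedupe-and-normalize step can be sketched with the standard library alone; the column names and the keep-first dedupe policy are illustrative assumptions.

```python
import csv
import io

# Inline sample standing in for a raw CSV file in the data lake
raw_csv = """customer_id,email
1, Alice@Example.COM
2,bob@example.com
1,alice@example.com
"""

def normalize(row):
    """Trim whitespace and lowercase emails so duplicates match."""
    return {"customer_id": row["customer_id"].strip(),
            "email": row["email"].strip().lower()}

def dedupe(rows, key="customer_id"):
    """Keep the first occurrence of each key."""
    seen, out = set(), []
    for r in rows:
        if r[key] not in seen:
            seen.add(r[key])
            out.append(r)
    return out

rows = [normalize(r) for r in csv.DictReader(io.StringIO(raw_csv))]
clean = dedupe(rows)
```

In the notebook, `clean` would then be written out in Delta format instead of kept in memory.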

Build a reusable Python script to perform column-level validation (null checks, data types, range thresholds) on Delta tables in the Lakehouse. Automatically log failures to a monitoring table and trigger alerts.
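The core of such a script is a rules table driving per-column checks; the sketch below returns a failure log (writing it to a monitoring table and triggering alerts is omitted). Column names and thresholds are hypothetical.

```python
def validate(rows, rules):
    """rules: {column: (expected_type, min, max)}.
    Returns one log record per failed check."""
    failures = []
    for i, row in enumerate(rows):
        for col, (dtype, lo, hi) in rules.items():
            val = row.get(col)
            if val is None:
                failures.append({"row": i, "column": col, "check": "null"})
            elif not isinstance(val, dtype):
                failures.append({"row": i, "column": col, "check": "type"})
            elif not (lo <= val <= hi):
                failures.append({"row": i, "column": col, "check": "range"})
    return failures

sample = [{"qty": 5}, {"qty": None}, {"qty": "x"}, {"qty": 999}]
failures = validate(sample, {"qty": (int, 0, 100)})
```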

Write a Python routine to compute summary KPIs (e.g., weekly sales, customer retention, churn rates) and store results in a Gold layer table. These output tables are optimized for reporting in Power BI dashboards.
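One of those KPIs, weekly sales, can be sketched as a grouping by ISO week; the record shape is an illustrative assumption, and in the notebook the result would be written to a Gold-layer Delta table rather than returned.

```python
from collections import defaultdict
from datetime import date

orders = [
    {"order_date": date(2024, 1, 1), "amount": 100.0},
    {"order_date": date(2024, 1, 3), "amount": 50.0},
    {"order_date": date(2024, 1, 8), "amount": 70.0},
]

def weekly_sales(rows):
    """Sum amounts per (ISO year, ISO week) — a typical Gold-layer KPI."""
    totals = defaultdict(float)
    for r in rows:
        iso = r["order_date"].isocalendar()
        totals[(iso[0], iso[1])] += r["amount"]
    return dict(totals)
```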

Automate ETL for JSON or nested data (e.g., API exports, logs) using Python in a Databricks notebook. Transform and flatten the data structure, then write the result into Delta tables for downstream Power BI consumption.
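Flattening nested JSON into tabular columns is the heart of this project; a minimal recursive sketch (the sample record and the underscore separator are assumptions):

```python
import json

def flatten(obj, parent="", sep="_"):
    """Recursively flatten nested dicts into a single-level record."""
    items = {}
    for k, v in obj.items():
        key = f"{parent}{sep}{k}" if parent else k
        if isinstance(v, dict):
            items.update(flatten(v, key, sep))
        else:
            items[key] = v
    return items

record = json.loads('{"id": 7, "user": {"name": "Ada", "geo": {"country": "UK"}}}')
flat = flatten(record)
```

Each flattened key becomes a column when the records are written to a Delta table.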

Use Python to periodically move older data partitions to cold storage and maintain optimized table size for querying. This improves performance and cost-efficiency within the Lakehouse.
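The selection step, choosing which date partitions have aged past a retention window, can be sketched as below; the retention period and dates are illustrative, and the actual move to cold storage is omitted.

```python
from datetime import date, timedelta

def partitions_to_archive(partition_dates, retention_days, today):
    """Return partition dates older than the retention cutoff."""
    cutoff = today - timedelta(days=retention_days)
    return sorted(p for p in partition_dates if p < cutoff)

partitions = [date(2024, 1, 15), date(2024, 3, 31),
              date(2024, 4, 1), date(2024, 5, 1)]
old = partitions_to_archive(partitions, retention_days=90,
                            today=date(2024, 6, 30))
```

A scheduled job would then relocate each returned partition and vacuum the remaining table to keep queries fast.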

Gain industry-recognized credentials.

5 Specialized Certificates

Shareable certificate

Add to your LinkedIn profile


Training Schedule

Limited Seats. Registration Closing Soon

Have Questions?

Tel:

+1 650 491 3131

Email:

support@excelgoodies.com

Projects & Assignments

What's included?

  • 60 hours of live instructor-led training
  • 6 Power BI & Advanced DAX Projects
  • 8 Databricks SQL & PySpark Projects
  • 5 Data Pipeline & Orchestration Projects
  • 5 Delta Lake & Lakehouse Projects
  • 5 Python Projects
  • 2 Master Projects integrating Databricks, Delta Lake & Power BI
  • Data Engineering & BI: Databricks Expert Certificate
  • 30-day post-training support

Upcoming Cohort

Starts On

Time

Course Fee

$1699

FAQs

Databricks is one of the fastest-growing platforms for cloud data engineering and analytics. Learning Databricks with Power BI equips you to build scalable ETL pipelines, manage Delta Lake, and deliver real-time insights. Companies adopting modern Lakehouse architectures are actively seeking professionals with these skills, making this combination highly valuable for career growth.

A basic understanding of SQL or data concepts helps, but this course is designed for data engineers, BI developers, and analytics professionals at various experience levels who want to upskill in cloud data engineering.

Yes, Databricks requires a subscription, though free trials are available for learning and experimentation. Subscriptions provide enterprise features ideal for scalable data pipelines and Lakehouse analytics.

  • Operating System – Windows 10 or later (Mac users need a Windows VM)
  • RAM – Minimum 8GB (16GB recommended for large datasets)
  • Power BI Desktop – Free version
  • Azure Subscription – Free-tier available for practice
  • SQL Server Express – Free version
  • Python – Latest version

Yes! Corporate invoices are available. You can pay via company card or forward an invoice to your finance team.

These options are available on the

Yes, discounts are available for teams of 5 or more. Customized corporate training is also offered.

Contact us for group pricing.

Roles include:

  • Data Engineer
  • Cloud BI Engineer
  • Analytics Engineer
  • Data Solutions Architect
  • Business Intelligence Analyst
  • Cloud Data Analyst

These roles are in high demand in technology, finance, healthcare, and e-commerce industries.

  • Azure Data Engineering: Focuses on cloud ETL pipelines, Azure Data Lake, SQL, and Power BI. Ideal for Microsoft cloud users.
  • Fabric Data Engineering: Focuses on integrated AI, Fabric Lakehouse, KQL analytics, and automation workflows.
  • Databricks Data Engineering: Designed for high-volume, real-time data in Lakehouse architectures. Focuses on Databricks, Delta Lake, PySpark, and Power BI.

Choose based on your organization’s cloud stack and your role goals. Databricks is ideal for engineers working with large-scale, real-time data and modern Lakehouse architectures.

Yes! Hands-on projects cover end-to-end pipelines, cloud integration, Delta Lake management, and real-time analytics—giving you job-ready experience.

Yes. Tool-specific certificates are awarded for Databricks, Delta Lake, SQL, and Power BI, plus a master certificate: “Data Engineering & BI: Databricks Expert.”

Live, instructor-led classes include exercises, real-world case studies, and Q&A sessions to ensure practical learning.

No. This is a live interactive course, but you’ll receive assignments, templates, and documentation for practice.

Class notes and exercises are provided to catch up, and you can attend the same session in a future batch (subject to availability).

Yes, sessions can be retaken in future batches; full re-enrollment may require an additional fee.

Unlike pre-recorded courses, you’ll work on real-world Databricks datasets with expert guidance and personalized feedback, making it more practical and career-focused.

More questions ?

Build Real-World Solutions During the Course

Key Skills You'll Master

End-to-End Data Pipelines

Cloud Data Integration

Real-Time Processing

Data Modeling & Transformation

Lakehouse & Delta Table Management

Data Visualization

Cloud Workflow Automation

ETL Optimization

AI-Driven Analytics

Data Governance & Security

Cloud Analytics Collaboration

Scalable Solution Design


About The Trainer

Mr. Sami

MCT, MCP, MEE, MOS

30,000+

Students Trained

18+

Year of Experience

4.9

Reviews

Mr. Sami is a highly accomplished, certified Microsoft Trainer with deep expertise in Finance, HR, and Information Technology. Over an 18-year career, he has trained and empowered more than 30,000 professionals, and the number continues to grow.

He has undertaken assignments with the IRS, The World Bank, Tata Chemicals, Buckman Laboratories, Standard Chartered, ING Barings, and many more. His habit of going the extra mile has earned him remarkable popularity among Excelgoodies' prominent clients.

Government Institutions We've Worked With.

Trusted by Government Entities



The Excelgoodies Difference

We Spot Trends Before They Become Industry Standards

The analytics industry moves fast. We move faster. We constantly update our courses to match the latest industry needs, so you’re always learning what’s in demand—before everyone else.


Learn What Matters, Not Just What’s Trending

BI & Analytics isn’t about knowing one tool—it’s about knowing how to use the right tools together. Our courses don’t just teach software; they teach end-to-end reporting, automation, and cloud-driven analytics workflows—exactly what businesses need.

Tech-Enabled Learning,
Zero Hassles

Forget scattered emails and outdated PDFs. Our AI-powered student portal keeps everything in one place—live classes, assignments, progress tracking, instructor feedback, invoices, and instant support—so you stay focused on learning.


Real Projects, Real Experience, Real Confidence

No more theory-only learning—you’ll walk out of our courses with proven expertise in the tools and techniques hiring managers want.

Corporate Training


Avail an additional 10% corporate discount on the total course fee for 5+ participants.

Get your team BI-ready today.

Mr. Perrie Smith

Business Associate


Esteemed Clientele


Thousands Trained. Here’s What They Say.

Total Reviews: 2,080
Average Rating: 4.5


Google Reviews

Industry Insights

APPLICATION DEADLINE

Registration Closes
on .
