Perform exploratory data analysis
Develop batch processing solutions by using Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, and Azure Data Factory
Use PolyBase to load data to a dedicated SQL pool
Implement Azure Synapse Link and query the replicated data
Create tests for data pipelines
Integrate Jupyter or Python notebooks into a data pipeline
Revert data to a previous state
Configure exception handling
Configure batch retention
Read from and write to a Delta Lake
Create a stream processing solution by using Azure Stream Analytics and Azure Event Hubs
Process data by using Spark Structured Streaming
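The outline above includes "Create tests for data pipelines". A minimal sketch of that idea is a plain unit test around a single transformation step; the `clean_records` function below is a hypothetical example, not part of any Azure SDK, but the same pattern applies to a function factored out of a Databricks or Synapse notebook:

```python
# Sketch: unit-testing one transformation step of a data pipeline.
# clean_records is a hypothetical transform used only for illustration.

def clean_records(records):
    """Drop rows with a missing id and normalize names to lowercase."""
    return [
        {"id": r["id"], "name": r["name"].strip().lower()}
        for r in records
        if r.get("id") is not None
    ]

def test_clean_records():
    raw = [
        {"id": 1, "name": "  Alice "},
        {"id": None, "name": "Bob"},   # missing id: should be dropped
        {"id": 2, "name": "CAROL"},
    ]
    assert clean_records(raw) == [
        {"id": 1, "name": "alice"},
        {"id": 2, "name": "carol"},
    ]

test_clean_records()
```

Keeping transformations as pure functions like this lets the same test run locally and in a CI stage of the pipeline, without needing a live cluster.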
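For "Configure exception handling", one common shape is retrying a pipeline step on transient failures with exponential backoff. The sketch below uses only standard-library Python; the step and error types are illustrative assumptions, not an Azure Data Factory API (ADF expresses the same idea declaratively via an activity's retry count and retry interval):

```python
import time

def run_with_retry(step, max_attempts=3, base_delay=0.01):
    """Run a zero-argument pipeline step, retrying transient errors
    with exponential backoff; re-raise once attempts are exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except (ConnectionError, TimeoutError):  # treated as transient
            if attempt == max_attempts:
                raise  # retries exhausted: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical step that fails twice, then succeeds.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(run_with_retry(flaky_step))  # succeeds on the third attempt
```

Non-transient exceptions fall through immediately, which is usually what you want: only errors you have classified as retryable should consume retry attempts.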