● 4 hours of on-demand video ● Full lifetime access ● Access on mobile and laptop ● By Edwin Bomela
Welcome to the Big Data Analytics with PySpark + Power BI + MongoDB course. In this course we will be creating a big data analytics pipeline using big data technologies such as PySpark, MLlib, Power BI and MongoDB.
We will be working with earthquake data, which we will transform into summary tables. We will then use these tables to train predictive models and predict future earthquakes. Finally, we will analyze the data by building reports and dashboards in Power BI Desktop.
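The course builds these summary tables with PySpark DataFrames; as a minimal, dependency-free sketch of the same idea, here is a plain-Python roll-up of raw earthquake events into a per-year summary. The field names (`year`, `magnitude`) are illustrative assumptions, not the course's actual schema.

```python
from collections import defaultdict

# Hypothetical raw earthquake records; field names are illustrative.
quakes = [
    {"year": 2016, "magnitude": 5.6},
    {"year": 2016, "magnitude": 6.1},
    {"year": 2017, "magnitude": 4.8},
]

def summarize(records):
    """Roll raw events up into a per-year summary table."""
    summary = defaultdict(lambda: {"count": 0, "max_magnitude": 0.0})
    for r in records:
        row = summary[r["year"]]
        row["count"] += 1
        row["max_magnitude"] = max(row["max_magnitude"], r["magnitude"])
    return dict(summary)

print(summarize(quakes))
```

In PySpark the equivalent transformation would be expressed as a grouped aggregation over a DataFrame rather than an explicit loop, which is what lets it scale to big data.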
Power BI Desktop is a powerful data visualization tool that lets you build advanced queries, models and reports. With Power BI Desktop, you can connect to multiple data sources and combine them into a data model. This data model lets you build visuals and dashboards that you can share as reports with other people in your organization.
MongoDB is a document-oriented NoSQL database used for high-volume data storage. It stores data in a JSON-like format called documents, rather than in row-and-column tables. The document model maps directly to the objects in your application code, making the data easy to work with.
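To make the document model concrete, here is a sketch of what a single MongoDB document might look like. In Python (e.g. via PyMongo) a document maps directly to a dict, with nested objects and arrays preserved rather than flattened into rows and columns. The field names below are illustrative assumptions, not the course's actual schema.

```python
import json

# A hypothetical earthquake document: nested objects and arrays
# live inside one record, mirroring the application's object model.
quake_doc = {
    "place": "Fiji",
    "magnitude": 6.1,
    "location": {"type": "Point", "coordinates": [178.1, -17.8]},
    "felt_reports": [{"city": "Suva", "intensity": 3}],
}

# The same structure serializes straight to JSON.
print(json.dumps(quake_doc, indent=2))

# Nested access reads like ordinary object navigation.
lon, lat = quake_doc["location"]["coordinates"]
```

With a real MongoDB instance, the same dict could be stored as-is with a PyMongo `insert_one` call; no schema migration or table join is needed to accommodate the nesting.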
You will learn how to create data processing pipelines using PySpark
You will learn machine learning with geospatial data using the Spark MLlib library
You will learn data analysis using PySpark, MongoDB and Power BI
You will learn how to manipulate, clean and transform data using PySpark dataframes
You will learn how to create geo maps using ArcGIS Maps for Power BI
You will also learn how to create dashboards in Power BI
WHAT YOU WILL LEARN
● Power BI Data Visualization
● PySpark Programming
● Data Analysis
● Data Transformation and Manipulation
● Big Data Machine Learning
● Geo Mapping with ArcGIS Maps for Power BI
● Geospatial Machine Learning
● Creating Dashboards
WHO THIS COURSE IS FOR
● Python Developers at any level
● Data Engineers at any level
● Developers at any level
● Machine Learning engineers at any level
● Data Scientists at any level
● GIS Developers at any level
● The curious mind
2. Setup and Installations
3. Data Processing with PySpark and MongoDB
4. Machine Learning with PySpark and MLlib
5. Creating the Data Pipeline Scripts
6. Power BI Data Visualization
7. Source Code and Notebook
Edwin Bomela is a Big Data Engineer and Consultant, involved in multiple projects ranging from business intelligence and software engineering to IoT and big data analytics. His expertise is in building data processing pipelines in the Hadoop and cloud ecosystems, as well as software development. He is currently consulting at one of the top business intelligence consultancies, helping clients build data warehouses, data lakes, cloud data processing pipelines and machine learning pipelines. The technologies he uses to meet client requirements include Hadoop, Amazon S3, Python, Django, Apache Spark, MSBI, Microsoft Azure, SQL Server Data Tools, Talend and Elastic MapReduce.