● 4.5 hours of on-demand video ● 1 article ● Full lifetime access ● Access on mobile and laptop ● By Edwin Bomela
Welcome to the Big Data Analytics with PySpark + Tableau Desktop + MongoDB course. In this course we will build a big data analytics solution using big data technologies: PySpark for ETL, MLlib for machine learning, and Tableau for data visualization and dashboards.
We will work with earthquake data, which we will transform into summary tables. We will then use these tables to train predictive models and forecast future earthquakes. Finally, we will analyze the data by building reports and dashboards in Tableau Desktop.
Tableau Desktop is a powerful data visualization tool used for big data analysis and visualization. It supports data blending, real-time analysis, and data collaboration. No programming is needed for Tableau Desktop, which makes it an easy yet powerful tool for creating dashboards, apps, and reports.
MongoDB is a document-oriented NoSQL database used for high-volume data storage. It stores data in a JSON-like format called documents rather than in row/column tables. The document model maps to the objects in your application code, making the data easy to work with.
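To make the document model concrete, here is what a single earthquake reading might look like as a MongoDB-style document. The field names and values are illustrative assumptions, not the course's dataset; the point is that one nested, JSON-like object replaces several joined rows.

```python
import json

# Hypothetical earthquake event as a document (field names are illustrative).
quake_doc = {
    "event_id": "us1965abcd",
    "date": "1965-01-02",
    "magnitude": 6.0,
    "location": {"type": "Point", "coordinates": [145.616, 19.246]},
    "depth_km": 131.6,
}

# Documents serialize to and from JSON directly, so they map onto
# application objects without being split across row/column tables.
payload = json.dumps(quake_doc)
restored = json.loads(payload)
```

In MongoDB itself you would insert `quake_doc` into a collection (e.g. with `collection.insert_one(quake_doc)` via the PyMongo driver) instead of serializing it by hand.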
● You will learn how to create data processing pipelines using PySpark
● You will learn machine learning with geospatial data using the Spark MLlib library
● You will learn data analysis using PySpark, MongoDB and Tableau
● You will learn how to manipulate, clean and transform data using PySpark dataframes
● You will learn how to create Geo Maps in Tableau Desktop
● You will also learn how to create dashboards in Tableau Desktop
WHAT YOU WILL LEARN
● Tableau Data Visualization
● PySpark Programming
● Data Analysis
● Data Transformation and Manipulation
● Big Data Machine Learning
● Geo Mapping with Tableau
● Geospatial Machine Learning
● Creating Dashboards
WHO THIS COURSE IS FOR
● Python Developers at any level
● Data Engineers at any level
● Developers at any level
● Machine Learning engineers at any level
● Data Scientists at any level
● GIS Developers at any level
● The curious mind
2. Setup and Installations
3. Data Processing with PySpark and MongoDB
4. Machine Learning with PySpark and MLlib
5. Creating the Data Pipeline Scripts
6. Tableau Data Visualization
7. Source Code and Notebook
Edwin Bomela is a Big Data Engineer and Consultant, involved in multiple projects ranging from business intelligence and software engineering to IoT and big data analytics. His expertise is in building data processing pipelines in the Hadoop and cloud ecosystems and in software development. He is currently consulting at one of the top business intelligence consultancies, helping clients build data warehouses, data lakes, cloud data processing pipelines and machine learning pipelines. The technologies he uses to meet client requirements include Hadoop, Amazon S3, Python, Django, Apache Spark, MSBI, Microsoft Azure, SQL Server Data Tools, Talend and Elastic MapReduce.