Building Big Data Pipelines with SparkR & Tableau & MongoDB

Regular price Rs. 1,250.00 Sale price Rs. 599.00 Save 52%
  • 4 hours of on-demand video
  • Full lifetime access
  • Access on mobile and laptop
  • By Edwin Bomela


Welcome to the Building Big Data Pipelines with SparkR & Tableau & MongoDB course. In this course we will create a big data analytics solution using big data technologies for R. In our use case we will work with raw earthquake data, applying big data processing techniques to extract, transform, and load the data into usable datasets. Once we have processed and cleaned the data, we will use it as a data source for building predictive analytics and visualizations.
Tableau Desktop is a powerful data visualization tool used for big data analysis and visualization. It allows for data blending, real-time analysis, and collaboration. No programming is needed for Tableau Desktop, which makes it an easy yet powerful tool for creating dashboards, apps, and reports. SparkR is an R package that provides a lightweight frontend for using Apache Spark from R. SparkR provides a distributed data frame implementation that supports operations like selection, filtering, and aggregation (similar to R data frames and dplyr, but on large datasets). SparkR also supports distributed machine learning using MLlib.
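As a rough sketch of the data frame operations described above (assuming a local Spark installation with SPARK_HOME set, and a hypothetical earthquakes.csv file — not the course's actual dataset), SparkR code looks like this:

```r
library(SparkR)

# Start a local Spark session (assumes Spark is installed and SPARK_HOME is set)
sparkR.session(master = "local[*]", appName = "earthquakes")

# Read a hypothetical CSV of raw earthquake data into a distributed data frame
quakes <- read.df("earthquakes.csv", source = "csv",
                  header = "true", inferSchema = "true")

# Selection and filtering, similar to dplyr but executed on the cluster
strong <- filter(quakes, quakes$magnitude >= 5.0)

# Aggregation: count strong quakes per region
byRegion <- summarize(groupBy(strong, strong$region),
                      count = n(strong$magnitude))
head(byRegion)

sparkR.session.stop()
```

The column names (`magnitude`, `region`) are illustrative; the same selection/filter/aggregate pattern applies to whatever schema the raw data has.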
MongoDB is a document-oriented NoSQL database used for high-volume data storage. It stores data in a JSON-like format called documents, rather than row/column tables. The document model maps to the objects in your application code, making the data easy to work with.
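For example, a single earthquake reading might be stored as one self-contained document (the field names and values here are illustrative, not taken from the course material):

```json
{
  "_id": "quake-2021-0001",
  "time": "2021-03-04T17:41:26Z",
  "magnitude": 5.8,
  "depth_km": 10.0,
  "location": {
    "latitude": -37.56,
    "longitude": 179.44,
    "region": "East of the North Island, New Zealand"
  }
}
```

Nested fields like `location` stay inside the document, so the record can be read and written as one unit without joins.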

WHAT YOU WILL LEARN
● How to create big data processing pipelines using R and MongoDB.
● Machine learning with geospatial data using SparkR and the MLlib library.
● Data analysis using SparkR, R and Tableau.
● How to manipulate, clean and transform data using Spark dataframes.
● How to create Geo Maps in Tableau Desktop.
● How to create dashboards in Tableau Desktop.

WHO THIS COURSE IS FOR
● R Developers at any level
● Data Engineers at any level
● Developers at any level
● Machine Learning engineers at any level
● Data Scientists at any level
● GIS Developers at any level
● The curious mind

1. Introduction
2. Setup and Installations
3. Building the Big Data ETL Pipeline with SparkR
4. Machine Learning with SparkR and MLlib
5. Data Visualization with Tableau
6. Project Source Code

Edwin Bomela is a Big Data Engineer and Consultant, involved in multiple projects ranging from business intelligence and software engineering to IoT and big data analytics. His expertise is in building data processing pipelines in the Hadoop and cloud ecosystems, as well as software development. He is currently consulting at one of the top business intelligence consultancies, helping clients build data warehouses, data lakes, cloud data processing pipelines, and machine learning pipelines. The technologies he uses to meet client requirements include Hadoop, Amazon S3, Python, Django, Apache Spark, MSBI, Microsoft Azure, SQL Server Data Tools, Talend, and Elastic MapReduce.

More from Edwin Bomela
Introduction to Maps in Folium and Python
Sale price Rs. 599.00 Regular price Rs. 1,250.00 Save 52%
Introduction to Maps in R Shiny and Leaflet
Sale price Rs. 599.00 Regular price Rs. 1,250.00 Save 52%