
Learner reviews for Cleaning and Exploring Big Data using PySpark by Coursera Project Network

Ratings: 51
Reviews: 13

About the Course

By the end of this project, you will know how to clean, explore, and visualize big data using PySpark. You will work with an open-source dataset containing information on all the water wells in Tanzania. I will teach you various ways to clean and explore your big data in PySpark, such as changing a column's data type, renaming low-frequency categories in character columns, and imputing missing values in numerical columns. I will also teach you ways to visualize your data by intelligently converting a Spark dataframe to a Pandas dataframe. Cleaning and exploring big data in PySpark is quite different from doing so in plain Python because of the distributed nature of Spark dataframes. This guided project dives deep into various ways to clean and explore data loaded in PySpark. Data preprocessing is a crucial step in big data analysis, and you should learn it before building any big data machine learning model.

Note: You will need a Gmail account to sign in to Google Colab.

Note: This course works best for learners based in the North America region. We're currently working on providing the same experience in other regions.

Top Reviews

Filter by:

1–13 of 13 reviews for Cleaning and Exploring Big Data using PySpark

By Farzad K

Feb 10, 2021

I was expecting a project on big data and the application of Spark to it, but it covered only PySpark syntax. Not a single word on the Spark technology itself, only coding.

By Venkat C S G

Oct 13, 2020

The project should include more explanation.

By Alexandra A

Aug 22, 2021

Practical walk-through of basic PySpark operations. A great quick start to using PySpark for data analysis.

By Georgete B d P

Feb 9, 2021

A quick and comprehensive course on the fundamentals of using PySpark.

By Aruparna M

Jan 31, 2021

Very nice content

By Pris A

Apr 5, 2021


By Jorge G

Feb 25, 2021

I do not recommend taking this type of course. I took one and passed it, but when I tried to review the material a few days later, to my surprise it asked me to pay again to access it. Of course, Coursera gives me a small discount for having already paid for it previously. It is easy to download the videos but difficult to get hold of the material, though with some ingenuity it is possible. I then recommend uploading them to YouTube and keeping them private for later consultation (this avoids legal problems and lets you share with friends), and then requesting a refund.

By Saket R

Dec 15, 2020

More theory behind the functions used, and behind the concepts of Spark and how it works in a distributed way, would have been more beneficial. Overall it was a worthwhile course.

By nawaz

Apr 23, 2022

The use case could be explained a little better before actually going into the code.

By Juan C A

Mar 24, 2022

A fast and simple explanation of how to start working with Spark on Colab.

By shweta s

Oct 18, 2021


By Jeremy S

Jan 23, 2022

This course uses the Coursera in-browser notebook processor, Rhyme, rather than Google Colab, Python, or Anaconda. If you want to use PySpark on your home or work computer, this tutorial will not show you how to get there; you will need to seek out those instructions separately and install Python/Java/Spark yourself. The instructor demonstrates quite a few functions and methods that will help you get started with PySpark, though he does not go into much depth on any of them. You will understand the statements and operations in this course much better if you have a solid understanding of Python and at least a basic understanding of SQL commands. In my opinion, this course was worth the $10 I paid.

By Dharmendra T

Oct 6, 2020

Overall, it was a good course, but I think that if some explanation of how things work had been provided, it would have been a plus for our learning of data exploration in Spark.