In this tutorial, we will look at how to drop duplicate rows from a Pyspark dataframe with the help of some examples.
How to drop duplicate rows in Pyspark?
You can use the Pyspark dropDuplicates() function to drop duplicate rows from a Pyspark dataframe. The following is the syntax –
# drop duplicates from dataframe
df.dropDuplicates()
Apply the function on the dataframe you want to remove the duplicates from. It returns a Pyspark dataframe with the duplicate rows removed.
Examples
Let’s look at some examples of removing duplicate rows from a Pyspark dataframe. First, we’ll create a Pyspark dataframe that we will be using throughout this tutorial.
# import the pyspark module
import pyspark

# import the sparksession class from pyspark.sql
from pyspark.sql import SparkSession

# create an app from SparkSession class
spark = SparkSession.builder.appName('datascience_parichay').getOrCreate()

# data of competition participants
data = [["Tim", "Germany", "A"],
        ["Max", "Germany", "A"],
        ["Viraj", "India", "A"],
        ["Emma", "USA", "B"],
        ["Emma", "USA", "B"],
        ["Jack", "USA", "B"],
        ["Max", "Germany", "A"],
        ["Max", "Germany", "A"]]

# create a Pyspark dataframe using the above data
df = spark.createDataFrame(data, ["Name", "Country", "Team"])

# display
df.show()
Output:
+-----+-------+----+
| Name|Country|Team|
+-----+-------+----+
|  Tim|Germany|   A|
|  Max|Germany|   A|
|Viraj|  India|   A|
| Emma|    USA|   B|
| Emma|    USA|   B|
| Jack|    USA|   B|
|  Max|Germany|   A|
|  Max|Germany|   A|
+-----+-------+----+
We now have a dataframe containing the name, country, and team information of some students participating in a case-study competition. Note that there are duplicate rows present in the data.
Drop duplicate rows from Pyspark dataframe
Let’s remove the duplicate rows from the above dataframe. For this, apply the Pyspark dropDuplicates() function on the dataframe created above.
# drop duplicate rows
df.dropDuplicates().show()
Output:
+-----+-------+----+
| Name|Country|Team|
+-----+-------+----+
|  Max|Germany|   A|
|  Tim|Germany|   A|
| Emma|    USA|   B|
|Viraj|  India|   A|
| Jack|    USA|   B|
+-----+-------+----+
You can see that the resulting dataframe does not have any duplicate rows. Note that the original dataframe is not modified – dropDuplicates() returns a new dataframe. To modify the original dataframe, assign the result of the dropDuplicates() function back to the original dataframe variable.
# drop duplicate rows
df = df.dropDuplicates()

# display the dataframe
df.show()
Output:
+-----+-------+----+
| Name|Country|Team|
+-----+-------+----+
|  Max|Germany|   A|
|  Tim|Germany|   A|
| Emma|    USA|   B|
|Viraj|  India|   A|
| Jack|    USA|   B|
+-----+-------+----+
The dataframe df now doesn’t have any duplicate rows.
Use dropDuplicates() to view distinct values in a column
You can also use the Pyspark dropDuplicates() function to view the unique values in a Pyspark column. For example, let’s use this function to get the distinct values in the “Country” column of the dataframe above.
# distinct values in Country column
df.select("Country").dropDuplicates().show()
Output:
+-------+
|Country|
+-------+
|Germany|
|  India|
|    USA|
+-------+
We get the unique values in the “Country” column – “Germany”, “India”, and “USA”. This use-case is similar to using the Pyspark distinct() function.
You might also be interested in –
- Order PySpark DataFrame using orderBy()
- Display DataFrame in Pyspark with show()
- Filter PySpark DataFrame with where()