
Drop Duplicate Rows from Pyspark Dataframe

In this tutorial, we will look at how to drop duplicate rows from a Pyspark dataframe with the help of some examples.

How to drop duplicate rows in Pyspark?


You can use the Pyspark dropDuplicates() function to drop duplicate rows from a Pyspark dataframe. The following is the syntax –

# drop duplicates from dataframe
df.dropDuplicates()

Apply the function to the dataframe you want to remove the duplicates from. It returns a new Pyspark dataframe with the duplicate rows removed; the original dataframe is left unchanged.

Examples

Let’s look at some examples of removing duplicate rows from a Pyspark dataframe. First, we’ll create a Pyspark dataframe that we will be using throughout this tutorial.

# import the pyspark module
import pyspark

# import the SparkSession class from pyspark.sql
from pyspark.sql import SparkSession

# create a spark session
spark = SparkSession.builder.appName('datascience_parichay').getOrCreate()

# data of competition participants
data = [["Tim", "Germany", "A"],
        ["Max", "Germany", "A"],
        ["Viraj", "India", "A"],
        ["Emma", "USA", "B"],
        ["Emma", "USA", "B"],
        ["Jack", "USA", "B"],
        ["Max", "Germany", "A"],
        ["Max", "Germany", "A"]]

# create a Pyspark dataframe using the above data
df = spark.createDataFrame(data, ["Name", "Country", "Team"])

# display the dataframe
df.show()

Output:

+-----+-------+----+
| Name|Country|Team|
+-----+-------+----+
|  Tim|Germany|   A|
|  Max|Germany|   A|
|Viraj|  India|   A|
| Emma|    USA|   B|
| Emma|    USA|   B|
| Jack|    USA|   B|
|  Max|Germany|   A|
|  Max|Germany|   A|
+-----+-------+----+

We now have a dataframe containing the name, country, and team information of some students participating in a case-study competition. Note that there are duplicate rows present in the data.

Drop duplicate rows from Pyspark dataframe

Let’s remove the duplicate rows from the above dataframe. For this, apply the Pyspark dropDuplicates() function on the dataframe created above.

# drop duplicate rows
df.dropDuplicates().show()

Output:

+-----+-------+----+
| Name|Country|Team|
+-----+-------+----+
|  Max|Germany|   A|
|  Tim|Germany|   A|
| Emma|    USA|   B|
|Viraj|  India|   A|
| Jack|    USA|   B|
+-----+-------+----+

You can see that the resulting dataframe does not have any duplicate rows. Note that dropDuplicates() does not modify the original dataframe; it returns a new one. To keep the deduplicated result under the same name, assign the dataframe returned by dropDuplicates() back to the original variable.

# drop duplicate rows
df = df.dropDuplicates()
# display the dataframe
df.show()

Output:

+-----+-------+----+
| Name|Country|Team|
+-----+-------+----+
|  Max|Germany|   A|
|  Tim|Germany|   A|
| Emma|    USA|   B|
|Viraj|  India|   A|
| Jack|    USA|   B|
+-----+-------+----+

The dataframe df now doesn’t have any duplicate rows.

Use dropDuplicates() to view distinct values in a column

You can also use the Pyspark dropDuplicates() function to view unique values in a Pyspark column. For example, let’s use this function to get the distinct values in the “Country” column of the dataframe above.

# distinct values in Country column
df.select("Country").dropDuplicates().show()

Output:

+-------+
|Country|
+-------+
|Germany|
|  India|
|    USA|
+-------+

We get the unique values in the “Country” column – “Germany”, “India”, and “USA”. This use-case is similar to using the Pyspark distinct() function.



Authors

  • Piyush

    Piyush is a data scientist passionate about using data to understand things better and make informed decisions. In the past, he's worked as a Data Scientist for ZS and holds an engineering degree from IIT Roorkee. His hobbies include watching cricket, reading, and working on side projects.

  • Gottumukkala Sravan Kumar