In this tutorial, we will look at how to use the Pyspark collect() function to collect data from a Pyspark dataframe.
Collect data from Pyspark dataframe
You can use the collect() function to retrieve the records of a Pyspark dataframe as a list of Pyspark Row objects. It does not take any parameters, but if you want to collect only specific column(s), you can use it in combination with the Pyspark select() function.
Examples
Let’s look at some examples of using the collect()
function in Pyspark. First, let’s create a Pyspark dataframe that we will be using throughout this tutorial.
# import the pyspark module
import pyspark

# import the SparkSession class from pyspark.sql
from pyspark.sql import SparkSession

# create a SparkSession
spark = SparkSession.builder.appName('datascience_parichay').getOrCreate()

# books data as a list of lists
df = [[1, "PHP", "Sravan", 250],
      [2, "SQL", "Chandra", 300],
      [3, "Python", "Harsha", 250],
      [4, "R", "Rohith", 1200],
      [5, "Hadoop", "Manasa", 700]]

# create a dataframe from the books data
dataframe = spark.createDataFrame(df, ['Book_Id', 'Book_Name', 'Author', 'Price'])

# display the dataframe
dataframe.show()
Output:
+-------+---------+-------+-----+
|Book_Id|Book_Name| Author|Price|
+-------+---------+-------+-----+
|      1|      PHP| Sravan|  250|
|      2|      SQL|Chandra|  300|
|      3|   Python| Harsha|  250|
|      4|        R| Rohith| 1200|
|      5|   Hadoop| Manasa|  700|
+-------+---------+-------+-----+
We now have a dataframe with 5 rows and 4 columns containing information on some books.
Collect the entire data
Calling the collect() function without any parameters returns the entire dataframe's records as a list of Row objects. Let's get all the records from the above dataframe.
# collect data from dataframe
dataframe.collect()
Output:
[Row(Book_Id=1, Book_Name='PHP', Author='Sravan', Price=250),
 Row(Book_Id=2, Book_Name='SQL', Author='Chandra', Price=300),
 Row(Book_Id=3, Book_Name='Python', Author='Harsha', Price=250),
 Row(Book_Id=4, Book_Name='R', Author='Rohith', Price=1200),
 Row(Book_Id=5, Book_Name='Hadoop', Author='Manasa', Price=700)]
You can see that we get a list of rows from the collect() method.
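Since the result is a regular Python list, the usual list operations apply to it. A minimal sketch:

# the collected result is a plain Python list of Row objects
rows = dataframe.collect()
print(type(rows))
print(len(rows))

Output:

<class 'list'>
5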
Collect data from a particular column
You can use the Pyspark select()
function in combination with the collect()
function to collect data from a specific column. Pass the column name as an argument to the select()
function.
# collect data from the Book_Name column
dataframe.select("Book_Name").collect()
Output:
[Row(Book_Name='PHP'),
 Row(Book_Name='SQL'),
 Row(Book_Name='Python'),
 Row(Book_Name='R'),
 Row(Book_Name='Hadoop')]
Here, we pass "Book_Name" as an argument to the select() function.
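If you want plain Python values instead of Row objects, one option is to extract them from the collected rows with a list comprehension. A minimal sketch using the dataframe above (book_names is just an illustrative variable name):

# extract plain values from the collected Row objects
book_names = [row["Book_Name"] for row in dataframe.select("Book_Name").collect()]
print(book_names)

Output:

['PHP', 'SQL', 'Python', 'R', 'Hadoop']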
Iterate over each row of Pyspark dataframe
You can also use the collect()
function to iterate over the Pyspark dataframe row by row. For example, let’s iterate over each row in the above dataframe and print it.
# iterate over rows in dataframe
for r in dataframe.collect():
    print(r)
Output:
Row(Book_Id=1, Book_Name='PHP', Author='Sravan', Price=250)
Row(Book_Id=2, Book_Name='SQL', Author='Chandra', Price=300)
Row(Book_Id=3, Book_Name='Python', Author='Harsha', Price=250)
Row(Book_Id=4, Book_Name='R', Author='Rohith', Price=1200)
Row(Book_Id=5, Book_Name='Hadoop', Author='Manasa', Price=700)
We get all the rows in the dataframe printed.
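Each Row object also has an asDict() method that converts it to a plain Python dictionary, which can be handy while iterating. A minimal sketch:

# convert each collected Row to a Python dict while iterating
for r in dataframe.collect():
    print(r.asDict())

Output:

{'Book_Id': 1, 'Book_Name': 'PHP', 'Author': 'Sravan', 'Price': 250}
{'Book_Id': 2, 'Book_Name': 'SQL', 'Author': 'Chandra', 'Price': 300}
{'Book_Id': 3, 'Book_Name': 'Python', 'Author': 'Harsha', 'Price': 250}
{'Book_Id': 4, 'Book_Name': 'R', 'Author': 'Rohith', 'Price': 1200}
{'Book_Id': 5, 'Book_Name': 'Hadoop', 'Author': 'Manasa', 'Price': 700}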
Since the Pyspark collect() function results in a list of rows, you can access a particular row using its index. You can also access a particular value in the row using its column header.
# get the second row using its index
print(dataframe.collect()[1])

# get Book_Name from the second row
print(dataframe.collect()[1]["Book_Name"])
Output:
Row(Book_Id=2, Book_Name='SQL', Author='Chandra', Price=300)
SQL
Here we print the second row and the book name in the second row.
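Note that collect() brings all of the dataframe's rows to the driver, which can exhaust its memory for large dataframes. If you only need a few rows, the Pyspark take() function returns just that many. A minimal sketch:

# fetch only the first two rows instead of collecting the entire dataframe
print(dataframe.take(2))

Output:

[Row(Book_Id=1, Book_Name='PHP', Author='Sravan', Price=250), Row(Book_Id=2, Book_Name='SQL', Author='Chandra', Price=300)]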