
I am very new to Scala and Spark, and am working on some self-made exercises using baseball statistics. I am using a case class to create an RDD and assign a schema to the data, and am then converting it into a DataFrame so I can use Spark SQL to select groups of players whose stats meet certain criteria.

Once I have the subset of players I am interested in looking at further, I would like to find the mean of a column, e.g. Batting Average or RBIs. From there I would like to break all the players into percentile groups based on their average performance compared to all players: the top 10%, the bottom 10%, 40-50%, and so on.

I've been able to use the DataFrame.describe() function to return a summary of a desired column (mean, stddev, count, min, and max), but everything comes back as a string. Is there a better way to get just the mean and stddev as Doubles, and what is the best way of breaking the players into groups of 10 percentiles?

So far my thoughts are to find the values that bookend the percentile ranges and write a function that groups players via comparators, but that feels like it is bordering on reinventing the wheel.

I have the following imports currently:

 import org.apache.spark.rdd.RDD 
 import org.apache.spark.sql.SQLContext 
 import org.apache.spark.{SparkConf, SparkContext} 
 import org.joda.time.format.DateTimeFormat  

3 Answers


This is the import you need, and how to get the mean for a column named "RBIs":

import org.apache.spark.sql.functions._
df.select(avg($"RBIs")).show()

For the standard deviation, see the existing Stack Overflow question "Calculate the standard deviation of grouped data in a Spark DataFrame".
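
As a sketch of the same pattern, the built-in stddev aggregate (available from Spark 1.6 onward) can be extracted the same way:

import org.apache.spark.sql.functions.stddev

// Sample standard deviation of the column, again read out as a Double.
val stddevRbis: Double = df.select(stddev($"RBIs")).first().getDouble(0)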

For grouping by percentiles, I suggest defining a new column via a user-defined function (UDF) and using groupBy on that column, as in the sketch below.
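
A minimal sketch of that approach, assuming a DataFrame df with a numeric "RBIs" column and Spark 2.0+ for approxQuantile (on older versions the decile boundaries would have to be computed by hand):

// Approximate decile boundaries (10th, 20th, ..., 90th percentile);
// the last argument is the allowed relative error.
val boundaries = df.stat.approxQuantile("RBIs", (1 to 9).map(_ / 10.0).toArray, 0.01)

// UDF assigning each value its decile bucket: 0 = bottom 10%, 9 = top 10%.
val decile = udf((x: Double) => boundaries.count(_ <= x))

df.withColumn("decile", decile(col("RBIs")))
  .groupBy("decile")
  .count()
  .show()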


In PySpark, this also returns the average of the selected column:

from pyspark.sql.functions import mean
df.select(mean(df["ColumnName"])).show()
+----------------+
| avg(ColumnName)|
+----------------+
|230.522453845909|
+----------------+

In PySpark you can just pass the column name as a string to the avg function, with no need for Scala's $ syntax.

from pyspark.sql.functions import avg

# collect() returns a list of Rows; [0][0] picks out the single aggregate value
avg_file_size = float(df.select(avg("col_name")).collect()[0][0])
print(avg_file_size)