Replacing spaces in all column names of a Spark DataFrame

I have a Spark DataFrame with spaces in some column names, which need to be replaced with an underscore.

I know that a single column can be renamed using withColumnRenamed() in Spark SQL, but to rename n columns, this function would have to be chained n times (as far as I know).

To automate this, I tried:

val old_names = df.columns        // array of the old column names

val new_names = old_names.map { x =>
  if (x.contains(" ")) x.replaceAll("\\s", "_")
  else x
}                    // array of new column names with whitespace replaced by underscores

Now, how do I replace the column names of df with new_names?

5 answers
  var newDf = df
  for (col <- df.columns) {
    newDf = newDf.withColumnRenamed(col, col.replaceAll("\\s", "_"))
  }

You can encapsulate this in a helper method so the mutation doesn't pollute the rest of your code.
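For example, a minimal sketch of such a wrapper (the helper name withRenamedColumns is purely illustrative):

  import org.apache.spark.sql.DataFrame

  // Hypothetical helper: keeps the mutable var confined to one small method.
  def withRenamedColumns(df: DataFrame)(rename: String => String): DataFrame = {
    var result = df
    for (c <- df.columns) {
      result = result.withColumnRenamed(c, rename(c))
    }
    result
  }

  val cleanedDf = withRenamedColumns(df)(_.replaceAll("\\s", "_"))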


Python:

# Importing sql types
from pyspark.sql.types import StringType, StructType, StructField
from pyspark.sql.functions import col

# Building a simple dataframe:
schema = StructType([
             StructField("id name", StringType(), True),
             StructField("cities venezuela", StringType(), True)
         ])

column1 = ['A', 'A', 'B', 'B', 'C', 'B']
column2 = ['Maracaibo', 'Valencia', 'Caracas', 'Barcelona', 'Barquisimeto', 'Merida']

# Dataframe:
df = sqlContext.createDataFrame(list(zip(column1, column2)), schema=schema)
df.show()

exprs = [col(column).alias(column.replace(' ', '_')) for column in df.columns]
df.select(*exprs).show()
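The same select-with-aliases idea carries over to Scala; a minimal sketch:

import org.apache.spark.sql.functions.col

# Alias every column to its cleaned-up name in a single select:
val exprs = df.columns.map(c => col(c).as(c.replaceAll("\\s", "_")))
val renamed = df.select(exprs: _*)

Since everything happens in one select, the rename is expressed as a single projection over the original DataFrame.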

As a best practice, you should prefer expressions and immutability: use val rather than var as much as possible.

Thus, it is preferable to use the foldLeft operator in this case:

val newDf = df.columns
              .foldLeft(df)((curr, n) => curr.withColumnRenamed(n, n.replaceAll("\\s", "_")))
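Another immutable option, sketched here with the standard toDF method, which accepts the complete list of new column names in order:

// toDF replaces all column names at once, preserving column order.
val renamedDf = df.toDF(df.columns.map(_.replaceAll("\\s", "_")): _*)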

You can do the same in Python:

raw_data1 = raw_data
for col in raw_data.columns:
  raw_data1 = raw_data1.withColumnRenamed(col,col.replace(" ", "_"))

Scala has another way to achieve the same thing:

    import org.apache.spark.sql.types._

    val df_with_newColumns = spark.createDataFrame(
      df.rdd,
      StructType(df.schema.map(s =>
        StructField(s.name.replaceAll(" ", "_"), s.dataType, s.nullable))))
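Note that this goes through df.rdd, converting the DataFrame to an RDD and rebuilding it with the rewritten schema, which is heavier than a plan-level rename such as select with aliases or toDF.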

Hope this helps!

