I have a dictionary "d" with 10 keys whose values are PySpark DataFrames.
>>> d.keys()
dict_keys(['Py1', 'Py2', 'Py3', 'Py4', 'Py7', 'Py8', 'Py15', 'Py20', 'Py21', 'Py22'])
I am currently taking each key and its value, then assigning it to a variable like so:
df1 = d['Py1']
df2 = d['Py2']
df3 = d['Py3']
.
.
.
df10 = d['Py22']
I then do various manipulations using PySpark. What is the best way to achieve this without the redundancy? Here is what I attempted:
newname = "df"
counter = 1
for key in d.keys():
    key = newname + str(counter)
    counter += 1
    print(key)
But when I then do print(df1), I get a NameError: name 'df1' is not defined.
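For context, here is a minimal sketch of what the loop above actually does, using plain integers as stand-ins for the PySpark DataFrames (the values and the dfs dict below are illustrative, not from the original code). The assignment inside the loop only rebinds the loop variable key to a new string; it never creates a variable named df1. One workaround, assuming the goal is just df1-style names, is to build a second dict keyed by the new names:

```python
# Plain ints stand in for the PySpark DataFrames
d = {'Py1': 1, 'Py2': 2, 'Py3': 3}

newname = "df"
counter = 1
for key in d.keys():
    key = newname + str(counter)  # only rebinds the loop variable to a string
    counter += 1

# No variable named df1 was ever created by the loop above:
try:
    print(df1)
except NameError as e:
    print(e)  # name 'df1' is not defined

# One way to get df1, df2, ... handles without ten assignments:
# build a new dict keyed by the generated names (hypothetical name "dfs")
dfs = {newname + str(i): v for i, v in enumerate(d.values(), start=1)}
print(dfs["df1"])  # 1
```

Accessing the frames as dfs["df1"] keeps them in one container, which is usually preferable to injecting ten separate names into the namespace.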