How to select all non-NaN columns and the last non-NaN column using pandas?

Forgive me if the title is a bit confusing.

Assuming I have test.h5. Below is the result of reading this file with pd.read_hdf('test.h5', 'testdata'):

     0     1     2     3     4     5    6
0   123   444   111   321   NaN   NaN  NaN
1   12    234   113   67    21    32   900
3   212   112   543   321   45    NaN  NaN

I want to select the last non-NaN value in each row. My expected result is as follows:

0   321
1   900
3   45

I also want to select everything in each row except that last non-NaN value. My expected result is below; it could perhaps be a NumPy array instead, but I haven't found a solution yet.

      0     1     2     3     4     5    6
0    123   444   111   
1    12    234   113   67    21    32  
3    212   112   543   321  

I searched the Internet and found df.iloc[:, :-1] to select all columns except the last, and df.iloc[:, -1] to select the last column.

My current result using these two commands looks like this:

1. All columns except the last:

       0     1     2     3     4     5    
0     123   444   111   321   NaN   NaN  
1     12    234   113   67    21    32   
3     212   112   543   321   45    NaN  

2. The last column:

0   NaN
1   900
3   NaN

One way is to push each row's NaN values to the front by sorting on pd.notnull, i.e.

ndf = df.apply(lambda x : sorted(x,key=pd.notnull),1)

     0      1      2      3      4      5      6
0   NaN    NaN    NaN  123.0  444.0  111.0  321.0
1  12.0  234.0  113.0   67.0   21.0   32.0  900.0
3   NaN    NaN  212.0  112.0  543.0  321.0   45.0

Then the last column answers the first part, and sorting the remaining columns by pd.isnull restores the original layout for the second part:

ndf.iloc[:,-1]
0    321.0
1    900.0
3     45.0
Name: 6, dtype: float64
ndf.iloc[:,:-1].apply(lambda x : sorted(x,key=pd.isnull),1)
      0      1      2      3     4     5
0  123.0  444.0  111.0    NaN   NaN   NaN
1   12.0  234.0  113.0   67.0  21.0  32.0
3  212.0  112.0  543.0  321.0   NaN   NaN
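The trick works because pd.notnull returns False for NaN and True otherwise, and Python's sort is stable, so each row's NaNs move to the front while the relative order of the real values is preserved. A minimal sketch of just that step:

```python
import pandas as pd

row = [123.0, float("nan"), 111.0, float("nan"), 321.0]

# False (NaN) sorts before True (non-NaN); the stable sort keeps
# 123, 111, 321 in their original order
shifted = sorted(row, key=pd.notnull)
print(shifted)  # [nan, nan, 123.0, 111.0, 321.0]
```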

Question #2

Find the position of the last non-NaN value in each row, then mask everything from that position onward:

idx = df.notnull().cumsum(1).idxmax(1).values.astype(int)
df_out = df.mask(idx[:,None] <= np.arange(df.shape[1]))

Sample run on an input with interior NaNs, showing that those are left alone; masking starts only at each row's last non-NaN value:

In [181]: df
Out[181]: 
     0      1      2    3     4     5      6
0  123  444.0  111.0  321   NaN   NaN    NaN
1   12    NaN    NaN   67  21.0  32.0  900.0
3  212    NaN    NaN  321  45.0   NaN    NaN

In [182]: idx = df.notnull().cumsum(1).idxmax(1).values.astype(int)

In [183]: df.mask(idx[:,None] <= np.arange(df.shape[1]))
Out[183]: 
     0      1      2      3     4     5   6
0  123  444.0  111.0    NaN   NaN   NaN NaN
1   12    NaN    NaN   67.0  21.0  32.0 NaN
3  212    NaN    NaN  321.0   NaN   NaN NaN

Question #1

To pull out the last non-NaN value of each row, index into the array with NumPy:

In [192]: df.values[np.arange(len(idx)), idx]
Out[192]: array([ 321.,  900.,   45.])
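The mask comparison relies on NumPy broadcasting: idx[:, None] has shape (rows, 1) and np.arange(df.shape[1]) has shape (cols,), so comparing them yields a (rows, cols) boolean grid that is True from each row's last non-NaN column onward. A small sketch of just that step, using made-up positions:

```python
import numpy as np

idx = np.array([3, 6, 4])   # hypothetical last non-NaN column per row
ncols = 7

# (3, 1) <= (7,) broadcasts to a (3, 7) boolean mask
mask = idx[:, None] <= np.arange(ncols)
print(mask[0])  # [False False False  True  True  True  True]
```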

Use notnull plus idxmax over the reversed columns to get the label of the last non-NaN column in each row, then lookup to fetch the values:

a = df.notnull().iloc[:,::-1].idxmax(1)
print (a)
0    3
1    6
3    4
dtype: object

print (pd.Series(df.lookup(df.index, a)))
0    321.0
1    900.0
2     45.0
dtype: float64
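Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0; on newer versions the same per-row pickup can be done with NumPy advanced indexing. A sketch, reconstructing the sample frame from the question:

```python
import numpy as np
import pandas as pd

# reconstruction of the sample frame from the question
df = pd.DataFrame([[123, 444, 111, 321, np.nan, np.nan, np.nan],
                   [12, 234, 113, 67, 21, 32, 900],
                   [212, 112, 543, 321, 45, np.nan, np.nan]],
                  index=[0, 1, 3])

# label of the last non-NaN column in each row
a = df.notnull().iloc[:, ::-1].idxmax(axis=1)

# NumPy advanced indexing replaces the removed df.lookup
vals = df.to_numpy()[np.arange(len(df)), df.columns.get_indexer(a)]
print(pd.Series(vals, index=df.index))
```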

To set those last non-NaN values to NaN for the second part:

arr = df.values
arr[np.arange(len(df.index)),a] = np.nan
print (pd.DataFrame(arr, index=df.index, columns=df.columns))
       0      1      2      3     4     5   6
0  123.0  444.0  111.0    NaN   NaN   NaN NaN
1   12.0  234.0  113.0   67.0  21.0  32.0 NaN
3  212.0  112.0  543.0  321.0   NaN   NaN NaN

Option 1: stack drops NaNs, so the last stacked value per row is the last non-NaN:

df.stack().groupby(level=0).last()

0    321.0
1    900.0
3     45.0
dtype: float64

Option 2: apply with pd.Series.last_valid_index:

# Thanks to Bharath shetty for the suggestion
df.apply(lambda x : x[x.last_valid_index()], 1)
# Old Answer
# df.apply(pd.Series.last_valid_index, 1).pipe(lambda x: df.lookup(x.index, x))

0    321.0
1    900.0
3     45.0
dtype: float64

Option 3: np.where over the notnull mask; the dict keeps one entry per row, so later (rightmost) matches overwrite earlier ones:

pd.Series({df.index[i]: df.iat[i, j] for i, j in zip(*np.where(df.notnull()))})

0    321.0
1    900.0
3     45.0
dtype: float64

Option 4: pd.DataFrame.ffill along the rows; after forward-filling, the last column holds each row's last non-NaN value:

df.ffill(1).iloc[:, -1]

0    321.0
1    900.0
3     45.0
Name: 6, dtype: float64

For the second part, stack drops the NaNs, so remove the last stacked value in each row and unstack:

df.stack().groupby(level=0, group_keys=False).apply(lambda x: x.head(-1)).unstack()

       0      1      2      3     4     5
0  123.0  444.0  111.0    NaN   NaN   NaN
1   12.0  234.0  113.0   67.0  21.0  32.0
3  212.0  112.0  543.0  321.0   NaN   NaN

For those looking for an answer to this particular problem: I ended up using the answer given by Bharath Shetty. To make the data easier to access later, I adapted it slightly; below is my code:

#assuming you have some csv file with different length of row/column
#and you want to create h5 file from those csv files
data_one = [np.loadtxt(file) for file in glob.glob(yourpath + "folder_one/*.csv")]
data_two = [np.loadtxt(file) for file in glob.glob(yourpath + "folder_two/*.csv")] 

df1 = pd.DataFrame(data_one)
df2 = pd.DataFrame(data_two)

combine = pd.concat([df1, df2], ignore_index=True)  # df.append was removed in pandas 2.0
combine_sort = combine.apply(lambda x : sorted(x, key=pd.notnull), 1)
combine_sort.to_hdf('test.h5', 'testdata')  # save the sorted frame, not the unsorted one

For reading it back:

dataframe = pd.read_hdf('test.h5', 'testdata')
dataset = dataframe.values

q1 = dataset[:, :-1]  # every column except the last
q2 = dataset[:, -1]   # the last column: each row's last non-NaN value, thanks to the sort
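Since the original CSV rows had different lengths, each row of the padded array still carries NaN filler; when the real values are needed, the padding can be stripped back off per row. A small sketch, using a made-up padded array with NaNs at the front as the sort above produces:

```python
import numpy as np

# hypothetical padded array, as read back from the HDF5 file
dataset = np.array([[np.nan, np.nan, 1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0, 7.0, 8.0]])

# keep only the non-NaN entries of each row (rows end up ragged)
rows = [row[~np.isnan(row)] for row in dataset]
print([r.tolist() for r in rows])  # [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0, 7.0, 8.0]]
```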