Creating groups / classes based on conditions inside columns

I need help transforming my data so that I can group transaction data.

Business example

I am trying to combine related transactions in order to create several groups, or classes, of events. This dataset represents workers taking various leaves of absence. I want any transactions that fall within 365 days of one another to roll up into a single leave event class. For trending charts, I want the classes to be numbered in sequence.

My code allows me to see when the very first event occurred, and it can determine when a new class starts, but it does not assign every transaction to a class.

Requirements:

  • Mark all rows according to which vacation class they fall in.
  • Indicate each unique leave event. Using this example, index 0 will be a unique Leave Event 2, index 1 will be a unique Leave Event 2, index 2 will be a unique Leave Event 2, and index 3 will be a unique Leave Event 1, etc.

I added the desired result in the column labeled "Desired Output." Please note: there can be many more rows / events per person, and there may be many more people.

Some data

    import pandas as pd

    data = {'Employee ID': ["100", "100", "100", "100", "200", "200", "200", "300"],
            'Effective Date': ["2016-01-01", "2015-06-05", "2014-07-01", "2013-01-01",
                               "2016-01-01", "2015-01-01", "2013-01-01", "2014-01"],
            'Desired Output': ["Unique Leave Event 2", "Unique Leave Event 2",
                               "Unique Leave Event 2", "Unique Leave Event 1",
                               "Unique Leave Event 2", "Unique Leave Event 2",
                               "Unique Leave Event 1", "Unique Leave Event 1"]}
    df = pd.DataFrame(data, columns=['Employee ID', 'Effective Date', 'Desired Output'])

The code I tried

    df['Effective Date'] = df['Effective Date'].astype('datetime64[ns]')
    df['EmplidShift'] = df['Employee ID'].shift(-1)
    df['Effdt-Shift'] = df['Effective Date'].shift(-1)
    df['Prior Row in Same Emplid Class'] = "No"
    df['Effdt Diff'] = df['Effdt-Shift'] - df['Effective Date']
    df['Effdt Diff'] = (pd.to_timedelta(df['Effdt Diff'], unit='d') +
                        pd.to_timedelta(1, unit='s')).astype('timedelta64[D]')
    df['Cumul. Count'] = df.groupby('Employee ID').cumcount()
    df['Groupby'] = df.groupby('Employee ID')['Cumul. Count'].transform('max')
    df['First Row Appears?'] = ""
    df['First Row Appears?'][df['Cumul. Count'] == df['Groupby']] = "First Row"
    df['Prior Row in Same Emplid Class'][df['Employee ID'] == df['EmplidShift']] = "Yes"
    df['Effdt > 1 Yr?'] = ""
    df['Effdt > 1 Yr?'][(df['Prior Row in Same Emplid Class'] == "Yes") &
                        (df['Effdt Diff'] < -365)] = "Yes"
    df['Unique Leave Event'] = ""
    df['Unique Leave Event'][(df['Effdt > 1 Yr?'] == "Yes") |
                             (df['First Row Appears?'] == "First Row")] = "Unique Leave Event"
    df

You can do this without looping through your dataframe. Per Wes McKinney, you can use .apply() with a groupby object and define a function to apply to each group. If you combine this with .shift(), you can get the result without using any loops.

Example:

    # Group by Employee ID
    grouped = df.groupby("Employee ID")

    # Define function
    def get_unique_events(group):
        # Convert to datetime and sort by date, like @Khris did
        group["Effective Date"] = pd.to_datetime(group["Effective Date"])
        group = group.sort_values("Effective Date")
        event_series = (group["Effective Date"] - group["Effective Date"].shift(1)
                        > pd.Timedelta('365 days')).apply(lambda x: int(x)).cumsum() + 1
        return event_series

    event_df = pd.DataFrame(grouped.apply(get_unique_events)
                            .rename("Unique Event")).reset_index(level=0)
    df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)
    df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))
    df['Match'] = df['Desired Output'] == df['Output']
    print(df)

Output:

      Employee ID Effective Date        Desired Output  Unique Event  \
    3         100     2013-01-01  Unique Leave Event 1             1
    2         100     2014-07-01  Unique Leave Event 2             2
    1         100     2015-06-05  Unique Leave Event 2             2
    0         100     2016-01-01  Unique Leave Event 2             2
    6         200     2013-01-01  Unique Leave Event 1             1
    5         200     2015-01-01  Unique Leave Event 2             2
    4         200     2016-01-01  Unique Leave Event 2             2
    7         300        2014-01  Unique Leave Event 1             1

                     Output  Match
    3  Unique Leave Event 1   True
    2  Unique Leave Event 2   True
    1  Unique Leave Event 2   True
    0  Unique Leave Event 2   True
    6  Unique Leave Event 1   True
    5  Unique Leave Event 2   True
    4  Unique Leave Event 2   True
    7  Unique Leave Event 1   True

A more detailed example for clarity:

    import pandas as pd

    data = {'Employee ID': ["100", "100", "100", "100", "200", "200", "200", "300"],
            'Effective Date': ["2016-01-01", "2015-06-05", "2014-07-01", "2013-01-01",
                               "2016-01-01", "2015-01-01", "2013-01-01", "2014-01"],
            'Desired Output': ["Unique Leave Event 2", "Unique Leave Event 2",
                               "Unique Leave Event 2", "Unique Leave Event 1",
                               "Unique Leave Event 2", "Unique Leave Event 2",
                               "Unique Leave Event 1", "Unique Leave Event 1"]}
    df = pd.DataFrame(data, columns=['Employee ID', 'Effective Date', 'Desired Output'])

    # Group by Employee ID
    grouped = df.groupby("Employee ID")

    # Define a function to get the unique events
    def get_unique_events(group):
        # Convert to datetime and sort by date, like @Khris did
        group["Effective Date"] = pd.to_datetime(group["Effective Date"])
        group = group.sort_values("Effective Date")
        # Define a series of booleans to determine whether the gap between dates is over 365 days
        # Use .shift(1) to look back one row
        is_year = group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')
        # Convert booleans to integers (0 for False, 1 for True)
        is_year_int = is_year.apply(lambda x: int(x))
        # Use the cumulative sum function in pandas to get the running event number from the first date.
        # Add one so the first event is numbered 1 instead of 0
        event_series = is_year_int.cumsum() + 1
        return event_series

    # Run the function on df and put the results into a new dataframe
    # Convert Employee ID back from an index to a column with .reset_index(level=0)
    event_df = pd.DataFrame(grouped.apply(get_unique_events)
                            .rename("Unique Event")).reset_index(level=0)

    # Merge the dataframes
    df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)

    # Add a string to match the desired format
    df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))

    # Check whether the output matches the desired output
    df['Match'] = df['Desired Output'] == df['Output']

    print(df)

You get the same result:

      Employee ID Effective Date        Desired Output  Unique Event  \
    3         100     2013-01-01  Unique Leave Event 1             1
    2         100     2014-07-01  Unique Leave Event 2             2
    1         100     2015-06-05  Unique Leave Event 2             2
    0         100     2016-01-01  Unique Leave Event 2             2
    6         200     2013-01-01  Unique Leave Event 1             1
    5         200     2015-01-01  Unique Leave Event 2             2
    4         200     2016-01-01  Unique Leave Event 2             2
    7         300        2014-01  Unique Leave Event 1             1

                     Output  Match
    3  Unique Leave Event 1   True
    2  Unique Leave Event 2   True
    1  Unique Leave Event 2   True
    0  Unique Leave Event 2   True
    6  Unique Leave Event 1   True
    5  Unique Leave Event 2   True
    4  Unique Leave Event 2   True
    7  Unique Leave Event 1   True

This is a bit clumsy, but it gives the correct output, at least for your small example:

    import pandas as pd

    data = {'Employee ID': ["100", "100", "100", "100", "200", "200", "200", "300"],
            'Effective Date': ["2016-01-01", "2015-06-05", "2014-07-01", "2013-01-01",
                               "2016-01-01", "2015-01-01", "2013-01-01", "2014-01-01"],
            'Desired Output': ["Unique Leave Event 2", "Unique Leave Event 2",
                               "Unique Leave Event 2", "Unique Leave Event 1",
                               "Unique Leave Event 2", "Unique Leave Event 2",
                               "Unique Leave Event 1", "Unique Leave Event 1"]}
    df = pd.DataFrame(data, columns=['Employee ID', 'Effective Date', 'Desired Output'])

    df["Effective Date"] = pd.to_datetime(df["Effective Date"])
    df = df.sort_values(["Employee ID", "Effective Date"]).reset_index(drop=True)

    # The first row (after sorting) always starts the first event
    df.loc[0, "Result"] = "Unique Leave Event 1"
    for i, _ in df.iterrows():
        if i < len(df) - 1:
            if df.loc[i + 1, "Employee ID"] == df.loc[i, "Employee ID"]:
                if df.loc[i + 1, "Effective Date"] - df.loc[i, "Effective Date"] > pd.Timedelta('365 days'):
                    df.loc[i + 1, "Result"] = ("Unique Leave Event " +
                                               str(int(df.loc[i, "Result"].split()[-1]) + 1))
                else:
                    df.loc[i + 1, "Result"] = df.loc[i, "Result"]
            else:
                df.loc[i + 1, "Result"] = "Unique Leave Event 1"

Note that this code assumes the first row always belongs to Unique Leave Event 1.

EDIT: Some explanation.

First, I convert the dates to datetime format, and then I reorder the data so that the dates for each Employee ID are increasing.

Then I iterate over the rows of the dataframe using the built-in iterrows iterator. The _ in for i,_ is just a placeholder for the second variable, which I do not use: the iterator returns (index, row) pairs, and I only need the index.
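To make the (index, row) shape of iterrows concrete, here is a minimal sketch on a toy frame (the data is made up for illustration):

```python
import pandas as pd

# Toy frame, just to illustrate what iterrows() yields.
df = pd.DataFrame({"Employee ID": ["100", "200"]})

pairs = list(df.iterrows())
# Each element is an (index, row) tuple; the row is a Series.
first_index, first_row = pairs[0]
print(first_index)               # 0
print(first_row["Employee ID"])  # 100
```

Unpacking as `for i, _ in df.iterrows()` simply discards the row Series and keeps the index.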

Inside the loop, I compare pairs of rows, so I fill in the first row manually as a default and then always assign to row i+1. I do it this way because I know the value of the first row but not the value of the last row. I compare row i+1 with row i inside an if-guard, because i+1 would give an index error on the last iteration.

In the loop, I first check whether the Employee ID changes between the two rows. If it does not, I compare the dates of the two rows to see if they are more than 365 days apart. If so, I read the "Unique Leave Event X" string from row i, increment the number by one, and write it into row i+1. If the dates are closer together, I just copy the string from the previous row.

If the Employee ID does change, on the other hand, I just write "Unique Leave Event 1" to start over.

Note 1: iterrows() takes no parameters, so you cannot tell it to iterate over only a subset of the rows.
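One way around that limitation, sketched here on made-up toy data, is to filter the frame first and iterate over the filtered result; the original row labels are preserved:

```python
import pandas as pd

df = pd.DataFrame({
    "Employee ID": ["100", "100", "200"],
    "Effective Date": ["2013-01-01", "2016-01-01", "2013-01-01"],
})

# Filter first, then iterate: only employee 100's rows are visited.
visited = []
for i, row in df[df["Employee ID"] == "100"].iterrows():
    visited.append(i)

print(visited)  # [0, 1]
```

Note that the filtered frame is a copy, so any values you assign while iterating it will not land in the original df; assign through df.loc with the preserved labels instead.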

Note 2: Always iterate using one of the built-in iterators and only iterate if you cannot solve the problem otherwise.

Note 3: When assigning values while iterating, always use loc or iloc (the older ix indexer is deprecated and has been removed from modern pandas).
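A minimal sketch of that assignment pattern, on toy data made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({"Employee ID": ["100", "100", "200"]})
df["Result"] = ""

# df.loc[row_label, column] writes back into the original frame;
# mutating the row Series yielded by iterrows() would not.
for i, _ in df.iterrows():
    df.loc[i, "Result"] = "seen"

print(df["Result"].tolist())  # ['seen', 'seen', 'seen']
```

The same pattern works with iloc when you have positions rather than labels.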

