
Employee attrition

By Pablo Martin, Artelnics.

One of the main problems for companies and HR departments is employee attrition. This phenomenon can be very expensive: the cost of retaining an existing employee is far lower than that of acquiring a new one. Employee churn prevention aims to predict who will terminate their job, when, and why.

Accurate methods that identify which employees are more likely to switch to another company are needed. They would allow the company to adapt those specific aspects of the organization that drive attrition and, therefore, reduce costs.


The objective is to effectively untangle all the factors that lead to employee attrition and to determine the underlying causes in order to prevent it. But analyzing multiple personal and social factors is complicated, to say the least: we need rich employee data along with complex predictive models to analyze it.

1. Data set

The data set used in this study contains quantitative and qualitative information about a sample of employees at the company. These variables are classified into the following three groups:

  1. Personal factors: age, sex, education, own residence...
  2. Professional factors: position, work experience, salary, length of service...
  3. Socio-economic factors: unemployment rate, economic growth, quality of life, crime rate...

The data set contains about 1,500 employees. For each one, around 35 personal, professional, and socio-economic attributes are selected; after the categorical attributes are encoded as binary dummy variables, they yield the input variables of the model. The target variable is the status of the worker with the company (loyal or attrition). The next table lists all the variables with their corresponding use.

Variables description

As we can see, we have a total of 48 inputs, which contain the characteristics of every employee; 1 target, the variable "Attrition" mentioned before; and 3 unused variables ("EmployeeCount", "Over18" and "StandardHours"), which are constant and are excluded from the analysis since they do not provide any valuable information.
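The jump from around 35 raw attributes to 48 inputs comes from expanding the categorical fields into binary dummy variables. The following is a minimal pandas sketch of that preprocessing; the file name is hypothetical and the target is assumed to be stored as Yes/No.

import pandas as pd

# Hypothetical file name; the actual data is linked in the bibliography.
df = pd.read_csv("employee_attrition.csv")

# Drop the three constant, unused columns.
df = df.drop(columns=["EmployeeCount", "Over18", "StandardHours"])

# Binary target: 1 for attrition, 0 for loyal (assuming Yes/No values).
y = (df.pop("Attrition") == "Yes").astype(int)

# Expand categorical attributes (department, job role, marital status, ...)
# into binary dummy variables, giving the 48 numeric inputs.
X = pd.get_dummies(df)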

In order to study the dependencies between each input variable and the target, we are going to calculate the logistic correlations between them. The next chart shows the results of these calculations.

Correlation chart

As we can see, the input variables most strongly correlated with attrition are "OverTime" (0.246118), "TotalWorkingYears" (0.22332) and "YearsAtCompany" (0.196728), while the least correlated ones are "HourlyRate" (0.00678), "PerformanceRating" (0.00289) and "Research Scientist" (0.00036).
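As a point of reference, one way to approximate such a logistic correlation is to fit a one-input logistic regression per variable and correlate its predictions with the target; the exact formula behind the chart above may differ. A sketch using scikit-learn, with X and y from the preprocessing sketch above:

import numpy as np
from sklearn.linear_model import LogisticRegression

def logistic_correlation(x, y):
    # Fit a single-input logistic model, then measure the linear correlation
    # between its predicted probabilities and the actual target.
    model = LogisticRegression().fit(x.reshape(-1, 1), y)
    p = model.predict_proba(x.reshape(-1, 1))[:, 1]
    return np.corrcoef(p, y)[0, 1]

# correlations = {c: logistic_correlation(X[c].to_numpy(), y) for c in X.columns}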

Before starting the predictive analysis, it is also important to know the ratio of negative and positive instances that we have in the data set.

Target distribution

The chart shows that the number of negative instances (1233) is much larger than the number of positive instances (237). We will use this information later to properly design the predictive model.

2. Neural network training

A neural network will take the attributes of each employee and transform them into a probability of attrition. For that purpose, we will use a neural network with 48 inputs, one hidden layer with a single neuron, and one output.

The scaling layer, placed between the inputs and the hidden layer, and the unscaling layer, placed between the hidden layer and the output, will both use the minimum-maximum method.
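The minimum-maximum method maps each variable from its observed range onto [-1, 1], as can also be seen in the model expression of section 4. A one-function sketch:

def minmax_scale(x, x_min, x_max):
    # Map x linearly from [x_min, x_max] onto [-1, 1].
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# Example: Age ranges from 18 to 60, so minmax_scale(35, 18, 60) ≈ -0.19.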

As we said before, the data set is unbalanced. As a consequence, we will use the weighted squared error as the error method, with the positive and negative weights shown in the next table.

Positive-Negative weights
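A common convention, assumed in the sketch below, is to weight each class inversely to its frequency, so that the 237 positive instances carry as much total weight as the 1233 negative ones; the exact weights in the table above may differ.

import numpy as np

positive_weight = 1233 / 237   # ≈ 5.2: an error on an attrition case counts ~5x
negative_weight = 1.0

def weighted_squared_error(outputs, targets):
    # Squared error with a per-instance weight determined by the class.
    weights = np.where(targets == 1, positive_weight, negative_weight)
    return np.mean(weights * (outputs - targets) ** 2)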

Now, the model is ready to be trained. We will use the conjugate gradient method as the training algorithm. The next chart shows how the loss decreases over the iterations.

Training chart

As we can see, the initial value of the loss was 1.05565 and, after 222 iterations, it decreased to 0.567598.
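For readers who want to reproduce this setup outside the tool, the following is a minimal sketch of the 48-1-1 network trained with SciPy's nonlinear conjugate gradient optimizer, standing in for the tool's own implementation; X is the scaled input matrix and y the binary target from the sketches above.

import numpy as np
from scipy.optimize import minimize

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(theta, X):
    # theta packs the hidden neuron (48 weights + bias) and the output neuron
    # (1 weight + bias), matching the 48-1-1 architecture described above.
    w1, b1 = theta[:48], theta[48]
    w2, b2 = theta[49], theta[50]
    return logistic(w2 * logistic(X @ w1 + b1) + b2)

def loss(theta, X, y):
    # Weighted squared error, as defined in the previous sketch.
    weights = np.where(y == 1, 1233 / 237, 1.0)
    return np.mean(weights * (forward(theta, X) - y) ** 2)

# theta0 = 0.1 * np.random.default_rng(0).standard_normal(51)
# result = minimize(loss, theta0, args=(X, y), method="CG")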

In order to study whether over-fitting appeared during the training process, we will also plot the selection loss history, which is shown below.

Selection loss chart

In this case, the initial value of the selection loss was 1.02299, and it decreased to 0.669642 after 222 iterations. As we can see, the loss and the selection loss behave in a similar way over the iterations, which means that no over-fitting has appeared.

We can then move on to the next step: testing the predictive capacity of our model.

3. Testing analysis

In this section, we will assess the quality of the model and decide whether it is ready to be used in the production phase, i.e., in a real-world situation.

The way to test the model is to compare the outputs of the trained neural network against the real targets for data that has been used neither for training nor for selection: the testing subset. For that purpose, we will make use of some testing methods commonly used in binary classification problems.
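The proportions of the three subsets are not stated above, so the sketch below assumes a common 60/20/20 division of the instances into training, selection and testing.

import numpy as np

rng = np.random.default_rng(0)
indices = rng.permutation(len(X))
n_train = int(0.6 * len(X))
n_selection = int(0.2 * len(X))

# Split the shuffled indices into three disjoint subsets.
train, selection, testing = np.split(indices, [n_train, n_train + n_selection])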

The next table shows the binary classification tests. They are calculated from the values of the confusion matrix.

Binary tests

The accuracy shows that the model correctly predicts almost 81% of the testing instances, while the error rate shows that it fails on only around 19% of them. The value of the sensitivity is 0.682927, which means that the model detects around 68% of the positive instances. The specificity is 0.83004, so it detects around 83% of the negative instances.

In general, these binary classification tests show a good performance of the predictive model.
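All four tests follow directly from the entries of the confusion matrix. A sketch, where tp, fp, tn and fn denote the testing-subset counts of true positives, false positives, true negatives and false negatives:

def binary_tests(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,     # fraction of correct predictions
        "error_rate": (fp + fn) / total,   # fraction of incorrect predictions
        "sensitivity": tp / (tp + fn),     # detected positive (attrition) cases
        "specificity": tn / (tn + fp),     # detected negative (loyal) cases
    }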

We are now going to calculate the ROC curve. It will help us measure the discrimination capacity of the classifier between positive and negative instances. The next chart shows the ROC curve for our problem.

ROC curve

For a perfect classifier, the ROC curve passes through the upper left corner. In this case, the curve is close to it, which means that the quality of the model is good. The next table shows the value of the area under the previous ROC curve.

Area under the curve

The closer the area under the curve is to 1, the better the classifier. In this case, the area takes the value 0.836, which confirms what we saw in the ROC chart: the model predicts attrition with good accuracy.
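The curve and its area can be reproduced with scikit-learn from the testing targets and the model's output probabilities (y_test and p_test below are hypothetical names):

from sklearn.metrics import roc_curve, roc_auc_score

fpr, tpr, thresholds = roc_curve(y_test, p_test)   # points of the ROC curve
auc = roc_auc_score(y_test, p_test)                # ≈ 0.836 for this model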

4. Model deployment

Once we know that the model can predict employee attrition accurately, it can be used to evaluate the satisfaction of a given employee with the company. The predictive model also gives us the factors that are most significant for a given employee, which allows the company to act on those variables.

The predictive model takes the form of a function of the outputs with respect to the inputs. Its mathematical expression, listed below, can be embedded into other software, in the so-called production mode.

// Scale each input onto [-1, 1] with the minimum-maximum method,
// using the minimum and maximum observed in the data set.
scaled_Age=2*(Age-18)/(60-18)-1;
scaled_BusinessTravel=2*(BusinessTravel-0)/(2-0)-1;
scaled_DailyRate=2*(DailyRate-102)/(1499-102)-1;
scaled_Sales=2*(Sales-0)/(1-0)-1;
scaled_Research_and_Development=2*(Research_and_Development-0)/(1-0)-1;
scaled_Human_Resources=2*(Human_Resources-0)/(1-0)-1;
scaled_DistanceFromHome=2*(DistanceFromHome-1)/(29-1)-1;
scaled_Education=2*(Education-1)/(5-1)-1;
scaled_Life_Sciences=2*(Life_Sciences-0)/(1-0)-1;
scaled_Other=2*(Other-0)/(1-0)-1;
scaled_Medical=2*(Medical-0)/(1-0)-1;
scaled_Marketing=2*(Marketing-0)/(1-0)-1;
scaled_Technical_Degree=2*(Technical_Degree-0)/(1-0)-1;
scaled_Human_Resources_1=2*(Human_Resources_1-0)/(1-0)-1;
scaled_EmployeeNumber=2*(EmployeeNumber-1)/(2068-1)-1;
scaled_EnvironmentSatisfaction=2*(EnvironmentSatisfaction-1)/(4-1)-1;
scaled_Gender=2*(Gender-0)/(1-0)-1;
scaled_HourlyRate=2*(HourlyRate-30)/(100-30)-1;
scaled_JobInvolvement=2*(JobInvolvement-1)/(4-1)-1;
scaled_JobLevel=2*(JobLevel-1)/(5-1)-1;
scaled_Sales_Executive=2*(Sales_Executive-0)/(1-0)-1;
scaled_Research_Scientist=2*(Research_Scientist-0)/(1-0)-1;
scaled_Laboratory_Technician=2*(Laboratory_Technician-0)/(1-0)-1;
scaled_Manufacturing_Director=2*(Manufacturing_Director-0)/(1-0)-1;
scaled_Healthcare_Representative=2*(Healthcare_Representative-0)/(1-0)-1;
scaled_Manager=2*(Manager-0)/(1-0)-1;
scaled_Sales_Representative=2*(Sales_Representative-0)/(1-0)-1;
scaled_Research_Director=2*(Research_Director-0)/(1-0)-1;
scaled_Human_Resources_2=2*(Human_Resources_2-0)/(1-0)-1;
scaled_JobSatisfaction=2*(JobSatisfaction-1)/(4-1)-1;
scaled_Single=2*(Single-0)/(1-0)-1;
scaled_Married=2*(Married-0)/(1-0)-1;
scaled_Divorced=2*(Divorced-0)/(1-0)-1;
scaled_MonthlyIncome=2*(MonthlyIncome-1009)/(19999-1009)-1;
scaled_MonthlyRate=2*(MonthlyRate-2094)/(26999-2094)-1;
scaled_NumCompaniesWorked=2*(NumCompaniesWorked-0)/(9-0)-1;
scaled_OverTime=2*(OverTime-0)/(1-0)-1;
scaled_PercentSalaryHike=2*(PercentSalaryHike-11)/(25-11)-1;
scaled_PerformanceRating=2*(PerformanceRating-3)/(4-3)-1;
scaled_RelationshipSatisfaction=2*(RelationshipSatisfaction-1)/(4-1)-1;
scaled_StockOptionLevel=2*(StockOptionLevel-0)/(3-0)-1;
scaled_TotalWorkingYears=2*(TotalWorkingYears-0)/(40-0)-1;
scaled_TrainingTimesLastYear=2*(TrainingTimesLastYear-0)/(6-0)-1;
scaled_WorkLifeBalance=2*(WorkLifeBalance-1)/(4-1)-1;
scaled_YearsAtCompany=2*(YearsAtCompany-0)/(40-0)-1;
scaled_YearsInCurrentRole=2*(YearsInCurrentRole-0)/(18-0)-1;
scaled_YearsSinceLastPromotion=2*(YearsSinceLastPromotion-0)/(15-0)-1;
scaled_YearsWithCurrManager=2*(YearsWithCurrManager-0)/(17-0)-1;

// Hidden layer: a single logistic neuron combining the 48 scaled inputs.
y_1_1=Logistic(-0.403897
+1.15808*scaled_Age
-2.34208*scaled_BusinessTravel
-0.225363*scaled_DailyRate
-0.32279*scaled_Sales
+0.647643*scaled_Research_and_Development
+0.214216*scaled_Human_Resources
-1.46614*scaled_DistanceFromHome
-0.357817*scaled_Education
+0.262028*scaled_Life_Sciences
-0.592495*scaled_Other
+0.0815542*scaled_Medical
-0.049687*scaled_Marketing
-0.98413*scaled_Technical_Degree
-0.238478*scaled_Human_Resources_1
-0.0019147*scaled_EmployeeNumber
+1.54721*scaled_EnvironmentSatisfaction
-0.619116*scaled_Gender
-0.263873*scaled_HourlyRate
+1.74423*scaled_JobInvolvement
+1.35269*scaled_JobLevel
+0.260577*scaled_Sales_Executive
-0.9788*scaled_Research_Scientist
-1.41266*scaled_Laboratory_Technician
+0.634257*scaled_Manufacturing_Director
+0.181605*scaled_Healthcare_Representative
-0.620446*scaled_Manager
-0.654519*scaled_Sales_Representative
+1.33424*scaled_Research_Director
-1.50159*scaled_Human_Resources_2
+1.31217*scaled_JobSatisfaction
-0.798697*scaled_Single
-0.179342*scaled_Married
+0.910973*scaled_Divorced
-0.805847*scaled_MonthlyIncome
-0.0521658*scaled_MonthlyRate
-1.53312*scaled_NumCompaniesWorked
-2.10922*scaled_OverTime
-0.66244*scaled_PercentSalaryHike
+1.12097*scaled_PerformanceRating
+0.712171*scaled_RelationshipSatisfaction
+0.711251*scaled_StockOptionLevel
+1.4649*scaled_TotalWorkingYears
+0.668953*scaled_TrainingTimesLastYear
+1.60514*scaled_WorkLifeBalance
+0.275516*scaled_YearsAtCompany
+1.89133*scaled_YearsInCurrentRole
-1.98539*scaled_YearsSinceLastPromotion
+0.96599*scaled_YearsWithCurrManager);

// Output layer: a logistic neuron on the hidden activation.
non_probabilistic_Attrition=Logistic(2.73424
-4.42268*y_1_1);

// Final probability of attrition, clamped to [0, 1].
Attrition=Probability(non_probabilistic_Attrition);

// Logistic activation function.
Logistic(x){
    return 1/(1+exp(-x))
}

// Clamp the output to a valid probability.
Probability(x){
    if x < 0
        return 0
    else if x > 1
        return 1
    else
        return x
}

Bibliography

  • The data used for this example can be downloaded from GitHub.