HRAnalytics

Chapter 15 HR Service Desk

How to use metrics:

  • Inform your stakeholders
  • Report measurements so that stakeholders can understand activities and results
  • Promote the value of the organization
  • Determine the best way to communicate the information to the stakeholders
  • Perform better stakeholder analysis to facilitate stakeholder buy-in
  • Improve performance: people do what is measured

Four types of process metrics:

  • Monitor progress by checking process maturity
  • Monitor efficiency by checking use of resources
  • Monitor effectiveness by checking how many requests are handled correctly and completely the first time
  • Monitor compliance in relation to process and regulatory requirements

Factors to consider when reporting:

  • Who are the stakeholders?
  • How does what you are reporting impact the stakeholders?
  • Reports must be easy to read and understand, so they need to be developed with the stakeholder in mind.
  • Reports need to show how the support center is contributing to the goals of each stakeholder and the business.
  • Reports must identify the appropriate channels to communicate with each of the stakeholders.

Source: https://www.kaggle.com/lyndonsundmark/service-request-analysis/data

Ensure all needed libraries are installed
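A minimal setup sketch, assuming the packages used later in this chapter (the tidyverse, lubridate, and qcc); adjust the list to whatever your own analysis needs:

```r
# Install any packages that are missing, then load them.
# (Assumes access to a CRAN mirror.)
needed  <- c("tidyverse", "lubridate", "qcc")
missing <- setdiff(needed, rownames(installed.packages()))
if (length(missing) > 0) install.packages(missing)

library(tidyverse)  # read_csv(), dplyr verbs, ggplot2
library(lubridate)  # date-time helpers; masks base::date, as shown below
library(qcc)        # quality control charts, used later in the chapter
```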


Attaching package: 'lubridate'
The following object is masked from 'package:base':

    date

First, let’s get some data from our service desk by exporting a CSV. We can then read this CSV (or Excel spreadsheet) into R to perform our analysis.

Note that we can deal with some issues as we load the data with read_csv(), such as setting the column data types and handling the different ways people represent missing or unknown data.
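A sketch of such a load, using the column names that appear in this data set. The inline CSV text here is a tiny stand-in for the exported file; in practice you would pass the file path (e.g. a hypothetical "service_requests.csv") as the first argument:

```r
library(readr)

# Tiny inline sample standing in for the exported CSV file
csv_text <- "RequestID,DateSubmitted,DateStarted,DateCompleted,Category
1,2014-11-26 13:43:00,2014-12-13 06:02:00,2014-12-13 06:02:00,HR Report
2,2014-11-29 14:41:00,N/A,Unknown,Job Classification"

service_requests <- read_csv(
  csv_text,
  # Fix the column types up front rather than relying on guessing
  col_types = cols(
    RequestID     = col_character(),
    DateSubmitted = col_datetime(),
    DateStarted   = col_datetime(),
    DateCompleted = col_datetime(),
    Category      = col_character()
  ),
  # Treat common placeholders for unknown values as NA
  na = c("", "NA", "N/A", "Unknown")
)
```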

We then need to get this data analysis-ready. First of all, we need to make sure the dates are filled in and look reasonable.

Then we can work out how long it took to complete different stages of a request.
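The duration calculation can be sketched with difftime(), producing the WaitTime, TaskTime, and TotalTime columns (in hours) shown in the output below. A single synthetic row is used here for illustration:

```r
library(dplyr)
library(lubridate)

requests <- tibble(
  RequestID     = "1",
  DateSubmitted = ymd_hms("2014-11-26 13:43:00"),
  DateStarted   = ymd_hms("2014-12-13 06:02:00"),
  DateCompleted = ymd_hms("2014-12-13 06:02:00"),
  Category      = "HR Report"
) %>%
  mutate(
    # Hours from submission until an agent started work
    WaitTime  = as.numeric(difftime(DateStarted,   DateSubmitted, units = "hours")),
    # Hours spent actually working the request
    TaskTime  = as.numeric(difftime(DateCompleted, DateStarted,   units = "hours")),
    # Overall turnaround time
    TotalTime = as.numeric(difftime(DateCompleted, DateSubmitted, units = "hours"))
  )
```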

# A tibble: 1,152 x 8
   RequestID DateSubmitted       DateStarted         DateCompleted       Category            WaitTime TaskTime TotalTime
   <chr>     <dttm>              <dttm>              <dttm>              <chr>                  <dbl>    <dbl>     <dbl>
 1 1         2014-11-26 13:43:00 2014-12-13 06:02:00 2014-12-13 06:02:00 HR Report           400.317         0   400.317
 2 2         2014-11-29 14:41:00 2014-12-20 06:47:00 2014-12-22 03:47:00 Job Classification  496.1          45   541.1  
 3 3         2014-11-29 14:43:00 2014-12-24 08:06:00 2014-12-27 05:06:00 Recruitment         593.383        69   662.383
 4 4         2014-11-29 14:45:00 2015-02-09 03:31:00 2015-02-11 09:31:00 Training Delivery  1716.77         54  1770.77 
 5 5         2014-11-29 14:49:00 2014-12-06 06:43:00 2014-12-06 06:43:00 HR Report           159.9           0   159.9  
 6 6         2014-11-29 14:50:00 2014-12-21 06:00:00 2014-12-21 09:00:00 HR Report           519.167         3   522.167
 7 7         2014-11-29 14:50:00 2015-01-07 00:55:00 2015-01-09 23:55:00 Training Delivery   922.083        71   993.083
 8 8         2014-12-01 08:38:00 2015-01-14 03:29:00 2015-01-15 08:29:00 Job Classification 1050.85         29  1079.85 
 9 9         2014-12-03 16:26:00 2014-12-07 01:12:00 2014-12-08 17:12:00 Job Classification   80.7667       40   120.767
10 10        2014-12-07 11:41:00 2014-12-16 00:25:00 2014-12-16 00:25:00 HR Report           204.733         0   204.733
# ... with 1,142 more rows

We should now be able to get a view of the distribution of the time taken to start a request, the time taken to complete it, and the overall turnaround time.

4 columns ignored with more than 50 categories.
RequestID: 1152 categories
DateSubmitted: 1135 categories
DateStarted: 1132 categories
DateCompleted: 1146 categories

# A tibble: 5 x 10
  Category  WaitTime_mean TaskTime_mean TotalTime_mean WaitTime_min TaskTime_min TotalTime_min WaitTime_max TaskTime_max TotalTime_max
  <chr>             <dbl>         <dbl>          <dbl>        <dbl>        <dbl>         <dbl>        <dbl>        <dbl>         <dbl>
1 Grievanc~       11.8252      69.9111         81.7363            0           19            19      163.1            109        232.9 
2 HR Report       23.8362       4.76644        28.6026            0            0             0     1909.82            20       1925.82
3 Training~       26.1803      58.6104         84.7907            0           16            16     1716.77            80       1770.77
4 Recruitm~       28.89        69.7625         98.6525            0           23            23      618.300           99        691.3 
5 Job Clas~       33.3248      35.3408         68.6656            0            3            18     1391.08            50       1417.08
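A per-category summary like the table above can be produced with dplyr's group_by() and summarise(). This sketch uses a few synthetic rows in place of the full request tibble, and across() for the column-wise statistics, so the exact column ordering may differ from the table shown:

```r
library(dplyr)

# Synthetic stand-in data; in practice use the full request tibble
requests <- tibble(
  Category  = c("HR Report", "HR Report", "Recruitment"),
  WaitTime  = c(10, 30, 25),
  TaskTime  = c(0, 5, 70),
  TotalTime = c(10, 35, 95)
)

# Mean, min, and max of each duration column per request category
category_summary <- requests %>%
  group_by(Category) %>%
  summarise(across(c(WaitTime, TaskTime, TotalTime),
                   list(mean = mean, min = min, max = max)))
```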

Now that we’ve checked our data for issues and tidied it up, we can start understanding what’s happening in-depth.

For instance, are the differences in category mean times significant, or could they be down to the differing volumes of requests? We can use an ANOVA test to check whether each category does indeed have differing response times. If the resulting p-value is small, we can be more confident that response times genuinely differ by request category.
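The "Anova Table (Type II tests)" output below comes from the Anova() function in the car package applied to a fitted linear model. A sketch with synthetic data (in practice, fit on the real request tibble):

```r
library(car)  # Anova() produces Type II test tables

# Synthetic wait times for three categories; stands in for the real data
set.seed(42)
requests <- data.frame(
  Category = rep(c("HR Report", "Recruitment", "Grievance"), each = 30),
  WaitTime = c(rnorm(30, 24, 10), rnorm(30, 29, 10), rnorm(30, 12, 5))
)

# Fit a linear model and test whether Category explains WaitTime;
# a small Pr(>F) suggests the category means genuinely differ
fit <- lm(WaitTime ~ Category, data = requests)
Anova(fit)
</imports>
```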

Anova Table (Type II tests)

Response: WaitTime
            Sum Sq   Df F value Pr(>F)
Category     29404    4    0.42   0.79
Residuals 19942945 1147               
Anova Table (Type II tests)

Response: TaskTime
          Sum Sq   Df F value              Pr(>F)    
Category  673366    4    1690 <0.0000000000000002 ***
Residuals 114250 1147                                
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Anova Table (Type II tests)

Response: TotalTime
            Sum Sq   Df F value     Pr(>F)    
Category    743322    4    10.6 0.00000002 ***
Residuals 20113407 1147                       
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

As well as statistical tests, we can apply quality control principles too. The qcc package allows us to use a number of relevant models and charts to understand what is happening.

Here we use the package to take the measurements from the data and prepare a qcc object containing the information needed to make common control charts. We use the xbar.one chart type, which tracks the mean using one-at-a-time measurements of a continuous process variable.
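A sketch of the qcc() call that produces summaries like those below, using synthetic wait times in place of the real column:

```r
library(qcc)

# Synthetic wait times standing in for the real WaitTime column
set.seed(1)
wait_times <- pmax(rnorm(100, mean = 27, sd = 40), 0)

# Individuals (one-at-a-time) chart: the center line is the mean,
# with control limits at +/- 3 estimated standard deviations
q <- qcc(wait_times, type = "xbar.one", plot = FALSE)
summary(q)
```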


Call:
qcc(data = ., type = "xbar.one")

xbar.one chart for . 

Summary of group statistics:
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      0       0       0      27       0    1910 

Group sample size:  1152
Number of groups:  1152
Center of group statistics:  27.1
Standard deviation:  42.9 

Control limits:
  LCL UCL
 -102 156


Call:
qcc(data = ., type = "xbar.one")

xbar.one chart for . 

Summary of group statistics:
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      0       6      31      32      51     109 

Group sample size:  1152
Number of groups:  1152
Center of group statistics:  32
Standard deviation:  25.6 

Control limits:
   LCL UCL
 -44.6 109


Call:
qcc(data = ., type = "xbar.one")

xbar.one chart for . 

Summary of group statistics:
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
      0      11      37      59      60    1926 

Group sample size:  1152
Number of groups:  1152
Center of group statistics:  59.2
Standard deviation:  63.8 

Control limits:
  LCL UCL
 -132 251

These show overall patterns. What if we wanted one per category?
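One way to get a chart per category is to split the data by Category and map qcc() over the pieces, which yields the list of qcc objects shown below. A sketch with synthetic data:

```r
library(qcc)
library(purrr)

# Synthetic data; in practice use TotalTime from the real request tibble
set.seed(2)
requests <- data.frame(
  Category  = rep(c("HR Report", "Recruitment"), each = 40),
  TotalTime = c(rnorm(40, 29, 20), rnorm(40, 99, 30))
)

# One qcc individuals chart per request category
charts <- map(
  split(requests, requests$Category),
  ~ qcc(.x$TotalTime, type = "xbar.one", plot = FALSE)
)
```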

$`Grievance Resolution`
List of 11
 $ call      : language .f(data = .x[[i]], type = "xbar.one")
 $ type      : chr "xbar.one"
 $ data.name : chr ".x[[i]]"
 $ data      : num [1, 1:45] 67 72 78 91 56 44 70 84 109 88 ...
  ..- attr(*, "dimnames")=List of 2
 $ statistics: Named num [1:45] 67 72 78 91 56 44 70 84 109 88 ...
  ..- attr(*, "names")= chr [1:45] "1" NA NA NA ...
 $ sizes     : int 45
 $ center    : num 81.7
 $ std.dev   : num 34.6
 $ nsigmas   : num 3
 $ limits    : num [1, 1:2] -22 185
  ..- attr(*, "dimnames")=List of 2
 $ violations:List of 2
 - attr(*, "class")= chr "qcc"

$`HR Report`
List of 11
 $ call      : language .f(data = .x[[i]], type = "xbar.one")
 $ type      : chr "xbar.one"
 $ data.name : chr ".x[[i]]"
 $ data      : num [1, 1:441] 400 205 0 14 0 ...
  ..- attr(*, "dimnames")=List of 2
 $ statistics: Named num [1:441] 400 205 0 14 0 ...
  ..- attr(*, "names")= chr [1:441] "1" NA NA NA ...
 $ sizes     : int 441
 $ center    : num 28.6
 $ std.dev   : num 40.5
 $ nsigmas   : num 3
 $ limits    : num [1, 1:2] -92.8 150
  ..- attr(*, "dimnames")=List of 2
 $ violations:List of 2
 - attr(*, "class")= chr "qcc"

$`Job Classification`
List of 11
 $ call      : language .f(data = .x[[i]], type = "xbar.one")
 $ type      : chr "xbar.one"
 $ data.name : chr ".x[[i]]"
 $ data      : num [1, 1:355] 22 30 47 40 166 ...
  ..- attr(*, "dimnames")=List of 2
 $ statistics: Named num [1:355] 22 30 47 40 166 ...
  ..- attr(*, "names")= chr [1:355] "1" NA NA NA ...
 $ sizes     : int 355
 $ center    : num 68.7
 $ std.dev   : num 59.9
 $ nsigmas   : num 3
 $ limits    : num [1, 1:2] -111 248
  ..- attr(*, "dimnames")=List of 2
 $ violations:List of 2
 - attr(*, "class")= chr "qcc"

$Recruitment
List of 11
 $ call      : language .f(data = .x[[i]], type = "xbar.one")
 $ type      : chr "xbar.one"
 $ data.name : chr ".x[[i]]"
 $ data      : num [1, 1:80] 75 56 82 567 80 ...
  ..- attr(*, "dimnames")=List of 2
 $ statistics: Named num [1:80] 75 56 82 567 80 ...
  ..- attr(*, "names")= chr [1:80] "1" NA NA NA ...
 $ sizes     : int 80
 $ center    : num 98.7
 $ std.dev   : num 63.1
 $ nsigmas   : num 3
 $ limits    : num [1, 1:2] -90.8 288.1
  ..- attr(*, "dimnames")=List of 2
 $ violations:List of 2
 - attr(*, "class")= chr "qcc"

$`Training Delivery`
List of 11
 $ call      : language .f(data = .x[[i]], type = "xbar.one")
 $ type      : chr "xbar.one"
 $ data.name : chr ".x[[i]]"
 $ data      : num [1, 1:231] 59 59 76.8 65 57 ...
  ..- attr(*, "dimnames")=List of 2
 $ statistics: Named num [1:231] 59 59 76.8 65 57 ...
  ..- attr(*, "names")= chr [1:231] "1" NA NA NA ...
 $ sizes     : int 231
 $ center    : num 84.8
 $ std.dev   : num 55.7
 $ nsigmas   : num 3
 $ limits    : num [1, 1:2] -82.2 251.8
  ..- attr(*, "dimnames")=List of 2
 $ violations:List of 2
 - attr(*, "class")= chr "qcc"

5 Valuable Service Desk Metrics

Source: https://www.ibm.com/communities/analytics/watson-analytics-blog/it-help-desk/

Number of tickets processed and ticket/service agent ratio – Two simple metrics that add up the number of tickets submitted during specific periods (e.g. shift, hour, day, week) and create a ratio of tickets to available service agents during those periods. This is a key KPI that speaks to staffing levels and informs other Service Desk metrics.
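The ratio can be computed by counting tickets per period and joining on a staffing table. A sketch with a hypothetical ticket log and an hourly staffing table:

```r
library(dplyr)
library(lubridate)

# Hypothetical ticket log and a table of agents available per hour
tickets <- tibble(DateSubmitted = ymd_hms(c(
  "2015-01-05 09:10:00", "2015-01-05 09:40:00", "2015-01-05 10:05:00")))
staffing <- tibble(Hour = c(9, 10), Agents = c(2, 1))

# Tickets per hour divided by agents available in that hour
ratio <- tickets %>%
  mutate(Hour = hour(DateSubmitted)) %>%
  count(Hour, name = "Tickets") %>%
  left_join(staffing, by = "Hour") %>%
  mutate(TicketsPerAgent = Tickets / Agents)
```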

Wait times – How long after a customer submits a service request do they have to wait before Service Desk agents start working on the ticket? Your wait time metrics also speak to Service Desk staffing levels. Once you identify whether your Service Desk has excessive wait times, you can drill down to see what might be causing wait times to run long (e.g. low staff levels at certain times of the day or week; not enough service agents trained for a specific service; processing issues) and create a remedy that applies to your entire Service Desk organization or to an individual IT service.

Transfer analysis (tickets solved on first-touch versus multi-touch tickets) – Number of tickets that are solved by the first agent to handle the ticket (first-touch) versus the number of tickets that are assigned to one or more groups through the ticket’s lifespan. Great for determining which tickets need special attention, particularly those tickets where automation might reduce the amount of ticket passing between technical groups.
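First-touch versus multi-touch can be derived from an assignment log with one row per group a ticket was routed to. A sketch with hypothetical data:

```r
library(dplyr)

# Hypothetical assignment log: one row per group a ticket passed through
assignments <- tibble(
  RequestID = c("1", "2", "2", "3", "3", "3"),
  Group     = c("HR", "HR", "Payroll", "HR", "IT", "HR")
)

# A ticket is first-touch if exactly one distinct group handled it
touch <- assignments %>%
  group_by(RequestID) %>%
  summarise(Groups = n_distinct(Group)) %>%
  mutate(FirstTouch = Groups == 1)

mean(touch$FirstTouch)  # first-touch resolution rate
```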

Ticket growth over time and backlog – Trending data showing the increase (or decrease) in the number of Service Desk tickets over time. It can help spot unexpected changes in user requests that may indicate a need for more Service Desk staff or more automation. It may also show that a specific change resulted in an increased load on Service Desk resources. You also want to check the trends for your backlog of tickets in progress and the number of unresolved tickets. A growth in backlogged tickets can indicate a change in service desk demand or problems with service deployment.
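The backlog trend is the running difference between tickets opened and tickets closed. A sketch with hypothetical daily counts:

```r
library(dplyr)

# Hypothetical daily counts of opened and closed tickets
daily <- tibble(
  Day    = as.Date("2015-01-01") + 0:3,
  Opened = c(10, 12, 15, 14),
  Closed = c(10, 10, 11, 12)
)

# Cumulative opened minus cumulative closed = tickets still outstanding;
# a steadily growing backlog suggests demand is outpacing capacity
backlog <- daily %>%
  mutate(Backlog = cumsum(Opened) - cumsum(Closed))
```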

Top IT services with the most incidents – Spotlights which services are failing and generating the most Service Desk work. Helpful for spotting problem IT services that need modification.