Calculations


Introduction


This article explains how scores are calculated and how to modify the calculation defaults to fit your organisation's needs. 
 

Kippy scores items so that the most important information is brought to your attention for resolution. It does this by colouring scores red, amber or green, allowing you to quickly visualise where you need to focus.

 

The scoring and thresholds are configured by default to industry best practices. However, as with everything in kippy, they are fully configurable and customisable.
 

Below is more detail about how the various scores are calculated and given those colours. This article assumes a good understanding of kippy and completion of the learning courses.

 

Also, note that hovering over the labels or the place where the score is displayed will show you more information about the meaning of the score and how it was calculated.

 

Finally, before we get started, it is worth remembering that the ultimate priority is to improve corporate performance, not to measure it for measurement's sake. The overall goal is to understand just enough about the scoring and calculations so that you can focus on what really matters.

 

1) KPI Module


a) KPI scores

Each KPI has a target and an actual for a given period. The score is the actual expressed as a percentage of the target (or, for KPIs where a decrease is better, the target as a percentage of the actual), e.g.

 

KPI1 with Direction Increase is Better has a January Target of 80 and a January Actual of 60. Therefore, KPI1's score will be 60/80 = 75%.

 

KPI2 with Direction Decrease is Better has a January Target of 80 and a January Actual of 60. Therefore, KPI2's score will be 80/60 ≈ 133%.
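For illustration, here is a minimal Python sketch of that direction-aware calculation. The function name and signature are illustrative only, not kippy's internal code:

# Illustrative sketch only, not kippy's actual implementation.
def kpi_score(target, actual, direction="increase"):
    """Score as a percentage: actual/target when an increase is better,
    target/actual when a decrease is better."""
    if direction == "increase":
        return actual / target * 100
    return target / actual * 100

print(round(kpi_score(80, 60, "increase")))  # 75  -> KPI1
print(round(kpi_score(80, 60, "decrease")))  # 133 -> KPI2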

 

Each KPI score is calculated for the reporting period. So if KPI1 and KPI2 had a monthly frequency, the February scores would be calculated from each KPI's February target and actual.

 

If a KPI is quarterly, then it would not have a target and actual for January and February, but the March score would be based on the March target and actual.


b) KPI Objective scores

The score for an objective is the average of the scores of all of that objective's KPIs.

​

So using the examples above, if KPI1 and KPI2 belonged to ObjectiveA, the score for ObjectiveA would be (75+133)/2 = 104%.

c) Weighted scores

The above assumes each KPI has a weight of 1x.

 

However, if KPI1 was weighted 1x and KPI2 was weighted 2x, KPI2's score would count double towards the objective's average, e.g. (75+(133*2))/3 = 341/3 ≈ 114%.

 

The same concept applies if there are more than 2 KPIs in the objective, based on whatever the weight is for each KPI.
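As a rough sketch of that weighted average (the helper below is illustrative only, not kippy's API):

# Illustrative sketch only: weighted average of KPI scores within an objective.
def objective_score(kpis):
    """kpis: list of (score, weight) pairs for the objective's KPIs."""
    total_weight = sum(weight for _, weight in kpis)
    return sum(score * weight for score, weight in kpis) / total_weight

print(round(objective_score([(75, 1), (133, 1)])))  # 104 -> equal weights
print(round(objective_score([(75, 1), (133, 2)])))  # 114 -> KPI2 weighted 2x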


d) KPI Perspective scores

The perspective score simply takes the average score of all the objectives within each perspective.


e) Team KPI scores

The team score simply takes the average score of all the perspectives within each team.

​

f) KPI RAG Thresholds

The point at which a KPI turns amber or green can be adjusted by your system owner.

​

By default, as sketched below, if a KPI score is

  • less than 80, it will be shown as red.

  • between 80 and 100, it will be shown as amber.

  • 100 or above, it will be shown as green.
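Here is a minimal sketch of those default thresholds (illustrative only; the actual thresholds are whatever your system owner has configured):

# Illustrative sketch only, using the default thresholds of 80 and 100.
def kpi_rag(score, amber_from=80, green_from=100):
    """Map a KPI score to its RAG colour."""
    if score < amber_from:
        return "red"
    if score < green_from:
        return "amber"
    return "green"

print(kpi_rag(75))   # red
print(kpi_rag(90))   # amber
print(kpi_rag(104))  # green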

​

g) KPI Score Adjustments

If enabled by your system owner, KPI scores can be limited so that they always fall within a set range.

 

For example, an employee (Bob) has 3 KPIs and is performing slightly below target on all of them.

 

  • KPI1 = 70%

  • KPI2 = 85%

  • KPI3 = 90%

  • Average KPI score = (70+85+90)/3 ≈ 81.7%

 

However, another employee (Susan) has 3 KPIs and is massively underperforming on 2 of them but massively overperforming on 1 of them.

 

  • KPI5 = 10%

  • KPI6 = 15%

  • KPI7 = 250%

  • Average KPI score = (10+15+250)/3 ≈ 91.7%

 

This may not be fair, because KPI7 might be a much easier target to overperform on (e.g. the number of referral emails sent to friends and ex-colleagues).

 

Therefore, a ‘KPI score upper limit’ can be used to cap scores, as sketched below. For example, if the limit was set at 110, then for Susan’s 3 KPIs:

 

  • KPI5 = 10%

  • KPI6 = 15%

  • KPI7 = (originally 250% but capped at 110) = 110%

  • Average KPI score = (10+15+110)/3 = 45%
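A minimal sketch of that capping behaviour (illustrative only; 110 is just the example limit from above):

# Illustrative sketch only: apply a 'KPI score upper limit' before averaging.
def average_with_cap(scores, upper_limit=None):
    """Average KPI scores, capping each score at the upper limit if one is set."""
    if upper_limit is not None:
        scores = [min(score, upper_limit) for score in scores]
    return sum(scores) / len(scores)

print(round(average_with_cap([10, 15, 250]), 1))                # 91.7 -> Susan, uncapped
print(round(average_with_cap([10, 15, 250], upper_limit=110)))  # 45   -> Susan, capped at 110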


h) KPI Formulas

It is worth remembering that KPI scores can also be adjusted by KPI formulas, which alter the scoring of each individual KPI.

 

Scores can also be manipulated with the following functions:

 

Restrict score to a maximum of 100
=cloud.kippy.score.top()

Restrict score to a maximum value e.g. 110
=cloud.kippy.score.top(110)

Restrict score to a minimum of 0
=cloud.kippy.score.bottom()

Restrict score to a minimum value e.g. 10
=cloud.kippy.score.bottom(10)

Restrict score to be within the range specified
=cloud.kippy.score.range(20, 80)

Absolute value e.g. -5 will be 5
=cloud.kippy.score.positive()

Absolute value multiplied by -1
=cloud.kippy.score.negative()

Set score as 0, 80 or 100 if below, between or above the two parameters
=cloud.kippy.score.mid(20, 90)

 

More details at https://www.kippy.cloud/formula

 

i) KPI Adapters

It is worth remembering that KPI actuals can be overridden by adapters and APIs.

 

For example, if you manually enter an actual for a KPI e.g. 5 sales with a target of 10, you may expect a score of 50%.

 

However, if an Adapter or API is configured to push in a different value (e.g. 3 sales pushed directly from your Enterprise Sales system), then when the actual is updated to 3, the score will automatically update to 30%, even though you were expecting 50%.

 

2) Projects Module

​

Each objective can have many KPIs and many projects. Therefore, an objective has TWO scores: the KPI score and the project score.

 

a) Milestone scores

Each project is made up of one or more milestones. Each milestone is scored on the percentage variance between its target and actual.

​​

So if Milestone1 has a target of 100 and actual of 70, the variance is -30.

​​

b) Project scores

The project score is the weighted average of the milestone scores.

 

So if Milestone1 has a variance of -30 and Milestone2 has a variance of -10, the project score has a variance of -20 (the average of the two).

 

c) Weighted Project scores

If Milestone1 has a 1x weighting and Milestone2 has a 2x weighting, the project score is ((-30)+(-10*2))/3 ≈ -17%.
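As a sketch of that calculation, assuming the variance is the actual's shortfall (or excess) as a percentage of the target, which matches the target 100 / actual 70 example above (function names and the second milestone's target/actual are illustrative):

# Illustrative sketch only: milestone variance and the weighted project score.
def milestone_variance(target, actual):
    """Percentage variance of the actual against the target, e.g. 100 vs 70 gives -30."""
    return (actual - target) / target * 100

def project_score(milestones):
    """milestones: list of (variance, weight) pairs."""
    total_weight = sum(weight for _, weight in milestones)
    return sum(variance * weight for variance, weight in milestones) / total_weight

m1 = milestone_variance(100, 70)  # -30.0
m2 = milestone_variance(100, 90)  # -10.0 (hypothetical target/actual giving the -10 above)
print(round(project_score([(m1, 1), (m2, 1)])))  # -20 -> unweighted
print(round(project_score([(m1, 1), (m2, 2)])))  # -17 -> Milestone2 weighted 2x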

​

d) Project Objective, Perspective and Team scores

As above, each score is the average of the items it contains.

The project objective score is the average of its projects’ scores.

The project perspective score is the average of its project objectives’ scores.

The project team score is the average of its project perspectives’ scores.

​

e) Project RAG Thresholds

The point at which a project turns amber or green can be adjusted by your system owner.

 

By default, as sketched below, if a project score is

  • less than -20, it will be shown as red.

  • between -20 and 0, it will be shown as amber.

  • above 0, it will be shown as green.
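And a matching sketch for those default project thresholds (illustrative only; again, your system owner may have changed them):

# Illustrative sketch only, using the default project thresholds of -20 and 0.
def project_rag(score, amber_from=-20, green_above=0):
    """Map a project score (a variance) to its RAG colour."""
    if score < amber_from:
        return "red"
    if score <= green_above:
        return "amber"
    return "green"

print(project_rag(-30))  # red
print(project_rag(-17))  # amber
print(project_rag(5))    # green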

 

3) Appraisal Module

 

Each person is assessed against their ‘measurable and subjective performance’.

 

Appraisals can be done at any time and multiple times within any selected reporting period.

 

a) Owned KPI score

The measurable performance is taken from the scores of the KPIs that the employee owns.

​

b) Competency scores

Each employee is assigned some competencies. A manager can then appraise that employee based on those competencies.

 

Each competency has 5 options for the manager to select:

 

 “1. Unsatisfactory”

 “2. Needs Improvement”

 “3. Meets Expectations”

 “4. Exceeds Expectations”

 “5. Exceptional”
 

The scores for each are as follows:

 

“1. Unsatisfactory” = 0

 “2. Needs Improvement” = 25

 “3. Meets Expectations” = 50

 “4. Exceeds Expectations” = 75

 “5. Exceptional” = 100
 

The overall competency score is the average of all the employee’s competency scores. The weighted average works the same way as with KPIs and Milestones.

 

Each competency can be configured to have a weight. For example, Job knowledge might be 1x and Trustworthiness might be 3x.

 

The weight for each competency can also differ depending on whether or not the employee is a manager.
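As a minimal sketch of how a weighted competency score can be derived from those ratings (the score mapping is taken from the list above; the example weights and ratings are illustrative):

# Illustrative sketch only: competency ratings mapped to scores, then weight-averaged.
RATING_SCORES = {
    "1. Unsatisfactory": 0,
    "2. Needs Improvement": 25,
    "3. Meets Expectations": 50,
    "4. Exceeds Expectations": 75,
    "5. Exceptional": 100,
}

def competency_score(ratings):
    """ratings: list of (selected option, weight) pairs for the employee's competencies."""
    total_weight = sum(weight for _, weight in ratings)
    return sum(RATING_SCORES[option] * weight for option, weight in ratings) / total_weight

# e.g. Job knowledge (1x) rated Meets Expectations, Trustworthiness (3x) rated Exceeds Expectations
print(competency_score([("3. Meets Expectations", 1), ("4. Exceeds Expectations", 3)]))  # 68.75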

 

Rating floor

This is a number set by your system owner that is added to every competency score.

 

For example, if the rating floor is set to 60, even if an employee attains 0 on their competency score, it will be shown as 60.

 

In effect, that 60 represents the minimum competency score an employee can achieve.

 

Rating factor

This is a percentage, set by your system owner, by which the original competency score is multiplied before the rating floor is added.

 

For example, if the rating factor is 100 and:

  • the rating floor is set to 60 and

  • an employee attains 100 on their competency score

  • that will be shown as a competency score of 160.

 

In effect, that 160 represents the maximum competency score an employee can achieve.

 

However, if the rating factor is 80 and:

  • the rating floor is set to 60 and

  • an employee attains 100 on their competency score

  • that will be shown as a competency score of (100*.8)+60=140

 

In effect, that 140 represents the maximum competency score an employee can achieve.
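A sketch of the rating floor and rating factor, matching the worked examples above (the function and parameter names are illustrative):

# Illustrative sketch only: apply the rating factor, then add the rating floor.
def adjusted_competency_score(raw_score, rating_floor=0, rating_factor=100):
    """Scale the raw competency score by the factor (a percentage), then add the floor."""
    return raw_score * rating_factor / 100 + rating_floor

print(adjusted_competency_score(0, rating_floor=60))                       # 60.0  -> minimum
print(adjusted_competency_score(100, rating_floor=60, rating_factor=100))  # 160.0 -> maximum
print(adjusted_competency_score(100, rating_floor=60, rating_factor=80))   # 140.0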

 

c) Approved Weighted Average

This score shows the employee's weighted average as at the last manager-led appraisal.

 

Once a manager submits an appraisal based on the selected competencies, the competency score is combined with the owned KPI score.

 

For example, if the score of owned KPIs is 80 and the Competency score is 50, the employee's approved weighted average will be 65 (a 50:50 split).

 

In reality, the approved weighted average will factor in many other parameters, e.g. “weight the competency score more when appraising executives”, but more on that later.

 

d) Live Weighted Average

This is the average of the live KPI score and competency score, weighted according to the pre-configured calculation rules.

 

This may be different to the Approved Weighted Average.

 

The live weighted average is calculated using the current KPI score and most recent appraisal.

 

This is a useful indicator of what the approved weighted average would be if an appraisal was done again now. It is based on the current owned KPIs and competencies as per the last appraisal.

 

Note that project, milestone, team and feedback scores are not considered.

 

e) Employee rating

The weighted average from the last manager-led appraisal assigns the employee into a rating bracket.

 

The default buckets, which can be overridden by a system owner, are:

 

A+ (Greatly exceeds expectations >100)

A (Exceeds expectations >80)

B (Meets expectations >70)

C (Needs development >60)

D (Non-Performing)

 

For example, if an employee has a weighted score of 75, they would have an employee rating of B (Meets expectations >70)
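As a rough sketch of those default brackets (illustrative only, since a system owner can override them):

# Illustrative sketch only, using the default rating brackets.
def employee_rating(weighted_average):
    """Map a weighted average to the default rating bracket."""
    if weighted_average > 100:
        return "A+ (Greatly exceeds expectations)"
    if weighted_average > 80:
        return "A (Exceeds expectations)"
    if weighted_average > 70:
        return "B (Meets expectations)"
    if weighted_average > 60:
        return "C (Needs development)"
    return "D (Non-Performing)"

print(employee_rating(75))  # B (Meets expectations)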

 

f) Self-appraisal scores

If enabled, weighted averages based on self-appraisals are highlighted in yellow. These show what the weighted average would be if the competencies were scored as the employee set them in their self-appraisal, combined with the current KPI scores. This is a good way for employees to tell their manager how they think they are performing, to help guide the formal manager-led appraisal discussion.

 

g) Skewing appraisals

In the previous weighted average example, if the owned KPI score is 80 and Competency score is 50, the employee’s weighted average will be 65.

 

However, in practice, organisations usually have much more complex requirements for calculating what the employee score should be.

 

There are 3 common concepts that organisations use for that calculation: Manager Skews, Grade Skews and Individual Skews.

 

If an Individual Skew is set, it takes precedence over Grade Skews and Manager Skews.

 

If a Grade Skew is set, it takes precedence over Manager Skews.

​

Manager Skews

Depending on whether or not the employee being appraised is a manager, a different ratio can be applied to the KPI/Competency weighting. This is configured by your system owner.

 

For example:

  • if the employee is a manager

  • and the KPI/Competency ratio for managers is set at 60:40

  • if the owned KPI score is 80 and Competency score is 50

  • the employee’s weighted average will NOT be 65

  • it will be (80*.6)+(50*.4) = 68

 

Likewise, for non-managers:

  • if the employee is not a manager

  • and the KPI/Competency ratio for non-managers is set at 40:60

  • if the owned KPI score is 80 and Competency score is 50

  • the employee’s weighted average will NOT be 65

  • it will be (80*.4)+(50*.6) = 62

 

Grade Skews

These can be configured by your system owner. These override the Manager Skews.

 

Each grade skew is made up of the employee grade and the ratio of KPI and Competency to skew the weighted average by.

 

For example:

  • if the grade skew for the grade Exco is 50

  • an employee with the employee grade Exco

  • has a KPI score of 80 and a Competency score of 50

  • therefore, the employee’s weighted average will be (80*.5)+(50*.5) = 65

 

However:

  • if the grade skew for the grade Exco is 75

  • an employee with the employee grade Exco

  • has a KPI score of 80 and a Competency score of 50

  • the employee’s weighted average will NOT be 65

  • It will be (80*.75)+(50*.25) = 72.5

 

Individual Skews

To set the skew per employee, the KPI_Competency_skew can be set against each employee in the employee’s additional info. These override the Manager Skews and Grade Skews.
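A rough sketch of the skew logic, assuming the skew value is the percentage weight given to the KPI score (which matches the Exco example above) and that the precedence is Individual over Grade over Manager (the function and parameter names are illustrative):

# Illustrative sketch only: weighted average of KPI and competency scores,
# with the most specific skew taking precedence.
def weighted_average(kpi_score, competency_score,
                     manager_skew=None, grade_skew=None, individual_skew=None):
    kpi_weight = 50  # default 50:50 split between KPI and competency
    for skew in (manager_skew, grade_skew, individual_skew):
        if skew is not None:
            kpi_weight = skew  # later entries take precedence
    return kpi_score * kpi_weight / 100 + competency_score * (100 - kpi_weight) / 100

print(weighted_average(80, 50))                   # 65.0 -> no skews, 50:50
print(weighted_average(80, 50, manager_skew=60))  # 68.0 -> manager ratio 60:40
print(weighted_average(80, 50, grade_skew=75))    # 72.5 -> grade Exco skew of 75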

 

h) Year to date

If enabled, this displays and uses the ‘year to date’ (YTD) scores for the employee’s owned KPIs during appraisals. This is only relevant for KPIs with Cumulative=No.

 

The YTD score is calculated by taking the average of the period scores up until that reporting period.

 

For example, suppose an employee (Bob) has a target of 10 sales per month but is making only 8 sales per month. If he has a KPI=Total Sales with Cumulative=Yes, you can appraise Bob based on the running total of sales, i.e.

 

  • in January Bob should have made 10 sales but made 8

 

  • in February Bob should have made 20 sales but made 16

 

Therefore, if appraised in February, Bob has a KPI score of 16/20 = 80%.

 

The same thing could be expressed as a KPI=Absolute Sales with Cumulative=No, i.e.

 

  • in January Bob should have made 10 sales but made 8

 

  • in February Bob should have made 10 sales but made 8

 

Therefore, if appraised in February, Bob has a KPI score of 8/10 = 80%.

 

However, now take the scenario of another employee (Susan), who has a KPI=Absolute Sales with Cumulative=No and underperforms early in the year and then overperforms just before her appraisal, i.e.

 

  • in January Susan should have made 10 sales but made 1

 

  • in February Susan should have made 10 sales but made 8

 

Therefore, if appraised in February, Susan also has a KPI score of 8/10 = 80%.

 

But this is not fair, because Bob was a consistent performer and Susan was not.

 

The YTD score instead uses the average of the period scores of the employee’s Cumulative=No KPIs up until the appraisal.

 

For example, Bob’s YTD score would be

  • 80 for January

  • 80 for February

  • Average YTD score = (80 + 80) / 2 = 80

 

However, Susan’s YTD score would be

  • 10 for January

  • 80 for February

  • Average YTD score = (10 + 80) / 2 = 45

 

This encourages consistent behaviour to achieve targets throughout the year, rather than ‘cramming’ before an appraisal.  
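A minimal sketch of that YTD average (illustrative only):

# Illustrative sketch only: YTD score as the average of period scores up to the appraisal.
def ytd_score(period_scores):
    """Average of the period scores up to and including the appraisal period."""
    return sum(period_scores) / len(period_scores)

print(ytd_score([80, 80]))  # 80.0 -> Bob, consistent all year
print(ytd_score([10, 80]))  # 45.0 -> Susan, who 'crammed' before the appraisal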

 

i) Feedback rating

This is the average of the employee’s Acknowledgement star ratings. This value is not used explicitly in any other calculations but is shown as an indication of general 360 feedback.

 

Summary

​

There are many calculations taking place in the system. There are many features that can be enabled and disabled. There are many adjustments that can be made to the parameters and logic. Therefore, it is prudent to:

​

- take the time to understand these concepts

- understand your defaults

- review your overrides

- thoroughly test your configuration during system setup

- thoroughly test your system after each system config change

- explain the system to all your employees in the context and terminology of your own organisation

​

This builds confidence in the system and the scores presented, rather than employees questioning the calculations because they are not trained on, or privy to, all the system owner settings. It is prudent to build that confidence early, rather than under the pressure of impending appraisal deadlines.

 

It is also worth noting that the system is constantly being extended. So if your organisation has a calculation use case that is not currently available or is difficult to configure, let us know and we can look into an extension to support it.


Feel free to contact us if you have any questions. 
