
Using the KANO model to prioritize product features

We share our experience implementing the KANO model to prioritize product features.
Athento's product team
Lorena Ramos


Athento has several mechanisms for its users to contribute to the product roadmap. For example, we have a feature request page, and we receive requests directly through our Customer Success Managers, who actively listen to users. Once this feedback arrives, of course, you have to prioritize which features you will actually implement.

In May of this year, we noticed that several of our customers were asking for some of the same functionalities or improvements. We set out to evaluate them to decide whether we should undertake them and with what priority. The method we chose for this assessment was a simplified version of the KANO model.

The Kano Model

The Kano model is a prioritization tool introduced by Professor Noriaki Kano in the 1980s. In essence, it categorizes and prioritizes functionalities according to the degree of satisfaction each one can give users.

To categorize and prioritize the functionalities, questions about them are asked in a specific way. Once these questions have been answered, each functionality can be placed into one of the following categories according to the emotional response users give:

  • Attractive: users do not expect to have them but would like to.
  • Must-be: users expect to have this functionality and do not like being without it.
  • Performance: customers like these features and dislike not having them.
  • Indifferent: customers do not care whether they have them or not.
  • Questionable: the answers give contradictory or unclear results.
  • Reverse: users prefer not to have this functionality; having it causes them dissatisfaction.

The product roadmap should avoid Reverse, Indifferent and Questionable functionalities.

Here is our experience with this prioritization tool.

Implementing the KANO model

The first thing I would like to mention is that Athento did not follow a purist approach when implementing this framework, for several reasons and limitations, as well as for simplicity's sake. For example, some of the questions and answer options the Kano model uses cannot be translated literally into Spanish.

In any case, in this article we explain how we implemented it, and you can decide whether to follow a more purist approach.

We chose 5 functionalities to ask our customers about. We wanted a high participation rate, and we know that if surveys are too long, users may stop answering. Keep in mind that the Kano model requires two questions per feature.

The KANO questionnaire

According to the KANO model, for each functionality, one functional and one dysfunctional question must be asked:

  • How would you feel if the product had functionality X?
  • How would you feel if the product did not have functionality X?

For both questions, the answer options are the same:

  • I like it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I dislike it

Once you have the answers, you classify each functionality into its category using a matrix like the one shown below.

Kano Matrix
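Classifying each pair of answers can be sketched as a small lookup table. This is only an illustration under the standard Kano evaluation matrix, not Athento code; the answer labels are shortened forms of the options listed above.

```python
# Hypothetical sketch of the Kano evaluation matrix as a lookup table.
# Rows are answers to the functional question, columns are answers to
# the dysfunctional question.
KANO_MATRIX = {
    "like":     {"like": "Questionable", "expect": "Attractive", "neutral": "Attractive",
                 "tolerate": "Attractive", "dislike": "Performance"},
    "expect":   {"like": "Reverse", "expect": "Indifferent", "neutral": "Indifferent",
                 "tolerate": "Indifferent", "dislike": "Must-be"},
    "neutral":  {"like": "Reverse", "expect": "Indifferent", "neutral": "Indifferent",
                 "tolerate": "Indifferent", "dislike": "Must-be"},
    "tolerate": {"like": "Reverse", "expect": "Indifferent", "neutral": "Indifferent",
                 "tolerate": "Indifferent", "dislike": "Must-be"},
    "dislike":  {"like": "Reverse", "expect": "Reverse", "neutral": "Reverse",
                 "tolerate": "Reverse", "dislike": "Questionable"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's two answers to a Kano category."""
    return KANO_MATRIX[functional][dysfunctional]

# A user who likes having the feature and dislikes not having it:
print(classify("like", "dislike"))  # Performance
```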

Our KANO questionnaire

We used Google Forms to do the surveys.

The 5 functionalities we chose were:

  • The ability to customize, at form level, which buttons are shown in the document's quick action bar.
  • Generate QR codes with URLs to share documents publicly with external users. The idea is that the current QR generation functionality would include a unique URL that an external user can use to view information about the document. When the user opens that URL, they will see a public screen (no login required) showing the fields that have been marked as public for that document type.
  • A selector in the lifecycle configuration to associate a color with a status. When this option is enabled, the color is displayed in the document list and in the lifecycle status label within a document.
  • Allow sending a document for approval/review to several external users at the same time. At the time of the survey, only one email address could be added per submission.
  • A new tab next to the HTML tab displaying the related documents. The tab's title would indicate the number of attachments, and when the user opens it, they would see the attachments as thumbnails.

The first important point for us was for users to understand as well as possible what the proposed functionalities were about. Fortunately, Google Forms allowed us to include images and explanations to illustrate them for users.

Kano sample question

Another important point for us was how we phrased the questions and answers. We had to translate and adjust them to make them more natural and understandable for our users.

To facilitate the completion of the survey, we divided it into one page per functionality. On each page, we asked the functional and dysfunctional question.

Results analysis

Once we had obtained a significant sample, it was time to analyze the responses. Our first pleasant surprise was that none of the proposed functionalities fell into the Reverse category, so at least to begin with, we could venture that we were not too far off the mark in considering their implementation.

As you can see, opinions were divided for each feature: a functionality could be attractive to some users and indifferent to others.

At this point, you have to find the dominant category, that is, determine what most users want. To do this, you use two coefficients:

  • CS+ (satisfaction coefficient): (% Attractive + % Performance) / (% Attractive + % Performance + % Must-be + % Indifferent), i.e. divided by the sum of all percentages except Reverse and Questionable.
  • CS- (dissatisfaction coefficient): -(% Performance + % Must-be) / (% Attractive + % Performance + % Must-be + % Indifferent), with a minus sign so the result is negative.

What do these coefficients mean, and what values can we take as a reference? The CS+ satisfaction coefficient ranges from 0 to 1. The closer the result is to 1, the greater the effect on customer satisfaction; conversely, a CS+ close to 0 suggests that the feature has very little influence on customer satisfaction.

The dissatisfaction coefficient (CS-) ranges from 0 to -1. If it is close to -1, leaving the feature out has a strong impact on customer dissatisfaction; conversely, a value close to 0 means that the absence of the feature does not make customers dissatisfied.
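With the responses tallied per category, the two coefficients reduce to a couple of divisions. The sketch below is a hypothetical helper, and the counts are made-up numbers for illustration, not our survey data.

```python
def kano_coefficients(counts: dict) -> tuple:
    """Return (CS+, CS-) from per-category response counts for one feature."""
    a = counts.get("Attractive", 0)
    p = counts.get("Performance", 0)
    m = counts.get("Must-be", 0)
    i = counts.get("Indifferent", 0)
    total = a + p + m + i        # Reverse and Questionable are left out
    cs_plus = (a + p) / total    # between 0 and 1
    cs_minus = -(p + m) / total  # between -1 and 0
    return cs_plus, cs_minus

# Illustrative counts for a single feature:
counts = {"Attractive": 10, "Performance": 6, "Must-be": 2, "Indifferent": 2}
print(kano_coefficients(counts))  # (0.8, -0.4)
```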

Looking at the data in the table below, we see that including most of the functionalities seems to have a positive impact on customer satisfaction according to their CS+ (>0.5). The dissatisfaction coefficient also shows that not including the multi-recipient approval functionality would have a significant negative impact on customers (CS- of -0.89).

In addition to these coefficients, it is also interesting to calculate the total strength and the category strength. These figures help us with the frequency distribution. The total strength shows to what extent users consider a feature important; as a general rule, the value should be greater than 50%. We calculate total strength as follows:

  • Total strength = % Attractive + % Must-be + % Performance

We see that all the proposed functionalities pass the total strength test, so we can take them as functionalities that customers view positively. Now, in which category can we most clearly place each functionality?

Category strength shows how distinct a category is compared to other categories. The category strength must be greater than 5% to show that a characteristic unequivocally belongs to a category.

  • Category strength = percentage of the most frequent response – percentage of the second most frequent response
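Both strengths can be computed from the same per-category counts as the coefficients. Again, this is an illustrative sketch with invented numbers, not our survey data.

```python
def total_strength(counts: dict) -> float:
    """Share of Attractive + Must-be + Performance answers (should be > 0.5)."""
    amp = (counts.get("Attractive", 0) + counts.get("Must-be", 0)
           + counts.get("Performance", 0))
    return amp / sum(counts.values())

def category_strength(counts: dict) -> float:
    """Gap between the two most frequent categories (should be > 0.05)."""
    top, second = sorted(counts.values(), reverse=True)[:2]
    return (top - second) / sum(counts.values())

# Illustrative counts for an ambiguous feature:
counts = {"Attractive": 10, "Indifferent": 9, "Must-be": 3,
          "Performance": 2, "Questionable": 1}
print(total_strength(counts))     # 0.6  -> passes the 50% bar
print(category_strength(counts))  # 0.04 -> below 5%: no clear category
```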

As we can see in our table, feature number 3 is below 5%. Therefore, there is no clear category for feature 3, which will need further analysis.

What did we do with the results?

In the sprints after the study, we planned some of the functionalities classified as Attractive that were not indifferent to a large percentage of users. The following graph shows the distribution of responses for each functionality.

One of the features we prioritized was the ability to send an approval email to more than one user. This was one of the cases in which not having the capability generated the most customer dissatisfaction. Below you can see the UI of the functionality before and after the Kano study.


Now users, by clicking on the “Add receiver” button, can add unlimited recipients to receive a document for approval via email.

Below is another of the prioritized features: Generate QRs with URLs to share documents publicly with external users.

Generate QR codes with public links

Another feature implemented was the thumbnail view on the left side of the forms.


Reflections on the experience

Although apparently simple, the Kano model sometimes yields results that are difficult to interpret, for example, when a functionality is attractive to a high percentage of the sample but indifferent to an equally significant percentage. Personally, I find it difficult to draw fine distinctions with a model like this or, at least, with the handcrafted way in which we tested it.

The model presents a number of implementation difficulties in an industry like ours. For example, you must make sure that the customer understands the functionality you are proposing. You must help them visualize something that does not exist, and even if you use mockups, wireframes or prototypes, you are still using a survey in which there is no interaction with the user. Ideally, we would use interviews instead of surveys, although that would be much more expensive.

However, at a high level, the Kano model has allowed us to learn that, in general terms, implementing these functionalities would elicit a positive response from users. I believe this is one of its advantages.

In addition, I believe that another positive aspect of doing this type of exercise is that it forces you to ask your users what they want and to listen to their direct feedback. Listening to users is essential for the product to grow in the right direction.

Next steps

I think that to close this exercise, we still need to listen to the post-implementation feedback on these features. Was this what users understood when we asked them? Would they still categorize the features the same way now that they have them?

In the end, I think the positive part of this type of initiative is to give users more ways to express their opinions about the product. In the future we will continue to test new ways to get feedback.