Sorting techniques for eliciting people’s experiences

Repertory Grid Technique

The goal is to get useful feedback and diverse opinions from different users at different stages of the design, before the final product UX evaluation. This is done by building early prototypes and creating all possible scenarios, which helps fulfill the actual users’ needs and desires.

In this study, a structured interview was built using the repertory grid technique and divided into two main phases. In the first phase, the researcher created three different bipolar elements and presented them to the participants; in the second phase, participants were asked to rate those elements based on their own opinions.
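As a rough illustration (not taken from the study itself), a repertory grid can be stored as a matrix of ratings: each participant scores every element on every bipolar construct, and mean scores per element can then be compared. All element names, constructs, and numbers below are invented:

```python
# Hypothetical repertory grid: participants rate design alternatives
# (elements) on bipolar constructs, on a 1..5 scale between the poles.
# Names and ratings are invented for illustration only.
from statistics import mean

elements = ["Prototype A", "Prototype B", "Prototype C"]
constructs = [("simple", "complex"), ("playful", "serious"), ("novel", "familiar")]

# ratings[participant][construct_index][element_index]
ratings = {
    "P1": [[1, 4, 3], [2, 2, 5], [4, 1, 3]],
    "P2": [[2, 5, 3], [1, 3, 4], [5, 2, 2]],
}

# Average each element's score per construct across participants.
for ci, (left, right) in enumerate(constructs):
    scores = [mean(ratings[p][ci][ei] for p in ratings) for ei in range(len(elements))]
    row = ", ".join(f"{e}: {s:.1f}" for e, s in zip(elements, scores))
    print(f"{left} (1) .. {right} (5) -> {row}")
```

Averaging across participants like this reflects the quantitative reading of the grid that the Karapanos and Martens paper cautions about; the qualitative side lies in the constructs the participants themselves supply.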

The same approach was used in the journal article “Capturing Design Space From a User Perspective: The Repertory Grid Technique Revisited” for finding, exploring, and understanding the design space of early prototypes, generating different views on the artifacts, and embodying various individual needs and concerns in relation to the artifact.

Below are the findings and suggested steps of this research after applying the approach with 11 groups of people.

  • Charting the design space
  • Exploring and understanding design space
  • Abstraction: Underlying topics made visible

In the end, the best design alternative is generally the one that most people agree on. This view is rooted in the quantitative research tradition.

A comparison of five elicitation techniques for elicitation of attributes of low involvement products

The study compared five elicitation techniques on eight criteria derived from theories of consumer buying behavior.

  • The first is triadic sorting, for mapping cognitive structure.
  • The second is free sorting: respondents form groups based on aspects that are important in comparison to other products.
  • The third, direct elicitation, is not a sorting method but is used to find the product attributes that respondents consider relevant.
  • Another is ranking, for prioritizing products according to preference.
  • The last is picking from an attribute list.
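To make the first technique concrete: triadic sorting shows respondents every combination of three products and asks which two are alike and how the third differs, yielding one bipolar attribute per triad. A minimal sketch of generating the triads (product names invented):

```python
# Sketch of triadic sorting setup: enumerate all triads of products.
# For each triad the respondent says which two are similar and how the
# third differs, eliciting one bipolar attribute. Names are invented.
from itertools import combinations

products = ["toothpaste A", "toothpaste B", "toothpaste C", "toothpaste D"]

triads = list(combinations(products, 3))
print(f"{len(triads)} triads to present")
for triad in triads:
    print("Which two of", triad, "are similar, and how does the third differ?")
```

Note that the number of triads grows quickly with the product set, which is one practical reason the paper compares it against cheaper techniques such as picking from an attribute list.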

The purpose of these approaches is to examine theoretical conceptualisations of the relationship between product attributes and consumer choice and motivation.

What aspects are you interested in improving (e.g. usability, beauty, satisfaction, …)

In our project I am interested in improving ease of use, because the app is for blind people and we are trying to make it as easy to use as we can.

What design details are you unsure about (e.g. menu items, visual appearance, functions…)

I am not sure about the voice messages/commands; we need to evaluate our messages based on voice quality, rhythm, speed, pitch, etc.

Who would you like to evaluate the prototype: expert or users

In our study we are only considering users. First, we have planned to do the evaluation with actual users (temporarily blindfolded) using a paper prototype along with scripted voice messages (scripted audio).

After that we will do our final evaluation with an actual blind user; he may be available to participate before the final evaluation, but that depends on his availability.

Do you need qualitative or quantitative data, or perhaps both? Later in class we’ll collect some data and do the analysis.

We require both qualitative and quantitative data to evaluate our first prototype.

Selection of a suitable color scheme, menu items, etc.

Regarding the color scheme and visual design, we are not putting much effort into these; we are focusing on audio interfaces and on how we can provide ease of use to blind people. We have documented all the possible functionality and scenarios (in our previous group post) that will be addressed in this application.


Karapanos, Evangelos, and Jean-Bernard Martens. 2008. “The Quantitative Side of the Repertory Grid Technique: Some Concerns.” UXEM Workshop in CHI’08, April 6th, 2008.

Hassenzahl, Marc, and Rainer Wessler. 2000. “Capturing Design Space From a User Perspective: The Repertory Grid Technique Revisited.” International Journal of Human-Computer Interaction 12 (3): 441–59. doi:10.1207/S15327590IJHC1203&4_13.

Bech-Larsen, Tino, and Niels Asger Nielsen. 1999. “A Comparison of Five Elicitation Techniques for Elicitation of Attributes of Low Involvement Products.” Journal of Economic Psychology 20 (3): 315–41. doi:10.1016/S0167-4870(99)00011-2.



Methods for Evaluating Early Prototypes


Object-based techniques help us understand how people come to think, feel, and know about their lives. Object-based techniques fall into three main categories:

Photo Elicitation

Participants respond to a set of images and discuss them; from those responses we try to understand what they think, feel, and see in the images. After analysis we apply those interpretations and insights in our own project.

Collage and Mapping

Collages provide a platform for participants to externalize emotions and express attitudes and desires, and mapping reveals the relationships between people and these objects.

Card Sorting

Participants sort cards with words or phrases on them into groups, uncovering how people organize information and how they relate and categorize concepts. We can perform open or closed sorting, depending on the study’s needs.
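One common way to analyze open card sorts is a co-occurrence matrix: counting how often each pair of cards lands in the same group across participants. A small sketch, with card names and groupings invented for illustration:

```python
# Hypothetical open-card-sort analysis: count how often each pair of
# cards is placed in the same group across participants. Card names and
# groupings are invented examples.
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a set of cards.
sorts = [
    [{"volume", "pitch"}, {"contacts", "messages"}],
    [{"volume", "pitch", "speed"}, {"contacts"}, {"messages"}],
    [{"volume"}, {"pitch", "speed"}, {"contacts", "messages"}],
]

co_occurrence = Counter()
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

# Pairs sorted most-often-grouped first.
for pair, count in co_occurrence.most_common():
    print(pair, count)
```

High counts suggest concepts that users see as belonging together, which can feed directly into menu or category structure.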

Multiple Sorting

This technique is used to evaluate and understand user experience and to explore people’s conceptualizations. Participants are asked to classify items and are then interviewed to learn the reasons behind their categorisations.

Contextual laddering

Contextual laddering is a one-to-one interviewing technique with qualitative data gathering and quantitative data analysis. It helps researchers and designers understand how concrete product attributes serve end users’ personal values and decision making.

UX laddering is still under development, but the technique has several advantages:

  • Greater emphasis on concrete attributes
  • Suitable within a user-centered design approach
  • Answers “why” questions


Kuniavsky, M. (2003). Observing the User Experience. (Chapter 8)

World, D., Frohlich, D., & Wilson, M. (2008). User Experience: A Multiple Sorting Method based on Personal Construct Theory. CHI, 1–5.

Vanden Abeele, V., & Zaman, B. (2009). Laddering the User Experience! User Experience Methods, Interact 2009, July.


User Experience Evaluation Methods

The term user experience is widely used, but with different understandings, and it is sometimes very difficult to evaluate. Three approaches for evaluating user experience are described below.

1. Anticipated eXperience Evaluation (AXE)

This approach is designed to address challenges such as:

  • illustrating experiences with words
  • imagining and explaining an experience
  • the endless ways of interpreting participants’ answers when they are elicited as words

AXE is a qualitative method designed to give development teams insights into how future users might experience and value a product or service concept. AXE involves one user at a time and demands neither trained researchers nor special equipment. It is suitable for web services, PC software, hardware, and mobile software designs.

Design targets are defined by the design team before the evaluation. Results are compared against these targets, giving a shared understanding of the goals throughout the development stages and the ability to assess whether users’ perception of the concept matches the goals.

Concept Briefing

The concept is conveyed through descriptions and narratives. Illustrations and lo-fi prototypes are also used to clarify the idea; these are handed to the participant to use throughout the session to aid understanding of the concept.

This activity starts at the beginning of the evaluation session. The concept needs to be presented to participants in the same manner and order each time to guarantee comparable results.

Concept evaluation

This part of the study has three elements: instructions, a warm-up, and image pairs. The instructions are general tips and an overview that introduce the participant to the evaluation approach and guide them through the process; they include one warm-up exercise so the participant can practice the evaluation. The image pairs are then used to learn about participants’ experiences, attitudes, opinions, and beliefs towards the given product concept.

Data analysis

Subsequent to the interview, the data is transcribed and analyzed. The steps for coding the data are:

  • Transcription, done word-for-word to capture as much information as possible
  • Separating the transcribed text into manageable segments
  • Categorizing the segments by similar information
  • Sorting the categories into positive, negative, and not-applicable evaluations
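The coding steps can be sketched in miniature: segments of transcribed text, each hand-coded with a category and a valence, are tallied by valence. Segments, categories, and codes below are invented examples, not data from the AXE paper:

```python
# Rough sketch of the coding steps: transcribed text is split into
# segments, each hand-coded with a category and a valence, then grouped.
# All segments and codes are invented for illustration.
from collections import defaultdict

coded_segments = [
    ("the voice was clear", "audio quality", "positive"),
    ("it talked too fast", "speech rate", "negative"),
    ("nice idea overall", "general", "positive"),
    ("battery is irrelevant here", "hardware", "not applicable"),
]

by_valence = defaultdict(list)
for text, category, valence in coded_segments:
    by_valence[valence].append((category, text))

for valence in ("positive", "negative", "not applicable"):
    print(valence, "->", [c for c, _ in by_valence[valence]])
```

The grouping by valence is what lets a development team see at a glance which aspects of the concept attract or repel future users.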


Advantages:

  • No presuppositions
  • Users freely define their valuations and points of interest
  • Deep insights into the participants’ real-life context
  • No specific training or interviewing skills required
  • Provides a way to study how people perceive a product/concept at a very early stage
  • Helps developers with further refining and steering


Limitations:

  • The analysis framework for AXE had to be formed through multiple iterations
  • Comparability between similar concepts
  • Requires visual literacy from participants
  • Time consuming

2. Exploration Test

The exploration test is carried out by exploring products already available on the market: a competitive review that yields knowledge and an understanding of people’s perceptions of competitors’ strategies and work structure, and of the future of your product.

This can be seen as an ethnographic test for evaluating users’ perception of a design. The term competitive analysis is often used for work mostly done by business development groups or independent auditors, who assess a company’s offerings from a financial or marketing perspective to understand how much competitors’ products cost, who buys them, where customers live, and what aspects the advertising emphasizes.

The real goal of competitive user experience research, however, is to figure out how to creatively differentiate your product from the competition, not just to fix other people’s mistakes.


Advantages:

  • Helps executives with strategic decisions
  • Focus groups reveal views of the brand and the identity of the product
  • Reveals the fundamental strengths and weaknesses of the competition
  • Makes it easy to learn which features users focus on most
  • Gives knowledge about users’ main goals and basic needs
  • Competitive research gives a deeper understanding of what makes a good user experience
  • Builds admiration for your competitors’ ability to solve problems and pride in your own product


Limitations:

  • Creating a correct interview script is difficult, and the analysis is subjective
  • Difficult to account for all data and responses

The exploration test can be considered a field study using early prototypes, functional prototypes, or products on the market, with one user at a time.

3. Sentence Completion

The sentence completion technique is used to find out the symbolism that users attach to a product. Here the word “symbolic” refers to the image and associations that spring to mind in regard to a product. In this study the system or product is used as a symbol: it is given to the participant to explore its different features, after which the participant is handed a set of sentence beginnings to complete based on their experience of the system.
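A toy sketch of the procedure: researcher-written sentence stems are presented after product use, and the completions are collected for later analysis. Stems and answers below are invented, not taken from the Kujala and Nurkka study:

```python
# Illustrative sentence-completion session: participants finish
# researcher-written stems after trying the product. Stems and the
# participant's answers are invented examples.
stems = [
    "This product makes me feel...",
    "People who use this product are...",
    "Using this product reminds me of...",
]

# Hypothetical completions from one participant.
completions = dict(zip(stems, ["independent", "practical", "talking to a friend"]))

for stem, ending in completions.items():
    print(stem, ending)
```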

In this case participants’ feedback gives designers an understanding of how users see their products. Symbolic meaning is not a simple, one-dimensional concept: it needs to be evaluated from the users’ point of view, and we need to understand its components and how they can affect user experience. The following factors describe the nature of symbolic meaning and its relationship to product experience:

  • Memory retrieval and associations
  • Support for identity, self-expression or status
  • Beliefs about the kinds of people who use the product
  • Support for user values and social relatedness

Sentence completion is considered both a qualitative and a quantitative approach, with a single user at a time. It requires trained researchers, and functional prototypes or products on the market, during the development phase.


Advantages:

  • Respondents use their own words to describe their situation
  • More spontaneous and honest answers compared to traditional questionnaires
  • Uncovers conflicted attitudes, values, and associations
  • Supports strengthening positive meanings and correcting features that create negative reactions


Limitations:

  • Hard to anticipate other people’s reactions
  • Designers and users may attach different meanings to a product
  • Symbolic meaning is challenging to evaluate because of its intangible nature
  • Interviewing is time demanding, and only a limited number of users can participate

UX Evaluation Method for iBeacon Project

Based on my study and the nature of our project, I will select the exploration test method for user experience evaluation. We have already performed a competitive research/review; not in a very detailed or professional way, but as a small effort to identify and get to know our competitors in the current market.


  • Anticipated eXperience Evaluation, by Gegner, L., and Runonen, M. (2012)
  • Observing the User Experience, by Elizabeth Goodman, Mike Kuniavsky, and Andrea Moed
  • Sentence Completion for Evaluating Symbolic Meaning, by Sari Kujala and Piia Nurkka