Research Glossary

Research can sometimes feel like its own language, reserved for those in the know. We've put together a growing list of research terms to help you keep up.

A

What is A/B Testing?

A/B testing, also known as split testing, is a method of comparing two versions of a digital asset to determine which one performs better. By showing different versions (A and B) to different segments of users simultaneously, businesses can measure which variant drives more conversions, whether that’s clicks, sign-ups, purchases, or any other desired action.

Why is it Helpful?

The greatest strength of A/B testing is its ability to provide empirical evidence for decision-making. Rather than relying on guesswork, intuition, or the designer’s personal preference, researchers can rely on concrete data to optimize user experiences. By systematically testing variations, A/B testing helps identify what resonates with users, leading to informed design choices and ultimately better outcomes.

The origins of A/B Testing

The concept of A/B testing has its roots in the scientific method, where controlled experiments are used to test hypotheses. It was first applied in the marketing world in the early 20th century, particularly in direct mail campaigns. Marketers would send two different versions of a mail piece to separate groups and track which version generated more responses.

With the rise of the internet, A/B testing evolved into a digital marketing staple. Companies like Google and Amazon popularized its use, employing it extensively to optimize their websites and services. Today, A/B testing is an integral part of UX design, allowing businesses to refine user interfaces, improve conversion rates, and enhance overall user satisfaction.

Understanding the Methodology

Conducting an A/B test involves several key steps:

1. Hypothesis Formation:
Identify a problem and formulate a hypothesis about what change might improve the outcome. For example, “Changing the call-to-action button color will increase sign-ups.”

2. Creating Variants:
Develop two versions of the webpage or app screen: the control (A) and the variation (B). The variation includes the change you hypothesize will improve performance.

3. User Segmentation:
Randomly split your user base into two groups. One group sees the control version, while the other sees the variation.

4. Running the Test:
Run the test for a sufficient amount of time to gather meaningful data. Ensure that both versions are exposed to a representative sample of users.

5. Analyzing Results:
Compare the performance of the two versions using key metrics, such as conversion rate, click-through rate, or any other relevant KPI. Statistical significance tests help determine if the observed differences are likely due to the changes made.

6. Implementing Changes:
If the variation performs significantly better, implement the change. If not, use the insights to formulate new hypotheses and continue testing.
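The analysis in step 5 can be sketched in code. Below is a minimal two-proportion z-test in Python; the conversion counts are invented for illustration, and in practice you would usually lean on an experimentation platform or a statistics library, but the arithmetic is the same.

```python
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Invented example: 120/2400 sign-ups on A vs 156/2400 on B
z, p = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```

A result is conventionally called statistically significant when p < 0.05, but the threshold should be fixed before the test starts, not chosen afterwards.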


A/B Testing in Action: Airbnb

Airbnb, a global leader in the online marketplace for lodging and tourism experiences, routinely uses A/B testing to refine its platform. One notable example involved the redesign of its homepage. Airbnb’s team hypothesized that altering the layout to emphasize high-quality images and simplifying the booking process would increase user engagement and bookings.

To test this, Airbnb launched an A/B test where one group of users saw the existing homepage, while another group interacted with the redesigned version. The redesigned page featured larger images, a more prominent search bar, and a streamlined navigation menu. The data revealed that the new design led to a significant increase in click-through rates and, more importantly, higher booking conversions. This result confirmed their hypothesis, and the new design was rolled out to all users, ultimately driving increased revenue.

Conclusion

A/B testing is like the secret weapon of digital marketing and UX design. Its origins in the scientific method give it a solid foundation, but its application in the digital age has unlocked new possibilities for optimization and growth. By comparing different versions of webpages or apps and measuring their performance, businesses can make informed decisions that enhance user experience and drive conversions.

The case of Airbnb illustrates the power of A/B testing to transform hypotheses into actionable insights. While it requires careful planning and analysis, the benefits of data-driven decision-making are undeniable. In an ever-changing digital landscape, A/B testing remains a vital tool for continuous improvement, helping businesses stay agile and responsive to user needs.

As we look to the future, the core principle of A/B testing remains timeless: making better decisions through experimentation and data. By embracing this approach, companies can ensure that their digital experiences are not only functional but also deeply engaging and effective.

FAQ

How many participants do you need for an A/B test?

For usability-focused A/B testing, where qualitative insights are valuable, smaller samples (around 5-10 users) can help identify significant usability issues. For quantitative A/B testing aimed at measuring performance differences, a larger sample size, often in the thousands, is necessary to achieve statistically significant results. Utilizing sample size calculators and considering your site’s traffic volume and test duration will help ensure your A/B test is both effective and reliable.
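The "thousands" figure above follows from the standard sample-size formula for comparing two proportions. As a sketch (the baseline rate and lift are illustrative, and the z-constants assume a two-sided test at 95% confidence with 80% power):

```python
from math import ceil

def sample_size_per_variant(p_baseline, mde):
    """Approximate participants needed per variant to detect an absolute
    lift of `mde` over a baseline conversion rate, assuming a two-sided
    test at 95% confidence with 80% power."""
    z_alpha, z_beta = 1.96, 0.84        # standard-normal quantiles
    p_variant = p_baseline + mde
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detecting a lift from a 5% to a 6% conversion rate already requires
# thousands of users per variant:
print(sample_size_per_variant(0.05, 0.01))
```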

What is accessibility?

Accessibility refers to the design of products, services, environments, and technologies in a way that ensures they can be used by everyone, regardless of physical, cognitive, or sensory disabilities. The goal of accessibility is to eliminate barriers, making experiences inclusive for all users, including those with vision, hearing, mobility, or learning impairments.

Accessibility best practices

Accessibility ensures that everyone, regardless of their abilities or disabilities, can fully participate in and benefit from products, services, and environments. Here are some best practices to ensure accessible design.

1. Adhere to Accessibility Guidelines

Use standards like the Web Content Accessibility Guidelines (WCAG) to meet compliance levels, ensuring your design meets core accessibility requirements.

2. Provide Alternative Text

Ensure images, icons, and media have descriptive alt text so users with visual impairments can understand visual content through screen readers.

3. Ensure Keyboard Navigation

Design for keyboard accessibility, allowing users to navigate without a mouse. Make sure all interactive elements are easily accessible via keyboard shortcuts.

4. Maintain Adequate Colour Contrast

Use a high colour contrast ratio (4.5:1 or higher) between text and background to ensure readability for people with visual impairments or colour blindness.

5. Provide Text Alternatives

Offer captions for videos and transcripts for audio content so users with hearing impairments can access multimedia information.

6. Design for Screen Readers

Ensure that your site or app works well with screen readers by organising content logically, using semantic HTML, and adding ARIA labels where appropriate.

7. Use Descriptive Links and Buttons

Avoid generic text like "Click here." Instead, use meaningful link texts and labels on buttons that describe the action or destination.

8. Create Responsive Layouts

Design for multiple devices and screen sizes, ensuring that the layout adapts without losing functionality or readability.

9. Provide Scalable Text

Allow users to adjust text size without breaking the layout, improving readability for people with low vision.

10. Test with Real Users

Conduct usability tests with people with disabilities to gather feedback and insights, ensuring the design meets their needs in real-world contexts.
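The 4.5:1 ratio in point 4 is defined by WCAG in terms of relative luminance and can be checked programmatically. Below is a sketch of the WCAG 2.x formula in Python; the colour values are just examples.

```python
def contrast_ratio(rgb1, rgb2):
    """WCAG 2.x contrast ratio between two sRGB colours given as
    (R, G, B) tuples with 0-255 channel values."""
    def channel(c):
        # Linearise an sRGB channel per the WCAG relative-luminance formula
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb):
        r, g, b = (channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))            # ~21, the maximum
print(contrast_ratio((89, 89, 89), (255, 255, 255)) >= 4.5)  # dark grey on white passes
```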



What is affordance?

Affordance refers to the perceived or actual properties of an object that suggest how it can be used. In design, affordances help users intuitively understand what actions are possible, such as a button suggesting it can be clicked or a handle suggesting it can be pulled. Effective affordances reduce the need for instruction or explanation and allow users to interact naturally with products. For example, a scroll bar affords vertical scrolling, and a door handle affords pulling. Recognising affordances helps designers create user interfaces and products that feel intuitive and aligned with users' expectations.

Best Practices for Implementation:

1. Ensure Visual Clarity

Use clear visual cues that suggest the function of interactive elements. For instance, buttons should look clickable, with appropriate visual affordances like shadows, highlighting, or distinct colours. This allows users to easily identify how to interact with elements without requiring additional instructions.

2. Consistency Across Elements

Keep affordances consistent across similar elements. For example, buttons across your interface should behave the same way and look similar. Inconsistency can confuse users and disrupt their mental model of how the interface functions.

3. Design for Intuition

Align design elements with real-world expectations. For example, sliders for volume controls resemble physical sliders, leveraging users’ understanding of how similar objects work in real life. This reduces the learning curve and makes interfaces feel more natural to navigate.

4. Use Signifiers When Necessary

While affordances should be obvious, sometimes additional signifiers (e.g., labels or icons) help reinforce what an object can do. For instance, a "play" icon on a button not only suggests that it can be clicked but also indicates the result of the action.

B

What is behavioural design?

Behavioural design is the practice of using design principles to influence users' behaviour in a desired way. It draws on concepts from psychology, such as motivation, habit formation, and decision-making processes, to guide users toward specific actions. For example, apps might use notifications to prompt users to exercise regularly, or a website might encourage users to complete a purchase by simplifying the checkout process. Behavioural design is common in fields like health and wellness, education, and marketing, where user engagement is critical. The goal is to design environments or products that make desired behaviours easy, appealing, and rewarding.

Best practices for great behavioural design

1. Understand User Motivations

Conduct thorough user research to understand what drives your target audience’s behaviours, including motivations, pain points, and barriers. By empathising with users, you can design solutions that align with their natural tendencies and preferences.

2. Use Nudges

Nudging users towards desired behaviours can be achieved by subtle prompts like reminders, notifications, or default settings. For example, automatically enrolling users in environmentally friendly options (while allowing opt-out) can promote sustainable behaviour without forcing it.

3. Simplify Tasks

The easier a task, the more likely users will engage. Remove friction points such as unnecessary steps or overly complex interfaces. Break down larger tasks into smaller, more manageable actions to encourage completion.

4. Leverage Positive Reinforcement

Reward users for performing desired actions. This could be in the form of gamification elements, such as badges or points, or tangible rewards like discounts or offers. Positive reinforcement encourages repeated behaviour and loyalty.

5. Test and Iterate

Behavioural design is an iterative process. Test behavioural interventions (like nudges or rewards) and gather feedback to assess their effectiveness. Over time, refine your approach based on what drives the highest engagement and desired outcomes.

Understanding the Benchmarking Methodology

Benchmarking is built on a systematic approach, where data collection, analysis, and interpretation form the core of the process. Here is a breakdown of the key elements involved in benchmarking methodology:

  1. Identifying Key Metrics: The first step is to define what will be measured. Common metrics in user research include task success rates, time on task, error rates, and user satisfaction scores. These metrics need to align with organisational goals or industry standards.

  2. Establishing a Baseline: A baseline is the starting point from which future comparisons will be made. It may come from internal historical data or be based on competitor analysis. For instance, if a company is launching a new app feature, the baseline could be the performance metrics of a competitor’s similar feature.

  3. Collecting Data: Data collection methods vary depending on the context, but they typically include usability testing, surveys, A/B testing, analytics, and interviews. User experience researchers gather both quantitative and qualitative data to provide a comprehensive view of the current state.

  4. Analysing Results: Once the data is collected, analysis focuses on identifying strengths and weaknesses. Are users taking longer to complete a task than those of competitors? Is user satisfaction declining over time? Analysis often leads to hypotheses about why certain issues are occurring, helping to prioritise areas for improvement.

  5. Iterating and Implementing Changes: Benchmarking is not a one-off process. It requires continuous evaluation, particularly after changes are made to a product or service. This iterative approach ensures that progress is tracked and the organisation moves closer to meeting or exceeding its benchmarks.
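Step 4's comparison against a baseline is easy to automate once metrics are defined. A minimal sketch in Python, using invented task results where each entry is 1 for success and 0 for failure, and `baseline_rate` stands in for a competitor or historical figure:

```python
# Invented usability-test results: 1 = task success, 0 = failure
results = {
    "checkout": {"successes": [1, 1, 0, 1, 1, 1, 0, 1], "baseline_rate": 0.80},
    "search":   {"successes": [1, 0, 1, 1, 0, 1, 1, 1], "baseline_rate": 0.70},
}

for task, data in results.items():
    rate = sum(data["successes"]) / len(data["successes"])
    delta = rate - data["baseline_rate"]
    status = "above" if delta >= 0 else "below"
    print(f"{task}: {rate:.0%} success, {status} baseline by {abs(delta):.0%}")
```

With these made-up numbers, "checkout" comes out five points below its baseline and "search" five points above, immediately flagging where improvement effort should go.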

Top 5 Use Cases for Benchmarking Research

  1. Measuring Product Usability Against Competitors: One of the most common applications of benchmarking is comparing the usability of a product against that of competitors. By identifying strengths and weaknesses in the user experience, organisations can adapt quickly, capitalising on areas where they outperform rivals or addressing shortcomings.

    For example, a study by the Nielsen Norman Group found that average task success rates across websites typically range between 70% and 80%. Companies falling short of this figure have a clear indication that usability improvements are required to remain competitive.

  2. Tracking Product or Service Improvements Over Time: After a product launch or feature update, benchmarking can be used to track progress. It is especially valuable for companies with a long-term focus on optimising user experiences. By establishing a benchmark before and after a redesign, researchers can measure the impact of their efforts in quantitative terms.

    Google, for instance, regularly uses internal benchmarks to evaluate the effectiveness of new feature rollouts. A case in point is YouTube’s autoplay feature, where Google benchmarked its impact on session lengths and engagement.

  3. Optimising Customer Journeys: Understanding the customer journey in-depth is crucial for businesses seeking to create frictionless experiences. Benchmarking is invaluable for comparing various touchpoints within the customer journey, from checkout processes on e-commerce platforms to onboarding in SaaS products, or issue resolution in customer service environments.

  4. Improving Accessibility: Benchmarking is also useful in assessing how well a product meets accessibility standards. By comparing performance metrics for users with disabilities against industry guidelines, businesses can identify gaps and enhance the inclusivity of their designs. At a time when digital accessibility is a legal and ethical priority, benchmarking helps organisations comply with regulations such as the UK Equality Act while fostering a more inclusive user base.

  5. Benchmarking Satisfaction and Loyalty: Many companies use Net Promoter Score (NPS) and Customer Satisfaction (CSAT) scores as benchmarks to gauge user loyalty and satisfaction. Regularly benchmarking these scores provides vital insights into how product or service changes impact user sentiment. Apple and Amazon, for example, heavily rely on customer satisfaction benchmarks to continually refine their offerings.
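The NPS benchmark mentioned in point 5 has a simple, standard formula: the percentage of promoters (scores of 9-10 on the 0-10 "how likely are you to recommend us?" question) minus the percentage of detractors (scores of 0-6). A sketch with made-up survey responses:

```python
def nps(scores):
    """Net Promoter Score from 0-10 likelihood-to-recommend answers."""
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 are passives
    return round(100 * (promoters - detractors) / len(scores))

survey = [10, 9, 9, 8, 7, 10, 6, 5, 9, 10]
print(nps(survey))  # 6 promoters, 2 detractors, 10 responses -> 40
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters), which is what makes it convenient to benchmark against published industry averages.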

Best Practices for Benchmarking

To ensure meaningful outcomes, it is essential to adhere to best practices when conducting benchmarking research. Below are some strategies that organisations can employ:

  1. Focus on the Right Metrics: The metrics selected for benchmarking should reflect the company’s goals and be actionable. It’s tempting to gather as much data as possible, but focusing on key metrics such as task completion rates, satisfaction, or retention rates ensures that the data collected is relevant and can lead to effective changes.

  2. Use Both Quantitative and Qualitative Data: Combining quantitative data (like success rates and time on task) with qualitative data (such as user feedback) offers a more comprehensive understanding of the user experience. While quantitative data provides hard numbers for comparison, qualitative insights explain why those numbers are what they are, offering richer context for decision-making.

  3. Benchmark Against Competitors and Industry Standards: Selecting appropriate benchmarks for comparison is crucial. Companies can benchmark their NPS scores against industry averages or compare usability against direct competitors. Relying on sources such as the 2023 Digital Experience Benchmark by Contentsquare, which reports on metrics like time spent per page and bounce rates across industries, can provide guidance on where improvements are most needed.

  4. Iterate Regularly: User behaviour evolves quickly, and what worked last year may not be effective today. Regular benchmarking allows for continuous improvement. It is important to update benchmarks over time to keep pace with changing trends, customer needs, or the emergence of new competitors.

  5. Make Data-Driven Decisions: The real value of benchmarking lies in the ability to make informed, data-driven decisions. Organisations should ensure that benchmarking results are shared with key stakeholders and used to guide product development, marketing strategies, and customer experience initiatives.

Conclusion

Benchmarking in user research is a powerful tool that enables businesses to assess how their products and services compare with industry standards and competitors. By providing a structured approach to understanding performance, usability, and customer satisfaction, benchmarking informs product development and improvements that lead to better user experiences.

Adopting best practices such as focusing on the right metrics, combining quantitative and qualitative data, and conducting benchmarking on a regular basis ensures that the process is effective and insightful. Whether it’s used to track performance over time, improve accessibility, or optimise the customer journey, benchmarking remains an essential technique for businesses committed to remaining competitive in a rapidly evolving digital landscape.

Understanding the Brainstorming Methodology

Brainstorming is deceptively simple but effective when conducted with the right structure. Though traditionally seen as an informal exercise, modern brainstorming sessions in user research follow a more methodological approach to ensure productive outcomes. Here’s a breakdown of the key elements:

  1. Preparation: Effective brainstorming begins before the meeting itself. It requires a clear goal, whether it’s solving a specific problem or generating ideas for a new feature. Preparing a well-defined problem statement or challenge ensures that the team stays focused and aligned.

  2. Diverse Participants: Successful brainstorming thrives on diversity. A mix of stakeholders, including designers, developers, marketers, and end users, can bring different perspectives that enrich the process. For user research, involving both internal team members and external participants who closely resemble the target audience helps ensure a wide range of viewpoints.

  3. Structured Rules: A set of ground rules encourages openness and prevents the session from descending into chaos. The most common rule is "defer judgment," meaning that no idea should be criticised during the ideation phase. Another key rule is encouraging participants to build on others' ideas, leading to a more collaborative and iterative thought process.

  4. Facilitator and Recording: An experienced facilitator is crucial for keeping the session on track. They ensure that everyone contributes and that no one dominates the discussion. Meanwhile, ideas should be recorded in real-time, often on a whiteboard, sticky notes, or digital collaboration tools like Miro or Figma, so they can be referenced, grouped, and expanded upon.

  5. Converging on Solutions: Once the ideas have been captured, the group transitions from divergent thinking (where ideas are generated) to convergent thinking (where ideas are refined and selected). This stage often involves voting or ranking ideas based on feasibility, user impact, and alignment with business goals.

Top 5 Use Cases for Brainstorming in User Research

  1. Ideating New Features for Digital Products: Brainstorming is invaluable when developing new features or improving existing ones in digital products like mobile apps or websites. For instance, a team may brainstorm how to enhance an e-commerce platform’s checkout process based on identified user pain points, generating ideas that improve the user experience and drive conversions.

    A study published by Harvard Business Review found that diverse groups can generate up to 20% more creative solutions compared to homogenous teams, highlighting the importance of involving a cross-functional team in feature ideation.

  2. Solving Complex Usability Problems: When user research identifies a critical usability issue, such as users struggling to complete a task, brainstorming helps create innovative solutions. The collaborative nature of brainstorming allows for both small tweaks and large redesigns to be considered, often yielding solutions that might not emerge from individual analysis alone.

    For example, Spotify's team used brainstorming to rethink how their mobile app’s search functionality could be improved after identifying usability friction during user testing.

  3. Developing Personas and User Journeys: Brainstorming sessions are often used during the early stages of user research to develop user personas and map out customer journeys. Gathering insights from various team members with different expertise ensures that personas and journeys are holistic and grounded in real-world perspectives.

    Research published by UX Collective showed that brainstorming with cross-functional teams helps create more realistic personas and user journeys, ultimately leading to better-aligned product decisions.

  4. Prioritising Features Based on User Needs: When faced with a long list of potential features, brainstorming can help teams prioritise which ones to focus on by considering factors like user impact, technical feasibility, and business goals. Through collaborative brainstorming, teams can also identify low-hanging fruit—features that deliver high value with minimal effort.

  5. Innovating User Testing Approaches: Brainstorming is not just about product design; it can also be used to innovate the way user research itself is conducted. Teams may brainstorm creative approaches to usability testing or customer interviews, coming up with unique methods that yield deeper insights into user behaviour.

    As user research methods continue to evolve, brainstorming allows researchers to stay ahead of the curve by experimenting with different approaches, tools, and techniques.

Best Practices for Brainstorming

To maximise the effectiveness of brainstorming in user research, following a set of best practices is essential:

  1. Establish a Clear Goal: Every brainstorming session should start with a clear, specific problem to solve or goal to achieve. Whether it’s improving a feature, solving a usability issue, or refining a customer journey, defining the challenge upfront ensures the session stays on track.

  2. Encourage Wild Ideas: Innovation often emerges from seemingly outlandish ideas. Encouraging participants to think beyond the usual boundaries can result in unexpected, valuable solutions. Avoiding judgment in the early phases is crucial to keep the creative energy flowing.

  3. Focus on Quantity First, Then Quality: Brainstorming should begin with a focus on quantity—more ideas lead to better ideas. Research from Stanford University suggests that encouraging teams to generate a large volume of ideas leads to higher-quality solutions as participants build on one another's thoughts. The refinement and filtering of ideas can happen once the ideation phase is complete.

  4. Leverage Collaboration Tools: In today’s increasingly remote work environment, collaboration tools such as Miro, MURAL, or Figma provide an interactive platform for brainstorming. These tools allow teams to visualise ideas, vote on them, and create workflows that can be shared easily with others. Virtual brainstorming sessions can be just as effective, if not more so, when the right tools are used.

  5. Ensure Psychological Safety: Psychological safety is key to a successful brainstorming session. Everyone should feel comfortable sharing their ideas without fear of criticism or judgment. According to a study by Google’s Project Aristotle, psychological safety was found to be the number one factor in successful team collaboration. When team members feel safe to share, creativity flourishes.

Conclusion

Brainstorming is an indispensable tool in user research, helping teams generate a wealth of ideas, solve complex problems, and foster collaboration across disciplines. Whether it’s ideating new features, solving usability challenges, or innovating user testing methods, brainstorming offers a flexible framework that can be adapted to different contexts and challenges.

By following best practices such as encouraging wild ideas, focusing on quantity before quality, and ensuring psychological safety, teams can unlock their full creative potential. In an industry where user needs are constantly evolving, brainstorming remains a timeless and vital approach for driving user-centric innovation and ensuring that products and services meet the demands of a diverse audience.


C

What is card sorting?

Card sorting is a user experience (UX) research technique used to understand how users categorize and organize information. By asking participants to group a set of labeled cards into categories that make sense to them, UX designers can gain insights into users' mental models and preferences. This method is particularly useful for designing information architecture, such as website navigation, menu structures, and workflows.

The origins of Card Sorting

The origins of card sorting in user research can be traced back to the field of psychology, where it was initially used as a method for studying cognitive processes. It was later adopted by the field of user experience (UX) design as a way to understand how users categorize information, which helps in creating intuitive navigation structures for websites and applications.

While no single person is credited with inventing card sorting, Celeste Paul is noted for creating the Modified-Delphi card sort, a variant of the method. Donna Spencer is another significant figure in the field, having written a book on card sorting and contributed to its methodology.

Card sorting began being used in user research around the 1990s, coinciding with the rise of the internet and the need for more user-friendly web interfaces. It has since become a common method in UX research and design, valued for its ability to uncover users' mental models and inform the organization of content in a way that aligns with user expectations.

Understanding the Methodology

Card sorting can be conducted in several formats:

1. Open Card Sorting:
In an open card sort, participants create their own categories and labels. This approach is particularly useful for generating insights into how users naturally categorize information, providing a raw look at their mental models.

2. Closed Card Sorting:
Here, participants sort cards into predefined categories. This method is beneficial when designers want to validate existing organizational structures or fit information into a set framework.

3. Hybrid Card Sorting:
A blend of open and closed methods, hybrid card sorting allows participants to use predefined categories while also creating new ones if necessary. This flexibility can yield richer insights, balancing structure with user intuition.
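Whichever format is used, a common first step in analysing the results is a card-pair similarity count: how often did participants place two cards in the same group? A minimal sketch in Python, using invented sort data for four hypothetical cards:

```python
from collections import Counter
from itertools import combinations

# Invented open-sort results: each participant's grouping of four cards
sorts = [
    [{"shirts", "trousers"}, {"phones", "laptops"}],
    [{"shirts", "trousers", "phones"}, {"laptops"}],
    [{"shirts", "trousers"}, {"phones", "laptops"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        # Count every unordered pair of cards that shares a group
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Similarity = share of participants who grouped the pair together
for pair, count in pair_counts.most_common():
    print(pair, f"{count / len(sorts):.0%}")
```

Pairs grouped together by most participants (here "shirts" and "trousers", grouped by all three) are strong candidates for sharing a category in the final information architecture.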

Top 5 Use Cases for Card Sorting

Let's look at some examples of how card sorting can be used for research in digital products.

1. Website Redesigns

Card sorting helps to restructure existing websites, making them more user-friendly by aligning the information architecture with user expectations.

Example:

E-commerce Website Redesign: An online retailer wants to improve the navigation on their website to make it easier for users to find products. They conduct a card sorting exercise to understand how customers naturally group items like clothing, accessories, and electronics.

2. Information Architecture

Card sorting is used to organize content and features on a website or application, ensuring that the structure aligns with how users think about and look for information.

Example:

University Website Structure: A university aims to reorganize its website to make academic resources more accessible to students and faculty. Card sorting helps determine how to structure departments, courses, and student services.

3. Content Organization

Card sorting helps determine the most logical way to organize large amounts of content, ensuring that users can easily find what they are looking for.

Example:

News Website: A news organization wants to restructure their website to ensure users can easily find news articles by topic. Card sorting helps identify the best way to categorize different types of news content.

4. Mobile App Navigation

Card sorting is used to design intuitive navigation systems for mobile applications, making it easier for users to access different features and functionalities.

Example:

Fitness App: A fitness app developer wants to create an intuitive navigation system. Card sorting is used to determine how users expect to find workouts, nutrition advice, and tracking features.

5. Product Feature Categorization

Card sorting helps categorize and prioritize product features, ensuring that the user interface is logical and easy to navigate.

Example:

Software Dashboard: A software company is redesigning the dashboard for their project management tool. Card sorting helps decide how to categorize and prioritize features like task management, team communication, and project analytics.

Card Sorting in Action: The Canadian Government’s Website Consolidation Project

As part of an initiative to consolidate hundreds of government websites into a single, user-friendly portal, the Canadian government employed card sorting to determine the best way to organize content across departments.

The government conducted extensive card sorting studies with citizens, government employees, and other stakeholders. Participants were given various types of government-related content, such as information on taxes, benefits, and immigration, and asked to group them into logical categories.

The card sorting results revealed significant insights into how users expected to find government services and information. This led to the creation of a more citizen-centric information architecture, which prioritized common tasks and topics rather than the internal structure of government departments. The redesign made it easier for Canadians to navigate the government portal and access services.
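The analysis behind a study like this often starts with a card-pair agreement matrix: counting how often participants placed the same two cards in the same group. The sketch below illustrates the idea; the cards, groupings, and participant data are hypothetical stand-ins, not the actual Canadian government study data:

```python
from itertools import combinations

# Hypothetical open card sort results: each participant's grouping of cards.
sorts = [
    {"Benefits": ["EI claims", "Pensions"], "Taxes": ["Filing", "GST credit"]},
    {"Money": ["EI claims", "Pensions", "GST credit"], "Taxes": ["Filing"]},
    {"Benefits": ["EI claims", "Pensions", "GST credit"], "Taxes": ["Filing"]},
]

cards = sorted({card for sort in sorts for group in sort.values() for card in group})

# Count how often each pair of cards landed in the same group.
co_occurrence = {pair: 0 for pair in combinations(cards, 2)}
for sort in sorts:
    for group in sort.values():
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Express agreement as a fraction of participants.
agreement = {pair: count / len(sorts) for pair, count in co_occurrence.items()}
for pair, score in sorted(agreement.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: {score:.0%}")
```

Pairs with high agreement (here, "EI claims" and "Pensions" grouped together by every participant) are strong candidates to sit under the same category in the final information architecture.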

Conclusion

Card sorting is like a behind-the-scenes magic trick in the world of UX design. Its origins in cognitive psychology might sound academic, but at its heart, it’s about making our digital lives simpler and more intuitive. By tapping into how users naturally think about and categorize information, designers can craft websites and apps that feel just right.

As our digital world gets more complex, the importance of clear, user-friendly design only grows. Card sorting isn’t going anywhere – it’s evolving with new tools and techniques that give even deeper insights into user behavior.

In today’s fast-paced design world, one thing remains constant: the need to keep users front and center. Card sorting is a timeless tool in this effort, helping ensure our digital experiences are not just functional, but genuinely enjoyable and easy to navigate. As we embrace new technologies and methods, card sorting will continue to be a key part of creating designs that resonate with users.

FAQ

How many participants do you need for a card sort study?

For most card sorting studies, 15 to 20 participants are often sufficient to uncover the majority of common patterns and trends. This sample size balances the need for diverse input with practical considerations like time and resources.

Can Card Sorting Replace a Tree Test?

Card sorting and tree testing are complementary methods rather than interchangeable ones. While card sorting helps in the initial stages of creating an information architecture, revealing how users think about content categories, tree testing evaluates the effectiveness of that structure. Tree testing involves users navigating through a simplified version of the site’s hierarchy to complete tasks, providing insights into the usability of the structure created from card sorting.

Understanding the Methodology

The study of cognitive load in user research revolves around evaluating how much mental effort is required to complete tasks within a digital environment. There are three primary types of cognitive load to consider:

  1. Intrinsic Cognitive Load: This relates to the complexity of the information or task itself. For example, filling out a tax return or navigating a complex software program will naturally demand a higher cognitive load than browsing a blog.

  2. Extraneous Cognitive Load: This is caused by how information is presented. Poor interface design, cluttered layouts, or unclear instructions can increase extraneous cognitive load, making even simple tasks feel more difficult than they should be.

  3. Germane Cognitive Load: This type of load relates to how much mental effort is put into understanding and learning new information. It’s considered positive and essential for tasks where users need to process new concepts, like learning a new feature in an application.

The goal in user research is to reduce extraneous load, balance intrinsic load, and support germane load where necessary. A well-designed interface strikes the right balance, ensuring users can complete tasks efficiently without feeling mentally taxed.

Top 5 Use Cases for Cognitive Load Research

  1. Simplifying Navigation in Websites and Apps
    A common use case for cognitive load research is to evaluate how easily users can navigate through websites and apps. Complex or poorly structured navigation systems often increase extraneous cognitive load, leading to frustration and task abandonment. User research methods like task analysis and usability testing help identify navigation pain points.

    For instance, studies have shown that users tend to abandon websites if they cannot find what they’re looking for within 10-20 seconds (NNGroup). By reducing unnecessary steps, simplifying menus, and offering clear calls to action, UX designers can reduce cognitive load and improve user retention.

  2. Optimizing Forms and Data Entry Processes
    Filling out forms can quickly overload users with cognitive demands, especially if the form is long, asks for redundant information, or has unclear instructions. User research can focus on streamlining forms by reducing the number of fields, grouping related information, and providing helpful visual cues, such as progress bars.

    Research by the Baymard Institute found that 69% of users abandon forms due to usability issues. Reducing cognitive load in these processes by creating simpler, more intuitive forms can improve completion rates significantly.

  3. Enhancing Onboarding Experiences
    When users first engage with a new product, their onboarding experience plays a crucial role in shaping their perception of the product. If the onboarding process is too complex, users may become overwhelmed, leading to drop-off. Cognitive load research helps teams design onboarding experiences that gradually introduce features, rather than overwhelming users with too much information at once.

    For instance, apps like Slack and Duolingo use progressive disclosure techniques, offering users a minimal interface initially and then revealing more features as they become more familiar with the system, reducing cognitive overload.

  4. Designing Accessible Interfaces
    Cognitive load research is essential when designing interfaces for users with cognitive disabilities or low digital literacy. For these users, too much information, confusing layouts, or cluttered designs can make it nearly impossible to complete tasks.

    According to the Web Content Accessibility Guidelines (WCAG), reducing cognitive load is one way to make digital content more accessible. Techniques such as using plain language, providing consistent navigation, and breaking down complex tasks into smaller steps help create a more inclusive experience for all users.

  5. Improving Mobile Usability
    Mobile users often face higher cognitive load due to the smaller screen size, limited input options, and varying environmental distractions. In mobile usability research, the focus is on creating simple, intuitive designs that reduce unnecessary steps and streamline the user journey.

    A report by Google found that 53% of mobile users abandon a site that takes longer than 3 seconds to load, while cluttered interfaces with too much information often lead to confusion and frustration. By optimizing mobile interfaces to reduce cognitive load—through techniques like clear touch targets, minimal text, and concise navigation—companies can improve user engagement and satisfaction.

Best Practices for Managing Cognitive Load in UX Design

  1. Simplify Visual Design
    One of the most effective ways to reduce cognitive load is by simplifying the visual design. Removing unnecessary elements, such as excessive text, images, or animations, can help users focus on what’s most important. Using whitespace effectively and maintaining a consistent layout across pages makes it easier for users to process information.

  2. Chunk Information
    Breaking down complex tasks or information into smaller, more manageable parts—known as "chunking"—can significantly reduce cognitive load. For example, instead of presenting a lengthy form on one page, divide it into smaller sections with clear headings and progress indicators. This approach helps users feel less overwhelmed and more in control.

  3. Use Familiar Patterns
    Familiarity reduces cognitive load. Users come to digital products with certain expectations based on past experiences. By using familiar design patterns, such as standard navigation layouts or common icons, designers can leverage existing knowledge to make interactions smoother. As usability expert Jakob Nielsen famously stated, “users spend most of their time on other sites.” Following established conventions makes it easier for users to navigate new platforms.

  4. Provide Clear Feedback
    Feedback is essential for reducing cognitive load. When users interact with a system, they need clear, immediate feedback to understand what’s happening and confirm that their actions were successful. Simple messages like “Form submitted” or “Your file is uploading” reassure users and reduce the need for guesswork.

  5. Progressive Disclosure
    Rather than overwhelming users with all features at once, the principle of progressive disclosure involves showing only what’s necessary at each stage of interaction. As users grow more familiar with a product, additional features or information are revealed. This approach, used by platforms like Dropbox and LinkedIn, allows users to acclimate to the system gradually, reducing cognitive load.
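The “chunking” practice described in point 2 above can be sketched in a few lines. The form fields and step size below are hypothetical, chosen only to illustrate splitting one long form into smaller steps:

```python
# Hypothetical sign-up form fields; names and the step size are illustrative.
fields = [
    "name", "email", "password",
    "street", "city", "postal_code",
    "card_number", "expiry", "cvc",
]

def chunk(items, size):
    """Split a flat list into sections of at most `size` items each."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Present three fields per step instead of one overwhelming nine-field page.
steps = chunk(fields, 3)
for number, section in enumerate(steps, start=1):
    print(f"Step {number} of {len(steps)}: {', '.join(section)}")
```

Pairing each step with a progress indicator (“Step 2 of 3”) gives users the sense of control the chunking principle aims for.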

Conclusion

Cognitive load is a crucial concept in user research and UX design, as it directly affects how easily users can engage with and complete tasks in digital environments. By understanding the types of cognitive load and applying the right research methodologies, UX teams can design experiences that minimize unnecessary mental effort while supporting users in complex tasks.

Whether it's improving navigation, optimizing forms, or designing for accessibility, keeping cognitive load in check ensures users feel empowered, not overwhelmed. By following best practices such as simplifying visual design, using familiar patterns, and employing progressive disclosure, designers can create intuitive, user-friendly products that lead to higher satisfaction and greater engagement.

Managing cognitive load is not just about making things easier; it's about designing with empathy and a deep understanding of user needs. As the digital world continues to evolve, so too must our strategies for reducing the mental strain on users, ensuring every experience is as smooth and efficient as possible.

D

E

F

G

H

I

J

K

L

M

What is Micro-Copy?

Micro-copy refers to the short, contextually relevant pieces of text found throughout a digital product. These include button labels, error messages, tooltips, form field instructions, confirmation messages, and more. While often overlooked, micro-copy serves as the invisible glue that binds the user interface (UI) together, ensuring a smooth and intuitive user journey.

Consider the difference between a button labeled “Submit” and one labeled “Get Started.” While both may functionally perform the same task, the latter communicates a sense of action and forward momentum, aligning better with the user’s goals. Similarly, a well-crafted error message, such as “Oops, something went wrong. Please try again,” is far more user-friendly than a generic “Error 404.” Good micro-copy not only provides clear instructions but also conveys the brand’s voice and tone, helping to establish trust and rapport with users.

What is the Best Process for Creating Micro-Copy?

Creating effective micro-copy is as much an art as it is a science. It requires a deep understanding of the user, the product, and the context in which the text will appear. Here’s a step-by-step guide to crafting micro-copy that resonates with users:

1. Understand the User Journey

Before you start writing, it’s essential to map out the user journey. This involves understanding the various touchpoints where users will interact with the product, from onboarding to completing a task. Identify the moments where micro-copy can provide guidance, reduce friction, or add value. For instance, during the sign-up process, micro-copy can help clarify what information is required and why, reducing the likelihood of user drop-off.

2. Define the Brand Voice and Tone

Micro-copy is an extension of your brand’s personality. It should align with the overall voice and tone guidelines of the brand. For example, a financial app might use a more formal and reassuring tone, while a fitness app might opt for an energetic and motivational voice. Consistency in voice and tone across all touchpoints ensures that users have a cohesive experience.

3. Collaborate Across Teams

Creating effective micro-copy is a collaborative effort. It’s important to involve designers, product managers, developers, and, if possible, users themselves. Designers can provide insights into how the micro-copy fits within the visual design, while product managers can ensure that it aligns with the overall product goals. Developers can advise on any technical constraints that might affect the placement or length of the micro-copy.

4. Focus on Clarity and Brevity

The primary goal of micro-copy is to communicate clearly and concisely. Users should be able to understand the message instantly, without needing to pause or think. Avoid jargon, complex language, and unnecessary words. For example, instead of saying “Please provide your email address for further correspondence,” you could simply say “Enter your email.”

5. Prioritize the User’s Perspective

Effective micro-copy addresses the user’s needs and concerns. This means thinking about what the user is trying to achieve at each point in their journey and how the micro-copy can facilitate that. For example, if a user is hesitant about entering their credit card information, a reassuring message like “Your payment is secure and encrypted” can help ease their concerns.

6. Write Multiple Variations

Don’t settle on the first draft. Write several variations of each piece of micro-copy and evaluate them for clarity, tone, and alignment with user goals. A/B testing different versions can also provide insights into which variation performs best.

7. Review and Iterate

Once you’ve written the micro-copy, review it in the context of the overall design. Does it fit well within the UI? Is it consistent with the brand’s voice? Are there any potential misunderstandings? Gather feedback from other team members and be prepared to iterate. Micro-copy should evolve as the product and user needs change.
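For step 6 above, a two-proportion z-test is one common way to judge whether one micro-copy variant genuinely outperforms another in an A/B test. The click counts and button labels below are purely illustrative assumptions:

```python
from math import sqrt, erf

# Hypothetical A/B test of two button labels; all counts are made up
# for illustration and are not from any real study.
a_clicks, a_views = 120, 2400   # variant A: "Submit"
b_clicks, b_views = 150, 2400   # variant B: "Get Started"

p_a, p_b = a_clicks / a_views, b_clicks / b_views
p_pool = (a_clicks + b_clicks) / (a_views + b_views)

# Standard error of the difference under the pooled null hypothesis.
se = sqrt(p_pool * (1 - p_pool) * (1 / a_views + 1 / b_views))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

With these example numbers the difference is suggestive but not significant at the conventional 0.05 threshold, a reminder that variant comparisons need adequate sample sizes before a winner is declared.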

Micro-Copy in Action: Examples from Well-Known Products

1. Mailchimp: Playful Yet Informative

  • Example: When you send out a campaign, Mailchimp displays a high-five animation with the text, “Your campaign is in the queue!”

  • Why It’s Great: This playful message celebrates the user’s accomplishment, reinforcing a positive emotional response. It’s both functional (letting the user know their task is complete) and delightful.

2. Dropbox: Simplifying the Experience

  • Example: When you delete a file, Dropbox asks, “Are you sure you want to delete ‘File Name’?”

  • Why It’s Great: This micro-copy is specific, calling out the exact file name to ensure users know exactly what action they’re taking, preventing accidental deletions.

3. Asana: Motivating Productivity

  • Example: When you complete a task in Asana, you might see a celebratory message like, “Great job! Another task down.”

  • Why It’s Great: This positive reinforcement motivates users to keep being productive. It’s a small touch, but it makes the user feel good about their progress.

4. Trello: Encouraging User Exploration

  • Example: Trello’s empty state micro-copy for a new board is, “Let’s get started!” with a prompt to “Create your first list.”

  • Why It’s Great: This micro-copy is encouraging and action-oriented, guiding users gently towards their next step without feeling pushy.

5. Buffer: Humanizing the Experience

  • Example: When Buffer queues up a post, it says, “You’re all set! Your post is queued and will be sent on schedule.”

  • Why It’s Great: This micro-copy confirms the user’s action in a friendly and reassuring way, reducing any anxiety about whether their content will be published correctly.

6. Grammarly: Encouraging User Improvement

  • Example: When Grammarly corrects a mistake, it shows a message like, “Great work! You’ve caught a tricky one.”

  • Why It’s Great: This micro-copy is positive and motivational, encouraging users to feel good about improving their writing.

Conclusion

In the digital landscape, where attention spans are short and user expectations are high, micro-copy plays a critical role in shaping the user experience. These small, carefully crafted pieces of text can guide, reassure, and delight users, making the difference between a seamless interaction and a frustrating one.

By following a thoughtful process for creating and testing micro-copy, product teams can ensure that their words not only convey the right message but also reflect the brand’s personality and resonate with users. As digital products continue to evolve, so too will the role of micro-copy in bridging the gap between users and the technology they interact with.

N

O

P

Q

When Did It Start?

The application of qualitative research in user research gained prominence with the rise of Human-Computer Interaction (HCI) studies in the late 20th century. Influential works by researchers like Donald Norman and Jakob Nielsen in the 1980s and 1990s emphasized the importance of understanding user needs and behaviors to improve usability and design. This era marked the beginning of integrating qualitative methods such as user interviews, usability testing, and ethnographic studies into the design and development processes of digital products.


Understanding the Methodology

Qualitative user research methodologies encompass various approaches tailored to explore the user experience in depth. Here are some key methodologies:

1. User Interviews: One-on-one interviews with users are conducted to gather detailed insights into their experiences, needs, and frustrations. This method allows researchers to probe deeper into specific aspects of the user journey and uncover underlying motivations and behaviors.

2. Usability Testing: This approach involves observing users as they interact with a product or prototype to identify usability issues and areas for improvement. Usability testing helps in understanding how users navigate interfaces and where they encounter difficulties.

3. Ethnographic Studies: Researchers immerse themselves in the users’ environment to observe and interact with them in their natural context. This method provides a comprehensive understanding of how users engage with products in real-life scenarios.

4. Diary Studies: Users are asked to document their experiences with a product over a period of time. This longitudinal approach captures the evolving relationship between users and the product, highlighting patterns and changes in behavior.

5. Focus Groups: Group discussions with users provide a platform for participants to share their experiences and opinions. Focus groups facilitate the exploration of diverse perspectives and the collective dynamics influencing user behavior.


Top 5 Use Cases for Qualitative User Research

1. Product Development: Qualitative research informs the design and development of new products by uncovering user needs and preferences. By understanding what users want and how they interact with current solutions, companies can create products that better align with user expectations.

2. Usability Improvements: Identifying and addressing usability issues is a key use case for qualitative research. By observing users in real-time as they interact with a product, researchers can pinpoint specific pain points and design flaws that hinder user experience.

3. Customer Journey Mapping: Qualitative methods are essential for mapping out the customer journey, providing insights into how users move through different stages of interaction with a product or service. This helps in identifying critical touchpoints and optimizing the overall user experience.

4. Market Segmentation: Understanding different user segments and their unique needs can drive more targeted and effective product strategies. Qualitative research helps in defining these segments based on behavior, preferences, and feedback.

5. Innovation and Ideation: During the ideation phase, qualitative research can generate new ideas and concepts based on user insights. Engaging with users early in the development process can inspire innovative solutions that resonate with the target audience.


Pros and Cons of Qualitative User Research

Pros

1. Depth and Detail: Qualitative research provides rich, detailed data that captures the complexity of user experiences, offering insights that quantitative methods may overlook.

2. Contextual Understanding: By studying user interactions within their natural contexts, qualitative research reveals the situational factors that influence behavior and decision-making.

3. Flexibility: Qualitative methods are adaptable, allowing researchers to explore new questions and directions as they arise during the study.

4. User-Centered Insights: This approach prioritizes the perspectives of users, ensuring that their voices and experiences guide design and development processes.

5. Innovation Catalyst: Qualitative research can uncover unmet needs and inspire innovative solutions, driving creative problem-solving and design thinking.


Cons

1. Subjectivity: The interpretive nature of qualitative research can introduce researcher bias, as findings are shaped by the researcher’s perspectives and interactions with users.

2. Limited Generalizability: Due to typically small, non-random samples, qualitative findings may not be easily generalizable to larger user populations.

3. Time-Consuming: Collecting and analyzing qualitative data is often time-consuming and labor-intensive, requiring significant resources and expertise.

4. Complex Data Analysis: Analyzing qualitative data involves complex, iterative processes that can be challenging to systematize and standardize.

5. Replication Challenges: The unique contexts and interactions in qualitative studies can make replication difficult, limiting the ability to verify findings through repeated studies.


Conclusion

Qualitative research is a cornerstone of user research, providing deep insights into user experiences, behaviors, and needs. Its methodologies, while diverse and sometimes challenging, offer a depth of understanding that quantitative approaches alone cannot achieve. By embracing the strengths of qualitative research and addressing its limitations, researchers can continue to illuminate the complexities of user experience, driving user-centered design and innovation.

R

What is ResearchOps?

ResearchOps, or Research Operations, is the orchestration and optimization of the processes, tools, and people involved in conducting research. Think of it as the backbone of research activities, ensuring that everything runs smoothly, from the recruitment of participants to the final analysis and reporting of data. The primary goal of ResearchOps is to create a seamless environment where researchers can focus on extracting insights without being bogged down by administrative or logistical hurdles.

In qualitative research, where methodologies such as interviews, ethnographies, and focus groups often require meticulous planning and execution, ResearchOps plays a crucial role. It ensures consistency, efficiency, and compliance across all research activities, which is particularly important as organizations scale their research efforts and navigate increasingly complex regulatory landscapes.


How Did ResearchOps Start?

The formalization of ResearchOps is a relatively recent development, born out of necessity as organizations, particularly in the tech sector, began to realize the inefficiencies inherent in their research processes. Before ResearchOps, research activities were often siloed, with little coordination between teams. This lack of structure led to duplicated efforts, inconsistencies in data collection and analysis, and ultimately, missed opportunities for deriving meaningful insights.

The rise of user experience (UX) research in the 2010s was a significant catalyst for the development of ResearchOps. As UX research became integral to product design and development, researchers found themselves spending an increasing amount of time on operational tasks—recruiting participants, managing consent forms, and organizing data—rather than on actual research. This inefficiency highlighted the need for a dedicated function to manage these operations, leading to the birth of ResearchOps.

A pivotal moment in the evolution of ResearchOps was the 2018 kickoff meeting of the ResearchOps Community in London. This gathering of researchers from various industries marked the beginning of a concerted effort to define and share best practices in research operations. Since then, the community has grown, with practitioners around the world contributing to the development of frameworks and tools designed to streamline research activities.


When Do You Need to Create a ResearchOps Function?

Establishing a ResearchOps function is not a decision to be taken lightly. It requires careful consideration of the organization’s needs, scale, and research goals. However, several indicators can help determine when it’s time to establish one.

1. Increased Volume of Research Activities: If your organization is conducting multiple research projects simultaneously, especially across different teams or geographies, a ResearchOps function can help centralize and streamline these efforts, ensuring consistency and reducing redundancy.

2. Researcher Overload: When researchers spend more time on administrative tasks—such as scheduling interviews, managing consent forms, or organizing and storing data—than on actual research, it’s a clear sign that a ResearchOps function could be beneficial.

3. Complex Research Methodologies: Qualitative research often involves complex methodologies that require careful planning and execution. A ResearchOps function can provide the necessary support to ensure that these methods are applied correctly, efficiently, and ethically.

4. Compliance and Risk Management: As data privacy regulations become more stringent, ensuring compliance with ethical standards and legal requirements in research is critical. A ResearchOps function can help navigate these complexities, reducing the risk of legal issues and ensuring that all research activities are conducted responsibly.

5. Cross-Functional Collaboration: In organizations where research findings need to be shared and integrated across different departments—such as design, product development, and marketing—a ResearchOps function can facilitate smoother communication and collaboration, ensuring that insights are effectively utilized.

Measurable Outcomes of ResearchOps

One of the key arguments for establishing a ResearchOps function is the potential for measurable, tangible outcomes that can significantly enhance the effectiveness of research activities. Here are some of the key benefits:


1. Increased Efficiency:

One of the most immediate and tangible benefits of ResearchOps is the dramatic increase in research efficiency. In many organizations, researchers are often bogged down by administrative tasks—scheduling interviews, managing consent forms, or organizing data—leaving them with less time to focus on the core of their work: generating insights. ResearchOps changes this dynamic by centralizing and automating these tasks, allowing researchers to concentrate on what they do best.

Consider a tech company launching multiple user studies simultaneously across different markets. Without a ResearchOps function, this process might be disjointed, with teams duplicating efforts and wasting time on redundant tasks. ResearchOps streamlines these processes, ensuring that resources are used effectively and that projects are completed on time, ultimately accelerating the research cycle. The result is not just faster research but better, more focused research—allowing companies to stay ahead in an increasingly competitive landscape.


2. Higher Quality Data:

Quality is the bedrock of meaningful research. Without consistent methodologies and rigorous standards, the insights derived from research can be flawed, leading to misguided decisions. ResearchOps ensures that research is conducted with the highest level of consistency, across all projects and teams. By standardizing processes and tools, ResearchOps minimizes the risk of errors and ensures that data collection and analysis are reliable and valid.

This consistency is particularly crucial in qualitative research, where the nuances of human behavior and experience are often difficult to capture. With ResearchOps in place, organizations can be confident that their research methodologies are applied uniformly, leading to high-quality data that truly reflects the needs and desires of their users. This not only bolsters the credibility of the research but also enhances the trust that stakeholders place in the insights generated.


3. Enhanced Compliance and Risk Management:

In an era where data privacy and ethical considerations are at the forefront of public concern, ensuring that research activities comply with legal and ethical standards is more important than ever. ResearchOps plays a crucial role in this area by establishing clear guidelines and protocols for conducting research. This not only helps organizations navigate the complex regulatory environment but also reduces the risk of legal issues that can arise from non-compliance.

By managing informed consent processes, data storage, and participant privacy, ResearchOps ensures that all research activities are conducted with the highest ethical standards. This commitment to ethical research not only protects the organization from legal risks but also builds trust with participants and stakeholders, enhancing the overall reputation of the organization.


4. Improved Cross-Functional Collaboration:

In many organizations, research operates in silos, with findings often isolated within specific teams. This can lead to missed opportunities, as valuable insights are not shared or leveraged across the organization. ResearchOps acts as a bridge between these silos, facilitating better communication and collaboration between researchers and other stakeholders, such as designers, product managers, and executives.

By centralizing research data and making it accessible to all relevant teams, ResearchOps ensures that insights are not just generated but also utilized effectively. This cross-functional collaboration is key to ensuring that research findings are integrated into decision-making processes, leading to products and services that are more closely aligned with user needs. The result is a more cohesive organization, where every department works together towards a common goal, driven by a shared understanding of the user.


5. Scalability: ResearchOps provides a scalable framework that can support the increasing demands of research as organizations grow. This scalability is crucial for maintaining the quality and consistency of research findings over time, especially in large organizations with diverse research needs.


Conclusion

ResearchOps is not just a trend—it’s a necessary function for organizations that are serious about integrating research into their decision-making processes. By establishing a ResearchOps function, organizations can streamline their research activities, improve the quality of their findings, and ensure that their research practices are ethical, efficient, and scalable.

The measurable outcomes of ResearchOps, from increased efficiency and higher quality data to stronger compliance and scalability, underscore its importance in the modern research landscape. As the field of research continues to evolve, the need for a dedicated ResearchOps function will only grow, particularly in organizations where research plays a crucial role in shaping products, services, and strategies.

S

What Are Screeners?

Screeners are a series of questions or criteria used to determine whether potential participants are a good fit for a user research study. These questions are designed to filter out individuals who do not meet specific criteria, ensuring that only the most relevant participants are selected. For example, if a company is developing a new feature for their e-commerce app, they might want to screen participants based on their online shopping habits, familiarity with certain types of apps, or demographic factors like age or income level.

The primary purpose of screeners is to identify participants who can provide valuable, actionable insights that align with the research goals. A well-designed screener not only filters out unqualified participants but also helps researchers understand the nuances of their target audience, leading to more meaningful and impactful findings.


Best Practices for Screeners

1. Start with Clear Research Objectives

The foundation of an effective screener lies in the clarity of your research objectives. Before drafting your screener, it’s essential to define what you aim to learn from the study. Are you looking to understand how new users navigate a website, or are you seeking feedback on a specific feature from experienced users? These objectives will guide every question you craft, ensuring that the screener is purpose-driven and aligned with your goals.

Insight: A study by the Nielsen Norman Group emphasizes the link between well-defined research objectives and the effectiveness of participant selection, noting that clear objectives lead to higher quality data by enabling more targeted recruitment (Nielsen Norman Group).


2. Define Key Participant Criteria

Once your objectives are set, identify the key characteristics that your participants must possess. These criteria could be demographic (e.g., age, gender, location), behavioral (e.g., frequency of product use, experience with similar products), or psychographic (e.g., attitudes, values). The specificity of these criteria ensures that the participants are not just representative of your target audience, but also relevant to the research questions at hand.

For instance, if you are conducting research on a fitness tracking app, it may be crucial to include participants who exercise regularly and use fitness technology. Conversely, if the focus is on testing a new budgeting tool, your criteria might prioritize participants who manage their finances through digital platforms.
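To make the idea concrete, the criteria above can be thought of as a simple filter over respondent data. The sketch below is purely illustrative, assuming the fitness-app example: the field names (`exercises_per_week`, `uses_fitness_tech`) and thresholds are hypothetical, not taken from any real screener tool.

```python
# Hypothetical sketch: screener criteria expressed as a filter function.
# Field names and thresholds are illustrative assumptions only.

def qualifies(respondent: dict) -> bool:
    """Return True if a respondent meets the example fitness-app criteria."""
    return (
        respondent.get("exercises_per_week", 0) >= 3    # behavioral criterion
        and respondent.get("uses_fitness_tech", False)  # experience criterion
        and 18 <= respondent.get("age", 0) <= 65        # demographic criterion
    )

respondents = [
    {"age": 29, "exercises_per_week": 4, "uses_fitness_tech": True},
    {"age": 41, "exercises_per_week": 1, "uses_fitness_tech": True},
    {"age": 17, "exercises_per_week": 5, "uses_fitness_tech": True},
]

qualified = [r for r in respondents if qualifies(r)]
print(len(qualified))  # only the first respondent meets all three criteria
```

Writing criteria down this explicitly, even informally, forces the team to agree on exactly who counts as "in scope" before recruitment begins.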

Case in Point: According to a report by UserTesting, companies that tailored their screeners to specific participant criteria saw a 78% increase in actionable insights (UserTesting).


3. Craft Clear and Unbiased Questions

The wording of your screening questions can significantly impact the quality of your participant pool. Questions should be clear, concise, and free from bias. Avoid leading questions that might prompt participants to respond in a certain way. Instead, use neutral language that allows participants to naturally reveal whether they meet your criteria.

For example, rather than asking, “Do you enjoy using our app frequently?” which presumes a positive experience, ask, “How often do you use our app?” and provide multiple-choice answers. This approach ensures that the responses reflect participants’ true behaviors and experiences.
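One way to enforce this discipline is to store each screener question as data, with the neutral wording, the full set of answer options, and the qualifying answers kept separate. The sketch below is an assumed structure for illustration, not the schema of any real survey tool; the option labels and qualification rule are hypothetical.

```python
# Illustrative sketch: a neutral multiple-choice screener question as data.
# The structure, labels, and qualifying set are assumptions for this example.

question = {
    "text": "How often do you use our app?",  # neutral phrasing, no presumed answer
    "options": [
        "Daily",
        "A few times a week",
        "A few times a month",
        "Rarely or never",
    ],
    # Hypothetical rule: this study wants frequent users.
    "qualifying": {"Daily", "A few times a week"},
}

def screen(answer: str) -> bool:
    """Accept or reject a respondent based on their chosen option."""
    if answer not in question["options"]:
        raise ValueError(f"Unknown option: {answer!r}")
    return answer in question["qualifying"]

print(screen("Daily"))            # True
print(screen("Rarely or never"))  # False
```

Because the qualifying answers never appear in the question text itself, respondents cannot guess which response "gets them in," which helps keep self-reported behavior honest.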

Expert Advice: UX expert Jared Spool highlights the importance of neutral phrasing in screeners, pointing out that leading questions can skew data and lead to unreliable research outcomes (Jared Spool).


4. Pilot Test Your Screener

Before deploying your screener on a larger scale, it’s wise to pilot test it with a small group. This allows you to identify any questions that are unclear, too broad, or unintentionally excluding the right candidates. Pilot testing also helps you catch any technical issues, such as questions that don’t display correctly or response options that are confusing. The feedback gathered during this phase can be invaluable for refining your screener.

Data Insight: Research suggests that pilot testing can increase the relevance of participants by up to 40%, as it allows for critical adjustments before the main recruitment phase (Nielsen Norman Group).


5. Keep It Short and Focused

In today’s fast-paced digital environment, attention spans are limited. Potential participants are unlikely to complete a screener that is overly long or complex. To maximize completion rates and ensure you don’t lose valuable candidates, keep your screener concise. Focus on the questions that are absolutely necessary to determine participant suitability and avoid adding superfluous ones.

Each question should serve a specific purpose. If it doesn’t contribute directly to identifying the right participants, it should be omitted. This not only increases the likelihood of completion but also makes the process more efficient for both the participants and the researchers.

Survey Data: According to SurveyMonkey, screeners that take more than five minutes to complete see a 20% drop in completion rates, underscoring the importance of brevity (SurveyMonkey).

Conclusion

Screeners are an essential tool in the user research toolkit, ensuring that the right participants are selected to provide the most relevant and actionable insights. By carefully crafting and deploying screeners, researchers can enhance the quality of their studies, saving time and resources while ensuring that the data collected is both accurate and meaningful.

However, screeners are not without their challenges. Overly restrictive criteria can limit the diversity of participants, potentially excluding valuable perspectives. Therefore, it’s crucial to strike a balance between precision and inclusivity, ensuring that screeners are both effective and representative.

T

U

V

W

X

Y

Z