
mobile app | POC & user testing

SETUP & TROUBLESHOOTING WITH AR

client
Sennheiser
role
UX designer
completion
3 weeks, 2025
Usability testing
Prototyping
Proof of concept
👁 TL;DR

Team

Business developers, Researchers, 🙋 Designer

The challenge

Sennheiser came to us with an interesting challenge: some of their products were being returned more often than others. Internal research suggested that setup difficulties might be part of the problem. Their proposed solution was to introduce AR-assisted device identification, setup, and troubleshooting to make the process faster, simpler, and less frustrating.

My responsibilities

📌 Create interactive prototypes based on the concept provided by the client
📌 Assist the research team with interviews & interview analysis
📌 Prototype change and/or improvement suggestions based on the research results

The outcome

Although the camera‑based AR recognition feature didn’t significantly outperform more traditional input methods, it still proved technically viable and provided valuable learning for future improvements.

The findings led to a recommendation for a hybrid approach that combines camera‑recognition, QR scanning, and manual selection for troubleshooting, offering flexibility and broader accessibility. Overall, the project helped uncover key insights on how users engage with the device setup experience and where friction exists, laying a stronger foundation for future iterations.
01
Understanding the problem & RESEARCH DESIGN

What do we want to address?

How can AR help users properly install a device and troubleshoot issues?

The client conducted internal user research and shared the findings with us. They aimed to reduce high return rates for certain products, assuming installation and problem-identification difficulties were key factors.

Our goal was to test whether AR-based identification and troubleshooting could simplify and speed up the process while improving the user experience.

❓ HYPOTHESES
➔ Device and issue recognition via AR can help users quickly fix issues without manually describing the problem and browsing for solutions.
➔ Users can speed up troubleshooting with AR when things need to happen quickly, e.g. at events and on stage.
➔ Users would prefer to identify an issue via camera recognition instead of manually describing it or looking it up in the manual.
⏱ DURATION
45 min - 1 hr moderated online interviews with 5-7 test users matching the target audience persona
📋 SCOPE
1. Open-ended questions to learn about habits and preferences when working with new equipment and/or solving an issue

2. A moderated, interactive think-aloud test session using a high-fidelity prototype and guided tasks.
👥 ROLES
Business & Research team: client communication, recruitment, carrying out the interviews, moderation, summarizing the findings

Designer (me): assistance with test design, recruitment and interviews, prototyping

User research

Planning interviews & recruiting participants matching the target audience

We aimed to recruit users who closely matched the client’s target audience. To begin, our research team sent a survey to potential participants—hobbyist and part-time musicians or sound engineers—and selected 5–7 for interviews.

The interviews had two parts:

🎯 general questions about their setup and troubleshooting habits
🎯 an interactive prototype usability test where participants completed moderator-guided tasks while thinking aloud.

02
PROTOTYPING THE CONCEPT

Troubleshooting an issue via camera recognition

📝 Test scenario for a first time user: identify the device & troubleshoot in a single flow

We aimed to validate the client’s hypothesis that AI-powered camera recognition could identify a device, detect issues, and suggest troubleshooting steps. The concept was for users to simply point their phone at the device for instant diagnostics. Before creating the user-testing prototype, our data team checked technical feasibility by annotating images of the client’s receiver device. Once viability was confirmed, I designed a high-fidelity prototype for testing.

03
RESEARCH & SUMMARIZING THE FINDINGS

User feedback on identifying the device with the camera

⏱ ISSUES

What if it is too dark to identify the device reliably?

It might be faster and easier to just scan the QR code instead of the device itself

📋 SUGGESTIONS

Scanning the QR code on the device or the packaging would be sufficient

It would be helpful to have the AR camera recognition assistant during the initial device setup to walk me through all the steps

It would be helpful to have a single app to manage my devices, troubleshoot & access manuals

👥 POSITIVE FEEDBACK

Straightforward process, user-friendly flow

It is clearly communicated what is happening at every step of the process

User feedback on troubleshooting with AR

⏱ ISSUES

The process feels cumbersome compared to just entering the information or picking from a list of problems

What if the light conditions at the event are not accommodating and the recognition does not work?

At an event things need to go quickly, and one might not have the time to walk over to the device and scan it

📋 SUGGESTIONS

Replace scanning via camera with a simple text or tag based input

Have the manual available in the app for my devices

👥 POSITIVE FEEDBACK

Simple-to-use and intuitive flow

04
Coming up with solutions

Suggestion: enable problem description via text field & suggestions

During the user test the general sentiment was that troubleshooting via AR takes too long and may not be the best way to go in low-light environments, such as backstage or at events.

Most users we interviewed said they would prefer a more classic approach: entering the issue in a description field, assisted by selectable tags to narrow it down.

Based on this, we recommended that the client focus on an easier, cheaper, and more robust solution for the troubleshooting flow.

Suggestion: streamline the device identification process

The recurring sentiment during our test interviews was that issue recognition via the camera does not provide a better or faster experience than simple issue selection from a menu, or a description via text or voice input.

To justify the effort of enabling recognition for all devices under various light conditions, the feature would need to bring a significant improvement to the experience, which we did not validate during our test sessions.
