


As part of Jeff Huang's UI/UX class at Brown University, I worked in a group of four to design an app for AudioFocus, a startup creating technologies to help people hear their friends in noisy environments.


3 Weeks

October–November 2019


I worked with three other students: Regina Mao, Aishwarya Bagaria, and Amanda Han.


Improve the user experience of AudioFocus' noise filtering technology through the design of an app interface.


Without looking at the actual product or website, we were tasked with designing an interface for AudioFocus based on the description alone.

Provided Description 


"AudioFocus is a startup focused on creating technologies to help people hear their friends in noisy environments. The technology works by building a “fingerprint” of the user’s friends’ voices based on samples of recorded speech and then filtering out voices and sounds that do not match the respective fingerprint."
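As a rough mental model of how such fingerprint-based filtering might work (purely an illustrative sketch — we never saw AudioFocus's actual implementation, and the functions and threshold below are our own hypothetical stand-ins), one could imagine enrollment recordings being reduced to an average feature vector, with incoming audio frames passed through only if they are similar enough to it:

```python
import numpy as np

def make_fingerprint(samples: np.ndarray) -> np.ndarray:
    """Average feature vectors from enrollment recordings into one 'fingerprint'.

    `samples` is a (num_recordings, num_features) array; the feature
    extraction itself (e.g. a speaker embedding) is assumed upstream.
    """
    return np.mean(samples, axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def frame_matches(frame_embedding: np.ndarray,
                  fingerprint: np.ndarray,
                  threshold: float = 0.8) -> bool:
    """Keep a frame only if it resembles the enrolled voice.

    The 0.8 threshold is an arbitrary illustrative value, not a real
    AudioFocus parameter.
    """
    return cosine_similarity(frame_embedding, fingerprint) >= threshold
```

In this sketch, frames failing the similarity check would be attenuated or muted, which is what makes consent from the enrolled speakers (discussed below) so important: their voice data is what the filter is built from.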

Initial Thoughts

We began the process by considering who direct and indirect stakeholders would be, and how they would be affected.


Direct Stakeholders

The direct stakeholders would be the people using the technology to filter out sounds in loud environments, such as outdoor crowds, concerts, or busy cafes. Those whose voices are sampled to create the fingerprints would also be directly affected.


The technology could also serve people with hearing disabilities, who could use it to amplify conversations more generally.


Indirect Stakeholders

People indirectly impacted by this interface would be those surrounding the users, whose voices would be filtered out.


Effects On Stakeholders and Ethical Considerations

The people whose voices are sampled to create the “fingerprints” will have to consent to their voices being recorded and processed, since the recordings are stored on someone else’s device. There should also be a Privacy Policy to reassure users that the app developers will not share any recordings with third parties.


Additionally, people who are in the environment but not party to the use of the app may be concerned if they see someone using it and perceive that they are being recorded. The interface should therefore contain a statement the user can read out to anyone who approaches them, reassuring them that the technology filters out any speech that is not their friend's. The app should also warn users that if the person who approaches them is still uncomfortable, they should stop using the app in that situation.


We began the design phase by creating two sets of wireframes, from which we could mix and match the best options.

We decided that, for security and privacy reasons, users would set up their own fingerprint when opening the app for the first time, and it would be attached to their profile. Other users would then receive that fingerprint upon adding the person as a friend, a request the fingerprint's owner would have to approve. If a user wishes to remove their fingerprint from a friend's phone, they must delete that friend from their list.


We created wireframes for the main screens needed to complete the two main tasks on the app: creating fingerprints, and initializing and having conversations. We designed two different sets of screens for each task, in order to provide ourselves with a wide range of options going forward.






Friend Selected,

Conversation Requested








Your Location,

Friends Near You








Following the creation of our wireframes, we began to streamline our design decisions and then created a high-fidelity prototype in Figma.

Design Decisions


After creating two versions of the general flow of the app, we integrated the “set up fingerprint” flow from Version 1 and the “conversation” flow from Version 2 into our high-fidelity Figma prototype. From Version 1 we liked the breadcrumb that lets users easily track their progress in setting up their fingerprint, and for the “conversation” page in Version 2, we felt the cleaner design was more intuitive and easier to navigate. Compared to our original sketches, we made the “set up fingerprint” flow cleaner by reducing the number of elements on each page and adding steps to ensure a clear, chronological setup sequence. For the “conversation” section, we consolidated more options onto one page to reduce the number of clicks needed to complete a task.


For the visual design of our high fidelity prototype, we decided to choose a simple color palette, so that the interface was engaging, but not distracting from the task. We also chose a fun display typeface, “Orbitron,” for the logo, but used “Open Sans Hebrew” for the body as it feels modern and is very legible. 

A video walkthrough of the final iteration of the app can be found below.

Critique Feedback and Revisions


After developing our high-fidelity prototype, we presented the interface to a couple of classmates for a critique on what could be changed to improve usability and flow. The overall comment was that we had a nice, clean design with a good overall flow. However, the “valid fingerprint” indicator, which initially had a rounded rectangle around it, looked like a button even though it wasn’t. Users could also see friends nearby on a map, which our classmates felt was unnecessary; they preferred to just have the distance shown. Finally, there was a “phone” icon next to the option to talk to a friend, which felt misleading given the goal of the app: you don’t physically call someone, you talk to a friend in person.


Based on the critique feedback, we changed the “fingerprint valid” indicator to plain green text so it no longer looks like a button. We also removed the map view and replaced it with the friend’s distance under their profile picture, reducing the number of clicks needed to achieve the task. Finally, we changed the phone icon to a talking icon to more clearly represent the app’s goal: being able to talk clearly to people in loud spaces.


After we made the relevant changes, we sought further feedback from anonymous remote users.

Testing Instructions

The scenario we provided the users with was “Imagine that you’ve lost your friend (Jamie Adams) at a festival. You’re trying to hear him through a call but the background noise level is too high. You try to talk to him using AudioFocus, an app that filters background sounds so that you can hear your friends better.”


The main task we assigned was to start a voice-filtered conversation with Jamie Adams, which involved the following sub-tasks:

  • Create an AudioFocus account

  • Set up a voice fingerprint

  • Find Jamie Adams’ profile

  • Start a voice-filtered call with Jamie Adams


We also asked them to answer the following questions:

  • What do you think of the overall flow of the prototype? 

  • What screens were unclear or hard to use?




Before conducting the tests, we came up with the following hypothesis:


“We expect the most confusion to come from the “Set Up Fingerprint” section, as users may not realize there is a built-in self-timer/autocomplete function and may press the screen to try to proceed. We feel the rest of the app is quite intuitive, and users should be able to complete the remaining tasks in a relatively short time with few clicks.”


Testing Videos

Metrics Table

Upon reviewing the videos, we calculated three metrics per task: completion rate, error count, and time on task, shown on the table below. Our final numbers are the average results of the three users.
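The per-task numbers are simple averages over the three testers. As a sketch of that calculation (the per-user figures below are hypothetical placeholders, not our actual recorded data):

```python
# Hypothetical per-user results for one task; our real numbers
# lived in the metrics table, not in this snippet.
results = [
    {"completed": True, "errors": 0, "seconds": 45},
    {"completed": True, "errors": 1, "seconds": 60},
    {"completed": True, "errors": 0, "seconds": 120},
]

completion_rate = sum(r["completed"] for r in results) / len(results)
avg_errors = sum(r["errors"] for r in results) / len(results)
avg_seconds = sum(r["seconds"] for r in results) / len(results)

print(completion_rate, avg_errors, avg_seconds)  # 1.0 0.333... 75.0
```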


Explanation of Results

Overall, very few mistakes were made, and the verbal feedback was quite positive, with all three users calling the flow and interaction “intuitive,” “straightforward,” and “simple.” We had hypothesized that the voice fingerprint section might cause confusion, and though some users implied they were unsure whether it was actually working or pre-programmed, this did not lead to mistakes. The only errors came from users believing the sign-up information had to be filled in; one user took a while to realize it didn’t, as reflected in his “time on task” of 120 seconds.


Potential Changes


One user mentioned that the “talk” button used to begin a conversation with a friend may be slightly confusing, as you are not physically talking with the friend through the device. We would therefore change this wording to “call” to make it clearer for the user.




The overall testing experience was very successful, and it was particularly useful to watch how new users physically interacted with the interface and to hear their thoughts as they worked through each task.


One surprise was that one of our users believed the prototype to be a real website, which caused confusion. This is something to bear in mind for future testing sessions; an explanation at the beginning would likely help.


This project introduced me to remote, unmoderated usability testing, which I view as a very valuable alternative to in-person testing. I found that being able to observe users and hear their real-time feedback gave a more accurate picture of the interface’s success. I also think the anonymity and unfamiliarity of the users greatly helped reduce bias. Overall, this project allowed me to reflect on my designs much more thoroughly than before.

