2018 Q1 - Q3
iOS, Android, Web
Jeannie Yang - Chief Innovation Officer, former CPO
Wang Liang - Sr Product Designer
Oscar Corrol - Sr Visual Designer
David Steinwedel - Director of Product
Live streaming has been a growing trend in the video and music content market. It is a natural way for people to connect in real time, which is especially true for singing, and therefore strategically meaningful to Smule.
Smule explored a standalone live app in 2016, but halted the effort before completion.
At the beginning of 2018, the company decided to bring it to the main product Smule (previously named Sing!), as a new feature named LiveJam.
I was the lead designer of this project end-to-end, working with then Chief Product Officer Jeannie Yang. A second designer, Oscar Corrol, joined later to help finalize parts of the V1 design.
We spent six months from kickoff to launching the first public version, worked through important product and design decisions with real users’ input, and achieved positive results both qualitatively and quantitatively.
The Tech Foundations
The vision of LiveJam is to empower people to sing together and connect in real time.
The unfinished 2016 exploration left us a dual-server broadcast system that supports up to two simultaneous broadcasters, which became our technical foundation.
There are essentially three kinds of users in LiveJam:
Host Singer - the main broadcaster of the live stream, representing the creation experience.
Guest Singer (optional) - the second broadcaster who joins the host to broadcast together, representing the experience from consumption to creation.
Audience - the rest who are consuming and engaging with others, representing the consumption experience.
There are three singer personas for Smule: Influencer, Rising Star and Fun Lover.
We wanted to create a good experience for all in the long term, and especially for Fun Lovers to begin with. We recruited a group of engaged users to work with us starting in beta.
The Market Landscape
We looked into the market of live broadcasting based on two main dimensions: Musically Expressive and Solo-Social Broadcast.
We learned about different products and strategies, and confirmed that there is potential for our positioning of LiveJam with unique competitive advantages.
A few of the insights that helped us shape our product direction:
Mainstream products don’t focus on musical expression; music is only one of many kinds of content in their live broadcasts.
The few products that are musically expressive each have their own focus:
Chinese competitors tend to focus more on the “Internet Celebrity” model and prioritize the gifting economy, with gamification that encourages competitiveness among users. Inevitably the screens are always busy and crammed with content that actually gets in the way of authentic connection, though it is likely effective for business results.
StarMaker was trying a lightweight live karaoke room, without video capability, where everyone takes turns singing one song. It’s fair, but also too rigid: there is no room for chitchat or just having fun goofing around.
Periscope had a very intuitive and efficient UX; users can easily navigate to different parts of the product and access the functions they want. Yet the content was not engaging enough, and audiences easily got confused or bored.
Overall there is plenty of room for Smule to build a live product that focuses on musical content and connections. We wanted to be in the top two quadrants.
Our first milestone was to build a beta version to validate our technology and product hypotheses.
We listed out the main goals of each user group, organized them into themes, and visualized high-level user flows based on those themes.
Three main themes emerged and became the three main aspects to design for:
Mic UX - How do I broadcast?
Song UX - What do I sing?
Social UX - How do we interact?
It is important to have an intuitive mic UX so that users can naturally understand how it works and how to move from one role to another as they want.
Our hypothesis was that, essentially, each user group has its own set of mic actions:
Host: Invite Guest, Pass The Mic, Drop The Mic, End Duet.
Guest: Pass The Mic, Drop The Mic, End Duet.
Audience: Request Mic, Accept Mic.
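This per-role action model can be sketched as a simple table lookup. A minimal illustration only, using the role and action names from the list above; these are not Smule’s actual code identifiers:

```python
# Hypothetical sketch of the beta mic-action hypothesis; role and
# action names mirror the case study list, not any real Smule API.
MIC_ACTIONS = {
    "host": ["invite_guest", "pass_the_mic", "drop_the_mic", "end_duet"],
    "guest": ["pass_the_mic", "drop_the_mic", "end_duet"],
    "audience": ["request_mic", "accept_mic"],
}

def can_perform(role: str, action: str) -> bool:
    """Check whether a given role is allowed to perform a mic action."""
    return action in MIC_ACTIONS.get(role, [])
```

For example, `can_perform("audience", "end_duet")` is False: an audience member first has to accept the mic and become a guest before any broadcast-ending action applies.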
A few low-fi UX mocks:
Med-fi mic UX prototypes to show how the mic flows should work in beta:
One constant question for the singer(s) is “What do I sing?”
Normally in the app, singers start by deciding on the song first, so this question never arises in the traditional singing flow. When broadcasting, however, it can easily get awkward while there is no song to sing, and as the singer you are responsible for the decision.
One idea we tried was “Song Request”: the audience can request songs and vote for them, and the singer sees a list of requested songs and picks the one they want most; failing that, the songbook and search are always available by default.
After playing with the idea for a while, I realized that letting the audience request songs isn’t the best for our singers, especially Fun Lovers, since it emphasizes what OTHERS want them to sing over what THEY want to sing. Plus, it can be awkward and stressful if, as the singer, you don’t know the requested songs.
I removed the idea and simplified the songbook so singers can follow their most familiar flow to find their songs. Below is a med-fi song UX prototype for the beta build.
Eventually, beyond the music, we want to help people connect.
To keep singer communication easy and intimate, we made it so that singers only use voice to communicate, while the audience can send hearts, short text messages, and Selfie Chats - an idea from the 2016 exploration that lets audience members record a quick clip with the front camera and send it as a sticker.
The med-fi social UX prototype for beta:
Beta User Study
We built the beta version in April and held a two-week testing session with 200+ beta testers.
During and after that, we surveyed 79 of them and interviewed 2, collecting insights beyond what we observed directly.
Fun fact: LiveJam beta was accidentally a convenient tool for remote user interviews.
Besides concrete product insights, we also enjoyed a lot of fun vibes and amazing duets:
With fruitful learnings from beta tests, we revisited all the main aspects of the product, aiming at a V1 milestone.
Our Sr Visual Designer Oscar Corrol joined me in this phase for V1 design finalization. We split up the remaining tasks so that I would continue to hammer out the UX, while he focused on iconography, color schemes, and desktop web design. We also regularly discussed and debated each other’s work to share feedback and keep the designs consistent.
Mic UX Revisit
Examining the Mic UX feedback in detail, we learned that our beta hypothesis was mostly correct but could be improved:
Zero mic-interaction affordance on the main broadcast screen is not intuitive
Most of the time people are on the broadcast screen, and when they want a mic action they have to hunt for it without knowing where to look.
Passing the mic is confusing and socially tricky
On one hand, it is confusing to have passing the mic and dropping the mic as two separate options. To singers, the more intuitive action is simply “I don’t want the mic anymore.”
On the other hand, exactly whom the mic is passed to can be a tricky question. With a line of people waiting for the mic, unless the room is limited to a close friend circle, it might seem unfair not to pass it to whoever has waited the longest. Also, holding the mic for too long can make you look like a “mic hog,” which some people are sensitive about. These factors could make singers more nervous on mic.
Knowing that these testers were more engaged compared to our target Fun Lovers, I iterated another round on the mic UX.
For example, it seemed like Song Request, the idea we had killed in the previous round for different reasons, would have helped with some of these issues, so I played with it again just to see if it made more sense.
After rounds of internal tests, discussions and debates, we decided to keep the Mic UX simple by
Removing the pass-the-mic option
Not adding song request
Showing a people-centric audience list
V1 hi-fi mic flow prototypes:
Song UX Revisit
We learned that the awkwardness of not knowing what to sing on the mic is real and constant.
We also knew that it is not our decision to make; it is the singers’ decision, which we can help by making it easier. That means singers should always have access to a pool of songs that they know well and would probably want to sing.
We brainstormed and picked up ideas like Daily Song Recommendation and Partner's Bookmark.
Besides song selection, we also saw that the ability to save singing sessions from a LiveJam as permanent recordings is meaningful, so we iterated on a Clip Saving feature as well.
V1 hi-fi host song flow prototype:
Social UX Revisit
There was no major negative feedback on the social UX in beta. We liked the decision that singers can only engage with the audience vocally, which makes the communication more intimate, authentic, and inviting.
I refined the details of interactions for social actions like Love, Chat, Selfie Chat.
V1 hi-fi audience chat prototype:
Web & Android
The Android experience followed iOS, with minor adjustments for Material Design consistency.
For web, we decided to first launch a lightweight experience only for the audience (visitors and Smule users), with links to the native apps.
The mobile web layout mostly followed the app design to maintain cross-platform familiarity, while desktop web became an all-in-one consumption experience.
With confidence in the core mechanisms, we moved onto the next important topic on the priority list - Admin Control.
Essentially, there are three roles for the admin control mechanism:
Owner: the user who started the LiveJam; has full access to admin controls, including assigning and removing admin roles, ending the LiveJam at any time, changing the public/private settings, etc.
Admin: the user(s) who can ban people, stop the broadcasting stream from host/guest, etc.
Regular: other users in the LiveJam; they can report anyone, or the LiveJam itself, to Smule.
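The three-tier model above can be sketched as a role-to-permission table. This is a minimal illustration under assumed permission names drawn from the descriptions, not Smule’s actual implementation:

```python
# Hypothetical permission table for the three admin-control roles
# described above; permission names are illustrative only.
PERMISSIONS = {
    "owner": {"assign_admin", "remove_admin", "end_livejam",
              "change_privacy", "ban_user", "stop_stream", "report"},
    "admin": {"ban_user", "stop_stream", "report"},
    "regular": {"report"},
}

def allowed(role: str, permission: str) -> bool:
    """Return True if the given role holds the given permission."""
    return permission in PERMISSIONS.get(role, set())
```

Note that the sets nest: every role can report, admins add moderation powers, and only the owner can end the LiveJam or manage admin roles, so `allowed("admin", "end_livejam")` is False.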
Eventually, to meet the target date with limited resources, the project team had to cut the designs that were out of scope.
In August 2018, we launched LiveJam V1 on iOS without features including Love, Clip Saving, etc.
The Web V1 audience experience was released the same day as iOS; Android V1 followed in December.
Even with a few important UX pieces missing, LiveJam V1 still brought around +10% total time spent in the app within the first month.
Half a year after Android V1, LiveJam DAU had grown to 110k and is continuing to climb.
Before I handed over the follow-up design lead responsibility to other product designers in Q4 2018, we planned a series of updates to improve LiveJam. I delivered the first few of them with satisfying results:
LiveJam Discovery, with a new entry point UI/UX that right after release brought:
LiveJam Open Collabs, which allows singers to load a pre-recorded clip from other users and sing with it in the live stream, improved:
+13% total Singing_Start
+25% total Time_Spent_Hosting
+17% total Time_Spent in LiveJam
Other areas of the design not covered here:
Info Screen & Flows - how do different users check out/edit the info of the LiveJam
Share - how do people share the link of LiveJam and invite more audience
Network - how does each flow work under fast and slow networks
Audio/Video Effects - how broadcasters use special effect and filters
Entry Points, Browsing Flows, etc.
2016 design explorations done by Ben Hersh: