Wattpad — 2018
Role — Lead design
As a user-generated content platform with well over 70 million active users, Wattpad sees its fair share of unsavoury content. Because writers can add hundreds of images and GIFs to their written work, alongside videos for chapter headers, the opportunity for less-than-ideal content to slip past is an increasingly serious problem.
Readers and writers of all ages, around the world, rely on Wattpad for a multitude of reasons. Some come to escape into a new story, some to read and give feedback on stories, some to test the creative waters and share their first story; some are seasoned writers looking to grow their audience, and some are simply here to make friends.
With that in mind, it's easy to see why moderating the content uploaded to Wattpad is imperative.
Team & role
As the sole full-time designer at the time, I led the research, testing, and design of this feature alongside our project manager.
Our problem was clear, so our first step toward understanding how to improve the situation was to audit its impact on our community. This meant combing through app store reviews, Zendesk tickets, and community forums to better understand how our readers and writers felt about the status quo, and what they chose to share about it.
Our end goal was to build a machine learning model to moderate all imagery uploaded to the platform at scale, but we quickly learned this was far easier said than done. Imagery carries a ton of nuance that makes context a key factor, so we needed to create an in-between state as we pushed the limits of ML.
From the start, we felt it was imperative to treat our writers as innocent until proven guilty. This meant we erred on the side of being empathetic, informative, and cautious in our communication and approach. Sharing creative work leaves you open to a lot of criticism, and we were not about to add another layer of concern.
We spoke with a collection of writers from our ambassador group to better understand the impact of banned content on their day-to-day, as well as its perceived impact on readers from a reporting standpoint.
We carefully selected a handful of our most at-risk audience: writers who had been on the platform for a while and make heavy use of image uploads in their stories. It was important for us to understand their thought process when adding imagery to their stories, as well as how they go about selecting those images. Are they creating them, or sourcing them? What is the perceived value in including them?
From there, I took our learnings, drafted an initial end-to-end story upload flow, and brought it back to test with another group of writers. We hoped to gauge basic usability and comprehension, but also sentiment. It's crucial that our writers feel respected and safe on Wattpad as they create the content that allows us to grow and keep bringing in new readers.
What we learned from testing was that although the flow was a tad confusing — mostly due to the limitations of prototyping, and to our use of a stand-in 'banned' image that wouldn't actually be banned (we couldn't test with real banned imagery) — the overall flow not only made sense, but left writers feeling considered.
All images uploaded to the platform now run through an ML tool that does a first pass to determine whether an image potentially contains unsavoury content.
Images deemed unsafe are removed from the reader's story view and blurred on the writer's side. For a reader, the story experience is unaffected beyond the disappearance of the image. It was important to writers that their readers didn't form a negative perception of them, so we opted to communicate nothing on the reader's end. Instead, all of our communication happened on the writer's side: within the context of the story, we both hid the image in question and explained to the writer why it was no longer visible.
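The behaviour above can be summarised as a small piece of display logic: after the ML first pass flags an image, readers see nothing (no image, no notice), while writers see a blurred placeholder with an explanation. Here is a minimal sketch of that logic; all names (`ImageAsset`, `Viewer`, `render_state`, the notice copy) are hypothetical illustrations, not Wattpad's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Viewer(Enum):
    READER = "reader"
    WRITER = "writer"


@dataclass
class ImageAsset:
    url: str
    flagged: bool  # set True by the ML first pass when content may be unsafe


def render_state(image: ImageAsset, viewer: Viewer) -> dict:
    """Decide how a story image appears to each audience."""
    if not image.flagged:
        return {"visible": True, "blurred": False, "notice": None}
    if viewer is Viewer.READER:
        # Readers never see the flagged image, and no message is shown:
        # the story simply renders without it.
        return {"visible": False, "blurred": False, "notice": None}
    # Writers see a blurred placeholder plus an explanation of why
    # the image is no longer visible to readers.
    return {
        "visible": True,
        "blurred": True,
        "notice": "This image was hidden because it may violate our content guidelines.",
    }
```

The key design choice encoded here is the asymmetry: the flagged state is communicated only to the writer, so readers never associate the removal with the writer's story.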