Software Review Form: Increasing Completion and Publication Rates
8% ↑ Form Completion | 13% ↑ Publication Rate
As the Lead UX Designer on the central review team, I was tasked with modernizing our review form—a key tool used across four B2B software sales websites to collect user reviews. Our main goals were to increase the form’s completion and publication rates and to ensure users felt engaged and encouraged to share their authentic insights.
1. Background: The Business Impact of the Review Form
At GDM, the review form is a vital element of the business model, supporting multiple revenue streams and building user trust. Reviews provide essential data for individual product pages, offering detailed and reliable insights that help users make informed decisions. These pages, in turn, drive engagement with key revenue channels like Pay per Lead and Pay per Click.
The quality and quantity of reviews are critical for user satisfaction and credibility. Optimizing the review submission process increases the volume of reviews and improves the depth of the data collected. This strengthens the ability to support end users while driving greater engagement with revenue-generating activities.
2. Barriers to Submission
To improve the review form, we needed to identify the specific barriers preventing users from completing it.
First page drop-off - Through FullStory analysis, we discovered a significant drop-off rate on the first page of the form. While users who advanced past this page tended to complete the form, many abandoned it early.
Form layout - The original form used a two-column layout for fields. Multi-column form layouts are known to be error-prone: users are more likely to skip fields, and they have more trouble reviewing their answers after completion. Additionally, unnecessary section headers added visual clutter without enhancing clarity or usability.
3. Rethinking Information Architecture
Pictured here is the original form. It was three lengthy pages with multiple distinct topics on each page. We developed two primary hypotheses to address the main pain points preventing form conversion:
Form completion will increase when asking more engaging questions first - Users arrived at the form ready to describe their experience with a product, and were immediately met with a slew of personal information fields. With the original form, the valuable insight users had top of mind had to be put on the back burner. I proposed reworking the information architecture to place the most engaging information first. As a team, we identified the 5 main categories of information within the form and ranked them from most to least likely to engage. This category cohesion led to the next hypothesis -
Form completion will increase with shorter, less overwhelming pages and a clear sense of direction - Using the categories we identified as a team, I structured the form using a simple navigation menu.
4. Solidifying the New Order
I listed all of the questions we planned to ask in a question protocol, grouping questions into our newly defined categories. Question protocols are essential at this stage because they:
Achieve Dev/Design alignment early on. After reviewing the question protocol, the whole team was in agreement on what must be asked.
Omit unnecessary questions, and prevent unnecessarily lengthy forms
Solidify the order in which questions should be asked.
In the protocol, I noted questions that warranted a specific placement:
Email was needed on the first page in case of form abandonment, so that a user could be entered into a marketing nurture cycle.
The user role determined questions asked later in the form regarding set-up, so it needed to precede that section.
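Role-driven branching like this can be sketched in code. The following is a minimal illustration of the pattern, not the production implementation; the role names, question fields, and schema are all hypothetical:

```typescript
// Hypothetical sketch of role-conditional question flow.
// Role names, question IDs, and the Question shape are illustrative,
// not the actual production schema.
type Role = "admin" | "end-user" | "decision-maker";

interface Question {
  id: string;
  prompt: string;
  // If set, the question is shown only to these roles.
  roles?: Role[];
}

const setupQuestions: Question[] = [
  { id: "install", prompt: "How was the installation process?", roles: ["admin"] },
  { id: "onboarding", prompt: "How easy was team onboarding?", roles: ["admin", "decision-maker"] },
  { id: "daily-use", prompt: "How does the product fit your daily workflow?" },
];

// Because role is captured early in the form, later sections can
// filter their question list before rendering.
function questionsForRole(questions: Question[], role: Role): Question[] {
  return questions.filter((q) => !q.roles || q.roles.includes(role));
}
```

This is why role had to precede the set-up section: the filter above can only run once a role value exists.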
This document laid the foundation for time-efficient wireframing. The user flow was clear and the questions were solidified - time to hash out the visuals.
5. Chunking for Clarity and Simplicity
After reaching clarity on the exact questions and the order in which they would be asked, I created wireframes to nail down the navigation and visual structure of the form.
The navigation was placed on the left for desktop breakpoints, and on mobile it was placed at the top for easy visual reference.
I removed the heavy, dark design elements of the old form, added rounded corners, and used a lighter background to modernize the feel and reduce harsh visual distraction.
6. Streamlining the Feature Rating Section
The feature rating section was a pivotal component of the form, designed to collect ratings and feedback on the main features of each listed product. However, it faced challenges with incomplete data - some features had many ratings while others had none at all. To address this, the team had previously introduced functionality requiring users to rate a minimum number of features, and strategically revealed underrepresented features at the top of the list shown to reviewers. This change enhanced the product pages, making them feel more comprehensive and reliable.
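The surfacing behavior described above amounts to a simple ordering plus a minimum-count gate. A hedged sketch follows; the data shape, field names, and the minimum value are assumptions, not the team's actual implementation:

```typescript
// Sketch of surfacing underrepresented features first.
// The Feature shape and the minimum constant are illustrative assumptions.
interface Feature {
  name: string;
  ratingCount: number; // ratings collected so far for this feature
}

const MIN_FEATURES_TO_RATE = 3; // assumed value, not the real threshold

// Sort least-rated features to the top of the reviewer's list,
// so new reviews fill gaps in the data rather than piling onto
// already well-rated features.
function orderForReviewer(features: Feature[]): Feature[] {
  return [...features].sort((a, b) => a.ratingCount - b.ratingCount);
}

// Gate submission until the reviewer has rated enough features.
function meetsMinimum(ratedCount: number): boolean {
  return ratedCount >= MIN_FEATURES_TO_RATE;
}
```

The design trade-off is visible even in this toy version: the sort improves data coverage across features, but it also means every reviewer sees the least-loved part of the list first, which is part of what made the section feel cumbersome.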
Despite its effectiveness, the original design for this section was cumbersome and visually overwhelming. While the team agreed on the need for improvement, a full redesign would have required significant iteration to meet all requirements, and potentially more development effort than we had time for. For the MVP, we opted to use user testing to gather more information and to craft a more comprehensive overhaul post-release.
Alternative designs, such as a feature cloud allowing users to select features or a card-based layout to chunk information more intuitively, were proposed for future iterations. For now, the focus was on improving usability within the existing framework, ensuring functionality aligned with user needs and business goals.
The original is shown on the left and the proposed revision on the right.
7. The User Testing Plan
6 moderated user testing sessions
3 legacy form sessions
3 new form sessions
To validate our redesign, we conducted moderated user testing sessions. Three participants tested the original form, and three tested the new form design. This approach was crucial because the Central Reviews team had not previously conducted any testing on the original form. Initially, the team planned to rely solely on A/B testing after rollout, but I advocated for a quick round of qualitative testing to inform intentional design decisions early on. Testing the original form allowed us to establish a baseline and ensure we weren’t removing any elements that worked well for users.
The new design was tested against three main hypotheses:
The new navigation is intuitive and helpful to users.
Users are more engaged when product-centered questions are presented first, rather than personal information.
The feature rating section is functioning, but has room for improvement.
8. User Testing Results
To ensure efficiency, I reviewed testing sessions using a Coda spreadsheet in real time. Each substantive user experience was logged, timestamped, and categorized by primary feature or experience. Items were also given a general importance rating (e.g., "must resolve" or "investigate post-MVP"). This approach allowed us to prioritize essential improvements, validate our hypotheses, and meet production deadlines. I then presented essential changes with timestamps to the team for alignment on final revisions before implementation.
There’s more!
Want to learn more about my UX design process and research approach? Reach out, I’d love to hear from you!