Reimagining Home Listings

REALTOR.COM

Polaris was a reimagining of Realtor.com’s core product, the listing details page (LDP): an experience optimized for lead volume, retention, and lead quality. We acquire 80–100M web users monthly, yet fewer than 3% return to the experience on their own. Instead of optimizing for leads, what if we designed a best-in-class experience to attract, retain, and convert high-intent buyers?

1.5-year project

Massive experiment

Process

Leadership-led initiative

  • Deep project planning 

  • Research

  • Design brief creation

  • Concept testing

  • Concept refinement

  • Third-party engineering help

  • Build

  • QA

  • Launch

  • Retro

The Team behind the dream 

2 designers, 1 contract designer, 1 design director (2 throughout), 15+ engineers, 3 PMs (1 at a time), 3 CPOs, 2 VPs of product management

My Role

Lead designer - managed two other designers and was the sole design voice pushing this project through (spoiler: I needed more support)

Goal

Bring the presentation of listings to life on desktop and mWeb so that buyers can efficiently and effectively determine which homes could be a good fit for them, encouraging them to shortlist and connect with an agent when ready.

Our approach

Consumer first approach

  • Ambitious rethink of LDP & technical re-platforming 

    • First → rethink buyers’ core needs and validate

    • Then → optimize for all other business objectives

  • An incremental approach to building out page functionality 

    • Iterating in waves 

    • Interpreting metrics for indications that Polaris LDP is outperforming existing LDP

    • A set of key metrics with guardrails & weighted tradeoffs (not all metrics are expected to rise)

Experience metrics

Have we created an LDP that buyers prefer?

  • Retention 7-day – acquisition

  • Unique LDP Views

  • CSAT – subjective preference

  • Saved Listings

  • Return Visits – engagement

  • Page Speed – satisfaction


Wave 1

Preferred LDP

Create an LDP that buyers prefer...


Wave 2

Transaction and lead optimization

...that drives more transactions (and leads in Choice markets)


Wave 3

Revenue optimization 

...and generates more overall revenue


Revenue Metrics 

And are we making more money from it?

  • TCV (and its components)

    • Met Rate – Referral Markets

    • Lead Volume – Lead Markets

    • SEO Unique Users 

Execution

After project planning had concluded and the core team had agreed on the waved process, we wrote a design brief to keep our team grounded. Next, over two months, the design team (director, lead, product designer, contract designer) worked on several full redesigns of the listing experience. We wanted several designers concepting in order to quickly user-test high-level ideas. These prototypes were intentionally out of the box so we could learn how users navigate and would ideally like to browse listings.


From there, we ran extensive moderated user tests and refined our concepts down to one core wireframe.


At this time we hired a third-party company from Australia, Cogent, to help with our engineering re-platforming needs. Cogent was selected by the CPO, who then quit the company a week before Cogent onboarded.


Navigating a transition is not easy.


This is when we felt a shift in leadership. From there we began to refine the design while beginning to build the experience.

We were forced to finish the design while engineering was building. We held daily check-ins to keep everyone aligned. It was a challenge.


8+ months in… we finally had an MVP, which we launched to 2% of our users. This was a massive milestone. At this time our new CPO left the company.


While Polaris was running for a small subset of users, we kept refining the design, attempting to finish it to meet our intended MVP expectations.


Our CEO became the interim CPO. 

The project was shuttered in June 2022 after being live to consumers for 8 months at very limited traffic, starting in Pure markets in October and then Choice markets in April.


Results and learnings

Goal Clarity

  • Polaris’s goals changed throughout the project, but the feasibility of achieving them was not re-examined when the goals changed

  • The project was set up with a ‘wave’ approach, but the team was only able to execute Wave 1 before the goals changed

  • Initiated as a best-in-class consumer experience, intending to make up for lower lead-submission performance with higher lead quality

  • At launch, when leads dropped, the focus shifted to leads, and shortly after to proposed revenue parity (and possibly profit parity)

  • The business provided no potential tradeoffs (e.g., whether it would accept fewer leads in exchange for higher engagement), leaving success criteria unclear beyond “parity or better”

  • Too many goals were in focus at once

    • Tech uplift, redesign, better consumer experience, revenue parity, media uplift


Measurement

  • Limited / unreliable data created large overhead

    • Project started without a dedicated data resource

    • Lack of data integrity is an issue across the board (e.g., leads disconnecting between clickstream and the LCS table, with leads appearing in one but not the other)

    • Low traffic introduced potential noise (a high proportion of rental leads) for some segments, undermining data reliability

    • Risk in making large decisions on limited / poor data

  • An action plan is needed to help all teams; this is a universal issue

    • Individual metrics are measured differently (e.g., leads)

    • Individual metrics are hard to measure (media revenue)


Goal Alignment within Leadership

  • There were conflicting instructions from ELT members / misalignment within ELT 

    • Team members heard different information from various ELT members (e.g., whether certain groups were stakeholders, how to involve stakeholders)

    • Over the project’s 1.5 years, we went through 5 changes in the CPO role

    • Conflicts among various goals were not escalated for ELT alignment (e.g., telling teams to “not worry about revenue”)

    • Sponsoring ELT members were not aligned with other ELT members on success criteria / importance of the project

  • Support from ELT decreased over time with loss of sponsoring leaders and presence of new projects (Native #1)

    • More involvement may have helped drive alignment and success criteria within ELT


Experiment Approach

  • The Big Bang approach made it difficult to attribute successes / failures to particular hypotheses within the design

    • It risked the entire project instead of winning portions of it

    • Difficult to design with the big bang approach for an experience of this size with only two designers who had minimal historical knowledge 

  • Market approach introduced challenges

    • Market and audience caveats (e.g., excluded ZIP codes, no Choice, no non-deeded homes) introduced biased data (results may correlate with the nature of Pure markets rather than the new page design)

    • High investment in Pure markets for the future of RDC 

  • Partnership with Optimizely and creative Optimizely setup

    • Team pushed Optimizely boundaries to identify best set-up with highest flexibility in feature releases and coordination with other teams’ features

Collaboration

  • Collaboration model with stakeholders improved over time

    • Initially the project was developed in a silo, which had to be backtracked prior to launch

    • Involving stakeholders in regular check-ins and understanding their objectives helped upskill entire team in pursuing page needs

  • Could create more scheduled milestone check-ins with ELT / Stakeholders

    • Monthly stakeholder updates helped but did not reach an ELT audience

  • Resourcing from contributing teams was often side-of-desk or unavailable (e.g., Research, component owners on the LDP)

  • Squad performance improved after partnership with offshore team Cogent

    • Very difficult to collaborate with the Cogent team based in Australia

    • Team had little to no overlap of working hours

Development Execution

  • Team was able to pivot on changing goals and deliver

    • Tremendous success in focusing on key metrics and incrementally improving them 

  • Squad ceremonies matured

    • Growth in team ownership as a functional squad in addressing challenges

    • Strong focus on team morale, resilience and ownership

  • Underestimated some work / distractions

    • Tech incidents, lead form development, working with team in Australia

    • Cost of not having a PM or EM at stages of the project

Design Execution

  • Team was able to deliver a new innovative experience

    • A massive body of work was created by only two designers (and only one for about 4 months), meeting all expectations for speed and quality of execution

    • Foundational design components were contributed to the design system

  • Became leaders in cross functional collaboration 

    • Created a successful cadence and framework for design teams to collaborate

  • Distracted by research and other non-design related work

    • Due to the lack of research and data resources, the team had to conduct its own user tests and distill the findings

  • Native #1 was a distraction

    • Director level support was not always available due to the N#1 Big Bet

While Polaris was in its final stages, I was moved over to the company's new priority, Native #1, a visual uplift of our entire native experience.