Product learning

The conclusion of every project should include an analysis of its performance. Below are the beginnings of our record of what we’ve learned from usage data.

Team: Product

Author: Eric Brody-Moore

Overview: We processed sessions on Sourcegraph Cloud to categorize them as successful or failed search sessions and, most notably, to see how session success correlates with week 1 retention.

Caveat: Our current definition/proxy of a successful session is a click into search results. This proxy will evolve over time to become more accurate. See the RFC for more context.

Search session success/failure

Conclusion: one successful session (as defined today) does not lead to realization of the value proposition or to week 1 retention.

This supports the need for a lot of projects/ideas already in motion:

  • Including code intelligence in search results (hovering correlates with 2x the week 1 retention of submitting a search)
  • Improvements to the search tour. 60% of failed search sessions had two or fewer searches, which were most likely low-quality searches

What I think will help but have no quantitative proof from this analysis:

  • More effort from the search redesign to improve the quality of searches earlier in the user lifecycle
  • Search results ranking so the likelihood of a user clicking into results and seeing code intelligence is higher
  • Improvements to the search tour to get people to the code they care about and learn the search syntax

Data

I analyzed a week of search sessions for the week of .

  • 64% were successful sessions, 36% were failed sessions

Of the 64% successes:

  • 54% clicked into results and used code intel
  • 39% clicked into results but did not use code intel
  • 7% clicked ‘open code host’

Of the 36% failed sessions:

  • 60% of failed sessions involve <=2 searches before the user leaves; 68% involve <=3 searches before the user leaves
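
For illustration, here is a minimal sketch of how sessions could be bucketed under the current proxy (any click into search results marks a session as successful); the event log shape and event names are hypothetical, not our actual telemetry schema.

```python
import pandas as pd

# Hypothetical event log: one row per event, tagged with its session.
# Column names and event names are assumptions for illustration only.
events = pd.DataFrame([
    {"session_id": "s1", "event_name": "SearchSubmitted"},
    {"session_id": "s1", "event_name": "SearchResultClicked"},
    {"session_id": "s2", "event_name": "SearchSubmitted"},
    {"session_id": "s2", "event_name": "SearchSubmitted"},
])

# Proxy for success: a click into search results (or out to the code host).
SUCCESS_EVENTS = {"SearchResultClicked", "GoToCodeHostClicked"}

# A session is "successful" if it contains at least one success event.
session_success = (
    events.assign(is_success=events["event_name"].isin(SUCCESS_EVENTS))
    .groupby("session_id")["is_success"]
    .any()
)

# Searches per failed session, to reproduce the "<=2 searches and leave" cut.
searches_per_session = (
    events[events["event_name"] == "SearchSubmitted"].groupby("session_id").size()
)
failed = session_success[~session_success].index
low_search_failures = (searches_per_session.reindex(failed, fill_value=0) <= 2).mean()

print(f"successful sessions: {session_success.mean():.0%}")
print(f"failed sessions with <=2 searches: {low_search_failures:.0%}")
```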

Supporting data

  • 12% week 1 retention for all users who searched vs. 24% week 1 retention for users who hovered and clicked ‘find references’ (Source: Amplitude)
  • Running multiple searches vs. a single search increases week 1 retention from 10% to 15% (Source: Amplitude)

Note: This data should not be interpreted as causal, but we have additional qualitative analyses that make me more confident in the conclusions.
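
For reference, here is a minimal sketch of how a cohort-style week 1 retention figure like the ones above could be computed from raw activity data; the table and column names are assumptions for illustration, not the Amplitude queries actually used.

```python
import pandas as pd

# Hypothetical per-user activity log; the shape and column names are assumptions.
activity = pd.DataFrame({
    "user_id":   ["u1", "u1", "u2", "u3", "u3"],
    "timestamp": pd.to_datetime([
        "2020-11-02", "2020-11-09",   # u1 returns 7 days later -> retained
        "2020-11-03",                 # u2 never returns -> not retained
        "2020-11-04", "2020-11-05",   # u3 returns, but still in week 0 -> not retained
    ]),
})

first_seen = activity.groupby("user_id")["timestamp"].min().rename("first_seen")
joined = activity.join(first_seen, on="user_id")

# Week 1 retention: share of users with any activity 7-13 days after their first visit.
days_since_first = (joined["timestamp"] - joined["first_seen"]).dt.days
retained = joined[days_since_first.between(7, 13)]["user_id"].unique()

week1_retention = len(retained) / first_seen.shape[0]
print(f"week 1 retention: {week1_retention:.0%}")  # 33% for this toy cohort
```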

Team: Product

Author: Eric Brody-Moore

Overview: A deep-dive into what actions lead to Cloud retention. The full slide deck is available on Google Drive.

Cloud retention

Conclusion: There are no actions that obviously lead to retention, and no actions that are significantly stickier than others.

  • Action: The next step is to analyze what users are looking at and how it might fit into their workflow, not the specific actions they’re taking on Cloud.

  • Action: If we want to take this step, develop an RFC to propose adding action-based retention to pings to get insight into which actions lead to retention on on-prem instances.

Neither of these actions has been prioritized (as of ).

Team: Web

Author: Joel Kwartler, with help from Eric Brody-Moore

Overview: Data related to the value of browser extensions (+integrations) and recent improvements. Action items are to continue supporting code host integrations.

Code host integrations user value

Conclusion: A qualitative analysis of all NPS promoters from the past 14 months (Nov 2019-Nov 2020) found that 6% of them cited a feature provided by the integrations as the only reason given for their score.

A mapping of DAU/MAU vs Integration Usage Saturation by customer displayed a positive correlation between integration use and high customer use.

An analysis of retention found significantly higher retention on Sourcegraph.com for users with the extension.

  • Action: we will continue to prioritize adoption, growth, and maintenance of our integrations.
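
As an illustration of the DAU/MAU vs. integration usage saturation mapping mentioned above, here is a rough sketch with made-up per-customer numbers, assuming “saturation” means the share of a customer’s MAU who used an integration (the actual definition may differ).

```python
import pandas as pd

# Hypothetical per-customer aggregates; all numbers are made up for illustration.
customers = pd.DataFrame({
    "customer":          ["a", "b", "c", "d"],
    "dau":               [120,  40,  300,  15],
    "mau":               [400, 220, 1200, 150],
    "integration_users": [180,  30,  700,  10],  # MAU who used a code host integration
})

# Engagement metric: DAU/MAU ratio per customer.
customers["dau_mau"] = customers["dau"] / customers["mau"]
# Assumed definition of saturation: share of a customer's MAU that used an integration.
customers["integration_saturation"] = customers["integration_users"] / customers["mau"]

# Pearson correlation between engagement and integration saturation.
print(customers[["dau_mau", "integration_saturation"]].corr().iloc[0, 1])
```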

Browser extension panel redesign

Conclusion: The browser extension panel redesign succeeded in reducing uninstall feedback citing “usage confusion” or “security concerns” to 0 (from N0). It also reduced our uninstall/install rate by 5% in month 1.

  • Action: we will continue to make design/UI updates to features addressing user feedback.

Team: Search

Author: Eric Brody-Moore

Overview: Data relating to, and next iterations of, the search homepages, tour, performance, and more. Eric BM set up GitHub issues where he thought appropriate, but most of the other action items are either on him or should be revisited in the future by the product team.

Cloud homepage

Conclusion: People aren’t using these panels to click through, but we are seeing improvements to the search learning curve: users adopt multiple filters sooner during onboarding, so having the syntax on the page is helpful. We could improve this by learning which information is most helpful and then curating a really nice panel with that info as a resource (vs. examples to click through), but this likely will not become a priority until we can do a comprehensive refresh when users can add private code.

Enterprise homepage

No conclusions (it’s still really early in the data), but we have a couple outstanding projects:

  • We’ll eventually need discovery/user research into the entry points of search. If a lot of users have bookmarks that bypass the homepage, for example, the panels will see lower traffic.
  • We can also see this quantitatively in the ratio of WAU on the panels to overall WAU (issue assigned to Eric BM).

CNCF homepage

Conclusion: CNCF repogroups didn’t have many searches because there was no guidance/help explaining what the search page means. A majority of users don’t submit a search on repogroup pages.

  • Fix (16256): Add repo panel for repositories being searched over in the repogroup, and pull in the search syntax panels.

Search tour

Conclusion: Need to look at the data, but it’s a disjointed experience now. Priority TBD, and Eric BM will be pulling this data soon.

Search performance

Conclusion: P50/90/99 aren’t really helpful; we need to change how we approach this or how we use this data.

  • Going forward, we should set up a system that runs test queries multiple times per day on a large instance (e.g. Cloud once we have more repositories), and these queries should be based on customer use cases; a minimal sketch of such a runner is below.
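
A minimal sketch of what such a runner could look like, assuming a small set of customer-use-case queries and access to the instance’s GraphQL endpoint (/.api/graphql with an access token); the query list, schedule, and exact GraphQL fields are assumptions to verify against the target instance.

```python
import time
import statistics
import requests  # third-party; pip install requests

# Assumptions: the instance URL, access token, and the GraphQL shape below
# (search(query:) { results { matchCount } }) should be checked against the
# target instance's API before relying on this.
SOURCEGRAPH_URL = "https://sourcegraph.example.com"  # hypothetical instance
TOKEN = "REPLACE_ME"

# Queries should come from real customer use cases; these are placeholders.
TEST_QUERIES = [
    'repo:^github\\.com/kubernetes/kubernetes$ NewController',
    'lang:go context.WithTimeout',
]

GRAPHQL = """
query ($query: String!) {
  search(query: $query) {
    results { matchCount }
  }
}
"""

def time_query(q: str) -> float:
    """Run one search via the GraphQL API and return wall-clock latency in seconds."""
    start = time.monotonic()
    resp = requests.post(
        f"{SOURCEGRAPH_URL}/.api/graphql",
        json={"query": GRAPHQL, "variables": {"query": q}},
        headers={"Authorization": f"token {TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    return time.monotonic() - start

if __name__ == "__main__":
    # Run each query several times; in production this would run on a schedule
    # (e.g. cron) and write results to a dashboard instead of printing them.
    for q in TEST_QUERIES:
        samples = [time_query(q) for _ in range(5)]
        print(f"{q!r}: median {statistics.median(samples):.2f}s, max {max(samples):.2f}s")
```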

References