Changing Engines Mid-Flight: Neeva’s Switch to SSR

Rajaram Gaunker, Todd Wang, Carl Lischeske and Seth Li on 03/31/21

Reimagining search...quickly!

Neeva is search reimagined: an ad-free, private search engine. We are creating a search experience that revolves around our users and customers rather than advertisers.

Regardless of how visual or beautiful or different we wanted Neeva’s user experience to be, a clear and persistent piece of feedback from early users was that speed was a critical need for a search engine. People are incredibly quick at opening a new tab and typing out what is on their mind! They expect the search results page to show up almost instantly.

As early as 2008, A/B testing conducted by Amazon revealed that every 100ms of latency cost the company 1 percent in sales, while a study from Akamai estimated that a mere 1 second delay in page response can result in a 7 percent reduction in conversions.

While being fast is a massive competitive advantage in any sector, it is arguably most crucial for a search engine, where one second can feel like an eternity and perceived page load time matters just as much as actual load time.

Our technology stack

Neeva is built on React: an open-source, front-end JavaScript library for building user interfaces and UI components. It enables developers to be much more productive and creative, thanks to its rich, flexible structure and great development tools.

React originated at Facebook and has since been widely adopted by applications including Dropbox and Airbnb for its ability to render rich, interactive Single Page Applications. However, this comes with a tradeoff: a large initial code download that must be parsed and loaded into browser memory. That cost suits long-running applications like Facebook, where the user keeps using the app in the same tab once it's open, so a longer initial load doesn't cause friction the way opening a normal webpage would.

Yet Neeva is not Facebook or Airbnb… Most of our queries (around 80 percent) come through the browser address bar, not the loaded app. This means that the initial loading time impacts nearly every query. With our original implementation, we were requiring the client to parse and execute 3.2MB of minified JavaScript code before the call to the search API could even be made. This meant the actual search query would be sent to the server a full 2 seconds after the user pressed enter. We tried a few simple things to speed this up, including fetching results in parallel with the application code, saving us a networking round-trip, but the user cost of waiting for the application to load and execute was just too high. Lack of speed was a major problem, and we began to explore alternative approaches.
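The parallel-fetch optimization we tried can be sketched as follows. This is an illustrative model, not Neeva's actual code: `fetchApp` and `fetchResults` are hypothetical stand-ins for the bundle download and the search API call.

```javascript
// Illustrative sketch of fetching search results in parallel with the app bundle.
// delay() simulates a network call; fetchApp/fetchResults are assumed names.
const delay = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function loadSequential(fetchApp, fetchResults) {
  await fetchApp();       // wait for the bundle to download, parse, and execute...
  return fetchResults();  // ...before the search query is even sent
}

async function loadParallel(fetchApp, fetchResults) {
  // Kick off both at once; the query is in flight while the bundle loads.
  const [, results] = await Promise.all([fetchApp(), fetchResults()]);
  return results;
}
```

With a 2-second bundle and a 300ms query, the sequential path waits roughly 2.3 seconds before results can render, while the parallel path has them ready the moment the bundle finishes. This saved us a round-trip, but the user still had to wait for the application to execute.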

We wanted to keep the benefits of the rich programming environment that React offered, but we also needed to show results to the user sooner. The answer lies in using a technique called progressive rendering, which shows results to the user even as the full page is being sent to the browser and the React code is being interpreted. In other words, we wanted to have our cake and eat it too!

Progressive Rendering

The strong psychological benefit of progressive rendering has been well documented since the very early days of the web, when Netscape 1.0's beta versions introduced a feature allowing a page to begin to appear, and its text to be read, before all content and images had finished downloading. Ever since, the golden rendering rule has been to "show as much as you can as soon as you can." Google search results, for example, load the page header straight away, followed by the number of search results and the results themselves. The page is usable even without the JavaScript and footer.

For progressive rendering to work, we needed to send back fully formed HTML to the browser, rather than an empty page which then got built dynamically with React. However, a move away from React was a big change, and we were worried this would result in “work stoppage” for multiple months while we rebuilt our client layer. This would have been unacceptable for our fast moving startup: we needed to increase performance while continuing to ship new features. We had to be creative and scrappy!
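A minimal sketch of what progressively delivered, fully formed HTML looks like on the server, assuming a hypothetical write/flush pipeline (the markup and function are for illustration, not Neeva's actual server code):

```javascript
// Stream the page shell immediately, then flush the results once the backend
// returns them. `write` stands in for writing a chunk to the HTTP response.
async function streamSearchPage(write, resultsPromise) {
  write('<!doctype html><html><head><title>Search</title></head><body>');
  write('<header>Neeva</header>');           // the browser can paint this right away
  const resultsHtml = await resultsPromise;  // shell is already on screen while we wait
  write(`<main>${resultsHtml}</main>`);
  write('</body></html>');
}
```

Because the first chunks are flushed before the results exist, the user sees the page taking shape instead of staring at a blank tab.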

The best of both worlds: React SSR

React supports rendering HTML on the server, a process known as Server-Side Rendering (SSR). The application is run within a Node.js process, outputting an HTML representation of the React virtual DOM. This HTML is sent to the client and rendered there so that the user sees results right away. In parallel, the application is loaded and run again on the client, invisibly replacing the statically rendered page with an interactive app. This process is called "hydration". The end result is that users can see, read, and interact with their Neeva search results without having to wait for the app to be loaded and executed.
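Conceptually, SSR is just a function from the virtual DOM to an HTML string. Here is a toy version for intuition only; React's real `ReactDOMServer.renderToString` additionally handles attributes, escaping, event metadata, and much more, and `ReactDOM.hydrate` on the client attaches handlers to that existing markup instead of rebuilding it.

```javascript
// Toy virtual-DOM-to-HTML renderer (illustrative, not React's implementation).
function renderToHtml(node) {
  if (typeof node === 'string') return node; // text node: emit as-is
  const children = (node.children || []).map(renderToHtml).join('');
  return `<${node.type}>${children}</${node.type}>`;
}

// A tiny results page as a virtual-DOM tree.
const page = {
  type: 'div',
  children: [
    { type: 'h1', children: ['Search results'] },
    { type: 'ul', children: [{ type: 'li', children: ['First result'] }] },
  ],
};
```

Calling `renderToHtml(page)` on the server yields a complete HTML fragment the browser can paint immediately, before any application JavaScript runs.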

As web content has become more complex over the years, there has been a growing need for pre-rendered applications that can be interacted with without the need to reload the page. It’s a “best of both worlds” solution that makes SSR such a powerful and appealing proposition for developers. Crucially, for us, adopting React SSR allowed Neeva to deliver fully formed HTML to the client to vastly improve that rendering speed.

The results were surprising: we saw not just a 70+% improvement in time to first meaningful paint, but also a significant improvement in time to the application being fully rendered on the client (i.e. fully "hydrated"). By allowing the initial rendering to happen in parallel with client application parsing and rendering, we significantly improved the end-to-end time.


We didn’t expect to arrive at a complete solution straight away, but we needed to determine that the problem was solvable in a relatively short time frame. In about two weeks, we had wired up a working system to experiment on and compiled a concrete list of issues we needed to solve. We also drew on learnings and tools from open-source projects such as ReactDOMServer, Express, and React’s GitHub repository.

There were many critical issues we addressed to make the transition over to React SSR complete and production ready. We had to:

  1. Convert components that depend on browser APIs into components aware of their rendering context (server or client).
  2. Isolate React features unsupported on the server-side like React Portals into client-only components.
  3. Postpone rendering of client-only components to after hydration so that the server-rendered DOM matches the client-rendered DOM exactly, a requirement for successful hydration.
  4. Construct a minimalist logging JavaScript library and load it before the full app so that we would not miss logging important client events (a quick click!) that happen before the React app loads.
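The first item above often reduces to a simple environment check plus sensible server-side fallbacks. A sketch, with helper names of our own invention rather than Neeva's actual code:

```javascript
// On the server there is no `window`, so browser APIs need guards and fallbacks.
const isServer = typeof window === 'undefined';

function getViewportWidth() {
  // Assumed default so the server-rendered layout matches a common viewport;
  // the real value takes over after hydration on the client.
  if (isServer) return 1024;
  return window.innerWidth;
}
```

For the third item, components that cannot produce matching markup on the server (for example, ones that read `window` during render) are typically rendered as a placeholder on the server and swapped in only after hydration, keeping the server and client DOM trees identical.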

After solving all visible problems with getting the app to render, we encountered some additional, more subtle issues. For example, SSR broke any React code that depended on client context, since we were bypassing our React/JavaScript API client code to make a single HTTP request for the full page. This resulted in bugs such as showing users weather for the wrong city, since user location isn’t accessible in that first request. (For privacy reasons, we do not store user location server-side.) Additionally, because we weren’t getting application version number information, our API would sometimes return incompatible data. To address these issues, we baked the browser context needed for rendering and search quality into cookies, including user time zone and location information. This allowed us to render the correct content server side.
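Baking the browser context into a cookie can be sketched like this; the cookie name, payload shape, and version string are assumptions for illustration, not Neeva's actual format:

```javascript
// Build a cookie carrying the client context the server needs to render correctly.
// The server reads it on the next navigation to render, e.g., local-time content
// and to return API data compatible with the client's app version.
function buildContextCookie(appVersion) {
  const ctx = {
    tz: Intl.DateTimeFormat().resolvedOptions().timeZone, // user time zone
    v: appVersion,                                        // client app version
  };
  return `neeva_ctx=${encodeURIComponent(JSON.stringify(ctx))}; Path=/; SameSite=Lax`;
}
```

Because the browser attaches cookies to the very first request, the server has this context available before rendering a single byte of HTML.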

Overall, the transition to SSR was very successful, and we finished the project in about six weeks without other developers having to stop releasing features. Way ahead of schedule!

So what’s next? One thing we’ve learned is that work on latency is never really done. We’re certainly excited to see how React Server Components land, so stay tuned for future updates!