How We Implemented Elasticsearch to Improve OfferZen’s Search Functionality

12 March 2021, by Ethan Marrs

Finding the right developer for a tech role can be really challenging. As the product team of a tech talent marketplace, one of the key challenges for us is to ensure that hiring teams can source tech talent with minimal effort and never miss out on great candidates. That's why we've recently implemented Elasticsearch. Here’s how we approached it and what we learned.


Why we rebuilt our search functionality

OfferZen is a tech talent marketplace where hiring teams reach out to candidates first, helping them find great developers who are actively looking for new work. Hiring can be a challenging and time-consuming process, so it’s vitally important that hiring managers can source candidates for their open roles easily.

During the initial stages of OfferZen's growth, we had implemented a home-grown filtering engine using Ruby and Redis. A machine-learning model determined the order of candidates in the search results.

This solution had served OfferZen well for more than three years. However, as more companies and candidates started using the platform and we began to scale the product, it became clear that we needed a new solution.

There were three key reasons we needed to change the search technology:

  • New functionality: The existing home-grown solution was sturdy, but not easily adaptable. It started taking us too long to implement new features, and there was certain functionality that was just off-limits due to the complexity. Search is a well-explored area in computer science, and we wanted a solution that would allow us to quickly deliver these features to users.

  • Performance and risk reduction: As the activity on our platform increased, so did the amount of caching required for our filtering engine. This took a lot of computation time and required complex cache invalidation strategies. We needed to find a solution that would be more robust in the event of caching issues, but was also more efficient overall.

  • Technical debt: Over time, we had accumulated technical debt in the filtering engine's codebase, making it harder for developers to make changes. We could have refactored the code, but replacing it with something new would give us a cleaner codebase while also unlocking new functionality.

As the lead developer of our Marketplace team, I was ultimately responsible for delivery and project management. This included coordinating the engineering efforts, doing technical research, writing code, setting milestones and updating stakeholders along with my team.

Here’s how we went about that and what we learned.

Phase 1: Technical discovery and setting up initial infrastructure

Changing our search functionality was a big overhaul with a large potential impact on our users. Sourcing candidates is the starting point of a hiring pipeline, and therefore impacts all later stages. The search engine also consists of a complex set of features and all active companies on OfferZen make use of it.

That’s why we had to consider the following when weighing up our options:

  • Integration with our tech stack: We needed to be able to integrate the new solution with our existing Ruby tech stack.
  • Fine-grained control: We needed the ability to make subtle tweaks to the ranking algorithms, with full control of any search features that we made use of.
  • Good developer experience: Going forward, multiple teams and developers would be interacting with the technology. It needed to be accessible and usable. This included ensuring solid documentation, but also easily understandable APIs.
  • Matching the existing functionality: As a start, we needed to at least match the functionality that we already had if the technology was going to be a viable option.

Our team of engineers responsible for the infrastructure and ease-of-use of the platform kickstarted the journey. They did exploratory research to find out what options were available to us. They also did market research to find out what other marketplaces such as LinkedIn, Eventbrite, Uber, eBay, and Takealot were using as solutions for their platforms.

Eventually, we narrowed our options down to the following:

  • A new machine-learning model that would add value to our filtering solution by picking up on more subtle user behaviour to improve the way results are ranked.

  • A dedicated high-performance database that would be fine-tuned for this purpose and improve our caching computation time. This would allow us to move away from Redis as a caching solution.

  • Elasticsearch, which would provide a caching solution and allow us to build a good search experience, replacing our existing machine-learning model.

Elasticsearch came out on top as the best solution. It held many benefits for us, which included:

  • More detailed search, and more control: Elasticsearch would allow us to build really intricate search queries with fine control over rankings. Candidates matching multiple variables across a user's search criteria would appear above candidates matching just a single variable (see the query sketch below). We could also provide a wider range of search results, with indications of which results were more relevant than others.
  • The right tech stack: Elasticsearch has a Ruby library which we could integrate with our tech stack.

It also provided the following as an added bonus...

  • Great documentation: Elasticsearch has a big pool of users to answer questions, as well as excellent guides and reference documentation for all of its features.
  • In-depth metrics: It could provide detailed metrics on user searches. This would allow us to understand whether we’re delivering our intended results to our users.
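To make the "candidates matching more criteria rank higher" behaviour concrete, here is a minimal sketch of the kind of bool query this enables, using the elasticsearch-ruby gem. The cluster URL, index name, fields and values are illustrative, not OfferZen's actual schema:

    require 'elasticsearch'

    # Connect to the cluster; the URL and index/field names below are illustrative.
    client = Elasticsearch::Client.new(url: ENV.fetch('ELASTICSEARCH_URL', 'http://localhost:9200'))

    # Each matching `should` clause adds to the relevance score, so a candidate who
    # matches the role, the tech stack and the location outranks one who matches
    # only a single criterion.
    response = client.search(
      index: 'candidates',
      body: {
        query: {
          bool: {
            should: [
              { match: { role: 'backend developer' } },
              { terms: { skills: %w[ruby elasticsearch] } },
              { match: { city: 'Cape Town' } }
            ],
            minimum_should_match: 1
          }
        }
      }
    )

    response['hits']['hits'].each do |hit|
      puts "#{hit['_id']}: relevance score #{hit['_score']}"
    end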

Setting up the infrastructure

Our team of engineers started setting up the basis of the infrastructure to get a feel for the technology. We kept using Ruby on Rails with a React front-end, with Ruby passing data around, while Elasticsearch replaced Redis to do the heavy lifting on search.
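As a rough sketch of what that setup involves, assuming the elasticsearch-ruby gem and a made-up candidates index, the core pieces are a mapping for the fields we want to search on and a way to keep candidate documents in sync with the Rails models:

    require 'elasticsearch'

    client = Elasticsearch::Client.new(url: ENV.fetch('ELASTICSEARCH_URL', 'http://localhost:9200'))

    # Define the fields we expect to search, filter and sort on. The mapping is a
    # simplified example, not OfferZen's real schema.
    client.indices.create(
      index: 'candidates',
      body: {
        mappings: {
          properties: {
            headline:           { type: 'text' },
            skills:             { type: 'keyword' },
            years_experience:   { type: 'integer' },
            salary_expectation: { type: 'integer' },
            listed_at:          { type: 'date' }
          }
        }
      }
    )

    # Re-index a candidate document whenever the corresponding Rails record changes.
    # `candidate` stands in for an ActiveRecord-style model with these attributes.
    def index_candidate(client, candidate)
      client.index(
        index: 'candidates',
        id: candidate.id,
        body: {
          headline: candidate.headline,
          skills: candidate.skills,
          years_experience: candidate.years_experience,
          salary_expectation: candidate.salary_expectation,
          listed_at: candidate.listed_at
        }
      )
    end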

My advice for developers from this stage of implementation:

Find tools that help you collaborate on discovery: Although Covid disrupted our timeline for this project at the start of 2020, we were able to keep working well in a remote environment thanks to tools like Miro, which allowed us to collaborate on and document our architecture diagrams.

Decide early what you want to achieve: There’s so much you can do with Elasticsearch’s technology. In order to make sure we met our deadlines, we had to limit our scope right at the start. We decided exactly what we wanted this tech to do, and what problem we wanted it to solve for our users.

Phase 2: Testing Elasticsearch internally and with some external users

Once we had set up the infrastructure, we implemented Elasticsearch as an internal search engine. Testing it with users within OfferZen helped us answer some of our early questions and gain confidence in the solution before testing it with actual customers.

Practically, this meant asking our own account managers and talent advisors to use it as an internal tool when searching for candidates. They could then provide us with initial feedback on the changes to the search function.

During this period, we continuously shipped changes to the search algorithm.

We found that some aspects of the search criteria seemed to be affecting the rankings more than others. To solve this, we normalised the scores for each section to ensure that all parts would be taken into account. We fine-tuned the rules for when a result was considered relevant enough to be returned, and adjusted the full-text search to ensure more accurate results.
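The article doesn't show the exact query changes, but per-clause boosts, a min_score floor and a tuned multi_match clause are one way to express that kind of adjustment. The field names, boost values and threshold below are illustrative, and the client is the one from the earlier sketch:

    # Per-clause boosts rebalance how much each "section" of the criteria contributes,
    # `min_score` sets the relevance floor, and `multi_match` handles full-text search.
    body = {
      min_score: 5.0, # hits scoring below this are not returned at all
      query: {
        bool: {
          should: [
            { terms: { skills: %w[ruby rails], boost: 2.0 } },
            { range: { years_experience: { gte: 3, boost: 1.5 } } },
            {
              multi_match: {
                query: 'senior backend developer',
                fields: %w[headline bio role_preferences],
                operator: 'and'
              }
            }
          ],
          minimum_should_match: 1
        }
      }
    }

    results = client.search(index: 'candidates', body: body)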

Testing Elasticsearch with some external users

It was important to us to also start testing the features we were working on with a handful of external users before our beta release. That way we could establish early on whether our planned features were in line with what users actually wanted and found useful.

We created prototypes and approached a small number of hiring managers and recruiters to try out some of our early features. We also used feature flags to test our live code, turning on specific features for a short period of time so they could try them out and give us their feedback.
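The article doesn't name the flagging tool, but in a Rails codebase this often looks something like the sketch below, using the Flipper gem purely for illustration. The class names are placeholders:

    # Route flagged companies to the Elasticsearch-backed search while everyone
    # else keeps the existing filtering engine.
    def candidate_search_results(company, search_params)
      if Flipper.enabled?(:elasticsearch_candidate_list, company)
        ElasticsearchCandidateSearch.new(search_params).results # new path (placeholder class)
      else
        LegacyCandidateFilter.new(search_params).results        # old engine (placeholder class)
      end
    end

    # Turn the new list on for a single pilot company, e.g. from a Rails console.
    # The company object needs to respond to #flipper_id to be used as an actor.
    Flipper.enable_actor(:elasticsearch_candidate_list, company)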

Feedback we received:

We discovered that users initially found the vast increase in the number of search results confusing and hard to navigate. Both our internal account managers and external users were used to our old search, which typically returned far narrower results since it was based on a filtering solution.

Based on their feedback, we realised a UI solution would be needed in future to help our users navigate the longer search results and better support the different ways they source candidates. Some of them have limited time and hiring resources available, and our list would need to be flexible to support that.

My advice for developers from these stages of implementation:

Testing in a production environment is important: You need to test early and often to identify your biggest problems and challenges as soon as possible, decide what features you need to focus on, and what you need to change. We decided to do early testing against both internal users and a small number of external users, to avoid having to make major changes to our planned features later on.

Use a real dataset: It’s really difficult to evaluate search performance and ranking without actual production data. When developers are creating features, we work with ‘dummy’ data since we don’t want to compromise real, confidential information. However, the data itself is crucial in determining rankings.

Real data is much more diverse and varied than ‘dummy’ data, and it’s important to incorporate it in your testing phase. We couldn’t properly evaluate our search engine’s performance until our colleagues and external users started using actual candidate data in their search.

You should have tight feedback loops and strong collaboration between teams: We had to work very closely with our designers and product managers to validate technical changes and new designs. It was important to ensure that our changes were properly tested and were clear and easy to understand from a user perspective. We also had to make sure our planned changes worked well within the overall search architecture.

Keep your changes small: Shipping small and quick changes enabled us to get a feel for this new technology. This helped us avoid getting stuck in bigger problems and complexities that we didn’t actually need to solve. It also allowed the development team to keep up a high cadence of software delivery, encouraged faster feedback loops and helped us identify blockers early on.

As a team, we could have improved on this. Although we broke down our development into milestones, we still struggled to make the pieces small enough.

Phase 3: Carrying out the beta testing

After we had acted on all the initial feedback, we started onboarding more real customers. This included hiring managers and tech recruiters who were interested in trying out the beta candidate list with our new solution.

We identified five companies as our target beta test group based on a number of factors, such as their activity on OfferZen over the preceding weeks. We decided to keep our beta test group small, so that we would be able to get a lot of rich qualitative feedback.

Once they agreed to join the beta release, we held an introductory Zoom session with each of them. We were able to meet each other, give them a quick tour of some key changes to the candidate list, and establish a feedback plan for the following few weeks of beta testing. After these sessions, the beta candidate list was activated for each participating company.

We also needed to test performance and see how the engine would run with higher volumes of searches. To do this, we added another five companies to our test group, doubling its size.

Feedback we received from end users:

The hiring managers and tech recruiters in our test group wanted to be efficient and save time when navigating the longer search results. At this point, we decided on our UI solution to help users navigate the candidate list: a ‘best match’ badge that marks the best candidates for a search at the top of the rankings. The badge saved time for our users by pointing to the candidates they could focus on first.

[Image: the ‘best match’ badge in the candidate list]

We spent some time refining the rules of what a ‘best match’ means. Our users wanted to feel confident that candidates with the ‘best match’ badge really were the best match for their query at the time of their search. That meant seeing candidates who matched the required years of work experience in the roles they were hiring for and were actively working with their desired tech and tools. We updated the search rules to boost candidates based on these factors so that they appeared at the very top of the list.
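One way to express that kind of boosting is a function_score query that adds weight to candidates matching the must-have experience and actively-used tech, with the application badging the top hits that clear a score threshold. The fields, weights and threshold below are illustrative, not OfferZen's actual rules:

    BEST_MATCH_THRESHOLD = 10.0 # illustrative cut-off for showing the badge

    base_query = {
      bool: {
        should: [
          { match: { role: 'backend developer' } },
          { terms: { skills: %w[ruby react] } }
        ],
        minimum_should_match: 1
      }
    }

    # Candidates matching the required experience and actively-used tech get a large
    # additive boost, pushing them to the very top of the ranking.
    body = {
      query: {
        function_score: {
          query: base_query,
          functions: [
            { filter: { range: { years_experience: { gte: 5 } } }, weight: 3 },
            { filter: { terms: { active_skills: %w[ruby react] } }, weight: 3 }
          ],
          score_mode: 'sum',
          boost_mode: 'sum'
        }
      }
    }

    hits = client.search(index: 'candidates', body: body)['hits']['hits']

    # The application layer can then badge the leading hits that clear the threshold.
    best_matches = hits.select { |hit| hit['_score'] >= BEST_MATCH_THRESHOLD }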

We also introduced more options to change the sorting of our rankings, allowing users to choose between relevance, salary and experience in candidate listings.
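In the query itself, that choice maps onto an Elasticsearch sort clause. A small sketch, with made-up field names and assuming the ranking query is built elsewhere:

    # Map the user's chosen ordering onto a sort clause; relevance is the default.
    SORT_OPTIONS = {
      'relevance'  => ['_score'],
      'salary'     => [{ salary_expectation: { order: 'asc' } }, '_score'],
      'experience' => [{ years_experience: { order: 'desc' } }, '_score']
    }.freeze

    def sorted_candidates(client, query, sort_by)
      client.search(
        index: 'candidates',
        body: {
          query: query,
          sort: SORT_OPTIONS.fetch(sort_by, ['_score'])
        }
      )
    end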

Many company members have a weekly sourcing cadence. This means they search for new candidates on the platform every Monday. They were used to new candidates always being shown at the top of our candidate lists. Sorting the candidate list by relevance meant that these new candidates were no longer grouped together at the top of the list.

This caused some efficiency losses and frustration at seeing many candidates they had already reviewed. To solve this issue, we quickly shipped a ‘new candidates only’ filter at the very start of the beta process.
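A filter like that can wrap the existing ranking query without affecting scoring. A minimal sketch, assuming a listed_at date field and a seven-day window (both made up for illustration):

    # Only return candidates listed in the last week; the filter clause does not
    # change how the remaining results are scored.
    def new_candidates_only(ranking_query)
      {
        bool: {
          must: [ranking_query],
          filter: [
            { range: { listed_at: { gte: 'now-7d/d' } } }
          ]
        }
      }
    end

    # `ranking_query` is the bool query built for the search elsewhere.
    results = client.search(index: 'candidates', body: { query: new_candidates_only(ranking_query) })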

[Image: the ‘new candidates only’ filter in the candidate list]

My advice for developers from this stage of implementation:

Analyse data both qualitatively and quantitatively: Although there are a lot of ways to analyse your search performance, don’t underestimate the qualitative aspect of searching, which means users giving feedback on whether the search feels intuitive.

It's going to be hard to know exactly whether your search results are better or not. You need to talk to end users to see what their expectations are and whether your solution is matching them.

Ship fast and regularly: We split the features we wanted to deliver to our users into separate milestones. Once again, we used feature flags to ensure that most companies would not receive the beta candidate list. That helped us quickly finish work and move forward to the next milestone.

Have a hard deadline: Having a deadline helped us figure out that we couldn’t do it all. Decide: What do you need to do, and what's nice to have? We used the MoSCoW method to decide what to prioritise, and Gantt charts to visualise our progress.

[Image: table of prioritised features]


Thanks to Ethan’s team members Madelein van Niekerk, Laura Manzur, Kenny Chigiga and Lydia Dodge, who contributed to this article.

Ethan Marrs is a lead developer at OfferZen. He is a full stack developer driven by building cool things and learning every day. He enjoys readable code, good journalism, books, running, cycling and data visualisations!
