August 8, 2022

Similar to Google's quality rater guidelines, human raters assess the quality of Neeva's search results using these instructions.

The post Neeva shares search rating guidelines for technical queries appeared first on Search Engine Land.

Neeva has revealed how it instructs human evaluators to rate its search results, specifically for technical queries. 

Like Google (which, coincidentally, updated its quality rater guidelines today), Neeva uses human raters to assess the quality of its search results.

The guidelines break down into three key areas: query understanding, page quality rating and page match rating. 

Query understanding. This is all about figuring out the intent behind the user’s search query. Neeva breaks down the types of queries into the following categories:

  • How to: User is searching for instructions to complete a task.
  • Error/troubleshooting: Something went wrong, user is searching for a solution.
  • Educational/learning: Who/what/where/when/why.
  • Product seeking/comparison: User is searching for a new product/tool or comparing products/tools.
  • Navigational: User is searching for information on a person or entity.
  • Ambiguous: Unclear what the user is searching for.

Page quality rating. Neeva has broken down pages into three levels of quality: low, medium and high. Advertising usage, page age and formatting are critical elements.

Here’s a look at each:

Low quality:

  • Dead pages
  • Malware pages
  • Porn/NSFW pages
  • Foreign-language pages
  • Pages behind a paywall
  • Clones

Medium quality:

  • 3+ ads when scrolling / 1 large banner ad / interstitial or video ads
  • Page is 5+ years old
  • Page loads slowly
  • Format of page makes it difficult to extract information
  • Forked GitHub repo
  • Pages behind a login or non-dismissable email capture
  • Question page with no response

High quality:

  • Meets the age criteria
  • Meets the ads criteria
  • Well formatted

Page match. Neeva has its raters score the match between the query and a webpage on a scale from 1 (significantly poor) to 10 (vital). Here’s that scale:

  1. Significantly Poor Match. Does not load, page is inaccessible.
  2. Especially Poor Match. Page is wholly unrelated to the query. Missing key terms.
  3. Poor Match. Page may contain some query phrases, but it is not related to the query.
  4. Soft Match. Page is related to query, but broad, overly specific, or tangential.
  5. On Topic but Incomplete Match. Page is on topic for the query, but not useful in a wide scope, potentially due to incomplete answers or older versions.
  6. Non-Dominant Match. Page is related to the query and useful, but not for the dominant intent shown.
  7. Satisfactory Match. This page satisfies the query, but the user may have to look elsewhere to round out the information.
  8. Solid Match. This page satisfies the query in a strict sense. There is not much extra, or beyond what is asked for.
  9. Wonderful Match. This page satisfies the query in a robust, detailed sense. It anticipates questions/pitfalls that might come up and/or adds appropriate framing to the query.
  10. Vital Match. This is a bullseye match. It is not available on all queries. The user has found exactly what they were looking for.
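If you want to keep the ten-point scale handy, it amounts to a simple lookup table. A sketch, with the labels copied from the scale above (the `label_for` helper is hypothetical, not part of Neeva's guidelines):

```python
# The 1-10 page match labels from Neeva's scale, as a lookup table.
PAGE_MATCH_LABELS = {
    1: "Significantly Poor Match",
    2: "Especially Poor Match",
    3: "Poor Match",
    4: "Soft Match",
    5: "On Topic but Incomplete Match",
    6: "Non-Dominant Match",
    7: "Satisfactory Match",
    8: "Solid Match",
    9: "Wonderful Match",
    10: "Vital Match",
}

def label_for(score: int) -> str:
    """Return the label for a 1-10 page match score."""
    if score not in PAGE_MATCH_LABELS:
        raise ValueError("page match score must be between 1 and 10")
    return PAGE_MATCH_LABELS[score]
```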

Read the full guidelines, which were published on the Neeva blog.

Why we care. It’s always smart to understand how search engines assess the quality of webpages and content, and whether it matches the intent of the search. Yes, Neeva has a tiny fraction of the search market share. But the insights Neeva shared can provide you some additional ways to think about, assess and improve the quality of your content and webpages.

