Large-Scale Search
V. 2009
An introduction to HathiTrust’s strategy and rationale for providing large-scale full-text search of the digital library is given below.
Introduction
The ability to discover content in the HathiTrust repository benefits the archive in a variety of ways. The greater the ability of users to find and use content in the repository, the greater their appreciation of what might otherwise be seen as a preservation effort of hypothetical value. Revealing content in the repository also provides a method for ensuring the integrity of the files: use of those files can expose problems that might go undetected in a dark archive. While we can facilitate basic discovery through bibliographic searches, deeper discovery through full-text search across the entire repository provides even greater benefits.
When we began investigating options for large-scale search in 2008, research in this area was in its infancy and there were few clear strategies for searching a repository the size of HathiTrust. The major large-scale open source search engine, Lucene, did not provide benchmarking information for data sets this large, and Solr, the most widely deployed implementation of Lucene, had only recently begun gathering benchmarking data. We embarked on trying to solve this problem with only general guidance on strategies. Research programmers in the University of Michigan’s Digital Library Production Service undertook a process to generate benchmarking data to help shape our strategies. After a preliminary investigation of options, they chose Solr and engaged the Solr development community to help define paths forward.
One feature of Solr is its ability to scale searches across very large bodies of content through its use of distributed searching and “shards.” When an index becomes too large to fit on a single system, or when a single query takes too long to execute, an index can be split into multiple shards, and Solr can query and merge results across those shards. Although the size of our data clearly points to the need for shards, there are many other variables in designing a successful approach, one that scales to large amounts of data and provides meaningful results. This introduction summarizes the strategy we are taking.
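To make the shard mechanism concrete, the sketch below sends a single request to one Solr instance and lists the shards to consult in the standard shards request parameter; Solr queries each shard and merges the ranked results before responding. This is an illustration only: the host names, core names, and the ocr field standing in for the full OCR text are placeholders, not details of our configuration.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical shard locations; each is an independent Solr core holding
# a slice of the full-text index. The shards parameter omits "http://".
SHARDS = [
    "solr-1.example.org:8983/solr/core0",
    "solr-2.example.org:8983/solr/core1",
]

def distributed_search(query, rows=10):
    """Send one query to a coordinating Solr instance; the 'shards'
    parameter tells it to fan the query out to every listed shard and
    merge the ranked results before responding."""
    params = urlencode({
        "q": query,
        "rows": rows,
        "wt": "json",
        "shards": ",".join(SHARDS),
    })
    with urlopen(f"http://{SHARDS[0]}/select?{params}") as response:
        return json.load(response)

# 'ocr' is a placeholder for whatever field holds the full OCR text.
results = distributed_search('ocr:"natural history"')
print(results["response"]["numFound"], "matching volumes")
```

Any of the shards can act as the coordinating instance; it distributes the query and merges the per-shard results before returning them to the client.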
Overview of Strategy
We have attempted to define the variables that have the greatest impact on large-scale searching. We have also tried to stage our benchmarking process so that we start with the simplest approach and introduce each new variable only after collecting benchmarking data on the previous instantiation of the index and environment. Our stages are as follows. A report detailing results of stages 1 and 2 is available.
- Stage 1: Growing the index. As the index grows, we expect to learn about the size of the index relative to the body of content, the time required to index a growing body of content, and the degradation of search performance as the amount of content increases. To gain a clear sense of how these phenomena unfold, we are conducting tests on indexes in 100,000-volume steps, from 100,000 to 1,000,000 volumes (with additional increments at 10,000 and 50,000). A simple timing harness for this process is sketched in the first example following this list.
- Stage 2: Impact of memory. Increasing physical memory and changing JVM configurations will also influence performance. We will increase physical memory from 4GB to 8GB and test several JVM configuration changes in combination with the test suite and each of these index sizes.
- Stage 3: Using shards. We will introduce shards into the approach, employing multiple shards with 8GB of memory and the optimal JVM configuration. We will test the suite of searches with one shard on each of two physical servers, and then with one shard on each of two virtual servers per physical server (i.e., four shards). Benchmarking data will be gathered for all of the index steps.
- Stage 4: Load testing. We will introduce load testing for the single-shard and multi-shard approaches to see what impact a large number of simultaneous users has on performance (see the second example following this list).
- Stage 5: Faceting results. Full-text searching of a large number of documents will undoubtedly retrieve large numbers of results, creating usability problems. One obvious strategy for improving navigation of large result sets is the use of faceted displays built from associated bibliographic data. We will add relatively full bibliographic records to each of the documents and repeat the testing process with a faceted display of results drawn from the bibliographic data (see the third example following this list).
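To make the Stage 1 benchmarking process concrete, the first sketch times a suite of queries against a single index and reports summary statistics; it would be re-run at each index-size step. The host name and sample queries are illustrative stand-ins, not the actual test suite.

```python
import json
import statistics
import time
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_URL = "http://solr-test.example.org:8983/solr/core0"  # placeholder host

# Stand-ins for the benchmarking query suite; the real suite would be
# drawn from representative user searches.
QUERY_SUITE = ["darwin", '"natural history"', "paris exposition 1900"]

def time_query(query):
    """Run one query and return its wall-clock response time in seconds."""
    params = urlencode({"q": query, "rows": 10, "wt": "json"})
    start = time.perf_counter()
    with urlopen(f"{SOLR_URL}/select?{params}") as response:
        json.load(response)
    return time.perf_counter() - start

def benchmark(label):
    """Run the whole suite against the current index and print summary
    timings; repeated at each index-size step (100,000 volumes, 200,000, ...)."""
    times = [time_query(q) for q in QUERY_SUITE]
    print(f"{label}: mean={statistics.mean(times):.3f}s "
          f"median={statistics.median(times):.3f}s max={max(times):.3f}s")

benchmark("100,000 volumes")
```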
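For Stage 4, a minimal load-testing harness along the following lines could issue the query suite from a fixed number of simultaneous clients and report how response time changes as concurrency grows. Again, the host and queries are placeholders, and a production load test would likely use a dedicated tool rather than this simplified sketch.

```python
import json
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_URL = "http://solr-test.example.org:8983/solr/core0"  # placeholder host
QUERIES = ["darwin", '"natural history"', "civil war"] * 20  # repeated workload

def run_query(query):
    """Time a single query against the test index."""
    params = urlencode({"q": query, "rows": 10, "wt": "json"})
    start = time.perf_counter()
    with urlopen(f"{SOLR_URL}/select?{params}") as response:
        json.load(response)
    return time.perf_counter() - start

def load_test(concurrency):
    """Issue the workload from a fixed number of simultaneous clients and
    report the average response time under that load."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(run_query, QUERIES))
    print(f"{concurrency:3d} concurrent clients: "
          f"avg response {sum(times) / len(times):.3f}s")

for concurrency in (1, 5, 10, 25):
    load_test(concurrency)
```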
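For Stage 5, Solr computes facet counts when a request includes the standard facet=true and facet.field parameters. The third sketch shows the general shape of such a request; the facet field names are assumptions standing in for whatever bibliographic fields are ultimately indexed.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_URL = "http://solr-test.example.org:8983/solr/core0"  # placeholder host

def faceted_search(query, facet_fields):
    """Run a full-text query and ask Solr to return facet counts computed
    from the bibliographic fields attached to each document."""
    params = urlencode({
        "q": query,
        "rows": 10,
        "wt": "json",
        "facet": "true",
        "facet.mincount": "1",
        "facet.field": facet_fields,  # repeated once per field
    }, doseq=True)
    with urlopen(f"{SOLR_URL}/select?{params}") as response:
        return json.load(response)

# Field names are illustrative; the real facets would come from the
# bibliographic records added in this stage.
results = faceted_search("ocr:whaling", ["language", "publishDate", "topic"])
print(results["facet_counts"]["facet_fields"])
```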