Anatomy of a Lucene Tokenizer

11.24.2012

A term is the unit of search in Lucene. A Lucene document comprises a set of terms, and tokenization means splitting a string up into tokens, or terms.
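
For example, tokenizing "the quick brown fox" on whitespace yields the four terms the, quick, brown and fox. You can see this by running Lucene's built-in WhitespaceTokenizer and printing each term it produces (a minimal sketch against the Lucene 4.x API; the constructor signature and Version constant vary across releases):

import java.io.StringReader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

Tokenizer tokenizer = new WhitespaceTokenizer(Version.LUCENE_40,
    new StringReader("the quick brown fox"));
CharTermAttribute termAtt = tokenizer.addAttribute(CharTermAttribute.class);
tokenizer.reset();                          // must be called before consuming
while (tokenizer.incrementToken()) {        // false signals EOF
  System.out.println(termAtt.toString());   // prints the, quick, brown, fox
}
tokenizer.end();
tokenizer.close();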

A Lucene Tokenizer is what Lucene (and, by extension, Solr) uses to tokenize text.

To implement a custom Tokenizer, you extend org.apache.lucene.analysis.Tokenizer.

The only method you need to implement is public boolean incrementToken() (declared as throwing IOException). incrementToken() returns false at EOF and true otherwise.

Tokenizers generally take a Reader in their constructor; this is the source to be tokenized.
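
A minimal constructor just hands the Reader to the superclass (this sketch assumes the Lucene 4.x Tokenizer(Reader) constructor):

public MyCustomTokenizer(Reader input) {
  super(input); // the superclass exposes this Reader as the protected field "input"
}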

With each invocation of incrementToken(), the Tokenizer is expected to produce the next token by setting the values of its attributes, such as the term text. You do this by adding attributes to the superclass via addAttribute(), usually keeping them as fields in the Tokenizer, e.g.

public class MyCustomTokenizer extends Tokenizer {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private boolean done = false; // tracks whether we've already emitted our single token

Here, a CharTermAttribute is added to the superclass. A CharTermAttribute stores the term text. (The done field lets incrementToken() below signal EOF after emitting a single token.)

Here's one way to set the value of the term text in incrementToken(). This particular implementation reads the entire input and emits it as a single token:

public boolean incrementToken() throws IOException {
  if (done) return false;
  done = true;
  clearAttributes(); // reset attributes left over from any previous token
  final StringBuilder sb = new StringBuilder();
  final char[] buffer = new char[512];
  while (true) {
    final int length = input.read(buffer, 0, buffer.length); // input is the Reader set in the ctor
    if (length == -1) break;
    sb.append(buffer, 0, length); // append only the chars actually read
  }
  termAtt.append(sb.toString()); // emit the entire input as a single term
  return true;
}
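
One detail worth noting: Lucene reuses Tokenizer instances across documents, so per-stream state like the done flag must be cleared between uses. A sketch of the matching reset() override:

@Override
public void reset() throws IOException {
  super.reset();
  done = false; // allow this tokenizer to be reused on a new Reader
}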

And that's pretty much all you need to start writing custom Lucene tokenizers!
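
To put a custom tokenizer to work at index time, you'd typically wrap it in an Analyzer. Here's a sketch against the Lucene 4.x Analyzer API, where createComponents() receives the Reader (in later versions the Reader is set separately via setReader()):

Analyzer analyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // MyCustomTokenizer is the token source; TokenFilters could be chained here
    return new TokenStreamComponents(new MyCustomTokenizer(reader));
  }
};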

Published at DZone with permission of its author, Kelvin Tan.
