Class CompoundWordTokenFilterBase
java.lang.Object
  org.apache.lucene.util.AttributeSource
    org.apache.lucene.analysis.TokenStream
      org.apache.lucene.analysis.TokenFilter
        org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase

All Implemented Interfaces:
java.io.Closeable, java.lang.AutoCloseable, Unwrappable<TokenStream>

Direct Known Subclasses:
DictionaryCompoundWordTokenFilter, HyphenationCompoundWordTokenFilter
public abstract class CompoundWordTokenFilterBase extends TokenFilter
Base class for decomposition token filters.
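For orientation, a minimal usage sketch, assuming the DictionaryCompoundWordTokenFilter subclass listed above; the demo class name, the sample dictionary entries, and the input text are illustrative only:

import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class DecompoundDemo {
  public static void main(String[] args) throws Exception {
    // Ad-hoc subword dictionary (second argument: ignore case).
    CharArraySet dict = new CharArraySet(Arrays.asList("donau", "dampf", "schiff"), true);

    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("Donaudampfschiff"));

    // The filter emits the original token and, at the same position,
    // any dictionary subwords it finds inside it.
    TokenStream stream = new DictionaryCompoundWordTokenFilter(tokenizer, dict);
    CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);

    stream.reset();
    while (stream.incrementToken()) {
      System.out.println(term.toString());
    }
    stream.end();
    stream.close();
  }
}

Because the original compound token is always passed through, the loop prints the input term first, followed by each dictionary subword found inside it.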
Nested Class Summary

protected class CompoundWordTokenFilterBase.CompoundToken
    Helper class to hold decompounded token information.

Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource:
AttributeSource.State
Field Summary

private AttributeSource.State current
static int DEFAULT_MAX_SUBWORD_SIZE
    The default for maximal length of subwords that get propagated to the output of this filter.
static int DEFAULT_MIN_SUBWORD_SIZE
    The default for minimal length of subwords that get propagated to the output of this filter.
static int DEFAULT_MIN_WORD_SIZE
    The default for minimal word length that gets decomposed.
protected CharArraySet dictionary
protected int maxSubwordSize
protected int minSubwordSize
protected int minWordSize
protected OffsetAttribute offsetAtt
protected boolean onlyLongestMatch
private PositionIncrementAttribute posIncAtt
protected CharTermAttribute termAtt
protected java.util.LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
Fields inherited from class org.apache.lucene.analysis.TokenFilter:
input

Fields inherited from class org.apache.lucene.analysis.TokenStream:
DEFAULT_TOKEN_ATTRIBUTE_FACTORY
Constructor Summary

protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
Method Summary

protected abstract void decompose()
    Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list.
boolean incrementToken()
    Consumers (i.e., IndexWriter) use this method to advance the stream to the next token.
void reset()
    This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Methods inherited from class org.apache.lucene.analysis.TokenFilter:
close, end, unwrap

Methods inherited from class org.apache.lucene.util.AttributeSource:
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
Field Detail
-
DEFAULT_MIN_WORD_SIZE
public static final int DEFAULT_MIN_WORD_SIZE
The default for minimal word length that gets decomposed.
See Also:
- Constant Field Values
-
DEFAULT_MIN_SUBWORD_SIZE
public static final int DEFAULT_MIN_SUBWORD_SIZE
The default for minimal length of subwords that get propagated to the output of this filter.
See Also:
- Constant Field Values
-
DEFAULT_MAX_SUBWORD_SIZE
public static final int DEFAULT_MAX_SUBWORD_SIZE
The default for maximal length of subwords that get propagated to the output of this filter.
See Also:
- Constant Field Values
-
dictionary
protected final CharArraySet dictionary
-
tokens
protected final java.util.LinkedList<CompoundWordTokenFilterBase.CompoundToken> tokens
-
minWordSize
protected final int minWordSize
-
minSubwordSize
protected final int minSubwordSize
-
maxSubwordSize
protected final int maxSubwordSize
-
onlyLongestMatch
protected final boolean onlyLongestMatch
-
termAtt
protected final CharTermAttribute termAtt
-
offsetAtt
protected final OffsetAttribute offsetAtt
-
posIncAtt
private final PositionIncrementAttribute posIncAtt
-
current
private AttributeSource.State current
-
-
Constructor Detail
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)
-
CompoundWordTokenFilterBase
protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
-
-
Method Detail
-
incrementToken
public final boolean incrementToken() throws java.io.IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has been returned: the caller may arbitrarily change it. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

- Specified by:
incrementToken in class TokenStream
- Returns:
false for end of stream; true otherwise
- Throws:
java.io.IOException
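The contract above is easiest to see in a small filter. The following hypothetical filter (not part of Lucene) sketches the recommended pattern: the attribute reference is retrieved once at construction time and mutated in place on each call.

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public final class UpperCaseSketchFilter extends TokenFilter {
  // Attribute reference obtained once, at construction time,
  // as the documentation above recommends.
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public UpperCaseSketchFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false; // end of stream
    }
    // Mutate the shared attribute in place; no per-token objects are created.
    char[] buffer = termAtt.buffer();
    for (int i = 0; i < termAtt.length(); i++) {
      buffer[i] = Character.toUpperCase(buffer[i]);
    }
    return true;
  }
}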
-
decompose
protected abstract void decompose()
Decomposes the current termAtt and places CompoundWordTokenFilterBase.CompoundToken instances in the tokens list. The original token may not be placed in the list, as it is automatically passed through this filter.
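As a rough illustration of this contract, a hypothetical subclass (not part of Lucene) might implement decompose() with a naive dictionary scan over the protected fields documented above; the shipped subclasses use more refined strategies:

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase;

public class NaiveDecompounder extends CompoundWordTokenFilterBase {

  public NaiveDecompounder(TokenStream input, CharArraySet dictionary) {
    super(input, dictionary);
  }

  @Override
  protected void decompose() {
    final int len = termAtt.length();
    if (len < minWordSize) {
      return; // too short: nothing is added, but the original token still passes through
    }
    // Scan every start offset for dictionary subwords within the configured size bounds.
    for (int start = 0; start <= len - minSubwordSize; start++) {
      int longest = -1;
      for (int subLen = minSubwordSize;
           subLen <= maxSubwordSize && start + subLen <= len;
           subLen++) {
        if (dictionary.contains(termAtt.buffer(), start, subLen)) {
          if (onlyLongestMatch) {
            longest = subLen; // remember only the longest match at this offset
          } else {
            tokens.add(new CompoundToken(start, subLen));
          }
        }
      }
      if (longest != -1) {
        tokens.add(new CompoundToken(start, longest));
      }
    }
  }
}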
-
reset
public void reset() throws java.io.IOException
Description copied from class: TokenFilter
This method is called by a consumer before it begins consumption using TokenStream.incrementToken().

Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.

If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).

NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.

- Overrides:
reset in class TokenFilter
- Throws:
java.io.IOException
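A hypothetical stateful filter (not part of Lucene) illustrating the documented reset() contract, clearing its own state and always delegating to super.reset():

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public final class CountingSketchFilter extends TokenFilter {
  private int seen; // per-stream state that must be cleared when the filter is reused

  public CountingSketchFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    seen++;
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset(); // chains to the wrapped input stream, as required
    seen = 0;      // clear our own state so the stream behaves as if newly created
  }
}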
-
-