class lexnlp.extract.en.preprocessing.span_tokenizer.SpanTokenizer
Bases: object
get_token_spans
returns: [('word', 'token', (word_start, word_end)), ...]
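The return shape can be illustrated with a minimal sketch. This is not LexNLP's implementation, only a plain-Python stand-in that produces tuples of the same form; here the 'token' field is simply the lowercased word, which is an assumption about the normalization applied.

```python
import re
from typing import Iterator, Tuple

def get_token_spans(text: str) -> Iterator[Tuple[str, str, Tuple[int, int]]]:
    # Illustrative sketch: yield (word, token, (start, end)) tuples in the
    # shape documented above. The "token" here is just the lowercased word;
    # LexNLP's SpanTokenizer may normalize tokens differently.
    for match in re.finditer(r"\S+", text):
        word = match.group(0)
        yield word, word.lower(), (match.start(), match.end())

spans = list(get_token_spans("Hello World"))
```

Each span's `(word_start, word_end)` pair indexes back into the original text, so `text[word_start:word_end]` recovers the word.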