I had built a spell checker using an earlier version of SyntaxEditor that relied on the token aggregator returning whole snapshot ranges (not single-character ranges) for tokens classified by a lexer state's default attributes. Let me give an example to explain what I mean:
I am using the dynamic lexer UI editor, and my default state has DefaultTokenKey="Text". That state also contains a pattern group that matches numbers with the regex "[0-9]+" and assigns them a different token key.
Let's say we have the following input: "this is a test 123".
Now, when I call the ITagAggregator<ITokenTag>.GetTags(...) method, it returns each character of "this is a test " as a separate range, which seems like incorrect behavior to me. It does correctly return "123" as a single range.
Is this behavior a bug? Isn't it inefficient to create a separate range for every text character in the document? I know I can merge adjacent ranges myself (roughly as in the sketch below), but I am worried about the editor's performance if it really stores _every_ text character as a separate token range.
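For clarity, this is the kind of merging I have in mind. It is only a minimal sketch: TokenRange and TokenRangeMerger are hypothetical stand-in types for illustration, not the actual tag/snapshot types that GetTags returns; the real code would coalesce the aggregator's results the same way, keyed on the token key and on the adjacency of the offsets.

using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for one tagged range (the real code would carry the
// snapshot range and token key taken from the aggregator's results).
record TokenRange(int StartOffset, int EndOffset, string Key);

static class TokenRangeMerger
{
    // Coalesces adjacent ranges that share the same token key into a single range.
    public static IEnumerable<TokenRange> MergeAdjacent(IEnumerable<TokenRange> ranges)
    {
        TokenRange current = null;
        foreach (var range in ranges.OrderBy(r => r.StartOffset))
        {
            if (current != null && current.Key == range.Key && current.EndOffset == range.StartOffset)
            {
                // Same key and contiguous: extend the current run instead of emitting a new range.
                current = current with { EndOffset = range.EndOffset };
            }
            else
            {
                if (current != null)
                    yield return current;
                current = range;
            }
        }
        if (current != null)
            yield return current;
    }
}

class Demo
{
    static void Main()
    {
        // Fifteen per-character "Text" ranges for "this is a test ",
        // followed by one "Number" range for "123" (offsets 15..18).
        var perCharacter = Enumerable.Range(0, 15)
            .Select(i => new TokenRange(i, i + 1, "Text"))
            .Append(new TokenRange(15, 18, "Number"));

        foreach (var merged in TokenRangeMerger.MergeAdjacent(perCharacter))
            Console.WriteLine($"{merged.StartOffset}-{merged.EndOffset}: {merged.Key}");
        // Prints "0-15: Text" and "15-18: Number".
    }
}

Doing this on my side is easy enough; my concern is purely whether the per-character ranges are expected behavior and what they cost inside the editor.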