It comes down to how the lexers are defined in the two languages. In the dynamic one, we have special tokens for the various End keywords (End Sub, End Function, and so on) where the token covers both words. That makes it easy to identify them and set up the content dividers on those tokens.
In the add-on, the keywords End and Sub, for instance, are parsed separately, so there is nothing (from the lexer's perspective, that is) to identify as needing a content divider. Perhaps this is something we can add down the road, but it will take some redesign to get there.
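To make the contrast concrete, here's a minimal Python sketch of the two tokenization styles. The rules and token lists are hypothetical illustrations, not the actual lexer definitions:

```python
import re

# Lexer A (dynamic language): one rule matches the full two-word
# keyword, so the resulting token is trivially a block terminator
# and a natural anchor for a content divider.
COMBINED = re.compile(r"End\s+(Sub|Function|If)\b", re.IGNORECASE)

# Lexer B (add-on): "End" and "Sub" are independent keyword rules,
# so no single token marks the end of the block by itself.
SEPARATE = re.compile(r"\b(End|Sub|Function|If)\b", re.IGNORECASE)

source = "End Sub"

print([m.group(0) for m in COMBINED.finditer(source)])  # ['End Sub']
print([m.group(0) for m in SEPARATE.finditer(source)])  # ['End', 'Sub']
```

With the separate tokens, attaching a divider would require lookahead or extra state to pair the End token with the keyword that follows it, which is roughly the redesign mentioned above.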