I wanted to make one note about painetraine's approach. I used a similar technique for function definitions in a C-like language, but I did run into one major issue. For instance, I had the following function declaration:
public function function_name(in arg1, in arg2)
{
    // Function body
}
I wanted to collapse the text that represents the function body so that I would get the following effect:
[+] public function function_name(in arg1, in arg2) [...]
I couldn't just use the curly braces for collapsing because those are also used for if-blocks and other control structures that I didn't want to collapse. I only want to collapse function declarations.
So when my semantic parser encountered an opening curly brace, it had to determine whether that brace opened the body of a function declaration. If so, it started an outlining node; otherwise, it did nothing. This was easy to achieve by scanning backwards through the token stream to verify that the brace was preceded by a valid function declaration. So far, so good!
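The backward scan can be sketched roughly as follows. This is a minimal illustration in Python, not the poster's actual parser; the `Token` class, the token kinds, and the helper name are all my own assumptions:

```python
# Hypothetical sketch of scanning backwards from an opening '{' to decide
# whether it opens a function body. Token model and names are assumptions.
from dataclasses import dataclass

@dataclass
class Token:
    kind: str   # e.g. "keyword", "identifier", "punct"
    text: str

def opens_function_body(tokens, brace_index):
    """Return True if the '{' at brace_index follows a header of the form
    [modifiers] function identifier ( parameter-list )."""
    i = brace_index - 1
    # The token just before '{' must close the parameter list.
    if i < 0 or tokens[i].text != ")":
        return False
    # Walk back to the matching '(' of that parameter list.
    depth = 0
    while i >= 0:
        if tokens[i].text == ")":
            depth += 1
        elif tokens[i].text == "(":
            depth -= 1
            if depth == 0:
                break
        i -= 1
    if i < 0:
        return False
    # Before '(' we expect the function name, preceded by 'function'.
    i -= 1
    if i < 0 or tokens[i].kind != "identifier":
        return False
    i -= 1
    return i >= 0 and tokens[i].text == "function"
```

Given tokens for `public function foo(in a) {`, the check succeeds at the brace; remove the `function` keyword and it fails, which is exactly the validity test the parser relies on.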
The problem is this: if I delete the 'function' keyword from my function declaration, it is no longer a valid function declaration and should NOT be outlined. But the semantic parser doesn't pick up this change. For performance reasons, it only re-examines modifications made between the start and end offsets of the outlining node. Since this edit occurred before the start offset, the semantic parser never re-parsed the opening curly brace to recognize that it should no longer be an outlining node.
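One way around this (my own sketch, not the poster's parser) is to store each outlining node with the offset where its *context* begins, i.e. the start of the declaration header rather than the brace, and to invalidate any node whose context range intersects an edit. The class and function names here are hypothetical:

```python
# Hedged sketch: re-parse outlining nodes whose context range, not just
# their [start, end] span, overlaps an edit. Names are assumptions.
from dataclasses import dataclass

@dataclass
class OutliningNode:
    context_start: int  # start of the 'public function ...' header
    start: int          # offset of the opening '{'
    end: int            # offset of the closing '}'

def nodes_to_reparse(nodes, edit_start, edit_end):
    """Return the nodes whose [context_start, end] range overlaps the
    edited span. Deleting 'function' in the header then invalidates the
    node even though the edit lies before node.start."""
    return [n for n in nodes
            if edit_start <= n.end and edit_end >= n.context_start]
```

With a node whose header starts at offset 0 and whose braces span offsets 40 to 120, an edit at offsets 7 to 15 (deleting the keyword) now triggers a re-parse, whereas a start-offset check alone would miss it.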
I thought I'd mention this so that you adequately test your code's responsiveness to changes in tokens that are not part of the outlining node.