Posted 18 years ago
by Jared Phelps
Hello-
I was wondering if you guys could point me to some resources on this subject. As I've mentioned before, I have written a custom semantic parser and lexer and integrated them into the SyntaxEditor object model (the lexer creates ITokens and the semantic parser creates IAstNode trees).
Where I'm getting stuck is figuring out the most efficient way to recreate my AST after a change to the document. Do you guys typically run a full semantic parse every time, do you run it only for everything after the change, or do you have some fancy algorithm in place that parses only the parts that might be affected, depending on the change the user made? I was trying for the third option and got it working for most cases, but things like adding/removing quotation marks, brackets, and other lexically small but semantically huge changes got cumbersome. It seems like a huge waste of resources to semantically parse the whole document when the user might have just added some whitespace or renamed an identifier somewhere. If I were to generate my semantic parser with your parser generator, how would it behave?
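For context, the heuristic I tried for that third option looked roughly like this (a simplified Python sketch with made-up names, not my actual implementation or anything from the SyntaxEditor API):

```python
# Characters whose insertion or removal can change the lexical
# interpretation of arbitrarily distant text (unbalanced quotes,
# brackets, etc.) -- exactly the cases that got cumbersome.
LEXICALLY_GLOBAL = set('"\'()[]{}')

def choose_reparse_region(removed_text, inserted_text,
                          change_start, change_end, block_spans):
    """Return the (start, end) offset span to re-run the semantic
    parser over, or None to request a full document reparse.

    block_spans: sorted (start, end) offsets of top-level AST blocks
    (e.g. method bodies) from the previous parse.
    """
    # Quote/bracket edits can re-lex everything after them, so bail
    # out to a full semantic parse rather than patch the AST.
    if LEXICALLY_GLOBAL & (set(removed_text) | set(inserted_text)):
        return None

    # Otherwise reparse only the smallest top-level block containing
    # the change; whitespace and identifier edits stay local to it.
    for start, end in block_spans:
        if start <= change_start and change_end <= end:
            return (start, end)

    # The change falls between blocks (e.g. top-level whitespace):
    # nothing beyond the edited gap itself needs reparsing.
    return (change_start, change_end)
```

This handles the common cases (whitespace, renames) cheaply, but the bail-out branch is where it degenerates back into a full parse.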
If it makes a difference: for me, an "average" semantic parse takes between 100 and 500 milliseconds, and a longer one may be around 2 seconds. Not really noticeable since I'm using the semantic parser service, but it still feels like a waste.
I know this isn't exactly a SyntaxEditor issue, but I figured if anybody knows, you would.
Thanks!
Jared