Am I Completely Lost?

SyntaxEditor for Windows Forms Forum

Posted 15 years ago by Captain Starbuck
Version: 4.0.0280
I've just found and downloaded SyntaxEditor. The first goal I had for the product was to create a language definition for a non-OO dialect of BASIC. Here is a taste of the syntax:
! Begin the loop   (comments begin with * or ! )
X = 2            ; * Comments at line end preceded by semicolon
FOR N = 1 TO 10  ; * Collapsible block here ends with NEXT
  X = X * (N/2)  ; * Assignment, expressions vary in complexity
  GOSUB SHOW.VALUES ; * Jump to label (note indentation)
NEXT N           ; * Handle like close brace
STOP             ; * Simple keywords
SHOW.VALUES: * Labels denoted with colon, comment may follow
  PRINT N,X      ; * varying delimiters and expressions possible 
RETURN           ; * Handle like close brace to collapse to label
END              ; * optional final token
I have 30 years of coding experience in about 20 different languages and have served as tech editor for some published C# books, so I'm no slouch at coding. But I have to say I'm really lost with SyntaxEditor. I had no idea how complex this sort of work is - or rather, I wouldn't mind the complexity if there were better docs.

I think I just need to understand the process. Where do we start to create a new definition? In the code samples I see editor.Document.LoadLanguageFromXml() imports a <SyntaxLanguage> language definition, which is not the same as a <Grammar> definition.

I'm understanding that the XML files are only one way (dynamic?) of defining a language, where the other way (syntactical) is to define the language in code - and the only example I can find of this is the SimpleSyntaxLanguage, which encapsulates an instance of SimpleLexicalParser. Was that parser generated from the Grammar XML?

So ... we write a Grammar file by hand, generate a Lexical Parser from it, then write a FooSyntaxLanguage class, deriving and implementing some other required classes and including the generated class?

I don't understand what's supposed to happen with the Grammar designer. I see the code in DynamicallyCreateForm.cs and the code in CreateLoggingLanguage, but I don't understand what it's supposed to be doing. The docs say dynamic languages make use of XML but the DynamicLexicalState class is code - is that just a re-use of the word "dynamic"?

Call me an idiot, but as you can see there are some basic assumptions being made, and it's difficult for a newb to filter through all of this to get to those "aha" moments with this package.

Is there a cookbook anywhere that says "if you want to do X, implement Y and here is an example..."? It seems like the documentation has a lot of definitions but doesn't put many of them into context. I'm sure the docs serve as an OK reference if you know what you're doing but getting to that point seems to be pretty painful.

I've seen a suggestion in this forum about video tutorials. Maybe Actipro can invite people to a GoToMeeting or other session where we can see you create a basic parser, then add options for the editor. If it's live we can ask some (moderated) questions. If not then maybe you can just do 10 minute segments every few days and let us ask questions in the forum so that you know how to proceed with each next segment.

Thanks for your time.

Comments (15)

Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA

You make very good points, and this is something we are definitely focusing on as we move into the future. We're currently in the midst of completing a first release of a WPF version of SyntaxEditor, after which we'll be working on SyntaxEditor 5.0 for WinForms. But in the WPF version we already have 30-some samples (more to come before release too), some of which are more step-by-step introductions on how to create a language. Additionally, we are working on a more robust Language Designer tool that will be able to walk you through the creation of a language and should be able to automate part of it.

As for now, let me clarify some things that may help you and feel free to reply with more questions if you have any.

1) Lexical Parsers... Lexical parsing is used to syntax highlight text in the editor and to provide tokens for more advanced parsing. There are currently two ways to do lexical parsing with syntax languages. One is the "dynamic" way and the other is "programmatic". "Dynamic" languages usually have their lexical parser defined via one of the XML files you see in our sample where you specify states, pattern groups, patterns, etc. The XML gets loaded into a .NET object model and a special internal dynamic lexical parser gets used to parse text based on the loaded object model. So even though the object model is usually created with the XML definition, you can build it in code too. It's flexible that way. The other type of lexical parser, programmatic, is how the Simple sample's lexical parser is implemented. It is written by you in .NET code and manually reads in text, figuring out which token the text represents. These types are usually faster than dynamic lexical parsers since programmatic ones are optimized for a certain language.
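For a flavor of what a dynamic language definition looks like, here is a minimal sketch for a BASIC-style language. Treat the element and attribute names as approximate - the shipped sample XML files (e.g. ActiproSoftware.VBScript.xml) are the authoritative reference for the exact schema:

```xml
<!-- Sketch of a dynamic language definition; names/attributes approximate -->
<SyntaxLanguage Key="FooBasic" LanguageDefinitionVersion="4.0" Secure="True"
    xmlns="http://ActiproSoftware/SyntaxEditor/4.0/LanguageDefinition">
  <Styles>
    <Style Key="ReservedWordStyle" ForeColor="Blue" />
    <Style Key="CommentStyle" ForeColor="Green" />
  </Styles>
  <States>
    <State Key="DefaultState">
      <PatternGroups>
        <!-- Explicit patterns match literal keywords -->
        <ExplicitPatternGroup TokenKey="ReservedWordToken" Style="ReservedWordStyle"
            PatternValue="FOR TO NEXT GOSUB RETURN STOP PRINT END" />
        <!-- Regex patterns match by shape, e.g. a !-style comment to end of line -->
        <RegexPatternGroup TokenKey="CommentToken" Style="CommentStyle"
            PatternValue="! [^\n]*" />
      </PatternGroups>
    </State>
  </States>
</SyntaxLanguage>
```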

2) Syntactic/Semantic Parsers... The grammar XML is an optional extra where you can generate a recursive descent parser. It can work with any language regardless of which type of lexical parser was used in the language as long as the tokens created by the lexical parser each have IDs that uniquely identify the type of token. The grammar XML can create a token ID class, a lexical state ID class, AST node implementation classes, and a semantic parser class based on the definition you have in it. So sometimes if you are creating a programmatic lexical parser, it may be handy to generate the token ID and lexical state ID classes using the grammar designer even if you don't use the other things it can generate. Once you use the grammar designer and design the productions, etc. then you can generate all the C#/VB files it makes and include those in your project for use. The Simple language demo is a good sample of this. The generated AST info can be used along with tokens to help figure out what to populate in IntelliPrompt UI, etc.

I hope this helps with your main questions and as mentioned above, feel free to ask more. We're always willing to help.

Actipro Software Support

Posted 15 years ago by Captain Starbuck
I thank you kindly for your cordial and informative response. I hope my questions aren't too remedial, and if RTM is the right response I'll be happy to get it. I'm the sort of person who wants to know where the good fishing is, rather than being given fish, though sometimes my questions will border on the latter just to get me started - like if I can just get a hook from somewhere I'll figure out how to tie it to the line. :)

Consider this: if the first step is to create a lexer, I fire up the sample project and look for something that mentions Lexical Parser ... but I don't see anything. After the higher level SDI Editor sample comes the Grammar Designer - but that's the second step after a lexer. Just to get started with the software I'm at a roadblock.

So I hit the docs where I find code for PerformLexicalParse() and related information. The info there is about what is available to perform tasks, but not what the tasks are that need to be performed. The part I'm missing is what functional tasks need to be accomplished toward the goal. Once I understand this I'm sure I can fine tune the process with custom code.

I think you're providing tools for someone who should have a base of information about what it is that they should be doing with the tools. If that's the case and that's what I'm missing then I'll look for some more fundamental info. I found SE when I was looking to create a syntax-highlighting editor for this BASIC dialect - I generally write code for databases and communications, so this pre-compiler-ish language parsing was something I was unprepared to do but I'm certainly up to learn what's required. Beyond this initial BASIC thing I can see lots of applications for SE.

The documentation overview page for Lexical Parsing has a great image that shows classes(?) related to parsing a comment and 'using' statement, but there is no related code. Looking for CommentStartToken, I don't find anything in the doc. I figured this is defined as a TokenKey attribute of ExplicitPatternGroup in a SyntacticLanguage document. Looking for definitions for SyntacticLanguage, I find definitions for Properties and Triggers, and figure I'm on the way to finding the details I need. Then I see the Styles tag in SyntaxLanguage, but no property for Styles. Is the doc incomplete? What can I trust? So I figure maybe Styles isn't defined by class SyntaxLanguage but maybe by LogicalTreeNodeBase from which it seems to inherit. LogicalTreeNodeBase isn't in the doc index and isn't found with search, even though it's evidently in the doc.

I was hoping to find a guide something like this: "Create token definitions for every aspect of the language including comments and reserved words. Define how each token is delimited, like with WhitespaceTokens or PunctuationTokens, and how they're terminated, like with CommentEndToken or LineTerminationToken."

As you see, I'm trying to follow the breadcrumbs but they keep disappearing.

Let's take a step back. I guess the first step is to figure out if I'm going to use XML to define the syntax, or whether I'm going to hand-code it in C#. The doc says "A dynamic language is a type of language that can be defined using an XML language definition." Well, can't they all? It further says "These types of languages inherit MergableSyntaxLanguage so they are mergable." Why is that implied? Most languages don't have multi-language support.

When I think of "languages", the only two languages I'm thinking about here are the one that I'm parsing (like BASIC) and the one that I'm writing in to do the parsing (C#). Add XML if we're defining the SyntaxLanguage in XML rather than in code. This concept of dynamic languages vs non-dynamic languages is very confusing. Is my BASIC dialect considered a dynamic language if I'm using XML to create the lexer? The language is what it is, and it's not a different language type based on tools that I use to parse it. Do you see the confusion that this invokes?

So I have SE4 here and I want to create a Lexical Parser for this BASIC dialect. Can you give me an idea of what the first step is? Do I copy an XML doc like ActiproSoftware.VBDotNet.xml as a base? Then copy VBDotNetDynamicSyntaxLanguage.cs as a model? What are the various steps that get executed (or should) whenever Document changes?

Or do people find that the XML-based definitions are too restrictive so the best long-term method is really to skip the XML altogether and just learn how to write code that implements SyntaxLanguage?

Thanks again - sorry for being so verbose.
Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA
Thanks again for feedback, and I do want to say for newer versions we are making step-by-step walkthroughs on building a language, where you start with lexical parsing only and then build from there.

For you to start off, what I would do for simplicity is copy our ActiproSoftware.VBScript.xml sample. It doesn't have a code-behind class so it's about as simple as you can get. That XML file is a "dynamic language definition". Meaning the contents of that XML file are used to configure the lexical parser that is used behind the scenes for a DynamicSyntaxLanguage class.

Essentially when you load that XML file in the editor via a call to (editor.Document.LoadLanguageFromXml()) it creates an instance of a DynamicSyntaxLanguage, configures its .NET object model based on the information in your XML file, and tells the Document to use that language.
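As a sketch, that load call looks something like this (the file path here is a hypothetical placeholder; the second argument is the validation flags value used in the samples):

```csharp
// Load the dynamic language XML definition into the document.
// "FooBasic.xml" is a placeholder for your own definition file;
// 0 means no XML schema validation flags.
editor.Document.LoadLanguageFromXml(@"..\..\FooBasic.xml", 0);
```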

So then just go and modify your XML file to use states and patterns more like your BASIC language. It should be pretty straightforward.

The documentation information about the language being "mergable" means that the dynamic language lexical parser is automatically able to merge with other languages if you want, but obviously you don't need to do that. It's just a feature. If you were to make a programmatic (hand written) lexical parser instead, you would have the option as to whether you want it mergable or not.

And to clarify on dynamic languages, dynamic languages just mean the language inherits the DynamicSyntaxLanguage class and its lexical parser runs on our pattern engine, which is most often configured using those XML files. Non-dynamic languages don't inherit DynamicSyntaxLanguage and can't be loaded with those XML definition files. Instead they use the programmatic (hand-written) lexical parsers you would need to make.

Anyhow once you get your syntax highlighting looking right based on your dynamic language XML definition, you may want to get into outlining or other code-behind features. That is where in your dynamic language XML definition you'd set the SyntaxLanguageTypeName attribute like we do in our ActiproSoftware.CSharp.xml sample, and it would tell the language to auto-load a code-behind class. For the C# one, it is told to load CSharpDynamicSyntaxLanguage.cs. From there you can add more features like outlining or start supporting IntelliPrompt features, etc.
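For example, the opening tag of your definition would gain an attribute along these lines (the class and assembly names here are placeholders; the docs describe the exact format of the attribute value):

```xml
<!-- SyntaxLanguageTypeName uses the ".NET type name, assembly name" format -->
<SyntaxLanguage Key="FooBasic" LanguageDefinitionVersion="4.0" Secure="True"
    SyntaxLanguageTypeName="FooCompany.FooBasicSyntaxLanguage, FooAssembly"
    xmlns="http://ActiproSoftware/SyntaxEditor/4.0/LanguageDefinition">
  <!-- styles, states, and patterns as before -->
</SyntaxLanguage>
```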

Hope this helps you get started.

Actipro Software Support

Posted 15 years ago by Captain Starbuck
Excellent intro. Thanks again. Time to go fishing. I'll post results here later. I hope this thread is helpful for other newcomers.
Posted 15 years ago by Captain Starbuck
I created a SyntaxLanguage XML definition and wrote a new project in VS2005 that implements the SyntaxEditor. So now I have a functional but basic highlighting editor. Now I need to take it to the next level.

I created a Grammar XML and generated a semantic parser with some customizations. What's the next step? I've followed the sample code through with the Simple definition but it's tough to know what needs to be created manually from what can be generated. For example, where did the SimpleSyntaxLanguage class come from, and should that be used as a base for another language? Should we just create a new class that inherits from DynamicOutliningSyntaxLanguage, for example, and start overriding members?

The code samples provided only have one example of a generated semantic parser, that's for the Simple language. This makes it very difficult to understand how different definitions might be created, customized, or implemented.

As a basic example, I was thrown off by the AstNodeProperty PropertyType="Simple"; only after some reading was I certain that "Simple" really was the property type and had nothing to do with the "Simple" language that was being defined. The word "simple" is used elsewhere - for example, is the SimpleName "simply a name" or should that be "MyLanguageName"? For this reason I think "Simple" was a poor choice of language names - how about something more unique and distinguishable that we can search on like "SimplEx"? :)

As another example of how it's tough to learn from a single example, SimpleSyntaxLanguage is the only class (with source that I can find in the VS2005 examples) that inherits from MergableSyntaxLanguage; (almost?) all others inherit from the extended DynamicOutliningSyntaxLanguage. I don't know if that's a recommendation or just keeping it a more general purpose sample. Would our own definitions inherit from SyntaxLanguage, MergableSyntaxLanguage, DynamicSyntaxLanguage, or ...? I understand this is probably dependent on the features we want to support. I'm of the opinion that it's better to base samples on more detailed base classes and then just defer to base classes for members that aren't overridden.

I'm continuing to do my homework (yes, I'm reading doc and code as best I can) but just tossing this out since I'm at a cross-roads and you might be interested in the thought process.

Thanks again.
Posted 15 years ago by Captain Starbuck
I dunno how many people have read that last note so I'll just add a new posting here.

I created a new FooSyntaxLanguage.cs but then couldn't figure out how to get it to use the XML lexer, so I created FooRecursiveDescentLexicalParser.cs, and then FooLexicalParser.cs to encapsulate in FooSyntaxLanguage. I also needed to create an OperatorType enum and FooTokenID class.

A null reference exception is thrown from somewhere inside of SE after it loads my ExampleText but I may figure that out too.

This is all very haphazard but it's moving along. As I get myself from one step to the next I look back at the doc and say "oh, so that's what they meant". There's gotta be a better way.

It's been very painful until now, but assuming there are no weird runtime aborts after this all comes together, it looks like this will result in a purchase.

And to think "all I wanted was a better editor..."
Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA
The language class needs to be created by hand. So if you are using a dynamic language you'd want to inherit DynamicSyntaxLanguage. DynamicOutliningSyntaxLanguage provides some functionality for code outlining; however, if you plan on making an AST that will drive your outlining, then you would NOT want to use that. Instead just keep the core DynamicSyntaxLanguage as your base class.

At this point in your XML definition, be sure to specify the SyntaxLanguageTypeName attribute in the SyntaxLanguage tag. This will tell SyntaxEditor how to marry up your XML definition and the code-behind class. The docs explain the format for that attribute.

The base language class you choose depends on the features you want. If you didn't care about merging and were doing a programmatic lexer, inheriting SyntaxLanguage would be fine. If you wanted merging and had a programmatic lexer, inheriting MergableSyntaxLanguage is good. If you are using a dynamic language, you have to inherit DynamicSyntaxLanguage (it in turn inherits MergableSyntaxLanguage). If you have a dynamic language and want code outlining but don't plan on having an AST build your code outlining tree, then inherit DynamicOutliningSyntaxLanguage.
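To make the choice concrete, a dynamic language code-behind can start out as nothing more than this (the class name is hypothetical; if the base class requires constructor arguments, mirror whatever the shipped CSharpDynamicSyntaxLanguage.cs does):

```csharp
// Minimal hypothetical code-behind for a dynamic language.
// The XML loader instantiates this type when the definition's
// SyntaxLanguageTypeName attribute points at it.
public class FooBasicSyntaxLanguage : DynamicSyntaxLanguage {
    // No members yet - inherit all default behavior, then override
    // outlining/IntelliPrompt members one feature at a time.
}
```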

The grammar will generate the semantic parser class, the token and lexical state ID classes, and the AST classes. Nothing else right now.

Once you have an AST with nodes that implement ICollapsibleNode, you can implement code like PerformAutomaticOutlining(), ResetAutomaticOutliningBehavior() and ShouldSerializeAutomaticOutliningBehavior() in the SimpleSyntaxLanguage to have outlining populate from your AST.

The recursive-descent lexical parser is kind of a bridge between the normal lower level lexical parser and your recursive-descent semantic parser generated by the Grammar Designer. All it really does is feed the semantic parser with tokens that are generated by the low-level lexical parser. In your case the low-level lexical parser is the one used by your dynamic language. You may ask, why do I need this bridge? Well when doing semantic parsing you don't generally care about comments or whitespace. So this bridge class allows you to skip over those in terms of what is fed up to the semantic parser.
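Conceptually (this is a sketch of the idea, not the actual Actipro implementation - the member names here are made up), the bridge behaves like:

```csharp
// Sketch: keep pulling tokens from the low-level lexer and skip
// the ones the semantic parser doesn't care about.
IToken GetNextSignificantToken() {
    IToken token = lowLevelLexer.GetNextToken();
    while (token != null && (token.IsComment || token.IsWhitespace))
        token = lowLevelLexer.GetNextToken();   // filtered out
    return token;   // fed up to the semantic parser
}
```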

Per your comment though, you are using a dynamic language so you do NOT need FooLexicalParser.cs. With dynamic languages, the lexical parser is already created for you internally.

Hopefully this information is helpful for you and can help other people getting started until we can have a smoother walkthrough process.

Actipro Software Support

Posted 15 years ago by Captain Starbuck
Excellent info, thanks again.

One more question related to the above before I continue debugging:

I had the lexer from XML and loaded the XML, but then I created a semantic parser which needed to be loaded, seemingly as an alternative to the lexer. I'm not sure how to get the XML lexer loaded with the coded parser. Right now I'm loading the semantic parser and then encapsulating my coded lexer. I'm not drawn to the XML lexer necessarily, just curious about how to use that with a semantic parser as an alternative to code. It seems strange that the editor.document.language.load methods allow for either a lexer or a semantic parser, not both (gulp, or maybe I missed that).

I hope that makes sense. I can provide details if required.
Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA
Let me post some code to help you. Your scenario is similar to the XML language in our Web Languages Add-on. In that we have a dynamic language that does semantic parsing, builds outlining from the AST, etc.

The language is declared:
public class XmlSyntaxLanguage : DynamicSyntaxLanguage, ISemanticParserServiceProcessor {
This is the core PerformSemanticParse method that is part of SyntaxLanguage:
public override void PerformSemanticParse(Document document, TextRange parseTextRange, SemanticParseFlags flags) {
    SemanticParserServiceRequest request = new SemanticParserServiceRequest(SemanticParserServiceRequest.MediumPriority,
        document, parseTextRange, flags, this, document);
    // Queue the request with the service; it parses on a worker thread
    SemanticParserService.Parse(request);
}
Note that in the request above we pass 'this' as the ISemanticParserServiceProcessor. So to implement that interface, we have this which stores the results of a semantic parse in the request's SemanticParseData property:
void ISemanticParserServiceProcessor.Process(SemanticParserServiceRequest request) {
    // Perform semantic parsing
    request.SemanticParseData = MergableLexicalParserManager.PerformSemanticParse(this, request.TextBufferReader, request.Filename) as ISemanticParseData;
}
The MergableLexicalParserManager ends up calling this (defined on MergableSyntaxLanguage):
protected override object PerformSemanticParse(MergableLexicalParserManager manager) {
    MergableRecursiveDescentLexicalParser lexicalParser = new MergableRecursiveDescentLexicalParser(this, manager);
    XmlSemanticParser semanticParser = new XmlSemanticParser(lexicalParser);
    return semanticParser.CompilationUnit;
}
So now that we have semantic parse data coming back, which is an AST, we can implement outlining like this:
public override void ResetAutomaticOutliningBehavior() {
    this.AutomaticOutliningBehavior = ActiproSoftware.SyntaxEditor.AutomaticOutliningBehavior.SemanticParseDataChange;
}
public override bool ShouldSerializeAutomaticOutliningBehavior() {
    return (this.AutomaticOutliningBehavior != ActiproSoftware.SyntaxEditor.AutomaticOutliningBehavior.SemanticParseDataChange);
}

public override TextRange PerformAutomaticOutlining(Document document, TextRange parseTextRange) {
    // If there is another pending semantic parser request (probably due to typing), assume that the existing outlining structure 
    //   in the document is more up-to-date and wait until the final request comes through before updating the outlining again
    if (!SemanticParserService.HasPendingRequest(SemanticParserServiceRequest.GetParseHashKey(document, document)))
        return new CollapsibleNodeOutliningParser().UpdateOutlining(document, parseTextRange, document.SemanticParseData as CompilationUnit);
    else
        return TextRange.Deleted;
}
For your language, just swap your classes in place of the XML portion and you should be set. Hope that helps!

[Modified at 03/06/2009 08:55 AM]

Actipro Software Support

Posted 15 years ago by Captain Starbuck
My code was almost exactly what you laid out except for that key part about instantiating and parsing a request. Yes, I've substituted my classes for the XML classes as required. I'll do my homework but I'm still confused about how to make use of the XML lexer. And I'm still getting a null reference exception thrown from within SE. I'm trying to diagnose this now. In ISemanticParserServiceProcessor.Process, the request doesn't have a Filename. I dunno if these points are related.

XmlSchemaResolver is defined in ActiproSoftware.SyntaxEditor.Addons.Xml. Are the Addons assemblies a part of the SE purchase or are these for-fee extras?

At this point I need to just read through the docs more carefully and follow the samples through code execution to see where they differ from my own. Have I mentioned this is painful? :)

Thanks again.
Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA
As for the null exception if you get stuck, just stick it in a simple sample project that shows it and email it over. We can take a look.

Oh sorry, the line with XmlSchemaResolver in it isn't needed for your implementation. I'll remove it above.

The two add-on products are optional and are sold separately.

Actipro Software Support

Posted 15 years ago by Captain Starbuck
Aside from the basic four steps identified in the doc, do you guys have a flow diagram which charts execution all the way from the lexer through each feature of the semantic parser?

It's my understanding that the lexer just creates tokens which are the base for highlighting but the semantic parser does the heavy lifting by creating an AST.

The flow that I'm missing (obfuscated through multithreaded execution) involves exactly what happens to the tokens and nodes as they move through the process from document change through to a page repaint. I guess at this point I am familiar with CompilationUnit, Statement, Expression, Identifier, and other basic class types, but without knowing what the environment is doing with them I know I'm not defining them properly in the Grammar.

My approach at the moment is to go back to basics and create a Grammar for a really basic text file with very few rules, then work forward from there. Understand that this is a hunt and peck approach to black-box development - it's not the way it should be done...

Another approach would be paid consultation. Could you msg me about what it would cost for an hour of someone's time to guide me through this? We have GoToMeeting. Honestly I think this should be more open to others in a webinar format but I'll do what's necessary to move forward.

Thank you for your continued support.
Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA
Sorry, other than the diagrams in the documentation we don't have any others. The lexer does indeed create tokens and provide them for the purposes of syntax highlighting as well as semantic parsing.

So always the first thing to do when creating a language is to set up the lexer properly. When using a dynamic language, this means writing the XML definition. Once you have that, then the syntax highlighting works.

When you make a text change, the lexical parser incrementally updates the Document.Tokens collection, only updating what is needed to bring everything back up to a valid state. It repaints the editor accordingly to update syntax highlighting.

Text changes also kick off a request for semantic parsing by implementations of PerformSemanticParse like in my previous post. Semantic parsing also uses tokens created by a lexical parser. So consider when you add a request to the SemanticParserService. What it does is queue the request (in case there are others). It is running on a separate thread so that its processing doesn't take up UI processor cycles. When your request is ready to be processed, it uses your language's lexical parser to parse a snapshot of the text that is taken right when semantic parsing begins. The semantic parser is basically iterating tokens and determining what they mean based on your grammar definition. Usually it builds an AST from that info. As it is going through tokens, when it requests the next token, the core lexical parser from your language returns the next token up into the RecursiveDescentLexicalParser, which can optionally filter it out if it's a comment or whitespace, etc. If filtered, it asks the core lexical parser for the next token. Once one is found that can be passed to the semantic parser, it returns that up and the semantic parser uses it as the next token to examine.

Consider this C# code:
int foo;
The first token the semantic parser sees is the 'int' token. The second one it sees is the 'foo' identifier token because the RecursiveDescentLexicalParser filtered out the whitespace token in between.

Once semantic parsing is complete, the result is passed to the ISemanticParserServiceProcessor you specified in your request. In my previous post, you can see the ISemanticParserServiceProcessor passed to the request was the language itself. In the ISemanticParserServiceProcessor.Process method it placed the semantic result in the request. The ISemanticParseDataTarget that was passed was the Document. Document has code such that when the semantic parse is complete, it takes the data in request.SemanticParseData and places it in Document.SemanticParseData.

If your outlining is set up to kick off from semantic parse data, that is where outlining kicks in, calling PerformAutomaticOutlining.

I hope that helps you understand better the whole flow of what happens with text changes.

Have a good weekend.

Actipro Software Support

Posted 15 years ago by Captain Starbuck
I'm getting it (and I hope you can see that your investment here is paying off :)) though like I said, I need to scale back and start with a basic "int foo;" type definition, then work up to For loops for outlining, etc.

One point I'm still missing now is how to invoke a semantic parser when using editor.Document.LoadLanguageFromXml(), or to load an XML lexer or a coded lexer when using .Load with a semantic parser. It's much easier for now to use an XML lexer as you just mentioned.

The dots I need to connect go something like this: Given "int foo = 1 + bar;", lex it into tokens.
    "int" is a keyword
    " " is whitespace (skip all)
    "foo" is an Identifier (?)
    "=" is an operator
    "1" is a Number
    "+" is an operator
    "bar" is an Identifier
    ";" is a line terminator
Now comes the semantic parsing that loops on the tokens. The Grammar XML says the pattern identifies a Statement, and further, an AssignmentStatement which includes an Expression. An ASTNode can be created to operate on these nodes if required.
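In EBNF-ish terms (my own sketch, with invented production names - this is not Actipro's grammar XML notation), I picture productions like:

```
AssignmentStatement := Identifier "=" Expression ";"
Expression          := Operand { ("+" | "-") Operand }
Operand             := Number | Identifier
```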

I think the problem is the disconnect between the lexer and the semantic parser, which I'll work on over the weekend. I think it's "where do I define 'this' concept", "how do I define it", and "what happens to the node after it's been parsed".

I'll get there.
Posted 15 years ago by Actipro Software Support - Cleveland, OH, USA
Upon text changes, your language's PerformLexicalParse method is called automatically. Normally with dynamic languages you don't need to worry about that since dynamic languages implement that method to run the dynamic language lexical parser based on your XML definition patterns.

Next, your language's PerformSemanticParse method is called automatically. As long as you have it implemented similar to what we posted above, it should just work. You never need to worry about when to invoke it. Same for automatic outlining.

When calling LoadLanguageFromXml, it reads in all the patterns etc to configure a dynamic language class. If you specified a SyntaxLanguageTypeName attribute, it will use that class as the language class (which must directly or indirectly inherit DynamicSyntaxLanguage) instead of DynamicSyntaxLanguage. At that point since all your PerformXX methods are in place per above, everything is ready to go.

Remember that lexical parsers read text characters and output tokens. Semantic parsers read in tokens (the output from lexical parsers) and do something with it, such as construct an AST tree.

Actipro Software Support

