
Hi,
we have a cache that stores the parse data for documents that are not open in an editor. We use that parse data to resolve type definitions like this:
private ITypeDefinition ResolveTypeDefinition(
    ITypeReference typeReference,
    ISourceFileLocation location,
    IProjectAssembly projectAssembly,
    IDotNetParseData parseData)
{
    var request = new ResolverRequest(typeReference.QualifiedName)
    {
        Context = mContextFactory.CreateContext(
            new TextSnapshotOffset(parseData.Snapshot, location.NavigationOffset ?? -1))
    };

    return projectAssembly.Resolver.Resolve(request).Results.FirstOrDefault()?.Type as ITypeDefinition;
}
Each of these calls only takes a couple of milliseconds, but it happens a few thousand times and adds up to several seconds (the calling pattern is sketched below the profile):
100,00 % ResolveTypeDefinition • 4.991 ms • Vector.ITE.Languages.CS.ExportTables.CsExportTableGenerator.ResolveTypeDefinition(ITypeReference, ISourceFileLocation, IProjectAssembly, IDotNetParseData)
100,00 % CreateContext • 4.991 ms • ActiproSoftware.Text.Languages.DotNet.Implementation.DotNetContextFactoryBase.CreateContext(TextSnapshotOffset)
100,00 % CreateContext • 4.991 ms • ActiproSoftware.Text.Languages.CSharp.Implementation.CSharpContextFactory.CreateContext(TextSnapshotOffset, DotNetContextKind)
100,00 % hOX • 4.991 ms • ActiproSoftware.Text.Languages.CSharp.Implementation.CSharpContextFactory.hOX(qLp, ITextSnapshotReader)
98,25 % get_Token • 4.903 ms • ActiproSoftware.Text.Implementation.TextSnapshotReader.get_Token
97,94 % zc9 • 4.888 ms • ActiproSoftware.Text.Implementation.TextSnapshotReader.zc9
97,94 % qct • 4.888 ms • ActiproSoftware.Text.Implementation.TextSnapshotReader.qct(TextRange, Object)
97,64 % GetTokensInternal • 4.873 ms • ActiproSoftware.Text.Tagging.Implementation.TokenTagger.GetTokensInternal(ILexer, TextSnapshotRange, Object, out Int32)
97,64 % GetTokens • 4.873 ms • ActiproSoftware.Text.Tagging.Implementation.TokenTagger.GetTokens(ILexer, TextSnapshotRange, Object, Boolean, out Int32)
97,64 % GetTokens • 4.873 ms • ActiproSoftware.Text.Lexing.Implementation.LexerContextProvider.GetTokens(ILexer, TextSnapshotRange, Object, Boolean, out Int32)
97,39 % Parse • 4.860 ms • ActiproSoftware.Text.Lexing.Implementation.MergableLexerBase.Parse(TextSnapshotRange, ILexerTarget)
0,25 % TokenSet..ctor • 12 ms • ActiproSoftware.Text.Lexing.Implementation.TokenSet..ctor(TextRange, IEnumerable, Object)
0,30 % TextSnapshotRange..ctor • 15 ms • ActiproSoftware.Text.TextSnapshotRange..ctor(ITextSnapshot, TextRange)
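The calling side is essentially a loop like this (a simplified sketch with placeholder names such as exportEntries and entry, not our actual generator code):

// Simplified sketch of the calling pattern (placeholder names, not the actual generator code).
foreach (var entry in exportEntries) // a few thousand entries per run
{
    // Each call is cheap on its own, but nearly all of its time is spent in CreateContext (see the profile above).
    var typeDefinition = ResolveTypeDefinition(entry.TypeReference, entry.Location, projectAssembly, parseData);
    if (typeDefinition == null)
        continue;

    // ... emit the export table entry for this type ...
}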
Now I was wondering: why would the document need to be parsed again just to get the tokens? The entire document should already have been parsed, with all tokens available in IDotNetParseData. So I dug a little deeper and found that the following if-check is never satisfied:
// If the text range is completely in a single line and that line's cached context data is valid...
var positionRange = snapshotRange.Snapshot.TextRangeToPositionRange(snapshotRange.TextRange);
Here the position range is built from snapshotRange, which covers the entire snapshot and therefore practically never fits in a single line...
I think you intended to use desiredSnapshotRange instead:
var positionRange = snapshotRange.Snapshot.TextRangeToPositionRange(desiredSnapshotRange.TextRange);
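To spell out why that matters, here is how I read the check (a reconstruction based on the comment above, not the actual library source; member names like StartPosition.Line are my assumption):

// My reconstruction of the single-line fast-path check, not the actual library source.
// Member names (StartPosition.Line, EndPosition.Line) are assumed.
static bool IsSingleLine(TextSnapshotRange range)
{
    var positionRange = range.Snapshot.TextRangeToPositionRange(range.TextRange);
    return positionRange.StartPosition.Line == positionRange.EndPosition.Line;
}

// IsSingleLine(snapshotRange)        -> snapshotRange spans the whole document, so this is
//                                       practically always false and the cached line context is never used.
// IsSingleLine(desiredSnapshotRange) -> true whenever the requested range fits on one line,
//                                       which would let the cached context data be reused.

If that reading is correct, using desiredSnapshotRange would let these lookups take the single-line fast path, and the re-lexing shown in the profile should mostly disappear.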
Best regards, Tobias Lingemann.