Side Question: Rank usability, features, memory, and speed

SyntaxEditor Brainstorming Forum

Posted 16 years ago by Actipro Software Support - Cleveland, OH, USA
Please rank usability, features, memory, and speed in order of preference from 1 to 4, with 1 being your top preference.


Actipro Software Support

Comments (8)

Posted 16 years ago by Eric J. Smith - CodeSmith Tools, LLC
1. Usability
2. Features
3. Performance
4. Memory

Performance is a little tricky, though. I rank in this order for development decisions, then address performance issues once they are proven to be a real problem, and I will move performance above both features and usability if that is what it takes to get performance to an acceptable level.
Posted 16 years ago by Damir Bulic
Same as Eric.
Posted 16 years ago by Kelly Leahy - Software Architect, Milliman
Same as Eric.

Kelly Leahy Software Architect Milliman, USA

Posted 16 years ago by Tom Goff
In general, I would agree with Eric as well. Although, since this is a side question to the line/col thread, for that particular subject I would put usability at the bottom.
Posted 16 years ago by Eric J. Smith - CodeSmith Tools, LLC
So a concrete example of this would be tokens storing document references. A reference adds 4 bytes of data to each token and increases memory usage, but it enables some nice usability features: being able to get the text of a token from the token itself, and being able to pass a token around and navigate the object model from it without also having to pass a document object along with it. I'm sure there are other nice helper methods that could be added to a token if it knew about its context.

If you take the very extreme case of a 1 MB document and assume an average of 1 token per 10 characters, that would be 104,857 tokens, and the references would add about 409 KB of memory usage to the document. Personally, I don't know why someone would want to tokenize a document that big. A more reasonable example would be maybe a 50 KB document, which is actually still pretty big: 50 KB would be 5,120 tokens and about 20 KB of memory usage.

So the question to you guys is:

Would you prefer that Bill add usability niceties like a document reference on each token and use a little more memory, or would you rather Bill focus on keeping memory usage to a bare minimum?
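A minimal sketch of the tradeoff Eric describes (in Java here, since the actual SyntaxEditor API is .NET and not shown in this thread; the Document and Token names and members are hypothetical): each token carries one extra reference back to its document, and in exchange the token can resolve its own text.

```java
// Hypothetical sketch, not SyntaxEditor's real API: a token holding a
// back-reference to its document so it can resolve its own text.
class Document {
    private final String text;

    Document(String text) { this.text = text; }

    String getText() { return text; }
}

class Token {
    private final Document document; // one extra reference per token
    private final int offset;
    private final int length;

    Token(Document document, int offset, int length) {
        this.document = document;
        this.offset = offset;
        this.length = length;
    }

    // Usability win: callers no longer need to carry the document around
    // alongside the token just to read the token's text.
    String getText() {
        return document.getText().substring(offset, offset + length);
    }
}
```

Without the `document` field, `getText` would instead have to be a method on `Document` taking a token, and every consumer of a token would need both objects in hand.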
Posted 16 years ago by Kelly Leahy - Software Architect, Milliman
For things that would obviously be nice to have (I'm not sure 'document' is one of them), I think they should be added to the interface (IToken) but not be required to be implemented. That way, when we derive our own token types, we can add the functionality and use the public interface to get at it, but we don't have to provide it if we don't want to (possibly at the cost of some features, or requiring a different workaround on our part).

Alternatively, it might be nice to think about a way of 'annotating' tokens with our own decorations. I've done this on my AST by adding a SetDecoration&lt;T&gt;(T decoration) method that stores a decoration (keyed by type) in a dictionary (this is very similar to WPF properties, though I didn't know it at the time :)), and a GetDecoration&lt;T&gt;() that returns those decorations.

I use this to 'lazy load' several things in my AST as they are needed, but remember them for a given instance of the AST - without the AST nodes directly knowing about the stuff being kept (since the parser doesn't build this information itself). It's come in very handy for additional semantic information that is determined after the parse and computed on demand.
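The decoration scheme Kelly describes can be sketched as a type-keyed map on each node, plus a get-or-compute helper for the lazy-load behavior. This is a Java sketch (Kelly's actual code is .NET and not shown); the class and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a type-keyed 'decoration' store, similar in spirit to what
// Kelly describes (and loosely to WPF attached properties). Illustrative
// names only; not Kelly's real implementation.
class AstNode {
    private final Map<Class<?>, Object> decorations = new HashMap<>();

    // Store one decoration per concrete type.
    <T> void setDecoration(T decoration) {
        decorations.put(decoration.getClass(), decoration);
    }

    @SuppressWarnings("unchecked")
    <T> T getDecoration(Class<T> type) {
        return (T) decorations.get(type);
    }

    // Lazy-load helper: compute the decoration on first request, then
    // remember it for this node instance, as Kelly does for semantic
    // information determined after the parse.
    <T> T getOrCompute(Class<T> type, Supplier<T> compute) {
        T existing = getDecoration(type);
        if (existing == null) {
            existing = compute.get();
            setDecoration(existing);
        }
        return existing;
    }
}
```

The parser never sees the decorations; later passes attach whatever they need, keyed by type, without the node class knowing about it.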

Kelly Leahy Software Architect Milliman, USA

Posted 16 years ago by Wesner Moise
Same as Eric Smith.
Posted 16 years ago by Matt Whitfield
I would rank them differently - actually...

1. Usability
2. Performance
3. Memory
4. Features

I'm from the school of the KISS principle, so I'd go for the top three over features.

Performance is also a particular issue for me (I'm writing a SQL editor, where it's not uncommon to be working on a document 5 MB in size). However, the performance is only ever a problem on the initial lexical parse of a new document, whether it is loaded or pasted. As I said in the thread on the main board, I would have several kittens if it were possible to do this lexical parsing on a different thread and then periodically call back to the UI to update the highlighting (or even have no highlighting at all until the lex was finished, although that is less preferable).
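Matt's idea of lexing off the UI thread with periodic callbacks can be sketched roughly as below (Java sketch with hypothetical names; a real .NET editor would also need to marshal the callback onto the UI thread, e.g. via Control.Invoke, which is omitted here). The "lexer" is a trivial whitespace splitter standing in for a real lexical pass, and it can split tokens that straddle a chunk boundary; a real implementation would resume at a token boundary.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of background lexing with periodic progress callbacks.
// Hypothetical names; the whitespace split is a stand-in for a real lexer.
class BackgroundLexer {
    static Thread lexAsync(String document, int chunkSize,
                           Consumer<List<String>> onChunkLexed) {
        Thread worker = new Thread(() -> {
            for (int start = 0; start < document.length(); start += chunkSize) {
                int end = Math.min(start + chunkSize, document.length());
                // Trivial stand-in for lexing this chunk into tokens.
                List<String> tokens = new ArrayList<>();
                for (String t : document.substring(start, end).split("\\s+")) {
                    if (!t.isEmpty()) tokens.add(t);
                }
                // Periodic callback so the UI can highlight what's done so far.
                onChunkLexed.accept(tokens);
            }
        });
        worker.start();
        return worker;
    }
}
```

The UI stays responsive because the large initial parse never runs on its thread; highlighting simply fills in chunk by chunk as callbacks arrive.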