-
Hey @movalz,
Our unordered groups work a bit differently compared to Xtext's. Using optionality shouldn't be necessary at all, since all unordered groups are optional in Langium. Internally, all Langium does is transform an unordered group like `A & B & C` into `(A | B | C)*` and validate afterwards that each alternative is matched at most once.
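For illustration, a group like the following already allows each element to be omitted (a minimal sketch with made-up rule names, assuming the usual ID/INT/STRING terminals):

```langium
Config:
    'config' name=ID '{'
        // no '?' needed: every element of an unordered group is
        // optional in Langium and may appear at most once
        (('port' port=INT) & ('host' host=STRING))
    '}';
```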
Do you have a reproducible example? It's pretty difficult to tell what goes wrong and where. Generally, the default behavior of the scope provider/linker implementation works differently in Langium compared to Xtext. You might need to perform some adjustments in a specialized ScopeProvider implementation.
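A typical starting point looks roughly like this (a sketch; "MyDsl" is a placeholder and the exact `getScope` signature depends on your Langium version):

```typescript
import { DefaultScopeProvider, ReferenceInfo, Scope } from 'langium';

export class MyDslScopeProvider extends DefaultScopeProvider {
    override getScope(context: ReferenceInfo): Scope {
        // context tells you which property of which container node is being
        // resolved, so individual references can be special-cased here
        return super.getScope(context);
    }
}
```

The class then needs to be bound in your language module under `references.ScopeProvider` so the linker picks it up.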
15 seconds for 2.3 MB does indeed sound rather slow. However, we usually deal with small-ish grammars (fewer than 500 lines), and parser performance can tank pretty heavily once the number of alternatives increases. Note that Langium already uses the fastest parser generator available for JavaScript - but at some point we are limited by the runtime. Have you done any benchmarking yet? At least with most grammars, we usually match Xtext in performance.
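If you haven't measured where the time goes, a rough sketch like the following isolates raw parse time from linking (`createMyDslServices`, the module path and the file name are placeholders for your generated project):

```typescript
import * as fs from 'node:fs';
import { EmptyFileSystem } from 'langium';
import { createMyDslServices } from './my-dsl-module.js';

const services = createMyDslServices(EmptyFileSystem).MyDsl;
const text = fs.readFileSync('models/example.mydsl', 'utf-8');

// measure only the parser, without indexing, linking or validation
const start = performance.now();
const result = services.parser.LangiumParser.parse(text);
console.log(`parsed in ${(performance.now() - start).toFixed(1)} ms,`,
    `${result.lexerErrors.length} lexer errors,`,
    `${result.parserErrors.length} parser errors`);
```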
Langium already caches stuff pretty aggressively. However, it needs to reparse the file up until the point where the completion is requested. How large are the files in question?
The parser is pretty well battle-tested. I assume that in some part your grammar is so deeply nested that the runtime just cannot keep up. In theory, you can work around this by increasing the stack size when starting the language server, passing the appropriate arguments through the language client.
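In a VS Code extension, that could look roughly like this (a sketch; the module path, language ID and stack size value are placeholders, and `--stack-size` is the underlying V8 option, given in KB):

```typescript
import type { ExtensionContext } from 'vscode';
import { LanguageClient, LanguageClientOptions, ServerOptions, TransportKind } from 'vscode-languageclient/node';

export function startLanguageClient(context: ExtensionContext): LanguageClient {
    const serverModule = context.asAbsolutePath('out/language-server/main.js');
    const serverOptions: ServerOptions = {
        run: {
            module: serverModule,
            transport: TransportKind.ipc,
            // raise the V8 call stack limit for the server process
            options: { execArgv: ['--stack-size=4000'] }
        },
        debug: {
            module: serverModule,
            transport: TransportKind.ipc,
            options: { execArgv: ['--nolazy', '--inspect=6009', '--stack-size=4000'] }
        }
    };
    const clientOptions: LanguageClientOptions = {
        documentSelector: [{ scheme: 'file', language: 'my-dsl' }]
    };
    const client = new LanguageClient('my-dsl', 'My DSL', serverOptions, clientOptions);
    client.start();
    return client;
}
```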
What generator?
-
I am investigating whether a change from Xtext to Langium is an option for us. I have used xtext2langium to transform a big Xtext DSL grammar of our company to Langium. The *.xtext grammar file has around 2000 lines of code with many references.
Then I implemented the ScopeProvider and a small generator and tested everything with 111 model files with a total size of around 2.3 MB.
I ran into some Langium issues:

1. Optional elements in unordered groups: my grammar uses unordered groups with optional elements (`(...)? & (...)?`). I got the error message "Optional elements in Unordered groups are currently not supported". This is an important feature for us. Are there plans for when this feature will be implemented?
2. Linker errors: a short version of my `entry Model` rule functions, but when I change it to the full version
(where XxToplevelElement and DEntity are deeply nested rules), I get many, many linker errors. Reducing it to a simpler variant works better (the linkings are successful), but I get a strange parse error "Expecting token of type ';' but found `[]`." for `... SomeType[];`. These errors don't appear in the first (short) version of the `entry Model`
rule.
3. Performance: when the inner VS Code instance opens, it takes about 15 seconds until all model files are loaded and linked. This isn't very fast, but OK. However, when editing inside a model file, each press of CTRL+SPACE for code completion takes 5 to 7 seconds, which makes code completion almost useless. Perhaps caching all model files which haven't changed would help?
4. Stack overflow error: when pressing CTRL+SPACE at a certain reference position, a stack overflow error occurs. It happens when `LangiumCompletionParser.parse(input)` calls `this.mainRule.call(this.wrapper, {})`. I don't know whether it's an endless loop or whether setting the call stack size to a higher value (I don't know how to do that) would help.
5. Generator linking: when running the generator, the linking isn't done successfully. I get many errors of the form "Could not resolve reference to ...".
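Is the following roughly the right way to drive the document lifecycle in a standalone generator? (A simplified sketch; `createMyDslServices` and the `.mydsl` extension are placeholders.)

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';
import { URI } from 'vscode-uri';
import { NodeFileSystem } from 'langium/node';
import { createMyDslServices } from '../language-server/my-dsl-module.js';

export async function generate(modelDir: string): Promise<void> {
    const services = createMyDslServices(NodeFileSystem).MyDsl;
    const workspace = services.shared.workspace;

    // register all model files first, so that cross-file references
    // can be resolved when the documents are built together
    const uris = fs.readdirSync(modelDir)
        .filter(file => file.endsWith('.mydsl'))
        .map(file => URI.file(path.resolve(modelDir, file)));
    const documents = await Promise.all(
        uris.map(uri => workspace.LangiumDocuments.getOrCreateDocument(uri)));

    // building performs indexing, linking and validation for all documents
    await workspace.DocumentBuilder.build(documents);

    // ... walk documents[i].parseResult.value and emit output here
}
```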
We would be very grateful for help in solving these issues.