Allow concurrent calls to Engine #673
base: main
Conversation
Awesome work here! I'm a bit concerned about the performance. Currently Jint is highly optimized to work as a single-threaded engine, and you should pool instances outside of Jint's logic. Where does the need come from to concurrently access the same engine and data structures? Historically JavaScript has been run using a single-threaded model, which has its benefits and shortcomings.
For me the main use case would be users wanting to incorporate this lib in a web service. Currently I would see two options for this use case:
The first one not only impacts the response times, it will also use a lot more memory than required due to the stored duplicate information, which is directly correlated with the number of requests. Pooling would prevent the memory allocation, but decreases performance by limiting the number of concurrently processed calls once the pool hits its limit. As to the hit in performance, I would have to run the benchmark :)
Given that a lot of that is related to the need to parse / state.
I quite like that approach as well 👍
I think that the problem is that even with concurrency support you would suffer from shared or dirty state. Pre-parsed programs (esprima) should give you quite fast evaluation even if you build an engine per request. There was work done earlier to make engine creation faster, and of course we want all execution to be fast. Single-threaded should be quite fast, and locks would mean slower execution. It would be nice to see some benchmarks to show the use case for sharing these engine instances.
I also agree there must be a way to reuse "hot" engines that are somehow immutable, probably with some higher-level prototype, or at least an intermediate one. Maybe based on a dedicated mode where the default objects are immutable (global, Error, Math, ...) and only the user context would be released. So even if an engine is not thread-safe, it could at least be pooled and cleaned cheaply. Like with a "Freeze" method where, at that point, any existing state can't be changed, and only the user code's state can be. Then we could even imagine making a freeze after the standard objects are initialized, and provide an automatic engine pool, which would improve all the common scenarios without any custom configuration. We know that creating an Engine already incurs lots of C# code by default (every Constructor object, all the lambdas, ...).
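The freeze-then-pool idea above can be sketched roughly as follows. This is a hypothetical illustration (not Jint's API, and in Python rather than C# for brevity): the expensive base state is built once, a `freeze` call makes it read-only, and `reset` cheaply clears only the per-request user context so the instance can be returned to a pool.

```python
import types

class Engine:
    """Hypothetical engine with a freezable base state (sketch only)."""

    def __init__(self):
        # Expensive one-time setup: stand-in for Global, Error, Math, ...
        self.globals = {"Math": "...", "Error": "..."}
        self.user_context = {}

    def freeze(self):
        # From this point on, the base state is read-only; writes raise TypeError.
        self.globals = types.MappingProxyType(self.globals)

    def reset(self):
        # Cheap cleanup between requests: only the user context is released.
        self.user_context = {}
```

A pool would then hand out frozen engines and call `reset` on return, so common scenarios avoid the full construction cost without any custom configuration.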
Just throwing out an alternative idea I've been toying around with. You could put the instance of the engine in a Grain using Microsoft Orleans. It should handle the pooling/concurrency issues for you and provide another layer of state sharing/management within the Grain if you need it. https://github.com/dotnet/orleans Out of curiosity, does anyone have any metrics around the cost of creating an Engine vs reusing one?
This would explain #704 that we've run into. @sebastienros I'd love to see this merged, because I think it would enable our use of deferred calls too. In our case we use Jint in a not-ultra-light way, and a small performance hit would be an acceptable trade-off in my opinion. @mariusdv I assume it would be hard to make this an Engine option? |
(First PR on this codebase)
Rewrote the way the global information is stored to allow concurrent requests to both Execute and Invoke methods.
This mainly uses AsyncLocals, which were introduced in .NET Framework 4.6 (this required a package upgrade).
Execution Context & Call Stacks
Previously, these structures were kept in a simple stack. When running multiple invocations, this stack would be polluted by the different calls. To separate the data for every call while maintaining the ability to share common state, I use AsyncLocals. These locals remain scoped to a single call.
The structure is set up in a way that allows concurrent steps within a single invocation as well, if we ever want to run some things in parallel.
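The isolation that AsyncLocals provide can be illustrated with Python's `contextvars.ContextVar`, which behaves analogously to .NET's `AsyncLocal<T>`: each logical call flow sees its own value, so concurrent invocations no longer pollute a shared stack. The names below are illustrative, not Jint's.

```python
import asyncio
import contextvars

# One ContextVar holds the per-call "call stack"; each task gets its own view.
call_stack: contextvars.ContextVar[tuple] = contextvars.ContextVar(
    "call_stack", default=())

def push(frame):
    # Setting a new immutable tuple never mutates another call's view.
    call_stack.set(call_stack.get() + (frame,))

async def invoke(name):
    push(name)
    await asyncio.sleep(0)       # yield, interleaving with other invocations
    push(name + ":inner")
    return call_stack.get()

async def main():
    # Two concurrent "engine calls"; each sees only its own frames.
    return await asyncio.gather(invoke("a"), invoke("b"))

stacks = asyncio.run(main())
```

With a plain shared stack, the interleaved `push` calls from both invocations would end up in one structure; with the context-local variable, each invocation's stack stays clean.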
StrictModeScope and EvalCodeScope
The structure used to be based on a disposable pattern that modified the same static state. This similarly became polluted during concurrent requests. The pattern was changed from disposables to delegates; the change in usage is minimal, whereas the scoping is now provided correctly.
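The difference between the two patterns can be sketched as follows (hypothetical names, Python stand-in for the C# disposable/delegate pair): the old disposable mutated a static field that every concurrent call could observe, while the delegate version passes the scope value directly into the callback, so nothing global is touched.

```python
# Old pattern: a disposable-style scope toggling shared static state.
# Under concurrency, call B can observe the flag set by call A.
class StrictModeScope:
    strict = False                      # static state shared by every call

    def __enter__(self):
        StrictModeScope.strict = True

    def __exit__(self, *exc):
        StrictModeScope.strict = False

# New pattern: the scope value travels with the delegate (callback),
# so it is correctly scoped to the call that supplied it.
def run_in_strict_scope(strict, body):
    return body(strict)

result = run_in_strict_scope(
    True, lambda strict: "strict" if strict else "sloppy")
```

The usage change is small, as the description says: instead of wrapping code in a `using` block, the caller hands the body to the scope as a delegate.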
Others
To make this scenario work, I also changed some dictionaries to concurrent dictionaries and altered the global cache objects so they use the ConcurrentObjectPool instead of the ObjectPool.
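A minimal thread-safe pool in the spirit of the `ConcurrentObjectPool` mentioned above might look like this (a simplified sketch, not Jint's implementation; Python's `queue.Queue` supplies the thread safety that a lock-free structure would in C#):

```python
import queue

class ConcurrentObjectPool:
    """Sketch of a thread-safe object pool: rent an instance if one is
    cached, otherwise create it; return it when done."""

    def __init__(self, factory, max_size=16):
        self._factory = factory
        self._items = queue.Queue(maxsize=max_size)  # Queue is thread-safe

    def rent(self):
        try:
            return self._items.get_nowait()
        except queue.Empty:
            return self._factory()                   # pool empty: allocate

    def return_item(self, item):
        try:
            self._items.put_nowait(item)
        except queue.Full:
            pass                                     # pool full: let it be GC'd

pool = ConcurrentObjectPool(list)
a = pool.rent()
pool.return_item(a)
b = pool.rent()           # the returned instance is reused
```

The same reasoning applies to swapping plain dictionaries for concurrent ones: every structure that multiple in-flight calls can touch must either be partitioned per call (as with the AsyncLocals above) or made safe for concurrent access.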