Bad performance test ping/pong messages #838
I'll take a peek at this.
Thanks for the quick response! The PC is not doing any other work during the test. I tried running it on another PC (several times) and the result was the same.
Wrote a little test. Here you can also see that the number of ping/pong messages per second was significantly higher in the old version.
Results:
Marking this up for grabs.
Just for reference, on an M1 Mac Pro I get around 280,000 messages per second. @kharitonov1995 is this problem still happening?
I'm looking at the pprof trace of this example, and one thing stands out to me: a lot of goroutines are being created, not all at the same time, but serially. The dispatcher/scheduler of the default mailbox is very strict about marking the mailbox as idle as soon as the mailbox is empty, and that makes the scheduler start a new goroutine for almost every message in a ping/pong workload like this one.
Relevant code:
- protoactor-go/actor/mailbox.go, lines 91 to 92 (commit 4900170)
- protoactor-go/actor/mailbox.go, lines 110 to 111 (commit 4900170)
- protoactor-go/actor/mailbox.go, lines 125 to 126 (commit 4900170)
Does that make sense, @rogeralsing?
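The mailbox.go excerpts embedded above did not survive into this text. As a rough, paraphrased sketch of the idle/running handoff being described (names and structure are simplified stand-ins, not the actual protoactor-go source), the pattern looks something like this:

```go
// Paraphrased sketch of the scheduling pattern under discussion: posting a
// message only starts a goroutine when it wins the CAS from idle to running,
// and the processing loop flips the status back to idle as soon as the queue
// looks empty. All names here are illustrative.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

const (
	idle int32 = iota
	running
)

type sketchMailbox struct {
	status int32
	queue  chan string // stand-in for the real lock-free queues
}

// post enqueues a message and schedules processing; it only spawns a new
// goroutine if the mailbox was idle.
func (m *sketchMailbox) post(msg string) {
	m.queue <- msg
	if atomic.CompareAndSwapInt32(&m.status, idle, running) {
		go m.processMessages() // the real code hands this off to a dispatcher
	}
}

func (m *sketchMailbox) processMessages() {
	m.run()
	// Mark the mailbox idle as soon as run() saw an empty queue...
	atomic.StoreInt32(&m.status, idle)
	// ...then re-check, because a message may have arrived in the meantime.
	if len(m.queue) > 0 && atomic.CompareAndSwapInt32(&m.status, idle, running) {
		m.processMessages()
	}
}

func (m *sketchMailbox) run() {
	for {
		select {
		case msg := <-m.queue:
			fmt.Println("processed", msg) // the actor would be invoked here
		default:
			return // queue empty: this goroutine gives up immediately
		}
	}
}

func main() {
	m := &sketchMailbox{queue: make(chan string, 16)}
	// In a ping/pong exchange each message tends to arrive after the previous
	// goroutine has already exited, so every post pays for a fresh goroutine.
	m.post("ping 1")
	time.Sleep(10 * time.Millisecond)
	m.post("ping 2")
	time.Sleep(10 * time.Millisecond)
}
```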
First of all, are we talking about the same example? This is what I'm getting in the actor-inprocess-benchmark:
Are you running something else?
I ran the code @kharitonov1995 provided and used pprof to profile it. The actor-inprocess-benchmark somewhat hides the "issue" I've described, because there is almost always a message in the mailbox there. The case I've described is that the scheduler has to run a new goroutine for each new message: because of the nature of the example @kharitonov1995 wrote, it is impossible for more than one message at a time to exist in the mailbox, so the scheduler state keeps flipping between idle and running and new goroutines keep being created. *edited
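@kharitonov1995's benchmark code is not reproduced in this thread. A minimal ping/pong round trip of the kind described, where at most one user message is ever in flight per mailbox, might look roughly like the sketch below. This is illustrative only, written against the current protoactor-go API (actor.NewActorSystem, PropsFromFunc, Request/Respond); the import path, message types, and round count are assumptions, not the reporter's code.

```go
// Illustrative ping/pong round-trip benchmark in the spirit of the one being
// discussed (not the reporter's original code). Each mailbox holds at most
// one user message at a time, which is the pattern that makes the scheduler
// state bounce between idle and running.
package main

import (
	"fmt"
	"time"

	"github.com/asynkron/protoactor-go/actor" // older releases used github.com/AsynkronIT/protoactor-go/actor
)

type ping struct{}
type pong struct{}

func main() {
	system := actor.NewActorSystem()

	// The responder just answers every ping with a pong.
	ponger := system.Root.Spawn(actor.PropsFromFunc(func(ctx actor.Context) {
		if _, ok := ctx.Message().(*ping); ok {
			ctx.Respond(&pong{})
		}
	}))

	const rounds = 1_000_000
	done := make(chan struct{})
	received := 0

	start := time.Now()

	// The requester sends a ping, waits for the pong, and repeats.
	system.Root.Spawn(actor.PropsFromFunc(func(ctx actor.Context) {
		switch ctx.Message().(type) {
		case *actor.Started:
			ctx.Request(ponger, &ping{})
		case *pong:
			received++
			if received == rounds {
				close(done)
				return
			}
			ctx.Request(ponger, &ping{})
		}
	}))

	<-done
	elapsed := time.Since(start)
	fmt.Printf("%d round trips in %v (~%.0f messages/s)\n",
		rounds, elapsed, float64(rounds*2)/elapsed.Seconds())
}
```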
IIRC we have tried various tricks here before, e.g. we have this bit in the mailbox:
To keep the goroutine alive while there are more messages to consume, but still allow other goroutines to do their work. That might improve this benchmark, but it is hard to say what effect it would have on real application code.
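The mailbox snippet referenced above did not make it into this thread. Judging from the modified version quoted in the next comment, the bit in question is presumably the throughput/Gosched fragment of the default mailbox's run loop, roughly (a reconstruction, not copied from the repository):

```go
// Presumed shape of the referenced run loop: process up to Throughput()
// messages, then call runtime.Gosched() so other goroutines get a turn,
// and keep looping while there is anything left in the queues.
i, t := 0, m.dispatcher.Throughput()
for {
	if i > t {
		i = 0
		runtime.Gosched()
	}
	i++
	// pop and process the next system/user message here;
	// return (letting the mailbox go idle) once both queues are empty
	[...]
}
```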
Doing this makes the performance increase from ~280k to around 1 million messages per second for this particular test. Now, I know it is not a good idea to call runtime.Gosched() on every iteration like this:

```go
i, t := 0, 0 // was: m.dispatcher.Throughput()
for {
	if i > t {
		i = 0
		// with t == 0 this yields after every processed message, keeping the
		// goroutine alive long enough for the reply to land in the mailbox
		runtime.Gosched()
	}
	i++
	[...]
```
I was using an old version of Proto.Actor; when I saw the new version on GitHub I decided to upgrade. When I measured the number of message transfers, I found that processing is much faster in the old version.
The old version: github.com/AsynkronIT/protoactor-go v0.0.0-20210125121722-bab29b9c335d
The new version: commit 66c886e5 (dev branch)
Benchmark code: