Fix bug with prefetching on different nesting levels #519
base: master
Conversation
Codecov Report
Base: 95.72% // Head: 95.71% // Decreases project coverage by -0.01%.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##            master     #519     +/-  ##
============================================
- Coverage    95.72%   95.71%   -0.01%
  Complexity    1769     1769
============================================
  Files          154      154
  Lines         4276     4274       -2
============================================
- Hits          4093     4091       -2
  Misses         183      183
```
☔ View full report at Codecov.
@hjerichen thanks for the PR! Please allow some time for review.
I thought about my solution again and I think it is a mistake. In the $sources I use for the PrefetchBuffer there may be unserializable objects, and that would result in an exception. No one should do that in my opinion, but it is possible.
I see. How's that any different from how it is currently with a serialization issue for objects? I actually haven't made use of prefetch, as we tend to rely on ORM and Repository level hydration in many cases. What is the shape of the
That is too bad. I was hoping one of you active maintainers would have more insights on how prefetching works under the hood.

```graphql
query {
  contacts {
    name
    company
    uppercaseName
    repeatName(prefix: "foo", suffix: "bar")
    repeatInnerName
    ... on User {
      email
    }
  }
}
```

If we go the way with $sources, we need the key in PrefetchBuffer to be a combination with $args. That I did wrong here. But the issue remains with the unserializable objects in $sources. I will try to dig deeper when I have time.
@hjerichen where do the unserializable objects come into play? Looking at the
@oojacoboo maybe I have expressed myself incorrectly. The buffer key is currently computed like this:

```php
private function computeHash(array $arguments): string
{
    return md5(serialize($arguments));
}
```

After my change I found a more logical approach to that problem. It is still not perfect, but maybe acceptable.
My reasoning here is that registering a new object with the prefetch buffer (for instance a Blog type object) should invalidate the corresponding result cache.
If you agree with me here, I will add more unit tests to the PrefetchBufferTest class.
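A minimal sketch of that idea; the property and method names ($objects, $results, computeHash()) are assumed from this thread, not copied from graphqlite's actual PrefetchBuffer:

```php
// Sketch: registering a new source object drops any result already
// cached for the same arguments hash, forcing a fresh prefetch run.
class PrefetchBuffer
{
    /** @var array<string, object[]> */
    private array $objects = [];

    /** @var array<string, mixed> */
    private array $results = [];

    public function register(object $object, array $arguments): void
    {
        $hash = $this->computeHash($arguments);
        $this->objects[$hash][] = $object;

        // The invalidation discussed here: a result computed before this
        // object was registered no longer covers all objects, so drop it.
        unset($this->results[$hash]);
    }

    private function computeHash(array $arguments): string
    {
        return md5(serialize($arguments));
    }
}
```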
@hjerichen can you explain why invalidating the result cache is necessary or beneficial?
Assume that you want to query a list of posts and each post has several comments. In some special cases, like in my test, the comments get prefetched before all posts are registered to the buffer, so the posts registered later end up without comments. On the other side, if the result cache gets invalidated when a new post gets registered, the prefetch runs again and the later posts get their comments as well.
@hjerichen thanks for that. Unfortunately, I'm not really following you.
Yes. `$this->results` holds the comments for the posts in `$this->objects`.
@oojacoboo Did you have time to take a look at it? Would be great to have this fixed.
@hjerichen so, about this invalidation in the register method: if the hash is being accurately generated and unique, I don't understand how the results aren't accurate per the hash. What's causing them to get out of "sync"? Is this an issue in your case because "subBlog" and "blog" are actually the same object?
Maybe another example:

```graphql
query {
  managers {
    id
    addresses {
      id
      street
    }
    purchasers {
      id
      addresses {
        id
        street
      }
    }
  }
}
```

Let's say purchasers are from a different type than managers. The problem is that the prefetching of addresses is only called for the managers. If you do not think this is a bug, then at least throw an error or warning when register is called after the result is already filled. Removing the results when new objects are registered, to force a new prefetching, is one way. Not the best one.
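A minimal sketch of the error/warning alternative suggested above; the names are assumed for illustration, not taken from graphqlite:

```php
// Sketch: complain instead of silently handing back a stale result when
// an object is registered after the prefetch result was already computed.
public function register(object $object, array $arguments): void
{
    $hash = $this->computeHash($arguments);

    if (isset($this->results[$hash])) {
        trigger_error(
            'Prefetch result already computed for these arguments; ' .
            'the newly registered object will not be part of it.',
            E_USER_WARNING
        );
    }

    $this->objects[$hash][] = $object;
}
```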
@hjerichen I'm suggesting that we find a way to uniquely hash the objects, such that removing the results isn't a concern. If it's indeed different and needs different results, the hash created for the cache should be able to be different. We shouldn't have to be adding and removing based on the order of operations.
How about using the internal IDs of the objects to create a hash?
@hjerichen not sure what you mean by internal IDs. If you're talking about an object's
Every object in PHP has a unique ID. I would think that graphqlite does not destroy its type objects before everything is fetched. The reuse of an ID after destruction should not be a problem.
@hjerichen I thought you might be talking about using the spl_object_id. It's possible that could work. I do wonder though if there is any concern about an object being destroyed and another object taking that memory space and receiving the same ID. That could render incorrect results. What's wrong with the composite? Would that not work?
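A minimal sketch of what an spl_object_id()-based key could look like, assuming the buffer keys on both the source objects and the field arguments; this is illustrative, not graphqlite's actual code:

```php
// Sketch: identify source objects by spl_object_id() instead of
// serializing them, and only serialize the (scalar) field arguments.
private function computeHash(array $sources, array $arguments): string
{
    $ids = array_map('spl_object_id', $sources);

    // Caveat raised above: spl_object_id() values can be reused once an
    // object has been garbage-collected.
    return md5(implode(',', $ids) . '|' . serialize($arguments));
}
```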
@oojacoboo Sorry for not responding for so long. What exactly do you mean by composite? This will work as long as there is no object with a property that holds a PDO instance. Besides that, I think this could be a performance loss for many big objects.
@hjerichen A composite key or compound key is a key that consists of more than one identifier. So something like:
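A hypothetical compound key in that spirit (the names here are illustrative, not from the original comment) could combine the object's identity with a hash of the arguments:

```php
// Sketch: a compound key, so identical arguments on different source
// objects still map to distinct prefetch cache entries.
$key = spl_object_id($source) . ':' . md5(serialize($arguments));
```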
Can someone please confirm that this issue is now resolved by #702?
Fix a bug where the prefetch method is called only once, without all type objects, when prefetching happens on different nesting levels.
Running the new test without the fix shows that prefetchPosts is only called once, with Blog 1 and 2. But it should be called with Blog 1, 2, 10, and 20. Therefore Blog 10 and 20 have no posts, but they should (see expected results).
With this fix, prefetchPosts gets called two times: once with Blog 1 and 2, and once with Blog 1, 2, 10, and 20. This solves the problem, but it is not the best solution. It would be better if it were called only once with all Blogs, but I could not get that working.
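For context, a minimal sketch of how a prefetch method like prefetchPosts is typically wired up with graphqlite's prefetchMethod feature; the annotations and signatures below are assumed from the library's documentation, not copied from this PR's test:

```php
use TheCodingMachine\GraphQLite\Annotations\Field;
use TheCodingMachine\GraphQLite\Annotations\Type;

/**
 * @Type
 */
class Blog
{
    public function __construct(private int $id)
    {
    }

    /**
     * @Field(prefetchMethod="prefetchPosts")
     * @param mixed $prefetchedPosts
     * @return Post[]
     */
    public function getPosts($prefetchedPosts): array
    {
        // Each Blog picks its own posts out of the shared prefetch result.
        return $prefetchedPosts[$this->id] ?? [];
    }

    /**
     * Called with a batch of Blog instances. The bug here: with Blogs on
     * different nesting levels, this ran once with only part of them.
     *
     * @param Blog[] $blogs
     * @return array<int, Post[]> posts indexed by blog ID
     */
    public static function prefetchPosts(iterable $blogs): array
    {
        // e.g. load the posts for every blog in a single query (elided).
        return [];
    }
}
```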