Hi, I use `com = alpha * img + (1 - alpha) * [10, 255, 15]` to composite the 'real-world portrait dataset' matting foregrounds onto a green background, but I found that the boundary area is not very soft. I have also used your pre-trained model to get matting results on my own portrait dataset; there the boundary area performed well, but a lot of the inner foreground was missed. My mask was obtained with the U^2-Net demo.
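For reference, here is a minimal sketch of the compositing step I am doing (file paths are placeholders, and I assume the matte is an 8-bit grayscale image):

```python
import cv2
import numpy as np

# Composite the portrait onto a solid green background using the alpha matte.
# "portrait.png" / "matte.png" are placeholder paths.
img = cv2.imread("portrait.png").astype(np.float32)                    # HxWx3
alpha = cv2.imread("matte.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
alpha = alpha[:, :, None]                                              # HxWx1 for broadcasting

green = np.array([15, 255, 10], dtype=np.float32)  # [10, 255, 15] in RGB; OpenCV uses BGR
com = alpha * img + (1.0 - alpha) * green
cv2.imwrite("composite_green.png", np.clip(com, 0, 255).astype(np.uint8))
```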
As mentioned in the paper and this GitHub repo, using the original image as the foreground usually does not lead to satisfying results. You could try training a foreground model with this repo using random alpha blending, which should not be hard to implement, or use a traditional method (e.g., closed-form matting) to obtain the foreground color. I am occupied at the moment but will work on releasing the foreground code/model when I get time.
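To make the random alpha blending idea concrete, here is a minimal sketch (function and variable names are just illustrative, not the repo's actual API): blend two random images with a ground-truth alpha, so that the true foreground color is known at every pixel and a foreground-prediction head can be supervised over the whole image.

```python
import numpy as np

def random_alpha_blending(fg_img, bg_img, alpha):
    """Synthesize a training pair for foreground prediction.

    fg_img, bg_img: float32 RGB arrays in [0, 1] with the same HxW shape.
    alpha: float32 matte in [0, 1], shape HxW.
    Returns (composite, gt_foreground).
    """
    a = alpha[..., None]                         # HxWx1 for broadcasting
    composite = a * fg_img + (1.0 - a) * bg_img
    # fg_img serves as the ground-truth foreground everywhere, so the loss
    # can be computed even in regions where alpha is close to 0.
    return composite, fg_img
```

The point of the synthetic blend is that the foreground color is defined at every pixel, unlike real mattes, where it is unobservable wherever alpha is 0.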
The results look weird to me. Could you please confirm that the pretrained weights you are using are MGMatting-RWP-100k rather than MGMatting-DIM-100k?