- 👋 Hi, I'm Mengping Yang (just call me Mengping). I recently finished my Ph.D. at East China University of Science and Technology with dozens of honors.
- 👀 I'm very interested in multi-modal learning and generative models across various modalities, e.g., visual-language modeling and text-to-image/video/3D/4D synthesis.
- 🌱 I'm always open to collaboration and discussion on any academic/engineering issues in advanced tech. Feel free to reach me at [email protected].
- Ph.D. in Computer Science and Technology, focusing on multi-modal learning and generative models.
- Shanghai (UTC +08:00)
- kobeshegu.github.io
- @kobeshegu
- https://www.zhihu.com/people/ke-ke-ke-ke-ke-da-xia
Pinned
- awesome-few-shot-generation: A curated list of awesome few-shot image generation papers
- ECCV2022_WaveGAN: The official code of WaveGAN: Frequency-aware GAN for High-Fidelity Few-shot Image Generation (ECCV 2022)
- FreGAN_NeurIPS2022: [NeurIPS 2022] FreGAN: Exploiting Frequency Components for Training GANs under Limited Data
- CKA-Evaluation: Code for our metric paper: Revisiting the Evaluation of Image Synthesis with GANs
- Monalissaa/DisenDiff: [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization
- llm-conditioned-diffusion/llm-conditioned-diffusion.github.io