Abstract: This paper introduces V2Coder, a non-autoregressive vocoder based on hierarchical variational autoencoders (VAEs). The hierarchical VAE with hierarchically extended prior and approximate ...
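As a rough illustration of the hierarchical-VAE idea behind such vocoders, the sketch below wires a two-level VAE in which the top latent conditions the prior of the lower one (one reading of a "hierarchically extended prior"). This is a schematic stand-in, not V2Coder's architecture; every dimension and module name here is invented for the example.

```python
# Schematic two-level hierarchical VAE (hypothetical; not the V2Coder model).
# The top-level latent z2 parameterizes a conditional prior p(z1 | z2).
import torch
import torch.nn as nn

class HierVAE(nn.Module):
    def __init__(self, x_dim=80, z1_dim=16, z2_dim=8, h=128):
        super().__init__()
        self.enc2 = nn.Sequential(nn.Linear(x_dim, h), nn.Tanh(), nn.Linear(h, 2 * z2_dim))
        self.enc1 = nn.Sequential(nn.Linear(x_dim + z2_dim, h), nn.Tanh(), nn.Linear(h, 2 * z1_dim))
        self.prior1 = nn.Sequential(nn.Linear(z2_dim, h), nn.Tanh(), nn.Linear(h, 2 * z1_dim))
        self.dec = nn.Sequential(nn.Linear(z1_dim, h), nn.Tanh(), nn.Linear(h, x_dim))

    @staticmethod
    def sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)          # split into mean / log-variance
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp(), mu, logvar

    def forward(self, x):
        z2, mu2, lv2 = self.sample(self.enc2(x))                       # top level
        z1, mu1, lv1 = self.sample(self.enc1(torch.cat([x, z2], -1)))  # bottom level
        pm1, plv1 = self.prior1(z2).chunk(2, -1)                       # prior p(z1 | z2)
        recon = self.dec(z1)
        kl2 = 0.5 * (mu2.pow(2) + lv2.exp() - 1 - lv2).sum(-1)         # KL to N(0, I)
        kl1 = 0.5 * ((lv1 - plv1).exp()                                # KL to p(z1 | z2)
                     + (mu1 - pm1).pow(2) / plv1.exp() - 1 + plv1 - lv1).sum(-1)
        return recon, kl1 + kl2

x = torch.randn(4, 80)          # e.g. a batch of spectrogram frames
recon, kl = HierVAE()(x)
```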
We present Representation Autoencoders (RAE), a class of autoencoders that pair pretrained, frozen representation encoders such as DINOv2 and SigLIP2 with trained ViT decoders. RAE can ...
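A schematic of that frozen-encoder/trained-decoder split in PyTorch. `FrozenEncoder` below is a stand-in for a real representation encoder (in practice one might load DINOv2 via `torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")`); the point is only that the encoder's parameters stay fixed while the ViT-style decoder is trained to reconstruct pixels.

```python
# RAE-style setup sketch: frozen encoder -> patch tokens -> trainable decoder.
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Stand-in for a pretrained encoder such as DINOv2 or SigLIP2."""
    def __init__(self, patch=16, dim=384):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        for p in self.parameters():
            p.requires_grad = False          # representation encoder stays frozen

    def forward(self, img):                  # (B, 3, H, W) -> (B, N, dim)
        return self.proj(img).flatten(2).transpose(1, 2)

class ViTDecoder(nn.Module):
    def __init__(self, dim=384, patch=16, depth=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_pixels = nn.Linear(dim, patch * patch * 3)
        self.patch = patch

    def forward(self, tokens, hw):
        x = self.to_pixels(self.blocks(tokens))     # (B, N, patch*patch*3)
        B, N, _ = x.shape
        h, w = hw
        x = x.view(B, h, w, self.patch, self.patch, 3)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(B, 3, h * self.patch, w * self.patch)

enc, dec = FrozenEncoder(), ViTDecoder()
img = torch.randn(2, 3, 224, 224)
recon = dec(enc(img), hw=(14, 14))                  # 224 / 16 = 14 patches per side
loss = nn.functional.mse_loss(recon, img)           # gradients reach the decoder only
```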
MAESTRO: Masked Autoencoders for Multimodal, Multitemporal, and Multispectral Earth Observation Data
MAESTRO_FLAIR-HUB_base — pre-trained on FLAIR-HUB
MAESTRO_S2-NAIP-urban_base — pre-trained on S2-NAIP-urban
Land cover segmentation in France, with 12 semantic classes. Note that the FLAIR#2 version ...
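For context on the masked-autoencoder pretraining that MAESTRO's name refers to, here is a minimal random patch-masking routine in the style of MAE. It is a generic sketch, not code from the MAESTRO repository: the encoder sees only the visible tokens, and the reconstruction loss is later computed on the masked ones.

```python
# Generic MAE-style random masking sketch (not the MAESTRO implementation).
import torch

def random_masking(tokens, mask_ratio=0.75):
    """tokens: (B, N, D) -> visible tokens, restore indices, binary mask."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                  # one random score per token
    ids_shuffle = noise.argsort(dim=1)        # lowest scores are kept
    ids_restore = ids_shuffle.argsort(dim=1)  # to re-order tokens for the decoder
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, ids_keep, 0.0)           # 1 = masked, 0 = visible
    return visible, ids_restore, mask

tokens = torch.randn(2, 196, 384)             # e.g. 14x14 patches, embed dim 384
visible, ids_restore, mask = random_masking(tokens)
print(visible.shape, mask.sum(dim=1))         # (2, 49, 384), 147 masked per sample
```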
Abstract: Variational Graph Autoencoders (VGAE) have emerged as powerful graph representation learning methods with promising performance on graph analysis tasks. However, existing methods typically rely ...
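For reference, a minimal variational graph autoencoder in the style of Kipf & Welling's VGAE, the baseline such papers typically build on. This is a schematic, not the snippet's proposed method: the encoder is a single feature-propagation step, and the decoder is the inner product sigma(Z Z^T) predicting edge probabilities.

```python
# Minimal VGAE sketch (Kipf & Welling-style baseline, for illustration only).
import torch
import torch.nn as nn

class VGAE(nn.Module):
    def __init__(self, in_dim, z_dim=16):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, adj_norm, feats):
        h = adj_norm @ feats                            # one propagation step A_hat X
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z @ z.t(), mu, logvar                    # inner-product decoder logits

n = 6
adj = (torch.rand(n, n) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                     # symmetrize toy adjacency
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)  # row-normalize
feats = torch.randn(n, 8)
logits, mu, logvar = VGAE(8)(adj_norm, feats)
recon = nn.functional.binary_cross_entropy_with_logits(logits, adj)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
```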