Masked Autoencoders (MAE) have shown remarkable potential in self-supervised representation learning for 3D point clouds. However, these methods primarily rely on point-level or low-level feature ...
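For context, here is a minimal sketch of what masked-autoencoder pretraining on a point cloud can look like: patches of points are randomly masked, only the visible patches are encoded, and the masked patches are reconstructed under a Chamfer-distance loss. The patch grouping, model sizes, and mask ratio are illustrative assumptions (a Point-MAE-style setup), not details taken from the paper in this snippet.

```python
# Toy masked-patch pretraining sketch for a point cloud (assumed setup, not the paper's).
import torch
import torch.nn as nn


def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets of shape (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(a, b)                                   # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


class MaskedPointAutoencoder(nn.Module):
    def __init__(self, points_per_patch=32, dim=128, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_embed = nn.Linear(points_per_patch * 3, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=1)
        self.head = nn.Linear(dim, points_per_patch * 3)    # reconstruct xyz per patch

    def forward(self, patches):                              # patches: (B, P, K, 3)
        B, P, K, _ = patches.shape
        tokens = self.patch_embed(patches.reshape(B, P, K * 3))
        n_mask = int(P * self.mask_ratio)
        perm = torch.randperm(P, device=patches.device)
        vis_idx, mask_idx = perm[n_mask:], perm[:n_mask]
        latent = self.encoder(tokens[:, vis_idx])            # encode visible patches only
        full = torch.cat([latent, self.mask_token.expand(B, n_mask, -1)], dim=1)
        rec = self.head(self.decoder(full))[:, -n_mask:]     # predict the masked patches
        return chamfer_distance(rec.reshape(B, -1, 3),
                                patches[:, mask_idx].reshape(B, -1, 3))


loss = MaskedPointAutoencoder()(torch.randn(2, 64, 32, 3))   # 2 clouds, 64 patches of 32 points
loss.backward()
```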
We propose a convolutional autoencoder with sequential and channel attention (CAE-SCA) to address this issue. Sequential attention (SA) is based on long short-term memory (LSTM), which captures ...
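A hedged sketch of how the two attention blocks named in this snippet could be wired into a 1-D convolutional autoencoder: sequential attention is read here as LSTM-derived gates over time steps, and channel attention as squeeze-and-excitation-style gates over feature channels. The layer sizes and exact gating below are assumptions, not the paper's implementation.

```python
# Illustrative sequential + channel attention in a small 1-D conv autoencoder (assumed design).
import torch
import torch.nn as nn


class SequentialAttention(nn.Module):
    """Weights each time step using a bidirectional LSTM over the feature map."""
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, x):                        # x: (B, C, T)
        h, _ = self.lstm(x.transpose(1, 2))      # (B, T, 2 * hidden)
        gate = torch.sigmoid(self.score(h))      # one gate per time step
        return x * gate.transpose(1, 2)          # broadcast over channels


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate over channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, T)
        gate = self.fc(x.mean(dim=2))            # global average pool over time
        return x * gate.unsqueeze(-1)


class CAEWithAttention(nn.Module):
    """Toy encoder-decoder with both attention blocks between the halves."""
    def __init__(self, in_channels=1, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.attn = nn.Sequential(SequentialAttention(channels), ChannelAttention(channels))
        self.decoder = nn.ConvTranspose1d(channels, in_channels, 4, stride=2, padding=1)

    def forward(self, x):
        return self.decoder(self.attn(self.encoder(x)))


x = torch.randn(2, 1, 128)                       # (batch, channels, time)
print(CAEWithAttention()(x).shape)               # torch.Size([2, 1, 128])
```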
We propose a Crystal Diffusion Variational Autoencoder (CDVAE) that captures the physical inductive bias of material stability. By learning from the data distribution of stable materials, the decoder ...
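A toy sketch of the decoding idea this snippet describes: start from noisy atom coordinates and denoise them step by step, so samples drift toward configurations resembling the training distribution of stable materials. The score network, noise schedule, and Langevin-style update below are illustrative assumptions and omit much of what CDVAE actually models (lattice parameters, atom types, periodicity).

```python
# Toy score-based denoising decoder for atom coordinates (assumed setup, not CDVAE itself).
import torch
import torch.nn as nn


class CoordScoreNet(nn.Module):
    """Predicts a denoising direction for each atom's 3-D coordinates at a given noise level."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3))

    def forward(self, coords, sigma):            # coords: (N, 3), sigma: scalar noise level
        s = torch.full((coords.shape[0], 1), sigma, device=coords.device)
        return self.net(torch.cat([coords, s], dim=1))


@torch.no_grad()
def langevin_decode(score_net, n_atoms=8, sigmas=(1.0, 0.5, 0.1), steps=20, eps=1e-3):
    """Annealed Langevin-style denoising: coarse noise levels first, then finer ones."""
    x = torch.randn(n_atoms, 3)                  # start from random coordinates
    for sigma in sigmas:
        step = eps * sigma ** 2
        for _ in range(steps):
            noise = torch.randn_like(x)
            x = x + step * score_net(x, sigma) + (2 * step) ** 0.5 * noise
    return x


coords = langevin_decode(CoordScoreNet())        # (8, 3) toy "structure"
print(coords.shape)
```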