Task-agnostic model pre-training is now the norm in Natural Language Processing, driven by the recent revolution in large language models (LLMs) like ChatGPT. These models showcase proficiency in tackling intricate reasoning tasks, adhering to instructions, and serving as the backbone for widely used AI assistants. Their success is attributed to a consistent enhancement in performance…
In the constantly evolving field of machine learning, particularly in semantic segmentation, accurate estimation and validation of uncertainty have become increasingly vital. Despite numerous studies claiming advances in uncertainty methods, there remains a disconnect between theoretical development and practical application. Fundamental questions linger, such as whether it is feasible to separate data-related (aleatoric) and…
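As background, one widely used way to formalize such a separation decomposes the predictive entropy of a Bayesian (or ensembled) classifier into an aleatoric and an epistemic term; the sketch below uses generic notation and is not drawn from the work in question:

```latex
% Hedged sketch: a standard entropy-based decomposition, not necessarily the one studied here.
% p(\theta \mid \mathcal{D}) denotes a posterior (or ensemble) over model parameters.
\underbrace{\mathcal{H}\!\left[\,\mathbb{E}_{p(\theta \mid \mathcal{D})}\, p(y \mid x, \theta)\,\right]}_{\text{total predictive uncertainty}}
\;=\;
\underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\, \mathcal{H}\!\left[\,p(y \mid x, \theta)\,\right]}_{\text{aleatoric (data) uncertainty}}
\;+\;
\underbrace{\mathcal{I}\!\left(y ;\, \theta \mid x, \mathcal{D}\right)}_{\text{epistemic (model) uncertainty}}
```

Under this view, "separating" the two sources amounts to estimating the expected per-model entropy (aleatoric) and attributing the remaining mutual-information gap to model uncertainty (epistemic).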
The practical deployment of multi-billion-parameter neural rankers in real-world systems poses a significant challenge in information retrieval (IR). These neural rankers are highly effective, but their substantial computational requirements at inference time make them impractical for production use. This dilemma is a critical problem in IR, as it is necessary to…
The struggle to balance training efficiency with performance has become increasingly pronounced in computer vision. Traditional training methodologies, often reliant on large-scale datasets, place a substantial burden on computational resources, creating a notable barrier for researchers with limited access to high-powered computing infrastructure. This issue is compounded by the fact that many existing solutions, while reducing the sample…
A crucial area of interest is generating images from text, particularly with a focus on accurately preserving human identity. This task demands high detail and fidelity, especially for human faces, whose semantics are complex and nuanced. While existing models adeptly handle general styles and objects, they often fall short when producing images that maintain the…
Object detection plays a vital role in multi-modal understanding systems, where images are fed into models to generate proposals aligned with text. This process is crucial for state-of-the-art models handling Open-Vocabulary Detection (OVD), Phrase Grounding (PG), and Referring Expression Comprehension (REC). In zero-shot scenarios, OVD models are trained on base categories but must predict both…
Multimodal large language models (MLLMs) have advanced rapidly in recent years. By incorporating images into large language models (LLMs) and harnessing the capabilities of LLMs, MLLMs demonstrate exceptional skill in tasks including visual question answering, instruction following, and image understanding. Despite these improvements, studies have identified a significant flaw in these models: they still have some…
3D-aware Generative Adversarial Networks (GANs) have made remarkable advances in generating multi-view-consistent images and 3D geometries from collections of 2D images through neural volume rendering. Despite this progress, a significant challenge has emerged: the substantial memory and computational costs associated with dense sampling in volume rendering. This limitation has compelled 3D GANs…
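For context, the cost concern can be illustrated with the standard discrete volume rendering quadrature used by NeRF-style renderers; the symbols below are generic and not taken from the work itself:

```latex
% Hedged sketch: per-ray colour as a quadrature over N samples (generic NeRF-style notation).
% \sigma_i and c_i are the predicted density and colour at sample i, and \delta_i is the spacing between adjacent samples.
C(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) c_i,
\qquad
T_i \;=\; \exp\!\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big)
```

Because every pixel requires N network evaluations along its ray, memory and compute grow linearly with the sampling density, which is why dense sampling is so expensive for 3D GANs.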
In human-computer interaction, enabling users to communicate with 3D environments has become increasingly important. Open-ended language queries in 3D have attracted researchers because of their applications in robotic navigation and manipulation, 3D semantic understanding, and editing. However, current approaches suffer from slow processing speeds and…
In 3D scene generation, a captivating challenge is the seamless integration of new objects into pre-existing 3D scenes. The ability to modify these complex digital environments is crucial, especially when aiming to enhance them with human-like creativity and intention. While earlier methods are adept at altering scene styles and appearances, they falter when inserting new objects consistently…