Abstract:
Generative language models and other Artificial Intelligence-based systems have recently gained considerable trust and popularity. Their outputs, however, often lack reliability, largely owing to phenomena such as bias and hallucination. Although these phenomena pose critical challenges, their underlying causes, and the nature of the relationship between them, remain insufficiently understood. The experiments presented in this paper demonstrate a bidirectional relationship between bias and hallucination, in which each phenomenon can influence the other. These findings should be regarded primarily as a proof of concept: the experiments were conducted at a small scale relative to the complexity of modern language models, and further large-scale investigations are needed to confirm them.