This Paper addresses the use of artificial intelligence ("AI") in the judicial system, specifically its application in making predictions that influence judicial decisions, and the legal and ethical concerns stemming from AI biases. First, this Paper explores the various roles AI can play in this domain, with particular emphasis on risk assessment and recidivism prediction, in which data about a crime and a defendant are analyzed to generate predictions. A central concern in this area is bias. While bias in criminal justice has long been recognized as a critical issue, there has been optimism that AI could mitigate it; however, this may not necessarily be the case. Risk assessment AI has the potential to enhance sentencing accuracy and reduce human error and bias, yet there is apprehension that it could perpetuate or exacerbate existing biases and even undermine fundamental principles of fairness in the justice system. Several factors contribute to this risk, including biased coding and incomplete or inaccurate training and testing datasets. In addition, dynamic algorithms and the lack of transparency and explainability make it difficult to identify and address biases effectively. The forthcoming European regulation on artificial intelligence, known as the AI Act ("AIA"), aims to mitigate biases, but experts acknowledge that biases cannot be completely eradicated,1 any more than the biases inherent in human decisions can.