On the occasion of the recent G7 summit, Pope Francis delivered a speech on AI.
Especially interesting is his remark on the tasks of judges, which he described as follows:
“An important example of this is the use of programs designed to help judges in deciding whether to grant home-confinement to inmates serving a prison sentence. In this case, artificial intelligence is asked to predict the likelihood of a prisoner committing the same crime(s) again. It does so based on predetermined categories (type of offence, behaviour in prison, psychological assessment, and others), thus allowing artificial intelligence to have access to categories of data relating to the prisoner’s private life (ethnic origin, educational attainment, credit rating, and others). The use of such a methodology – which sometimes risks de facto delegating to a machine the last word concerning a person’s future – may implicitly incorporate prejudices inherent in the categories of data used by artificial intelligence.
Being classified as part of a certain ethnic group, or simply having committed a minor offence years earlier (for example, not having paid a parking fine) will actually influence the decision as to whether or not to grant home-confinement. In reality, however, human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.”
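The mechanism the speech describes can be made concrete with a purely illustrative toy sketch. The feature names, weights, and model form below are invented for illustration (a simple logistic risk score); real systems are more complex, but the point is the same: if a protected attribute such as group membership carries a nonzero weight, two people with identical behaviour receive different scores.

```python
import math

# Hypothetical learned weights (invented for this sketch). A nonzero weight
# on "ethnic_group_A" means the score depends directly on group membership,
# i.e. prejudice in the training data is baked into the model.
WEIGHTS = {
    "violent_offence": 1.2,
    "prior_minor_offence": 0.4,   # e.g. an old unpaid parking fine
    "good_prison_conduct": -0.8,
    "ethnic_group_A": 0.6,
}
BIAS = -1.0

def recidivism_score(features: dict) -> float:
    """Logistic risk score in (0, 1) computed from binary features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Two inmates with identical records, differing only in group membership.
inmate = {"violent_offence": 0, "prior_minor_offence": 1,
          "good_prison_conduct": 1, "ethnic_group_A": 0}
same_record_group_a = dict(inmate, ethnic_group_A=1)

print(recidivism_score(inmate))
print(recidivism_score(same_record_group_a))
```

Running this shows the second score is strictly higher than the first even though every behavioural feature is identical, which is exactly the kind of implicit prejudice the speech warns about.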