The Italian courts lead the way on explainable AI

Embracing the risk-based approach envisaged in the draft EU AI Regulation

22 June 2021

In three recent groundbreaking decisions, the Italian courts have taken a close look at how artificial intelligence software works, with the aim of ensuring that the exploitation of AI is unbiased, ethical and explainable.

This article was updated on 13 July 2021 to reflect the decision in the Foodinho case.

  • In the Deliveroo case, the Court of Bologna granted the Italian trade unions' claim that the algorithm used by delivery platform Deliveroo to rank riders and allocate deliveries is discriminatory since the app downranks riders simply because they failed to make a delivery, irrespective of whether the rider in question had justifiable reasons for their absence (e.g. health reasons, child care, exercise of a worker's right to strike).
  • In the Mevaluate case, the Court of Cassation stated that, when a web platform providing reputation ranking services relies on an algorithm to produce reputation scores, users cannot be deemed to have been fully aware when they granted privacy consent if they had no knowledge of the key logic and key elements of the algorithm. Hence the privacy consent is not valid.
  • In the Foodinho case, the Italian Data Protection Authority found that the algorithm used by delivery platform Foodinho (part of the Glovo group) to rank riders and allocate deliveries is discriminatory: Foodinho failed to set out procedures to protect the "right to obtain human input, express an opinion and challenge decisions made through the use of the algorithms," resulting in Foodinho riders being unjustifiably discriminated against (the rating system penalises riders who do not accept orders promptly or refuse them, favouring instead riders who accept on time or deliver the greatest number of orders).
The Deliveroo case: "Blind" AI leads to discrimination

Following the Italian trade unions' claim that Deliveroo discriminates against riders through its algorithm, the Court of Bologna investigated how Deliveroo allocates deliveries among the riders who apply through the dedicated app.

It emerged that Deliveroo provides its riders with a "flexible self-service booking service" (SSB) with which they book work sessions as follows:

  • To apply for deliveries, riders access the SSB each week and select the windows during which they will be available to make deliveries;
  • To make themselves available, riders access the SSB every Monday during one of three time slots: 11 a.m., 3 p.m. or 5 p.m. The earlier a rider accesses the SSB, the better their chances of finding suitable delivery windows. So, for example, a rider has a greater chance of being allocated deliveries on a Saturday night (when Deliveroo's customers are most likely to order food deliveries) in a given week if they log into the SSB at 11 a.m. on Monday rather than at 5 p.m., because by 5 p.m. fewer delivery slots remain available for that week;
  • Each rider, however, is allowed to access the SSB only in one of the aforementioned three Monday slots, depending on their "reputation ranking" as calculated by 'Frank', Deliveroo's algorithm. 

The reputation ranking is a score that combines two different variables (illustrated in the code sketch after the lists below), as follows:

  • The "reliability index", which is in inverse proportion to the number of times the rider failed to attend a work session they had applied for on the previous Monday; and
  • The "peak participation index", which is proportional to the number of times the rider made themselves available for deliveries during the high demand delivery windows, i.e. the windows between 8 p.m. and 10 p.m. on Fridays, Saturdays and Sundays, when Deliveroo's customers are most likely to require food deliveries.

The reputation ranking is materially affected if the rider:

  • Makes a "late cancellation" of a delivery window that the SSB had assigned to them: Riders can only withdraw from a delivery window 24 hours before that window starts, otherwise a late cancellation (occurring less than 24 hours before the start) will have a significant negative impact on that rider's reputation ranking;
  • Fails to log into the Deliveroo app at least 15 minutes before the start of the delivery window the SSB had allotted to them.
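
Neither the decision nor Deliveroo disclosed Frank's actual formula, but the mechanics described above can be restated as a short sketch. The following Python fragment is purely illustrative: every weight, name and signature is our assumption, not Deliveroo's code.

```python
# Hypothetical sketch of a Frank-style reputation score. The real
# formula and weights were not disclosed; all values here are invented.

def reputation_ranking(sessions_booked: int,
                       sessions_missed: int,
                       peak_windows_attended: int,
                       late_cancellations: int,
                       late_logins: int) -> float:
    # "Reliability index": inversely related to missed work sessions.
    reliability = 1.0 - sessions_missed / sessions_booked if sessions_booked else 0.0

    # "Peak participation index": proportional to attendance during
    # high-demand windows (Fridays to Sundays, 8 p.m. to 10 p.m.).
    peak_participation = 0.1 * peak_windows_attended

    # Late cancellations (less than 24 hours ahead) and late logins
    # (less than 15 minutes before the window) materially depress the
    # score, per the Court's findings.
    penalty = 0.5 * late_cancellations + 0.25 * late_logins

    return reliability + peak_participation - penalty
```

Note what the function does not take as input: the reason for a missed session. That omission is precisely the "blindness" the Court went on to censure.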

Based on the reputation ranking, the SSB then grants riders access on Mondays as follows:

  • 11 a.m.: 15% of riders, i.e. those having the best reputation ranking;
  • 3 p.m.: 25% of riders with the second-best ranking; and
  • 5 p.m.: The remaining 60% of riders.

Consequently, riders who access the SSB at 11 a.m. have more job opportunities than the others.
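
The allocation itself amounts to a simple percentile cut over the ranked rider pool. Below is a minimal sketch using the figures stated in the decision; the function name and data shapes are ours.

```python
# Illustration of the Monday access-slot split described by the Court:
# top 15% at 11 a.m., next 25% at 3 p.m., remaining 60% at 5 p.m.

def assign_monday_slots(riders_by_ranking: list[str]) -> dict[str, str]:
    """Map each rider to an SSB access time, best-ranked first."""
    n = len(riders_by_ranking)
    first_cut = round(n * 0.15)               # best 15% of riders
    second_cut = first_cut + round(n * 0.25)  # next 25%
    slots = {}
    for position, rider in enumerate(riders_by_ranking):
        if position < first_cut:
            slots[rider] = "11 a.m."
        elif position < second_cut:
            slots[rider] = "3 p.m."
        else:
            slots[rider] = "5 p.m."           # remaining 60%
    return slots
```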

As a result, the Court of Bologna found that the Deliveroo working system is intrinsically discriminatory, because the way the algorithm 'Frank' calculates the reputation ranking is blind to the reason why a rider cancels a delivery window less than 24 hours in advance or fails to log into the app 15 minutes before the window begins.

The Court concluded that Frank's blindness is discriminatory to the riders, because it deprives them of some of their basic rights as employees. For example, Frank treats in the same way – by simply lowering the reputation ranking – cases that are in fact very different, e.g. that of a rider who unjustifiably fails to log in and that of a rider who fails to log in for objective and legitimate reasons (e.g. health reasons, child care, exercise of a worker's right to strike).

The Court also commented that it would have been possible for Deliveroo to train Frank not to discriminate, considering that – at the time of the decision – Frank already corrected rankings in two cases: an injury spanning consecutive shifts (provided there was evidence that it had in fact prevented the rider from continuing to work) and a technical problem on Deliveroo's side, such as an app crash. In the Court's opinion, this showed that Deliveroo's decision to treat riders who were absent for legitimate reasons in the same way as riders without a valid excuse was entirely deliberate; hence Deliveroo was obliged to fix Frank.
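
The Court's reasoning translates naturally into code terms: Frank already contained an excusal branch for two causes of absence, so extending it to the legally protected ones would have been straightforward. The sketch below is hypothetical; only the two excused causes and the examples of legitimate absence come from the decision.

```python
# Restating the Court's point: Frank already excused two causes of
# absence, but not the legally protected ones.

EXCUSED_BY_FRANK = {"injury_on_consecutive_shifts", "app_crash"}

# Causes the Court held should equally leave the ranking untouched.
LEGITIMATE_ABSENCES = {"health_reasons", "child_care", "strike"}

def lowers_ranking(cause_of_absence: str) -> bool:
    """Return True if a missed session should depress the score."""
    if cause_of_absence in EXCUSED_BY_FRANK:
        return False
    # The branch the Court found missing: treating these causes like
    # unjustified no-shows is what made the system discriminatory.
    if cause_of_absence in LEGITIMATE_ABSENCES:
        return False
    return True
```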

The Mevaluate case: privacy consent is void if the user does not know how AI works

This case concerns the provision of a reputational rating service, whereby users could access a web platform made available by a non-profit organisation, Mevaluate, in order to obtain an impartial assessment of their reputational ranking. For example, a job candidate may have used this service in order to show their prospective employer a third party's assessment of their reputation.

In 2016, the Italian Data Protection Authority issued a ban preventing Mevaluate from processing personal data through its web platform, because the processing was inconsistent with the principles of the then applicable Italian Privacy Code, such as lawfulness of processing, data minimisation, the data subject's consent and the rules on the processing of sensitive data.

Mevaluate successfully appealed the decision before the Court of Rome, which held that – in the absence of a regulatory framework governing reputational rating services – the provision of such services is left to the initiative of providers like Mevaluate, and that users' use of the platform and of the resulting rating rests on their consent to the processing of their personal data through the platform.

Following the Italian Data Protection Authority's appeal, the Court of Cassation overturned the Court of Rome's decision on the following grounds:

  • The key issue is whether – before using the rating platform – the user is sufficiently informed about how the algorithm calculates the rating;
  • To resolve that issue, one must assess not only whether the user gave their consent to the algorithm-based processing, but also whether that consent was lawfully granted, i.e. whether the user was fully informed about all aspects of the subsequent processing (consistently with the then applicable data privacy law implementing the Privacy Directive);
  • It is the duty of the data controller (the service provider) to provide evidence that the consent given by the user was suitable to cover the actual processing – in this case, that the consent covered the logic underlying the algorithm's calculation of the ranking;
  • The lower court's decision did not deny that the algorithm lacked transparency; rather, the Court of Rome concluded that transparency was not an issue, because market recognition ultimately determines whether a product is worth buying (in other words, users end up buying digital services that 'work well' without needing to reverse engineer them to learn how the algorithm works);
  • The Supreme Court rejected the lower court's reasoning and concluded that, by agreeing to use a web platform, the user cannot be deemed to have agreed to be bound by an algorithm whose underlying logic is totally obscure to them.
The Foodinho case: a EUR 2.6 million fine for a Glovo group platform over algorithm-based discrimination

As part of a control activity initiated ex officio by the Italian Data Protection Authority, an on-site inspection was carried out on 16 and 17 July 2019 at Foodinho, an Italian digital platform (wholly controlled by the Spanish delivery giant Glovo) offering delivery services through its riders. The Authority found that Foodinho had failed to:

  • Adequately inform its employees about the functioning of the AI system;
  • Implement suitable safeguards to ensure the accuracy and fairness of the algorithmic results used to rate riders' performance and allocate deliveries; and
  • Set up procedures enabling riders to enforce their right to "obtain human intervention, express a personal point of view, and challenge the decisions taken by the algorithm."

As a result of the above, the Authority ascertained that there had been cases where riders had in fact been excluded from work assignments. Accordingly, the Authority ordered the company to implement measures aimed at protecting the riders' rights and freedoms with regard to automated decision-making, including profiling.

The Authority hence ordered Foodinho to take remedial measures, including the following:

  • Check the accuracy and relevance of the dataset used by the AI system – including chats, emails and phone calls between riders and customer care, geolocation at 15-second intervals, mapping of routes, estimated and actual delivery times, details on the handling of current and past orders, feedback from customers and partners, device battery level, etc. – in order to minimise the risk of errors and biases that may prevent certain riders from being allotted delivery assignments (or even exclude them from the platform altogether). This risk also stems from Foodinho's rating system, which relies on a mathematical formula that sanctions riders who do not accept orders promptly or who refuse them, whilst prioritising riders who accept orders on schedule or deliver the most orders. The rating takes into account delivered orders, check-ins performed within a few minutes of the start of each booked slot and acceptance of the assigned order within 30 seconds, all without the rider being aware of the underlying algorithm's logic (see the sketch after this list); and
  • Lay down measures preventing inappropriate and/or discriminatory applications of the reputational mechanisms based on the feedback from customers and business partners.
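
As with Frank, the Authority listed the inputs to Foodinho's rider rating without disclosing the formula itself. The following Python fragment is a hedged sketch of how such a score might combine those inputs; every weight and name is our assumption.

```python
# Hypothetical sketch of the Foodinho rider rating. The Authority named
# the inputs below; the actual formula and weights were not disclosed.

def foodinho_rating(orders_delivered: int,
                    timely_checkins: int,       # within minutes of slot start
                    accepted_within_30s: int,   # prompt acceptances
                    late_or_refused_orders: int) -> float:
    # Riders who deliver more orders and accept them promptly are favoured...
    reward = orders_delivered + timely_checkins + accepted_within_30s
    # ...while slow acceptance and refusals are sanctioned.
    return reward - 2.0 * late_or_refused_orders
```

The rider sees only the resulting score, never this logic – precisely the opacity that the Authority's remedial order targets.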

In calculating the EUR 2.6 million fine imposed on Foodinho, the Authority also considered the company's poor cooperation during the inquiries, as well as the considerable number of Foodinho riders in Italy – about 19,000 when the inspection was carried out. The Spanish company GlovoApp23 SL (the entity controlling Foodinho) is the subject of a separate proceeding led by the Spanish data protection authority in cooperation with the Italian SA.

The Italian Authority set a 60-day deadline for Foodinho to start implementing the measures required to remedy the serious shortcomings it had found, while an additional 90 days were granted to finalise the overhaul of the algorithms.

Conclusion

While the decisions above take two different approaches – in the Deliveroo case the Court of Bologna considers the use of AI from an employment law perspective, while in the Mevaluate and Foodinho cases the Court of Cassation and the Data Protection Authority tackle AI from a privacy angle – they are quite consistent in their premises and conclusions.

The starting point for all three decisions is the view that there is an information asymmetry between the business deploying AI (Deliveroo, Mevaluate, Foodinho) and the user (the Deliveroo and Foodinho riders, the Mevaluate users). It is therefore imperative to prevent the business from taking unfair advantage of that asymmetry. To avoid that risk, the decision-makers deemed it necessary first to identify the capacity in which users use the AI-based solution: the Court of Bologna's and the Data Protection Authority's decisions start by stating that riders are to all intents and purposes employees, while the Court of Cassation takes it for granted that reputation defines and identifies an individual to such an extent that reputation-related data qualifies as personal data, thereby triggering privacy laws.

All three decisions also rest on the assumption that the use of AI may significantly infringe certain fundamental individual rights, consistent with the risk-based approach envisaged in the draft AI Regulation currently being discussed by the EU institutions. These Italian decisions are quite ahead of the curve in this respect, because they flag a risk where (i) the algorithm does not take all relevant information into account when rectifying a rider's ranking (e.g. Deliveroo's Frank excused app crashes, but not a rider's justifiable absences), (ii) the algorithm processes sensitive data (such as an individual's reputation), and/or (iii) the algorithm's decision-making is not transparent, so that the user cannot understand the logic behind the algorithm's decisions.

Put into the wider context of the EU institutions' approach to AI, the Italian decisions call for greater focus on all aspects of AI development and compliance, hence the following actions should be taken into account by all businesses intending to use AI-based solutions:

  • Identifying all potential risks arising from the use of AI, with environmental, social and governance drivers considered alongside purely business ones;
  • Promoting internal governance and compliance systems aimed at ensuring that the AI can be explained (e.g. to users and authorities) and at showing how the AI pursues algorithmic transparency, data cleanliness and ethics; and
  • Identifying remedies (e.g. insurance policies) aimed at limiting the risks associated with the use of AI.

Shadiah Obaidi (trainee) and Filippo Maria Volpini contributed to the writing of this article.