Automated profiling and automated decision making
14 March 2018
The automated processing of personal data for the purposes of profiling and automated decision-making both qualify as data processing for the purposes of the GDPR and both activities are therefore subject to the GDPR's general data protection principles. This Insurtech Taster looks at some of the key issues of such data processing for insurers.
Profiling of customers is not new to the insurance industry. Insurers need to process data in order to make underwriting decisions and to market insurance products to customers. Real-time automated decision-making is a reality thanks to advances in artificial intelligence and machine learning, and consumer demand for instant services means that insurers have no option but to use automated means to rapidly process information. Services can also be provided more conveniently to customers by supplementing shorter application forms with indirect sources of data (e.g. from social media posts). However, just because an insurer believes it has a legitimate business interest in collecting personal data and the technology to automatically profile and make automated decisions is available, it does not mean that automated processing and automated decision-making activities ("AP and ADM") can be undertaken with impunity.
The GDPR introduces specific provisions designed to limit the risks to individuals of AP and ADM, covering the entire cycle from data collection through to the output of AP and ADM activities. Any form of automated processing of personal data is caught by the regulation, and special attention is given to solely automated decision-making (including profiling) which significantly affects the data subject (Article 22). The GDPR is supplemented by guidelines; the latest on this topic, the Article 29 Data Protection Working Party guidelines on automated individual decision-making and profiling (the "WP Guidelines"), were adopted in February 2018.
Impact on Insurers
- What right do you have to process the data? Insurers should ensure that they have a lawful basis for processing personal data to create a profile, taking particular care when re-using personal data collected for a different purpose. For example, data collected for a car insurance quote cannot necessarily be re-used to profile the customer for the purposes of marketing other insurance products. Explicit consent can form the basis for lawful processing of personal data, but it is not always an appropriate basis for justifying the particular form of processing. Nor should profiling for the purposes of fraud detection be relied on as a panacea for all forms of processing the insurer may want to undertake. Insurers should also be aware that the act of profiling itself can create special categories of data from data which is not a special category in its own right, potentially creating problems in justifying the lawful basis for processing the data and otherwise complying with the GDPR.
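In practice, the purpose-limitation point above implies that a lawful basis should be recorded per processing purpose at collection, and checked before any re-use. The sketch below is purely illustrative (the record structure, purpose names and basis labels are assumptions, not a prescribed compliance mechanism):

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataRecord:
    """Hypothetical record tying personal data to documented lawful bases."""
    subject_id: str
    # Purposes for which a lawful basis was established at collection, e.g.
    # {"motor_quote": "contract", "fraud_detection": "legitimate_interest"}
    lawful_bases: dict = field(default_factory=dict)

def may_process(record: PersonalDataRecord, purpose: str) -> bool:
    """Only allow processing for purposes with a recorded lawful basis."""
    return purpose in record.lawful_bases

record = PersonalDataRecord("subj-001", {"motor_quote": "contract"})
print(may_process(record, "motor_quote"))        # basis recorded at collection
print(may_process(record, "marketing_profile"))  # no basis recorded for re-use
```

A check like this would block the scenario described above, where quote data is silently re-purposed for marketing without a fresh lawful basis.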
- Have you told data subjects what you are up to? Transparency of processing personal data is a fundamental requirement of the GDPR. It requires the data controller to provide data subjects with unambiguous information about the processing of their personal data and extends to data obtained indirectly, for example from wearable technology. An insurer must take steps to tell its customers that it engages in AP and ADM activities in relation to personal data and explain the nature of the data collected, the logic of the profiling/automated decision-making that it is relying on and the significance and envisaged consequences of the processing of the individual's data. Being transparent does not mean that valuable information about an algorithm or internal model needs to be disclosed. That said, the transparency principle is quite onerous and may include efforts to create graphics, videos and other visual techniques to provide meaningful explanations. This may be difficult to achieve in practice if the insurer is relying on mobile-phone based distribution of products. Insurers should consult the WP Guidelines on transparency for further guidance.
- Is what you are doing fair? Fair processing is linked to the principle of transparency, but also requires an assessment of how the interests of the data subject are affected by the processing. For example, the outcome of profiling may discriminate between data subjects, and regulators are mindful that individuals could be denied access to insurance cover as a result of AP and ADM, which can have significant consequences for the data subject. What matters is whether the detriment to a data subject is justified. The principle of fair processing therefore requires data controllers to refrain from using personal data in ways that unjustifiably have a negative effect on a person or, where applicable, on a group of people, community or social group as a whole. The use of truly anonymised data in profiling activities can assist with the fair processing principle (although truly anonymised data may not have much practical use to insurers).
- Why are you still holding a person's data? Organisations cannot collect and retain more personal data than they actually need, in the hope that it may prove useful in the future. Insurers must be able to justify the retention of personal data and it is not a legitimate excuse to collect, retain or process data on an indefinite basis, just because the data is available, the technology exists to process vast quantities of data and huge volumes of data are needed for the technology to 'learn' to make better decisions. Insurers need robust data retention policies that take into account the data subject's rights and freedoms and ensure data is retained for no longer than is necessary and proportionate for the purpose for which the data is being processed.
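A robust retention policy of the kind described above is, at its core, a mapping from processing purpose to a justified retention period, enforced against each record. The following is a minimal sketch of that idea; the purposes and periods shown are hypothetical and would in reality have to come from the insurer's own documented policy and legal advice:

```python
from datetime import date

# Hypothetical purpose-specific retention periods, in days. Actual periods
# must be justified by the insurer's retention policy, not hard-coded guesses.
RETENTION_PERIODS = {
    "claims_handling": 6 * 365,
    "marketing_profile": 2 * 365,
}

def is_over_retained(purpose: str, collected_on: date, today: date) -> bool:
    """Flag data held longer than the period tied to its processing purpose."""
    limit = RETENTION_PERIODS.get(purpose)
    if limit is None:
        return True  # no documented purpose means no justification to retain
    return (today - collected_on).days > limit

today = date(2018, 3, 14)
print(is_over_retained("claims_handling", date(2015, 1, 1), today))   # within period
print(is_over_retained("marketing_profile", date(2015, 1, 1), today)) # over-retained
```

Note the design choice that an undocumented purpose fails the check: retention "just because the data is available" is treated as unjustified by default, which mirrors the point made above.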
- Do you make solely automated decisions? The WP Guidelines clarify that Article 22 of the GDPR establishes that solely automated decision-making, including profiling, that has a legal or similarly significant effect is only permitted if one of the exceptions in the GDPR applies. A 'legal effect' would be a decision that results in the cancellation of a contract. A 'similarly significant effect' would be the automatic refusal of an online credit application. The Article 22 prohibition could have implications for the adoption of artificial intelligence, e.g. to decide whether or not to approve a claim under a health insurance policy, as denial could have significant effects for the individual (depending on the circumstances). Having some meaningful human oversight of the decision-making process that takes all the relevant data into consideration means the decision is not solely based on automated processing and therefore not subject to the prohibition, but the human intervention needs to be more than 'just a token gesture'. There are also specific exceptions to the rule, but measures must be in place to safeguard the individual's rights and freedoms and legitimate interests, and may require the data controller to carry out a Data Protection Impact Assessment.
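Architecturally, the human-oversight point above often translates into a routing rule in the decision pipeline: outcomes that would be adverse and significant for the individual are escalated for meaningful human review instead of being issued automatically. The sketch below is a simplified illustration of that routing logic only (the score threshold and function names are assumptions); it does not capture the legal test itself, and whether any given decision falls within Article 22 is a question for legal analysis:

```python
def route_claim_decision(automated_score: float, significant_effect: bool) -> str:
    """
    Route a claim decision. A refusal with a significant effect on the
    individual is escalated to a human reviewer, so the final decision is
    not based solely on automated processing (an Article 22-style safeguard).
    """
    approve = automated_score >= 0.5  # hypothetical model threshold
    if not approve and significant_effect:
        return "human_review"  # reviewer must weigh all relevant data, not rubber-stamp
    return "approved" if approve else "refused"

print(route_claim_decision(0.9, True))   # clear approval, no escalation needed
print(route_claim_decision(0.2, True))   # significant refusal -> human review
print(route_claim_decision(0.2, False))  # minor refusal handled automatically
```

The escalation path is only meaningful if, as the WP Guidelines stress, the reviewer has genuine authority and competence to change the outcome; a rubber-stamping step would still be 'just a token gesture'.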
These are just some of the points that insurers need to be mindful of in relation to AP and ADM activities. Our Insurance Sector team of lawyers can assist you further with advice on the GDPR and equivalent legislation in other jurisdictions and more broadly on matters relating to Insurtech, including company formation, investments, structuring consortia/partnerships/joint ventures, service, distribution and outsourcing agreements and regulatory advice.