A very interesting article from La Quadrature du Net
on the algorithm the CAF uses to "predict" which beneficiaries are likely
to cheat (!).
CAF employees are thus no longer at the mercy of their own moods
when dealing with fraud: the algorithm tells them whom they should target.
Except that, as La Quadrature du Net rightly points
out, an algorithm only processes data according to the program its human
designers gave it. There is no such thing as a neutral, impartial,
objective algorithm.
“This algorithm is interesting from this point of view,
since it was trained 'by the book' (see the references above) on a database
built from random checks. There is therefore no sampling bias a priori, unlike
facial recognition algorithms. That said, the algorithm reproduces the human
biases of the checks carried out on these randomly selected files (harshness
toward people on social minima, difficulty in identifying complex fraud…).
But above all, as the article explains, it reflects the complexity of the
rules governing access to social benefits, which is a purely political matter
that the algorithm merely reveals.”
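To make this point concrete, here is a minimal sketch in Python of the label-bias mechanism the quote describes. It is not the CAF's actual model: the feature name (on_social_minima), the rates and the choice of a logistic regression are all invented for illustration. Even with files sampled at random, labels produced by biased human checks are enough for a standard classifier to learn to target people on social minima:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Invented binary feature: 1 = recipient of a means-tested minimum benefit.
on_social_minima = rng.binomial(1, 0.3, size=n)

# Assume the true irregularity rate is identical in both groups (5%).
true_irregularity = rng.binomial(1, 0.05, size=n)

# Biased human checks: irregularities are recorded far more often (90% vs 50%)
# when the file belongs to someone on social minima (stricter scrutiny).
detection_rate = np.where(on_social_minima == 1, 0.9, 0.5)
recorded_label = true_irregularity * rng.binomial(1, detection_rate)

# A textbook classifier trained on the recorded labels learns a positive
# coefficient for the precarity feature, even though sampling was random
# and the true irregularity rate is the same in both groups.
model = LogisticRegression().fit(on_social_minima.reshape(-1, 1), recorded_label)
print(model.coef_[0][0])  # > 0: the model learns to target people on social minima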
La Quadrature du Net is calling, purely and simply, for
the CAF to withdraw this discriminatory algorithm.
If you want to know more, you can contact La
Quadrature du Net directly (which always does excellent, very serious work)
here: contact@laquadrature.net
CAF: digital technology in the service of the exclusion and
harassment of the most precarious
Posted on October 19, 2022
For almost a year now, we have been fighting within
the "Stop Controls" collective against the effects of dematerialization and
the use of digital technology by administrations for purposes of social
control. After discussing the situation at Pôle Emploi, we turn here to the
case of the Family Allowance Funds (CAF). We will soon return to this fight,
in which we intend to engage fully in the coming months.
"Between CAF and you, there is only one
click". This is what we could read on a CAF poster at the start of the
year. And the subtitle leaves you dreaming: “Access to all CAF services 24
hours a day”. Vain promise of a digital facilitating access to social benefits,
at any time of the day and night. Sinister slogan masking the reality of
excessive computerization, a vector of calculated social exclusion.
While the generalization of online procedures goes hand in
hand, above all, with a reduction in physical reception capacities, a mode of
contact that is essential for people in precarious situations2, the CAF leaves
it to an algorithm to predict which recipients are "(un)trustworthy" and need
to be checked3. Tasked with assigning each beneficiary a score meant to
represent the "risk" that they are unduly receiving social assistance, this
scoring algorithm serves a policy of institutional harassment of the most
precarious.
The shame algorithm
Fed with the hundreds of data points that CAF holds on each
beneficiary5, the algorithm continuously assesses their situation in order to
classify and sort them through the assignment of a score (the "risk score").
This score, updated monthly, is then used by CAF's teams of controllers to
select who will be subjected to an in-depth check6.
The little information available reveals that the
algorithm deliberately discriminates against the precarious. Among the
elements that the algorithm associates with a high risk of abuse, and which
therefore drag down a beneficiary's score, we find the following7:
– Having a low income,
– Being unemployed or lacking a stable job,
– Being a single parent (80% of single parents are
women)8,
– Devoting a significant share of one's income to
housing,
– Having frequent contacts with CAF (for those who
dare ask for help).
Other parameters, such as place of residence, type of
housing (social housing, etc.), mode of contact with CAF (telephone, email,
etc.) or being born outside the European Union, are also used, without it
being known precisely how they affect the score9. But it is easy to imagine
the fate reserved for a foreign person living in a disadvantaged suburb. This
is how, since 2011, CAF has been organizing a veritable digital hunt for the
most disadvantaged, the consequence of which is a massive over-control of poor
people, foreigners and women raising children alone.
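To give a rough idea of what such a scoring pipeline looks like, here is a minimal sketch in Python. The CAF's real model form, features and weights are not public; every name and number below is invented for illustration, using the risk factors listed above as hypothetical inputs:

from dataclasses import dataclass

@dataclass
class Beneficiary:
    file_id: str
    low_income: bool           # hypothetical risk factors, echoing the list above
    unstable_job: bool
    single_parent: bool
    housing_cost_share: float  # share of income spent on housing, 0..1
    contacts_last_year: int    # number of contacts with the fund

# Invented weights: each "risk factor" pushes the score up.
WEIGHTS = {
    "low_income": 0.30,
    "unstable_job": 0.25,
    "single_parent": 0.20,
    "housing_cost_share": 0.15,
    "contacts": 0.10,
}

def risk_score(b: Beneficiary) -> float:
    """Return a score between 0 and 1; higher means 'check this file first'."""
    return (
        WEIGHTS["low_income"] * b.low_income
        + WEIGHTS["unstable_job"] * b.unstable_job
        + WEIGHTS["single_parent"] * b.single_parent
        + WEIGHTS["housing_cost_share"] * min(b.housing_cost_share, 1.0)
        + WEIGHTS["contacts"] * min(b.contacts_last_year / 12, 1.0)
    )

def monthly_control_list(beneficiaries: list[Beneficiary], n_controls: int) -> list[Beneficiary]:
    """Re-score every file each month and hand the highest-scored ones to controllers."""
    return sorted(beneficiaries, key=risk_score, reverse=True)[:n_controls]

The point of the sketch is structural: whatever the exact weights, a monthly ranking of this kind mechanically sends controllers toward the files that accumulate markers of precarity.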
Worse, CAF boasts about it. Its director describes this
algorithm as part of a "constant and proactive policy of modernizing the tools
for fighting fraudsters and crooks". The institution, and its algorithm, are
also regularly held up at the state level as a model to follow in the fight
against "social fraud", a theme imposed by the right and the far right in the
early 2000s.
How can such a profoundly discriminatory device be
publicly defended, moreover by a social administration? It is here that the
computerization of social control takes on a particularly dangerous character,
through the technical alibi it offers to political leaders.
A technical alibi for an iniquitous policy
First of all, the algorithm allows CAF to mask the social
reality of the sorting organized by its control policy. Gone are the
references to targeting recipients of social minima in the "annual control
plans". These now report "datamining targets", without ever explaining the
criteria behind the calculation of the "risk scores". As one CAF controller
put it: "Today, it is true that data makes things easier for us. I don't have
to say that I will select 500 RSA beneficiaries. It's not me who does it, it's
the system that says it! (Laughs)."12
The notion of a "risk score" is also used to
individualize the targeting process and deny its discriminatory nature. A CAF
control officer thus declared before members of parliament that "rather than
populations at risk, we speak of profiles of beneficiaries at risk, in
connection with data mining"13. In other words, CAF argues that its algorithm
does not target the poor as a social category but as individuals. A large part
of the "risk factors" used to target recipients are nevertheless
socio-demographic criteria associated with precarious situations (low income,
unstable professional situation, etc.). This rhetorical game is therefore
statistical nonsense, as the Defender of Rights reminds us14: "More than a
targeting of 'presumed risks', the practice of data mining forces the
designation of populations at risk and, in doing so, instils the idea that
certain categories of users are more inclined to cheat".
Finally, CAF's leaders use the algorithm to shirk
responsibility for the choice of criteria used to target the people to be
checked. They turn this choice into a purely technical problem (predicting
which files are most likely to present irregularities) whose resolution falls
to the institution's teams of statisticians. All that counts then is the
effectiveness of the proposed solution (the quality of the prediction); the
internal workings of the algorithm (the targeting criteria) become a mere
technical detail of no concern to politicians15. A CAF director can thus state
publicly: "We [CAF] do not draw up the typical profile of the fraudster. With
datamining, we don't draw conclusions", simply omitting to say that CAF
delegates this task to its algorithm.
Anticipated over-control of the most precarious
This is our answer to the officials who deny the
political nature of this algorithm: the algorithm has only learned to detect
what you decided to target. The over-control of the most precarious is neither
a coincidence nor the unexpected result of complex statistical operations. It
is the product of a political choice whose consequences for the precarious you
knew even before the algorithm was deployed.
That choice is the following16. Despite CAF's
communication about its new "anti-fraud" tool (see for example here, here or
here), the algorithm was designed not to detect fraud, which is intentional,
but undue payments (indus) in the broad sense17, the vast majority of which
result from involuntary declarative errors18.
Yet CAF knew that the risk of error is particularly
high for people in precarious situations, because of the complexity of the
rules for calculating the benefits that concern them. As early as 200619, a
former director of the fight against fraud at the CAF explained that
"overpayments are explained […] by the complexity of the benefits", which is
"all the more true for benefits linked to precariousness" (meaning the social
minima). He added that this is due to taking into account "numerous elements
of the user's situation which are highly variable over time, and therefore
very unstable". Concerning women living alone, he already acknowledged the
"difficulty of grasping the notion of 'marital life'", a difficulty which in
turn generates errors.
Asking the algorithm to predict the risk of undue
payment therefore amounts to asking it to learn to identify who, among the
recipients, depends on social minima or suffers from the conjugalization20 of
social assistance. In other words, CAF officials knew, from the start of the
targeting automation project, which "risk profiles" the algorithm was going to
single out.
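As a minimal sketch of this argument, with a single made-up feature and invented probabilities, one can check that a textbook classifier trained on "any overpayment" labels learns to target precarity, while the same classifier trained on fraud-only labels does not:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100_000

# Invented single feature standing in for "complex, precarious situation".
on_social_minima = rng.binomial(1, 0.3, size=n)

# Invented assumption: involuntary declarative errors are far more frequent
# when the applicable rules are complex (15% vs 3%), while intentional fraud
# is rare and unrelated to precarity (1% everywhere).
error = rng.binomial(1, np.where(on_social_minima == 1, 0.15, 0.03))
fraud = rng.binomial(1, 0.01, size=n)
overpayment = np.maximum(error, fraud)  # the target actually chosen: any undue payment

X = on_social_minima.reshape(-1, 1)
coef_overpayment = LogisticRegression().fit(X, overpayment).coef_[0][0]
coef_fraud = LogisticRegression().fit(X, fraud).coef_[0][0]

print(f"target = any overpayment: coefficient {coef_overpayment:+.2f}")  # clearly positive
print(f"target = fraud only:      coefficient {coef_fraud:+.2f}")        # close to zero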
Nothing is therefore more false than to declare, as
this institution did in response to the Defender of Rights' criticisms, that
"the controls to be carried out" are "selected by a neutral algorithm" which
obeys "no presupposition"21. Or that "the controls […] resulting from
datamining […] leave no room for arbitrariness".
Discriminating for profit
Why favor the detection of errors rather than fraud?
Because errors are more numerous and easier to detect than fraud, which
requires establishing intent, targeting them maximizes the amounts recovered
from beneficiaries and thus increases the "yield" of the controls.
To quote a former head of CAF's anti-fraud department:
"We at CAF, quite honestly, on these very big frauds, cannot take the lead,
because the stakes are beyond us, in a way." A little further on, he notes
with satisfaction that the latest "objectives and management agreement", the
contract binding CAF to the State and setting a certain number of targets22,
draws a "distinction between the recovery rate for fraudulent and for
non-fraudulent overpayments […] because efficiency is even greater on
non-fraudulent overpayments which, by definition, are of lesser importance".
This algorithm is therefore nothing but a tool for
increasing the profitability of the controls carried out by CAF, in order to
feed a communication policy in which, throughout activity reports and public
statements, the harassment of the most precarious becomes evidence of the
institution's "good management".
Dehumanization and digital exposure
But digital technology has also profoundly changed the
control itself, now centered on analyzing beneficiaries' personal data, to
which the controllers' right of access has become sprawling. Access to bank
accounts, to data held by energy suppliers, telephone operators, employers,
merchants and of course other institutions (the employment agency, the tax
authorities, the national social security funds…)24: the control has turned
into a genuine digital strip-search.
These thousands of digital traces are mobilized to
feed a control in which the burden of proof is reversed. Far more than the
interview, personal data now forms the basis of the controllers' judgement. As
one CAF controller put it: "Before, the interview was very important. […] Now,
checking the information upstream of the interview matters much more."25 Or
again: "When a controller prepares his file, just by going through the partner
portals, before even meeting the beneficiary, he has a very good idea of what
he will be able to find".
Refusing to submit to this transparency is punished by
the suspension of benefits. There is no "right to digital silence": any
opposition to total transparency is treated as obstruction. And for the most
reluctant, CAF reserves the right to request the information directly from the
third parties who hold it.
The control then becomes a session of humiliation in
which everyone must agree to justify the smallest detail of their life, as
this beneficiary testifies: "The interview […] with the CAF agent was a
humiliation. He had my bank accounts in front of him and went through every
line. Did I really need an Internet subscription? What had I spent those 20
euros withdrawn in cash on?"26
The score assigned by the algorithm acts, in particular,
as proof of guilt. Contrary to what CAF would have us believe, repeating to
anyone who will listen that the algorithm is only a "decision-support tool", a
bad risk score generates suspicion and severity during checks. It is up to the
beneficiary to answer for the algorithmic judgement, and to prove that the
algorithm is wrong. This influence of algorithmic scoring on control teams, a
recognized phenomenon known as "automation bias", is explained even better by
a controller: "Given that we are going to check a situation with a high score,
some told me that there is a kind of, even unconsciously, not quite an
obligation of results, but a feeling of telling themselves: if I am here, it
is because there is something, so I have to find it."
Dramatic human consequences
These practices are all the more revolting as their
human consequences can be very serious. Psychological distress, loss of
housing, depression28: the control leaves deep marks on the lives of everyone
subjected to it. As a director of social action explains29: "You have to
understand that an overpayment is almost worse than non-take-up". And he adds:
"You are caught in a machinery for recovering overpayments, and administrations
can also decide to cut off all your access to social benefits for six months.
You really find yourself in a dire situation: you made a mistake, but you pay
extremely dearly for it, and this is where an extremely severe spiral of
deterioration begins, from which it is very difficult to recover."
Demands for the reimbursement of overpayments can
represent an unbearable burden for people in financial difficulty, especially
when they stem from errors or omissions spanning a long period. Added to this
is the fact that overpayments can be recovered by deductions from all social
benefits.
Worse, the numerous testimonies30 collected by the
Defender of Rights and by the Stop Controls and Changer de Cap collectives
report many illegal practices on the part of CAF (failure to respect
adversarial proceedings, obstacles to appeal, abusive suspension of aid,
failure to provide the investigation report, no access to the findings) and
abusive requalifications of involuntary errors as fraud. These improper
qualifications then lead to recipients being registered as fraudsters31, a
registration which in turn reinforces their stigmatization in future
interactions with CAF and whose consequences may extend beyond this
institution if the information is passed on to other administrations.
Digital technology, bureaucracy and social control
Admittedly, digital technologies are not the root
cause of CAF's practices. The "social" counterpart of the police's digital
control of public space that we document in our Technopolice campaign, they
reflect policies centered on logics of sorting, surveillance and the general
administration of our lives32.
The practice of scoring that we denounce at CAF is not
specific to this institution. A pioneer, CAF was the first social
administration to deploy such an algorithm; it has since become the "good
student" which, in the words of an LREM MP33, should inspire other
administrations. Today, Pôle emploi, the health insurance fund, the old-age
insurance fund and even the tax authorities are working, under the impetus of
the Court of Auditors and the National Delegation for the Fight against
Fraud34, to develop their own scoring algorithms.
At a time when, as Vincent Dubois35 notes, our social
system tends ever more towards "fewer social rights granted unconditionally
[...] and more aid [...] conditional on individual situations", which
"logically calls for more control", it seems legitimate to question the major
projects for automating social assistance, such as the "solidarity at the
source" proposed by the President of the Republic. For this automation can
only be achieved at the cost of ever-closer scrutiny of the population, and
will require digital infrastructures which, in turn, will confer ever more
power on the State and its administrations.
Fight
Faced with this observation, we demand an end to CAF's
use of this scoring algorithm. The hunt for overpayments, the vast majority of
which amount to a few hundred euros36, can in no way justify practices which,
by their very nature, throw precarious people into situations of immense
distress.
To the remark of a CAF director saying that he could
not "answer precisely as to the biases" his algorithm might contain, thereby
implying that the algorithm could be improved, we answer that the problem is
not technical but political. Since it simply cannot exist without inducing
discriminatory control practices, it is the scoring algorithm itself that must
be abandoned.
We will soon come back to the actions we intend to take
to fight, at our level, against these policies. Until then, we will continue
to document the use of scoring algorithms across French administrations, and
we invite those who wish, and are able, to organize and mobilize locally, as
with the Technopolice campaign run by La Quadrature. In Paris, you can find us
and discuss this fight at the general meetings of the Stop Controls
collective, whose press releases we relay on our website.
This fight can only benefit from exchanges with those
who, at CAF or elsewhere, have information about this algorithm (the details
of the criteria used, the internal disagreements its implementation may have
provoked, etc.) and who want to help us combat such practices. We encourage
them to contact us at contact@laquadrature.net. You can also submit documents
anonymously via our SecureDrop (see our help page here).
Finally, we wish to denounce the police surveillance to
which the Stop Controls collective is subjected. Telephone contacts from the
intelligence services, allusions to the collective's actions made to some of
its members during other activist actions, and a heavy police presence during
simple leafleting operations in front of CAF offices: so many police measures
aimed at intimidating and repressing a social movement that is both legitimate
and necessary.