LONDON: French plans to use artificial intelligence (AI) to scan the thousands of athletes, coaches and spectators descending on Paris for the Olympics are a form of creeping surveillance, rights groups said.

French authorities have tested artificial intelligence surveillance systems at train stations, concerts and football matches in recent months.

When the games open in late July, these systems will scan the crowds, check for abandoned packages, detect weapons, and more.

French officials say these tools will not be fully operational before the games open, but once they are, police, fire and rescue services, and some French transport security agents will use them until March 31, 2025.

Campaigners worry AI surveillance could become the new normal.

"The Olympics are a huge opportunity to test this type of surveillance under the guise of security issues, and are paving the way to even more intrusive systems such as facial recognition," Katia Roux, advocacy lead at Amnesty International France, told the Thomson Reuters Foundation.

TRAIN STATIONS AND TAYLOR SWIFT

The French government has enlisted four companies in the effort - Videtics, Orange Business, ChapsVision and Wintics.

The security platforms of these companies detect eight types of events: traffic moving against the flow, people in prohibited zones, crowd movement, abandoned packages, the presence or use of weapons, overcrowding, a body on the ground, and fire.

Depeche Mode and Black Eyed Peas concerts, as well as a soccer match between Paris Saint-Germain and Olympique Lyonnais, have been test sites for the software.

More tests were run on crowds travelling through the Nanterre Préfecture and La Défense Grande Arche metro stations to see Taylor Swift, and on the 40,000 attendees of the Cannes Film Festival.

Cannes Mayor David Lisnard said the town already had the "densest video protection network in France", with 884 cameras - one for every 84 residents.

Across France there are about 90,000 video surveillance cameras, monitored by the police and the gendarmerie, according to a 2020 report.

"One overarching concern is that while the majority of these use cases may not seem to involve revealing the identity of, or profiling, individual people, they still require the deployment of a surveillance infrastructure that is always one software update away from being able to do the most invasive kinds of mass surveillance," said Daniel Leufer, a senior policy analyst at digital rights group Access Now.

"Members of the public will have little to no oversight about what types of things these systems are monitoring, what updates are made etc, and so we will get the inevitable chilling effect that comes from this type of public surveillance," he said.

OLYMPICS BECOME AI PLAYGROUND

French lawmakers have attempted to assuage criticism with a ban on facial recognition. Authorities say it is a red line not to be crossed.

Matthias Houllier, the co-founder of Wintics, said the experiment was "strictly limited" to the eight use-cases outlined in the law, and that features like crowd movement detection could not be used for other processes like gait detection, whereby a person's unique walk can identify them.

Houllier said it was "absolutely impossible" both for end-users and advanced engineers to use Wintics for facial recognition due to its design.

Representatives from Videtics, Orange Business and ChapsVision did not respond to requests for comment.

Experts are concerned that neither the government's criteria for judging the tests a success nor the precise workings of the technology have been made available to the public.

"There is nowhere near the necessary amount of transparency about these technologies. There is a very unfortunate narrative that we cannot permit transparency about such systems, particularly in a law enforcement or public security context, but this is nonsense," Leufer said.

"The use of surveillance technologies like these, especially in law enforcement and public security contexts, holds perhaps the greatest potential for harm, and therefore requires the highest level of public accountability," he said.

Privacy campaigners say that carve-outs in the legislation would allow "competent authorities" to deploy facial recognition for purposes including national security and migration.

"This is not a ban. That's actually an authorisation for law enforcement agencies. People have this illusion that because it says we are banning the technology - except in this, this and this situation - it's okay, but these situations are the most problematic ones," Roux said.

France's historical use of surveillance has also raised concerns. In November last year, non-profit Disclose found that law enforcement agencies had covertly used facial recognition software from Israeli company Briefcam since 2015.

French politicians suggested there was still a gap between the promise of AI surveillance and its capabilities.

"AI-driven video surveillance will not be optimal at the time of the Olympic Games. But the Olympics will be a great playground to experiment with it," said Senator Agnès Canayer.

"More internal security forces or private security forces will be needed to compensate for tech's shortcomings," she said.

The Ministry of the Interior, which oversees French law enforcement, did not respond to a request for comment.

In a list of proposals on the future of AI-enabled surveillance, the government's Law Commission recommended that the technology's "experimental basis" continue, and that the retention period for images captured by the systems be extended, to "test the equipment over all seasons" and during smaller events.

"That's why we decided to campaign and raise awareness right now on facial recognition, even if it's not going to be used during the Olympics," Roux said. "If we wait until it's going to be used, then it's going to be too late."