- The MTA is deploying AI-powered surveillance in NYC subways to detect "irrational or concerning conduct," aiming to predict and prevent crimes before they occur.
- The system scans for suspicious actions (e.g., unattended bags, aggressive movements) but does not identify individuals, framing the approach as "predictive prevention."
- Gov. Kathy Hochul has prioritized subway safety since 2021. Humans already monitor 40 percent of platform cameras in real time. The AI system would expand coverage without requiring more staff.
- Critics warn of mass behavioral policing, arguing AI inherits human biases and could flag harmless behaviors (e.g., nervous tics) as threats, eroding privacy and normalizing surveillance.
- While officials tout AI as an "objective" solution, skeptics like journalist Cristina Maas argue it's a flawed tool that amplifies existing biases under the guise of innovation, prioritizing control over true safety.
The Metropolitan Transportation Authority (MTA) is quietly taking a controversial step to address rising safety concerns in New York's subway system: deploying an AI-powered surveillance system designed to detect "irrational or concerning conduct" before any crime occurs.
Gov. Kathy Hochul has been expanding subway surveillance since taking office in 2021 in response to a string of high-profile crimes, including assaults and robberies, that have heightened public anxiety. Currently, about 40 percent of subway platform cameras are monitored in real time by human operators.
In line with this, the MTA is collaborating with AI firms to implement real-time video analysis technology that scans for "problematic behaviors" and alerts law enforcement before incidents occur.
Unlike facial recognition systems, the AI would not identify individuals but would instead detect suspicious actions, such as unattended bags or aggressive movements, to predict possible threats. The agency describes this as "predictive prevention" – a pre-crime dragnet that turns nervous tics, anxious pacing or even talking to yourself into potential red flags for law enforcement.
The AI system would expand surveillance coverage without requiring additional staff, automatically notifying the New York Police Department (NYPD) of potential dangers.
"AI is the future," said MTA Chief Security Officer Michael Kemper. "We're working with tech companies literally right now and seeing what's out there right now on the market, what's feasible, what would work in the subway system."
Policymakers believe AI is inherently more objective than humans
This move has raised alarms among civil liberties advocates who warn of a slippery slope toward mass behavioral policing.
In an article for Reclaim the Net, journalist Cristina Maas argued that a dangerous illusion is spreading among policymakers: the belief that algorithms are inherently objective, that they transcend the biases and blind spots of human judgment. (Related: Atlas of Surveillance database reveals THOUSANDS of law enforcement agencies unlawfully surveilling Americans.)
However, she argued that AI is no oracle. Instead, it is a tangled web of code and human assumptions, trained on flawed data and peddled by tech evangelists who have never had to navigate the chaos of a rush-hour subway, let alone the complexities of real-world decision-making. Their glossy presentations promise progress, but in reality, they are selling a veneer of precision over the same old prejudices, repackaged as innovation.
"Whatever patterns these systems detect will reflect the same blind spots we already have; just faster, colder and with a plausible deniability clause buried in a vendor contract," Maas wrote. "And while the MTA crows about safer commutes, the reality is that this is about control. About managing perception. About being able to say, 'We did something,' even if that something is turning the world's most famous public transit system into a failed sci-fi pilot."
Learn more about surveillance programs and surveillance technology at Surveillance.news.