A pilot test in India that will analyze CCTV footage with algorithms is the latest initiative to prevent crime using AI systems.
As a species, we have evolved to detect risky or hazardous situations. Non-verbal language is a tell-tale sign of whether another person is nervous or aggressive: a frown, clenched teeth, clenched fists. Ironically, a shoplifter often betrays himself through those very expressions. What if we could isolate and analyze those behavioral patterns using AI systems? After all, it is the next logical step in machine recognition of the physical world, built on the same principle that allows a driverless car to avoid an accident. Now, after the latest advances in facial recognition, the technology seems ready for another leap. That is the scheme announced by Cortica, an Israeli lab specializing in autonomous artificial intelligence.
The company, which also does research on driverless vehicles and smart cities, bases its software on neural learning patterns observed in mice and translated into mathematical formulae. Its systems can thus learn from collected data and predict future events. The latest application is software that analyzes CCTV footage to detect movements and behaviors linked to violent crime or theft. The tool is powerful enough to examine terabytes of information and can fine-tune its own abilities while processing it. Its capacity to detect misconduct rests on the so-called "micro-expressions" that betray a potential criminal.
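Cortica has not published its method, but the core idea of flagging frames whose behavioral features deviate from a calm baseline can be sketched in a few lines. Everything below is illustrative: the feature names (brow furrow, jaw tension, hand clench), the values, and the threshold are invented for the example, not taken from Cortica's system.

```python
from math import dist

# Hypothetical feature vector extracted per video frame:
# (brow_furrow, jaw_tension, hand_clench), each scaled to [0, 1].
# These features and numbers are illustrative, not Cortica's actual pipeline.

def calm_centroid(calm_frames):
    """Average the features observed during normal, calm behavior."""
    n = len(calm_frames)
    return tuple(sum(f[i] for f in calm_frames) / n for i in range(3))

def anomaly_score(frame, centroid):
    """Euclidean distance from the calm baseline: higher = more suspicious."""
    return dist(frame, centroid)

def flag_frame(frame, centroid, threshold=0.5):
    """Flag a frame whose features stray too far from the baseline."""
    return anomaly_score(frame, centroid) > threshold

# Baseline built from calm footage, then a tense frame scored against it.
calm = [(0.1, 0.2, 0.1), (0.2, 0.1, 0.2), (0.15, 0.15, 0.1)]
centroid = calm_centroid(calm)
tense = (0.9, 0.8, 0.9)
print(flag_frame(tense, centroid))  # the tense frame is flagged
```

A production system would replace the hand-picked features with ones learned from video, but the flag-on-deviation logic is the same.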
The first pilot test is being carried out in India, in a joint venture with Best Group, a local company specializing in the automotive, education, smart-machine, and technology sectors. In the first stage, the software will learn to link the movements of pedestrians to criminal behavior. The goal is to anticipate not only individual crimes, such as a shooting, but also situations where an angry mob turns violent. The applications, however, go well beyond street security, as the technology could be used in driverless taxis to raise the alarm when an assault takes place.
As always, technology is neutral, and the outcome hinges on who implements it and how. If properly used, however, AI applied to security could help create safer cities, anticipating dangerous situations and defusing them before they happen.
The analysis of CCTV footage is just one of many applications of AI in the field of security. Two years ago, the US Justice Department assigned part of its budget to a program carried out by Cardiff University in the UK. The project aims to develop new software that analyzes social-media data to detect potentially dangerous zones. The researchers found a correlation between crime waves and mentions of antisocial behavior, such as littering and street drunkenness; the link was even stronger than the one with crime records and census data. The system works by analyzing tweets and verbal aggression together with hate-crime data from the LAPD and contrasting them with violent incidents across the city. An algorithm then learns to predict future outcomes from those earlier correlations, allowing resources to be assigned to potential crime hot spots.
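The Cardiff approach, as described, boils down to two steps: measure how well mention counts track crime counts per zone, then rank zones by mention volume to prioritize resources. A minimal sketch of both steps, with invented zone names and counts (the real project uses geotagged tweets and LAPD records, not this toy data):

```python
from math import sqrt

# Illustrative data: per-zone counts of tweets mentioning antisocial
# behavior (littering, drunkenness) and recorded crimes. Invented numbers.
zones = {
    "downtown": {"mentions": 42, "crimes": 31},
    "harbor":   {"mentions": 18, "crimes": 12},
    "uptown":   {"mentions": 7,  "crimes": 5},
    "suburbs":  {"mentions": 3,  "crimes": 2},
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

mentions = [z["mentions"] for z in zones.values()]
crimes = [z["crimes"] for z in zones.values()]
r = pearson(mentions, crimes)  # close to 1.0 for this toy data

# Rank zones by mention volume to prioritize patrol resources.
hot_spots = sorted(zones, key=lambda z: zones[z]["mentions"], reverse=True)
print(round(r, 2), hot_spots[0])
```

The actual system layers a predictive model on top of this correlation, but ranking zones by the social-media signal is the intuition behind assigning resources to hot spots.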