Reducing False Alarms in Central Monitoring Stations

By IVmarcom,

The central monitoring station – the heart of many security systems.

At the heart, albeit remote, of many home and commercial security systems is the central monitoring station. Banks of computer screens are watched day and night by human operators looking for break-ins, loiterers and other suspicious events. While operators provide a valuable service, they have limitations – there are only so many screens one person can monitor, and the human brain is wired to start ignoring repeated stimuli of the same kind over time. In other words, the more events and alerts there are, and the longer an operator has been watching, the less likely it is that an important event will be noticed. And because most CCTV surveillance cameras in home or commercial use are just dumb cameras, they can transmit a lot of alerts requiring human intervention or verification.

Not only is this a problem for home and business owners, it also limits the number of customers that a central station can effectively monitor. Traditional CCTV surveillance systems that just stream video or use PIR (Passive Infrared) motion detection are extremely prone to false alarms, forcing a choice between costly video verification and a loss of accuracy and business.

Human/vehicle detection

The way to decrease false alarms, improve event detection accuracy and thereby allow a monitoring station business to grow its customer base, is to use AI in the form of video analytics. Video analytics can analyze a stream of video, frame by frame, in real-time, and provide alerts only when real events occur: detecting the presence of humans or vehicles where they shouldn’t be at that hour, a person spending time in a restricted area – loitering – or a vehicle at a business after hours, when no one should be there.
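
The alert rules described above – after-hours presence and loitering in a restricted zone – can be sketched in a few lines of code. This is a minimal illustration, not any vendor's implementation; the class names, zone labels and 60-second loitering threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "person" or "vehicle", as classified by the analytics
    zone: str         # camera zone where the object was seen
    timestamp: float  # seconds since epoch

def should_alert(det, first_seen, restricted_zones, after_hours, loiter_secs=60):
    """Decide whether a detection becomes an operator alert."""
    # Any person or vehicle on site outside business hours is an alert.
    if after_hours and det.label in ("person", "vehicle"):
        return True
    # A person lingering in a restricted zone beyond the threshold is loitering.
    if det.label == "person" and det.zone in restricted_zones:
        return det.timestamp - first_seen >= loiter_secs
    return False
```

For example, a person first seen in a restricted loading dock 70 seconds ago trips the loitering rule, while the same person passing through a public lobby does not.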

But the real benefit of AI-based video analytics is not what they detect, but what they ignore. An animal crosses the camera’s view at night; it’s fall and leaves are coming off the trees; snow is falling; headlights shine from the highway a few hundred yards away. Video analytics discards all of these potential alerts, sending only the ones that actually matter.
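
In practice this discarding amounts to filtering the classifier's output: only detections of the classes that matter, at sufficient confidence, ever reach an operator. A minimal sketch, with made-up class labels and an assumed confidence threshold:

```python
# Classes that warrant an operator alert; everything else is discarded.
ALERT_CLASSES = {"person", "vehicle"}
MIN_CONFIDENCE = 0.6  # assumed threshold; tuned per site in practice

def filter_detections(detections):
    """detections: list of (class_label, confidence) pairs from the analytics engine."""
    return [(label, conf) for label, conf in detections
            if label in ALERT_CLASSES and conf >= MIN_CONFIDENCE]

# One frame's worth of raw detections: an animal, moving foliage and a
# low-confidence vehicle are all silently dropped.
frame = [("person", 0.91), ("cat", 0.88), ("tree_motion", 0.70), ("vehicle", 0.45)]
print(filter_detections(frame))  # [('person', 0.91)]
```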

The advantages of video analytics are these:

  • Reduced false alarms
  • Improved monitoring accuracy
  • Less strain on operators
  • Less need for video verification
  • Ability to add more customers without more operators

Detecting human shapes

How do AI-based video analytics work? In the early years of video analytics, programmers wrote reams and reams of hand-crafted algorithms to determine what was a valid motion in a video frame and what wasn’t. Video frames were analyzed by these algorithms, and the only way to improve the system was for the programmer to write more or better algorithms; the accuracy of the solution depended on the skill of the programmer. But in recent years the neural network – a mathematical model loosely mimicking how the human brain learns – has become practical at scale. One of its applications is machine learning: the computer teaches itself what a human shape looks like by scanning thousands of images of humans in different poses, at different angles, and in different lighting.
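
The shift from hand-written rules to learning from labeled examples can be illustrated with the simplest possible learner, a perceptron. This is a toy, not a real human detector – production systems use deep convolutional networks on raw pixels – and the two features here (aspect ratio and a vertical-symmetry score) are invented stand-ins for real image features:

```python
def train_perceptron(examples, epochs=50, lr=0.1):
    """examples: list of ((feature1, feature2), label) with label 1 = human, 0 = not."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                    # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Tall, upright shapes labeled as human (1); squat shapes as not-human (0).
data = [((2.5, 0.9), 1), ((2.8, 0.8), 1), ((0.6, 0.3), 0), ((0.9, 0.2), 0)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print(predict(2.6, 0.85), predict(0.7, 0.25))  # 1 0
```

Nobody wrote a rule saying "tall means human" – the weights were adjusted automatically from the labeled examples, which is the essence of the approach.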

Neural networks and machine learning – this is what we mean by artificial intelligence, or AI, in the realm of video analytics. The analytics get better and more accurate the more data is passed through them. Note that this learning doesn’t happen on live, real-time data, but at the development stage. It takes a lot of processing power, usually with graphics processing units (GPUs) for acceleration, to process all that video data. The result is a compact analytics engine that can run on a regular server, or even inside the camera itself.

So stop relying on human operators to determine when an alert is real or not and make a move to AI-powered video analytics. AI – one time when artificial is actually better than the real thing.

  Category: Video Analytics

Facial Recognition: Facing the Future

By IVmarcom,

Facing the future?

The average human can recognize faces with an accuracy of 97.5%. As of 2018, face recognition (FR) technology is achieving accuracy rates of 95-99%, with sub-second recognition times. Of course it’s easy to throw computing power at the problem from a local server or in the cloud, but with today’s multi-core camera chips such as those from Ambarella or Qualcomm, the really interesting applications are doing face recognition at the edge.
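
Under the hood, most modern FR systems work by mapping each face image to a fixed-length embedding vector and declaring a match when two embeddings are close enough. A minimal sketch of the comparison step – the vectors and the 0.8 threshold are invented for illustration, and real systems tune the threshold on validation data:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

MATCH_THRESHOLD = 0.8  # assumed; trades off false accepts vs. false rejects

def is_same_person(emb_a, emb_b):
    return cosine_similarity(emb_a, emb_b) >= MATCH_THRESHOLD

# Made-up 4-dimensional embeddings (real systems use 128+ dimensions).
alice_enrolled = [0.12, 0.88, 0.35, 0.41]
alice_at_door  = [0.10, 0.90, 0.33, 0.44]   # same person, slightly different image
stranger       = [0.95, 0.05, 0.70, 0.10]

print(is_same_person(alice_enrolled, alice_at_door))  # True
print(is_same_person(alice_enrolled, stranger))       # False
```

The embedding computation itself is the expensive neural-network part; the comparison above is cheap, which is why it can run at the edge once the network fits on the camera chip.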

The applications of facial recognition (FR) are almost without limit. For security we can dispense with key cards, as a camera at the office door or turnstile recognizes you and lets you in without you taking your hands out of your pockets. You may forget to bring your card, but not your face. It works just as well in apartment complexes, warehouses, hospitals, care homes, data centers or secure government installations. Retailers can quickly and automatically recognize known shoplifters or serial returners from a central database, cutting down on fraud and pilfering. Schools can ensure that only authorized people are allowed in. Banks and financial centers can match your face with your account records.

There are also added-value possibilities with FR. When your best customers come into your store, restaurant or casino, you can be alerted to the presence of a VIP and give them personalized service. Better still, when you come home from work your home recognizes you, sets the temperature to your liking, and plays your favorite Led Zeppelin channel on Pandora. Or Beyonce.


Ms Dong’s image on a public screen

Of course problems can sometimes occur. In China, a country with 170 million cameras and 400 million more on the way, a well-known businesswoman was recently “named and shamed” on a public screen for jaywalking. In fact, her face had been recognized in an advertisement on the side of a bus as it sped through the intersection. According to the authorities, the error was quickly addressed in an upgrade.

Credit: Joy Buolamwini

Another issue with FR is bias. The training database used in the deep learning development phase of the algorithm has a major bearing on the accuracy of the recognition, and bias can easily creep in. Developers are usually male, so they may unconsciously pick more male faces to train with, and a database of predominantly Caucasian faces may have difficulty accurately recognizing Asian or African faces. “Gender Shades,” a study of some of the most popular FR systems by Joy Buolamwini, a researcher at the M.I.T. Media Lab, has shown that darker-skinned female faces are badly underrepresented in training data, leading to the lowest accuracies for that group.

There are also the scenic characteristics of the captured image to take into account. Poor lighting makes it harder to recognize faces – even for a human – and dark-skinned faces in low light are the most difficult. Low light conditions can be alleviated somewhat by the use of infrared lighting, which some cameras provide. Another issue is that of “liveness”: whether the face in front of the camera is that of a real person or just a picture. The algorithm has to be tuned to look for the 3D aspects of a live face, such as depth differences and small movements, instead of a static picture, to prevent spoofing. (See “anti-spoofing.”)
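
One simple cue in the spirit of the liveness checks described above: a printed photo held up to a camera is rigid, so its facial landmarks move in lockstep from frame to frame, while a live face shows small independent movements (blinks, head turns). The sketch below is a toy heuristic, not a production anti-spoofing method, and the landmark coordinates and threshold are invented:

```python
def motion_variance(frames):
    """frames: list of landmark lists [(x, y), ...], one list per video frame."""
    # Per-landmark displacement between consecutive frames.
    disps = []
    for prev, cur in zip(frames, frames[1:]):
        for (x0, y0), (x1, y1) in zip(prev, cur):
            disps.append(abs(x1 - x0) + abs(y1 - y0))
    mean = sum(disps) / len(disps)
    # Rigid motion -> all displacements equal -> variance near zero.
    return sum((d - mean) ** 2 for d in disps) / len(disps)

LIVENESS_THRESHOLD = 0.01  # assumed; real anti-spoofing models learn such cues

def looks_live(frames):
    return motion_variance(frames) > LIVENESS_THRESHOLD

# A photo translated across frames: every landmark moves identically.
photo = [[(10, 10), (12, 10)], [(11, 11), (13, 11)], [(12, 12), (14, 12)]]
# A live face: landmarks move slightly and independently.
live = [[(10, 10), (12, 10)], [(10.5, 10), (12, 10.8)], [(10, 10.9), (12.6, 10)]]
print(looks_live(photo), looks_live(live))  # False True
```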

Lastly is the thorny issue of privacy, a discussion which is more in the realms of ethics and law rather than technology. In China your face is being captured everywhere you walk whether you like it or not, and likely stored for future unspecified uses. In Europe, Facebook’s auto face-tagging has been blocked. But here in the US there is no Federal privacy law like Europe’s GDPR, and only three states – Illinois, Texas and Washington – have state-level Biometric Information Privacy Acts, though Texas and Washington do not allow private lawsuits.

As a security technology face recognition is here to stay, and as with any new technology the issues are being addressed almost as fast as they crop up. It may not be as accurate as iris scanning or fingerprints, but it is quicker and less intrusive and offers a convenient and secure method of access control, visitor tracking and criminal detection.

  Category: Facial Recognition