Irisity

Video Privacy Policy

At Irisity, we develop video analytics software to improve the security of our society while maintaining maximal respect for personal integrity. We target applications where the positive effects, in terms of reduced personal injury and property damage, are high, while the invasion of privacy and personal integrity is kept as limited as possible.

This policy explains how video data is managed at Irisity in order to fulfil the legitimate interests of our customers and our own legitimate interests of improving our systems and services by using machine learning and similar technology.

Irisity services overview

The main purpose of Irisity’s processing of video data is to react in real time to intrusions on private property, violent behavior, and other criminal or unwanted activity. Our systems use automated video analytics to filter streaming video from connected cameras. When something suspicious happens, a short video clip is typically sent to a human operator, who assesses the situation and takes measures if needed. Since video is only shown to human operators when a suspicious situation has been identified, the need for human operators to view live streams from cameras is minimized.

A large part of our connected security installations is used for monitoring areas at times when no person should be present in the monitored area, such as construction sites at night or retail stores outside opening hours. Any individual caught on video in such installations is an intruder (whether intentionally or not), and the risk of capturing innocent individuals on video is very small.

Although our main focus is on alarm-driven responses and privacy protection, our system also supports video recording and various forms of live views from connected cameras. Each customer is responsible for assessing the necessity and proportionality of their enabled system features, taking into account the specific circumstances around each security installation.

We do not develop or use face recognition technology or make any other efforts to determine the identity of individuals. Our systems are used within the security industry, to protect assets and people, never for the purpose of pursuing or monitoring innocent individuals.

Machine learning

Almost all state-of-the-art video analytics systems use machine learning, and our systems are no exception. Machine learning systems are trained on representative training data containing real-world examples of the task that the systems are designed to carry out. For example, a machine learning system for detecting broken windows needs to be trained on a database of examples of windows breaking in different situations. In order to develop and improve our systems, we need to store example data from various security installations.

Masking and anonymization

At many installations, parts of the video are masked out, such that only the relevant parts of the scene visible to the camera are recorded or monitored. For example, for a camera covering a construction site where a public bicycle lane is also included in the view, the public part of the scene is typically masked out. Even if a camera is mounted such that it seems to cover a public area, the public parts are in fact often masked out from recording and analysis. At some installations, we also deliberately use a resolution that is too low to make identification of individuals possible.

At sensitive installations, such as schools, we often use complete anonymization. In this case, all detected individuals are white-masked or replaced with low-resolution versions, completely hiding the identity of any individuals visible in the video. This way, our system can detect suspicious activity without exposing any personal identity, thereby striking a good balance between crime prevention and personal integrity protection.

Legal data protection responsibilities

Video data from security cameras where the resolution and quality are high enough to identify individuals is considered personal data under the EU General Data Protection Regulation (GDPR), and we process all such data according to this regulation.

For video data processed within our systems, our customers are the data controllers, while Irisity is the data processor. This means that each customer is responsible for assessing whether they have the required legal basis for using camera-based security systems in each specific case. The regulations around camera surveillance require the customers to make a case-by-case assessment for each camera or group of cameras, weighing the negative privacy impacts of the surveillance against the positive effects of reduced crime or property damage. Our customers are in control of configuration settings such as black-masking of irrelevant areas, and how these settings are configured may be an essential part of the customer’s legal motivation. For example, a certain camera installation may be legally acceptable only if irrelevant public areas contained in the camera’s field of view are masked out.

As a data processor, Irisity processes video according to instructions from each customer, as documented in written customer agreements. We have a legal responsibility to ensure the safety and integrity of the processing, and we do our utmost to ensure that video data is handled with the greatest possible care and protection.

Use of video in Irisity systems

Irisity’s software services support many different video processing functionalities, with extensive possibilities for customization depending on the local characteristics of each security installation. Each customer decides which features to use based on their own assessment of legitimate interest under the GDPR, camera surveillance regulations and other applicable laws, regulations and guidelines.

The following main video processing features are supported within Irisity systems:

  • Streaming video can be analyzed to detect suspicious human activity such as illegal entry into restricted zones or violent behavior. When such activity is detected, short video clips can be sent to human operators for assessment and, if needed, further action.
  • Streaming video can be analyzed to detect fire, camera tampering and video quality issues, along with a few other analysis functionalities not directly related to human activity.
  • Streaming video can be black-masked and anonymized to hide human identities.
  • Video can be recorded according to defined schedules, with automatic deletion after a certain configurable retention time.
  • At some installations, human operators may view live video from cameras even when there are no active alarms.

Use of video in internal R&D

Irisity may collect sample video data from security installations for use within our internal R&D. For such data collection, Irisity is either the data controller or the data processor, depending on the setup in each customer agreement. This processing is supported by the legitimate interest of us and/or our customers to improve our products and services so that they become better at preventing criminal activities.

This processing is done for the following purposes:

  • Troubleshooting our systems to analyze and fix errors.
  • Creating benchmarks for evaluating and improving our automated analysis.
  • Creating datasets for training machine learning analysis models.

We only store video data on devices approved by our IT department, always on encrypted drives and always behind firewalls, according to our documented internal IT security policy and video data management guidelines. Access to video data is restricted to the R&D staff working on related R&D activities.

We may temporarily transfer video data to third parties for data annotation. If such third parties reside outside the EU, the data is transferred under the standard contractual clauses provided by the European Commission or using other EU-approved mechanisms for ensuring a good level of data protection. We continuously document, evaluate and take measures to minimize the risks involved in such transfers. For example, we restrict access to storage systems based on trusted IP addresses, always use encrypted storage and transfers, and enforce strong password policies. We also make sure that video transferred to third parties is only stored there temporarily and is permanently deleted as soon as each annotation task is completed.

We are completely uninterested in the identity of any individual who may be identifiable in recorded video. More specifically:

  • We do not develop or apply face recognition or any other technology for determining the identity of individuals.
  • We make no effort to categorize or systematize recorded video data based on personal traits such as appearance, ethnicity, clothing or similar.
  • We never handle any other personal data, such as names, personal ID numbers or any other identifiers of personal identity connected to the video data, unless required to do so by applicable law.
  • We never use video data collected for R&D purposes for any other purpose. This means that we never distribute such data back to the original customer, to property owners, or to law enforcement or any other governmental agencies, apart from what may follow from applicable law.

The video data stored for R&D purposes is permanently deleted as soon as it is no longer needed for improving our products and services. For example, we delete data when it is no longer representative due to obsolete resolution and image quality, in cases where public datasets become available that fulfill the same needs, when the amount of data stored grows larger than what is needed, or when we see no more room for improvement in our automated analysis models.

Your rights

According to the GDPR, you have the right to know if we have videos stored where your identity is visible. You also have the right to obtain a copy of any such videos. Recall that for privacy reasons, we do not develop or use face recognition. This means that we have no means of automatically identifying individuals in video data, and any inquiries about your presence in recorded video data must therefore be processed manually.

Requests related to personal data contained in recorded video data should be sent by mail according to the details in the “Contact information” section below. Such requests must state the time and location that you want us to examine, and must include a photograph that closely resembles you so that we can identify you in our video material.

Single, legitimate requests covering a limited time window and a single location can be accommodated free of charge once per year. For requests that are manifestly unfounded or excessive, in particular because of their repetitive character, or would involve a disproportionate effort, we charge a reasonable fee to cover our cost of the manual work involved.

If you believe that Irisity acts in violation of the GDPR or other data protection legislation, you have the right to lodge a complaint with the Swedish Authority for Privacy Protection (Integritetsskyddsmyndigheten, IMY; previously the Swedish Data Protection Authority, Datainspektionen). You can read more at their website, imy.se.

Contact information

Our Customer Success team is the main point of contact for all incoming questions. They can be reached at success@irisity.com or by telephone at +46 771-41 11 00. To reach our data protection officer, please contact Customer Success and ask to be redirected.

Formal, written requests related to personal data exports, information, rectification, removal or related issues should be sent by mail to “Data Protection Officer, Irisity AB (publ), Lindholmspiren 7, 417 56 Göteborg, Sweden”. Such requests must contain your contact information and be signed by you. Your letter should contain as detailed information as possible about the nature, scope and context of your request. We also need proof of your identity in order to avoid unauthorized or fraudulent requests. This proof can be provided by including an attested copy of your ID card with your request or by showing your ID card during a personal visit to our office. We may contact you to request additional information or verify your personal information in order to avoid misuse. We may deny requests if the provided information is incomplete, if we have reason to suspect that the request is fraudulent or illegitimate, or if our denial is otherwise in accordance with applicable law.
