Humanizing the Scan Car

Given the rising prevalence of automated data collection systems in cities, many are growing to fear an increasing state of surveillance. Whilst the automation of civic processes using sensor technologies can bring certain benefits (cleaner streets, reduction in fraud, etc.), it also threatens to erode the organic relationship between people and the cities they live in.

In collaboration with AMS Institute, UNSense co-hosted a three-day design sprint in the hopes of steering the future of these technologies toward more human ends. Given its familiarity on city streets, we took designing the future of the scan car as our starting point.

Five Design Principles

When it comes to responsible digitization, improving privacy is seen by many as the primary aim. While privacy is certainly valid and necessary, there are many other principles that any automated sensing system should abide by. With this in mind, we developed a spectrum of five distinct design principles: transparency, legibility, relatability, contestability and actionability. Our ultimate goal in creating these principles is to provide a framework for designing urban sensing systems toward more human ends.


Transparency

One of the reasons fear of surveillance surrounds automated sensing systems is a lack of system transparency. A scan car driving down the street with an automated dashboard camera creates, for many, a sense of unease, despite the car being used merely for parking enforcement. We take this unease to stem from a lack of transparency about what the car is "seeing" and what that data is used for. With this in mind, we started with simple design fixes to better communicate the function of the existing system. We propose a modular signage system, which communicates the various functions a future scan car may serve (parking violations, air quality monitoring, etc.) and is illuminated only when the corresponding scanner is active. The camera itself could also be designed to better communicate its function: a camera that only looks for license plates could be mounted at license-plate level rather than at human eye height, for example.


Legibility

It is crucial to communicate not only what the scan car is looking for, but how it interprets the objects it sees. We therefore propose letting people understand the scanning algorithm by being able to "see like the scan car". In the case of vehicle-mounted object-recognition cameras used to identify trash on the street, the algorithm quickly discards the image data itself, leaving behind only the metadata of what it has recognized. This process could be communicated via a web-based live stream or an AR app that lets citizens watch how the algorithm recognizes and processes objects.
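As an illustration of that data-minimization step, the sketch below shows one way a camera frame might be reduced to metadata before the image is discarded. This is a minimal sketch under stated assumptions: the `Detection` record and the placeholder `recognize_objects` detector are invented for the example, not part of any real scan-car system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Detection:
    """Metadata retained after the raw image is discarded."""
    label: str          # e.g. "cardboard_box"
    confidence: float   # detector confidence, 0.0-1.0
    location: tuple     # (lat, lon) where the object was seen
    timestamp: str      # ISO-8601 time of the scan

def recognize_objects(frame: bytes) -> list:
    # Stand-in for a real object-recognition model; the result is
    # hard-coded here purely for illustration.
    return [{"label": "cardboard_box", "confidence": 0.91}]

def scan_frame(frame: bytes, lat: float, lon: float) -> list:
    """Run recognition on a frame, then keep only the metadata."""
    results = recognize_objects(frame)
    now = datetime.now(timezone.utc).isoformat()
    detections = [
        Detection(r["label"], r["confidence"], (lat, lon), now)
        for r in results
    ]
    del frame  # drop the reference to the raw image; only metadata leaves this function
    return detections

detections = scan_frame(b"<jpeg bytes>", 52.37, 4.90)
```

A live stream or AR overlay would then render only these `Detection` records, making visible that the system retains labels and locations, not pictures of people.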


Relatability

While scan cars roam neighborhoods on a daily basis, they can still feel like foreign objects passing through anonymously. How can these vehicles be made more relatable and invite interaction? One example would be to give each scan car a unique name, so that residents can initiate text-based conversations with the car from their phones. These conversations would allow citizens to ask what the scan car is measuring, where they can access the data, or what the data is used for. The vehicle could moreover double as a source of hyper-local information, using its geo-location and a screen to alert residents to everything from road closures to new restaurant openings. This design principle adds a human perspective to the scan cars by establishing an identity for each car in its local neighborhood.
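A text-based exchange like this could be as simple as keyword routing over a small per-car info record. The sketch below is one hypothetical way to do it; the car's name, its topics, and the data URL are all invented placeholders, not details from the project.

```python
# All details here (the name "Benny", the topics, the URL) are
# invented placeholders for illustration.
CAR_INFO = {
    "name": "Benny",
    "measuring": "parking occupancy and street-level air quality",
    "data_url": "https://example.org/opendata/benny",
    "purpose": "parking enforcement and neighborhood air-quality maps",
}

def reply(message: str) -> str:
    """Route a resident's text message to a canned, plain-language answer."""
    text = message.lower()
    if "measur" in text or "scan" in text:
        return f"Right now I'm measuring {CAR_INFO['measuring']}."
    if "data" in text:
        return f"You can browse my data at {CAR_INFO['data_url']}."
    if "used for" in text or "purpose" in text or "why" in text:
        return f"My data is used for {CAR_INFO['purpose']}."
    return (f"Hi, I'm {CAR_INFO['name']} the scan car! "
            "Ask me what I'm measuring, or where my data goes.")
```

Even a canned-answer bot like this changes the relationship: the car becomes something a resident can address by name and question directly.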


Contestability

In any automated sensing system, the ability to contest the system (to push back against or reject its judgments) is paramount. While this right is upheld in principle by the GDPR, it is often difficult to exercise in practice. In the chat-bot interaction described above, naming the car also gives it a recognizable identity, allowing citizens to identify the specific car when raising an objection. The modular signage system adds a further layer of accountability, letting citizens judge whether the function the car is scanning for makes sense at that time and place. In the case of parking fines, immediate fine notifications, a link to the scan-car "footage" attached to the fine, or a temporary "unloading" feature integrated into the parking payment app would all make fines easier to contest, or eliminate the need to contest them entirely.


Actionability

We started with simple fixes and improvements to the existing system, then moved toward rethinking the system entirely to introduce new uses of vehicle-mounted sensing in cities. We imagine a future in which these vehicles are seen as useful tools for citizens, not merely instruments of enforcement. One example would be making the data collected about miscellaneous objects on the street available to people via an online interface or notification system. Rather than an object being tagged for pickup by a garbage truck, this system could reduce waste in the neighborhood by promoting free, hyper-local recycling. Sensors measuring things such as air quality or traffic could likewise provide insight into the health or real-time status of a neighborhood. Another idea we explored was a gesture-based interaction system, where citizens vote on a question posed by the car (e.g., "How do you feel about the noise levels in your neighborhood?") and contribute to a repository of hyper-local opinions.


Understanding that the concepts explored here cannot simply be copy-pasted into practice, we abstracted the design elements into the beginnings of a toolkit. Whilst the toolkit remains incomplete, its mix-and-match nature suggests a flexible approach, allowing different solutions to be applied to various urban sensing systems.

What’s next? The next step in proving these concepts is to test them in real life. We are currently looking for partners to develop functional prototypes of some of the concepts outlined above.