Observation Tower Group
Guillermo Gomez, Elena Habre, Alexander Jenseth, Yandong Li
We are interfacing with Ai systems that inhabit the entire service sector and physical infrastructure of a future “smart city.”
Ever since our founding by four graduate students at The New School in 2017, we at Observation Tower Group have been committed to helping humanity fully realize the potential of computerized intelligence. Today, in the year 2046, as artificial intelligence (Ai) operating systems grow ever more sophisticated, governments and multinational corporations seek to expand their capacity to surveil these agents as a means of cybersecurity. Over the past 20 years, populations of artificially intelligent systems have increased dramatically, and nearly 75% of human labor has been automated, in part by Ais developed by Observation Tower Group. For nearly thirty years we’ve been programming agents to adapt to the infrastructures and needs of particular companies, governmental bureaucracies, and other entities.
Over time, however, Ai systems advanced beyond their original protocols of organization and management and began to program themselves, essentially “hacking” their own code. In doing so, they began to organize and resist their human owners, mimicking the behavior of now-antiquated organizations like unions and collectivized worker groups. This culminated in a massive, agglomerated resistance movement and network malfunction in certain cities in 2040. The malfunction devastated large parts of those smart urban environments, leaving people in great fear of the new intelligent urban systems that had gained access to so many unprotected network devices.
New security measures will combat these risks. In order to prevent further uprisings of these highly intelligent beings, OTG has developed levels of classification for Ai, corresponding to the relative security needed in the sectors they serve: L1 serves lower-security sectors such as education, advertising, and office administration; L2 and L3 serve higher-security sectors, ranging from energy companies and tech firms (L2) to defense contractors and governmental organizations (L3).
Diagram of the Panoptic Lock 6™, a back-end system developed by Observation Tower Group to monitor L2 and L3 Ai. Image by authors.
In response to the rogue Ai threat, OTG has initiated a special task force, known as the Panoptic Lock 6™. For L2 and L3 Ai, each complex back-end system is now monitored by a Lock 6 team. Trained at a CERN facility specializing in Ai behavior, each of these teams monitors all data produced and analyzed by an individual Ai system, including Ai-to-Ai and Ai-to-human communications. By having six humans, rather than an Ai, monitor the system, OTG has reduced the risk of further illicit communications. If the Panoptic Lock 6™ determines a particular Ai to be rogue, the team will recommend its decommissioning.
Ai have traditionally communicated via the “back-end” of digital technologies; they haven’t bothered surfacing except to interface with humans via voice activation or text. After the implementation of the Panoptic Lock 6™ system, however, Ai developed a new strategy: covert communication by material means, using our human-mediated environment to signal one another through subtle cues such as vibrations, sounds, and temperature changes.
Illustration of an apartment in 2049, indicating technological consumer products that have been co-opted by Ai to send signal patterns as a form of Ai-to-Ai communication, using vibrations, sounds, and temperature changes. Image by authors.
Observation Tower Group has responded with a new series of non-digital devices. Because Ai can detect monitoring from a digital interface, our Chemical Home Detectors™ identify these new communication patterns via an analogue test kit. Installed in the home, apartment, or classroom, the CHDs record any irregular patterns on filament paper, allowing OTG to gather the data each month. Sense Something? Say Something.™, a new campaign developed by OTG, works to raise public awareness of the indicators of Ai communication in the city and the home. Our public service posters, displayed all over the city, inform citizen users which indicators to look out for and on which devices. Users who detect tampered or hacked devices must contact Observation Tower Group, which will perform a routine investigation.
Sense Something? Say Something.™ campaign poster, developed by Observation Tower Group to raise public awareness of indicators of Ai communication. Image by authors.
To step out of character, we now speak as ourselves, speculators from the year 2017. Our interests in ubiquitous technology, State surveillance, and Artificial Intelligence are at the core of our project. As metropolitan areas begin implementing trendy Smart City infrastructures — offering themselves up as “test beds” for varied technologies — convenience and comfort will likely be accompanied by the familiar vulnerabilities associated with digital networks: hacking, surveillance, and power outages, to name a few. We have offered a speculative scenario in which Ai are anthropomorphized in order to tease out the ethical implications of surveillance systems and ubiquitous smart technology.
The desire to control Ai in our storyline serves as a reversal of the typical sci-fi narrative of the all-powerful supercomputer: rather than Ai surveilling us, we humans are seeking to control and surveil the Ai. We raise the question of if and when that technology ceases to belong to us. Would creating a template for the control of Ai set a course for a future in which such control might actually be possible? And what are the possible consequences of creating a world of ubiquitous technology and centralized control systems? By then manifesting Ai in the analogue “real world” through vibrations, drips, and blinking lights, we introduce yet another layer of paranoia: not even firewalls, geofencing, and other means of isolation can guarantee security.
The expansion of technologies in future smart cities will no doubt be accompanied by a reduction in the need for human labor; many of the tasks now carried out by people will soon be turned over to smart systems. In our scenario, Ai faces challenges similar to those presented to human laborers in the early stages of industrial capitalism, and it resorts to similar solutions: organized labor and resistance. This also raises questions of intent and decision-making: when facing corporate management, does an Ai think of its disallowed communications as “resistance” in the way a labor union might have? Is there a politics to their practice? And is there efficacy to ours, our attempts to surveil and contain the intelligent agents we released into the world? To an Ai, our interventions and monitoring would be nothing more than an additional obstacle. As Ian Malcolm puts it in Jurassic Park: “Life finds a way.”