Smart Cities New York (SCNY) is North America’s leading global conference exploring the emerging influence of cities in shaping the future. With the global smart city market expected to grow to $1.6 trillion within the next three years, Smart Cities New York is Powered by People and guided this year by its “Emerging Power Of Cities” theme. The conference brings together top thought leaders and senior members of the private and public sector to discuss investments in physical and digital infrastructure, health, education, sustainability, security, mobility, workforce development, and more, to ensure cities are central to advancing and improving urban life.
High-profile speakers and panelists have confirmed their participation at Smart Cities New York 2018, including:
Stefano Boeri – Architect, Founder of Stefano Boeri Architetti, and Professor, Politecnico di Milano
Alice Charles – Urban Development Lead, World Economic Forum
Ali Chaudhry – Deputy Secretary for Transportation, Office of Governor Andrew M. Cuomo
Peter Hirshberg – Principal & Co-Founder, Maker City Project
Sarah Hunter – Director of Public Policy, X (Formerly Google [X])
Don Katz – Founder & CEO, Audible, and Founder, Newark Venture Partners
Bastian Lehmann – Co-Founder & CEO, Postmates
Jeffrey Sachs – Director of SDSN, Co-Author of U.S. Cities SDG Index and Professor at Columbia University
Andy Stern – President Emeritus, Service Employees International Union (SEIU)
Mary Stuart Masterson – Actress, Filmmaker & Founder, Stockade Works
Call for Papers #6: ‘The Spectre of Artificial Intelligence’
Over the last few years we have been witnessing a shift in the conception of artificial intelligence, in particular with the explosion in machine learning technologies. These largely hidden systems determine how data is gathered, analyzed, and presented or used for decision-making. The data and how it is handled are not neutral, but full of ambiguity and presumptions, which implies that machine learning algorithms are constantly fed with biases that mirror our everyday culture; what we teach these algorithms ultimately reflects back on us, and it is therefore no surprise when artificial neural networks start to classify and discriminate on the basis of race, class and gender. (Blockbuster news stories – recommendation systems showing women fewer well-paid job offers, an algorithm labelling pictures of people of color as gorillas, or a delivery service automatically excluding neighborhoods in big US cities where mainly African Americans and Hispanics live – show how trends of algorithmic classification can relate to the restructuring of the life chances of individuals and groups in society.) However, classification is an essential component of artificial intelligence, insofar as the whole point of machine learning is to distinguish ‘valuable’ information from a given set of data. By imposing identity on input data in order to filter, that is, to differentiate signals from noise, machine learning algorithms become a highly political issue. The crucial question in relation to machine learning therefore is: how can we systematically classify without being discriminatory?
In the next issue of spheres, we want to focus on current discussions around automation, robotics and machine learning, from an explicitly political perspective. Instead of invoking once more the spectre of artificial intelligence – both in its euphoric as well as apocalyptic form – we are interested in tracing human and non-human agency within automated processes, discussing the ethical implications of machine learning, and exploring the ideologies behind the imaginaries of AI. We ask for contributions that deal with new developments in artificial intelligence beyond the idiosyncratic description of specific features (e.g. symbolic versus connectionist AI, supervised versus unsupervised learning) by employing diverse perspectives from around the world, particularly the Global South. To fulfil this objective, we would like to arrange the upcoming issue around three focal points:
Reflections dealing with theoretical (re-)conceptualisations of what artificial intelligence is and should be. What history do the terms artificiality, intelligence, learning, teaching and training have and what are their hidden assumptions? How can human intelligence and machine intelligence be understood and how is intelligence operationalised within AI? Is machine intelligence merely an enhanced form of pattern recognition? Why do ‘human’ prejudices re-emerge in machine learning algorithms, allegedly devised to be blind to them?
Implications focusing on the making of artificial intelligence. What kind of data analysis and algorithmic classification is being developed and what are its parameters? How do these decisions get made and by whom? How can we hold algorithms accountable? How can we integrate diversity, novelty and serendipity into the machines? How can we filter information out of data without reinserting racist, sexist, and classist beliefs? How is data defined in the context of specific geographies? Who becomes classified as a threat according to algorithmic calculations and why?
Imaginaries revealing the ideas shaping artificial intelligence. How do pop-cultural phenomena reflect the current reconfiguration of human-machine-relations? What can they tell us about the techno-capitalist unconscious? In which way can artistic practices address the current situation? What can we learn from historical examples (e.g. in computer art, gaming, music)? What would a different aesthetic of artificial intelligence look like? How can we make the largely hidden processes of algorithmic filtering visible? How to think of machine learning algorithms beyond accuracy, efficiency, and homophily?
If you would like to submit an article or other contribution, in particular an artistic contribution (music, sound, video, etc.), to the issue, please get in touch with the editorial collective (contact details below) as soon as possible. We would be grateful if you would submit a provisional title and short abstract (250 words max) by 15 May 2018. We may have questions or suggestions that we raise at this point. Otherwise, final versions of articles and other contributions should be submitted by 31 August 2018. They will undergo review in accordance with the peer review process (see About spheres). Any revisions requested will need to be completed so that the issue can be published in Winter 2018.
I’m writing today as an editor of ACM’s student magazine XRDS (like CACM but for students http://xrds.acm.org/).
The fall issue is on “The Computer Scientist.” We’ll investigate what a computer scientist is and the values inherent in technical problem solving, as well as ask questions about the role of the computer scientist in society, how that changes in different contexts and when different people take on the role.
We’d love for you (or your colleagues/students/community partners at the Digital Equity Lab) to contribute an article. I am particularly interested in the computer scientist as community builder or as a partner with community builders (What do community builders want from computer scientists? How can computer scientists be good partners?), but am also very interested in any perspective that asks what it means to advocate and fight for equity while navigating the technology sector.
Some quick details: Articles are by invitation, and may cover previously published findings and ideas (in fact, a higher-level synthesis and broader view is encouraged). Articles are roughly 2,500 words long and are archived in the ACM Digital Library.
If you’re interested we’d need to know in the next couple of days (please do let me know either way). The deadline for articles is June 4. We’d be happy to provide any further information.
If you are not available, but have suggestions of colleagues who may be interested in contributing to the issue, please let me know!
Contagions swarm not through grounded history, but through oxidants in the air that accelerate upon communicated contact: information becomes viral, and the mediums that render it possible become protocol. Not unlike many scenes: Frank Gehry, known for a few buildings, shows the then-new, late-1980s CATIA computer software to the then-minimalist sculptor Richard Serra, materializing in Serra what was previously a geometrical fantasy by actualizing a new material form. “Torqued Ellipse” is the name of a decades-long engagement of Serra’s artwork (still today at Dia:Beacon, a few miles north of us), yet also a form not found in nature or architecture – synthetic, synchronic, out of place.
As a formal invention, it is recognized by its “curvilinearity” – impenetrable corten steel becomes malleable under physical pressure and digital code. The impossibility of this form is not its contrived viscerality – or that which suggests its aesthetic importance – but its literal grounding, standing, being: the torqued form is emergent insofar as its material production operates along a similar metric. Parametric design is that which supplements human inadequacies with algorithmically rendered capacities previously unthought. Approaching industrial steel for its natural proclivity to mass, counterbalance and gravitational load, these installations become environments of their own, subject to similar bodily tropes of weather, decay, oxidation. Yet their longevity is supported not only by the atmospheric conditions of the museum and the curation, but concordantly by human design and parametric tools.
While Zaha Hadid’s constructs appear floating, ethereal and buoyant, Richard Serra’s Torqued Ellipses are grounded by gravity and mass, yet still lofted by a curvilinearity that doubles back, elastically bends, on the subject; as continuous as it is striating. While the outside’s peripheral angle is indiscreetly formless and imposes no end, the inner concave side is entirely subsumptive and encloses a fully entrapped optical field. Accordingly paradoxical, the ellipse on the ground and the ellipse in the sky are equivalent.
Serra’s work is visually cued as industrial rust: massive corten steel plates occupying site-specific installations. Decaying and inert structures that circumscribe the world and the human’s place somewhere arbitrarily within. Although the rust simulates decay, a minor conceit throughout Serra’s artwork is that his works are not even proto-naturalist, but machinically construed – simulated earthwork – through CATIA software (computer-aided three-dimensional interactive application), largely popularized in the field of architecture by Frank Gehry.
CATIA optimizes on many levels: price, time, speed, etc. Serra (like Gehry and Hadid) continues to use it – for his Torqued Ellipses, for example – by re-inventing tools to metricize an internal volume that can withstand the physicality. CATIA is applicable across other mediums, used in conjunction with IBM, Boeing, Windows, Linux, etc. A steered universalism guides this software, translating its use function variably across applications. (Correspondingly, Frank Gehry was among the first architects to utilize computational software to render and assist the development of buildings.)
Serra “invents strategies that allow to see in a way that they haven’t seen before in a way to extend their vision… ways of informing themselves by inventing tools, techniques, process that allow them to see into a material manifestation in the way that you would not if you dealt with standardized or academic thinking” (Serra interview)
Rather than arguing for the primacy of computational design, Luciana Parisi argues that such design renders formal impossibilities possible through the control of code: “parametric techniques within the field of digital design specifically involve the use of scripting languages that allow designers to step beyond the limitations of the user interface, and do their design work through the direct manipulation of code rather than form” (Contagious Architecture: Computation, Aesthetics and Space, p. 265). A certain discordance accompanies these earthworks and floating utopias alike, one irresolvable by either human or machine alone, yet dependently concomitant. Parisi understands these relational conditions as instances of mishearing and misunderstanding, never not deployed in the computational organization of algorithms (Parisi, 171).
Serra’s sculptures are intentionally not a direction for future urban intelligence; relegated to the aesthetic realm of sculpture, they are deliberately not useful, excessive and wasteful. But one can start to delineate ways in which the city could incorporate these earthworks into a landscape of the future, countering the present will to transparency and immediacy. Insofar as Serra’s sculptures are obstinate, opaque and non-transparent, there are potential regimes of signifiers to be deployed: invoking values fundamentally antagonistic to the present illuminative city. Perhaps the city of right angles has reached its limit…
After the creation of FICO in the US and the incredible success of its three-digit credit score, China has been working on evolutions of a similar model: a credit score for its citizens that would account not only for their creditworthiness but also for their value as citizens, and serve as a means of controlling them. Turning social media from a medium of communication into a medium of social manipulation, Ant Financial, through its Alipay platform, developed a comprehensive app tracking payments, movements, beliefs and actions, and ranking every individual by their daily routine and activities. The idea of using big data for tracking and ranking opened a new era of social credit in which privacy is obsolete.
The creation of Zhima Credit started in 2014, when the Chinese government established the necessity of a tracking system to rank the reputation of individuals, businesses, and officials. The aim is to compile a file of data from public and private sources on each individual by 2020. The goal is to push people toward a standard of behavior in order to create a safe and balanced environment within the country and internationally.
The revelation of individual-tracking software raises questions about how communities develop around it. The evolution of smart cities has brought new behaviors to our communities. Cities have always changed, and with them their people, but some decades have seen bigger shifts than others. The use of big data as a behavioral measurement is, no doubt, one of the biggest steps in community trustworthiness. It also creates a personal challenge, in which each person competes to reach the best possible score. Have you helped someone on the street today? Did you get into a fight with the supermarket clerk? Everything will be recorded and will increase or decrease the score. Nor is it only a personal issue, but one involving your personal network. The creation of a personal score based on good and bad behavior could create a social gap in some communities. Since the score belongs to the public domain, everybody is aware of your rank, which could decrease or increase your status in your community, and even more among your friends.
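To make the dynamic described above concrete, here is a purely illustrative toy model of a behavior-based score with a network effect. The actual Zhima Credit algorithm is not public; every action name, point value, score range, and the friend-averaging rule below are invented for the sketch.

```python
# Toy illustration of a behavior-based score with network effects.
# NOT the actual Zhima Credit algorithm (which is not public); all
# action names, weights, and the blending rule here are invented.

ACTION_DELTAS = {
    "helped_stranger": +5,
    "paid_bill_on_time": +3,
    "public_argument": -4,
    "missed_payment": -6,
}

def update_score(score, actions):
    """Apply a day's recorded actions to a personal score (clamped 350-950)."""
    for action in actions:
        score += ACTION_DELTAS.get(action, 0)
    return max(350, min(950, score))

def network_adjusted(score, friend_scores, weight=0.1):
    """Blend a small fraction of the friend-network average into the score,
    mimicking the claim that low-scoring friends drag your own score down."""
    if not friend_scores:
        return score
    avg = sum(friend_scores) / len(friend_scores)
    return round((1 - weight) * score + weight * avg)

base = update_score(600, ["helped_stranger", "public_argument"])  # 601
print(network_adjusted(base, [700, 450, 500]))                    # 596
```

Even this toy version shows the incentive the essay describes: because the friend average is blended in, a rational player of such a system will curate their social network, not just their behavior.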
The social credit score touches every personal and social aspect of a person’s life. On a large scale, this could help to create a safer environment and a more enjoyable community. In a city where anything but good actions is penalized, people will shun bad company and bad behaviors. So, if having friends with a low score brings down a person’s own score, they will try to be surrounded by people similar to themselves, with the same interests and the same scores. It will not only clean communities but will make them smart and extraordinary.
Credit: Kevin Hong, taken from Wired.com
With Zhima Credit everything counts, from grocery payments to the school your kids attend. And to engage more people, it also offers rewards as a form of gamification. Since the score is basically a personal ranking within society, it helps people to upgrade their status and become more trustworthy – from renting cars or hotel rooms without a security deposit to traveling to a foreign country without needing a visa. Each point counts; in the same way, bad behavior will lower the score, decreasing the chances of buying certain plane tickets or reserving luxury hotels, and eventually landing a person on the blacklist.
As we can see, Zhima Credit is expected to cover every aspect of Chinese life, reducing people to a number. Although it is intended to reach every corner and change the way we understand privacy, there are some loopholes that remain uncovered and are hard to sort out. As of today, Zhima Credit is only a prototype, which probably won’t affect people’s scores too much and can help users better understand every aspect it covers (which is basically everything). But since the project is intended to become a nationwide system by 2020, there are still many gaps that need to be fixed.
In a country like China, with the world’s largest population and people used to constant evolution, a project with this impact could be exactly what the government needs in order to keep control and balance. Since Alipay and WeChat Pay came into business, people have agreed to lose what was left of their personal privacy. There are still no conclusions about how Zhima Credit will work. Early subscribers will experience a contrast of lifestyles and will surely find advantages in the system. Until it becomes a mandatory, regulated program, people should enjoy what they have left of freedom and privacy.
We are running out of materials to make into crop tops; this isn’t news. To supply our ravenous style demands, scientists, designers, and craftspeople have been searching for, and using, alternative textile materials for mass production for decades. The problems with current textile manufacturing are well known: pollution, toxic and dangerous resource extraction, and abusive labor conditions – and the demand for chemically constructed synthetic garments only grows.
But how do we measure, and thus determine, the relative good and bad of textiles and their replacements? The Healthy Materials Lab at Parsons is currently trying to do just this. Within their library of material collections, they have started sorting and grading emerging technologies focused on harnessing the design capabilities of biological organisms: “Consumer products can be designed and grown harnessing biological organisms. This emerging design paradigm is centered on cultivating materials with living cells. Organisms such as yeast, bacteria, fungi, algae and mammalian cells are fermented, cultured and engineered to synthesize natures materials but with new functional and aesthetic properties.” This is the idea behind the Biofabricated Materials project, currently surveying the field of biologically ‘organic’ resources.
Data Set as grading rubric // methodology
Currently, their collection is quite small, with only five ‘biofabricated materials’ listed as part of their Donghia Healthier Materials Library. However, they are continuously adding to it, with two new additions in the past month alone. Their survey includes materials being used for knitting, masonry, homeware, and leather goods. BioMASON grows bricks from aqueous solutions. MycoBoard makes thermal and sound insulation from mushrooms. AlgiKnit, a biomaterials research company that turns kelp into readily usable fibers for clothing and shoes, is featured in the collection along with pictures, links to their website, and a downloadable PDF data sheet. The data sheets generated by the lab serve as a loose grading rubric for the products featured in their material collection. Essential information – the material composition, the manufacturer, the country of manufacture, and a brief description of the product – makes up the meat of the data sheets. In addition, available colors are included, as well as checked or unchecked boxes next to various “certifications and disclosures” that might or might not apply to the manufacturing process. As primary concerns featured on the materials’ main page, color and country of manufacture tell us the project’s central audience: designers and students hoping to find healthier materials for their garment, building, and production needs.
F.D.A. certification, Health Product Declaration, Declare Label, Environmental Product Declaration, and Safety Data Sheet make up the entire list of certifications – established grading rubrics and guidelines that can tell us how well a company scores on various environmental tests, or perhaps only how new, small-scale, low-budget, or unacknowledged its projects may be. Certainly, the short list of certifications and disclosures tells us about the considerations the Healthy Materials Lab is making, including government certifications, traditional sustainability measurements, and life cycle assessment. These considerations help to illustrate the lab’s methodology for prescribing “healthy” materials, most notably that it includes the beginning and end of a product’s life (HPDs) and long-term sustainability measures (lifecycle assessment via EPDs).
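The data-sheet fields and certification checkboxes described above can be represented as a simple structured record. This is a hypothetical sketch of such a record, not the lab's actual file format; the example material, manufacturer, and values are invented placeholders.

```python
# Hypothetical structured form of one of the lab's material data sheets.
# Field names follow the fields described in the text; the sample values
# are invented placeholders, not actual library entries.

CERTIFICATIONS = [
    "FDA Certification",
    "Health Product Declaration",
    "Declare Label",
    "Environmental Product Declaration",
    "Safety Data Sheet",
]

def make_data_sheet(name, composition, manufacturer, country,
                    description, colors, certs_held):
    """Build a data-sheet record with a checkbox per known certification."""
    return {
        "name": name,
        "composition": composition,
        "manufacturer": manufacturer,
        "country_of_manufacture": country,
        "description": description,
        "available_colors": colors,
        "certifications": {c: (c in certs_held) for c in CERTIFICATIONS},
    }

sheet = make_data_sheet(
    "Kelp-based yarn", "alginate fiber", "ExampleCo", "USA",
    "Biofabricated fiber for knitwear.", ["natural", "charcoal"],
    {"Safety Data Sheet"},
)
held = sum(sheet["certifications"].values())
print(held, "of", len(CERTIFICATIONS), "certifications held")  # 1 of 5
```

A record like this makes the rubric's limits visible: an unchecked box cannot distinguish a material that failed a certification from one whose maker simply never applied, which is exactly the ambiguity noted above.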
weaknesses and other considerations
The Healthy Materials Lab goes a long way in making obscure or underdeveloped technologies accessible to designers, students, and anyone else interested in alternative materials for a variety of design projects. However, the scope of their research and library is somewhat small. Whether or not this is due to the relative newness of the Biofabricated Materials project, their digital and physical collections could benefit considerably from adding measurements of scalability, longevity, cost, and accessibility to their data sets. How easily can kelp be grown and processed for large-scale manufacturing? How fast is this process? What resources are needed to grow and extract this resource? How replicable is this process?
Another product profiled, Loliware, makes biodegradable ‘plastic’ cups from seaweed (again). The product is available for retail online; however, the data sheets do not list the retail price, where the product is being used, or whether it is intended for individual consumption or commercial use in the restaurant industry.
Due to these missing considerations, the collection serves more as product endorsement than as a resource for education and research. However, as the project grows and more products are added, there is potential for the data sheets’ grading-rubric function to inform and guide rather than endorse entrepreneurial pursuits. Regardless of its shortcomings, the Healthy Materials Lab is actively contributing to the body of resources available to conscious and concerned designers, bringing us closer to the resources we need to improve our designed environments and lives.
“Our world goes to pieces; we have to rebuild our world. We investigate and worry and analyze and forget that the new comes about through exuberance and not through a defined deficiency. We have to find our strength rather than our weakness. Out of the chaos of collapse we can save the lasting: we still have our ‘right’ or ‘wrong,’ the absolute of our inner voice⎯ we still know beauty, freedom, happiness…unexplained and unquestioned.” ⎯ Anni Albers
Depression Quest is a 2013 interactive fiction game by Zoe Quinn, Patrick Lindsey, and Isaac Schankler. On the game’s website, the developers share that they wrote the game with twofold intent – to both spread awareness about what depression can feel like and to let people who are dealing with depression know that they are not alone. The game arguably does succeed in achieving the first goal – as it contains clear descriptions of how depression can feel on a day-to-day basis. More broadly, however, I feel that the game does not work well as a game – it is a solid piece of text on the experience of depression, but the decision to make this text playable adds little to the experience of reading it.
One of the notable elements of the game is the set of status bars at the bottom of every page that let you know three things: how depressed your character is, whether they are seeing a therapist, and whether they are on medication. From a gameplay perspective I appreciate these bars, as it is useful to see how my choices in the story affect my character’s level of depression. From an education perspective, however, I find that these bars undermine the game’s ability to show what depression is like.
One of the hallmarks of depression can be an inability to have much perspective on the feelings that you’re dealing with – though you may have been feeling better yesterday, when depression kicks in it’s often accompanied by a conviction that things have always been this way and will always be this way. As Allie Brosh writes in one of her comics on depression, having someone tell you that things will get better when you’re depressed is often met with a feeling of “look, guy, I don’t know what it’s like in I-still-have-meaning-in-my-life-land, but every direction looks like bullshit right now.” The game writes these feelings into the text, but it does a bad job of actually making the player feel them. Depression creates an unreliable narrator, and games are a great way to integrate unreliability into the player’s point of view. Depression Quest misses this opportunity by always telling players exactly how depressed they are, rather than letting them experience some of the confusion and lack of perspective of not actually knowing.
This ties into a broader set of issues the game has with telling rather than showing – as a player, I am repeatedly told that my character doesn’t like their job, but no part of the game actually lets me experience this. There are no penalties for making my character go to work when they don’t want to, and as a player I never have to experience the frustration that my character is said to experience at work. Further on in the game, I am given the option to have my character stop taking their medication, since they have been feeling better lately. While this is a real decision that many people come to face in the course of their treatment, the game doesn’t effectively represent the weight of this crossroads. As a player, I know that I will do better in the game if I keep my character on their medication, and none of the things that the character may be feeling – shame, worry, fear – are made to influence my decisions as a player at all.
There is one mechanic in the game that does effectively show rather than tell, and this is my favorite part of Depression Quest. Almost every time the game gives you options about what action to take next, one or more of them will be crossed out. The more depressed your character is, the more options are crossed out. So, early in the game, when my character’s girlfriend invites them to a party, I have the options of “agree to go” and “say that you’re really just not feeling well and can’t make it.” Above these options is a third: “shake off your funk and go have a good time with your girlfriend,” but it is crossed out and I am unable to click it.
The game is filled with large stretches of time in which you can’t select the option that you think, as a player, would be best for your character. When I played through, even after starting therapy and medication there was a long period in which my character was still listed as ‘very depressed’ and I was unable to click many useful options. This is an element of the game that really works to show something to the player rather than telling them about it, and effectively conveys the feeling of knowing, vaguely, what the best thing to do would be while still being unable to do it.
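The crossed-out-options mechanic can be sketched in a few lines of code. This is a minimal reconstruction for illustration only: the thresholds, the wellness values, and the exact option wording below are invented, not the game's actual data or implementation.

```python
# Sketch of Depression Quest's signature mechanic: the more depressed the
# character, the more of the "healthy" options are shown struck out and
# made unselectable. Thresholds and option data here are invented.

def available_options(options, depression_level):
    """Return (text, selectable) pairs; options whose required wellness
    exceeds the character's current state are rendered crossed out."""
    rendered = []
    for text, max_depression in options:
        selectable = depression_level <= max_depression
        rendered.append((text if selectable else f"~~{text}~~", selectable))
    return rendered

PARTY_OPTIONS = [
    ("Say you're not feeling well and can't make it", 10),  # always available
    ("Agree to go", 6),
    ("Shake off your funk and have a good time", 2),        # needs low depression
]

# At depression level 7, only the first option can be clicked.
for text, ok in available_options(PARTY_OPTIONS, 7):
    print(("[ ]" if ok else "[x]"), text)
```

The design point the sketch makes explicit: the "best" option is deliberately shown to the player before being disabled, so the player knows what they cannot do, which is what makes the mechanic show rather than tell.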
This mechanic, however, isn’t enough to turn the experience of the game around. Overall, Depression Quest feels like it would function better as an essay on depression than as a game, and the parts that make it engaging as a game aren’t enough to fix this feeling. The mechanics of the game don’t work to support its epistemological intent, ultimately undermining the game’s ability to fulfill its stated goals.
In an attempt to redesign future urban cemeteries that integrate the virtual and physical realms of memory, this precedent project is inspired by neuroscientist David Eagleman’s statement:
“There are three deaths: the first is when the body ceases to function. The second is when the body is consigned to the grave. The third is that moment, sometime in the future, when your name is spoken for the last time.”
The speculative design ‘You Only Live Thrice’ attempts to deal with this specific period between the second and third death, when people mention the name of the deceased online. The design uses Google Alerts or similar software to monitor the internet for any mention of the deceased. Small LED light-buds surround each grave in clusters, with varying-sized bulbs that interact with the alert system. Once alerted, the ambient lighting system is activated to provide a visceral and visual experience of the data. Not only does it engage the physical senses, but it serves to unite the multiple forms of recollection existing between the second and third deaths, joining virtual and physical modes of memory communication (DesignBoom, 2013).
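The mention-to-light pipeline described above might be sketched as follows. This is a speculative reconstruction, not the designers' actual software: the feed format, the saturating brightness curve, and the `set_brightness` hardware hook are all assumptions standing in for a real alert feed (e.g. Google Alerts delivered by email or RSS) and a real LED driver.

```python
# Sketch of the 'You Only Live Thrice' pipeline: a feed of online mentions
# of the deceased drives ambient LED clusters at the grave. The brightness
# curve and the set_brightness() hook are assumptions for illustration.

import math

def brightness_for(mention_count, max_level=255):
    """Map recent mention volume to LED brightness on a saturating curve,
    so a single mention is visible and a flood of mentions doesn't blind."""
    return min(max_level, int(max_level * (1 - math.exp(-mention_count / 5))))

def pulse(grave_id, mentions, set_brightness):
    """Activate a grave's light cluster when its name appears online;
    zero mentions leaves the cluster dark, mirroring the 'third death'."""
    level = brightness_for(len(mentions))
    set_brightness(grave_id, level)  # hypothetical hardware/driver call
    return level

# Example: one mention produces a faint glow; silence produces darkness.
log = []
pulse("grave-17", ["blog post naming the deceased"], lambda g, b: log.append((g, b)))
pulse("grave-17", [], lambda g, b: log.append((g, b)))
```

The choice of a curve that decays back to zero when mentions stop is what lets the installation enact the essay's point: the light, like a sound, eventually drifts into oblivion.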
Underlying epistemology and methodology
This precedent’s methodology conceives of cemeteries as vital public green spaces, and does not attempt to create a vertical infrastructure design based on population or spatial constraints. Although it approaches the integration of the virtual and the physical with a very operationalized interpretation of internet data [i.e., names and searches = memory], the approach amounts to a sort of “widgetization” of the virtual/physical memorial interface, mediated by lights.
While the design integrates technology into the built structures of a space, its use of light conceives of death as part of multiple systems at work, similarly to Columbia University’s DeathLab research. Within the literature reviewing alternative methods of burial, decomposing dead matter offers a potential avenue to power life. Harnessing decomposition energy in this way recognizes the basic rule of thermodynamics that energy is neither created nor destroyed, only transferred. This design pays homage to the circular nature of death + birth, but it is limited in its physical capacity to do so. (Rather than using decomposition energy, as Columbia’s DeathLab is researching, this design uses artificial LED lighting powered by solar photovoltaic cells.) Although the lights are not powered by bacterial decomposition, the design is a fascinating way to interpret death as a sequence within a continuous cycle. This methodology elevates the personal human experience through its use of activated lights to temporally engage the continued life of the deceased within virtual mentions. But since the project’s light ultimately peaks and then goes dark, there is a somber understanding that the light mirrors the way a sound travels, drifting into oblivion. The light acts as a symbol of the sound of someone’s last mention.
How its format or mode of execution serves, or fails to serve, its purposes
This precedent struck me due to its integration of sensory experiences and ability to unite virtual and physical spaces. Instead of redesigning the entire space, this interpretation of urban cemeteries protects the current zoning and spatial uses. It does not diminish historic practices or undermine historic architectural preservation within headstone symbolism; rather, it offers family members of the deceased opportunities to engage with their virtual afterlife within this physical space reserved for memory.
Its weaknesses or unexplored critical dimensions.
Despite a reliance on physical infrastructure and technology, the use of these “intelligent” features does not impinge upon the scope of the natural environment. The use of ambient lighting echoes nature’s own brilliance (and innate natural intelligence). In this way, the design is smartly integrated or embedded into the environment, while maintaining the sort of ethereal ambience that traditional cemeteries provide. Despite this interesting integration, the design fails to address the hidden costs of the technology it uses. Practically, it would depend on ongoing maintenance, which undercuts any claim to a permanent solution for this permanent space. It fails to repurpose existing infrastructure, instead creating demand for, and reliance upon, fiber-optic networks that may require unsustainable amounts of energy to integrate into the cloud’s functions. Similarly, this design privileges those already well-known in life, who would continue to live on virtually, physically outshining their deceased peers even in death. Although this experience mirrors that of life, it is difficult to apply equitably and justly, even though death is the ultimate equalizer. Maybe that is what it seeks to do, but as a serious contender for urban cemetery futures, this function is limiting.
“Arnos Vale.” 2018. Arnos Vale. Accessed April 4, 2018. https://arnosvale.org.uk/.
“Constellation Park – GSAPP | Deathlab.” 2018. Deathlab.org. Accessed April 4, 2018. http://deathlab.org/constellation-park/.
In late February, Barbra Streisand was interviewed in Variety magazine. She touched on her desire to disrupt the male-driven atmosphere in Hollywood, her experience with Harvey Weinstein, and her distaste for Donald Trump. However, the detail that garnered the most attention had nothing to do with her career or politics: she had cloned her dog before its death and was living with two genetically identical clones. This revelation drew attention (and ire) from many pet owners. The New York Times published a write-up, which led to a follow-up article penned by Streisand herself. The practice of domestic pet cloning was pioneered in 2005 and offered to the general public in 2015 by the Korean company Sooam Biotech. Between 2015 and the recent publication of Streisand’s interview, multiple pet cloning companies have emerged, and the process has become more streamlined.
Interestingly, some of the top-selling dog products on Amazon are dog DNA kits designed to give pet owners the most accurate genetic information about their dogs, in order to tailor their environment, anticipate health issues, and ease aging. The Embark (great name) Dog DNA Test, and the subsequent rise in canine home genetic testing kits, mirrors, to my mind, the rise of human home genetic testing kits. The advancing sophistication of cloning, genetic-factor awareness, and bio-engineering technology has brought this information into mainstream discussion, and as a result the biomedical and consumer worlds have collided. Consumer pet cloning is a practice that calls into question issues of humans ‘playing god’, interrelations between species, and elitism within a technological context.
The age of cloning was dramatically ushered in by Dolly the sheep in 1996, bringing what was once science-fiction fare into tangible reality. However, while genetic clones of other species quickly followed, the process of producing a cloned dog proved more complicated. “Certain unique aspects of the reproductive process in canids” meant that traditional in vitro methods were not resulting in live births. Scientists in South Korea adopted the use of somatic cell nuclear transfer (SCNT):
In this technique, eggs are removed from female dogs, the nucleus is removed (enucleated), and body cells from the to-be-cloned dog are injected into the eggs. The eggs serve as host for the genetic material of the dog to be cloned. Electric stimulation makes the egg divide, and divide, and divide to behave like a growing embryo, and eggs are then implanted into a dog who serves as a surrogate.
Fast-forward to 2018, where the technology is available for consumer use. In the same way that advanced human fertilization treatments have become commonplace for those who can afford them, companies offering pet cloning seek to offer pet owners a similar use of reproductive technology (at a steep price). This results in a unique combination of high-tech bio-engineering imagery with a sentimental appeal to pet owners.
American company ViaGen and the Korean Sooam both feature pages of testimonials, with pictures of happy cloned pets thriving alongside their owners’ heartwarming stories. Both companies also offer detailed descriptions of the cloning procedures, again using language that emphasizes how natural and ‘simple’ the process is, and how a support community exists for customers, including blogs with general pet-owner information:
The Korean Sooam is more focused on scientific information and raw details, whereas ViaGen focuses more on the emotional call to action and sentimental connection. On its main ‘dog cloning’ page, Sooam features instructions on how to keep genetic samples viable if your pet suddenly dies:
Both sites feature rundowns on the science and ethics of animal cloning. ViaGen has an extensive FAQ, featuring questions such as: What is a cloned dog? Do pets delivered by cloning have normal lifespans? Is a pet born through the cloning process physically and behaviorally identical to the “original” pet? This last question is particularly important in reinforcing the desire to clone a loved pet. ViaGen answers that “This is best described as identical twins born at a later date,” but does eventually admit that “The environment does interact with genetics to impact many traits such as personality and behavior.” Sooam does not go as far as to mention personality, instead focusing on identical genetics. ViaGen’s homepage offers an animated clip, with a comforting voice-over explaining how “the Andersons” considered their dog Buddy “an irreplaceable part of the family”. The minute-and-a-half-long video then briefly describes the cloning process in the most nonchalant way possible. The video finishes with the claim that the Andersons’ new dog will “most likely be smart, playful, and look like Buddy too”.
Only the more scientifically-focused Sooam readily mentions the success rate as “nearly 30%”; that information is not mentioned at all on the main ViaGen pages.
This relatively low success rate also leads into another big problem with dog-specific cloning: the process requires a lot of female dogs, both for eggs and for surrogacy. As John Woestendiek, who wrote a book on dog cloning in 2010, explains: “In addition to the tissue sample of the original dog, cloners will need to harvest egg cells from dogs in heat – maybe a dozen or so. And, after zapping the merged cells with electricity so they start dividing, they’ll need surrogate mother dogs, to carry the puppies to birth. That’s a whole lot of surgeries, on a whole lot of dogs.”
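The scale of the procedure can be made concrete with some rough arithmetic. Taking Sooam’s stated success rate of nearly 30% as a per-attempt probability, and Woestendiek’s “maybe a dozen or so” egg-donor dogs per attempt, a back-of-the-envelope sketch might look like the following (the number of surrogates per attempt is my own illustrative assumption, not a figure from either company):

```python
# Back-of-the-envelope estimate of the dogs involved in producing one
# cloned puppy. The ~30% success rate and ~12 egg donors per attempt
# come from the sources quoted above; surrogates_per_attempt is an
# illustrative assumption.

success_rate = 0.30          # Sooam's stated success rate, read as per-attempt
egg_donors_per_attempt = 12  # "maybe a dozen or so" dogs in heat
surrogates_per_attempt = 3   # assumed, for illustration only

# Expected number of attempts before the first live birth
# (geometric distribution: mean = 1 / p)
expected_attempts = 1 / success_rate

expected_egg_donors = expected_attempts * egg_donors_per_attempt
expected_surrogates = expected_attempts * surrogates_per_attempt

print(f"Expected attempts:   {expected_attempts:.1f}")
print(f"Expected egg donors: {expected_egg_donors:.0f}")
print(f"Expected surrogates: {expected_surrogates:.0f}")
```

Even under these generous simplifications, a single clone implies dozens of surgical interventions, which is precisely Woestendiek’s point.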
The other glaring issue is price. ViaGen offers dog cloning for $50,000, while Sooam’s 2015 price was $100,000. With a huge number of dogs (not to mention humans) already living without a home, this sort of investment seems flippant, especially considering the cloned dog’s personality might not even resemble the original’s. While many owners’ love knows no bounds, that is certainly a high price to pay for an ultimately unrealistic desire to bring back the dead. Some have accused the cloning companies of preying on wealthy, grieving dog owners with leading language. Both the wealthy pet owners who opt for cloning and the companies charging for an ultimately harmful facade have faced accusations of selfishness, as well as anti-cloning campaigns from concerned animal welfare groups.
The Center for Urban Pedagogy (CUP) is a Brooklyn-based nonprofit that uses design thinking, art practice, and engaged pedagogy to educate the public about policy and planning in our cities. CUP regularly collaborates with experts and professionals to develop programs that engage children of various ages in investigating public problems relevant to the students. Last summer, high-school-aged students from the Red Hook Community Justice Center worked with CUP and teaching artist Nurpur Mathur to learn about why school segregation persists in New York City. The students interviewed experts in education, including researchers, professors, policy makers, politicians, reporters, and activists, to understand what factors contribute to continued school segregation and why it is so difficult to overcome, and offered possible solutions and ways to change the distribution of students and resources among schools.
The outcome of the students’ research and interviews is a twenty-page book called The Public School Avengers. Designed to look like a composition book, the book’s lined pages are filled with sketches of the interview subjects and doodles of typical classroom scenes. The book begins with an introduction explaining the project and posing research questions about public school segregation in a font that resembles handwriting. The importance of quality education is emphasized with a quote from Columbia Professor Douglas Ready that presents education as a source of opportunities to help people escape poverty. In fact, throughout the book, the interview subjects are quoted extensively, with text bubbles appearing next to doodles of the subjects and providing the bulk of the data about the troubles of school segregation. In addition to these direct quotes, the students have organized the book into six sections, with brief explanatory text and resources for additional information: What is the process to get into a public school? What influences what school a student attend [sic]? What makes each school in NYC different? How did we get this way? How could it be different? How can you create change?
While the book makes extensive use of direct quotations from experts, demonstrating the extensive research conducted by the students and the care given to identifying knowledgeable subjects and designing the interviews, the direct insertion of quotes without further contextualization at times disrupts the flow of the narrative. These interviews are an incredible source of data, but without further explanation tying the information back into the main argument about the problems of school segregation, the throughline can get lost. Though very knowledgeable, the respondents also come from largely similar political positions, advocating greater funding for public schools and more integration of communities in general. Though surveying proponents of the city’s various charter and private schools might be impossible, it seems important to consider these alternatives as a solution to the problem of school segregation, if for no other reason than to reiterate the importance of free education in increasing equality through educational opportunity. Finally, it would have been interesting to hear from some of the students themselves about what they think is valuable about public schools and how they would like to change their schools, rather than focusing on solutions from adults who may have ulterior motives in what they propose.
Nevertheless, I find the composition-book form a very successful and compelling format for presenting the work. The students’ work in conducting interviews and background research is reflected in the informal, scholastic design. While it seems the doodles of interviewees may have been supplied by Mathur, there are smaller pictures of classroom scenes and school villains and superheroes throughout (though unfortunately these characters aren’t explained or tied into the narrative, they do embody the subject matter under discussion on the page). The use of colorful quote boxes also clearly separates the data (interviews) from the background framing material and additional resources. The final product resembles the notebook of a creative student who is sometimes lacking in focus (as we all are): they grasp the outline and take notes on interesting ideas or concepts they hear in the classroom, but they also have other interests and skills that might not be cultivated in a traditional lecture-based classroom. The result is a visually engaging, informative, and accessible book that can be shared in other schools or with members of the public to initiate conversations around public school segregation and spread information with the intention of enhancing political engagement.
The rationale undergirding this project is personal. Upon leaving the nest, I negligently left mixtapes given to me by friends and admirers in my mother’s care. Needless to say, years of sonic memories are irrevocably buried underground.
Fusing performance art practice and archival research methodology, this project is interested in presenting a way to “mine” sensation and memory. As such, precedents may be traced to notions formed by Fluxus artists, as a precursor to data for data’s sake, as well as to algorithms deployed for the purpose of personalization (the specific example here being the Music Genome Project paradigm used by Pandora). The question posed by technocriticism is broadly concerned with what information might be lost if algorithms are left to do the heavy lifting. Is there something to be gained by regressing to a point in time when a mixtape’s construction was contingent on a radio DJ? Using digital technologies to subvert the tech industry’s overarching intent to monetize (operationalize, formalize, etc.) data, the expression “art for art’s sake” can be applied to Miranda July’s app, Somebody, as well as to one of her earlier projects, Joanie 4 Jackie. For the purpose of this analysis, I will be looking at Joanie 4 Jackie as a precedent project, though there are several works that specifically index mixtape culture and ideology (e.g. the “Attention K-Mart Shoppers” archive by Mark Davis and the WebCassette app by Klevgrand come to mind).
Originally ideated as a “free distribution system,” Joanie 4 Jackie operated as a video chain letter for female filmmakers to share their work in the mid-1990s (Hoffman, 2009, p. 23). July passed the project along to Bard College in 2003 and donated an archive to the Getty Research Institute in 2017. While the project continues to operate, it also serves as a critical collection of early feminine (and feminist) experimental film as well as a monument to the videotape format. July is known for her interest in human connectivity and collaboration, “characteriz[ing] her work as ‘always [having] to do with other people’” (Hoffman, 2009, p. 24). Her interest in compiling, and subsequently sharing, films by women was primarily to develop correspondence between participants, inviting feedback and the opportunity to collaborate on future projects (Hoffman, 2009, p. 24).
This communication- and relationship-centered epistemology is embedded in the methods implemented to extract and bequeath knowledge. Much like a mixtape in traditional cassette form, Joanie 4 Jackie is subject to a finite duration of content, resulting in a temporality in information exchange. Limited to ten filmmakers per tape, the project aims at retrieving purposeful, meaningful data. Interestingly, all submitted work was accepted, an admirable feature given today’s ultra-curated and oftentimes non-inclusive creative programming and exhibition processes. Part of the underground feminist punk movement in Portland, July set Joanie 4 Jackie in motion with a simple printed message to interested participants: “A challenge and a promise: Lady, you send me your movie and I’ll send you the latest [Joanie 4 Jackie] Chainletter Tape”. A textual accompaniment from each filmmaker made its way to the next participant, emphasizing July’s hope for an intimate collective intelligence ‒ a far cry from the algorithmic approach taken up by the likes of Pandora, Spotify, and other streaming platforms.
My hesitance to propose a weakness in Joanie 4 Jackie is perhaps indicative of my inherent bias toward “old” media and the belief that it is weak only insofar as it was limited by the technology accessible at the time of its fruition. There’s a kind of physicality in the crafting of a mixtape (similar to analog filmmaking, which requires actual splicing and cementing of material), whereas “algorithmically designed playlists” preclude this haptic sensation. The playlist as a digital interface, according to writer Liz Pelly, is anesthetizing. I wonder how July would reimagine this project in the digital age. The multimedia artist has made her opinion on technologically mediated communication very clear: at the very least, pixels should be accompanied by performance or some variation of human intervention. As Alison Hoffman suggests, “July’s work activates the persistence of feelings and hand-touch sensibilities both to model and to build coalitions that locate agency in a shared openness and (bodily) vulnerability” (2009, p. 22). July’s insistence on multiplicity in sense engagement stands diametrically opposed to streaming services like Spotify, whose interests include venture capitalists and corporate sponsorship ‒ not the people who create the actual content.
Because July’s works are more concerned with sense engagement than with commenting on the prescribed teleology embedded in most information technology, they do not neatly fit into the same categories as a typical test-kit project (perhaps it’s more appropriate to identify this work, and my proposed project, as a performative method). While I certainly have grievances with streaming services and their disregard for creative sustainability, I’m still working out the most appropriate means to share the “ecological anthologies” resulting from cumulative, collaborative memory mining. While my sensibilities would certainly lend themselves to cultural probe methods, which would definitely feed my nostalgia to send participants home with a cassette deck, I’m more interested in the idea of the mixtape than in its materiality. At this point, I imagine the extraction of knowledge/intelligence would take the shape of a survey, where participants respond to a prompt through written communication. The intention here is to move away from digital interfacing (even if just temporarily), focusing more on sentient environments (within interior and exterior spaces), especially in terms of the collaborative effort to express a specific mood (affect, memory) and the “[breaching] of psychological space” while building on July’s idea of shared openness. Not surprisingly, two of my very talented, musically inclined friends prioritize the cassette format when releasing their music, but also make their work available on most streaming platforms. In any case, I’m interested in holistically extracting information from willing participants, and I look forward to learning of new artists ‒ perhaps people will share the work of new, underrepresented talent. I intend to do the same, but I might also add a disco song.
Where less performative human interaction lends itself to data for the sake of commercialization, more humanistic and less goal-oriented projects lend themselves to data for the sake of memory formation and retention, social interaction, and curiosity. Returning to Sophie Calle’s work, we can also qualitatively determine ‒ and learn from ‒ the “output” of intelligence across disciplines.
Hoffman, A. (2009). “The Persistence of (Political) Feelings and Hand-Touch Sensibilities: Miranda July’s Feminist Multimedia-Making.” In Columpar, C. & Mayer, S. (Eds.), There She Goes: Feminist Filmmaking and Beyond. Detroit: Wayne State University Press.
Through the New Inquiry-supported White Collar Crime Risk Zones (WCCRZ), contemporary artists Sam Lavigne and Francis Tseng, along with data scientist Brian Clifton, illustrate a paradox that lies at the intersection of data and law enforcement. The project presents the satirical flipside of predictive policing programs like PredPol, which tend to heavily monitor street crime in low-income neighborhoods while turning a blind eye to “high level financial crime” committed by the likes of corporations and banks. Police departments around the United States rely on such programs to optimize operations; the programs analyze and learn from historical data to predict where crime is likely to occur and send cops there to prevent it. Some even go as far as to predict who is likely to be a victim. But the data fed into these predictive machines reflects how law enforcement has operated on racist and classist assumptions in the past, and thus perpetuates the same uneven policing in an egregious feedback loop. Disguised as factual, efficient, and unbiased, predictive policing builds human prejudice right into itself.
Thus, White Collar Crime Risk Zones proposes to pick up the slack where other predictive policing products drop it. There are three components to WCCRZ: a web application, an iOS application, and a technical white paper that explains the methodology of the White Collar Crime Early Warning System (WCCEWS), the machine that powers the applications. Taking on a satirically factual tone, the white paper presents WCCEWS as an opportunity to expand—not correct—predictive policing. Referring to academic research on predictive policing and to public surveys, WCCRZ identifies the gaps in the field and proposes to fill them with a model of admitted similarity to HunchLab, another predictive policing program. WCCRZ does not explicitly say why white collar crime—instead of, say, cybercrime—should be the next focus for predictive policing; it appears to be presented simply as an untapped market. On one hand, this presentation takes away from WCCRZ’s obvious point that predictive policing is unfair and unjust; on the other, it seems to offer a convoluted endorsement by asking: why not be unfair and unjust to everyone, then?
The interfaces of the two versions of WCCRZ are quite similar. Both the web and mobile apps overlay Google Maps with the predictive data that the WCCEWS algorithm churns out. The maps use color to indicate crime risk in different zones, from yellow to orange to bright red, corresponding to the severity of the crime. The places where the risk of white collar crime is predicted to be particularly high aren’t surprising. The streets of Midtown Manhattan, for example, are hardly visible under all of the bright red risk. In contrast, Governors Island has one measly yellow square, indicating an 80% chance of crime involving up to $10,000. Both applications also employ a tongue-in-cheek beta facial analysis feature, the averaged product of “the pictures of 7000 corporate executives whose LinkedIn profiles suggest they work for financial organizations.”
While everything else about WCCRZ makes sense to me and satisfies my need for satire, the one thing that slightly dilutes or confuses an otherwise fantastic project is its mobile component. For one, it is available for free through the App Store. Furthermore, on mobile, unlike on the web, the user can choose to get notifications whenever they are in a risk zone, by setting the percentage of risk at which they want to be notified. By that standard, WCCRZ is not very different from my mobile banking app, which lets me know when I’m near a branch. It is not obvious to me, though the white paper calls it a tool for “citizen policing and awareness,” how a mobile app available to the public supports the point against predictive policing—isn’t the whole critique that these technologies are mainly available to government agencies who can afford them? Perhaps, in the name of truly committed satire, it means whatever you want it to mean, but to me it feels like a bit of an afterthought.
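The interface mechanics described above are simple enough to sketch. The following toy version shows how a risk percentage might map to the yellow-to-red legend and how a user-set notification threshold would work; the bucket boundaries, function names, and logic are my own illustrative assumptions, not WCCRZ’s actual implementation:

```python
# Illustrative sketch of a risk-zone legend and a user-set notification
# threshold, loosely modeled on the WCCRZ interface described above.
# All thresholds and names are assumptions for illustration.

def risk_color(risk_pct: float) -> str:
    """Map a predicted crime-risk percentage to a legend color."""
    if risk_pct >= 90:
        return "bright red"
    elif risk_pct >= 85:
        return "orange"
    return "yellow"

def should_notify(risk_pct: float, user_threshold_pct: float) -> bool:
    """Notify the user when their current zone meets their chosen threshold."""
    return risk_pct >= user_threshold_pct

# Governors Island's "one measly yellow square" at 80% risk would only
# ping a user whose threshold is set at or below 80.
print(risk_color(80))        # under these assumed buckets: yellow
print(should_notify(80, 75))
print(should_notify(80, 90))
```

Seen this way, the mobile app really does reduce to geofenced alerting, which is why it reads more like a banking app’s branch finder than a piece of critique.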
Relevant mobile app or not, WCCRZ visualizes a salient point not only about the way our cities are policed, but also how deeply our lives can be affected by the importance and trust placed on data. It reminds us that data and algorithms are not exempt from human prejudice—they are carriers of it.
“Sufficiently advanced simulation is indistinguishable from the real thing”, to twist Clarke’s aphorism. Simulations can take place at levels anywhere from modeling markets, to predicting sea level rise, to the staging of wargames. It is at the level of the wargame that simulation truly becomes artful in the pursuit of the temporal “God-eye”, the unified site of utter anticipation.
But the notion of “utter anticipation” is fraught in the first instance, haunted by a single question: can we actually think like the enemy? Manuel De Landa sums this problem up nicely in War in the Age of Intelligent Machines: “In most cases Red [the enemy] becomes simply a mirror image of Blue [the allied group]”.
But what happens if Blue can think Red? Contrary to what may be commonly assumed—that losses would promote in Blue a greater understanding—the simulated loss opens up onto an existential nightmare, a confrontation with Blue’s own fragility. The problem, then, is that the wargame will of course always be weighted in favor of Blue.
Part of this bias is institutional, but there is also the fundamental problem of information: the true nature of Red’s tactics and materiel will forever be draped in a “ludigital” fog of war, no matter how complete Blue’s intel may be. The wargame, constructed with faulty information and to provide a satisfactory outcome, is revealed to not be a strategy tool at all, but rather, a machine to produce in Blue assurance in its own supremacy.
When this supremacy is violated, the effects are internally destabilizing, forcing Blue to come to terms with the specter of its own death, touching down on the plane of abstract horror. De Landa relates for us an anecdote: “…in the early 1960s…Richard Bissell from the CIA, father of the U-2 spy plane and co-engineer of the Bay of Pigs invasion, played Red in a counterinsurgency war game and was able to exploit all the vulnerable points in the American position.” This sent shivers down the US’s spine: Bissell’s win was enough to get the files of the game’s proceedings classified, never to be released.
In roughly the same mid-century milieu, the ‘Hot 60s’ forced the hand of the war makers to break out from abstraction, and the wargame graduated into physical space and human players as a response to civil unrest in NATO countries. With the ‘peacetime’ arrival of full-size “war cities” such as Hammelburg, (West) Germany and, later, San Clemente Island off the coast of California, the wargame began to draw ever nearer to realism. These Potemkin complexes were (and indeed, are) created entirely for training in the minutiae of urban operations and the neutralization of enemy combatants, appearing as a heterotopic everywhere crammed into nowhere, a consolidation of the whole world in a top-secret blacksite.
But the spatial revolution of the wargame still was not complete. As detente collapsed, and with an ever-increasing fetish for realism and complexity, the war simulation exploded out of the city and went runaway to continental scales, with millions of machine parts. Perhaps the best kept secret of this variety was US/NATO operation Able Archer 83, a simulation that achieved such a high degree of realism that it threatened to erupt into actual nuclear conflagration.
Able Archer 83 took place from 7–11 November 1983, the culmination of nearly a year of “naval muscle-flexing” and PSYOPs, such as sporadic “air and naval probes near Soviet borders”, undertaken specifically to “rattle the Soviets”. These actions led the Warsaw Pact to create Operation RYaN, meant to “prevent the possible sudden outbreak of war by the enemy”. In this already-heightened climate, US/NATO held their annual Able Archer exercise, designed to “practice new nuclear weapons release procedures”, specifically the “[transition] from conventional to nuclear operations”. From the official SHAPE description:
“The exercise scenario began with Orange (the hypothetical opponent/[Red]) opening hostilities in all regions of ACE [Allied Command Europe] on 4 November (three days before the start of the exercise) and Blue (NATO) declaring a general alert. Orange initiated the use of chemical weapons on 6 November…All of these events had taken place prior to the start of the exercise and were thus simply part of the written scenario… As a result of Orange advances, its persistent use of chemical weapons, and its clear intentions to rapidly commit second echelon forces, SACEUR [Supreme Command Allied Powers Europe] requested political guidance on the use of nuclear weapons early on Day 1 of the exercise (7 November 1983)…the weapons were fired/delivered on the morning of 9 November.”
Able Archer 83 was unique with respect to past simulations in what one commentator referred to as its “special wrinkles”. These included a new battle language and encryption, which made NATO’s maneuvers completely opaque to the USSR, forcing it to rely on observation and extrapolation as units and materiel were moved across the ACE theater and routines were executed within SACEUR/SHAPE. These terrifying machinations forced the USSR to ask a new epistemological question: if armies and nuclear weapons are being moved into position by the enemy, does it matter what the stated reason is? At what point does war, occurring in a liminal, ludic space, breach the gap into reality altogether? Is there functionally any difference between war and its simulation? Or, even more to the point, is simulation itself an escalation of hostilities?
Jean Baudrillard’s famous definition from Simulacra and Simulation states that “the simulacrum is never that which conceals the truth—it is the truth which conceals that there is none. The simulacrum is true.” In Able Archer 83 the “apotheosis of simulation” is itself simulated, a nesting torus of that-which-never-quite-comes-true. The ragged era of the early 80s’ “Cold War II” took the promise of atomic apocalypse and plugged it into the motor of banal politics (and indeed, routine wargames), in which “the unknown is precisely that variable of simulation which makes of the atomic arsenal itself a hyperreal form, a simulacrum that dominates everything”. Able Archer 83, in which SHAPE took part in producing a simulation of nuclear hyperreality, contained within it the possibility of finally crashing Baudrillard’s hyperreality of infinite deterrence (warding off Europe After the Rain) and inaugurating the climax, the real event of nuclear war.
And of course, what comes after the war is also itself simulated.
Manuel De Landa, War in the Age of Intelligent Machines
In 2015, at the 56th Venice Biennale, artist Simon Denny presented Secret Power as part of his ongoing research into the visual culture of the National Security Agency (NSA). Only two years earlier, former US intelligence contractor Edward Snowden had leaked sensitive materials from the NSA. While Denny’s interest in these documents was initially sparked by revelations that New Zealand (his birthplace) was part of the Five Eyes global surveillance network, he is also interested in speculating on a “visual imaging department” within the NSA, looking into the ways the NSA communicated internally about its own operations and politics.
Perhaps it was not a total coincidence that we should encounter Denny’s work in 2015. 2015 was, after all, the “International Year of Light and Light-based Technologies” (IYL2015), as designated by the United Nations. As the official website for IYL2015 puts it: “Light plays a vital role in our daily lives and is an imperative cross-cutting discipline of science in the 21st century. It has revolutionised medicine, opened up international communication via the Internet, and continues to be central in linking cultural, economic and political aspects of the global society.” The documents released by Snowden were distributed through the Internet. We can also imagine that the gathering and circulation of the intelligence within those documents were themselves made possible by the Internet: rapid pulses of light that enable sensitive information to travel at the speed of light.
On display within the dim interiors of the Marciana Library, in Venice, was Denny’s installation, made up of illuminated server racks, hollowed out to serve as vitrines for the artist’s appropriation of the leaked documents. While the setting of the library was clearly a deliberate choice on the part of the artist to point to the library as a site of knowledge and power, we need also to consider the lighting of the installation setup as part of the artist’s provocations. Against the warm orange glow that softly showed off the contours and surfaces of the paintings and sculptures in the Marciana Library, the cool blue lights emanating from Denny’s server-rack vitrines revealed a new apparatus of power and knowledge, a global network of intelligence and surveillance enabled by an infrastructure of lights and optics.
Light gives life, literally. This is most evident in the biological and chemical processes that sustain our ecosystems. Its importance in our perceptions of the world is also represented in the ways we describe knowledge as revelations, illuminations, and clarifications. If we truly live in a photosensitive culture, then perhaps Denny’s work offers an important complication. That is, there is a biopolitics to this light. This infrastructure of light and optics enables the surveillance of bodies identified as potential threats to those looking from the other side of the dashboards and monitors. And it is this biopolitics of light that I am particularly interested in exploring with my own project on Singapore’s infrastructural lights (i.e. the network of street lamps that are becoming smart sensors for the city, the lights that are perpetually on, 24/7, to secure the critical infrastructures of the city).
Bo Wang, Spectrum, or the Singapore Dan Flavin (2016)
[On a similar point, but as a brief digression, it is inspiring to see how the New York-based artist Bo Wang sampled light sources from various locations across Singapore in order to critique this biopolitics of light. Wang’s Spectrum, or the Singapore Dan Flavin (2016) presented fluorescent tubes sampled from consumer and public spaces (e.g. a dining and cocktail place, a hotel lobby) to industrial and working spaces (e.g. offices in the financial hub, dormitories for migrant workers), arranging the samples from warm to cool, orange to blue, as part of his installation. What is revealed is how migrant workers in Singapore, those precarious bodies who build and maintain the critical infrastructures of the city, are subjected to bright blue lights even in their spaces of rest.]
Returning to Denny’s installation at the Marciana Library: on closer examination, there is nonetheless something quite unsettling about the way the artist chose to materialize his appropriations of the NSA documents. Stuffed into his server-rack vitrines were Denny’s attempts at making a spectacle out of what we would consider fairly innocuous graphics and illustrations (e.g. bar graphs, clip-art). There is an ambivalence here, one that vacillates between a mocking exaggeration of the NSA’s fairly crude aesthetics and a knowing celebration of the efficacy of those aesthetics (regardless of, and in a way also precisely because of, what the trained eyes of the art world may think of them). Compared to any international artist whose work may have provided the art world with a steady stream of beauty, ecstasy, and entertainment, these graphics have actually had a hand in determining the fates of many people around the world.
To that end, I can’t help but wonder if there is something fundamentally disingenuous at the heart of Denny’s work. Though titled Secret Power, Denny’s work seems also to function as a veiled mockery of the art world’s inefficacy and impotence, even while the artist goes on to add another blockbuster event to his growing / glowing C.V. Perhaps Denny truly believes that this is the secret privilege of an artist. But this overriding sense of entitlement is also evident in Denny’s decision to co-opt the work of freelance graphic designer David Darchicourt, who directed and produced visuals for the NSA between 1996 and 2012, without seeking Darchicourt’s approval. All in the name of doing a “performance … which makes us think about how it feels to have work that we’ve done or things we’ve said on the Internet used in ways we’re not sure of and aware of.”
Yet, having said all this, I am for the moment trying to resist such a cynical indictment; that might, after all, be too easy a position to slip into. So while I may not fully agree with his “smarts,” Secret Power is nonetheless an important reminder that we cannot simply read the leaked NSA documents for their informational “content”; we have to at least also consider their visual grammar and their mediated formats. If nothing else, Denny’s work did bring me to think further about the biopolitics of light.
For the first time in history, more than half of the world’s population lives in urban areas. In just a few more decades, 70 percent of the world will live in cities. Enabling those cities to deliver services effectively, efficiently, and sustainably—while keeping their citizens safe, healthy, prosperous, and well-informed—will be among the most important undertakings of this century. In parallel to this growth, the volume, variety, and production rate of data—much of which is being collected by a variety of sensors—are unprecedented. If properly acquired, integrated, and analyzed, “big data” can take us beyond today’s imperfect and often anecdotal understanding of cities to enable better operations, better planning, and better policy.
Most data currently generated by sensors are used only for the purpose for which they were collected—for example, to detect and control anomalies. However, given the breadth of data generated by sensors, there is considerable value in exploring best practices for creating and exploiting sensor network data for future re-use and immediate integration with other data. With the support of The Kavli Foundation and industry sponsors, NYU’s Center for Urban Science + Progress is proud to host focused and engaged discussions on emerging sensing topics with leading voices from across the US.
Sensing the City will engage the local community in exploring overarching challenges and opportunities as urban sensing capabilities and ambitions continue to expand, inviting participants across New York to join sensing luminaries from across the USA for a blend of talks and panel discussions on three core themes.
15th Architectural Humanities Research Association International Conference
15th – 17th November 2018
Department of the Built Environment, TU Eindhoven
Increasingly, the world around us is becoming ‘smart’: from smart meters to smart production, from smart surfaces to smart grids, from smart phones to smart citizens. ‘Smart’ has become the catch-all term for a charged technological shift propelled by the promise of safer, more convenient, and more efficient forms of living. Combined, all these so-called ‘smart’ devices amount to a ubiquity of computing that heralds a new technological paradigm and a fundamental shift in the way buildings and cities are both experienced and understood. Through a variety of sensors, cities and buildings are now defined not by the people who inhabit them, nor by their functions, their identity, or their history, but simply as ever larger sets of data. Such sets are then processed to immediately adjust and alter (physical) conditions in real time. Although such large-scale collection and use of (big) data has an inevitable effect on the way people live and work, there has yet to emerge a clear answer to how architecture and cities should respond to and assimilate such a brave new world.
Carried by both corporate and governmental initiatives, the ‘smart’ paradigm has entered architecture and cities as a powerful force. Even as it indelibly reshapes our patterns of inhabitation, the particular ways in which the ‘smart’ paradigm affects architectural and urban debates, design practices, and our forms of living remain woefully under-analysed. This open question gains further urgency and demands debate, as with each development the meaning of ‘smart’ becomes more diluted.
We seek to stimulate a broad understanding of ‘smart’ technologies – one that conceives them not merely as “efficiency oriented practices, but [as practices that] include their contexts as these are embodied in design and social insertion” (Andrew Feenberg, 1999). Such a broad understanding includes questions of responsibility, accountability, ethics, participation, knowledge (necessary to both produce and participate), and many more. Effectively, beyond comfort, safety, and efficiency: how can ‘smart’ design and technologies help address the current and future challenges of architecture and urbanism?