February 21: What’s “smart”? And how do we know it when we see it?

Howard Knox’s Feature Profile Test, via CT Public Broadcasting

4-4:30pm: Making Center Tour with Abby Mechanic: meet @ Tool Checkout, west end of 2 W 13th St, 2nd Floor

Back in the classroom, we’ll talk about “smartness” and develop metadata schemes for next week’s cataloguing exercise: what criteria are most salient in unifying and distinguishing between various forms of “intelligence”? We’ll also examine test kits created to measure and cultivate intelligence.

Supplemental Resources: 

  • William H. Calvin, “The Emergence of Intelligence,” Scientific American 271:4 (October 1994): 100-7.
  • “Controversy of Intelligence,” Crash Course Psychology 23 <video>.
  • “Emergence,” RadioLab 1:3 (2005).
  • Mark Dery, “Cortex Envy,” Cabinet 34 (Summer 2009).
  • “An Ethereal Future,” Reddit (2014) [on blockchain futures].
  • Mariusz Flasinski, “Theories of Intelligence in Philosophy and Psychology” in Introduction to Artificial Intelligence (Springer, 2011): 213-23.
  • Natasha Frost, “What Is It Like to Be a Bee?” Atlas Obscura (December 6, 2017).
  • Michelle G., “Picture Yourself as a Stereotypical Male,” MIT Admissions (September 5, 2015).
  • Gary Groth-Marnat, Handbook of Psychological Assessment, 5th ed. (New York: John Wiley & Sons, 2009).
  • Orit Halpern, Robert Mitchell, and Bernard Dionysius Geoghegan, “The Smartness Mandate: Notes Toward a Critique,” Grey Room 58 (Summer 2017): 106-29.
  • Donna Haraway, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective,” Feminist Studies 14:3 (Autumn 1988): 575-99.*
  • Institute for the Future, “Understand the Blockchain in Two Minutes” <video>.
  • Olivia Judson, “What the Octopus Knows,” The Atlantic (January/February 2017).
  • Stefan Junestrand, “How Can Blockchain Help Smart Cities?” Blockchain Revolution (August 9, 2017).
  • Raffi Khatchadourian, “The Doomsday Invention,” New Yorker (November 23, 2015).
  • Eduardo Kohn, How Forests Think: Toward an Anthropology Beyond the Human (Berkeley: University of California Press, 2013).
  • Shane Legg and Marcus Hutter, “Universal Intelligence: A Definition of Machine Intelligence,” Minds & Machines 17:4 (2007): 391-444.
  • N. J. Mackintosh, “History of Theories and Measurement of Intelligence,” in Robert J. Sternberg and Scott Barry Kaufman, eds., The Cambridge Handbook of Intelligence (Cambridge University Press, 2011): 1-19.*
  • Colin McFarlane and Ola Söderström, “On Alternative Smart Cities,” City (2017), doi: 10.1080/13604813.2017.132716.*
  • Pierluigi Serraino, The Creative Architect: Inside the Great Midcentury Personality Study (New York: Monacelli Press, 2016).*
  • Murray Shanahan, “Consciousness Exotica,” Aeon (October 19, 2016).
  • Tom Stonier, Beyond Information: The Natural History of Intelligence (New York: Springer-Verlag, 1992).
  • Don Tapscott, “How Blockchains Could Change the World,” McKinsey & Company High Tech (May 2016).
  • Anna Tsing, The Mushroom at the End of the World: On the Possibility of Life in Capitalist Ruins (Princeton: Princeton University Press, 2015).
  • Susana Urbina, “Tests of Intelligence” in Robert J. Sternberg and Scott Barry Kaufman, eds., The Cambridge Handbook of Intelligence (Cambridge University Press, 2011): 20-38.*

More Supplemental Resources re: Intelligence Tests + Educational Kits:

Images: Howard Knox’s Ellis Island Jigsaw Puzzle Intelligence Test, via Smithsonian Magazine | Wechsler Adult Intelligence Scales WAIS-R IQ Complete Test 8-991-713

8 Replies

  • The ideology of “smart” infrastructure, which presents itself as steadfastly committed to improving quality of life by extending access and optimizing mobility, ignores the lesson of one of our most efficient systems of movement. “Error as architecture,” an observation made quasi-affectionately about ant colonies, can be applied to the “swerve” tendencies that ultimately construct cities. Historically, it is accidents, not intelligence, that have created neighborhoods. But while top-down efforts to build cities (ironically) from the ground up are certainly cause for concern, so too are notions of completely decentralized systems operating independently, if we re-contextualize this possibility within organic architectures that have proven remarkable in their collective endeavors (though completely unsophisticated as individual subjects).

    I’ll admit that it’s difficult to challenge paradigms of city architecture when most of the language used to articulate this field is borrowed from information-processing jargon. In that regard, I echo Shannon’s call for “new models for thinking about cities that do not compute.” I think Deborah Gordon scratches at this surface when she asks, “Where is thought?” Probably not the most streamlined transition, but I think there’s something to be said of Google Brain’s hope that artificial neural networks may one day mimic the brain’s flexibility, if they aren’t able to already. Of course, this intimates acknowledgement of the brain’s capacity to grow, fail, and reform.

    • Thanks, Allie! Lots of great provocations here: about the revolutionary potential and dangers of distributed agency. What are the risks of “emergence” and planning-by-accident? And I also appreciate your ontological/epistemological/existential question: where does thought reside? Is it in the brain’s biochemistry? Is it in an interaction between the brain and the world? Is it something more metaphysical? …

  • Going through the readings for today, specifically those about the semantics and language-centric nature of intelligence, I am pointed to the term ‘Artificial Intelligence’ itself. In the Calvin reading, he describes the process and diversity of young humans learning based on feedback from reality and other humans: “Our abilities to plan gradually develop from childhood narratives and are a major foundation for ethical choices, as we imagine a course of action, imagine its effects on others and decide whether or not to do it.” Can we subject computers to a parallel human-development process, allowing for the diversity of choice and background (among other factors), and produce an adaptive and flexible AI that works for all segments of society? Thinking of our trip to Sidewalk Labs, I can only imagine companies will soon seek to shed the ‘artificial’ label of AI in favor of a term highlighting the ‘organic’ nature of the human intelligence integrated into it.

    Thinking more about language, I like the distinction Pinker makes between ‘language fluency’ and psychometric measures of intelligence. Surely computers are ‘smarter’ than humans when it comes to vocabulary and programmable intelligence, but just looking at the tremendous rise and influence of modern charlatans who use ‘big words’ to convey specious points (looking at you Jordan Peterson) gives me pause about human ability to differentiate intelligence from deception. I also think of the translation example in the Lewis-Kraus article – how ‘more transparent’ translations convey wider meaning, but lack the individual nuance and artfulness of Murakami’s translations (though this may just be the lit student in me). In the Mattern reading, the measurement of ‘effectiveness’ of a city also just leads me to think about the new kind of digital poverty that would be ushered in by such a metric.

    • Thanks, Tim! You’ve helpfully foregrounded a few key concepts that are threaded throughout this week’s readings: the capacity for language, the ability to plan, the ability to adapt, etc. You also raise some important questions about ethics — how do we build an AI that “works for all segments of society?” — and poetry: what dimensions of intellectual work and cultural production simply elude the “transparent translation”?

  • The readings for this week cover such vast and varied understandings of intelligence that I find them somewhat difficult to compare, though of course there are numerous overlapping points. I’ve heard the arguments that what makes human intelligence so superior to other animals’ is our capacity for language, and that this has largely to do with our ability to recognize patterns, but tracing these capacities back to our ability for refined movement was really interesting. It raises the question of how intelligence can be developed outside embodiment, which would seem a central concern for AI; it’s surprising, then, that the Lewis-Kraus piece doesn’t address this as an issue for Google Brain, which is working on the very ability (language) that has been associated with superior human intelligence. Given the publication dates, perhaps there were significant shifts in our understanding of the origins of intelligence in the 90s and 00s that minimized the importance of the body. At the same time, Wilk and Sutela’s piece on slime mold intelligence points toward an intelligence in movement that never builds toward pattern recognition or language. The emergent intelligence of slime molds, bees, and ants seems inextricable from the environment they inhabit. (For the record, it’s very strange to speak about emergence as a sociologist without referencing Durkheim! So there, I did it.)

    The ability of slime molds to navigate and adapt to their environment is fascinating, and provides a nice counter to Steven Pinker’s ridiculous assertion that culture and environmental factors have no impact on intelligence. What’s so shocking to me about Pinker’s championing of nature over nurture is that he continues to hold this position despite the state of genetic science today (see his recent piece in the Chronicle of Higher Ed where he throws all of STS and cultural relativism under the bus as essentially responsible for alternative facts and, by extension, Trump). Our DNA may create predispositions for particular characteristics, but often environmental factors determine whether or how genes express themselves. This distinction is central to genetic medicine, as counselors help patients make lifestyle choices based on predispositions in their DNA. Mumford (in Mattern) proposes an assemblage of infrastructure, actors, and discursive practices to understand the city in a way that I think is helpful to think about intelligence, especially if we think of cognition as distributed. I would be really interested to discuss information assemblages compared to information ecologies, and what kind of work or research each understanding permits.

    • Excellent, Zoe. Thank you! I appreciate your acknowledgment of the environmental situatedness and politics of intelligence; environment and culture play critical roles in shaping what we value and how we look for and measure it. I’m hoping you can provide a bit of background on Pinker in our conversation, today 🙂 And yes, I like this idea of exploring the relationship between information ecologies and assemblages.

  • This week’s readings have me feeling a little frustrated about intelligence. Some of the pieces, such as the Scientific American article from 1994, the recent Rhizome piece on slime, and the Places article, discuss and argue for the intelligence of living beings, whether human, animal, or single-celled organism. The others talk only about artificial or programmed intelligence.

    Two quotes from the readings hint at the contradiction of intention that is bugging me:

    From NYT article on Google Translate, quoting a Google employee:
    “It’s not about what a machine ‘knows’ or ‘understands’ [consciousness] but what it ‘does,’ and — more importantly — what it doesn’t do yet.”

    From the Rhizome article:
    “Many, including Noam Chomsky, have argued that no technological comparison [machine] will be accurate until we understand how consciousness works first…”

    Is consciousness not what we are striving for (and frightened by) in AI? Does the machine not do what it does based on the historical data it is conscious of? On one hand, it seems that we are striving to create AI that replicates (or replaces?) human consciousness. On the other hand, it seems we are striving for only enough artificial consciousness that humans just have a lighter load on our end.

    From all of the articles, the fundamental determinants of intelligence seem to be foresight (doing) and hindsight (knowing). Mattern urges that we need to consider that humans are experientially intelligent when thinking about cities, because city-making cannot happen without city-knowing. The slime piece seems to take a similar position: the slime charts the most efficient path according to its environment and its needs. In both cases, the foresight is gleaned from past experience.

    Why does there seem to be such a gap between what a machine can know and what it can do? Are we just not ready to have it do both at once?

    • Thanks, Samiha! Sorry for your frustration — but it seems that it was at least productive: the discrepancies between the “intelligence models” presented in these various readings led you to identify some important debates and fundamental questions. What, for instance, is the relationship between knowing and doing? How is consciousness related to intelligence? I doubt we’ll develop answers to these questions in class today — after all, philosophers and scientists have been asking them for millennia! — but I hope there’s at least something to be learned in considering how professionals and scholars in different fields tackle these timeless questions. I think their disagreements can be revelatory — if also frustrating 🙂
