[UPDATED: 04_20_1969]

Curtis Roth is an Assistant Professor at the Knowlton School of Architecture at the Ohio State University. His work consists of images, objects, and texts that examine the relationships between computation, subjectivity and distance as they pertain to questions of labor and authorship.

His CV can be found here.

⊿ ATM ⊿

Developing General.Events, an architecture office without content, intended to organize the activities of thousands of online workers in real time as they log onto and operate custom-designed fabrication devices, producing artifacts that imagine other ways of living together.





275 West Woodruff Avenue
Columbus, Ohio 43210

⋈ NEWS ⋈


Curtis awarded The Robert S. Livesey Teaching Award.


Curtis exhibits Instrument 09: The Rendering for Drawing for the Design Imaginary in association with the 2019 ACSA National Meeting.


Curtis is invited by Kristy Balliet to participate in the Graduate Thesis Lab Symposium at SCI-Arc.


Curtis publishes Software Epigenetics and the Architectures of Life in E-Flux Architecture.


Curtis publishes a review of http://We-Aggregate.org in the Journal of the Society of Architectural Historians Volume 77, Number 4.


Curtis contributes Ancient Computation to 3-Ways, a group exhibition at the A+D Museum. Curated by Anthony Morey.


Curtis contributes 100% Sunshine for Tempietto Exemplum at the Yale School of Architecture. Curated by Amanda Iglesias and Spencer Fried.


Curtis publishes Ten Outsourced Interiors in Inflection Journal Volume 5. Edited by William Ward and Lucia Amies.


Curtis participates in the Becoming Digital conference at the University of Michigan's Taubman College organized by Ellie Abrons, Adam Fure and McLain Clutter.


Curtis leads a workshop at the University of Michigan's Taubman College in association with Becoming Digital, a project curated by Ellie Abrons, Adam Fure and McLain Clutter.


Curtis invited by Leslie Lok to give a lecture for her seminar Drawing City Manifestos at Cornell AAP.


Real Time, a 19-minute video piece, goes live at the ZKM Karlsruhe as part of their group show, Open Codes: Living in Digital Worlds.


Curtis talks about cats, bots and dark labor with Clara Herrmann for Schloss-Post.


Instrument_10: The Exhibition opens at The Akademie Schloss Solitude in Stuttgart.


Curtis reviews Treacherous Transparencies for the Journal of Architectural Education.


The Story of the Post-Storage City, authored with Ian Caine, will appear in LUNCH 12: Tactics.


Rason Leaks will appear in PLAT 6.0: Absence. Edited by Melis Uğurlu and Rachel Grady.


Curtis discusses Some Dark Products with Phil Arnold.


The Spatial Myths of Computing will appear in Thresholds Journal 45: Myth. Edited by Zachary Angles.


Curtis writes about The Strange Promise of Bad Ideas for SchlossPost.


Curtis is invited by Ana Miljački to attend M.Arch thesis reviews at MIT SA+P.


Curtis invited by Kyle Miller to give a lecture at Syracuse University in Florence entitled Nine Instruments (in Progress).


Curtis leads the Urban Interventions workshop with Marianna Rentzou and Konstantinos Pantazis of Point Supreme for the architecture program at Syracuse University in Florence, organized by Kyle Miller.


Past Perfect is exhibited at the 2016 ACSA International Conference: Cross-Americas: Probing Disglobal Networks in Santiago de Chile.


Curtis begins a ten-month residency at The Akademie Schloss Solitude in Stuttgart to develop a manuscript for his forthcoming novel Heavenly States.


All That is Solid Melts into .ETC... to be exhibited at la Biennale di Venezia.


Curtis named the winner of Balmond Studio's Radical Rethink design competition.


Ecstasy Minus Agony will appear in PIDGIN 21: Flushed.


Anti-Fashion and the Internet Art of Instant Obsolescence to be published in PNYX.


Towards a Speculative Home-Economics will appear in LUNCH 11: Domestication.


Curtis named the winner of The Journal of Architectural Education's Discursive Images Competition.


Curtis invited by Matteo Ghidoni to exhibit All That is Solid Melts into .Etc at ARCHIVO | ITALIA for Design Week Mexico 2015.


Curtis invited by Sasa Zivkovic and Leslie Lok to attend mid-term reviews at Cornell AAP.


Curtis begins as an Assistant Professor at The Knowlton School of Architecture.


[TEXT] 2019_Published at E-Flux Architecture

An artist paints one of four lunar models comprising Project Lola, the physical precursor to General Electric's digital simulation. The orbiting camera track the artist is standing on would soon become the orbiting of a digitally simulated eye. 1965.

An End User Undertaking (EUU) is an agreement proffered by software manufacturers and signed by the representatives of software consumers that codifies a specific category of being known as an “end user.”1 End users are characterized by their likelihood to behave irrationally. This class of subject was invented by software developers in the early 1980s to differentiate their own knowing expertise from the chaotic ignorance of their clients. It’s a problem that’s not foreign to architects, e.g.: how do you design for an audience that couldn’t care less? Or, more delicately, like architecture, software might also be, “a work of art the reception of which is consummated by a collectivity in a state of distraction.”2

The end user, as a design problem, was first sketched a decade prior in Karl Popper’s “Of Clouds and Clocks,” a 1966 text that was excerpted for a special issue of Architectural Design in 1969.3 With the analogs of clocks and clouds, Popper described determinacy and indeterminacy in physical systems. His excerpt began at some indeterminate point in medias res by asking his audience to imagine a continuum stretched between the most unpredictable, disordered clouds on the left, and the most predictable, ordered clocks on the right. Toward the right, Popper arrayed the solar system, Cadillacs, and old dogs; while puppies, people, and swarms of gnats were lined up in states of increasing indeterminacy to the left. Popper used this continuum to suggest that in both the hard sciences, as well as those mushier ones like post-war architecture and urbanism, it was presumed that all clouds could be turned into clocks with additional knowledge. That is, a puppy is not inherently less deterministic than a Cadillac; it is simply more difficult to model. While intended as an interjection into ongoing debates between causal and quantum physicists, the inclusion of Popper’s continuum in Architectural Design suggested a particular configuration of the user equally useful in describing both the inhabitants of the post-war American city, and the operators of the software instruments through which the city was increasingly designed: a configuration consisting of users as inherently irrational clouds, interfacing with the clock-like precision of cities or software. It’s no coincidence that a few pages later in the very same issue of AD, this model shared between two distinct users would be collapsed by an experimental urban design software called INTU-VAL.

Peter Kamnitzer demonstrates the visualization subroutines of INTU-VAL in Cityscape, an experimental film. 1966-1968, Image Courtesy of UCLA Architecture and Urban Design Special Collections.

Intuition and Evaluation, or INTU-VAL for short, was a software platform developed by Peter Kamnitzer and UCLA’s Urban Laboratory Project in 1968.4 INTU-VAL was intended to discipline the cloud-like intuition of an urban designer with the clock-like evaluative capacities of a digital processor. Kamnitzer developed INTU-VAL through a speculative scenario that asked a user to route a highway through a dense cityscape; a pertinent scenario, coming at the end of an era of urban renewal. Kamnitzer and his collaborators undoubtedly understood the racial and economic violence of post-war planning as emerging, at least in part, from the unevaluated intuition of designers and policy makers. Using a light pen for input, INTU-VAL prompted its user to exercise this potentially problematic intuition atop one of six urban maps depicting the city’s topography, land-use, geology, population, conservation areas, and sites of visual interest. Once planned, INTU-VAL would subdivide the designer’s intuitively derived route into analytical sectors, cross-checking them with the remaining five maps to discover unseen conflicts caught between competing cartographic representations of the city. This process of cross-referencing aesthetic design intuition with the unseen, yet computationally modelled realities of the city sought to provoke a corrective feedback loop, wherein the software’s evaluative capabilities would recursively discipline the designer’s prejudice in order to arrive at the least egregious outcome. Upon digitally correcting the crisis of the post-war American city, user and software would celebrate their success together through a digitally animated drive down their simulated highway.5
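The evaluative loop described above can be caricatured in a few lines of Python; the sector coordinates, layer names, and conflict values below are invented for illustration and bear no relation to INTU-VAL's actual code:

```python
# Toy sketch of INTU-VAL-style cross-checking: an intuitively drawn route
# is subdivided into sectors, and each sector is checked against thematic
# map layers for conflicts the designer's intuition did not see.
# All layer names and conflict values are illustrative assumptions.

ROUTE = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]  # the designer's sectors

LAYERS = {
    "land_use":     {(2, 1): "residential"},  # route crosses housing
    "geology":      {(4, 2): "unstable"},     # route sits on poor soil
    "conservation": {},                       # no protected land hit
}

def evaluate(route, layers):
    """Cross-check each route sector against every map layer, returning
    the conflicts caught between competing representations of the city."""
    conflicts = []
    for sector in route:
        for layer_name, layer in layers.items():
            if sector in layer:
                conflicts.append((sector, layer_name, layer[sector]))
    return conflicts

# The corrective feedback loop: surface conflicts, let the designer
# redraw the route, and re-evaluate until the least egregious outcome.
for sector, layer, value in evaluate(ROUTE, LAYERS):
    print(f"conflict at {sector}: {layer} = {value}")
```

The point of the sketch is the feedback structure, not the data model: the software never designs the route, it only disciplines the designer's intuition by reflecting unseen conflicts back.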

While ancillary to the broader ambitions of their experiment, INTU-VAL’s graphically animated highway was an important technical accomplishment, as the first computational code to translate a dynamically generated environment into a screen-based spatial simulation.6 It marked the earliest civilian deployment of a subroutine developed by General Electric in 1964 for their LEM Spaceflight Visual Simulator. The subroutine’s original task was to generate a black and white representation of the moon’s surface in real-time on a cathode ray display within NASA’s Space City Lunar Landing Simulator.7 Like an urban planner second guessing their biased intuition, General Electric’s simulator was designed to help astronauts manage the vagaries of perception while merging at high speed with a clock-like solar system. But INTU-VAL and GE’s Lunar Simulator not only pictorialized the first digitally simulated spaces; they helped to inaugurate an ontology of the user as a cloud, capable of being disciplined through the clock-like feedback of environmental simulation. While this cloudy user was implicit in GE’s conceptualization of an untrained astronaut, for Kamnitzer, this disciplinary regime suggested an approaching epigenetic evolution of the human mind itself, arguing that digital disciplinary instruments such as INTU-VAL “will trigger the next creative leap in the human brain.”8

General Electric's digital visualizations employed in the training of an HL-10 pilot's intuition. 1968, Image Courtesy of NASA.

Kamnitzer was correct in anticipating INTU-VAL’s influence on the future of design software, but entirely wrong in imagining that his “creative leap” was anything but a profoundly conservative endeavor. In considering software as a corrective instrument, Kamnitzer perpetuated a model of the user that still lingers in our contemporary encounters with design software. For example, take the collective compulsion to orbit; that reptilian instinct of the architectural unconscious to mobilize a simulated eyeball’s orbit around a simulated object through the dragging of a mouse across a screen. We orbit absent-mindedly while waiting for our own aesthetic intuition to keep pace with our processing power; we orbit in order to make complex things visible so that we might discipline our judgment. But beneath all those orbiting eyeballs lies an ontological line connecting the contemporary architectural imagination to an Apollo astronaut struggling to steer an orbital lander as it merges with the moon. To orbit is not only to model an object, but by implication, to model ourselves, to fashion our minds into indeterminate aggregates made more clock-like through the precision of software and the feedback of visual simulation. Today we design ourselves within software platforms made for a world in which “all clouds are clocks—even the most cloudy of clouds.”9

Had Popper been given more space in the publication, as the earlier, full-length version of the piece shows, his text would have gone on to cast doubt over the entire cloud/clock binary with which it began, suggesting instead a different world for software, the city, and its users to eventually inhabit, arguing:

If determinism is true, then the whole world is a perfectly running clock, including all clouds, all organisms, all animals, all men. If, on the other hand, [Charles] Peirce’s or [Werner] Heisenberg’s or some other form of indeterminism is true, then sheer chance plays a major role in our physical world. But is chance really more satisfactory than determinism?10

Like Kamnitzer, Popper would reach for an evolutionary epoch to escape being stuck between his clock and a hard place. But rather than merely disciplining one binary pole into its opposite, Popper argued for a paradigm of “plastic control.” This notion acknowledged the presumption underlying Kamnitzer’s experiment, that cultural constructs such as technologies, theories, or mediums are epigenetic tools through which we fabricate ourselves. But unlike Kamnitzer, Popper argued that we are neither determined by, nor determining of these external evolutionary mechanisms, but rather are enmeshed in subtle exchanges of agency between ourselves and the world around us.11 Rather than a corrective instrument, Popper’s plastic control implies a model of the digital as a fraught arena in which subjects and technologies co-evolve in order to render particular permutations more useful than others. But while UCLA’s Urban Laboratory Project fashioned an ontology of the user around the graphic feedback of a hypothetical highway, a few hours north on I-5, a far more durable account of the user was being engineered.

In 1968 at Xerox’s Palo Alto Research Center (PARC), Alan Kay and his collaborators in the Learning Research Group introduced Dynabook, a proto-tablet device that radically reimagined the computer as a functionally ambiguous device. Rather than a disciplinary instrument, Kay and his collaborators understood the potential of computing as a functionally non-specific environment for encouraging nonhierarchical interactions between the user and code.12 Where Kamnitzer disciplined creativity through representational feedback, Kay attempted to accelerate the user’s mind through evolving encounters with an indeterminate digital environment, later arguing that Dynabook’s functional non-specificity “would actually change the thought patterns of an entire civilization.”13 But while Kay’s digitally accelerated evolution foregrounded the computer as an indeterminate environment, it was ultimately an environment inhabited by a profoundly different ontology of the user.

Kay’s subject was salvaged from his encounters with the theories of cognitive psychologist Jerome Bruner in the early 1960s. Over three books, Bruner argued that cognitive development occurred through the mind’s active restructuring of its context.14 According to this model, the mind already functioned like an environmental simulator: perceiving its context, representing those perceptions back to itself, and then acting upon those representations. Bruner, following the thinking of psychologist Jean Piaget, identified three stages of cognitive representation that he believed defined early childhood learning: the Enactive Stage, representing knowledge through actions; the Iconic Stage, where knowledge is represented through mental image making; and the Symbolic Stage, where information is stored through codes and symbols in the form of language. Importantly, where Piaget saw these stages of representation as sequential periods in the first seven years of cognitive development, Bruner and Kay understood them as permanent structural characteristics in the mind of his archetypal user, claiming that “[o]ur mentalium seems to be made up of multiple separate mentalities with very different characteristics. They reason differently, have different skills, and often are in conflict.”15 In a creative application of Bruner’s theories, Kay not only re-imagines the mind as a “mentalium” stacked from discrete mentalities, but torques this cognitive stack into a mirror of the personal computer itself. In a diagram later published in 1989, Kay conflates the hand/eye interactivity of the mouse as an interface with the Enactive Mentalis, the graphic spatiality of the desktop as an interface with the configurative Iconic Mentalis, and the object orientation of computational code as an interface with the most abstract, Symbolic Mentalis.16 It was a newly minted stacked mind mirroring the computational stack of screens atop code atop circuitry.

Alan Kay’s stacked mentalia from User Interface: A Personal View. Diagram re-drawn by the author.

In the most superficial sense, architecture’s recent post-digital turn could be understood as a shift in attention from Kamnitzer’s disciplinary instruments to Kay’s techno-cultural arenas for interaction.17 However, this shift from useful tools to cultural terrains obscures an alternative ontology of the user reflected back. To imagine users as being fundamentally like computers is to imagine life itself as a computable phenomenon. The economies of accelerating human-machine interaction, intimated at PARC in 1968, rely as much on the ubiquity of smart technological arenas as they do on the discretization of the mind into an aggregate of class-based programmable faculties. Becoming digital thus entails an epigenetic evolutionary process in which we are all increasingly discretized into evermore computable components. For example: western intelligence agencies now identify anonymous TOR browser users by archiving their idiosyncratic mouse movements as gestural surfing signatures;19 micro-labor platforms such as Amazon’s Mechanical Turk now disentangle employable attention spans from bodies deserving human rights like healthcare; and crypto-libertarian tech-gurus will soon approach eternal life by transfusing themselves with the stem-cell-rich blood of millennials.20 Our relationship to the world is now defined by our status as plastic datasets; our abilities to circulate are determined by the usefulness of our ontic aggregates. Being has become the unending obligation to subdivide ourselves into ever more useful mentalia.

What comes next is perhaps already too easy to imagine: something like our cotton candy haired protagonist sheltering from the acid rain of New Tokyo beneath a hologram of Gary Busey’s oversized smile demoing the latest dermal mods. Today, our becoming digital seems strangely coincident with the impossibility of imagining any future other than a well-rehearsed noir pendulum swing from the early optimisms of digital pioneers like Kamnitzer or Kay. In a moment in which architecture’s aging instrumental understanding of digital technology is being upset by an awareness of the broader forms of violence underlying our contemporary digital platforms, perhaps this critical awareness should also be accompanied by a realignment of architecture’s specific forms of imagination. A realignment that might allow architecture’s nascent post-digital turn to sidestep an opposition between computational novelty and cyberpunk noir; two sides of an imagination that persists in seeing technology only as something other than ourselves. This realignment would call for another narrative altogether, one in which computation—as a geographically distributed arena for scattering ourselves across vast networks—allowed us to imagine being as a spatial practice. A story in which becoming digital meant becoming architectural; life itself as an architectural act. If such an imagination is possible, I would suggest that it originated somewhere amidst the dead-links and forgotten wikis orbiting around a cryptic software experiment which came to be called Groupware.

This half-lost alternative imagination began in 1971 when Murray Turoff, a physicist working at the US Office of Emergency Preparedness, launched the Emergency Management Informational System and Reference Index (EMISARI) on a small network of UNIVAC multiprocessors. EMISARI was a communications network, intended to collate the knowledge of distant experts in order to assist the US government’s emergency response capabilities.21 EMISARI allowed these spatially disparate researchers to “log-in” to the national network using Texas Instruments teletype machines connected to long-distance telephone lines and exchange locally gathered information on topics ranging from regional economic disruptions to local commodity shortages. Turoff’s EMISARI was a proto-internet for data wonks, strung together on sophisticated calculators. In its initial implementation, Turoff’s early internet featured an ancillary function called Party Line.22 Like INTU-VAL’s spatial simulation, Party Line was considered by Turoff to be “[a] minor accomplishment compared to what else we were doing.”23 In fact, Party Line was something like the first digital chat room. Designed to obviate awkward conference calls, Party Line originated features such as the ability to see other participants in a network or to toggle their speaking privileges, relying on a sophisticated series of auditory signals indicating the status of other participants in the electric room.

EMISARI’s core functionality maintained a niche user base until 1986, but along the way, a peculiar thing began to take place in Turoff’s “minor accomplishment.” While originally intended to address provincial concerns such as avoiding interruptions or tracking the contribution of individual participants, Turoff became convinced that Party Line’s interfacing of distant minds generated forms of cognitive friction between participants that rapidly co-evolved the creative intelligence of the group. Like Kamnitzer and Kay a decade previously, Turoff and his partner Starr Roxanne Hiltz quickly imagined this artificially accelerated cognition scaling from an electric room to civilization itself. The two would go on to found the Electronic Information Exchange System (EIES, pronounced “eyes”) in 1978 at the New Jersey Institute of Technology. Part asynchronous communications network, part interstate collaborative, part electric new world government in waiting, EIES was premised on a radical understanding of software’s potential as an interface for collectively engineering our own epigenetic evolution by processing users’ cognition as spatially redistributable content. The roughly two thousand members of EIES, which included figures like Stewart Brand and Alvin Toffler, began referring to this projective model of software as “Groupware.” Between 1978 and the mid 1980s, EIES members collectively co-engineered their subjectivities as alternative artistic, political, and spiritual aggregates, creating everything from crowd-sourced soap operas to some of the earliest treatises on online aesthetics.24

Like Popper’s plastic control, Groupware’s groups were defined by a cybernetic ontology of the user later characterized by Andrew Pickering as “nonmodern.” This nonmodern ontology refused a dualism between cognition and the world (or clouds and clocks) to transform the user from a fixed being into an emergent ecology. Early EIES members and Groupware theorists Peter and Trudy Johnson-Lenz later described these nonmodern assemblies of processors and “biological hardware” as “part computer software and part ‘imaginal software’ in the hearts and minds of those using it.”25 But unlike the discretization of the mentalium that now strings a causal chain from digital utopians like Kay through noir cyberpunks of the early internet age to contemporary cognitive capitalists, EIES’s Groupware insisted on the spatiality of this post-human ontology. While EIES’s diverse activities comprised some of the earliest forms of digital mass-culture, decades before the public popularity of the internet, their shared structures of thought emerged from a digital that was insistently spatial.

In a report by an EIES-affiliated techno-spiritualist organization called The Awakening, a taxonomy of groups is outlined that reads like an architect’s catalog of spatial types. The creative acceleration of group dynamics observed by Turoff in the first Party Line experiments of the early 1970s is attributed to classes of spatial dynamics such as boundaries, containment qualities, thresholds, and forms describing Groupware operating procedures such as user access, editing hierarchies, or session timeouts.26 For EIES, the architectural qualities of the chat room were not merely analogues for domesticating an unprecedented form of communication, but a means of constructing another imagination for networked computation that foregrounded processing as a spatial proposition.27 These spaces drew out their users into expansive aggregates, congealing hardware and cognition into vast networked assemblies capable of undermining established spatial politics.28

EIES’s proposition of the user as an aggregate architecture was as radical as it was inconsequential, and made all the more irrelevant over the succeeding decades as communication protocols like the world wide web replaced software as the primary technical avatars of the digital. Users, in turn, defaulted to the disembodied brains of cognitive capitalist mentaliums, or to End User Undertakings prescribing the cloud-like ignorance of so many orbiting eyeballs. However, it is precisely Groupware’s status as a footnote in the history of the digital that makes remembering it so important. In foregrounding the political agency of a renewed spatial imagination in our considerations of planetary computing, Groupware suggests an architectural model of life itself as an alternative to both the cul-de-sac of clock-like feedback and the sci-fi pessimism of a post-Snowden internet.

Like Turoff’s earliest experiment, this alternative digital imagination would ask architecture to see platforms for design and construction as ad hoc planetary rooms within which silicon hardware and atomized users organize themselves into spatial aggregates. It would allow us to look beyond simulated objects endlessly orbiting around screens and toward software as the fabrication of ourselves. It would insist on seeing boundaries in REVIT edit permissions, containment in the indebted migrations of international construction workers, or thresholds in the convoluted models of authorship that underwrite contemporary online outsourcing economies. Most importantly, however, this reframing of computation after decades of screen-based precision might offer us the strange and hopeful realization that, from Alan Kay’s spatial cognition to the contemporary design of ourselves as vast online aggregates, the digital has been architecture all along.

1. “Frequently Asked Questions on End User Undertakings” FindlawUK, 26 June 2015, Accessed 3 October 2018. http://findlaw.co.uk/law/small_business/international_trade_small_business/exporting/licences/4793.html

2. Walter Benjamin, “The Work of Art in the Age of Mechanical Reproduction” (New York: Penguin, 2008).

3. Karl Popper, Of Clouds and Clocks: An Approach to the Problem of Rationality and the Freedom of Man (St. Louis: Washington University, 1966); Karl Popper, “Of Clouds and Clocks”, AD Magazine 9 (1969).

4. Peter Kamnitzer, “Computer Aid to Design”, AD Magazine 9 (1969).

5. Kamnitzer, ibid.

6. Nicholas de Monchaux, Spacesuit: Fashioning Apollo, (Cambridge, MIT Press, 2011).

7. The term Space City is referring to NASA’s Manned Spacecraft Center, now called the Johnson Space Center in Houston, Texas.

8. Kamnitzer, ibid.

9. Karl Popper, “Of Clouds and Clocks”, AD Magazine, Vol: 9 (New York, Wiley and Sons, 1969).

10. Karl Popper, “Of Clouds and Clocks: An Approach to the Problem of Rationality and the Freedom of Man” in: Objective Knowledge: An Evolutionary Approach, (Oxford, Clarendon Press, 1972).

11. Gabriel Almond and Stephen Genco, “Clouds, Clocks, and the Study of Politics”, World Politics, Vol: 29 No: 4, (Cambridge, Cambridge University Press, 1977).

12. Lev Manovich, “Alan Kay’s Universal Media Machine,” Northern Lights, Volume 5, Issue 1, (Chicago: Intellect Books Ltd.)

13. Alan Kay, “User Interface: A Personal View,” in The Art of Human-Computer Interface Design, ed. Brenda Laurel (Boston: Addison-Wesley Professional, 1990).

14. Thomas Rowland, “Jerome S. Bruner, A Philosopher of Educational Psychology,” Journal of Thought, Volume 3, Number 2, (San Francisco, Caddo Gap Press).

15. Alan Kay, “User Interface: A Personal View,” 126.

16. Kay, Ibid.

17. By this, I mean to suggest a general reframing of digital technologies from instruments for designing and manufacturing cultural artifacts, to digital technologies as cultural fields in their own right.

19. Jose Carlos Norte, “Advanced Tor Browser Fingerprinting”, 6 March 2016, accessed: 3 October 2018, http://jcarlosnorte.com/security/2016/03/06/advanced-tor-browser-fingerprinting.html

20. Jeff Bercovici, “Peter Thiel Is Very, Very Interested In Young People’s Blood”, 1 August 2016, accessed: 3 October 2018, https://www.inc.com/jeff-bercovici/peter-thiel-young-blood.html

21. Starr Roxanne Hiltz, Murray Turoff, Network Nation: Human Communication via Computer, (Cambridge, MIT Press, 1993).

22. Hiltz, Turoff, Ibid.

23. Bill Stewart, IRC History, 7 January 2000, accessed: 3 October 2018, https://www.livinginternet.com/r/ri_emisari.html

24. G. Henri ter Hofte, Working Apart Together: Foundations for Component Groupware, (Enschede, Telematica Instituut, 1998).

25. Trudy Johnson-Lenz, Peter Johnson-Lenz, “Post-Mechanistic Groupware Primitives: Rhythms, Boundaries, and Containers”, International Journal of Man-Machine Studies, Vol: 34, 1991.

26. Johnson-Lenz, Johnson-Lenz, Ibid.

27. The use of everyday physical artifacts such as desktops, folders or windows as analogies to ease the anxiety associated with encountering unfamiliar technology is a recurring trope in the early history of personal computing. I would like to argue, however, that members of EIES such as Turoff weren’t simply normalizing the chatroom by relying on the room as an analogue, but attempting to redirect our understanding of software’s possible agency.

28. Johnson-Lenz, Johnson-Lenz, Ibid.


[SIMULATION] 2018_Exhibited at A+D Museum

Ancient Computation, Screen 01

A three-screen recording of an interactive digital construction environment. Seven discrete AI actors are tasked with the construction of certain situations; each actor is at odds with the other six. Each of the seven constructive actors’ behaviors is motivated by a collection of politically significant scripts from the late 20th century, including algorithms to determine the trajectories of intercontinental ballistic missiles, algorithms to predict future market prices of commodities, or algorithms to produce the illusions of naturalness in computer-generated imagery. Each script was translated from its original language to C#, and redeployed in the interactive video game environment.

Ancient Computation, Screen 02

Ancient Computation, Screen 03


[TEXT] 2018_Published in Perspecta 52

The Delphi Method, The RAND Corporation, 1965

They never liked the title.1 They always thought the title was a bit silly and pretentious, verging too much on self-awareness.2 3 “They” were Olaf Helmer, Norman Dalkey and Nicholas Rescher; the title was The Delphi Method. “Smacking a little of the occult.”4 “It fits, but it carries mythological overtones we could do without,” they thought, but by the end of 1959, Delphi had already stuck somewhere in the collective vocabulary of the complex. Helmer, Dalkey and Rescher were futurologists working for Project RAND, and the Delphi Method was an attempt to proceduralize the prediction of futures.5 In this case, futures entailed only one particular future: of as-yet unimagined technologies, and their facilitation of as-yet unimagined forms of violence.

By the end of the 1950s, the arms race had accelerated so quickly that it had enlisted the powers of the occult imaginary. No longer just a race to outpace Soviet weapons development, but a race to prophesy all possible Soviet weapons to come. But all possible weapons are neither equally likely nor equally deadly, and so a method for divining more and less probable versions of doomsday needed to be devised. Thus, the Delphi Method was created.

Its premise was simple: prophecies require structure. The method entailed the assembly of a group of experts who would answer prepared questionnaires regarding the probability and intensity of future military attacks. The experts’ answers would be summarized by a previously appointed ‘change agent,’ with each summary being anonymously critiqued by the other participating experts. This process would then be repeated under the assumption that, with each epoch, the prophecies of the disparate experts would gradually converge toward a unanimous, and presumably correct, forecast.6
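The iterative structure described above is easy to caricature in code. The toy simulation below is my own sketch, not RAND’s actual procedure: each epoch, every expert revises their forecast partway toward the anonymized group median, and the rounds stop once the spread of opinion falls below a tolerance.

```python
import statistics

def delphi_rounds(estimates, weight=0.5, tolerance=0.05, max_epochs=50):
    """Toy Delphi iteration: each epoch, every expert revises their
    forecast partway toward the anonymized group summary (the median)."""
    for epoch in range(1, max_epochs + 1):
        summary = statistics.median(estimates)   # the 'change agent' digest
        estimates = [e + weight * (summary - e) for e in estimates]
        if max(estimates) - min(estimates) < tolerance:  # consensus reached
            return epoch, statistics.median(estimates)
    return max_epochs, statistics.median(estimates)

# Five hypothetical experts forecast the probability of an attack.
epochs, consensus = delphi_rounds([0.10, 0.25, 0.40, 0.60, 0.90])
```

In this caricature, consensus is guaranteed because dissent is damped by construction; the historical method’s interest lay precisely in whether real experts would behave this way.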

Ironically, the primary problem with the method’s implementation was also its underlying premise: that prophecies require structure. In its namesake, the structure in question was architectural: the spatial threshold between the Temple of Apollo’s antechamber and its interior adyton that separated the supplicants from the oracle, while concentrating the hallucinogenic fumes necessary for forecasting deep in the temple’s interior.7 At Project RAND, the architectural difficulties of such a method were too great to overcome; isolating experts in anonymity until a consensual prophecy could be reached could take days, and the experts’ growing discomfort proved an unexpected hindrance to the method’s objectivity.8 Instead, its first decade of deployment in forecasting everything from the end of the world to the future behavior of markets occurred through letters exchanged via the U.S. Postal Service.9

We wrote this short description of the Delphi Method in Google Docs.10 It’s an example of what is now referred to as groupware, or a collaborative platform designed to help disparate participants reach common goals, such as writing an essay for Perspecta Journal or prophesying the apocalypse. We’ve invited others to edit, add to and otherwise contribute to this document, even though we do not always agree with their writing. Other examples of groupware include Mindjet, WebEx or Autodesk’s Revit. An important function in most groupware platforms is the ability to asynchronously edit a text, a drawing or a spreadsheet through the allocation of permissions. For example, the permission to edit this text is something like a more modest version of the permission to enter the adyton at Delphi. If this seems like a stretch, just consider that the first use of digital editing permissions occurred in an experimental software package called DELPHI CONFERENCE, invented by Murray Turoff in 1970 for the U.S. Office of Emergency Preparedness in order to facilitate a more convenient, electronic implementation of the Delphi Method.11
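The allocation of editing permissions can be sketched in a few lines. The class below is a hypothetical illustration (the names and structure are mine, not any actual groupware API) of a shared document whose ‘adyton’ only granted editors may enter:

```python
from dataclasses import dataclass, field

@dataclass
class SharedDocument:
    """Hypothetical sketch of permission-based groupware editing:
    anyone may read, but only granted editors may revise the text."""
    text: str = ""
    editors: set = field(default_factory=set)
    history: list = field(default_factory=list)

    def grant(self, user: str) -> None:
        self.editors.add(user)  # admit a user into the 'adyton'

    def edit(self, user: str, new_text: str) -> None:
        if user not in self.editors:
            raise PermissionError(f"{user} may not edit this document")
        self.history.append((user, self.text))  # keep every prior epoch
        self.text = new_text

doc = SharedDocument()
doc.grant("worker_01")
doc.edit("worker_01", "They never liked the title.")
```

Anyone outside the set of granted editors raises a `PermissionError`, which is all a permission ever is: a threshold enforced in software rather than stone.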

DELPHI CONFERENCE’s closest post-internet analogue would likely be an online message board. Taking advantage of newly available multiprocessors, members of a conference could anonymously access the platform through teletype devices asynchronously, sending private comments on predetermined topics to a group moderator. These comments would then be curated by the moderator, who would post summaries to a group board called a ‘room,’ allowing individual members to anonymously interrogate the statements of their colleagues. Once a consensus had been reached, a summary would be made and the room would be deleted to make space for another.12
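The room structure described here can be loosely rendered as a data structure. The following is an illustrative reconstruction, not Turoff’s actual implementation:

```python
class DelphiConference:
    """Illustrative reconstruction (not Turoff's code) of moderated
    'rooms': members submit private comments, the moderator posts an
    anonymized digest, and the room is deleted once consensus is reached."""
    def __init__(self):
        self.rooms = {}

    def open_room(self, topic):
        self.rooms[topic] = {"comments": [], "digests": []}

    def comment(self, topic, member, text):
        # Private: the member's identity never reaches the room's board.
        self.rooms[topic]["comments"].append((member, text))

    def summarize(self, topic):
        # The moderator curates pending comments into an anonymous digest.
        room = self.rooms[topic]
        digest = [text for _, text in room["comments"]]
        room["digests"].append(digest)
        room["comments"].clear()
        return digest

    def close_room(self, topic):
        # Consensus: delete the room to make space for another.
        room = self.rooms[topic]
        final = room["digests"][-1] if room["digests"] else []
        del self.rooms[topic]
        return final

conf = DelphiConference()
conf.open_room("doomsday")
conf.comment("doomsday", "expert_a", "sooner than we think")
conf.comment("doomsday", "expert_b", "later than we fear")
digest = conf.summarize("doomsday")
```

Note that deletion is structural, not incidental: the room exists only for the duration of a disagreement.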

Now it seems to us that the most useful thing about this apocalyptic Cold War thread connecting psychoactive vapors at Delphi to Google Docs is that it offers us the possibility of imagining an alternative version of conceptual architecture after the internet. One in which architecture’s conception is no longer defined by tortured encounters between the opaque interior intellection of an architect and the malleable matter of an ‘outside’ world.13 But instead, by a model in which conception itself is an intersubjective spatial construct, drawn through the vast cloud-based data centers of extra-state empires like Google, and in this particular case, to the personal computers of globally distributed anonymous editors. This post-internet model of intellection would suggest that contemporary forms of groupware, like the one we’re writing this text in, or the one you’re drawing your building in, might be imagined as interfaces, or better yet, rooms through which we could collectively reconsider the edges between ourselves, our instruments and others.

Task Completion Code: 0201861914 don’t erase this # or we cant get paid

> -----Original Message-----
> From: Charlotte Algie
> Sent: Wednesday, January 30, 2019 7:16 PM
> To: Curtis Roth
> Subject: RE: RE:
> The only thing is that it's quite short?
> Do you wanna take it any longer?

While doomsday was eventually replaced with Building Information Management platforms and cloud-hosted Excel spreadsheets, at the core of Turoff’s half-century-old model of groupware was not just a model of software independent from the hardware terminals of individual users, but a non-modern model of being itself. One in which thought was re-thought as the stringing together of subjects, software and infrastructures into earth-sized electric rooms. Turoff understood his now-nearly ubiquitous achievement as an interface through which we might epigenetically engineer the evolution of emerging species-software aggregates, congealing processing and cognition into expanded networked assemblies.15 But to confuse Murray Turoff’s radical ontology either with the co-working software platforms of contemporary global capital, or with cyberpunk tropes from the decade after our mutual destruction seemed less assured, would be to neglect the simple fact that the unit of Turoff’s post-human aggregation was neither the pseudo-invisible interface of real-time connectivity, nor bodies becoming cyborgs, but the architectural figure of the room.

A year after DELPHI CONFERENCE, Turoff’s room expanded into PARTY LINE, the first public chatroom developed a quarter of a century prior to the internet’s privatization.16 By 1975 the room included Starr Roxanne Hiltz, who along with Turoff founded the Electronic Information Exchange System (EIES, pronounced “eyes”). By 1978 EIES consisted of two thousand human members, countless digital terminals and thousands of miles of glass fiber stretched into a room that included Buckminster Fuller, the Teletype Corporation’s Model 33 and Stewart Brand in its expanding non-modern aggregate.17 But this room that would eventually be compounded into the neologism of the ‘chatroom’ was distinctly unlike the folders, windows or superhighways furnishing personal computing with a vocabulary of skeuomorphic terms. For EIES, the architectural descriptor of the room was not merely an analogue for domesticating an unprecedented form of communication, but a means of constructing another imagination for networked computation that foregrounded processing as a spatial proposition.

If historical models of architectural conception, so perfectly depicted in Notes on Conceptual Architecture, have long posited architectural intellection against the concrete matter of bodies or buildings, Turoff’s now mostly-forgotten model of epigenetic spatial computing suggests a short circuit in conception’s underlying ontology. One in which intellection itself might be considered an architectural act, not through its divorce from the spatio-material economies of construction, but precisely because thought after the internet is already a form of spatial assembly outside of ourselves, aggregating subjects, software and infrastructures into ad hoc political arrays appearing in the editing permissions of HVAC assembly details within a Revit file, a military-industrial forecast for the end of everything, or an anonymously modified essay for Perspecta.

1. All text in bold was present at Epoch 1 of this essay’s collective editing.

2. All text that is not struck through was present at Epoch 92 of this essay’s collective editing.

3. There were 95 participants in this essay’s collective editing.

4. Adler, Michael & Ziglio, Erio. Gazing Into the Oracle: The Delphi Method and Its Application to Social Policy and Public Health. (Jessica Kingsley Publishers, 1996).

5. Dalkey, Norman & Helmer, Olaf. An Experimental Application of the DELPHI Method to the Use of Experts. Management Science, Vol. 9, No. 3. (INFORMS, 1963).

6. Linstone, Harold & Turoff, Murray. The Delphi Method: Techniques and Applications. (Addison-Wesley, 2002).

7. Parke, Herbert William & Wormell, Donald Ernst William. The Delphic Oracle. (Oxford: Blackwell, 1956).

8. Ibid. Linstone, Turoff.

9. McLaughlin, Milbrey. The RAND Change Agent Study Revisited: Macro Perspectives and Micro Realities. Educational Researcher, Vol. 15. (Stanford University Press, 1990).

10. The phrase ‘we’ in this context refers to Curtis Roth and a group of anonymous online workers, paid between .10 and .50 USD to perform various editorial tasks on this text.

11. Hough, R. W. Teleconferencing Guide. (Information Gatekeepers Inc., 1977).

12. Ibid. Hough.

13. For example, see: Eisenman, Peter. Notes on Conceptual Architecture: Towards a Definition. Design Quarterly, 78/79. (Walker Art Center, 1970).

14. ‘Task Completion Code’ was an identifier input by the editors into a micro-labor portal to record the completion of their editing task and secure payment.

15. G. Henri ter Hofte, Working Apart Together: Foundations for Component Groupware, (Enschede, Telematica Instituut, 1998).

16. Ibid. Hofte.

17. With ‘non-modern,’ I am referring to Andrew Pickering’s understanding of early cybernetic work as evincing a non-modern ontology, one which refuses a dualism between cognition and the world. See: Pickering, Andrew. The Cybernetic Brain: Sketches of Another Future. (University of Chicago Press, 2011).

☀ 100% SUNSHINE ☀

[DRAWING] 2018_Exhibited at Tempietto Exemplum Yale University School of Architecture

100% Sunshine

Bramante’s Tempietto is a tutorial on how to point at everything. Or more accurately, it’s a tutorial on how to point at something in such a way that a particular everything becomes imaginable in the first place. But the peculiar thing about Bramante’s Tempietto is that it was conceived at precisely the moment in which one everything was replaced with another, such that two distinct ways of pointing at two distinct accounts of everything sit bundled together within the portico of Chiesa di San Pietro. And while I’m no expert on the matter, it seems necessary to begin a description of my own drawing of Bramante’s tutorial by briefly pointing at everything else.

Everything No. 1 (The Model)

For example, the more conventional way for a thing to point at everything was to point at itself. As in: if the Tempietto could demonstrate its own internal harmonies explicitly enough, then it might also stand in as a model for those harmonies that hold together everything else. Pico della Mirandola was poisoned a few years before the Tempietto was completed, but he characterized this model of pointing by saying, “Firstly there is the unity in things whereby each thing is at one with itself, consists of itself and coheres with itself. Secondly there is the unity whereby one creature is united with the others and all parts of the world constitute one world.”1 Instead of an exhaustive account of the proportional and dimensional integrities of Bramante’s microcosm of everything, I’ll direct the reader to Mark Wilson Jones’ excellent text on the matter.2

In any case, despite the well-worn reliability of parts standing in for wholes, eventually all those top-heavy male bodies stretched between circles and squares precipitated by this model seemed less convincing. So while Bramante’s more conventional everything suggests, “a rational integration of […] all the parts of the building in such a way that […] nothing could be added or taken away without destroying the harmony of the whole,”3 a second account might posit that it’s exactly what was left outside of the Tempietto that actualized another everything altogether.

Everything No. 2 (The Vanishing Point)

So, instead of a temple pointing back towards itself, picture an eyeball hovering in the gap between Bramante’s Tempietto and Serlio’s never-built portico. Imagine the concentric circles of the Tempietto and the Portico surrounding it delaminating into four ellipses, two above the horizon projected from the imaginary eyeball, and two below. Picture yourself observing these four distorted ellipses while considering Leonardo’s thesis on linear perspective, which asserted something like, “…all straight lines passing through the point of the plane surface nearest to the eye are given a curvilinear distortion… there is foreshortening and increased distortion of objects in all directions from this point.”4 If Bramante’s first everything was held together by micro/macrocosmic harmonies, his second everything was disassembled across a perspectival distance that was suddenly calculable, navigable and conquerable.

Now it’s been observed that a stable horizon through which we might tame distance and render our collective futures more certain seems increasingly difficult to come by lately. So perhaps to revisit Bramante’s Tempietto, at a time in which a stable perspective seems as implausible as a Vitruvian Man, is just another way of asking: how do we point at everything today?

I wondered about all of this while drifting vicariously a thousand miles above Bramante’s Tempietto in Google Earth. After that, I thought about Hito Steyerl’s claim that if the horizon was once a reliable technique for organizing empires across vast distances, for transforming the space between old and new worlds into manageable colonial regimes, today the horizon has been smeared into an endlessly looping picture-plane.5 This total surface relies on the relocation of an eyeball, once lodged between the Tempietto and its portico, into a network of disembodied geocentrically orbiting eyeballs.

Now it seems only a small conceptual leap from the seamless continuity of contemporary life on a picture plane, to the continuity between myself and others that might allow me to more easily displace the labor of producing a drawing for this exhibition. So I hired five workers from the gig-platform Fiverr.com to locate the Tempietto on an orbiting eyeball of their choosing and trace its figure against the picture-plane formerly understood as Bramante’s horizon.

Everything No. 3-5 (The Rendering)

But as it turned out, of the five drawings returned to me by my five temporary workers, there seemed to be three separate Tempiettos traced, each vanishing toward three irreconcilable horizons. The funny thing about our alternative everything is that while the smooth surface of an endlessly scrollable earth seems like a continuation of Bramante’s second everything (i.e.: a horizon rotated 90°), in fact it’s a bit more like our older everything. Today we occupy the world and its model at once, a digital surface stitched together from thousands of aerial photographs into a picture-plane that erases both rainclouds and the surveillance empires orbiting above them.

So I decided to make these three Tempiettos and their three hypothetical vanishing points visible through the algorithmic stitching of seamlessness. I built a physical model of the Tempietto and attached it to a six-volt motor that would slowly drift Bramante’s temple through a laser scanner, which would stitch this drift into a continuous distorted mesh. The Tempietto’s tripled vanishing points determined the velocities of the model as it was smeared across the scanning bed, reconstructing the disembodied eyes of five anonymous workers and making three temples visible as the algorithmic stitching of planet earth into a third everything already pointing back toward all of us.

1. della Mirandola, Pico. “Opera Omnia.” Basel, 1557

2. Jones, Mark Wilson. “The Tempietto and the Roots of Coincidence.” Architectural History, vol. 33, 1990.

3. Wittkower, Rudolf. Architectural Principles in the Age of Humanism. Academy Editions, 1998.

4. Brandolini, Sebastiano. “Bramante’s Tempietto: Concept and Representation.” AA Files, no. 1, 1981.

5. Steyerl, Hito. “In Free Fall: A Thought Experiment on Vertical Perspective.” E-Flux Journal, no. 24, 2011.



Spring 2019

For a while I designed posters for Knowlton's Baumer Lecture Series. I designed four of them. Each design was based on a Photoshop tutorial I watched on YouTube. Each poster ended up being more difficult to read than the last. People seem to prefer the second one; I tend to prefer the fourth. I was not invited to design a fifth poster.

Autumn 2017

Spring 2018

Autumn 2018

Spring 2019