Consider this scene from the 2014 film Ex Machina: A young nerd, Caleb, is in a dim room with a scantily clad femmebot, Kyoko. Nathan, a brilliant roboticist, drunkenly stumbles in and brusquely tells Caleb to dance with the Kyoko-bot. To kick things off, Nathan presses a wall-mounted panel; the room lights shift suddenly to an ominous red, and Oliver Cheatham’s disco classic “Get Down Saturday Night” starts to play. Kyoko—who seems to have done this before—wordlessly begins to dance, and Nathan joins his robotic creation in an intricately choreographed bit of pelvic thrusting. The scene suggests that Nathan imbued Kyoko with disco functionality, but how did he choreograph the dance, and why?

Ex Machina may not answer these questions, but the scene does gesture to an emergent area of robotics research: choreography. Definitionally, choreography is the making of decisions about how bodies move through space and time. In the dancerly sense, to choreograph is to articulate movement patterns for a given context, generally optimizing for expressivity instead of utility. To be attuned to the choreographics of the world is to be mindful of how people move and interact within complex, technology-laden environments. Choreo-roboticists (that is, roboticists who work choreographically) believe that incorporating dancerly gestures into machinic behaviors will make robots seem less like industrial contrivances, and instead more alive, more empathetic, and more attentive. Such an interdisciplinary intervention could make robots easier to be around and work with—no small feat given their proliferation in consumer, medical, and military contexts.

While concern for the movement of bodies is central to both dance and robotics, historically the disciplines have rarely overlapped. On the one hand, the Western dance tradition has long harbored an anti-intellectual streak that poses great challenges to those interested in interdisciplinary research. George Balanchine, the acclaimed founder of the New York City Ballet, famously told his dancers, “Don’t think, dear, do.” In such a culture, the stereotype of dancers as servile bodies, better seen than heard, unfortunately calcified long ago. On the other hand, the field of computer science—and robotics by extension—demonstrates comparable, if distinct, body issues. As sociologists Simone Browne, Ruha Benjamin, and others have demonstrated, emerging technologies have a long history of casting human bodies as mere objects of surveillance and speculation. The result has been the perpetuation of racist, pseudoscientific practices like phrenology, mood-reading software, and AIs that purport to determine whether you’re gay from the look of your face. The body is a problem for computer scientists, and the field’s overwhelming response has been technical “solutions” that seek to read bodies without meaningful feedback from their owners. That is, an insistence that bodies be seen but not heard.

Despite the historical divide, it is perhaps not too great a stretch to consider roboticists as choreographers of a specialized sort, and to think that integrating choreography and robotics could benefit both fields. The movement of robots isn’t usually studied for meaning and intentionality the way it is for dancers, but roboticists and choreographers are preoccupied with the same foundational concerns: articulation, extension, force, shape, effort, exertion, and power. “Roboticists and choreographers aim to do the same thing: to understand and convey subtle choices in movement within a given context,” writes Amy LaViers, a certified movement analyst and founder of the Robotics, Automation, and Dance (RAD) Lab, in a recent National Science Foundation-funded paper. When roboticists work choreographically to determine robot behaviors, they are making decisions about how human and nonhuman bodies move expressively in close proximity to one another. This is distinct from the utilitarian parameters that govern most robotics research, where optimization reigns supreme (does the robot do its job?) and what a device’s movement signifies or makes someone feel is of no apparent consequence.

Madeline Gannon, founder of the research studio AtonAton, leads the field in her exploration of robot expressivity. Her World Economic Forum–commissioned installation, Manus, exemplifies the possibilities of choreo-robotics, both in its brilliant choreographic consideration and in its feats of innovative mechanical engineering. The piece consists of 10 industrial robot arms displayed behind a transparent panel, each stark and brilliantly lit, calling to mind the production design of techno-dystopian films like Ghost in the Shell. Arms of this kind are engineered to perform repetitive labor and are customarily deployed for utilitarian tasks like painting car chassis. Yet when Manus is activated, its arms embody none of the expected, repetitious rhythms of the assembly line; instead they appear alive, each moving individually and interacting animatedly with its surroundings. Depth sensors installed at the base of the robots’ platform track the movement of human observers through the space, measuring their distances so that the arms can respond iteratively. This tracking data is distributed across the entire robotic system, functioning as shared sight for all of the robots. When a passerby moves sufficiently close to any one arm, it “looks” by tilting its “head” toward the stimulus, then draws nearer to engage. Such simple, subtle gestures have been used by puppeteers for millennia to imbue objects with animus; here, they have the cumulative effect of making Manus appear curious and very much alive. These tiny choreographies give the appearance of personality and intelligence. They are the functional difference between a haphazard row of industrial robots and the coordinated movement of an intelligent pack.
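The paragraph above describes, in effect, a simple control loop: pooled depth detections feed every arm, and each arm orients toward, and engages with, the nearest passerby. The sketch below is only a rough illustration of that loop, not Gannon’s actual implementation; the class names, the one-meter spacing, the engagement radius, and the “snap to the nearest person” rule are all assumptions made for the example.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Person:
    """A tracked observer, as reported by the shared depth sensors."""
    x: float
    y: float

@dataclass
class RobotArm:
    """One arm in the row; it stores only where it sits and where it 'looks'."""
    base_x: float
    gaze_angle: float = 0.0   # radians; 0 means facing straight out from the row
    engaged: bool = False

    def respond(self, people: list[Person], engage_radius: float = 1.5) -> None:
        """Tilt toward the nearest tracked person; 'engage' if they come close."""
        if not people:
            self.engaged = False
            return
        nearest = min(people, key=lambda p: math.hypot(p.x - self.base_x, p.y))
        distance = math.hypot(nearest.x - self.base_x, nearest.y)
        # Point the 'head' directly at the nearest person (no smoothing here).
        self.gaze_angle = math.atan2(nearest.x - self.base_x, nearest.y)
        self.engaged = distance < engage_radius

@dataclass
class SharedTracker:
    """Detections are pooled here, so every arm 'sees' the same observers."""
    people: list[Person] = field(default_factory=list)

    def update(self, detections: list[tuple[float, float]]) -> None:
        self.people = [Person(x, y) for x, y in detections]

# Ten arms spaced a meter apart, all reading from one shared tracker.
arms = [RobotArm(base_x=float(i)) for i in range(10)]
tracker = SharedTracker()

# One simulated sensor frame: two passersby in front of the row.
tracker.update([(2.3, 1.0), (7.8, 3.5)])
for arm in arms:
    arm.respond(tracker.people)

for arm in arms:
    state = "engages" if arm.engaged else "watches"
    print(f"arm at x={arm.base_x:.0f} m {state}, gaze {math.degrees(arm.gaze_angle):+.0f} deg")
```

Pooling detections in one shared structure, rather than giving each arm its own private sensor, is what would let ten arms behave as a coordinated pack rather than ten independent machines.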