All Generative Deconstruction Projects

Group 1 - Line Tool

by hongxi-du

Group 2 - Link

by yingqi-jia

Group 3 - animator

by alexandre-ciorascu

Group 5 - Translide

by katja-kordes

Group 6 - MagnifyMe

by anshul-singh-jadone

Group 7 - Move it

by julien-berry

Group 8

by angeliki-fokou


Group 9

by maleik-rostom

Group 10 - Amogus

by marco-acquati

Group 11 - Cement

by katerina-koleva



Understanding Generative Theory


Beaudouin-Lafon, Bødker, and Mackay (2022) introduced generative theories of interaction to help HCI researchers and UX designers draw on existing scientific theory to create more innovative designs. Each generative theory is grounded in an established theory from the natural or social sciences, from which concepts relevant to human-computer interaction are derived. Each concept is then transformed into a set of actionable principles that can be applied to a specific design. The generative deconstruction process applies these principles through three lenses, first to deconstruct an existing design artifact and then to generate a new one: the analytical lens identifies which principles are currently embodied in the design; the critical lens judges whether those principles are relevant and useful; and the constructive lens generates new ideas based on them.

Apply the concept of Instrumental Interaction to alignment.

The concept of instrumental interaction involves transforming concepts and commands into interactive instruments. It includes three key principles: reification, polymorphism and reuse. Each principle can focus on the input, related to the user's actions, or on the output, related to the results of those actions. To apply the principle of reification to the problem of alignment, we apply each lens in turn:

First, analyze the standard method for aligning graphical objects, shown below. Has the alignment command been transformed into an interactive instrument? No: it is just a button that performs one alignment action, once. The button cannot be dragged to a collection of objects; instead, the objects must be selected in advance, and the resulting alignment does not persist if any of the objects is moved later.

Next, critique the interaction according to the principle of reification. The interaction is cumbersome, especially when aligning multiple objects, and the resulting alignment is not persistent: every time one object moves, the other objects must be realigned.

Finally, construct a new form of interaction by applying the principle of reification. For example, a Stickyline transforms the alignment command into an alignment instrument. The resulting alignment, that is, the relationship among the aligned objects, is preserved as an alignment substrate.
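The reification step above can be sketched in code. The following is a minimal, hypothetical model of a Stickyline-style alignment instrument; the class and method names are illustrative, not taken from any published implementation. The key point is that the alignment is no longer a one-shot button press: it becomes a first-class object that shapes attach to, so the alignment relationship persists when the guide is moved.

```python
from dataclasses import dataclass

@dataclass
class Shape:
    """A minimal graphical object with a position."""
    name: str
    x: float
    y: float

class StickyLine:
    """Hypothetical alignment substrate: a persistent vertical guide.

    The alignment command is reified into this object. Shapes attached
    to the line stay aligned: moving the line moves every attached
    shape, so the alignment persists instead of being a one-shot command.
    """
    def __init__(self, x: float):
        self.x = x
        self.shapes: list[Shape] = []

    def attach(self, shape: Shape) -> None:
        shape.x = self.x           # snap the shape onto the guide
        self.shapes.append(shape)

    def move_to(self, x: float) -> None:
        self.x = x
        for shape in self.shapes:  # alignment is maintained after the move
            shape.x = x

# Usage: attach two shapes, then move the guide; both remain aligned.
line = StickyLine(x=100)
a, b = Shape("a", 30, 10), Shape("b", 70, 50)
line.attach(a)
line.attach(b)
line.move_to(250)
```

With a mere align button, moving the guide (or any shape) would require re-selecting and re-aligning; here the relationship itself is the manipulable object.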


The following tables provide a more general description of each of the principles:

Reification: Transform a command into a first-class object of the interface, a digital instrument that the user can manipulate directly.

Polymorphism: A digital instrument should be usable on different types of objects. It may also reify further commands.

Reuse: Let the user apply a previous process again (input reuse) or its result (output reuse).


Instrumental Interaction

Objects of Interest
Analytical: What are the objects visible and directly manipulable by the user?
Critical: Do these objects match those of the users' mental models?
Constructive: Are there other objects of interest, e.g. styles in a text editor? Should some objects of interest be turned into instruments?

Instruments
Analytical: What functions are available as tools, e.g. in tool palettes, as opposed to commands, e.g. menu items?
Critical: Do the tools actually work as such, i.e. by extending users' capabilities? Do the tools enable technical reasoning?
Constructive: Which commands can be turned into tools? Are they related to physical tools?

Reification
Analytical: Which concepts/commands are reified into interactive objects/tools? How can these objects be manipulated?
Critical: Are the reified concepts effective? Are the objects directly manipulable?
Constructive: Which concepts/commands should be reified? Into which objects/tools? What manipulations should be available?

Polymorphism
Analytical: Which commands/tools apply to objects of different types?
Critical: Do they apply to collections of heterogeneous objects? Should commands/tools apply to multiple object types? Which types?
Constructive: How can each instrument be made (more) polymorphic? How can groups of heterogeneous objects be created?

Reuse
Analytical: Which commands/objects can be reused?
Critical: Which commands/objects should be reusable?
Constructive: How can commands be made reusable (input reuse)? How can objects be made reusable (output reuse)?

Human-Computer Partnerships


This theory is based on the concept of co-adaptation. In HCI, co-adaptation means that users adapt to the system by learning to use it, but also adapt the system to their own needs. There is also the related concept of "reciprocal co-adaptation", which applies to intelligent systems, but it is beyond the scope of this page. To learn more, see reference [1] on the "About" page.





Discoverability: Reveal how the system interprets the user's recent behaviour (feedback) and which commands are now possible (feedforward).

Appropriability: Modify the system's behaviour by customising its characteristics for new purposes.

Expressivity: Create rich, personalised output generated from individual user-controlled input variation.


Human-Computer Partnership

Co-adaptation
Analytical: Can users reveal, interpret or modify the system's behaviour?
Critical: Which aspects of the system are discoverable, appropriable and expressive?
Constructive: How can we help users both adapt to the system and adapt it for new tasks and creative expression?

Discoverability
Analytical: Does the system reveal how it interpreted user behaviour and show what options are currently available?
Critical: Can users discover and understand the system? Can the system interpret aspects of the user's behaviour?
Constructive: How can we present the system's interpretation of users' actions and reveal user-relevant features?

Appropriability
Analytical: Does the system permit customisation of the system or its features?
Critical: Can users create or modify the commands and features they need?
Constructive: How can we help users personalise or redefine the system or its features?

Expressivity
Analytical: Does the system transform individual input variation into expressive output?
Critical: Can users control how the system interprets their actions so as to generate rich or expressive output?
Constructive: How can we help users dynamically control their expressive output?
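Discoverability's pairing of feedback and feedforward can be illustrated with a small sketch. The drawing tool below is entirely hypothetical, invented for this example: after each action the system records how it interpreted the action (feedback) and exposes which commands are currently possible given its state (feedforward).

```python
class DrawingTool:
    """Hypothetical sketch of discoverability.

    After each user action the system states its interpretation of that
    action (feedback) and can list the commands that are possible in the
    current state (feedforward).
    """
    def __init__(self):
        self.selection: list[str] = []
        self.feedback: list[str] = []

    def available_commands(self) -> list[str]:
        # Feedforward: the offered commands depend on the current state.
        if len(self.selection) >= 2:
            return ["align", "group", "deselect"]
        if len(self.selection) == 1:
            return ["move", "deselect"]
        return ["select"]

    def select(self, obj: str) -> None:
        self.selection.append(obj)
        # Feedback: reveal how the system interpreted the action.
        self.feedback.append(
            f"Selected {obj!r}; {len(self.selection)} object(s) now selected."
        )

# Usage: commands on offer change as the selection grows.
tool = DrawingTool()
before = tool.available_commands()   # only "select" is possible
tool.select("circle")
tool.select("square")
after = tool.available_commands()    # "align" and "group" become possible
```

Appropriability and expressivity would go further, letting users redefine which commands exist and how their input variation shapes the output; this sketch covers only the discoverability row.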