Ditto & Pattern AI : Design Hand-off Automation (Phase 2)
Back around 2013-14, I had experimented with converting design files (PSD/AI/PDF, etc.) into style guides and PDF spec documents through prototypes like Specstra, an early attempt to bring automation into the design process and reduce the manual effort of drafting style guides.
After my time at IBM, joining Red Hat marked a significant new chapter of learning. On my joining day, I received Red Hat CEO Jim Whitehurst's book, The Open Organization, which I still keep close to this day. The book's emphasis on how horizontal organizations can thrive sparked the idea of democratizing design operations through the principles of the DevOps mindset—what I later coined as DesOps.
The design-automation journey I had started at Samsung with Specstra, which focused on design hand-off, now began to evolve into something bigger. My deep dive into Agile and Design Thinking at IBM also played a crucial role, allowing me to see the design process from a fresh, more holistic perspective.
In search of a tool that could minimize these translations of values, I was toying with the idea of Ditto: a simplified tool that would look familiar to the different roles involved in the process, work from the same source files at any stage of that process, and align with any design system through an easy configuration mechanism.
Before arriving at this concept, I was exploring emerging technologies like Vision APIs and OpenCV libraries. One of these explorations led to a simplified conceptual workflow named PatternAI, which captured input from a live camera and matched it against potential UI patterns using computer-vision techniques. During this attempt, I wanted to go beyond the usual approaches and explore more practical solutions that could help build a UI design-automation component.
My setup was simple, just enough for a "hello world" validation: a webcam feeding input to a Kinetic-based web application running in Chrome. The application would try to match the patterns drawn on paper, such as a simple image, icon, or button, against predefined pattern arrays with corresponding object names. Although I didn't go beyond pattern matching at that stage, it was clear that once a match was made, generating predefined HTML-CSS code would be a straightforward next step.
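To make the idea concrete, here is a minimal sketch of that "hello world" matching step. This is an illustrative assumption, not the original PatternAI code (which used computer-vision libraries on real camera frames): it represents a binarized frame and the predefined patterns as small 0/1 grids, slides each template over the frame, and reports the best-matching object name.

```python
# Hypothetical sketch of PatternAI's core matching step.
# The pattern names and grids below are illustrative assumptions.

# Predefined pattern templates: 3x3 binary grids with object names.
PATTERNS = {
    "icon": [[1, 1, 1],
             [1, 0, 1],
             [1, 1, 1]],
    "button": [[1, 1, 1],
               [1, 1, 1],
               [1, 1, 1]],
}

def match_score(frame, pattern, top, left):
    """Fraction of pattern cells that agree with the frame region."""
    rows, cols = len(pattern), len(pattern[0])
    hits = sum(
        1
        for r in range(rows)
        for c in range(cols)
        if frame[top + r][left + c] == pattern[r][c]
    )
    return hits / (rows * cols)

def find_pattern(frame, threshold=0.9):
    """Slide each template over the frame; return the best match's name."""
    best_name, best_score = None, threshold
    for name, pattern in PATTERNS.items():
        rows, cols = len(pattern), len(pattern[0])
        for top in range(len(frame) - rows + 1):
            for left in range(len(frame[0]) - cols + 1):
                score = match_score(frame, pattern, top, left)
                if score > best_score:
                    best_name, best_score = name, score
    return best_name
```

Once `find_pattern` returns a name like `"button"`, emitting a predefined HTML-CSS snippet for it is exactly the "straightforward next step" described above.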
This experimentation later helped me see that variations along this line could become part of the high-level vision of bringing design automation to the UI design space, like a missing puzzle piece.
However, this also highlighted the necessity of a "consistent and scalable design system model." Whatever the potential approach, such a model is essential to ensure that anything translated from the different types of sources is accurately mapped and that the meaning of each pattern is preserved, because that is the point where all this information comes together.
The fundamental question that arises from this is:
When the system receives input from a source design (such as a hand-drawn wireframe) indicating a dropdown, can it consistently map this to a target design system, pattern library, or brand-system-based widget library?
This was an eye-opener. It made clear that irrespective of the technology used, and irrespective of the methodology used to generate the information needed for a design benchmark, without an essential "common language" among the design systems in use, the translated information and artefacts might not make sense to the systems running the automation. It revealed a critical gap in our current design systems, the lack of a consistent and scalable way to define patterns, and that gap threatened the very foundation of the DesOps mindset.
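The "common language" problem can be sketched in a few lines: the same recognized pattern name (say, `dropdown`) must map consistently onto whichever target design system is configured, and the translation must fail loudly where no equivalent exists. The design-system and component names below are illustrative assumptions, not an actual mapping shipped by any of these systems.

```python
# Hypothetical mapping from source-side pattern names to target
# design-system widgets. System and component names are assumptions.

DESIGN_SYSTEMS = {
    "patternfly": {"dropdown": "pf-c-dropdown", "button": "pf-c-button"},
    "material":   {"dropdown": "mat-select",    "button": "mat-button"},
}

def map_pattern(pattern_name, design_system):
    """Translate a recognized pattern into a target widget name,
    raising an error where the shared vocabulary breaks down."""
    components = DESIGN_SYSTEMS.get(design_system)
    if components is None:
        raise KeyError(f"Unknown design system: {design_system}")
    widget = components.get(pattern_name)
    if widget is None:
        raise KeyError(
            f"'{pattern_name}' has no mapping in '{design_system}'"
        )
    return widget
```

The missing-mapping error is the whole point: without an agreed vocabulary across design systems, the automation has nowhere sensible to send a recognized pattern.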
This experiment also helped shape the concepts of the Open Design System and the Nuclear Design System, which can help in DesignOps/DesOps. Check out the video below for the slides on these concepts:
However, Ditto, the hypothetical vision of a design tool tailored for software product development, was conceived to address this gap by leveraging a configurable UI pattern library that understands the relationships among the components of the design system. It could create and maintain its own objects, rendered on a user-friendly interface featuring drag-and-drop interactions. This design allowed users of different roles to focus on their goals rather than figuring out how to use the tool.
Conceptually, Ditto was capable of taking various inputs and exporting different outputs on demand, such as wireframes, visual comps, and ready-to-deploy UI code in HTML/CSS/JS. The primary benefit of such a tool was to reduce the critical dependency on any particular role within the team to progress with the project.
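The "same source, many outputs" idea can be illustrated with a toy sketch: one declarative page description exported either as a plain-text wireframe for early review or as bare-bones HTML for a prototype. The object model and renderers here are illustrative assumptions, not Ditto's actual design.

```python
# Hypothetical single-source page description, exported to multiple
# target formats on demand. Object types and fields are assumptions.

PAGE = [
    {"type": "heading", "text": "Sign up"},
    {"type": "input",   "label": "Email"},
    {"type": "button",  "text": "Submit"},
]

def export_wireframe(page):
    """Plain-text wireframe for early-stage review."""
    lines = []
    for obj in page:
        label = obj.get("text") or obj.get("label")
        lines.append(f"[{obj['type'].upper()}] {label}")
    return "\n".join(lines)

def export_html(page):
    """Bare-bones HTML for an interactive prototype."""
    renderers = {
        "heading": lambda o: f"<h1>{o['text']}</h1>",
        "input":   lambda o: f"<label>{o['label']}<input type='text'></label>",
        "button":  lambda o: f"<button>{o['text']}</button>",
    }
    return "\n".join(renderers[o["type"]](o) for o in page)
```

Because both exporters read the same `PAGE` source, a non-designer can iterate on the wireframe while a developer exports markup from the identical description, which is the role-independence the tool was meant to provide.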
Imagine a startup with only a visionary and a developer. The visionary could use Ditto to drag and drop shapes and objects in an interface similar to MS PowerPoint or Keynote. This familiar environment allowed them to export a bare-bones interactive code/prototype that could be tested. They could continuously test and receive feedback, tweaking the source and iterating as needed. The developer could then use the same source to export production-ready code with a single click. In this way, Ditto not only bridged the gap in design systems but also empowered teams to work more efficiently and collaboratively, ensuring that the DesOps mindset could thrive.