Additive GUIs


Specify the attributes of the GUI and let the computer generate the GUI

YAML Idea

I have an idea whereby you specify the relationships of widgets on the screen and the computer generates the layout.

Rather than positioning widgets manually, the computer generates the layout. Essentially the widgets form a system of inequalities where each widget's X and Y are set relative to one another.

We say that one widget is left of another, or that one widget is below another. This is how you might describe practically any GUI.

The idea is that the computer generates variations of the layout and the human reviews them.
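For example, leftOf and below lower directly into inequalities over widget bounding boxes. A minimal sketch (the box fields and the enumeration of candidate layouts are assumptions; the solver itself is omitted):

interface Box { x: number; y: number; width: number; height: number; }

// "A leftOf B": A's right edge must not cross B's left edge.
const leftOf = (a: Box, b: Box): boolean => a.x + a.width <= b.x;

// "A below B": A's top edge must sit under B's bottom edge.
const below = (a: Box, b: Box): boolean => a.y >= b.y + b.height;

// A candidate layout is acceptable when every stated predicate holds,
// so the computer can enumerate layouts and keep the satisfying ones.
const satisfiesAll = (checks: boolean[]): boolean => checks.every(Boolean);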

You also define the data flow between widgets. backedBy sets the data source for a widget. mappedTo is a reference to a template that defines the GUI for an item in a collection; it's the same as a functional map.
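As a sketch of that equivalence (rendering items to plain strings for brevity; the template output here is illustrative, not the spec's syntax):

// "Todos mappedTo todos": render the item template once per element of
// the backing collection, which is exactly a functional map.
const todos = [{ description: "todo one" }, { description: "todo two" }];
const rendered = todos.map((item) => `<label>${item.description}</label>`);
// rendered: ["<label>todo one</label>", "<label>todo two</label>"]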

The system is configured in triples.
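Each predicate is one triple written as a single string. A minimal parser sketch, assuming the subject and predicate never contain spaces, so that everything after the second space is the object (which lets JSON payloads like the 0.pushes one survive intact):

function parseTriple(line: string): [string, string, string] {
  const first = line.indexOf(" ");
  const second = line.indexOf(" ", first + 1);
  return [
    line.slice(0, first),          // subject, e.g. "Todos"
    line.slice(first + 1, second), // predicate, e.g. "backedBy"
    line.slice(second + 1),        // object: the rest of the line
  ];
}

// parseTriple("insertButton on:click insert-new-item")
//   => ["insertButton", "on:click", "insert-new-item"]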

Have you ever heard of TodoMVC? https://todomvc.com/

It's a simple problem implemented in many frameworks. The problem is a to-do list.

This is a to-do app written in additive GUIs.

You should notice that it is extremely compact.

{
    "predicates": [
        "NewTodo leftOf insertButton",
        "Todos below insertButton",
        "Todos backedBy todos",
        "Todos mappedTo todos",
        "Todos key .description",
        "Todos editable $item.description",
        "insertButton on:click insert-new-item",
        "insert-new-item 0.pushes {\"description\": \"$item.NewTodo.description\"}",
        "insert-new-item 0.pushTo $item.todos",
        "NewTodo backedBy NewTodo",
        "NewTodo mappedTo editBox",
        "NewTodo editable $item.description",
        "NewTodo key .description"
    ],
    "widgets": {
        "todos": {
            "predicates": [
                "label hasContent .description"
            ]
        },
        "editBox": {
            "predicates": [
                "NewItemField hasContent .description"
            ]
        }
    },
    "data": {
        "NewTodo": {
            "description": "Hello world"
        },
        "todos": [
            { "description": "todo one" },
            { "description": "todo two" },
            { "description": "todo three" }
        ]
    }
}

chronological,




I wonder: is OpenAI Codex building an internal representation similar to your Additive GUIs formalism when it is instructed informally, like in this video?

I think it would be a great simplification for defining UIs more rigorously. We could just provide such compact UI specifications as an API response, and if all browsers had the necessary libraries preloaded (e.g., via someone making an npm-preloading browser extension), they could render the UI very fast, without extra web requests, basically making front-end development unnecessary and replacing it with standardized API views of such declarative statements.


Yes, Mindey. Additive GUIs is based on the concept that a GUI is a query that splices multiple dimensions into a multidimensional plane, where each dimension is a widget and the points are states of that widget. There is a function that defines the relation between the points of each dimension and another set of points for each dimension, driven perhaps by human interaction or server interaction.

If APIs can return a highly dense definition of how the GUI should work and be rendered, then we can remove a lot of custom code.
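A sketch of that flow, assuming a hypothetical /ui endpoint and a generic interpret function supplied by the client (neither exists in the example above):

// Shape of the spec format used in the to-do example.
interface GuiSpec {
  predicates: string[];
  widgets: Record<string, { predicates: string[] }>;
  data: Record<string, unknown>;
}

// The only per-app client code: fetch the dense definition and interpret it.
async function renderFromApi(
  url: string,
  interpret: (spec: GuiSpec) => void,
): Promise<void> {
  const spec: GuiSpec = await (await fetch(url)).json();
  interpret(spec); // one generic interpreter replaces custom front-end code
}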

Most interactions with data-oriented GUIs like Infinity are just verbs against items in lists: they are reactive in response to data collections, or add items to collections, as in the sketch below.
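A sketch of that vocabulary (the verb names are illustrative, not part of the spec):

type Todo = { description: string };
type Verb = (items: Todo[]) => Todo[];

// Each interaction is a verb applied to a collection of items.
const push = (item: Todo): Verb => (items) => [...items, item];
const removeAt = (index: number): Verb => (items) =>
  items.filter((_, i) => i !== index);
const editAt = (index: number, description: string): Verb => (items) =>
  items.map((t, i) => (i === index ? { ...t, description } : t));

// The GUI simply re-renders from the collection each verb returns.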

For drawing GUIs, such as diagramming tools like PowerPoint or Paint, I think you need a different model.




chronological,

// If APIs can return a highly dense definition of how the GUI should work and be rendered, then we can remove a lot of custom code.

I see. To simplify matters, the problem then is fully defining such a declarative language, and then constructing the mapping between specifications in that language and their implementation via the (HTML, JS, CSS) triplets as components, which is what defines reactive UI elements, regardless of whether it's in a pure or virtual DOM. The state space of each such triplet-as-component then corresponds to what you define as the dimension, and the state space of the entire UI is the Cartesian product of the codomains of those components, each particular state of the entire UI being a "multidimensional plane".
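In TypeScript terms (type names hypothetical), that product reads naturally:

// One dimension per (HTML, JS, CSS) component; its values are the
// component's possible states (the codomain).
type EditBoxState = { description: string };
type TodoListState = { items: { description: string }[] };

// The state space of the entire UI is the Cartesian product of the
// components' codomains; one value of this type is one state of the UI.
type UiState = {
  newTodo: EditBoxState;
  todos: TodoListState;
};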

I can see this concept taking off and being important, but to get it actually working, I see a lot of work being required to define the exhaustive set of terms, and then the browser-bound interpreter for this to run.


I've written a very simple interpreter that works with the example. It uses React. The rendered HTML is ugly, but todo adding works.
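Not the actual interpreter, but a minimal hand-specialised sketch of the kind of React component it could emit for the spec above (everything beyond the spec's widget names and data is illustrative; a real interpreter would derive this from the predicates):

import React, { useState } from "react";

function TodoApp() {
  // "NewTodo backedBy NewTodo", seeded with the spec's initial data.
  const [newTodo, setNewTodo] = useState({ description: "Hello world" });
  // "Todos backedBy todos".
  const [todos, setTodos] = useState([
    { description: "todo one" },
    { description: "todo two" },
    { description: "todo three" },
  ]);

  // "insert-new-item 0.pushes {...}" and "0.pushTo $item.todos":
  // push NewTodo's current description onto the todos collection.
  const insertNewItem = () =>
    setTodos([...todos, { description: newTodo.description }]);

  return (
    <div>
      {/* "NewTodo leftOf insertButton", editable via $item.description */}
      <input
        value={newTodo.description}
        onChange={(e) => setNewTodo({ description: e.target.value })}
      />
      {/* "insertButton on:click insert-new-item" */}
      <button onClick={insertNewItem}>Insert</button>
      {/* "Todos below insertButton", "Todos mappedTo todos", "Todos key .description" */}
      <ul>
        {todos.map((t) => (
          <li key={t.description}>{t.description}</li>
        ))}
      </ul>
    </div>
  );
}

export default TodoApp;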

The hard part is, as you say, providing a flexible enough language that can support most GUIs.

My goal was for IDEs to be representable with this format of GUI.

I dread to look at the code for IntelliJ. I bet it's very, very complicated.




chronological,