Some thoughts on UI

Posted 2023-01-16

Here, I write a bit about the ui generators I already have for batch programs, and some concepts that would be nice to explore for more interactive programs.


Some thoughts on UIs

Forgive me if this is a bit repetitive for long-time readers, but I generally break applications up into two categories: batch and interactive.

A batch program runs just like a function in source code: you give it some parameters, it does its thing, and it produces some output. Some will ask for more parameters as they run, but they ought to be able to skip that if the data is provided ahead of time.

An interactive program runs a series of transformations driven by events. Each event runs a handler, which changes some state, which produces some output, and it loops around.

A lot of people draw the distinction as cli vs gui, which does overlap a lot with my definitions, but I'd rank something like an html form as a batch processor, even though it is clearly not a command line.

The line between batch and interactive is also quite blurry - when you click a button in a gui, it tends to run as a batch. And a batch program can have interactive components, such as the command line processor itself or a form on the internet. But I still find the distinction useful to think about both for how it affects code and how it affects the user interfaces.

Batch programs

A batch program is a function - ideally, but not necessarily, a pure function - that is, one that defines all its inputs and outputs up-front. Why is this ideal? Because it gives huge control to the caller and enough information for the caller to decide what to do about it, including, but not limited to, how to get the needed information, how to display the result, and how to compose results. Let's consider an example:

struct User {
	int id;
	string name;
}

User[] getUsers() { ... }

User addUser(User newData) { ... }

There are a few ways you can imagine using this.

Perhaps in code:

addUser(User(1, "Adam"));
assert(getUsers() == [User(1, "Adam")]);

Perhaps as a command line session:

$ add-user --id=1 --name=Adam
$ get-users
id | name
 1 | Adam

Or as an interactive text program:

$ add-user
ID: 1____
Name: Adam_

Perhaps as a website:

(image: a web form of the same thing)

Perhaps as a gui dialog box:

(image: a gui dialog of the same thing)

You get the idea. All these interfaces can be pretty easily auto-generated from the function definition! Of course, you might want to tweak some of the ui elements and maybe even do a custom one from time to time (e.g. adding autocompletion and such), but this can all be done too. This, I think, is the biggest strength of batch processing programs - they can be very flexible in how you use them.
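As a rough illustration of how the function definition alone can drive one of these interfaces, here is a minimal sketch of deriving the command-line flavor from the struct with compile-time reflection. The parseArgs helper and the stand-in addUser body are my own inventions for this sketch, not part of any real library:

```d
import std.algorithm.searching : startsWith;
import std.conv : to;
import std.stdio : writeln;

struct User { int id; string name; }

// Stand-in body so the example is self-contained; a real one would store the user.
User addUser(User newData) { return newData; }

// Parse --field=value arguments into any plain struct via compile-time reflection.
P parseArgs(P)(string[] args) {
    P result;
    static foreach (i; 0 .. P.tupleof.length) {{
        enum fieldName = __traits(identifier, P.tupleof[i]);
        foreach (arg; args)
            if (arg.startsWith("--" ~ fieldName ~ "="))
                result.tupleof[i] = to!(typeof(P.tupleof[i]))(arg[fieldName.length + 3 .. $]);
    }}
    return result;
}

void main(string[] args) {
    // e.g. ./add-user --id=1 --name=Adam
    writeln(addUser(parseArgs!User(args[1 .. $])));
}
```

The same field iteration could just as well emit html inputs or gui widgets instead of matching command-line flags.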

Interactive programs

How to best do interactive programs is more controversial. There's a lot of different approaches.

The most basic structure is that events happen and call event handlers, and the event handlers just do whatever. This tends to be what forms the foundation of most interactive programs, though there are even some variations on this, like how events are defined and delivered. The pros are you can do almost anything you need and nothing you don't, making it flexible and lightweight. The cons are about the same - you can do almost anything, so the code can get unstructured and buggy, which is why it is common to see other things built on top.
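That basic structure can be boiled down to a few lines; all the names here are made up for illustration:

```d
import std.stdio : writeln;

// A handler just does whatever it wants with the event's payload.
alias Handler = void delegate(string payload);

struct EventLoop {
    Handler[][string] handlers; // one event may have many handlers

    void on(string event, Handler h) { handlers[event] ~= h; }

    void dispatch(string event, string payload) {
        foreach (h; handlers.get(event, null))
            h(payload);
    }
}

void main() {
    EventLoop loop;
    loop.on("click", (payload) { writeln("clicked ", payload); });
    loop.dispatch("click", "save-button"); // prints: clicked save-button
}
```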

This tends to be some kind of model/connector/view architecture. This separates the application state from the ui and puts specific constraints on what changes can occur and when. There are variants for just about every aspect, but there are almost always those three main pieces: a model of the application in code and/or data, the UI to interact with it, and something that connects the two. The variants tend to differ in how these pieces are designed and how state is divided among them.

The batch programs I described previously can also be described in these terms, with the function being some kind of connector, the view being potentially auto-generated, and the model being application-defined. Indeed, applying some of these concepts might help there too.

I don't want to spend too much time talking about these (the first two drafts of this post did and I was really unhappy with them, hence why it is this late lol), but I do want to briefly describe what I think is ideal:

In a perfect world, the application model is fully described independently of the ui. It describes its own operations, its own (preferably, reversible) state transitions, exposes its own status and workflows, all in some inspectable way. You can ask it, for example, if it is capable of doing operation X without actually trying to do it. Ideally, you can even ask it what, specifically, it would need to do operation X and what would happen if you did, including next steps, again, all without actually committing to do it.
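To make that concrete, here is a toy sketch of a model you can question without committing to anything. The Capability and Document types are hypothetical, invented purely for illustration:

```d
// What a model might expose so a UI can ask "can I do X, and if not, why not?"
struct Capability {
    bool possible;
    string[] missing; // what you'd need before the operation becomes valid
}

// Toy model: a document that can only be published once it has a title.
struct Document {
    string title;
    bool published;

    // Ask without actually doing anything.
    Capability can(string op) const {
        if (op == "publish")
            return Capability(title.length > 0, title.length ? null : ["title"]);
        return Capability(false, null);
    }

    void perform(string op) {
        assert(can(op).possible); // the UI should have checked first
        if (op == "publish") published = true;
    }
}

void main() {
    Document doc;
    assert(!doc.can("publish").possible);            // disabled button...
    assert(doc.can("publish").missing == ["title"]); // ...that tells you why
    doc.title = "Some thoughts on UI";
    doc.perform("publish");
    assert(doc.published);
}
```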

The UI for this ideal application would be made out of hot-swappable components that reflect this model, allowing interactive exploration of it as well as scripting of its actions. You'd be able to dry-run anything you wanted and undo it harmlessly. You could go to a disabled button, representing a currently invalid operation, and it would tell you what to do to make it possible. You'd be able to swap data displays and manipulators from the (ideally perfect anyway) default over to an alternative component that's easier to work with for what you're doing right now.

And in this ideal application, all the connector code between these is magically generated from the model. The programmer would focus on writing that complete model and the custom ui components it needs (which ought not be too many most of the time, since so much is pulled out of the declared model) and all the glue code just magically works.

This is one of the things I like about traditional websites - they hit a few of these points. The original REST paper realized this and wrote up a description of how these common patterns were emerging in the wild - web applications represent their state through the html they send down. This describes what the resource currently is (the semantic data inside) as well as what change operations it supports, in the form of links and forms on the page. The back and forward buttons, thanks to the semantics of GET vs POST, give you some generic exploration options too, but of course, not everything can be undone. Finally, web browsers are pretty good at letting you, the end user, live-edit the ui if you want to rearrange things, while the refresh button always lets the site go back to a clean slate, generated from the backend application state.

While it doesn't quite reach the ideal, I still think there's a lot to learn from those web 1.0 techniques (which btw have continued to grow and evolve to this day). What's particularly interesting is that they achieve a good amount of flexibility in the ui: you can open a standards-compliant website on a variety of browsers - text displays, graphical displays, mobile displays, even automated web scrapers - and find it usable.

What might the code look like?

I'm a big fan of declarative metadata being embedded in code. In fact, that's probably the main thing I like about D - the static types are most valuable as a form of metadata, then you can add more user defined annotations and process it all in code to turn it into desired results.

But, that's not the only way to do it. As the web illustration shows, you can also run code to get a description, so long as the code is carefully defined to stay in its bounds (such as GET requests not changing anything). Output data can carry runtime tags describing how it can be formatted and what other operations are possible with it - indeed, this is, at its core, what an HTML document fundamentally is.

But the more code you write, the more chances there are for it to fall out of sync with reality. You might forget to update a link, for example, when changing a function, leading to a broken ui. So you want, as much as possible, a single source of truth with one obvious way to do things in the code, so programmers don't forget the rules and accidentally break things. This is where leaning on auto-generation from metainfo helps a lot - the metainfo is completely incapable of changing anything, since it isn't code, and the auto-generation minimizes room for mistakes, letting the programmer focus on correctly defining the model.

What I'd love to see is a reduction of the interactive program into an interactive loop of isolated batch-style operations. You write the functions, with rich metadata attached to each one describing what it needs and what it does, then describe the relationships between them in the form of declarative state machine transitions and workflows. If you manage to do this right, the average UI view and all the connecting code can be auto-generated, including at least some form of reliable preview and undo functions, and you then focus your frontend work on making big-picture layouts and special-purpose components.
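In D, those declarative transitions could live in user-defined attributes on the functions themselves. A hedged sketch - Transition, Workflow, and validOps are all invented here, not an existing library:

```d
// A UDA naming the states an operation moves the model between.
struct Transition { string from, to; }

struct Workflow {
    @Transition("draft", "submitted")
    void submit() {}

    @Transition("submitted", "approved")
    void approve() {}
}

// Derived purely from the metadata: which operations are valid right now?
// A generated UI could enable or disable buttons from this list alone.
string[] validOps(string state) {
    string[] ops;
    static foreach (name; __traits(allMembers, Workflow))
        static foreach (uda; __traits(getAttributes, __traits(getMember, Workflow, name)))
            static if (is(typeof(uda) == Transition)) {
                if (uda.from == state)
                    ops ~= name;
            }
    return ops;
}

void main() {
    assert(validOps("draft") == ["submit"]);
    assert(validOps("submitted") == ["approve"]);
    assert(validOps("approved").length == 0); // terminal state, nothing to offer
}
```

The annotations never execute anything themselves, so they can't fall out of sync the way hand-written glue code can.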

I already have auto-generation of forms from functions, seen above, and I've done some automatic rebuilding of dependent things based on function analysis (think about how spreadsheets work: you update one cell and everything that uses that cell in a formula also updates), but I haven't gone into actually defining state machines and such. I've seen some other attempts at it - Ruby on Rails has a few gems I've tried to use in the past, but to call them "gems" is being a bit charitable - but so far nothing I love.
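That spreadsheet-style rebuilding reduces to something like this deliberately tiny sketch; in a real system the dependency would be discovered by analyzing the formula, not registered by hand as it is here:

```d
// A cell notifies its dependents whenever it changes, spreadsheet-style.
struct Cell {
    int value;
    void delegate()[] dependents;

    void set(int v) {
        value = v;
        foreach (recompute; dependents)
            recompute();
    }
}

void main() {
    Cell a;
    int doubled;
    // Hand-registered here; function analysis would find this automatically.
    a.dependents ~= () { doubled = a.value * 2; };
    a.set(21);
    assert(doubled == 42); // updating the cell updated everything using it
}
```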

Would be nice to experiment with it some day though. I think D could do it, or if you were working on a custom language, it might be a paradigm worth optimizing the language around.