A tight iteration loop is the only solution I've seen for building high quality UI/UX. Embrace that it's going to be terrible the first few times and plan for this.
You can save on cost per iteration by deferring the layout and styling pass until the very end. These interfaces are about communicating information. Plain text and basic form submissions work just as well as anything else while you're still trying to design the site map and the purpose of each page.
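To make that concrete, here's a minimal sketch of what I mean (assuming Flask, with a made-up "order" page; any plain request handler would do). It's deliberately unstyled: the only thing you can give feedback on is what the page asks for and where it leads, which is exactly the point at this stage.

```python
# A bare-bones prototype page: plain text plus a basic form submission.
# Nothing here is styled; the page exists only to exercise the site map
# and the purpose of this one screen. (Flask and the field names are
# assumptions for the sketch, not part of the parent comment.)
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def order_page():
    if request.method == "POST":
        item = request.form.get("item", "")
        return f"<p>Order received for: {item}</p><a href='/'>Back</a>"
    return (
        "<h1>Place an order</h1>"
        "<form method='post'>"
        "Item: <input name='item'> "
        "<button type='submit'>Submit</button>"
        "</form>"
    )

if __name__ == "__main__":
    app.run(debug=True)
```

Once the pages and the questions they ask are settled, the styling pass has something stable to attach to.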
There's a direct correlation between the customer-developer distance and the number of iterations required to achieve an acceptable result. You can dramatically reduce the churn by getting daily builds or screenshots in front of the customer. If your development process can't support iterating like this, you need to find one that can. If the actual customer can't handle that amount of traffic, create internal proxies or advocates.
The best functional specs I've seen for a UI have come from business people building their own mockups in Excel. At a minimum they will have some idea of layout and of what information should be displayed on which screens.
This also lets them focus on the core information, not the styling. When building GUIs on the web there are infinite ways to style every UI element, and more attention goes to that than to the actual interaction.
I'd disagree with that. People build wild shit in Excel that only they can understand. It makes sense to them because they built it. Nobody else can use it.
Not all spreadsheets are like this but there are absolutely no guardrails to prevent it.
Building a good GUI takes thoughtful design by someone who understands what makes a good GUI and what the goals of the interface are from the user's perspective. Someone who can make it look like what the user is already used to, even if that isn't "beautiful" UI or doesn't follow the latest trends in whitespace and widget appearance.
>I'd disagree with that. People build wild shit in Excel that only they can understand. It makes sense to them because they built it. Nobody else can use it.
I think the parent comment is talking about using Excel as a grid-based layout tool to show how they want the app to look, not implying that you should build GUIs based upon the convoluted stuff people build in Excel to avoid having a dedicated app.
I've been in this exact situation. Client provides their current workflow, implemented in a spreadsheet.
The problem is they make significant concessions in their design to fit the tabular model of spreadsheets. It can be really warping, not only to GUI but also the underlying data model. Then you show them what a relational data model is capable of, and (hopefully) blow their mind.
>thoughtful design by someone who understands what makes a good GUI and what the goals of the interface are from the user's perspective.
This is critically overlooked too many times.
>People build wild shit in Excel that only they can understand.
I resemble that remark ;)
In one respect, that's what Excel is really helpful for.
With no other GUI available, the mouse-driven electronic spreadsheet is one of the oldest and most familiar defaults for those who need wild math accomplished immediately, without delay, in spite of its drawbacks. And it's quite popular, most likely by "default".
Remember before they had a gooey all they had was an ooie.
Should have seen what it was like before people had a mouse ;)
How about back when almost all prospective users wanted computerized calculation abilities, but computers were so uncommon that none of them had ever used one (other than a first-generation game console) yet. They were of course well aware of what computers could do, but wouldn't actually be touching one until sometime in the (very near) future.
They were looking forward to it, which was a good sign, but when you handed one to them, the ideal situation was that they could simply be directed to the power button on the device. Everything else had to follow logically and be completely intuitive to those familiar with the domain, with no further guidance or support from the author. Budding operators who were absolutely computer-illiterate had to be able to get it right the first time. Would you settle for anything less when it's somewhat confusing, industrial, high-stakes computation under the hood?
That's just text but when you think about it, even the most complex logic & code might benefit from first making sure it can be well-navigated from a text-based UI, before adding the desktop & mouse to complete the "picture".
Remember, a text-based UI must ask the right questions, or there will be no correct response.
Any other UI which doesn't ask the same questions in one way or another, is unlikely to provide the same correct response.
It is also my experience that this is the way to go, and it also matches my theoretical view aka prejudices.
It seems to me that one of the things making this approach difficult is that we lack (design) tools that support iteration between design and development.
Modern tools make going from design to development easier, but it is still largely a one-way street. And one that's been made worse by recent trends towards building the UI in code, rather than from data. There are good local reasons for doing this, but it does seem to push even more strongly towards a waterfall-y development process (design does pretty mockups, throws over wall)
For a long while now, what seems to me a reasonably simple project (re-creating the UI of a drawing program and making it straightforward to re-create program state with annotations/highlighting) has been stalled because I can't find a design tool which works better than just placing the screen grabs in a vector drawing program and annotating by hand....
I'm about at the point where I'm going to just code everything up in METAPOST....
What made Display PostScript so great? I have only seen it mentioned in frothy marketing terms, not an in-depth discussion of its approach, why it worked well, and how that compares to other alternatives.
You have to understand or have experienced how fragile the early graphics programs were --- basically a .eps file would be a black box of PostScript code, and would have a pixel image preview used on-screen (a classic gag was to use a resource editor to change the pixel preview) --- whether or not it would actually print/image correctly was something one wasn't certain of until holding the actual output in hand, and even then, it only held for a specific PostScript rasterizer/job setting. Sure, it was okay 99.99% of the time, but that 0.01% could be disastrous.
Display PostScript meant that the system used for on-screen display was the same as for actual printed output --- I _never_ had a job on my NeXT Cube fail to print to match the on-screen version. Moreover, one could do cool DPS programming such as custom strokes and fills in Altsys Virtuoso.
These days, PDF and Quartz (née Display PDF) mostly address the reliability (and presumably it's gotten even better since I left the industry), but I miss the programmability. Hopefully, using METAPOST will let me get some of that back.
Thanks, I'm starting to understand. So Display Postscript was useful because it let you know that what you put on the screen as a programmer would be what's printed?
And it allowed for cool graphics effects like custom strokes and fills.
So that primarily matters for stuff you want to print. It wouldn't matter as much for assembling a UI.
Windows uses (used?) WMF and pixels for on-screen display, which would then either be used for printout via some conversion process, or a parallel construction would be maintained for output, and the need to keep that in sync would often result in slight differences in output --- maybe there were other approaches.
One of the neat things in Display PostScript was one could use Interface Builder to add a print button to any window to get the underlying PS code.
In practice that's what you could do with HyperLook on NeWS:
SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS))
HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!
I've written lots and lots of user interfaces in PostScript. Just not Display PostScript. NeWS was a lot better for making interactive user interfaces than Display PostScript, and it came out a lot earlier.
Then I used HyperLook to implement the user interface for SimCity, programming the entire interactive user interface in NeWS PostScript -- Display PostScript couldn't do anything like that:
Several years before that, I worked on the NeWS version of Gosling Emacs at UniPress, with multiple tabbed windows and pie menus. (Gosling also wrote NeWS, and much later Java.):
HCIL Demo - HyperTIES Authoring with UniPress Emacs on NeWS:
I used UniPress Emacs to develop an authoring tool for the NeWS version of HyperTIES, an early hypermedia browser, which we developed at the University of Maryland Human Computer Interaction Lab.
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser:
You'll need a SparcStation emulator to run it, if not a real SparcStation. I've resisted the temptation because there's so much new code to write that I don't have much time to run old code. ;) Although it would be fun to run it in an emulator 1000 times faster than it ever ran on real hardware!
Here are some links I've found:
Unix & Linux: How to emulate the NeWS window system?
I'd like to find a nicer development environment which made use of such options.
Apple killed off HyperCard, and Runtime Revolution became LiveCode, which went open source, then closed source, and is now only available for monthly license fees.
PythonCard never got to 1.0 and hasn't been updated in almost two decades...
There's a Python-enabled version of OpenSCAD, but the only user-facing options are the Customizer, which is quite limited, and a repository of files which users can access --- unfortunately, trying to bring up a canvas or UI window crashes the app.
Something about the human brain just makes it very bad at observing a mocked-up screen layout and understanding how well it works in practice. Apply that to an entire application with multiple functions and the problem increases exponentially.
Experience helps speed things up. But rapid iteration with a fast feedback loop is the best practice. Design is not doing the start of the loop, it’s doing the entire loop. Repeatedly.
The fact that frequent, repeated contact with the customer isn't the norm is why so many interfaces suck and so many engineers couldn't design a decent one with a gun to their head (although, frankly, that level of stress might not induce thoughtful design patterns).
Instead engineers get hit with micro view after micro view, and they build it using test flows that don't mimic the real world, and then they all tie it in to create a tangled macro view that's a shit show for the user.
I've been working to bring recurring Shadow Sessions to my workplace by creating a basic scheduler (which is really the pain point at scale) that simply pairs you with somebody who works in the tooling you're building (we do internal tooling) every three weeks. The feedback has been overwhelmingly positive and we're working to expand the functionality a bit.
So, all you out there who want a nice win: set up a little scheduler and get your Product, Design, Engineers, Managers, and TPMs into rotating sessions with actual customers, at a lightweight pace with minimal asks. It creates greater empathy, which translates to all of us potentially ending up with better software in the world as a whole.
When I do GUI work, it's usually for hobbyist or internal projects. I value quality UI/UX extremely highly. I often get paralysis-by-analysis here because I try to do the design and development in a single synchronous pass. Your comment about tight and rapid iteration being the only solution resonates with me.
One 'trick' I discovered recently was to completely ignore UI design and focus on _formatting_ instead -- the placement of elements in a proper and usable way -- saving the visual aspects of design (widget design, margins, padding, color, animations, etc.) until the very, very last step.
My hypothesis is that "good UI" = "good page formatting" + "pretty UI elements".
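A rough sketch of what that formatting-only pass can look like (tkinter with stock widgets and an invented settings screen, purely as an illustration): placement, grouping, and labels get decided, and everything visual stays at the defaults.

```python
# Formatting pass only: decide where elements go and how they're grouped.
# Stock widgets, default colors, no padding/margin/theme decisions yet.
# (The "settings" screen and its fields are invented for the sketch.)
import tkinter as tk

root = tk.Tk()
root.title("Settings (layout pass only)")

tk.Label(root, text="Display name").grid(row=0, column=0, sticky="w")
tk.Entry(root).grid(row=0, column=1)

tk.Label(root, text="Email").grid(row=1, column=0, sticky="w")
tk.Entry(root).grid(row=1, column=1)

tk.Checkbutton(root, text="Send me notifications").grid(
    row=2, column=0, columnspan=2, sticky="w")

tk.Button(root, text="Save").grid(row=3, column=1, sticky="e")

root.mainloop()
```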
Nice to have, icing on the cake, but what I'd need if it were to be mission-critical is a workflow at least 10x better than the average these days.
As a child, before I had any concept of software, I just wanted to get something worthwhile out of electronics itself.
I'm so old that most adults didn't have a concept of software either in those days. "Software awareness", the awareness that software even exists as an entity of its own, has by now proliferated by many orders of magnitude, in a way most other things have not.
One thing that's stood the test of time, if you can make the electronics do something it wasn't doing before, well that might just be remarkable. Maybe even game-changing. Maybe even like never before.
Sometimes you program, sometimes you don't.
In the right balance it can end up quite a system.
Decades ago, for my own purposes, I separated the UI from the rest of the code, and this was of course a monolith with line numbers. The equivalent of punch cards, but when you think about it the UI could be in the final 25% of the deck of cards, and quite easily physically replaceable in that media form factor. Plus, if you're transparent about it, it can really come in handy sometimes to deal from the bottom of the deck. GOTO can easily be your friend if you know how to accommodate each other ;)
But code also doesn't necessarily have to have any electronics involved.
Software alone can be considered more independent of constraint by a "system", because it can be so abstract.
Doesn't have to be so abstract, but that is a serious option sometimes.
The ultimate would be pure software which is not part of any other "system" at all.
I'm so out-of-date I'll probably just stick with the electronics ;)
I agree. Layout and styling should be completely decoupled and made orthogonal to one another. Basically, by default, styling should exclusively be theming.
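A minimal sketch of that separation (using tkinter's ttk here just because it's handy; the same idea applies to CSS variables on the web): the layout code never changes, and "styling" is reduced to swapping a theme.

```python
# Layout defined once; styling happens only by swapping the theme.
# ("clam" is one of the themes that ships with ttk.)
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
style = ttk.Style(root)

# Layout: independent of any visual decision.
frame = ttk.Frame(root, padding=10)
frame.grid()
ttk.Label(frame, text="Theme demo").grid(row=0, column=0)
ttk.Button(frame, text="OK").grid(row=1, column=0)

def apply_theme(name: str) -> None:
    # The theming pass touches nothing above this line.
    style.theme_use(name)

apply_theme("clam")
root.mainloop()
```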
Some emacs setups. (much more variability than VIM)
Some VIM setups.
The thing that all of these have in common is that they are designed for experts, not for every user. Also each one of those is a custom full platform app (or primarily text based) vs web app.
Conflating beginner tools and expert tools (user-friendliness) is usually where everything goes wrong. For most people, WordPad was enough; Microsoft Word was for when you needed a bit more control. But an expert tool is Adobe InDesign, where you have the maximum control. And the UI is quite different.
Same when learning to code: a basic text editor like GNOME's Text Editor or Nano is all you need. But an expert will reach for IntelliJ for their project because their needs are more complex.
Is that last one actually Garage Band? I used to use it a very long time ago and I don't remember it ever looking like that. It does, however, look basically the same as Logic does today. I'm not sure if I'd consider it a good GUI or not.
You got downvoted for the snark, but damned if it ain't a reasonable opinion.
If you read the seminal "The Design of Everyday Things" by Don Norman, you'll come away annoyed at half the physical _doors_ you walk through... here in 2025.
I've been pushing these terms to help us talk about and design better interfaces at work...
Static Interfaces - Your supermarket's pretty much a static interface. The frame of whatever website you're looking at. These are static. They're very powerful and were pretty much all you had before digital interfaces became ubiquitous. There's an initial learning curve where you figure out navigation, and then for the most part it's fairly smooth sailing from there, provided the controls are exposed well.
Adaptive Interfaces - These interfaces attempt to "adapt" to your needs. Google is probably one of the most successful adaptive interfaces out there. A query for "Shoes" will show a series of shopping results, while a query for "Chinese food" will show a map of the restaurants nearby. The interface adapts to you.
I call this narrow adaptive because the query triggers how the UI adapts. I think "wide area" adaptive interfaces, where the interface attempts to meet your needs before you've had a chance to interact with the static interface around it, are tremendously difficult, and I can't think of examples of them being done well.
Adaptable Interfaces - This last interface bucket includes controls which allow a user to adapt the interface to their own needs. This may include dragging icons into a particular order, pinning certain view styles or filters, or customizing the look or behavior of the applications you're working with.
Finder, the iPhone's basic UI, terminal, basic music catalog management (e.g. iTunes)... these are interfaces which are created once with an initial curve of varying difficulty to learn and then live on for decades without much change.
Conclusion - The best interfaces combine an intuitive static frame, with queried adaptive elements, and adaptable features to efficiently meet the needs of a diverse group of user flows instead of attempting the one size fits all approach (which leaves 2/3rds of people annoyed).
Another category, searchable interfaces, may fit into one of these or may be its own separate category. But tools like macOS Spotlight or the command palette in some editors are very useful for power users. Having every command available through a minimal set of fuzzy keyboard strokes is a significant productivity boost, while also allowing some degree of discoverability.
As an aside, if anyone at Adobe is reading this, this sort of tool would be an excellent addition to Illustrator, Photoshop, etc. InDesign already has something like it, although that implementation leaves a little to be desired.
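For what it's worth, the core of that "minimal set of fuzzy keyboard strokes" idea is tiny. A toy sketch (made-up command names and a naive subsequence matcher, not what any real editor ships):

```python
# A toy command palette: type a few characters, see every command whose
# name contains them in order. Real implementations also rank the results.
COMMANDS = [
    "File: New Window",
    "File: Save All",
    "Edit: Toggle Comment",
    "View: Zoom In",
    "Selection: Expand to Bracket",
]

def fuzzy_match(query: str, candidate: str) -> bool:
    """True if the characters of `query` appear, in order, in `candidate`."""
    chars = iter(candidate.lower())
    return all(ch in chars for ch in query.lower())

def palette(query: str) -> list[str]:
    return [c for c in COMMANDS if fuzzy_match(query, c)]

print(palette("tgc"))   # ['Edit: Toggle Comment']
print(palette("save"))  # ['File: Save All']
```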
Or you can focus on a single class and produce the best UI for that class.
Static Interfaces for the common actions that everyone does. Best as basic utilities in the operating system (Notepad, The calculator)
Adaptive Interfaces where you have a few advanced layouts for people who want a bit more. (WordPad, Notepad++, Kate, ...)
The expert tools (Blender, matlab, Adobe Illustrator,...) You will have a small userbase, but they're often willing to pay for a good tool that will solve their needs.
> A tight iteration loop is the only solution I've seen for building high quality UI/UX. Embrace that it's going to be terrible the first few times and plan for this.
The problem I have seen is that all the GUI toolkits weld themselves way too hard to the code.
Consequently, when you want to adjust the UI, you always have to rewrite big chunks of the code.
> I have made GUIs many, many times and my best case scenario goes something like:
> 1. A design has been made, everyone loves it. Detailed drawings have been made.
> 2. The devs are told to Make It
> 3. The devs Make It, exactly to spec
> 4. Everyone looks at it and everyone hates it
> 5. So many meetings. So much stress. This Is Terrible! What To Do?
> 6. A new design is made. So much better! Detailed drawings are made.
> 7. The devs are told to Make It
> 8. The devs Make It, exactly to spec
> 9. Everyone looks at it and They Do Not Love It
> 10. So many meetings. So much stress. This Is Terrible! What To Do?
> 11. Someone suggests in one of the many, many meetings what amount to basically minor changes, moving something, changing some colors, changing some text, something like that.
> 12. The devs Make It, exactly to spec
> 13. Nobody’s happy. But nobody hates it.
> 14. The devs are pissed.
I have similar experience. I think the real issue with GUIs: You have technical people building something (mostly) for non-technical people. Imagine developing a GUI for an internal app that the purchasing or accounting department uses. Most of your internal customers are non-technical. They don't think like devs. Plus, many devs have awful communication skills, especially with non-technical users, so large gaps in expectations can emerge.
The best experience I have ever seen: Have the internal customer team hire a semi-technical fresh grad. They are the primary dog-fooder of the new app. Force them to do their job only using the new app (as much as possible). They give lots and lots and lots of immediate, direct feedback to the devs. You can slowly iterate to something reasonable. The secret that makes mid-level managers upset: Don't plan; allow it to be built organically, especially if your audience is internal.
Another thing that I have noticed: Some people are just way, way, way better at designing and implementing GUIs. I have no idea how to filter for these people, but "you know it when you see it".
The issue is that step 2 is wrong. Step 1 is to make a design, step 2 is to test it. Make a paper prototype and have your customer simulate working with it. If you feel fancy make a pretty prototype in Figma.
If you have a good designer and a cooperative customer you can even combine steps 1 and 2 with a whiteboard prototype. Ask the customer what a common task is. Draw the interface on the whiteboard and ask the customer where they would click or interact, then draw the next UI state, and so on.
After a couple of rounds of iterating that way you can start writing actual code. You will still need a couple of iterations once people can try the actual software, but you will have a much better starting point.
> The issue is that step 2 is wrong ... step 2 is to test it.
But of course the best and only real way to test it is to test the real thing...so build it. Back to step 2. :-/
Recurring theme in pre-press: get the texts, get everybody to proof-read the texts, get customer sign-off, do on-screen proofs of the layout, everyone signs off, do print proofs, do proof-printer proofs, do a press run. Everybody signs off. Print run of 120000 copies. Typo in the first sentence on the first page, present all the way back through every proof.
My idea is to make building real things about as cheap as creating a click-dummy or paper prototype. How's the old saying? "The merely difficult we do immediately, the actually impossible might take a while" ;-)
I think these kinds of tasks, where many people are asked to review but nobody owns it, are the problem. Everyone assumes the others reviewed it, and then they each review it superficially.
You could give $100 per typo found and I bet that first page would be caught.
The problem is that it isn't their job to review it. They have their own deadlines, and their effort reviewing your prototype won't show up on their performance review.
Unfortunately, superficial feedback like typos are the least valuable feedback possible when creating a new design. What you really want to know is whether the design is actually feasible, if it introduced new pain points, if it is better than what it is going to replace. That's the sort of thing people will notice internally but not necessarily give voice to when they spend three minutes glancing at a prototype or initial build.
I'd add that you have to be careful with this approach that you don't just outsource the design to the customer.
Customers give valuable feedback, but it's rarely a good idea to implement their ideas as-is. Usually you want to carefully consider problems/friction/frustration that they bring up, but take their suggested solutions with a grain of salt.
This can be harder than it sounds, because customers who give the best feedback are often very opinionated, and you naturally want to "reward" them by including exactly what they ask for.
Yeah, step 1 is wrong too. The article goes into that.
You can't design an interface based on a partial feature-set, you need full interaction modes specified beforehand. You can't finish the details before you implement it, or even before you test. You can't have a committee of "everyones" to like it before you test.
Combining the steps isn't something "you can do", it's the one way that works.
* Step 2: design is "tested" with the users, later we find out the users really had no idea what was going on because they weren't really paying attention. Then the real product is delivered and they are shocked that the changes were made behind their back and without their input.
UX is hard. What usually happens is that you have:
- developers who are bad at UX writing the software and improvising with all the use cases that were not adequately specified in the design document
- end users who are bad at UX giving their input, but can tell when something feels right for their use case
- managers/spec writers who are bad at UX and are trying to translate the wishes of end users to the developers
The result is a worthless spec, garbage in, garbage out.
Sometimes, when the stars align, in your team you get someone who is actually good at UX. I am told this happens, because in my career I have never seen such a unicorn. And even in that case, the end user with no understanding of UX, the one that actually pays for the project, might really, really want that periwinkle blue button, so in pretty much all cases GUIs get through countless rewrites and tweaks.
>I think the real issue with GUIs: You have technical people building something (mostly) for non-technical people.
As a disgruntled power user, I think we have more of the exact opposite problem: the 'common denominator' users are driving interface development toward oversimplicity. We have good software which continually undergoes translation into the GUI equivalent of Newspeak. No please, I beg of you: don't spiritually reduce your program to some singular green [GO!] button! KDE 3.5 > 4+, GNOME 2 > 3+. Mozilla with a long habit of option removal (rivaled by the GNOME project). If today's interface designers had been in control 60 years ago, we would never have gotten the Unix paradigm (it would be "too complicated" lol).
There should be a bit of elitism in this. Some things are hard to do, not all software should turn all hard things into 'easy', 'simplified', singular [GO!] buttons.
I refer to it as "designed by programmers". It's so prevalent that the Silicon Valley TV show used it as one of its storylines.
When I worked for a consultancy in the early internet days I also often repeated "the customer doesn't know what they want until they see what they don't want". People weren't used to web GUIs and there weren't the common patterns we see today, so it wasn't until they had a half-working version that they could actually give feedback. Getting early feedback was essential to reduce the frustration of building something that is immediately changed. It still applies today, but it's less about usage patterns and more about missing requirements the customer forgot to tell you about.
Conversely, I've seen way too many designed-by-designers GUIs and UX flows that can be best characterized as a glittery, polished turd. They sure look nice in Figma but often only have one well-specified happy path from which the end user will surely deviate. Managers will easily greenlight broken designs because they can only see the visuals, not the complete picture.
If you find a talented designer, they are valued so much they will only get assigned to design tasks. If you have a rockstar 10x developer, their mind just cannot comprehend the average 0.1x end user.
What you need is someone who understands design but dislikes it enough to not focus on aligning single pixels or fixing the kerning. They need to be able to code but hate it with passion, because hard-working programmers create software for hard-working end users and the end user is not hard-working.
That's because "UX" designers need to build according to actual usability engineering guidelines, not just build what "looks good". 1995-2005 feels like it was the golden decade of this sort of thing.
A lot of UIs are made to get a promotion. Others are made by numerical optimization of the funnel, which is even worse. The best UIs come from incorporating lots of actual user feedback, and then it almost doesn't matter if they're built by programmers or designers.
The "one happy path" idea is what you want, though. Non-technical users use software through rote muscle memory (it's why in 25 years as a sysadmin I've had thousands of "there was an error message" reports and precisely 0 of those people actually read the error message to report it to me: not in the happy path means user simply shuts down).
The problem becomes that people try to make software too complicated to have one happy path. This is the road to perdition.
An error message is an automated bug report for the developer. I don't know why you think the user is supposed to care about it, or even see it. Are you paying the user to develop the software?
I think there's a figure mostly absent in these processes. Designers and devs are living in their bubble/silo and don't think like a user.
It's very rare to find someone who can understand design, UX, and code and put it all together into a cohesive vision. In my experience, if you have the UX right from the start, then the rest becomes much easier. UX is the foundation that will dictate how it has to be designed and programmed.
But also in so many cases it needs to be made to work in reverse :\
Even when the pure logic or core process is where the real magic happens and makes the app what it is, if interfacing with a user is very important you could also say that the UI is the actual heart of the program instead, the part that ends up calling that wonderfully unique code which may be different from anything else, or from anything that has ever had a UI before.
But you need a "more common" interface that can accommodate optimized workflows of some kind or another, in enough ways for target mainstream users to make the most of the unique parts of the core creatively-engineered code, enjoying the most easily gained familiarity at the same time. With at least as usable a UI as they are accustomed to if not better.
Once that's all said and done is when I think it would be best for the creative artists to bring it up to meet visual objectives, with carefully crafted content curation, and to run that by all kinds of ergonomic testers.
If they come up with any engineering shortcomings I would listen to them, even if it's nothing like an actual deficiency or true defect. With some pretty tight functioning to begin with, I think that would happen less often.
> if interfacing with a user is very important you could also say that the UI is the actual heart of the program instead
If it's meant to be used by humans, then yes. The experience should be the north star.
That doesn't mean that you have to sacrifice everything in favor of this. Obviously you need someone with enough technical knowledge to understand how to balance all the priorities.
Edit:
And by experience I don't only mean the UI per se. Also the performance of the whole system etc.
Like software, requirements can never be perfect. Overly prescriptive requirements are a huge red flag to me that a PM/client/designer is doing an engineer’s work, or micromanaging.
Arguably, lots of UIs are getting worse with every iteration of redesign.
- Windows GUI went downhill from Windows 7 (or even XP) with every release.
- Outlook went from good, to fair, to annoying, so that I finally replaced it as my personal client.
These are not the only examples I could name but they are the most prominent.
I think the main problem is that technical staff and UX designers are both trying to make something "new" or "fancy", which is in most cases the opposite of something usable. E.g. Aero was fancy, but it took away the fact that my active window had one signal-color header bar while all the others were tamed. Now all windows are colorful and yelling at me at the same time. Orientation is gone.
And after that UIs got even more "fancy".
Step 13 ("Nobody's happy but nobody hates it") is the plateau reached when everybody is too tired to keep on fighting -- a compromise, not a sign that the GUI has reached anything acceptable. It is not fancy enough anymore for developers and UX designers to be proud of, but at the same time it is still annoyingly bad for the users.
About Outlook: Are you talking about the Win32 desktop client or the M365 web app? If the desktop client, what has gotten so much worse? And is there a better alternative to the Exchange calendar? I have not seen one in my experience at mega corps.
I think this is a partial solution, but I have to point out the problem with relegating that function of translator and tester to a "fresh grad". That is ideally the exact role of a Product Manager today: the go-between and translator with vision who can manage customer/client expectations while also adequately communicating technical concepts, who can both produce the initial task breakdowns and run interference for devs, i.e., dish out conditional nos.
This function is both extremely critical and, in my opinion, undervalued. The business/client side thinks that's what devs are for, and devs think PMs are just more management, until they've learned that (please excuse the sports metaphor, since it's not something I do, but it seems fitting) Product Managers can be the defensive line as well as the quarterback for the running backs, converting the coach's strategy into wins and cheering crowds instead of boos and disappointment all around.
The difference? Fresh grads are much cheaper than experienced PMs. I always say: Don't hire PMs; hire better devs (who, when necessary, can wear the hat of a PM). To be clear: My example is specifically talking about internal software development, and I have seen this strategy work at multiple companies. Creating an external product for B2B or B2C is very different.
Internal SW dev can work with a lot less overhead and setting up direct communication between users and developers is reasonably simple. There is usually a 1:1 relationship between user roles and developers.
Published software ideally has many, many more licensees and you absolutely need rigid communication channels with various go-betweens (PM, marketing, support). Direct communication between devs and customers wastes too much of the developers' time. Especially the PM role becomes extremely important for product quality then. In the extreme, the product can only be as good as its PM.
You are missing the point: What generates higher ROI: (1) dev + PM (separate people) or (2) highly skilled dev who can periodically act as PM? In my experience, it is always (2). Any time that I hear a senior manager complaining about "expensive devs", I always ask them: "How do you balance cost and quality?" Most of them are stunned by this question and give a bullshit answer. The truth: Almost all orgs are better to hire far fewer devs who are very high quality, versus many devs who are lower quality. I never worked for Amazon AWS but the "pizza-sized" team thing is real -- from experience.
This was very hard to read and I wasn't even sure what the conclusion was. One thing I didn't understand: how does one disagree with agile dev processes, which are mostly built on the fact that many things, especially UX, you can't know in advance, so you have to build something small, get feedback, then either scrap it or improve it? The process described here sounds exactly like someone spending weeks if not months designing the GUI, then devs spending weeks or months implementing it, without any cross-communication, so it's kind of obvious it needs to be fully re-done so many times. People started switching to agile specifically to shorten the feedback loop and scrap bad ideas faster.
> so it's kind of obvious it needs to be fully re-done so many times.
But it hasn't really caught on in the management layer. Sure, they use all the right Agile buzzwords, but they still put features A, B and C into the plan, and ask questions like "when will B be finished?"
"Finished?". Nah - we're the stewards of 14 bugs-as-a-service. We won't so much "finish B" as much as we'll transition to becoming the stewards of 15 bugs-as-a-service.
This precisely. They treat development (programming) as the slowest part of the process, but that has not been my experience since Figma came out. I've not seen agile done right since it arrived; we're just doing waterfall with sprints.
I have only skimmed the text, but regarding GUIs specifically, the list at the end is spot on.
With that being said, I firmly believe that all software (given that one is not already deeply familiar with the domain) is/can/should be written three times to end up with a good product:
1. Minimal prototype. You throw something together fast to see if it can be done, taking shortcuts and leaving out features which you know you will want later(tm).
2. First naive real implementation. You build upon the prototype, oftentimes thinking that there is actually not that much missing to turn it into something useful. You make bad design decisions and cut corners because you haven't had a chance to fully grasp all the underlying intricacies of the domain and the more time you spend on it the more frustrating it becomes because you start seeing all the wrong turns you took.
3. Once you arrive at a point where you know exactly what you want, you throw it all away and rewrite the whole thing in an elegant way, also focusing on performance.
(1) and (3) are usually fun, whereas (2) quickly becomes a dread. The main problem is that in a work context you almost never are allowed to transition from (2) to (3), because to an outsider (2) seems good enough and nobody wants to pay for (3).
"Plan to throw one away. You will anyhow."- Fred Brooks, _Mythical Man Month_
A software engineering book written decades before I was born- my college assigned us the 25th Anniversary Edition- and yet I re-read it every few years and find some new way to apply its lessons to my current problems.
Personally, I’ve never found this lean methodology to work for me. I have a bit of a mantra that I’ve found works really well for me: “Put everything on the screen”.
Every feature every variant ever possible configuration and all future potential states. Don’t care about how it looks or how it feels just put it all there. Build out as much of it as possible, as fast as possible, knowing it will be thrown away.
Then, whittle away. Combine, drop, group, reorganize, hide, delete, add. About halfway through this step it becomes clear what I really should have been striving for the whole time—and invariably, it’s a mile away from what I started out to build.
Once I have that, then I think step three stays about the same.
This isn’t really a critique of lean development, but after a decade of trying to do things leanly, I’ve just accepted that it’s not how my brain works
Hard agree. (2) is all about building out the test suite; once you have this (3) becomes a cake walk.
I've worked in a lot of places where end to end testing is performed manually by a SIT team who absolutely do not like to re-run a test once it's been passed. These people hate the idea of (3) and will overestimate the costs to the PM in order to avoid having to do it.
I agree completely with the idea of building something 3 times. As I get older, I tend to compress things more into 2 iterations, but that's just because I like to think I'm getting better at coding, so step two is less pressing.
I think of the three iterations in these terms:
1) You don’t know what you’re doing. So this iteration is all about figuring out the problem space.
2) You know what you're doing, but you don't know how to do it. This iteration is about figuring out the way to engineer/design the program.
3) You’ve figured out both what you’re doing and how to do it. So now, just build it.
I would add that the reason no product manager wants to pay for #3 is that historical attempts to do so have overwhelmingly resulted in cost/schedule overruns; did-not-finish outcomes are common. Let he who believes otherwise demonstrate so with his own money: this is called a startup, and note that virtually all startups fail, i.e. run out of some critical resource without finishing! So what is a seasoned product manager to do? No easy answers here -- simply look to the industry to see what the average outcome is. And it is not for lack of trying. In my opinion software delivery is not a solved problem, but it is really hard to make money as a software delivery expert by going around saying that you don't know how to deliver software.
I hear what you're saying, but my experience is that dwelling in #2 without seeing the bigger picture just as often results in cost/schedule overruns, because shoehorning in certain features, or trying to improve certain aspects, collides with the status quo and sometimes cannot be easily accomplished if things were built "wrong" to begin with (wrong often just meaning they were based on then-relevant prerequisites/assumptions which are no longer relevant). Also, the cost of maintenance is often just not taken into account, which means that in the end you spend way too much time shoehorning a half-baked solution into the status quo; it has the appearance of delivering what was requested (but doesn't always, because you had to compromise, leaving everybody unhappy) while taking way too much time, and at the same time it just piles more bloated poo on top of what's already there, making maintenance in the long run even harder. I can't count how many times I've been in a situation where implementing something shouldn't have taken more than 30 minutes but, because the codebase was in a not-so-good(tm) state, took several days instead. This piles up exponentially, resulting in frustrated developers, a worse product, and cost/schedule overruns. In a perfect world, code should improve over time, not deteriorate.
From the PM perspective, it makes little sense to go from (2) to (3).
Those devs spent weeks/months on this app, and now they want to throw it all away? That means throwing money out the window. There's also the risk that the new app may not work like the old one, or that the deadline is missed, etc. A safe bet would be reiterating (2).
I explained in another comment why it isn't throwing money out of the window. In my experience, it often costs a lot more money in the long run to not do it. The underlying problem is that most companies don't really think mid- or long-term and are happy with chasing fast money and eventually throwing it all away anyway because the product isn't competitive anymore and/or maintenance becomes too expensive. These are problems which definitely can be mitigated, but it requires a good team.
4. Now you arrive at a point where you really know exactly what you want, you throw it all away and rewrite the whole thing in a better more elegant and performant way.
The article is poorly written. No clear message, topics jump around weirdly, and the overall style reads like a teenager or intern trying to be as impressive as a professional.
But here's the thing with "GUIs/UIs/UXs whatever":
The best UI/UX is created by a domain professional, who knows why and how it serves as the best designed tool for that domain - a tool made by a professional for himself and/or for other professionals in the same domain.
This is why Bloomberg terminal UI/UX is like it is for finance professionals, as are DAWs for music professionals, as are CAD tools for EE/architects etc. They act as the right tool for the right job.
Coders, (Figma) designers, and other "implementers" (including management and "product owners"!) have to understand the business domain in order to fully manifest their craftsmanship. It is very hard to start and/or iterate on UI/UX design if the implementers are not personally using the tool in some professional domain, and therefore don't know what is right and cool design and what is not.
100%. Designers in love with white space should not design UIs for engineers (or anyone who lives their professional life in Excel). Lots of margin, padding, drop shadows, ‘round-lg’ etc might look pretty, but when you can only fit two numbers on a page it doesn’t help.
I don't know about other professional tools, but EDA tools for chip design are like they are because electrical engineers and the vendors are 20 years behind in how to develop software.
I have worked for a number of different software companies over the years. For most of them, there were no dedicated frontend or UX designers. It is mostly backend devs who had enough/decent skills at frontend, whether it be GUI apps or Web apps.
However, when you are doing something specific for customers (not staff), the design is important to get right early. Yet even at a number of places I have worked, the structure is still wrong.
For example, I worked for a company which had one UX designer. I will give him his props: he was good at GUI design and a whizz at CSS! Sadly, when he had "finished" the design, it got passed over to the developers to implement the functionality around it. If something was not going to work functionally, or a customer changed the design... it was the developer who had to fix it. The UX guy had moved on to another project, and the cycle repeated. It was the wrong structure.
I found good results when a UX guy works alongside a developer. As the UX guy works on the designs, it allows the developer to start building the business logic around them. It is all part of the development process, after all. Sure, the UX guy is likely to make changes, even driven by the customer, but the developer is always aware and can adjust. A lot of the module work is likely to be small amendments.
Once the UX is finished, then so is (mostly) the module, alongside unit tests or similar. It is simply a matter of a developer taking the UX project and adding the needed calls to the modules. It keeps the middle layer small and makes further changes easier, whether to the UI or to the module, etc.
I wasn't sure where the author (Patricia) was going with the whole 'GUIs are built >= 2.5x'.. but by the end, I agree.
Discovery is fundamentally different from assembly (as in the 'factory' metaphor). And innovation (= new product development) is fundamentally about discovery (whether product/market fit or product/user fit). Therefore, new product development is fundamentally an iterative process.
Any org trying to force-fit a 'get it right the first time' mentality on discovery/innovation has discovered (no pun) just how common failure is...
> "This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable."
I think most UI is broken by design, possibly for perpetual-income reasons. HyperCard, VB, and other easy-to-use, accessible builders are dead even though this is what people really want. If I want a blue menu bar, I need to code markup!? And to stop me from creating blue menu bars, today I am forbidden from having a menu bar at all anyway. Crazy ideas and creatively built prototypes seem to have no place in the private ManagerFactoryClass.
One objection was that the text scrolling was line by line and Steve said “Can’t this be smooth?”. In a few seconds Dan made the change. Another more interesting objection was to the complementation of the text that was used (as today) to indicate a selection. Steve said “Can’t that be an outline?”. Standing in the back of the room, I held my breath a bit (this seemed hard to fix on the fly). But again, Dan Ingalls instantly saw a very clever way to do this (by selecting the text as usual, then doing this again with the selection displaced by a few pixels — this left a dark outline around the selection and made the interior clear).
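That displaced-selection trick is simple enough to show in a few lines. A back-of-the-envelope sketch (numpy standing in for the framebuffer, nothing to do with the original Smalltalk): invert the selection rectangle, invert it again nudged by a couple of pixels, and the overlap cancels out, leaving only a thin inverted fringe that reads as an outline.

```python
# Invert the selection, then invert it again offset by a few pixels.
# Pixels covered by both rectangles flip back to their original value,
# so only the offset fringe stays dark: an outline for free.
import numpy as np

def invert(img, x, y, w, h):
    img[y:y + h, x:x + w] = 255 - img[y:y + h, x:x + w]

screen = np.full((40, 80), 255, dtype=np.uint8)  # blank white "screen"
invert(screen, 10, 10, 50, 12)                   # normal selection highlight
invert(screen, 12, 12, 50, 12)                   # same highlight, nudged 2px

print((screen == 0).sum(), "pixels remain inverted (the outline)")
```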
> I think most UI is broken by design, possibly for perpetual-income reasons.
I don't know how the incentives really play out anymore. It's definitely self-interest in a lot of places.
I have a new theory that some user interfaces are made to be janky on purpose such that the users are constantly bathed in cortisol and made easier to subjugate with the other dark patterns.
The UI/UX for Azure instantly comes to mind as an example. By the time I've been able to ascertain that my VM is actually running, I have forgotten about the five other things I wanted to verify wrt billing, etc. Eventual consistency for something like this appears to me as an intentionally user-hostile design choice, especially in the case of Microsoft with their vast experience and talent pools.
The thing about MS I recently realized is that whatever they do (and most of the technologies they output), they target it from the enterprise angle. So they check boxes with features, they just need to make sure they are available/usable, but they don't particularly care how nice they are to use.
So it is indeed an intentional choice just to make a good enough product and move on to something else. They never want to polish whatever they have.
> So imagine a pipeline that takes in encrypted text and the first “filter” decrypts the text, the second takes the decrypted text and strips away the beginning and the end, the third takes its input and sends it in an email. From a programmers perspective, we might think of these inputs and outputs as the “same” because they are text, however, in meaning, they are very different.
I've only got this far and thought it was interesting. Firstly because I think it's partly wrong; a programmer definitely doesn't think of encrypted binary blobs as the same as text. But secondly because I do wonder if a subclass of a string type that "has leading and trailing whitespace removed" might be quite an interesting way to model your data. The object could do the strip on construction.
It's just a description of InputStream/OutputStream type classes. You can have an EncryptedStream as well.
There's something to be said for having objects that are just "a string (or number), but having had its prerequisites enforced and validated". Especially in unicode land.
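A rough sketch of that "string with its prerequisites already enforced" idea, applied to the article's decrypt/strip/email pipeline (the type and function names are invented for the example): each stage takes and returns a distinct type, so the fact that everything is text underneath can't blur what a value actually means.

```python
# Each pipeline stage has its own type, even though it's all "text" underneath.
class StrippedText(str):
    """A str that is guaranteed to have no leading/trailing whitespace."""
    def __new__(cls, value: str):
        return super().__new__(cls, value.strip())

class DecryptedText(str):
    """Marker type: text that has already been through decryption."""

def decrypt(ciphertext: bytes) -> DecryptedText:
    # Stand-in for a real cipher; the sketch only cares about the types.
    return DecryptedText(ciphertext.decode("utf-8"))

def trim(text: DecryptedText) -> StrippedText:
    return StrippedText(text)

def send_email(body: StrippedText) -> None:
    print(f"emailing: {body!r}")

# Reads like the article's prose: decrypt -> strip -> email.
send_email(trim(decrypt(b"  hello world  ")))
```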
UI concerns need to be in service to the full set of requirements and the data model.
UIs are easily accessible to end-users and product-managers, and can allow people to focus on a subset of the requirements. The trap is to allow the UI perspective to direct the development process.
It is vital to set an expectation with customers that allows discussion about UI matters as part of requirements discovery, but where they expect it to churn. During early development UI should be rough and should churn constantly in response to changes of more foundational matters: the business requirements, the data model, concurrency matters, interactions with other systems and the deployment.
I think the author could be more concise and also confuses multiple things in the article. I'll provide just a couple of points:
- Patterns like "Pipes and Filters" and "Signals and Slots" are *not* related to the process of software development; they are about internal software architecture. It does not matter how much one iterates over the GUI during development with the client's feedback, the software still takes some input, processes it, and returns some output. Also, the way the article maps signals and slots onto inputs and outputs is weird: usually signals are processed by slots (this is the Qt framework terminology for events and event handlers in GUIs), so it is more natural to think of signals as inputs and slots as the things that produce outputs (see the small sketch at the end of this comment).
- From the same section:
> I don’t know if these patterns are in a book, or have a name, but if not, they are now in a blogpost
Oh yeah, it is good to write an article without trying to do a literature search first.
The last part of the article, which says that people need to feel things before they understand whether they like them or not, was good; but I guess all nontrivial things are done iteratively.
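As promised above, a minimal hand-rolled sketch of the Signals and Slots pattern (deliberately not using Qt itself, and with made-up slot bodies): the signal is the input side, and the connected slots are what ultimately produce the outputs.

```python
# A tiny Signals and Slots implementation: whoever emits the signal doesn't
# know or care which slots are connected, which is what decouples the GUI
# from the code that reacts to it.
from typing import Callable

class Signal:
    def __init__(self) -> None:
        self._slots: list[Callable[..., None]] = []

    def connect(self, slot: Callable[..., None]) -> None:
        self._slots.append(slot)

    def emit(self, *args) -> None:
        for slot in self._slots:
            slot(*args)

clicked = Signal()
clicked.connect(lambda: print("slot 1: save the form"))
clicked.connect(lambda: print("slot 2: log the click"))
clicked.emit()
```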
That "feels right" thing is "the quality without a name" from "The timeless way of building" by Christopher Alexander: it is "fitness for the purpose" or, perhaps, "being true to its own nature". It is both very real and very elusive.
Russian carpenters had a saying to the effect of "do the job without tricks, and let measure and beauty guide you". The primary skill here is not to make a thing, but to listen to what the thing itself is telling you. (See also "The Stone Flower" by P. Bazhov.)
I feel like building a house and programming are the only kinds of engineering where the customer can change the project halfway through and not get laughed out of the room.
Erh, the option with the house sounds rather expensive?
My day-to-day work involves professional building cost estimation software, and I would claim people do a lot of work to avoid having to do exactly that.
I'm not saying they don't sometimes end up doing it anyway, but in my perspective, the larger the scale, the more this is aggressively avoided as much as possible.
Similarly, I encounter a lot of comparisons where "we" software people are told to be either more or less like the building people.
What I do see though, is that building people perpetually tend to miss out on a lot of data optimisations / pipelines in the building project flow. They keep talking about wanting to do this, but in practice end up entering a lot of data from scratch multiple times. One of the culprits I see, is that the people who should have shaped the data for this, have no economic motive to do so - "why should we do that, it will only be a problem for X other people at a later stage we are not involved in".
It's weird that the author is bothered by the concept of waste being applied to software, because when people talk about waste in software development, one of the main forms of waste is inventory: the effort put into building software that has not yet been used.
Or, in the article's terms, things you've built but have yet to receive the feedback "that's shit" so that they can be iterated on.
Modern web UIs and the tools to create them are so bad that billion-dollar companies (e.g. Figma) emerged to make an entirely separate system for making non-functional UIs.
This is a similar situation to when websites would be designed in Photoshop and then translated into "pixel perfect" HTML.
I agree the better analogy is that software itself is the factory. We should aim to create lean software (well factored into simple, reliable, modular components dealing with manageable chunks of data at a time).
Lean manufacturing doesn't really imply much about the day-to-day work of the factory designers and their interactions with their stakeholders, except to say that when bugs (or inefficiencies) happen a developer should fix them to get the "factory" moving again.
Which is a different story to "how do you design a greenfield factory?" and "how do you design the widgets produced by the factory that will entice consumers to buy them?" and many other important aspects. If we compare to Toyota, your software team is responsible for designing the cars, building a factory from scratch for said cars, running the factory and getting cars out the door, improving the cars based on consumer feedback, improving the factory based on bugs/inefficiencies/internal feedback, while making all of the above profitable. It's a whole range of responsibilities and tasks that need to be managed differently.
I think the agile approach to iterative building is kind of obsolete with AI. There is no "12-step agile fast process" with all stakeholders involved. Instead you get experts throwing slop over the wall at stakeholders, to see what sticks.
I made a web service recently. To help me debug and test results, I asked AI to make me a simple CRUD web UI in Vue. The customer liked it, and it was kept in the final version.
This UI was not even a prototype. There was no request, ticket or problem to solve. I just needed it to fix another problem, and it was kept as a bonus.
The faster you get real user feedback, the better. Decades ago, that meant coding first -- which sucked.
So we evolved.
Wireframes, pixel-perfect designs, clickable prototypes -- tightening the loop and cutting costs at every step.
Today, tools like Figma make that process even faster and more accessible. Build it in Figma, using UX-approved components and brand-approved styles, and you get something ready for feedback -- fast. (Plus, you save developers from wasting time coding something just to find out it’s wrong.)
Every front-end project should start with a clickable, usability-tested prototype before it ever hits a dev's backlog. It’s not rocket science. Skipping this step isn’t "moving fast," it’s just wasteful.
Absolutely agree — getting real user feedback early is everything. Tools like Figma have been amazing for that.
For folks who are still in the idea exploration phase or want to rapidly prototype low-fidelity flows without getting bogged down in design details, https://Wireframes.org has been super useful. It combines traditional drag-and-drop wireframing with AI-generated layouts from simple text prompts, which helps get something testable in front of users really fast — even before pixel-perfect designs are needed.
It’s a great way to tighten that feedback loop even further, especially for solo founders or early teams.
I get the frame but I don't think arguing the co-opting of Cockburn by the MBA crowd gets us anywhere.
Think about it. GUI - Graphical User Interface - a concept taken from HCI Human Computer Interaction. I think that describes Peek and Poke in BASIC pretty well 50 years ago though nobody attributes those to Dartmouth. It also describes AI at present around the world.
But HCI is lossy. Why?
Exploding n-dimensional dot cloud vectors of language leveled by math are exactly why I fear that GUI should have died with CASE tools as a hauntological debt on our present that is indeed, spectral.
The world doesn't need more clicks and taps. Quite the converse: less. Read Fitts. You don't run a faster race by increasing cadence. You run a faster race by slowing down and focusing on technique. Kipchoge knows this. Contemplative computing could learn too but I'm not sure waiting on the world to change works.
Imagine a world where we simply arrived at the same kind of text interfaces we enjoy now whether they benefit from the browser or are hindered by it. We just needed better, more turnkey tunnels, not more GUI! We sort of have those from meet:team:zoom, but they suck while few realize why or can explain the lossy nature of scaling tunnels when many of us built them impulsively in SSH decades ago for fun.
The present suffers from the long-tail baggage of the keyhole problem Scott Meyers mentioned twenty years ago. Data science has revealed the n-dimensional data underlying many, if not most, modern systems given their complexity.
What we missed is user interface that is not GUI that can actually scale to match the dimensionality of the data without implying a 2D, 2.5D, or 3D keyhole problem on top of n-dimensional data. The gap from system-to-story is indeed nonlinear because so is the data!
I'd argue the missing link is the Imaginary or Symbolic Interface we dream of but to my knowledge, have yet to conceive. Why?
It's as if Zizek has not met his match in software, though I suspect there's a Bret Victor of interface language yet to be found (Stephen Johnson?), because grammatology shouldn't stop at speech:writing.
Grammatology needed to scale into Interface Culture found in software's infinite extensibility in language, since computers were what McLuhan meant when he said, "Media" and I'm pretty sure "Augmentation is Amputation" is absolute truth if we continue down our limited Cartesian frame - we'll lose limbs of agency, meaning, and respond-in-kind social reciprocity in the process, if any of those remain.
The very late binding (no binding?) we see in software now is exactly what research labs were missing in the late sixties to bridge from 1945 to 1965 and beyond. I can't imagine trying to do that with the rigid stacks close-to-metal we had then.
I hope I'm not alone in seeing or saying that the answers should be a lot closer-to-mind now given virtualization from containers to models and everything in-between.
A tight iteration loop is the only solution I've seen for building high quality UI/UX. Embrace that it's going to be terrible the first few times and plan for this.
We could save on cost per iteration by avoiding the layout and styling pass until the very end. These interfaces are about communicating information. Plain text and basic form submissions work just as well as anything else when you're still trying to design site map and purpose of each page.
There's a direct correlation between the customer-developer distance and the number of iterations required to achieve an acceptable result. You can dramatically reduce the churn by requiring daily builds or screenshots to the customer. If your development process can't support the idea of iterating like this, you need to find one that can. If the actual customer can't handle that amount of traffic, create internal proxies or advocates.
The best functional specs for a UI have come from business people building their own in excel. At a minimum they will have some idea of layout and what information should be displayed on which screens.
This also lets them focus on the core information, not the styling. Building GUIs on the web, there are infinite ways to style every UI element and more attention goes to that then the actual interaction.
I'd disagree with that. People build wild shit in Excel that only they can understand. It makes sense to them because they built it. Nobody else can use it.
Not all spreadsheets are like this but there are absolutely no guardrails to prevent it.
Building a good GUI takes thoughtful design by someone who understands what makes a good GUI and what the goals of the interface are from the user's perspective. Someone who can make it look like what the user is already used to, even if that isn't "beautiful" UI or doesn't follow the latest trends in whitespace and widget appearance.
>I'd disagree with that. People build wild shit in Excel that only they can understand. It makes sense to them because they built it. Nobody else can use it.
I think the parent comment is talking about using excel as a grid based layout tool to show how they want the app to look, not implying that you should build GUIs based upon the convoluted stuff people build in excel to avoid having a dedicated app.
> People build wild shit in Excel that only they can understand.
If these people are the customer, then a wild shit xlsx file is perhaps one of the better possible scenarios for requirements gathering.
I've been in this exact situation. Client provides their current workflow, implemented in a spreadsheet.
The problem is they make significant concessions in their design to fit the tabular model of spreadsheets. It can be really warping, not only to GUI but also the underlying data model. Then you show them what a relational data model is capable of, and (hopefully) blow their mind.
When you get to relational models, the amount of UI you can quickly build around the django admin is staggering... if you stay in the guardrails.
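To make that concrete, a rough sketch of how little code a usable relational CRUD UI needs if you accept the admin's defaults; the Order/OrderLine models and field names here are hypothetical:

```python
# admin.py -- hypothetical Order/OrderLine models; the point is how little
# code a workable relational CRUD UI needs inside the admin's guardrails.
from django.contrib import admin
from .models import Order, OrderLine


class OrderLineInline(admin.TabularInline):
    model = OrderLine
    extra = 0


@admin.register(Order)
class OrderAdmin(admin.ModelAdmin):
    list_display = ("id", "customer", "created_at", "total")  # change-list columns
    list_filter = ("status", "created_at")                    # sidebar filters
    search_fields = ("customer__name",)                       # free-text search box
    date_hierarchy = "created_at"                             # drill-down by date
    inlines = [OrderLineInline]                                # edit lines on the order page
```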
>thoughtful design by someone who understands what makes a good GUI and what the goals of the interface are from the user's perspective.
This is critically overlooked too many times.
>People build wild shit in Excel that only they can understand.
I resemble that remark ;)
In one respect, that's what excel is really helpful for.
With no other GUI, the default of a mouse on an electronic spreadsheet is one of the oldest and most familiar to those who need wild math to be accomplished immediately without delay, in spite of its drawbacks. And quite popular, most likely by "default".
Remember before they had a gooey all they had was an ooie.
Should have seen what it was like before people had a mouse ;)
How about back when almost all prospective users wanted computerized calculation abilities, but computers were so uncommon none of them had ever used a computer (other than a first-generation game console), yet. They were of course well aware of what computers could do but wouldn't be actually touching one until sometime in the (very near) future.
They were looking forward to it which was a good sign, but when you handed one to them, the ideal situation was if they could simply be directed to the power button on the device. Everything else needs to logically follow and be completely intuitive to those familiar with the domain, with no further guidance or support from the author. Budding operators who were absolutely computer illiterate must be able to get it right the first time. Would you settle for anything less when it's somewhat confusing industrial high-stakes computation under the hood?
That's just text but when you think about it, even the most complex logic & code might benefit from first making sure it can be well-navigated from a text-based UI, before adding the desktop & mouse to complete the "picture".
Remember, a text-based UI must ask the right questions, or there will be no correct response.
Any other UI which doesn't ask the same questions in one way or another, is unlikely to provide the same correct response.
> Building GUIs on the web, there are infinite ways to style every UI element and more attention goes to that then the actual interaction.
AKA bike-shedding, or "Law of triviality", just to put a name on this pretty common occurrence.
It’s way easier to build a GUI for one user than it is for millions of users.
It is also my experience that this is the way to go, and it also matches my theoretical view aka prejudices.
It seems to me that one of the things making this approach difficult is that we lack (design) tools that support iteration between design and development.
Modern tools make going from design to development easier, but it is still largely a one-way street. And one that's been made worse by recent trends towards building the UI in code, rather than from data. There are good local reasons for doing this, but it does seem to push even more strongly towards a waterfall-y development process (design does pretty mockups, throws over wall)
For a long while now, what seems to me a reasonably simple project (re-creating the UI of a drawing program and making it straightforward to re-create program state with annotations/highlighting) has been stalled because I can't find a design tool which works better than just placing the screen grabs in a vector drawing program and annotating by hand....
I'm about at the point where I'm going to just code everything up in METAPOST....
Makes me wish for Display PostScript again....
What made Display PostScript so great? I have only seen it mentioned in frothy marketing terms, not an in-depth discussion of its approach, why it worked well, and how that compares to other alternatives.
You have to understand or have experienced how fragile the early graphics programs were --- basically a .eps file would be a black box of PostScript code, and would have a pixel image preview used on-screen (a classic gag was to use a resource editor to change the pixel preview) --- whether or not it would actually print/image correctly was something one wasn't certain of until holding the actual output in hand, and even then, it only held for a specific PostScript rasterizer/job setting. Sure, it was okay 99.99% of the time, but that 0.01% could be disastrous.
Display PostScript meant that the system used for on-screen display was the same as for actual printed output --- I _never_ had a job on my NeXT Cube fail to print to match the on-screen version. Moreover, one could do cool DPS programming such as custom strokes and fills in Altsys Virtuoso.
These days, PDF and Quartz (née Display PDF) mostly address the reliability (and presumably it's gotten even better since I left the industry), but I miss the programmability. Hopefully, using METAPOST will let me get some of that back.
Thanks, I'm starting to understand. So Display PostScript was useful because it let you know that what you put on the screen as a programmer would be what's printed?
And it allowed for cool graphics effects like custom strokes and fills.
So that primarily matters for stuff you want to print. It wouldn't matter as much for assembling a UI.
How does Windows handle this stuff?
As a graphic designer/compositor/typesetter.
Windows uses (used?) WMF and pixels for on-screen display, which would then either be used for printout via some conversion process, or a parallel construction would be maintained for output; the need to keep that in sync would often result in slight differences in output --- maybe there were other approaches.
One of the neat things in Display PostScript was one could use Interface Builder to add a print button to any window to get the underlying PS code.
In theory, you could also capture screenshots as high-quality vector graphics.
In practice that's what you could do with HyperLook on NeWS:
SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS))
HyperLook was like HyperCard for NeWS, with PostScript graphics and scripting plus networking. Here are three unique and wacky examples that plug together to show what HyperNeWS was all about, and where we could go in the future!
https://donhopkins.medium.com/hyperlook-nee-hypernews-nee-go...
HyperLook SimCity Demo Transcript
This is a transcript of a video taped demonstration of SimCity on HyperLook in NeWS.
https://donhopkins.medium.com/hyperlook-simcity-demo-transcr...
Discussion with Alan Kay about HyperLook and NeWS:
Alan Kay on “Should web browsers have stuck to being document viewers?” and a discussion of Smalltalk, HyperCard, NeWS, and HyperLook
https://donhopkins.medium.com/alan-kay-on-should-web-browser...
Absolutely, since NeWS was largely the same concept, though with more intelligence on the PostScript side.
PostScript is great for user interfaces!
I've written lots and lots of user interfaces in PostScript. Just not Display PostScript. NeWS was a lot better for making interactive user interfaces than Display PostScript, and it came out a lot earlier.
https://en.wikipedia.org/wiki/NeWS
The Story of Sun Microsystems PizzaTool (entirely written in PostScript):
https://donhopkins.medium.com/the-story-of-sun-microsystems-...
PizzaTool PostScript Source code:
https://www.donhopkins.com/home/archive/NeWS/pizzatool.txt
I also worked on HyperLook, which was like HyperCard for NeWS with colorful PostScript instead of black and white pixels, plus networking:
https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-g...
Discussion with Alan Kay about HyperLook and NeWS:
https://medium.com/@donhopkins/alan-kay-on-should-web-browse...
Then I used HyperLook to implement the user interface for SimCity, programming the entire interactive user interface in NeWS PostScript -- Display PostScript couldn't do anything like that:
https://donhopkins.medium.com/hyperlook-simcity-demo-transcr...
Several years before that, I worked on the NeWS version of Gosling Emacs at UniPress, with multiple tabbed windows and pie menus. (Gosling also wrote NeWS, and much later Java.):
HCIL Demo - HyperTIES Authoring with UniPress Emacs on NeWS:
https://www.youtube.com/watch?v=hhmU2B79EDU
The source code for UniPress Emacs 2.20 recently surfaced! (We called the NeWS version of Emacs "NeMACS" of course.):
https://github.com/SimHacker/NeMACS
Here's the PostScript code of the NeWS Emacs display driver:
https://github.com/SimHacker/NeMACS/blob/main/src/D.term/Trm...
And lots of other fun interactive PostScript user interface code we shipped with NeMACS:
https://github.com/SimHacker/NeMACS/tree/main/ps
Pie menus:
https://donhopkins.com/home/archive/NeWS/win/pie.ps
Tabbed windows:
https://donhopkins.com/home/archive/NeWS/win/tab.ps
I used UniPress Emacs to develop an authoring tool for the NeWS version of HyperTIES, an early hypermedia browser, which we developed at the University of Maryland Human Computer Interaction Lab.
Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser:
https://donhopkins.medium.com/designing-to-facilitate-browsi...
HyperTIES Discussions from Hacker News:
https://donhopkins.medium.com/hyperties-discussions-from-hac...
I also worked on the Gnu Emacs 18 NeWS driver for The NeWS Toolkit:
https://donhopkins.com/home/code/emacs18/src/tnt.ps
A visual PostScript programming and debugging environment: The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — October 1989:
https://donhopkins.medium.com/the-shape-of-psiber-space-octo...
PSIBER source code:
https://www.donhopkins.com/home/pub/NeWS/litecyber/
NeWS was architecturally similar to what is now called AJAX, except that NeWS more coherently:
1) Used PostScript CODE instead of JavaScript for PROGRAMMING.
2) Used PostScript GRAPHICS instead of DHTML and CSS for RENDERING.
3) Used PostScript DATA instead of XML and JSON for DATA REPRESENTATION.
More on that:
SimCity, Cellular Automata, and Happy Tool for HyperLook (nee HyperNeWS (nee GoodNeWS)):
https://donhopkins.medium.com/hyperlook-nee-hypernews-nee-go...
Here's a comparison of X-Windows and NeWS:
https://donhopkins.medium.com/the-x-windows-disaster-128d398...
Are there any available implementations of this?
I've long wanted to experiment w/ HyperLook (or a successor to it).
You'll need a SparcStation emulator to run it, if not a real SparcStation. I've resisted the temptation because there's so much new code to write that I don't have much time to run old code. ;) Although it would be fun to run it in an emulator 1000 times faster than it ever ran on real hardware!
Here are some links I've found:
Unix & Linux: How to emulate the NeWS window system?
https://www.youtube.com/watch?v=9ZhZqfC8sC4
Sun OpenWindows 3.0 (NeWS!)
https://www.youtube.com/watch?v=Kl08TvO0Bgw
Looks like this is running on a real Sun workstation, not an emulator. He shows RasterRap and The NeWS Toolkit demos, but not PizzaTool.
Good luck! Let me know if you get something working.
Not quite willing to dig out my Sparcstation 5 and find the mouse and matching metal mousepad...
Was more hopeful that someone had created a new implementation of that environment.
Let's turn this around --- for vector-graphic oriented development work what current environment would you recommend trying?
Is there any place where one could look up screenshots of this? I'd want to see what is capable with this technology
People, you should be paying careful attention to this. You could be living 40 years in the future. Instead, we're where we are and rapidly ossifying.
I'd like to find a nicer development environment which made use of such options.
Apple killed off HyperCard; Runtime Revolution became LiveCode, which went open source, then closed source, and is now only available for monthly license fees.
PythonCard never got to 1.0 and hasn't been updated in almost two decades...
I'm currently doing all my development in:
https://pythonscad.org/
(Python-enabled version of OpenSCAD) but the only user-facing options are the Customizer which is quite limited, and a repository of files which users can access --- unfortunately, trying to bring up a canvas or UI window crashes the app.
This has been my experience as well.
Something about the human brain just makes it very bad at observing a mocked-up screen layout and understanding how well it works in practice. Apply that to an entire application with multiple functions and the problem increases exponentially.
Experience helps speed things up. But rapid iteration with a fast feedback loop is the best practice. Design is not doing the start of the loop, it’s doing the entire loop. Repeatedly.
Predicting video from a still image is a fraught task. Predicting interaction from a linear video is another fraught task.
It would be surprising if someone had the ability to do this.
The fact that frequent, repeated contact with the customer isn't the norm is why so many interfaces suck and so many engineers couldn't design a decent one with a gun to their head (although, frankly, that level of stress might not induce thoughtful design patterns).
Instead engineers get hit with micro view after micro view, and they build it using test flows that don't mimic the real world, and then they all tie it in to create a tangled macro view that's a shit show for the user.
I've been working to bring recurring Shadow Sessions to my workplace by creating a basic scheduler (which is really the pain point at scale) that just puts you and somebody who works in the tooling you're building (we're internal tooling) together every three weeks. The feedback is overwhelmingly positive and we're working to expand the functionality a bit.
So, all you out there who want a nice win: set up a little scheduler and get your Product, Design, Engineers, Managers, and TPMs into rotating sessions with actual customers at a lightweight pace with minimal asks. It creates greater empathy, which translates to all of us potentially ending up with better software in the world as a whole.
A fella can dream.
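For the curious, a minimal sketch of what such a rotating shadow-session scheduler could look like; the names, the three-week cadence, and the pairing logic are assumptions, and actually sending calendar invites is left out:

```python
# Rotate team members through shadow sessions with user groups every three
# weeks. Pairing only; sending invites is out of scope for this sketch.
from datetime import date, timedelta
from itertools import cycle

team = ["dev-alice", "pm-bob", "design-carol", "tpm-dave"]   # hypothetical
users = ["warehouse-ops", "billing", "support"]              # hypothetical

def schedule(start: date, rounds: int, cadence_days: int = 21):
    """Yield (session_date, team_member, user_group) pairings."""
    user_cycle = cycle(users)
    for i in range(rounds):
        session_day = start + timedelta(days=cadence_days * i)
        member = team[i % len(team)]
        yield session_day, member, next(user_cycle)

for day, member, group in schedule(date(2025, 1, 6), rounds=6):
    print(f"{day}: {member} shadows {group}")
```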
When I do GUI work, it's usually for hobbyist or internal projects. I value quality UI/UX extremely highly. I often get 'paralysis-by-analysis' here because I try to do the design and development in a single, synchronous pass. Your comment about tight and rapid iteration being the only solution resonates with me.
One 'trick' I discovered recently was to completely ignore UI design and focus on _formatting_ instead -- the placement of elements in a proper and usable way, saving the visual aspects of design (widget design, margins, padding, color, animations, etc.) until the very, very last step.
My hypothesis is that "good UI" = "good page formatting" + "pretty UI elements".
Any thoughts on this approach?
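One way to read the "formatting first" idea as code, sketched with stock Tkinter widgets and a plain grid (the form fields are made up): placement and grouping only, with the styling pass deferred entirely.

```python
# Layout-only pass: default widgets, no colors, fonts, padding tuning, or
# theming -- just deciding what goes where and what is grouped with what.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.title("Order entry (layout draft)")

form = ttk.Frame(root)
form.grid(row=0, column=0, sticky="nsew")

fields = ["Customer", "Item", "Quantity", "Ship date"]
entries = {}
for row, label in enumerate(fields):
    ttk.Label(form, text=label).grid(row=row, column=0, sticky="w")
    entries[label] = ttk.Entry(form)
    entries[label].grid(row=row, column=1, sticky="ew")

buttons = ttk.Frame(root)
buttons.grid(row=1, column=0, sticky="e")
ttk.Button(buttons, text="Save").grid(row=0, column=0)
ttk.Button(buttons, text="Cancel").grid(row=0, column=1)

root.mainloop()
```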
>GUIs are built at least 2.5 times
Unfortunately what is more often needed is 3.0+ and far too many fall short :\
>"good UI" = "good page formatting" + "pretty UI elements".
Nice to have, icing on the cake, but what I need if it was to be mission-critical is at least 10x better workflow than average these days.
As a child, before I had any concept of software, I just wanted to get something worthwhile out of electronics itself.
I'm so old that most adults didn't have a concept of software either in those days. "Software awareness", that it even exists as an entity of its own has by now proliferated by many orders of magnitude like most other things do not.
One thing that's stood the test of time, if you can make the electronics do something it wasn't doing before, well that might just be remarkable. Maybe even game-changing. Maybe even like never before.
Sometimes you program, sometimes you don't.
In the right balance it can end up quite a system.
Decades ago for my own purposes I separated the UI from the rest of the code, and this was of course a monolith with line numbers. The equivalent of punch cards, but when you think about it the UI could be in the final 25% of the deck of cards, and quite easily physically replaceable in that media form factor. Plus, if you're transparent about it, it can really come in handy sometimes to deal from the bottom of the deck. GOTO can easily be your friend if you know how to accommodate each other ;)
But code also doesn't necessarily have to have any electronics involved.
Software alone can be considered more independent of constraint by a "system", because it can be so abstract.
Doesn't have to be so abstract, but that is a serious option sometimes.
The ultimate would be pure software which is not part of any other "system" at all.
I'm so out-of-date I'll probably just stick with the electronics ;)
>Any thoughts on this approach?
Sorry, my head's a blank ;)
Yes, this is part of Information Architecture
https://en.m.wikipedia.org/wiki/Information_architecture
I agree. Layout and styling should be completely decoupled and made orthogonal to one another. Basically, by default styling should exclusively be theming.
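As a rough sketch of "styling is exclusively theming", assuming a Tk/ttk UI: the layout code never changes, and every visual decision lives in one style object. The palette and the custom style name below are invented:

```python
# The layout code stays untouched; every visual decision lives in one place.
# Style names follow ttk conventions; the palette is made up.
from tkinter import ttk

def apply_theme(root):
    style = ttk.Style(root)
    style.theme_use("clam")                  # one of the built-in ttk themes
    style.configure("TLabel", padding=4)
    style.configure("TEntry", padding=4)
    style.configure("TButton", padding=(8, 4))
    style.configure("Accent.TButton", foreground="white", background="#2d6cdf")
    return style
```

Swapping the look of the whole application then means editing (or replacing) this one function, not touching any placement code.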
Very interested to hear about a high quality UI as I have never encountered one
CAD software. There are CAD vs CAD esports: https://www.youtube.com/live/C1CqIcfDKbQ?t=1987s It seems dominated by SolidWorks, but other programs (NX, Fusion 360...?) are allowed.
The bloomberg terminal UI.
Some emacs setups. (much more variability than VIM)
Some VIM setups.
The thing that all of these have in common is that they are designed for experts, not for every user. Also each one of those is a custom full platform app (or primarily text based) vs web app.
Conflating beginner tools and expert tools (user-friendliness) is usually where everything goes wrong. For most people, WordPad was enough; Microsoft Word was for when you needed a bit more control. But an expert tool is Adobe InDesign, where you have the maximum control. And the UI is quite different.
Same when learning to code: a basic text editor like GNOME's Text Editor or Nano is all you need. But an expert will reach for IntelliJ for their project because their needs are more complex.
Another common conflation is graphic design vs application design. Fusion 360 has strong graphic design and meh application design.
Apple used to be very good at this, especially in the pre-iPhone OS X era.
iTunes: https://discussions.apple.com/content/attachment/192853040
Preview: https://www.intego.com/mac-security-blog/wp-content/uploads/...
Garage Band: https://inside.wooster.edu/technology/wp-content/uploads/sit...
Is that last one actually Garage Band? I used to use it a very long time ago and I don't remember it ever looking like that. It does, however, look basically the same as Logic does today. I'm not sure if I'd consider it a good GUI or not.
I would argue Windows Forms somewhere between 3.1 and 95 more or less nailed it.
It's boring, but it's clear.
You got downvoted for the snark, but damned if it ain't a reasonable opinion.
If you read the seminal "Design of Everyday Things" by Norman Rockwell you'll come away annoyed at half the physical _doors_ you walk through... here in 2025.
I've been pushing these terms to help us talk about and design better interfaces at work...
Static Interfaces - Your supermarket's pretty much a static interface. The frame of whatever website you're looking at. These are static. They're very powerful and were pretty much all you had before digital interfaces became ubiquitous. There's an initial learning curve where you figure out navigation, and then for the most part it's fairly smooth sailing from there, provided the controls are exposed well.
Adaptive Interfaces - These interfaces attempt to "adapt" to your needs. Google is probably one of the most successful adaptive interfaces out there. A query for "Shoes" will show a series of shopping results, while a query for "Chinese food" will show a map of the restaurants nearby. The interface adapts to you.
I call this narrow adaptive because the query triggers how the UI adapts. I think "wide area" adaptive interfaces where the interface attempts to meet your needs before you've had a chance to interact with the static interface around it are tremendously difficult and can't think of examples of them being done well.
Adaptable Interfaces - This last interface bucket includes controls which allow a user to adapt the interface to their own needs. This may include dragging icons into a particular order, pinning certain view styles or filters, or customizing the look or behavior of the applications you're working with.
Finder, the iPhone's basic UI, terminal, basic music catalog management (e.g. iTunes)... these are interfaces which are created once with an initial curve of varying difficulty to learn and then live on for decades without much change.
Conclusion - The best interfaces combine an intuitive static frame, with queried adaptive elements, and adaptable features to efficiently meet the needs of a diverse group of user flows instead of attempting the one size fits all approach (which leaves 2/3rds of people annoyed).
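A toy sketch of the "narrow adaptive" case described above: the query, not the user, decides which static layout gets shown. Real systems use intent models; the keyword rules and layout names below are purely illustrative:

```python
# Narrow adaptive dispatch: classify the query, then hand off to one of a
# few static layouts. Keyword rules here are only for illustration.
def classify(query: str) -> str:
    q = query.lower()
    if any(word in q for word in ("buy", "shoes", "price")):
        return "shopping"
    if any(word in q for word in ("food", "restaurant", "near me")):
        return "map"
    return "documents"

def render(query: str) -> str:
    view = classify(query)
    layouts = {
        "shopping": "grid of product cards",
        "map": "map with nearby results",
        "documents": "plain ranked list",
    }
    return f"{query!r} -> {view}: {layouts[view]}"

print(render("running shoes"))
print(render("chinese food near me"))
```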
Small nitpick, "Design of Everyday Things" is written by Don Norman.
Another category, searchable interfaces, may fit into one of these or may be its own separate category. But tools like macOS Spotlight or the command palette in some editors are very useful for power users. Having every command available through a minimal set of fuzzy keyboard strokes is a significant productivity boost, while also allowing some degree of discoverability.
As an aside, if anyone at Adobe is reading this, this sort of tool would be an excellent addition to Illustrator, Photoshop, etc. InDesign already has something like it, although that implementation leaves a little to be desired.
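For a feel of why this works, a minimal sketch of the fuzzy-matching core of such a command palette, assuming simple subsequence matching over command names (real palettes add scoring, recency, and keybinding hints; the command list is invented):

```python
# Subsequence match: every character of the query must appear, in order,
# somewhere in the candidate string.
def fuzzy_match(query: str, candidate: str) -> bool:
    it = iter(candidate.lower())
    return all(ch in it for ch in query.lower())

COMMANDS = [
    "File: Save All",
    "Edit: Toggle Comment",
    "View: Zoom In",
    "Object: Expand Appearance",
]

def palette(query: str):
    return [c for c in COMMANDS if fuzzy_match(query, c)]

print(palette("tgc"))   # -> ['Edit: Toggle Comment']
print(palette("zi"))    # -> ['View: Zoom In']
```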
Or you can focus on a single class and produce the best UI for that class.
Static Interfaces for the common actions that everyone does. Best as basic utilities in the operating system (Notepad, The calculator)
Adaptive Interfaces where you have a few advanced layouts for people that want a bit more. (WordPad, Notepad++, Kate,...)
The expert tools (Blender, matlab, Adobe Illustrator,...) You will have a small userbase, but they're often willing to pay for a good tool that will solve their needs.
Outside-in development helps here too.
> A tight iteration loop is the only solution I've seen for building high quality UI/UX. Embrace that it's going to be terrible the first few times and plan for this.
The problem I have seen is that all the GUI toolkits weld themselves way too hard to the code.
Consequently, when you want to adjust the UI, you always have to rewrite big chunks of the code.
This part is genius:
I have similar experience. I think the real issue with GUIs: You have technical people building something (mostly) for non-technical people. Imagine developing a GUI for an internal app that the purchasing or accounting department uses. Most of your internal customers are non-technical. They don't think like devs. Plus, many devs have awful communication skills, especially with non-technical users, so large gaps in expectations can emerge.
The best experience I have ever seen: have the internal customer team hire a semi-technical fresh grad. They are the primary dog-fooder of the new app. Force them to do their job only using the new app (as much as possible). They give lots and lots and lots of immediate, direct feedback to the devs. You can slowly iterate to something reasonable. The secret that makes mid-level managers upset: don't plan; allow it to be built organically, especially if your audience is internal.
Another thing that I have noticed: Some people are just way, way, way better at designing and implementing GUIs. I have no idea how to filter for these people, but "you know it when you see it".
The issue is that step 2 is wrong. Step 1 is to make a design, step 2 is to test it. Make a paper prototype and have your customer simulate working with it. If you feel fancy make a pretty prototype in Figma.
If you have a good designer and a cooperative customer you can even combine steps 1 and 2 with a whiteboard prototype. Ask the customer what a common task is. Draw the interface on the whiteboard and ask the customer where they would click or interact, then draw the next UI state, and so on.
After a couple of rounds of iterating that way you can start writing actual code. You will still need a couple of iterations once people can try the actual software, but you will have a much better starting point.
> The issue is that step 2 is wrong ... step 2 is to test it.
But of course the best and only real way to test it is to test the real thing...so build it. Back to step 2. :-/
Recurring theme in pre-press: get the texts, get everybody to proof-read the texts, get customer sign-off, do on-screen proofs of the layout, everyone signs off, do print proofs, do proof-printer proofs, do a press run. Everybody signs off. Print run of 120000 copies. Typo in the first sentence on the first page, present all the way back.
My idea is to make building real things about as cheap as creating a click-dummy or paper prototype. How's the old saying? "The merely difficult we do immediately, the actually impossible might take a while" ;-)
I think these kinds of tasks, where many people are asked to review but nobody owns it, are the problem. Everyone assumes the others reviewed it, so each person only reviews it superficially.
You could give $100 per typo found and I bet that first page would be caught.
The problem is that it isn't their job to review it. They have their own deadlines, and their effort reviewing your prototype won't show up on their performance review.
Unfortunately, superficial feedback like typos is the least valuable feedback possible when creating a new design. What you really want to know is whether the design is actually feasible, whether it introduces new pain points, whether it is better than what it is going to replace. That's the sort of thing people will notice internally but not necessarily give voice to when they spend three minutes glancing at a prototype or initial build.
Not really, no.
It's that we humans are really, really bad at reviewing something carefully that we know is not the "real" thing.
Not just step 2, step one is to gain a clear understanding of user needs, different use cases, possible flows...
Now you're taking the whole design and development process and calling it "step 1", creating that parody "Waterfall" system.
I don't understand how you read gathering information as design and development.
If you have no idea what you want to do you are just "vibe" designing and coding.
Starting with gathering information doesn't mean you can't have an iterative process, where you gain a better understanding with each cycle.
I'd add that you have to be careful with this approach that you don't just outsource the design to the customer.
Customers give valuable feedback, but it's rarely a good idea to implement their ideas as-is. Usually you want to carefully consider problems/friction/frustration that they bring up, but take their suggested solutions with a grain of salt.
This can be harder than it sounds, because customers who give the best feedback are often very opinionated, and you naturally want to "reward" them by including exactly what they ask for.
Yeah, step 1 is wrong too. The article goes into that.
You can't design an interface based on a partial feature-set, you need full interaction modes specified beforehand. You can't finish the details before you implement it, or even before you test. You can't have a committee of "everyones" to like it before you test.
Combining the steps isn't something "you can do", it's the one way that works.
* Step 2: design is "tested" with the users, later we find out the users really had no idea what was going on because they weren't really paying attention. Then the real product is delivered and they are shocked that the changes were made behind their back and without their input.
>The issue is that step 2 is wrong.
Step 1 could even be wrong if making detailed drawings turns out to be wasting time by making detailed drawings :\
But OTOH you could say up until step 14 it's not as disappointing as it could be.
Then it ends with the devs being pissed, that's probably why it's only 2.5 instead of the full 3.0 :)
There is an alternative ending but you would have to be a whole lot more optimistic.
13. Nobody’s happy. But nobody hates it.
14. That's better'n They Do Not Love It innit?
15. Progress is being made.
16. The devs are energized for more.
17+. . .
Next round a couple people love it. But just a couple.
Do over again, and almost everybody is happy.
One more time and you've really got something there!
~35+. Champagne 6.0 showered on devs.
It could happen . . .
UX is hard. What usually happens is that you have:
- developers who are bad at UX writing the software and improvising with all the use cases that were not adequately specified in the design document
- end users who are bad at UX giving their input, but can tell when something feels right for their use case
- managers/spec writers who are bad at UX and are trying to translate the wishes of end users to the developers
The result is a worthless spec, garbage in, garbage out.
Sometimes, when the stars align, in your team you get someone who is actually good at UX. I am told this happens, because in my career I have never seen such a unicorn. And even in that case, the end user with no understanding of UX, the one that actually pays for the project, might really, really want that periwinkle blue button, so in pretty much all cases GUIs get through countless rewrites and tweaks.
I've found that most teams are hesitant to show backend code, API calls, etc. during demos.
Screw that! Show your product owner and stakeholders the complexity of the backend _in conjunction with_ the ease of the front end.
The underlying note is "here is what you'll do manually and without support if you don't want the UX as it is now."
Showing the code won't do anything for non-technical people. They'll nod along politely and file it under "they did the job they were hired for".
Plus, it feels a little too passive-aggressive to me.
>I think the real issue with GUIs: You have technical people building something (mostly) for non-technical people.
As a disgruntled power user, I think we have more of the exact opposite problem: the 'common denominator' users are driving interface development toward oversimplicity. We have good software which continually undergoes translation into the GUI equivalent of Newspeak. No please, I beg of you: don't spiritually reduce your program to some singular green [GO!] button! KDE 3.5 > 4+, GNOME 2 > 3+. Mozilla has a long habit of option removal (rivaled by the GNOME project). If today's interface designers had been in control 60 years ago, we would never have gotten the Unix paradigm (it would be "too complicated" lol).
There should be a bit of elitism in this. Some things are hard to do, not all software should turn all hard things into 'easy', 'simplified', singular [GO!] buttons.
I think of these steps as: designed by project managers who don't understand the user's work situation and don't understand the technology.
> customer team hire a semi-technical fresh grad
This is a person with two legs, one in each camp. This is the optimal solution and beats any one legged person.
It's unfortunate that so many organizations just don't like two legged people, they want you to go into a single box.
I refer to it as "designed by programmers". It's so prevalent that the Silicon Valley TV show used this as one of its story lines.
When I worked for a consultancy in the early internet days I often repeated "the customer doesn't know what they want until they see what they don't want". People weren't used to web GUIs and there weren't the common patterns we see today, so it wasn't until they had a half-working version that they could actually give feedback; getting early feedback was essential to reduce the frustration of building something that is immediately changed. It still applies today, but less because of usage patterns and more because of missing requirements the customer forgot to tell you about.
Conversely, I've seen way too many designed-by-designers GUIs and UX flows that can be best characterized as a glittery, polished turd. They sure look nice in Figma but often only have one well-specified happy path from which the end user will surely deviate. Managers will easily greenlight broken designs because they can only see the visuals, not the complete picture.
If you find a talented designer, they are valued so much they will only get assigned to design tasks. If you have a rockstar 10x developer, their mind just cannot comprehend the average 0.1x end user.
What you need is someone who understands design but dislikes it enough to not focus on aligning single pixels or fixing the kerning. They need to be able to code but hate it with passion, because hard-working programmers create software for hard-working end users and the end user is not hard-working.
That's because "UX" designers need to build according to actual usability engineering guidelines, not just built what "looks good". The 1995-2005 feels like it was the golden decade of this sort of thing.
The whole damn point of calling it "UX" instead of "UI" is to stress that "looks good" is not actually the important part.
I'd argue that this failed though because UX really just means "design for maximum engagement".
Which means "keep the time between them using the UX and opening their wallet to a minimum" because that's the only place businesses care.
No one is building a delightful UX for running IR spectrometers.
It is really SEX. Shareholder's Equity Experience.
And yet, at least in my personal experience, it is only since I started hearing the term UX that I felt UIs started going downhill :-P
A lot of UIs are made to get a promotion. Others are made by numerical optimization of the funnel, which is even worse. The best UIs come from incorporating lots of actual user feedback, and then it almost doesn't matter if they're built by programmers or designers.
The "one happy path" idea is what you want, though. Non-technical users use software through rote muscle memory (it's why in 25 years as a sysadmin I've had thousands of "there was an error message" reports and precisely 0 of those people actually read the error message to report it to me: not in the happy path means user simply shuts down).
The problem becomes that people try to make software too complicated to have one happy path. This is the road to perdition.
An error message is an automated bug report for the developer. I don't know why you think the user is supposed to care about it, or even see it. Are you paying the user to develop the software?
> "the customer doesn't know what they want until they see what they don't want".
Like all good NP-complete problems, the issue of making a good GUI is easily verifiable by a customer, but they cannot tell you what a good GUI is.
I think there's a figure mostly absent in these processes. Designers and devs are living in their bubble/silo and don't think like a user.
It's very rare to find someone who can understand design, UX, and code and put it all together into a cohesive vision. In my experience, if you have the UX right from the start, then the rest becomes much easier. UX is the foundation that will dictate how it has to be designed and programmed.
Looks like good insight to me.
But also in so many cases it needs to be made to work in reverse :\
Even when the pure logic or core process is where the real magic happens, and makes the app what it is, if interfacing with a user is very important you could also say that the UI is the actual heart of the program instead, which ends up calling that wonderfully unique code that may be different than anything else. Or anything else that has ever had a UI before.
But you need a "more common" interface that can accommodate optimized workflows of some kind or another, in enough ways for target mainstream users to make the most of the unique parts of the core creatively-engineered code, enjoying the most easily gained familiarity at the same time. With at least as usable a UI as they are accustomed to if not better.
Once that's all said & done is when I think it would be best for the creative artists to bring it up to meet visual objectives, with carefully crafted content curation, and run that by all kinds of ergonomic testers.
If they come up with any engineering shortcomings I would listen to them, even if it's nothing like an actual deficiency or true defect. There should be some pretty tight functioning and I think that would happen less often.
> if interfacing with a user is very important you could also say that the UI is the actual heart of the program instead
If it's meant to be used by humans, then yes. The experience should be the north star.
That doesn't mean that you have to sacrifice everything in favor of this. Obviously you need someone with enough technical knowledge to understand how to balance all the priorities.
Edit:
And by experience I don't only mean the UI per se. Also the performance of the whole system etc.
The main thing that pisses off devs is changing requirements. Unless the devs get a free pass to rebuild everything from scratch.
Like software, requirements can never be perfect. Overly prescriptive requirements are a huge red flag to me that a PM/client/designer is doing an engineer’s work, or micromanaging.
Nobody said requirements need to be perfect or never leave room for free interpretation.
Arguably, lots of UIs get worse with every iteration of redesign.
- Windows GUI went downhill from Windows 7 (or even XP) with every release.
- Outlook went from good, to fair, to annoying, so that I finally replaced it as my personal client.
These are not the only examples I could name, but they are the most prominent. I think the main problem is that both technical staff and UX designers are trying to make something "new" or "fancy", which is in most cases the opposite of something usable. E.g. Aero was fancy, but it took away that my active window had one signal-color header bar while all others were tamed. Now all windows are colorful and yelling at me at the same time. Orientation is gone.
And after that UIs got even more "fancy".
Step 13 ("Nobody's happy but nobody hates it") is the plateau when everybody is to tired to keep on fighting - a compromise, not the state of the GUI reached anything acceptable. It is not fancy enough anymore for developers and UX designers to be proud of but at the same time and is still annoyingly bad for the users.
About Outlook: Are you talking about the Win32 desktop client or the M365 web app? If the desktop client, what has gotten so much worse? And is there a better alternative to the Exchange calendar? I have not seen one in my experience at mega corps.
I think this is a partial solution, but I have to push back on relegating that function of translator and tester to a "fresh grad". That is ideally the exact role of a Product Manager today: the go-between and translator with vision who can manage customer/client expectations while also adequately communicating technical concepts, producing the initial task breakdowns, and running interference for devs, i.e., dishing out conditional nos.
This function is extremely critical, yet in my opinion it is not valued. The business/client side thinks that's what devs are for, and devs think they're just more management, until they've learned that (please excuse the sports metaphor since it's not something I do, but it seems fitting) Product Managers can be the defensive line as well as the quarterback for the running backs, converting the coach's strategy into wins and cheering crowds instead of boos and disappointment all around.
The difference? Fresh grads are much cheaper than experienced PMs. I always say: Don't hire PMs; hire better devs (who, when necessary, can wear the hat of a PM). To be clear: My example is specifically talking about internal software development, and I have seen this strategy work at multiple companies. Creating an external product for B2B or B2C is very different.
Internal SW dev can work with a lot less overhead and setting up direct communication between users and developers is reasonably simple. There is usually a 1:1 relationship between user roles and developers.
Published software ideally has many, many more licensees and you absolutely need rigid communication channels with various go-betweens (PM, marketing, support). Direct communication between devs and customers wastes too much of the developers' time. Especially the PM role becomes extremely important for product quality then. In the extreme, the product can only be as good as its PM.
Devs are expensive, and devs who can PM are incredibly expensive.
You are missing the point. What generates higher ROI: (1) dev + PM (separate people) or (2) a highly skilled dev who can periodically act as PM? In my experience, it is always (2). Any time I hear a senior manager complaining about "expensive devs", I ask them: "How do you balance cost and quality?" Most of them are stunned by this question and give a bullshit answer. The truth: almost all orgs are better off hiring far fewer devs who are very high quality, rather than many devs who are lower quality. I never worked for Amazon AWS, but the "pizza-sized" team thing is real -- from experience.
This was very hard to read and I wasn't even sure what the conclusion was. One thing I didn't understand: how does one disagree with agile dev processes, which are mostly built on the fact that many things, especially UX, can't be known in advance, so you have to build something small, get feedback, then either scrap it or improve it? The process described here sounds exactly like someone spending weeks if not months designing the GUI, then devs spending weeks or months implementing it, without any cross-communication, so it's kind of obvious it needs to be fully re-done so many times. People started switching to agile specifically to shorten the feedback loop and scrap bad ideas faster.
> so it's kind of obvious it needs to be fully re-done so many times.
But it hasn't really caught on in the management layer. Sure, they use all the right Agile buzzwords, but they still put features A, B and C into the plan, and ask questions like "when will B be finished?"
"Finished?". Nah - we're the stewards of 14 bugs-as-a-service. We won't so much "finish B" as much as we'll transition to becoming the stewards of 15 bugs-as-a-service.
This precisely. They treat development (programming) as the slowest part of the process, but that has not been my experience since Figma came out. I’ve not seen agile done right since it arrived, we’re just doing waterfall with sprints.
I have only skimmed the text, but regarding GUIs specifically, the list at the end is spot on.
With that being said, I firmly believe that all software (given that one is not already deeply familiar with the domain) is/can/should be written three times to end up with a good product:
1. Minimal prototype. You throw something together fast to see if it can be done, taking shortcuts and leaving out features which you know you will want later(tm).
2. First naive real implementation. You build upon the prototype, oftentimes thinking that there is actually not that much missing to turn it into something useful. You make bad design decisions and cut corners because you haven't had a chance to fully grasp all the underlying intricacies of the domain and the more time you spend on it the more frustrating it becomes because you start seeing all the wrong turns you took.
3. Once you arrive at a point where you know exactly what you want, you throw it all away and rewrite the whole thing in an elegant way, also focusing on performance.
(1) and (3) are usually fun, whereas (2) quickly becomes a dread. The main problem is that in a work context you are almost never allowed to transition from (2) to (3), because to an outsider (2) seems good enough and nobody wants to pay for (3).
"Plan to throw one away. You will anyhow."- Fred Brooks, _Mythical Man Month_
A software engineering book written decades before I was born- my college assigned us the 25th Anniversary Edition- and yet I re-read it every few years and find some new way to apply its lessons to my current problems.
"If you plan to throw away one, you will throw away two" -- Craig Zerouni, via Programming Pearls: Bumper Sticker Computer Science
https://moss.cs.iit.edu/cs100/Bentley_BumperSticker.pdf
Is that better or worse? (Let's shelve for a minute if worse is better.)
Maybe you will, but maybe not. Hence the title - 2.5 attempts sounds about right.
Personally, I’ve never found this lean methodology to work for me. I have a bit of a mantra that I’ve found works really well for me: “Put everything on the screen”.
Every feature, every variant, every possible configuration, and all future potential states. Don't care about how it looks or how it feels, just put it all there. Build out as much of it as possible, as fast as possible, knowing it will be thrown away.
Then, whittle away. Combine, drop, group, reorganize, hide, delete, add. About halfway through this step it becomes clear what I really should have been striving for the whole time—and invariably, it’s a mile away from what I started out to build.
Once I have that, then I think step three stays about the same.
This isn’t really a critique of lean development, but after a decade of trying to do things leanly, I’ve just accepted that it’s not how my brain works
Sounds like sculpture. Or “add lightness.” My brain works the same way.
Hard agree. (2) is all about building out the test suite; once you have this (3) becomes a cake walk.
I've worked in a lot of places where end-to-end testing is performed manually by a SIT team who absolutely do not like to re-run a test once it's been passed. These people hate the idea of (3) and will overestimate the costs to the PM in order to avoid having to do it.
Time for a new team. Also sounds like your customers are the testers. In other words: fire the "team" (SIT).
I agree completely with the idea of building something 3 times. As I get older, I tend to compress things more into 2 iterations, but that's just because I like to think I'm getting better at coding, so step two is less pressing.
I think of the three iterations in these terms:
1) You don’t know what you’re doing. So this iteration is all about figuring out the problem space.
2) You know what you're doing, but you don't know how to do it. This iteration is about figuring out the way to engineer/design the program.
3) You’ve figured out both what you’re doing and how to do it. So now, just build it.
I would add that the reason no product manager wants to pay for #3 is that historical attempts to do so have overwhelmingly resulted in cost/schedule overruns; did-not-finish outcomes are common. Let he who believes otherwise demonstrate so with his own money; this is called a startup, and note that virtually all startups fail, i.e. run out of some critical resource without finishing! So what is a wisened product manager to do? No easy answers here; simply look to the industry to see what the average outcome is. And it is not for lack of trying. In my opinion software delivery is not a solved problem, but it is really hard to make money as a software delivery expert by going around saying that you don't know how to deliver software.
I hear what you're saying, but my experience is that dwelling in #2 without seeing the bigger picture very often results in cost/schedule overruns just the same, because shoving in certain features or trying to improve certain aspects collides with the status quo and sometimes cannot be easily accomplished if things were built "wrong" to begin with (wrong often just meaning that they were based on then-relevant prerequisites/assumptions which are no longer relevant). Also, the cost of maintenance is often just not taken into account, which means that in the end you spend way too much time shoehorning a half-baked solution into the status quo. It has the appearance of delivering what was requested (but doesn't always, because you had to compromise, leaving everybody unhappy) while taking way too much time, and it just piles more bloated poo on top of what's already there, making maintenance in the long run even harder.
I can't count how many times I've been in a situation where implementing something shouldn't have taken more than 30 minutes but, because the codebase was in a not-so-good(tm) state, took several days instead. This piles up exponentially, resulting in frustrated developers, a worse product, and cost/schedule overruns. In a perfect world, code should improve over time, not deteriorate.
From the PM perspective, it makes little sense to transform from 2 to 3.
Those devs have spent weeks/months on this app, and now they want to throw it all away? That means throwing money out the window. There is also the risk that the new app may not work like before, or may miss the deadline, etc. A safe bet would be reiterating on (2).
I explained in another comment why it isn't throwing money out of the window. In my experience, it often costs a lot more money in the long run to not do it. The underlying problem is that most companies don't really think mid- or long-term and are happy with chasing fast money and eventually throwing it all away anyway because the product isn't competitive anymore and/or maintenance becomes too expensive. These are problems which definitely can be mitigated, but it requires a good team.
4. Now you arrive at a point where you really know exactly what you want, you throw it all away and rewrite the whole thing in a better more elegant and performant way.
Number 4 is “huh I wonder if I should rewrite it in rust” ;)
The article is poorly written. There is no clear message, the topics jump around weirdly, and the overall style reads like it was written by a teenager/intern trying to come across as a seasoned professional.
But here's the thing with "GUIs/UIs/UXs whatever":
The best UI/UX is created by a domain professional, who knows why and how it serves as the best designed tool for that domain - a tool made by a professional for himself and/or for other professionals in the same domain.
This is why Bloomberg terminal UI/UX is like it is for finance professionals, as are DAWs for music professionals, as are CAD tools for EE/architects etc. They act as the right tool for the right job.
Coders, (Figma) designers, and other "implementers" (including management and "product owners"!) have to understand the business domain in order to fully manifest their craftsmanship. It is very hard to start and/or iterate on a UI/UX design if the implementers do not personally use the tool in some professional domain and therefore do not know what is, and is not, the right (and cool) design.
100%. Designers in love with white space should not design UIs for engineers (or anyone who lives their professional life in Excel). Lots of margin, padding, drop shadows, ‘round-lg’ etc might look pretty, but when you can only fit two numbers on a page it doesn’t help.
I agree with your general statement, but I don't agree about DAWs specifically. Many DAWs (used to) have terrible UX.
I don't know about other professional tools, but EDA tools for chip design are the way they are because electrical engineers and the vendors are 20 years behind in how to develop software.
I have worked for a number of different software companies over the years. At most of them, there were no dedicated frontend or UX designers; it was mostly backend devs who had decent enough frontend skills, whether for GUI apps or web apps.
However, when you are doing something specific for customers (not staff), the design is important to get right early. Even so, at a number of places I have worked, the structure was still wrong.
For example, I worked for a company which had one UX designer. I will give him his props: he was good at GUI design and a whizz at CSS! Sadly, when he had "finished" the design, it got passed over to the developers to implement the functionality around it. If something was not going to work functionally, or a customer changed the design... it was the developer who had to fix it. The UX guy had moved on to another project, and the cycle repeated. It was the wrong structure.
I found good results when a UX guy works alongside a developer. As the UX guy works on the designs, the developer can start building the business logic around them. It is all part of the development process, after all. Sure, the UX guy is likely to make changes, even ones coming from the customer, but the developer is always aware and can adjust. A lot of the module work is likely to be small amendments.
Once the UX is finished, then so (mostly) is the module, alongside unit tests or similar. It is simply a matter of the developer taking the UX project and adding the needed calls to the modules. That keeps the middle layer small and makes further changes to the UI or the module easier.
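A minimal sketch of that split, using hypothetical names of my own (not from the comment above): the business logic lives in a plain module that can be unit-tested on its own, while the UI handler stays a thin adapter that only parses the form and delegates.

    from dataclasses import dataclass

    # --- business-logic module, developed alongside the UX work ---
    # It knows nothing about widgets, so it can be unit-tested directly.
    @dataclass
    class Invoice:
        customer: str
        amount_cents: int

    def total_with_vat(invoice: Invoice, vat_rate: float = 0.20) -> int:
        """Return the invoice total including VAT, in cents."""
        return round(invoice.amount_cents * (1 + vat_rate))

    # --- thin "middle layer": the only part a redesigned screen touches ---
    def on_submit_clicked(form_data: dict) -> str:
        invoice = Invoice(customer=form_data["customer"],
                          amount_cents=int(form_data["amount"]))
        return f"Total incl. VAT: {total_with_vat(invoice) / 100:.2f}"

    print(on_submit_clicked({"customer": "ACME", "amount": "1000"}))  # Total incl. VAT: 12.00

When the design changes, only the handler moves; the module and its unit tests stay put.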
That is 100% my experience as well.
I wasn't sure where the author (Patricia) was going with the whole 'GUIs are built >= 2.5x'.. but by the end, I agree.
Discovery is fundamentally different from assembly (as in the 'factory' metaphor). And innovation (= new product development) is fundamentally about discovery (whether product/market fit or product/user fit). Therefore, new product development is fundamentally an iterative process.
Any org trying to force-fit a 'get it right the first time' mentality on discovery/innovation has discovered (no pun) just how common failure is...
Also related [1]:
> "This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable."
[1] https://wiki.c2.com/?SecondSystemEffect
I think most UI is broken by design, possibly for perpetual-income reasons. HyperCard, VB, and other easy-to-use and accessible builders are dead, even though this is what people really want. If I want a blue menu bar, I need to code markup!?! But to stop me from creating blue menu bars, today I am forbidden from having a menu bar at all. Crazy ideas and creatively built prototypes seem to have no place in the private ManagerFactoryClass.
One objection was that the text scrolling was line by line and Steve said “Can’t this be smooth?”. In a few seconds Dan made the change. Another more interesting objection was to the complementation of the text that was used (as today) to indicate a selection. Steve said “Can’t that be an outline?”. Standing in the back of the room, I held my breath a bit (this seemed hard to fix on the fly). But again, Dan Ingalls instantly saw a very clever way to do this (by selecting the text as usual, then doing this again with the selection displaced by a few pixels — this left a dark outline around the selection and made the interior clear).
> I think most UI is broken by design, possibly for perpetual-income reasons.
I don't know how the incentives really play out anymore. It's definitely self-interest in a lot of places.
I have a new theory that some user interfaces are made to be janky on purpose such that the users are constantly bathed in cortisol and made easier to subjugate with the other dark patterns.
The UI/UX for Azure instantly comes to mind as an example. By the time I've been able to ascertain that my VM is actually running, I have forgotten about the five other things I wanted to verify wrt billing, etc. Eventual consistency for something like this appears to me as an intentionally user-hostile design choice, especially in the case of Microsoft with their vast experience and talent pools.
The thing about MS I recently realized is that whatever they do (and most of the technologies they output), they target it from the enterprise angle. So they check boxes with features; they just need to make sure the features are available/usable, but they don't particularly care how nice they are to use.
So it is indeed an intentional choice just to make a good enough product and move on to something else. They never want to polish whatever they have.
>I have a new theory that some user interfaces are made to be janky on purpose such that the users are constantly bathed in cortisol and made easier to subjugate with the other dark patterns.
Sounds like bullshit... I believe it!
> So imagine a pipeline that takes in encrypted text and the first “filter” decrypts the text, the second takes the decrypted text and strips away the beginning and the end, the third takes its input and sends it in an email. From a programmers perspective, we might think of these inputs and outputs as the “same” because they are text, however, in meaning, they are very different.
I've only got this far and thought it was interesting. Firstly because I think it's partly wrong: a programmer definitely doesn't think of encrypted binary blobs as the same as text. But secondly because I do wonder if a subclass of a string type that "has leading and trailing whitespace removed" might be quite an interesting way to model your data. The object could do the strip on construction.
It's just a description of InputStream/OutputStream type classes. You can have an EncryptedStream as well.
There's something to be said for having objects that are just "a string (or number), but having had its prerequisites enforced and validated". Especially in unicode land.
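A minimal sketch of that idea in Python (the type names and the toy byte-shift "cipher" are mine, purely for illustration, not the article's code): each filter returns a distinct wrapper type, so "decrypted text" and "trimmed text" can't be confused even though both are strings underneath, and the trimming invariant is enforced on construction.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DecryptedText:
        value: str

    @dataclass(frozen=True)
    class TrimmedText:
        value: str
        def __post_init__(self):
            # enforce the invariant when the object is built
            object.__setattr__(self, "value", self.value.strip())

    def decrypt(ciphertext: bytes, shift: int = 3) -> DecryptedText:
        # stand-in for a real cipher: just shift each byte back
        return DecryptedText(bytes(b - shift for b in ciphertext).decode())

    def trim(text: DecryptedText) -> TrimmedText:
        return TrimmedText(text.value)

    def to_email_body(text: TrimmedText) -> str:
        return f"Subject: pipeline output\n\n{text.value}"

    # Each filter only accepts the previous filter's output type,
    # so the stages can't be wired up in the wrong order.
    ciphertext = bytes(b + 3 for b in b"  hello world  ")
    print(to_email_body(trim(decrypt(ciphertext))))

Because each stage only accepts the previous stage's output type, a type checker will flag stages wired up in the wrong order, which is roughly the "same text, different meaning" point the quoted passage is making.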
UI concerns need to be in service to the full set of requirements and the data model.
UIs are easily accessible to end-users and product-managers, and can allow people to focus on a subset of the requirements. The trap is to allow the UI perspective to direct the development process.
It is vital to set an expectation with customers that allows discussion about UI matters as part of requirements discovery, but where they expect it to churn. During early development UI should be rough and should churn constantly in response to changes of more foundational matters: the business requirements, the data model, concurrency matters, interactions with other systems and the deployment.
I think the author could be more concise and also conflates multiple things in the article. I'll provide just a couple of points:
- Patterns like "Pipes and Filters" and "Signals and Slots" are *not* related to the process of software development; they are about internal software architecture. It does not matter how much one iterates over the GUI during development with the client's feedback: the software still takes some input, processes it, and returns some output. Also, calling "signals" "inputs" and slots "outputs" is weird: usually signals are processed by slots (this is the Qt framework's terminology for GUI events and event handlers), so it is more natural to think of signals as inputs and of slots as something that produces outputs (a minimal sketch of that vocabulary follows below the list).
- From the same section:
> I don’t know if these patterns are in a book, or have a name, but if not, they are now in a blogpost
Oh yeah, it is good to write an article without trying to do a literature search first.
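To make the signal/slot vocabulary concrete, here is a minimal PyQt5 sketch (my own toy names, assuming PyQt5 is available; this is not code from the article): the signal is the event source, and the slot is whatever callable gets connected to consume it.

    from PyQt5.QtCore import QObject, pyqtSignal

    class Thermometer(QObject):
        # signal: declared on the class, emitted with each new reading
        temperature_changed = pyqtSignal(float)

        def set_reading(self, celsius: float) -> None:
            self.temperature_changed.emit(celsius)

    def show_reading(celsius: float) -> None:
        # slot: an ordinary callable connected to the signal
        print(f"now {celsius:.1f} C")

    thermo = Thermometer()
    thermo.temperature_changed.connect(show_reading)
    thermo.set_reading(21.5)   # prints: now 21.5 C

Whether one maps signals to "inputs" or "outputs" matters less than the direction of flow: the emitter produces, the connected slot consumes.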
The last part of the article, which says that people need to feel things before they understand whether they like them or not, was good. But then, I guess, all nontrivial things are done iteratively.
Agree with the idea but does anyone else find the text really hard to read? Not sure if it's the font or writing style.
For me, part of it was the font.
Another 1.5 times to go.
I wish the author would go back and read the Poppendiecks' books on Lean Software Development. The Poppendiecks make the following points:
* Software development is design rather than manufacturing.
* Software manufacturing is CI/CD.
* The Toyota Production System (TPS) starts with an idea for a product, and not with building the product.
* Design is a core part of the TPS.
* Building the assembly line is part of TPS.
Sadly, the essay argues against Lean Software Development by claiming that it says exactly the opposite.
That "feels right" thing is "the quality without a name" from "The timeless way of building" by Christopher Alexander: it is "fitness for the purpose" or, perhaps, "being true to its own nature". It is both very real and very elusive.
Russian carpenters had a saying to the effect of "do the job without tricks and let measure and beauty guide you". The primary skill here is not doing the thing, but listening to what the thing itself is telling you. (See also "The Stone Flower" by P. Bazhov.)
Maybe the next revision of this article about GUIs should include some images
The only metaphor I've come across that is relatable to non-software people is building a house (in particular in Australia, because we do it so badly).
As mentioned humans are terrible at imagining things that don’t exist yet.
Any engineering surely works as an analogy - be it mechanical, electrical, chemical or anything else.
I feel like building a house and programming are the only kinds of engineering where the customer can change the project halfway through and not get laughed out of the room.
Erh, the option with the house sounds rather expensive? My day-to-day work is on professional building cost estimation software, and I would claim people do a lot of work precisely to avoid having to change things midway. I'm not saying they don't sometimes end up doing it anyway, but from my perspective, the larger the scale, the more aggressively this is avoided. Similarly, I encounter a lot of comparisons where "we" software people are told to be either more or less like the building people. What I do see, though, is that building people perpetually miss out on a lot of data optimisations/pipelines in the building project flow. They keep talking about wanting to do this, but in practice they end up entering a lot of data from scratch multiple times. One of the culprits I see is that the people who should have shaped the data for this have no economic motive to do so: "why should we do that? It will only be a problem for X other people at a later stage we are not involved in".
It’s not only customer requirements that it works well for.
Ever started prepping a site for a concrete foundation and run into rock as you're levelling?
Yeah - next door didn’t enjoy the arrival of the rock breaker!
I like it when they just give up and a huge natural rock sits at the front desk with a table top or something.
It's weird that the author is bothered by the concept of waste being applied to software, because when people talk about waste in software development, one of the main forms of waste is inventory: the effort put into building software that has not yet been used.
Or, in the article's terms, things you've built but have yet to receive the feedback "that's shit" so that they can be iterated on.
Modern web UIs and the tools to create them are so bad that billion-dollar companies (e.g. Figma) have emerged to provide an entirely separate system for making non-functional UIs.
This is similar to when websites would be designed in Photoshop and then translated into "pixel perfect" HTML.
I agree the better analogy is that software itself is the factory. We should aim to create lean software (well factored into simple, reliable, modular components dealing with manageable chunks of data at a time).
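A toy sketch of what "manageable chunks of data at a time" can look like (my own example, not from the comment): each stage of the "factory" is a small, independently testable function, and memory use stays bounded by the chunk size.

    import io

    def read_chunks(stream, size=8):
        # first stage: read fixed-size chunks instead of the whole input
        while chunk := stream.read(size):
            yield chunk

    def decode(chunks):
        # second stage: decode each chunk (ASCII here, to keep the toy simple)
        for chunk in chunks:
            yield chunk.decode("ascii")

    def count_lines(texts):
        # final stage: fold the stream down to a single result
        return sum(text.count("\n") for text in texts)

    data = io.BytesIO(b"first line\nsecond line\nthird line\n")
    print(count_lines(decode(read_chunks(data))))   # -> 3

Each stage can be swapped out or tested in isolation, which is about as close as day-to-day code gets to the "lean factory" framing.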
Lean manufacturing doesn't really imply much about the day-to-day work of the factory designers and their interactions with their stakeholders, except to say that when bugs (or inefficiencies) happen a developer should fix them to get the "factory" moving again.
Which is a different story to "how do you design a greenfield factory?" and "how do you design the widgets produced by the factory that will entice consumers to buy them?" and many other important aspects. If we compare to Toyota, your software team is responsible for designing the cars, building a factory from scratch for said cars, running the factory and getting cars out the door, improving the cars based on consumer feedback, improving the factory based on bugs/inefficiencies/internal feedback, while making all of the above profitable. It's a whole range of responsibilities and tasks that need to be managed differently.
Examples of UI designs which should be copied everywhere: Siemens NX 10, KDE 3.5, Windows 2000, SimCity 2000.
Examples of UI designs which need to STOP IT NOW: GNOME 3+, Apple iOS, Windows XP/8/11, SimCity 3000+.
I think the agile approach to iterative building is kind of obsolete with AI. There is no "12-step fast agile process" with all stakeholders involved. Instead you get experts throwing slop over the wall to stakeholders to see what sticks.
I made a web service recently. To help me debug and test results, I asked AI to make me a simple CRUD web UI in Vue. The customer liked it, and it was kept in the final version.
This UI was not even a prototype. There was no request, ticket or problem to solve; I just needed it to fix another problem, and it was kept as a bonus.
Curious, which AI tool are you using for this kind of simple UI prototype?
Feedback loops are critical.
The faster you get real user feedback, the better. Decades ago, that meant coding first -- which sucked.
So we evolved.
Wireframes, pixel-perfect designs, clickable prototypes -- tightening the loop and cutting costs at every step.
Today, tools like Figma make that process even faster and more accessible. Build it in Figma, using UX-approved components and brand-approved styles, and you get something ready for feedback -- fast. (Plus, you save developers from wasting time coding something just to find out it’s wrong.)
Every front-end project should start with a clickable, usability-tested prototype before it ever hits a dev's backlog. It’s not rocket science. Skipping this step isn’t "moving fast," it’s just wasteful.
Absolutely agree — getting real user feedback early is everything. Tools like Figma have been amazing for that.
For folks who are still in the idea exploration phase or want to rapidly prototype low-fidelity flows without getting bogged down in design details, https://Wireframes.org has been super useful. It combines traditional drag-and-drop wireframing with AI-generated layouts from simple text prompts, which helps get something testable in front of users really fast — even before pixel-perfect designs are needed.
It’s a great way to tighten that feedback loop even further, especially for solo founders or early teams.
> GUIs are built at least 2.5 times
If only.
Maybe.
I get the frame, but I don't think arguing about the co-opting of Cockburn by the MBA crowd gets us anywhere.
Think about it. GUI - Graphical User Interface - is a concept taken from HCI, Human-Computer Interaction. I think that description fit PEEK and POKE in BASIC pretty well 50 years ago, though nobody attributes those to Dartmouth. It also describes AI at present, around the world.
But HCI is lossy. Why?
Exploding n-dimensional dot cloud vectors of language leveled by math are exactly why I fear that GUI should have died with CASE tools as a hauntological debt on our present that is indeed, spectral.
The world doesn't need more clicks and taps. Quite the converse: fewer. Read Fitts. You don't run a faster race by increasing cadence. You run a faster race by slowing down and focusing on technique. Kipchoge knows this. Contemplative computing could learn too, but I'm not sure waiting on the world to change works.
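For reference, since the comment leans on it: Fitts's law, in its common Shannon formulation, estimates the time to acquire a target as MT = a + b * log2(D/W + 1), where D is the distance to the target, W is the target's width, and a and b are empirically fitted constants. Fewer, larger, better-placed targets beat simply adding more clicks and taps.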
Imagine a world where we simply arrived at the same kind of text interfaces we enjoy now, whether they benefit from the browser or are hindered by it. We just needed better, more turnkey tunnels, not more GUI! We sort of have those in Meet/Teams/Zoom, but they suck, while few realize why or can explain the lossy nature of scaling tunnels, when many of us built them impulsively over SSH decades ago for fun.
The present suffers from the long-tail baggage of the keyhole problem Scott Meyers mentioned twenty years ago. Data science has revealed the n-dimensional data underlying many, if not most, modern systems given their complexity.
What we missed is a user interface that is not a GUI, one that can actually scale to match the dimensionality of the data without imposing a 2D, 2.5D, or 3D keyhole problem on top of n-dimensional data. The gap from system to story is indeed nonlinear, because so is the data!
I'd argue the missing link is the Imaginary or Symbolic Interface we dream of but to my knowledge, have yet to conceive. Why?
It's as if Zizek has not met his match in software, though I suspect there's a Bret Victor of interface language yet to be found (Steven Johnson?), because grammatology shouldn't stop at speech:writing.
Grammatology needed to scale into the Interface Culture found in software's infinite extensibility in language, since computers were what McLuhan meant when he said "media". And I'm pretty sure "augmentation is amputation" is an absolute truth if we continue down our limited Cartesian frame: we'll lose limbs of agency, meaning, and respond-in-kind social reciprocity in the process, if any of those remain.
The very late binding (no binding?) we see in software now is exactly what research labs were missing in the late sixties to bridge from 1945 to 1965 and beyond. I can't imagine trying to do that with the rigid, close-to-metal stacks we had then.
I hope I'm not alone in seeing or saying that the answers should be a lot closer-to-mind now given virtualization from containers to models and everything in-between.
One can only hope.