Trying To Fix The Web Dev: Part 2, The Solution?
If you missed the introduction to the series, please check it out first.
This is the part where it gets serious.
Disqualified Candidates: New Wave Front-End Frameworks.
Svelte, Preact, etc.
- Some of them try to address complexity (making simple things more straightforward and complex things harder).
- Some address client-side bloat.
- Some address state management burnout.
- Some are distilled versions of others.
But none of them solves The Issue™ (not even Juris), because they do nothing about the server.
Candidate #1: Hypermedia-Driven
HTMX, Unpoly, fixi.js, etc.
"HTML enhancement" that enables fragment fetching and partial page updates in response to user actions without writing JS code.
How It Works
General Flow:
- The user triggers an event on an element (e.g., clicks a button or submits a form).
- The hypermedia library inspects the element's attributes to determine the request URI, then sends the request via AJAX.
- The server returns an HTML fragment.
- The hypermedia library inspects attributes to determine where and how to apply the HTML patch.
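To make that concrete, here is a minimal sketch using HTMX attributes; the /news endpoint and the #news target are made up for illustration:

```html
<!-- Clicking the button sends GET /news via AJAX; the returned
     HTML fragment replaces the contents of #news -->
<button hx-get="/news" hx-target="#news" hx-swap="innerHTML">Load news</button>
<div id="news"><!-- fragment lands here --></div>
```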
What Is Good:
- Logic is on the surface (you don't have to understand component state or run JS code in your head to figure out what will happen after a user action).
- No need for client-side input validation logic.
- Extremely light on the client
- SSR without hydration
- Feels like a natural evolution of HTML (there is even a standardization initiative)
Why I Will Never Use It:
Control Flow Distribution
Imagine that every time you write a function, you do something like this:
- Start as always: "function onForm(data) {"
- Create a separate file submit.js and write the function body there
- In the original file, write a meaningless text macro that includes the body at runtime, then return the result:
```js
function onForm(data) { @submit.js return result }
```
Does it look like heaven?
That's how I feel about HTMX:
function onForm:
```html
<!-- post to "/submit", then replace the form itself with the response body -->
<form hx-post="/submit" hx-target="this" hx-swap="outerHTML">
  <input type="text" name="name">
  <input type="email" name="email">
  <button type="submit">Submit</button>
</form>
```
submit.js:
```js
// HTTP handler that processes the form data
app.post("/submit", (req, res) => {
  /* authorization, validation */
  const { name, email } = req.body;
  // respond with HTML (usually using a template engine)
  res.send(`<p>Submitted: <strong>${name}</strong> (${email})</p>`);
});
```
But isn't this the same as the "old" approach? No, because with native HTML a new page is loaded after submission, so control is simply handed to the back end and never comes back.
Isn't calling an API a similar thing? No, because in 90% of cases the API is just a DB wrapper; it has nothing to do with control flow.
Shallow Simplicity
Arguably, the hypermedia stack is the fastest way to launch. However, the architecture's limitations make an already complex UI even harder to implement.
It's like you have only one way to build a user flow: you keep stateless functions that each output a single HTML fragment in one huge map, and for every interactive element you specify the map key and arguments as constants. Does it make simple things more straightforward? Yes. Does it make tricky UIs even messier? Also yes.
What if you need a multi-step form or a non-linear user flow? You store intermediate values in HTML, cookies, or external memory, as sketched below. Is that more transparent than using a variable?
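A rough sketch of what that tends to look like with HTMX; the endpoints and field names are invented for illustration, and the "state" of the flow is whatever the server chose to echo back into the markup:

```html
<!-- Step 1: posting advances to step 2; the form replaces itself with the response -->
<form hx-post="/signup/step-2" hx-target="this" hx-swap="outerHTML">
  <input type="email" name="email">
  <button type="submit">Next</button>
</form>

<!-- Step 2 fragment returned by the server: the value collected in step 1
     survives only because it was echoed back as a hidden input -->
<form hx-post="/signup/step-3" hx-target="this" hx-swap="outerHTML">
  <input type="hidden" name="email" value="user@example.com">
  <input type="text" name="company">
  <button type="submit">Next</button>
</form>
```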
Need to update several places on the page in response to one user action? Prepare for a total control-flow mess.
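HTMX does have out-of-band swaps for this, but the routing of which fragment goes where gets baked into the response body itself. A minimal sketch (the element IDs are hypothetical):

```html
<!-- One response to one request: the first element goes into the normal
     target, the hx-swap-oob elements are routed by ID to other places -->
<p>Item added to cart</p>
<span id="cart-count" hx-swap-oob="true">3</span>
<div id="cart-total" hx-swap-oob="true">$42.00</div>
```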
UI state and state management libraries are here for a reason, and complex web apps are a modern-day requirement that keeps the web relevant in the mobile-app era. There is a particular hatred towards state in the community, but that's probably just because state lets us build more elaborate things.
Text-driven
It's not wrong. However, I don't feel comfortable describing an algorithm via HTML attributes.
Endpoint Security
Although the "API issue" from the previous article is addressed - "DB wrapper" is gone (there can't be more "Backend For Frontend" than HTMX server), you still need to care a lot about proper endpoint authorization logic (with state-in-request, there are even more things to keep in mind).
Verdict:
Front-end hypermedia tools scale the "classic" approach without fundamentally improving it, even if there are (or will be) more sugar-heavy libraries that partially address my points.
Don't get me wrong, it's great technology that has its niche. However, it's essential to consider its limitations when selecting a stack for a long-term project.
So, does it address The Issue? Partially. Is it The Solution? Hell no.
Candidate #2: Server-Driven
Hotwire Turbo Streams, Phoenix Live View, Laravel Livewire, Blazor Server
The main UI logic is written for and runs on the server; the client acts as a "remote" renderer and user-input provider.
General Flow:
- The user triggers an event on an element (e.g., clicks a button or submits a form).
- The client library intercepts the event and sends it to the server.
- The server prepares the HTML updates and instructions on how to apply them, then sends all of that to the client over the wire.
- The client follows the server's instructions.
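None of these frameworks exposes the same wire format, and some use plain HTTP instead of a socket, so treat this as a hypothetical, heavily simplified sketch of the exchange rather than any real protocol:

```js
// Hypothetical client runtime: forward DOM events to the server,
// then apply whatever patch instructions come back
const socket = new WebSocket("wss://example.com/live");

document.addEventListener("click", (e) => {
  const el = e.target.closest("[data-event]");
  if (!el) return;
  socket.send(JSON.stringify({ event: el.dataset.event, id: el.id }));
});

socket.addEventListener("message", (msg) => {
  // e.g. { target: "#cart", action: "replace", html: "<div>...</div>" }
  const { target, action, html } = JSON.parse(msg.data);
  const node = document.querySelector(target);
  if (action === "replace") node.outerHTML = html;
  if (action === "append") node.insertAdjacentHTML("beforeend", html);
});
```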
Compared to the previous candidate, the server owns control flow completely. The browser has no idea what will happen in response to the event. You can think of it like a backwards API - the server calls the client to modify HTML on the fly.
What Is Good:
- Business logic has returned to the server 🔥
- Light on the client
- SSR without hydration
- Stateful* server superpowers, like page-instance and element-scoped endpoints/handlers (no developer-defined, publicly exposed API at all)
* Not all of them are stateful
So this is it? The Issue is solved: extra-thin client, no API, business logic executed in a controlled and safe environment.
Compromises:
Blocking Event Processing
Guilty: Phoenix Live View and Blazor
By design, a page lives in a single thread (or a single process in Phoenix). In other words, when the user interacts with multiple page elements, processing, database interaction, and rendering happen in series; the next event is processed only after the previous cycle completes.
That makes internal framework logic much simpler (no concurrency issues), but it looks like a blocker for mass adoption. You don't want your UI to be unresponsive or accumulate an event queue when a time-consuming task is running. You can mitigate this by leveraging background processes and asynchronous tasks; however, the complexity cost seems too high to justify the effort (task management & UI synchronization).
Hotwire has no concurrency control at all. Livewire, by default, blocks the component on the client side during a request, which is good enough most of the time.
50/50 State Situation
You can argue that a modern web UI could operate without state. But state is already here, relied on everywhere, and keeps being reinvented.
Hotwire
No built-in state management.
Laravel Livewire
Serializes component state to hidden inputs under the hood, which seems cool.
The Laravel server is stateless and does not "live" in memory. It "reacts" only to requests: it restores the component state from request data and session storage, then uses it to render an HTML update. However, it has built-in mechanics to trigger updates of multiple components (within one request lifecycle).
Blazor Server
Full support: component state, global state, derive functionality, reactivity. But C#, so who cares (
Phoenix Live View
Global state per LiveView (root app component) instance: it is propagated to components as props and triggers rendering, similar to storing state in React's root component and passing it down the tree as props.
Component state, which works in a similar way to React's.
Grandpa WebSocket
UI updates rely on it in Phoenix Live View, Blazor Server, and Hotwire Turbo Streams (the truly server-driven part of Hotwire; Turbo Frames are essentially HTMX).
Let's address the clickbait elephant in the room.
WebSocket is not compatible with QUIC (HTTP/3).
95% of browsers now support QUIC. It offers higher resistance to poor network conditions, lower latency, and faster connection establishment. If your app's users live in a datacenter, that makes no difference. Otherwise, for a server-driven UI, the lack of QUIC hurts.
WS over QUIC may be enabled in the future. However, even WS over HTTP/2 remains rarely adopted after 10 years. Also, with wider support for streaming request bodies (hello, Safari), BYOB readers, and WebTransport, we won't need WS that much.
Sloppy Reconnect
Connection-loss detection is guaranteed only by pings or heartbeats. For example, with 10 seconds between pings and at least two missed pings required to initiate a reconnect, you get up to 20 seconds of unusual UI behavior.
Timeout-based disconnect detection is unavoidable in real-time scenarios, whether you use WS or not. But it stacks on top of TCP's weaker resistance to poor network conditions compared to QUIC, a longer handshake, and a separate connection circuit (meaning that failures in the app's other network activity don't help detect a WS disconnect).
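For a sense of where those numbers come from, here is a minimal client-side watchdog sketch; the 10-second interval mirrors the example above and is an assumption, not any framework's actual default:

```js
// Heartbeat watchdog: if two ping intervals pass with no message from
// the server, assume the connection is dead and reconnect
const PING_INTERVAL_MS = 10_000;
let lastSeen = Date.now();
let socket = connect();

function connect() {
  const ws = new WebSocket("wss://example.com/live");
  ws.addEventListener("message", () => { lastSeen = Date.now(); });
  return ws;
}

setInterval(() => {
  if (Date.now() - lastSeen > 2 * PING_INTERVAL_MS) {
    socket.close();      // up to ~20 s of dead UI may already have passed
    socket = connect();
    lastSeen = Date.now();
  }
}, PING_INTERVAL_MS);
```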
App Users Notice
I am speaking from experience. A WebSocket-controlled UI comes with a deal: there will be cases where the app does not respond for a noticeable time even though the internet is already reachable again. Maybe it won't happen often, and maybe it doesn't ruin the UX, but it's a design flaw.
The SSE situation is also not great. The browser is extremely sloppy at reconnecting (I have no idea why), especially after the PC wakes up from sleep, so again you end up relying on heartbeats (and manual EventSource recreation).
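The same watchdog pattern from the WebSocket sketch applies here, assuming the server emits a heartbeat on a known interval; the endpoint and interval are made up:

```js
// SSE watchdog: the browser's auto-reconnect can silently stall,
// so recreate the EventSource when heartbeats stop arriving
const HEARTBEAT_MS = 10_000;
let lastSeen = Date.now();
let source = subscribe();

function subscribe() {
  const es = new EventSource("/events");
  es.onmessage = () => { lastSeen = Date.now(); };
  return es;
}

setInterval(() => {
  if (Date.now() - lastSeen > 2 * HEARTBEAT_MS) {
    source.close();
    source = subscribe();
    lastSeen = Date.now();
  }
}, HEARTBEAT_MS);
```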
Non-Native Forms
Guilty: Phoenix Live View and Blazor
By default, form data is serialized and passed via the WebSocket. Not a deal breaker, but it requires care when dealing with files.
Endpoint Security
Laravel Livewire, Hotwire
The server is stateless and does not maintain in-memory UI component instances with unique "hooks" between requests. Endpoints are static, so you must enforce authentication and authorization on every handler, just as you would with HTMX.
Stateful Cost
Phoenix Live View and Blazor
- Load balancing is not so trivial, because a page "lives" on a specific server instance
- Each active page consumes memory on the server
- Non-cacheable HTML
- UI will lose state after a period of inactivity (server memory is not infinite) or due to server restart
Seems like a reasonable compromise to me. With more users, you can probably afford more computing resources.
Verdict:
Phoenix Live View is fascinating as a concept (the app lives on the server, with the browser as a remote actor), but quirks like the "single-threaded" blocking UI and the reliance on HTTP/1.1 make it hard to recommend. Not to mention that Elixir is not everyone's cup of coffee.
From a theoretical perspective, Laravel Livewire represents an evolution (a real one this time) of the classic web: stateful components and "live" page updates, all while the server remains stateless. However, it's not factually "live". There is no app runtime, you can't initiate UI updates from the server because there's no reactivity, and endpoints are static and require care.
Blazor (the server-side variant) is conceptually similar to Phoenix LiveView, but with a distinct Microsoft flavor. User reports confirm that it is sluggish and consumes a lot of resources.
Hotwire looks like either an advanced HTMX or an inferior Livewire.
So, does it address The Issue? Yes. Is it The Solution? Unfortunately, no, too huge of a compromise.
There is no end to our suffering?
I think there is, and now I know The Formula. So, I quit my job as CTO and dived in.
Subscribe for the Part 3: The Formula.
Top comments (25)
Candidate #1 is basically jQuery's ajax* functions but with a lot more cruft in your HTML.
Personally, I want the page to work without JavaScript as much as possible, bolt things on top of that, and keep everything in data-* attributes.
Mind blowing 🤯. These articles will become legendary.
Such a refreshing breakdown ..
That’s a lot of heat. Excited to see how you wrap this up in Part 3!
Probably Part 3 will not be a wrap-up yet..
Great article! So many web tech references to check out.
Btw, found a typo:
Thanks, fixed
I am not sure Juris is disqualified; I am waiting for Resti's opinion.
I mentioned it especially for you.
Thank you :-)
Anyway, it will be interesting to hear Resti Guay's opinion on this. I hope you will share it.
Perhaps he will respond himself 🙂
Hi! Nice article/rant about the ecosystem. I totally understand the sentiment, and I'll wait for Part 3. Thanks for including Juris. It helped a lot. Cheers
For sure, this can't be done on the server.
Juris reactive style attribute for Heavy Physics Simulation w/o canvas and library
Thank you Resti,
Awesome, it could be part of a DOM DemoScene :-)
Thanks much for coming here.
Impressive demo! Of course, there is no need to do it on the server; some things make more sense on the front end.
Good luck with Juris!
Thank you for these articles. I enjoyed them.
Just curious, have you ever looked at Inertia.js, and if so, where would it fit in your taxonomy above?
It's not a new-wave front-end framework at all... it's not even a framework.
It's not hypermedia-driven or partial pages.
It's not grandpa WebSocket.
But it does eliminate the need for backend APIs, which I thought was fairly creative. A rich view (view logic and view state), with domain logic, domain state, and domain control flow on the backend, and a messaging protocol in the middle.
Looking forward to your next article.
Thanks
It's a pleasure to read such comments; thank you as well.
I had never heard of Inertia, so I briefly checked it out.
If I understand correctly, Inertia enables the dynamic serving of front-end framework components in response to path changes (that's how the server controls the UI). It ships data alongside components (instead of performing API reads) and utilizes forms instead of relying on API writes. Additionally, it supports SSR with hydration + Node.js.
It's like a combination of Laravel Livewire and Next.js. That's interesting; the server determines a set of components and data on the page, and these components define the UI behavior - nice separation of concerns.
What would worry me:
I think it's a legitimate and unique approach, and it works well when your app is heavier on on-page interactions than on back-end communication.
Hi Alex
Thank you for the quick response.
My naive perspective: I think of Inertia.js as an RPC messaging protocol over HTTP. Its scope is limited to interception -> encoding/decoding to/from the Inertia protocol -> handoff to the backend framework or handoff to the frontend component framework (and updating browser history). And that's it, scope-wise.
I believe hydration, SSR, etc. are primarily the scope of the respective front-end and back-end frameworks.
If you squint when you look at the Inertia.js protocol/JSON coming from the server to the front end, you can almost see SOAP :-).
But, I might be all wet.
Thanks
Hunter
@derstruct
Hello Alex
You are correct. Inertia will spin up Node on the backend and use Vue, React, etc. for SSR, SSG, etc.
Thanks
Obviously, they are not better than Angular or React and never will be, due to their nature. Those "frameworks" (or maybe supersets of HTML) are designed for people who don't know JS, don't want to know it, or think using them is faster. And yeah, it is faster for prototyping/PoC/MVP, but not for products with complex functionality.
Baseball, Huh?
??
You lost me at "...But C#, so who cares ("
C'mon, why so serious? It's just a joke, no disrespect for C# enjoyers. I love Rust; if I was reacting every time someone dunked on it, living in a cave would be my only option.
I liked the joke. And I'm a C# (and .NET / Mono) fanboy.