Data-driven programming techniques: strengths and weaknesses.


Some time ago I was able to build a whole inventory and shop application from scratch in a drastic three-month timeframe. I guess the next time it could take a third of that. One of the reasons for such speed was the use of a data-driven UI and web services. The database schema was the source for the web services configuration, which was loaded along with the web application. It was then used for data type operations (casting, validation, etc.) at the web service level and for UI binding on the front end.
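The casting/validation side of this can be sketched in a few lines. Everything below is illustrative, not the original code: the table, column names, and function names are invented, and a real schema description would be loaded from the database rather than hard-coded.

```javascript
// Hypothetical schema metadata as it might be loaded from the DB at startup.
const schema = {
  item: {
    name:  { type: 'string',  required: true },
    price: { type: 'decimal', required: true },
    qty:   { type: 'integer', required: false },
  },
};

// One caster per primitive type; casting doubles as validation.
const casters = {
  string:  v => String(v),
  integer: v => { const n = parseInt(v, 10); if (Number.isNaN(n)) throw new Error('not an integer'); return n; },
  decimal: v => { const n = parseFloat(v);   if (Number.isNaN(n)) throw new Error('not a decimal'); return n; },
};

// Cast and validate one incoming record against the table's metadata.
function castRecord(table, raw) {
  const out = {};
  for (const [col, meta] of Object.entries(schema[table])) {
    if (raw[col] == null) {
      if (meta.required) throw new Error(`missing required field: ${col}`);
      continue;
    }
    out[col] = casters[meta.type](raw[col]);
  }
  return out;
}
```

With this in place, every generic web service endpoint gets type handling for free; the code never mentions a concrete table.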

The magic is simple: each data type was represented by a read-only and a writable UI widget. That is how the primitive types were shown; complex types were presented as tables with a common parent serving as a generic field container. Overriding type-specific functionality was a rare case. By the same principle, 90% of the web services implementation used common base functionality, with adjustments only for the 10% of special cases.
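The type-to-widget mapping amounts to a registry with a generic fallback. This is a sketch under my own naming, not the original implementation; the point is only that unknown types fall through to a generic container and overrides stay rare.

```javascript
// Each primitive type maps to a read-only renderer and an editable widget.
const widgetRegistry = {
  string:  { view: v => `<span>${v}</span>`,
             edit: (name, v) => `<input name="${name}" value="${v}">` },
  boolean: { view: v => (v ? 'yes' : 'no'),
             edit: (name, v) => `<input type="checkbox" name="${name}"${v ? ' checked' : ''}>` },
};

// Complex or unknown types fall back to a generic container widget.
const genericWidget = {
  view: v => `<pre>${JSON.stringify(v)}</pre>`,
  edit: (name, v) => `<textarea name="${name}">${JSON.stringify(v)}</textarea>`,
};

function widgetFor(type) {
  return widgetRegistry[type] || genericWidget; // 90% common, override only the rest
}
```

Registering a special-case widget is just one more entry in the registry, which is where the 90/10 ratio comes from.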

In parallel, another data-driven forms framework was built for the same company, and it had hiccups all the time. The ratio of special cases to reused code there was reversed: 10% common, 90% customization.

While both “frameworks” showed a significant advantage over the usual case-by-case development, eventually the more customized one started to fall apart: development became less manageable and almost reached the same level of complexity as the usual single-form development practice. Looking back, many of its aspects could have been designed more deeply, which would have made a better impact. But the major problem would still be unresolved: under data-driven patterns, in most cases the Software Development Life Cycle (SDLC) is broken. A data-driven UI is usually introduced as a shortcut to long-lasting case-by-case development, and as such it skips many of the cycles that would otherwise be enforced either by process or by the development environment: from design (why do we need a design if it is part of the metadata?) to versioning, code review, and release approval. In an agile environment such systems are often developed and run in production, resulting in emergencies and fast patches rather than thought-through solutions. Once the primary development is done, it is assumed that maintenance and code adjustments will be easy, and the work on changes is handed to less experienced developers.

Without the restraints of the usual SDLC it can be a disaster. So to keep it under control, all parts of the SDLC need to become part of the data-driven framework. Failing that, scope insulation for later changes and a simple rollback process are the bare minimum.

In the inventory and shopping cart application above, the DB schema served as the metadata source for:
·         database tables exposed as web services
·         forms-based UI
Does metadata have more uses? Of course. J2EE developers will immediately recall service flows, object-to-data-model mapping, and many other cases, usually defined in XML files on a project. Those are so deeply integrated with the Java environment that people take them as a given.

I would like to bring you back to the UI, which these days is also taking over a lot of business logic. The fat client relieves the server not just of UI rendering but also serves as a gateway to web cloud services. That is, IMO, the Web 3.0 concept: the client is an application in its own right and becomes a nerve cell in a collective, distributed network brain, with server connections as synapses. The server is not in charge of the whole process anymore; clients live and interact at their own discretion.

That idea defines the complexity we will face in fat client apps. The industry is trying to manage the scope by introducing so-called “single page” applications: in other words, by splitting a complex task into primitive ones that have little or no smarts in them. I guess that explains the popularity of AngularJS and its competitors.

But sometimes it is a real advantage to have a fat client app or module that leverages cross-site knowledge and provides the appropriate user experience. In my case, syncing between version control, the wiki hierarchy, documentation sites, and test results takes too much orchestration if implemented via Web 1.0 or Web 2.0 patterns. The sequence of hierarchy discovery, data sync with automated and interactive merges, progress updates, and error/retry handling does not fit easily into a web service / UI pair. Old dogs would advise looking into a server-side model with live sync, like IceFaces or Vaadin. Modern ones will point to MeteorJS. While that could be doable (I am not sure), the idea of keeping a live model on the server side is hardly scalable: a few hundred simultaneous sessions would need a proportional resource increase on the server side. Instead, the fat client will use just the bare minimum from each involved server.

Programming complex apps is not easy a priori, especially on the front end. The XHR chain for the described task, with its complex hierarchical orchestration, is impossible with plain callbacks: too many of them would end up nested inside each other. The deferred pattern gives a bit more flexibility in scenario creation; at least sequential chaining is part of when().then().
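To make the contrast concrete, here is a minimal sketch of sequential chaining, with native promises standing in for Dojo deferreds (the when().then() shape is the same). fetchJson is a hypothetical XHR wrapper, stubbed out here so the example is self-contained; the URLs are invented.

```javascript
// Stub for an XHR wrapper; a real one would issue the request.
function fetchJson(url) {
  return Promise.resolve({ url });
}

// Each step starts after the previous one resolves, as a flat chain
// instead of callbacks encapsulated inside each other.
function loadItemWithDetails(id) {
  return fetchJson(`/item/${id}`)
    .then(item =>
      fetchJson(`/item/${id}/details`)
        .then(details => ({ item, details })));
}
```

The equivalent callback version already needs two levels of nesting for two steps; every extra step in the orchestration adds another level, which is exactly what becomes unmanageable.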

Such a pattern could use help from frameworks. The Dojo Toolkit provides a deferred collection (dojo/promise/all); some essential deferred combinations you can build yourself (like dynamically added dependencies during hierarchy traversal via XHR). It will still be a hard challenge to read and maintain such code, not to mention handing it to third-party developers for integrating their own modules.
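The “dynamically added dependencies” case looks roughly like this sketch, with Promise.all playing the role of dojo/promise/all. fetchNode and the tree shape are invented stand-ins for an XHR call that reveals a node's children only once the node itself has loaded.

```javascript
// Stub hierarchy: fetching a node reveals which children must also be fetched.
const tree = { root: ['a', 'b'], a: ['a1'], b: [], a1: [] };

function fetchNode(id) {
  return Promise.resolve({ id, children: tree[id] }); // stands in for XHR
}

// The promise for a node resolves only after all of its dynamically
// discovered descendants have resolved too.
function traverse(id, visited = []) {
  return fetchNode(id).then(node => {
    visited.push(node.id);
    return Promise.all(node.children.map(c => traverse(c, visited)))
      .then(() => visited);
  });
}
```

Even this toy version shows the problem: the control flow lives inside recursive promise combinations, and a third-party developer has to reconstruct it in their head before touching anything.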

A data-driven flow is the answer for resolving the complexity above. It matches aspect-oriented and reactive programming principles: each model change is handled by small, discrete actions. You could call them data change event handlers if that is easier to understand. In that case the flow is defined by the current data state. This state can be preserved and recovered at any moment, giving the ability to develop and test the small steps independently and simplifying development.
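A minimal sketch of such a flow, with all names invented: handlers subscribe to model paths and fire on change, so the next step is triggered by the new data state rather than by an explicit call chain.

```javascript
// A tiny observable model: set() stores a value and notifies subscribers.
function createModel() {
  const data = {};
  const handlers = {};
  return {
    on(path, fn) { (handlers[path] = handlers[path] || []).push(fn); },
    set(path, value) {
      data[path] = value;
      (handlers[path] || []).forEach(fn => fn(value, data));
    },
    get(path) { return data[path]; },
  };
}

// Each step is a small, independently testable handler; the flow emerges
// from the data state, not from a hand-written orchestration sequence.
const model = createModel();
model.on('user', user => model.set('greeting', `Hello, ${user}!`));
model.set('user', 'Ada');
```

Because the whole state lives in `data`, it can be serialized before a step and restored afterwards, which is what makes the small steps developable and testable in isolation.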

The decision on what form the metadata takes is yours. Handy features would be hierarchy queries, the ability to persist/(de-)serialize, mixing and separating different data aspects on the same level of the hierarchy (aka namespaces), and a change event subscriber API. Guess what was chosen by the XML guy :)
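The first three of those features fit in a small sketch. The author chose XML; JSON is used here only for brevity, and the node names and namespace prefixes are invented.

```javascript
// Illustrative metadata: two aspects (db:, ui:) mixed on the same nodes.
const meta = {
  item: {
    'db:type': 'table',
    'ui:widget': 'grid',
    price: { 'db:type': 'decimal', 'ui:widget': 'currency' },
  },
};

// Hierarchy query by slash-separated path, e.g. "item/price".
function query(root, path) {
  return path.split('/').reduce((node, key) => (node ? node[key] : undefined), root);
}

// Separate one aspect (namespace prefix) out of a node.
function aspect(node, ns) {
  const out = {};
  for (const [k, v] of Object.entries(node)) {
    if (k.startsWith(ns + ':')) out[k.slice(ns.length + 1)] = v;
  }
  return out;
}

// Persist / restore is a straight round trip on this representation.
const roundTrip = JSON.parse(JSON.stringify(meta));
```

In XML the same three features map onto XPath, plain serialization, and XML namespaces, which is presumably why it appealed to the XML guy.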

Happy coding!
