Wednesday, October 12, 2011

what's so wrong with our JSON API?

Nothing. It is just an old-fashioned way of supporting data transformation.

1. Why is JSON used in general?
There was a time when the only language able to fill data into the UI of Web 2.0 apps (dynamic HTML content) was JavaScript.
And for UI developers it was quite difficult to parse custom data formats, especially given that JS offered no reasonable parsing capabilities.
So once some kind of parser appeared (the eval() function, or embedding JSON in a SCRIPT tag), everybody started using it despite the security and speed impact.
Then JS libraries covered the gaps by doing the parsing themselves. That approach was eventually picked up by browsers, and now most current browsers support JSON parsing natively in JavaScript (a JS wrapper is still needed for compatibility).
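That progression can be sketched in a few lines: prefer the native parser and fall back to the eval() technique the early libraries relied on. This is a minimal illustration, not any particular library's code, and the helper name `parseJson` is my own.

```javascript
// Minimal sketch of the parsing progression: prefer the native parser,
// fall back to the eval() technique early libraries relied on.
// The helper name `parseJson` is illustrative.
function parseJson(text) {
  if (typeof JSON !== 'undefined' && typeof JSON.parse === 'function') {
    return JSON.parse(text); // native: fast and safe
  }
  // Legacy fallback: this is what the first wave of libraries did.
  // It is unsafe for untrusted input and shown only for the history.
  return eval('(' + text + ')');
}
```

In a modern engine the fallback branch is never reached; the native parser handles everything.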
On the server side, serialization into JSON became widely supported by many frameworks. Python, for example, gained a json module in its standard library only recently (2.6).

2. Why is JSON used in my company?
I guess nobody considered the alternatives.

3. What are the other ways?
Populate the data directly into the UI, bypassing JavaScript.
At the moment it can be done on the server by many technologies, from DB queries that render HTML to complex portal frameworks. On the client side the only technology available is XSLT.
That approach does not require JavaScript (though JS can still be used) and uses XML as the data carrier.
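A minimal sketch of the client-side variant, using the browser's DOMParser and XSLTProcessor (neither exists in server-side JS, so this is browser-only; the function name `renderXml` is illustrative):

```javascript
// Browser-only sketch (DOMParser and XSLTProcessor do not exist in
// server-side JS): render an XML payload straight into HTML without
// hand-written JS templating. The name `renderXml` is illustrative.
function renderXml(xmlString, xslString) {
  var parser = new DOMParser();
  var xml = parser.parseFromString(xmlString, 'application/xml');
  var xsl = parser.parseFromString(xslString, 'application/xml');
  var processor = new XSLTProcessor();
  processor.importStylesheet(xsl);
  // Returns a DocumentFragment ready to be inserted into the page.
  return processor.transformToFragment(xml, document);
}
```

The resulting fragment can be appended anywhere in the page, so the transformation itself runs in the browser's native XSLT engine rather than in interpreted JS.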

Now to the history of XML in the browser.
Until IE introduced XML support in XHR, it was available only via ActiveX, which was not a cross-platform solution.
Fortunately XHR became accepted despite the Microsoft-phobia. The next significant step was made 4 years ago, when the rest of the browsers followed IE and made XSLT part of their native implementation. XSLT support in WebKit is still not complete, but it is developed enough to be useful.

(I am still wiping away tears over the fate of HTC (HTML Components) and VML, two other strong Web 2.0 APIs from IE.)

Browsers did a poor job by introducing the DOM model while wiping the remaining XML API out of it. As a result, nobody has even thought of treating the DOM as XML and applying XML technologies (XPath, XSLT) to it. Times have changed, most browsers have changed the pattern, XHTML has existed for a decade, but the W3C still has not made the move. It looks like they will follow the de-facto standard later on.

Meanwhile, in the same way that JS libraries closed the gaps in DHTML design and eventually became native APIs and later W3C standards, XML/XSL libraries will lead the way.

Modern applications can use XSLT in the browser without waiting for generic frameworks to mature into the mainstream.
The advantages of using XSLT are definitely worth the effort. That has been proven on multiple large-scale back-end projects. Now it is time to shift it to the browser.

Happy coding!

Friday, August 19, 2011

CA IT Process Automation Manager

Recently I came across "CA IT Process Automation Manager" 3.0, which is supposed to be a great cross-computer process programming environment.

In a few words, ITPAM runs and coordinates processes on different computers: conditionally install different software on a set of servers, configure and start them, wait for readiness, run tests from the clients, wait for the results and process the report. It can run on a schedule or on an external event like a version control commit. All on different computers, different OSes and applications.

ITPAM includes an IDE, a control/monitoring suite and a set of agents for most OSes. As a language it is a kind of old 4GL development environment: the primary UI is UML with properties attached to each element. It has "low-level" programming capabilities via JS triggered on different kinds of events. The IDE is kind of OK, sufficient for its design goals. The debugger is not capable of run-time breakpoints (they must be set at the source level), there is no variable "watch", and there is no debugger at the JS level. Documentation is plainly absent: the 600 pages of docs currently available are barely sufficient to implement the simplest algorithm. No samples.

Conclusion: hire a contractor who is already familiar with the system and has gone beyond the training. Ask a simple question about real-life tasks, like "how do you pass parameters to a subroutine".

If you do not have a person skilled in this area, go to CA directly and ask for their services. Do not bother trying on your own; it will take 2+ months for a skilled developer to build anything beyond a primitive system. Or bug me :)

Those who are brave enough can try it themselves. The "CA IT PAM tips and tricks" wave could be helpful. Feel free to add your own thoughts there.

Sunday, July 17, 2011

HTML structure-presentation-behavior and its programming

Original Wave: Thinking aloud on in-browser development languages and tools.


In the concept of 3-tiered HTML page design (structure/HTML, presentation/CSS, behavior/JS), none of the tiers offers the OOP or aspect-oriented programming capabilities I am used to. Could a single language be used on all tiers? Sure, yes.

JS is capable of covering all of them. I could not see many positive sides besides being

sufficient to achieve the goal.

The drawbacks:

Weak support of OOP.

Lack of strict definitions and validation.

Weak interaction with the other tiers (CSS & HTML). The browser does not expose sufficient native interfaces for JS to use:

§Do you recall how and why getComputedStyle is used? The browser gives no API to get and set CSS parameters. Even the most popular layout computation routine, getting box dimensions, needs a quite CPU-expensive workaround.

§The structure/HTML DOM tree also suffers from lack of support at the JS level. There are no transactional or detached/group operations. Each DOM node operation is accompanied by a layout recalculation, while that should be an explicit function finalizing a series of DOM changes.
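Lacking a transactional API, the closest workaround today is building nodes in a detached DocumentFragment so the live tree is mutated only once. A browser-only sketch (the helper name `appendRows` is illustrative):

```javascript
// Browser-only sketch: group DOM insertions in a detached
// DocumentFragment so the live tree is mutated (and layout is
// recalculated) only once. The name `appendRows` is illustrative.
function appendRows(tableBody, labels) {
  var fragment = document.createDocumentFragment();
  labels.forEach(function (label) {
    var row = document.createElement('tr');
    var cell = document.createElement('td');
    cell.textContent = label;
    row.appendChild(cell);
    fragment.appendChild(row);
  });
  // Single live-DOM operation instead of one per row.
  tableBody.appendChild(fragment);
}
```

It is still not a real transaction (no rollback, no explicit commit), which is exactly the gap described above.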

CSS covers just its own tier and almost none of the others. IE behaviors do not count, since they are absent in other browsers.

HTML tags. Interesting beast.

They have limited support on all tiers: styles as an inline attribute and as a result of the HTML structure; behavior as inline scripting and via tags (the reload META, MARQUEE, animateTransform, etc.).

Obviously, nothing toward OOP or any serious modular development. IFRAME and HTC do not count.


HTC needs to be highlighted as the BEST solution for HTML modularization on ALL page tiers (structure, layout, behavior). Sadly it did not become a standard, even with the effort to make it part of W3C.


Flash, Silverlight and Java are just proprietary plugins and have no support across all 3 tiers.


XSL was not available in major browsers until 2006. Not an issue since then.

It has no direct support for any of the 3 tiers. But:

it is natively capable of using any natively supported language: HTML as structure, behavior as JS, layout as CSS,

not to mention the mix of all 3 for efficiency.

there is no more need to separate tiers for either development or efficiency.

new higher-level entities can be introduced at the native level: themes, languages, etc. AspectRule (see XSL pipeline implementations) is one of them.

no frameworks (I am working on one)

no developers (the whole HTML stack plus XSLT are needed in the same hands)



Working on XslAspect, I found the need to define my own language for "Aspect" rules, i.e. what (rule), where (in the model) and when (environment) should be applied. The thing is that a rule should be functional at any point of the delivery chain: on the back-end, before the DOM is loaded in the browser, and at run-time. On the other hand, it should have OOP and AOP capabilities. XSL has everything needed, but its syntax makes me uncomfortable. So there is a dilemma: should a new (or some existing) XML-based language be invented/used, or should old-fashioned XSL be used for things it was not meant for (like method definitions)?


Since XslAspect uses XSL templates, they also need to be OOP-capable (AOP is in XSL's nature anyway). If that is done, then inventing new things is not necessary, I guess. And even if some new features need to be added, there is still a way to extend XSL with an own namespace or to add the data into the XSL code.


To make the OOP side match, it would be wise to define the basic OOP concepts in XSL terms:

o    deployment or source module (a group of files); not an OOP concept, but still needed

o    package/namespace ( to separate "classes" )

o    Type/Class

o    Object

o    Method

o    Parameters

o    return values

Then the AOP concepts. Their join/attach points will define not only what code should be applied and where, but also the environment (server, browser, runtime) and the model pieces.

XSL puts at our disposal:

·         template mode.

·         template name.

·         template match filter, which is good for model navigation.

In addition, in XSL we could use:

o    the XslAspect namespace to define our own attributes on

§xsl:stylesheet as a grouping tool

o    folder and file naming conventions

o    embedded XML with our own namespace

In general, it would be better to use as much native XSL as possible and, only as a last resort, our own namespace.
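In line with the last-resort option, XSLT 1.0 permits attributes from a foreign (non-XSLT) namespace on its own elements, so XslAspect metadata could ride along in the stylesheet itself. A sketch (the xa: namespace URI and attribute names are my assumptions, not the actual XslAspect vocabulary):

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xa="urn:xslaspect-sketch"
    xa:module="SortableTable">
  <!-- Processors ignore foreign-namespace attributes on XSLT elements,
       so xa:aspect can carry rule metadata without breaking the transform. -->
  <xsl:template match="table" mode="SortableTable_UI" xa:aspect="UI">
    <!-- ... widget rendering ... -->
  </xsl:template>
</xsl:stylesheet>
```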


Procedural vs AOP.

Another problem is developer convenience with procedural programming. The aspect approach assumes applying all the rules at once and to the whole model.

That is quite different from sequential step-by-step execution. While the efficiency of parallel execution in AOP is not in question, the order of operations and their encapsulation in XSL is not friendly to developers or to data-driven development: rules applied to the same data could be spread across modules, and there is no dependency to enforce a strict order. In XSL that dependency can be expressed at the data level by a RuleSet (embedded XML with recursively enclosed rules). Usually the rules themselves are quite simple (do not confuse them with their implementation). This is how the first implementation of XslAspect was done. Another name for such data processing is pipelining: the result of one XSL transformation is the source data for further processing.


Is there a native XSL way to strictly define the sequence and the enclosure dependency?

Sure, there are a few ways in addition to RuleSet. The question is, which one is more convenient?

·         xsl:call-template with parameters. Parameters are calculated first; a parameter value could be a call to another template.

·         xsl:apply-templates can pass parameters. Unfortunately it is not cross-browser capable (or is it in modern ones?).

·         Blend the input with the RuleSet and use complex filtering in xsl:apply-templates, making the select work on the original data set AND on the joined RuleSet.



Actually, all the methods in the framework could be treated as equal and transformed one into another during platform-targeted packaging. So no actual restrictions are in place, just developer convenience. Speaking of which, let's compare the 3 approaches:

·         RuleSet

<xa:AspectRule aspectName="ScrollableTable">
  <xa:AspectRule aspectName="SortableTable_UI">
    <xa:AspectRule aspectName="SortableTable_DataSort">
      <xa:sort colNum="2" order="ascending" data-type="text"/>
      <xa:sort colNum="1" order="descending" data-type="text"/>
      <xa:AspectRule aspectName="TestTable"/>
    </xa:AspectRule>
  </xa:AspectRule>
</xa:AspectRule>




·         xsl:call-template

It is definitely native XSL. Bulky, to say the least.

<xsl:template name="AspectRule1">
  <xsl:call-template name="ScrollableTable">
    <xsl:with-param name="data">
      <xsl:call-template name="SortableTable_UI">
        <xsl:with-param name="data">
          <xsl:call-template name="SortableTable_DataSort">
            <xsl:with-param name="data" select="."/>
            <xsl:with-param name="Sort">
              <xa:sort colNum="2" order="ascending" data-type="text"/>
              <xa:sort colNum="1" order="descending" data-type="text"/>
            </xsl:with-param>
          </xsl:call-template>
        </xsl:with-param>
      </xsl:call-template>
    </xsl:with-param>
  </xsl:call-template>
</xsl:template>


·         xsl:apply-templates. How do we make the match reference only the select with 2 nodes (data & rule) and access data from both? It could be done via mode, but that excludes using mode for other purposes (like defining the module). On the other hand, why should the module name differ from the aspect rule name? Another problem is pipelining: to make it work, apply-templates must be called from the parent.

<xsl:template match="*" mode="AspectRule1_asApply">
  <xsl:variable name="d1">
    <xsl:apply-templates select="." mode="AspectRule2_asApply"/>
  </xsl:variable>
  <xsl:apply-templates select="$d1" mode="ScrollableTable"/>
</xsl:template>

<xsl:template match="*" mode="AspectRule2_asApply">
  <xsl:variable name="d1">
    <xsl:apply-templates select=" . | $sort " mode="SortableTable_DataSort"/>
  </xsl:variable>
  <xsl:apply-templates select="$d1" mode="SortableTable_UI"/>
</xsl:template>





It is obvious that AspectRule is shorter and more readable. Note that in every implementation the result of the previous processing is used as an argument for the following one. XSL 1.0 strictly prohibits that, but with some tricks and magic it can be worked around in all browsers.
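One widely implemented trick is the EXSLT node-set() extension function (msxsl:node-set() in IE), which converts a result tree fragment back into a node-set so the next stage can process it. A sketch, reusing the mode names from the examples in this post:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:exsl="http://exslt.org/common">
  <xsl:template match="/">
    <!-- Stage 1: sort the data; the output lands in a result tree fragment. -->
    <xsl:variable name="stage1">
      <xsl:apply-templates select="." mode="SortableTable_DataSort"/>
    </xsl:variable>
    <!-- Convert the fragment back into a node-set for stage 2. -->
    <xsl:apply-templates select="exsl:node-set($stage1)" mode="SortableTable_UI"/>
  </xsl:template>
</xsl:stylesheet>
```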

The performance impact you could see from the additional data processing can be relocated to the packaging routine; that way xsl:call-template code is generated and delivered to the client instead of AspectRule.


In terms of native vs. interpreted languages, xsl:call-template is the natively compiled code and AspectRule is the interpreted scripting.


Sunday, July 10, 2011

JSON as fast alternative for XML is wrong

While going over the Learn page (the first page a newbie will read) on the Prototype site, I found a statement which misdirects the development community:
"JSON is notably used by APIs all over the web and is a fast alternative to XML in Ajax requests"

It would be nice to remove the word "fast", since it brainwashes the populace without good reason.
I am not sure whether anyone on the site has tried to compare JSON and XML against each other, but my conclusion was totally the opposite.

Along the chain web server - network - reader - browser rendering, XML is the winner on ALL tiers.
Why? The answer in every case is "native support", i.e. no interpreters involved, everything compiled and multi-threaded, polished to perfection in memory and CPU use:

  • Web server: back-end serialization can start at the DB level, and in most DBs XML is one of the native outputs. Then comes object serialization: while ALL platforms have zillions of implementations, most platforms support XML as an embedded API. Java, .NET, PHP, you name it.
    ANY popular enough "middle" tier on the server (the one which does the business and UI logic) supports using XML as the data holder.
  • Network: on the network, JS people count every byte, and JSON looks less polluted with unnecessary text. But hold on: in production all content goes over a gzipped stream, and surprise! XML's footprint can actually come out a few bytes smaller.
  • Reader: the browser needs to parse the data. XHR has XML support, and EACH implementation uses native (C-based XML library) code. As a result it can process a larger amount of data with less CPU and memory use. XML gives 100 times faster parsing and 100 times larger data volume. That is right: JSON will die on 100K records, while XML will hold 10 times more.
  • Browser rendering: which rendering API supports multi-threading and native code? JS+JSON? No way. The answer is XML+XSL. In addition to run-time (lazy) rendering of heavy UI it gives an extra option: rendering before the page is loaded. And with a speed no JS can compare to.
On the development side, "fast" usually means many things.
For a startup where people work on all tiers, having the same data format and API is quite a development-time saver. As mentioned, XML has support everywhere, JSON only on some platforms. The documentation and feature set of JSON are way behind what XML offers: from validation against a schema to encodings.

For large projects, communication between tiers becomes more important. Cross-team development speed relies on strict data formats and validation; performance relies on data transformation and on having the same data/API across platforms. XML has it all. JSON does not.

Could JSON be applied to all of that, or at least to half? No.

An analysis of why JSON became so popular and so advertised is out of the scope of this review. The industry does not always go the rational way.

Happy coding!

Sunday, May 8, 2011

Browser implementation programming languages

Part of Designing better Web Stack and Browser initiative. 

Now that multi-core CPUs have come to every device, even cell phones, several modules in a web browser could gain a lot from parallel processing:

  • primitives rendering; some improvements have been made in the use of 2D accelerators
  • parsing and DOM tree rendering
  • runtime DOM operations for Web 2.0 component loading and functionality

For 3D rendering the GPU is the ideal candidate. The remaining two could be covered by the XmlAspect idea. That way the internal browser code and the client page code would be implemented with the same technology, and any part of the whole browser model could be accessible from any part of the UI (internal as well as client-side). Side effect: natively compiled code across the whole web browser and the client page(s).
©2011 Sasha Firsov

Sunday, January 30, 2011

CSS freeze, Precompiled Static CSS vs. Dynamic

Insulating simple CSS rules instead of relying on global scope produces fast and reliable code and removes the danger of breaking a complex Web 2.0 app's UI and behavior outside of the module/widget.
How to achieve that using CSS freeze and precompiled static CSS you can read in my Precompiled Static CSS vs. Dynamic wave:

    Precompiled Static CSS vs. Dynamic
    Part of Designing better Web Stack and Browser initiative.
    CSS as selector plus style/behavior has served relatively well for:
    1. Static HTML: abstraction and insulation of style from the document structure (LINK elements).
    2. Dynamic HTML: page operations delegated to the native browser layer, as opposed to JS-driven styling (like CSS group visibility switching).

    But as the complexity of web apps grew and the volume of CSS sometimes overwhelmed the HTML, the two goals started working against each other. Once the static part has been exercised, there is no need for the CSS rules unless they are involved in dynamic HTML behavior.
    Lifecycle control over the different types of CSS rules allows a significant acceleration of dynamic HTML changes.
    Imagine the stages of each HTML widget and page overall:
    1. Loaded
    2. CSS rules applied
    3. DOM adjusted
    Those operations could be treated as an external transaction and performed way ahead of their application to the page, during prepackaging/widget compilation. If the packager is aware of the complete context, the HTML can be premade. Obviously the DOM events (if any) for such operations need to be chained and fired in the appropriate order, but only after the widget content is rendered.
    Systems utilizing these principles exist: ICEfaces does it on the server side, XSLT does it for inline styling, and so on.

    When it comes to a real-life app, the following could improve CSS-related performance:
    for the selected theme (the set of CSS rules applied across the whole app), the widgets' HTML is extracted from the rendered page and placed, as a template, into the most efficient format for the library. In the best case that would be XML with all the package resources; in the worst, a JS string map.

    The preference for XML in general over JS (link) is a separate subject, but in this case we are comparing a JS string hash map versus XML (not a string, but a native object) in
    a) loading and
    b) conversion to HTML.
    It comes down to:
    • memory allocation heap (native XML vs. the JS VM),
    • load speed (a native XML object vs. JS parsing; a JSON parser is usually not compatible with a JS framework packager),
    • selector speed (precompiled XPath vs. a JS hash map), and so on.
    All of the above is not in favor of JS, especially under embedded-browser conditions.

    Such an HTML template for a widget will consist of the computed styles with the default ones extracted. A good amount of optimization could be done here, but what matters is having both 1) the original CSS + HTML class code and 2) the complete set of applied computed styles, which is quite handy during troubleshooting.

    If you have the luxury of generating target-browser-specific JS, performance and reliability will be even better.

    ... find more on original WAVE