2013-10-20

RE: How to solve the lack of a real multi-threading in JavaScript?

Recently this question was raised in a LinkedIn post, and I was surprised how "respectable" people in the JS community are struggling in their insulated "JS" world. While there are many ways to solve the given problem, none was even mentioned. It seems to be a good opportunity for people with out-of-the-box thinking and broader knowledge. But to gain that advantage, the community needs a willingness to listen. Is there any?

On subj:
While JS is single-threaded by definition, there are lots of ways to get parallel processing, from graphics computation to data conversion: XSLT, SVG & CSS transforms, just to name a few. Depending on your needs, there is a good chance a solution exists.
@Jeff Schwartz: JS loading and parsing are multithreaded in Chrome; running is not. XSLT internally uses multithreaded processing; frames have their own threads. And so on. I have had two alerts on the screen at the same time quite a bit while tuning the communication in between, and have had whole HTML rendered as a string by multithreaded XSLT and then passed to single-threaded DOM rendering. Please do not confuse people if you do not know what you are talking about.
There is no multithreaded JS (Web Workers aside). But there are numerous ways to use multithreading in the browser from JS and between JS VMs.
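A minimal hedged sketch of that one exception (worker.js is an assumed file name; the doubling stands in for real work):

// worker.js - runs on its own thread and communicates by message passing
self.onmessage = function(e){
    self.postMessage(e.data * 2); // a heavy computation would stand in here
};

// main page - the UI thread stays responsive while the worker computes
var worker = new Worker("worker.js");
worker.onmessage = function(e){ console.log("result:", e.data); };
worker.postMessage(21); // eventually logs "result: 42"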

2013-09-20

XHTML5 DTD validation and data-dojo- attributes

In development; updates will follow.

While this is a guide for dojo template validation, the concept is applicable to any JS library which leverages the HTML5 data- attributes.

Mistakes in HTML dojo toolkit widget templates can cause quite a bit of pain, as the browser will try to make a smart guess at what it should actually make out of an invalid document. As a result you can find misbehavior in a completely irrelevant location and spend a fortune finding the original cause.
The solution is simple: add DTD validation to the development process.
If the template is made as HTML, a smart enough IDE or online validator will highlight the errors.

The dijit templated widget uses a DIV as the template string source, which prevents a regular DTD check. Fortunately the text! AMD plugin gives the ability to strip the tag from the HTML body content:
SAMPLE
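Until the final sample lands here, a hedged sketch of the idea: the !strip flag of the dojo/text! plugin drops the XML/DOCTYPE preamble and everything outside <body>, so a complete, DTD-validatable HTML file can serve as the template (paths and names are illustrative):

define(["dojo/_base/declare", "dijit/_WidgetBase", "dijit/_TemplatedMixin",
        "dojo/text!./templates/MyWidget.html!strip" // keeps only the body content
], function(declare, _WidgetBase, _TemplatedMixin, template){
    return declare([_WidgetBase, _TemplatedMixin], {
        templateString: template
    });
});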

HTML5's twisted design made DTD validation impossible just for the sake of XML hating. Luckily for us there is a workaround. XML allows the use of XSL, and browsers are capable of rendering data-dojo- attributes out of a "dojo-data" namespace before the JS and HTML DOM are used, making the browser deal with the HTML5 syntax and the XML parser with the DTD validation.

A sample of completely valid XML and HTML5:
SAMPLE.

The dijit template does not accept XML as a template string, so you would need to use the XHtmlBody! loader.
SAMPLE.


References:

www.html5dtd.org - tries to cover a subset of the HTML5 DTD.

2013-09-07

OSI Foundation as a service?

For my open-sourced projects (shim library, shim-based JS framework, API registry, Dash Studio, and a series of micro-projects) there is a need for legal shelter and non-profit monetary support. In general this fits the same concept as the Apache or Dojo foundations. From a brief review I have not found a way to use either of them as a base for my projects. There is no transparency on how the money flows between a particular project/contributor and a donation/investor. In a world where projects arise, succeed, and fail/die highly dynamically, there is no service which could support open-source projects efficiently as a service. At the moment people either manage on their own or give up completely in favor of "free" community service. Usually non-profit activities are balanced among:
  • an outsourced (free) foundation for legal purposes
  • source repository/project management, which is free for an OSI project
  • demo content hosting and light services on related VCS service hosts, or the cheapest hosting paid out of pocket
  • donations processed either personally or by a shelter company
  • a (continuous) build and test environment
There are some other aspects like publishing and advertising, meetings, participation in large shows, etc.

A service of this kind would be quite beneficial for the efficient support of free and open-source projects.

Essential for such an organization would be transparency of funds distribution and flow, along with straight targeting of donations.

2013-08-24

Modules compatibility as business problem

While the issues (and solutions) below are not specific to the browser client, the current subject of an API registry is primarily focused there.
  • Legal compatibility. While open source is a great engine of progress, it is also a significant problem in keeping an application legally clean. Most open-source contributors are enthusiastic junior developers who do not really pay attention to such things. And adding some fancy UI component could lead to a lawsuit with a cost way bigger than any nice UI could ever bring. To prevent such a gap OSI licensing was developed, but going further it becomes insufficient when using a library which has multiple contributors.
    A Contributors License Agreement (CLA) has been developed by the foundations, and each code commit should be backed by it when applied to a library or application. When using external modules, the legal department should go over the commit list and make sure that every committer has signed it.
    A complete chain of external module licences and CLAs could relax the restrictions on external use or make the approval process straightforward.
  • Identity validation (electronic signatures on sources and/or binaries) will give assurance for a security review. For now this is performed (if at all) by the IT department, while it could be delegated to a trusted verifier within the API registry.
  • Methods overloading. JS methods can have multiple signatures; as a result there is a need for signature recognition at the beginning of a method and run-time routing to the code matching the particular signature (see the dispatch sketch at the end of this post). That increases the JS code size, makes the API confusing, and makes the code hard to maintain (as business logic is mixed with signature recognition).
    In compiled languages this problem is resolved by performing the signature recognition and type casting at compile time. That way method overloading costs nothing in development/maintenance or runtime performance.
  • Extending an existing API. An existing API often needs to be extended with additional functionality, for example a retryCount in XHR or an alternative location for an AMD MID. Shim code could be attached to the API similarly to AOP advice (see the advice sketch at the end of this post).
  • Platform support. Often an API has a generic solution which is broken in some environments (like the lack of HTML Components or Web Components in browsers). Special treatment could be applied before|after|instead of the original module methods for such a special case. The problem is in separating the main codebase from the special cases. It is pretty similar to extending an existing API, but with platform-conditional inclusion applied on top.
  • API registry. For one reason or another, like licensing or platform support, alternative modules may be demanded. AMD does not answer whether an AMD MID has a backup location or what licence it uses. The API registry is meant to hold information about the module for legal, design, development, and maintenance purposes:
    • API - interface definition locator: a reference to the pure API(s) which the module implements.
    • dependencies, presenting not just a list of modules used by the given one but also their validated/permitted revisions and perhaps sources.
    • API compatibility. Similar business logic could be implemented by different modules, but not all of them may be compatible with each other and with the caller. A method signature resolves just the API syntax, not the implementation compatibility.
    • localization (I18N) and accessibility support
    • source and primary source location (VCS branch + revision)
    • identity validation: verifiable sources (the module itself as well as the other modules involved in the binary assembly), binary signature, a trusted compilation environment reference and locator, and all sources of the binary assembly
    • help, blog, FAQ, discussion
    • support abilities
    • legal (license, foundation, contributor CLA, etc.)
    • platform support
    • tests (including runs against dependency revisions), a results matrix, and related support. The test in this case is treated as a "test" dependency of the original module.
    • other dimensions TBD. The registry should permit custom attributes of different types.
  • Open registry network. Currently solutions for some of the problems above are insulated under a single foundation umbrella with the same licence or API convention (like OSI or Dojo). Having the registry as an open platform capable of passing data through and caching it should take the compatibility complexity out of decision making, opening the doors for individual contributors' modules into the enterprise. The best analogy for the data sharing would be DNS.
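A hedged dispatch sketch for the "Methods overloading" point above (insert() and its single-signature variants are illustrative names, not an existing API):

function insertNode(node){ /* single-signature implementation */ }
function insertByQuery(cssQuery, parentNode){ /* single-signature implementation */ }

// the public method spends its first lines recognizing the signature and
// routing - exactly the boilerplate a compile-time overload resolution avoids
function insert(){
    if( arguments.length === 1 && typeof arguments[0] === "object" ){
        return insertNode(arguments[0]);                  // insert(node)
    }
    if( typeof arguments[0] === "string" ){
        return insertByQuery(arguments[0], arguments[1]); // insert(cssQuery, parentNode)
    }
    throw new TypeError("insert(): unsupported signature");
}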
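And a hedged sketch for the "Extending an existing API" point, using dojo/aspect around advice (api is assumed to be an object with a promise-returning get(url, options); retryCount is the added extension):

require(["dojo/aspect"], function(aspect){
    // wrap the assumed api.get(url, options) so that a failed request is
    // retried options.retryCount times before the error is surfaced
    aspect.around(api, "get", function(originalGet){
        return function(url, options){
            var retries = (options && options.retryCount) || 0;
            function attempt(){
                return originalGet(url, options).otherwise(function(err){
                    if( retries-- > 0 ){ return attempt(); }
                    throw err;
                });
            }
            return attempt();
        };
    });
});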

2013-08-01

opensource micro projects hosting

It is time to split the playground kitchen-sink set into independent subprojects. The question is:

which open-source hosting platform will serve the project best?

It should serve contributors as well as the developers who use the project, plus project discoverability and popularity.

Required components:
  • Version control (SVN, HG, GIT) with
    • http(s) protocol
    • free for open source
    • project-level contributor ACLs
    • private repositories *
  • Ticket tracking (Jira, Bugzilla, Trac, etc.)
    • links to VC via commits *
  • Wiki (w/ comments)
  • FAQ (w/ comments)
  • Forum
    • web + mailing list
    • spam control
    • filters (e.g. "out of office" replies removed)
    • moderation (permissive and by approval)
    • voting
  • project management *
  • public discoverability/self-advertisement
  • demo site
    • static content with own JS
    • active content (PHP, .NET, Python, Java)
    • SQL
  • integrated continuous build and test environment *
What is missing?

* nice to have, but for a micro-project alone it has no value - just potential to grow into a complex or commercial project.

There are some commercial suites like cloudforge.com offering a free tier, and free ones like sourceforge.net.

The popularity of GitHub in the open-source community is unquestionable, and it grows along with the popularity of GIT itself. While its entry-level commercial pricing loses to bitbucket.org, should the herd instinct be counted as the more significant factor for an open-source project?

Conclusion.


  • Host VC, bug tracker, and Wiki on GitHub; outsource the demo to JSFiddle + PHP hosting, and the mailing list/forum either to Google Groups or to one of the PHP mailing-list apps. Static pages and the forum could run over GitHub Pages, but that needs some extra effort to learn jekyll.
  • cloudforge.com - everything including continuous integration. As integration requires $20/m (standard account + integration daemon), multiple projects could be integrated into a common test flow. Active back-end and demo outsourced.
  • sourceforge.net hosts most of the services. The developer web supports PHP, Perl, Python, Tcl, Ruby, and shell scripts. No direct build integration.
  • codeplex.com - .NET-centric Microsoft source hosting project. Has everything except content hosting.
My preliminary choice is GitHub for VC + tracking (it has GIT with an SVN bridge, the best visibility, and the best toolset), CloudForge for test integration as a cumulative project, external demo hosting, and a forum TBD (perhaps CloudForge or SourceForge).

Helpful links:
jsfiddle.net - front-end code demo site (JS, HTML, CSS, web service simulation)
GitHub Pages hosts myproj.github.io and serves static HTML (it can be generated with the HTML generator jekyll plus the disqus blog service - run by ruby locally and committed to GitHub).

ajaxian.com - popular blog on Web 2.0 subjects.

Comparison of open-source software hosting facilities




2013-07-11

How to embed TFS revision number into C# WCF project

Once in a while there is a need to see whether code changes were deployed to one environment or another, i.e. does the QA box run prod or the latest dev version/branch? Or to sort it out between a few developers who use a shared deployment environment where it is uncertain which files each developer is working on. The answer is a service which returns the revision of the module folder and the set of changed files. Manual modification is quite painful, and the only real answer is automated build processing.

The idea is simple and applicable to any kind of project and version control. A pre-build step should run a script which gets the history for the project folder, preserving only the last record, and then runs a status check to expose files modified after check-out. The output is embedded into rendered service code.
A sample for a WCF service generated from TFS output:

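rem %1 is assumed to hold the folder of the TFS command-line client (tf.exe)
rem the script renders ClassFiles\Revision.cs: a GET service that returns the
rem last check-in of the project folder plus the list of pending local changes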
set path=%path%;%1%
set outFile=%~dp0ClassFiles\Revision.cs
echo using System.ServiceModel.Web;using System.ServiceModel;using System.ServiceModel.Activation; >%outFile%
echo namespace BSA_EventPlanner.ClassFiles >>%outFile%
echo { [ServiceContract][System.ServiceModel.Activation.AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] >>%outFile%
echo  public class Revision >>%outFile%
echo  { [WebInvoke(UriTemplate = "/", Method = "GET", ResponseFormat = WebMessageFormat.Xml)] >>%outFile%
echo   public string EntitieList() >>%outFile%
echo   {  var s = @^" >>%outFile%
tf history %~dp0 /stopafter:1  /recursive /format:brief /noprompt >>%outFile%
tf status %~dp0 /recursive >>%outFile%
echo ^";return s;}}}  >>%outFile%

The rendered code will look like:
using System.ServiceModel.Web;
using System.ServiceModel;
using System.ServiceModel.Activation; 
namespace BSA_EventPlanner.ClassFiles 
{ [ServiceContract]
        [System.ServiceModel.Activation.AspNetCompatibilityRequirements(RequirementsMode =
 AspNetCompatibilityRequirementsMode.Allowed)] 
 public class Revision 
 { [WebInvoke(UriTemplate = "/", Method = "GET", ResponseFormat = 
                WebMessageFormat.Xml)] 
  public string EntitieList() 
  {  var s = @" 
";return s;}}}  

The string is empty if TFS is not available; otherwise it will be filled with the revision and the list of modified files.
Do not forget to add the WCF routing and the ClassFiles\Revision.cs file to the project.

Happy coding!


2013-06-05

JS UI Widget design

a work in progress; the content will evolve over time.

Requirements

HTML, CSS, and other resources should represent an insulated, independently designed and developed module.
While those component types are not mandatory, using standard components increases manageability, productivity, and reliability. From a management point of view, having a component in a publicly accepted standard makes it easier to find a worker, reassign the job, pick the best expert for a critical piece, match the standard and utilize standards-based tools (including validation, tests, compilation and transformations), follow the interfacing protocol of other tiers (component types), and so on.

The resource types are listed individually in order of the development life cycle:

Requirements and overview documents. The alternatives are online docs and static file formats. Online is more about collaboration; static is the currently accepted snapshot. Static content should ideally refer to an online threaded discussion, with the ability to back-reference a location inside the static content. As static content, HTML is for now universal: it is popular, can include raster and vector graphics, and is print-friendly.

Prototypes and mockups. Adobe (Photoshop & Illustrator) and Visio are the most industry-accepted tools. All have the ability to keep the project in a cross-platform and cross-media format; SVG is the best candidate. Another approach is to have a public format (SVG, PNG, PDF) as a secondary one, synchronized on each modification of the original. I personally like 3D MAX for these purposes, but as a project manager I would not use it unless at least a few people on the team were familiar with the concept and the product.

HTML-based templates. All UI developers are familiar with this format. Any custom one narrows the number of people familiar with it and as a result cuts manageability. The balance of functionality given and taken away is always in place. HTML itself does not have all the "template" functionality, of course. But it is common to underestimate its power.
I like the ability to fuse different technologies in a compatible way. For example, dojo toolkit dijit 1.x provides HTML customization via data-dojo-xxx attributes, leaving the HTML functionality intact.
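A hedged illustration (the markup and names are made up): the template stays plain, validatable HTML, while the behavior hooks in via data-dojo-* attributes:

var templateString =
      '<div class="myWidget">'
    + '  <span data-dojo-attach-point="titleNode"></span>'
    + '  <button data-dojo-attach-event="onclick:_onRefresh">Refresh</button>'
    + '</div>';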
While among "native" templates the XSLT is oldest and most robust one, it is not 100% HTML compatible. It is not easy to find people skilled in both XSLT and HTML.

A lack of development cycle support is common to all template variants; XSLT has the best coverage there (preview, debug, rudimentary documentation, and a test suite).

Another reason for template format selection could be native engine (parser and binding) support. On the web client the browser itself is one such engine. On the server side multiple HTML parsers are available; XML is the simplest one. Native support gives performance via native code and multithreaded, asynchronous processing.
IMO, for simple widgets the DTK 1.x dijit format is best (but with browser loading and parsing instead of the JS one); for complex ones, XSLT.

CSS and style formats. CSS is outdated in terms of complex project requirements. XStyle provides rudimentary but still way more powerful control over complex styling rules. For a relatively simple UI the set of rules is limited and as a result will (and should) fit into the CSS limits. If not, then the styles are a subject for insulation into their own module (style themes and I18N localization are samples of such). A good styling format would be backward-compatible with native CSS and give both shims and OOP abstractions. At the moment it does not exist.

Tiers plumbing. A configurable loader should be sufficient to support distributed compilation and server- and client-side module bindings, provided the tier resources are loadable directly or by loader plugins (see cssI! as a sample). AMD or UMD loaders are a good fit.

Design

Phases of widget life. 

Placeholder, facade, UI, and runtime. (De-)serialization and the View Model. Interaction with the data model.
Incremental and streaming rendering. Interruptible and prioritized rendering.

Presentation variations.

I18N, media device (display|print).

Shims.

Declarative programming as an effective way to target rendering options according to the destination environment and configuration.
The shim name has a dual meaning: a "marker" in declarative programming and the implementation module for it.

2013-05-30

JS hashmap as interface tree for inherited hierarchies

If you ever need this: last night I made a converter from JSON to a class inheritance hierarchy.
{Exception:{IOException:{StreamException:{OpenStremException:function(){} }}}} - for Exceptions hierarchy

{Marker:{Relative:{prev:function(n){this.index=n;},prev1:0,prev2:0,... }
,Absolute:{abs:function(n){},abs1:0,abs2:0,... }}} - for parameter references
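A hedged sketch of such a converter (buildHierarchy is an illustrative name; the numeric alias leaves from the Marker sample are out of scope here):

// walks the hashmap depth-first: every key becomes a constructor whose
// prototype chains to its parent's, so instanceof works across the hierarchy
function buildHierarchy(tree, parentCtor){
    var out = {};
    for(var name in tree){
        var node = tree[name];
        if(typeof node !== "function" && typeof node !== "object"){
            continue; // numeric alias leaves are left out of this sketch
        }
        var ctor = typeof node === "function" ? node : function(){};
        if(parentCtor){ ctor.prototype = Object.create(parentCtor.prototype); }
        out[name] = ctor;
        if(typeof node === "object"){
            var children = buildHierarchy(node, ctor);
            for(var child in children){ out[child] = children[child]; }
        }
    }
    return out;
}

var exceptions = buildHierarchy(
    {Exception:{IOException:{StreamException:{OpenStremException:function(){}}}}} );
new exceptions.OpenStremException() instanceof exceptions.IOException; // true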

You could find that useful for marking input field types in a form generator.

There are few JS patterns where pure class inheritance makes much sense. Exceptions are among them.
OOP defines a pure interface as a class with method or member declarations but without their implementation. Even if a method/member will be defined by the JS implementation, there is no need for a declaration in the interface class itself. For example, in the given hierarchy

  • Exception - IOException - StreamException - StreamOpenException

only the final one (StreamOpenException) could hold some data, like a URI, or a redefined toString() method. In exception handling code only the hierarchy could be used. Data like the URI could be utilized mostly by non-functional (logging) code. If there is a rare need to (re)define implementation members, the function in the hashmap serves as the constructor, i.e. it will be valid to say new exceptions.StreamOpenException(myURL). Methods could be defined within the constructor via this.xxx assignment. I.e.
StreamOpenException: function(URL){this.URL=URL;this.toString=function(){return this.URL;};}
will override the parent's string conversion inside of the StreamOpenException constructor.

Another application for a pure hierarchy definition would be parameter markers:
  • $w().xhr(...).title().innerHTML( $w.prev, $w.prev2 )
where $w.prev and $w.prev2 are instances of the Marker hierarchy (above). For convenience $w.prev, $w.prev1, and $w.prev(1) reference the same entity, which is the result of the previous operation in the chain. In the sample above it is the title attribute.


Happy coding!

2013-03-14

$w(css,parentNode) as WidgetList (NodeList on steroids)

A story in development; the post will be modified as the ongoing design changes.

A CSS selector for the widget's content has been used for a while by dojoplay2012/lib/TemplatedWidget. It allows the query to be scoped by the widget's content and is handy for group operations given by NodeList:

this.$(".classInChild").html("");

This interface has been extended to get sub-child widgets:

this.$w(".classInChild").resize();


At the moment $w() returns only the first widget, but it would be handy to operate on all matching widgets with the same call, so that the expression above would resize() all matching children rather than the first only.

Following the NodeList chaining concept, it would be nice to have the calls chained:

this.$w(".classInChild").update().resize();

A WidgetList will be returned on each call to support the chaining. That could be achieved by wrapping the returned widgets into an object which will simulate each method call, invoking it on each widget and returning the WidgetList.
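A hedged sketch of such a wrapper (WidgetList and $w are the post's own working names, not a shipped API):

// for every method found on the matched widgets the wrapper exposes a function
// that fans the call out to all widgets, stores the results and returns itself
function WidgetList(widgets){
    var self = this;
    self.widgets = widgets;
    self.lastResults = [];
    self.forEachResult = function(cb){ self.lastResults.forEach(cb); return self; };
    widgets.forEach(function(w){
        for(var name in w){
            if(typeof w[name] === "function" && !(name in self)){
                (function(method){            // closure keeps the method name
                    self[method] = function(){
                        var args = arguments;
                        self.lastResults = self.widgets.map(function(widget){
                            return widget[method].apply(widget, args);
                        });
                        return self;          // chaining
                    };
                })(name);
            }
        }
    });
}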

To access the results of the last call:
this.$w(".classInChild")
    .resize(maxDimentions)
    .getSize()
    .forEachResult(function(sz){ total.x+=sz.x; });

COST

When the WidgetList is created there is no way to know ahead of time which method will be called first in the chain, which implies the overhead of creating a wrapper for each method name in each returned object.
Each call could potentially change the widget set and its interface, and as a result the WidgetList and its method map need to be updated. While API changes of the widgets could be ignored (assuming no changes during the WidgetList's existence), DOM changes are a little trickier to guess.

WAYS AROUND
The chaining by member method name could be substituted with an explicit function name passed as the first parameter:

this.$w(".classInChild")
    .call( "resize", maxDimentions )
    .call(  "getSize"  )
    .forEachResult( function(sz){ total.x+=sz.x; } );

Another, shorter version of the above returns a function instead of an object:

this.$w(".classInChild")
    ( "resize", maxDimentions )
    (  "getSize"  )
    .forEachResult( function(sz){ total.x+=sz.x; } );


Such a call convention prevents the creation of the mappings and hence saves extra memory and CPU during execution. But it makes the code less elegant and readable. A simple string substitution during compilation could convert the original "nice" syntax into the "efficient" one.
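A hedged sketch of this mapping-free variant (again using the post's working names):

// one generic call() instead of a per-method wrapper map: no up-front
// scanning of widget interfaces and no per-instance function creation
function WidgetList(widgets){
    this.widgets = widgets;
    this.lastResults = [];
}
WidgetList.prototype.call = function(method /*, args... */){
    var args = Array.prototype.slice.call(arguments, 1);
    this.lastResults = this.widgets.map(function(w){
        return w[method].apply(w, args);
    });
    return this;
};
WidgetList.prototype.forEachResult = function(cb){
    this.lastResults.forEach(cb);
    return this;
};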

Comments and suggestions are welcome.

2013-02-12

Stateful object history vs change journal DB patterns

While creating a retail sales module I came across two different patterns for dealing with the history of an object.
On one side, each atomic operation needs to be preserved for audit reasons, which makes a pattern of journalling change records. The current object state in such a pattern is a sequential combination of the journal records. In the best case it is just the last record; in the worst it is a set of business rules applied over the sequence. In order to keep track of the current state, all records should be preserved.

As an alternative, instead of keeping the change records as the primary source for the current state, the object's state itself could serve as the historical record. That would eliminate the need for an extra table per entity (the change one).

As history is still a requirement, each change should have a matching record in a HISTORY namespace. This namespace could reside either in another DB or in the same one but under another high-availability profile (partitioning, etc.).
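To make the two patterns concrete, a hedged illustration with made-up fields:

// journal pattern: the current state must be derived from the change records
var changeJournal = [
    { orderId: 42, field: "status", value: "NEW",  at: "2013-02-01T10:00" },
    { orderId: 42, field: "status", value: "PAID", at: "2013-02-02T09:30" }
];

// stateful pattern: the object row IS the state; on each change the whole
// row is copied into the HISTORY namespace, so both sides share one schema
var order        = { orderId: 42, status: "PAID", total: 19.99 };
var orderHistory = [ { orderId: 42, status: "NEW", total: 19.99 } ];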

There is another frequently used pattern which goes along with history tracking: comments. A comment often accompanies a change, but on the other hand a comment could be just a verbal instruction. While the first fits into the change content, a comment without a change looks like overkill, as it wastes an object state for "no reason". But from the data integrity perspective the object state is a part of the comment; i.e., without the object state as a whole, the comment has no tracking value.

An interesting side effect of the history pattern is that the schema is preserved both in the current objects snapshot and in the history namespace; i.e., there is no need to increase the DB schema complexity to add history tracking.

Having the state embedded into the object rather than using a journal assumes the data from a change will be embedded into the object. A normalized schema allows preserving a minimal data footprint (the change record holds only the changed fields and a reference to the object); fusing the change with the other object fields gives that up. Could the increase of the DB table size be justified? And what are the criteria?

1. DB schema simplicity. As the stateful object itself comprises all potential changesets, there is no need to keep relations between a change and the object. During a prototyping phase this is a strong argument, as it is under conditions of stressed development resources.

2. DB performance effect.
Journalling allows insulating change operations and avoids touching other object properties. But the current object state is then not a simple extraction: a query or a complex procedure produces the cumulative result. Optimizing such cases leads to a cached "current" state (either within the object or aside), which in fact is the same pattern as the stateful object, with some extra complexity on top. As the stateful object reflects all kinds of changes, per-use-case tuning is still in place (indexing, partitioning, etc.). But it gives the ability to separate statistics and troubleshooting (history review) optimizations into a dedicated environment (the HISTORY namespace). The most real-time namespace, the "current" state, is extracted from a much larger volume of historical data and as a result can be held under a high-availability profile.

The sync with history for the stateful object could be done over generic queueing, which allows decoupling the RT and HISTORY namespaces. Obviously the sync needs to have completed when looking at RT data within HISTORY. But that is a rare case, as most HISTORY operations (troubleshooting or reports) are done way beyond the queue flush time (days vs. minutes).

3. DB integrity. While journalling permits a strictly normalized schema, it is also subject to DB corruption or the cost of transactional locks. Unlike that, operations over a stateful object are atomic by definition (no object references) and do not require any locks.

4. For the stateful pattern, the state flow gains the simplest implementation: either the service which makes a change initiates the next step in the flow, or it can be done natively by DB triggers (not my case, but DB-centric apps will value it a lot).

5. Security. The stateful approach also gives the ability to create extra DB user profiles, which in the case of a ShoppingCart could be a business/audit requirement. More granular access to historical vs. current state data, and the exclusion of any changes in the HISTORY namespace, shape a true multi-tiered security.
Conclusion. If the project is stable and size + performance dominate the development cost, the journal records pattern could win. In other cases (including mine) the stateful pattern is the way to go. Hooray for conscious simplicity!

2013-02-01

The internal combustion engine without batteries, generator and starter

There was a news article on the ability to ignite an ICE using its own chambers for the first push. No details, but a week later a whole chain of related ideas came up.

It looks like modern gasoline cars could lose the weight (and hence the cost) of batteries, generator, and starter (which is sometimes combined with the generator). The heavy weight of those three components, along with the additional load on the engine to spin the generator, is a good reason to improve efficiency/performance.

Diesel engines are ignited by compressed fuel, which requires quite a bit more energy to start in comparison with the simple fuel injection and spark of a gasoline one. I will set diesel aside for now; let's see what could be done in a gasoline car.

Complete removal of electrical storage does not seem possible, as fuel ignition requires a spark (in a gasoline engine) not only for the initial start but also during the work cycle. But for this purpose the amount of energy needed is significantly less than what is carried in current car batteries.

Most of us have more than sufficient energy to produce a spark with a twist of the thumb; the piezoelectric gas lighter is a proof. The last piece in this puzzle is fuel injection. Could it use the same energy from the car key's twist? Hmm... possibly. As a last resort, the gas pedal could serve as an additional starting energy source. Perhaps it recalls the time when the initial fuel injection was controlled manually in the carburetor :)

The small amount of electricity to support the work cycle could be stored in a small capacitor. I remember times when one was used instead of dead batteries. But in such situations spinning the generator was a pain - the classic engine required a full spin by the starter. Or, as 100 years ago, a start cranked by your own hands.

How could the generator disappear? It could not. But a classic generator is not required anymore. Instead, a few wire rings making an electromagnetic coil fused into the ceramic cap of the engine, plus a magnetized plunger, would render enough energy to keep the cycle of gas pump, injector, and ignition going. As the motor gains enough RPM, the amount of generated electricity should fulfill the whole car's needs.

Reducing the battery volume will affect the ability to use the car lights and, say, the stereo without the engine on. Seems bad... But if you think of the gasoline as the electricity source, it is justifiable. Obviously the engine should then work as an efficient generator.

I love the idea of a car without gears or transmission, all-wheel powered with independent torque (and even spin direction) control. More lightweight and efficient.

Which brings us to the next generation: the fusion of the ICE with the generator.
Details of that idea are to follow...

2013-01-30

The pattern of using fake functions parameters instead of var declaration in JavaScript

Many JS geeks use bizarre tricks to shrink the code as much as possible. While it makes the code readability worse and as a result less reliable, it pays back by saving a byte or two, making them proud.

Here is my attempt to recover the damage by balancing it with some benefits.

sample 1 function(par1, par2, var1, var2, var3, i, j, x, y, z, k, w, n, m)

par1, par2 - using parameters is out of the scope of this article; just a note on the good habit of keeping them read-only.

var1, var2, var3 - the local variables, which are usually declared with var:

sample 2 var var1, var2, var3;
The good thing about this kind of declaration is that, along with declaring the variable, it can be initialized.
IMO it is bad to have any uninitialized variable, which means no ahead-of-use declaration. The variable should be declared in the place where it is used and where all the initialization data are available:

sample 3 for( var x=0...)
for( var y=x ... )

But scripting guys do that all over, thinking of saving the bytes of the "var " string by replacing it with a list of variables (as in #2, #4):
sample 4 var x,y;
for( x=0...)
for( y=0 ... )

5 more bytes ('var ' and ';') can be taken away by having the local variables declared as function parameters (a fuller before/after sketch follows the list of side effects below):

sample 5 function (x,y)
for( x=0...)
for( y=0 ... )
A list of useful side effects appears:
  1. shrinking the code footprint
  2. no global-scope variables mistakenly assigned (#6)
    sample 6 function ()// no x is declared in the function
    for( x=0...) // here the global-scope 'x' gets assigned
  3. questionable, but will be loved by scripters (yuck) - declaring all variables in a single comma-separated list.
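The promised before/after sketch (sumOfSquares is a made-up helper; the behavior is identical in both versions):

function sumOfSquaresBefore(list){
    var total = 0, i;                       // 'var ' and ';' cost the bytes
    for(i = 0; i < list.length; i++){ total += list[i] * list[i]; }
    return total;
}
function sumOfSquaresAfter(list, total, i){ // locals hide as unused parameters
    for(total = i = 0; i < list.length; i++){ total += list[i] * list[i]; }
    return total;
}
sumOfSquaresAfter([1, 2, 3]); // 14; callers simply omit the extra arguments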

To prevent potentially missing declarations, the variables usually used in local scope (i,j,o,r,d,x,y,z,k,w,n,m) could be listed in complex functions as a safeguard.
Compilers could do this optimization quite well; I am not sure whether relocating variable declarations into function parameters is an option there.