2010-11-16
Silicon Valley Chrome Developers
My two cents to the community will come from this blog.
2010-11-04
ASSERT - how it should be
ASSERT( conditionToFail)( var1, var2,...);
In debug mode it should offer "break, ignore, ignore forever". On break it should set a programmatic breakpoint, invoke the debugger, then evaluate the condition once more.
Depending on the interactivity available (console, UI, none), the prompt may or may not be shown.
Pseudocode:
while( !condition )
{
    static boolean ignoreForever = false;
    if( ignoreForever )
        break;
    log( FILE, LINE );
    log( "condition=" #condition " var1=" #var1 " var2=" #var2 );
    int pr = prompt( "assertion condition " #condition " failed: break, ignore, ignore forever?" );
    if( pr == Ignore )
        break;
    if( pr == IgnoreForever )
    {
        ignoreForever = true;
        break;
    }
    debugger; // programmatic breakpoint
    int repeatAgain = 0;
    condition; // re-evaluate for troubleshooting in the debugger
    if( !repeatAgain ) // set to 1 in the debugger to cycle again
        break;
}
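The same idea can be sketched in JavaScript. Here the names (`assertWithVars`, the per-call-site `ignoredSites` set) are invented for this illustration; a native implementation would show the prompt and hit `debugger;` where this sketch merely logs and auto-selects "ignore forever":

```javascript
// Per-call-site "ignore forever" memory, keyed by a site label.
const ignoredSites = new Set();

// The condition is passed as a function so it can be re-evaluated,
// mirroring the "call condition once more" step of the pseudocode.
function assertWithVars(site, condition, vars) {
  if (condition()) return true;             // assertion holds, nothing to do
  if (ignoredSites.has(site)) return false; // "ignore forever" was chosen earlier
  console.log("ASSERT failed at " + site + "; vars:", vars);
  // In a debug build this is where the prompt and `debugger;` would go.
  ignoredSites.add(site);                   // simulate choosing "ignore forever"
  return false;
}
```

The per-site memory is what makes "ignore forever" cheap: after the first failure the check is a single Set lookup.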
2010-09-14
Stealing IP
GoDaddy.com is proxying both bogus domains. The story continues...
Whois entry for TOTALLYABSORBED.COM:
Registrant:
Gigline Software, LLC
15244 Vintage Ave
Grand Haven, Michigan 49417
United States
Registered through: GoDaddy.com, Inc. (http://www.godaddy.com)
Domain Name: TOTALLYABSORBED.COM
Created on: 19-Jul-09
Expires on: 19-Jul-11
Last Updated on: 19-Jul-09
Administrative Contact:
Buitenhuis, Eric eric.buitenhuis@gmail.com
Gigline Software, LLC
15244 Vintage Ave
Grand Haven, Michigan 49417
United States
(616) 502-2228 Fax -- (616) 395-0759
Technical Contact:
Buitenhuis, Eric eric.buitenhuis@gmail.com
Gigline Software, LLC
15244 Vintage Ave
Grand Haven, Michigan 49417
United States
(616) 502-2228 Fax -- (616) 395-0759
Domain servers in listed order:
NS1.EVERYDNS.NET
NS3.EVERYDNS.NET
NS2.EVERYDNS.NET
NS4.EVERYDNS.NET
___________________
Registrant:
Domains by Proxy, Inc.
DomainsByProxy.com
15111 N. Hayden Rd., Ste 160, PMB 353
Scottsdale, Arizona 85260
United States
Registered through: GoDaddy.com, Inc. (http://www.godaddy.com)
Domain Name: GIGLINESOFTWARE.COM
Created on: 29-Apr-06
Expires on: 29-Apr-12
Last Updated on: 24-Apr-10
Administrative Contact:
Private, Registration GIGLINESOFTWARE.COM@domainsbyproxy.com
Domains by Proxy, Inc.
DomainsByProxy.com
15111 N. Hayden Rd., Ste 160, PMB 353
Scottsdale, Arizona 85260
United States
(480) 624-2599 Fax -- (480) 624-2598
Technical Contact:
Private, Registration GIGLINESOFTWARE.COM@domainsbyproxy.com
Domains by Proxy, Inc.
DomainsByProxy.com
15111 N. Hayden Rd., Ste 160, PMB 353
Scottsdale, Arizona 85260
United States
(480) 624-2599 Fax -- (480) 624-2598
Domain servers in listed order:
NS4.EVERYDNS.NET
NS1.EVERYDNS.NET
NS2.EVERYDNS.NET
NS3.EVERYDNS.NET
2010-04-25
Extending web browser - HTML 3D/stereo support
This article is under construction.
Any comments and suggestions are appreciated.
Brief
Current browsers have little support for stereo media. With relatively small changes to CSS and the rendering engine, open-source browsers could become stereo-capable. Given the appearance of 3D TV sets and channels, and the embedding of browsers in set-top boxes (not to mention 3D game consoles), HTML standards should be extended for this new medium of vision, while still using a 2D engine to render stereo content.
Intro
While working on speed analysis of an embedded browser, I realized that changes on the browser side are unavoidable to keep HTML logic relevant to developers' needs, both for performance and for API convenience, not to mention web application design and architecture. Wary of breaking the standard behind it, I started to think about web browser evolution.
3D, which was so attractive in the past, has finally come into our lives. With the enormous success of the movie Avatar in 3D, the entertainment industry has been pushed towards 3D, and not just on the movie theater screen. In some developed countries 3D TV channels are available; you can buy 3D Blu-ray discs and players, and stereo-capable TV sets. TVs and set-top boxes are capable of web browsing, and most of them have a UI based on an HTML browser. All of these are prerequisites for taking advantage of 3D in an HTML browser. It is only a question of time before some device extends HTML to support a 3D-like UI.
What to expect
The first thing stereo-capable device browsers will adopt is stereo skinning of HTML controls: from borders and scroll bars to buttons. Controls that look quite fancy today, made from flat images, will look like stone-age drawings on those devices.
Following such devices and their growing popularity, the frameworks will adopt stereo or even 3D UI; Flash, Silverlight, and Java UI are on the list. After Quake was played in a browser, I will not be surprised if JS libraries offer sets of stereo themes.
I have no doubt about the eventual acceptance on the web of a 3D model similar to OpenGL. But it will not replace HTML as the primary structural container for textual and media content, including some 3D. Instead it will be embedded into HTML as a plugin or namespace, similar to 2D vector graphics (SVG/Canvas). Unlike full-fledged 3D, support for stereo features in HTML itself is in demand now.
Stereo vs 3D
Even though a single human eye is capable of depth perception, the major 3D impression comes from stereo vision, where each eye sees its own perspective. The depth details come from the difference between the two eyes' pictures. True 3D has not yet come to our computer screens; we must live with the 3D surrogate named stereo.
Game developers invented another substitute for 3D that sits between the stereo couple and actual 3D: planes layered by z-index. In CSS, z-index serves only to decide which layer is rendered on top; it does not reflect 3D depth.
Producing stereo output does not require a 3D engine on the browser side. Stereo support in the browser could be substituted by UI designers preparing a stereo image couple. In most cases the difference between the two images will be shifts in shadows or in HTML components; how big the shifts should be depends on personal criteria and device use cases. Having 3D scene parameters for HTML elements, such as x, y, depth, and an orthogonal vector, would give enough input to a 3D rendering engine, but is too much for a 2D one.
A 2D engine using an additional depth coordinate along with scene parameters could render a good stereo couple.
While top, left, and depth belong to the HTML, the scene view parameters are device- and even viewer-specific. Scene parameters could be switched or even changed dynamically over time, for example, following the viewer's eyes moving across the room, or the changing distance to the device.
Stereo HTML vs 3D, evolution vs revolution
The earlier attempts to introduce 3D into the web browser successfully failed: VRML, O3D, and a zillion others. IMHO, the reason is that web needs do not match a 3D engine. HTML serves different material than a 3D engine; its target is structured text, images, and other embedded content. That is container logic with some important content blended in: text, CSS, JS. For some reason only JS and embedded objects (plugins) have been covered to some extent. Due to the separation of plugins from the browser, all of them are still counted as proprietary. The most popular at the moment, Flash, is still approaching standard level by sneaking into the browser, not to mention at the W3C level.
Leveraging existing HTML/web industry standards, tools, and a trained labor force will give an HTML-blended solution a huge advantage over an independent 3D namespace or language.
In terms of the effort required to modify a web browser, a stereo couple rendered by a slightly modified old 2D engine wins over real 3D support.
CSS media
The basic operations for rendering a stereo couple of HTML elements are shadows and element shifts. Those shifts can be done relatively easily in CSS, once CSS is aware of left/right (or middle :) eye media. The media type is the most appropriate CSS entity to use as the selector part that separates the projections (left and right eye images).
Some samples of media selectors:
- screen (default, middle): the current flat 2D presentation, on screens of many kinds, from widescreen TV to portrait iPhone.
- left and right: the stereo couple.
- 3D printer: it is somewhat funny that the same HTML could be used for printing not just on a regular but on a 3D printer. The logic allows it; the result is a completely new use case.
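As a sketch only: the media names below (left-eye, right-eye) are hypothetical, proposed by this article, and not part of any CSS specification. A per-eye shadow shift could then look like:

```css
/* Default, "middle eye" flat rendering */
.button { box-shadow: 2px 2px 4px #444; }

/* Hypothetical stereo media: shift the shadow slightly per eye */
@media left-eye {
  .button { box-shadow: 1px 2px 4px #444; }
}
@media right-eye {
  .button { box-shadow: 3px 2px 4px #444; }
}
```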
Rendering engine
For stereo presentation the browser will need to render a separate image for each eye, based on the scene parameters and the stereo-couple CSS.
In a first cut, scene parameters could be skipped for simplicity, using the device defaults. That leaves the same HTML rendering engine with an extra filter on top that keeps only the left or right part of the stereo couple.
3D HTML support
Obviously the engine could provide some extra services that approximate 3D use. For example, extending the HTML z-index model with depth, so each element is granted a volume. A light source driven by user settings would make shadows and sparkles on buttons and other elements. Borders, instead of a plain width, would accept a radius similar to the corner radius. And so on.
Stereo XHTML namespace
When a stereo web page needs to be backwards compatible with HTML, an XML namespace is the ideal way to extend HTML syntax. The stereo namespace would be responsible for the stereo features blended into HTML to extend the existing syntax.
3D XHTML namespace
Unlike the stereo namespace, the 3D namespace would match the needs of a 3D engine. It would interact not just with HTML but with animation, physics, and so on. The closest match is VRML.
Js3D and XHTML namespace
Current 3D engines like O3D are exposed to HTML only, or mostly, via JS APIs. But most of that functionality could be presented as XML and blended into XHTML; that way the integration would be tighter and more native. JS is good to some extent for presenting 3D model behavior and OK for creating scenes, but declarative XML will serve better for 3D scene definition. Even now the scene is usually imported before being used by JS, so why not formalize it along with the declaration of the JS API? In fact, to have a complete solution, the declarative approach should lead the API one.
Links
- Genuine post
- Blog
- Playing Quake 3D game in browser: Quake II Ported to HTML5
- WebGL - 3D JavaScript API
- Google Chrome with 3D
- O3D plug-in, 3D engine JS API
- VRML
Keywords
Support of stereo in HTML. 3D in HTML. Screw the W3C. W3C sucks. Make your own browser. Extend the browser.
2010/04/25
©2010 Sasha Firsov

2010-04-21
Changing the Web Browser use patterns for performance boost.
Brief
To get the best performance in the browser, it is better to use natively supported methods and data formats: XSLT for initial rendering and subsequent DOM modification, instead of server-side templates or JS DOM rendering. Data should be kept in memory in a format native to the browser, XML, instead of strings, JS hash maps, or JSON. Behavior should be defined in XML/SMIL, instead of JS or proprietary extensions via META, OBJECT, etc.
Caching, precompiling, and prerendering at the deployable-package level are not a standard yet, but are always a subject for browser extension on a customized platform. An unprecedented number of browser platforms has appeared in recent years, especially on mobile and embedded devices, with more to come. It's time to make your own browser!
Intro
In an HTML browser, the DOM primarily serves HTML rendering goals plus a little additional functionality. Initially HTML was a plain text rendering engine, and as new requirements appeared from actual web use, new functionality was added. Due to the dogmatic perception of HTML as the base, additions were scotch-taped on as extensions with their own language and functional presentation.
The misconception of tearing the web application into three independent parts (HTML DOM, CSS, and JS) created an enormous gap, both in other dimensions (modularization alone involves all three tiers, plus security, authentication/signing, packaging, etc.) and in the ability to create performance-near-optimal browser engines.
Behind the 3 tiers of a web page
Besides UI, the first thing in demand was the ability to refresh the page in order to keep content relevant. It was also used for various other reasons, such as session timeout notification. That is when HTML started to accept non-UI, behavior-related material. This exact case was covered by the META refresh tag. That first attempt came before 3-tiered HTML was idolized, and it had a declarative presentation. Special non-UI tags were quite convenient for plugging extra functionality into HTML, but all of them were encapsulated from each other and had almost nothing in common, from the logic to the lingual presentation. OBJECT, EMBED, APPLET, SCRIPT, and STYLE have so little in common, from all sides, that integration with the browser in a common and convenient way is not possible. Each one presents a self-contained tier that can be tuned only as an isolated entity, without the ability to optimize the web app as a whole. Plugins easily recognized this vacuum, and to cover the gap they took over the whole web app. Powerful plugins like Flash, Java applets, and Silverlight encapsulated the UI, the styling, the functionality, and everything else a web app needs. Many web sites and devices were redesigned completely to use technology more mature than HTML itself.
Substituting the browser with a plugin was a strong move, but no plugin had enough guts to substitute for the browser and HTML entirely, probably due to proprietary nature or complexity.
There was no common standard for treating the DOM behind HTML. That changed a bit when XHTML was introduced: now, along with the HTML namespace, other functionality can be set on the DOM as an application API (DOM) model. That created a standard base for extending the web browser, but it did not change the 3-tiered pattern frozen into the web developer's mind: HTML/JS/CSS.
HTML
This set of tags presents the base structure of the web application in current Web 2.0 apps. It is produced on the back end by a web server framework or by a build for cloud distribution, and occasionally as simple HTML.
The bodies of reused components (reusable modules | widgets | gadgets | web controls) are often prerendered (embedded) into the HTML tag set.
The page itself is also a kind of component in its own right.
Performance impact of the absence of a component concept:
- Extra network bandwidth: the text is longer.
- Inability to use discrete caching; a component is a subject for separate caching.
- Inability to use precompilation; a binary compiled template loads and runs faster. Compilation could be done on the client as well as on the server side.
- Increased parsing time/resources.
- Increased rendering time/resources.
Development/maintenance impact: blending the context of the page and those components together creates chaos in
- Naming conventions on all tiers: CSS selectors meant to work only with a dedicated component need to be aware of the whole page and other components; JS operating on the DOM needs to know how to separate its own belongings from the rest of the app.
- Security restriction collisions, especially for controls embedded one into another. Editable/selectable are the simplest cases.
- Mixing errors and namespaces: a malformed tag (like <div/>) will mess up not just its own control but the whole page.
A surrogate for this is an embedded or XHR-ed HTML template.
A surrogate for precompilation is an invisible but rendered page with all CSS applied, hidden behind
style="visibility:hidden;width:0;height:0;position:fixed"
The real improvement would be rendering the template directly into the HTML DOM tree.
JavaScript
The hacker's and web developer's holy grail. It feeds JS developers well, since nobody else can deal with it.
There is a category of functionality that could, and needs to, be taken off JS for cases of simple/medium complexity.
Defining:
- popular events: drag, hover, mouse enter/leave
- timers/intervals
- custom events
- data retrieval (as in FORM): initiated, in progress, completed, interrupted, paused, error
Actions for event handlers:
- animation: change a parameter/attribute of a referenced node by some formula, with the ability to use the existing DOM and component-relative paths
- data retrieval: get/post/etc., pause, resume (even after stop), stop
- component insert, render, remove, pause, stop, resume, restart (with updated parameters)
- timers, recurrent operations
- component (and top page, i.e. browser) actions: set URL/params, back, forward, add to favorites, preserve locally (a-la offline). Some parameters are not part of component state now, encoding for example. An artificial parameter set could serve the same goal as a URL with hashes. A component shall define which state is subject to preservation in the navigation stack.
Current ways of connecting event handlers to DOM nodes:
- Embedded into HTML tags as attributes.
CONS: mixes the structure (DOM) and functional (JS) tiers; hard to read and maintain; no JS validation.
PRO: HTML validation.
- Attached in JS via addEventListener or by setting a node attribute.
CONS: no validation of structure matching (DOM); DOM node lifecycle synchronization is manual and difficult to maintain, leading to memory leaks and dead-code calls.
PRO: JS compilation validation, if that really matters with a non-strict language :)
- Declarative tags:
  - SCRIPT with the FOR attribute (IE only)
  - META refresh
  - SVG/SMIL animation
PRO: no need for memory management; uses the native implementation; no need for JS validation.
CONS: undeveloped control and feature set.
- CSS selectors + attached JS (HTC attached via CSS). Unfortunately IE only:
width:expression(document.body.clientWidth > 950 ? "950px" : "100%");
behavior:url(behave_typing.htc);
PRO: no worries about lifecycle.
CONS: no JS validation.
Simulated in JS frameworks like jQuery.live().
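The idea behind such a simulation (jQuery.live attaches one listener at the document root and filters by selector) can be sketched as follows. The `fakeRoot` stand-in is invented here so the sketch does not require a DOM; in a browser, `root` would be `document` and `matches` a selector test:

```javascript
// One listener on a root dispatches to handlers whose "selector" matches
// the event target -- handlers survive DOM node replacement for free.
function delegate(root, type, matches, handler) {
  root.addEventListener(type, function (ev) {
    if (matches(ev.target)) handler(ev);
  });
}

// Tiny stand-in for a DOM node, enough to demonstrate the wiring.
function fakeRoot() {
  const listeners = {};
  return {
    addEventListener(type, fn) {
      if (!listeners[type]) listeners[type] = [];
      listeners[type].push(fn);
    },
    dispatch(type, target) {
      (listeners[type] || []).forEach((fn) => fn({ type, target }));
    },
  };
}
```

Because the root listener outlives any individual node, there is no per-node handler lifecycle to manage, which is exactly the PRO listed above.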
Requirements
- Event handlers shall be attached to the matching nodes during the rendering process.
- JS code is replaced with code in a strict language that operates on natively accessible entities (DOM nodes).
- The strict language matches one the browser already supports: XML with a DTD.
- The language is compilable into native code.
- Selectors and template rules match the UI rendering engine: use XSLT.
Solution: use XSLT for rendering, blending the UI with its event handlers. That way the UI lifecycle matches the event handler's. Replace JS with XML-driven rules. Validation is done at the XML level. For legacy browsers, a JS implementation needs to be created.
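As a hedged illustration of the rendering half of this solution (the element names and structure are invented for this sketch, not taken from any real component), an XSLT template that renders an XML component body into HTML could look like:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: renders a <list> of <item> elements into an HTML
     list. The same stylesheet could be precompiled and reused per component. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/list">
    <ul class="item-list">
      <xsl:apply-templates select="item"/>
    </ul>
  </xsl:template>
  <xsl:template match="item">
    <li id="item-{@id}">
      <xsl:value-of select="@title"/>
    </li>
  </xsl:template>
</xsl:stylesheet>
```

In a browser this would be applied via XSLTProcessor (importStylesheet, then transformToFragment) to insert the result into the live document.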
CSS
Another kind of hackers' (AKA web designers') paradise. The instant holy war over table-less vs. fluid layouts resulted in fixed pixel layouts on most web pages. The few who managed to do it right never agreed with their opponents. I have always been curious: were those W3C standards created to serve social society rather than the needs of web pages?
I could not imagine a less modular and less optimization-friendly language than CSS. If somebody needs to create a mess in web page code, there is your tool! The tricks and usage guidelines around it somehow help to manage that monster, but in reality the majority of the web works on trust and "approximately acceptable" quality. It has never been about 100% compatibility, even on CSS's own tests; refer to the Acid tests and their support across browsers.
Now add the complexity of a Web 2.0 app with hundreds of developers sneaking onto your page via popular/open-sourced frameworks. HTML5/CSS3 will not be the cure. The usage and implementation patterns of this standard need to be redefined. CSS on its own carries nothing useful except the HTML rendering parameters, targeting of rendering media, etc. In other words, the semantics makes sense to a certain level; the syntax is trash.
Requirements for a presentation-layer definition:
- Modularity.
- Scope.
- Inheritance.
- Rule set language( CSS vs XSLT+XPath)
Unfortunately, none of the listed requirements is available in CSS. On the other hand, having them in XSLT costs nothing. And switching a theme/skin would then be possible not just for a hard-coded DOM UI structure, as with CSS, but with the DOM UI modified as well. Device-specific UI, in that vision, is just a set of XSLT rules applied to the same primary app.
All together
Optimizing the presentation layer, keeping only the rules actually used and bundling only the used resources, matched to only the defined languages and/or themes, is a straightforward procedure in XML/XSLT, unlike in ANY HTML web framework existing today.
Caching and precompilation of XML and XSLT make it possible to run native code, versus parsing and interpreting every time the HTML page is loaded and run.
Browser enhancement
I have been thinking about utilizing WebKit (the Google Chrome engine) in embedded environments. WebKit happens to be the most active open-source browser engine and the best candidate for embedding; it is already the base for several embedded browsers.
Embedding of WebKit has already been done in a few places, including ChromeFrame, the WebKit engine inside Internet Explorer.
Even if XML and XSL support in WebKit is poor, it is manageable. And because XML/XSLT technologies are well developed, the improvement does not require R&D and would be limited to integration; commercial and free open-source products are available in broad selection.
That needs to be covered during the WebKit port. Android OS also covers some subset of the needed functionality (?).
It appears that the half-way solutions enumerated above have not resolved the primary bottlenecks of HTML design patterns.
I will suggest more efficient and radical improvements to the embedded browser and to the way HTML applications are used.
- DOM operations by natively compiled code, or at least without an interpreter (JS). The HTML DOM currently is rendered either by the HTML parser (slow in Galio) or by JS (slow in comparison with native code). The proposal is to replace those methods with a strict rule engine. XSL is perfectly suitable for initial rendering: it is compilable into native code and supported by WebKit and other rendering engines. It needs an extension to be applicable to run-time DOM changes. At the moment you can transform XML and use domDocument.clone() to pass the result back into HTML; this could be optimized by rendering directly into the HTML document.
- CSS engine. Unsurprisingly, there is no native-code compilation for CSS rules. Current engines treat CSS as an independent, out-of-context rule set due to the unpredictability of the DOM structure. Once the DOM structure, or the DOM creation rules, are fixed during packaging, there is nothing to prevent generating native code that applies the CSS to this fixed DOM structure. What about a dynamic DOM? If the DOM change rules are known, more complex, but still native, code could be created. This is doable even in current engines by converting CSS into XSL.
- Event handling. The current problem is that no platform optimization can be done: JS does not allow replacing and optimizing code sequences and methods. Some engines support a built-in rule set for event handlers without JS, via a special XML sequence, such as animation on mouse-over or a timer. See the clock sample in Chrome: no JS, it uses SMIL instead. Replacing JS with natively compiled code and removing the need for dynamic event handlers could be done in the same (SMIL-like) way.
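A minimal sketch of the SMIL idea (standard SVG animation syntax, not the Chrome clock sample itself): a rotating clock hand declared entirely without JS, so the engine can run it natively.

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- A "second hand" that rotates once per minute, declaratively: -->
  <line x1="50" y1="50" x2="50" y2="10" stroke="black" stroke-width="2">
    <animateTransform attributeName="transform" type="rotate"
                      from="0 50 50" to="360 50 50"
                      dur="60s" repeatCount="indefinite"/>
  </line>
</svg>
```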
- Reusable modules/widgets at the moment are not widely utilized. The only good solution is HTC (HTML components by Microsoft). All others are available either on the server side or at the JS library level. No one provides proper packaging, with signed code and embedded resources (HTML, CSS, JS, images, etc.), for reusable HTML components; the closest matches are a Flash module or a Java applet. Having a module/widget defined in natively compiled code is the ideal solution, but it does not exist yet. The logic is trivial, and we could alter WebKit to accept reusable web component resources from bundles. The natively compiled code could be provided by XSL (for HTML, JS, and CSS), and it would definitely remain backwards compatible for use in the old-fashioned way.
The proposal may look too radical (and in some ways it is), but it is 99% based on existing standards and solutions. The remaining percent of original development is definitely worth the impact on application performance, and on development-process efficiency as well (an aspect I will cover some time later). The effort will pay back more if a compatibility layer for other browsers is published: that way many bug fixes and necessary tools like profilers become available, and eventually the new technology turns from proprietary into a standard. It is nice to have a standards-compliant platform and apps even before the standard is accepted, not to mention branding the standard with your own name.
Links
2010/04/18
©2010 Sasha Firsov

2010-04-17
JavaScript setTimeout performance measurement
Intro
In HTML browser applications, timer events are used all over, and they can impact the usability of an HTML application in various ways: from slow UI interaction to memory leaks and, eventually, the death of the application.
Application profiling methods are used to measure code efficiency. Profilers are available in most popular browsers to some extent, but the high-level analysis stays in the developer's hands and is quite subjective, especially when it comes to comparing alternative code implementations.
Scope
- expose available measurements of performance and HTML/JS engine load
- enumerate some methods of efficiency improvement
- compare the methods in numbers
Measurements
Timer-related activities in an HTML application are mostly exposed as timer event handling. The most popular low-level APIs are setTimeout, setInterval, postMessage, and special uses of XHR. The platform may also provide its own specific methods.
The difficult part is how to measure the application performance impact of changes to those timer event handlers when attempting to tune them.
I see a few criteria that could be measured:
- Event handler timing. It would be the most efficient and adequate method if it did not face a few crucial "buts":
- the discrete step of the timer as seen by JS via new Date().getTime() stays in the range of 15-30 ms, which is likely to be longer than the duration of most timer event handlers. It will still work for long timing events (longer than double the detectable step);
- due to the lack of reasonable functionality in the native timer API, event handlers are usually wrapped by a framework API, which itself has a big impact on short event handlers. For those, in addition to the individual handler timing, the framework wrapper needs to be tracked as well.
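Framework-level tracking of that kind can be sketched as follows (the `timed` helper and its stats shape are invented for this illustration): wrap a handler once and accumulate call count and total time, so the average stays meaningful despite the 15-30 ms timer granularity.

```javascript
// Wraps a handler; stats.count and stats.total accumulate across calls,
// so the average is meaningful even when a single call is below the
// detectable timer step.
function timed(handler, stats) {
  return function (...args) {
    const start = Date.now();
    try {
      return handler.apply(this, args);
    } finally {
      stats.count++;
      stats.total += Date.now() - start;
    }
  };
}
```

Usage would look like: const stats = { count: 0, total: 0 }; setInterval(timed(onTick, stats), 100); the running average is stats.total / stats.count. Note that try/finally itself adds overhead, as the Tips below point out.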
- System CPU load. Obviously this is indirect and imprecise. It will work only in conditions where CPU utilization is high enough that the impact of, say, removing a timer event handler is noticeable. Artificially bursting the timer event handler calls to a well-detectable level is needed. Also, no CPU-utilization JS API exists; it can be simulated by counting dummy events per second, via postMessage or setTimeout with a zero interval. Just make sure the platform handles these asynchronously.
Another option is to keep a constant timeout but incrementally increase the load, say by looping on heavy math computations. The operation count per second until the timer interval is reached gives the CPU availability.
CPU-load timing is not intrusive and does not change the characteristics of the event handler under research. It also allows checking the overall system load, i.e. no special treatment for a specific timer event handler, and it can be used to track system performance on other recurrent event handlers like drag, mouse movement, progress animations, etc.
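The constant-interval load probe can be sketched like this (the function name, slice length, and workload unit are arbitrary choices for the illustration): count how many fixed work units fit into a wall-clock slice, as a proxy for available CPU.

```javascript
// Count heavy-math work units that fit into a fixed wall-clock slice.
// More operations per slice means more CPU is available to the page.
function cpuAvailabilityProbe(sliceMs) {
  const start = Date.now();
  let ops = 0;
  while (Date.now() - start < sliceMs) {
    // a small, fixed unit of "heavy" work
    let x = 0;
    for (let i = 0; i < 1000; i++) x += Math.sqrt(i);
    ops++;
  }
  return ops; // work units completed within the slice
}
```

Comparing probe results with and without a suspect timer handler running gives the indirect impact measurement described above.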
Tips
My to-do list for timer handler optimization.
- Check the timing of the event handler. If its execution time is sizeable (>30 ms), then we have the easiest case, and all that is needed is timing stats. It is better to have averaging capabilities and a good sampling set; the average calculation could be outsourced instead of embedded in JS. Obviously, account for proper/no-exception execution: try/finally impacts the performance itself.
- Collect timing stats: console.time / console.timeEnd will be sufficient. If those are not available (as in IE), get the time at start and print the delta at the end in a format suitable for further average processing, OR keep a global counter and total execution time, printing the average each time at the end.
- If the event handler execution time is short, or you do not want to alter the event handlers themselves, then CPU load will be the criterion:
- Create a "CPU filler" routine. It shall act as a low-priority thread, letting the rest of the app run.
- Use setTimeout with a counter increment, comparing the current time with the last detected one. Once the time value changes, update the stats: counts per second overall and for the last second.
- Other statistics can be handy, like the max delay between filler calls, which matches the longest routine. Since a few routines need to be tracked separately, a routine can set its key and reset the last timer value; that key is then used for stats collection.
- Special treatment is needed for a 0 interval: it shall be ignored in the MIN stat computation, due to the minimum detectable interval (~30 ms).
- Trigger stats collection on/off; reset stats. The timing functionality can be expensive, especially CPU-load timing, and the application load needs to be as fast as possible for development comfort. Also, load timing is a separate problem and shall not be mixed with timer event profiling. On the other hand, there is a reason to trigger profiling programmatically, to see the impact under exact conditions and avoid mixing in unrelated statistics, like starting on begin-drag and stopping on release.
- Have global flag(s) or a hash map of "profiling enabled" flags; check the flag before collecting/printing timing.
- Have the triggering code at start/stop or after application load. In a Web 2.0 application, the body onLoad is not a proper place to start time-based functionality (and certainly not timer event profiling): it is still the heavy initialization phase.
Finally, on the app level:
- collect stats for the current app;
- validate by simulating ideal performance tuning: the timer event handler has only a return in its body;
- do the real stuff and see how it goes!
Links
©2010 Sasha Firsov

2010-04-04
JS Profiler development - enumeration of global scope functions and variables
Embedded browsers do not have much developer support, and the lack of a JavaScript profiler in Ant Galio [a set-top box (web) browser] pushed me to write one.
Fortunately for me, JS has all the power needed for hacking into executing code at runtime. Once my JS is on the page, I own everything: the page, the user's browser, and the device.
And shame on W3C for supporting such a security gap in the heart of every web page. But to be honest, I am glad to see that lack of intellect: it has given us (web developers) a good piece of bread with butter and, eventually, a drop of caviar :) Way to go, Web 2.0 industry!
It was a trick to wrap most of the code in modern OOP JS, but the output is precious for browser performance analytics: for each function, constructor, prototyped and static method, a counter of all calls, a list of callers with counts, plus an exception list with counts. The advanced UI is still in progress, but the first output is already useful. It does not yet compete with embedded profilers, but given time it has a good chance of becoming the better analytic tool. Much unique, otherwise unavailable JS material shall be there: from exception chains to caller graphs blended with timing and call counts.
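The wrapping trick can be sketched like this (the `instrument` helper and `callCounts` map are illustrative, not the actual profiler, which also tracked callers and exceptions):

```javascript
// Replace selected function-valued properties with counting wrappers,
// preserving behavior and `this` so the instrumented app runs unchanged.
const callCounts = {};

function instrument(obj, names) {
  for (const name of names) {
    const original = obj[name];
    if (typeof original !== "function") continue;
    callCounts[name] = 0;
    obj[name] = function (...args) {
      callCounts[name]++;                 // per-function call counter
      return original.apply(this, args);  // delegate to the original
    };
  }
}
```

The real profiler applies the same idea across the global scope, which is exactly where the IE8 enumeration problem described below bites.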
As Google Chrome is my primary browser, the initial code was tuned there and passed the whole scope without any hiccups.
A brief try in IE8 exposed the problem of enumerating global-scope variables. It appeared that the window object does not match the variable scope, and that is the right approach from the JS engine's side: closure variables are not supposed to be attached to any object, neither "this" nor "window". I was surprised to see the community's and web browser developers' expectations there. In my mind IE did the right thing, even if it does not help me with hacking JS :)
http://blogs.msdn.com/ericlippert/archive/2005/05/04/414684.aspx
Even if it is not really needed, I am now puzzled over how to get the list of variables local to a function. And, to be happier still, the ones along the closure chain as well.
In the Ant Galio browser the global variables are listed on window, no problems there. But its JS engine appeared to be tricky: catching and rethrowing the same exception turned out to be illegal JS syntax there, and the module did not compile. It looks like I need to give up the stats on exceptions. Another trouble came from the "in" operator in conjunction with wrapped functions. Digging there...
Anyone doing JS profiling and call graphs? I will be glad to get in touch.
Sasha
2010-02-02
JS performance test - scope and named functions - Analytics
JavaScript global scope vs. encapsulated, variable as holder of anonymous function vs. named functions.
This stress test gave quite a surprise: popular JS coding practices and the dominant opinion appeared to be completely wrong. Probably because of the rapid evolution of JS engines, or maybe the habit of relying on the reasonable guesses of respected people.
Idea
The test came from the inconvenience of using anonymous code. There are many code-maintenance reasons to switch to named functions, but on the net many respected JS gurus promote the use of anonymous functions, and the most popular JS libraries are filled with such code. Is there any compromise with the widely accepted pattern? If the speed loss is not big, the choice could be balanced. It appeared I was wrong in my expectations: all tests showed there is no need for balance. The old-fashioned coding style should be considered bad practice and removed from code as harmful.
Conclusion
- Use a named function instead of a variable initialized with an anonymous function.
- Global functions give a big boost in performance. Reduce the use of scoped functions for initialization code: page initialization code wrapped into its own scope is way slower than global function declarations.
- Use SCRIPT DOM-node code injection instead of the slower eval().
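The SCRIPT-node injection from the last point can be sketched as follows. This is a minimal, assumed implementation for browsers (a head element must exist); it is not taken from the test sources.

```javascript
// Instead of eval(code), append a SCRIPT node so the browser's own
// parser compiles and runs the code.
function injectScript(code) {
    var s = document.createElement("script");
    s.type = "text/javascript";
    s.text = code; // inline source; compiled when the node is appended
    document.getElementsByTagName("head")[0].appendChild(s);
    return s;
}

// injectScript("function injected() { return 42; }");
```

One caveat: code injected this way always runs in the global scope, which is consistent with the second conclusion above.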
A Web 2.0 application consists of more than just JS initialization. In fact, that is just the first thing a developer faces for page load and run speed. In most cases frameworks are "good enough" on their own; the problems usually appear in heavy apps utilizing several of them simultaneously.
Test.
The test initially had the simple purpose of evaluating the replacement of anonymous functions with named ones. Code maintenance is badly damaged by that coding style, and the test should show whether performance is a real issue. For the test, a JSP renders identical JS code whose only difference is the function declaration signature.
Var as reference to anonymous function:
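For illustration, the two compared signatures look roughly like this (function names and bodies are placeholders, not the generated test code):

```javascript
// Variant A: var holding an anonymous function expression
var f0 = function (a, b) { return a + b; };

// Variant B: named function declaration
function f1(a, b) { return a + b; }
```

The test renders thousands of such declarations per page, so even a small per-declaration cost difference shows up in the totals.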
Another thought was to include the size of the rendered file in the stats. XHR gives its string value and length, and that could be used for eval() timing. The known fact is that "eval() is slower than a SCRIPT tag". The question is: how much slower?
I am not sure what gave me the idea to run the same test inside a function scope in addition to the global one, but surprising results came out of it.
Speculation on statistics.
1. Anonymous functions vs. named functions. Test cases: 0,1,4,5 versus 2,3,6,7.
IE: named functions are 10-200% faster
Opera: run time so damn fast that there is no way to detect the difference. Same.
Chrome: function 10% faster in local scope. Same
FF: on edge of detection. Same
Safari: Same
2. Global scope vs. wrapped in a function. Test cases: 0,1,2,3 vs. 4,5,6,7. Global is faster:
IE,Opera: 20-300%
Chrome: in eval() local is enormously slow, on run is on the edge of detection. Same w/ favor to global scope.
FF: in eval() local 50% slower, in run untraceable. Same w/ favor to global scope.
Safari:10%
3. eval() vs. SCRIPT. eval loses:
IE: 200-400%
Opera: 1000%
Chrome 10-300%, 1000% for local scope
FF: 700%
Safari 10%
4. Half vs. full set. Meaning TBD. IMHO, that is just a test of JS engine capabilities.
IE: 20-200%
Opera, Chrome: 5%
FF +=5% (?)
Safari 50% (expected)
5. Browsers overall
Opera is 6 times slower for JS load and parsing time and, as a result, overall.
JS engine – run time in ms:
Safari: 924 (worst)
FF: 23
IE: 17
Opera: 16
Chrome: 10 (best)
JS engine – eval(), ms
Safari: 950
Opera: 560
Chrome: 263
FF: 170
IE: 38 (best)
PS It would be nice to integrate real-time charting, but there is not enough bandwidth on my side. Any sponsors or volunteers?
Links
- Genuine post
- Blog
- Statistics data
- Test cases: 500 | 1000 | 3000 | 10000
- Sources: index.jsp - UI file | AnonymousJsVsNamed.js.jsp - JS rendering JSP
Sasha
2010-02-01
JS performance test - scope and named functions - DATA
Sources: index.jsp - UI file | AnonymousJsVsNamed.js.jsp - JS rendering JSP
2010-01-27
JS local functions: anonymous vs. named
… // logic mixed with declarations
var _ajaxify = function(cont) {
return …;
};
…
_ajaxify(XXX);
}// scope ends
It creates an anonymous function. As a result you:
Solution: use named functions instead:
_ajaxify(XXX);
//logic implementation
…
return;
// local functions declarations
function _ajaxify( cont ) {
return …;
}
function …
}// scope ends
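This layout works because function declarations are hoisted: the call at the top of the scope can precede the declaration at the bottom. A minimal runnable illustration (the body of _ajaxify here is a placeholder):

```javascript
// Function declarations are hoisted, so logic can sit at the top of the
// scope and helper declarations at the bottom.
var result = _ajaxify("content"); // called before the declaration appears

function _ajaxify(cont) {
    return "ajaxified:" + cont;
}
// result === "ajaxified:content"
```

A var initialized with an anonymous function would throw here, since only the var name (not its value) is hoisted.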
Smaller JS footprint: function locFunc(){…} is 5 characters shorter. That could be significant for a large number of small functions.