webpack custom plugin

Recently we worked with a platform that needs webpack to build some ng2/4 assets, plus some custom steps to pull data from a headless CMS (via gulp) and eventually render components. One problem is that we cannot do live reload/recompile: every time we make a change we have to run the npm command again to recompile the resources.

To solve the issue, I decided to write a custom webpack plugin to make browser-sync and webpack work together.

The basic flow is:

  1. run webpack in watch mode so that every time a resource (ts/css/html) changes, webpack automatically recompiles,
  2. serve the resources via browser-sync; here browser-sync just acts as a mini express server and provides browser-reload capability,
  3. use a webpack plugin to start the browser-sync server and register the reload event when webpack compilation is done.


The webpack API is pretty straightforward: it passes a compiler object to the plugin’s apply function. This object represents the fully configured webpack environment. It is built once upon starting webpack, and is configured with all operational settings including options, loaders, and plugins. When applying a plugin to the webpack environment, the plugin receives a reference to this compiler. Use the compiler to access the main webpack environment.

const browserSync = require('browser-sync');

function WebpackAndromedaPlugin(options) {
  console.log(`WebpackAndromedaPlugin options: ${JSON.stringify(options, null, 2)}`);

  let browserSyncConfig = require('./lib/dev-server').getBrowserSyncConfig(options);
  browserSync.init(browserSyncConfig);
}

WebpackAndromedaPlugin.prototype.apply = (compiler) => {

  compiler.plugin("compile", function (params) {
    console.log('--->>> andromeda started compiling');
  });

  compiler.plugin('after-emit', (_compilation, callback) => {
    console.log('--->>> files prepared');
    browserSync.reload();
    callback(); // async plugin hooks must invoke the callback
  });
};

As we can see above, we register our callbacks with compiler.plugin(), where webpack exposes different stages for us to hook into.

Another important object is compilation, which represents a single build of versioned assets. While running the webpack development middleware, a new compilation is created each time a file change is detected, generating a new set of compiled assets. A compilation surfaces information about the present state of module resources, compiled assets, changed files, and watched dependencies. It also provides many callback points at which a plugin may choose to perform custom actions.

For example, all the generated files will be in the compilation.assets object.
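To illustrate, here is a minimal sketch of a plugin that walks compilation.assets during the emit phase (hook names follow the pre-webpack-4 compiler.plugin() API used in this post; the fake compiler at the bottom only simulates the hook mechanism so the sketch can run standalone):

```javascript
// A plugin that lists every generated file found in compilation.assets.
function ListAssetsPlugin() {}

ListAssetsPlugin.prototype.apply = function (compiler) {
  compiler.plugin('emit', function (compilation, callback) {
    Object.keys(compilation.assets).forEach(function (name) {
      // each asset exposes source() and size()
      console.log(name + ' -> ' + compilation.assets[name].size() + ' bytes');
    });
    callback(); // async hooks must signal completion
  });
};

// --- standalone simulation of the hook mechanism, for illustration only ---
var hooks = {};
var fakeCompiler = { plugin: function (name, fn) { hooks[name] = fn; } };
new ListAssetsPlugin().apply(fakeCompiler);

var fakeCompilation = {
  assets: { 'bundle.js': { size: function () { return 1024; } } }
};
hooks['emit'](fakeCompilation, function () {});
// prints: bundle.js -> 1024 bytes
```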


The webpack-dev-middleware is a nice little express middleware that serves the files emitted by webpack over a connect server. One good feature is serving files from memory: it uses an in-memory file system that exposes simple methods to read/write/check existence in its MemoryFileSystem.js. The webpack-dev-middleware also exposes some hooks like close/waitUntilValid, etc.; unfortunately the callback that waitUntilValid registers will only be called once, according to the compileDone function there. Still, it is an efficient tool to serve webpack resources and very easy to integrate with the webpack nodejs APIs:

~function () {
  const options = require('./config');
  const { getBrowserSyncConfig } = require('./lib/dev-server');
  let webpackMiddleware = require("webpack-dev-middleware");
  let browserSync = require('browser-sync');

  let webpack = require('webpack');
  let browserSyncConfig = getBrowserSyncConfig(options);
  const compiler = webpack(require('./webpack.dev'));
  compiler.plugin('done', () => browserSync.reload());
  let inMemoryServer = webpackMiddleware(compiler, {noInfo: true, publicPath: '/assets'});

  // hand the middleware to browser-sync so it serves the in-memory assets
  browserSyncConfig.middleware = [inMemoryServer];
  browserSync.init(browserSyncConfig);
}();
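The in-memory idea itself is simple enough to sketch: a map from path to contents with read/write/existence checks. This toy version only illustrates the concept; it is not webpack's actual MemoryFileSystem API:

```javascript
// A toy in-memory file system, mimicking the read/write/exists shape
// described above (illustration only, not webpack's MemoryFileSystem).
function MemoryFS() { this.files = {}; }

MemoryFS.prototype.writeFileSync = function (path, data) {
  this.files[path] = data;
};

MemoryFS.prototype.readFileSync = function (path) {
  if (!(path in this.files)) throw new Error('ENOENT: ' + path);
  return this.files[path];
};

MemoryFS.prototype.existsSync = function (path) {
  return path in this.files;
};

var fs = new MemoryFS();
fs.writeFileSync('/assets/bundle.js', 'console.log("hi")');
console.log(fs.existsSync('/assets/bundle.js')); // true
```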



The webpack-dev-server is basically a wrapper over the above webpack-dev-middleware. It is good for simple resource serving since it does not expose much. I was trying to find a hook into it to intercept the resources it generates/serves, but did not find a good solution. If you need more customization, it is better to go with webpack-dev-middleware.

a very detailed webpack intro article


debug hover item in chrome devtools

Chrome devtools is our friend, always.

Today, while developing an angular 4.x app with the primeng library, I had to check the class set on the tooltip component. The tooltip is hover-event based, so if we hover on it to make it show up and then shift our focus to the devtools Elements tab, the tooltip disappears.

Chrome devtools has a feature to force the hover state (:hover) on a specific element for CSS purposes. It is quite handy, but obviously it does not apply in this case since this tooltip is js based.

Searched around and finally found a solution: press F8 or CMD + \, which pauses script execution.

Steps are quite straightforward:

Mouse over the tooltip, and press F8 while it is displayed.

Now you can use the inspector to look at the CSS.

A deeper look at event loop (micro/macro tasks)

One common question:

(function test() {
    setTimeout(function() {console.log(4)}, 0);
    new Promise(function executor(resolve) {
        console.log(1);
        for( var i=0 ; i<10000 ; i++ ) {
            i == 9999 && resolve();
        }
        console.log(2);
    }).then(function() {
        console.log(5);
    });
    console.log(3);
})();

So why is the result 1,2,3,5,4 rather than 1,2,3,4,5?

If we look at the details, it seems the async behavior of setTimeout is different from that of Promise.then; at least they are not in the same async queue.

The answer is here in the whatwg SPEC.

  • An event loop has one or more task queues (a task queue is a macrotask queue).
  • Each event loop has a microtask queue.
  • task queue = macrotask queue != microtask queue
  • A task may be pushed into the macrotask queue or the microtask queue.
  • When a task is pushed into a queue (micro/macro), we mean the preparation work is finished, so the task can be executed now.

And the event loop process model is as follows:

When the call stack is empty, do the following steps:

  1. select the oldest task (task A) in the task queues
  2. if task A is null (the task queues are empty), jump to step 6
  3. set “currently running task” to “task A”
  4. run “task A” (i.e. run the callback function)
  5. set “currently running task” to null, remove “task A”
  6. perform the microtask queue:
    • (a) select the oldest task (task X) in the microtask queue
    • (b) if task X is null (the microtask queue is empty), jump to step (g)
    • (c) set “currently running task” to “task X”
    • (d) run “task X”
    • (e) set “currently running task” to null, remove “task X”
    • (f) select the next oldest task in the microtask queue, jump to step (b)
    • (g) finish the microtask queue
  7. jump to step 1

A simplified process model is as follows:

  1. run the oldest task in the macrotask queue, then remove it.
  2. run all available tasks in the microtask queue, then remove them.
  3. next round: run the next task in the macrotask queue (jump to step 2)
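The simplified model above can be observed directly in a small sketch: the script itself is the current macrotask, the promise callback lands on the microtask queue and runs at the end of the current round, and the setTimeout callback is a new macrotask that waits for the next round:

```javascript
const order = [];

setTimeout(() => order.push('macrotask'), 0);          // next round of the loop
Promise.resolve().then(() => order.push('microtask')); // end of current round
order.push('sync');                                    // current macrotask (the script)

// report after everything above has run
setTimeout(() => console.log(order.join(' -> ')), 10);
// sync -> microtask -> macrotask
```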

Something to remember:

  1. while a task (in the macrotask queue) is running, new events may be registered, so new tasks may be created. Below are two newly created tasks:
    • promiseA.then()’s callback is a task
      • if promiseA is resolved/rejected: the task will be pushed into the microtask queue in the current round of the event loop.
      • if promiseA is pending: the task will be pushed into the microtask queue in a future round of the event loop (maybe the next round)
    • setTimeout(callback, n)’s callback is a task, and will be pushed into the macrotask queue, even when n is 0
  2. a task in the microtask queue runs in the current round, while a task in the macrotask queue has to wait for the next round of the event loop.
  3. we all know the callbacks of “click”, “scroll”, “ajax”, “setTimeout”… are tasks; however, we should also remember that the js code as a whole in a script tag is a task (a macrotask) too.


In the nodejs world: setImmediate() is a macro/task, and process.nextTick() is a micro/job.


One good discussion in Chinese, and a blog post.

Fighting with browser popup block


Recently in our project, we needed to refactor some old struts actions into REST-based pages. This way we avoid multiple page navigations for our users, so that everything can be done in a single page.

One example is file download. Previously, in the struts-based app, a page might have 12 files. The user had to click the download link on the main page; if the file was available, the user was taken to the download page where the real download link is, then download. If it was not available, the user was taken to a request page for confirmation and then, once confirmed, to the download page to wait. So to download all the files, the user had to constantly navigate between different pages with a lot of clicks, which is kind of crazy. In the coming single-page application, everything (request/confirm/download) happens on the same page, which is much better.


However, we hit one issue. When the user clicks the download link, as in the above flow, we first need to make an ajax call back to the server to check: if the file is not available, a modal shows up to confirm the request; otherwise we get the download id and open a new tab to download the stream. The problem is that at this point the browser (chrome/FF; safari does not) blocks the download tab from opening. We tried both form submit and window open. What is really bad is that in chrome the block notification is hardly noticeable, just a tiny icon that users can barely see.

check status

        this.requestDetail = function (requestObj, modalService) {
            function success(res) {
                var status = res.data.status;
                switch (status) {
                    case 'AVAIL_NOT_REQ':
                        that.createNewRequest(requestObj, modalService);
                        break;
                    case 'NO_DATA':
                        $.bootstrapGrowl('No data available!', {type: 'info'});
                        break;
                    case 'EXISTING_RPT':
                    case 'PENDING':
                        //add user to notify list then redirect
                        var DETAIL_RUN_INTERVAL = 3;
                        var minute = DETAIL_RUN_INTERVAL - res.data.minute % DETAIL_RUN_INTERVAL;
                        $.bootstrapGrowl('Your detail data file will be ready in ' + minute + ' minutes.', {type: 'info'});
                        break;
                    case 'ERROR':
                        $.bootstrapGrowl('Error Getting Detail data! Contact Admin or Wait for 24 hour to re-request.', {type: 'danger'});
                        break;
                    default:
                        $.bootstrapGrowl('Error Getting Detail data, Contact ADMIN', {type: 'danger'});
                }
            }

            function error(err) {
                $.bootstrapGrowl('Network error or Server error!', {type: 'danger'});
            }
        };

with form

        this.downloadFile = function (requestId) {
            //create a form which calls the download REST service on the fly
            var formElement = angular.element("<form></form>");
            formElement.attr("action", "/scrc/rest/download/detail/requestId/" + requestId);
            formElement.attr("method", "get");
            formElement.attr("target", "_blank");
            // we need to attach the form to the body before it can be submitted (ie8 needs an extra iframe step here)
            angular.element(document.body).append(formElement);
            //call the service
            formElement[0].submit();
        };

With window

        this.downloadFile = function (requestId) {
            $window.open('/scrc/rest/download/detail/requestId/' + requestId);
        };


Turns out the issue is: a browser will only open a tab/popup without the popup-blocker warning if the command to open the tab/popup comes from a trusted event. That means the user has to actively click somewhere to open a popup.

In this case, the user performs a click, so we have the trusted event. However, we lose that trusted context by performing the ajax request: our success handler no longer has that event.

Possible Solutions

  1. open the popup on click and manipulate it later when the callback fires

      var popup = $window.open('', '_blank');
      popup.document.write('loading ...');
      // later, in the ajax success callback:
      popup.location.href = '/scrc/rest/download/detail/requestId/' + res.data.requestId;
      // or, if the request failed:
      // popup.close();

    This works but is not elegant: on failure it opens a tab and closes it instantly, which still creates a flash in the browser that the user may notice.

  2. require the user to click a button again to trigger the popup. This works because we can update the link once it exists; when the user clicks again, we initiate the download, so the popup is triggered by the user directly. But it is still not quite user friendly.

  3. Notify user to unblock our site.
    This is eventually what we did. We detect on the client side whether the popup is blocked; if so, we ask the user to unblock our site in the settings. The reason we chose this is that the unblock/trust action is really a one-time thing: the browser remembers the decision and will not bother the user again.

            this.downloadFile = function (requestId) {
                var downloadWindow = $window.open('/scrc/rest/download/detail/requestId/' + requestId);
                if (!downloadWindow || downloadWindow.closed || typeof downloadWindow.closed == 'undefined') {
                    $.bootstrapGrowl('Download Blocked!<br/> Please allow popups from our site in your browser settings!', {type: 'danger', delay: 8000});
                }
            };

Long Text Wrapping in ie9/10 inside table td

Both IE 9 and 10 seem to have a problem wrapping long text inside a table td element, even when we explicitly set the word-wrap CSS property.

Turns out the solution is to also set the width and table-layout CSS properties on the wrapping table.

/* Both of them are necessary. */
.long-text-table {
  width: 100%;
  table-layout: fixed;
}
.long-text-wrapper {
  word-wrap: break-word;
}


<table class="long-text-table">
  <tr>
    <td>
      <div class="long-text-wrapper">REALLY LONG TEXT NEED TO WRAP</div>
    </td>
  </tr>
</table>

execution time of JavaScript code in the page load process

In that short period of time between you wanting to load a page and your page loading, many relevant and interesting things happen that you need to know more about. One example is that any code specified on the page will run. When exactly the code runs depends on a combination of the following things, which all come alive at some point while your page is getting loaded:

  • The DOMContentLoaded event
  • The load Event
  • The async attribute for script elements
  • The defer attribute for script elements
  • The location your scripts live in the DOM

Don’t worry if you don’t know what these things are. You’ll learn (or re-learn) what all of these things do and how they impact when your code runs really soon. Before we get there, though, let’s take a quick detour and look at the three stages of a page load.

Stage Numero Uno

The first stage is when your browser is about to start loading a new page:

the first stage

At this stage, there isn’t anything interesting going on. A request has been made to load a page, but nothing has been downloaded yet.

Stage Numero Dos

Things get a bit more exciting with the second stage, where the raw markup and DOM of your page have been loaded and parsed:

the DOM is loaded

The thing to note about this stage is that external resources like images and stylesheets have not been parsed. You only see the raw content specified by your page/document’s markup.

Stage Numero Three

The final stage is where your page is fully loaded with any images, stylesheets, scripts, and other external resources making their way into what you see:

the final stage

This is the stage where your browser’s loading indicators stop animating, and this is also the stage you almost always find yourself in when interacting with your HTML document.

Now that you have a basic idea of the three stages your document goes through when loading content, let’s move forward to the more interesting stuff. Keep these three stages at the tip of your fingers (or under your hat if you are wearing one while reading this), for we’ll refer back to these stages a few times in the following sections.

The DOMContentLoaded and load Events

There are two events that represent two important milestones while your page loads. The DOMContentLoaded event fires at the end of Stage #2, when your page’s DOM is fully parsed. The load event fires at the end of Stage #3, once your page has fully loaded. You can use these events to time exactly when you want your code to run.

Below is a snippet of these events in action:

document.addEventListener("DOMContentLoaded", theDomHasLoaded, false);
window.addEventListener("load", pageFullyLoaded, false);

function theDomHasLoaded(e) {
    // do something
}

function pageFullyLoaded(e) {
    // do something again
}

You use these events just like you would any other event, but the main thing to note about these events is that you need to listen to DOMContentLoaded from the document element and load from the window element.

Now that we got the boring technical details out of the way, why are these events important? Simple. If you have any code that relies on working with the DOM, such as anything that uses the querySelector or querySelectorAll functions, you want to ensure your code runs only after your DOM has been fully loaded. If you try to access your DOM before it has fully loaded, you may get incomplete results or no results at all.

A sure-fire way to ensure you never get into a situation where your code runs before your DOM is ready is to listen for the DOMContentLoaded event and let all of the code that relies on the DOM to run only after that event is overheard:

document.addEventListener("DOMContentLoaded", theDomHasLoaded, false);

function theDomHasLoaded(e) {
    var images = document.querySelectorAll("img");
    // do something with the images
}

For cases where you want your code to run only after your page has fully loaded, use the load event. In my years of doing things in JavaScript, I never had too much use for the load event at the document level outside of checking the final dimensions of a loaded image or creating a crude progress bar to indicate progress. Your mileage may vary, but…I doubt it 😛

Scripts and Their Location in the DOM

In the Where Should Your Code Live tutorial, we looked at the various ways you can have scripts appear in your document. You saw that your script elements’ position in the DOM affects when they run. In this section, we are going to re-emphasize that simple truth and go a few steps further.

To review, a simple script element can be some code stuck inline somewhere:

<script>
  var number = Math.random() * 100;
  alert("A random number is: " + number);
</script>

A simple script element can also be something that references some code from an external file:

<script src="something.js"></script>

Now, here is the important detail about these elements. Your browser parses your DOM sequentially from the top to the bottom. Any script elements that get found along the way will get parsed in the order they appear in the DOM.

Below is a very simple example where you have many script elements:

<!DOCTYPE html>
<html>
<body>
    <script>console.log("inline 1");</script>
    <script src="external1.js"></script>
    <script>console.log("inline 2");</script>
    <script src="external2.js"></script>
    <script>console.log("inline 3");</script>
</body>
</html>

It doesn’t matter if the script contains inline code or references something external. All scripts are treated the same and run in the order in which they appear in your document. Using the above example, the order the scripts will run is: inline 1, external 1, inline 2, external 2, and inline 3.

Now, here is a really REALLY important detail to be aware of. Because your DOM gets parsed from top to bottom, your script element has access to all of the DOM elements that were already parsed. Your script has no access to any DOM elements that have not yet been parsed. Say what?!

Let’s say you have a script element that is at the bottom of your page just above the closing body element:

<!DOCTYPE html>
<html>
<body>
    <h1>Example</h1>
    <p>
        Quisque faucibus, quam sollicitudin pulvinar dignissim, nunc velit
        sodales leo, vel vehicula odio lectus vitae mauris. Sed sed magna
        augue. Vestibulum tristique cursus orci, accumsan posuere nunc
        congue sed. Ut pretium sit amet eros non consectetur. Quisque
        tincidunt eleifend justo, quis molestie tellus venenatis non.
        Vivamus interdum urna ut augue rhoncus, eu scelerisque orci
        dignissim. In commodo purus id purus tempus commodo.
    </p>
    <button>Click Me</button>
    <script src="something.js"></script>
</body>
</html>

When something.js runs, it has the ability to access all of the DOM elements that appear just above it such as the h1, p, and button elements. If your script element was at the very top of your document, it wouldn’t have any knowledge of the DOM elements that appear below it:

<!DOCTYPE html>
<html>
<body>
    <script src="something.js"></script>
    <h1>Example</h1>
    <p>
        Quisque faucibus, quam sollicitudin pulvinar dignissim, nunc velit
        sodales leo, vel vehicula odio lectus vitae mauris. Sed sed magna
        augue. Vestibulum tristique cursus orci, accumsan posuere nunc
        congue sed. Ut pretium sit amet eros non consectetur. Quisque
        tincidunt eleifend justo, quis molestie tellus venenatis non.
        Vivamus interdum urna ut augue rhoncus, eu scelerisque orci
        dignissim. In commodo purus id purus tempus commodo.
    </p>
    <button>Click Me</button>
</body>
</html>

By putting your script element at the bottom of your page as shown earlier, the end behavior is identical to what you would get if you had code that explicitly listened to the DOMContentLoaded event. If you can guarantee that your scripts will appear towards the end of your document after your DOM elements, you can avoid doing the whole DOMContentLoaded approach described in the previous section. Now, if you really want to have your script elements at the top of your DOM, ensure that all of the code that relies on the DOM runs after the DOMContentLoaded event gets fired.

Here is the thing. I’m a huge fan of putting your script elements at the bottom of your DOM. There is another reason besides easy DOM access that I prefer having your scripts live towards the bottom of the page. When a script element is being parsed, your browser stops everything else on the page from running while the code is executing. If you have a really long-running script or your external script takes its sweet time in getting downloaded, your HTML page will appear frozen. If your DOM is only partially parsed at this point, your page will also look incomplete in addition to being frozen. Unless you are Facebook, you probably want to avoid having your page look frozen for no reason.

Script Elements, Defer, and Async

In the previous section, I explained how a script element’s position in the DOM determines when it runs. All of that only applies to what I call simple script elements. To be part of the non-simple world, script elements that point to external scripts can have the defer and async attributes set on them:

<script async src="something.js"></script>
<script defer src="something.js"></script>

These attributes alter when your script runs independently of where in the DOM it actually shows up, so let’s look at how they end up altering your script.


The async attribute allows a script to run asynchronously:

<script async src="something.js"></script>

If you recall from the previous section, a script element being parsed can block your browser from being responsive and usable. By setting the async attribute on your script element, you avoid that problem altogether. Your script will run whenever it is able to, but it won’t block the rest of your browser from doing its thing.

This casualness in running your code is pretty awesome, but you must realize that your scripts marked as async will not always run in order. You could have a case where several scripts marked as async run in an order different from the one in which they were specified in your markup. The only guarantee you have is that your scripts marked with async will start running at some mysterious point before the load event gets fired.


The defer attribute is a bit different than async:

<script defer src="something.js"></script>

Scripts marked with defer run in the order in which they were defined, but they only get executed at the end, just a few moments before the DOMContentLoaded event gets fired. Take a look at the following example:

<!DOCTYPE html>
<html>
<body>
    <script defer src="external3.js"></script>
    <script>console.log("inline 1");</script>
    <script src="external1.js"></script>
    <script>console.log("inline 2");</script>
    <script defer src="external2.js"></script>
    <script>console.log("inline 3");</script>
</body>
</html>

Take a second and tell the nearest human / pet the order in which these scripts will run. It’s OK if you don’t provide them with any context. If they love you, they’ll understand.

Anyway, your scripts will execute in the following order: inline 1, external 1, inline 2, inline 3, external 3, and external 2. The external 3 and external 2 scripts are marked as defer, and that’s why they run at the very end despite being declared in different locations in your markup.


In the previous sections, we looked at all sorts of factors that influence when your code will execute. The following diagram summarizes everything you saw into a series of colorful lines and rectangles:

summary of when your code will run

Now, here is probably what you are looking for. When is the right time to load your JavaScript? The answer is…

  1. Place your script references below your DOM, directly above your closing body element.
  2. Unless you are creating a library that others will use, don’t complicate your code by listening to the DOMContentLoaded or load events. Instead, see the previous point.
  3. Mark your script references with the async attribute.
  4. If you have code that doesn’t rely on your DOM being loaded and runs as part of teeing things off for other scripts in your document, you can place this script at the top of your page with the async attribute set on it.

That’s it. I think those four steps will cover almost 90% of all your cases to ensure your code runs at the right time. For more advanced scenarios, you should definitely take a look at a 3rd party library like require.js that gives you greater control over when your code will run.


Another good discussion on the sequence of rendering html

Another post of mine also has some detail on the browser side.

form submit on enter with only one text input

Got an odd behavior today when dealing with a form which has only one input.

The input binds to a js function that checks whether key == 13; we then tweak the input params and submit. However, every time we hit enter, even before the js function executes, the form has already been submitted to the server.

Turns out:

In the HTML 2.0 specification, in the section entitled Form Submission, logically enough:

When there is only one single-line text input field in a form, the user agent should accept Enter in that field as a request to submit the form.

So to prevent this, an easy way is to put a hidden input box in the form.

<input class="hide"/>

Hide is just a bootstrap class which has

display: none !important;
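Another option is to intercept Enter ourselves instead of adding a hidden input. Here is a sketch (the browser wiring is commented out so the logic can run standalone; `tweakParamsAndSubmit` is a hypothetical app-specific handler):

```javascript
// Pure check for the Enter key: keyCode 13, with the modern 'key'
// property as a fallback.
function isEnterKey(event) {
  return event.keyCode === 13 || event.key === 'Enter';
}

// Browser wiring (commented out to keep the sketch self-contained):
// input.addEventListener('keydown', function (e) {
//   if (isEnterKey(e)) {
//     e.preventDefault();       // stop the implicit single-input submit
//     tweakParamsAndSubmit();   // hypothetical app-specific handler
//   }
// });

console.log(isEnterKey({ keyCode: 13 })); // true
console.log(isEnterKey({ key: 'a' }));    // false
```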