angular change detection notes

Was reading some Angular change detection articles, trying to understand how it works. Some notes below.

Angular can detect when component data changes, and then automatically re-render the view to reflect that change.

NgZone

NgZone is Angular's wrapper around zone.js, which patches the browser's async APIs: user events (click/keyup etc.), setTimeout/setInterval, and XHR. It is similar to the AOP we have in Spring, where the framework creates proxies to run custom logic before/after the original method call. The point of patching these browser APIs is to trigger change detection. Previously in ng1, we had all those special directives and services like ng-click and $timeout to make sure that, after the custom logic was done, we called angularjs's $apply() to run the digest cycle, dirty-check, and update the view if necessary. Now in ng2+, all of that special machinery is gone thanks to Zone, which lets Angular fire change detection after any of these browser events finishes on the main stack. Here is a good article explaining zones in Angular.
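As a rough illustration of the patching idea (a conceptual sketch only, not zone.js' actual implementation; onTaskDone is a made-up placeholder), zone.js-style patching wraps an async API such as setTimeout so that extra logic can run once the scheduled task completes:

// conceptual sketch only -- not zone.js source code, just the patching idea
const originalSetTimeout = window.setTimeout.bind(window);

(window as any).setTimeout = (callback: () => void, delay?: number) =>
  originalSetTimeout(() => {
    callback();      // run the original task first
    onTaskDone();    // then run the "after" logic, like an AOP proxy would;
  }, delay);         // the real zone notifies NgZone, which emits onStable

// placeholder for what NgZone/ApplicationRef effectively react to afterwards
function onTaskDone() {
  console.log('async task finished -- a good time to run change detection');
}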

Basically, the short version is that somewhere in Angular's source code there's this thing called ApplicationRef, which listens to NgZone's onStable event. Whenever this event fires, it executes a tick() function which essentially performs change detection:

  tick() {
    this.changeDetectorRefs
      .forEach((ref) => ref.detectChanges());
  }

change detection flow

So now we know how CD is triggered; time for how it is executed.

Change detector classes are created on the fly by Angular for each component.

Change detection starts from the top of the component (view) tree.

The main logic responsible for running change detection for a view resides in the checkAndUpdateView function. Most of its functionality performs operations on child component views. This function is called recursively for each component starting from the host component, which means that a child component becomes the parent component on the next call as the recursive tree unfolds.

When this function is triggered for a particular view, it performs the following operations in the specified order:

  1. sets ViewState.firstCheck to true if a view is checked for the first time and to false if it was already checked before
  2. checks and updates input properties on a child component/directive instance
  3. updates child view change detection state (part of change detection strategy implementation)
  4. runs change detection for the embedded views (repeats the steps in the list)
  5. calls OnChanges lifecycle hook on a child component if bindings changed
  6. calls OnInit and ngDoCheck on a child component (OnInit is called only during first check)
  7. updates ContentChildren query list on a child view component instance
  8. calls AfterContentInit and AfterContentChecked lifecycle hooks on child component instance (AfterContentInit is called only during first check)
  9. updates DOM interpolations for the current view if properties on current view component instance changed
  10. runs change detection for a child view (repeats the steps in this list)
  11. updates ViewChildren query list on the current view component instance
  12. calls AfterViewInit and AfterViewChecked lifecycle hooks on child component instance (AfterViewInit is called only during first check)
  13. disables checks for the current view (part of change detection strategy implementation)

Some lifecycle hooks are called before the DOM update (steps 5, 6 and 8 above) and some after it (step 12; the DOM interpolations themselves are updated in step 9). So if you have the following component hierarchy: A -> B -> C, here is the order of hook calls and binding updates:

A: AfterContentInit
A: AfterContentChecked
A: Update bindings
    B: AfterContentInit
    B: AfterContentChecked
    B: Update bindings
        C: AfterContentInit
        C: AfterContentChecked
        C: Update bindings
        C: AfterViewInit
        C: AfterViewChecked
    B: AfterViewInit
    B: AfterViewChecked
A: AfterViewInit
A: AfterViewChecked
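A quick way to observe this order yourself is to log from the hooks. Below is a hypothetical component C for illustration; A and B would look the same, each embedding the next component in its template:

import { AfterContentChecked, AfterContentInit, AfterViewChecked, AfterViewInit, Component } from '@angular/core';

// Hypothetical component "C"; "A" and "B" are analogous, with A rendering
// <comp-b> and B rendering <comp-c> in their templates.
@Component({
  selector: 'comp-c',
  template: '<span>C</span>'
})
export class CComponent implements AfterContentInit, AfterContentChecked, AfterViewInit, AfterViewChecked {
  ngAfterContentInit()    { console.log('C: AfterContentInit'); }
  ngAfterContentChecked() { console.log('C: AfterContentChecked'); }
  ngAfterViewInit()       { console.log('C: AfterViewInit'); }
  ngAfterViewChecked()    { console.log('C: AfterViewChecked'); }
}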

check on reference

By default, Angular change detection works by checking whether the values of template expressions have changed. This is done for all components. In other words, Angular does not do deep object comparison to detect changes; it only takes into account the properties used by the template.
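As a hypothetical example, Angular only evaluates and compares the expressions actually bound in the template; it does not deep-compare the whole object:

import { Component, Input } from '@angular/core';

@Component({
  selector: 'user-card',                       // hypothetical component, for illustration only
  template: '<span>{{ user.name }}</span>'
})
export class UserCardComponent {
  @Input() user: { name: string; age: number };
}

// During change detection Angular evaluates the template expression `user.name`
// and compares it to the previous value. Changing user.age alone will not mark
// this view as dirty, because age is never read by the template.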

Performance

Ng2+ gets rid of the ng1 style of dirty checking, which could result in multiple rounds of checks; now there is only one round. If we change bound fields in lifecycle hooks like ngAfterViewChecked, we get the "xxx has changed after it was checked" error. This error is only thrown when Angular runs in development mode; in production mode the error is not thrown and the issue remains undetected.
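A contrived example (hypothetical component name) of the kind of code that triggers this error in development mode:

import { AfterViewChecked, Component } from '@angular/core';

@Component({
  selector: 'bad-counter',                 // hypothetical example component
  template: '<span>{{ counter }}</span>'
})
export class BadCounterComponent implements AfterViewChecked {
  counter = 0;

  ngAfterViewChecked() {
    // The view has already been checked in this CD round, so mutating a bound
    // field here makes the dev-mode verification pass detect a difference and
    // throw "Expression has changed after it was checked".
    this.counter++;
  }
}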

trigger CD manually

There could be special occasions where we do want to turn off change detection. Imagine a situation where a lot of data arrives from the backend via a websocket. We might want to update a certain part of the UI only once every 5 seconds. To do so, we start by injecting the change detector into the component:

constructor(private ref: ChangeDetectorRef) {
  ref.detach();
  setInterval(() => {
    this.ref.detectChanges();
  }, 5000);
}

As we can see, we just detach the change detector, which effectively turns off change detection. Then we simply trigger it manually every 5 seconds by calling detectChanges().
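If we later need to resume automatic change detection (say, once the data burst is over), the same injected ref can be re-attached. Here is a small sketch of the full component, with hypothetical names and interval cleanup added for completeness:

import { ChangeDetectorRef, Component, OnDestroy } from '@angular/core';

@Component({
  selector: 'ticker-view',                  // hypothetical component
  template: '<span>{{ data }}</span>'
})
export class TickerViewComponent implements OnDestroy {
  data: any;
  private intervalId: any;

  constructor(private ref: ChangeDetectorRef) {
    this.ref.detach();                      // stop automatic checking of this view
    this.intervalId = setInterval(() => this.ref.detectChanges(), 5000);
  }

  resumeAutomaticUpdates() {
    clearInterval(this.intervalId);
    this.ref.reattach();                    // the view is checked automatically again
  }

  ngOnDestroy() {
    clearInterval(this.intervalId);
  }
}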

 

Some references:

  1. How does Angular Change Detection Really Work?
  2. ANGULAR CHANGE DETECTION EXPLAINED
  3. Everything you need to know about change detection in Angular

Some other good articles on angularInDepth:

Exploring Angular DOM manipulation techniques using ViewContainerRef

The mechanics of DOM updates in Angular

difference between detectChanges and markForCheck

Angular Rxjs Error Handling flow

In Angular 2+, HTTP is all Rx based, so the error flow is quite different from the promise-based way we used to have in angular 1.x.

Consider the following code:

this.http.get('someurl')
  .map((res: Response) => {
    return res.json();
  })
  .catch((err) => {
    console.log(err);
    // must return an Observable to keep the chain going
    return Observable.throw('my custom err msg');
  })
  .subscribe((res) => {
    console.log(res);
  }, (err) => {
    console.log(err);
  }, () => {
    console.log('complete');
  });

So the basic flow is: if any exception happens, the error handler function in the catch block is called first. That function should return an Observable, since it is part of a chain. Then the (err) callback of subscribe is invoked. In the context of an HTTP request in Angular 2, a response with a status code other than 2xx is considered an error.

Note: if we do not use Observable.throw() in catch but return some random value instead, we will probably get funky results. In the subscribeToResult function there are multiple type checks. For example, if we return an array, it is treated as valid and passed to the first (next) handler rather than the error handler. If we return a random object, such as the error object itself, we get TypeError: unknown type returned and the second handler (the error handler) is triggered.
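For instance, here is a sketch in the same Rx 5 style as above (assuming the same imports/operator patches as the earlier snippet): returning a fallback Observable from catch makes the stream recover, so the success handler receives the fallback value and the error handler never fires:

// assumes the same Rx 5 setup as the snippet above
this.http.get('someurl')
  .map((res: Response) => res.json())
  .catch((err) => {
    console.log(err);
    // recover with a fallback value: the stream continues and the next
    // handler below receives the empty array instead of the error handler
    return Observable.of([]);
  })
  .subscribe(
    (res) => console.log('next handler got', res),
    (err) => console.log('error handler (not reached here)', err)
  );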

Another note: Angular returns an odd error object with status: 0 when invoking a non-existent cross-origin endpoint. This is because whenever you make an API call to a non-existent route, the browser first sends a preflight request. If the backend's error handling strategy is to return an HTTP response with a 404 code and the error details in the body, the browser returns a generic error with HTTP status code 0, without the actual server's payload. The actual call is never made, because there is no point in making it once the preflight has failed. In the end, the error thrown has status 0 and no information about the real cause.

dependency resolution difference between yarn and npm

Today we found an interesting behavioral difference between yarn and npm when resolving/placing dependencies in the node_modules folder. It happened while I was helping one of my teammates set up a local debugging server which requires a library A. In our package.json, we do not declare A explicitly. We have two dependencies, B and C, which both depend on A but on different versions: B -> A1 and C -> A2.

For me (using yarn), when I run yarn install, the newer A2 is placed in the top-level node_modules directory and C does not get A2 in its own node_modules, while B keeps an A1 in its node_modules.

If we run npm install on the project, the behavior is the opposite: the older A1 is placed at the top level and B does not get A1 in its node_modules folder, while C keeps an A2 in its node_modules directory to fulfill its dependency.

It breaks because the local server we set up uses some new APIs from A2 [let A = require('A')], so my local copy works but my colleague's does not, since she gets A1 at the top of node_modules. A minor thing, but it really reminds us to check consistency between different build tools. The solution is straightforward: declare the dependency in package.json directly to get the desired version.
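In other words, add something like the following to package.json ("A" and the version are placeholders for the real library and the version the server actually needs), so that both yarn and npm hoist the same version to the top level:

{
  "dependencies": {
    "A": "^2.0.0"
  }
}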

Also tried the new tool called pnpm; although it claims to be super fast (using links rather than copying files around) and has a non-flat layout, it does not seem to work well with direct GitHub dependencies, so I gave up on it.

include scss in angular styleUrls

In ng2+, we can include css via styleUrls at the component level to separate styling into files. In one of our recent projects, we had a bunch of scss files to migrate, so we wanted to include scss files directly to avoid a major rewrite. We also have some global scss files which are bundled by webpack with ExtractTextPlugin.

So we basically need 2 rules for scss:

{
  test: /(main|index)\.scss$/,
  loader: ExtractTextPlugin.extract({
    fallbackLoader: "style-loader",
    loader: "css-loader!sass-loader",
  }),
},
{
  test: /\.scss$/,
  loader: ["raw-loader", "sass-loader",
    {
      loader: 'sass-resources-loader',
      options: {
        resources: ['./styles/global/_common.scss', './styles/global/_mixins.scss']
      },
    }
  ],
  // main/index scss are handled by the css/sass loaders plus ExtractTextPlugin above, then end up in the bundled css file.
  exclude: [helpers.root('styles/main.scss'), helpers.root('node_modules/header-footer/styles/index.scss')]
},

If we use css-loader rather than raw-loader for the other scss files, we get the error message: Error: Uncaught (in promise): Expected 'styles' to be an array of strings.

The sass-resources-loader is helpful because it makes the shared commons/mixins available to all components, so that we do not have to import them in each component's scss to make it compile.
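For reference, on the component side the scss file is then referenced like any other stylesheet (hypothetical component and file names), and gets processed by the raw-loader/sass-loader rule above:

import { Component } from '@angular/core';

@Component({
  selector: 'my-widget',                     // hypothetical component
  templateUrl: './my-widget.component.html',
  styleUrls: ['./my-widget.component.scss']  // picked up by the raw-loader/sass-loader rule above
})
export class MyWidgetComponent {}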

disable the macos auto update notification

The auto update notification is pretty annoying since it pops up every day, and you need several clicks to make it disappear.


change from UI

I tried to disable it from System Preferences -> App Store -> uncheck "Automatically check for updates". It does not work: the checkbox stays checked after I reopen the System Preferences panel.


Solution

We have to do it from the command line via sudo.

	sudo softwareupdate --schedule off
	sudo defaults write /Library/Preferences/com.apple.SoftwareUpdate AutomaticCheckEnabled -bool FALSE

You can view the status by running:

	sudo defaults read /Library/Preferences/com.apple.SoftwareUpdate

React Native on a machine with port 8081 occupied

Today, as I was setting up a basic React Native app on my company's laptop, I could not run the RN packager server, which defaults to port 8081. I ran lsof -i :8081 but got nothing; it turns out I had to run it with sudo, because the app that uses this port is the corporate McAfee.

Since I obviously do not want to uninstall McAfee from the corporate laptop, I had to find a workaround.

The first thing to do is to run the packager server on a different port:

react-native start --port 8088

Now the packager server is up and running. However, when I run react-native run-android, it still tells me JS server not recognized. To get rid of this message, I had to change some source code in the node module: the logic is in node_modules/react-native/local-cli/util/isPackagerRunning.js, and we just need to change the port in the fetch call from 8081 to the 8088 we used above.

Now the Android app can be installed and run in the virtual device. However, we still cannot leverage the live reload capability of RN, because if we double-tap r to reload, we get a red screen saying it cannot connect to 10.0.2.2:8081. This is because when we use a Genymotion virtual device, the code in node_modules/react-native/ReactAndroid/src/main/java/com/facebook/react/modules/systeminfo/AndroidInfoHelpers.java returns that URL. So we need to press cmd+M in the emulator, go to Dev Settings -> Debug server host for device, and enter 'localhost:8081'.

This overrides the emulator's debug server. (NOTE: if you are connecting a REAL Android device, you do not have to change the above host.) The last step is to forward requests from the VD's 8088 port to our machine's 8088 port, which runs the packager server, by doing:

adb reverse tcp:8088 tcp:8088

The first part is for the VD and the second one is for the hosting machine. More about adb reverse. (Note: even for the VD, try to avoid using 8081. It would work for normal development/reload, but it will not work with Chrome remote debugging, which still forwards the HTTP request for index.android.bundle to the host's 8081, which is occupied by McAfee.)

Now you should be able to change your index.android.js and tap r twice or cmd+M -> Reload to reload the VD.

webpack custom plugin

Recently we worked with a platform that needs webpack to build some ng2/4 assets, plus some custom steps to pull data from a headless CMS (via gulp) and eventually render components. One problem here is that we cannot do live reload/recompile: every time we make a change, we have to run the npm command again to compile the resources.

To solve the issue, I decided to write a custom webpack plugin to make browser-sync and webpack work together.

The basic flow is:

  1. run webpack in watch mode, so that every time a resource (ts/css/html) changes, webpack automatically re-compiles
  2. serve the resources via browser-sync; here browserSync just acts as a mini express server and provides the browser reload capability
  3. a webpack plugin starts the browser-sync server and registers the reload event once webpack compilation is done

Plugin

The webpack API is pretty straightforward: it passes a compiler object to the plugin's apply function. This object represents the fully configured webpack environment; it is built once when webpack starts, and is configured with all operational settings including options, loaders, and plugins. When applying a plugin to the webpack environment, the plugin receives a reference to this compiler. Use the compiler to access the main webpack environment.

const browserSync = require('browser-sync');

function WebpackAndromedaPlugin(options) {
  console.log(`WebpackAndromedaPlugin options: ${JSON.stringify(options, null, 2)}`);

  let browserSyncConfig = require('./lib/dev-server').getBrowserSyncConfig(options);
  browserSyncConfig.server.middleware.push(require('./lib/dev-server').makeServeOnDemandMiddleware(options));
  browserSync.init(browserSyncConfig);
}

WebpackAndromedaPlugin.prototype.apply = (compiler) => {

  compiler.plugin("compile", function (params) {
    console.log('--->>> andromeda started compiling');
  });

  compiler.plugin('after-emit', (_compilation, callback) => {
    console.log('--->>> files prepared');
    browserSync.reload();
    callback();
  });
}

As we can see above, we can register our callbacks with compiler.plugin(), where webpack exposes its different stages for us to hook into.

Another important object is compilation, which  represents a single build of versioned assets. While running Webpack development middleware, a new compilation will be created each time a file change is detected, thus generating a new set of compiled assets. A compilation surfaces information about the present state of module resources, compiled assets, changed files, and watched dependencies. The compilation also provides many callback points at which a plugin may choose to perform custom actions.

For example, all the generated files end up in the compilation.assets object.
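For instance, here is a sketch against the same webpack 2/3-era plugin API used above (the plugin name is made up): a plugin can inspect the generated files during the emit phase, right before they are written out:

function ListAssetsPlugin() {}

ListAssetsPlugin.prototype.apply = (compiler) => {
  // 'emit' runs right before the assets are written to the output directory
  compiler.plugin('emit', (compilation, callback) => {
    Object.keys(compilation.assets).forEach((name) => {
      console.log(`asset: ${name}, size: ${compilation.assets[name].size()} bytes`);
    });
    callback();
  });
};

module.exports = ListAssetsPlugin;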

webpack-dev-middleware

The webpack-dev-middleware is a nice little express middleware that serves the files emitted by webpack over a connect server. One good feature is that it serves files from memory, since it uses an in-memory file system which exposes some simple methods to read/write/check existence in its MemoryFileSystem.js. The webpack-dev-middleware also exposes some hooks like close/waitUntilValid etc.; unfortunately, the callback that waitUntilValid registers will only be called once, according to the compileDone function here. Anyway, it is still an efficient tool for serving webpack resources and very easy to integrate with the webpack nodejs APIs:

~function() {
  // getBrowserSyncConfig and makeServeOnDemandMiddleware are defined in the
  // same module (see ./lib/dev-server above)
  const browserSync = require('browser-sync');
  const options = require('./config');
  let webpackMiddleware = require("webpack-dev-middleware");

  let webpack = require('webpack');
  let browserSyncConfig = getBrowserSyncConfig(options);
  browserSyncConfig.server.middleware.push(makeServeOnDemandMiddleware(options));
  const compiler = webpack(require('./webpack.dev'));
  compiler.plugin('done', () => browserSync.reload());
  let inMemoryServer = webpackMiddleware(compiler, {noInfo: true, publicPath: '/assets'});
  browserSyncConfig.server.middleware.push(inMemoryServer);
  browserSync.init(browserSyncConfig);
}();

 

webpack-dev-server

The webpack-dev-server is basically a wrapper around the above webpack-dev-middleware. It is good for simple resource serving since it does not expose much. I was trying to find a hook into it to intercept the resources it generates/serves, but did not find a good solution. If you need more customization, it is better to go with webpack-dev-middleware.

a very detailed webpack intro article