Node.js event loop and libuv

Node.js and Chrome event loop

Node.js and Chrome do not use the same event loop implementation: Chrome/Chromium uses libevent, while Node.js uses libuv.

Node’s API provides a kind of asynchronous no-op, setImmediate. For that function, the “some operation” mentioned above is “do nothing”, after which an item is immediately added to the end of the event queue.

There is a more powerful process.nextTick, which adds an event to the front of the event queue, effectively cutting in line and making all other queued events wait. If called recursively, this can cause a prolonged delay for other events (until reaching maxTickDepth).
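To make the ordering concrete, here is a minimal sketch (the interleaving with I/O events is more subtle, but in this simple case nextTick always runs before setImmediate):

// a minimal sketch of the ordering described above
setImmediate(function () {
  console.log('3: setImmediate - appended to the end of the event queue');
});
process.nextTick(function () {
  console.log('2: nextTick - cuts in line ahead of all queued events');
});
console.log('1: synchronous code always runs first');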

  • Background

    Event-driven, asynchronous, single-threaded, non-blocking I/O: these are the descriptions of Node.js we hear most often, and even the official Node.js site puts it this way:

    Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

    For a long time I was content to accept these familiar claims. Only recently, after running into quite a few performance problems, did I start to ask: what are Node.js's internal mechanics, really, and where are its performance bottlenecks?

    Questions

    • If Node.js is single-threaded, how does it implement asynchronous I/O?
    • How does Node.js implement non-blocking I/O?
    • How is Node.js's event-driven model implemented?
    • Node.js is all asynchronous calls and non-blocking I/O, so can we really stop worrying about concurrency?
    • How does Node.js use JS to talk to the operating system?

    Concepts

    Before digging into the questions above, let's look at what these concepts mean:

    • Event-driven:
      Being event-driven means abstracting operations into events: a mouse click is abstracted into an event, an incoming request is abstracted into an event. Events are one way of implementing asynchrony.
    • Synchronous/asynchronous
      A synchronous call does not return until it has obtained a result.
      When an asynchronous call is issued, the caller does not get the result right away. The component that actually handles the call informs the caller afterwards, through state or notifications, or handles the call via a callback function.
    • Blocking/non-blocking

    A blocking call suspends the current thread until the result comes back; the function returns only once it has the result.
    Non-blocking is the opposite: if the result cannot be obtained immediately, the function does not block the current thread but returns right away.

Note: many people mix up synchronous/asynchronous with blocking/non-blocking. They are not equivalent. Synchronous does not necessarily mean blocking: just because the method has not returned does not mean the thread is suspended; in practice you can still go off and do other work. Nor does asynchronous necessarily mean non-blocking: the function may return immediately, yet the caller may suspend the thread by continuously polling for the callback result.
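A concrete illustration with Node's fs module, where both styles exist side by side (a sketch assuming a readable /etc/hosts):

var fs = require('fs');

// Blocking (and synchronous): the JS thread is suspended until the read completes.
var data = fs.readFileSync('/etc/hosts');
console.log('sync read finished: ' + data.length + ' bytes');

// Non-blocking (and asynchronous): the call returns immediately,
// and the result arrives later through the callback.
fs.readFile('/etc/hosts', function (err, asyncData) {
  if (err) throw err;
  console.log('async read finished: ' + asyncData.length + ' bytes');
});
console.log('this line runs before the async callback fires');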

Inside Node.js

To answer the questions above, we first have to understand how Node.js works.

[Figure: Node.js internal architecture]

This diagram shows the internal structure of Node.js. The top layer is the familiar Node.js API, all wrapped in JS. node-bindings is the boundary layer where JS meets the underlying C/C++ code; it is mostly native API source calling into C++, and users never need to touch the C++ modules directly.
Below that sits the V8 engine, which we know well: it is the JS execution engine, "translating" JS for the machine. V8 is not our focus today, but here we can see the relationship between Node and V8: V8 is the JS engine, while Node is a JS runtime, much as the browser is a JS runtime. Most of what we discuss next happens in the runtime.
libuv was originally composed of libev and libeio, later abstracted into libuv. It is the part of Node that talks to the operating system, responsible for the file system, networking, and other low-level work. It is our main focus today; the remaining pieces we will set aside for now.

A brief introduction to libuv

One picture reveals libuv's role in Node:

[Figure: libuv in the Node.js architecture]

As you can see, almost everything that touches the operating system depends on libuv. libuv is also the core of Node's cross-platform support.

Now we can answer how JS talks to the underlying operating system:
through libuv. A simplified diagram follows (using fs as an example):

[Figure: the role of libuv, using fs as an example]

We mentioned the characteristics of asynchronous and non-blocking I/O above. So, if Node.js is single-threaded, how does it implement asynchronous I/O?
You may have guessed it already: the JS thread hands the I/O work to libuv and immediately returns to do other things, and libuv invokes the callback at the right moment. That really is the simplified flow! In slightly more detail, Node.js goes from JS code through node-bindings into C/C++ code, which wraps up a so-called request object and hands it to libuv. The request object is essentially the work to perform plus its callback; libuv executes the work and fires the callback when it is done.

[Figure: the Node.js asynchronous model]

This also answers the question is Node.js really single-threaded? Only JS execution is single-threaded; I/O clearly runs on other threads. At minimum, libuv needs a thread to accept Node's asynchronous requests and execute them, and in fact there is far more than that, as we will see below.

When does libuv execute callbacks?

We said above that libuv takes over the I/O requests handed down from JS. So when do the callbacks get handled?
You might say: simple, just call back as soon as the I/O finishes. That would be extremely unsafe. JS execution is single-threaded; if two callbacks arrived at the same time, or the JS thread were busy working, we would have callbacks racing each other, which must never happen in a single-threaded model. So libuv has an event loop mechanism to receive callbacks and manage their execution.

The event loop is the heart of libuv. As mentioned above, JS hands tasks and callbacks to libuv, and the event loop decides when libuv invokes them. Internally, the event loop maintains several event queues (also called watchers), such as a timer queue, a network queue, and so on. Users register callbacks with a watcher; when an event occurs it moves into the pending state, and on the next loop iteration the pending events are taken out and executed in order. libuv runs what amounts to a while (true) infinite loop, repeatedly checking each watcher for pending events and triggering the queued events in order. Because the event loop executes only one callback at a time, races are avoided. libuv's official event loop diagram:

[Figure: libuv loop iteration]
Some day when I have time I will walk through this loop in detail; it is quite interesting.
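As a rough sketch of that diagram (paraphrasing the libuv documentation, not actual source; every helper name below is made up for readability), one iteration looks something like this:

// pseudocode of a single libuv loop iteration
while (loopIsAlive()) {
  updateLoopTime();        // cache "now" once per iteration
  runDueTimers();          // timer watchers whose time has come
  runPendingCallbacks();   // I/O callbacks deferred from the previous iteration
  runIdleAndPrepare();     // internal idle/prepare handles
  pollForIO(timeout);      // block on epoll/kqueue/IOCP until events or the next timer
  runCheckHandles();       // check watchers - Node's setImmediate fires here
  runCloseCallbacks();     // callbacks for handles closed this iteration
}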

File I/O

One of the figures above showed libuv's role in Node.js. The right half, covering file I/O, DNS, and user code, maps to the thread pool mechanism. The flow is roughly as follows (a code sketch follows the list):
1 The JS layer calls something like fs.open, which node-bindings turns into C/C++ code.
2 The callback and related data are wrapped into a request object; if the thread pool has an idle thread, the request is handed to it for execution.
3 When execution finishes, a callback event is injected into the file watcher of libuv's event loop; that event is converted back up into a JS callback and executed.
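From the JS side, step 1 is nothing more exotic than this (a sketch; /tmp/demo.txt is just an example path):

var fs = require('fs');

fs.open('/tmp/demo.txt', 'w', function (err, fd) {
  // this callback is what gets wrapped into the "request object",
  // run on a pool thread, and handed back through the event loop
  if (err) throw err;
  fs.close(fd, function () {});
});
console.log('fs.open returned immediately; the JS thread moves on');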

Here we meet the thread pool, and we find that Node.js is not single-threaded after all; there is even parallelism happening. The thread pool's default size is 4, meaning at most 4 threads can be doing file I/O at the same time; further requests are queued until a pool thread becomes free. That answers Node.js is all asynchronous calls and non-blocking I/O, so can we really stop worrying about concurrency?
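A quick way to see that limit of 4: crypto.pbkdf2 also runs on the libuv thread pool, and with the default pool size, eight of these calls tend to finish in two waves of four (a sketch; exact timings vary by machine):

var crypto = require('crypto');
var start = Date.now();

for (var i = 0; i < 8; i++) {
  crypto.pbkdf2('password', 'salt', 100000, 64, 'sha512', function () {
    // with UV_THREADPOOL_SIZE=8 these all finish at roughly the same time
    console.log('pbkdf2 done after ' + (Date.now() - start) + ' ms');
  });
}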

The thread pool size can be changed through the UV_THREADPOOL_SIZE environment variable, or from Node.js code by setting process.env.UV_THREADPOOL_SIZE (before the pool is first used). The rough workflow can be seen in the flow chart below:

[Figure: event loop and thread pool workflow]

The diagram from the book 深入浅出Node.js is also quite representative:
[Figure: Node.js asynchronous I/O flow]

Network I/O

libuv's network I/O uses a purely event-based mechanism, implemented on top of low-level OS facilities; it chooses a different solution on each operating system, for example epoll on Linux and IOCP on Windows.
Taking Linux as an example: epoll is a very efficient asynchronous I/O mechanism on Linux (nginx uses it too). Through epoll (see the figure below), an event notification scheme is set up: whenever the network kernel sees any event that has been subscribed to, it notifies the subscriber, and the corresponding code runs.

[Figure: epoll event notification]

So we can read the flow as: JS binds an event -> libuv binds an event -> the network kernel listens for the event; then kernel event fires -> libuv -> JS callback. Network I/O therefore has no concurrency limit to manage, because there is no thread pool involved at all: a single thread burns through the various callbacks as they come.

So the way network I/O differs from file I/O is exactly that there is no thread pool; the work is handed to the operating system through the event mechanism. When the OS responds to or finishes a request, it triggers libuv's callback, which is then passed up to Node.js to run the corresponding business code.
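To see the contrast with file I/O, here is a single-threaded echo server sketch: every connection below is multiplexed by epoll/IOCP inside libuv, with no pool thread involved:

var net = require('net');

net.createServer(function (socket) {
  // each connection is just another set of events on the same thread
  socket.on('data', function (chunk) {
    socket.write(chunk); // echo it back
  });
}).listen(3000, function () {
  console.log('echo server on :3000 - one thread, arbitrarily many sockets');
});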

FROM HERE


require exports module in nodejs/requirejs/commonjs

What is a Module

A module encapsulates related code into a single unit of code. When creating a module, this can be interpreted as moving all related functions into a file. Let’s illustrate this point with an example involving an application built with Node.js. Imagine that we created a file called greetings.js and it contains the following two functions:

// greetings.js
var sayHelloInEnglish = function() {
  return "Hello";
};
var sayHelloInSpanish = function() {
  return "Hola";
};

Exporting a Module

The utility of greetings.js increases when its encapsulated code can be utilized in other files. So let’s refactor greetings.js to achieve this goal. To comprehend what is actually happening, we can follow a three-step process:

1) Imagine that this line of code exists as the first line of code in greetings.js:

// greetings.js
var exports = module.exports = {};

2) Assign any expression in greetings.js that we want to become available in other files to the exports object:

// greetings.js
// var exports = module.exports = {};
        
exports.sayHelloInEnglish = function() {
  return "HELLO";
};
   
exports.sayHelloInSpanish = function() {
  return "Hola";
};

In the code above, we could have replaced exports with module.exports and achieved the same result. If this seems confusing, remember that exports and module.exports reference the same object.
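A quick aside to make that relationship concrete: mutating the shared object works through either name, but re-assigning the local variable exports breaks the link, because require() ultimately returns module.exports (sayHelloInFrench and sayHelloInGerman are just illustrative names):

// inside any module
console.log(exports === module.exports); // true

// this export survives, because the shared object is mutated:
exports.sayHelloInFrench = function() {
  return "Bonjour";
};

// this one is LOST: it rebinds the local variable only,
// and require() never sees it
exports = { sayHelloInGerman: function() { return "Hallo"; } };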

3) This is the current value of module.exports:

module.exports = {
  sayHelloInEnglish: function() {
    return "HELLO";
  },
       
  sayHelloInSpanish: function() {
    return "Hola";
  }
};

Importing a Module

Let’s import the publicly available methods of greetings.js to a new file called main.js. This process can be described in three steps:

1) The keyword require is used in Node.js to import modules. Imagine that this is how require is defined:

var require = function(path) {
  // ...
  return module.exports;
};

2) Let’s require greetings.js in main.js:

// main.js
var greetings = require("./greetings.js");

The above code is equivalent to this:

// main.js
var greetings = {
  sayHelloInEnglish: function() {
    return "HELLO";
  },
       
  sayHelloInSpanish: function() {
    return "Hola";
  }
};

3) We can now access the publicly available methods of greetings.js as a property of our greetings variable in main.js.

// main.js
var greetings = require("./greetings.js");
// "Hello"
greetings.sayHelloInEnglish();
        
// "Hola" 
greetings.sayHelloInSpanish();

Salient Points

The keyword require returns an object, which references the value of module.exports for a given file. If a developer unintentionally or intentionally re-assigns module.exports to a different object or different data structure, then any properties added to the original module.exports object will be inaccessible.

An example will help elaborate this point:

// greetings.js
// var exports = module.exports = {};
exports.sayHelloInEnglish = function() {
  return "HELLO";
};
exports.sayHelloInSpanish = function() {
  return "Hola";
};
/*
 * this line of code re-assigns 
 * module.exports
 */
module.exports = "Bonjour";

Now let’s require greetings.js in main.js:

// main.js
var greetings = require("./greetings.js");

At this moment, nothing is different than before. We assign the variable greetings to any code that is publicly available in greetings.js.

The consequence of re-assigning module.exports to a data structure other than its default value is revealed when we attempt to invoke sayHelloInEnglish and sayHelloInSpanish:

// main.js
// var greetings = require("./greetings.js");
    
/*
 * TypeError: object Bonjour has no
 * method 'sayHelloInEnglish'
 */
greetings.sayHelloInEnglish();
        
/*
 * TypeError: object Bonjour has no
 * method 'sayHelloInSpanish'
 */
greetings.sayHelloInSpanish();

To understand why these errors are occurring, let's log the value of greetings to the console:

// "Bonjour"
console.log(greetings);

At this point, we are trying to access the methods sayHelloInEnglish and sayHelloInSpanish on the string "Bonjour". module.exports, in other words, no longer references the default object that contains those methods.

Conclusion

Importing and exporting modules is a ubiquitous task in Node.js. I hope that the difference between exports and module.exports is clearer. Moreover, if you ever encounter an error in accessing publicly available methods in the future, then I hope that you have a better understanding of why those errors may occur.

From here

Some good discussion on StackOverflow

How bower works.

When I first looked into Bower, I wasn’t exactly sure how it fit in: it wasn’t just a JavaScript package manager, like Jam, and it wasn’t a module loader, like RequireJS. It calls itself a browser package manager, but what exactly does this mean? How is that different from a JavaScript package manager? The main difference is that Bower doesn’t just handle JavaScript libraries: it will manage any packages, even if that means HTML, CSS, or images. In this case, a package means any encapsulated, third-party code, usually publicly accessible from a Git repository.

Bower is just a package manager.

The important thing to note here is that Bower is just a package manager, and nothing else. It doesn’t offer the ability to concatenate or minify code, and it doesn’t support a module system like AMD: its sole purpose is to manage packages.

The bower registry will just be a regular webapp with a simple REST API for name/url.

What is Bower?

Bower is a package manager for client-side technologies. It can be used to search for, install, and uninstall web assets like JavaScript, HTML, and CSS. It is not an opinionated tool and leaves a lot of choice to the developers using it. There are various tools built on top of Bower, like Yeoman and Grunt. We will talk about them in future posts.

Why should I care?

  1. Saves time : The first reason you should learn about Bower is that it will save you the time you spend hunting down client-side dependencies. Every time I have to install jQuery, I go to the jQuery website and either download the package or use the CDN version. With Bower, you can just type a command and have jQuery installed on your local machine, without having to remember version numbers. You can look up any library's information with the bower info command (see the sketch after this list).
  2. Helps you work offline : Bower creates a .bower folder in the user's home directory, where it downloads all the assets and keeps them available for offline use. If you are familiar with Java, this is similar to the .m2 repository of the popular Maven build system. Each time you download a package, Bower installs it in two places: one in your application folder and another in the .bower directory under the user's home directory. The next time you need the package, Bower picks up that version from the home .bower directory.
  3. Makes it easy to express client side dependencies : You can create a file called bower.json in which you specify all your client-side dependencies. Whenever you need to figure out which libraries you are using, you can refer to this file.
  4. Makes update easy : Suppose a new version of a library is released with an important security fix. To install the new version, you just run a command and Bower updates all of your dependencies accordingly.
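For illustration, the commands mentioned above might look like this (package names are only examples):

# look up package info, then install and update
bower search jquery
bower info jquery
bower install jquery
bower update

And a minimal bower.json declaring the dependencies (the app name and version ranges here are hypothetical):

{
  "name": "my-app",
  "dependencies": {
    "jquery": "~2.1.3",
    "bootstrap": "~3.3.2"
  }
}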

FROM HERE

A good discussion explaining Bower's relationship with npm and version management in npm, and the possibility of replacing Bower with npm.

missing jwt options in token using nodejs jsonwebtoken

I am using jsonwebtoken to handle the token generation and verification on the server side.

The way I did it: once the user authenticates successfully, I sign the ‘user’ object directly to generate the token, like this:

var token = jwt.sign(user, secret.secretToken, {expiresInMinutes: 60, issuer: 'cccg', algorithm:'HS384'});

I found my token never expires. After debugging (you can jwt-decode your token here), I found no ‘exp’ was included in the token. So I tried to sign the id instead.

var token = jwt.sign(user._id, secret.secretToken, {expiresInMinutes: 1, issuer: 'cccg', algorithm:'HS384'});

Still not working. So I had to dive into the jwt source code. In the sign function, it checks each option and assigns it directly onto the payload object - so when the payload is a string (like user._id above), those property assignments are silently lost, and a Mongoose document can likewise drop them when it is serialized:


module.exports.sign = function(payload, secretOrPrivateKey, options) {
  options = options || {};

  var header = {typ: 'JWT', alg: options.algorithm || 'HS256'};

  payload.iat = Math.round(Date.now() / 1000);

  if (options.expiresInMinutes) {
    var ms = options.expiresInMinutes * 60;
    payload.exp = payload.iat + ms;
  }

  if (options.audience)
    payload.aud = options.audience;

  if (options.issuer)
    payload.iss = options.issuer;

  if (options.subject)
    payload.sub = options.subject;

  var signed = jws.sign({header: header, payload: payload, secret: secretOrPrivateKey});

  return signed;
};


Now we can do it the right way, passing a plain JS object as the payload so that exp/issuer/etc. get picked up:

var token = jwt.sign({id:user._id}, secret.secretToken, {expiresInMinutes: 1, issuer: 'cccg', algorithm:'HS384'});
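A quick sanity check (a sketch, assuming the same jwt = require('jsonwebtoken') and token from above; jwt.decode and jwt.verify are part of the same module):

// decode without verifying, just to confirm the claims are present
var decoded = jwt.decode(token);
console.log(decoded.exp, decoded.iss); // exp and iss are now set

// verify: once the minute is up, err.name === 'TokenExpiredError'
jwt.verify(token, secret.secretToken, function (err, payload) {
  if (err) {
    return console.error('token rejected: ' + err.name);
  }
  console.log('token ok for id ' + payload.id);
});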


PS: the token generation in the ‘jws’ module’s jwsSign function is pretty straightforward. It base64-encodes the header and the payload, joins them with ‘.’, then generates the signature, base64-encodes it, and appends it, yielding the three parts of the token. See the bottom of my other POST for the detailed JWT structure.

deploy nodejs angularjs mongodb expressjs application to openshift

In my previous post, I described how to upload a file using Node.js and AngularJS.

Now we are going to deploy this MEAN stack app to OpenShift, a very good cloud service provider offering 3 application deployments for free. You can even deploy a Java web application to it using Tomcat/MySQL, part of which I mentioned in a previous post.

We first need to create an instance on OpenShift, then pull the source code to our local machine and merge it with our local git; after fixing some conflicts, we push our code to OpenShift. This whole process is covered in this previous post.

Once we have the codebase and push it to the server's git, OpenShift will run npm install against our package.json every time we push something new. Very neat.

Some tricks needed

1. OpenShift IP and port

OpenShift has its own IP and port, so we need to set them in our server.js when we start our app:

//openshift port or local port
var ipaddress = process.env.OPENSHIFT_NODEJS_IP || "127.0.0.1";
var port = process.env.OPENSHIFT_NODEJS_PORT || 3000;

app.listen(port, ipaddress, function () {
    logger.info('Express server listening on port: ' + port);
});

I use || so that the codebase still runs locally. As for the variables on OpenShift, you can ssh to the OpenShift server and type:

env | grep OPENSHIFT_NODEJS

to see a bunch of environment variables related to Node.js. IP and PORT are two of them.

2. MongoDB connection

Similar to the IP and port, you also need some customization for the MongoDB connection. I use mongoose. The code below makes sure our app works both locally and remotely.


var mongoose = require('mongoose');

// default to a 'localhost' configuration:
var connection_string = '127.0.0.1:27017/contacts';
// if OPENSHIFT env variables are present, use the available connection info:
if(process.env.OPENSHIFT_MONGODB_DB_PASSWORD){
    connection_string = process.env.OPENSHIFT_MONGODB_DB_USERNAME + ":" +
    process.env.OPENSHIFT_MONGODB_DB_PASSWORD + "@" +
    process.env.OPENSHIFT_MONGODB_DB_HOST + ':' +
    process.env.OPENSHIFT_MONGODB_DB_PORT + '/' +
    process.env.OPENSHIFT_APP_NAME;
}

mongoose.connect('mongodb://' + connection_string);
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));

console.log('Hey, database Node module is loaded')

module.exports = db;

3. MongoDB data export and import to OpenShift

To export the local data, just execute command

mongoexport -d targetDB -c targetCollection -o targetFileName.json

This generates the output file in the current directory. Next we want to import it into the OpenShift MongoDB. The tmp directory is shorthand for /tmp; on Linux, it's a directory that is cleaned out whenever you restart the computer, so it's a good place for temporary files.

So, we could do something like:

$ rsync targetFileName.json openshiftUsername@openshiftHostname:/tmp
$ ssh openshiftUsername@openshiftHostname
$ mongoimport -d targetDB -c targetCollection /tmp/targetFileName.json

4. bower install

If you are also using Bower to manage the client dependencies like me, then you also need to run bower install to get all the client packages. Running it directly from the console does not work on OpenShift, since Bower expects to be able to write some of its config files in $HOME/.bower/ (or inside .config), but OpenShift does not provide write access to $HOME.

You can work around this issue by prepending HOME=$OPENSHIFT_DATA_DIR to the bower command whenever you run it:

HOME=$OPENSHIFT_DATA_DIR bower

This thread posts some methods. However, an easy way I found is to add this to package.json's scripts, using the postinstall phase:

  "scripts": {
    "postinstall": "export HOME=$OPENSHIFT_REPO_DIR; ./node_modules/bower/bin/bower install"
  },

This sets up a new HOME directory that is writable by you, and automatically invokes bower install.

file upload with angularjs and nodejs

Angular and Node are really hot technologies these days.

The issue I am tackling today is uploading a file from an AngularJS frontend to a Node.js backend. The file I am working with is an avatar (image), but the technique should apply to any file.

There are a lot of options on both sides.

Frontend: AngularJS

angular-file-upload

https://github.com/danialfarid/angular-file-upload#php

This directive is easy to use and not as fancy as the others, which is exactly why I chose it.

ng-flow

Flow.js is another option that many people are using.

https://github.com/flowjs/ng-flow

I just found it has too many features and is a little over-complicated. Things like chunked file transfer are not something I really need here. What's more, the documentation and examples are not that useful; I had to dig into the source code to understand how some directive attributes work. If you need some of its features like chunking or image preview, use it.

Backend: Node.js

Most of them are based on busboy.

Formidable

https://github.com/felixge/node-formidable

Formidable has been around for a long time. I chose it because it is easy to use and well documented. It does not have as many features as the other options, but it is more than enough for me, since I just need to parse the multipart request and get the fields and files out.

multiparty

https://github.com/andrewrk/node-multiparty/

Multiparty is another middleware for parsing HTTP requests with multipart/form-data. The functionality and API are very similar to Formidable's, but with some new features, such as chaining multiple callbacks for parse. It also has support for AWS S3, which might be very useful to those who use the cloud.

Multer

https://github.com/expressjs/multer

Multer is quite new and actively maintained. I might switch to it later if I have more time. It is also recommended by the Express team as the middleware replacing the old and ugly express multipart, which was integrated in Express 3 but is no longer bundled in Express 4.
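For comparison, a minimal Multer sketch (using the Multer 1.x API; the route and the field name 'file' are assumptions chosen to mirror the setup below):

var express = require('express');
var multer = require('multer');
var app = express();

// store uploads under uploads/ with generated file names
var upload = multer({ dest: 'uploads/' });

// upload.single('file') expects the multipart field to be named 'file'
app.post('/api/uploadAvatar', upload.single('file'), function (req, res) {
  res.json({ path: req.file.path, name: req.file.originalname });
});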

Implementation

HTML snippet

Use the angular-file-upload directive. Each time a file is added, it triggers the upload($files) method in the controller.

<div style="margin-left: 400px; margin-top: 100px;">
    <label for="uploadWidget">Upload Image</label>
    <div id="uploadWidget" class="btn btn-default" ng-file-select ng-file-change="upload($files)">Select File</div>
</div>

Angular controller
The $upload service comes with the directive and is injected here. The upload function uploads the file, along with some metadata, to the server's '/api/uploadAvatar' endpoint. progress() is optional. success() takes a callback, with the response stored in 'data'.

contactControllers.controller('MemberEditController', ['$scope', '$routeParams', 'MemberResource', '$location', '$window', '$upload',
    function ($scope, $routeParams, MemberResource, $location, $window, $upload) {
        $scope.upload = function (files) {
            if (files && files.length) {
                $upload.upload({url: '/api/uploadAvatar', fields: {username: $scope.member.username}, file: files[0]}).progress(function (event) {
                    var progressPercentage = parseInt(100.0 * event.loaded / event.total);
                    console.log('progress: ' + progressPercentage + '% ' + event.config.file.name);
                }).success(function (data, status, headers, config) {
                    console.log('file ' + config.file.name + ' uploaded. Response: ' + JSON.stringify(data));
                    $scope.member.photo = data.path;
                });
            }
        };
    }]);

Node.js
On the server side, we first configure the route for the '/api/uploadAvatar' endpoint used above so that we can handle the request. restImpl is the file holding my REST implementations. Below is the code in the Node.js bootstrap app.js/server.js:

var restImpl = require('./routes/restImpl');

app.post('/api/uploadAvatar', restImpl.uploadAvatar);

In restImpl.js we do the real work.
The logger below is an instance of winston; see my other POST for configuring logging for Node.js.

In the uploadAvatar function, we create a formidable instance, which parses the incoming request so that we can get all the fields and files out of it. We then do the processing in the callback: first get the file, then the temp path it was written to, and move it from the temp location to the target path. Finally we return the new file's path to the client so that the AngularJS side can handle it (the 'data' in the success callback above).

var formidable = require('formidable');
var path = require('path');
var fs = require('fs');
var logger = require('winston');
//handle avatar upload
exports.uploadAvatar = function (req, res) {
    var form = new formidable.IncomingForm();
    form.parse(req, function (err, fields, files) {
        var file = files.file;
        var username = fields.username;
        var tempPath = file.path;
        var targetPath = path.resolve('./public/photos/' + username + '/' + file.name);
        // note: the target directory must already exist, or rename will fail;
        // rename also fails across devices, where a copy + unlink is needed instead
        fs.rename(tempPath, targetPath, function (err) {
            if (err) {
                throw err;
            }
            logger.debug(file.name + " upload complete for user: " + username);
            return res.json({path: 'photos/' + username + '/' + file.name});
        });
    });
};

nodejs log to file for multiple modules

There are several projects that provide file logging for Node.js, like log4js, winston, etc.

I picked winston for no specific reason.

The initial setup is pretty simple; just follow what is written on GitHub.

To share it across multiple modules, we need some tweaks.

First we need a config file:

We name the file logConfig.js and put it under the log directory.

/**
 * Created by LeOn on 2/21/15.
 */

var logger = require('winston');
var moment = require('moment');

logger.setLevels({
    debug: 0,
    info: 1,
    silly: 2,
    warn: 3,
    error: 4
});
logger.addColors({
    debug: 'green',
    info: 'cyan',
    silly: 'magenta',
    warn: 'yellow',
    error: 'red'
});

logger.remove(logger.transports.Console);
logger.add(logger.transports.Console, {
    timestamp: function () {
        return getCurrentTime();
    }, level: 'debug', colorize: true
});
//specify json: false so that it does not write JSON to the log file, which is not very human-readable
logger.add(logger.transports.File, {
    json: false, timestamp: function () {
        return getCurrentTime();
    }, filename: 'log/contactApp.log', level: 'debug'
});

//use moment to format local time; it looks better than JS's own toLocaleTimeString()
function getCurrentTime()
{
    return new moment().format("YYYY-M-D hh:mm:ss-SSS");
}

module.exports = logger;

Then in the app.js or server.js

var logger = require('./log/logConfig.js');

Finally, in files from other modules:

 var logger = require('winston');
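Because require() caches modules (and winston exposes a singleton default logger), this hands back the very same, already-configured instance, so calls like these land in both the console and log/contactApp.log:

// e.g. anywhere in routes/restImpl.js
logger.debug('parsing upload request');
logger.info('avatar saved');
logger.warn('disk almost full');
logger.error('failed to rename temp file');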