understand CORS

My colleague told me that there is a Chrome extension that enables you to make cross-domain requests to any site. I was a bit surprised, since my previous understanding was that CORS is controlled from the server side with some control headers. So I decided to dig into it. After reading the wiki and a few articles, I think I know how it works.


One thing I found out is that CORS is actually controlled by both the server and the client side: CORS requires support from both browser and server. All modern browsers and IE 10+ are good to go. The whole process is handled by the browser, so for the USER it is transparent, and for the DEVELOPER the code is the same. The browser adds some headers and sometimes an extra request.

Two types of request (simple/non-simple)

A simple cross-site request is one that meets all the following conditions:

  • The only allowed methods are:
    • GET
    • HEAD
    • POST
  • Apart from the headers set automatically by the user agent (e.g. Connection, User-Agent, etc.), the only headers which are allowed to be manually set are:
    • Accept
    • Accept-Language
    • Content-Language
    • Content-Type
  • The only allowed values for the Content-Type header are:
    • application/x-www-form-urlencoded
    • multipart/form-data
    • text/plain
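The conditions above can be encoded in a small helper. This is just an illustrative sketch — the class and method names are mine, not any library API — roughly mirroring how a browser decides whether a request can skip the preflight:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical helper: encodes the simple-request conditions listed above.
public class SimpleRequestCheck {

    private static final Set<String> SIMPLE_METHODS = Set.of("GET", "HEAD", "POST");

    // Headers a script may set manually without making the request non-simple
    private static final Set<String> SIMPLE_HEADERS =
            Set.of("accept", "accept-language", "content-language", "content-type");

    private static final List<String> SIMPLE_CONTENT_TYPES =
            List.of("application/x-www-form-urlencoded", "multipart/form-data", "text/plain");

    /** @param headers the manually set request headers, with lower-cased names */
    public static boolean isSimple(String method, Map<String, String> headers) {
        if (!SIMPLE_METHODS.contains(method)) {
            return false;
        }
        for (Map.Entry<String, String> h : headers.entrySet()) {
            if (!SIMPLE_HEADERS.contains(h.getKey())) {
                return false;
            }
            if ("content-type".equals(h.getKey())
                    && !SIMPLE_CONTENT_TYPES.contains(h.getValue())) {
                return false;
            }
        }
        return true;
    }
}
```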

For simple requests, the browser adds an Origin header to the request and checks how the server responds. One caveat: even if the server does not allow the origin, the response status code will probably still be 200; the error surfaces through the XHR's onerror handler instead.

For non-simple requests, the browser first sends a preflight request with the OPTIONS method to the resource on the other domain to see whether the actual request is allowed. According to the definition above, the typical XHR JSON content type (application/json) makes a request non-simple, so it requires a preflight.
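A sketch of the server side of that preflight exchange — the class name and the allowed origin below are made up for illustration, not a framework API. Given the preflight's Origin and Access-Control-Request-Method headers, the server builds the response headers that let the "real" request through:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical preflight handler logic (illustration only).
public class PreflightResponder {

    static final String ALLOWED_ORIGIN = "https://app.example.com"; // assumed whitelist of one

    public static Map<String, String> respond(String origin, String requestedMethod) {
        Map<String, String> headers = new LinkedHashMap<>();
        if (ALLOWED_ORIGIN.equals(origin)) {
            headers.put("Access-Control-Allow-Origin", origin);
            headers.put("Access-Control-Allow-Methods", requestedMethod);
            headers.put("Access-Control-Allow-Headers", "Content-Type");
            headers.put("Access-Control-Max-Age", "3600"); // let the browser cache the preflight result
        }
        return headers; // no CORS headers -> the browser blocks the real request
    }
}
```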

chrome CORS extension

So I think the way the Chrome extension works is that it intercepts all the cross-site XHR requests.

For a simple request, after getting the response it adds `Access-Control-Allow-Origin: *` to the response headers, so that the browser does not complain.

For a non-simple request, it directly answers the preflight request with `Access-Control-Allow-Origin: *` so that the browser will allow the subsequent 'real' request to be sent out. One thing I noticed is that it sets the Origin to evil.com, which is kind of funny.


Spring Boot jar process management on an AWS EC2 instance with supervisor


A Spring Boot application is built into a jar which contains its own Tomcat. So instead of running it the traditional way, with a Tomcat instance serving one or multiple wars, we run the jar with `java -jar`. The problem is that if we run it directly, the process ends when our session quits/expires. So we can either run it as a service (init.d) or use some third-party tool to manage it. In the Node.js world we have the super powerful pm2; elsewhere, supervisor seems to be the most recommended tool. Here we are going to introduce how to manage our Spring Boot jar with supervisor.

Install supervisor

Some EC2 AMIs come with easy_install, which is a feature of setuptools. This is the preferred method of installation.

easy_install supervisor


Depending on the permissions of your system's Python, you might need to be the root user to install Supervisor successfully using easy_install.

Or we can use pip to install it:

pip install supervisor

More reference in their official doc.

Supervisor Config

The Supervisor configuration file is conventionally named supervisord.conf. It is used by both supervisord and supervisorctl. If either application is started without the -c option (the option used to tell the application the configuration filename explicitly), the application will look for a file named supervisord.conf in the following locations, in the specified order. It will use the first file it finds.

  1.  $CWD/supervisord.conf
  2.  $CWD/etc/supervisord.conf
  3.  /etc/supervisord.conf
  4.  /etc/supervisor/supervisord.conf (since Supervisor 3.3.0)
  5.  ../etc/supervisord.conf (Relative to the executable)
  6.  ../supervisord.conf (Relative to the executable)

Below is the supervisord.conf I use for the ADDS jar, which I placed under /etc:

[supervisord]
logfile=/var/log/supervisord/supervisord.log    ; supervisord log file
logfile_maxbytes=50MB                           ; maximum size of logfile before rotation
logfile_backups=10                              ; number of backed-up logfiles
loglevel=error                                  ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid                ; pidfile location
nodaemon=false                                  ; run supervisord as a daemon
minfds=1024                                     ; number of startup file descriptors
minprocs=200                                    ; number of process descriptors
user=root                                       ; default user
childlogdir=/var/log/supervisord/               ; where child log files will live

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[include]
files = supervisor/conf.d/*.conf

App Config

In the [include] section above, we tell supervisor to pick up all the conf files under conf.d. So now we can create an adds.conf there:

[program:adds]
command=java -Dserver.port=8080 -Dlogging.path=/var/log/spring/adds/ -jar /var/adds-rest/adds-rest.jar

Run supervisord and its control

Now we can run `supervisord` (or sudo supervisord) to start the daemon.

For finer control of the processes, we can use the `supervisorctl` tool. Entering `supervisorctl` alone takes us to an interactive shell, where there are many actions we can take; use `help` to get the list.

So here we can start/stop/restart, check status, and more.

Or we can run it directly using something like: `supervisorctl restart adds`. 

Web control console

We could access the supervisor web console via http://IP:9001 and manage processes there directly.
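Note that the web console is not enabled by default: supervisord.conf needs an [inet_http_server] section for it (the credentials below are placeholders):

```ini
[inet_http_server]
port=0.0.0.0:9001       ; address:port the web UI listens on
username=admin          ; optional basic-auth credentials
password=changeme
```

On EC2, port 9001 also has to be opened in the instance's security group; binding to 127.0.0.1:9001 or setting credentials is safer than exposing it publicly.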

serialize enum fields with gson

By default, Gson only serializes the 'name' of the enum, which might not be enough, since we might also want to carry all the fields during serialization. To achieve this we need our own Gson adapter and a bit of reflection.


import java.beans.*;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.util.Arrays;
import com.google.gson.*;
import com.google.gson.reflect.TypeToken;
import com.google.gson.stream.*;

public class EnumAdapterFactory implements TypeAdapterFactory {

    @Override
    public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> type) {
        Class<? super T> rawType = type.getRawType();
        if (rawType.isEnum()) {
            return new EnumTypeAdapter<T>();
        }
        return null;
    }

    public class EnumTypeAdapter<T> extends TypeAdapter<T> {

        @Override
        public void write(JsonWriter out, T value) throws IOException {
            if (value == null || !value.getClass().isEnum()) {
                out.nullValue();
                return;
            }
            try {
                out.beginObject();
                out.name("value");
                out.value(value.toString());
                // Write every bean property except the synthetic enum ones
                Arrays.stream(Introspector.getBeanInfo(value.getClass()).getPropertyDescriptors())
                      .filter(pd -> pd.getReadMethod() != null && !"class".equals(pd.getName()) && !"declaringClass".equals(pd.getName()))
                      .forEach(pd -> {
                          try {
                              out.name(pd.getName());
                              out.value(String.valueOf(pd.getReadMethod().invoke(value)));
                          } catch (IllegalAccessException | InvocationTargetException | IOException e) {
                              e.printStackTrace();
                          }
                      });
                out.endObject();
            } catch (IntrospectionException e) {
                e.printStackTrace();
            }
        }

        @Override
        public T read(JsonReader in) throws IOException {
            // Properly deserialize the input (if you use deserialization)
            return null;
        }
    }
}


Enum class:

public enum ReportTypes {
    SP(1), CA(2), ADF(3), ORF(4), CTO(5), CDS(6), TSP(7);

    private final int reportTypeId;

    ReportTypes(int reportTypeId) {
        this.reportTypeId = reportTypeId;
    }

    public int getReportTypeId() {
        return reportTypeId;
    }
}
Test Code:

    public void testReportTypesGsonSerialization() {
        GsonBuilder builder = new GsonBuilder();
        builder.registerTypeAdapterFactory(new EnumAdapterFactory());
        Gson gson = builder.create();
        System.out.println(gson.toJson(ReportTypes.values()));
    }


    "value": "SP",
    "reportTypeId": "1"
    "value": "CA",
    "reportTypeId": "2"
    "value": "ADF",
    "reportTypeId": "3"
    "value": "ORF",
    "reportTypeId": "4"
    "value": "CTO",
    "reportTypeId": "5"
    "value": "CDS",
    "reportTypeId": "6"
    "value": "TSP",
    "reportTypeId": "7"

js tilde IIFE

// Without superfluous operator, we need to surround the anonymous ‘scoping’ function in
// parenthesis to force it to be parsed as an expression instead of a *declaration*,
// which allows us to immediately function-call-pattern it.
(function() {
   // ...
})();

// By inserting a superfluous operator, we can omit those parentheses,
// as the operator forces the parser to view the anonymous function as
// an expression *within* the statement, instead of as the
// statement itself, which saves us a character overall, as well as some ugliness:
~function() {
   // ...
}();

// But, in all of the above examples, if one is depending on ASI, and
// doesn't needlessly scatter semicolons all over their code out of ignorance,
// a prepended semicolon is necessary to prevent snafus like the following:
var foo = 4
+function() {
   // ...
}()
// ... in which case, the variable `foo` would be set to a crazy
// addition / concatenation involving the (probably non-existent) *return value*
// of our anonymous ‘scoping’ function. Hence, our friend the bitflip:
var foo = 4
~function() {
   // ...
}()
// ... he solves all of our problems, by disnecessitating the prepended semicolon
// *and* the wrapping parentheses.

JVM JIT and running mode


At the beginning of Java, all code was executed in an interpreted manner, i.e. executed line by line after interpretation, which resulted in slowness, especially for code that is executed frequently.

So later the JIT (just-in-time) compiler was introduced: when some code is executed frequently, it becomes 'hot spot code' and gets compiled to machine code. Typically there are two types of hot spot code:

  1. A function that is called very frequently
  2. The body of a loop.

The JVM maintains a count of how many times a function is executed. If this count exceeds a predefined limit, the JIT compiles the code into machine language, which can be executed directly by the processor (unlike the normal case, in which javac compiles the code into bytecode and then java, the interpreter, interprets this bytecode line by line, converting it into machine code as it executes).

Also, the next time the function is called, the same compiled code is executed again, unlike normal interpretation, in which the code is interpreted line by line all over again.
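To watch this happen, here is a small sketch (the class is mine): run it with `java -XX:+PrintCompilation HotLoop` and, after enough calls, a compilation line for the hot method shows up (the exact threshold and log format depend on the JVM build):

```java
// A small hot loop to watch the JIT kick in.
public class HotLoop {

    static int sum(int n) {
        int s = 0;
        for (int i = 0; i < n; i++) {
            s += i;                       // loop back-branches also feed the hotness counters
        }
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 20_000; i++) { // call sum() often enough to make it "hot"
            total += sum(100);
        }
        System.out.println(total);
    }
}
```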

Server vs client mode

When we check the Java version, the output has three lines; we will look at the third line.

---> java -version

java version "1.8.0_66"

Java(TM) SE Runtime Environment (build 1.8.0_66-b17)

Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

The HotSpot JVM has two modes: client and server. We can start with java -server xxx or java -client xxx to pick one. Server mode is heavier on compilation and optimization, hence has a longer startup time (but better peak performance).


Use java -Xint/-Xcomp/-Xmixed to pick the execution mode; mixed is the default.

java -Xint -version 

will show that it is running in interpreted mode, so code is interpreted and executed line by line, which is quite slow, especially in loops. On the other hand, -Xcomp executes code only after compiling it all to machine code.


GC friendly java coding

1. Give a size when initializing collections

When initializing a Map/List etc., if we know the size, pass it into the constructor. This way the JVM does not have to keep allocating new, larger backing arrays as the collection grows.
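A small illustration with plain JDK collections (the numbers and names are arbitrary):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PresizedCollections {

    public static List<Integer> squares(int n) {
        List<Integer> result = new ArrayList<>(n);   // one allocation of the backing array
        for (int i = 0; i < n; i++) {
            result.add(i * i);
        }
        return result;
    }

    public static Map<String, Integer> index(List<String> keys) {
        // HashMap resizes when size > capacity * loadFactor (0.75 by default),
        // so divide by the load factor to avoid rehashing on the way up.
        Map<String, Integer> m = new HashMap<>((int) (keys.size() / 0.75f) + 1);
        for (int i = 0; i < keys.size(); i++) {
            m.put(keys.get(i), i);
        }
        return m;
    }
}
```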

2. Use immutable

Immutability has a lot of benefits. One of them is typically ignored which is its effect on GC.

We cannot modify the fields/references in immutables.

public class ObjectPair {

    private final Object first;
    private final Object second;

    public ObjectPair(Object first, Object second) {
        this.first = first;
        this.second = second;
    }

    public Object getFirst() {
        return first;
    }

    public Object getSecond() {
        return second;
    }
}

This means an immutable object cannot reference objects that were created after it. So when GC runs on the young generation, it can skip immutable objects in the old generation, which cannot hold references to objects in the current young generation; that means fewer memory pages scanned and shorter GC cycles.

3. Prefer streams to big blobs

When dealing with large files, reading the whole file into memory generates a large object on the heap and can easily result in an OOM. Exposing the file as a stream and processing it incrementally is more efficient, and we typically have plenty of APIs for processing streams.
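A sketch with java.nio — `Files.lines` streams the file lazily, so only a read buffer and the current line live on the heap instead of one giant String/byte[]. The class name and the 80-character threshold are mine:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class StreamNotBlob {

    public static long countLongLines(Path file) {
        // try-with-resources closes the underlying file handle
        try (Stream<String> lines = Files.lines(file)) {
            return lines.filter(l -> l.length() > 80).count();   // threshold is arbitrary
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```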

4. string concat in loop

Typically the Java compiler does a pretty good job optimizing String concatenation (with '+') by rewriting it to use StringBuilder.

However, when it comes to a for loop, it is a different story. The temporary strings in the loop result in a lot of new StringBuilder objects being created. So a better way is to use StringBuilder directly when concatenating in a loop. More detail can be found in this SO answer.
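A minimal sketch of the two shapes (the method names are mine); both return the same string, but the first allocates a fresh StringBuilder and an intermediate String on every iteration, while the second keeps a single buffer:

```java
import java.util.List;

public class LoopConcat {

    public static String joinSlow(List<String> items) {
        String s = "";
        for (String item : items) {
            s += item + ",";               // compiles to a new StringBuilder per iteration
        }
        return s;
    }

    public static String joinFast(List<String> items) {
        StringBuilder sb = new StringBuilder();   // one buffer, hoisted out of the loop
        for (String item : items) {
            sb.append(item).append(',');
        }
        return sb.toString();
    }
}
```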

hoist for var, let, const, function, function*, class

I have been playing with ES6 for a while and I noticed that while variables declared with var are hoisted as expected…

console.log(typeof name); // undefined
var name = "John";

…variables declared with let or const seem to have some problems with hoisting:

console.log(typeof name); // ReferenceError
let name = "John";


console.log(typeof name); // ReferenceError
const name = "John";


It looks like these variables cannot be accessed before they are declared. However, it's a bit more complicated than that.

Are variables declared with let or const not hoisted? What is really going on here?

All declarations (var, let, const, function, function*, class) are hoisted in JavaScript. This means that if a name is declared in a scope, in that scope the identifier will always reference that particular variable:

x = "global";
(function() {
    x; // not "global"

    var/let/… x;
    x; // not "global"

    let/const/… x;

This is true both for function and block scopes.

The difference between var/function/function* declarations and let/const/class declarations is the initialisation.
The former are initialised with undefined or the (generator) function right when the binding is created at the top of the scope. The lexically declared variables however stay uninitialised. This means that a ReferenceError exception is thrown when you try to access them. They only get initialised when the let/const/class statement is evaluated; everything above that is called the temporal dead zone.

x = y = "global";
(function() {
    x; // undefined
    y; // Reference error: y is not defined

    var x = "local";
    let y = "local";

Notice that a let y; statement initialises the variable with undefined, like let y = undefined; would have.

Is there any difference between let and const in this matter?

No, they work the same as far as hoisting is regarded. The only difference between them is that a constant must be and can only be assigned in the initialiser part of the declaration (const one = 1;, both const one; and later reassignments like one = 2 are invalid).