My PostgreSQL snippets

Install psql

To get psql, I simply installed Postgres.app for OS X, which bundles psql. We can start psql from its GUI, or add its bin directory to our PATH. The specific path is under: /Applications/

You can also get psql by installing the PostgreSQL one-click installer.

Connect to DB with psql

psql -h DbUrl -p DbPort -U DbUser

Typically a password prompt comes up (if not, add -W to force it).

skip password

If we need to avoid entering the password interactively, we can set it in the PGPASSWORD environment variable so that psql can pick it up.

export PGPASSWORD='password'

For all other environment variables, refer to the PostgreSQL documentation.

run batch sql file

Append -a -f YourScript.sql to the above command.

-a will print all nonempty input lines to standard output as they are read.

After the file is processed, psql terminates.

Or in the psql shell, \i path_to_sql_file
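Putting the pieces together, a batch run would look like this (the connection values are the same placeholders as above):

```shell
psql -h DbUrl -p DbPort -U DbUser -a -f YourScript.sql
```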

quit psql shell

use \q (or the long form \quit)

meta sqls

All DBs: select datname from pg_database;

All Tables: select tablename from pg_tables where tableowner='YOUROWNER';

describe table: select column_name, data_type, character_maximum_length from INFORMATION_SCHEMA.COLUMNS where table_name = 'YourTableName';
Or use  \d schemaName.tableName in psql shell.
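Inside the psql shell, the same information is available through meta-command shortcuts (shown at a hypothetical prompt):

```
postgres=# \l
postgres=# \dt
postgres=# \d schemaName.tableName
```

\l lists all databases, \dt lists the tables visible in the current search_path, and \d describes a table.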

install ora2pg on mac osx

Recently we needed to migrate our on-prem Oracle database to PostgreSQL on AWS RDS. ora2pg looks like a nice little tool to mitigate the problem, though it is not perfect.

I am running OS X 10.10.5 Yosemite. It already comes with Perl v5.18.2 as well as cpan for installing packages.

download/install Oracle client

The official documentation is quite vague about what the Oracle client libraries are. It turns out we need to download 3 Instant Client packages from the Oracle website. For my use case we have Oracle 12c, so my current version is 12.1.0.2. The packages you need are:

  1. Basic
  2. SQL*Plus
  3. SDK

Then we unzip them all into the SAME directory, like: /Users/liha/develop/instantclient_12_1.

Now we need to set 2 env variables for installation:

export LD_LIBRARY_PATH=/Users/liha/develop/instantclient_12_1
export ORACLE_HOME=/Users/liha/develop/instantclient_12_1

install DBI/DBD

Now we need to install the DBI/DBD modules that are required during the ora2pg Makefile phase, especially DBD::Oracle. DBD::Pg is optional; you only need it if you want to dump data/schema live rather than offline to a file.

# perl -MCPAN -e shell
cpan> install DBI
cpan> install DBD::Oracle
cpan> install DBD::Pg

Check module

Now we can check whether the dependencies are correctly installed by running a perl script.

use ExtUtils::Installed;
my $inst = ExtUtils::Installed->new();
print join "\n",$inst->modules();

The result should list the installed Perl modules, and should include DBI, DBD::Oracle and DBD::Pg.

install ora2Pg

Now we can start to install the real thing: download and extract the ora2pg tar.bz2 and run its Makefile.PL, which will generate our makefile which we can make and install:

perl Makefile.PL
make && make install

After installation, the default conf file will be under /etc/ora2pg/ora2pg.conf.

ora2Pg config

Now we can start configuring ora2pg by changing the default conf file, or by creating a new conf file based on the default one. If you have multiple DB schemas to migrate, or multiple environments, it is better to create a different conf for each use case.

The important fields are:

ORACLE_DSN      dbi:Oracle:host=hostname;sid=Oracle SID

ORACLE_USER     schema_owner name

ORACLE_PWD      password

These must be set so that ora2pg can connect and retrieve the required schema data and objects.

I also turned debug on to view progress:

DEBUG           1

Other settings used were:


SCHEMA          schema_owner_name


I also used the same schema name on the PG side:

PG_SCHEMA       schema_owner_name

I did have some problems with characters not being copied across as required, and on some occasions had to override the default character-set settings.



Now you can run the dump:

ora2pg -c myOra2pg.conf -t TABLE -o tables.sql


Shebang/Hashbang #! in shell

In a Unix shell script, we usually see the first line start with #!.

Like #!/bin/bash

The shebang (#!) at the start means: execute the script with the interpreter that follows.

In Node.js scripts, we usually see

#!/usr/bin/env node

This means the env command will look up where node is installed via $PATH, so that we do not have to hard-code the exact node path, which would hurt portability.
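As a quick sketch of how this works (the file name and message here are made up), mark a script executable and the kernel will hand it to whatever interpreter its shebang names:

```shell
# create a tiny script whose shebang resolves bash through env
cat > /tmp/hello.sh <<'EOF'
#!/usr/bin/env bash
echo "hello from $0"
EOF
chmod +x /tmp/hello.sh

# run it by path: the kernel reads the shebang and invokes bash on the file
/tmp/hello.sh
```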


csv file excel BOM for non-ASCII characters

Sometimes you need to export a table to Excel or other spreadsheet programs. A CSV file saved in UTF-8 seems to be a sufficiently universal format.

But there is a problem with non-ASCII characters in Excel: Excel's default encoding depends on the system. The workaround is to put three magic bytes at the beginning of the file. They are called the BOM (Byte Order Mark) and tell the editor that the file is encoded as UTF-8.

For Node.js, that means writing the BOM before writing anything else.

    var fs = require('fs');

    var stream = fs.createWriteStream("YOUR.csv");
    stream.once('open', function(fd) {
        // excel requires utf-8 + BOM for csv file, otherwise chinese chars could not be displayed
        stream.write(new Buffer('\xEF\xBB\xBF', 'binary'));
        // ... write the actual CSV rows here ...
        stream.end();
        console.log('file Saved');
    });
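
The same trick can be done from the shell; a minimal sketch (file name and rows are made up):

```shell
# write the three BOM bytes (EF BB BF, given here as octal escapes) first
printf '\357\273\277' > out.csv
# then append the UTF-8 CSV content
printf 'name,city\nJos\303\251,Paris\n' >> out.csv

# sanity check: the first three bytes of the file are the BOM
head -c 3 out.csv | od -An -tx1
```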

understand CORS

My colleague told me that there is a Chrome extension that enables you to do cross-domain requests for all sites. I was a bit surprised, since my previous understanding was that CORS is controlled from the server side with some control headers. So I decided to dig into it more. After reading the wiki and some Chinese articles, I think I know how it works.


One thing I found out is that CORS is actually controlled by both the server and the client side; I mean, CORS requires support from both the browser and the server. All modern browsers and IE 10+ are good to go. The whole process is handled by the browser, so for the USER it is transparent. For the DEVELOPER, the code is the same; the browser will add some headers and sometimes send an extra request.

Two types of request (simple / non-simple)

A simple cross-site request is one that meets all the following conditions:

  • The only allowed methods are:
    • GET
    • HEAD
    • POST
  • Apart from the headers set automatically by the user agent (e.g. Connection, User-Agent, etc.), the only headers which are allowed to be manually set are:
    • Accept
    • Accept-Language
    • Content-Language
    • Content-Type
  • The only allowed values for the Content-Type header are:
    • application/x-www-form-urlencoded
    • multipart/form-data
    • text/plain

For simple requests, the browser will add an Origin header to the request and see how the server responds. One caveat: even if the server does not allow the origin, the response status code will probably still be 200; the failure is surfaced through the error callback (e.g. xhr.onerror).

For non-simple requests, the browser will send a preflight request using the OPTIONS method to the resource on the other domain, to see whether the actual request is allowed. According to the definition above, the typical XHR JSON content type (application/json) makes a request non-simple, which requires a preflight.
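A typical preflight exchange looks like this (paths and host names are made up for illustration):

```
OPTIONS /api/data HTTP/1.1
Origin: https://app.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: POST
Access-Control-Allow-Headers: Content-Type
```

If the response headers permit it, the browser then sends the real POST; otherwise it blocks the request and the page never sees the data.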

chrome CORS extension

So I think the way the Chrome extension works is that it intercepts all the cross-site XHR requests.

For a simple request, after getting the response it adds Access-Control-Allow-Origin: * to the response headers, so that the browser does not complain.

For a non-simple request, it directly returns Access-Control-Allow-Origin: * for the preflight request, so that the browser will allow the subsequent ‘real’ request to be sent out. One thing I noticed is the value it sets the Origin to, which is kind of funny.


Spring boot jar process management in AWS EC2 instance with supervisor


A Spring Boot application is built into a jar which contains its own Tomcat. So instead of running it the traditional way, with a Tomcat instance serving one or multiple wars, we run the jar with java -jar. The problem here is that if we run it directly, the process will end when our session quits/expires. So we can either run it as a service (init.d), or use some 3rd-party tool to manage it. In the NodeJs world, we have the super powerful pm2; in the native world, ‘supervisor‘ seems to be the most recommended tool. Here we are going to introduce how to manage our Spring Boot jar with supervisor.

Install supervisor

Some EC2 AMIs come with easy_install, which is a feature of setuptools. This is the preferred method of installation.

easy_install supervisor


Depending on the permissions of your system's Python, you might need to be the root user to install Supervisor successfully using easy_install.

Or we can use pip to install it:

pip install supervisor

More reference in their official doc.

Supervisor Config

The Supervisor configuration file is conventionally named supervisord.conf. It is used by both supervisord and supervisorctl. If either application is started without the -c option (the option used to tell the application the configuration filename explicitly), the application will look for a file named supervisord.conf within the following locations, in the specified order. It will use the first file it finds.

  1.  $CWD/supervisord.conf
  2.  $CWD/etc/supervisord.conf
  3.  /etc/supervisord.conf
  4.  /etc/supervisor/supervisord.conf (since Supervisor 3.3.0)
  5.  ../etc/supervisord.conf (Relative to the executable)
  6.  ../supervisord.conf (Relative to the executable)

Below is the supervisord.conf I use for the ADDS jar, which I placed under /etc:

[supervisord]
logfile=/var/log/supervisord/supervisord.log    ; supervisord log file
logfile_maxbytes=50MB                           ; maximum size of logfile before rotation
logfile_backups=10                              ; number of backed up logfiles
loglevel=error                                  ; info, debug, warn, trace
pidfile=/var/run/                ; pidfile location
nodaemon=false                                  ; run supervisord as a daemon
minfds=1024                                     ; number of startup file descriptors
minprocs=200                                    ; number of process descriptors
user=root                                       ; default user
childlogdir=/var/log/supervisord/               ; where child log files will live

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[include]
files = supervisor/conf.d/*.conf

App Config

In the ‘include’ section above, we want supervisor to pick up all the conf files under conf.d. So now we can create an adds.conf there (each managed process needs its own [program:x] section header):

[program:adds]
command=java -Dserver.port=8080 -Dlogging.path=/var/log/spring/adds/ -jar /var/adds-rest/adds-rest.jar

Run supervisord and its control

Now we can run supervisord (or sudo supervisord) to start the daemon.

To finely control the processes, we can use the supervisorctl tool. Entering supervisorctl alone will take us to its interactive shell, where there are many actions we can take; use help to get the list.

So here we can do start/stop/restart/status etc., a lot of stuff.

Or we can run a command directly, using something like: supervisorctl restart adds.

Web control console

We can access the supervisor web control console via http://IP:9001 and manage the processes there directly.

serialize enum fields with gson

By default, Gson just serializes the ‘name’ of the enum, which might not be enough, since we might also want to carry all the fields during serialization. To achieve this we need to write our own Gson adapter and make use of reflection.


import com.google.gson.Gson;
import com.google.gson.TypeAdapter;
import com.google.gson.TypeAdapterFactory;
import com.google.gson.reflect.TypeToken;
import com.google.gson.stream.JsonReader;
import com.google.gson.stream.JsonWriter;

import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.util.Arrays;

public class EnumAdapterFactory implements TypeAdapterFactory {

    @Override
    public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> type) {
        Class<? super T> rawType = type.getRawType();
        if (rawType.isEnum()) {
            return new EnumTypeAdapter<T>();
        }
        return null;
    }

    public class EnumTypeAdapter<T> extends TypeAdapter<T> {
        @Override
        public void write(JsonWriter out, T value) throws IOException {
            if (value == null || !value.getClass().isEnum()) {
                out.nullValue();
                return;
            }
            try {
                out.beginObject();
                out.name("value");
                out.value(value.toString());
                // reflect over the enum's bean properties and write each one
                Arrays.stream(Introspector.getBeanInfo(value.getClass()).getPropertyDescriptors())
                      .filter(pd -> pd.getReadMethod() != null && !"class".equals(pd.getName()) && !"declaringClass".equals(pd.getName()))
                      .forEach(pd -> {
                          try {
                              out.name(pd.getName());
                              out.value(String.valueOf(pd.getReadMethod().invoke(value)));
                          } catch (IllegalAccessException | InvocationTargetException | IOException e) {
                              e.printStackTrace();
                          }
                      });
                out.endObject();
            } catch (IntrospectionException e) {
                e.printStackTrace();
            }
        }

        @Override
        public T read(JsonReader in) throws IOException {
            // Properly deserialize the input (if you use deserialization)
            return null;
        }
    }
}
Enum class:

public enum ReportTypes {
    SP(1), CA(2), ADF(3), ORF(4), CTO(5), CDS(6), TSP(7);

    private int reportTypeId;

    ReportTypes(int reportTypeId) {
        this.reportTypeId = reportTypeId;
    }

    public int getReportTypeId() {
        return reportTypeId;
    }
}
Test Code:

    public void testReportTypesGsonSerialization() {
        GsonBuilder builder = new GsonBuilder();
        builder.registerTypeAdapterFactory(new EnumAdapterFactory());
        Gson gson = builder.create();
        System.out.println(gson.toJson(ReportTypes.values()));
    }


    "value": "SP",
    "reportTypeId": "1"
    "value": "CA",
    "reportTypeId": "2"
    "value": "ADF",
    "reportTypeId": "3"
    "value": "ORF",
    "reportTypeId": "4"
    "value": "CTO",
    "reportTypeId": "5"
    "value": "CDS",
    "reportTypeId": "6"
    "value": "TSP",
    "reportTypeId": "7"