debug hover item in chrome devtools

Chrome DevTools is our friend, always.

Today, while developing an Angular 4.x app with the PrimeNG library, I had to check the class set on the tooltip component. The tooltip is hover-event based: if we hover over the target to make the tooltip show up and then shift focus to the DevTools Elements tab, the tooltip disappears.

Chrome DevTools has a feature to force the hover state (:hover) on a specific element for CSS purposes. It is quite handy, but it obviously does not apply in this case, since this tooltip is JavaScript based.

After searching around I finally found a solution: press F8 (or Cmd + \) to pause script execution.

Steps are quite straightforward:

  1. Mouse over the element, and press F8 while the tooltip is displayed.
  2. Now use the inspector to look at the CSS.

LDAP notes on Forgerock OpenDJ

ForgeRock has good explanations of OpenDJ, LDAP, DS, etc.

Below are some of my notes.

LDAP directory data is organized into entries, similar to the entries for words in the dictionary, or for subscriber names in the phone book.

dn: uid=bjensen,ou=People,dc=example,dc=com
uid: bjensen
cn: Babs Jensen
cn: Barbara Jensen
facsimileTelephoneNumber: +1 408 555 1992
gidNumber: 1000
givenName: Barbara
homeDirectory: /home/bjensen
l: San Francisco
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: posixAccount
objectClass: top
ou: People
ou: Product Development
roomNumber: 0209
sn: Jensen
telephoneNumber: +1 408 555 1862
uidNumber: 1076

The entry also has a unique identifier, shown at the top of the entry: dn: uid=bjensen,ou=People,dc=example,dc=com. DN is an acronym for distinguished name. No two entries in the directory have the same distinguished name. Yet, DNs are typically composed of case-insensitive attributes.

When you look up her entry in the directory, you specify one or more attributes and values to match. The directory server then returns entries with attribute values that match what you specified.

A directory server stores two kinds of attributes in a directory entry: user attributes and operational attributes. User attributes hold the information for users of the directory. All of the attributes shown in the entry at the outset of this section are user attributes. Operational attributes hold information used by the directory itself. Examples of operational attributes include entryUUID, modifyTimestamp, and subschemaSubentry.

When an LDAP search operation finds an entry in the directory, the directory server returns all the visible user attributes unless the search request restricts the list of attributes by specifying those attributes explicitly. The directory server does not, however, return any operational attributes unless the search request specifically asks for them.

Generally speaking, applications should change only user attributes, and leave updates of operational attributes to the server, relying on public directory server interfaces to change server behavior. An exception is access control instruction (aci) attributes, which are operational attributes used to control access to directory data.


You may be used to web service client server communication, where each time the web client has something to request of the web server, a connection is set up and then torn down. LDAP has a different model. In LDAP the client application connects to the server and authenticates, then requests any number of operations, perhaps processing results in between requests, and finally disconnects when done.
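This connect-bind-operate-disconnect model can be sketched with the JDK's built-in JNDI API. The URL, bind DN, and password below are made-up placeholders, and the live part only runs if a server URL is passed in:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapConnectionSketch {

    // Build the JNDI environment for a simple bind (placeholder credentials).
    static Hashtable<String, String> env(String url, String bindDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, bindDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 1) return; // no directory server to talk to; skip the live part
        // Connect and bind once...
        InitialDirContext ctx = new InitialDirContext(
                env(args[0], "uid=bjensen,ou=People,dc=example,dc=com", "password"));
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        // Operational attributes must be requested explicitly; user attributes come back by default.
        sc.setReturningAttributes(new String[] {"cn", "entryUUID", "modifyTimestamp"});
        // ...run any number of operations on the same connection...
        NamingEnumeration<SearchResult> results =
                ctx.search("dc=example,dc=com", "(uid=bjensen)", sc);
        while (results.hasMore()) {
            System.out.println(results.next().getAttributes());
        }
        ctx.close(); // ...and disconnect when done
    }
}
```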



notes on open source licenses

GPL: software built on GPL code must itself be GPL, or GPL-compatible. There is one way around the GPL: acquire the company that holds the software's copyright, but that is another story. The main versions of the GPL in circulation are GPLv2 and GPLv3; the difference can be understood as GPLv3 adding a patent-retaliation clause.

The Apache License is more permissive. Roughly speaking, software built on top of Apache-licensed software does not have to be open source.

CDDL can be understood as a middle ground between GPL and Apache: one piece of software may use several differently licensed packages, and within a package, that is, a reasonably complete module, CDDL must be used, while the rest may use other licenses, or even stay closed source.

The EPL arose after IBM handed the Eclipse IDE over to the Eclipse Foundation; it is the CPL with minor modifications. EPL can be understood as follows: if work built on EPL-licensed software is independent of the original software, it may use another license; otherwise it must use the EPL. For example, bug fixes and performance improvements you make to EPL-licensed software do not count as independent parts.


large file from hive to rdbms(oracle)

Recently we had a requirement to dump a sizable file (4+ GB) from S3 to Oracle. The file itself is Hive-compatible, so instead of downloading the file and generating SQL for it, we decided to transfer the content via Hive JDBC and persist it via JPA/Hibernate.


On the Hive side, one important thing is to make sure the fetch size is set on the JDBC result set, so rows are streamed in chunks rather than fetched one network round trip at a time.

hiveJdbcTemplate.setFetchSize(1000); // e.g. 1000 rows per round trip
hiveJdbcTemplate.query(sqlToExecute, rs -> {
    while (rs.next()) {
        // your row handling here
    }
    return null;
});


on the relational database side

  1. make sure indexes are turned off; otherwise each insertion will trigger a b-tree index update.
  2. make sure to leverage the Hibernate batch size setting,
    hibernate.jdbc.batch_size. I set it to 50 since my table has over 200 columns. For example, if you save() 100 records and hibernate.jdbc.batch_size is set to 50, then during flushing, instead of issuing the following SQL 100 times:

    insert into TableA (id , fields) values (1, 'val1');
    insert into TableA (id , fields) values (2, 'val2');
    insert into TableA (id , fields) values (3, 'val3');
    ...
    insert into TableA (id , fields) values (100, 'val100');

    Hibernate will group them into batches of 50 and only issue 2 statements to the DB, like this:

    insert into TableA (id , fields) values (1, 'val1') , (2, 'val2') ,(3, 'val3') ,(4, 'val4') ,......,(50, 'val50')
    insert into TableA (id , fields) values (51, 'val51') , (52, 'val52') ,(53, 'val53') ,(54, 'val54'),...... ,(100, 'val100')  

    Please note that Hibernate transparently disables insert batching at the JDBC level if the primary key of the table being inserted into is GenerationType.IDENTITY.

  3. make sure to flush()/clear() every so many records so that memory is not eaten up by the millions of objects built on the fly.
    flush() makes sure pending statements are executed and objects are saved (synced) to the DB.
    clear() clears the persistence context so all managed entities are detached; entities that have not been flushed will not be persisted.
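The batching advice above boils down to a couple of Hibernate properties. A minimal sketch (the property names are standard Hibernate settings; how you load them depends on your bootstrap):

```java
import java.util.Properties;

public class HibernateBatchConfigSketch {
    // Returns JPA/Hibernate properties enabling insert batching (sketch; adapt to your setup).
    static Properties batchingProperties() {
        Properties p = new Properties();
        p.setProperty("hibernate.jdbc.batch_size", "50"); // flush inserts in groups of 50
        p.setProperty("hibernate.order_inserts", "true"); // group inserts by entity so batching kicks in
        return p;
    }
}
```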

My main code is something like:

    public int doImport(int limit) {
        String sql = "SELECT * FROM erd.ERD_PRDCT_FIXED_INCM_MNCPL_HS_prc_txt";
        if (limit >= 0) {
            sql = sql + " LIMIT " + limit;
        }
        HiveBeanPropertyRowMapper<SrcErdFixedIncmMuniEntity> mapper =
                new HiveBeanPropertyRowMapper<>(SrcErdFixedIncmMuniEntity.class);
        int batchSize = 5000;
        int[] inc = {0};
        Instant start = Instant.now();
        List<SrcErdFixedIncmMuniEntity> listToPersist = new ArrayList<>(batchSize);
        hiveJdbcTemplate.query(sql, (rs) -> {
            while (rs.next()) {
                listToPersist.add(mapper.mapRow(rs, -1));
                if (++inc[0] % batchSize == 0) {
                    persistAndClear(inc, listToPersist);
                }
            }
            // left overs (last n items)
            if (!listToPersist.isEmpty()) {
                persistAndClear(inc, listToPersist);
            }
            return null;
        });
        Instant end = Instant.now();
        System.out.println("Data Intake took: " + Duration.between(start, end));
        return inc[0];
    }

    private void persistAndClear(int[] inc, List<SrcErdFixedIncmMuniEntity> listToPersist) {
        listToPersist.forEach(em::persist);
        em.flush();
        em.clear();
        listToPersist.clear();
        log.info("Saved record milestone: " + inc[0]);
    }


Not bad: ~3.5 million records loaded in about an hour.

serialize enum fields with gson

By default, Gson serializes just the name of the enum constant, which might not be enough, since we might also want to carry all the fields during serialization. To achieve this we need our own Gson adapter and some reflection.

public class EnumAdapterFactory implements TypeAdapterFactory {

    public <T> TypeAdapter<T> create(final Gson gson, final TypeToken<T> type) {
        Class<? super T> rawType = type.getRawType();
        if (rawType.isEnum()) {
            return new EnumTypeAdapter<T>();
        }
        return null;
    }

    public class EnumTypeAdapter<T> extends TypeAdapter<T> {
        public void write(JsonWriter out, T value) throws IOException {
            if (value == null || !value.getClass().isEnum()) {
                out.nullValue();
                return;
            }
            try {
                out.beginObject();
                out.name("value");
                out.value(value.toString());
                Arrays.stream(Introspector.getBeanInfo(value.getClass()).getPropertyDescriptors())
                      .filter(pd -> pd.getReadMethod() != null && !"class".equals(pd.getName()) && !"declaringClass".equals(pd.getName()))
                      .forEach(pd -> {
                          try {
                              out.name(pd.getName());
                              out.value(String.valueOf(pd.getReadMethod().invoke(value)));
                          } catch (IllegalAccessException | InvocationTargetException | IOException e) {
                              e.printStackTrace();
                          }
                      });
                out.endObject();
            } catch (IntrospectionException e) {
                e.printStackTrace();
            }
        }

        public T read(JsonReader in) throws IOException {
            // Properly deserialize the input (if you use deserialization)
            return null;
        }
    }
}

Enum class:

public enum ReportTypes {
    SP(1), CA(2), ADF(3), ORF(4), CTO(5), CDS(6), TSP(7);

    private int reportTypeId;

    ReportTypes(int reportTypeId) {
        this.reportTypeId = reportTypeId;
    }

    public int getReportTypeId() {
        return reportTypeId;
    }
}

Test Code:

    public void testReportTypesGsonSerialization() {
        GsonBuilder builder = new GsonBuilder();
        builder.registerTypeAdapterFactory(new EnumAdapterFactory());
        Gson gson = builder.create();
        System.out.println(gson.toJson(ReportTypes.values()));
    }


    "value": "SP",
    "reportTypeId": "1"
    "value": "CA",
    "reportTypeId": "2"
    "value": "ADF",
    "reportTypeId": "3"
    "value": "ORF",
    "reportTypeId": "4"
    "value": "CTO",
    "reportTypeId": "5"
    "value": "CDS",
    "reportTypeId": "6"
    "value": "TSP",
    "reportTypeId": "7"

DNS fundamentals and A/NS/CNAME records

Ruan Yifeng has a great blog post about DNS. I especially like its explanation of hierarchical queries and of A records, NS records, and CNAME, which is simple and clear, so I have reposted that part below:

The level below the root domain is the "top-level domain" (TLD); the level below that is the "second-level domain" (SLD), e.g. the .example in www.example.com, which is the level users can register; the level below that is the host name, e.g. the www in www.example.com, also called the "third-level domain". This is the name a user assigns to a server within their own domain, and it can be assigned freely.

A "hierarchical query" resolves a domain name level by level:

  1. from the "root name servers", look up the NS records and A records (IP addresses) of the "TLD name servers"
  2. from the "TLD name servers", look up the NS records and A records (IP addresses) of the "second-level-domain name servers"
  3. from the "second-level-domain name servers", look up the IP address of the "host name"

$ dig +trace

7. Querying NS records

$ dig ns com
$ dig ns


$ dig +short ns com
$ dig +short ns

(1) A: Address record, returns the IP address the domain name points to.

(2) NS: Name Server record, returns the address of the server that holds the records for the next-level domain. This record can only be set to a domain name, not an IP address.

(3) MX: Mail eXchange record, returns the address of the server that receives email for the domain.

(4) CNAME: Canonical Name record, returns another domain name, i.e. the queried domain is an alias redirecting to another domain; see below.

(5) PTR: Pointer record, used only for reverse lookups from an IP address to a domain name; see below.



$ dig


;; ANSWER SECTION:
    3370    IN    CNAME
    600     IN    A

$ dig -x

$ dig a
$ dig ns
$ dig mx

JPA SequenceGenerator with allocationSize 1 performance tuning

I wrote a blog post last year about fixing the sequence number going wild by setting the allocationSize to 1.

Overall it solves the inconsistency problem if you are using a sequence with an 'INCREMENT BY' value of 1 in the database.


One problem that came up today is a performance issue with the above setting when persisting a lot of records (entities): every entity needs a 'select SEQ.nextval from DUAL' round trip to get an ID from the specified sequence. When persisting hundreds of thousands of entities, this becomes a problem.

First Try

I did some searching and tried setting my allocationSize to 500, and also increased my sequence's 'INCREMENT BY' value to 500:

alter sequence SEQ_EQUITY_PROC_DAILY_ID increment by 500
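On the JPA side, the matching mapping might look like this (a sketch: the entity class, field, and generator names are made up; the sequence name and allocationSize come from the text above):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class EquityProcDaily { // hypothetical entity name
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "equitySeq")
    @SequenceGenerator(name = "equitySeq",
                       sequenceName = "SEQ_EQUITY_PROC_DAILY_ID",
                       allocationSize = 500) // should match the sequence's INCREMENT BY
    private Long id;

    public Long getId() { return id; }
}
```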

After doing this, the saving process was much faster (about 10x). However, when I queried the database, I found another inconsistency: my sequence's next value is '2549522', but the IDs in the table look like '1274761000'. That is the problem with the SequenceHiLoGenerator, where the ID becomes allocationSize * sequenceValue. This generator is perfectly fine if you have a new table with a sequence initialized at 1, given that you can tolerate this kind of inconsistency between the ID value and the actual sequence value.

How it works: with the default allocation size of 50, Hibernate fetches sequence value 1 and uses IDs 1-50 for the current entities; next round it uses 51-100 when the sequence value is 2. The drawback is that if some other JDBC connection or JPA client uses a different setting, we will probably get ID collisions.
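The hi/lo arithmetic described above can be sketched in plain Java (a simplified model for illustration, not Hibernate's actual generator code):

```java
public class HiLoSketch {
    private final int allocationSize;
    private long sequence = 0; // simulated database sequence (next value will be 1)
    private long hi = 0;       // upper bound of the current ID block
    private long lo = 1;       // next ID to hand out; lo > hi forces a new block

    public HiLoSketch(int allocationSize) {
        this.allocationSize = allocationSize;
    }

    // Stands in for "select SEQ.nextval from DUAL": one round trip per block, not per entity.
    private long nextSequenceValue() {
        return ++sequence;
    }

    public long nextId() {
        if (lo > hi) {
            long seqVal = nextSequenceValue();
            hi = seqVal * allocationSize; // block upper bound = allocationSize * sequence value
            lo = hi - allocationSize + 1; // so IDs race far ahead of the raw sequence value
        }
        return lo++;
    }

    public long lastSequenceValue() {
        return sequence;
    }
}
```

With allocationSize 500, sequence value 2549522 maps to IDs up to 2549522 * 500 = 1274761000, which is exactly the mismatch observed above.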


To solve this problem, we need to set a property in hibernate:

properties.setProperty("hibernate.id.new_generator_mappings", Boolean.toString(true));

This ‘hibernate.id.new_generator_mappings’ property defaults to false, which uses the ‘SequenceHiLoGenerator‘ with the multiply behavior described above. Once we set it to true, Hibernate uses the ‘SequenceStyleGenerator‘ instead, which is more JPA- and Oracle-friendly: it generates identifier values based on a sequence-style database structure, with variations ranging from actually using a sequence to using a table to mimic a sequence.