Understanding Oracle Indexes

B-tree Index

The standard Oracle index is a B-tree index.

A key is a column or expression on which you can build an index. Follow these guidelines for choosing keys to index:

  • Consider indexing keys that are used frequently in WHERE clauses.
  • Consider indexing keys that are used frequently to join tables in SQL statements. For more information on optimizing joins, see the section “Using Hash Clusters”.
  • Index keys that have high selectivity. The selectivity of an index is the percentage of rows in a table having the same value for the indexed key. An index’s selectivity is optimal if few rows have the same value.

    Note:

    Oracle automatically creates indexes, or uses existing indexes, on the keys and expressions of unique and primary keys that you define with integrity constraints.


    Indexing low selectivity columns can be helpful if the data distribution is skewed so that one or two values occur much less often than other values.

  • Do not use standard B-tree indexes on keys or expressions with few distinct values. Such keys or expressions usually have poor selectivity and therefore do not optimize performance unless the frequently selected key values appear less frequently than the other key values. You can use bitmap indexes effectively in such cases, unless a high concurrency OLTP application is involved where the index is modified frequently.
  • Do not index columns that are modified frequently. UPDATE statements that modify indexed columns and INSERT and DELETE statements that modify indexed tables take longer than if there were no index. Such SQL statements must modify data in indexes as well as data in tables. They also generate additional undo and redo.
  • Do not index keys that appear only in WHERE clauses with functions or operators. A WHERE clause that uses a function, other than MIN or MAX, or an operator with an indexed key does not make available the access path that uses the index except with function-based indexes.
  • Consider indexing foreign keys of referential integrity constraints in cases in which a large number of concurrent INSERT, UPDATE, and DELETE statements access the parent and child tables. Such an index allows UPDATEs and DELETEs on the parent table without share locking the child table.
  • When choosing to index a key, consider whether the performance gain for queries is worth the performance loss for INSERTs, UPDATEs, and DELETEs and the use of the space required to store the index. You might want to experiment by comparing the processing times of the SQL statements with and without indexes. You can measure processing time with the SQL trace facility.
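The selectivity rule above can be checked directly before creating an index. A minimal sketch, assuming a hypothetical employees table with a last_name column:

```sql
-- Hypothetical names: table "employees", candidate key "last_name".
-- Selectivity = distinct values / total rows; the closer to 1,
-- the better a B-tree index candidate the column is.
SELECT COUNT(DISTINCT last_name) / COUNT(*) AS selectivity
FROM   employees;

-- If selectivity is high, a standard B-tree index may pay off:
CREATE INDEX emp_last_name_idx ON employees (last_name);
```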

Bitmap Indexes

1. An Example

  Suppose there is a table, named table, made up of three columns: name, gender, and marital status. Gender has only two values, male and female, and marital status has three: married, unmarried, and divorced. The table holds 1,000,000 records. Now consider a query like:     select * from table where Gender='male' and Marital='unmarried';

Name        Gender   Marital
Zhang San   male     married
Li Si       female   married
Wang Wu     male     unmarried
Zhao Liu    female   divorced
Sun Qi      female   unmarried

1) Without an index

  Without an index, the database can only scan every record row by row and then check whether each record satisfies the query conditions.

2) B-tree index

  For gender, the only possible values are 'male' and 'female', and each probably accounts for about 50% of the table, so a B-tree index would still have to fetch half of the data; it is completely unnecessary here. Conversely, if a column has a very wide range of values with almost no duplicates, such as an ID-card number, a B-tree index is a good fit. In fact, when the rows being fetched make up most of the table, databases such as Oracle and MySQL will not use the B-tree index even if one has been added; they will very likely still scan the whole table row by row.

2. Enter the bitmap index

When the column being queried has a very small cardinality, i.e. only a few fixed values such as gender, marital status, or administrative region, the index to build on such low-cardinality columns is a bitmap index.

For the gender column, the bitmap index forms two vectors. The male vector is 10100...: each bit of the vector indicates whether that row is male, 1 if it is and 0 if it is not. Likewise, the female vector is 01011....

RowId    1   2   3   4   5
male     1   0   1   0   0
female   0   1   0   1   1

  For the marital-status column, the bitmap index generates three vectors: married is 11000..., unmarried is 00101..., and divorced is 00010....

RowId       1   2   3   4   5
married     1   1   0   0   0
unmarried   0   0   1   0   1
divorced    0   0   0   1   0

   When we run the query select * from table where Gender='male' and Marital='unmarried';, the database first fetches the male vector 10100..., then fetches the unmarried vector 00101..., and ANDs the two vectors together. This produces a new vector 00100...: the third bit is 1, which means the third row of the table is exactly the result our query is looking for.

RowId        1   2   3   4   5
male         1   0   1   0   0
unmarried    0   0   1   0   1
AND result   0   0   1   0   0
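In Oracle, the two bitmaps above would come from two bitmap indexes, and the optimizer combines them with a bitmap AND step. A minimal sketch of the schema (names are hypothetical; "table" is a reserved word, so the sketch uses person instead):

```sql
-- Hypothetical schema mirroring the five-row example.
CREATE TABLE person (
  name    VARCHAR2(30),
  gender  VARCHAR2(10),
  marital VARCHAR2(10)
);

CREATE BITMAP INDEX person_gender_bix  ON person (gender);
CREATE BITMAP INDEX person_marital_bix ON person (marital);

-- Oracle can answer this by ANDing the 'male' bitmap with the
-- 'unmarried' bitmap, as in the tables above:
SELECT * FROM person WHERE gender = 'male' AND marital = 'unmarried';
```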

3. When bitmap indexes are appropriate

  As discussed above, bitmap indexes suit columns with only a few fixed values, such as gender, marital status, or administrative region; a column like an ID-card number is not suitable for a bitmap index.

  In addition, bitmap indexes suit static data; they are not suitable for indexing frequently updated columns. As an example, suppose there is a column busy that records whether each machine is busy: busy is 1 when the machine is busy and 0 when it is not.

  At this point someone will suggest a bitmap index, since busy has only two values. Fine, let's put a bitmap index on the busy column! Now suppose user A runs an UPDATE to change one machine's busy value, say update table set table.busy=1 where rowid=100;, but has not yet committed, while user B runs an UPDATE to change another machine's busy value, update table set table.busy=1 where rowid=12;. User B cannot complete the update no matter what; B has to wait for user A to commit.

  The reason: when user A updates a machine's busy value to 1, the bitmap vector for all the machines with busy=1 has to change, so the database locks all the busy=1 rows, and they are unlocked only after the commit.
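The blocking scenario can be reproduced with two sessions. A sketch, assuming a hypothetical machines table with a bitmap index on busy:

```sql
-- Session A: update one machine's busy flag, but do not commit yet.
UPDATE machines SET busy = 1 WHERE machine_id = 100;

-- Session B: update a *different* machine's flag.
-- With a bitmap index on busy, this statement blocks until
-- session A commits, because both updates touch the same
-- bitmap index entry, which covers a range of rows.
UPDATE machines SET busy = 1 WHERE machine_id = 12;
```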

Cluster Index

Oracle will create an index to police a unique constraint when no pre-existing index is suitable. Without the index, Oracle would need to serialize operations (such as taking a table lock) whenever someone tried to insert or delete a row (or update the PK).

Contrary to MS SQL Server, this index is not clustered on heap tables (the default table organization), i.e. it does not change the underlying table structure or natural row order. The rows are not reordered when Oracle creates the index. The index is a B-tree index and exists as a separate entity in which each entry points to a row in the main table.

Oracle does not have clustered indexes in the MS SQL sense; however, index-organized tables share some properties with clustered-index tables. The PK is an integral part of such a table and has to be specified at creation time.

(Oracle also has table clusters, but they are a completely different concept).
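An index-organized table is declared with ORGANIZATION INDEX and must have a primary key. A minimal sketch, with hypothetical names:

```sql
-- The table data itself is stored in the B-tree built on the PK;
-- there is no separate heap segment.
CREATE TABLE countries (
  country_id   CHAR(2) PRIMARY KEY,
  country_name VARCHAR2(40)
) ORGANIZATION INDEX;
```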

Reference:

1. Understanding Indexes and Clusters

2. Indexes and Index-Organized Tables  

Replacing indexed but not stored data in Lucene

I was trying to replace indexed-but-not-stored data in Lucene, and found this thread describing the same issue:

> > >> I have a strange problem with Field.Store.NO and Field.Index.ANALYZED
> > >> fields with Lucene 3.0.1.
> > >>
> > >> I'm testing my app with twenty test documents. Each has about ten
> > >> fields. All fields except one, "Content", are set as Field.Store.YES.
> > >> The "Content" field is set as Field.Store.NO and
> > >> Field.Index.ANALYZED. Using Luke, I discovered that this "Content"
> > >> field is not persisted to the disk, except on one document (neither
> > >> the first nor the last in the list). This always happens for exactly
> > >> the same document. When I examine the Document object before writing
> > >> it, it has the "Content" field I expect.
> > >>
> > >> When I change the "Content" field from Field.Store.NO to
> > >> Field.Store.YES, everything starts working. Every document has the
> > >> "Content" field exactly as I expect, and searches produce the hits I
> > >> expect to see. I really don't want to save the full "Content" data in
> > >> the Lucene index, though. I'm baffled why Field.Store.NO results in
> > >> nothing being written to the index even with Field.Index.ANALYZED.
> > I finally had time to go back and look at this problem. I discovered that the
> > analyzed fields work fine for searching until I use
> > IndexWriter.updateDocument().
> >
> > The way my application runs, it has to update documents several times to
> > update one specific field. The update code queries out Document objects using
> > a unique identifier, and updates the field. The problem is in Document objects
> > returned by the query. The querying code runs a search, and eventually calls
> > IndexSearcher.doc(int). According to the API documentation, that method only
> > returns Document objects with stored fields from the underlying index.
> >
> > I tried calling IndexSearcher.doc(int i, FieldSelector fieldSelector) with
> > fieldSelector set to null: the documentation states that this returns Document
> > objects with all fields, but that also only seems to return stored fields.
> >
> > So my question becomes: how can I update a document which contains
> > non-stored analyzed fields without clobbering the analyzed-only fields?
> > Note that I do not need to update the analyzed-only fields. I have found nothing
> > helpful in the documentation.
> You cannot retrieve non-stored fields. They are analyzed and tokenized
> during indexing and this is a one-way transformation. If you update
> documents you have to reindex the contents. If you do not have access to the
> original contents anymore, you may consider adding a stored-only "raw
> document" field, that contains everything to rebuild the indexed fields. In
> our installation, we have a stored field containing the JSON/XML source
> document to do this.
Adding to Uwe's comment, you may be operating under a false
assumption. Lucene has no capability to update fields in a document.
Period. This is one of the most frequently requested changes, but
the nature of an inverted index makes this...er...tricky. Updates
are really a document delete followed by a document add. And as
a bonus, the new document won't even have the same internal
Lucene doc id as the one it replaces.

So if you're reading a document from the index, non-stored fields
are not part of the new update and your results will be...uhmmmm....
not what you expect...
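Uwe's and Erik's points can be sketched against the Lucene 3.x API used in this thread. The names here (searcher, writer, the "raw" and "content" field names, and the extractContent helper) are all hypothetical; the point is that the replacement document must be rebuilt in full from a stored source, because updateDocument is a delete followed by an add:

```java
// Sketch: rebuild a non-stored analyzed field from a stored "raw" field,
// then replace the document (updateDocument = delete + add).
Document doc = searcher.doc(docId);          // returns stored fields only
String raw = doc.get("raw");                 // stored-only source, e.g. JSON

Document replacement = new Document();
replacement.add(new Field("id", doc.get("id"),
        Field.Store.YES, Field.Index.NOT_ANALYZED));
replacement.add(new Field("raw", raw,
        Field.Store.YES, Field.Index.NO));
// Re-derive the indexed-only content from the raw source
// (extractContent is a hypothetical helper):
replacement.add(new Field("content", extractContent(raw),
        Field.Store.NO, Field.Index.ANALYZED));

writer.updateDocument(new Term("id", doc.get("id")), replacement);
writer.commit();
```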

Also, this is a good link on common mistakes made when using Lucene.

This is a good article to use as a Lucene coding reference.

Lucene index, store, analyzed, not_analyzed, Document, Fields

Create the Index

So, step one is to create the index for our set of Word documents. To do this, we need to write some code that takes the information from the Word documents and turns it into a searchable index. The only way to do this is by brute force. We’ll have to iterate over each of the Word documents, examining each and converting it into the pieces that Lucene needs to work with when it creates the index.

What are the pieces that Lucene needs to create the index? There are two.

  1. Documents
  2. Fields

These two abstractions are so key to Lucene that Lucene represents them with two top-level Java classes, Document and Field. A Document, not to be confused with our actual Word documents, is a Java class that represents a searchable item in Lucene. By searchable item, we mean that a Document is the thing that you find when you search. It’s up to you to create these Documents.

Lucky for us, it’s a pretty clear step from an actual Word document to a Lucene Document. I think anyone would agree that it will be the Word documents that our users will want to find when they conduct a search. This makes our processing rather simple: we will simply create a single Lucene Document for each of our actual Word documents.

Create the Document and its Fields

But how do we do that? It’s actually very easy. First, we make the Document object, with the new operator — nothing more. But at this point the Document is meaningless. We now have to decide what Fields to add to the Document. This is the part where we have to think. A Document is made of any number of Fields, and each Field has a name and a value. That’s all there is to it.

Two fields are created almost universally by developers creating Lucene indexes. The most important field will be the “content” field. This is the Field that holds the content of the Word document for which we are creating the Lucene Document. Bear in mind, the name of the Field is entirely arbitrary, but most people call one of the Fields “content” and they stick the actual content of the real-world searchable object, the Word document in our case, into the value of that Field. In essence, a Field is just a name:value pair.

Another very common Field that developers create is the “title” Field. This field’s value will be the title of the Word document. What other information about the Word document might we want to keep in our index? Other common fields are things like “author”, “creation_date”, “keywords”, etc. The identification of the fields that you will need is entirely driven by your business requirements.

So, for each Word document that we want to make searchable, we will have to create a Lucene Document, with Fields such as those we outlined above. Once we have created the Document with those Fields, we then add it to the Lucene index writer and ask it to write our index. That’s it! We now have a searchable index. This is true, but we may have glossed over a couple of Field details. Let’s take a closer look at Fields.

Field Details: Stored or Indexed?

A Field may be kept in the index in more than one way. The most obvious way, and perhaps the only way whose existence you might at first suspect, is the searchable way. In our example, we fully expect that if the user types in a word that exists in the contents of one of the Word documents, then the search will return that Word document in the search results. To do this, Lucene must index that Field. The nomenclature is a bit confusing at first, but, note, it is entirely possible to “store” a Field in the index without making it searchable. In other words, it’s possible to “store” a Field but not “index” it. Why? You’ll see shortly.

The first distinction that Lucene makes between the ways it can keep a Field in the index is whether it is stored or indexed. If we expect a match on a Field’s value to cause the Document to be hit by the search, then we must index the Field. If we only store the Field, its value can’t be reached by the search queries. Why then store a Field? Simple: when we hit the Document, via one of the indexed fields, Lucene will return us the entire Document object. All stored Fields will be available on that Document object; indexed-only Fields will not be on that object. An indexed Field is information used to find a Document; a stored Field is information returned with the Document. Two different things.

This means that while we might not make searches based upon the contents of a given Field, we might still be able to make use of that Field’s value when the Document is returned by the search. The most obvious use case I can think of is a “url” Field for a web-based Document. It makes no sense to search for the value of a URL, but you will definitely want to know the URL for the documents that your search returns. How else would your results page be able to steer the user to the hit page? This is a very important point: a stored Field’s value will be available on the Document returned by a search, but only an indexed Field’s value can actually be used as the target of a search.

Technically, stored Fields are kept within the Lucene index. But we must keep track of the fact that an indexed Field is different than a stored Field. Unfortunate nomenclature. This is why words matter. They can save on a lot of confusion.

Indexed Fields: Analyzed or Not Analyzed?

For the next wrinkle, we must point out that an indexed Field can be indexed in two different fashions. First, we can index the value of the Field in a single chunk. In other words, we might have a “phone number” Field. When we search for phone numbers, we need to match the entire value or nothing. This makes perfect sense. So, for a Field like phone number, we index the entire value ATOMICALLY into the Lucene index.

But let’s consider the “content” Field of the Word document. Do we want the user to have to match that entire Field? Certainly not. We want the contents of the Word document to be broken down into searchable tokens. This process is known as analysis. We can start by throwing out all of the unimportant words like “a”, “the”, “and”, etc. There are many other optimizations we can make, but the bottom line is that the content of a Field like “content” should be analyzed by Lucene. This produces a targeted, lightweight index. This is how search becomes efficient and powerful.

In the APIs, this comes down to the fact that when we create a Field, we must specify

  1. Whether to STORE it or not
  2. Whether to INDEX it or not
    • If indexing, whether to ANALYZE it or not

Now, you should be clear on the details of Fields. Importantly, we can both store and index a given Field. It’s not an either/or choice.
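In Lucene’s Java API (the 3.x style discussed earlier on this page), all three decisions are made per Field at construction time. A minimal sketch; the field values and the wordDocumentText and indexWriter variables are assumptions:

```java
Document doc = new Document();

// Stored and indexed atomically (not analyzed): good for titles and IDs.
doc.add(new Field("title", "Quarterly report",
        Field.Store.YES, Field.Index.NOT_ANALYZED));

// Indexed and analyzed but not stored: searchable, not retrievable.
doc.add(new Field("content", wordDocumentText,
        Field.Store.NO, Field.Index.ANALYZED));

// Stored but not indexed: returned with hits, not searchable.
doc.add(new Field("url", "http://example.com/report.doc",
        Field.Store.YES, Field.Index.NO));

indexWriter.addDocument(doc);
```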

The Purpose of Database Indexes

Indexes
An index can be used to quickly access specific information in a database table. An index is a structure that sorts the values of one or more columns of a table, for example the last-name (lname) column of the employee table. If you want to look up a particular employee by his or her last name, the index helps you get that information much faster than searching all the rows of the table.

An index provides pointers to the data values stored in the specified columns of the table, and then arranges those pointers according to the specified sort order. The database uses the index much the way you use the index of a book: it searches the index to find a particular value and then follows the pointer to the row containing that value.

In a database diagram, you can create, edit, or delete each index type on the Indexes/Keys property page for a selected table. The index is saved when you save the table it is attached to, or save the database diagram containing that table. For more information, see Creating Indexes.

In general, you only need to create an index on a table when the data in the indexed columns will be queried frequently. Indexes take up disk space and slow down adding, deleting, and updating rows. In most cases, however, the data-retrieval speed advantage an index brings greatly outweighs its drawbacks. That said, if the application updates data very frequently, or disk space is limited, it is best to limit the number of indexes.
Before creating an index, you must determine the columns to use and the type of index to create.
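Once the column and index type are chosen, creating the index is a single statement. A sketch using the lname example above (the index name is hypothetical):

```sql
-- Index on the last-name column of the employee table, so that
-- lookups by last name no longer scan every row.
CREATE INDEX idx_employee_lname ON employee (lname);
```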

SQL Server Index Structure and Usage (1)

Author: freedk

I. Understanding index structure, plainly

In fact, you can think of an index as a special kind of directory. Microsoft SQL Server provides two kinds of index: the clustered index and the nonclustered index. The following example illustrates the difference between them:
The body of a Chinese dictionary is itself a clustered index. Suppose we want to look up the character “安”: we naturally open the first few pages of the dictionary, because the pinyin for “安” is “an”, and a dictionary that orders characters by pinyin begins with the letter “a” and ends with “z”, so “安” naturally sorts near the front. If you page through the entire “a” section and still cannot find the character, it is simply not in the dictionary. Likewise, to look up “张” you would turn toward the back of the dictionary, because the pinyin for “张” is “zhang”. In other words, the body of the dictionary is itself a directory; you do not need to consult some other directory to find what you are looking for. We call this kind of structure, in which the body content itself is a directory arranged according to a fixed rule, a “clustered index”.
If you recognize a character, you can find it quickly this way. But you may also come across a character you do not recognize and whose pronunciation you do not know. Then you cannot find it by the method above; instead you must look it up by its radical, and then turn directly to the page number listed next to it. However, the ordering you obtain by combining the radical table and the character list is not the real ordering of the dictionary body. For example, when you look up “张”, the character list after the radical lookup gives page 672; above “张” in the list is “驰”, whose page number is 63, and below it is “弩”, whose page number is 390. Clearly these characters are not really located just above and below “张”: the consecutive “驰, 张, 弩” you see is their order in the nonclustered index, a mapping of the characters in the dictionary body into the nonclustered index. We can find the character we need this way, but it takes two steps: first find the entry in the directory, then turn to the page number it gives. We call this kind of arrangement, in which the directory is purely a directory and the body is purely the body, a “nonclustered index”.
From this example we can understand what a “clustered index” and a “nonclustered index” are, and, taking it one step further, easily see that each table can have only one clustered index, because a directory can be sorted in only one way.

II. When to use a clustered or a nonclustered index

The following table summarizes when to use a clustered or a nonclustered index (important):

Action                                  Clustered index   Nonclustered index
Columns often grouped and sorted        use               use
Returning data in a given range         use               do not use
One or very few distinct values         do not use        do not use
Small number of distinct values         use               do not use
Large number of distinct values         do not use        use
Frequently updated columns              do not use        use
Foreign-key columns                     use               use
Primary-key columns                     use               use
Frequently modified index columns       do not use        use

In fact, we can understand the table above through the earlier definitions and examples of clustered and nonclustered indexes. Take “returning data in a given range”: suppose your table has a date column and the clustered index happens to be built on it. When you query all the data between January 1, 2004 and October 1, 2004, the query is very fast, because the body of your dictionary is sorted by date; the clustered index only needs to find the first and last rows of the data being retrieved. A nonclustered index, by contrast, must first look up in the directory the page number corresponding to every item, and then fetch the actual content by page number.

III. Misconceptions in index usage, in light of practice

The purpose of theory is application. Although we have just listed when to use a clustered or a nonclustered index, in practice these rules are easily ignored, or not weighed comprehensively against the actual situation. Below we discuss, based on real problems encountered in practice, some misconceptions in index usage, to help everyone master the method of building indexes.

1. “The primary key is the clustered index”
In my view this idea is utterly wrong and a waste of the clustered index, even though SQL Server does build the clustered index on the primary key by default.
Usually we create an ID column in every table to distinguish individual records, auto-incrementing with a step of 1; the Gid column in our office-automation example is exactly that. If we make this column the primary key, SQL Server will by default make it the clustered index. This has the benefit that your data is physically sorted by ID inside the database, but in my view it is of little value.
Obviously the advantages of a clustered index are considerable, and the rule that each table can hold only one clustered index makes it all the more precious.
From the definition of the clustered index discussed earlier, we can see that its greatest benefit is the ability to rapidly narrow the query range according to the query's requirements and avoid a full table scan. In real applications, because ID values are generated automatically, we do not know the ID of each record, so it is hard to query by ID in practice. That makes the ID primary key as clustered index a waste of the resource. Secondly, building the clustered index on a field whose every value is distinct also violates the rule that “the clustered index should not be built when there is a large number of distinct values”; admittedly, this only backfires when users frequently modify record contents, especially the indexed column, and it has no effect on query speed.
In an office-automation system, whether the home page displays the documents and meetings awaiting the user's signature, or the user performs a document search, in every case the data query always involves the “date” field plus the user's own “user name”.
Usually, an office-automation home page displays the documents or meetings each user has not yet signed for. Although our WHERE clause can restrict itself to just what the current user has not yet signed, if your system has been running for a long time and holds a large volume of data, then doing a full table scan every time every user opens the home page makes little sense: the vast majority of users finished browsing last month's documents long ago, and scanning them only adds overhead to the database. In fact, we could perfectly well have the database query only the documents the user has not read in the last 3 months when the home page opens, using the “date” field to limit the table scan and speed up the query. If your office-automation system has been running for 2 years, your home page should then, in theory, be 8 times as fast, or even faster.
The words “in theory” appear here because if your clustered index is still blindly built on the ID primary key, your query will not be that fast, even if you build an index (a nonclustered one) on the “date” field. Let's look at how various queries perform over 10 million rows (250,000 rows fall within the 3-month window):

(1) Clustered index on the primary key only, with no date restriction:

Select gid,fariqi,neibuyonghu,title from tgongwen
Elapsed: 128470 ms (i.e. 128 seconds)

(2) Clustered index on the primary key, nonclustered index on fariqi:

select gid,fariqi,neibuyonghu,title from Tgongwen
where fariqi> dateadd(day,-90,getdate())
Elapsed: 53763 ms (54 seconds)

(3) Clustered index moved to the date column (fariqi):

select gid,fariqi,neibuyonghu,title from Tgongwen
where fariqi> dateadd(day,-90,getdate())
Elapsed: 2423 ms (2 seconds)

Although each statement pulls back the same 250,000 rows, the differences between the cases are enormous, especially once the clustered index is on the date column. In fact, if your database really holds 10 million rows, then with the primary key on the ID column, as in cases 1 and 2 above, the behaviour on a web page is a timeout: the page simply cannot render. This is the single most important reason I abandon the ID column as the clustered index. The timings above were obtained by adding, before each select statement:

declare @d datetime
set @d=getdate()

and after each select statement:

select [elapsed time (ms)]=datediff(ms,@d,getdate())
2. “Just build an index and query speed improves significantly”
In fact, we can see that statements 2 and 3 in the example above are completely identical, and the indexed field is the same; the only difference is that the former has a nonclustered index on fariqi while the latter has the clustered index on it, yet the query speeds are worlds apart. So simply building an index on just any field will not necessarily speed up queries.
From the table-creation statement we can see that the fariqi field in this 10-million-row table has 5003 distinct values. Building the clustered index on this field could not be more appropriate. In reality, we send out several documents every day, and those documents share the same issue date, which exactly satisfies the clustered-index requirement that values be “neither mostly identical nor only rarely identical”. From this we can see that building the “appropriate” clustered index is very important for improving query speed.

3. “Put every field that needs faster queries into the clustered index”
As noted above, the fields that data queries cannot do without are the “date” and the user's own “user name”. Since both fields are so important, we can combine them and build a single compound index.
Many people think that adding any field to the clustered index speeds up queries, while others are puzzled: if the compound clustered-index fields are queried separately, will the query slow down? With this question in mind, let's look at the following query speeds (each result set contains 250,000 rows), with the date column fariqi as the leading column of the compound clustered index and the user name neibuyonghu as the trailing column:

(1) select gid,fariqi,neibuyonghu,title from Tgongwen where fariqi>'2004-5-5'
Query time: 2513 ms

(2) select gid,fariqi,neibuyonghu,title from Tgongwen
where fariqi>'2004-5-5' and neibuyonghu='办公室'
Query time: 2516 ms

(3) select gid,fariqi,neibuyonghu,title from Tgongwen where neibuyonghu='办公室'
Query time: 60280 ms

From these experiments we can see that querying on only the leading column of the clustered index is almost exactly as fast as using all the columns of the compound clustered index, and even slightly faster than using all the compound-index columns (given equal result-set sizes); whereas if only a non-leading column of the compound clustered index is used as the query condition, the index does nothing at all. Of course, statements 1 and 2 are equally fast because they return the same number of rows; if all the columns of the compound index are used and the result set is small, “index covering” takes effect and performance is optimal. And remember: whether or not you often use the other columns of the clustered index, its leading column must be the most frequently used one.
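The compound clustered index discussed above would be created roughly like this in T-SQL (the index name is an assumption; fariqi leads, neibuyonghu trails):

```sql
-- Leading column fariqi, trailing column neibuyonghu: only queries
-- that filter on fariqi can use this index effectively.
CREATE CLUSTERED INDEX idx_tgongwen_fariqi_user
    ON Tgongwen (fariqi, neibuyonghu);
```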

IV. Other index-usage lessons not found in books

1. The clustered index is faster than a primary key that is not the clustered index
Example statements (each fetches 250,000 rows):

select gid,fariqi,neibuyonghu,reader,title from Tgongwen where fariqi='2004-9-16'
Elapsed: 3326 ms

select gid,fariqi,neibuyonghu,reader,title from Tgongwen where gid<=250000
Elapsed: 4470 ms

Here, the clustered index is nearly 1/4 faster than the non-clustered primary key.

2. The clustered index is faster than an ordinary primary key for ORDER BY, especially with small amounts of data

select gid,fariqi,neibuyonghu,reader,title from Tgongwen order by fariqi
Elapsed: 12936 ms

select gid,fariqi,neibuyonghu,reader,title from Tgongwen order by gid
Elapsed: 18843 ms

Here, ordering by the clustered index is 3/10 faster than ordering by the ordinary primary key. In fact, with a small data volume, sorting on the clustered-index column is markedly faster than using a nonclustered index; with a large volume, say over 100,000 rows, the speed difference between the two is not obvious.

3. With a time span inside the clustered index, search time shrinks in proportion to the percentage of the whole table that the data occupies, regardless of how many clustered-index columns are used:

select gid,fariqi,neibuyonghu,reader,title from Tgongwen where fariqi>'2004-1-1'
Elapsed: 6343 ms (1,000,000 rows fetched)

select gid,fariqi,neibuyonghu,reader,title from Tgongwen where fariqi>'2004-6-6'
Elapsed: 3170 ms (500,000 rows fetched)

select gid,fariqi,neibuyonghu,reader,title from Tgongwen where fariqi='2004-9-16'
Elapsed: 3326 ms (exactly the same as the earlier result; when the number of rows fetched is the same, the greater-than and equals signs behave the same)

select gid,fariqi,neibuyonghu,reader,title from Tgongwen
where fariqi>'2004-1-1' and fariqi<'2004-6-6'
Elapsed: 3280 ms

4. A date column is not slowed down by entries that include minutes and seconds
In the example below there are 1,000,000 rows in total: the 500,000 rows dated after January 1, 2004 contain only two distinct dates, precise to the day; the 500,000 rows before that have 5000 distinct dates, precise to the second.

select gid,fariqi,neibuyonghu,reader,title from Tgongwen
where fariqi>'2004-1-1' order by fariqi
Elapsed: 6390 ms

select gid,fariqi,neibuyonghu,reader,title from Tgongwen
where fariqi<'2004-1-1' order by fariqi
Elapsed: 6453 ms

V. Other notes

“Water can carry a boat, but it can also capsize it”, and an index is the same. An index helps improve retrieval performance, but too many indexes, or inappropriate ones, make the system inefficient, because every index the user adds to a table means additional work for the database. Too many indexes can even lead to index fragmentation.
So we should build an “appropriate” index system, with particular care in creating clustered indexes, so that your database can deliver high performance.
Of course, in practice, as a dutiful database administrator you should also test several more schemes to find which is the most efficient and effective.

Original article link