[Repost] A problem caused by enabling tcp_tw_recycle

July 13, 2014

Today 普空 brought up an issue: with tcp_tw_recycle enabled, clients connecting from behind NAT may have their connections reset (RST) outright. I googled it, and the problem was also raised on the kernel mailing list: https://lkml.org/lkml/2008/11/15/67

The big problem is that both are incompatible with NAT. So if you
ever talk to any NATed clients don’t use it.


The source holds no secrets, so let's read the code to see why this happens; I'm looking at kernel 3.4.4 here. The core code is in tcp_v4_conn_request, which is called when a listening socket receives a SYN packet. Here is the part that involves tw_recycle:

#define TCP_PAWS_MSL    60      /* Per-host timestamps are invalidated
                     * after this time. It should be equal
                     * (or greater than) TCP_TIMEWAIT_LEN
                     * to provide reliability equal to one
                     * provided by timewait state.
                     */
#define TCP_PAWS_WINDOW 1       /* Replay window for per-host
                     * timestamps. It must be less than
                     * minimal timewait lifetime.
                     */
        /* VJ's idea. We save last timestamp seen
         * from the destination in peer table, when entering
         * state TIME-WAIT, and check against it before
         * accepting new connection request.
         *
         * If "isn" is not zero, this request hit alive
         * timewait bucket, so that all the necessary checks
         * are made in the function processing timewait state.
         */
        if (tmp_opt.saw_tstamp &&
            tcp_death_row.sysctl_tw_recycle &&
            (dst = inet_csk_route_req(sk, &fl4, req)) != NULL &&
            fl4.daddr == saddr &&
            (peer = rt_get_peer((struct rtable *)dst, fl4.daddr)) != NULL) {
            inet_peer_refcheck(peer);
            if ((u32)get_seconds() - peer->tcp_ts_stamp < TCP_PAWS_MSL &&
                (s32)(peer->tcp_ts - req->ts_recent) >
                            TCP_PAWS_WINDOW) {
                NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED);
                goto drop_and_release;
            }
        }

As you can see, when all of the conditions below hold, the SYN packet is dropped, the related memory is released, and an RST is sent:
1. The TCP options carry a timestamp field.
2. tcp_tw_recycle is enabled.
3. A route for exactly this flow can be looked up in the routing table (if xfrm is enabled, which it should be by default, the ports are compared as well).
4. The packet's source address is the same as the source address of the new request.
5. A saved peer entry can be found from the routing table and the source address (see my earlier posts; the peer stores per-host connection statistics).
6. The current time (when the SYN arrives) is within 60 seconds of the peer's last recorded timestamp update.
7. The peer's last saved timestamp is greater than the timestamp carried by the incoming request.
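To make conditions 6 and 7 concrete, here is a small model of the per-peer check in JavaScript (the field names like tcpTs and tcpTsStamp are illustrative, not the kernel's; the real logic lives in tcp_v4_conn_request, and only the two constants mirror the kernel's TCP_PAWS_MSL and TCP_PAWS_WINDOW):

```javascript
// Illustrative model of the tw_recycle PAWS check; constants mirror the
// kernel's, every field name is a hypothetical stand-in.
var TCP_PAWS_MSL = 60;    // seconds a per-host timestamp stays valid
var TCP_PAWS_WINDOW = 1;  // allowed replay window for timestamps

// peer: state recorded for this source IP when a connection entered
//       TIME-WAIT: { tcpTs, tcpTsStamp }
// syn:  incoming request: { tsRecent, nowSeconds }
function synIsRejected(peer, syn) {
    var fresh = (syn.nowSeconds - peer.tcpTsStamp) < TCP_PAWS_MSL;
    var tsWentBackwards = (peer.tcpTs - syn.tsRecent) > TCP_PAWS_WINDOW;
    return fresh && tsWentBackwards; // both true => SYN dropped, RST sent
}
```

Behind NAT, several clients share one source IP but have unrelated timestamp clocks, so the second client's tsRecent can easily be smaller than the first client's last recorded tcpTs; within 60 seconds that SYN is rejected.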

Of the conditions above, 1 and 2 are under the server's control, while the rest are trivially satisfied in practice, so let's walk through an example.

Suppose the clients are behind NAT, the server has tcp_tw_recycle enabled, and timestamps are not disabled. A first connection comes in and is closed, leaving the socket in TIME_WAIT. Very soon afterwards (within 60 seconds) another client (same source address; with xfrm enabled, also the same port) sends a SYN. The Linux kernel then considers the packet anomalous, drops it, and sends an RST.

Since most clients nowadays sit behind NAT, the advice is to keep tw_recycle off, or else disable timestamps on the server side (/proc/sys/net/ipv4/tcp_timestamps).


[Repost] The Birth of jQuery: Principles and Mechanics

April 20, 2014

by zhangxinxu from http://www.zhangxinxu.com
Original URL: http://www.zhangxinxu.com/wordpress/?p=3520

I. What seems accidental was bound to happen

Back in university I once leafed through a battered library book on biology theory; its topic was whether the emergence of life was chance or necessity. It was full of formulas even Aristotle couldn't have parsed, plus experiments simulating the primordial Earth that produced organic molecules. In short, the book argued: given the Earth's environment at the time, the emergence of life was inevitable! Countless chance conditions and countless chemical encounters were bound to produce organic molecules, and after countless more accidents, organic molecules inevitably formed organisms…

The theory is a bit like this: suppose you cross the street extremely carefully, and you live forever, vulnerable only to being hit by a car. Give it a million years and you will, in the end, still be hit by a car.

Viewed through that lens, the appearance of jQuery was likewise inevitable.

II. Demand, drive, evolution: how things come to be, and the birth of jQuery

(image: a knife-shaved-noodle robot)

A mature thing obviously doesn't appear in one breath; as the saying goes, "one shovelful does not dig a well." However brilliant jQuery's author is, he must have proceeded step by step, and most likely each step was driven by a need, much like the noodle-shaving robot pictured above, which is reportedly in its eighth generation already!

1. getElementById is too long
There's a button and an image on the page, and I want clicking the button to hide the image. The HTML:

<button id="button">Click me</button>
<img id="image" src="xxx.jpg">

So my script might look like this:

var button = document.getElementById("button"),
    image = document.getElementById("image");

button.onclick = function() {
    image.style.display = "none";
};

What's the problem? People are born lazy: document.getElementById is long and keeps repeating, like arriving at the office, realizing you forgot your badge, and going all the way home for it. The simpler the better. Well, I like money, and I like the $ sign, so let's refactor and lighten my workload:

var $ = function(id) {
    return document.getElementById(id);
};

$("button").onclick = function() {
    $("image").style.display = "none";
};

This $() is the simplest possible wrapper; it merely returns the native DOM object.

2. I need a short incantation, like "Open Sesame"
Later the page got more complex: clicking one button should hide two images.

$("button").onclick = function() {
    $("image1").style.display = "none";
    $("image2").style.display = "none";
};

There's that long repeated thing again: xxx.style.display = "none". Why must opening a door always mean digging the key out of your bag, lining it up with the keyhole, pushing it in, and wiggling it left and right? Once is fine, but every single day? Imagine a voice-activated "Open Sesame" instead: say the word and the door opens itself. So much easier.

Here, every hide means another xxx.style.display = "none", which is even more tedious than the daily key ritual. I want a shortcut: say "hide" and the element hides itself.

That is, I want it to become this:

$("button").onclick = function() {
    $("image1").hide();
    $("image2").hide();
};

3. How to recognize the "Open Sesame" incantation
$("image1") is essentially a DOM element, and $("image1").hide() means extending that DOM element with a hide method that hides it when called.

Extending… that immediately brings JS prototypes to mind. //zxx: Boss, what's the cheapest dish in town these days? Boss: prototypes, they're everywhere!

HTMLElement.prototype.hide = function() {
    this.style.display = "none";
};

The demo link for the code above surely won't be seen by anyone…

Drilling a hole into the element's body to insert a method is crude, but the browser does honor it, so a little pain is acceptable. However, here in our vast empire there are plenty of IE6–IE8 die-hards; those relics have never heard of HTMLElement and cannot comprehend this self-mutilating extension, yet they still hold half the market. Facing reality: extending elements directly is a dead end.

So, for compatibility's sake, we need another way to extend.

4. All roads lead to Rome
IE6–IE8 may not recognize HTMLElement prototype extension, but they do recognize extending Function prototypes. Will that help? No idea yet; let's just try something simple:

var F = function() {};
F.prototype.hide = function() {
    this.style.display = "none";
};

new F().hide();    // does this hide anything?

There's at least half of this article still to go, but the single hardest point of the whole piece is right here: recognizing and understanding new F().

In the code above, you can regard new F() as the this in this.style. You might jump up and blurt out: "Then just make new F() return a DOM element and we're done! — this.style.hide = new F().style.hide = DOM.style.hide"!

So naive!

Whenever the constructor after a new expression returns a reference value (an array, object, function, etc.), that value overrides the anonymous object new created; if it returns a primitive (no return actually means returning the primitive undefined), the anonymous object created by new is returned.

The quote above comes from here. What does it mean? In plain terms: if new F() has no return value (Undefined), or returns one of the five primitive types (Undefined, Null, Boolean, Number, String), then new F() can be treated as the this inside the prototype methods; if it returns an array, an object, or some such, the return value is that object itself, and then new F() ≠ this.

An example:

var F = function(id) {
    return document.getElementById(id);
};

new F("image1") == document.getElementById("image1");    // true: it looks like a DOM object is returned, and it really is

var F = function(id) {
    return id;
};

new F("image1") == "image1";    // false: it looks like a string is returned, but it actually isn't

Back to the naive idea above: to use the this inside prototype.hide, we must not let F return any odd reference value.

So new F() returning the DOM directly is out, but we can reach the DOM indirectly through this. For instance:

var F = function(id) {
    this.element = document.getElementById(id);
};
F.prototype.hide = function() {
    this.element.style.display = "none";
};

new F("image").hide();    // hidden, at last


5. I don't like being this exposed
The this.element inside F() exposes an element property directly on new F("image")!

new F("image").hasOwnProperty("element");    // true

Way too exposed. Not my style~~

How do we cover it up? Like this:

var F = function(id) {
    return this.getElementById(id);
};
F.prototype.getElementById = function(id) {
    this.element = document.getElementById(id);
    return this;
};
F.prototype.hide = function() {
    this.element.style.display = "none";
};

new F("image").hide();    // hidden, at last

The element lookup now lives on the prototype and is invoked through F(). You may wonder: didn't you just say "new F() returning the DOM directly is out", so why is there a return again? Look closely: F.prototype.getElementById returns this, so the return value of new F() is this. Picture new F("image") throwing a punch that bounces straight back into its own face.

And so there is no directly exposed API anymore.


6. I don't like new, I like $
I really don't like writing new F("image"). I like $. I just do. Let's swap it out.

Fine, let's bury the new business inside a $ function~

var $ = function(id) {
    return new F(id);
};

Now the code that hides the image is simply:

$("image").hide();


It even works in IE6! Starting to look like jQuery already, isn't it?

7. Just one position? How boring
So far every step has used an id as the example. In real work we also need class names, tag names and so on, so before we continue, it's time to support a few more "positions".

In IE8+ we have the selector APIs document.querySelector and document.querySelectorAll; the former returns a single Node, the latter a NodeList collection. For uniformity, we use the latter. Hence:

var F = function(selector, context) {
    return this.getNodeList(selector, context);
};
F.prototype.getNodeList = function(selector, context) {
    context = context || document;
    this.element = context.querySelectorAll(selector);
    return this;
};

var $ = function(selector, context) {
    return new F(selector, context);
};

Now we can use all kinds of selectors: with $("body #image"), this.element holds the selected elements.

8. What about IE6/IE7?
IE6/IE7 don't know querySelectorAll. Now what?

jQuery's answer is a rather powerful selector engine, Sizzle. Knowing that is enough; since the point here is to demonstrate the principle, the code below sticks with the native selector API, so view the demos in an IE8+ browser.

8. Iteration is a chore
this.element is now a NodeList, so a bare this.element.style.xxx is guaranteed to throw. Time for a loop:

F.prototype.hide = function() {
    var i=0, length = this.element.length;
    for (; i<length; i+=1) {
        this.element[i].style.display = "none";
    }    
};

And then:

$("img").hide();  // every image on the page is hidden!


One hide method is manageable, but add a show method and we'd have to write the very same loop again. Unbearable~

So we urgently need a method that iterates over the wrapped elements. Let's call it each~

That gives:

F.prototype.each = function(fn) {
    var i=0, length = this.element.length;
    for (; i<length; i+=1) {
        fn.call(this.element[i], i, this.element[i]);
    }
    return this;
};
F.prototype.hide = function() {
    this.each(function() {
       this.style.display = "none";
    });
};

$("img").hide();  // every image on the page is hidden!


9. I don't like this.element; can it go away?
The wrapper object now looks roughly like this:

F.prototype = {
    element: [NodeList],
    each: function() {},
    hide: function() {}
}

That element member is an eyesore. Can't we drop it? Sure thing: a NodeList is array-like, so we just distribute its members onto the object under numeric indexes! That removes the redundant element property, and it turns the prototype object itself into an array-like structure with some handy extra abilities.

Accordingly, F.prototype.getNodeList deserves a new name, say init, for initialize:

F.prototype.init = function(selector, context) {
    var nodeList = (context || document).querySelectorAll(selector);
    this.length = nodeList.length;
    for (var i=0; i<this.length; i+=1) {
        this[i] = nodeList[i];
    }
    return this;
};

Now the each method no longer needs the pesky this.element[i]; it's simply this[i]:

F.prototype.each = function(fn) {
    var i=0, length = this.length;
    for (; i<length; i+=1) {
        fn.call(this[i], i, this[i]);
    }
    return this;
};

We can also index straight into the wrapper to reach the DOM elements: $("img")[0] is the first image!


10. I'm a perfectionist and I really dislike the name F. Can we replace it?
That F shows up from start to finish and I can't stand it. I want it to be $. It must be $…

Well… $ is already taken; using it again would clash. Besides, you're not the fox empress; playing the brat won't get you anywhere…


All right, let's find another way. One step at a time: first, replace every F with $.fn.

That gives:
(image: the code after every F has been replaced with $.fn)


Clearly it runs fine, and it looks very much like jQuery now. Yet there's still one sizable gap: with the structure shown in the image above, every new wrapper method must go through the prototype. To extend with an attr method, for example, you'd have to write:

$.fn.prototype.attr = function() {
    // ...
};

There's that prototype again. Advanced machinery ought to stay hidden, or the thing feels intimidating to pick up. So what do we do? Big sister is not to be trifled with.

A moment's thought gives it away: just make $.fn stand for F.prototype. Then extending a new method is simply:

$.fn.attr = function() {
    // ...
};

In terms of usage we're now very close to jQuery. But there are still a few Fs around; surely we can't leave things like this:

var $ = function(selector, context) {
    return new F(selector, context);
};
var F = function(selector, context) {
    return this.init(selector, context);
};

$.fn = F.prototype;

$.fn.init = function(selector, context) {
    // ...
    return this;
};
$.fn.each = function(fn) {
   // ...
};
$.fn.hide = function() {
   // ...
};

In math class we all learned to combine like terms. Look carefully at the code above: $() returns new F(), and new F() in turn returns an object reference. All this bouncing back and forth with identical arguments… can't we return in a single step, then pull a few strings so the this returned by $.fn.init still points to the right place?

After a bit of rearranging:

var $ = function(selector, context) {
    return new $.fn.init(selector, context);
};
var F = function() { };

$.fn = F.prototype;
$.fn.init = function(selector, context) {
    // ...
    return this;
};

// ...

The code above obviously has a problem: what gets new-ed is $.fn.init, and $.fn.init returns this. So $() now hands back an instance whose prototype is $.fn.init.prototype, and damn it, $.fn.init's prototype is a bare lone commander right now. Conveniently, though, the prototype methods hanging off $.fn — hide(), each(), everything except the now-redundant init — are exactly what we need. So we add this single line:

$.fn.init.prototype = $.fn;

With that, the prototype behind $()'s return value switches from the empty $.fn.init.prototype to $.fn, which is precisely where our extension methods live.

And with that, mission accomplished. Wait…

There's still a leftover F up there!

Oh, that. F is just some arbitrary function, and $ is itself a function, so we can simply substitute $:

var $ = function(selector, context) {
    return new $.fn.init(selector, context);
};
// var F = function() { };   <- this line is simply deleted
$.fn = $.prototype;
$.fn.init = function(selector, context) {
    // ...
    return this;
};

// ...


Actually, unless you insist on one $ to rule them all, stopping at step 9 above is perfectly adequate. jQuery's step-10 maneuver exists to show just how masterfully it wields $: immaculate code, genuinely admirable!

And with that we've walked through jQuery's core step by step. As you can see, every advance was driven by demand and by the realities of development, gradually refined and gradually extended!
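Putting steps 7 through 10 together, here is the whole skeleton as one runnable sketch. One liberty is taken for illustration: this init also accepts an array-like of elements (essentially step 12), so the structure can be exercised without a browser DOM.

```javascript
var $ = function (selector, context) {
    return new $.fn.init(selector, context);
};

$.fn = $.prototype;

$.fn.init = function (selector, context) {
    // Accept a selector string, or (for illustration) any array-like of elements
    var nodeList = (typeof selector === "string")
        ? (context || document).querySelectorAll(selector)
        : selector;
    this.length = nodeList.length;
    for (var i = 0; i < this.length; i += 1) {
        this[i] = nodeList[i];
    }
    return this;
};

// The crucial line: instances of init share $.fn's methods
$.fn.init.prototype = $.fn;

$.fn.each = function (fn) {
    for (var i = 0; i < this.length; i += 1) {
        fn.call(this[i], i, this[i]);
    }
    return this;
};

$.fn.hide = function () {
    return this.each(function () {
        this.style.display = "none";
    });
};
```

In a browser, $("img").hide() hides every image; $([el1, el2]) wraps a plain list of elements the same way.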

11. Typing $.fn.xxxx for every extension wears me out

$.fn.css = function() {}
$.fn.attr = function() {}
$.fn.data = function() {}
// ...

Every extension starts with $.fn. Annoying. Can't they be merged?

So jQuery added an extend method:

$.fn.extend({
    css: function() {},
    attr: function() {},
    data: function() {},
    // ...
});
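jQuery's real extend is richer (deep copies, multiple sources, extending $ itself), but a minimal sketch of the idea, a shallow property copy onto $.fn, might look like this (the tiny $ stub is just scaffolding for the example):

```javascript
// Minimal scaffold standing in for the $ / $.fn skeleton built above
var $ = function () {};
$.fn = $.prototype;

// Shallow-copy every own property of obj onto $.fn
$.fn.extend = function (obj) {
    for (var key in obj) {
        if (obj.hasOwnProperty(key)) {
            $.fn[key] = obj[key];
        }
    }
    return this;
};
```

Usage is then exactly the batched form shown above: one call, many methods.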

12. $() takes not only selector strings but DOM elements too
Inside init, inspect the first argument: if it's a node, just do this[0] = node. Done!
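A sketch of that branch, assuming the init from step 9 (the nodeType test is a simplification of jQuery's real checks):

```javascript
// Hypothetical init that accepts either a selector string or a single node
function init(selector, context) {
    if (typeof selector === "string") {
        var nodeList = (context || document).querySelectorAll(selector);
        this.length = nodeList.length;
        for (var i = 0; i < this.length; i += 1) {
            this[i] = nodeList[i];
        }
    } else if (selector && typeof selector.nodeType === "number") {
        // A DOM node: wrap it directly
        this[0] = selector;
        this.length = 1;
    }
    return this;
}
```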

Steps 13 onward are all completeness, polish, and compatibility handling: nothing of conceptual value, so we stop here!

III. A closing word that waited in a very long queue

There are other articles online about jQuery's principles and mechanisms, but often only the author understands them; the reader, who didn't understand to begin with, gets led around in circles and ends up more confused than before.

jQuery is excellent, the way humans are excellent among primates. But its birth clearly started from something simple. To understand humans, trace their origins. If you were God and had to build a human, would you conjure one in a single breath? Even Nüwa had to knead clay figures! Start from a single-celled organism instead, let natural selection run, and humans appear of their own accord. That is, after all, how God did it.

jQuery's birth was much the same. To understand jQuery, try retracing its growth as this article does, going deeper one small step at a time, and you'll see why jQuery is designed the way it is and how that design came about.

Although the material moves from shallow to deep, the prototype mechanics and the behavior of new constructors involved do present a real threshold for newcomers. If my descriptions bring you even a flicker of sudden clarity, nothing could please me more.

Thank you for reading this far. Corrections to anything inaccurate are very welcome. Thanks again!

I'm giving a small talk at Baixing.com at the end of the month and the slides aren't even a morsel prepared, so no posts for the coming week.

Original article; when reposting please credit Zhang Xinxu [http://www.zhangxinxu.com]
Original URL: http://www.zhangxinxu.com/wordpress/?p=3520

(End of post)


Introduction to Transaction Locks in InnoDB Storage Engine

September 16, 2013

From: https://blogs.oracle.com/mysqlinnodb/entry/introduction_to_transaction_locks_in

Introduction

Transaction locks are an important feature of any transactional storage engine. There are two types of transaction locks – table locks and row locks. Table locks are used to avoid a table being altered or dropped by one transaction when another transaction is using it, and to prohibit a transaction from accessing a table while it is being altered. InnoDB supports multiple granularity locking (MGL), so to access rows in a table, intention locks must be taken on the table.

Row locks are at a finer granularity than table-level locks: different threads can work on different parts of the table without interfering with each other. This is in contrast with MyISAM, where the entire table has to be locked even when updating unrelated rows. Having row locks means that multiple transactions can read and write a single table concurrently, which increases the concurrency level of the storage engine. InnoDB, being an advanced transactional storage engine, provides both table and row level transaction locks.

This article will provide information about how transaction locks are implemented in InnoDB storage engine. The lock subsystem of InnoDB provides many services to the overall system, like:

  • Creating, acquiring, releasing and destroying row locks.
  • Creating, acquiring, releasing and destroying table locks.
  • Providing multi-thread safe access to row and table locks.
  • Data structures useful for locating a table lock or a row lock.
  • Maintaining a list of user threads suspended while waiting for transaction locks.
  • Notification of suspended threads when a lock is released.
  • Deadlock detection

The lock subsystem helps to isolate one transaction from another transaction. This article will provide information about how transaction locks are created, maintained and used in the InnoDB storage engine. All reference to locks means transaction locks, unless specified otherwise.

Internal Data Structures of InnoDB

Before we proceed with scenarios and algorithms, I would like to present the following data structures. We need to be familiar with these data structures to understand how transaction locks work in InnoDB. The data structures of interest are:

  1. The enum lock_mode – provides the list of modes in which the transaction locks can be obtained.
  2. The lock struct lock_t. This represents either a table lock or a row lock.
  3. The struct trx_t which represents one transaction within InnoDB.
  4. The struct trx_lock_t which associates one transaction with all its transaction locks.
  5. The global object of type lock_sys_t holds the hash table of row locks.
  6. The table descriptor dict_table_t, which uniquely identifies a table in InnoDB. Each table descriptor contains a list of locks on the table.
  7. The global object trx_sys of type trx_sys_t, which holds the active transaction table (ATT).

The Lock Modes

The locks can be obtained in the following modes. I'll not discuss LOCK_AUTO_INC in this article.

/* Basic lock modes */
enum lock_mode {
        LOCK_IS = 0,    /* intention shared */
        LOCK_IX,        /* intention exclusive */
        LOCK_S,         /* shared */
        LOCK_X,         /* exclusive */
        LOCK_AUTO_INC,  /* locks the auto-inc counter of a table
                        in an exclusive mode */
        LOCK_NONE,      /* this is used elsewhere to note consistent read */
        LOCK_NUM = LOCK_NONE, /* number of lock modes */
        LOCK_NONE_UNSET = 255
};

The lock struct or lock object

The structure lock_t represents either a table lock (lock_table_t) or a group of row locks (lock_rec_t) for all the rows belonging to the same page. For different lock modes, different lock structs will be used. For example, if a row in a page is locked in LOCK_X mode, and if another row in the same page is locked in LOCK_S mode, then these two row locks will be held in different lock structs.

This structure is defined as follows:

struct lock_t {
        trx_t*          trx;
        ulint           type_mode;
        hash_node_t     hash;
        dict_index_t*   index;
        union {
                lock_table_t    tab_lock;/*!< table lock */
                lock_rec_t      rec_lock;/*!< record lock */
        } un_member;                    /*!< lock details */
};

/** A table lock */
struct lock_table_t {
        dict_table_t*   table;          /*!< database table in dictionary
                                        cache */
        UT_LIST_NODE_T(lock_t)
                        locks;          /*!< list of locks on the same
                                        table */
};

/** Record lock for a page */
struct lock_rec_t {
        ulint   space;                  /*!< space id */
        ulint   page_no;                /*!< page number */
        ulint   n_bits;                 /*!< number of bits in the lock
                                        bitmap; NOTE: the lock bitmap is
                                        placed immediately after the
                                        lock struct */
};

The important point here is the lock bitmap. The lock bitmap is a space-efficient way to represent row locks, and it avoids the need for lock escalation and for lock data persistence. (Note: for prepared transactions it would be useful to have lock data persistence, but InnoDB does not currently support it.)

The lock bitmap is placed immediately after the lock struct object. If a page can contain a maximum of 100 records, then the lock bitmap would be of size 100 (or more). Each bit in this bitmap will represent a row in the page. The heap_no of the row is used to index into the bitmap. If the 5th bit in the bitmap is enabled, then the row with heap_no 5 is locked.

The Transaction And Lock Relationship

The struct trx_t is used to represent the transaction within InnoDB. The struct trx_lock_t is used to represent all locks associated to a given transaction. Here I have listed down only those members relevant to this article.

struct trx_t {
        trx_id_t        id;             /*!< transaction id */
        trx_lock_t      lock;           /*!< Information about the transaction
                                        locks and state. Protected by
                                        trx->mutex or lock_sys->mutex
                                        or both */
};

struct trx_lock_t {
        ib_vector_t*    table_locks;    /*!< All table locks requested by this
                                        transaction, including AUTOINC locks */
        UT_LIST_BASE_NODE_T(lock_t)
                        trx_locks;      /*!< locks requested
                                        by the transaction;
                                        insertions are protected by trx->mutex
                                        and lock_sys->mutex; removals are
                                        protected by lock_sys->mutex */
};

Global hash table of row locks

Before we look at how row locks work internally, we need to be aware of this global data structure. The lock subsystem of the InnoDB storage engine has a global object lock_sys of type lock_sys_t, defined as follows:

struct lock_sys_t {
        ib_mutex_t      mutex;
        hash_table_t*   rec_hash;
        ulint           n_lock_max_wait_time;
        // more ...
};

The lock_sys_t::rec_hash member is the hash table of the record locks. Every page within InnoDB is uniquely identified by the (space_id, page_no) combination, called the page address. The hashing is done based on the page address. So given the page address, we can locate the list of lock_t objects of that page. All lock structs of the given page will be in the same hash bucket. So using this mechanism we can locate the row lock of any row.

The Transaction Subsystem Global Object

The transaction subsystem of InnoDB has one global object trx_sys of type trx_sys_t, which is an important internal data structure. It is defined as follows:

/** The transaction system central memory data structure. */
struct trx_sys_t{
        // more ...
        trx_list_t      rw_trx_list;    /*!< List of active and committed in
                                        memory read-write transactions, sorted
                                        on trx id, biggest first. Recovered
                                        transactions are always on this list. */
        trx_list_t      ro_trx_list;    /*!< List of active and committed in
                                        memory read-only transactions, sorted
                                        on trx id, biggest first. NOTE:
                                        The order for read-only transactions
                                        is not necessary. We should exploit
                                        this and increase concurrency during
                                        add/remove. */
        // more ...
};

This global data structure contains the active transaction table (ATT) of InnoDB storage engine. In the following sections, I’ll use this global object to access my transaction object through the debugger to demonstrate the transaction locks.

The table descriptor (dict_table_t)

Each table in InnoDB is uniquely identified by its name in the form of databasename/tablename. For each table, the data dictionary of InnoDB will contain exactly one table descriptor object of type dict_table_t. Given the table name, the table descriptor can be obtained. This table descriptor contains a list of locks on the table. This list can be used to check if the table has already been locked by a transaction.

struct dict_table_t{
        // more ...
        table_id_t      id;     /*!< id of the table */
        char*           name;   /*!< table name */
        UT_LIST_BASE_NODE_T(lock_t)
                        locks;  /*!< list of locks on the table; protected
                                by lock_sys->mutex */
};

The heap_no of a row

Every row in InnoDB is uniquely identified by the space_id, page_no and heap_no of the row. I assume that you know about space_id and page_no, so I'll explain only the heap_no. Each row in a page has a heap_no: the heap_no of the infimum record is 0, the heap_no of the supremum record is 1, and the heap_no of the first user record in the page is 2.

If we have inserted 10 records in the page, and if there has been no updates on the page, then the heap_no of the user records would be 2, 3, 4, 5 … 10, 11. The heap_no will be in the same order in which the records will be accessed in ascending order.

The heap_no of the row is used to locate the bit in the lock bitmap corresponding to the row. If the row has a heap_no of 10, then the 10th bit in the lock bitmap corresponds to the row. This means that if the heap_no of the row is changed, then the associated lock structs must be adjusted accordingly.
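The two-level lookup — hash on the page address (space_id, page_no), then index the lock bitmap by heap_no — can be modeled in miniature (a toy JavaScript model for illustration; InnoDB stores the bitmap as raw bytes right after lock_t, and it also separates lock structs by mode, which this sketch ignores):

```javascript
// Toy lock_sys: page address -> list of lock objects, each with a bitmap
var recHash = {};

function pageKey(spaceId, pageNo) {
    return spaceId + ":" + pageNo;
}

function lockRec(trx, spaceId, pageNo, heapNo) {
    var key = pageKey(spaceId, pageNo);
    var locks = recHash[key] || (recHash[key] = []);
    // Reuse this transaction's lock struct for the page if one exists
    for (var i = 0; i < locks.length; i += 1) {
        if (locks[i].trx === trx) {
            locks[i].bitmap[heapNo] = 1;  // set the bit for this row
            return locks[i];
        }
    }
    var lock = { trx: trx, space: spaceId, pageNo: pageNo, bitmap: [] };
    lock.bitmap[heapNo] = 1;
    locks.push(lock);
    return lock;
}

function isRowLocked(spaceId, pageNo, heapNo) {
    var locks = recHash[pageKey(spaceId, pageNo)] || [];
    return locks.some(function (l) { return l.bitmap[heapNo] === 1; });
}
```

Note how many row locks on one page by one transaction still cost a single lock struct: only bits change.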

The Schema

To demonstrate transaction locks, this article uses the following schema: a single table t1 with 3 rows, (1, 'அ'), (2, 'ஆ') and (3, 'இ'). We will see when and how the transaction locks (table and row) are created and used, and the internal steps involved in creating them.

set names utf8;
drop table if exists t1;
create table t1 (f1 int primary key, f2 char(10)) charset='utf8';
insert into t1 values (1, 'அ'), (2, 'ஆ'), (3, 'இ');

Table Level Transaction Locks

The purpose of table level transaction locks or simply table locks is to ensure that no transactions modify the structure of the table when another transaction is accessing or modifying the table or the rows in a table. There are two types of table locks in MySQL – one is the meta-data locking (MDL) provided by the SQL engine and the other is the table level locks within InnoDB storage engine. Here the discussion is only about the table level locks within InnoDB.

On tables, InnoDB normally acquires only intentional shared (LOCK_IS) or intentional exclusive (LOCK_IX) modes. It does not lock the tables in shared mode (LOCK_S) or exclusive mode (LOCK_X) unless explicitly requested via LOCK TABLES command. One exception to this is in the prepare phase of the online alter table command, where the table is locked in shared mode. Please refer to Multiple Granularity Locking to know about intention shared and intention exclusive locks.

Scenario (௧) for LOCK_IS table lock

Here is an example scenario that takes an intention shared (LOCK_IS) lock on the table.

mysql> set transaction isolation level serializable;
mysql> start transaction;
mysql> select * from t1;

When the above 3 statements are executed, one table lock would be taken in LOCK_IS mode. In mysql-5.6, there is no way to verify this other than using the debugger. I verified it as follows:

(gdb) set $trx_locklist = trx_sys->rw_trx_list->start->lock->trx_locks
(gdb) p $trx_locklist.start->un_member->tab_lock->table->name
$21 = 0x7fffb0014f70 "test/t1"
(gdb) p lock_get_mode(trx_sys->rw_trx_list->start->lock->trx_locks->start)
$25 = LOCK_IS
(gdb)

You need to access the correct transaction object and the lock object.

Scenario (௨) for LOCK_IX table lock

Here is an extension to the previous scenario. Adding an INSERT statement to the scenario will take an intention exclusive (LOCK_IX) lock on the table.

mysql> set transaction isolation level serializable;
mysql> start transaction;
mysql> select * from t1;
mysql> insert into t1 values (4, 'ஃ');

When the above 4 statements are executed, there would be two locks on the table – LOCK_IS and LOCK_IX. The select statement would have taken the table lock in LOCK_IS mode and the INSERT statement would take the table lock in LOCK_IX mode.

This can also be verified using the debugger. I’ll leave it as an exercise for the reader.

What Happens Internally When Acquiring Table Locks

Each table is uniquely identified within the InnoDB storage engine using the table descriptor object of type dict_table_t. In this section we will see the steps taken internally by InnoDB to obtain a table lock. The function to refer to is lock_table() in the source code. Necessary mutexes are taken during these steps to ensure that everything works correctly in a multi-thread environment. This aspect is not discussed here.

  1. The request for a table lock comes with the following information – the table to lock (dict_table_t), the mode in which to lock (enum lock_mode), and the transaction which wants the lock (trx_t).
  2. Check whether the transaction is already holding an equal or stronger lock on the given table. Each transaction maintains a list of table locks obtained by itself (trx_t::trx_lock_t::table_locks). Searching this list for the given table and mode is sufficient to answer this question. If the current transaction is already holding an equal or stronger lock on the table, then the lock request is reported as success. If not, then go to next step.
  3. Check if any other transaction has an incompatible lock request in the lock queue of the table. Each table descriptor has a lock queue (dict_table_t::locks). Searching this queue is sufficient to answer this question. If some other transaction holds an incompatible lock on the table, then the lock request needs to wait. Waiting for a lock can lead to time out or deadlock error. If there is no contention from other transactions for this table, then proceed further.
  4. Allocate a lock struct object (lock_t). Initialize with table, trx and lock mode information. Add this object to the queue in dict_table_t::locks object of the table as well as the trx_t::trx_lock_t::table_locks.
  5. Complete.
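The five steps can be modeled in miniature (a toy JavaScript model, not InnoDB's C; the compatibility and "stronger or equal" tables follow standard multiple granularity locking, AUTO_INC and lock waiting/timeouts are ignored):

```javascript
// Which held mode (outer key) is compatible with which requested mode (inner key)
var compatible = {
    IS: { IS: 1, IX: 1, S: 1, X: 0 },
    IX: { IS: 1, IX: 1, S: 0, X: 0 },
    S:  { IS: 1, IX: 0, S: 1, X: 0 },
    X:  { IS: 0, IX: 0, S: 0, X: 0 }
};
// Which held mode (outer key) already covers which requested mode (inner key)
var strongerOrEqual = {
    IS: { IS: 1 },
    IX: { IS: 1, IX: 1 },
    S:  { IS: 1, S: 1 },
    X:  { IS: 1, IX: 1, S: 1, X: 1 }
};

function lockTable(trx, table, mode) {
    // Step 2: does this trx already hold an equal or stronger lock?
    for (var i = 0; i < trx.tableLocks.length; i += 1) {
        var l = trx.tableLocks[i];
        if (l.table === table && strongerOrEqual[l.mode][mode]) {
            return "granted (already held)";
        }
    }
    // Step 3: incompatible lock held by another trx -> must wait
    for (var j = 0; j < table.locks.length; j += 1) {
        var o = table.locks[j];
        if (o.trx !== trx && !compatible[o.mode][mode]) {
            return "wait";
        }
    }
    // Step 4: allocate a lock struct, enqueue on table and on the trx
    var lock = { trx: trx, table: table, mode: mode };
    table.locks.push(lock);
    trx.tableLocks.push(lock);
    return "granted";
}
```

Running the model on Scenario (௨) — one trx taking IS then IX — ends with exactly two lock structs on the table, matching step 4.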

You can re-visit the above scenario (௨) and then follow the above steps and verify that the number of lock structs created is 2.

Row Level Transaction Locks

Each row in the InnoDB storage engine needs to be uniquely identified in order to be able to lock it. A row is uniquely identified by the following pieces of information:

  • The space identifier
  • The page number within the space.
  • The heap number of the record within the page.
  • The descriptor of the index to which the row belongs (dict_index_t). Strictly speaking, this is not necessary to uniquely identify the row, but associating the row with its index provides user-friendly diagnostics and helps developers debug problems.

There are two types of row locks in InnoDB – the implicit and the explicit. The explicit row locks are the locks that make use of the global row lock hash table and the lock_t structures. The implicit row locks are logically arrived at based on the transaction information in the clustered index or secondary index record. These are explained in the following sections.

Implicit Row Locks

Implicit row locks do not have an associated lock_t object allocated; they are computed purely from the ID of the requesting transaction and the transaction ID stored in each record. First, let us see how implicit locks are "acquired" (here is a comment from lock0lock.cc):

“If a transaction has modified or inserted an index record, then it owns an implicit x-lock on the record. On a secondary index record, a transaction has an implicit x-lock also if it has modified the clustered index record, the max trx id of the page where the secondary index record resides is >= trx id of the transaction (or database recovery is running), and there are no explicit non-gap lock requests on the secondary index record. ”

As we can see, the implicit locks are a logical entity and whether a transaction has implicit locks or not is calculated using certain procedures. This procedure is explained here briefly.

If a transaction wants to acquire a row lock (implicit or explicit), then it needs to determine whether any other transaction has an implicit lock on the row before checking on the explicit lock. As explained above this procedure is different for clustered index and secondary index.

For the clustered index, get the transaction id from the given record. If it is a valid transaction id, then that is the transaction which is holding the implicit exclusive lock on the row. Refer to the function lock_clust_rec_some_has_impl() for details.

For the secondary index, it is more cumbersome to calculate. Here are the steps:

  1. Let R1 be the secondary index row for which we want to check if another transaction is holding an implicit exclusive lock.
  2. Obtain the maximum transaction id (T1) of the page that contains the secondary index row R1.
  3. Let T2 be the minimum transaction id of the InnoDB system. If T1 is less than T2, then there can be no transaction holding implicit lock on the row. Otherwise, go to next step.
  4. Obtain the corresponding clustered index record for the given secondary index record R1.
  5. Get the transaction id (T3) from the clustered index record. By looking at the clustered index record, including its older versions, find out if T3 could have modified or inserted the secondary index row R1. If yes, then T3 has an implicit exclusive lock on row R1. Otherwise it does not have.

In the case of secondary indexes, we need to make use of the undo logs to determine whether any transaction has an implicit exclusive row lock on the record. Refer to the function lock_sec_rec_some_has_impl() for details.

Also note that the implicit row locks do not affect the gaps.

Explicit Row Locks

Explicit row locks are represented by the lock_t structures. This section provides some scenarios in which explicit row locks are obtained in different modes. It also briefly discusses the internal procedure to acquire an explicit row lock.

Scenario () for LOCK_S row lock

For the REPEATABLE READ transaction isolation level and weaker levels, InnoDB does not take shared locks on rows for plain reads. So in the following scenario, we use the SERIALIZABLE transaction isolation level to demonstrate shared row locks:

mysql> set transaction isolation level serializable;

mysql> start transaction;

mysql> select * from t1;

This scenario is the same as the scenario above; only the verification via the debugger differs. In the case of row locks, each lock object represents a row lock for all rows of a page (of the same lock mode). So a lock bitmap is used for this purpose, which exists at the end of the lock struct object. In the above scenario, we can verify the locks as follows:

The lock_t object allocated for the row locks is

(gdb) set $trx_locklist = trx_sys->rw_trx_list->start->lock->trx_locks

(gdb) set $rowlock = $trx_locklist.start->trx_locks->next

(gdb) p $rowlock

$47 = (ib_lock_t *) 0x7fffb00103f0

Verify the lock mode of the lock object.

(gdb) p lock_get_mode($rowlock)

$48 = LOCK_S

(gdb)

Verify that it is actually a lock struct object allocated for rows (and not a table lock).

(gdb) p *$rowlock

$43 = {trx = 0x7fffb0013f78, trx_locks = {prev = 0x7fffb00103a8, next = 0x0},

  type_mode = 34, hash = 0x0, index = 0x7fffb001a6b8, un_member = {tab_lock = {

      table = 0xc, locks = {prev = 0x3, next = 0x48}}, rec_lock = {space = 12,

      page_no = 3, n_bits = 72}}}

(gdb)

The row lock bit map is at the end of the lock object. Even though the number of bits in the lock bitmap is given as n_bits = 72 (9 bytes) above, I am examining only 4 bytes here.

(gdb) x /4t $rowlock + 1

0x7fffb0010438:     00011110  00000000  00000000  00000000

(gdb)

Note that there are 4 bits enabled. This means that 4 row locks are held. The row in the page and the bit in the lock bitmap are related via the heap_no of the row, as described previously.

Scenario () for LOCK_X row lock

Changing the above scenario slightly as follows will obtain LOCK_X locks on the rows.

mysql> set transaction isolation level serializable;

mysql> start transaction;

mysql> select * from t1 for update;

Verification of this through the debugger is left as an exercise for the reader.

What Happens Internally When Acquiring Explicit Row Locks

To lock a row, all information necessary to uniquely identify the row – the trx, lock mode, table space id, page_no and heap_no – must be supplied. The entry point for row locking is the lock_rec_lock() function in the source code.

  1. Check if the given transaction has a granted explicit lock on the row with an equal or stronger lock mode. To do this search, the global hash table of row locks is used. If the transaction already has a strong enough lock on the row, then there is nothing more to do. Otherwise go to the next step.
  2. Check if other transactions have a conflicting lock on the row. For this search also, the global hash table of row locks is used. If yes, then the current transaction must wait. Waiting can lead to timeout or deadlock error. If there is no contention for this row from other transactions then proceed further.
  3. Check if there is already a lock struct object for the given row. If yes, reuse the same lock struct object. Otherwise allocate a lock struct object.
  4. Initialize the trx, lock mode, page_no, space_id and the size of the lock bit map. Set the bit of the lock bitmap based on the heap_no of the row.
  5. Insert this lock struct object in the global hash table of row locks.
  6. Add this to the list of transaction locks in the transaction object (trx_t::trx_lock_t::trx_locks).

Releasing transaction locks

All the transaction locks acquired by a transaction are released by InnoDB at transaction end (commit or rollback). With the REPEATABLE READ or SERIALIZABLE isolation levels, it is not possible for the SQL layer to release locks before transaction end. With the READ COMMITTED isolation level, the SQL layer can release locks before transaction end by making use of the ha_innobase::unlock_row() handler interface API call.

To obtain the list of all transaction locks of a transaction, the following member can be used – trx_t::trx_lock_t::trx_locks.

Conclusion

This article provided an overview of the transaction locks in InnoDB. We looked at the data structures used to represent transaction locks. We analysed some scenarios in which the various locks are created. We also looked at the internal steps that happen when a table lock or a row lock is created.


 


Data Organization in InnoDB

September 16, 2013

From: https://blogs.oracle.com/mysqlinnodb/entry/data_organization_in_innodb

Introduction

This article will explain how data is organized in the InnoDB storage engine. First we will look at the various files that are created by InnoDB, then we will look at the logical data organization: tablespaces, pages, segments and extents. We will explore each of them in some detail and discuss their relationships with each other. By the end of this article, the reader will have a high-level view of the data layout within the InnoDB storage engine.

The Files

MySQL will store all data within the data directory. The data directory can be specified using the command line option --datadir or in the configuration file as datadir. Refer to the Server Command Options for complete details.

By default, when InnoDB is initialized, it creates 3 important files in the data directory – ibdata1, ib_logfile0 and ib_logfile1. The ibdata1 is the data file in which system and user data will be stored. The ib_logfile0 and ib_logfile1 are the redo log files. The location and size of these files are configurable. Refer to Configuring InnoDB for more details.

The data file ibdata1 belongs to the system tablespace with tablespace id (space_id) of 0. The system tablespace can contain more than 1 data file. As of MySQL 5.6, only the system tablespace can contain more than 1 data file. All other tablespaces can contain only one data file. Also, only the system tablespace can contain more than one table, while all other tablespaces can contain only one table.

The data files and the redo log files are represented in the memory by the C structure fil_node_t.

Tablespaces

By default, InnoDB contains only one tablespace called the system tablespace whose identifier is 0. More tablespaces can be created indirectly using the innodb_file_per_table configuration parameter. In MySQL 5.6, this configuration parameter is ON by default. When it is ON, each table will be created in its own tablespace in a separate data file.

The relationship between the tablespace and data files is explained in the InnoDB source code comment (storage/innobase/fil/fil0fil.cc) which is quoted here for reference:

“A tablespace consists of a chain of files. The size of the files does not have to be divisible by the database block size, because we may just leave the last incomplete block unused. When a new file is appended to the tablespace, the maximum size of the file is also specified. At the moment, we think that it is best to extend the file to its maximum size already at the creation of the file, because then we can avoid dynamically extending the file when more space is needed for the tablespace.”

The last statement about avoiding dynamic extension is applicable only to the redo log files and not the data files. Data files are dynamically extended, but redo log files are pre-allocated. Also, as already mentioned earlier, only the system tablespace can have more than one data file.

It is also clearly mentioned that even though the tablespace can have multiple files, they are thought of as one single large file concatenated together. So the order of files within the tablespace is important.

Pages

A data file is logically divided into equal sized pages. The first page of the first data file is identified with page number of 0, and the next page would be 1 and so on. A page within a tablespace is uniquely identified by the page identifier or page number (page_no). And each tablespace is uniquely identified by the tablespace identifier (space_id). So a page is uniquely identified throughout InnoDB by using the (space_id, page_no) combination. And any location within InnoDB can be uniquely identified by the (space_id, page_no, page_offset) combination, where page_offset is the number of bytes within the given page.

How the pages from different data files relate to one another is explained in another source code comment: “A block’s position in the tablespace is specified with a 32-bit unsigned integer. The files in the chain are thought to be catenated, and the block corresponding to an address n is the nth block in the catenated file (where the first block is named the 0th block, and the incomplete block fragments at the end of files are not taken into account). A tablespace can be extended by appending a new file at the end of the chain.” This means that the first page of all the data files will not have page_no of 0 (zero). Only the first page of the first data file in a tablespace will have the page_no as 0 (zero).

Also in the above comment it is mentioned that the page_no is a 32-bit unsigned integer. This is the size of the page_no when stored on the disk.

Every page has a page header (page_header_t). For more details on this please refer to Jeremy Cole’s blog post “The basics of InnoDB space file layout”.

 

Extents

An extent is 1MB of consecutive pages. The size of one extent is defined as follows (1048576 bytes = 1MB):

#define FSP_EXTENT_SIZE (1048576U / UNIV_PAGE_SIZE)

The macro UNIV_PAGE_SIZE used to be a compile time constant. From mysql-5.6 onwards it is a global variable. The number of pages in an extent depends on the page size used. If the page size is 16K (the default), then an extent would contain 64 pages.

 

Page Types

One page can be used for many purposes. The page type will identify the purpose for which the page is being used. The page type of each page will be stored in the page header. The page types are available in the header file storage/innobase/include/fil0fil.h. The following table provides a brief description of the page types.

Page Type Description
FIL_PAGE_INDEX The page is a B-tree node
FIL_PAGE_UNDO_LOG The page stores undo logs
FIL_PAGE_INODE The page contains an array of fseg_inode_t objects.
FIL_PAGE_IBUF_FREE_LIST The page is in the free list of insert buffer or change buffer.
FIL_PAGE_TYPE_ALLOCATED Freshly allocated page.
FIL_PAGE_IBUF_BITMAP Insert buffer or change buffer bitmap
FIL_PAGE_TYPE_SYS System page
FIL_PAGE_TYPE_TRX_SYS Transaction system data
FIL_PAGE_TYPE_FSP_HDR File space header
FIL_PAGE_TYPE_XDES Extent Descriptor Page
FIL_PAGE_TYPE_BLOB Uncompressed BLOB page
FIL_PAGE_TYPE_ZBLOB First compressed BLOB page
FIL_PAGE_TYPE_ZBLOB2 Subsequent compressed BLOB page

Each page type is used for a different purpose; it is beyond the scope of this article to explore each of them. For now, it is sufficient to know that all pages have a page header (page_header_t) which stores the page type, and that the page type determines the contents and layout of the page.

Tablespace Header

Each tablespace will have a header of type fsp_header_t. This data structure is stored in the first page of a tablespace and contains the following information:

  • The table space identifier (space_id)
  • Current size of the table space in pages.
  • List of free extents
  • List of full extents not belonging to any segment.
  • List of partially full/free extents not belonging to any segment.
  • List of pages containing segment headers, where all the segment inode slots are reserved. (pages of type FIL_PAGE_INODE)
  • List of pages containing segment headers, where not all the segment inode slots are reserved. (pages of type FIL_PAGE_INODE).

 

 

InnoDB Tablespace Header Structure

 

From the tablespace header, we can access the list of segments available in the tablespace. The total space occupied by the tablespace header is given by the macro FSP_HEADER_SIZE, which is equal to 16*7 = 112 bytes.

 

Reserved Pages of Tablespace

As mentioned earlier, InnoDB will always contain one tablespace called the system tablespace, with identifier 0. This is a special tablespace and is always kept open as long as the MySQL server is running. The first few pages of the system tablespace are reserved for internal usage. This information can be obtained from the header storage/innobase/include/fsp0types.h. They are listed below with a short description.

Page Number

The Page Name

Description

0 FSP_XDES_OFFSET The extent descriptor page.
1 FSP_IBUF_BITMAP_OFFSET The insert buffer bitmap page.
2 FSP_FIRST_INODE_PAGE_NO The first inode page number.
3 FSP_IBUF_HEADER_PAGE_NO Insert buffer header page in system tablespace.
4 FSP_IBUF_TREE_ROOT_PAGE_NO Insert buffer B-tree root page in system tablespace.
5 FSP_TRX_SYS_PAGE_NO Transaction system header in system tablespace.
6 FSP_FIRST_RSEG_PAGE_NO First rollback segment page, in system tablespace.
7 FSP_DICT_HDR_PAGE_NO Data dictionary header page in system tablespace.

 

As can be noted from above, the first 3 pages are present in any tablespace, but the last 5 pages are reserved only in the case of the system tablespace. In other tablespaces only 3 pages are reserved.

When the option innodb_file_per_table is enabled, then for each table a separate tablespace with one data file would be created. The source code comment in the function dict_build_table_def_step() states the following:

                /* We create a new single-table tablespace for the table. 
                We initially let it be 4 pages: 
                - page 0 is the fsp header and an extent descriptor page, 
                - page 1 is an ibuf bitmap page, 
                - page 2 is the first inode page, 
                - page 3 will contain the root of the clustered index of the 
                table we create here. */

 

File Segments

A tablespace can contain many file segments. A file segment (or just segment) is a logical entity. Each segment has a segment header (fseg_header_t), which points to the inode (fseg_inode_t) describing the file segment. The file segment header contains the following information:

  • The space to which the inode belongs
  • The page_no of the inode
  • The byte offset of the inode
  • The length of the file segment header (in bytes).

Note: It would have been more readable (at source code level) if fseg_header_t and fseg_inode_t had proper C-style structures defined for them.

The fseg_inode_t object contains the following information:

  • The segment id to which it belongs.
  • List of full extents.
  • List of free extents of this segment.
  • List of partially full/free extents
  • Array of individual pages belonging to this segment. The size of this array is half an extent.

When a segment wants to grow, it will get free extents or pages from the tablespace to which it belongs.

 

Table

In InnoDB, when a table is created, a clustered index (B-tree) is created internally. This B-tree contains two file segments, one for the non-leaf pages and the other for the leaf pages. From the source code documentation:

“In the root node of a B-tree there are two file segment headers. The leaf pages of a tree are allocated from one file segment, to make them consecutive on disk if possible. From the other file segment we allocate pages for the non-leaf levels of the tree.”

For a given table, the root page of a B-tree will be obtained from the data dictionary. So in InnoDB, each table exists within a tablespace, and contains one B-tree (the clustered index), which contains 2 file segments. Each file segment can contain many extents, and each extent contains 1MB of consecutive pages.

Conclusion

This article discussed the details about the data organization within InnoDB. We first looked at the files created by InnoDB, and then discussed about the various logical entities like tablespaces, pages, page types, extents, segments and tables. We also looked at the relationship between each one of them.


Distributed Systems Principles: Consistency & Durability

September 12, 2013

Reposted from: http://goleo8.iteye.com/blog/662108

1. What are consistency, durability, and transactions

Once an atomic operation also has consistency, isolation, and durability, it can be called a transaction.
Consistency is an application-defined requirement that every update to a collection of data must preserve some specified invariant. Different applications can have quite different consistency invariants.
We keep talking about consistency, and tend to understand it as the data we see agreeing with the data produced by a series of update operations. But that is actually not the most essential meaning of consistency. The essential meaning is that, according to the application's requirements, our operations over a collection of data must maintain some invariant. For example, a table's row counter should match its actual number of rows, and a cache should agree with the backing data.

The book spends a long section discussing atomicity. Atomicity divides into all-or-nothing atomicity and isolation atomicity.

Strong consistency: inconsistency is hidden inside the system boundary; seen from outside, the system is consistent at all times.
Eventual consistency: while data is being updated, there is a period during which the system looks inconsistent from outside the boundary, but after some interval consistency is guaranteed.

Sometimes eventual consistency is even an advantage. For example, when downloading a web page, the text appears first and the images later. During the download the page is inconsistent with the backing data, but this actually improves the user experience.

2. Cache coherence

Cache coherence requires that the data stored in the cache equal the data in secondary storage. But because of the delay between the cache and secondary storage, there are periods during which the data in the cache and in secondary storage differ.

A cache should satisfy read/write coherence: "The result of a read of a named object is always the value of the most recent write to that object." That is, a read of an object should return the result of the most recent write to it.
Caches come in two kinds:
1. Write-through cache: every write goes not only to the cache but also to secondary storage, which easily becomes a performance bottleneck.
2. Write-back cache: a write first goes to the cache, at which point the application may consider the write complete; the cache manager is responsible for later propagating the data in the cache to secondary storage.
With a single cache, a write-back cache can still provide strong consistency. But if a thread can read directly from secondary storage, or there are multiple caches, some of which may not hold the latest data, coherence is broken.

How can consistency still be achieved with distributed caches?
1. If very little of the data is both shared and writable, that data can be marked "non-cacheable".
a) The World Wide Web takes this approach: a field in the HTTP header can mark a web page as non-cacheable, so the page will not be cached.
b) A method with a similar idea in the Java memory model is to declare a variable volatile.
2. Another idea is to mark as invalid any cached copies that disagree with the authoritative copy.
a) One design idea: if a cache sits on top of the secondary storage shared by multiple processors, inconsistency can be avoided, but a shared cache leads to contention and mutual exclusion among the processors, which lowers system performance. Therefore each processor is given its own private cache, and this introduces inconsistency: even with a write-through cache, processor A's update to data Data cannot be written into processor B's private cache. So a mechanism is needed to tell the processors that use Data that Data has become invalid.
i. When one processor writes, tell all other processors that their private caches are entirely invalid.
Another method uses more wires to tell the other private caches which address in memory now holds stale data. One way is for every private cache to snoop on the memory bus and watch for write operations on it. "A slightly more clever design will also grab the data value from the bus as it goes by and update, rather than invalidate, its copy of that data. These are two variations on what is called the snoopy cache—each cache is snooping on bus activity."
ii. Even with a snoopy cache there are still problems: the cache problem is solved, but registers bring new synchronization problems.
3. As long as replicas may be accessed concurrently by multiple requests, maintaining isolation and consistency is a complex problem. Using locks to prevent concurrent access is one solution, but it hurts performance; one feasible alternative is to use probability.
3. Durability, and the consistency problems brought by multiple replicas

Durability brings consistency problems among multiple replicas; all of the following deal with inconsistency among replicas.
The durability mantra:
Multiple copies, widely separated and independently administered…

1. Replicated state machine
"If the data is written exactly once and never again changed, the management plan can be fairly straightforward." This is the premise on which Hadoop and Cassandra are able to keep multiple replicas consistent.
2. Master and slave structure
A big weakness of the M/S structure is that while the master is being updated, reads from the slaves all return stale data. Designing an M/S structure faces a series of problems, such as momentary inconsistency between master and slave; also, if the master's data has decayed and a slave has not yet synchronized, the data it synchronizes will be wrong as well.
3. How to guarantee the integrity of distributed data?
a) Replicas can be checked against each other, but once transferring the data between them becomes expensive, this incurs a large time cost.
b) Instead of comparing the data directly, transfer an MD5 checksum.
Summary:
1) Simple replication (RAID); 2) more widely distributed replicas (GFS), to survive failures such as earthquakes; 3) writing data according to some logic, i.e. everyone follows the same rules when writing, thereby avoiding inconsistency; 4) using probability to improve performance; 5) replicated state machines; 6) master/slave structures -> to avoid inconsistency between master and slaves, partition the master's table finely so that each small table has its own master -> to avoid master/slave inconsistency, use the two-phase commit protocol -> when the master fails, use an election algorithm to choose a new master -> "If the application is one in which the data is insensitive to the order of updates, implement a replicated state machine without a consensus algorithm." -> incremental updates -> ship not the increments but the log of operations; 7) quorum algorithms.
4. Reconciliation algorithms

What is reconciliation? When the system is halfway through an update, or the master in an M/S structure has crashed, or the data replicas have become inconsistent, returning from that "discordant" state to a "harmonious" one is reconciliation.
1. Occasionally connected operation
A typical scenario is data synchronization between an iPhone and an iMac. An even better analogy is SVN: synchronizing the files between a client and the SVN server.
How to discover the differences between files:
a) checksums
b) maintain a uniform file id, and increment the id whenever a change occurs
c) timestamps: when a file is changed, its timestamp is updated.
5. Atomicity across layers and multiple sites

The two phases of the two-phase commit protocol (note the distinction between the two-phase commit protocol, 2PC, and the two-phase locking protocol, 2PL):
Voting phase:
In this phase the coordinator sends a "try to commit" request to every node that is to perform the commit. On receiving the request, a node tries to commit, which includes, for example, updating its logs: adding new records to the undo log and to the redo log. If it finds that it can commit normally, it returns a Yes message; otherwise it returns a No message indicating abort.
If the coordinator receives Yes messages from everyone, it sends out a commit message; all nodes then commit and release the resource locks they hold.
If any message the coordinator receives is a No, meaning some node cannot commit, the coordinator broadcasts a rollback message. Each node then rolls back according to its undo log and releases the resource locks it holds. This is exactly where the memento pattern comes in handy!
Drawbacks:
This is a blocking protocol, which greatly affects system availability. If the coordinator fails, the locks held by some nodes may never be released and stay bound forever: if a node has sent its agreement message to the coordinator and is waiting for the commit-or-rollback response, and the coordinator crashes at that moment, the node is locked forever unless it can obtain the corresponding response from some other coordinator.
After the coordinator sends a "query-to-commit" message, it too is blocked until it has received all the responses. But if a node fails to respond, the coordinator is not blocked forever, because it can use a timeout mechanism to avoid permanent blocking.
Because of the timeout mechanism just mentioned, another big weakness of this protocol is that it is biased toward aborting a case rather than completing it.
Implementing the two-phase commit protocol

Common architecture

In many cases the 2PC protocol is distributed in a computer network. It is easily distributed by implementing multiple dedicated 2PC components similar to each other, typically named Transaction managers (TMs; also referred to as 2PC agents), that carry out the protocol’s execution for each transaction (e.g., The Open Group’s X/Open XA). The databases involved with a distributed transaction, the participants, both the coordinator and cohorts, register to close TMs (typically residing on respective same network nodes as the participants) for terminating that transaction using 2PC. Each distributed transaction has an ad hoc set of TMs, the TMs to which the transaction participants register. A leader, the coordinator TM, exists for each transaction to coordinate 2PC for it, typically the TM of the coordinator database. However, the coordinator role can be transferred to another TM for performance or reliability reasons. Rather than exchanging 2PC messages among themselves, the participants exchange the messages with their respective TMs. The relevant TMs communicate among themselves to execute the 2PC protocol schema above, “representing” the respective participants, for terminating that transaction. With this architecture the protocol is fully distributed (does not need any central processing component or data structure), and scales up with number of network nodes (network size) effectively.
This common architecture is also effective for the distribution of other atomic commitment protocols besides 2PC, since all such protocols use the same voting mechanism and outcome propagation to protocol participants.[1] [2]
Protocol optimizations

Database research has been done on ways to get most of the benefits of the two-phase commit protocol while reducing costs by protocol optimizations [1] [2] and protocol operations saving under certain system’s behavior assumptions.
Presume abort and Presume commit

Presumed abort or Presumed commit are common such optimizations.[3][2] An assumption about the outcome of transactions, either commit, or abort, can save both messages and logging operations by the participants during the 2PC protocol’s execution. For example, when presumed abort, if during system recovery from failure no logged evidence for commit of some transaction is found by the recovery procedure, then it assumes that the transaction has been aborted, and acts accordingly. This means that it does not matter if aborts are logged at all, and such logging can be saved under this assumption. Typically a penalty of additional operations is paid during recovery from failure, depending on optimization type. Thus the best variant of optimization, if any, is chosen according to failure and transaction outcome statistics.
Tree two-phase commit protocol

The Tree 2PC protocol [2] (also called Nested 2PC, or Recursive 2PC) is a common variant of 2PC in a network, which better utilizes the underlying communication infrastructure. In this variant the coordinator is the root (“top”) of a communication tree (inverted tree), while the cohorts are the other nodes. Messages from the coordinator are propagated “down” the tree, while messages to the coordinator are “collected” by a cohort from all the cohorts below it, before it sends the appropriate message “up” the tree (except an abort message, which is propagated “up” immediately upon receiving it, or if this cohort decided to abort).
The Dynamic two-phase commit (Dynamic two-phase commitment, D2PC) protocol[4][2] is a variant of Tree 2PC with no predetermined coordinator. Agreement messages start to propagate from all the leaves, each leaf when completing its tasks on behalf of the transaction (becoming ready), and the coordinator is determined dynamically by racing agreement messages, at the place where they collide. They collide either on a transaction tree node, or on an edge. In the latter case one of the two edge's nodes is elected as a coordinator (any node). D2PC is time optimal (among all the instances of a specific transaction tree, and any specific Tree 2PC protocol implementation; all instances have the same tree; each instance has a different node as coordinator): it commits the coordinator and each cohort in minimum possible time, allowing earlier release of locked resources.

6. Classifying consistency

Data consistency usually refers to whether the logical relationships among associated data are correct and complete. A consistency model for data storage can be seen as a contract between the storage system and the users of the data: if the users follow the contract, they get the access results the system promises.
Commonly used consistency models:
Strict consistency (linearizability, strict/atomic consistency): a read always returns the most recently written data. This kind of consistency is only possible when a global clock exists, and cannot be achieved in a distributed network environment.
Weak consistency: only requires sequentially consistent access for synchronization operations on shared data structures. Operations on synchronization variables are sequentially consistent and globally visible, and may proceed only when no write operations are pending, so that accesses to critical regions happen in order. At synchronization points, all users can see the same data.
Eventual consistency: when there are no new updates, updates will eventually propagate through the network to all replica nodes, and all replicas will eventually agree; that is, before some final point in time, users cannot be guaranteed to see the newly written data in the interim. A key requirement for adopting the eventual consistency model is that reading stale data must be acceptable.
Sequential consistency: all users see the operations on the same data in the same order, but that order is not necessarily real-time.
Causal consistency: only writes with a causal relationship must be seen by all users in the same order; writes with no causal relationship proceed in parallel with no ordering guarantee. Causal consistency can be seen as a performance optimization of sequential consistency, but implementing it requires building and maintaining a causal dependency graph, which is quite difficult.
Pipeline consistency (PRAM/FIFO consistency): a further weakening of causal consistency. It requires that the writes performed by a single user be perceived by all other users in order, while writes coming from different users need no ordering guarantee, like a set of independent pipelines. Comparatively easy to implement.
Release consistency: weak consistency cannot distinguish whether a user wants to enter a critical region or leave one; release consistency uses two different operations to distinguish them. To write, a user acquires the object and releases it when the write is done; acquire and release bracket a critical region. Providing release consistency means that once the release operation has happened, all users should be able to see that operation.
Delta consistency: the system reaches consistency within delta time. During this time there is an inconsistency window, which may be caused by the log-shipping process.

Several concrete implementations of eventual consistency:
1. Read-your-writes consistency: the data a user reads is never older than that user's own last write.
2. Session consistency: weaker than read-your-writes consistency; read/write consistency is guaranteed only within a session, and need not be guaranteed after a new session starts.
3. Monotonic read consistency: the data read is never older than the data read the previous time.
4. Monotonic write consistency: a write must complete before the next write can begin.
5. Writes-follow-reads consistency: a write goes to a replica no older than the data last read, i.e. older data is never written.
6. Election consistency:
Werner Vogels holds that in many Internet applications, monotonic read consistency plus read-your-writes consistency can provide sufficient consistency.
