
pg function sfa_tmp_sleep() runs slower and slower - a SQL analysis

Recently a function, sfa_tmp_sleep(), has been taking longer and longer to run (1h32m). It is scheduled hourly and processes 100 records per run, yet lately the third scheduled run would start while the first was still executing. Time to dig into it.

The function:
CREATE OR REPLACE FUNCTION sfa_tmp_sleep()
RETURNS text AS
$BODY$
DECLARE
sqlsexec character varying := '';
r record;
BEGIN
------------------------------------------
-- Purpose: compute sample statistics for dormant samples
------------------------------------------
set work_mem = '3000MB';   -- note: a plain SET persists for the rest of the session, not just this call

sqlsexec = 'SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 100';   -- the only SQL query in this function
RAISE NOTICE '%', sqlsexec;
FOR r IN EXECUTE sqlsexec
LOOP
RAISE NOTICE 'sample_hash = %', r.sample_hash;
PERFORM sfa_tmp_sample(r.sample_hash);   -- each returned hash is passed to sfa_tmp_sample() for scoring
END LOOP;
RETURN 'DONE';
exception when others then
RAISE EXCEPTION '(%)', SQLERRM;
return 'error';   -- unreachable: RAISE EXCEPTION aborts before this line
END;
$BODY$
LANGUAGE plpgsql;
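
As a side note, a single call is easy to time with a minimal harness like the following (a sketch, not part of the original investigation): wrap the call in an anonymous block and log the wall-clock duration with clock_timestamp(), which, unlike now(), keeps advancing inside a transaction.

DO $$
DECLARE
    t0 timestamptz := clock_timestamp();   -- wall-clock start time
BEGIN
    PERFORM sfa_tmp_sleep();
    -- clock_timestamp() advances during the transaction, so the difference
    -- below is the real elapsed time of the call
    RAISE NOTICE 'sfa_tmp_sleep() took %', clock_timestamp() - t0;
END
$$;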

Run the query from the function by hand, without the LIMIT, and time it:
SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash);   -- 67528 ms

Then change the LIMIT to 20 rows and time that. Astonishingly, it took 4115 s, about 69 minutes. That is intolerably slow, so the next step is to find out where the time is going.

First, pgBadger monitoring shows that the statistics function sfa_tmp_sample is actually fast: updating the statistics for a single hash is a matter of milliseconds.
select sfa_tmp_sample('A60CD39357F6D192547CDF85065C43D7');   -- run standalone like this, it takes only about 500 ms
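
An independent cross-check is possible with the pg_stat_statements extension, assuming it is installed and preloaded (a sketch of mine, not something the post shows; on the 9.x releases of that era the timing column is total_time, in milliseconds, renamed total_exec_time in later versions):

SELECT calls,
       round(total_time / calls) AS avg_ms,   -- mean runtime per call
       query
FROM pg_stat_statements
WHERE query LIKE '%sfa_tmp_sample%'
ORDER BY total_time DESC;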

So the prime suspect is the query inside sfa_tmp_sleep():
explain SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash
and t2.sample_type=7  and t1.state=-1 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 1;
"Limit (cost=1.31..4.34 rows=1 width=33)"
" -> Merge Join (cost=1.31..14435878.87 rows=4759688 width=33)"
" Merge Cond: ((t2.sample_hash)::text = (t1.sample_hash)::text)"
" -> Merge Append (cost=1.31..11326897.81 rows=19474512 width=33)"
" Sort Key: t2.sample_hash"

At first I assumed t_white was simply too large, so a long runtime would be understandable.
But then a strange pattern appeared: run by hand, the query takes wildly different amounts of time with and without the LIMIT.
NOTICE: SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) ;
Without the LIMIT, select sfa_tmp_sleep() took only 81 s. The function had just been run, so the query returned 0 rows and no follow-up sfa_tmp_sample calls were needed; once the query finishes, the function should exit immediately.

NOTICE: SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 100;
With LIMIT 100 it ran on and on without ever producing a result.

Why would adding a LIMIT hurt execution instead of helping it? Let's look at the EXPLAIN ANALYZE output of both forms:
explain analyze SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash);

"Hash Join (cost=288876.92..3072363.39 rows=4764135 width=33) (actual time=74750.884..74750.884 rows=0 loops=1)"     --总耗时才74s,下面是每一步的耗时
" Hash Cond: ((t2.sample_hash)::text = (t1.sample_hash)::text)"
" -> Append (cost=0.00..2589277.04 rows=19542413 width=33) (actual time=0.515..47951.685 rows=19051749 loops=1)"
" -> Seq Scan on t_white t2 (cost=0.00..0.00 rows=1 width=33) (actual time=0.001..0.001 rows=0 loops=1)"
" Filter: (sample_type = 7)"
" -> Bitmap Heap Scan on t_white20110701 t2 (cost=4.61..27.18 rows=46 width=33) (actual time=0.513..1.464 rows=43 loops=1)"     ---位图索引,t_white每个分区表的查询耗时都还算快。
" Recheck Cond: (sample_type = 7)"
" -> Bitmap Index Scan on t_white20110701_sample_type_idx (cost=0.00..4.60 rows=46 width=0) (actual time=0.500..0.500 rows=53 loops=1)"
" Index Cond: (sample_type = 7)"
" -> Bitmap Heap Scan on t_white20110801 t2 (cost=296.54..3235.54 rows=15520 width=33) (actual time=3.014..128.033 rows=15201 loops=1)"
" Recheck Cond: (sample_type = 7)"
" -> Bitmap Index Scan on t_white20110801_sample_type_idx (cost=0.00..292.66 rows=15520 width=0) (actual time=2.753..2.753 rows=15895 loops=1)"
" Index Cond: (sample_type = 7)"
" -> Seq Scan on t_white20110901 t2 (cost=0.00..179.12 rows=851 width=33) (actual time=0.243..3.192 rows=828 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20111001 t2 (cost=0.00..33.66 rows=57 width=33) (actual time=0.243..1.555 rows=55 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20111101 t2 (cost=0.00..26.19 rows=210 width=33) (actual time=0.290..1.000 rows=183 loops=1)"
" Filter: (sample_type = 7)"
" -> Bitmap Heap Scan on t_white20111201 t2 (cost=762.69..8933.83 rows=40571 width=33) (actual time=8.507..420.264 rows=40073 loops=1)"
" Recheck Cond: (sample_type = 7)"
" -> Bitmap Index Scan on t_white20111201_sample_type_idx (cost=0.00..752.55 rows=40571 width=0) (actual time=7.232..7.232 rows=41184 loops=1)"
" Index Cond: (sample_type = 7)"
" -> Seq Scan on t_white20120101 t2 (cost=0.00..1534.72 rows=8484 width=33) (actual time=0.247..27.334 rows=8279 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20120201 t2 (cost=0.00..8966.33 rows=56440 width=33) (actual time=0.215..153.553 rows=54567 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20120301 t2 (cost=0.00..6881.99 rows=43735 width=33) (actual time=0.239..115.809 rows=42545 loops=1)"
" Filter: (sample_type = 7)"
" -> Bitmap Heap Scan on t_white20120401 t2 (cost=1000.51..11229.42 rows=53193 width=33) (actual time=10.874..426.664 rows=52150 loops=1)"
" Recheck Cond: (sample_type = 7)"
" -> Bitmap Index Scan on t_white20120401_sample_type_idx (cost=0.00..987.21 rows=53193 width=0) (actual time=9.207..9.207 rows=53658 loops=1)"
" Index Cond: (sample_type = 7)"
" -> Bitmap Heap Scan on t_white20120501 t2 (cost=2679.92..32892.85 rows=143434 width=33) (actual time=25.837..1573.999 rows=143011 loops=1)"
" Recheck Cond: (sample_type = 7)"
" -> Bitmap Index Scan on t_white20120501_sample_type_idx (cost=0.00..2644.06 rows=143434 width=0) (actual time=21.501..21.501 rows=146864 loops=1)"   --部分分区表能够使用建立在sample_type字段的索引,因此时间明显比没用索引的要短
" Index Cond: (sample_type = 7)"
" -> Seq Scan on t_white20120601 t2 (cost=0.00..16996.70 rows=98081 width=33) (actual time=0.227..359.610 rows=96404 loops=1)"     --即使没有使用到sample_type上的索引,耗时也还能接受
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20120701 t2 (cost=0.00..12755.74 rows=77482 width=33) (actual time=0.233..250.715 rows=76203 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20120801 t2 (cost=0.00..36999.70 rows=214737 width=33) (actual time=0.239..628.646 rows=211867 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20120901 t2 (cost=0.00..23962.33 rows=149082 width=33) (actual time=0.246..373.619 rows=144067 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20121001 t2 (cost=0.00..11539.94 rows=84231 width=33) (actual time=0.216..179.243 rows=81386 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20121101 t2 (cost=0.00..24504.95 rows=190559 width=33) (actual time=0.246..353.219 rows=184554 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20121201 t2 (cost=0.00..26269.97 rows=180864 width=33) (actual time=0.259..400.979 rows=174210 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130101 t2 (cost=0.00..21436.80 rows=170187 width=33) (actual time=0.241..316.030 rows=165762 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130201 t2 (cost=0.00..56572.03 rows=428164 width=33) (actual time=0.010..936.162 rows=418646 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130301 t2 (cost=0.00..80714.98 rows=649873 width=33) (actual time=0.234..1272.789 rows=638643 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130401 t2 (cost=0.00..48517.72 rows=385268 width=33) (actual time=0.213..729.710 rows=379044 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130501 t2 (cost=0.00..43888.30 rows=359839 width=33) (actual time=0.222..690.170 rows=352550 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130601 t2 (cost=0.00..46958.70 rows=364984 width=33) (actual time=0.235..785.484 rows=357265 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130701 t2 (cost=0.00..59807.29 rows=362261 width=33) (actual time=0.264..929.197 rows=361180 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130801 t2 (cost=0.00..80945.31 rows=599443 width=33) (actual time=0.008..1185.351 rows=582070 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20130901 t2 (cost=0.00..66983.62 rows=551307 width=33) (actual time=0.258..893.968 rows=536798 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20131001 t2 (cost=0.00..76896.57 rows=561052 width=33) (actual time=0.234..1008.501 rows=547033 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20131101 t2 (cost=0.00..86725.89 rows=644766 width=33) (actual time=0.216..1205.315 rows=626491 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20131201 t2 (cost=0.00..81994.80 rows=590138 width=33) (actual time=0.723..1646.654 rows=572707 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140101 t2 (cost=0.00..74730.84 rows=471926 width=33) (actual time=0.254..1226.444 rows=462548 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140201 t2 (cost=0.00..104963.35 rows=794670 width=33) (actual time=0.258..1566.819 rows=784287 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140301 t2 (cost=0.00..139674.42 rows=1093780 width=33) (actual time=0.256..1892.263 rows=1077644 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140401 t2 (cost=0.00..192073.55 rows=1735035 width=33) (actual time=0.215..2498.861 rows=1706597 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140501 t2 (cost=0.00..156879.65 rows=1284820 width=33) (actual time=0.243..2429.547 rows=1262875 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140601 t2 (cost=0.00..75085.46 rows=509768 width=33) (actual time=0.200..1142.757 rows=501259 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140701 t2 (cost=0.00..109932.02 rows=801796 width=33) (actual time=0.254..1699.791 rows=785950 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140801 t2 (cost=0.00..124129.98 rows=870717 width=33) (actual time=0.244..1959.016 rows=857404 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20140901 t2 (cost=0.00..91497.77 rows=677569 width=33) (actual time=0.208..1499.579 rows=654824 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20141001 t2 (cost=0.00..99435.20 rows=652200 width=33) (actual time=0.480..1943.429 rows=640646 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20141101 t2 (cost=0.00..80946.43 rows=629295 width=33) (actual time=0.250..1826.538 rows=597895 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20141201 t2 (cost=0.00..95591.54 rows=836886 width=33) (actual time=0.382..2146.592 rows=801980 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20150101 t2 (cost=0.00..79795.20 rows=629169 width=33) (actual time=0.022..1899.265 rows=584735 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20150201 t2 (cost=0.00..64520.84 rows=581339 width=33) (actual time=0.230..1430.519 rows=530069 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20150301 t2 (cost=0.00..83947.32 rows=660344 width=33) (actual time=0.342..2088.060 rows=653357 loops=1)"
" Filter: (sample_type = 7)"
" -> Seq Scan on t_white20150401 t2 (cost=0.00..37661.30 rows=288239 width=33) (actual time=0.504..892.213 rows=285864 loops=1)"   
" Filter: (sample_type = 7)"
" -> Hash (cost=277212.84..277212.84 rows=933126 width=33) (actual time=6726.655..6726.655 rows=568872 loops=1)"
" Buckets: 131072 Batches: 1 Memory Usage: 36111kB"
" -> Hash Anti Join (cost=147500.92..277212.84 rows=933126 width=33) (actual time=3212.152..6523.566 rows=568872 loops=1)"
" Hash Cond: ((t1.sample_hash)::text = (t_sfa_tmp.sample_hash)::text)"
" -> Seq Scan on t_sfa_sample t1 (cost=0.00..106786.89 rows=1861524 width=33) (actual time=0.020..1122.771 rows=1859951 loops=1)"   ---顺序扫描   t_sfa_sample 中sample_hash是主键
" Filter: (state = (-1))"     ----过滤条件 state=-1 意味着规划节点为它扫描的每一行检查该条件,并且只输出符合条件的行
" -> Hash (cost=123638.74..123638.74 rows=1908974 width=33) (actual time=3211.371..3211.371 rows=1898083 loops=1)"
" Buckets: 262144 Batches: 1 Memory Usage: 120484kB"
" -> Seq Scan on t_sfa_tmp (cost=0.00..123638.74 rows=1908974 width=33) (actual time=0.392..2210.484 rows=1898083 loops=1)"     ---顺序扫描t_sfa_tmp 表中sample_hash是主键
"Total runtime: 74751.386 ms"
The measured time closely matches how long the function itself takes when run without the LIMIT.
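
Both inputs of the Hash Anti Join above are sequential scans. They are already cheap relative to the t_white append, but if those remaining seconds mattered, a partial index over the dormant rows is one option to consider (a suggestion of mine, not from the post; the index name is hypothetical). It lets the planner touch only the state = -1 rows of t_sfa_sample:

CREATE INDEX CONCURRENTLY t_sfa_sample_dormant_idx
    ON t_sfa_sample (sample_hash)
    WHERE state = -1;   -- matches the query's filter, so the planner can use it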

explain  analyze  SELECT t1.sample_hash FROM t_sfa_sample t1,t_white t2 where t1.sample_hash=t2.sample_hash and t1.state=-1
and t2.sample_type=7 and not exists (select sample_hash from t_sfa_tmp where t_sfa_tmp.sample_hash=t1.sample_hash) limit 100;
"Limit (cost=6.88..270.13 rows=100 width=33) (actual time=4208033.574..4208033.574 rows=0 loops=1)"       ---总耗时
" -> Merge Join (cost=6.88..12541509.94 rows=4764141 width=33) (actual time=4208033.571..4208033.571 rows=0 loops=1)"
" Merge Cond: ((t1.sample_hash)::text = (t2.sample_hash)::text)"
" -> Merge Anti Join (cost=5.58..1027854.50 rows=933126 width=33) (actual time=0.302..323876.834 rows=568872 loops=1)"
" Merge Cond: ((t1.sample_hash)::text = (t_sfa_tmp.sample_hash)::text)"
" -> Index Scan using t_sfa_sample_pkey on t_sfa_sample t1 (cost=0.00..481736.97 rows=1861524 width=33) (actual time=0.277..79479.520 rows=1859951 loops=1)"
" Filter: (state = (-1))"
" -> Index Scan using t_sfa_tmp_pkey on t_sfa_tmp (cost=0.00..527407.31 rows=1908974 width=33) (actual time=0.017..229564.972 rows=1898083 loops=1)"   --跟上面的2210.484一比,虽然行数是一样的,但是耗时确实100倍左右
" -> Materialize (cost=1.31..11414825.13 rows=19542439 width=33) (actual time=13.404..3855134.924 rows=19051765 loops=1)"
" -> Merge Append (cost=1.31..11365969.04 rows=19542439 width=33) (actual time=13.397..3831856.556 rows=19051765 loops=1)"
" Sort Key: t2.sample_hash"
" -> Index Scan using t_white_sample_hash_key on t_white t2 (cost=0.00..8.27 rows=1 width=33) (actual time=0.005..0.005 rows=0 loops=1)"   ---所有的t_white的分区表都没有走sample_type的索引,导致耗时也增加了
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20110701 on t_white20110701 t2 (cost=0.00..121.13 rows=46 width=33) (actual time=0.040..31.548 rows=43 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20110801 on t_white20110801 t2 (cost=0.00..13031.54 rows=15520 width=33) (actual time=1.879..3965.901 rows=15201 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20110901 on t_white20110901 t2 (cost=0.00..712.33 rows=851 width=33) (actual time=0.911..199.757 rows=828 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20111001 on t_white20111001 t2 (cost=0.00..140.85 rows=57 width=33) (actual time=0.046..32.823 rows=55 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20111101 on t_white20111101 t2 (cost=0.00..112.66 rows=210 width=33) (actual time=0.023..57.701 rows=183 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20111201 on t_white20111201 t2 (cost=0.00..34747.64 rows=40571 width=33) (actual time=1.072..4954.557 rows=40073 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120101 on t_white20120101 t2 (cost=0.00..5799.41 rows=8484 width=33) (actual time=0.024..1942.406 rows=8279 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120201 on t_white20120201 t2 (cost=0.00..35006.71 rows=56440 width=33) (actual time=0.022..11110.476 rows=54567 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120301 on t_white20120301 t2 (cost=0.00..26839.66 rows=43735 width=33) (actual time=1.412..8513.762 rows=42545 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120401 on t_white20120401 t2 (cost=0.00..43719.06 rows=53193 width=33) (actual time=0.819..11715.886 rows=52150 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120501 on t_white20120501 t2 (cost=0.00..127606.13 rows=143434 width=33) (actual time=0.309..35385.370 rows=143011 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120601 on t_white20120601 t2 (cost=0.00..66095.17 rows=98081 width=33) (actual time=0.255..19346.965 rows=96404 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120701 on t_white20120701 t2 (cost=0.00..48646.21 rows=77482 width=33) (actual time=0.031..15112.948 rows=76203 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120801 on t_white20120801 t2 (cost=0.00..140243.01 rows=214737 width=33) (actual time=0.026..40043.470 rows=211867 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20120901 on t_white20120901 t2 (cost=0.00..91808.20 rows=149082 width=33) (actual time=0.764..27296.248 rows=144067 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20121001 on t_white20121001 t2 (cost=0.00..45707.42 rows=84231 width=33) (actual time=0.271..17041.615 rows=81386 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20121101 on t_white20121101 t2 (cost=0.00..96983.57 rows=190559 width=33) (actual time=0.220..37807.532 rows=184554 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20121201 on t_white20121201 t2 (cost=0.00..103932.13 rows=180864 width=33) (actual time=0.019..39480.933 rows=174210 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130101 on t_white20130101 t2 (cost=0.00..84114.31 rows=170187 width=33) (actual time=0.238..29166.750 rows=165762 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130201 on t_white20130201 t2 (cost=0.00..223210.01 rows=428164 width=33) (actual time=0.229..82961.080 rows=418646 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130301 on t_white20130301 t2 (cost=0.00..318834.06 rows=649873 width=33) (actual time=0.225..139716.301 rows=638643 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130401 on t_white20130401 t2 (cost=0.00..191974.93 rows=385268 width=33) (actual time=0.226..80686.644 rows=379044 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130501 on t_white20130501 t2 (cost=0.00..173435.62 rows=359839 width=33) (actual time=0.239..74029.353 rows=352550 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130601 on t_white20130601 t2 (cost=0.00..185554.28 rows=364984 width=33) (actual time=0.242..75051.125 rows=357265 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130701 on t_white20130701 t2 (cost=0.00..235635.52 rows=362261 width=33) (actual time=0.634..111325.936 rows=361180 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130801 on t_white20130801 t2 (cost=0.00..319847.98 rows=599443 width=33) (actual time=0.021..137529.831 rows=582070 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20130901 on t_white20130901 t2 (cost=0.00..265118.29 rows=551307 width=33) (actual time=0.199..113658.102 rows=536798 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20131001 on t_white20131001 t2 (cost=0.00..304541.23 rows=561052 width=33) (actual time=0.025..115868.850 rows=547033 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20131101 on t_white20131101 t2 (cost=0.00..343331.79 rows=644766 width=33) (actual time=0.309..134153.298 rows=626491 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20131201 on t_white20131201 t2 (cost=0.00..324976.99 rows=590138 width=33) (actual time=0.023..125280.760 rows=572706 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140101 on t_white20140101 t2 (cost=0.00..296371.65 rows=471926 width=33) (actual time=0.244..108254.151 rows=462547 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140201 on t_white20140201 t2 (cost=0.00..412775.87 rows=794670 width=33) (actual time=0.655..154092.691 rows=784287 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140301 on t_white20140301 t2 (cost=0.00..553468.21 rows=1093780 width=33) (actual time=0.020..209569.984 rows=1077644 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140401 on t_white20140401 t2 (cost=0.00..760834.35 rows=1735035 width=33) (actual time=0.281..191973.327 rows=1706594 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140501 on t_white20140501 t2 (cost=0.00..621494.36 rows=1284820 width=33) (actual time=0.317..189809.749 rows=1262874 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140601 on t_white20140601 t2 (cost=0.00..297441.20 rows=509768 width=33) (actual time=0.028..108351.308 rows=501259 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140701 on t_white20140701 t2 (cost=0.00..435577.15 rows=801796 width=33) (actual time=0.027..156456.805 rows=785950 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140801 on t_white20140801 t2 (cost=0.00..491279.21 rows=870717 width=33) (actual time=0.154..184267.384 rows=857403 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20140901 on t_white20140901 t2 (cost=0.00..362271.39 rows=677569 width=33) (actual time=0.549..134776.949 rows=654824 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20141001 on t_white20141001 t2 (cost=0.00..392512.54 rows=652200 width=33) (actual time=0.025..160953.113 rows=640646 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20141101 on t_white20141101 t2 (cost=0.00..320294.70 rows=629295 width=33) (actual time=0.035..115167.440 rows=597895 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20141201 on t_white20141201 t2 (cost=0.00..377995.69 rows=836886 width=33) (actual time=0.024..145096.638 rows=801980 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20150101 on t_white20150101 t2 (cost=0.00..314925.97 rows=629169 width=33) (actual time=0.023..95266.226 rows=584735 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20150201 on t_white20150201 t2 (cost=0.00..254031.29 rows=581339 width=33) (actual time=0.027..89744.102 rows=530069 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20150301 on t_white20150301 t2 (cost=0.00..336172.21 rows=660344 width=33) (actual time=0.025..109274.329 rows=653357 loops=1)"
" Filter: (sample_type = 7)"
" -> Index Scan using u_t_white20150401 on t_white20150401 t2 (cost=0.00..152327.58 rows=288265 width=33) (actual time=0.031..47819.164 rows=285895 loops=1)"
" Filter: (sample_type = 7)"
"Total runtime: 4208034.397 ms"       ----跟上面一比,耗时真是的是好多倍呢~

Summary:
LIMIT usually cuts runtime by reducing the number of rows produced, but this example shows it cannot be counted on to do so. LIMIT tells the planner to optimize for fetching only a fraction of the output: here it abandoned the hash join for a merge join with a tiny startup cost, betting that the first 100 matches would surface early in index order. Because the query actually yields zero rows, that bet forces complete index-order traversals of every input. Once LIMIT changes the execution plan like this, the cost can be crippling.

The fix:
After removing the LIMIT from the function, a normal select sfa_tmp_sleep() really is fast, only about 60 s.
In practice only a modest number of new hashes need statistics within any given hour, so the cap is not missed.
This fully explains why select sfa_tmp_sample() on its own always ran in seconds, yet the function took over an hour once this query ran first.
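
If the 100-row cap were still wanted, one way to keep it without letting it reshape the join is to materialize the full result first and apply LIMIT on top. On the PostgreSQL releases of this era (anything before 12), a CTE is an optimization fence, so the planner keeps the fast hash plan for the inner query; a sketch under that assumption:

WITH candidates AS (
    -- planned exactly like the un-LIMITed query: the CTE is an optimization fence
    SELECT t1.sample_hash
    FROM t_sfa_sample t1, t_white t2
    WHERE t1.sample_hash = t2.sample_hash
      AND t1.state = -1
      AND t2.sample_type = 7
      AND NOT EXISTS (SELECT 1 FROM t_sfa_tmp
                      WHERE t_sfa_tmp.sample_hash = t1.sample_hash)
)
SELECT sample_hash FROM candidates LIMIT 100;   -- cap applied after materialization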

Retest:
Once the query again had hashes to process, we ran select sfa_tmp_sleep() one more time;   -- 63752 ms
Still quite fast.

The takeaway: the function we wrote was never the problem. A clause we reached for because it seemingly speeds things up quietly dragged the whole function down instead.