Learning Coding Parallelization (Was Tim's Erlang Exercise - Round V)


Updated Oct 16: After testing my code on different machines, I found that disk I/O performance varies a lot: for some very large files, reading the file in parallel may actually lengthen the elapsed time (typically on non-server machines that are not equipped with fast disk I/O). So I added another version, tbray4b.erl; in this version only the file reading is not parallelized, all other code is the same. If you'd like to test it on your machine, please try both.

Well, I think I've learned a lot from doing Tim's exercise, not only about List vs Binary in Erlang, but also about computing in parallel. Coding concurrency is fairly easy in Erlang, but coding parallelization is not only a language question; it's a real problem in its own right.

I wrote tbray3.erl in The Erlang Way (Was Tim Bray's Erlang Exercise - Round IV) and got a fairly good result, the best so far, on my 2-core MacBook. But things are always a bit more complex. As Steve pointed out in the comments, when he tried tbray3.erl on his 8-core Linux box:

"I ran it in a loop 10 times, and the best time I saw was 13.872 sec, and user/CPU time was only 16.150 sec, so it’s apparently not using the multiple cores very well."

I also encountered this issue on my 4-CPU 2.80 GHz Intel Xeon Debian box; it ran even worse (8.420 s) than my 2-core MacBook (4.483 s).

I thought about my code for a while and found that it seems to spawn too many processes for scan_chunk. Since scan_chunk's performance has been improved a lot, each process now finishes its task very quickly, too quickly relative to the file reading, so the additional CPUs get little chance to join the game. The cycle of reading a chunk and spawning a scan process is now almost sequential; very few scanning processes are alive at the same time. I think I have finally hit the file-reading bound.

But wait: as I claimed before, reading a file into memory is very fast in Erlang. For a 200M log file it takes less than 800 ms, while the elapsed time of tbray3.erl is about 4900 ms, far above 800 ms. So why do I say file reading is now the bound?
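As a quick sanity check, whole-file read time is easy to measure from the Erlang shell; a minimal sketch using timer:tc/3 on the 1-million-line test file:

1> {Micros, {ok, Bin}} = timer:tc(file, read_file, ["o1000k.ap"]).
2> {size(Bin), Micros / 1000}.    %% {bytes read, elapsed milliseconds}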

The problem is this: since I suspected the performance of traversing a binary byte by byte, I chose to convert the binary to a list to scan the words. Per my testing results, a list beats a binary when it is not too long, in many cases no longer than several KBytes. And, to keep the code clear and readable, I also chose to split the big binary while reading the file, so I had to read the file in pieces of no more than n KBytes. For a very big file, the reading procedure is thus broken into tens of thousands of steps, which makes the total file-reading time rather long. That's bad.
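For illustration, here is a minimal sketch of that piecewise reading (a hypothetical helper, not part of the original code); with a 200M file and a chunk size of a few KBytes, this loop runs tens of thousands of times:

%% Hypothetical helper: read a whole file in fixed-size pieces,
%% one file:read/2 call per piece.
read_chunked(FileName, ChunkSize) ->
    {ok, File} = file:open(FileName, [raw, binary]),
    Chunks = read_chunked_1(File, ChunkSize, []),
    file:close(File),
    Chunks.

read_chunked_1(File, ChunkSize, Acc) ->
    case file:read(File, ChunkSize) of
        {ok, Bin} -> read_chunked_1(File, ChunkSize, [Bin | Acc]);
        eof -> lists:reverse(Acc)
    end.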

So I decided to write another version, one that reads the file in parallel (Round III), splits each chunk on the last newline (Round II), and scans the words using pattern matching (Round IV). And yes, I use binaries instead of lists this time, trying to offset the worse performance of binary traversal with parallelism across multiple cores.

The result is interesting: it's the first time I've gotten to around 10 sec on my 2-core MacBook using binary matching only, and it's also the first time my humble 4-CPU 2.80 GHz Intel Xeon Debian box has beaten my MacBook.

(Updated Oct 15: Steve ran the code on his 8-core 2.33 GHz Intel Xeon Linux box; the best time was 4.920 sec, exactly a 100% speedup over my 4-CPU box (although, as they are two different machines, we cannot compare the results linearly):

"the best time I saw for your newest version was 4.920 sec on my 8-core Linux box. Fast! However, user time was only 14.751 sec, so I’m not sure it’s using all the cores that well. Perhaps you’re getting down to where I/O is becoming a more significant factor."

Please see Steve's One More Erlang Wide Finder and his other Wide Finder attempts.)

Result on 2.0GHz 2-core MacBook:

$ time erl -smp -noshell -run tbray4_bin start o1000k.ap 4 -s erlang halt
8900    : 2006/09/29/Dynamic-IDE
2000    : 2006/07/28/Open-Data
1300    : 2003/07/25/NotGaming
800     : 2003/10/16/Debbie
800     : 2003/09/18/NXML
800     : 2006/01/31/Data-Protection
700     : 2003/06/23/SamsPie
600     : 2006/09/11/Making-Markup
600     : 2003/02/04/Construction
600     : 2005/11/03/Cars-and-Office-Suites
Time:   10375.53 ms

real    0m10.788s
user    0m11.216s
sys     0m3.851s

Result on 4-CPU Intel Xeon CPU 2.80GHz Debian box:

# When process number is set to 20:
$ time erl -smp -noshell -run tbray4_bin start o1000k.ap 20 -s erlang halt

real    0m9.894s
user    0m20.521s
sys     0m1.668s

# When process number is set to 1:
$ time erl -smp -noshell -run tbray4_bin start o1000k.ap 1 -s erlang halt

real    0m28.193s
user    0m27.218s
sys     0m0.984s

# On a 940M 5 million lines log file:
$ time erl -smp -noshell -run tbray4_bin start o5000k.ap 400 -s erlang halt
44500   : 2006/09/29/Dynamic-IDE
10000   : 2006/07/28/Open-Data
6500    : 2003/07/25/NotGaming
4000    : 2003/10/16/Debbie
4000    : 2003/09/18/NXML
4000    : 2006/01/31/Data-Protection
3500    : 2003/06/23/SamsPie
3000    : 2006/09/11/Making-Markup
3000    : 2003/02/04/Construction
3000    : 2005/11/03/Cars-and-Office-Suites
Time:   66456.95 ms

real    1m6.767s
user    2m7.512s
sys     0m8.489s

On the 4-CPU Linux box, comparing ProcNum = 20 against ProcNum = 1: the parallelized run took only 35% of the un-parallelized elapsed time (9.894 s vs 28.193 s), a speedup of about 185%. The ratio is almost the same as in my pread_file.erl test on the same machine.

This version is actually a combination of the code from my four previous posts. Although its performance on my MacBook is not as good as tbray3.erl's, I'm happy that it is fully parallelized, from reading the file to scanning the words, and it should scale better than all my previous versions.

The code: tbray4.erl

-module(tbray4).

-compile([native]).

-export([start/1,
         start/2]).

-include_lib("kernel/include/file.hrl").
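
%% Process structure: the main process spawns a Counter, which merges the
%% per-chunk dicts and prints the result, and a Collector, which re-orders
%% arriving chunks by sequence number and stitches split lines back
%% together; pread_file/3 then spawns ProcNum reader processes, one per chunk.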

start([FileName, ProcNum]) when is_list(ProcNum) -> 
    start(FileName, list_to_integer(ProcNum)).
start(FileName, ProcNum) ->
    Start = now(),

    Main = self(),
    Counter = spawn(fun () -> count_loop(Main) end),
    Collector = spawn(fun () -> collect_loop(Counter) end),

    pread_file(FileName, ProcNum, Collector),

    %% don't terminate, wait here, until all tasks done.
    receive
        stop -> io:format("Time: ~10.2f ms~n", [timer:now_diff(now(), Start) / 1000])       
    end.

pread_file(FileName, ProcNum, Collector) ->
    ChunkSize = get_chunk_size(FileName, ProcNum),
    pread_file_1(FileName, ChunkSize, ProcNum, Collector).       
pread_file_1(FileName, ChunkSize, ProcNum, Collector) ->
    [spawn(fun () ->
                   Length = if  I == ProcNum - 1 -> ChunkSize * 2; %% last chunk: also covers the remainder
                                true -> ChunkSize end,
                   {ok, File} = file:open(FileName, [read, binary]),
                   {ok, Bin} = file:pread(File, ChunkSize * I, Length),
                   {Data, Tail} = split_on_last_newline(Bin),
                   Collector ! {seq, I, Data, Tail},
                   file:close(File)
           end) || I <- lists:seq(0, ProcNum - 1)],
    Collector ! {chunk_num, ProcNum}.

collect_loop(Counter) -> collect_loop_1([], <<>>, -1, Counter).
collect_loop_1(Chunks, PrevTail, LastSeq, Counter) ->
    receive
        {chunk_num, ChunkNum} ->
            Counter ! {chunk_num, ChunkNum},
            collect_loop_1(Chunks, PrevTail, LastSeq, Counter);
        {seq, I, Data, Tail} ->
            SortedChunks = lists:keysort(1, [{I, Data, Tail} | Chunks]),
            {Chunks1, PrevTail1, LastSeq1} = 
                process_chunks(SortedChunks, [], PrevTail, LastSeq, Counter),
            collect_loop_1(Chunks1, PrevTail1, LastSeq1, Counter)
    end.
    
count_loop(Main) -> count_loop_1(Main, dict:new(), undefined, 0).
count_loop_1(Main, Dict, ChunkNum, ChunkNum) ->
    print_result(Dict),
    Main ! stop;
count_loop_1(Main, Dict, ChunkNum, ProcessedNum) ->
    receive
        {chunk_num, ChunkNumX} -> 
            count_loop_1(Main, Dict, ChunkNumX, ProcessedNum);
        {dict, DictX} ->
            Dict1 = dict:merge(fun (_, V1, V2) -> V1 + V2 end, Dict, DictX),
            count_loop_1(Main, Dict1, ChunkNum, ProcessedNum + 1)
    end.

process_chunks([], ChunkBuf, PrevTail, LastSeq, _) -> {ChunkBuf, PrevTail, LastSeq};
process_chunks([{I, Data, Tail}=Chunk|T], ChunkBuf, PrevTail, LastSeq, Counter) ->
    case LastSeq + 1 of
        I ->
            spawn(fun () -> Counter ! {dict, scan_chunk(<<PrevTail/binary, Data/binary>>)} end),
            process_chunks(T, ChunkBuf, Tail, I, Counter);
        _ ->
            process_chunks(T, [Chunk | ChunkBuf], PrevTail, LastSeq, Counter)
    end.

print_result(Dict) ->
    SortedList = lists:reverse(lists:keysort(2, dict:to_list(Dict))),
    [io:format("~b\t: ~s~n", [V, K]) || {K, V} <- lists:sublist(SortedList, 10)].

get_chunk_size(FileName, ProcNum) ->
    {ok, #file_info{size=Size}} = file:read_file_info(FileName),
    Size div ProcNum.

split_on_last_newline(Bin) -> split_on_last_newline_1(Bin, size(Bin)).   
split_on_last_newline_1(Bin, Offset) when Offset > 0 ->
    case Bin of
        <<Data:Offset/binary,$\n,Tail/binary>> ->
            {Data, Tail};
        _ -> 
            split_on_last_newline_1(Bin, Offset - 1)
    end;
split_on_last_newline_1(Bin, _) -> {Bin, <<>>}.
    
scan_chunk(Bin) -> scan_chunk_1(Bin, 0, dict:new()).    
scan_chunk_1(Bin, Offset, Dict) when Offset =< size(Bin) - 34 ->
    case Bin of
        <<_:Offset/binary,"GET /ongoing/When/",_,_,_,$x,$/,Y1,Y2,Y3,Y4,$/,M1,M2,$/,D1,D2,$/,Rest/binary>> ->            
            case match_until_space_newline(Rest, 0) of
                {Rest1, <<>>} -> 
                    scan_chunk_1(Rest1, 0, Dict);
                {Rest1, Word} -> 
                    Key = <<Y1,Y2,Y3,Y4,$/,M1,M2,$/,D1,D2,$/, Word/binary>>,
                    scan_chunk_1(Rest1, 0, dict:update_counter(Key, 1, Dict))
            end;
        _ -> scan_chunk_1(Bin, Offset + 1, Dict)
    end;
scan_chunk_1(_, _, Dict) -> Dict.

match_until_space_newline(Bin, Offset) when Offset < size(Bin) ->
    case Bin of
        <<Word:Offset/binary,$ ,Rest/binary>> ->
            {Rest, Word};
        <<_:Offset/binary,$.,Rest/binary>> ->
            {Rest, <<>>};
        <<_:Offset/binary,10,Rest/binary>> ->
            {Rest, <<>>};
        _ ->
            match_until_space_newline(Bin, Offset + 1)
    end;
match_until_space_newline(_, _) -> {<<>>, <<>>}.

=====> Updated Oct 16: After testing my code on different machines, I found that disk I/O performance varies a lot: for some very large files, reading the file in parallel may actually lengthen the elapsed time (typically on non-server machines that are not equipped with fast disk I/O). So I wrote another version: tbray4b.erl. In this version only the file reading is not parallelized; all other code is the same. Here are results for this version on a 940M file with 5 million lines, with ProcNum set to 200 and 400:

# On 2-core MacBook:
$ time erl -smp -noshell -run tbray4b start o5000k.ap 200 -s erlang halt

real    0m50.498s
user    0m49.746s
sys     0m11.979s

# On 4-cpu linux box:
$ time erl -smp -noshell -run tbray4b start o5000k.ap 400 -s erlang halt

real    1m2.136s
user    1m59.907s
sys     0m7.960s

The code: tbray4b.erl

-module(tbray4b).

-compile([native]).

-export([start/1,
         start/2]).

-include_lib("kernel/include/file.hrl").
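
%% Same pipeline as tbray4.erl, except that read_file/3 reads the file
%% sequentially in the calling process; only newline splitting and word
%% scanning are done in spawned processes.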

start([FileName, ProcNum]) when is_list(ProcNum) -> 
    start(FileName, list_to_integer(ProcNum)).
start(FileName, ProcNum) ->
    Start = now(),

    Main = self(),
    Counter = spawn(fun () -> count_loop(Main) end),
    Collector = spawn(fun () -> collect_loop(Counter) end),

    read_file(FileName, ProcNum, Collector),

    %% don't terminate, wait here, until all tasks done.
    receive
        stop -> io:format("Time: ~10.2f ms~n", [timer:now_diff(now(), Start) / 1000])       
    end.

read_file(FileName, ProcNum, Collector) ->
    ChunkSize = get_chunk_size(FileName, ProcNum),
    {ok, File} = file:open(FileName, [raw, binary]),
    read_file_1(File, ChunkSize, 0, Collector).    
read_file_1(File, ChunkSize, I, Collector) ->
    case file:read(File, ChunkSize) of
        eof ->
            file:close(File),
            Collector ! {chunk_num, I};
        {ok, Bin} -> 
            spawn(fun () ->
                          {Data, Tail} = split_on_last_newline(Bin),
                          Collector ! {seq, I, Data, Tail}
                  end),
            read_file_1(File, ChunkSize, I + 1, Collector)
    end.

collect_loop(Counter) -> collect_loop_1([], <<>>, -1, Counter).
collect_loop_1(Chunks, PrevTail, LastSeq, Counter) ->
    receive
        {chunk_num, ChunkNum} ->
            Counter ! {chunk_num, ChunkNum},
            collect_loop_1(Chunks, PrevTail, LastSeq, Counter);
        {seq, I, Data, Tail} ->
            SortedChunks = lists:keysort(1, [{I, Data, Tail} | Chunks]),
            {Chunks1, PrevTail1, LastSeq1} = 
                process_chunks(SortedChunks, [], PrevTail, LastSeq, Counter),
            collect_loop_1(Chunks1, PrevTail1, LastSeq1, Counter)
    end.
    
count_loop(Main) -> count_loop_1(Main, dict:new(), undefined, 0).
count_loop_1(Main, Dict, ChunkNum, ChunkNum) ->
    print_result(Dict),
    Main ! stop;
count_loop_1(Main, Dict, ChunkNum, ProcessedNum) ->
    receive
        {chunk_num, ChunkNumX} -> 
            count_loop_1(Main, Dict, ChunkNumX, ProcessedNum);
        {dict, DictX} ->
            Dict1 = dict:merge(fun (_, V1, V2) -> V1 + V2 end, Dict, DictX),
            count_loop_1(Main, Dict1, ChunkNum, ProcessedNum + 1)
    end.

process_chunks([], ChunkBuf, PrevTail, LastSeq, _) -> {ChunkBuf, PrevTail, LastSeq};
process_chunks([{I, Data, Tail}=Chunk|T], ChunkBuf, PrevTail, LastSeq, Counter) ->
    case LastSeq + 1 of
        I ->
            spawn(fun () -> Counter ! {dict, scan_chunk(<<PrevTail/binary, Data/binary>>)} end),
            process_chunks(T, ChunkBuf, Tail, I, Counter);
        _ ->
            process_chunks(T, [Chunk | ChunkBuf], PrevTail, LastSeq, Counter)
    end.

print_result(Dict) ->
    SortedList = lists:reverse(lists:keysort(2, dict:to_list(Dict))),
    [io:format("~b\t: ~s~n", [V, K]) || {K, V} <- lists:sublist(SortedList, 10)].

get_chunk_size(FileName, ProcNum) ->
    {ok, #file_info{size=Size}} = file:read_file_info(FileName),
    Size div ProcNum.

split_on_last_newline(Bin) -> split_on_last_newline_1(Bin, size(Bin)).   
split_on_last_newline_1(Bin, Offset) when Offset > 0 ->
    case Bin of
        <<Data:Offset/binary,$\n,Tail/binary>> ->
            {Data, Tail};
        _ -> 
            split_on_last_newline_1(Bin, Offset - 1)
    end;
split_on_last_newline_1(Bin, _) -> {Bin, <<>>}.
    
scan_chunk(Bin) -> scan_chunk_1(Bin, 0, dict:new()).    
scan_chunk_1(Bin, Offset, Dict) when Offset =< size(Bin) - 34 ->
    case Bin of
        <<_:Offset/binary,"GET /ongoing/When/",_,_,_,$x,$/,Y1,Y2,Y3,Y4,$/,M1,M2,$/,D1,D2,$/,Rest/binary>> ->            
            case match_until_space_newline(Rest, 0) of
                {Rest1, <<>>} -> 
                    scan_chunk_1(Rest1, 0, Dict);
                {Rest1, Word} -> 
                    Key = <<Y1,Y2,Y3,Y4,$/,M1,M2,$/,D1,D2,$/, Word/binary>>,
                    scan_chunk_1(Rest1, 0, dict:update_counter(Key, 1, Dict))
            end;
        _ -> scan_chunk_1(Bin, Offset + 1, Dict)
    end;
scan_chunk_1(_, _, Dict) -> Dict.

match_until_space_newline(Bin, Offset) when Offset < size(Bin) ->
    case Bin of
        <<Word:Offset/binary,$ ,Rest/binary>> ->
            {Rest, Word};
        <<_:Offset/binary,$.,Rest/binary>> ->
            {Rest, <<>>};
        <<_:Offset/binary,10,Rest/binary>> ->
            {Rest, <<>>};
        _ ->
            match_until_space_newline(Bin, Offset + 1)
    end;
match_until_space_newline(_, _) -> {<<>>, <<>>}.
